International Nuclear Information System (INIS)
Gardner, R.P.; Zhang, W.; Metwally, W.A.
2005-01-01
The Center for Engineering Applications of Radioisotopes (CEAR) has been working for about ten years on the Monte Carlo - Library Least-Squares (MCLLS) approach for treating the nonlinear inverse analysis problem for PGNAA bulk analysis. This approach consists essentially of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required libraries. These libraries are then used in the linear Library Least-Squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. The other libraries include all sources of background, which include: (1) gamma-rays emitted by the neutron source, (2) prompt gamma-rays produced in the analyzer construction materials, (3) natural gamma-rays from K-40 and the uranium and thorium decay chains, and (4) prompt and decay gamma-rays produced in the NaI detector by neutron activation. A number of unforeseen problems have arisen in pursuing this approach, including: (1) the neutron activation of the most common detector (NaI) used in bulk analysis PGNAA systems, (2) the nonlinearity of this detector, and (3) difficulties in obtaining detector response functions for this (and other) detectors. These problems have been addressed by CEAR recently and have either been solved or are almost solved at the present time. Development of Monte Carlo simulation for all of the libraries has been finished except for the prompt gamma-ray library from the activation of the NaI detector. Treatment of the coincidence schemes for Na and particularly I must first be determined to complete the Monte Carlo simulation of this last library. (author)
Monte Carlo Library Least Square (MCLLS) Method for Multiple Radioactive Particle Tracking in PBR
Wang, Zhijian; Lee, Kyoung; Gardner, Robin
2010-03-01
In this work, a new method of radioactive particle tracking is proposed. Accurate detector response functions (DRFs) for NaI detectors were developed from MCNP5 to generate the libraries, with a significant speed-up factor of 200. This makes practical the MCLLS method, which locates and tracks a radioactive particle in a modular Pebble Bed Reactor (PBR) by searching for the minimum chi-square value. The method was tested and found to work well under our laboratory conditions with an array of only six 2" x 2" NaI detectors. The method is introduced in both its forward and inverse forms. A single radioactive particle tracking system with three collimated 2" x 2" NaI detectors is used for benchmark purposes.
International Nuclear Information System (INIS)
Meric, Ilker; Johansen, Geir A; Holstad, Marie B; Mattingly, John; Gardner, Robin P
2012-01-01
Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately. (paper)
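The library least-squares step described in this abstract is, at its core, a linear fit of the measured spectrum against precomputed elemental libraries, with truncated SVD as one remedy for ill-conditioning. A minimal sketch (entirely hypothetical two-element libraries and noise-free synthetic data, not the authors' PGNAA setup) might look like:

```python
import numpy as np

# Hypothetical libraries: each column is one element's spectral shape.
channels = 64
ch = np.arange(channels)
lib_a = np.exp(-0.5 * ((ch - 20) / 3.0) ** 2)  # element A photopeak (made up)
lib_b = np.exp(-0.5 * ((ch - 40) / 3.0) ** 2)  # element B photopeak (made up)
A = np.column_stack([lib_a, lib_b])

# Synthetic "measured" spectrum with known library multipliers 3.0 and 1.5.
y = A @ np.array([3.0, 1.5])

# Truncated-SVD least-squares solution: discard tiny singular values that
# would otherwise amplify noise when libraries are nearly collinear.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-8 * s[0]
multipliers = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
print(multipliers)  # recovers approximately [3.0, 1.5]
```

With highly correlated libraries or trace elements, some singular values collapse toward zero and the truncation threshold decides which components survive, which is exactly the regime the proposed iterative approach targets.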
Optimization of sequential decisions by least squares Monte Carlo method
DEFF Research Database (Denmark)
Nishijima, Kazuyoshi; Anders, Annett
The present paper considers the sequential decision optimization problem. This is an important class of decision problems in engineering. Important examples include decision problems on the quality control of manufactured products and engineering components, timing of the implementation of climate change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which … To demonstrate its use and advantages, two numerical examples are provided on the quality control of manufactured products.
Directory of Open Access Journals (Sweden)
Xisheng Yu
2014-01-01
The paper by Liu (2010) introduces a method termed canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide convergence results for CLM and numerically examine its convergence properties. Then, a comparative analysis is conducted empirically using a large sample of S&P 100 Index (OEX) puts and IBM puts. The convergence results show that choosing the shifted Legendre polynomials with four regressors is most appropriate considering pricing accuracy and computational cost. With this choice, the CLM method is empirically demonstrated to be superior to the benchmark methods of binomial tree and finite difference with historical volatilities.
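The least-squares Monte Carlo algorithm underlying CLM follows the Longstaff-Schwartz recipe: simulate paths, then step backward in time, regressing discounted continuation values on basis functions at each exercise date. A minimal sketch for an American put (illustrative parameters and a plain polynomial basis, rather than the shifted Legendre basis discussed in the paper) might be:

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0  # made-up contract
steps, paths = 50, 20000
dt = T / steps

# Simulate geometric Brownian motion paths.
z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((paths, 1), S0), S])

# Backward induction: regress discounted continuation values on a basis.
cash = np.maximum(K - S[:, -1], 0.0)
for t in range(steps - 1, 0, -1):
    cash *= np.exp(-r * dt)
    itm = K - S[:, t] > 0              # regress on in-the-money paths only
    if itm.any():
        X = S[itm, t]
        basis = np.column_stack([np.ones_like(X), X, X**2])
        coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
        continuation = basis @ coef
        exercise = K - X
        ex_now = exercise > continuation
        cash[np.flatnonzero(itm)[ex_now]] = exercise[ex_now]
price = np.exp(-r * dt) * cash.mean()
print(price)
```

Swapping the `basis` columns for shifted Legendre polynomials with four regressors would reproduce the basis choice the paper recommends.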
DEFF Research Database (Denmark)
Anders, Annett; Nishijima, Kazuyoshi
The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach takes basis in the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option prices. In Anders & Nishijima (2011) the LSM is adapted for a real-time operational decision problem; however, it is found that further improvement is required in regard to the computational efficiency, in order to facilitate its use in practice. This is the focus of the present paper. The idea behind …
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature; several copula functions are adopted to describe the dependence of the two first-to-default times.
Energy Technology Data Exchange (ETDEWEB)
Moralles, M. [Centro do Reator de Pesquisas, Instituto de Pesquisas Energeticas e Nucleares, Caixa Postal 11049, CEP 05422-970, Sao Paulo SP (Brazil)], E-mail: moralles@ipen.br; Bonifacio, D.A.B. [Centro do Reator de Pesquisas, Instituto de Pesquisas Energeticas e Nucleares, Caixa Postal 11049, CEP 05422-970, Sao Paulo SP (Brazil); Bottaro, M.; Pereira, M.A.G. [Instituto de Eletrotecnica e Energia, Universidade de Sao Paulo, Av. Prof. Luciano Gualberto, 1289, CEP 05508-010, Sao Paulo SP (Brazil)
2007-09-21
Spectra of calibration sources and X-ray beams were measured with a cadmium telluride (CdTe) detector. The response function of the detector was simulated using the GEANT4 Monte Carlo toolkit. Trapping of charge carriers was taken into account using the Hecht equation in the active zone of the CdTe crystal, associated with a continuous function to reproduce the drop in charge-collection efficiency near the metallic contacts and borders. The rise-time discrimination is approximated by a cut on the depth of the interaction relative to the cathode and by corrections that depend on the pulse amplitude. The least-squares method with truncation was employed to unfold X-ray spectra typically used in medical diagnostics, and the results were compared with reference data.
AKLSQF - LEAST SQUARES CURVE FITTING
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that will least-squares fit uniformly spaced data easily and efficiently. The program allows the user either to specify the tolerable least squares error of the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit up to a 100th-degree polynomial. All computations in the program are carried out in double-precision format for real numbers and in long-integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's QuickBASIC compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
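The tolerance-driven loop AKLSQF uses (fit, check the least squares error, raise the degree until the user's criterion is met) can be sketched as follows; NumPy's `polyfit` stands in for the orthogonal factorial polynomial machinery, and the data and tolerance are made up:

```python
import numpy as np

# Uniformly spaced sample data drawn from an exact cubic.
x = np.linspace(0.0, 1.0, 21)
y = 2.0 - x + 0.5 * x**3

tol = 1e-6
for degree in range(1, 10):
    coeffs = np.polyfit(x, y, degree)          # least-squares polynomial fit
    err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    if err <= tol:                             # stop once the tolerance is met
        break
print(degree)  # stops at degree 3, the true degree of the data
```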
Tikhonov Regularization and Total Least Squares
DEFF Research Database (Denmark)
Golub, G. H.; Hansen, Per Christian; O'Leary, D. P.
2000-01-01
… formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that …
Weighted conditional least-squares estimation
International Nuclear Information System (INIS)
Booth, J.G.
1987-01-01
A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered
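The core of the weighted least-squares idea above, minimizing a sum of squares weighted by inverses of estimated variances, reduces in the linear case to solving the normal equations with a weight matrix. A minimal sketch (a simple heteroscedastic regression, not the branching-process setting of the abstract) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(1.0, 10.0, n)
sigma = 0.1 * x                       # heteroscedastic noise: grows with x
y = 1.0 + 2.0 * x + sigma * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
w = 1.0 / sigma**2                    # weights = inverse (estimated) variances
XtW = X.T * w                         # X^T W via broadcasting
beta = np.linalg.solve(XtW @ X, XtW @ y)   # solve (X^T W X) beta = X^T W y
print(beta)  # close to the true parameters [1.0, 2.0]
```

In the two-stage procedure described above, the variances feeding the weights would themselves be estimated from a first-stage (ordinary) conditional least-squares fit.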
Regularization by truncated total least squares
DEFF Research Database (Denmark)
Hansen, Per Christian; Fierro, R.D; Golub, G.H
1997-01-01
The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use...... matrix. We express our results in terms of the singular value decomposition (SVD) of the coefficient matrix rather than the augmented matrix. This leads to insight into the filtering properties of the truncated TLS method as compared to regularized least squares solutions. In addition, we propose...
Least Squares Data Fitting with Applications
DEFF Research Database (Denmark)
Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela
As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data predictively. The main concern of Least Squares Data Fitting with Applications is how to do this on a computer with efficient and robust computational methods for linear and nonlinear relationships. The presentation also establishes a link between the statistical setting and the computational issues … that help readers to understand and evaluate the computed solutions • many examples that illustrate the techniques and algorithms. Least Squares Data Fitting with Applications can be used as a textbook for advanced undergraduate or graduate courses and by professionals in the sciences and in engineering.
Partial update least-square adaptive filtering
Xie, Bei
2014-01-01
Adaptive filters play an important role in fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity of implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster …
Deformation analysis with Total Least Squares
Directory of Open Access Journals (Sweden)
M. Acar
2006-01-01
Deformation analysis is one of the main research fields in geodesy. The deformation analysis process comprises measurement and analysis phases. Measurements can be collected using several techniques. The output of the evaluation of the measurements is mainly point positions. In the deformation analysis phase, the coordinate changes in the point positions are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation, where the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally a Least Squares (LS) technique is used for the transformation procedure. An alternative methodology is the Total Least Squares (TLS), which is a comparatively new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out by the Least Squares (LS) and the Total Least Squares (TLS) individually. The data used in this study were collected by the GPS technique in a landslide area near Istanbul. The results obtained from the two approaches have been compared.
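The LS versus TLS distinction above can be illustrated on a toy fit: ordinary LS attributes all error to the observation vector, while TLS, computed from the SVD of the augmented matrix, allows errors in the coefficient matrix as well. A minimal sketch (a hypothetical straight-line fit, not the 3-D Helmert transformation used in the study) is:

```python
import numpy as np

# Fit y ≈ a*x + b; data are exact here, so LS and TLS must agree.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

A = np.column_stack([x, np.ones_like(x)])

# Ordinary least squares: errors only in y.
p_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Total least squares: smallest right singular vector of [A | y].
_, _, Vt = np.linalg.svd(np.column_stack([A, y]))
v = Vt[-1]
p_tls = -v[:2] / v[2]
print(p_ls, p_tls)  # both approximately [2.0, 1.0]
```

With noisy coordinates on both axes the two estimates would differ, which is the effect the study measures for the Helmert transformation.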
Stochastic gradient versus recursive least squares learning
Czech Academy of Sciences Publication Activity Database
Slobodyan, Sergey; Bogomolova, Anna; Kolyuzhnov, Dmitri
-, No. 309 (2006), pp. 1-21. ISSN 1211-3298. Institutional research plan: CEZ:AV0Z70850503. Keywords: constant gain adaptive learning * stochastic gradient learning * recursive least squares. Subject RIV: AH - Economics. http://www.cerge-ei.cz/pdf/wp/Wp309.pdf
Least-squares variance component estimation
Teunissen, P.J.G.; Amiri-Simkooei, A.R.
2007-01-01
Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight matrix.
Time Scale in Least Square Method
Directory of Open Access Journals (Sweden)
Özgür Yeniay
2014-01-01
The study of dynamic equations on time scales is a new area of mathematics. Time scales build a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. Therefore, we implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. When this occurs, the result equals the total of the vertical deviations between the regression equations and the observation values under the forward and backward jump operators, divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We think that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.
Group-wise partial least square regression
Camacho, José; Saccenti, Edoardo
2018-01-01
This paper introduces group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique in which the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are …
Least-squares finite element methods
Bochev, Pavel
2009-01-01
Since their emergence, finite element methods have taken their place as one of the most versatile and powerful methodologies for the approximate numerical solution of partial differential equations. This book presents the theory and practice of least-squares finite element methods, their strengths and weaknesses, successes, and open problems.
Iterative methods for weighted least-squares
Energy Technology Data Exchange (ETDEWEB)
Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)
1996-12-31
A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
Groupwise Retargeted Least-Squares Regression.
Wang, Lingfeng; Pan, Chunhong
2018-04-01
In this brief, we propose a new groupwise retargeted least squares regression (GReLSR) model for multicategory classification. The main motivation behind GReLSR is to utilize an additional regularization to restrict the translation values of ReLSR so that they are similar within the same class. By analyzing the regression targets of ReLSR, we propose a new formulation of ReLSR in which the translation values are expressed explicitly. On the basis of the new formulation, discriminative least-squares regression can be regarded as a special case of ReLSR with zero translation values. Moreover, a groupwise constraint is added to ReLSR to form the new GReLSR model. Extensive experiments on various machine learning data sets illustrate that our method outperforms the current state-of-the-art approaches.
Total least squares for anomalous change detection
Theiler, James; Matsekh, Anna M.
2010-04-01
A family of subtraction-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and special cases of it are equivalent to canonical correlation analysis and optimized covariance equalization. What whitened TLSQ offers is a generalization of these algorithms with the potential for better performance.
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-08-01
Least squares regression is proposed as a moving-window method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high-frequency noise.
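The LSMW idea, fitting a low-order polynomial by least squares inside each moving window and reading the derivative off the fitted slope, can be sketched as follows (a hypothetical quadratic signal on a uniform perturbation axis; real use targets nonuniform spacing and noisy spectra):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)        # perturbation axis (uniform here)
y = t**2                              # intensity at one spectral channel
half = 5                              # window half-width

deriv = np.full_like(y, np.nan)
for i in range(half, len(t) - half):
    ts = t[i - half:i + half + 1]
    ys = y[i - half:i + half + 1]
    # Local linear fit; the slope is the least-squares derivative estimate.
    X = np.column_stack([np.ones_like(ts), ts - t[i]])
    c, *_ = np.linalg.lstsq(X, ys, rcond=None)
    deriv[i] = c[1]
print(deriv[50])  # d(t^2)/dt at t = 0.5 is 1.0
```

Because the window coordinates, not uniform indices, enter the design matrix, the same loop works unchanged for nonuniform perturbation spacing, which is the advantage over classic Savitzky-Golay coefficients.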
Elastic least-squares reverse time migration
Feng, Zongcai
2016-09-06
Elastic least-squares reverse time migration (LSRTM) is used to invert synthetic particle-velocity data and crosswell pressure field data. The migration images consist of both the P- and S-velocity perturbation images. Numerical tests on synthetic and field data illustrate the advantages of elastic LSRTM over elastic reverse time migration (RTM). In addition, elastic LSRTM images are better focused and have better reflector continuity than do the acoustic LSRTM images.
Optimistic semi-supervised least squares classification
DEFF Research Database (Denmark)
Krijthe, Jesse H.; Loog, Marco
2017-01-01
The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples. In this work we study a simple self-learning approach to semi-supervised learning applied to the least squares classifier. We show that a soft-label and a hard-label variant of self-learning can be derived by applying block coordinate descent to two related but slightly different objective functions. The resulting soft-label approach is related to an idea about dealing with missing data that dates back to the 1930s. We show that the soft-label variant typically outperforms …
Nonlinear least squares and super resolution
Energy Technology Data Exchange (ETDEWEB)
Chung, J; Nagy, J G [Department of Mathematics and Computer Science Emory University Atlanta, GA, 30322 (United States)], E-mail: jmchung@mathcs.emory.edu, E-mail: nagy@mathcs.emory.edu
2008-07-15
Digital super resolution is a term used to describe the inverse problem of reconstructing a high resolution image from a set of known low resolution images, each of which is shifted by subpixel displacements. Simple models assume the subpixel displacements are known, but if the displacements are not known then nonlinear approaches must be used to jointly find the displacements and the reconstructed high resolution image. Furthermore, regularization is needed to stabilize the inversion process. This paper describes a separable nonlinear least squares formulation and a solution scheme based on the Gauss-Newton method. In addition, an approach is proposed to choose appropriate regularization parameters at each Gauss-Newton iteration.
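The separable structure exploited above (linear parameters eliminated exactly, Gauss-Newton applied to the remaining nonlinear ones) can be illustrated on a one-parameter toy model. This sketch fits y = a*exp(-k*x), solving for the amplitude linearly at each step; the data are made up, and no regularization is included, unlike in the super-resolution problem itself:

```python
import numpy as np

x = np.linspace(0.0, 2.0, 50)
y = 3.0 * np.exp(-1.5 * x)            # noise-free target: a = 3, k = 1.5

k = 0.5                               # initial guess for the nonlinear parameter
for _ in range(50):
    e = np.exp(-k * x)
    a = (e @ y) / (e @ e)             # linear subproblem: best amplitude for this k
    r = a * e - y                     # residual
    J = -a * x * e                    # derivative of the model w.r.t. k
    k -= (J @ r) / (J @ J)            # Gauss-Newton step on k alone
print(a, k)  # converges to roughly (3.0, 1.5)
```

Eliminating `a` this way shrinks the nonlinear search space, which is the practical payoff of the separable formulation described in the abstract.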
Total least squares for anomalous change detection
Energy Technology Data Exchange (ETDEWEB)
Theiler, James P [Los Alamos National Laboratory; Matsekh, Anna M [Los Alamos National Laboratory
2010-01-01
A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.
Vehicle detection using partial least squares.
Kembhavi, Aniruddha; Harwood, David; Davis, Larry S
2011-06-01
Detecting vehicles in aerial images has a wide range of applications, from urban planning to visual surveillance. We describe a vehicle detector that improves upon previous approaches by incorporating a very large and rich set of image descriptors. A new feature set called Color Probability Maps is used to capture the color statistics of vehicles and their surroundings, along with the Histograms of Oriented Gradients feature and a simple yet powerful image descriptor that captures the structural characteristics of objects named Pairs of Pixels. The combination of these features leads to an extremely high-dimensional feature set (approximately 70,000 elements). Partial Least Squares is first used to project the data onto a much lower dimensional sub-space. Then, a powerful feature selection analysis is employed to improve the performance while vastly reducing the number of features that must be calculated. We compare our system to previous approaches on two challenging data sets and show superior performance.
Multiples least-squares reverse time migration
Zhang, Dongliang
2013-01-01
To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.
Cichocki, A; Unbehauen, R
1994-01-01
In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and/or the LMS (Adaline) Widrow-Hoff algorithms. The algorithms can be applied to any problem that can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
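The Widrow-Hoff (LMS) update that these networks extend is a one-line stochastic gradient step on the instantaneous squared error. A minimal digital sketch for system identification (hypothetical 3-tap FIR system, noise-free desired signal) is:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
w_true = np.array([0.5, -0.3, 0.2])   # unknown FIR system to identify
x = rng.standard_normal(n)
d = np.convolve(x, w_true)[:n]        # desired signal: output of the system

w = np.zeros(3)
mu = 0.01                             # LMS step size
for k in range(2, n):
    u = x[k - 2:k + 1][::-1]          # input vector [x[k], x[k-1], x[k-2]]
    e = d[k] - w @ u                  # a-priori error
    w += mu * e * u                   # Widrow-Hoff update
print(w)  # converges to approximately w_true
```

The row-action (Kaczmarz) algorithm mentioned in the abstract has the same per-sample structure, with the step normalized by the input energy.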
Multisource Least-squares Reverse Time Migration
Dai, Wei
2012-12-01
Least-squares migration has been shown to be able to produce high-quality migration images, but its computational cost is considered too high for practical imaging. In this dissertation, a multisource least-squares reverse time migration (LSRTM) algorithm is proposed that increases the computational efficiency by up to 10 times by utilizing the blended-sources processing technique. There are three main chapters in this dissertation. In Chapter 2, the multisource LSRTM algorithm is implemented with random time-shift and random source-polarity encoding functions. Numerical tests on the 2D HESS VTI data show that the multisource LSRTM algorithm suppresses migration artifacts, balances the amplitudes, improves image resolution, and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution, and fewer migration artifacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does, with similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. In Chapter 3, the multisource LSRTM algorithm is implemented with a frequency-selection encoding strategy and applied to marine streamer data, for which traditional random encoding functions are not applicable. The frequency-selection encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content. Therefore, the receivers can distinguish the wavefield from each shot according to the frequencies. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is …
Estimating errors in least-squares fitting
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
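The closed-form error expressions described in this abstract can be illustrated with a short NumPy sketch (an illustrative reconstruction, not the paper's code): for a linear fit y ≈ Aβ, the parameter covariance is s²(AᵀA)⁻¹, and the standard error of the fitted function follows by propagating that covariance through each basis row.

```python
import numpy as np

# Straight-line fit with random errors in y (invented data for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)   # data with random errors

A = np.vander(x, 2, increasing=True)               # design matrix [1, x]
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
s2 = resid @ resid / (x.size - A.shape[1])         # residual variance estimate
cov = s2 * np.linalg.inv(A.T @ A)                  # parameter covariance
se_params = np.sqrt(np.diag(cov))                  # standard errors of beta
se_fit = np.sqrt(np.einsum("ij,jk,ik->i", A, cov, A))  # std error of the fit vs x
```

Here `se_fit` is the standard error of the fitted function at each x, i.e. the diagonal of A·cov·Aᵀ.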
Skeletonized Least Squares Wave Equation Migration
Zhan, Ge
2010-10-17
The theory for skeletonized least squares wave equation migration (LSM) is presented. The key idea is that, for an assumed velocity model, the source-side Green's function and the geophone-side Green's function are computed by a numerical solution of the wave equation. Only the early arrivals of these Green's functions are saved and skeletonized to form the migration Green's function (MGF) by convolution. Then the migration image is obtained by a dot product between the recorded shot gathers and the MGF for every trial image point. The key to an efficient implementation of iterative LSM is that at each conjugate gradient iteration, the MGF is reused and no new finite-difference (FD) simulations are needed to get the updated migration image. It is believed that this procedure combined with phase-encoded multi-source technology will allow for the efficient computation of wave equation LSM images in less time than that of conventional reverse time migration (RTM).
Elastic least-squares reverse time migration
Feng, Zongcai
2017-03-08
We use elastic least-squares reverse time migration (LSRTM) to invert for the reflectivity images of P- and S-wave impedances. Elastic LSRTM solves the linearized elastic-wave equations for forward modeling and the adjoint equations for backpropagating the residual wavefield at each iteration. Numerical tests on synthetic data and field data reveal the advantages of elastic LSRTM over elastic reverse time migration (RTM) and acoustic LSRTM. For our examples, the elastic LSRTM images have better resolution and amplitude balancing, fewer artifacts, and less crosstalk compared with the elastic RTM images. The images are also better focused and have better reflector continuity for steeply dipping events compared to the acoustic LSRTM images. Similar to conventional least-squares migration, elastic LSRTM also requires an accurate estimation of the P- and S-wave migration velocity models. However, the problem remains that, when there are moderate errors in the velocity model and strong multiples, LSRTM will produce migration noise stronger than that seen in the RTM images.
Multilevel weighted least squares polynomial approximation
Haji-Ali, Abdul-Lateef
2017-06-30
Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
New approach to breast cancer CAD using partial least squares and kernel-partial least squares
Land, Walker H., Jr.; Heine, John; Embrechts, Mark; Smith, Tom; Choma, Robert; Wong, Lut
2005-04-01
Breast cancer is second only to lung cancer as a tumor-related cause of death in women. Currently, the method of choice for the early detection of breast cancer is mammography. While sensitive to the detection of breast cancer, its positive predictive value (PPV) is low, resulting in biopsies that are only 15-34% likely to reveal malignancy. This paper explores the use of two novel approaches called Partial Least Squares (PLS) and Kernel-PLS (K-PLS) for the diagnosis of breast cancer. The approach is based on optimization for the partial least squares (PLS) algorithm for linear regression and the K-PLS algorithm for non-linear regression. Preliminary results show that both the PLS and K-PLS paradigms achieved comparable results with three separate support vector learning machines (SVLMs), where these SVLMs were known to have been trained to a global minimum. That is, the average performance of the three separate SVLMs was Az = 0.9167927, with an average partial Az (Az90) = 0.5684283. These results compare favorably with the K-PLS paradigm, which obtained an Az = 0.907 and partial Az = 0.6123. The PLS paradigm provided comparable results. Secondly, both the K-PLS and PLS paradigms outperformed the ANN in that the Az index improved by about 14% (Az ~ 0.907 compared to the ANN Az of ~ 0.8). The "Press R squared" values for the PLS and K-PLS machine learning algorithms were 0.89 and 0.9, respectively, which is in good agreement with the other MOP values.
Least squares polynomial chaos expansion: A review of sampling strategies
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
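A minimal one-dimensional sketch of least-squares PCE with plain Monte Carlo sampling (the model, PCE order, and sample count here are invented for illustration, not taken from the review):

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Toy model of a single U(-1, 1) random input, approximated in a Legendre
# (polynomial chaos) basis via least squares on Monte Carlo samples.
rng = np.random.default_rng(6)
model = lambda xi: np.exp(0.5 * xi)

order, n = 5, 60                          # PCE order; ~10x oversampling ratio
xi = rng.uniform(-1.0, 1.0, n)            # Monte Carlo sample locations
Psi = leg.legvander(xi, order)            # measurement matrix of basis values
coef, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# Validate the surrogate on a fresh grid:
xv = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(leg.legvander(xv, order) @ coef - model(xv)))
```

Swapping the uniform draws for coherence-optimal or optimal-design samples changes only how `xi` is generated; the least-squares step is unchanged.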
A Risk Comparison of Ordinary Least Squares vs Ridge Regression
Dhillon, Paramveer S.; Foster, Dean P.; Kakade, Sham M.; Ungar, Lyle H.
2011-01-01
We compare the risk of ridge regression to a simple variant of ordinary least squares, in which one simply projects the data onto a finite dimensional subspace (as specified by a Principal Component Analysis) and then performs an ordinary (un-regularized) least squares regression in this subspace. This note shows that the risk of this ordinary least squares method is within a constant factor (namely 4) of the risk of ridge regression.
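The two estimators being compared can be sketched as follows (synthetic data, dimensions, and the regularization value are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 200, 10, 3
scales = np.array([3.0, 2.5, 2.0] + [0.5] * 7)     # decaying feature spectrum
X = rng.normal(size=(n, p)) * scales
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.0, 0.5]                   # signal in high-variance directions
y = X @ beta_true + rng.normal(0.0, 0.5, n)

# Ridge regression: beta = (X'X + lam I)^(-1) X'y
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# PCA-truncated OLS: un-regularized least squares in the k-dim principal subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:k].T                                   # principal-component scores
gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_pcr = Vt[:k].T @ gamma                        # map back to original coordinates
```

When the signal lies in the top principal directions, as constructed here, both estimators recover `beta_true` closely, consistent with the constant-factor risk bound.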
Discrete Wavelet Transform-Partial Least Squares Versus Derivative ...
African Journals Online (AJOL)
Discrete Wavelet Transform-Partial Least Squares Versus Derivative Ratio Spectrophotometry for Simultaneous Determination of Chlorpheniramine Maleate and Dexamethasone in the Presence of Parabens in Pharmaceutical Dosage Form.
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
Directory of Open Access Journals (Sweden)
Greenwood L.R.
2016-01-01
Full Text Available The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
Greenwood, L. R.; Johnson, C. D.
2016-02-01
The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator
Comparison of the estimation of the least square and genetic ...
African Journals Online (AJOL)
This article evaluates the functions available in the R software that are employed for approximate solutions in optimization. In this research, least-squares estimates were produced by the usual methods for linear and non-linear models and through a genetic algorithm. Keywords: Least Squares, Genetic Algorithm, ...
Spectrum unfolding by the least-squares methods
International Nuclear Information System (INIS)
Perey, F.G.
1977-01-01
The method of least squares is briefly reviewed, and the conditions under which it may be used are stated. From this analysis, a least-squares approach to the solution of the dosimetry neutron spectrum unfolding problem is introduced. The mathematical solution to this least-squares problem is derived from the general solution. The existence of this solution is analyzed in some detail. A χ²-test is derived for the consistency of the input data which does not require the solution to be obtained first. The fact that the problem is technically nonlinear, but should be treated in general as a linear one, is argued. Therefore, the solution should not be obtained by iteration. Two interpretations are made for the solution of the code STAY'SL, which solves this least-squares problem. The relationship of the solution to this least-squares problem to those obtained currently by other methods of solving the dosimetry neutron spectrum unfolding problem is extensively discussed. It is shown that the least-squares method does not require more input information than would be needed by current methods in order to estimate the uncertainties in their solutions. From this discussion it is concluded that the proposed least-squares method does provide the best complete solution, with uncertainties, to the problem as it is understood now. Finally, some implications of this method are mentioned regarding future work required in order to exploit its potential fully.
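The linear least-squares adjustment described here has the familiar Gaussian update form. The following toy sketch (an invented 4-group spectrum and two reactions, not STAY'SL itself) shows the adjusted estimate, the reduced covariance, and the consistency χ² that can be computed before solving:

```python
import numpy as np

# Toy 4-group spectral adjustment; all numbers are invented for illustration.
phi0 = np.array([1.0, 2.0, 1.5, 0.5])          # prior group fluxes
C_phi = np.diag((0.2 * phi0) ** 2)             # 20% prior uncertainty
R = np.array([[0.1, 0.4, 0.3, 0.05],           # reaction group cross sections
              [0.3, 0.1, 0.05, 0.2]])
a_meas = np.array([1.30, 0.70])                # measured saturated rates
C_a = np.diag((0.05 * a_meas) ** 2)            # 5% measurement uncertainty

# Linear least-squares adjustment (Gaussian update form):
S = R @ C_phi @ R.T + C_a                      # covariance of the residual
K = C_phi @ R.T @ np.linalg.inv(S)             # gain
phi_adj = phi0 + K @ (a_meas - R @ phi0)       # adjusted spectrum
C_adj = C_phi - K @ R @ C_phi                  # reduced covariance
# Consistency chi-square of the input data, available before solving:
chi2 = (a_meas - R @ phi0) @ np.linalg.inv(S) @ (a_meas - R @ phi0)
```

The adjusted covariance is never larger than the prior, and the predicted rates move toward the measurements, which is the essential content of the least-squares solution.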
Source Localization using Stochastic Approximation and Least Squares Methods
International Nuclear Information System (INIS)
Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis
2009-01-01
This paper presents two approaches to locate the source of a chemical plume: Nonlinear Least Squares and Stochastic Approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate this source. The Non-linear Least Squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the Least Squares method. SA methods are often better at coping with noisy input information than other search methods.
A Newton Algorithm for Multivariate Total Least Squares Problems
Directory of Open Access Journals (Sweden)
WANG Leyang
2016-04-01
Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to propagation of cofactor, 16 computational formulae of cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can also deal with their stochastic and deterministic elements with only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
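For contrast with the Newton algorithm above, the classical unweighted total least-squares solution, which also treats errors in both the coefficient matrix and the observations, can be obtained from an SVD of the augmented matrix. The example below is a generic univariate sketch, not the paper's multivariate weighted algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x_true = rng.normal(size=n)
# Errors in both the coefficient matrix and the observation vector:
A = (x_true + rng.normal(0.0, 0.05, n)).reshape(-1, 1)
b = 1.5 * x_true + rng.normal(0.0, 0.05, n)     # true parameter is 1.5

# Classical TLS via SVD of the augmented matrix [A | b]:
Z = np.column_stack([A, b])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]                          # right singular vector of smallest singular value
beta_tls = -v[:-1] / v[-1]          # TLS parameter estimate

beta_ols, *_ = np.linalg.lstsq(A, b, rcond=None)   # ordinary LS, for comparison
```

With small, equal noise in both A and b, the two estimates nearly coincide; TLS matters more as the noise in A grows.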
Moving least-squares corrections for smoothed particle hydrodynamics
Bilotta, G.; Russo, G.; Herault, A.; Del Negro, C.
2011-01-01
First-order moving least-squares are typically used in conjunction with smoothed particle hydrodynamics in the form of post-processing filters for density fields, to smooth out noise that develops in most applications of smoothed particle hydrodynamics. We show how an approach based on higher-order moving least-squares can be used to correct some of the main limitations in gradient and second-order derivative computation in classic smoothed particle hydrodynamics formulations. With a small increase in computational cost, we manage to achieve smooth density distributions without the need for post-processing and with higher accuracy in the computation of the viscous term of the Navier–Stokes equations, thereby reducing the formation of spurious shockwaves or other streaming effects in the evolution of fluid flow.
Least squares in calibration: dealing with uncertainty in x.
Tellinghuisen, Joel
2010-08-01
The least-squares (LS) analysis of data with error in x and y is generally thought to yield best results when carried out by minimizing the "total variance" (TV), defined as the sum of the properly weighted squared residuals in x and y. Alternative "effective variance" (EV) methods project the uncertainty in x into an effective contribution to that in y, and though easier to employ are considered to be less reliable. In the case of a linear response function with both σx and σy constant, the EV solutions are identically those from ordinary LS; and Monte Carlo (MC) simulations reveal that they can actually yield smaller root-mean-square errors than the TV method. Furthermore, the biases can be predicted from theory based on inverse regression--x upon y when x is error-free and y is uncertain--which yields a bias factor proportional to the ratio σx²/σxm² of the random-error variance in x to the model variance. The MC simulations confirm that the biases are essentially independent of the error in y, hence correctable. With such bias corrections, the better performance of the EV method in estimating the parameters translates into better performance in estimating the unknown (x0) from measurements (y0) of its response. The predictability of the EV parameter biases extends also to heteroscedastic y data as long as σx remains constant, but the estimation of x0 is not as good in this case. When both x and y are heteroscedastic, there is no known way to predict the biases. However, the MC simulations suggest that for proportional error in x, a geometric x-structure leads to small bias and comparable performance for the EV and TV methods.
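The constant-σ case discussed above, where the EV fit reduces to ordinary LS and the attenuation bias is predictable from σx²/σxm², can be sketched as follows (all simulation parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
a_true, b_true = 2.0, 0.8
sx, sy = 0.5, 0.2
x_model = rng.uniform(0.0, 10.0, n)            # error-free model values of x
x = x_model + rng.normal(0.0, sx, n)           # measured x (uncertain)
y = a_true + b_true * x_model + rng.normal(0.0, sy, n)

# With sigma_x and sigma_y constant, the EV fit reduces to ordinary LS of y
# on the measured x; its slope is attenuated toward zero:
b_ev, a_ev = np.polyfit(x, y, 1)               # slope, intercept

# The bias factor is sigma_x^2 / sigma_xm^2, with sigma_xm^2 the model
# variance of x (estimated here from the data); correct the slope:
s_xm2 = np.var(x) - sx**2
b_corr = b_ev * (1.0 + sx**2 / s_xm2)
```

The raw EV/OLS slope underestimates the true value of 0.8, and the predicted-bias correction essentially removes the error, mirroring the MC findings in the abstract.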
LSL: a logarithmic least-squares adjustment method
International Nuclear Information System (INIS)
Stallmann, F.W.
1982-01-01
To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding has been constructed some time ago and tentatively named LSL
Sparse least-squares reverse time migration using seislets
Dutta, Gaurav
2015-08-19
We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.
Multi-source least-squares migration of marine data
Wang, Xin
2012-11-04
Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its IO cost is significantly decreased.
Algorithms for unweighted least-squares factor analysis
Krijnen, WP
Estimation of the factor model by unweighted least squares (ULS) is distribution free, yields consistent estimates, and is computationally fast if the Minimum Residuals (MinRes) algorithm is employed. MinRes algorithms produce a converging sequence of monotonically decreasing ULS function values.
Preconditioned Iterative Methods for Solving Weighted Linear Least Squares Problems
Czech Academy of Sciences Publication Activity Database
Bru, R.; Marín, J.; Mas, J.; Tůma, Miroslav
2014-01-01
Vol. 36, No. 4 (2014), A2002-A2022 ISSN 1064-8275 Institutional support: RVO:67985807 Keywords: preconditioned iterative methods * incomplete decompositions * approximate inverses * linear least squares Subject RIV: BA - General Mathematics Impact factor: 1.854, year: 2014
Least-squares Bilinear Clustering of Three-way Data
P.C. Schoonees (Pieter); P.J.F. Groenen (Patrick); M. van de Velden (Michel)
2015-01-01
Abstract: A least-squares bilinear clustering framework for modelling three-way data, where each observation consists of an ordinary two-way matrix, is introduced. The method combines bilinear decompositions of the two-way matrices into overall means, row margins, column
Moving least squares simulation of free surface flows
DEFF Research Database (Denmark)
Felter, C. L.; Walther, Jens Honore; Henriksen, Christian
2014-01-01
In this paper a Moving Least Squares method (MLS) for the simulation of 2D free surface flows is presented. The emphasis is on the governing equations, the boundary conditions, and the numerical implementation. The compressible viscous isothermal Navier–Stokes equations are taken as the starting ...
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
A hybrid partial least squares and random forest approach to ...
African Journals Online (AJOL)
The aim of this study was to examine the utility of the partial least squares regression (PLSR), random forest (RF) and a PLSR-RF hybrid machine learning approach for the prediction of four forest structural attributes: (basal area, volume, dominant tree height and mean tree height) within a commercial Eucalyptus forest ...
SELECTION OF REFERENCE PLANE BY THE LEAST SQUARES FITTING METHODS
Directory of Open Access Journals (Sweden)
Przemysław Podulka
2016-06-01
For least-squares polynomial fittings, it was found that the applied method usually gave better robustness to the occurrence of scratches, valleys, and dimples on cylinder liners. For piston skirt surfaces, better edge-filtering results were obtained. It was also recommended to analyse the Sk parameters for proper selection of the reference plane in surface topography measurements.
Separation of Regional-Residual Anomaly Using Least Square ...
African Journals Online (AJOL)
Separation of Regional-Residual Anomaly Using Least Square Polynomial Fitting Method. ... The data were obtained by digitizing the maps of the above areas, picking the total magnetic values along the profile line, and then processing and analyzing them. The result of the residual separation revealed that the area is underlain by a ...
Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials
Nguyen, Nhan T.; Burken, John; Ishihara, Abraham
2011-01-01
This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
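A generic sketch of least-squares function approximation in a Chebyshev basis, using NumPy's chebyshev module (the target function is invented and the flight-control context is omitted):

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Least-squares fit of an invented nonlinearity in Chebyshev basis functions:
x = np.linspace(-1.0, 1.0, 200)
f = np.exp(x) * np.sin(3.0 * x)        # stand-in for the uncertainty to model

coef = cheb.chebfit(x, f, 10)          # least-squares Chebyshev coefficients
approx = cheb.chebval(x, coef)
max_err = np.max(np.abs(f - approx))
```

Because Chebyshev polynomials are orthogonal, the normal equations stay well conditioned as the degree grows, which is the convergence advantage the abstract attributes to orthogonal basis functions.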
Plane-wave Least-squares Reverse Time Migration
Dai, Wei
2012-11-04
Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent through all the angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.
Integer least-squares theory for the GNSS compass
Teunissen, P.J.G.
2010-01-01
Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search
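The core ILS problem, minimizing the Q-weighted distance from the float ambiguity estimate over integer vectors, can be sketched with a brute-force search (the float solution and covariance below are invented; practical solvers such as LAMBDA use decorrelation and tree search rather than enumeration):

```python
import numpy as np
from itertools import product

# Float ambiguity estimate and its covariance (invented numbers):
a_hat = np.array([2.3, -1.7])
Q = np.array([[0.09, 0.05],
              [0.05, 0.08]])
Qinv = np.linalg.inv(Q)

# Integer least squares: minimize (a_hat - z)' Q^-1 (a_hat - z) over integer z,
# here by brute force over a small box around the rounded float solution.
center = np.rint(a_hat).astype(int)
best, best_val = None, np.inf
for dz in product(range(-2, 3), repeat=2):
    z = center + np.array(dz)
    r = a_hat - z
    val = r @ Qinv @ r
    if val < best_val:
        best, best_val = z, val
```

For this (invented) covariance, simple rounding already gives the ILS minimizer; with strong correlation between ambiguities the two can differ, which is why the weighted search matters.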
Wave-equation Q tomography and least-squares migration
Dutta, Gaurav
2016-03-01
This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic
Least-Square Prediction for Backward Adaptive Video Coding
Directory of Open Access Journals (Sweden)
Li Xin
2006-01-01
Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP, and demonstrate its potential in video coding. Motivated by the duality between edge contour in images and motion trajectory in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA, without the need of transmitting any overhead.
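A one-dimensional sketch of backward-adaptive least-square prediction, trained on the causal past only so that no coefficients need to be transmitted (the signal and window sizes are invented stand-ins for video data):

```python
import numpy as np

rng = np.random.default_rng(5)
# Slowly varying signal standing in for a slow-motion video trace:
t = np.arange(300)
sig = np.sin(0.05 * t) + 0.01 * rng.normal(size=t.size)

order, train = 4, 64
# Train the LS predictor on the causal past: each row holds `order` past samples.
X = np.array([sig[i - order:i] for i in range(order, order + train)])
y = sig[order:order + train]
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Backward-adaptive prediction of later samples; the decoder can recompute w
# from the same causal past, so no side information is needed.
idx = np.arange(order + train, t.size)
pred = np.array([sig[i - order:i] @ w for i in idx])
mse = np.mean((sig[idx] - pred) ** 2)
```

In a real codec the same idea is applied per pixel over a 2D causal neighborhood, and the residual `sig[idx] - pred` is what gets coded.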
Colorimetric characterization of LCD based on constrained least squares
LI, Tong; Xie, Kai; Wang, Qiaojie; Yao, Luyang
2017-01-01
In order to improve the accuracy of colorimetric characterization of liquid crystal displays, a tone matrix model for display characterization in color management modeling is established by using constrained least squares for quadratic polynomial fitting, to find the relationship between the RGB color space and the CIEXYZ color space; 51 sets of training samples were collected to solve the parameters, and the accuracy of the color space mapping model was verified with 100 groups of random verification samples. The experimental results showed that, with the constrained least squares method, the accuracy of color mapping was high; the maximum color difference of this model is 3.8895 and the average color difference is 1.6689, which proves that the method has a better optimization effect on the colorimetric characterization of liquid crystal displays.
Least squares orthogonal polynomial approximation in several independent variables
International Nuclear Information System (INIS)
Caprari, R.S.
1992-06-01
This paper begins with an exposition of a systematic technique for generating orthonormal polynomials in two independent variables by application of the Gram-Schmidt orthogonalization procedure of linear algebra. It is then demonstrated how a linear least squares approximation for experimental data or an arbitrary function can be generated from these polynomials. The least squares coefficients are computed without recourse to matrix arithmetic, which ensures both numerical stability and simplicity of implementation as a self-contained numerical algorithm. The Gram-Schmidt procedure is then utilised to generate a complete set of orthogonal polynomials of fourth degree. A theory for the transformation of the polynomial representation from an arbitrary basis into the familiar sum of products form is presented, together with a specific implementation for fourth degree polynomials. Finally, the computational integrity of this algorithm is verified by reconstructing arbitrary fourth degree polynomials from their values at randomly chosen points in their domain. 13 refs., 1 tab
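The procedure described above, discrete Gram-Schmidt orthonormalization of a two-variable monomial basis followed by least-squares coefficients computed as plain inner products with no matrix inversion, can be sketched as follows (degree and sample count are illustrative, second degree rather than the paper's fourth):

```python
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(-1.0, 1.0, size=(400, 2))     # sample points (x, y)
x, y = pts[:, 0], pts[:, 1]

# Monomial basis up to degree 2 in two variables, evaluated at the points:
basis = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

# Gram-Schmidt orthonormalization with respect to the discrete inner product
# <f, g> = sum_i f(p_i) g(p_i)  (modified GS for numerical stability):
Q = np.zeros_like(basis)
for j in range(basis.shape[1]):
    v = basis[:, j].copy()
    for k in range(j):
        v -= (Q[:, k] @ v) * Q[:, k]
    Q[:, j] = v / np.linalg.norm(v)

# Least-squares coefficients are plain inner products (no matrix inversion):
f = 1.0 + 2.0 * x - y + 0.5 * x * y             # data lying in the basis span
coef = Q.T @ f
approx = Q @ coef
```

Because `f` lies in the span of the basis, the least-squares reconstruction is exact to machine precision, which mirrors the paper's verification by reconstructing polynomials from random samples.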
Moving least-squares corrections for smoothed particle hydrodynamics
Directory of Open Access Journals (Sweden)
Ciro Del Negro
2011-12-01
Full Text Available First-order moving least-squares are typically used in conjunction with smoothed particle hydrodynamics in the form of post-processing filters for density fields, to smooth out noise that develops in most applications of smoothed particle hydrodynamics. We show how an approach based on higher-order moving least-squares can be used to correct some of the main limitations in gradient and second-order derivative computation in classic smoothed particle hydrodynamics formulations. With a small increase in computational cost, we manage to achieve smooth density distributions without the need for post-processing and with higher accuracy in the computation of the viscous term of the Navier–Stokes equations, thereby reducing the formation of spurious shockwaves or other streaming effects in the evolution of fluid flow. Numerical tests on a classic two-dimensional dam-break problem confirm the improvement of the new approach.
Source allocation by least-squares hydrocarbon fingerprint matching
Energy Technology Data Exchange (ETDEWEB)
William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)
2006-11-01
There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.
Linearized least-square imaging of internally scattered data
Aldawood, Ali
2014-01-01
Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-squares inversion of double-scattered data helped delineate that reflector with minimal acquisition fingerprint.
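The linearized least-squares step is, generically, an iterative solve of min ||Lm − d||². A toy sketch with a small dense matrix standing in for the wave-equation migration/demigration operator pair, using conjugate-gradient least squares (CGLS), a common solver for such linearized inversions (not necessarily the authors' exact scheme):

```python
import numpy as np

def cgls(L, d, niter=50, tol=1e-20):
    """Conjugate-gradient least squares for min ||L m - d||^2,
    applying only L and L^T (never forming L^T L explicitly)."""
    m = np.zeros(L.shape[1])
    r = d - L @ m
    s = L.T @ r                 # negative gradient direction
    p = s.copy()
    norm_s = s @ s
    for _ in range(niter):
        q = L @ p
        alpha = norm_s / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = L.T @ r
        norm_s_new = s @ s
        if norm_s_new < tol:    # gradient vanished: converged
            break
        p = s + (norm_s_new / norm_s) * p
        norm_s = norm_s_new
    return m

rng = np.random.default_rng(2)
L = rng.standard_normal((30, 10))   # stand-in for the migration operator
m_true = rng.standard_normal(10)    # stand-in reflectivity model
d = L @ m_true                      # stand-in recorded data
m_est = cgls(L, d)
```

Only the operator changes in the real application; the iteration itself is the same, which is why the scheme mitigates operator-induced artifacts rather than just applying the adjoint (migration) once.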
Least-squares fit for precise determination of decay constants
International Nuclear Information System (INIS)
Katano, R.; Isozumi, Y.
1984-01-01
A description is given for a method of determining decay constants, which is based on non-linear least-squares fits to decay curves measured by counting radiations. While the analysis is straightforward because of the known statistical behaviour of radiation counts, a serious problem arises from the count loss caused by the dead time inherent in radiation counting systems. The limit of the present method coming from the count loss is discussed in detail, since the loss is almost impossible to correct for high count rates. An analytical expression for the statistical precision in determining decay constants by the least-squares fit is deduced as a function of appropriate parameters, i.e., half-life, initial count rate, dead time, and measuring period. (orig.)
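A minimal sketch of such a nonlinear least-squares decay fit via Gauss-Newton, on synthetic noiseless data. The dead-time count-loss effect that the paper analyses is deliberately ignored here, and all parameter values are illustrative:

```python
import numpy as np

def fit_decay(t, counts, n0_init, lam_init, niter=100):
    """Gauss-Newton fit of N(t) = N0 * exp(-lam * t) to measured counts.
    (Dead-time losses, central to the paper's analysis, are omitted.)"""
    theta = np.array([n0_init, lam_init], dtype=float)
    for _ in range(niter):
        n0, lam = theta
        model = n0 * np.exp(-lam * t)
        resid = counts - model
        # Jacobian of the model with respect to (N0, lam)
        J = np.column_stack([np.exp(-lam * t),
                             -n0 * t * np.exp(-lam * t)])
        step, *_ = np.linalg.lstsq(J, resid, rcond=None)
        theta += step
        if np.linalg.norm(step) < 1e-12:
            break
    return theta

t = np.linspace(0.0, 10.0, 50)
true_n0, true_lam = 1000.0, 0.35
counts = true_n0 * np.exp(-true_lam * t)      # noiseless demo data
n0_est, lam_est = fit_decay(t, counts, 900.0, 0.3)
```

With real count data the residuals would be weighted by their Poisson variances, and the statistical precision of lam follows the dependence on half-life, initial rate, and measuring period discussed in the abstract.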
Multisplitting for linear, least squares and nonlinear problems
Energy Technology Data Exchange (ETDEWEB)
Renaut, R.
1996-12-31
In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and of nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer, University of Wuppertal, on the linear problems and with Hans Mittelmann, Arizona State University, on the nonlinear problems.
A mechanical interpretation of least squares fitting in 3D
Penne, Rudi
2008-01-01
We address the computation of the line that minimizes the sum of squared distances with respect to a given set of $n$ points in 3-space. This problem has a well-known satisfying solution by means of PCA. We offer an alternative interpretation for this optimal line as the center of the screw motion that minimizes the sum of squared velocities in the given points. The numerical translation of this viewpoint is a generalized eigenproblem, where the total residue...
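The PCA solution mentioned above can be sketched directly: the optimal line passes through the centroid of the points, directed along the principal eigenvector of the scatter matrix. A minimal sketch (the screw-motion reinterpretation and its generalized eigenproblem are not reproduced):

```python
import numpy as np

def best_fit_line(points):
    """Line minimizing the sum of squared orthogonal distances to the
    points: it passes through the centroid and runs along the principal
    eigenvector of the scatter matrix (the classical PCA solution)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # eigh returns eigenvalues in ascending order; take the largest.
    _, vecs = np.linalg.eigh(centered.T @ centered)
    direction = vecs[:, -1]
    return centroid, direction

# Points sampled exactly on a known 3D line.
rng = np.random.default_rng(3)
s = rng.uniform(-5, 5, size=100)
d_true = np.array([1.0, 2.0, 2.0]) / 3.0          # unit direction
pts = np.array([0.5, -1.0, 2.0]) + np.outer(s, d_true)
c, d = best_fit_line(pts)
```

The recovered direction is defined only up to sign, as is the eigenvector itself.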
Variable Metric Methods for Unconstrained Optimization and Nonlinear Least Squares
Czech Academy of Sciences Publication Activity Database
Lukšan, Ladislav; Spedicato, E.
2000-01-01
Roč. 124, č. 1-2 (2000), s. 61-95 ISSN 0377-0427 R&D Projects: GA ČR GA201/00/0080 Institutional research plan: AV0Z1030915 Keywords : quasi-Newton methods * variable metric methods * unconstrained optimization * nonlinear least squares * sparse problems * partially separable problems * limited-memory methods Subject RIV: BA - General Mathematics Impact factor: 0.455, year: 2000
A FORTRAN program for a least-square fitting
International Nuclear Information System (INIS)
Yamazaki, Tetsuo
1978-01-01
A practical FORTRAN program for a least-squares fitting is presented. Although the method is quite usual, the program calculates not only the most satisfactory set of values of unknowns but also the plausible errors associated with them. As an example, a measured lateral absorbed-dose distribution in water for a narrow 25-MeV electron beam is fitted to a Gaussian distribution. (auth.)
Inter-class sparsity based discriminative least square regression.
Wen, Jie; Xu, Yong; Li, Zuoyong; Ma, Zhongli; Xu, Yuanrong
2018-02-21
Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first one is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second one is that the used label matrix, i.e., the zero-one label matrix, is inappropriate for classification. To solve these problems and improve the performance, this paper presents a novel method, i.e., inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues that the transformed samples have a common sparsity structure in each class. For this goal, an inter-class sparsity constraint is introduced to the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with row-sparsity constraint is introduced to relax the strict zero-one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression, and thus it has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
Multi-source least-squares reverse time migration
Dai, Wei
2012-06-15
Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computational efficiency. By iterative migration of supergathers, which consist of a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.
Solving linear inequalities in a least squares sense
Energy Technology Data Exchange (ETDEWEB)
Bramley, R.; Winnicka, B. [Indiana Univ., Bloomington, IN (United States)
1994-12-31
Let A ∈ R^(m×n) be an arbitrary real matrix, and let b ∈ R^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ||Ax − b||, where ||·|| refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ||(Ax − b)_+||, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
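A minimal sketch of this least-squares treatment of inequalities (not the authors' algorithm): gradient descent on f(x) = ½||(Ax − b)_+||², whose gradient is A^T(Ax − b)_+, so at any feasible point the gradient vanishes. All data below are synthetic.

```python
import numpy as np

def lsq_inequalities(A, b, step=None, niter=5000):
    """Gradient descent on f(x) = 0.5 * ||(A x - b)_+||^2.
    The gradient is A^T (A x - b)_+ ; when A x <= b holds, the
    violation vector (A x - b)_+ is zero and iteration stops moving."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for this smooth f
    x = np.zeros(A.shape[1])
    for _ in range(niter):
        viol = np.maximum(A @ x - b, 0.0)        # (A x - b)_+
        x -= step * (A.T @ viol)
    return x

# A consistent system: x = (1, 1, 1) is strictly feasible by construction,
# so the minimum of f is exactly zero.
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 3))
b = A @ np.ones(3) + 0.1
x = lsq_inequalities(A, b)
resid = np.maximum(A @ x - b, 0.0)
```

For an inconsistent system the same iteration settles on an x whose residual norm ||(Ax − b)_+|| is minimal rather than zero.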
Handbook of Partial Least Squares Concepts, Methods and Applications
Vinzi, Vincenzo Esposito; Henseler, Jörg
2010-01-01
This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.
Single directional SMO algorithm for least squares support vector machines.
Shao, Xigao; Wu, Kun; Liao, Bifeng
2013-01-01
Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization- (SMO-) type decomposition methods is proposed. With the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from that of existing methods, but its training speed is faster.
Plane-wave least-squares reverse-time migration
Dai, Wei
2013-06-03
A plane-wave least-squares reverse-time migration (LSRTM) is formulated with a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of plane-wave prestack LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer stable convergence even when the migration velocity has bulk errors of up to 5%; (2) to significantly reduce computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. Unlike phase-shift encoding with random time shifts applied to each shot gather, plane-wave encoding can be effectively applied to data with a marine streamer geometry; (3) plane-wave prestack LSRTM can provide higher-quality images than standard reverse-time migration. Numerical tests on the Marmousi2 model and a marine field data set are performed to illustrate the benefits of plane-wave LSRTM. Empirical results show that LSRTM in the plane-wave domain, compared to standard reverse-time migration, produces images efficiently with fewer artifacts and better spatial resolution. Moreover, the prestack image ensemble accommodates more unknowns, making it more robust than conventional least-squares migration in the presence of migration velocity errors. © 2013 Society of Exploration Geophysicists.
Decision-Directed Recursive Least Squares MIMO Channels Tracking
Directory of Open Access Journals (Sweden)
2006-01-01
Full Text Available A new approach for joint data estimation and channel tracking for multiple-input multiple-output (MIMO) channels is proposed based on the decision-directed recursive least squares (DD-RLS) algorithm. The RLS algorithm is commonly used for equalization, and its application to channel estimation is a novel idea. In this paper, after defining the weighted least squares cost function, it is minimized and the RLS MIMO channel estimation algorithm is derived. The proposed algorithm, combined with the decision-directed algorithm (DDA), is then extended to blind-mode operation. From the computational complexity point of view, being cubic in the number of transmitter and receiver antennas, the proposed algorithm is very efficient. Through various simulations, the mean square error (MSE) of the tracking of the proposed algorithm for different joint detection algorithms is compared with the Kalman filtering approach, which is one of the most well-known channel tracking algorithms. It is shown that the performance of the proposed algorithm is very close to the Kalman estimator, and that in blind-mode operation it presents better performance with much lower complexity, without the need to know the channel model.
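The RLS recursion at the heart of the method can be sketched for a single-channel linear model with synthetic data; the MIMO extension, decision-directed feedback, and blind mode are omitted, and the initialization constant is illustrative:

```python
import numpy as np

def rls(X, y, lam=1.0, delta=100.0):
    """Recursive least squares: processes one regressor row at a time,
    updating the weight vector w and the inverse correlation matrix P.
    lam is the forgetting factor (1.0 = ordinary growing-window LS);
    delta scales the initial P (large delta = weak prior)."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)
    for x, d in zip(X, y):
        k = P @ x / (lam + x @ P @ x)   # gain vector
        e = d - w @ x                   # a priori error
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
    return w

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 4))
w_true = np.array([0.5, -1.0, 2.0, 0.3])
y = X @ w_true                          # noiseless demo observations
w_est = rls(X, y)
```

With lam < 1, old samples are exponentially forgotten, which is what makes the recursion suitable for tracking a time-varying channel.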
Nonlinear Partial Least Squares for Consistency Analysis of Meteorological Data
Directory of Open Access Journals (Sweden)
Zhen Meng
2015-01-01
Full Text Available Considering the different types of error and the nonlinearity of meteorological measurements, this paper proposes a nonlinear partial least squares method for consistency analysis of meteorological data. For a meteorological element from one automated weather station, the proposed method builds a prediction model based on the corresponding meteorological elements of other surrounding automated weather stations to determine the abnormality of the measured values. In the proposed method, the latent variables of the independent variables and the dependent variables are extracted by partial least squares (PLS), and then they are used, respectively, as the inputs and outputs of a neural network to build the nonlinear internal model of PLS. The proposed method overcomes the limitation of traditional nonlinear PLS, whose inner model is a fixed quadratic function or spline function. Two typical neural networks are used in the proposed method: the back propagation neural network and the adaptive neuro-fuzzy inference system (ANFIS). Moreover, experiments are performed on real data from the atmospheric observation equipment operation monitoring system of Shaanxi Province, China. The experimental results verify that the nonlinear PLS with the ANFIS internal model is more effective and can correctly realize the consistency analysis of meteorological data.
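The PLS latent-variable extraction that the method builds on can be sketched with the standard NIPALS recursion and a linear inner model; the paper's contribution is to replace that linear inner relation between scores and outputs with a neural network or ANFIS, which is not reproduced here:

```python
import numpy as np

def pls1(X, y, ncomp):
    """NIPALS PLS1: extracts latent score vectors t of X that are
    maximally covariant with y, with a linear inner model q per
    component (the paper swaps this inner relation for a neural net)."""
    Xr, yr = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w                     # score (latent variable)
        tt = t @ t
        p = Xr.T @ t / tt              # X loading
        q = (yr @ t) / tt              # linear inner-model coefficient
        Xr = Xr - np.outer(t, p)       # deflate X
        yr = yr - q * t                # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # regression coefficients expressed in the original X space
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 5))
beta_true = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
y = X @ beta_true                      # exactly linear demo data
beta = pls1(X, y, ncomp=5)
```

With the number of components equal to the rank of X, PLS1 reproduces the ordinary least squares fit; using fewer components gives the dimension reduction the consistency analysis relies on.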
Least Square Fitted Scaling Factor for Radioactive Waste Storage Drum
International Nuclear Information System (INIS)
Park, Chang Je; Han, Hyuk; Yoo, Seunguk; Kim, Junhyeuk; Ahn, Hong Ju
2016-01-01
In this paper, a simplified simulation test for scaling factors is carried out using the ORIGEN-S code. Fuel depletion and decay effects are solely taken into consideration for various uranium enrichments and fuel burnups. In order to obtain an explicit formula for scaling factors as a function of enrichment and burnup, the generalized least square fitting (LSF) method is also applied. After obtaining scaling factors from the LSF method, the decay effects are implemented by multiplying by an exponential decay term containing the decay constant of each isotope. The resulting scaling factors are compared with those from direct ORIGEN-S simulations. In summary, scaling factors are evaluated as a function of enrichment, burnup, and decay time through the least square fitting method and a Lagrange interpolation scheme, and the fitted results are confirmed by comparison with the direct ORIGEN-S results. These simulations are also adaptable to other initial conditions, such as different fuel types and burnups
Feature extraction through least squares fit to a simple model
International Nuclear Information System (INIS)
Demuth, H.B.
1976-01-01
The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
BRGLM, Interactive Linear Regression Analysis by Least Square Fit
International Nuclear Information System (INIS)
Ringland, J.T.; Bohrer, R.E.; Sherman, M.E.
1985-01-01
1 - Description of program or function: BRGLM is an interactive program written to fit general linear regression models by least squares and to provide a variety of statistical diagnostic information about the fit. Stepwise and all-subsets regression can also be carried out. There are facilities for interactive data management (e.g. setting missing value flags, data transformations) and tools for constructing design matrices for the more commonly used models such as factorials, cubic splines, and auto-regressions. 2 - Method of solution: The least squares computations are based on the orthogonal (QR) decomposition of the design matrix, obtained using the modified Gram-Schmidt algorithm. 3 - Restrictions on the complexity of the problem: The current release of BRGLM allows maxima of 1000 observations, 99 variables, and 3000 words of main memory workspace. For a problem with N observations and P variables, the number of words of main memory storage required is MAX(N*(P+6), N*P+P*P+3*N, 3*P*P+6*N). Any linear model may be fit, although the in-memory workspace will have to be increased for larger problems
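The solution method named above can be sketched: a modified Gram-Schmidt QR factorization of the design matrix, followed by a triangular solve of R x = Qᵀ b. This is an illustrative reimplementation, not BRGLM's FORTRAN code:

```python
import numpy as np

def mgs_qr(A):
    """Thin QR decomposition via modified Gram-Schmidt: each column is
    orthogonalized against the already-computed q vectors one at a time,
    which is numerically more stable than classical Gram-Schmidt."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j]
        for i in range(j):
            R[i, j] = Q[:, i] @ v
            v = v - R[i, j] * Q[:, i]   # subtract component along q_i
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def lstsq_qr(A, b):
    """Least-squares fit via QR: solve the triangular system R x = Q^T b."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ b)

rng = np.random.default_rng(7)
A = rng.standard_normal((25, 4))
b = rng.standard_normal(25)
x = lstsq_qr(A, b)
```

Avoiding the explicit normal equations AᵀA in this way is what gives the QR route its better numerical behavior on ill-conditioned design matrices.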
Single Object Tracking With Fuzzy Least Squares Support Vector Machine.
Zhang, Shunli; Zhao, Sicong; Sui, Yao; Zhang, Li
2015-12-01
Single object tracking, in which a target is often initialized manually in the first frame and then is tracked and located automatically in the subsequent frames, is a hot topic in computer vision. The traditional tracking-by-detection framework, which often formulates tracking as a binary classification problem, has been widely applied and achieved great success in single object tracking. However, there are some potential issues in this formulation. For instance, the boundary between the positive and negative training samples is fuzzy, and the objectives of tracking and classification are inconsistent. In this paper, we attempt to address the above issues from the fuzzy system perspective and propose a novel tracking method by formulating tracking as a fuzzy classification problem. First, we introduce the fuzzy strategy into tracking and propose a novel fuzzy tracking framework, which can measure the importance of the training samples by assigning different memberships to them and offer more strict spatial constraints. Second, we develop a fuzzy least squares support vector machine (FLS-SVM) approach and employ it to implement a concrete tracker. In particular, the primal form, dual form, and kernel form of FLS-SVM are analyzed and the corresponding closed-form solutions are derived for efficient realizations. Besides, a least squares regression model is built to control the update adaptively, retaining the robustness of the appearance model. The experimental results demonstrate that our method can achieve comparable or superior performance to many state-of-the-art methods.
LEAST SQUARES FITTING OF ELLIPSOID USING ORTHOGONAL DISTANCES
Directory of Open Access Journals (Sweden)
SEBAHATTIN BEKTAS
Full Text Available In this paper, we present techniques for ellipsoid fitting which are based on minimizing the sum of the squares of the geometric distances between the data and the ellipsoid. The literature often uses "orthogonal fitting" in place of "geometric fitting" or "best-fit". For many different purposes, the best-fit ellipsoid fitting to a set of points is required. The problem of fitting an ellipsoid is encountered frequently in image processing, face recognition, computer games, geodesy, etc. Today, increasing GPS and satellite measurement precision will allow us to determine a more realistic Earth ellipsoid. Several studies have shown that the Earth, other planets, natural satellites, asteroids and comets can be modeled as triaxial ellipsoids (Burša and Šima, 1980; Iz et al., 2011). Determining the reference ellipsoid for the Earth is an important ellipsoid fitting application, because all geodetic calculations are performed on the reference ellipsoid. Algebraic fitting methods solve a linear least squares (LS) problem, and are relatively straightforward and fast. Fitting an orthogonal ellipsoid is a difficult issue: it is usually impossible to reach a solution with classic LS algorithms, because they are often faced with convergence problems. Therefore, it is necessary to use special algorithms, e.g. nonlinear least squares algorithms. We propose to use geometric fitting as opposed to algebraic fitting. This is computationally more intensive, but it provides scope for placing visually apparent constraints on ellipsoid parameter estimation and is free from curvature bias (Ray and Srivastava, 2008).
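For contrast with the geometric fit the paper advocates, the simplest algebraic variant really is a linear LS problem. A sketch for the special case of a centered, axis-aligned ellipsoid (a general ellipsoid, and the geometric/orthogonal fit, require the nonlinear algorithms discussed in the abstract):

```python
import numpy as np

def fit_ellipsoid_algebraic(pts):
    """Algebraic LS fit of a centered, axis-aligned ellipsoid
    x^2/a^2 + y^2/b^2 + z^2/c^2 = 1, which is linear in the
    unknowns u = (1/a^2, 1/b^2, 1/c^2).
    (A geometric fit minimizes orthogonal distances instead.)"""
    D = pts ** 2                               # design matrix [x^2 y^2 z^2]
    u, *_ = np.linalg.lstsq(D, np.ones(len(pts)), rcond=None)
    return 1.0 / np.sqrt(u)                    # semi-axes (a, b, c)

# Sample points exactly on a triaxial ellipsoid with semi-axes (3, 2, 1).
rng = np.random.default_rng(8)
v = rng.standard_normal((200, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)  # points on the unit sphere
pts = v * np.array([3.0, 2.0, 1.0])
axes = fit_ellipsoid_algebraic(pts)
```

On noisy data this algebraic formulation exhibits the curvature bias the authors mention, which is the motivation for the more expensive geometric fit.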
Bounded Perturbation Regularization for Linear Least Squares Estimation
Ballal, Tarig
2017-10-18
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
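The ℓ2-regularized least squares problem that the BPR solution converges to has a familiar closed form; a minimal sketch with synthetic data (the BPR regularizer-selection procedure itself is not reproduced):

```python
import numpy as np

def ridge(A, b, gamma):
    """l2-regularized (Tikhonov) least squares:
    min ||A x - b||^2 + gamma * ||x||^2,
    solved via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ b)

rng = np.random.default_rng(9)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(20)
x0 = ridge(A, b, 0.0)          # plain least squares
x1 = ridge(A, b, 1.0)          # solution shrunk toward zero
```

Increasing gamma monotonically shrinks the solution norm; methods such as BPR differ precisely in how they choose gamma to (approximately) minimize the MSE.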
Making the most out of least-squares migration
Huang, Yunsong
2014-09-01
Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution. Given a sufficiently accurate migration velocity model, LSM can mitigate many of the above problems and can produce more resolved migration images, sometimes with more than twice the spatial resolution of standard migration. However, LSM faces two challenges: The computational cost can be an order of magnitude higher than that of standard migration, and the resulting image quality can fail to improve for migration velocity errors of about 5% or more. It is possible to obtain the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.
Making the most out of the least (squares migration)
Dutta, Gaurav
2014-08-05
Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution. If the migration velocity model is sufficiently accurate, then LSM can mitigate many of the above problems and lead to a more resolved migration image, sometimes with twice the spatial resolution. However, there are two problems with LSM: the cost can be an order of magnitude more than standard migration and the quality of the LSM image is no better than the standard image for velocity errors of 5% or more. We now show how to get the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.
International Nuclear Information System (INIS)
Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha
2013-01-01
Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales, unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. - Highlights: ► Water quality data spans short time scales leading to significant model uncertainty. ► Assessment of uncertainty essential for informed decision making in water
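The weighted least squares building block used in this comparison can be sketched as follows (synthetic data; the Bayesian prior and Monte Carlo uncertainty layers the paper adds on top are omitted):

```python
import numpy as np

def wls(X, y, weights):
    """Weighted least squares: minimize sum_i w_i (y_i - x_i^T beta)^2,
    giving noisier (lower-weight) observations less influence than in
    ordinary least squares."""
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(10)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])  # intercept, slope
beta_true = np.array([2.0, 0.7])
y = X @ beta_true                     # noiseless demo build-up observations
w = rng.uniform(0.5, 2.0, 50)         # illustrative observation weights
beta_ols = wls(X, y, np.ones(50))     # equal weights = ordinary LS
beta_wls = wls(X, y, w)
```

In the Bayesian variant the weights and coefficients become random quantities with priors, and Monte Carlo sampling of their posterior yields the prediction uncertainty bands the study relies on.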
Regularization Techniques for Linear Least-Squares Problems
Suliman, Mohamed
2016-04-01
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
Least-squares reverse time migration of multiples
Zhang, Dongliang
2013-12-06
The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least-squares migration (LSM) is used to image free-surface multiples, where the recorded traces are used as the time histories of virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free surface becomes an extended virtual source, where the downgoing free-surface multiples illuminate the subsurface more fully than the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required, and the ringy time series for each source is automatically deconvolved. Numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that, if the multiples can be perfectly separated from the primaries, least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if the separation is imperfect and the multiples interfere strongly with the primaries, the LSRTMM images show no significant advantage over the primary migration images, and in some cases can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows imaging of the salt bottom and top with a higher signal-to-noise ratio than standard RTM images. This is likely because the target body is just below the sea bed, so the deep-water multiples do not interfere strongly with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean-bottom seismic data shows that LSM of primaries and LSRTMM provide significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. This can lead to lower
Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA
Czech Academy of Sciences Publication Activity Database
Pohl, Zdeněk; Tichý, Milan; Kadlec, Jiří
2008-01-01
Roč. 2008, č. 2008 (2008), s. 1-11 ISSN 1687-6172 R&D Projects: GA MŠk(CZ) 1M0567 EU Projects: European Commission(XE) 027611 - AETHER Program:FP6 Institutional research plan: CEZ:AV0Z10750506 Keywords : DSP * Least-squares lattice * order estimation * exponential forgetting factor estimation * FPGA implementation * scheduling * dynamic reconfiguration * microblaze Subject RIV: IN - Informatics, Computer Science Impact factor: 1.055, year: 2008 http://library.utia.cas.cz/separaty/2008/ZS/pohl-tichy-kadlec-implementation%20of%20the%20least-squares%20lattice%20with%20order%20and%20forgetting%20factor%20estimation%20for%20fpga.pdf
semPLS: Structural Equation Modeling Using Partial Least Squares
Directory of Open Access Journals (Sweden)
Armin Monecke
2012-05-01
Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM that is especially suited to situations where the data are not normally distributed. PLS path modelling is referred to as a soft-modeling technique with minimal demands regarding measurement scales, sample sizes, and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters, and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.
Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2007-01-01
This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying...... and satellite positioning application examples. In these application areas we are typically interested in the parameters in the model typically 2- or 3-D positions and not in predictive modelling which is often the main concern in other regression analysis applications. Adjustment is often used to obtain...... the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application both in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between...
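The weighted adjustment the note describes can be sketched in a few lines. This is a generic illustration with made-up numbers, not the note's own example: it solves the normal equations (AᵀWA)x = AᵀWb and propagates the a posteriori variance of unit weight into parameter uncertainties.

```python
import numpy as np

# Hypothetical weighted least-squares adjustment: estimate parameters x
# from observations b = A x + noise, where each observation carries its
# own weight (inverse variance). All numbers below are invented.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # design matrix (intercept, slope)
b = np.array([1.1, 2.9, 5.2, 6.8])  # observations
w = np.array([1.0, 1.0, 4.0, 1.0])  # weights (higher = more trusted)

# Normal equations of the weighted adjustment: (A^T W A) x = A^T W b
W = np.diag(w)
N = A.T @ W @ A
x_hat = np.linalg.solve(N, A.T @ W @ b)

# Uncertainty of the estimates comes from the inverse normal matrix,
# scaled by the a posteriori variance of unit weight.
r = b - A @ x_hat                 # residuals
dof = A.shape[0] - A.shape[1]     # degrees of freedom
s0_sq = (r @ W @ r) / dof         # variance of unit weight
cov_x = s0_sq * np.linalg.inv(N)  # covariance of the parameters

print(x_hat)
```

The diagonal of `cov_x` gives the variances of the estimated intercept and slope, which is the "uncertainty with which the position is determined" in the surveying setting.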
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. We then utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure for estimating the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and that it converges to the best linear estimator, the linear minimum-mean-squared-error (LMMSE) estimator, when the elements of x are statistically white.
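The abstract's central claim, that regularization can beat plain LS in MSE at low SNR, can be illustrated with a generic ridge-type estimator. This is not the paper's BDU-ILS procedure; the dimensions, noise level, and regularization parameter below are arbitrary assumptions (gamma is set to the noise-to-signal variance ratio, the LMMSE-like choice).

```python
import numpy as np

# Monte Carlo comparison of plain LS vs. a regularized LS estimate
# x = (A^T A + gamma I)^{-1} A^T y on a low-SNR linear model y = A x + n.
rng = np.random.default_rng(0)
n, m, gamma, trials = 50, 20, 4.0, 200
mse_ls = mse_rls = 0.0
for _ in range(trials):
    A = rng.standard_normal((n, m))          # i.i.d. Gaussian model matrix
    x_true = rng.standard_normal(m)          # white signal, unit variance
    y = A @ x_true + 2.0 * rng.standard_normal(n)  # noise variance 4
    x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
    x_r = np.linalg.solve(A.T @ A + gamma * np.eye(m), A.T @ y)
    mse_ls += np.mean((x_ls - x_true) ** 2) / trials
    mse_rls += np.mean((x_r - x_true) ** 2) / trials

print(mse_ls, mse_rls)
```

Averaged over many trials, the regularized estimate has a visibly lower MSE than plain LS at this noise level, which is the effect the BDU framework exploits to pick the regularization parameter automatically.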
Regularized plane-wave least-squares Kirchhoff migration
Wang, Xin
2013-09-22
A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for the mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common to all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to velocity errors; 2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and 3) LSM achieves both computational and I/O savings through plane-wave encoding compared to shot-domain LSM for the models tested.
Partial least squares regression in the social sciences
Directory of Open Access Journals (Sweden)
Megan L. Sawatsky
2015-06-01
Partial least squares regression (PLSR) is a statistical modeling technique that extracts latent factors to explain both predictor and response variation. PLSR is particularly useful as a data exploration technique because it is highly flexible (e.g., there are few assumptions, and variables can be highly collinear). While gaining importance across a diverse number of fields, its application in the social sciences has been limited. Here, we provide a brief introduction to PLSR, directed towards a novice audience with limited exposure to the technique; demonstrate its utility as an alternative to more classic approaches (multiple linear regression, principal component regression); and apply the technique to a hypothetical dataset using JMP statistical software (with references to SAS software).
A Galerkin least squares approach to viscoelastic flow.
Energy Technology Data Exchange (ETDEWEB)
Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-10-01
A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating that it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.
Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao bounds for the joint estimation problem. Then, we propose a nonlinear least squares (NLS) and an approximate NLS (aNLS) estimator for joint DOA and fundamental frequency estimation. The proposed estimators are maximum likelihood estimators when: 1) the noise is white Gaussian, 2) the environment...... estimation. Moreover, simulations on real-life data indicate that the NLS and aNLS methods are applicable even when reverberation is present and the noise is not white Gaussian.
Estimating Frequency by Interpolation Using Least Squares Support Vector Regression
Directory of Open Access Journals (Sweden)
Changwei Ma
2015-01-01
The discrete Fourier transform (DFT)-based maximum likelihood (ML) algorithm is an important part of single-sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above a threshold value, its error lies very close to the Cramer-Rao lower bound (CRLB), which depends on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) not only retains excellent generalization and fitting capabilities but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate the Fourier coefficients of received signals and attain high frequency-estimation accuracy. Our results show that the proposed algorithm strikes a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.
RNA structural motif recognition based on least-squares distance.
Shen, Ying; Wong, Hau-San; Zhang, Shaohong; Zhang, Lin
2013-09-01
RNA structural motifs are recurrent structural elements occurring in RNA molecules. RNA structural motif recognition aims to find RNA substructures that are similar to a query motif, and it is important for RNA structure analysis and RNA function prediction. In view of this, we propose a new method, RNA Structural Motif Recognition based on Least-Squares distance (LS-RSMR), to effectively recognize RNA structural motifs. We compile a test set consisting of five types of RNA structural motifs occurring in Escherichia coli ribosomal RNA and conduct experiments recognizing these five types of motifs. The experimental results reveal the superiority of the proposed LS-RSMR over four other state-of-the-art methods.
Robust Homography Estimation Based on Nonlinear Least Squares Optimization
Directory of Open Access Journals (Sweden)
Wei Mou
2014-01-01
The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but perfect matching is unlikely to always be achieved. To deal with this problem, we propose a nonlinear least squares optimization approach that computes the homography such that false matches have little or no effect on it. Unlike standard homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be identified while the homography is computed. Experiments show that the proposed approach performs well even in the presence of a large number of outliers.
Partial Least Squares Structural Equation Modeling with R
Directory of Open Access Journals (Sweden)
Hamdollah Ravand
2016-09-01
Structural equation modeling (SEM) has become widespread in educational and psychological research. Its flexibility in addressing complex theoretical models and its proper treatment of measurement error have made it the model of choice for many researchers in the social sciences. Nevertheless, the model imposes some daunting assumptions and restrictions (e.g., normality and relatively large sample sizes) that can discourage practitioners from applying it. Partial least squares SEM (PLS-SEM) is a nonparametric technique that makes no distributional assumptions and can be estimated with small sample sizes. In this paper, a general introduction to PLS-SEM is given, and it is compared with conventional SEM. Next, step-by-step procedures, along with R functions, are presented to estimate the model. A data set is analyzed and the outputs are interpreted.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
Least-squares Spectral Analysis of GRACE SST Data
Naeimi, M.; Nikkhoo, M.; Sharifi, M.
2008-05-01
Since March 2002, the GRACE mission has been clearly showing the time-variable components of the long and medium wavelengths of the Earth's gravity field, mostly related to hydrological processes in the Earth system. Up to now, several studies have sought to increase the spatial resolution of GRACE solutions using different mathematical methods and constraining geophysical models. In this paper, we perform least-squares spectral analysis on GRACE level-1B data to illustrate the exact capability of GRACE observations in detecting time-dependent changes of the Earth's gravity field. The derived spectra explicitly reveal a strong seasonal cycle as well as some other significant periodicities present in the data. Such a methodology could be used effectively in classifying the different geophysical phenomena that bring about gravity changes.
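The core of least-squares spectral analysis can be sketched on a synthetic, unevenly sampled series (not GRACE level-1B data): for each trial frequency, a sine and cosine pair is fit by least squares and the fraction of the series' energy the fit explains is recorded; peaks in this spectrum mark periodicities, with no requirement that the sampling be regular.

```python
import numpy as np

# Least-squares spectral analysis sketch on synthetic unevenly sampled data.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 10.0, 200))   # uneven sampling times
f_true = 1.3                               # cycles per unit time (assumed)
y = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(200)
y = y - y.mean()                           # remove the constant component

def ls_power(f):
    """Fraction of the series' energy explained by a least-squares fit
    of a cos/sin pair at trial frequency f."""
    A = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    fit = A @ coef
    return (fit @ fit) / (y @ y)

freqs = np.linspace(0.1, 3.0, 300)
spectrum = np.array([ls_power(f) for f in freqs])
f_peak = freqs[np.argmax(spectrum)]
print(f_peak)
```

The dominant spectral peak lands at the injected frequency; on real gravity-field series the same scan would expose the seasonal cycle and other periodicities the abstract mentions.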
Least-squares reverse time migration with radon preconditioning
Dutta, Gaurav
2016-09-06
We present a least-squares reverse time migration (LSRTM) method using Radon preconditioning to regularize noisy or severely undersampled data. A high resolution local radon transform is used as a change of basis for the reflectivity and sparseness constraints are applied to the inverted reflectivity in the transform domain. This reflects the prior that for each location of the subsurface the number of geological dips is limited. The forward and the adjoint mapping of the reflectivity to the local Radon domain and back are done through 3D Fourier-based discrete Radon transform operators. The sparseness is enforced by applying weights to the Radon domain components which either vary with the amplitudes of the local dips or are thresholded at given quantiles. Numerical tests on synthetic and field data validate the effectiveness of the proposed approach in producing images with improved SNR and reduced aliasing artifacts when compared with standard RTM or LSRTM.
Risk and Management Control: A Partial Least Square Modelling Approach
DEFF Research Database (Denmark)
Nielsen, Steen; Pontoppidan, Iens Christian
and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct valid feed-forward analyses as well as predictions for decision making, including risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data...... collected from 72 different types of organizations within different Danish sectors. The results show direct relationships between risk practices and two dimensions: an identify/control dimension and an internal attitude dimension. Indirect relationships also exist between a future expectation dimension...... and an external attitude dimension. The results have important implications both for management control research and for the design of management control systems, in the way accountants consider the element of risk in their different tasks, both operational and strategic. Specifically, it seems that different risk...
Optimization Method of Fusing Model Tree into Partial Least Squares
Directory of Open Access Journals (Sweden)
Yu Fang
2017-01-01
Partial Least Squares (PLS) cannot adapt to the characteristics of data in many fields because of its own features: multiple independent variables, multiple dependent variables, and nonlinearity. A Model Tree (MT), by contrast, adapts well to nonlinear functions, being composed of many piecewise linear segments. Based on this, a new method combining PLS and MT to analyze and predict data is proposed: it builds an MT from the principal components and explanatory variables extracted by PLS, and repeatedly extracts residual information to build further Model Trees until a satisfactory accuracy condition is met. Using data on the monarch drug of the maxingshigan decoction for treating asthma and cough, together with two sample sets from the UCI Machine Learning Repository, the experimental results show that the explanatory and predictive ability of the new method is improved.
Cao, Jiguo
2012-01-01
Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are approximated also using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate a HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.
Amigo, José Manuel; Ravn, Carsten; Gallagher, Neal B; Bro, Rasmus
2009-05-21
In hyperspectral analysis, PLS-discriminant analysis (PLS-DA) is being increasingly used in conjunction with pure spectra where it is often referred to as PLS-Classification (PLS-Class). PLS-Class has been presented as a novel approach making it possible to obtain qualitative information about the distribution of the compounds in each pixel using little a priori knowledge about the image (only the pure spectrum of each compound is needed). In this short note it is shown that the PLS-Class model is the same as a straightforward classical least squares (CLS) model and it is highlighted that it is more appropriate to view this approach as CLS rather than PLS-DA. A real example illustrates the results of applying both PLS-Class and CLS.
Recursive least squares background prediction of univariate syndromic surveillance data
Directory of Open Access Journals (Sweden)
Burkom Howard
2009-01-01
Background: Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve outbreak detection performance by using a background forecasting algorithm based on the adaptive recursive least squares (RLS) method, combined with a novel treatment of the day-of-the-week effect. Methods: Previous work by the first author suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which the prediction and detection components of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper, we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results: We present detection results in the form of receiver operating characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion: The current paper introduces a prediction approach for city-level biosurveillance data streams, such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold
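The background-prediction idea, an adaptive RLS filter with exponential forgetting producing one-step-ahead predictions on a non-stationary count series, can be sketched as follows. The data, filter order, and forgetting factor are invented for illustration; this is not the paper's implementation, and its day-of-week correction is omitted.

```python
import numpy as np

# Sketch of adaptive recursive least squares (RLS) background prediction.
rng = np.random.default_rng(2)
n, p, lam = 400, 7, 0.98          # series length, filter order, forgetting
level = np.linspace(50, 80, n)    # slowly drifting (non-stationary) mean
y = level + 5 * rng.standard_normal(n)   # synthetic daily counts

w = np.zeros(p)                   # filter weights
P = 1000.0 * np.eye(p)            # inverse correlation matrix estimate
errs = []
for k in range(p, n):
    u = y[k - p:k][::-1]          # regressor: last p counts, newest first
    y_hat = w @ u                 # one-step-ahead background prediction
    e = y[k] - y_hat              # prediction error (innovation)
    g = P @ u / (lam + u @ P @ u) # gain vector
    w = w + g * e                 # weight update
    P = (P - np.outer(g, u @ P)) / lam   # inverse-correlation update
    errs.append(e)

# After adaptation, prediction error should settle near the noise level.
rmse_tail = float(np.sqrt(np.mean(np.array(errs[-100:]) ** 2)))
print(rmse_tail)
```

The forgetting factor lets the filter track the drifting background; in a biosurveillance setting, large positive innovations `e` relative to their running scale would be the candidate outbreak alarms.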
RCS Leak Rate Calculation with High Order Least Squares Method
International Nuclear Information System (INIS)
Lee, Jeong Hun; Kang, Young Kyu; Kim, Yang Ki
2010-01-01
As part of the action items for the Application of Leak Before Break (LBB), the RCS Leak Rate Calculation Program has been upgraded at Kori units 3 and 4. For real-time monitoring by operators, periodic calculation is needed, and a corresponding noise-reduction scheme is used. This kind of study has been an issue in Korea, and real-time RCS leak rate calculation programs have previously been upgraded and used at UCN units 3 and 4 and YGN units 1 and 2. For noise reduction of the signals, the linear regression method was used in those programs. Linear regression is a powerful method for noise reduction, but the system is not static, with some alternative flow paths, and this produces mixed trend patterns in the input signal values. Under these conditions, the trend of the signal and the linear-regression average do not follow exactly the same pattern. In this study, a high-order least-squares method is used to follow the trend of the signal, and the order of calculation is rearranged. The resulting calculation yields a reasonable trend, and the procedure is physically consistent.
Weighted least-squares criteria for electrical impedance tomography
International Nuclear Information System (INIS)
Kallman, J.S.; Berryman, J.G.
1992-01-01
Methods are developed for design of electrical impedance tomographic reconstruction algorithms with specified properties. Assuming a starting model with constant conductivity or some other specified background distribution, an algorithm with the following properties is found: (1) the optimum constant for the starting model is determined automatically; (2) the weighted least-squares error between the predicted and measured power dissipation data is as small as possible; (3) the variance of the reconstructed conductivity from the starting model is minimized; (4) potential distributions with the largest volume integral of gradient squared have the least influence on the reconstructed conductivity, and therefore distributions most likely to be corrupted by contact impedance effects are deemphasized; (5) cells that dissipate the most power during the current injection tests tend to deviate least from the background value. The resulting algorithm maps the reconstruction problem into a vector space where the contribution to the inversion from the background conductivity remains invariant, while the optimum contributions in orthogonal directions are found. For a starting model with nonconstant conductivity, the reconstruction algorithm has analogous properties
Parsimonious extreme learning machine using recursive orthogonal least squares.
Wang, Ning; Er, Meng Joo; Han, Min
2014-10-01
Novel constructive and destructive parsimonious extreme learning machines (CP-ELM and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, a parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by an innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with a dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and are derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
Robust regularized least-squares beamforming approach to signal estimation
Suliman, Mohamed Abdalla Elhag
2017-05-12
In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. First, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Second, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively; the linear operator in both cases is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.
Non-parametric and least squares Langley plot methods
Kiedron, P. W.; Michalsky, J. J.
2016-01-01
Langley plots are used to calibrate sun radiometers, primarily for measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 exp(-τ·m): a plot of the log voltage ln(V) vs. the air mass m yields a straight line with intercept ln(V0). This ln(V0) can subsequently be used to solve for τ from any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and with examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0) values are smoothed and interpolated with median and mean moving-window filters.
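The basic least-squares Langley calibration is short enough to sketch directly. The data here are synthetic, with an assumed optical depth and extraterrestrial voltage: Beer's law V = V0·exp(-τ·m) becomes linear in log space, ln(V) = ln(V0) − τ·m, so a straight-line fit of ln(V) against air mass m extrapolates to the top-of-atmosphere intercept ln(V0).

```python
import numpy as np

# Synthetic Langley plot: simulated clear-sky voltages over a morning.
rng = np.random.default_rng(3)
m = np.linspace(1.0, 6.0, 40)       # air masses as the sun rises
tau_true, V0_true = 0.12, 2.0       # assumed optical depth and V0
V = V0_true * np.exp(-tau_true * m) * np.exp(0.005 * rng.standard_normal(40))

# Least-squares line through (m, ln V): slope = -tau, intercept = ln(V0)
slope, intercept = np.polyfit(m, np.log(V), 1)
tau_hat = -slope
V0_hat = np.exp(intercept)
print(tau_hat, V0_hat)
```

Once V0 is calibrated this way, any later measurement of V at a known air mass yields the optical depth as τ = (ln(V0) − ln(V)) / m.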
Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares
Orr, Jeb S.
2012-01-01
A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another, with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
Quasi-least squares with mixed linear correlation structures.
Xie, Jichun; Shults, Justine; Peet, Jon; Stambolian, Dwight; Cotch, Mary Frances
2010-01-01
Quasi-least squares (QLS) is a two-stage computational approach for estimation of the correlation parameters in the framework of generalized estimating equations. We prove two general results for the class of mixed linear correlation structures: namely, that the stage one QLS estimate of the correlation parameter always exists and is feasible (yields a positive definite estimated correlation matrix) for any correlation structure, while the stage two estimator exists and is unique (and therefore consistent) with probability one, for the class of mixed linear correlation structures. Our general results justify the implementation of QLS for particular members of the class of mixed linear correlation structures that are appropriate for analysis of data from families that may vary in size and composition. We describe the familial structures and implement them in an analysis of optical spherical values in the Old Order Amish (OOA). For the OOA analysis, we show that we would suffer a substantial loss in efficiency, if the familial structures were the true structures, but were misspecified as simpler approximate structures. To help bridge the interface between Statistics and Medicine, we also provide R software so that medical researchers can implement the familial structures in a QLS analysis of their own data.
BER analysis of regularized least squares for BPSK recovery
Ben Atitallah, Ismail
2017-06-20
This paper investigates the problem of recovering an n-dimensional BPSK signal x
Ordinary least squares regression is indicated for studies of allometry.
Kilmer, J T; Rodríguez, R L
2017-01-01
When it comes to fitting simple allometric slopes through measurement data, evolutionary biologists have been torn between regression methods. On the one hand, there is the ordinary least squares (OLS) regression, which is commonly used across many disciplines of biology to fit lines through data, but which has a reputation for underestimating slopes when measurement error is present. On the other hand, there is the reduced major axis (RMA) regression, which is often recommended as a substitute for OLS regression in studies of allometry, but which has several weaknesses of its own. Here, we review statistical theory as it applies to evolutionary biology and studies of allometry. We point out that the concerns that arise from measurement error for OLS regression are small and straightforward to deal with, whereas RMA has several key properties that make it unfit for use in the field of allometry. The recommended approach for researchers interested in allometry is to use OLS regression on measurements taken with low (but realistically achievable) measurement error. If measurement error is unavoidable and relatively large, it is preferable to correct for slope attenuation rather than to turn to RMA regression, or to take the expected amount of attenuation into account when interpreting the data. © 2016 European Society For Evolutionary Biology.
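The attenuation correction recommended above can be sketched in a few lines. This is an illustrative simulation, not code from the paper; the classical error model and the known measurement-error SD (`sigma_me`) are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-scale allometry: y = 1.0 + 0.8 * x_true, with error-laden x
n, b1_true, sigma_me = 5000, 0.8, 0.5
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, sigma_me, n)     # measurement error on x
y = 1.0 + b1_true * x_true + rng.normal(0.0, 0.1, n)

# OLS slope fitted to the observed (error-laden) x is attenuated toward zero
b1_ols = np.polyfit(x_obs, y, 1)[0]

# Correct for attenuation by dividing by the reliability ratio
# (in practice estimated as (s_x^2 - sigma_me^2) / s_x^2)
reliability = x_true.var() / x_obs.var()
b1_corrected = b1_ols / reliability
```

Here the reliability ratio is about 0.8, so the raw OLS slope underestimates the true slope by roughly 20% before the correction is applied.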
3D plane-wave least-squares Kirchhoff migration
Wang, Xin
2014-08-05
A three-dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitation of current 3D marine acquisition geometries, a cylindrical-wave encoding is adopted for the narrow-azimuth streamer data. To account for the mispositioning of reflectors due to errors in the velocity model, a regularized LSM is devised so that each plane-wave or cylindrical-wave gather gives rise to an individual migration image, and a regularization term is included to encourage similarity between the migration images of similar encoding schemes. Both synthetic and field results show that: 1) plane-wave or cylindrical-wave encoding LSM can achieve both computational and I/O savings compared to shot-domain LSM; however, plane-wave LSM is still about 5 times more expensive than plane-wave migration; 2) the regularized LSM is more robust than LSM with one reflectivity model common to all the plane-wave or cylindrical-wave gathers.
Fast Dating Using Least-Squares Criteria and Algorithms.
To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier
2016-01-01
Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that
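The root-to-tip method mentioned above as a baseline is itself a small least-squares fit: regress root-to-tip distance on sampling date; the slope estimates the substitution rate and the x-intercept estimates the root date. A minimal sketch with hypothetical tip data (the numbers are illustrative only, not from the paper):

```python
import numpy as np

# Hypothetical serially sampled tips: sampling year and root-to-tip distance
years = np.array([2000, 2002, 2004, 2006, 2008, 2010], dtype=float)
dist = np.array([0.0210, 0.0235, 0.0270, 0.0305, 0.0320, 0.0360])

# Least-squares line: distance = rate * year + c
A = np.vstack([years, np.ones_like(years)]).T
(rate, c), *_ = np.linalg.lstsq(A, dist, rcond=None)

root_date = -c / rate     # x-intercept: estimated date of the root
```

With these numbers the fitted rate is on the order of 1.5e-3 substitutions per site per year, and the root is placed in the mid-1980s.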
Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares
International Nuclear Information System (INIS)
Kelly, Dana; Atwood, Corwin
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that is often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
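The paper's least-squares construction is not reproduced here, but the conjugacy it relies on is easy to illustrate: whatever Dirichlet prior the procedure yields, updating it with sparse multinomial counts is a one-line operation, and a small prior is dominated by even a handful of observations. The prior and count values below are purely illustrative:

```python
import numpy as np

def dirichlet_update(alpha, counts):
    """Conjugate posterior: Dirichlet(alpha) prior + multinomial counts."""
    return np.asarray(alpha, dtype=float) + np.asarray(counts, dtype=float)

# Hypothetical alpha-factor-style problem with 3 failure multiplicities
alpha_prior = np.array([0.5, 0.3, 0.2])   # illustrative minimally informative prior
counts = np.array([8, 1, 0])              # sparse common-cause failure data

alpha_post = dirichlet_update(alpha_prior, counts)
post_mean = alpha_post / alpha_post.sum()   # posterior mean of each alpha-factor
```

Because the prior parameters sum to only 1, the nine observed events dominate the posterior mean, which is the "responsive to sparse data" behavior the abstract asks of a minimally informative prior.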
Partitioned Alternating Least Squares Technique for Canonical Polyadic Tensor Decomposition
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Phan, A. H.; Cichocki, A.
2016-01-01
Roč. 23, č. 7 (2016), s. 993-997 ISSN 1070-9908 R&D Projects: GA ČR(CZ) GA14-13713S Institutional support: RVO:67985556 Keywords : canonical polyadic decomposition * PARAFAC * tensor decomposition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.528, year: 2016 http://library.utia.cas.cz/separaty/2016/SI/tichavsky-0460710.pdf
Linear least squares compartmental-model-independent parameter identification in PET
International Nuclear Information System (INIS)
Thie, J.A.; Smith, G.T.; Hubner, K.F.
1997-01-01
A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity and plasma integrals, all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoiding risks of convergence failures or false solutions in iterative least squares, and providing various visualizations of the uptake process by straight-line graphical displays. Multiparameter model-independent analyses of less well understood systems are also made possible.
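The regression step described above amounts to ordinary multiple linear regression with the macroparameters as coefficients. A hedged sketch with synthetic regressors (the curves and macroparameter values are invented for illustration and do not correspond to a real compartmental model):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.5, 60.0, 40)

# Hypothetical regressors standing in for plasma activity and integral terms
plasma = np.exp(-0.1 * t)
plasma_int = (1.0 - np.exp(-0.1 * t)) / 0.1
tissue_int = (1.0 - np.exp(-0.02 * t)) / 0.02

# Measured activity = linear combination with macroparameters as coefficients
k = np.array([0.9, 0.05, 0.2])            # illustrative macroparameters
X = np.column_stack([plasma, plasma_int, tissue_int])
y = X @ k + rng.normal(0.0, 1e-4, t.size)

# Multiple linear regression, as a spreadsheet's LINEST would do it
k_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the model is linear in the macroparameters, the fit is a single direct solve with no starting guesses and no risk of the convergence failures mentioned for iterative nonlinear least squares.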
See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.
2018-04-01
This research aims to estimate the parameters of the Monod model of microalgae Botryococcus braunii sp. growth by the least-squares method. The Monod equation is a non-linear equation which can be transformed into a linear form and solved by the least-squares linear regression method. Meanwhile, the Gauss-Newton method is an alternative method for solving the non-linear least-squares problem, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for microalgae Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter values estimated by the non-linear least-squares method are more accurate than those from the linear least-squares method, since the SSE of the non-linear least-squares method is smaller.
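Both routes can be sketched on synthetic data. The linearization used below is the Lineweaver-Burk form 1/mu = (Ks/mu_max)(1/S) + 1/mu_max, and the nonlinear fit uses SciPy's Levenberg-Marquardt (a damped Gauss-Newton) rather than the authors' own implementation; the substrate levels and noise are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    # Monod growth model: mu = mu_max * S / (Ks + S)
    return mu_max * S / (Ks + S)

rng = np.random.default_rng(2)
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])        # substrate levels
mu = monod(S, 1.2, 3.0) + rng.normal(0.0, 0.01, S.size)    # synthetic data

# Linearized least squares: 1/mu = (Ks/mu_max) * (1/S) + 1/mu_max
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# Non-linear least squares (Levenberg-Marquardt, a damped Gauss-Newton)
(mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[1.0, 1.0])

def sse(mu_max, Ks):
    return float(np.sum((mu - monod(S, mu_max, Ks)) ** 2))
```

Consistent with the abstract, the nonlinear fit attains an SSE no larger than the linearized fit, because the reciprocal transform implicitly reweights the errors.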
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
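A simplified sketch of the paper's central idea: estimate the signal-dependent (F-) error from the difference between two least-squares spline fits with different mesh sizes. The knot counts and noise level below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 400)
signal = np.sin(2.0 * np.pi * x)
y = signal + rng.normal(0.0, 0.05, x.size)      # noisy measurement

def spline_fit(n_interior_knots):
    # Cubic least-squares spline on a uniform mesh of interior knots
    t = np.linspace(0.0, 1.0, n_interior_knots + 2)[1:-1]
    return LSQUnivariateSpline(x, y, t, k=3)(x)

coarse = spline_fit(4)
fine = spline_fit(8)
f_error_estimate = np.abs(coarse - fine)   # proxy for the signal-dependent error
```

The coarse fit has a larger F-error and smaller R-error than the fine fit, so their pointwise difference tracks where the signal's curvature makes the coarse mesh inadequate.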
Taipe, Donny
2017-01-01
This article describes the transfer of the national standard of mass (KP1) of INACAL to two reference standards, 'Weight 1' and 'Weight 2', and also to KP2 (as a witness mass standard with known error). The dissemination was done using the Gauss-Markov method by generalized least squares. The uncertainty calculation was performed using a univariate Gaussian distribution and a multivariate Gaussian distribution; the latter was developed with the Monte Carlo method using a programming language called 'R...
Fitting of two and three variate polynomials from experimental data through the least squares method
International Nuclear Information System (INIS)
Sanchez-Miro, J.J.; Sanz-Martin, J.C.
1994-01-01
Obtaining polynomial fittings from observational data in two and three dimensions is an interesting and practical task. Such an arduous problem suggests the development of an automatic code. The main novelty we provide lies in the generalization of the classical least-squares method in three FORTRAN 77 programs usable in any sampling problem. Furthermore, we introduce the orthogonal 2D-Legendre function in the fitting process. These FORTRAN 77 programs are equipped with options to calculate the standard indicators of approximation quality, generalized to two and three dimensions (nonlinear correlation factor, confidence intervals, quadratic mean error, and so on). The aim of this paper is to rectify the absence of fitting algorithms for more than one independent variable in mathematical libraries.
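The kind of 2D-Legendre least-squares fit these programs implement can be sketched today with NumPy's Legendre Vandermonde matrices; the test function, degrees, and noise level are arbitrary choices for illustration:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
z = 1.0 + 2.0 * x - 0.5 * x * y + rng.normal(0.0, 0.01, x.size)

# Design matrix of products of 1-D Legendre polynomials up to degree (2, 2)
V = legendre.legvander2d(x, y, [2, 2])
coef, *_ = np.linalg.lstsq(V, z, rcond=None)
z_fit = V @ coef

rms_error = np.sqrt(np.mean((z - z_fit) ** 2))
```

Using an orthogonal basis like Legendre products keeps the design matrix well-conditioned compared with raw monomials, which is presumably why the original programs introduce it.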
Least-squares dual characterization for ROI assessment in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Dubois, A; Buvat, I; Mariano-Goulart, D
2013-06-21
Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawn upon the works of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.
Least-squares dual characterization for ROI assessment in emission tomography
International Nuclear Information System (INIS)
Ben Bouallègue, F; Mariano-Goulart, D; Crouzet, J F; Dubois, A; Buvat, I
2013-01-01
Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawn upon the works of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff. (paper)
2015-04-12
Avoiding Communication in the Lanczos Bidiagonalization Routine and Associated Least Squares QR Solver
Carson, Erin
Communication - the movement of data between levels of the memory hierarchy or between processors
Multi-output regression using a locally regularised orthogonal least square algorithm
Chen, S.
2002-01-01
The paper proposes a locally regularised orthogonal least squares (LROLS) algorithm for constructing sparse multi-output regression models that generalise well. By associating each regressor in the regression model with an individual regularisation parameter, the ability for the multi-output orthogonal least squares (OLS) model selection to produce a parsimonious model with good generalisation performance is greatly enhanced.
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
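The Monte Carlo characterization described above is easy to reproduce in miniature. In this invented example, a linear model y = a*x is reparametrized as a = e^A; the MC distribution of the fitted A is biased and asymmetric, but the bias shrinks with the parametric SE, as the abstract argues:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(1.0, 10.0, 10)
a_true = 2.0                       # so A_true = ln(2)

def mc_A(noise_sd, n_trials=20000):
    # Least-squares slope through the origin, then the nonlinear map A = ln(a)
    noise = rng.normal(0.0, noise_sd, (n_trials, x.size))
    a_hat = ((a_true * x + noise) * x).sum(axis=1) / (x * x).sum()
    return np.log(a_hat)

A_small_se = mc_A(0.1)    # small SE: distribution nearly Gaussian, negligible bias
A_large_se = mc_A(5.0)    # large SE: visibly asymmetric and biased low
```

The concavity of the log makes the large-SE distribution left-skewed with a downward bias of roughly SE^2/2, while the small-SE case is indistinguishable from Gaussian in practice.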
Dutta, Gaurav
2013-08-20
Attenuation leads to distortion of amplitude and phase of seismic waves propagating inside the earth. Conventional acoustic and least-squares reverse time migration do not account for this distortion which leads to defocusing of migration images in highly attenuative geological environments. To account for this distortion, we propose to use the visco-acoustic wave equation for least-squares reverse time migration. Numerical tests on synthetic data show that least-squares reverse time migration with the visco-acoustic wave equation corrects for this distortion and produces images with better balanced amplitudes compared to the conventional approach. © 2013 SEG.
Autcha Araveeporn
2013-01-01
This paper compares a least-squares Random Coefficient Autoregressive (RCA) model with a least-squares RCA model based on autocorrelated errors (RCA-AR). We consider only the first-order models, denoted RCA(1) and RCA(1)-AR(1). The efficiency of the least-squares method was checked by applying the models to Brownian motion and the Wiener process, and the efficiency followed closely the asymptotic properties of a normal distribution. In a simulation study, we compared the performance of RCA(1) an...
Least-squares methods involving the H{sup -1} inner product
Energy Technology Data Exchange (ETDEWEB)
Pasciak, J.
1996-12-31
Least-squares methods have been shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H{sup -1} norm. Such norms give rise to improved convergence estimates and better approximation of problems with low-regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H{sup -1} inner product.
Multilevel solvers of first-order system least-squares for Stokes equations
Energy Technology Data Exchange (ETDEWEB)
Lai, Chen-Yao G. [National Chung Cheng Univ., Chia-Yi (Taiwan, Province of China)
1996-12-31
Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined as the sum of the L{sup 2}-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.
The crux of the method: assumptions in ordinary least squares and logistic regression.
Long, Rebecca G
2008-10-01
Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
Chen, Shanqiu; Dong, LiZhi; Chen, XiaoJun; Tan, Yi; Liu, Wenjin; Wang, Shuai; Yang, Ping; Xu, Bing; Ye, YuTang
2016-04-10
Adaptive optics is an important technology for improving beam quality in solid-state slab lasers. However, there are uncorrectable aberrations in partial areas of the beam. The criterion of the conventional least-squares reconstruction method makes zones with small aberrations insensitive and hinders them from being corrected further. In this paper, a weighted least-squares reconstruction method is proposed to improve the relative sensitivity of zones with small aberrations and to further improve beam quality. Relatively small weights are applied to the zones with large residual aberrations. Comparison of results shows that peak intensity in the far field improved from 1242 analog-digital units (ADU) to 2248 ADU, and beam quality β improved from 2.5 to 2.0. This indicates that the weighted least-squares method performs better than the conventional least-squares reconstruction method when there are large zonal uncorrectable aberrations in the slab laser system.
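Weighted least squares of this kind reduces to ordinary least squares after scaling each row of the system by the square root of its weight. A generic sketch (a synthetic system with illustrative weights, not the wavefront-reconstruction matrices from the paper):

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Solve min ||W^(1/2) (A x - b)||^2 by row scaling."""
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

rng = np.random.default_rng(6)
A = rng.normal(size=(50, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true
b[:5] += 10.0          # a zone with large, uncorrectable residuals

w = np.ones(50)
w[:5] = 1e-3           # down-weight the badly fit zone, as the paper proposes

x_weighted = weighted_lstsq(A, b, w)
x_unweighted = np.linalg.lstsq(A, b, rcond=None)[0]
```

Down-weighting the corrupted rows keeps them from dragging the solution away from the zones that can actually be corrected, which is the mechanism behind the improved far-field peak intensity reported above.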
Iterative least-squares solvers for the Navier-Stokes equations
Energy Technology Data Exchange (ETDEWEB)
Bochev, P. [Univ. of Texas, Arlington, TX (United States)
1996-12-31
In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context, least-squares methods offer significant theoretical and practical advantages in the algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.
Least-squares finite element discretizations of neutron transport equations in 3 dimensions
Energy Technology Data Exchange (ETDEWEB)
Manteuffel, T.A [Univ. of Colorado, Boulder, CO (United States); Ressel, K.J. [Interdisciplinary Project Center for Supercomputing, Zurich (Switzerland); Starkes, G. [Universtaet Karlsruhe (Germany)
1996-12-31
The least-squares finite element framework for the neutron transport equation, introduced in earlier work, is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.
8th International Conference on Partial Least Squares and Related Methods
Vinzi, Vincenzo; Russolillo, Giorgio; Saporta, Gilbert; Trinchera, Laura
2016-01-01
This volume presents state of the art theories, new developments, and important applications of Partial Least Square (PLS) methods. The text begins with the invited communications of current leaders in the field who cover the history of PLS, an overview of methodological issues, and recent advances in regression and multi-block approaches. The rest of the volume comprises selected, reviewed contributions from the 8th International Conference on Partial Least Squares and Related Methods held in Paris, France, on 26-28 May, 2014. They are organized in four coherent sections: 1) new developments in genomics and brain imaging, 2) new and alternative methods for multi-table and path analysis, 3) advances in partial least square regression (PLSR), and 4) partial least square path modeling (PLS-PM) breakthroughs and applications. PLS methods are very versatile methods that are now used in areas as diverse as engineering, life science, sociology, psychology, brain imaging, genomics, and business among both academics ...
Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang
2016-03-01
An analysis of binary mixtures of hydroxyl compounds by Attenuated Total Reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on the assumption that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analysis of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain smaller root mean square errors of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.
Instantaneous Pressure Field Calculation from PIV Data with Least-Square Reconstruction
Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos
2016-11-01
A method using least-squares reconstruction of instantaneous pressure fields from PIV velocity measurements is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from flow acceleration. An overdetermined system of linear equations, which relates the pressure to the computed pressure gradients, is formulated. The pressure field is estimated as the least-squares solution of the overdetermined system. The flow acceleration is approximated by the vortex-in-cell procedure, providing the pressure field from a single velocity snapshot. The least-squares method is compared against omni-directional pressure gradient integration and solving the pressure Poisson equation. The results demonstrate that omni-directional integration and the least-squares method are more robust to noise in the velocity measurements than the pressure Poisson solver. In addition, the computational cost of the least-squares method is much lower than that of omni-directional integration, and the method is easily extendable to volumetric data while retaining computational efficiency. The least-squares method maintains higher accuracy than the pressure Poisson equation while carrying a similar computational burden.
Least-squares fitting method for on-line flux mapping of CANDU-PHWR
International Nuclear Information System (INIS)
Hong, I.S.; Kim, C.H.; Suk, H.C.
2002-01-01
A least-squares fitting method is developed for advanced on-line flux mapping in the CANDU-PHWR system. The method solves both the core neutronics design equations and the detector response equations on the least-squares principle, which leads to the normal equations. Fine-mesh finite difference two-group diffusion theory calculations with the SCAN code for the Wolsong-3 unit are conducted to obtain the simulated real flux distribution and detector signals. The least-squares flux monitoring calculations are compared with the flux distribution calculated by the SCAN code without detector signals. It is shown that the least-squares method produces a flux distribution in better agreement with the reference distribution than the coarse-mesh SCAN calculation without detector signals. Through 500 full-power-day burnup-history simulations of the Wolsong-4 unit for benchmarking, the mapped detector signals are compared with real detector signals. The maximum root mean square (RMS) difference between the mapped and real detector signals is shown to be about 0.04% by the least-squares method, while it is about 5.43% by the current flux-synthesis method. It is concluded that the least-squares fitting method is very promising as an advanced flux mapping methodology for CANDU-PHWR. (author)
FC LSEI WNNLS, Least-Square Fitting Algorithms Using B Splines
International Nuclear Information System (INIS)
Hanson, R.J.; Haskell, K.H.
1989-01-01
1 - Description of problem or function: FC allows a user to fit discrete data, in a weighted least-squares sense, using piece-wise polynomial functions represented by B-splines on a given set of knots. In addition to the least-squares fitting of the data, equality, inequality, and periodic constraints at a discrete, user-specified set of points can be imposed on the fitted curve or its derivatives. The subprograms LSEI and WNNLS solve the linearly-constrained least-squares problem. LSEI solves the class of problems with general inequality constraints and, if requested, obtains a covariance matrix of the solution parameters. WNNLS solves the class of problems with non-negativity constraints. It is anticipated that most users will find LSEI suitable for their needs; however, users with inequalities that are single bounds on variables may wish to use WNNLS. 2 - Method of solution: The discrete data are fit by a linear combination of piece-wise polynomial curves, which leads to a linear least-squares system of algebraic equations. Additional information is expressed as a discrete set of linear inequality and equality constraints on the fitted curve, which leads to a linearly-constrained least-squares system of algebraic equations. The solution of this system is the main computational problem solved.
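LSEI and WNNLS themselves are Fortran subprograms, but the problem class they address is available in SciPy. A hypothetical non-negativity-constrained least-squares problem (analogous to what WNNLS handles, with invented data) via `scipy.optimize.lsq_linear`:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(8)
A = rng.normal(size=(30, 5))
x_true = np.array([0.0, 1.5, 0.0, 2.0, 0.3])
b = A @ x_true + rng.normal(0.0, 0.05, 30)

# Least squares subject to x >= 0 (single lower bounds on the variables)
result = lsq_linear(A, b, bounds=(0.0, np.inf))
x_nnls = result.x
```

With bounds `(0, inf)` this is exactly the "single bounds on variables" case for which the text above recommends WNNLS over the more general LSEI.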
International Nuclear Information System (INIS)
Liu, L.H.; Tan, J.Y.
2007-01-01
A least-squares collocation meshless method is employed for solving radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points used to construct the trial functions, a number of auxiliary points are adopted to form the total residual of the problem. The least-squares technique obtains the solution of the problem by minimizing the sum of the residuals at all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with other benchmark approximate solutions. The comparison shows that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving radiative heat transfer in absorbing, emitting and scattering media.
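The moving least-squares approximation used for the trial functions can be illustrated in one dimension: at each evaluation point a locally weighted polynomial least-squares fit is solved. The Gaussian weight, support radius, and quadratic basis below are this sketch's assumptions, not the paper's choices.

```python
import numpy as np

# Minimal 1-D moving least-squares (MLS) approximation. With a quadratic
# basis, MLS reproduces quadratic data exactly, which the demo exploits.
def mls_eval(x_eval, x_nodes, u_nodes, radius=0.15):
    out = np.empty_like(x_eval)
    for j, x0 in enumerate(x_eval):
        d = x_nodes - x0
        w = np.exp(-(d / radius) ** 2)                   # weight per node
        P = np.column_stack([np.ones_like(d), d, d**2])  # quadratic basis
        A = P.T @ (w[:, None] * P)                       # weighted normal eqs
        a = np.linalg.solve(A, P.T @ (w * u_nodes))
        out[j] = a[0]                                    # fitted value at x0
    return out

x_nodes = np.linspace(0.0, 1.0, 21)
u_nodes = x_nodes**2                  # quadratic data
x_eval = np.linspace(0.1, 0.9, 9)
u_mls = mls_eval(x_eval, x_nodes, u_nodes)
print(np.max(np.abs(u_mls - x_eval**2)))
```

The polynomial-reproduction property shown here is what makes MLS trial functions consistent in meshless discretizations.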
The possibilities of least-squares migration of internally scattered seismic energy
Aldawood, Ali
2015-05-26
Approximate images of the earth’s subsurface structures are usually obtained by migrating surface seismic data. Least-squares migration, under the single-scattering assumption, is used as an iterative linearized inversion scheme to suppress migration artifacts, deconvolve the source signature, mitigate the acquisition fingerprint, and enhance the spatial resolution of migrated images. The problem with least-squares migration of primaries, however, is that it may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults or salt flanks. To alleviate this problem, we adopted a linearized inversion framework to migrate internally scattered energy and applied the least-squares migration of first-order internal multiples to image subsurface vertical fault planes. Tests on synthetic data demonstrated the ability of the proposed method to resolve vertical fault planes, which are poorly illuminated by the least-squares migration of primaries alone. The proposed scheme is robust in the presence of white Gaussian observational noise and when imaging the fault planes with inaccurate migration velocities. Our results suggest that the proposed least-squares imaging, under the double-scattering assumption, still retrieves the vertical fault planes when imaging the scattered data, despite a slight defocusing of these events due to the presence of noise or velocity errors.
Estimation of the Seemingly Unrelated Regression (SUR) Model with the Generalized Least Squares (GLS) Method
Directory of Open Access Journals (Sweden)
Ade Widyaningsih
2015-04-01
Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. OLS estimates the parameters of one or more regression equations, but it does not allow for correlation among the errors across equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with the GLS method to world gasoline demand data, and finds that SUR with GLS is better than OLS because SUR produces smaller errors.
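The two-step feasible GLS estimation of a SUR system can be sketched with synthetic data (the paper's world gasoline demand data are not reproduced here; all names and sizes are illustrative).

```python
import numpy as np

# Two-equation SUR by feasible GLS: OLS residuals estimate the
# cross-equation error covariance, then GLS is run on the stacked system.
rng = np.random.default_rng(2)
n = 400
X1 = np.column_stack([np.ones(n), rng.random(n)])
X2 = np.column_stack([np.ones(n), rng.random(n)])
beta1, beta2 = np.array([1.0, 2.0]), np.array([-1.0, 0.5])

L = np.linalg.cholesky(np.array([[1.0, 0.7], [0.7, 1.0]]))
E = rng.standard_normal((n, 2)) @ L.T        # errors correlated across equations
y1 = X1 @ beta1 + E[:, 0]
y2 = X2 @ beta2 + E[:, 1]

# Step 1: per-equation OLS residuals estimate Sigma
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
Sigma = (R.T @ R) / n

# Step 2: GLS on the stacked system, with Cov(y) = Sigma kron I_n
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(Sigma), np.eye(n))  # Omega^{-1}
b_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(b_gls)
```

The efficiency gain of SUR over OLS comes precisely from the off-diagonal entries of Sigma; if the equations' errors were uncorrelated, GLS would reduce to equation-by-equation OLS.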
DEFF Research Database (Denmark)
Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel
2014-01-01
… The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 …) … it was about half (16%), which makes it more suitable for practical application; the model error rates were 23 and 19%, respectively. Based on data registered automatically from one AMS farm, we were able to discriminate nonlame and lame cows, where partial least squares discriminant analysis achieved similar …
Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models
Directory of Open Access Journals (Sweden)
Ziyun Wang
2014-01-01
This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data-filtering-based recursive least squares algorithm. Numerical examples confirm that the proposed algorithm estimates parameters more accurately and with higher computational efficiency than the recursive least squares algorithm.
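The plain recursive least squares (RLS) baseline that such algorithms are compared against can be written in a few lines; the data-filtering step of the proposed method is omitted, and the generic regressors below are illustrative rather than Hammerstein key terms.

```python
import numpy as np

# Plain recursive least squares: rank-one covariance and gain updates
# per sample, with a large initial covariance (weak prior).
def rls(Phi, y, lam=1.0, delta=1e3):
    theta = np.zeros(Phi.shape[1])
    P = delta * np.eye(Phi.shape[1])         # initial covariance
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)  # gain vector
        theta = theta + k * (yk - phi @ theta)
        P = (P - np.outer(k, phi) @ P) / lam
    return theta

rng = np.random.default_rng(3)
Phi = rng.standard_normal((500, 3))
theta_true = np.array([0.5, -1.2, 2.0])
y = Phi @ theta_true + 0.01 * rng.standard_normal(500)
theta_hat = rls(Phi, y)
print(theta_hat)
```

Setting the forgetting factor lam below 1 lets the same update track slowly varying parameters.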
Least Squares Based Iterative Algorithm for the Coupled Sylvester Matrix Equations
Directory of Open Access Journals (Sweden)
Hongcai Yin
2014-01-01
By analyzing the eigenvalues of the related matrices, a convergence analysis of the least squares based iteration is given for solving the coupled Sylvester equations AX+YB=C and DX+YE=F. The analysis shows that the optimal convergence factor of this iterative algorithm is 1. In addition, the proposed iterative algorithm can solve the generalized Sylvester equation AXB+CXD=F. The analysis demonstrates that if the matrix equation has a unique solution, then the least squares based iterative solution converges to the exact solution for any initial values. A numerical example illustrates the effectiveness of the proposed algorithm.
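For small problems, the coupled pair can be checked directly by Kronecker vectorization and an ordinary least-squares solve; this is a baseline for verifying solutions, not the paper's iteration, and the sizes are illustrative.

```python
import numpy as np

# Direct least-squares baseline for AX + YB = C, DX + YE = F using the
# column-major vec identities vec(AX) = (I kron A) vec(X) and
# vec(YB) = (B^T kron I) vec(Y).
rng = np.random.default_rng(4)
n = 4
A, B, D, E = (rng.standard_normal((n, n)) for _ in range(4))
X_true, Y_true = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = A @ X_true + Y_true @ B
F = D @ X_true + Y_true @ E

I = np.eye(n)
M = np.block([[np.kron(I, A), np.kron(B.T, I)],
              [np.kron(I, D), np.kron(E.T, I)]])
rhs = np.concatenate([C.flatten(order="F"), F.flatten(order="F")])
z = np.linalg.lstsq(M, rhs, rcond=None)[0]
X = z[:n * n].reshape((n, n), order="F")
Y = z[n * n:].reshape((n, n), order="F")
print(np.linalg.norm(A @ X + Y @ B - C), np.linalg.norm(D @ X + Y @ E - F))
```

The Kronecker system has size 2n² and is only practical for small n, which is exactly why iterative schemes like the paper's are of interest.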
Advanced Online Flux Mapping of CANDU PHWR by Least-Squares Method
International Nuclear Information System (INIS)
Hong, In Seob; Kim, Chang Hyo; Suk, Ho Chun
2005-01-01
A least-squares method that solves both the core neutronics design equations and the in-core detector response equations on the least-squares principle is presented as a new advanced online flux-mapping method for CANada Deuterium Uranium (CANDU) pressurized heavy water reactors (PHWRs). The effectiveness of the new flux-mapping method is examined in terms of online flux-mapping calculations with numerically simulated true flux distribution and detector signals and those with the actual core-follow data for the Wolsong CANDU PHWRs in Korea. The effects of core neutronics models as well as the detector failures and uncertainties of measured detector signals on the effectiveness of the least-squares flux-mapping calculations are also examined. The following results are obtained. The least-squares method predicts the flux distribution in better agreement with the simulated true flux distribution than the standard core neutronics calculations by the finite difference method (FDM) computer code without using the detector signals. The adoption of the nonlinear nodal method based on the unified nodal method formulation instead of the FDM results in a significant improvement in prediction accuracy of the flux-mapping calculations. The detector signals estimated from the least-squares flux-mapping calculations are much closer to the measured detector signals than those from the flux synthesis method (FSM), the current online flux-mapping method for CANDU reactors. The effect of detector failures is relatively small so that the plant can tolerate up to 25% of detector failures without seriously affecting the plant operation. The detector signal uncertainties aggravate accuracy of the flux-mapping calculations, yet the effects of signal uncertainties of the order of 1% standard deviation can be tolerable without seriously degrading the prediction accuracy of the least-squares method. The least-squares method is disadvantageous because it requires longer CPU time than the …
Track Circuit Fault Diagnosis Method based on Least Squares Support Vector Machine
Cao, Yan; Sun, Fengru
2018-01-01
In order to improve the troubleshooting efficiency and accuracy of the track circuit, a track circuit fault diagnosis method was investigated. Firstly, the least squares support vector machine was applied to design a multi-fault classifier for the track circuit, and measured track data were then used as training samples to verify the feasibility of the method. Finally, the results of BP neural network fault diagnosis methods and the method used in this paper were compared. The results show that the track fault classifier based on the least squares support vector machine can effectively diagnose the five track circuit fault types with less computing time.
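Training a least squares support vector machine reduces to solving one linear system, which is where its speed advantage comes from. The binary RBF-kernel sketch below (regression form) stands in for the paper's multi-fault classifier; the data, gamma, and kernel width are illustrative assumptions.

```python
import numpy as np

# Regression-form LS-SVM classifier: solve the bordered linear system
# [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y], then predict with
# sign(K(x, X) alpha + b).
def rbf(X1, X2, s=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s * s))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                    # bias b, weights alpha

def lssvm_predict(Xq, Xtr, b, alpha, s=1.0):
    return np.sign(rbf(Xq, Xtr, s) @ alpha + b)

rng = np.random.default_rng(5)
X = rng.standard_normal((80, 2))
y = np.where(X[:, 0] + X[:, 1] > 0.0, 1.0, -1.0)   # two classes
b, alpha = lssvm_fit(X, y)
acc = np.mean(lssvm_predict(X, X, b, alpha) == y)
print(acc)
```

A multi-class fault classifier is typically built from several such binary machines (one-versus-rest or one-versus-one).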
Directory of Open Access Journals (Sweden)
Iman Yousefi
2015-01-01
This paper presents parameter estimation of a Permanent Magnet Synchronous Motor (PMSM) using a combinatorial algorithm. A nonlinear fourth-order state-space model of the PMSM is selected. This model is rewritten in linear regression form without linearization. Noise is imposed on the system in order to provide realistic conditions, and the combined Orthogonal Projection Algorithm and Recursive Least Squares (OPA&RLS) method is then applied to the system in the linear regression form. The results of this method are compared to those of the Orthogonal Projection Algorithm (OPA) and Recursive Least Squares (RLS) methods. Simulation results validate the efficacy of the proposed algorithm.
A Least Squares Method for Variance Estimation in Heteroscedastic Nonparametric Regression
Directory of Open Access Journals (Sweden)
Yuejin Zhou
2014-01-01
Interest in variance estimation in nonparametric regression has grown greatly in the past several decades. Among the existing methods, the least squares estimator of Tong and Wang (2005) is shown to have nice statistical properties and is also easy to implement. Nevertheless, their method only applies to regression models with homoscedastic errors. In this paper, we propose two least squares estimators for the error variance in heteroscedastic nonparametric regression: the intercept estimator and the slope estimator. Both estimators are shown to be consistent and their asymptotic properties are investigated. Finally, we demonstrate through simulation studies that the proposed estimators perform better than the existing competitor in various settings.
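The homoscedastic baseline can be sketched in the spirit of Tong and Wang (2005): compute lag-k difference statistics s_k, regress them on d_k, and read the error variance off as the intercept. The quadratic regressor d_k = (k/n)² and all data choices below are this sketch's assumptions.

```python
import numpy as np

# Difference-based least-squares variance estimation: for a smooth mean,
# E[s_k] is approximately sigma^2 plus a term proportional to (k/n)^2,
# so a straight-line fit of s_k against (k/n)^2 has intercept sigma^2.
rng = np.random.default_rng(6)
n, sigma = 1000, 0.5
x = np.linspace(0.0, 1.0, n)
y = 5.0 * np.sin(2.0 * np.pi * x) + sigma * rng.standard_normal(n)

ks = np.arange(1, 21)
s = np.array([np.mean((y[k:] - y[:-k])**2) / 2.0 for k in ks])
d = (ks / n) ** 2
slope, intercept = np.polyfit(d, s, 1)
print(intercept)                  # estimates sigma^2 = 0.25
```

The regression step is what removes the bias that a single fixed-lag difference estimator would inherit from the trend in the mean function.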
International Nuclear Information System (INIS)
Khatibinia, Mohsen; Javad Fadaee, Mohammad; Salajegheh, Javad; Salajegheh, Eysa
2013-01-01
An efficient metamodeling framework in conjunction with the Monte-Carlo Simulation (MCS) is introduced to reduce the computational cost in seismic reliability assessment of existing RC structures. In order to achieve this purpose, the metamodel is designed by combining weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, called wavelet weighted least squares support vector machine (WWLS-SVM). In this study, the seismic reliability assessment of existing RC structures with consideration of soil–structure interaction (SSI) effects is investigated in accordance with Performance-Based Design (PBD). This study aims to incorporate the acceptable performance levels of PBD into reliability theory for comparing the obtained annual probability of non-performance with the target values for each performance level. The MCS method, as the most reliable method, is utilized to estimate the annual probability of failure associated with a given performance level in this study. In WWLS-SVM-based MCS, the structural seismic responses are accurately predicted by WWLS-SVM for reducing the computational cost. To show the efficiency and robustness of the proposed metamodel, two RC structures are studied. Numerical results demonstrate the efficiency and computational advantages of the proposed metamodel for the seismic reliability assessment of structures. Furthermore, the consideration of the SSI effects in the seismic reliability assessment of existing RC structures is compared to the fixed base model. The results show that SSI has a significant influence on the seismic reliability assessment of structures.
Small-kernel constrained-least-squares restoration of sampled image data
Hazra, Rajeeb; Park, Stephen K.
1992-10-01
Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
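The classical discrete/discrete (Hunt-type) constrained least-squares filter that the paper builds on can be sketched in the frequency domain; the paper's improved continuous/discrete/continuous filter needs the full end-to-end system model and is not reproduced. The scene, PSF, and regularization weight below are illustrative.

```python
import numpy as np

# Hunt's constrained least-squares restoration: F_hat = H* G / (|H|^2 +
# lambda |P|^2), where P is a smoothness operator (discrete Laplacian).
rng = np.random.default_rng(7)
N = 64
f = np.zeros((N, N)); f[24:40, 24:40] = 1.0            # toy scene

ax = np.arange(-2, 3)
g1 = np.exp(-ax**2 / 2.0)
psf = np.outer(g1, g1); psf /= psf.sum()               # 5x5 Gaussian blur
h = np.zeros((N, N)); h[:5, :5] = psf
H = np.fft.fft2(np.roll(h, (-2, -2), axis=(0, 1)))     # center PSF at origin

g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))
g += 0.005 * rng.standard_normal((N, N))               # observation noise

p = np.zeros((N, N))                                   # discrete Laplacian
p[0, 0] = 4.0
p[0, 1] = p[1, 0] = p[0, -1] = p[-1, 0] = -1.0
P = np.fft.fft2(p)

lam = 0.01                                             # smoothness weight
F_hat = np.conj(H) * np.fft.fft2(g) / (np.abs(H)**2 + lam * np.abs(P)**2)
f_hat = np.real(np.fft.ifft2(F_hat))
print(np.mean((f_hat - f)**2), np.mean((g - f)**2))    # restored vs blurred MSE
```

Because the filter is diagonal in the Fourier domain, it is equivalent to one spatial convolution, which is the property the small-kernel implementation in the paper exploits.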
Error propagation of partial least squares for parameters optimization in NIR modeling
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-01
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, the number of latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weights were 65%, 55% and 15%, respectively, compared with 5% for synergy interval partial least squares (SiPLS). The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters. This can provide significant guidance for the selection of modeling parameters in other multivariate calibration models.
Convergence of Inner-Iteration GMRES Methods for Rank-Deficient Least Squares Problems
Czech Academy of Sciences Publication Activity Database
Morikuni, Keiichi; Hayami, K.
2015-01-01
Roč. 36, č. 1 (2015), s. 225-250 ISSN 0895-4798 Institutional support: RVO:67985807 Keywords : least squares problem * iterative methods * preconditioner * inner-outer iteration * GMRES method * stationary iterative method * rank-deficient problem Subject RIV: BA - General Mathematics Impact factor: 1.883, year: 2015
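Rank-deficient least-squares problems like those studied in this record can also be handled by simpler Krylov solvers; the sketch below uses SciPy's LSQR as a stand-in for inner-iteration preconditioned GMRES, relying on the fact that LSQR started from zero converges to the minimum-norm least-squares solution.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Iterative solution of a rank-deficient LS problem, checked against the
# SVD-based pseudoinverse (minimum-norm) solution.
rng = np.random.default_rng(8)
A = rng.standard_normal((30, 10))
A[:, -1] = A[:, 0] + A[:, 1]                # force rank deficiency (rank 9)
b = rng.standard_normal(30)

x = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
x_svd = np.linalg.pinv(A) @ b               # minimum-norm reference
print(np.max(np.abs(x - x_svd)))
```

For large sparse systems, A would be passed as a `LinearOperator` and never formed explicitly.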
Gauss’s, Cholesky’s and Banachiewicz’s Contributions to Least Squares
DEFF Research Database (Denmark)
Gustavson, Fred G.; Wasniewski, Jerzy
This paper gives a historical account of Gauss’s contributions to the area of least squares. Also mentioned are Cholesky’s and Banachiewicz’s contributions to linear algebra. The material given is background information for a tutorial given at PPAM 2011 to honor Cholesky on the hundredth anniversary of his…
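The link between the two contributions can be shown concretely: Gauss's normal equations AᵀA x = Aᵀb solved via the Cholesky factorization AᵀA = LLᵀ (illustrative data below).

```python
import numpy as np

# Normal equations solved by Cholesky: factor A^T A = L L^T, then one
# forward and one back substitution.
rng = np.random.default_rng(9)
A = rng.standard_normal((50, 4))
b = rng.standard_normal(50)

L = np.linalg.cholesky(A.T @ A)            # lower triangular factor
z = np.linalg.solve(L, A.T @ b)            # forward substitution: L z = A^T b
x = np.linalg.solve(L.T, z)                # back substitution: L^T x = z
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.max(np.abs(x - x_ref)))
```

Forming AᵀA squares the condition number, which is why QR or SVD approaches are preferred for ill-conditioned problems; for well-conditioned ones, the Cholesky route is the cheapest.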
Wu, Chia-Huei; Chen, Lung Hung; Tsai, Ying-Mei
2009-01-01
This study introduced a formative model to investigate the utility of importance weighting on satisfaction scores with partial least squares analysis. Based on the bottom-up theory of satisfaction evaluations, the measurement structure for weighted/unweighted domain satisfaction scores was modeled as a formative model, whereas the measurement…
A rigid-body least-squares program with angular and translation scan facilities
Kutschabsky, L
1981-01-01
The described computer program, written in CERN Fortran, is designed to enlarge the convergence radius of the rigid-body least-squares method by allowing a stepwise change of the angular and/or translational parameters within a chosen range. (6 refs).
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
Stable Galerkin versus equal-order Galerkin least-squares elements for the Stokes flow problem
International Nuclear Information System (INIS)
Franca, L.P.; Frey, S.L.; Sampaio, R.
1989-11-01
Numerical experiments are performed for the Stokes flow problem employing a stable Galerkin method and a Galerkin/Least-squares method with equal-order elements. Error estimates for the methods tested herein are reviewed. The numerical results presented attest to the good stability properties of all methods examined herein. (A.C.A.S.) [pt
Mis-parametrization subsets for a penalized least squares model selection
Guyon, Xavier; Hardouin, Cécile
2011-01-01
When identifying a model by a penalized minimum contrast procedure, we give a description of the over- and under-fitting parametrization subsets for a least squares contrast. This allows one to determine an accurate sequence of penalization rates ensuring good identification. We present applications to the identification of the covariance of a general time series, and to the variogram identification of a geostatistical model.
Least-Squares Approximation of an Improper Correlation Matrix by a Proper One.
Knol, Dirk L.; ten Berge, Jos M. F.
1989-01-01
An algorithm, based on a solution for C. I. Mosier's oblique Procrustes rotation problem, is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. Results are of interest for missing value and tetrachoric correlation, indefinite matrix correlation, and constrained…
Least-squares approximation of an improper correlation matrix by a proper one
Knol, Dirk L.; ten Berge, Jos M.F.
1989-01-01
An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based upon a solution for Mosier's oblique Procrustes rotation problem offered by ten Berge and Nevels. A necessary and
An Adaptive Wavelet Method for Semi-Linear First-Order System Least Squares
Chegini, N.; Stevenson, R.
2015-01-01
We design an adaptive wavelet scheme for solving first-order system least-squares formulations of second-order elliptic PDEs that converges with the best possible rate in linear complexity. A wavelet Riesz basis is constructed for the space H⃗_{0,Γ_N}(div; Ω) on general polygons. The theoretical findings
On Solution of Total Least Squares Problems with Multiple Right-hand Sides
Czech Academy of Sciences Publication Activity Database
Hnětynková, I.; Plešinger, Martin; Strakoš, Zdeněk
2008-01-01
Roč. 8, č. 1 (2008), s. 10815-10816 ISSN 1617-7061 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : total least squares problem * multiple right-hand sides * linear approximation problem Subject RIV: BA - General Mathematics
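The single right-hand-side case underlying this record's multiple-RHS theory has a classical closed form via the SVD of the augmented matrix [A | b]; the sketch below uses synthetic data.

```python
import numpy as np

# Classical total least squares: the solution comes from the right
# singular vector of [A | b] associated with the smallest singular value.
rng = np.random.default_rng(10)
n = 200
x_true = np.array([2.0, -1.0])
A = rng.standard_normal((n, 2))
b = A @ x_true
A_noisy = A + 0.01 * rng.standard_normal((n, 2))   # errors in A too
b_noisy = b + 0.01 * rng.standard_normal(n)

Vt = np.linalg.svd(np.column_stack([A_noisy, b_noisy]))[2]
v = Vt[-1]                        # right singular vector, smallest sigma
x_tls = -v[:2] / v[2]
print(x_tls)
```

The construction breaks down when the last component v[2] vanishes, which is one of the solvability issues the multiple right-hand-side theory has to address.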
Bayesian inference for data assimilation using least-squares finite element methods
Dwight, R.P.
2010-01-01
It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier Stokes ow [1]. The approach was shown to be effective without
Algorithms for global total least squares modelling of finite multivariable time series
Roorda, Berend
1995-01-01
In this paper we present several algorithms related to the global total least squares (GTLS) modelling of multivariable time series observed over a finite time interval. A GTLS model is a linear, time-invariant finite-dimensional system with a behaviour that has minimal Frobenius distance to a given
Harmonic tidal analysis at a few stations using the least squares method
Digital Repository Service at National Institute of Oceanography (India)
Fernandes, A.A.; Das, V.K.; Bahulayan, N.
Using the least squares method, harmonic analysis has been performed on hourly water level records of 29 days at several stations depicting different types of non-tidal noise. For a tidal record at Mormugao, which was free from storm surges (low...
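Because the constituent frequencies are known a priori, harmonic tidal analysis is a linear least-squares problem in the cos/sin amplitudes. The sketch below fits two constituents with the M2 (12.42 h) and S2 (12.00 h) periods to 29 days of synthetic hourly data; the amplitudes, phases, and noise level are illustrative.

```python
import numpy as np

# Least-squares harmonic analysis: build a design matrix of cos/sin
# columns at the known tidal frequencies and solve for the coefficients.
omega = 2.0 * np.pi / np.array([12.4206012, 12.0])   # rad per hour
t = np.arange(0.0, 29 * 24.0)                        # hourly samples
rng = np.random.default_rng(11)
eta = (0.8 * np.cos(omega[0] * t - 1.0)
       + 0.3 * np.cos(omega[1] * t - 0.4)
       + 0.05 * rng.standard_normal(t.size))         # non-tidal noise

cols = [np.ones_like(t)]
for w in omega:
    cols += [np.cos(w * t), np.sin(w * t)]
G = np.column_stack(cols)
coef = np.linalg.lstsq(G, eta, rcond=None)[0]
amps = np.array([np.hypot(coef[1], coef[2]),
                 np.hypot(coef[3], coef[4])])
print(amps)                                          # close to (0.8, 0.3)
```

A 29-day record satisfies the Rayleigh criterion for separating M2 from S2, which is why month-long records are the usual minimum for this analysis.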
International Nuclear Information System (INIS)
Herda, Trent J; Ryan, Eric D; Costa, Pablo B; DeFreitas, Jason M; Walter, Ashley A; Stout, Jeffrey R; Beck, Travis W; Cramer, Joel T; Housh, Terry J; Weir, Joseph P
2009-01-01
The primary purpose of this study was to examine the consistency of ordinary least-squares (OLS) and generalized least-squares (GLS) polynomial regression analyses utilizing linear, quadratic and cubic models on either five or ten data points that characterize the mechanomyographic amplitude (MMG_RMS) versus isometric torque relationship. The secondary purpose was to examine the consistency of OLS and GLS polynomial regression utilizing only linear and quadratic models (excluding cubic responses) on either ten or five data points. Eighteen participants (mean ± SD age = 24 ± 4 yr) completed ten randomly ordered isometric step muscle actions from 5% to 95% of the maximal voluntary contraction (MVC) of the right leg extensors during three separate trials. MMG_RMS was recorded from the vastus lateralis during the MVCs and each submaximal muscle action. MMG_RMS versus torque relationships were analyzed on a subject-by-subject basis using OLS and GLS polynomial regression. When using ten data points, only 33% and 27% of the subjects were fitted with the same model (utilizing linear, quadratic and cubic models) across all three trials for OLS and GLS, respectively. After eliminating the cubic model, there was an increase to 55% of the subjects being fitted with the same model across all trials for both OLS and GLS regression. Using only five data points (instead of ten data points), 55% of the subjects were fitted with the same model across all trials for OLS and GLS regression. Overall, OLS and GLS polynomial regression models were only able to consistently describe the torque-related patterns of response for MMG_RMS in 27–55% of the subjects across three trials. Future studies should examine alternative methods for improving the consistency and reliability of the patterns of response for the MMG_RMS versus isometric torque relationship.
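The model-selection step can be sketched with OLS polynomial fits of increasing degree compared by adjusted R²; the ten-point synthetic data below stand in for the MMG_RMS-torque measurements, and adjusted R² is this sketch's selection criterion, not necessarily the study's.

```python
import numpy as np

# Fit linear, quadratic and cubic OLS polynomials to ten data points and
# compare them by adjusted R^2, which penalizes extra parameters.
rng = np.random.default_rng(12)
x = np.linspace(5.0, 95.0, 10)                  # ten "%MVC" points
y = 0.002 * x**2 + 0.05 * x + rng.standard_normal(10)

def adj_r2(x, y, deg):
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    n, k = len(y), deg + 1
    return 1.0 - (np.sum(resid**2) / (n - k)) / np.var(y, ddof=1)

scores = {deg: adj_r2(x, y, deg) for deg in (1, 2, 3)}
print(scores)
```

With so few points, the cubic term can easily chase noise, which mirrors the study's finding that excluding cubic models improved trial-to-trial consistency.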
Chkifa, Abdellah
2015-04-08
Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
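The basic object of this analysis, a discrete least-squares polynomial fit from random samples, can be sketched in one dimension; the target function, degree, and sample count below are illustrative, with the sample count a multiple of the polynomial-space dimension as the theory requires.

```python
import numpy as np

# Discrete least-squares approximation from random samples: fit a
# degree-8 Legendre expansion of exp(x) from m = 200 uniform samples
# (dimension of the polynomial space is deg + 1 = 9, so m >> dim).
rng = np.random.default_rng(13)
deg, m = 8, 200
x = rng.uniform(-1.0, 1.0, m)

V = np.polynomial.legendre.legvander(x, deg)     # design matrix
c = np.linalg.lstsq(V, np.exp(x), rcond=None)[0]

xx = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(np.polynomial.legendre.legval(xx, c) - np.exp(xx)))
print(err)
```

For a smooth target like exp(x) the fit is quasi-optimal: the uniform error is close to that of the best degree-8 approximation, which is the behavior the quoted theory quantifies in arbitrary dimension.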
Bauza, María C; Ibañez, Gabriela A; Tauler, Romà; Olivieri, Alejandro C
2012-10-16
A new equation is derived for estimating the sensitivity when the multivariate curve resolution-alternating least-squares (MCR-ALS) method is applied to second-order multivariate calibration data. The validity of the expression is substantiated by extensive Monte Carlo noise addition simulations. The multivariate selectivity can be derived from the new sensitivity expression. Other important figures of merit, such as limit of detection, limit of quantitation, and concentration uncertainty of MCR-ALS quantitative estimations can be easily estimated from the proposed sensitivity expression and the instrumental noise. An experimental example involving the determination of an analyte in the presence of uncalibrated interfering agents is described in detail, involving second-order time-decaying sensitized lanthanide luminescence excitation spectra. The estimated figures of merit are reasonably correlated with the analytical features of the analyzed experimental system.
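A bare-bones MCR-ALS loop can be sketched as alternating least-squares updates of concentrations C and spectra S so that D ≈ C Sᵀ; simple clipping stands in for proper non-negative least squares, and the two-component synthetic data below are illustrative, not the lanthanide luminescence system of the paper.

```python
import numpy as np

# Minimal MCR-ALS: alternate solving C S = D for S and S^T C^T = D^T for
# C, clipping negatives to keep both factors non-negative.
rng = np.random.default_rng(14)
t = np.linspace(0.0, 10.0, 60)[:, None]
w = np.linspace(0.0, 1.0, 80)[None, :]
C_true = np.hstack([np.exp(-0.3 * t), np.exp(-1.0 * t)])           # decays
S_true = np.vstack([np.exp(-((w - 0.3) ** 2) / 0.01),
                    np.exp(-((w - 0.7) ** 2) / 0.01)])             # bands
D = C_true @ S_true + 0.001 * rng.standard_normal((60, 80))

C = np.abs(rng.random((60, 2)))                                    # init
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)

resid = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(resid)
```

Figures of merit such as the sensitivity expression derived in the paper are computed from the resolved profiles C and S together with the instrumental noise level.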
Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing
Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric
2016-01-01
This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
Prediction of toxicity of nitrobenzenes using ab initio and least squares support vector machines
Energy Technology Data Exchange (ETDEWEB)
Niazi, Ali [Department of Chemistry, Faculty of Sciences, Azad University of Arak, Arak (Iran, Islamic Republic of)], E-mail: ali.niazi@gmail.com; Jameh-Bozorghi, Saeed; Nori-Shargh, Davood [Department of Chemistry, Faculty of Sciences, Azad University of Arak, Arak (Iran, Islamic Republic of)
2008-03-01
A quantitative structure-property relationship (QSPR) study is suggested for the prediction of toxicity (IGC₅₀) of nitrobenzenes. Ab initio theory was used to calculate some quantum chemical descriptors including electrostatic potentials and local charges at each atom, HOMO and LUMO energies, etc. Modeling of the IGC₅₀ of nitrobenzenes as a function of molecular structure was established by means of least squares support vector machines (LS-SVM). This model was applied for the prediction of the toxicity (IGC₅₀) of nitrobenzenes that were not included in the modeling procedure. The resulting model showed high predictive ability, with a root mean square error of prediction of 0.0049 for LS-SVM. The results show that using LS-SVM with quantum chemical descriptors markedly enhances predictive ability in QSAR studies, outperforming multiple linear regression and partial least squares.
Yao, Zhenjian; Wang, Zhongyu; Yi-Lin Forrest, Jeffrey; Wang, Qiyue; Lv, Jing
2017-04-01
In this paper, an approach combining empirical mode decomposition (EMD) with adaptive least squares (ALS) is proposed to improve the dynamic calibration accuracy of pressure sensors. With EMD, the original output of the sensor can be represented as sums of zero-mean amplitude modulation frequency modulation components. By identifying and excluding those components involved in noises, the noise-free output could be reconstructed with the useful frequency modulation ones. Then the least squares method is iteratively performed to estimate the optimal order and parameters of the mathematical model. The dynamic characteristic parameters of the sensor can be derived from the model in both time and frequency domains. A series of shock tube calibration tests are carried out to validate the performance of this method. Experimental results show that the proposed method works well in reducing the influence of noise and yields an appropriate mathematical model. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over the existing ones.
Multi-source remote-sensing image matching based on epipolar line and least squares
Chen, Peng; Mao, Zhihua; Chen, Jianyu; Zhang, Xiaoping; Li, Zifeng
2013-10-01
In remote sensing image applications, image matching is a key technology whose quality directly affects the quality of subsequent results. This paper studies an improved SIFT feature matching method for multi-source remote-sensing image registration based on GPU computing, epipolar-line constraints and least squares; its main purpose is to take both accuracy and efficiency into consideration. The method first performs tonally balanced matching, then extracts SIFT features using GPU computing technology, then matches feature points using an epipolar-line and least-squares matching method with RANSAC, and finally analyzes the error sources of SIFT mismatches and develops an improved strategy for reducing SIFT mismatches. The experimental results prove that the method can effectively improve the efficiency and precision of SIFT feature matching.
Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation
International Nuclear Information System (INIS)
Blonigan, Patrick J.; Wang, Qiqi
2014-01-01
Highlights: •Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. •The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. •Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. - Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters.
Enhancing Least-Squares Finite Element Methods Through a Quantity-of-Interest
Energy Technology Data Exchange (ETDEWEB)
Chaudhry, Jehanzeb Hameed [Colorado State Univ., Fort Collins, CO (United States). Dept. of Mathematics; Cyr, Eric C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Mathematics Dept.; Liu, Kuo [Univ. of Colorado, Boulder, CO (United States). Dept. of Applied Mathematics; Manteuffel, Thomas A. [Univ. of Colorado, Boulder, CO (United States). Dept. of Applied Mathematics; Olson, Luke N. [Univ. of Illinois at Urbana-Champaign, IL (United States). Dept. of Computer Science; Tang, Lei [Univ. of Colorado, Boulder, CO (United States). Dept. of Applied Mathematics
2014-12-18
Here, we introduce an approach that augments least-squares finite element formulations with user-specified quantities-of-interest. The method incorporates the quantity-of-interest into the least-squares functional and inherits the global approximation properties of the standard formulation as well as increased resolution of the quantity-of-interest. We establish theoretical properties such as optimality and enhanced convergence under a set of general assumptions. Central to the approach is that it offers an element-level estimate of the error in the quantity-of-interest. As a result, we introduce an adaptive approach that yields efficient, adaptively refined approximations. Several numerical experiments for a range of situations are presented to support the theory and highlight the effectiveness of our methodology. Notably, the results show that the new approach is effective at improving the accuracy per total computational cost.
Strong source heat transfer simulations based on a Galerkin/gradient-least-squares method
International Nuclear Information System (INIS)
Franca, L.P.; Carmo, E.G.D. do.
1989-05-01
Heat conduction problems with temperature-dependent strong sources are modeled by an equation with a Laplacian term, a linear term and a given source distribution term. When the linear temperature-dependent source term is much larger than the Laplacian term, we have a singular perturbation problem, and boundary layers are formed to satisfy the Dirichlet boundary conditions. Although this is an elliptic equation, the standard Galerkin solution is contaminated by spurious oscillations in the neighborhood of the boundary layers. Herein we employ a Galerkin/gradient-least-squares method which eliminates all pathological phenomena of the Galerkin method. The method is constructed by adding to the Galerkin method a mesh-dependent term obtained from the least-squares form of the gradient of the Euler-Lagrange equation. Error estimates and numerical simulations in one and multiple dimensions are given that attest to the good stability and accuracy properties of the method. [pt]
A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Baoguo Yu
2016-01-01
Full Text Available In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), the parameters of the radio signal propagation model must usually be determined before the distance between an anchor node and an unknown node can be estimated from their communication RSSI value; a localization algorithm then estimates the location of the unknown node. Although high in localization accuracy, this approach suffers from a complex working procedure and poor system versatility. To address these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least-squares criterion to estimate the parameters of the radio signal propagation model and thereby reduces the amount of computation in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the high localization accuracy requirement, and is therefore of definite practical value.
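The core step of such a method, estimating the propagation-model parameters by the least-squares criterion and then inverting the model to get a range, can be sketched in a few lines. This is a minimal illustration assuming the common log-distance path-loss model; the distances, RSSI values and variable names are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical calibration data: RSSI (dBm) measured at known distances (m).
d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
rssi = np.array([-40.2, -46.1, -52.3, -57.9, -64.4])

# Log-distance path-loss model: rssi = A - 10*n*log10(d).
# Stack into the linear system [1, -10*log10(d)] @ [A, n] = rssi.
X = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
(A, n), *_ = np.linalg.lstsq(X, rssi, rcond=None)

def rssi_to_distance(r):
    # Invert the fitted model to estimate range from a new RSSI reading.
    return 10.0 ** ((A - r) / (10.0 * n))
```

Once `A` and `n` are estimated from the anchors' own exchanges, any standard localization algorithm (e.g. trilateration) can consume the resulting distance estimates.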
Feasibility study on the least square method for fitting non-Gaussian noise data
Xu, Wei; Chen, Wen; Liang, Yingjie
2018-02-01
This study investigates the feasibility of the least squares method in fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial and exponential equations, and calculate the maximum absolute and mean square errors for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are fitted less accurately than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
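The kind of comparison the study describes can be reproduced in a few lines. This sketch uses a standard Cauchy sample as a stand-in for heavy-tailed Lévy-stable noise; the test function and noise level are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y_true = 2.0 * x + 1.0

# Gaussian noise versus heavy-tailed noise (standard Cauchy, used here as a
# simple stand-in for a Levy-stable distribution).
y_gauss = y_true + 0.05 * rng.normal(size=x.size)
y_levy = y_true + 0.05 * rng.standard_cauchy(size=x.size)

X = np.column_stack([x, np.ones_like(x)])
coef_gauss, *_ = np.linalg.lstsq(X, y_gauss, rcond=None)
coef_levy, *_ = np.linalg.lstsq(X, y_levy, rcond=None)

# Maximum absolute error of each fitted line against the exact line.
err_gauss = np.max(np.abs(X @ coef_gauss - y_true))
err_levy = np.max(np.abs(X @ coef_levy - y_true))
```

The heavy-tailed sample typically contains large outliers that pull the least-squares fit away from the exact line, which is the effect the paper quantifies.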
Prediction of toxicity of nitrobenzenes using ab initio and least squares support vector machines
International Nuclear Information System (INIS)
Niazi, Ali; Jameh-Bozorghi, Saeed; Nori-Shargh, Davood
2008-01-01
A quantitative structure-property relationship (QSPR) study is suggested for the prediction of the toxicity (IGC50) of nitrobenzenes. Ab initio theory was used to calculate quantum chemical descriptors including electrostatic potentials and local charges at each atom, HOMO and LUMO energies, etc. Modeling of the IGC50 of nitrobenzenes as a function of molecular structure was established by means of least squares support vector machines (LS-SVM). This model was applied to predict the toxicity (IGC50) of nitrobenzenes that were not included in the modeling procedure. The resulting model showed high predictive ability, with a root mean square error of prediction of 0.0049 for LS-SVM. The results show that applying LS-SVM to quantum chemical descriptors drastically enhances predictive ability in QSAR studies, outperforming multiple linear regression and partial least squares.
Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration
Wu, Juan; Bai, Min
2018-05-01
We propose to apply a novel incoherent dictionary learning (IDL) algorithm for regularizing the least-squares inversion in seismic imaging. The IDL is proposed to overcome the drawback of traditional dictionary learning algorithms of losing partial texture information. First, the noisy image is divided into overlapping image patches, and some random patches are extracted for dictionary learning. Then, the IDL technique minimizes the coherence between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from the sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to the regularization of seismic images from least-squares reverse time migration shows successful performance.
Directory of Open Access Journals (Sweden)
Chunbo Zhang
2017-04-01
Full Text Available Due to unbalanced speed-density observations, one-regime traffic fundamental diagram and speed-density relationship models calibrated with the least squares method (LSM) cannot reflect actual conditions under congested/jam traffic. In that case, it is necessary to adopt the weighted least squares method (WLSM). This paper uses observation data from freeway Georgia State Route 400 and proposes five weight determination methods besides the LSM to analyse five well-known one-regime speed-density models and determine the best calibration for each. The results indicate that different one-regime speed-density models have different best calibration methods: for the Greenberg model it was possible to find a specific weight using the LSM, and similarly for the Underwood and Northwestern models, but not for the model known as 3PL. An interesting case is Newell's model, which fits well with two distinct calibration weights. This paper contributes to calibrating a more precise traffic fundamental diagram.
Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine
International Nuclear Information System (INIS)
Xu Ruirui; Bian Guoxing; Gao Chenfeng; Chen Tianlun
2005-01-01
The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then a clustering method is employed in the model to prune the number of support values. Both the learning rate and the noise-filtering capability of the LS-SVM are thereby greatly improved.
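At its core, training an LS-SVM reduces to solving a single linear (KKT) system rather than a quadratic program, which is what makes the γ parameter appear as a simple ridge term. A minimal regression sketch; the RBF kernel, γ value and toy data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf(A, B, sigma=0.5):
    # Gaussian (RBF) kernel matrix between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    # LS-SVM regression: one linear KKT system
    #   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Z: rbf(Z, X, sigma) @ alpha + b

# Fit a smooth nonlinear target as a stand-in for a time-series map.
t = np.linspace(0.0, 2.0 * np.pi, 60)
X_train = t[:, None]
y_train = np.sin(t)
predict = lssvm_fit(X_train, y_train)
```

Pruning support values, as the abstract describes, would amount to dropping training points whose α coefficients are small and refitting.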
Use of correspondence analysis partial least squares on linear and unimodal data
DEFF Research Database (Denmark)
Frisvad, Jens C.; Bergsøe, Merete Norsker
1996-01-01
Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating...... that could only be seen in two-dimensional plots, and also less effective predictions. PLS was the best method in the linear case treated, with fewer components and a better prediction than CA-PLS....
Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji
2011-01-01
Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because physical and chemical properties of a measuring object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS) wh...
Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models
Wang, Ziyun; Wang, Yan; Ji, Zhicheng
2014-01-01
This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational...
Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2008-05-15
An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
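For a fixed set of knots, the least-squares cubic-spline fit that underlies such an algorithm is standard. A sketch with SciPy; the knot count here is hand-picked, whereas the described algorithm adapts the effective bandwidth to the data as a function of time:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 500)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)  # noisy "oscilloscope" record

# Interior knots: fewer knots mean more smoothing (chosen by hand here).
knots = np.linspace(0.5, 9.5, 12)
spline = LSQUnivariateSpline(x, y, knots, k=3)  # cubic least-squares spline

# RMS error of the smoothed record against the known underlying signal.
rms_err = float(np.sqrt(np.mean((spline(x) - np.sin(x)) ** 2)))
```

An automatic scheme would vary the knot density until an estimate of the mean squared error, like `rms_err` above but computed without knowing the true signal, is minimized.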
Least-squares methods for identifying biochemical regulatory networks from noisy measurements
Directory of Open Access Journals (Sweden)
Heslop-Harrison Pat
2007-01-01
Full Text Available Abstract Background We consider the problem of identifying the dynamic interactions in biochemical networks from noisy experimental data. Typically, approaches for solving this problem make use of an estimation algorithm such as the well-known linear Least-Squares (LS) estimation technique. We demonstrate that when time-series measurements are corrupted by white noise and/or drift noise, more accurate and reliable identification of network interactions can be achieved by employing an estimation algorithm known as Constrained Total Least Squares (CTLS). The Total Least Squares (TLS) technique is a generalised least squares method for solving an overdetermined set of equations whose coefficients are noisy. The CTLS is a natural extension of TLS to the case where the noise components of the coefficients are correlated, as is usually the case with time-series measurements of concentrations and expression profiles in gene networks. Results The superior performance of the CTLS method in identifying network interactions is demonstrated on three examples: a genetic network containing four genes, a network describing p53 activity and mdm2 messenger RNA interactions, and a recently proposed kinetic model for interleukin (IL-6) and (IL-12b) messenger RNA expression as a function of ATF3 and NF-κB promoter binding. For the first example, the CTLS significantly reduces the errors in the estimation of the Jacobian for the gene network. For the second, the CTLS reduces the errors from the measurements that are corrupted by white noise and the effect of neglected kinetics. For the third, it allows the correct identification, from noisy data, of the negative regulation of (IL-6) and (IL-12b) by ATF3. Conclusion The significant improvements in performance demonstrated by the CTLS method under the wide range of conditions tested here, including different levels and types of measurement noise and different numbers of data points, suggests that its application will enable
Hongkui Li; Tongli Lu; Jianwu Zhang
2016-01-01
This paper focuses on developing an estimation method for clutch drag torque in a wet dual clutch transmission (DCT). The modelling of clutch drag torque is investigated, and the dynamic viscosity of the oil, the main factor affecting drag torque, is discussed. The paper proposes an estimation method for clutch drag torque based on recursive least squares, utilizing the dynamic equations of the gear-shifting synchronization process. The results demonstrate that the estimation method has good accuracy and efficiency.
Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem
Directory of Open Access Journals (Sweden)
Baiyu Wang
2014-01-01
Full Text Available This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.
Directory of Open Access Journals (Sweden)
H. Hüseyin SAYAN
2009-01-01
Full Text Available In this study, the recursive least squares method (RLSM), one of the adaptive classical methods, is used. First, a forgetting factor is incorporated into the RLSM. The phase information of a voltage signal from an electric power network containing harmonics and spikes is then obtained by the developed approach. The responses of the algorithm are investigated for voltage collapse, phase shift and spikes. The simulation is implemented in MATLAB® code; its results are examined and the efficiency of the method is presented.
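A minimal sketch of recursive least squares with a forgetting factor, here tracking the phase of a sinusoidal voltage via its in-phase/quadrature components. The sampling rate, signal and forgetting factor λ are illustrative assumptions, not the paper's values:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    # One update of recursive least squares with forgetting factor lam.
    k = P @ phi / (lam + phi @ P @ phi)     # gain vector
    theta = theta + k * (y - phi @ theta)   # parameter update
    P = (P - np.outer(k, phi @ P)) / lam    # covariance update
    return theta, P

# Noiseless 50 Hz voltage with phase 0.3 rad:
# v(t) = cos(wt + ph) = cos(ph)*cos(wt) - sin(ph)*sin(wt).
fs, f0 = 5000.0, 50.0
t = np.arange(0.0, 0.2, 1.0 / fs)
v = np.cos(2.0 * np.pi * f0 * t + 0.3)

theta = np.zeros(2)                          # estimates [cos(ph), sin(ph)]
P = 1e4 * np.eye(2)
for ti, vi in zip(t, v):
    w = 2.0 * np.pi * f0 * ti
    phi = np.array([np.cos(w), -np.sin(w)])  # regressor at this sample
    theta, P = rls_step(theta, P, phi, vi)

phase = float(np.arctan2(theta[1], theta[0]))  # recovered phase estimate
```

The forgetting factor discounts old samples, which is what lets the estimate follow phase shifts and voltage collapse rather than averaging over the whole record.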
Solving the Axisymmetric Inverse Heat Conduction Problem by a Wavelet Dual Least Squares Method
Directory of Open Access Journals (Sweden)
Fu Chu-Li
2009-01-01
Full Text Available We consider an axisymmetric inverse heat conduction problem of determining the surface temperature from a fixed location inside a cylinder. This problem is ill-posed; the solution (if it exists) does not depend continuously on the data. A special projection method, the dual least squares method generated by the family of Shannon wavelets, is applied to formulate the regularized solution. Meanwhile, an order-optimal error estimate between the approximate solution and the exact solution is proved.
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Dieuleveut, Aymeric; Flammarion, Nicolas; Bach, Francis
2017-01-01
International audience; We consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean finite variance random error. We present the first algorithm that achieves jointly the optimal prediction error rates for least-squares regression, both in terms of forgetting of initial conditions in O(1/n 2), and in terms of dependence on the noise and dimension d of the problem, a...
Seismic time-lapse imaging using Interferometric least-squares migration
Sinha, Mrinal
2016-09-06
One of the problems with 4D surveys is that the environmental conditions change over time so that the experiment is insufficiently repeatable. To mitigate this problem, we propose the use of interferometric least-squares migration (ILSM) to estimate the migration image for the baseline and monitor surveys. Here, a known reflector is used as the reference reflector for ILSM. Results with synthetic and field data show that ILSM can eliminate artifacts caused by non-repeatability in time-lapse surveys.
Speckle evolution with multiple steps of least-squares phase removal
CSIR Research Space (South Africa)
Chen, M
2011-08-01
Full Text Available 84, 023846 (2011). ...logarithmically with each step. The result is that the vortex density decreases according to a power law as a function of propagation distance. In some cases one or two vortex dipoles still remain in the final field. The separation distances between...
The MCLIB library: Monte Carlo simulation of neutron scattering instruments
International Nuclear Information System (INIS)
Seeger, P.A.
1995-01-01
Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
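The "average of random evaluations" idea in the first sentences can be shown in a few lines. This is a generic sketch of Monte Carlo integration, not MCLIB code (MCLIB itself is Fortran):

```python
import random

def mc_integrate(f, a, b, n=100000, seed=42):
    # Average f at uniformly sampled points, then scale by interval length.
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Estimate the integral of x^2 on [0, 1]; the exact value is 1/3.
est = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

A transport simulation replaces the scalar integrand with a neutron history (source sampling, interactions, detection) and the average with a detector histogram, but the statistical machinery is the same.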
The MCLIB library: Monte Carlo simulation of neutron scattering instruments
Energy Technology Data Exchange (ETDEWEB)
Seeger, P.A.
1995-09-01
Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, first select a neutron from the source distribution, and project it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tally it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
Theoretical study of the incompressible Navier-Stokes equations by the least-squares method
Jiang, Bo-Nan; Loh, Ching Y.; Povinelli, Louis A.
1994-01-01
Usually the theoretical analysis of the Navier-Stokes equations is conducted via the Galerkin method which leads to difficult saddle-point problems. This paper demonstrates that the least-squares method is a useful alternative tool for the theoretical study of partial differential equations since it leads to minimization problems which can often be treated by an elementary technique. The principal part of the Navier-Stokes equations in the first-order velocity-pressure-vorticity formulation consists of two div-curl systems, so the three-dimensional div-curl system is thoroughly studied at first. By introducing a dummy variable and by using the least-squares method, this paper shows that the div-curl system is properly determined and elliptic, and has a unique solution. The same technique then is employed to prove that the Stokes equations are properly determined and elliptic, and that four boundary conditions on a fixed boundary are required for three-dimensional problems. This paper also shows that under four combinations of non-standard boundary conditions the solution of the Stokes equations is unique. This paper emphasizes the application of the least-squares method and the div-curl method to derive a high-order version of differential equations and additional boundary conditions. In this paper, an elementary method (integration by parts) is used to prove Friedrichs' inequalities related to the div and curl operators which play an essential role in the analysis.
Super-resolution least-squares prestack Kirchhoff depth migration using the L 0-norm
Wu, Shao-Jiang; Wang, Yi-Bo; Ma, Yue; Chang, Xu
2018-01-01
Least-squares migration (LSM) images subsurface structures and lithology by minimizing the objective function of the observed seismic data and the reverse-time migration residual data of various underground reflectivity models. LSM reduces migration artifacts, enhances the spatial resolution of the migrated images, and yields a more accurate subsurface reflectivity distribution than standard migration. The introduction of regularization constraints effectively improves the stability of the least-squares inversion. The commonly used regularization terms are based on the L2-norm, which smooths the migration results, e.g., by smearing the reflectivities, while providing stability. However, in exploration geophysics, reflection structures based on velocity and density are generally observed to be discontinuous in depth, illustrating sparse reflectance. To obtain a sparse migration profile, we propose super-resolution least-squares Kirchhoff prestack depth migration, which solves the L0-norm-constrained optimization problem. Additionally, we introduce a two-stage iterative soft and hard thresholding algorithm to retrieve the super-resolution reflectivity distribution. The proposed algorithm is applied to complex synthetic data, and its sensitivity to noise and to the dominant frequency of the source wavelet is evaluated. We conclude that the proposed method improves the spatial resolution, achieves an impulse-like reflectivity distribution, and can be applied to structural interpretation and complex subsurface imaging.
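The soft and hard thresholding operators at the heart of such two-stage algorithms can be sketched inside a basic iterative-thresholding least-squares loop. This is a toy sparse-recovery example with an illustrative random matrix standing in for the migration operator, not the paper's algorithm:

```python
import numpy as np

def soft(x, t):
    # Soft thresholding: shrink every entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard(x, t):
    # Hard thresholding: zero out entries with magnitude <= t.
    return np.where(np.abs(x) > t, x, 0.0)

def iterative_thresholding(A, b, thresh, n_iter=500, mode="soft"):
    # Gradient step on ||A m - b||^2 followed by a thresholding step.
    op = soft if mode == "soft" else hard
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        m = op(m + step * A.T @ (b - A @ m), thresh)
    return m

# Toy problem: 3 reflectivity spikes observed through a 60x100 operator.
rng = np.random.default_rng(3)
A = rng.normal(size=(60, 100)) / np.sqrt(60)
m_true = np.zeros(100)
m_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
b = A @ m_true
m_est = iterative_thresholding(A, b, thresh=0.005)
```

A two-stage scheme like the one described would typically run soft thresholding first for stability and finish with hard thresholding to remove the amplitude bias that soft thresholding introduces.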
Regional geoid computation by least squares modified Hotine's formula with additive corrections
Märdla, Silja; Ellmann, Artu; Ågren, Jonas; Sjöberg, Lars E.
2018-03-01
Geoid and quasigeoid modelling from gravity anomalies by the method of least squares modification of Stokes's formula with additive corrections is adapted for use with gravity disturbances and Hotine's formula. The biased, unbiased and optimum versions of least squares modification are considered. Equations are presented for the four additive corrections that account for the combined (direct plus indirect) effect of downward continuation (DWC), topographic, atmospheric and ellipsoidal corrections in geoid or quasigeoid modelling. The geoid or quasigeoid modelling scheme by the least squares modified Hotine formula is numerically verified, analysed and compared to the Stokes counterpart in a heterogeneous study area. The resulting geoid models and the additive corrections computed both for use with Stokes's or Hotine's formula differ most in high topography areas. Over the study area (reaching almost 2 km in altitude), the approximate geoid models (before the additive corrections) differ by 7 mm on average with a 3 mm standard deviation (SD) and a maximum of 1.3 cm. The additive corrections, out of which only the DWC correction has a numerically significant difference, improve the agreement between respective geoid or quasigeoid models to an average difference of 5 mm with a 1 mm SD and a maximum of 8 mm.
Directory of Open Access Journals (Sweden)
Mohd Idrus Mohd Nazrul Effendy
2018-01-01
Full Text Available Near infrared spectroscopy (NIRS) is a reliable technique that is widely used in medical fields. A partial least squares model was developed to predict blood hemoglobin concentration using NIRS. The aims of this paper are (i) to develop a predictive model for near infrared spectroscopic analysis in blood hemoglobin prediction, (ii) to establish the relationship between blood hemoglobin and the near infrared spectrum using the predictive model, and (iii) to evaluate the predictive accuracy of the model based on the root mean squared error (RMSE) and the coefficient of determination rp2. Partial least squares with first-order Savitzky-Golay derivative preprocessing (PLS-SGd1) showed the best prediction performance, with RMSE = 0.7965 and rp2 = 0.9206 in K-fold cross validation. The optimum number of latent variables (LV) and frame length (f) were 32 and 27 nm, respectively. These findings suggest that the relationship between blood hemoglobin and the near infrared spectrum is strong, and that partial least squares with the first-order SG derivative is able to predict blood hemoglobin from near infrared spectral data.
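A PLS-SGd1-style pipeline (Savitzky-Golay first-derivative preprocessing followed by partial least squares) can be sketched on synthetic spectra. The band shapes, noise level, component count and frame length below are illustrative assumptions, and the PLS1 implementation is a minimal NIPALS variant, not the paper's software:

```python
import numpy as np
from scipy.signal import savgol_filter

def pls1_fit(X, y, n_comp):
    # Minimal NIPALS PLS1: returns coefficients b such that
    # y_hat = (X - X.mean(0)) @ b + y.mean().
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        p, qk = Xc.T @ t / tt, yc @ t / tt
        Xc, yc = Xc - np.outer(t, p), yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# Simulated "spectra": a Gaussian band whose height encodes the analyte.
rng = np.random.default_rng(7)
wl = np.linspace(0.0, 1.0, 150)
conc = rng.uniform(0.0, 1.0, 40)
band = np.exp(-((wl - 0.4) ** 2) / 0.002)
X = conc[:, None] * band + 0.02 * rng.normal(size=(40, 150))

# First-order Savitzky-Golay derivative preprocessing (frame length 11).
Xd = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)
b = pls1_fit(Xd, conc, n_comp=2)
pred = (Xd - Xd.mean(0)) @ b + conc.mean()
rmse = float(np.sqrt(np.mean((pred - conc) ** 2)))
```

In practice the latent-variable count and frame length would be chosen by K-fold cross validation, as the abstract describes.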
FFT-based preconditioners for Toeplitz-Block least square problems
Energy Technology Data Exchange (ETDEWEB)
Chan, R.H. (Univ. of Hong Kong (Hong Kong). Dept. of Mathematics); Nagy, J.G.; Plemons, R.J. (Univ. of Minnesota, Minneapolis, MN (United States). Inst. for Mathematics and its Applications)
1993-12-01
Discretized two-dimensional deconvolution problems arising, e.g., in image restoration and seismic tomography, can be formulated as least squares computations, min ||b - Tx||_2, where T is often a large-scale rectangular Toeplitz-block matrix. The authors consider solving such block least squares problems by the preconditioned conjugate gradient algorithm using square nonsingular circulant-block and related preconditioners, constructed from the blocks of the rectangular matrix T. Preconditioning with such matrices allows efficient implementation using the one-dimensional or two-dimensional fast Fourier transform (FFT). Two-block preconditioners, related to those proposed by T. Chan and J. Olkin for square nonsingular Toeplitz-block systems, are derived and analyzed. It is shown that, for important classes of T, the singular values of the preconditioned matrix are clustered around one. This extends the authors' earlier work on preconditioners for Toeplitz least squares iterations for one-dimensional problems. It is well known that the resolution of ill-posed deconvolution problems can be substantially improved by regularization to compensate for their ill-posed nature. It is shown that regularization can easily be incorporated into the preconditioners, and a report is given on numerical experiments on a Cray Y-MP. The experiments illustrate good convergence properties of these FFT-based preconditioned iterations.
Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution
Sen, Symal K.; Shaykhian, Gholam Ali
2011-01-01
Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that renders it unacceptable/unfit to be used in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm presented in Matlab reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
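A small sketch of the underlying ideas: the minimum-norm least-squares solution via the pseudoinverse, plus a leave-one-out check as a crude stand-in for the paper's inconsistency index. The system shown is a hypothetical example, not one from the paper:

```python
import numpy as np

# Four equations, two unknowns; the last equation contradicts the others
# (a right-hand side consistent with them would be 0, not 10).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, 1.0, 2.0, 10.0])

# A minimum-norm least-squares solution always exists via the pseudoinverse.
x = np.linalg.pinv(A) @ b

def contradiction_index(A, b):
    # Leave-one-out check: how badly does equation i disagree with the
    # least-squares solution of the remaining equations?
    out = []
    for i in range(len(b)):
        mask = np.arange(len(b)) != i
        xi, *_ = np.linalg.lstsq(A[mask], b[mask], rcond=None)
        out.append(abs(A[i] @ xi - b[i]))
    return np.array(out)

worst = int(np.argmax(contradiction_index(A, b)))  # flags equation 3
```

Removing the flagged equation and re-solving yields a solution consistent with the remaining equations, which mirrors steps (ii)-(iv) of the proposed algorithm.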
A cross-correlation objective function for least-squares migration and visco-acoustic imaging
Dutta, Gaurav
2014-08-05
Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.
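The normalized cross-correlation objective is easy to state concretely: it equals one for traces that match up to a positive amplitude scale and decreases as the phase disagrees. A sketch on a synthetic trace (the trace parameters are illustrative, not field data):

```python
import numpy as np

def ncc_objective(pred, obs):
    # Zero-lag normalized cross-correlation between predicted and observed
    # traces: insensitive to an overall positive amplitude scaling.
    return float(pred @ obs / (np.linalg.norm(pred) * np.linalg.norm(obs)))

t = np.linspace(0.0, 1.0, 500)
trace = np.sin(40.0 * t) * np.exp(-3.0 * t)           # observed trace
scaled = 0.1 * trace                                   # amplitude error only
shifted = np.sin(40.0 * t + 1.0) * np.exp(-3.0 * t)   # phase error

score_scaled = ncc_objective(scaled, trace)
score_shifted = ncc_objective(shifted, trace)
```

The amplitude-scaled trace scores a perfect 1 while the phase-shifted trace scores lower, which is exactly the property that relaxes the strong amplitude-matching requirement described above.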
A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.
Directory of Open Access Journals (Sweden)
Dominique Van de Sompel
Full Text Available Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles.
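The classical least squares (CLS) baseline that the proposed method extends can be sketched in a few lines; the reference bands, background polynomial and concentrations below are invented for illustration, and the principal-component extension is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
wn = np.linspace(0.0, 1.0, 400)              # arbitrary wavenumber axis

def band(center, width):                      # synthetic reference band
    return np.exp(-0.5 * ((wn - center) / width) ** 2)

# Reference spectra of two hypothetical analytes plus polynomial background terms
R = np.column_stack([band(0.3, 0.02) + band(0.7, 0.03),   # analyte 1
                     band(0.5, 0.025),                    # analyte 2
                     np.ones_like(wn), wn, wn ** 2])      # background model
c_true = np.array([2.0, 3.0])
mixture = (R[:, :2] @ c_true + 0.5 + 0.2 * wn
           + 0.001 * rng.standard_normal(wn.size))

# CLS step: one unweighted least-squares solve for concentrations + background
coeffs, *_ = np.linalg.lstsq(R, mixture, rcond=None)
c_est = coeffs[:2]
```

The fragility the abstract discusses enters exactly here: if the columns of R differ from the true reference spectra, the recovered concentrations are biased, which is what the principal-component extension addresses.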
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform, distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted partial least squares discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach proved comparable to, and in some cases better than, that of other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
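A stripped-down sketch of the local idea, with ridge-regularized ordinary least squares standing in for the PLS-DA core and an exponential distance weighting assumed for illustration:

```python
import numpy as np

def lw_ls_da(X, y, x_query, k=15, h=1.0, lam=1e-6):
    # Pick the k training objects most similar to the query, weight them by
    # distance, and fit a local (ridge-regularized) least-squares discriminant.
    # y holds class labels coded as +1 / -1.
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-d[idx] / h)                      # farther objects weigh less
    Xl = np.column_stack([np.ones(k), X[idx]])   # local model with intercept
    W = np.diag(w)
    beta = np.linalg.solve(Xl.T @ W @ Xl + lam * np.eye(Xl.shape[1]),
                           Xl.T @ W @ y[idx])
    return np.sign(np.concatenate(([1.0], x_query)) @ beta)

rng = np.random.default_rng(1)
# Non-linear two-class problem: class is decided by a circular boundary
X = rng.uniform(-1, 1, size=(400, 2))
y = np.where(np.linalg.norm(X, axis=1) < 0.6, 1.0, -1.0)
Xt = rng.uniform(-0.9, 0.9, size=(100, 2))
yt = np.where(np.linalg.norm(Xt, axis=1) < 0.6, 1.0, -1.0)
acc = np.mean([lw_ls_da(X, y, q) == yt[i] for i, q in enumerate(Xt)])
```

Even with a linear local model, the local selection step handles the circular boundary well, which is the point of working locally.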
Huang, Yunsong
2012-05-22
Multisource migration of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. The accompanying crosstalk noise, in addition to the migration footprint, can be reduced by least-squares inversion. But the application of this approach to marine streamer data is hampered by the mismatch between the limited number of live traces/shot recorded in the field and the pervasive number of traces generated by the finite-difference modelling method. This leads to a strong mismatch in the misfit function and results in strong artefacts (crosstalk) in the multisource least-squares migration image. To eliminate this noise, we present a frequency-division multiplexing (FDM) strategy with iterative least-squares migration (ILSM) of supergathers. The key idea is, at each ILSM iteration, to assign a unique frequency band to each shot gather. In this case there is no overlap in the crosstalk spectrum of the migrated shot gathers, so the spectral crosstalk product m(x, ω_i)m(x, ω_j) vanishes unless i = j. Our results in applying this method to 2D marine data for a SEG/EAGE salt model show better resolved images than standard migration computed at about 1/10th of the cost. Similar results are achieved after applying this method to synthetic data for a 3D SEG/EAGE salt model, except the acquisition geometry is similar to that of a marine OBS survey. Here, the speedup of this method over conventional migration is more than 10. We conclude that multisource migration for a marine geometry can be successfully achieved by a frequency-division encoding strategy, as long as crosstalk-prone sources are segregated in their spectral content. This is both the strength and the potential limitation of this method. © 2012 European Association of Geoscientists & Engineers.
Proton Exchange Membrane Fuel Cell Modelling Using Moving Least Squares Technique
Directory of Open Access Journals (Sweden)
Radu Tirnovan
2009-07-01
Full Text Available Proton exchange membrane fuel cells, with low polluting emissions, are a great alternative to replace traditional electrical power sources for automotive applications or for small stationary consumers. This paper presents a numerical method for fuel cell modelling based on moving least squares (MLS). Experimental data have been used to develop an approximate model of the PEMFC as a function of the current density, air inlet pressure and operating temperature of the fuel cell. The method can be applied to modelling other fuel cell sub-systems, such as the compressor, and can be used for off-line or on-line identification of the PEMFC stack.
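A moving least squares approximation can be sketched as a locally weighted polynomial fit re-solved at each query point; the 1-D test function below is a made-up stand-in for the measured PEMFC data:

```python
import numpy as np

def mls_eval(x_query, x_data, y_data, h=0.1):
    # Weighted least-squares fit of a local quadratic centered at the query
    # point, with a Gaussian weight of bandwidth h; return its value there.
    w = np.exp(-((x_data - x_query) / h) ** 2)
    sw = np.sqrt(w)
    V = np.column_stack([np.ones_like(x_data),
                         x_data - x_query,
                         (x_data - x_query) ** 2])
    beta = np.linalg.lstsq(V * sw[:, None], sw * y_data, rcond=None)[0]
    return beta[0]        # value of the local polynomial at x_query

# Made-up smooth 1-D stand-in for the measured fuel-cell response
f = lambda x: np.tanh(3 * x) + 0.3 * x
x = np.linspace(0.0, 1.0, 60)
y = f(x)
xq = np.linspace(0.2, 0.8, 13)
err = max(abs(mls_eval(q, x, y) - f(q)) for q in xq)
```

Unlike a single global fit, the weights move with the query point, which is what lets MLS follow curvature with a low-order local basis.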
Directory of Open Access Journals (Sweden)
V. I. Djigan
2007-12-01
Full Text Available This paper considers the application of linear constraints and RLS inverse QR decomposition in adaptive arrays based on the constant modulus criterion. The computational procedures of the adaptive algorithms are presented. Linearly constrained least squares adaptive arrays, constant modulus adaptive arrays and linearly constrained constant modulus adaptive arrays are compared via simulation. It is demonstrated that a constant phase shift in the array output signal, caused by the desired signal orientation and the array weights, is compensated in a simple way in linearly constrained constant modulus adaptive arrays.
A negative-norm least-squares method for time-harmonic Maxwell equations
Copeland, Dylan M.
2012-04-01
This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.
DEFF Research Database (Denmark)
Christensen, Bent Jesper; Varneskov, Rasmus T.
This paper introduces a new estimator of the fractional cointegrating vector between stationary long memory processes that is robust to low-frequency contamination such as level shifts, i.e., structural changes in the means of the series, and deterministic trends. In particular, the proposed medium...... the cointegration strength and testing MBLS against the existing narrow band least squares estimator are developed. Finally, the asymptotic framework for the MBLS estimator is used to provide new perspectives on volatility factors in an empirical application to long-span realized variance series for S&P 500...
Determination of the parameters of the Voigt function using the least-squares fit method
International Nuclear Information System (INIS)
Flores Ll, H.; Cabral P, A.; Jimenez D, H.
1990-01-01
The fundamental parameters of the Voigt function are determined: the Lorentzian width (Γ_L) and the Gaussian width (Γ_G), with an error of less than 1% in almost all cases over the intervals 0.01 ≤ Γ_L/Γ_G ≤ 1 and 0.3 ≤ Γ_G/Γ_L ≤ 1. This is achieved using the least-squares fit method with an algebraic function, yielding a simple way to obtain the fundamental parameters of the Voigt function used in many spectroscopies. (Author)
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene gas mixture spectra, as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors
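A rough numpy sketch of the selection idea, here folded into a single weighted solve: channels below the absorbance threshold keep unit CLS-style weights, channels above it get inverse-variance WLS weights. The spectra, noise model and threshold are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
wn = np.linspace(0.0, 1.0, 300)
K = np.column_stack([np.exp(-0.5 * ((wn - c) / 0.05) ** 2) for c in (0.3, 0.7)])
c_true = np.array([1.5, 0.8])
clean = K @ c_true
sigma = 0.001 + 0.02 * clean          # heteroscedastic noise level per channel
spec = clean + 0.003 + sigma * rng.standard_normal(wn.size)  # + baseline bias

def swls(K, spec, sigma, threshold):
    # Rough absorbance from a plain CLS pass, then: CLS-style unit weights for
    # low-absorbance channels, inverse-variance WLS weights for the rest.
    absorb = K @ np.linalg.lstsq(K, spec, rcond=None)[0]
    w = np.where(absorb < threshold, 1.0, 1.0 / sigma ** 2)
    sw = np.sqrt(w)
    return np.linalg.lstsq(K * sw[:, None], sw * spec, rcond=None)[0]

c_est = swls(K, spec, sigma, threshold=0.2)
```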
And still, a new beginning: the Galerkin least-squares gradient method
International Nuclear Information System (INIS)
Franca, L.P.; Carmo, E.G.D. do
1988-08-01
A finite element method is proposed to solve a scalar singular diffusion problem. The method is constructed by adding to the standard Galerkin formulation a mesh-dependent least-squares term based on the gradient of the Euler-Lagrange equation. For the one-dimensional homogeneous problem the method is designed to produce nodally exact solutions. An error estimate shows that the method converges optimally for any value of the singular parameter. Numerical results demonstrate the good stability and accuracy properties of the method. (author) [pt
Speed control of induction motor using fuzzy recursive least squares technique
Santiago Sánchez; Eduardo Giraldo
2008-01-01
A simple adaptive controller design is presented in this paper; the control system uses adaptive fuzzy logic and sliding modes and is trained with the recursive least squares technique. The problem of parameter variation is solved with the adaptive controller; the use of an internal PI regulator means that the speed control of the induction motor is achieved through the stator currents instead of the input voltage. The rotor-flux oriented coordinate system model is used to develop and test the control system.
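The recursive least squares estimator used for training can be sketched in isolation (the fuzzy/sliding-mode controller and the motor model are omitted; the regressors, forgetting factor and "plant" parameters below are illustrative):

```python
import numpy as np

def rls_step(theta, P, phi, d, lam=0.99):
    # theta: parameter estimate, P: inverse correlation matrix,
    # phi: regressor vector, d: desired output, lam: forgetting factor.
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (d - phi @ theta)     # correct by the a-priori error
    P = (P - np.outer(k, phi @ P)) / lam      # update inverse correlation
    return theta, P

rng = np.random.default_rng(3)
true_w = np.array([0.5, -1.2, 2.0])           # made-up "plant" parameters
theta, P = np.zeros(3), 1e4 * np.eye(3)
for _ in range(500):
    phi = rng.standard_normal(3)
    d = phi @ true_w + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, d)
```

The forgetting factor lam < 1 discounts old data exponentially, which is what lets the estimator track the parameter variation the abstract mentions.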
Decentralized Gauss-Newton method for nonlinear least squares on wide area network
Liu, Lanchao; Ling, Qing; Han, Zhu
2014-10-01
This paper presents a decentralized approach to the Gauss-Newton (GN) method for nonlinear least squares (NLLS) on a wide area network (WAN). In a multi-agent system, a centralized GN for NLLS requires the global GN Hessian matrix to be available at a central computing unit, which may incur large communication overhead. In the proposed decentralized alternative, each agent only needs the local GN Hessian matrix to update its iterates with the cooperation of neighbors. The detailed formulation of decentralized NLLS on a WAN is given, and the iteration at each agent is defined. The convergence property of the decentralized approach is analyzed, and numerical results validate the effectiveness of the proposed algorithm.
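For reference, the centralized Gauss-Newton iteration that the paper decentralizes looks as follows on a toy exponential-fit problem (the decentralized Hessian exchange itself is not sketched):

```python
import numpy as np

def gauss_newton(r, J, x0, iters=20):
    # Plain GN: at each step solve the normal equations with the GN Hessian
    # J^T J (the matrix the paper's agents assemble cooperatively).
    x = x0.astype(float)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        x = x - np.linalg.solve(Jx.T @ Jx, Jx.T @ rx)
    return x

# Toy NLLS problem: fit y = a * exp(b * t) to noiseless synthetic data
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y                     # residual vector
J = lambda x: np.column_stack([np.exp(x[1] * t),              # dr/da
                               x[0] * t * np.exp(x[1] * t)])  # dr/db
x_hat = gauss_newton(r, J, np.array([1.0, 0.0]))
```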
Small-kernel, constrained least-squares restoration of sampled image data
Hazra, Rajeeb; Park, Stephen K.
1992-01-01
Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
Estimation of multi-frequency signal parameters by frequency domain non-linear least squares
Zhu, Li-Min; Li, Han-Xiong; Ding, Han
2005-09-01
This paper presents a frequency domain method for estimating the parameters of a multi-frequency signal from discrete-time observations corrupted by additive noise. With two weak restrictions on the window function used, a concise non-linear least squares-based parameter estimation model, which exploits the joint information carried by the spectral samples near each spectral peak, is established, and, utilising its particular structure, an efficient two-step iterative algorithm is developed to solve it. The derived analytical expressions of the estimator variances indicate that this approach has superior accuracy over other computationally efficient frequency domain estimation methods. Simulation results confirm the validity of the presented method.
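A much simpler relative of the estimator is worth noting: once the frequencies are fixed, amplitudes and phases follow from a linear least-squares fit on a cosine/sine basis. The sketch below uses that simplification (frequencies assumed known, unlike in the paper, and the test signal is made up):

```python
import numpy as np

rng = np.random.default_rng(4)
fs, n = 1000.0, 1024
t = np.arange(n) / fs
# Two-tone test signal with additive noise (amplitudes/phases made up)
x = (1.2 * np.cos(2 * np.pi * 50 * t + 0.7)
     + 0.5 * np.cos(2 * np.pi * 120 * t - 1.1)
     + 0.1 * rng.standard_normal(n))

freqs = [50.0, 120.0]                 # frequencies assumed known here
B = np.column_stack([f(2 * np.pi * fq * t)
                     for fq in freqs for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(B, x, rcond=None)
# A*cos(w t + p) = A*cos(p)*cos(w t) - A*sin(p)*sin(w t)
amps = np.hypot(coef[0::2], coef[1::2])
phases = np.arctan2(-coef[1::2], coef[0::2])
```

The hard, non-linear part of the paper's problem is precisely estimating the frequencies themselves; everything else is linear, as above.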
QCL spectroscopy combined with the least squares method for substance analysis
Samsonov, D. A.; Tabalina, A. S.; Fufurin, I. L.
2017-11-01
The article briefly describes the distinctive features of quantum cascade lasers (QCLs). It also describes an experimental set-up for acquiring mid-infrared absorption spectra using a QCL. The paper demonstrates experimental results in the form of normalized spectra. We tested the application of the least squares method for spectrum analysis, using it for substance identification and the extraction of concentration data. We compare the results with more common methods of absorption spectroscopy and demonstrate the feasibility of using this simple method for quantitative and qualitative analysis of experimental data acquired with a QCL.
International Nuclear Information System (INIS)
Gillet, M.
1986-07-01
This thesis presents a study of the surveillance of the primary circuit water inventory of a pressurized water reactor. A reference model is developed for an automatic system ensuring detection and real-time diagnosis. The methods adapted to our application are statistical tests and a pattern recognition method. The estimation of the detected anomalies is treated by the least-squares fit method and by filtering. A new projected optimization method with superlinear convergence is developed in this framework, and a segmented linearization of the model is introduced, with a view to multiple filtering. 46 refs [fr
Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem
Energy Technology Data Exchange (ETDEWEB)
Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)
1996-12-31
Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method; it did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter, ν, goes to 1/2. Computational experiments are included.
Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.
Huang, Sheng-Juan; Yang, Guang-Hong
2017-09-01
This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.
Krishnamurthy, Thiagarajan
2005-01-01
Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
Directory of Open Access Journals (Sweden)
Santosh Kumar Singh
2017-06-01
Full Text Available This paper presents a new hybrid method based on the Gravity Search Algorithm (GSA) and Recursive Least Squares (RLS), known as GSA-RLS, to solve harmonic estimation problems for time-varying power signals in the presence of different noises. GSA is based on Newton's law of gravity and mass interactions. In the proposed method, the searcher agents are a collection of masses that interact with each other using Newton's laws of gravity and motion. The basic GSA strategy is combined with the RLS algorithm sequentially, in an adaptive way, to update the unknown parameters (weights) of the harmonic signal. Simulation and practical validation are carried out with real-time data obtained from a heavy paper industry. The performance of the proposed algorithm is compared with other recently reported algorithms such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Bacteria Foraging Optimization (BFO), Fuzzy-BFO (F-BFO) hybridized with Least Squares (LS), and BFO hybridized with RLS, which reveals that the proposed GSA-RLS algorithm is the best in terms of accuracy, convergence and computational time.
Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods
Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.
1991-01-01
The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview of RTOD/E capabilities is presented, along with the results of a study comparing the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.
Extreme Learning Machine and Moving Least Square Regression Based Solar Panel Vision Inspection
Directory of Open Access Journals (Sweden)
Heng Liu
2017-01-01
Full Text Available In recent years, learning-based machine intelligence has attracted much attention across science and engineering. Particularly in the field of automatic industry inspection, machine learning based vision inspection plays a more and more important role in defect identification and feature extraction. Through learning from image samples, many features of industry objects, such as shapes, positions, and orientation angles, can be obtained and then well utilized to determine whether there is a defect or not. However, robustness and speed are not easily achieved in such an inspection approach. In this work, for solar panel vision inspection, we present an extreme learning machine (ELM) and moving least squares regression based approach to identify solder joint defects and detect the panel position. Firstly, histogram peaks distribution (HPD) and fractional calculus are applied for image preprocessing. Then an ELM-based identification of defective solder joints is discussed in detail. Finally, the moving least squares regression (MLSR) algorithm is introduced for solar panel position determination. Experimental results and comparisons show that the proposed ELM and MLSR based inspection method is efficient not only in detection accuracy but also in processing speed.
Niazi, Ali; Goodarzi, Mohammad
2008-04-01
The simultaneous determination of cypermethrin and tetramethrin mixtures by spectrophotometric methods is a difficult problem in analytical chemistry due to spectral interferences. With multivariate calibration methods, such as partial least squares (PLS) regression, it is possible to obtain a model adjusted to the concentration values of the mixtures used in the calibration range. Orthogonal signal correction (OSC) is a preprocessing technique, based on constrained principal component analysis, used for removing information unrelated to the target variables. OSC is a suitable preprocessing method for partial least squares calibration of mixtures without loss of prediction capacity. In this study, the calibration model is based on absorption spectra in the 200-350 nm range for 25 different mixtures of cypermethrin and tetramethrin. The calibration matrices contained 0.1-12.9 and 0.1-13.8 μg mL-1 of cypermethrin and tetramethrin, respectively. The RMSEP values for cypermethrin and tetramethrin were 0.0884 and 0.0614 with OSC, and 0.2915 and 0.2309 without OSC, respectively. This procedure allows the simultaneous determination of cypermethrin and tetramethrin in synthetic and real samples, and the good reliability of the determination was proved.
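The PLS regression core (without the OSC preprocessing step) can be sketched with the classic NIPALS deflation for a single response; the data below are synthetic stand-ins, not the spectrophotometric calibration set:

```python
import numpy as np

def pls1(X, y, n_comp):
    # Single-response PLS via NIPALS deflation; returns the regression
    # vector plus the centering terms needed for prediction.
    xm, ym = X.mean(0), y.mean()
    Xd, yd = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd
        w = w / np.linalg.norm(w)        # weight: dominant covariance direction
        tt = Xd @ w                      # score
        p = Xd.T @ tt / (tt @ tt)        # X loading
        q = yd @ tt / (tt @ tt)          # y loading
        Xd = Xd - np.outer(tt, p)        # deflate X and y
        yd = yd - q * tt
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # collapse to one regression vector
    return B, xm, ym

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 10))
beta = np.zeros(10); beta[:3] = [1.0, -2.0, 0.5]
y = X @ beta + 0.01 * rng.standard_normal(60)
B, xm, ym = pls1(X, y, n_comp=5)
y_hat = (X - xm) @ B + ym
```

OSC would be applied to X before this fit, removing variation orthogonal to y so that fewer latent variables are needed.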
Nobile, Fabio
2015-01-07
We consider a general problem F(u, y) = 0 where u is the unknown solution, possibly Hilbert space valued, and y a set of uncertain parameters. We specifically address the situation in which the parameter-to-solution map u(y) is smooth, while y could be very high (or even infinite) dimensional. In particular, we are interested in cases in which F is a differential operator, u a Hilbert space valued function, and y a distributed, space and/or time varying, random field. We aim at reconstructing the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial expansions, for the output of computer experiments. In the case of PDEs with random parameters, the metamodel is then used to approximate statistics of the output quantity. We discuss the stability of discrete least squares on random points and show convergence estimates both in expectation and in probability. We also present possible strategies to select, either a priori or by adaptive algorithms, sequences of approximating polynomial spaces that allow one to reduce, and in some cases break, the curse of dimensionality.
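The basic ingredient, discrete least squares on a polynomial space from random evaluation points, can be sketched as follows (the map u(y), degree and sample count are illustrative one-dimensional stand-ins):

```python
import numpy as np

rng = np.random.default_rng(6)
u = lambda yy: np.exp(yy) * np.sin(3 * yy)   # smooth parameter-to-solution map
deg, n_pts = 8, 100
y_pts = rng.uniform(-1, 1, n_pts)            # random evaluation points
V = np.polynomial.legendre.legvander(y_pts, deg)   # Legendre basis (stable)
coef, *_ = np.linalg.lstsq(V, u(y_pts), rcond=None)

y_test = np.linspace(-1, 1, 201)
u_hat = np.polynomial.legendre.legval(y_test, coef)
err = np.max(np.abs(u_hat - u(y_test)))
```

Oversampling (here about 11 points per coefficient) is what keeps the random-point least-squares problem stable, the issue the paper's stability analysis quantifies.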
Equalization of Loudspeaker and Room Responses Using Kautz Filters: Direct Least Squares Design
Directory of Open Access Journals (Sweden)
Tuomas Paatero
2007-01-01
Full Text Available DSP-based correction of loudspeaker and room responses is becoming an important part of improving sound reproduction. Such response equalization (EQ) is based on using a digital filter in cascade with the reproduction channel to counteract the response errors introduced by loudspeakers and room acoustics. Several FIR and IIR filter design techniques have been proposed for equalization purposes. In this paper we investigate Kautz filters, an interesting class of IIR filters, from the point of view of direct least squares EQ design. Kautz filters can be seen as generalizations of FIR filters and their frequency-warped counterparts. They provide a flexible means to obtain desired frequency resolution behavior, which allows low filter orders even for complex corrections. Kautz filters also have the desirable property of avoiding the inversion of dips in the transfer function into sharp, long-ringing resonances in the equalizer. Furthermore, the direct least squares design is applicable to nonminimum-phase EQ design and allows the use of a desired target response. The proposed method is demonstrated by case examples with measured and synthetic loudspeaker and room responses.
International Nuclear Information System (INIS)
Ackroyd, R.T.
1987-01-01
A least squares principle is described which uses a penalty function treatment of boundary and interface conditions. Appropriate choices of the trial functions and vectors employed in a dual representation of an approximate solution establish complementary principles for the diffusion equation. A geometrical interpretation of the principles provides weighted residual methods for diffusion theory, thus establishing a unification of least squares, variational and weighted residual methods. The complementary principles are used with either a trial function for the flux or a trial vector for the current to establish, for regular meshes, a connection between finite element, finite difference and nodal methods, which can be exact if the mesh pitches are chosen appropriately. Whereas the coefficients in the usual nodal equations have to be determined iteratively, those derived via the complementary principles are given explicitly in terms of the data. For the further development of the connection between finite element, finite difference and nodal methods, some hybrid variational methods are described which employ both a trial function and a trial vector. (author)
Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems
Energy Technology Data Exchange (ETDEWEB)
Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies
2018-03-29
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
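The optimality property described above can be illustrated on a toy one-parameter system: restricting the solution to a subspace and minimizing the weighted residual norm is a plain linear least-squares problem (the matrices, subspace and weights below are random stand-ins, not the paper's stochastic discretization):

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 30, 5
A0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A1 = 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
V = np.linalg.qr(rng.standard_normal((n, k)))[0]   # trial subspace basis
W = np.diag(rng.uniform(0.5, 2.0, n))              # weighting matrix

theta = 0.3                                        # one uncertain parameter
A_t = A0 + theta * A1
# LSPG idea: set x = V z and minimize ||W (b - A_t V z)|| -- linear lstsq in z
z, *_ = np.linalg.lstsq(W @ A_t @ V, W @ b, rcond=None)
x_lspg = V @ z
res = np.linalg.norm(W @ (b - A_t @ x_lspg))
# Any other subspace element has a weighted residual at least as large
x_other = x_lspg + 0.1 * V @ rng.standard_normal(k)
res_other = np.linalg.norm(W @ (b - A_t @ x_other))
```

Changing W changes which norm of the residual (or, through a suitable choice, of the error) is minimized, which is the adaptability the abstract emphasizes.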
Non-stationary least-squares complex decomposition for microseismic noise attenuation
Chen, Yangkang
2018-02-01
Microseismic data processing and imaging are crucial for subsurface real-time monitoring during hydraulic fracturing. Unlike active-source seismic events or large-scale earthquake events, a microseismic event is usually of very small magnitude, which makes its detection challenging. The biggest challenge with microseismic data is the low signal-to-noise ratio (SNR). Because of the small energy difference between the effective microseismic signal and ambient noise, the effective signals are usually buried in strong random noise. I propose a useful microseismic denoising algorithm that is based on decomposing a microseismic trace into an ensemble of components using least-squares inversion. Based on the predictive property of the useful microseismic event along the time direction, the random noise can be filtered out via least-squares fitting of multiple damping exponential components. The method is flexible and almost automated, since the only parameter that needs to be defined is the decomposition number. I use synthetic and real data examples to demonstrate the potential of the algorithm in processing complicated microseismic datasets.
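The decomposition step can be sketched as a least-squares fit over a small dictionary of damped oscillatory components; the damping/frequency grids and the synthetic trace are illustrative, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
t = np.arange(n) / 500.0
clean = np.exp(-4 * t) * np.sin(2 * np.pi * 35 * t)   # event-like wavelet
trace = clean + 0.3 * rng.standard_normal(n)          # buried in random noise

# Dictionary of damped cosine/sine components over assumed grids
D = np.column_stack([np.exp(-a * t) * f(2 * np.pi * fq * t)
                     for a in (1.0, 2.0, 4.0, 8.0)
                     for fq in (20.0, 30.0, 35.0, 45.0)
                     for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(D, trace, rcond=None)
denoised = D @ coef        # least-squares reconstruction from the components

err_before = np.linalg.norm(trace - clean)
err_after = np.linalg.norm(denoised - clean)
```

Because the reconstruction lives in the low-dimensional span of predictable components while random noise does not, most of the noise energy is rejected by the least-squares projection.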
Directory of Open Access Journals (Sweden)
C.G. Ozoegwu
2016-01-01
Full Text Available The general least-squares model for the milling process state term is presented. A discrete map for milling stability analysis based on the third-order case of the presented general least-squares milling state term model is first studied and compared with its third-order counterpart based on interpolation theory. Both the numerical rate of convergence and the chatter stability results of the two maps are compared using the single degree of freedom (1DOF) milling model. The numerical rate of convergence of the presented third-order model is also studied using the two degree of freedom (2DOF) milling process model. The comparison showed that the stability results from the two maps agree closely, but the presented map requires fewer calculations, leading to about 30% savings in computational time (CT). Earlier works have shown that the accuracy of milling stability analysis using the full-discretization method rises from the first-order theory to the second-order theory and continues to rise with the third-order theory; the present work confirms this trend. In conclusion, the method presented in this work will enable fast and accurate computation of stability diagrams for use by machinists.
Yao, Yan; Wang, Chang-yue; Liu, Hui-jun; Tang, Jian-bin; Cai, Jin-hui; Wang, Jing-jun
2015-07-01
Forest bio-fuel, a new type of renewable energy, has attracted increasing attention as a promising alternative. In this study, a new method called Sparse Partial Least Squares Regression (SPLS) is used, in combination with near-infrared (NIR) spectroscopy, to construct a proximate analysis model for the fuel characteristics of sawdust. The moisture, ash, volatile matter and fixed carbon percentages of 80 samples were measured by traditional proximate analysis. Spectroscopic data were collected by a Nicolet NIR spectrometer. After filtering by wavelet transform, all samples were divided into a training set and a validation set according to sample category and producing area. SPLS, Principal Component Regression (PCR), Partial Least Squares Regression (PLS) and the Least Absolute Shrinkage and Selection Operator (LASSO) were used to construct prediction models. The results showed that SPLS can select grouped wavelengths and improve prediction performance. The absorption peaks of moisture are covered by the selected wavelengths, while those of the other compositions have not been confirmed yet. In summary, SPLS can reduce the dimensionality of complex data sets and interpret the relationship between spectroscopic data and composition concentration, and it will play an increasingly important role in the field of NIR applications.
Partial least squares path modeling basic concepts, methodological issues and applications
Noonan, Richard
2017-01-01
This edited book presents the recent developments in partial least squares-path modeling (PLS-PM) and provides a comprehensive overview of the current state of the most advanced research related to PLS-PM. The first section of this book emphasizes the basic concepts and extensions of the PLS-PM method. The second section discusses the methodological issues that are the focus of the recent development of the PLS-PM method. The third part discusses the real world application of the PLS-PM method in various disciplines. The contributions from expert authors in the field of PLS focus on topics such as the factor-based PLS-PM, the perfect match between a model and a mode, quantile composite-based path modeling (QC-PM), ordinal consistent partial least squares (OrdPLSc), non-symmetrical composite-based path modeling (NSCPM), modern view for mediation analysis in PLS-PM, a multi-method approach for identifying and treating unobserved heterogeneity, multigroup analysis (PLS-MGA), the assessment of the common method b...
Least square method of estimation of ecological half-lives of radionuclides in sediments
International Nuclear Information System (INIS)
Ranade, A.K.; Pandey, M.; Datta, D.; Ravi, P.M.
2012-01-01
Long term behavior of radionuclides in the environment is an important issue for estimating probable radiological consequences and associated risks. It is also useful for evaluating the potential use of contaminated areas and the possible effectiveness of remediation activities. The long term behavior is quantified by means of the ecological half-life, a parameter that aggregates all processes except radioactive decay that cause a decrease of activity in a specific medium. The processes contributing to the ecological half-life depend upon the environmental conditions of the medium involved. A fitting model based on a least-squares regression approach was used to evaluate the ecological half-life. This least-squares method has to be run several times to determine the number of ecological half-lives present in the medium for the radionuclide. The case study considered here is 137Cs in Mumbai Harbour Bay, and the study shows the trend of 137Cs over the years at a location in the bay. The first iteration of the model gives an ecological half-life of 4.94 y; the model then passes through a number of further runs, with the number of ecological half-lives present determined by a goodness-of-fit test. The paper presents a methodology for evaluating the ecological half-life and exemplifies it with a case study of 137Cs. (author)
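For a single-component record, the first iteration reduces to a straight-line least-squares fit on the log of the activity. A sketch under the assumption of a pure single-exponential record (the 4.94 y value mirrors the paper's first-iteration estimate; the synthetic data are invented):

```python
import numpy as np

def ecological_half_life(years, activity):
    """Estimate a single ecological half-life by a linear least-squares
    fit on log activity: ln A(t) = ln A0 - lam * t, half-life = ln2 / lam."""
    slope, _ = np.polyfit(years, np.log(activity), 1)
    return np.log(2) / -slope

# Synthetic activity record following a pure exponential, 4.94 y half-life.
t = np.arange(0.0, 20.0)
A = 100.0 * 0.5 ** (t / 4.94)
hl = ecological_half_life(t, A)
```

When more than one half-life is present, the paper's procedure repeats the fit with additional exponential terms and uses a goodness-of-fit test to decide how many to keep.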
Least-squares migration of multisource data with a deblurring filter
Dai, Wei
2011-09-01
Least-squares migration (LSM) has been shown to be able to produce high-quality migration images, but its computational cost is considered to be too high for practical imaging. We have developed a multisource least-squares migration algorithm (MLSM) to increase the computational efficiency by using the blended sources processing technique. To expedite convergence, a multisource deblurring filter is used as a preconditioner to reduce the data residual. This MLSM algorithm is applicable with Kirchhoff migration, wave-equation migration, or reverse time migration, and the gain in computational efficiency depends on the choice of migration method. Numerical results with Kirchhoff LSM on the 2D SEG/EAGE salt model show that an accurate image is obtained by migrating a supergather of 320 phase-encoded shots. When the encoding functions are the same for every iteration, the input/output cost of MLSM is reduced by 320 times. Empirical results show that the crosstalk noise introduced by blended sources is more effectively reduced when the encoding functions are changed at every iteration. The analysis of signal-to-noise ratio (S/N) suggests that not too many iterations are needed to enhance the S/N to an acceptable level. Therefore, when implemented with wave-equation migration or reverse time migration methods, the MLSM algorithm can be more efficient than the conventional migration method. © 2011 Society of Exploration Geophysicists.
Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan
2017-09-01
In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike the traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Kirsanov, Dmitry; Panchuk, Vitaly; Goydenko, Alexander; Khaydukova, Maria; Semenov, Valentin; Legin, Andrey
2015-11-01
This study addresses the problem of simultaneous quantitative analysis of six lanthanides (Ce, Pr, Nd, Sm, Eu, Gd) in mixed solutions by two different X-ray fluorescence techniques: energy-dispersive (EDX) and total reflection (TXRF). The concentration of each lanthanide was varied in the range 10^-6 to 10^-3 mol/L, the low values being around the detection limit of the method. This resulted in XRF spectra with a very poor signal-to-noise ratio and overlapping bands in the case of EDX, while only the latter problem was observed for TXRF. It was shown that an ordinary least squares approach to numerical calibration fails to provide reasonable precision in the quantification of individual lanthanides. Partial least squares (PLS) regression was able to circumvent these spectral inferiorities and yielded adequate calibration models for both techniques, with RMSEP (root mean squared error of prediction) values around 10^-5 mol/L. It was demonstrated that the comparatively simple and inexpensive EDX method is capable of ensuring precision similar to the more sophisticated TXRF when the spectra are treated by PLS.
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus, and the linear pseudo model of Edmond Malinvaud [4] is explained in a very different way. David Pollard et al. used empirical process techniques to study the asymptotics of least-squares estimation (LSE) for the fitting of a nonlinear regression function in 2006. Jae Myung [13] provided a good conceptual guide to maximum likelihood estimation in his work “Tutorial on maximum likelihood estimation”.
HYDRA: a Java library for Markov Chain Monte Carlo
Directory of Open Access Journals (Sweden)
Gregory R. Warnes
2002-03-01
Full Text Available Hydra is an open-source, platform-neutral library for performing Markov Chain Monte Carlo. It implements the logic of standard MCMC samplers within a framework designed to be easy to use, extend, and integrate with other software tools. In this paper, we describe the problem that motivated our work, outline our goals for the Hydra project, and describe the current features of the Hydra library. We then provide a step-by-step example of using Hydra to simulate from a mixture model drawn from cancer genetics, first using a variable-at-a-time Metropolis sampler and then a Normal Kernel Coupler. We conclude with a discussion of future directions for Hydra.
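Hydra itself is a Java library; purely as an illustration of the variable-at-a-time Metropolis logic such a library encapsulates, here is a minimal single-variable sketch in Python against a two-component normal mixture (all names and settings below are invented, not the Hydra API):

```python
import numpy as np

def metropolis(log_density, x0, n_samples, step=1.0, seed=0):
    """Minimal single-variable Metropolis sampler with Gaussian proposals."""
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        # Accept with probability min(1, p(prop) / p(x)), in log space.
        if np.log(rng.random()) < log_density(prop) - log_density(x):
            x = prop
        out[i] = x
    return out

# Equal-weight mixture of N(-2, 1) and N(2, 1), echoing the paper's
# mixture-model example only in spirit.
def log_mix(x):
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

draws = metropolis(log_mix, x0=0.0, n_samples=20000, step=2.0)
```

Samplers like the Normal Kernel Coupler mentioned in the abstract replace the simple Gaussian proposal with a proposal built from the current population of chains.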
The neutron instrument Monte Carlo library MCLIB: Recent developments
International Nuclear Information System (INIS)
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.; Thelliez, T.G.
1998-01-01
A brief review is given of the developments since the ICANS-XIII meeting made in the neutron instrument design codes using the Monte Carlo library MCLIB. Much of the effort has been to assure that the library and the executing code MC RUN connect efficiently with the World Wide Web application MC-WEB as part of the Los Alamos Neutron Instrument Simulation Package (NISP). Since one of the most important features of MCLIB is its open structure and capability to incorporate any possible neutron transport or scattering algorithm, this document describes the current procedure that would be used by an outside user to add a feature to MCLIB. Details of the calling sequence of the core subroutine OPERATE are discussed, and questions of style are considered and additional guidelines given. Suggestions for standardization are solicited, as well as code for new algorithms.
International Nuclear Information System (INIS)
Hughes, T.J.R.; Hulbert, G.M.; Franca, L.P.
1988-10-01
Galerkin/least-squares finite element methods are presented for advective-diffusive equations. Galerkin/least-squares represents a conceptual simplification of SUPG, and is in fact applicable to a wide variety of other problem types. A convergence analysis and error estimates are presented. (author)
Sim, K S; Norhisham, S
2016-11-01
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The SNR estimate from the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method produces better estimation accuracy than the three existing methods; according to the SNR results obtained from the experiment, the NLLSR method produces an SNR error difference of less than approximately 1% compared with the other three existing methods. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine
Lawi, Armin; Sya'Rani Machrizzandi, M.
2018-03-01
Facial expression is one of the behavioral characteristics of human beings. Using a biometric system with facial expression characteristics makes it possible to recognize a person’s mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fearful, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the classification of facial expressions. The MELS-SVM model, evaluated on 185 expression images of 10 persons, showed a high accuracy of 99.998% using the RBF kernel.
An improved partial least-squares regression method for Raman spectroscopy.
Momenpour Tehran Monfared, Ali; Anis, Hanan
2017-10-05
It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method sorts the weighted regression coefficients, and the importance of each variable in the sorted list is then evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed a similar or better performance compared to the genetic algorithm. Copyright © 2017 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Madsen, Henrik; Rosbjerg, Dan
1997-01-01
A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model...... parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified...... families of prior distributions. The regional method is applied to flood records from 48 New Zealand catchments. In the case of a strongly heterogeneous intersite correlation structure, the GLS procedure provides a more efficient estimate of the regional GP shape parameter as compared to the usually...
A least-squares finite element method for 3D incompressible Navier-Stokes equations
Jiang, Bo-Nan; Lin, T. L.; Hou, Lin-Jun; Povinelli, Louis A.
1993-01-01
The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system. An additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. The flow in half of a 3D cubic cavity is calculated at Re = 100, 400, and 1,000 with 50 x 52 x 25 trilinear elements. Taylor-Goertler-like vortices are observed at Re = 1,000.
Analysis of Shift and Deformation of Planar Surfaces Using the Least Squares Plane
Directory of Open Access Journals (Sweden)
Hrvoje Matijević
2006-12-01
Full Text Available Modern methods of measurement developed on the basis of advanced reflectorless distance measurement have paved the way for easier detection and analysis of shift and deformation. A large quantity of collected data points often requires a mathematical model of the surface that best fits them. Although this can be a complex task, in the case of planar surfaces it is easily done, enabling further processing and analysis of measurement results. The paper describes the fitting of a plane to a set of collected points using least-squares distances, with outliers previously excluded via the RANSAC algorithm. Based on that, a method for the analysis of deformation and shift of planar surfaces is also described.
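For a plane expressed as z = ax + by + c, the least-squares fitting step (after RANSAC outlier removal, which is omitted here) is a small linear solve. A sketch on synthetic points (the plane coefficients below are invented):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) point array."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Noisy samples from the hypothetical plane z = 0.3x - 0.7y + 2.
rng = np.random.default_rng(2)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.3 * xy[:, 0] - 0.7 * xy[:, 1] + 2.0 + 0.01 * rng.standard_normal(200)
a, b, c = fit_plane(np.column_stack([xy, z]))
```

Shift and deformation analysis then compares the fitted parameters (or point-to-plane residuals) between measurement epochs.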
First-order system least squares for the pure traction problem in planar linear elasticity
Energy Technology Data Exchange (ETDEWEB)
Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.
1996-12-31
This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L{sup 2} norms to define the FOSLS functional, is shown under certain H{sup 2} regularity assumptions to admit optimal H{sup 1}-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H{sup -1} norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L{sup 2} norm and for displacement in an H{sup 1} norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.
Khawaja, Taimoor Saleem
A high-belief, low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful LS-SVM algorithm, set within a Bayesian inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel anomaly detector is suggested based on LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior
Energy Technology Data Exchange (ETDEWEB)
De Lucia, Frank C., E-mail: frank.delucia@us.army.mil; Gottfried, Jennifer L.
2011-02-15
Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.
Quantification of anaesthetic effects on atrial fibrillation rate by partial least-squares
International Nuclear Information System (INIS)
Cervigón, R; Moreno, J; Pérez-Villacastín, J; Reilly, R B; Castells, F
2012-01-01
The mechanisms underlying atrial fibrillation (AF) remain poorly understood; the multiple wandering propagation wavelets drifting through both atria under hierarchical models are not fully characterized. Some pharmacological drugs, known as antiarrhythmics, modify the cardiac ionic currents supporting the fibrillation process within the atria and may modify the AF propagation dynamics, terminating the fibrillation process. Other medications, theoretically non-antiarrhythmic, may slightly affect the fibrillation process through mechanisms that are not well defined. We evaluated whether the most commonly used anaesthetic agent, propofol, affects AF patterns. Partial least-squares (PLS) analysis was performed to reduce the significant noise into the main latent variables and find the differences between groups. The final results showed an excellent discrimination between groups, with slower atrial activity during the propofol infusion. (paper)
ENTREPRENEURIAL ATTITUDE AND STUDENTS BUSINESS START-UP INTENTION: A PARTIAL LEAST SQUARE MODELING
Directory of Open Access Journals (Sweden)
Widayat Widayat
2017-03-01
Full Text Available This article examines the role of the entrepreneurial spirit and education in building an attitude toward working as an entrepreneur, and their influence on students' intention to start a business. Data were collected using a questionnaire whose validity and reliability had been established. Questionnaires were given to student respondents selected as samples at several universities in Malang, East Java, Indonesia. The collected data were analyzed using Partial Least Squares. The analysis showed that entrepreneurial spirit and education contribute to the formation of entrepreneurial attitudes, and that the attitudes formed significantly encourage intentions to start a business.
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
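The globally optimal estimate that the distributed algorithm converges to has a standard closed form. A centralized sketch for reference (the toy system below is invented; the paper's contribution is computing this estimate by neighborhood communication only):

```python
import numpy as np

def weighted_least_squares(A, y, w):
    """Closed-form solution of min_x (y - A x)^T diag(w) (y - A x):
    x* = (A^T W A)^{-1} A^T W y with W = diag(w)."""
    Aw = A * w[:, None]                 # rows of A scaled by their weights
    return np.linalg.solve(A.T @ Aw, Aw.T @ y)

# Toy network-wide system: 100 stacked scalar measurements of 2 parameters,
# each with its own noise level (weights = inverse noise variances).
rng = np.random.default_rng(3)
x_true = np.array([1.0, -2.0])
A = rng.standard_normal((100, 2))
sigma = rng.uniform(0.1, 1.0, size=100)
y = A @ x_true + sigma * rng.standard_normal(100)
x_hat = weighted_least_squares(A, y, w=1.0 / sigma**2)
```

In the distributed setting each sub-system holds only some rows of A and y, and the iterative algorithm reproduces this x_hat without gathering the data centrally.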
Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding
Directory of Open Access Journals (Sweden)
Ying Chen
2014-05-01
Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
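The core of such a classifier is NNLS coding against per-class dictionaries followed by a minimum-residual decision. A toy sketch using SciPy's `nnls` (the two-class "dictionaries" below are random stand-ins, not JAFFE features):

```python
import numpy as np
from scipy.optimize import nnls

def nnls_classify(dictionaries, x):
    """Code x over each class dictionary with non-negative least squares
    and return the class index with the smallest reconstruction residual."""
    residuals = [nnls(D, x)[1] for D in dictionaries]
    return int(np.argmin(residuals))

# Two toy "classes", each spanned by a random non-negative dictionary of
# 4 atoms in a 20-dimensional feature space.
rng = np.random.default_rng(4)
D0 = rng.random((20, 4))
D1 = rng.random((20, 4))
x = D1 @ np.array([0.5, 1.0, 0.0, 0.2])   # sample built from class-1 atoms
label = nnls_classify([D0, D1], x)
```

Because x lies exactly in the non-negative cone of class 1's atoms, its class-1 reconstruction residual is near zero while the class-0 residual is not.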
Least squares approach for initial data recovery in dynamic data-driven applications simulations
Douglas, C.
2010-12-01
In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.
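The objective described, a measurement misfit plus a penalization toward prior initial data, has a closed-form minimizer in the linear case. A sketch under the assumption of a linear measurement operator (all sizes and values below are invented):

```python
import numpy as np

def penalized_update(H, d, m_prior, beta):
    """Minimize ||H m - d||^2 + beta * ||m - m_prior||^2 in closed form:
    m* = (H^T H + beta I)^{-1} (H^T d + beta m_prior)."""
    n = m_prior.size
    return np.linalg.solve(H.T @ H + beta * np.eye(n), H.T @ d + beta * m_prior)

rng = np.random.default_rng(5)
m_true = rng.standard_normal(10)                  # "true" initial data
H = rng.standard_normal((6, 10))                  # linear measurement operator
d = H @ m_true                                    # noise-free local measurements
m_prior = m_true + 0.1 * rng.standard_normal(10)  # prior knowledge, slightly off
m_new = penalized_update(H, d, m_prior, beta=1.0)
```

The update pulls the estimate toward the data in the directions the measurements constrain, while the penalty keeps it near the prior in the unobserved directions, which is why repeating the update as new data arrive improves the predictions.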
Selecting radial basis function network centers with recursive orthogonal least squares training.
Gomm, J B; Yu, D L
2000-01-01
Recursive orthogonal least squares (ROLS) is a numerically robust method for solving for the output layer weights of a radial basis function (RBF) network, and requires less computer memory than the batch alternative. In this paper, the use of ROLS is extended to selecting the centers of an RBF network. It is shown that the information available in an ROLS algorithm after network training can be used to sequentially select centers to minimize the network output error. This provides efficient methods for network reduction to achieve smaller architectures with acceptable accuracy and without retraining. Two selection methods are developed, forward and backward. The methods are illustrated in applications of RBF networks to modeling a nonlinear time series and a real multiinput-multioutput chemical process. The final network models obtained achieve acceptable accuracy with significant reductions in the number of required centers.
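The forward-selection variant can be illustrated in a simplified batch (non-recursive) form: greedily add the candidate center that most reduces the least-squares output error. This is only a rough analogue of ROLS, which performs the selection with orthogonal decompositions and recursive updates rather than repeated full solves:

```python
import numpy as np

def forward_select_centers(X, y, width, n_centers):
    """Greedily pick RBF centers from the training inputs themselves,
    each step adding the center that most reduces the output error."""
    def phi(c):
        # Gaussian RBF activations of all training inputs for center c.
        return np.exp(-np.sum((X - c) ** 2, axis=1) / (2.0 * width ** 2))
    chosen, cols = [], []
    for _ in range(n_centers):
        best_err, best_i = np.inf, -1
        for i in range(len(X)):
            if i in chosen:
                continue
            A = np.column_stack(cols + [phi(X[i])])
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = float(np.sum((y - A @ w) ** 2))
            if err < best_err:
                best_err, best_i = err, i
        chosen.append(best_i)
        cols.append(phi(X[best_i]))
    return chosen, best_err

# Toy 1-D regression target (invented): approximate sin(3x) on [-1, 1].
rng = np.random.default_rng(7)
X = rng.uniform(-1.0, 1.0, size=(80, 1))
y = np.sin(3.0 * X[:, 0])
centers, final_err = forward_select_centers(X, y, width=0.5, n_centers=5)
```

The backward variant in the paper works in the opposite direction, starting from the full center set and removing the center whose deletion increases the error least.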
A comparative analysis of the EEDF obtained by Regularization and by Least square fit methods
International Nuclear Information System (INIS)
Gutierrez T, C.; Flores Ll, H.
2004-01-01
The second derivative of the characteristic current-voltage (I-V) curve of a Langmuir probe is numerically calculated using the Tikhonov method to determine the electron energy distribution function (EEDF). A comparison of the EEDF thus obtained with a least-squares (LS) fit is discussed. The experimental I-V curve is obtained with a cylindrical probe in an electron cyclotron resonance (ECR) plasma source. The plasma parameters are determined from the EEDF by means of the Laframboise theory. For the LS fit, the results obtained are similar to those from the Tikhonov method, but in that case the procedure is slow to achieve the best fit. (Author)
Directory of Open Access Journals (Sweden)
Victor Aredo
2017-01-01
Full Text Available The aim of this study was to build a model to predict beef marbling using HSI and Partial Least Squares Regression (PLSR). In total, 58 samples of longissimus dorsi muscle were scanned by an HSI system (400-1000 nm) in reflectance mode, using 44 samples to build the PLSR model and 14 samples for model validation. The Japanese Beef Marbling Standard (BMS) was used as reference by 15 middle-trained judges for the sample evaluation. The scores were assigned as continuous values and varied from 1.2 to 5.3 BMS. The PLSR model showed a high correlation coefficient in prediction (r = 0.95), a low Standard Error of Calibration (SEC) of 0.2 BMS score, and a low Standard Error of Prediction (SEP) of 0.3 BMS score.
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Directory of Open Access Journals (Sweden)
Byambaa Dorj
2016-01-01
Full Text Available The next promising key issue in automobile development is self-driving technology. One of the challenges for intelligent self-driving is the lane-detection and lane-keeping capability of advanced driver assistance systems. This paper introduces an efficient lane detection method based on a top view image transformation that converts an image from a front view to a top view space. After the top view image transformation, a Hough transform is combined with a parabolic model of a curved lane in order to estimate a parametric model of the lane in the top view space. The parameters of the parabolic model are estimated with a least-squares approach. The experimental results show that the proposed lane detection method with the top view transformation is very effective in estimating sharp and curved lanes, leading to a precise self-driving capability.
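Once the lane points are in top-view coordinates, the parabolic lane model described above reduces to an ordinary polynomial least-squares fit. The sketch below is illustrative only, using synthetic points and made-up coefficients rather than the paper's actual pipeline:

```python
import numpy as np

# Synthetic top-view lane points from a known parabola x = a*y^2 + b*y + c,
# with a little lateral measurement noise (all values are illustrative).
rng = np.random.default_rng(0)
a_true, b_true, c_true = 0.002, -0.1, 3.0
y = np.linspace(0.0, 100.0, 50)          # distance ahead of the vehicle
x = a_true * y**2 + b_true * y + c_true + rng.normal(0.0, 0.01, y.size)

# Least-squares fit of the parabolic lane model (degree-2 polynomial).
a_hat, b_hat, c_hat = np.polyfit(y, x, 2)
```

With low noise the recovered coefficients match the generating parabola closely, which is why the parametric model remains stable frame to frame.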
Least squares fit of bubble chamber tracks taking into account multiple scattering
Laurikainen, P
1972-01-01
Presents a new approach to the problem of taking into account multiple scattering when making the least squares fit of bubble chamber tracks to film measurements. For completeness, the more refined version of the standard fit, which has been developed for the new large hydrogen bubble chambers, is also described. In the standard fit, only energy loss and inhomogeneity of the magnetic field are taken into account, and five parameters are fitted. In the new method, the scattered track is followed more closely by introducing additional parameters, each with known mean and variance. The amount of additional computation depends directly on the total multiple scattering deviation expected. The method is particularly useful for connecting tracks, as the parameters at the two vertices are fitted simultaneously. (10 refs).
Least Squares Estimate of the Initial Phases in STFT based Speech Enhancement
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Krawczyk-Becker, Martin; Gerkmann, Timo
2015-01-01
In this paper, we consider single-channel speech enhancement in the short time Fourier transform (STFT) domain. We suggest to improve an STFT phase estimate by estimating the initial phases. The method is based on the harmonic model and a model for the phase evolution over time. The initial phases are estimated by setting up a least squares problem between the noisy phase and the model for phase evolution. Simulations on synthetic and speech signals show a decreased error on the phase when an estimate of the initial phase is included, compared to using the noisy phase as an initialisation. The error on the phase is decreased at input SNRs from -10 to 10 dB. Reconstructing the signal using the clean amplitude, the mean squared error is decreased and the PESQ score is increased.
Waller, Niels
2018-01-01
Kristof's Theorem (Kristof, 1970) describes a matrix trace inequality that can be used to solve a wide class of least-squares optimization problems without calculus. Considering its generality, it is surprising that Kristof's Theorem is rarely used in statistics and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof's (1964, 1970) writings. In this article, I describe the underlying logic of Kristof's Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem's proof. I then show how Kristof's Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, 2017) code to perform the calculations described in the text.
DEM4-26, Least Square Fit for IBM PC by Deming Method
International Nuclear Information System (INIS)
Rinard, P.M.; Bosler, G.E.
1989-01-01
1 - Description of program or function: DEM4-26 is a generalized least-squares fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option of copying the plot to the printer. 2 - Method of solution: Deming's method
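Deming's method referenced above is an errors-in-variables fit that accounts for noise in both x and y, unlike ordinary least squares. A minimal sketch of the closed-form straight-line case is given below; this is an illustration of the general technique, not the DEM4-26 program itself, and `deming_fit` is a hypothetical name:

```python
import math

def deming_fit(x, y, delta=1.0):
    """Deming (errors-in-variables) least-squares line fit.

    delta is the ratio of the y-error variance to the x-error variance;
    delta = 1 gives orthogonal regression.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    # Closed-form slope of the Deming line.
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx

# Points lying exactly on y = 1 + 2x are recovered exactly.
slope, intercept = deming_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

For noise-free collinear data the Deming line coincides with the ordinary regression line; the two diverge when x carries substantial measurement error.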
Defense of the Least Squares Solution to Peelle’s Pertinent Puzzle
Directory of Open Access Journals (Sweden)
Nicolas Hengartner
2011-02-01
Full Text Available Generalized least squares (GLS for model parameter estimation has a long and successful history dating to its development by Gauss in 1795. Alternatives can outperform GLS in some settings, and alternatives to GLS are sometimes sought when GLS exhibits curious behavior, such as in Peelle’s Pertinent Puzzle (PPP. PPP was described in 1987 in the context of estimating fundamental parameters that arise in nuclear interaction experiments. In PPP, GLS estimates fell outside the range of the data, eliciting concerns that GLS was somehow flawed. These concerns have led to suggested alternatives to GLS estimators. This paper defends GLS in the PPP context, investigates when PPP can occur, illustrates when PPP can be beneficial for parameter estimation, reviews optimality properties of GLS estimators, and gives an example in which PPP does occur.
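The PPP behavior described above can be reproduced with the classic two-point example: two measurements of the same quantity with independent statistical errors plus a fully correlated normalization error. The numbers below are the standard textbook illustration of PPP, not data from this paper:

```python
# Peelle's Pertinent Puzzle: measurements 1.5 and 1.0 of one quantity,
# each with a 10% independent uncertainty plus a fully correlated 20%
# normalization uncertainty.  The GLS estimate falls below both values.
x1, x2 = 1.5, 1.0
c11 = (0.10 * x1) ** 2 + (0.20 * x1) ** 2   # variance of x1
c22 = (0.10 * x2) ** 2 + (0.20 * x2) ** 2   # variance of x2
c12 = (0.20 ** 2) * x1 * x2                 # correlated normalization term

# GLS estimate mu = (1^T C^-1 x) / (1^T C^-1 1), written out for 2x2 C.
det = c11 * c22 - c12 ** 2
num = (c22 * x1 - c12 * x1 - c12 * x2 + c11 * x2) / det
den = (c22 - 2 * c12 + c11) / det
mu = num / den   # falls outside the interval [1.0, 1.5]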
Recursive N-way partial least squares for brain-computer interface.
Directory of Open Access Journals (Sweden)
Andrey Eliseyev
Full Text Available In this article, tensor-input/tensor-output blockwise Recursive N-way Partial Least Squares (RNPLS) regression is considered. It combines multi-way tensor decomposition with a consecutive calculation scheme and allows blockwise treatment of tensor data arrays with huge dimensions, as well as adaptive modeling of time-dependent processes with tensor variables. A numerical study of the algorithm is undertaken. The RNPLS algorithm demonstrates fast and stable convergence of the regression coefficients. Applied to Brain Computer Interface system calibration, the algorithm provides an efficient adjustment of the decoding model. Combining online adaptation with easy interpretation of results, the method can be effectively applied in a variety of multi-modal neural activity flow modeling tasks.
Baseline configuration for GNSS attitude determination with an analytical least-squares solution
International Nuclear Information System (INIS)
Chang, Guobin; Wang, Qianxin; Xu, Tianhe
2016-01-01
The GNSS attitude determination using carrier phase measurements with 4 antennas is studied on condition that the integer ambiguities have been resolved. The solution to the nonlinear least-squares is often obtained iteratively, however an analytical solution can exist for specific baseline configurations. The main aim of this work is to design this class of configurations. Both single and double difference measurements are treated which refer to the dedicated and non-dedicated receivers respectively. More realistic error models are employed in which the correlations between different measurements are given full consideration. The desired configurations are worked out. The configurations are rotation and scale equivariant and can be applied to both the dedicated and non-dedicated receivers. For these configurations, the analytical and optimal solution for the attitude is also given together with its error variance–covariance matrix. (paper)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve performance superior to the existing DRLS algorithm with a fixed forgetting factor when applied to distributed parameter and spectrum estimation. The simulation results also demonstrate a good match with our analytical expressions.
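The DRLS algorithms above build on the standard exponentially weighted recursive least squares update. The sketch below shows plain RLS with a fixed forgetting factor `lam`; the VFF variant in the paper would additionally adapt `lam` from the a posteriori error, which is omitted here:

```python
import numpy as np

def rls(phi, d, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor lam.

    phi : (N, p) regressor matrix; d : (N,) desired signal.
    Returns the final parameter estimate (p,).
    """
    p = phi.shape[1]
    w = np.zeros(p)
    P = delta * np.eye(p)                      # inverse correlation matrix
    for u, y in zip(phi, d):
        k = P @ u / (lam + u @ P @ u)          # gain vector
        e = y - w @ u                          # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

# Illustrative system identification run with a known parameter vector.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.5, 2.0])
Phi = rng.normal(size=(500, 3))
d = Phi @ w_true + rng.normal(scale=0.01, size=500)
w_hat = rls(Phi, d)
```

A smaller `lam` tracks time-varying parameters faster at the cost of higher steady-state variance, which is the trade-off the VFF mechanism automates.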
Numerical solution of a nonlinear least squares problem in digital breast tomosynthesis
International Nuclear Information System (INIS)
Landi, G; Piccolomini, E Loli; Nagy, J G
2015-01-01
In digital tomosynthesis imaging, multiple projections of an object are obtained along a small range of different incident angles in order to reconstruct a pseudo-3D representation (i.e., a set of 2D slices) of the object. In this paper we describe some mathematical models for polyenergetic digital breast tomosynthesis image reconstruction that explicitly take into account the various materials composing the object and the polyenergetic nature of the x-ray beam. A polyenergetic model helps to reduce beam hardening artifacts, but the disadvantage is that it requires solving a large-scale nonlinear ill-posed inverse problem. We formulate the image reconstruction process (i.e., the method to solve the ill-posed inverse problem) in a nonlinear least squares framework, and use a Levenberg-Marquardt scheme to solve it. Some implementation details are discussed, and numerical experiments are provided to illustrate the performance of the methods. (paper)
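A Levenberg-Marquardt scheme like the one mentioned above blends Gauss-Newton steps with gradient-descent damping, increasing the damping when a step fails to reduce the residual. The minimal loop below illustrates the idea on a toy exponential model; it is a generic sketch, not the tomosynthesis reconstruction itself:

```python
import numpy as np

def levenberg_marquardt(r, J, x0, iters=50, mu=1e-3):
    """Minimal Levenberg-Marquardt loop: r(x) residuals, J(x) Jacobian."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        res, jac = r(x), J(x)
        A = jac.T @ jac
        g = jac.T @ res
        step = np.linalg.solve(A + mu * np.eye(len(x)), -g)
        if np.sum(r(x + step) ** 2) < np.sum(res ** 2):
            x, mu = x + step, mu * 0.5   # accept step, relax damping
        else:
            mu *= 2.0                    # reject step, increase damping
    return x

# Toy problem: fit y = a * exp(-b t) to noiseless data with a=2, b=0.5.
t = np.linspace(0, 5, 40)
y = 2.0 * np.exp(-0.5 * t)
r = lambda x: x[0] * np.exp(-x[1] * t) - y
J = lambda x: np.column_stack([np.exp(-x[1] * t),
                               -x[0] * t * np.exp(-x[1] * t)])
a_hat, b_hat = levenberg_marquardt(r, J, [1.0, 1.0])
```

In the large-scale ill-posed setting of the paper, the damped normal equations would be solved iteratively rather than with a dense solve as here.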
Wavelength detection in FBG sensor networks using least squares support vector regression
Chen, Jing; Jiang, Hao; Liu, Tundong; Fu, Xiaoli
2014-04-01
A wavelength detection method for a wavelength division multiplexing (WDM) fiber Bragg grating (FBG) sensor network is proposed based on least squares support vector regression (LS-SVR). As a promising machine learning technique, LS-SVR is employed to approximate the inverse function of the reflection spectrum. The LS-SVR detection model is established from training samples, and then the Bragg wavelength of each FBG can be directly identified by inputting the measured spectrum into the well-trained model. We also discuss the impact of the sample size and of the preprocessing of the input spectrum on the training effectiveness. The results demonstrate that our approach is effective in improving the accuracy for sensor networks with a large number of FBGs.
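LS-SVR as used above replaces the usual SVR inequality constraints with equality constraints, so training reduces to solving a single linear system. The sketch below shows that system with an RBF kernel on a toy 1D function; the hyperparameters `gamma` and `sigma` are illustrative, not the paper's settings:

```python
import numpy as np

def lssvr_fit(X, y, gamma=1000.0, sigma=0.2):
    """Least squares SVR: solve the linear KKT system
       [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel
    n = len(y)
    K = kernel(X, X)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma                  # ridge term 1/gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: kernel(Xq, X) @ alpha + b        # predictor

# Toy regression target standing in for the inverse spectrum mapping.
X = np.linspace(0, 1, 30)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
predict = lssvr_fit(X, y)
err = np.max(np.abs(predict(X) - y))
```

Larger `gamma` weakens the ridge term and fits the training data more tightly, at the risk of amplifying spectral noise.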
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li; Huang, Rongyong
2017-04-01
Cross-scale cost aggregation (CSCA) allows pixel-wise multiscale interaction in the aggregated cost computation. This kind of multiscale constraint strengthens the consistency of the interscale cost volume and behaves well in textureless regions, compared with single-scale cost aggregation. However, the relationship between neighbors' costs is ignored. In this paper, based on the prior knowledge that costs should vary smoothly except at object boundaries, a smoothness constraint on cost in a neighborhood system is integrated into the CSCA model with weighted least squares for reliable matching. Our improved algorithm not only retains the computational efficiency of CSCA, but also performs better than CSCA, especially on the KITTI data sets. Experimental evidence demonstrates that the proposed algorithm outperforms CSCA in textureless and discontinuous regions. Quantitative evaluations demonstrate the effectiveness and efficiency of the proposed method for improving disparity estimation accuracy.
Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia
2016-02-01
To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental measurements was used, obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is highly convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling studies of the cattle manure pyrolysis process and other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
You, Qiang; Xu, JinXin; Wang, Gang; Zhang, Zhonghua
2016-01-01
Ordinary least-squares polynomial fitting is used in both the dynamic phase of the watt balance method and the weighting phase of the joule balance method, but little research has been conducted to evaluate the uncertainty of the fitted data in these electrical balance methods. In this paper, a matrix-based method for evaluating the uncertainty of polynomial fitting data is derived, and the properties of this method are studied by simulation. Based on this, two further methods are proposed. One is used to find the optimal fitting order for the watt or joule balance methods; its accuracy and the factors affecting it are examined with simulations. The other is used to evaluate the uncertainty of the integral of the fitted data for the joule balance, which is demonstrated with an experiment on the NIM-1 joule balance. (paper)
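The matrix calculation alluded to above follows from the standard covariance of ordinary least-squares polynomial coefficients, cov(c) = σ²(XᵀX)⁻¹ with X the Vandermonde design matrix. The sketch below assumes known, uncorrelated measurement noise and is a generic illustration, not the NIM-1 analysis:

```python
import numpy as np

# Uncertainty of ordinary least-squares polynomial fitting, in matrix form.
# Model: y = X c + noise, with X the Vandermonde design matrix.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
sigma = 0.01                                     # known measurement noise
y = 1.0 + 2.0 * t + 3.0 * t**2 + rng.normal(0.0, sigma, t.size)

order = 2
X = np.vander(t, order + 1, increasing=True)     # columns: 1, t, t^2
c = np.linalg.solve(X.T @ X, X.T @ y)            # LS coefficients
cov_c = sigma**2 * np.linalg.inv(X.T @ X)        # coefficient covariance
var_fit = np.einsum('ij,jk,ik->i', X, cov_c, X)  # variance of fitted points
```

A useful check: the variances of the fitted points sum to σ² times the number of coefficients, since trace(X(XᵀX)⁻¹Xᵀ) equals the model dimension.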
A Least-Squares Finite Element Method for Electromagnetic Scattering Problems
Wu, Jie; Jiang, Bo-nan
1996-01-01
The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, or flux-differencing. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement of the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.
Fast lithographic source optimization using a batch-processing sequential least square estimator.
Ma, Xu; Lin, Haijun; Jiao, Guoli; Li, Yanqiu; Arce, Gonzalo R
2017-07-20
This paper proposes a fast source optimization (SO) method for lithography systems to improve the imaging performance of different hotspots on the full-chip layout. Hotspots are the critical locations on the layout that are difficult to print. A full-chip layout usually includes numerous hotspots with different geometric characteristics. Current SO approaches collect all of the data from different hotspots before the optimization, and then try to calculate the common optimal source for all hotspots; if any new data from unaccounted-for hotspots become available, the optimal source has to be recalculated. This paper first develops a batch-processing sequential least square estimator, and then uses it to iteratively modify the source pattern based on the ongoing hotspot data. The optimized source for one hotspot can be updated to suit others without redundant computation. Simulations show that the proposed method can significantly accelerate the SO procedure, while improving the imaging performance of multiple hotspots.
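The key property of a batch-processing sequential least-squares estimator is that each new batch of data updates accumulated normal equations, so the estimate never has to be recomputed from all past data yet matches the full batch solution exactly. The class below is a generic illustration of that idea, not the paper's source-optimization formulation:

```python
import numpy as np

class SequentialLS:
    """Batch-processing sequential least-squares estimator: each new batch
    (A_k, b_k) updates the accumulated normal equations A^T A and A^T b."""
    def __init__(self, p):
        self.AtA = np.zeros((p, p))
        self.Atb = np.zeros(p)

    def update(self, A, b):
        self.AtA += A.T @ A
        self.Atb += A.T @ b
        return np.linalg.solve(self.AtA, self.Atb)

# Feed synthetic data in three batches and compare with one batch solve.
rng = np.random.default_rng(3)
A = rng.normal(size=(90, 4))
b = A @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=90)

est = SequentialLS(4)
for k in range(3):
    x_seq = est.update(A[30 * k:30 * (k + 1)], b[30 * k:30 * (k + 1)])

x_batch = np.linalg.lstsq(A, b, rcond=None)[0]
```

Because only the p-by-p accumulated matrices are stored, adding data from a newly discovered hotspot costs one rank update and one small solve.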
Directory of Open Access Journals (Sweden)
Saïda Bedoui
2013-01-01
Full Text Available This paper addresses the problem of simultaneous identification of linear discrete-time multivariable systems with time delays. The problem involves the estimation of both the time delays and the dynamic parameter matrices. We suggest a new formulation of this problem that defines the time delays and the dynamic parameters in the same estimated vector and builds the corresponding observation vector. We then use this formulation to propose a new method to identify the time delays and the parameters of these systems using the least-squares approach. Convergence conditions and statistical properties of the proposed method are also developed. Simulation results are presented to illustrate the performance of the proposed method. An application of the developed approach to a compact disc player arm is also suggested in order to validate the simulation results.
Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou
2018-02-08
The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. The joint total variation (JTV) regularized model has been demonstrated to be a powerful tool for increasing sampling speed in pMRI; however, the major bottleneck is the inefficiency of the optimization method. Whereas all present state-of-the-art optimizations for the JTV model reach only a sublinear convergence rate, in this paper we propose a linearly convergent optimization method for the JTV model. The proposed method is based on the Iteratively Reweighted Least Squares algorithm. Due to the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI regarding both accuracy and efficiency compared with state-of-the-art methods.
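The Iteratively Reweighted Least Squares idea underlying the proposed method replaces a non-smooth objective with a sequence of weighted least-squares problems. The generic sketch below applies IRLS to L1 regression, which is the simplest member of that family; it is not the JTV/pMRI objective and omits the paper's preconditioner:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """Iteratively Reweighted Least Squares for min ||Ax - b||_1:
    each iteration solves a weighted LS problem with weights 1/|residual|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from plain LS
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)   # reweighting
        W = A * w[:, None]
        x = np.linalg.solve(A.T @ W, W.T @ b)          # weighted LS solve
    return x

# A line with one gross outlier: the L1 fit via IRLS resists the outlier.
t = np.linspace(0, 1, 21)
A = np.column_stack([np.ones_like(t), t])
b = 1.0 + 2.0 * t
b[10] += 5.0                                   # inject an outlier
x_l1 = irls_l1(A, b)
```

The `eps` floor keeps the weights bounded as residuals approach zero, the same smoothing device used when IRLS is applied to TV-type penalties.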
Energy Technology Data Exchange (ETDEWEB)
Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.
Energy Technology Data Exchange (ETDEWEB)
Choi, Youngsoo [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.; Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carlberg, Kevin Thomas [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.
2017-09-01
Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
Directory of Open Access Journals (Sweden)
T. Kim
2012-09-01
Full Text Available Automated generation of digital elevation models (DEMs) from high resolution satellite images (HRSIs) has been an active research topic for many years. However, stereo matching of HRSIs, in particular based on image-space search, is still difficult due to occlusions and building facades within them. Object-space matching schemes, proposed to overcome these problems, are often very time consuming and sensitive to the dimensions of voxels. In this paper, we tried a new least squares matching (LSM) algorithm that works in a 3D object space. The algorithm starts with an initial height value at one location of the object space. From this 3D point, the left and right image points are projected. The true height is calculated by iterative least squares estimation based on the grey level differences between the left and right patches centred on the projected left and right points. We tested the 3D LSM on the Worldview images over 'Terrassa Sud' provided by the ISPRS WG I/4. We also compared the performance of the 3D LSM with correlation matching based on 2D image space and correlation matching based on 3D object space (3D COM). The accuracy of the DEM from each method was analysed against the ground truth. Test results showed that 3D LSM offers more accurate DEMs than the conventional matching algorithms. Results also showed that 3D LSM is sensitive to the accuracy of the initial height value used to start the estimation. We therefore combined the 3D COM and 3D LSM for accurate and robust DEM generation from HRSIs. The major contribution of this paper is that we proposed and validated that LSM can be applied in object space and that the combination of 3D correlation and 3D LSM can be a good solution for automated DEM generation from HRSIs.
RECURSIVE LEAST SQUARES WITH REAL TIME STOCHASTIC MODELING: APPLICATION TO GPS RELATIVE POSITIONING
Directory of Open Access Journals (Sweden)
F. Zangeneh-Nejad
2017-09-01
Full Text Available Geodetic data processing is usually performed by the least squares (LS) adjustment method. There are two different forms of the LS adjustment, namely the batch form and the recursive form. The former is not appropriate for real time applications in which new observations are added to the system over time; for such cases, the recursive solution is more suitable than the batch form. The LS method is likewise implemented in GPS data processing in these two forms. The mathematical model, including both the functional and stochastic models, should be properly defined for both forms of the LS method. Proper choice of the stochastic model plays an important role in achieving high-precision GPS positioning. The noise characteristics of the GPS observables have already been investigated by the authors using least squares variance component estimation (LS-VCE) in batch form. In this contribution, we introduce a recursive procedure that provides proper stochastic modeling of the GPS observables using LS-VCE. It is referred to as the recursive LS-VCE (RLS-VCE) method, and it is applied to the geometry-based observation model (GBOM). In this method, the (co)variance parameters can be estimated recursively as each new group of observations is added, so the method can easily be implemented in real time GPS data processing. The efficacy of the method is evaluated using a real GPS data set collected by a Trimble R7 receiver over a zero baseline. The results show that the proposed method performs appropriately: the estimated (co)variance parameters of the GPS observables are consistent with the batch estimates, while the RLS-VCE method can update them whenever a new observation group is added. This method can thus be considered a reliable method for application to real time GPS data processing.
Partial least-squares: Theoretical issues and engineering applications in signal processing
Directory of Open Access Journals (Sweden)
Fredric M. Ham
1996-01-01
Full Text Available In this paper we present partial least-squares (PLS), which is a statistical modeling method used extensively in analytical chemistry for quantitatively analyzing spectroscopic data. Comparisons are made between classical least-squares (CLS) and PLS to show how PLS can be used in certain engineering signal processing applications. Moreover, it is shown that in certain situations when there exists a linear relationship between the independent and dependent variables, PLS can yield better predictive performance than CLS when it is not desirable to use all of the empirical data to develop a calibration model used for prediction. Specifically, because PLS is a factor analysis method, optimal selection of the number of PLS factors can result in a calibration model whose predictive performance is considerably better than that of CLS. That is, factor analysis (rank reduction) allows only those features of the data that are associated with information of interest to be retained for development of the calibration model, and the remaining data associated with noise are discarded. It is shown that PLS can yield physical insight into the system from which empirical data have been collected. Also, when there exists a nonlinear cause-and-effect relationship between the independent and dependent variables, the PLS calibration model can yield prediction errors that are much less than those for CLS. Three PLS application examples are given and the results are compared to CLS. In one example, a method is presented that uses PLS for parametric system identification. Using PLS for system identification allows simultaneous estimation of the system dimension and the system parameter vector associated with a minimal realization of the system.
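The factor-analysis (rank-reduction) step that distinguishes PLS from CLS can be sketched with the classic NIPALS algorithm for a single response. The code below is a generic illustration on synthetic two-factor data, not the paper's signal processing examples:

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal NIPALS PLS1: regress y on a few latent factors of X
    (rank reduction) instead of on all columns of X as in CLS."""
    Xr, yr = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)              # weight vector
        t = Xr @ w                          # scores
        p = Xr.T @ t / (t @ t)              # X loadings
        qk = yr @ t / (t @ t)               # y loading
        Xr = Xr - np.outer(t, p)            # deflate X
        yr = yr - qk * t                    # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(q))  # coefficients for centered X

# Synthetic data: 10 collinear predictors driven by 2 latent variables.
rng = np.random.default_rng(4)
T = rng.normal(size=(100, 2))
X = T @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(100, 10))
y = T @ np.array([1.0, 2.0]) + 0.01 * rng.normal(size=100)

B = pls1(X, y, 2)                           # 2 PLS factors suffice here
yhat = (X - X.mean(0)) @ B + y.mean()
```

With only two factors the model captures essentially all of the predictive signal, while the directions dominated by noise are never entered into the regression.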
Stenlund, Hans; Johansson, Erik; Gottfries, Johan; Trygg, Johan
2009-01-01
Near infrared spectroscopy (NIR) was developed primarily for applications such as the quantitative determination of nutrients in the agricultural and food industries. Examples include the determination of water, protein, and fat within complex samples such as grain and milk. Because of its useful properties, NIR analysis has spread to other areas such as chemistry and pharmaceutical production. NIR spectra consist of infrared overtones and combinations thereof, making interpretation of the results complicated. It can be very difficult to assign peaks to known constituents in the sample. Thus, multivariate analysis (MVA) has been crucial in translating spectral data into information, mainly for predictive purposes. Orthogonal partial least squares (OPLS), a new MVA method, has prediction and modeling properties similar to those of other MVA techniques, e.g., partial least squares (PLS), a method with a long history of use for the analysis of NIR data. OPLS provides an intrinsic algorithmic improvement for the interpretation of NIR data. In this report, four sets of NIR data were analyzed to demonstrate the improved interpretation provided by OPLS. The first two sets included simulated data to demonstrate the overall principles; the third set comprised a statistically replicated design of experiments (DoE), to demonstrate how instrumental differences could be accurately visualized and correctly attributed to Wood's anomaly phenomena; the fourth set was chosen to challenge the MVA by using data relating to powder mixing, a crucial step in the pharmaceutical industry prior to tabletting. Improved interpretation by OPLS was demonstrated for all four examples, as compared to alternative MVA approaches. It is expected that OPLS will be used mostly in applications where improved interpretation is crucial; one such area is process analytical technology (PAT). PAT involves fewer independent samples, i.e., batches, than would be associated with agricultural applications; in
Full-matrix least-squares refinement of lysozymes and analysis of anisotropic thermal motion.
Harata, K; Abe, Y; Muraki, M
1998-02-15
Crystal structures of turkey egg lysozyme (TEL) and human lysozyme (HL) were refined by the full-matrix least-squares method using anisotropic temperature factors. The refinement converged at conventional R-values of 0.104 (TEL) and 0.115 (HL) for reflections with Fo > 0, at resolutions of 1.12 Å and 1.15 Å, respectively. The estimated r.m.s. coordinate errors for protein atoms were 0.031 Å (TEL) and 0.034 Å (HL). The introduction of anisotropic temperature factors markedly reduced the R-value but did not significantly affect the main chain coordinates. The degree of anisotropy of atomic thermal motion has a strong positive correlation with the square of the distance from the molecular centroid. The ratio of the radial component of the thermal ellipsoid to the r.m.s. magnitude of the three principal components has a negative correlation with the distance from the molecular centroid, suggesting the domination of libration rather than breathing motion. The TLS model was applied to elucidate the characteristics of the rigid-body motion. The TLS tensors were determined by a least-squares fit to the observed temperature factors. The profile of the magnitude of the temperature factors reproduced by the TLS method fitted well to that of the observed B(eqv). However, considerable disagreement was observed in the shape and orientation of the thermal ellipsoid for atoms with large temperature factors, indicating a large contribution of local motion. The upper estimate of the external motion, 67% (TEL) and 61% (HL) of B(eqv), was deduced from the plot of the magnitude of the TLS tensors determined for main chain atoms grouped into shells according to the distance from the center of libration. In the external motion, the translational portion is predominant and the contribution of libration and screw motion is relatively small. The internal motion, estimated by subtracting the upper estimate of the external motion from the observed temperature factor, is very similar between TEL and HL in spite
Extending the trend vector: The trend matrix and sample-based partial least squares
Sheridan, Robert P.; Nachbar, Robert B.; Bush, Bruce L.
1994-06-01
Trend vector analysis [Carhart, R.E. et al., J. Chem. Inf. Comput. Sci., 25 (1985) 64], in combination with topological descriptors such as atom pairs, has proved useful in drug discovery for ranking large collections of chemical compounds in order of predicted biological activity. The compounds with the highest predicted activities, upon being tested, often show a several-fold increase in the fraction of active compounds relative to a randomly selected set. A trend vector is simply the one-dimensional array of correlations between the biological activity of interest and a set of properties or `descriptors' of compounds in a training set. This paper examines two methods for generalizing the trend vector to improve the predicted rank order. The trend matrix method finds the correlations between the residuals and the simultaneous occurrence of descriptors, which are stored in a two-dimensional analog of the trend vector. The SAMPLS method derives a linear model by partial least squares (PLS), using the `sample-based' formulation of PLS [Bush, B.L. and Nachbar, R.B., J. Comput.-Aided Mol. Design, 7 (1993) 587] for efficiency in treating the large number of descriptors. PLS accumulates a predictive model as a sum of linear components. Expressed as a vector of prediction coefficients on properties, the first PLS component is proportional to the trend vector. Subsequent components adjust the model toward full least squares. For both methods the residuals decrease, while the risk of overfitting the training set increases. We therefore also describe statistical checks to prevent overfitting. These methods are applied to two data sets, a small homologous series of disubstituted piperidines, tested on the dopamine receptor, and a large set of diverse chemical structures, some of which are active at the muscarinic receptor. Each data set is split into a training set and a test set, and the activities in the test set are predicted from a fit on the training set. Both the trend
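The core trend-vector computation described above is compact enough to sketch directly; the following is a minimal illustration with synthetic data, not the authors' implementation (the descriptor matrix, activities, and all names are hypothetical):

```python
import numpy as np

# Hypothetical training set: rows = compounds, columns = binary
# descriptor occurrences (e.g. atom pairs); y = biological activities.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(50, 20)).astype(float)
w_true = rng.normal(size=20)
y = X @ w_true + rng.normal(scale=0.1, size=50)

# Trend vector: per-descriptor Pearson correlation with activity.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
trend = (Xc * yc[:, None]).sum(axis=0) / (
    np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

# Rank compounds by projecting centered descriptors onto the trend vector.
scores = Xc @ trend
ranking = np.argsort(scores)[::-1]  # highest predicted activity first
```

As the abstract notes, the first PLS component is proportional to this vector, which is why SAMPLS refines rather than replaces the trend-vector ranking.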
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are first corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images serve as the reference point for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques: nearest neighbourhood, first-order interpolation, and the combination of nearest neighbourhood and first-order interpolation. The actual and estimated SNR values of all these techniques are then calculated for comparison purposes. It is shown that the LSR technique attains the highest accuracy of the four, as the absolute difference between the actual and the estimated SNR value is relatively small. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
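The idea of extrapolating the ACF to zero lag with a least-squares fit can be sketched as follows; this is a simplified 1-D illustration under the white-noise assumption, not the authors' implementation, and the choice of four fitting lags is arbitrary:

```python
import numpy as np

def estimate_snr_db(signal_1d):
    """Estimate SNR (dB) of a noisy 1-D signal (e.g. one SEM scan line)
    by extrapolating its autocorrelation to zero lag with a linear
    least-squares fit over the first few non-zero lags."""
    x = signal_1d - signal_1d.mean()
    n = len(x)
    acf = np.correlate(x, x, mode='full')[n - 1:] / n   # lags 0..n-1
    lags = np.arange(1.0, 5.0)                          # fit over lags 1..4
    A = np.vstack([lags, np.ones_like(lags)]).T
    slope, intercept = np.linalg.lstsq(A, acf[1:5], rcond=None)[0]
    signal_var = intercept               # extrapolated noise-free R(0)
    noise_var = acf[0] - signal_var      # white noise only affects lag 0
    return 10 * np.log10(signal_var / noise_var)

# Smooth signal plus white noise with a known variance ratio (~0.5/0.04).
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 2000)
noisy = np.sin(t) + rng.normal(scale=0.2, size=t.size)
est = estimate_snr_db(noisy)
print(round(est, 1))   # roughly 11 dB for this variance ratio
```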
Directory of Open Access Journals (Sweden)
Fei Wei
2012-01-01
As both fluid flow measurement techniques and computer simulation methods continue to improve, there is a growing need for numerical simulation approaches that can assimilate experimental data into the simulation in a flexible and mathematically consistent manner. The problem of interest here is the simulation of blood flow in the left ventricle with the assimilation of experimental data provided by ultrasound imaging of microbubbles in the blood. The weighted least-squares finite element method is used because it allows data to be assimilated in a very flexible manner so that accurate measurements are more closely matched with the numerical solution than less accurate data. This approach is applied to two different test problems: a flexible flap that is displaced by a jet of fluid and blood flow in the porcine left ventricle. By adjusting how closely the simulation matches the experimental data, one can observe potential inaccuracies in the model because the simulation without experimental data differs significantly from the simulation with the data. Additionally, the assimilation of experimental data can help the simulation capture certain small effects that are present in the experiment, but not modeled directly in the simulation.
Directory of Open Access Journals (Sweden)
Yong-Hong Zhang
2015-05-01
Assessing the human placental barrier permeability of drugs is very important for guaranteeing drug safety during pregnancy. The quantitative structure–activity relationship (QSAR) method is an effective assessment tool for placental transfer studies of drugs, while in vitro human placental perfusion is the most widely used experimental method. In this study, a partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting the CI values of 19 compounds. It was shown that the model is robust and has good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). A mechanistic interpretation of the final model was given via the high variable importance in projection values of the descriptors. Using the PLS procedure, optimal descriptors can be selected rapidly and effectively, and a model with good stability and predictability can thus be constructed. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.
Eddy current characterization of small cracks using least square support vector machine
International Nuclear Information System (INIS)
Chelabi, M; Hacib, T; Ikhlef, N; Boughedda, H; Mekideche, M R; Le Bihan, Y
2016-01-01
Eddy current (EC) sensors are used for non-destructive testing since they are able to probe conductive materials. Although this is a conventional technique for defect detection and localization, its main weakness is that defect characterization, i.e., the exact determination of shape and dimensions, remains an open question. In this work, we demonstrate the capability of sizing small cracks using signals acquired from an EC sensor. We report our effort to develop a systematic approach to estimate the size of rectangular, thin defects (length and depth) in a conductive plate. The approach combines a finite element method (FEM) with a statistical learning method, the least squares support vector machine (LS-SVM). First, we use the FEM to model the forward problem. Next, an algorithm is used to build an adaptive database. Finally, the LS-SVM is used to solve the inverse problem, creating polynomial functions able to approximate the correlation between the crack dimensions and the signal picked up by the EC sensor. Several methods can be used to find the parameters of the LS-SVM; in this study, particle swarm optimization (PSO) and a genetic algorithm (GA) are proposed for tuning it. The inversion results were compared with both simulated and experimental data, and their accuracy was verified experimentally. These results demonstrate the applicability of the presented approach. (paper)
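Training an LS-SVM reduces to solving one linear system, which is its main appeal over a standard SVM's quadratic program. Below is a minimal regression sketch with an RBF kernel and synthetic data; the paper's PSO/GA hyperparameter tuning is not reproduced, and gamma and sigma are fixed by hand:

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """LS-SVM regression: training reduces to one linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] instead of a QP."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xnew, sigma=1.0):
    d2 = ((Xnew[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# Toy inverse problem: recover a smooth "defect size" from two features.
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(80, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
```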
Directory of Open Access Journals (Sweden)
M. Omidalizarandi
2013-09-01
Sensor fusion combines data from different sensors in order to build a more accurate model. In this research, different sensors (optical speed sensor, Bosch sensor, odometer, XSENS, Silicon, and GPS receiver) were utilized to obtain different kinds of datasets, to implement the multi-sensor system, and to compare the accuracy of each sensor with the others. The scope of this research is to estimate the current position and orientation of the van; the van's position can also be estimated by integrating its velocity and direction over time. These components require an interface that bridges them in a data acquisition module. The interface in this research was developed in the LabVIEW software environment, and data were transferred to the PC via an A/D converter (LabJack). In order to synchronize all the sensors, the calibration parameters of each sensor are determined in a preparatory step. Each sensor delivers results in a sensor-specific coordinate system, with a different location on the object, a different definition of the coordinate axes, and different dimensions and units. Different test scenarios (straight-line approach and circle approach) with different algorithms (Kalman filter, least squares adjustment) were examined, and the results of the different approaches are compared.
Directory of Open Access Journals (Sweden)
WANG Yupu
2016-06-01
In order to better express the characteristics of satellite clock bias (SCB) and further improve its prediction precision, a new SCB prediction model is proposed, which takes the physical features, cyclic variation, and stochastic variation of the space-borne atomic clock into consideration by using a robust least squares collocation (LSC) method. The proposed model first uses a quadratic polynomial model with periodic terms to fit and extract the trend and cyclic terms of the SCB. The residual stochastic part, together with possible gross errors hidden in the SCB data, is then handled by a robust LSC method. The covariance function of the LSC is determined by selecting an empirical function and combining it with SCB prediction tests. Prediction tests using the final precise IGS SCB products show that the proposed model achieves better prediction performance: prediction accuracy improves by 0.457 ns and 0.948 ns, and prediction stability by 0.445 ns and 1.233 ns, compared with the quadratic polynomial model and the grey model, respectively. The results also show that the covariance function proposed for the new model is reasonable.
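The deterministic part of such a model (quadratic trend plus periodic terms) is an ordinary linear least-squares fit; below is a sketch with synthetic clock data and assumed 12 h / 6 h periods (the robust LSC treatment of the residuals is not reproduced):

```python
import numpy as np

# Deterministic part: quadratic trend plus periodic terms,
#   scb(t) = a0 + a1*t + a2*t^2 + sum_k [ck*cos(2*pi*t/Pk) + sk*sin(2*pi*t/Pk)]
# fitted by ordinary least squares (synthetic data; assumed periods).
def design(t, periods):
    cols = [np.ones_like(t), t, t ** 2]
    for P in periods:
        cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
    return np.column_stack(cols)

rng = np.random.default_rng(7)
periods = (12.0, 6.0)                           # assumed cycles, hours
t = np.arange(0.0, 48.0, 0.25)                  # 48 h of 15-min epochs
truth = lambda tt: (5.0 + 0.3 * tt + 1e-3 * tt ** 2
                    + 0.4 * np.cos(2 * np.pi * tt / 12.0)
                    + 0.2 * np.sin(2 * np.pi * tt / 6.0))
scb = truth(t) + rng.normal(scale=0.02, size=t.size)   # bias in ns

coefs, *_ = np.linalg.lstsq(design(t, periods), scb, rcond=None)

# Extrapolate the fitted trend + cycles over the next 6 hours.
tp = np.arange(48.0, 54.0, 0.25)
pred = design(tp, periods) @ coefs
```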
Lo, Yen-Li; Pan, Wen-Harn; Hsu, Wan-Lun; Chien, Yin-Chu; Chen, Jen-Yang; Hsu, Mow-Ming; Lou, Pei-Jen; Chen, I-How; Hildesheim, Allan; Chen, Chien-Jen
2016-01-01
Evidence on the association between dietary components, dietary patterns and nasopharyngeal carcinoma (NPC) is scarce. A major challenge is the high degree of correlation among dietary constituents. We aimed to identify a dietary pattern associated with NPC and to illustrate the dose-response relationship between the identified dietary pattern scores and the risk of NPC. Taking advantage of a matched NPC case-control study, data from a total of 319 incident cases and 319 matched controls were analyzed. The dietary pattern was derived by partial least squares discriminant analysis (PLS-DA) performed on energy-adjusted food frequencies derived from a 66-item food-frequency questionnaire. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated with multiple conditional logistic regression models, linking pattern scores and NPC risk. A high score of the PLS-DA derived pattern was characterized by high intakes of fruits, milk, fresh fish, vegetables, tea, and eggs, ordered by loading values. We observed that a one-unit increase in the scores was associated with a significantly lower risk of NPC (ORadj = 0.73, 95% CI = 0.60-0.88) after controlling for potential confounders. Similar results were observed among Epstein-Barr virus seropositive subjects. An NPC-protective diet is indicated, with more phytonutrient-rich plant foods (fruits, vegetables), milk, other protein-rich foods (in particular fresh fish and eggs), and tea. This information may be used to design potential dietary regimens for NPC prevention.
Least-squares reverse time migration in frequency domain using the adjoint-state method
International Nuclear Information System (INIS)
Ren, Haoran; Chen, Shengchang; Wang, Huazhong
2013-01-01
A new scheme is presented to implement least-squares frequency-domain reverse time migration (LS-FRTM). This scheme expresses the gradient of the misfit function with respect to the model as the product of conjugated Green's functions and the data residuals in the frequency domain, based on the adjoint-state method. In the 2D case, for each frequency all the Green's functions from the shots to the reflectors and from the reflectors to the receivers, which depend on the background velocity, can be calculated once using lower/upper (LU) decomposition. The pseudo-Hessian matrix, which is also expressed as a function of the Green's functions, is used as a substitute for the approximate Hessian for amplitude compensation of the gradient. Since the linearized inversion does not update the background velocity, the Green's functions need to be calculated only once, so an iteration-based LS-FRTM can be implemented with high efficiency. As an example supporting our assertion, we present results obtained by applying our method to the 2D Marmousi model. (paper)
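For one frequency, the adjoint-state gradient described above is a stack of conjugated Green's-function products against the data residual; the following is a shape-level sketch with random stand-in arrays (no wave physics is computed, and the pseudo-Hessian normalization constant is an arbitrary stabilizer):

```python
import numpy as np

# Shape-level sketch of the single-frequency adjoint-state gradient:
# grad[x] = Re sum_{s,r} conj(G_src[s,x] * G_rec[r,x]) * resid[s,r].
# All arrays are random stand-ins for precomputed Green's functions.
rng = np.random.default_rng(5)
nx, ns, nr = 200, 4, 16            # image points, shots, receivers
G_src = rng.normal(size=(ns, nx)) + 1j * rng.normal(size=(ns, nx))
G_rec = rng.normal(size=(nr, nx)) + 1j * rng.normal(size=(nr, nx))
resid = rng.normal(size=(ns, nr)) + 1j * rng.normal(size=(ns, nr))

grad = np.real(np.einsum('sx,rx,sr->x',
                         np.conj(G_src), np.conj(G_rec), resid))

# Pseudo-Hessian diagonal used for amplitude compensation of the gradient.
pseudo_h = np.einsum('sx,rx->x', np.abs(G_src) ** 2, np.abs(G_rec) ** 2)
grad_comp = grad / (pseudo_h + 1e-8 * pseudo_h.max())
```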
Energy Technology Data Exchange (ETDEWEB)
Machado, A.E. de A, E-mail: aeam@rpd.ufmg.br [Laboratorio de Quimica Computacional e Modelagem Molecular (LQC-MM), Departamento de Quimica, ICEx, Universidade Federal de Minas Gerais (UFMG), Campus Universitario, Pampulha, Belo Horizonte, MG 31270-90 (Brazil); Departamento de Quimica Fundamental, Universidade Federal de Pernambuco, Recife, PE 50740-540 (Brazil); Gama, A.A. de S da; Barros Neto, B. de [Departamento de Quimica Fundamental, Universidade Federal de Pernambuco, Recife, PE 50740-540 (Brazil)
2011-09-22
Graphical abstract: PLS regression equations predict quite well the static β values for a large set of donor-acceptor organic molecules, in close agreement with the available experimental data. Display Omitted Highlights: → PLS regression predicts static β values of 35 push-pull organic molecules. → PLS equations show correlation of β with structural-electronic parameters. → PLS regression selects best components of push-bridge-pull nonlinear compounds. → PLS analyses can be routinely used to select novel second-order materials. - Abstract: A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the HOMO-LUMO energy gap, the ground-state dipole moment, the HOMO energy AM1 values and the number of π-electrons. The regression equation predicts quite well the static β values for the molecules investigated and can be used to model new organic-based materials with enhanced nonlinear responses.
Least-squares reverse time migration of marine data with frequency-selection encoding
Dai, Wei
2013-08-20
The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share the same receiver locations, thus limiting the usage to seismic surveys with a fixed spread geometry. We implement a frequency-selection encoding strategy that accommodates data with a marine streamer geometry. The encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content, and the receivers can distinguish the wavefield from each shot with a unique frequency band. Since the encoding functions are orthogonal to each other, there will be no crosstalk between different shots during modeling and migration. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is comparable to conventional RTM for both the Marmousi2 model and a marine data set recorded in the Gulf of Mexico. With more iterations, the LSRTM image quality is further improved. We conclude that LSRTM with frequency-selection is an efficient migration method that can sometimes produce more focused images than conventional RTM.
Least-squares reverse time migration of marine data with frequency-selection encoding
Dai, Wei
2013-06-24
The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share the same receiver locations, thus limiting the usage to seismic surveys with a fixed spread geometry. We implement a frequency-selection encoding strategy that accommodates data with a marine streamer geometry. The encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique nonoverlapping frequency content, and the receivers can distinguish the wavefield from each shot with a unique frequency band. Because the encoding functions are orthogonal to each other, there will be no crosstalk between different shots during modeling and migration. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is comparable to conventional RTM for the Marmousi2 model and a marine data set recorded in the Gulf of Mexico. With more iterations, the LSRTM image quality is further improved by suppressing migration artifacts, balancing reflector amplitudes, and enhancing the spatial resolution. We conclude that LSRTM with frequency-selection is an efficient migration method that can sometimes produce more focused images than conventional RTM. © 2013 Society of Exploration Geophysicists.
Large-scale computation of incompressible viscous flow by least-squares finite element method
Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.
1993-01-01
The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Simple substitution with Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi-preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10,000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. Taylor-Görtler-like vortices are observed for Re = 1,000.
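The matrix-free Jacobi-preconditioned conjugate gradient solver is the part that is easy to illustrate in isolation; here is a toy version on a 1-D Laplacian standing in for the LSFEM system (the real operator applies element-level products rather than a stencil):

```python
import numpy as np

def jacobi_pcg(apply_A, diag_A, b, tol=1e-10, maxiter=500):
    """Jacobi-preconditioned conjugate gradients with a matrix-free
    operator: neither element nor global matrices are ever assembled."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_A                   # Jacobi (diagonal) preconditioning
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = r / diag_A
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system (1-D Laplacian) standing in for the LSFEM equations.
n = 200
def apply_A(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

x = jacobi_pcg(apply_A, 2.0 * np.ones(n), np.ones(n))
```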
Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji
2011-12-15
Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because the physical and chemical properties of the measured object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS), which utilizes a newly defined similarity between samples, is proposed to estimate the active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to conventional PLS using wavelengths selected on the basis of variable importance in projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. Copyright © 2011 Elsevier B.V. All rights reserved.
First-order system least-squares for the Helmholtz equation
Energy Technology Data Exchange (ETDEWEB)
Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.
1996-12-31
We apply the FOSLS methodology to the exterior Helmholtz equation Δp + k²p = 0. Several least-squares functionals, some of which include both H⁻¹(Ω) and L²(Ω) terms, are examined. We show that in a special subspace of [H(div; Ω) ∩ H(curl; Ω)] × H¹(Ω), each of these functionals is equivalent, independent of k, to a scaled H¹(Ω) norm of p and u = ∇p. This special subspace does not include the oscillatory near-nullspace components ce^{ik(αx+βy)}, where c is a complex vector and α² + β² = 1. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair (α, β), and which approximate ce^{ik(αx+βy)} well on the coarser levels where bilinears cannot. We use several pairs of these parameters on each of these coarser levels so that several coarse grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution for Maxwell's equations will also be presented.
Towards a Generic Method for Building-Parcel Vector Data Adjustment by Least Squares
Méneroux, Y.; Brasebin, M.
2015-08-01
Being able to merge high-quality, complete building models with parcel data is of paramount importance for any application dealing with urban planning. However, since parcel boundaries often constitute the legal reference frame, all corrections are applied exclusively to the building features. A major task is therefore to identify the spatial relationships and properties that buildings should preserve through the conflation process. The purpose of this paper is to describe a least squares-based method to ensure that buildings fit consistently into parcels while abiding by a set of standard constraints that should suit most urban applications. An important asset of our model is that it can easily be extended to comply with more specific constraints. In addition, an analysis of the results demonstrates that it provides significantly better output than a basic algorithm relying on an individual correction of features, especially regarding the conservation of metrics and of topological relationships between buildings. In the future, we would like to include more specific constraints to retrieve the actual positions of buildings relative to parcel borders, and we plan to assess the contribution of our algorithm to the quality of urban application outputs.
Energy Technology Data Exchange (ETDEWEB)
Niazi, Ali [Azad University of Arak (Iran, Islamic Republic of). Faculty of Sciences. Dept. of Chemistry]. E-mail: ali.niazi@gmail.com
2006-09-15
A simple, novel and sensitive spectrophotometric method is described for the simultaneous determination of uranium and thorium. The method is based on the complex formation of uranium and thorium with Arsenazo III at pH 3.0. All factors affecting the sensitivity were optimized, and the linear dynamic ranges for the determination of uranium and thorium were found. The simultaneous determination of uranium and thorium mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. With multivariate calibration methods such as partial least squares (PLS), it is possible to obtain a model adjusted to the concentration values of the mixtures used in the calibration range. Orthogonal signal correction (OSC) is a preprocessing technique for removing information unrelated to the target variables, based on constrained principal component analysis, and is a suitable preprocessing method for PLS calibration of mixtures without loss of prediction capacity. In this study, the calibration model is based on absorption spectra in the 600-760 nm range for 25 different mixtures of uranium and thorium. The calibration matrices contained 0.10-21.00 and 0.25-18.50 μg mL⁻¹ of uranium and thorium, respectively. The RMSEP values for uranium and thorium were 0.4362 and 0.4183 with OSC, and 1.5710 and 1.0775 without OSC, respectively. This procedure allows the simultaneous determination of uranium and thorium in synthetic and real matrix samples with good reliability. (author)
The pls Package: Principal Component and Partial Least Squares Regression in R
Directory of Open Access Journals (Sweden)
Bjørn-Helge Mevik
2007-01-01
The pls package implements principal component regression (PCR) and partial least squares regression (PLSR) in R (R Development Core Team 2006b) and is freely available from the Comprehensive R Archive Network (CRAN), licensed under the GNU General Public License (GPL). The user interface is modelled after the traditional formula interface, as exemplified by lm. This was done so that people used to R would not have to learn yet another interface, and also because we believe the formula interface is a good way of working interactively with models. It thus has methods for generic functions like predict, update and coef. It also has more specialised functions like scores, loadings and RMSEP, and a flexible cross-validation system. Visual inspection and assessment are important in chemometrics, and the pls package has a number of plot functions for plotting scores, loadings, predictions, coefficients and RMSEP estimates. The package implements PCR and several algorithms for PLSR. The design is modular, so that it should be easy to use the underlying algorithms in other functions. It is our hope that the package will serve well both for interactive data analysis and as a building block for other functions or packages using PLSR or PCR. We describe here the package and how it is used for data analysis, as well as how it can be used as part of other packages. Also included is a section about formulas and data frames, for people not used to the R modelling idioms.
Distributed weighted least-squares estimation with fast convergence for large-scale systems
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate, which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods. PMID:25641976
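The flavor of the scaled iteration can be conveyed by a centralized toy version: a Richardson iteration on the weighted least-squares normal equations, which converges to the same global optimum the networked algorithm reaches by neighborhood communication (all matrices here are random stand-ins, and the step size is the simple 1/λmax choice rather than the paper's optimized scaling):

```python
import numpy as np

# Centralized toy of the scaled iteration x <- x + mu*(b - A x) on the
# weighted least-squares normal equations A = H^T W H, b = H^T W z.
rng = np.random.default_rng(11)
H = rng.normal(size=(8, 4))                     # stacked measurements
W = np.diag(1.0 / rng.uniform(0.5, 2.0, 8))    # inverse noise variances
theta_true = np.array([1.0, -2.0, 0.5, 3.0])
z = H @ theta_true + rng.normal(scale=0.05, size=8)

A = H.T @ W @ H
b = H.T @ W @ z
theta_opt = np.linalg.solve(A, b)               # centralized optimum

mu = 1.0 / np.linalg.eigvalsh(A).max()          # scaling for convergence
x = np.zeros(4)
for _ in range(50000):
    x_next = x + mu * (b - A @ x)
    if np.linalg.norm(x_next - x) < 1e-12:
        break
    x = x_next
```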
A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems
Directory of Open Access Journals (Sweden)
Qingzhou Mao
2015-06-01
In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate to the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. The method is also applied to the calibration of the misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The experimental results indicate that the method yields a significant improvement in the accuracy of the MLS in GNSS-hostile environments and is also effective for the calibration of misalignment.
Amplitude differences least squares method applied to temporal cardiac beat alignment
International Nuclear Information System (INIS)
Correa, R O; Laciar, E; Valentinuzzi, M E
2007-01-01
High-resolution averaged ECG is an important diagnostic technique in post-infarction and/or chagasic patients with a high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested on high-resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). The results show that LSAD produced a lower alignment error in all contaminated records, while in those blurred by power line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative.
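The LSAD criterion amounts to a search over shifts for the minimum sum of squared amplitude differences against the template; here is a minimal sketch with a synthetic beat (the real method operates on detected high-resolution ECG beats, and the spike shape below is only a stand-in for a QRS complex):

```python
import numpy as np

def align_lsad(beat, template, max_shift=50):
    """Return the shift minimizing the sum of squared amplitude
    differences (LSAD) between a detected beat and the template."""
    n = len(template)
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        seg = beat[max_shift + s : max_shift + s + n]
        cost = np.sum((seg - template) ** 2)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# Synthetic beat: a QRS-like spike embedded with a known 7-sample delay.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.5) ** 2) / 0.002)
beat = np.zeros(300)
beat[50 + 7 : 50 + 7 + 200] = template
beat += rng.normal(scale=0.02, size=300)
print(align_lsad(beat, template))   # prints 7, the embedded shift
```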
Directory of Open Access Journals (Sweden)
Margaretha Ohyver
2016-12-01
Partial Least Squares (PLS) is a method developed in 1960 by Herman Wold. It is particularly suited to constructing a regression model when the independent variables are numerous and highly collinear. PLS can be combined with other methods, one of which is the Continuous Wavelet Transformation (CWT). Since the presence of outliers can lead to a less reliable model, this kind of transformation may be applied as a pre-processing stage so that the data are free of noise and outliers. In a previous study, the Kendari hotel room occupancy rate was affected by outliers, and the resulting model had a low R2 value. Therefore, this research aimed to obtain a good model by combining the PLS method with the CWT, using the Mexican Hat as the mother wavelet. The research concludes that combining PLS with the Mexican Hat transformation results in a better model than combining PLS with the Haar wavelet transformation, as shown in the previous study. It shows that, by changing the mother wavelet, the R2 value can be improved significantly. The result provides information on how to increase the value of R2. A further benefit is the information it offers hotel management: the age of the hotel, the maximum rates, the facilities, and the number of rooms all matter for increasing the number of visitors.
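A minimal sketch of the pre-processing step, not the paper's actual pipeline: the Mexican Hat (Ricker) mother wavelet can be built directly in NumPy and convolved with a predictor series at a few scales, and the resulting coefficients fed to PLS. The signal and scale values here are made up for illustration:

```python
import numpy as np

def ricker(points, a):
    """Mexican Hat (Ricker) mother wavelet sampled on `points` points,
    with scale parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    A = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_mexhat(x, widths):
    """Continuous wavelet transform of x at the given widths, one
    convolution per scale, as a denoising pre-processing step."""
    out = np.empty((len(widths), len(x)))
    for i, w in enumerate(widths):
        wav = ricker(min(10 * w, len(x)), w)
        out[i] = np.convolve(x, wav, mode="same")
    return out

# Illustrative noisy series and three scales
x = np.sin(np.linspace(0, 10, 100)) + 0.1 * np.random.default_rng(0).normal(size=100)
coeffs = cwt_mexhat(x, widths=[2, 4, 8])
print(coeffs.shape)  # → (3, 100)
```

Coefficients at a chosen scale (or several scales stacked) would then replace the raw predictors in the PLS regression.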
Liu, X. Y.; Alfi, S.; Bruni, S.
2016-06-01
A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. The approach is based on the recursive least squares (RLS) algorithm applied to a deterministic 'input-output' model. RLS has a Kalman-filtering character and is able to identify unknown parameters from a noisy dynamic system by memorising the correlation properties of the variables. The identification of suspension parameters is achieved by learning the relationship between excitation and response in the vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements', considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of the strategy for real applications. Results of the parameter identification indicate that the estimated suspension parameters are consistent with, or close to, the reference values. These results provide supporting evidence that this fault diagnosis technique can pave the way for a future vehicle condition monitoring system.
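The RLS recursion at the heart of such an identification scheme can be sketched in a few lines. This is a generic RLS estimator on a made-up two-parameter linear model, not the suspension model or the ADTreS data of the paper:

```python
import numpy as np

def rls_identify(Phi, y, lam=0.99, delta=1000.0):
    """Recursive least squares: estimate theta in y_k = phi_k^T theta + noise.
    lam is the forgetting factor; delta initializes the covariance P."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for phi, yk in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (yk - phi @ theta)   # prediction-error update
        P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta

# Toy "suspension" model y = 2.0*x1 - 0.5*x2, recovered from noisy data
rng = np.random.default_rng(1)
Phi = rng.normal(size=(500, 2))
y = Phi @ np.array([2.0, -0.5]) + 0.01 * rng.normal(size=500)
print(np.round(rls_identify(Phi, y), 2))  # ≈ [2.0, -0.5]
```

The forgetting factor lam < 1 is what lets the estimator track slowly drifting parameters, e.g. a degrading damper.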
Robbins, J. W.
1985-01-01
An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal, from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities, depending on the choice of covariance types. Selected for this study were 30' x 30' (arc-minute) mean gravity and height anomalies. Existing and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30' x 30' mean gravity anomalies to an accuracy of 9.2 mgal with this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions and satellite mission parameters.
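The collocation prediction step used in such solutions has a compact closed form: with centered observations l, signal/observation covariances C, and observation noise D, the predicted signal is s = C_sp (C_pp + D)^{-1} l. The sketch below is a 1D toy with an assumed Gaussian covariance model and invented numbers, not the study's gradiometer covariances:

```python
import numpy as np

def lsc_predict(x_obs, l_obs, x_new, C, noise_var):
    """Least-squares collocation: predict the signal at x_new from centered
    observations l_obs at x_obs, given an isotropic covariance function C(d)
    and observation noise variance: s_hat = C_sp (C_pp + D)^{-1} l."""
    Cpp = C(np.abs(x_obs[:, None] - x_obs[None, :]))   # obs-obs covariance
    Csp = C(np.abs(x_new[:, None] - x_obs[None, :]))   # signal-obs covariance
    return Csp @ np.linalg.solve(Cpp + noise_var * np.eye(len(x_obs)), l_obs)

# Gaussian covariance model with correlation length 10 (illustrative)
C = lambda d: np.exp(-(d / 10.0) ** 2)
x_obs = np.array([0.0, 5.0, 10.0, 20.0])
l_obs = np.array([1.0, 0.8, 0.5, -0.2])
v = lsc_predict(x_obs, l_obs, np.array([7.5]), C, 0.01)
print(v)
```

The same formula, with covariances propagated through the gradient operator, links observed gradients to predicted mean anomalies.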
Least squares estimation of molecular distance--noise abatement in phylogenetic reconstruction.
Goldstein, D B; Pollock, D D
1994-06-01
Zuckerkandl and Pauling (1962, "Horizons in Biochemistry," pp. 189-225, Academic Press, New York) first noticed that the degree of sequence similarity between the proteins of different species could be used to estimate their phylogenetic relationship. Since then models have been developed to improve the accuracy of phylogenetic inferences based on amino acid or DNA sequences. Most of these models were designed to yield distance measures that are linear with time, on average. The reliability of phylogenetic reconstruction, however, depends on the variance of the distance measure in addition to its expectation. In this paper we show how the method of generalized least squares can be used to combine data types, each most informative at different points in time, into a single distance measure. This measure reconstructs phylogenies more accurately than existing non-likelihood distance measures. We illustrate the approach for a two-rate mutation model and demonstrate that its application provides more accurate phylogenetic reconstruction than do currently available analytical distance measures.
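The core of the generalized least squares combination can be shown on the simplest case: several unbiased estimates of the same distance with a known covariance matrix V. The optimal combination weights are proportional to V^{-1} applied to the vector of ones. The two estimates and variances below are invented for illustration:

```python
import numpy as np

def gls_combine(estimates, cov):
    """Generalized least squares combination of unbiased estimates of the
    same quantity: d_hat = (1^T V^-1 1)^-1 1^T V^-1 d, with variance
    (1^T V^-1 1)^-1 (minimum-variance among linear unbiased combinations)."""
    d = np.asarray(estimates, float)
    Vinv = np.linalg.inv(np.asarray(cov, float))
    one = np.ones(len(d))
    w = Vinv @ one / (one @ Vinv @ one)   # combination weights
    return w @ d, 1.0 / (one @ Vinv @ one)

# Two independent distance estimates with variances 0.04 and 0.01:
d_hat, var = gls_combine([1.2, 1.0], [[0.04, 0.0], [0.0, 0.01]])
print(round(d_hat, 2), round(var, 3))  # → 1.04 0.008
```

With independent estimates this reduces to inverse-variance weighting; the GLS form also handles correlated data types, which is the situation the paper exploits.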
Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua
2017-05-01
The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or some other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and Gaussian filter to decompose input images into base and detail layers. Compared with conventional MSDs, this MSD can achieve the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs would contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and the conventional "averaging" fusion scheme is unable to achieve desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR details or noise into the fused image. As a result, the fused image details would appear more naturally and be suitable for human visual perception. Experimental results demonstrate that our method can achieve a superior performance compared with other fusion methods in both subjective and objective assessments.
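The WLS optimization used for the detail layers can be illustrated in 1D: minimize a data-fidelity term plus edge-aware weighted smoothness, which is a small linear solve. This is a sketch of the general WLS idea under assumed weights, not the paper's 2D fusion scheme:

```python
import numpy as np

def wls_smooth_1d(g, weights, lam=1.0):
    """Solve min_u sum_i (u_i - g_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2.
    Small w_i preserves an edge between samples i and i+1; large w_i
    smooths noise. The normal equations form a tridiagonal system."""
    n = len(g)
    A = np.eye(n)
    for i in range(n - 1):
        w = lam * weights[i]
        A[i, i] += w; A[i + 1, i + 1] += w      # quadratic-form diagonal
        A[i, i + 1] -= w; A[i + 1, i] -= w      # off-diagonal coupling
    return np.linalg.solve(A, g)

# A noisy step: heavy smoothing away from the edge, none across it
g = np.concatenate([np.zeros(50), np.ones(50)]) \
    + 0.1 * np.random.default_rng(2).normal(size=100)
w = np.ones(99); w[49] = 0.0      # zero weight keeps the edge at 49/50
u = wls_smooth_1d(g, w, lam=20.0)
```

In the fusion setting the weights would be driven by saliency or gradient maps so that IR detail is transferred while noise is suppressed.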
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Directory of Open Access Journals (Sweden)
Bo Liu
2012-02-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces.
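The random-projection ingredient is easy to demonstrate on its own: a data-independent Gaussian matrix maps features to a much lower dimension while approximately preserving pairwise distances (Johnson-Lindenstrauss). The dimensions below are arbitrary illustration values, not the paper's MDP features:

```python
import numpy as np

def random_project(X, k, seed=0):
    """Project rows of X from d dimensions down to k using a non-adaptive,
    data-independent Gaussian random matrix scaled by 1/sqrt(k); pairwise
    distances are approximately preserved (Johnson-Lindenstrauss)."""
    d = X.shape[1]
    R = np.random.default_rng(seed).normal(size=(d, k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(3)
X = rng.normal(size=(10, 1000))        # high-dimensional feature vectors
Y = random_project(X, k=200)
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(round(abs(proj - orig) / orig, 3))  # small relative distortion
```

In CKRL the least-squares policy evaluation would then be carried out on Y rather than X, which is where the computational savings come from.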
Zhang, Hao
2017-07-07
Least-squares reverse time migration (LSRTM) is a seismic imaging technique based on linear inversion, which usually aims to improve the quality of the seismic image by removing the acquisition footprint, suppressing migration artifacts, and enhancing resolution. LSRTM has been shown to produce migration images of better quality than those computed by conventional migration. In this paper, our derivation of LSRTM approximates the near-incident reflection coefficient with the normal-incident reflection coefficient, which shows that the reflectivity term is related to the normal-incident reflection coefficient and the background velocity. With reflected data, LSRTM is mainly sensitive to impedance perturbations. According to an approximate relationship between them, we reformulate the perturbation-related system into a reflection-coefficient-related one. Then, we seek the inverted image through linearized iteration. In the proposed algorithm, we need only the migration velocity for LSRTM, assuming that the density varies gently compared with the migration velocity. To validate the algorithm, we first apply it to a synthetic case and then to a field data set. Both applications illustrate that our imaging results are of good quality.
Peterson, K. T.; Wulamu, A.
2017-12-01
Water, essential to all living organisms, is one of the Earth's most precious resources. Remote sensing offers an ideal approach to monitoring water quality compared with traditional in-situ techniques, which are highly time- and resource-consuming. Using a multi-scale approach, data from handheld spectroscopy, UAS-based hyperspectral imaging, and satellite multispectral images were collected in coordination with in-situ water quality samples for two midwestern watersheds. The remote sensing data were modeled and correlated to the in-situ water quality variables, including chlorophyll content (Chl), turbidity, and total dissolved solids (TDS), using Normalized Difference Spectral Indices (NDSI) and Partial Least Squares Regression (PLSR). The results of the study supported the original hypothesis that correlating water quality variables with remotely sensed data benefits greatly from the use of more complex modeling and regression techniques such as PLSR. The PLSR analysis yielded much higher R2 values for all variables compared with NDSI. The combination of NDSI and PLSR analysis also identified key wavelengths that aligned with previous studies' findings. This research demonstrates the advantages of, and the future for, complex modeling and machine learning techniques to improve the estimation of water quality variables from spectral data.
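Why PLSR beats a single two-band index on collinear spectra can be shown with a minimal PLS1 (NIPALS) implementation on synthetic data. This is a generic sketch, not the study's software; the latent-factor structure, band count, and noise levels are invented:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 via NIPALS: returns coefficients B mapping centered X
    to centered y through n_comp latent components."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)                 # weight (covariance direction)
        t = Xc @ w                                 # scores
        p = Xc.T @ t / (t @ t)                     # X loadings
        qk = (yc @ t) / (t @ t)                    # y loading
        Xc, yc = Xc - np.outer(t, p), yc - qk * t  # deflation
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)         # B = W (P'W)^-1 q

# Synthetic "spectra": 100 collinear bands driven by 2 latent factors
rng = np.random.default_rng(4)
T = rng.normal(size=(50, 2))
X = T @ rng.normal(size=(2, 100)) + 0.05 * rng.normal(size=(50, 100))
y = T @ np.array([1.5, -0.7]) + 0.05 * rng.normal(size=50)   # e.g. Chl
B = pls1_fit(X, y, n_comp=2)
resid = (y - y.mean()) - (X - X.mean(0)) @ B
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(r2 > 0.95)  # → True: two components recover the two latent factors
```

An NDSI uses only two bands at a time, whereas the PLS components pool information across all correlated bands, which is the effect the study reports.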
Detection of Tetracycline in Milk using NIR Spectroscopy and Partial Least Squares
Wu, Nan; Xu, Chenshan; Yang, Renjie; Ji, Xinning; Liu, Xinyuan; Yang, Fan; Zeng, Ming
2018-02-01
The feasibility of measuring tetracycline in milk was investigated using near infrared (NIR) spectroscopy combined with the partial least squares (PLS) method. The NIR transmittance spectra of 40 pure milk samples and 40 tetracycline-adulterated milk samples with different concentrations (from 0.005 to 40 mg/L) were obtained. The pure milk and tetracycline-adulterated milk samples were correctly assigned to their categories with 100% accuracy in the calibration set, and a correct classification rate of 96.3% was obtained in the prediction set. For the quantitation of tetracycline in adulterated milk, the root mean square errors for the calibration and prediction models were 0.61 mg/L and 4.22 mg/L, respectively. The PLS model fitted the calibration set well; however, its predictive ability was limited, especially for samples with low tetracycline concentrations. Overall, this approach can be considered a promising tool for the discrimination of tetracycline-adulterated milk, as a supplement to high performance liquid chromatography.
Evaluation of milk compositional variables on coagulation properties using partial least squares.
Bland, Julie H; Grandison, Alistair S; Fagan, Colette C
2015-02-01
The aim of this study was to investigate the effects of numerous milk compositional factors on milk coagulation properties using Partial Least Squares (PLS). Milk from herds of Jersey and Holstein-Friesian cattle was collected across the year and blended (n=55) to maximise variation in composition and coagulation. The milk was analysed for casein, protein, fat, titratable acidity, lactose, Ca2+, urea content, micelle size, fat globule size, somatic cell count and pH. Milk coagulation properties were defined as coagulation time, curd firmness and curd firmness rate, measured by a controlled-strain rheometer. The models derived from PLS had higher predictive power than previous models, demonstrating the value of measuring more milk components. In addition to the well-established relationships with casein and protein levels, casein micelle size (CMS) and fat globule size were found to have a strong impact on all three models. The study also found a positive impact of fat on milk coagulation properties, as well as a strong relationship between lactose and curd firmness, and between urea and curd firmness rate, all of which warrant further investigation given the current lack of knowledge of the underlying mechanisms. These findings demonstrate the importance of using a wider range of milk compositional variables for the prediction of milk coagulation properties, and hence as indicators of milk suitability for cheese making.
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to demonstrate the use of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The methods are compared in terms of predictive ability and goodness of fit using a robust Root Mean Squared Error of Cross-Validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.
International Nuclear Information System (INIS)
Pang, Hongfeng; Chen, Dixiang; Pan, Mengchun; Luo, Shitu; Zhang, Qi; Luo, Feilu
2012-01-01
Fluxgate magnetometers are widely used for magnetic field measurement. However, their accuracy is influenced by temperature. In this paper, a new method is proposed to compensate the temperature drift of fluxgate magnetometers, in which a least-squares support vector machine (LSSVM) is utilized. The compensation performance was analyzed by simulation, which shows that the LSSVM has better performance and shorter training time than backpropagation and radial basis function neural networks. The temperature characteristics of a DM fluxgate magnetometer were measured with a temperature experiment box. Forty-five measurements under different magnetic fields and temperatures were obtained and divided into 36 training data and nine test data. The training data were used to obtain the parameters of the LSSVM model, and the compensation performance of the LSSVM model was verified with the test data. Experimental results show that the temperature drift of the magnetometer is reduced from 109.3 to 3.3 nT after compensation, suggesting that this compensation method is effective for improving the accuracy of fluxgate magnetometers. (paper)
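Training an LS-SVM regressor amounts to solving one linear KKT system rather than a quadratic program. The sketch below shows that system with an RBF kernel on an invented 1D "temperature-drift" curve; the kernel width, regularization, and data are illustrative assumptions, not the DM magnetometer setup:

```python
import numpy as np

def rbf(A, B, sigma):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """LS-SVM regression reduces to a single linear KKT system:
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xnew, sigma):
    return rbf(Xnew, Xtr, sigma) @ alpha + b

# Toy drift curve: "temperature" -> offset (illustrative values)
X = np.linspace(-2, 2, 40)[:, None]
y = np.sin(2 * X[:, 0]) + 0.05 * np.random.default_rng(5).normal(size=40)
b, alpha = lssvm_fit(X, y, gamma=100.0, sigma=0.5)
mse = np.mean((lssvm_predict(X, b, alpha, X, sigma=0.5) - y) ** 2)
print(mse < 0.01)  # → True: the model tracks the noisy drift curve
```

In the compensation setting, the fitted model would map the measured temperature to a predicted drift that is subtracted from the magnetometer output.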
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J
2017-01-01
A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and the availability of data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but it was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, after which the WLSFEM combines the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
International Nuclear Information System (INIS)
Fu, Y; Xu, O; Yang, W; Zhou, L; Wang, J
2017-01-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of the input and output variables are used as training samples to construct the model, which can reduce the effects of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high model updating frequency, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information, and reflect the process characteristics accurately. (paper)
Multidimensional model of apathy in older adults using partial least squares--path modeling.
Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine
2016-06-01
Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine the factors contributing to the three dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contribution of these factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. For behavioral apathy, we again found similar latent variables, except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account the various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed.
Non-stationary covariance function modelling in 2D least-squares collocation
Darbeheshti, N.; Featherstone, W. E.
2009-06-01
Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compared stationary and non-stationary covariance functions in 2D LSC on the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
International Nuclear Information System (INIS)
Liu, Jinhai; Su, Hanguang; Ma, Yanjuan; Wang, Gang; Wang, Yuan; Zhang, Kun
2016-01-01
Small leakages are severe threats to long-distance pipeline transportation. An online small-leakage detection method based on chaos characteristics and Least Squares Support Vector Machines (LS-SVMs) is proposed in this paper. For the first time, the relationship between the chaos characteristics of pipeline inner pressures and small leakages is investigated and applied in a pipeline detection method. Firstly, chaos in the pipeline inner pressure is identified, and the relevant chaos characteristics are estimated using the nonlinear time series analysis package TISEAN. Then an LS-SVM with a hybrid kernel is built, named the hybrid kernel LS-SVM (HKLS-SVM). It is applied to analyze the chaos characteristics and distinguish the negative pressure waves (NPWs) caused by small leaks. A new leak location method is also expounded. Finally, data from the chaotic Logistic-Map system are used in simulation. A comparison between HKLS-SVM and other methods, in terms of identification accuracy and computing efficiency, is made. The simulation results show that HKLS-SVM achieves the best performance and is effective in the error analysis of chaotic systems. When real pipeline data are used in the test, the ultimate identification accuracy of HKLS-SVM reaches 97.38% and the position accuracy 99.28%, indicating that the proposed method performs well in detecting and locating small pipeline leaks.
Study of aged cognac using solid-phase microextraction and partial least-squares regression.
Watts, Vivian A; Butzke, Christian E; Boulton, Roger B
2003-12-17
Headspace solid-phase microextraction (SPME) and GC-MS were used to analyze 17 commercial French Cognac brandies (9 young and 8 well-aged, ranging in age from 3 to 55 years). Sixty-four volatiles were chosen on the basis of chromatographic separation and/or known odor importance. Chromatographic peaks were manually integrated, and the peak area data were analyzed using partial least-squares (PLS) regression to study relationships between volatile composition (X variables) and age (Y variable). When only the compounds with the highest significance were included, and from these the variables (33 in total) with the highest correlation loadings on the first two principal components were selected, principal component 1 explained 82% of the variance of the measured compounds and 85% of the variance in age. These were considered the most important volatiles for distinguishing products of different ages, because young and old samples were separated along principal component 1. Norisoprenoids, terpenes, and acetate esters had weaker positive and negative loadings and were therefore left out. The PLS model could predict sample age accurately with the optimum 33 volatiles as well as with a smaller subset consisting of ethyl esters and methyl ketones.
Dynamic temperature modeling of an SOFC using least squares support vector machines
Energy Technology Data Exchange (ETDEWEB)
Kang, Ying-Wei; Li, Jun; Cao, Guang-Yi; Tu, Heng-Yong [Institute of Fuel Cell, Shanghai Jiao Tong University, Shanghai 200240 (China); Li, Jian; Yang, Jie [School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)
2008-05-01
Cell temperature control plays a crucial role in SOFC operation. In order to design effective temperature control strategies by model-based control methods, a dynamic temperature model of an SOFC is presented in this paper using least squares support vector machines (LS-SVMs). The nonlinear temperature dynamics of the SOFC are represented by a nonlinear autoregressive with exogenous inputs (NARX) model implemented using an LS-SVM regression model. Issues concerning the development of the LS-SVM temperature model are discussed in detail, including variable selection, training set construction and tuning of the LS-SVM parameters (usually referred to as hyperparameters). Comprehensive validation tests demonstrate that the developed LS-SVM model is sufficiently accurate to be used independently of the SOFC process, emulating its temperature response from only the process input information over a relatively wide operating range. The powerful ability of the LS-SVM temperature model benefits from the approaches of constructing the training set and tuning hyperparameters automatically by a genetic algorithm (GA), in addition to the modeling method itself. The proposed LS-SVM temperature model can be conveniently employed to design temperature control strategies for the SOFC. (author)
International Nuclear Information System (INIS)
Hao, Ming; Wang, Yanli; Bryant, Stephen H.
2016-01-01
Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, the proposed algorithm achieves state-of-the-art results, with areas under the precision-recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR), based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, and other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can further be improved by using the recalculated kernel. • Top predictions can be validated by experimental data.
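The two stages of such a pipeline can be sketched generically: fuse several similarity kernels into one, then score interactions by regularized least squares on the fused kernel. Note the hedges: the paper's fusion is a specific nonlinear scheme, whereas this sketch substitutes a plain weighted sum, and the kernels, weights, and interaction matrix below are invented toy values:

```python
import numpy as np

def fuse_kernels(kernels, weights):
    """Combine several similarity kernels into one (a plain weighted sum;
    the RLS-KF paper uses a more elaborate nonlinear fusion)."""
    return sum(w * K for w, K in zip(weights, kernels))

def rls_scores(K, Y, lam=1.0):
    """Regularized least squares on the fused kernel: predicted
    interaction scores F = K (K + lam I)^-1 Y."""
    return K @ np.linalg.solve(K + lam * np.eye(K.shape[0]), Y)

# Two toy drug-similarity kernels over 5 drugs, 3 candidate targets
rng = np.random.default_rng(6)
A1, A2 = rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
K1, K2 = A1 @ A1.T, A2 @ A2.T                   # PSD similarity matrices
Y = (rng.random((5, 3)) > 0.5).astype(float)    # known interactions (0/1)
F = rls_scores(fuse_kernels([K1, K2], [0.6, 0.4]), Y)
print(F.shape)  # → (5, 3)
```

Ranking the entries of F for unknown drug-target pairs is what yields the candidate interactions evaluated by AUPR.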
Kim, Jong-Yun; Choi, Yong Suk; Park, Yong Joon; Jung, Sung-Hee
2009-01-01
Neutron spectrometry, based on the scattering of high-energy fast neutrons from a radioisotope source and their slowing-down by light hydrogen atoms, is a useful technique for non-destructive, quantitative measurement of hydrogen content because it has a large measuring volume and is not affected by temperature, pressure, pH value or color. The most common choices for a radioisotope neutron source are Cf-252 or Am-241/Be. In this study, Cf-252 with a neutron flux of 6.3x10^6 n/s has been used as an attractive neutron source because of its high neutron flux and weak radioactivity. Pulse-height neutron spectra have been obtained using an in-house-built radioisotopic neutron spectrometric system equipped with a He-3 detector and multi-channel analyzer, including a neutron shield. As a preliminary study, a polyethylene block (density of approximately 0.947 g/cc and area of 40 cm x 25 cm) was used for the determination of hydrogen content with multivariate calibration models, depending on the thickness of the block. Compared with the results obtained from a simple linear calibration model, the partial least-squares regression (PLSR) method offered better performance in the quantitative data analysis. It also revealed that the PLSR method in a neutron spectrometric system can be promising for real-time, online monitoring of powder processes to determine the content of any type of molecule containing hydrogen nuclei.
Due Date Assignment in a Dynamic Job Shop with the Orthogonal Kernel Least Squares Algorithm
Yang, D. H.; Hu, L.; Qian, Y.
2017-06-01
Meeting due dates is a key goal in the manufacturing industries. This paper proposes a method for due date assignment (DDA) using the Orthogonal Kernel Least Squares Algorithm (OKLSA). A simulation model is built to imitate the production process of a highly dynamic job shop. Several factors describing job characteristics and system state are extracted as attributes to predict job flow-times. A number of experiments under varying dispatching rules and a 90% shop utilization level have been carried out to evaluate the effectiveness of OKLSA applied to DDA. The prediction performance of OKLSA is compared with those of five conventional DDA models and a back-propagation neural network (BPNN). The experimental results indicate that OKLSA is statistically superior to the other DDA models in terms of mean absolute lateness and root-mean-square lateness in most cases. The only exception occurs when the shortest-processing-time rule is used for dispatching jobs, in which case the difference between OKLSA and BPNN is not statistically significant.
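OKLSA itself involves an orthogonalized basis-selection step not reproduced here; a plain regularized kernel least-squares regressor on synthetic job attributes illustrates the underlying idea of predicting flow-times nonlinearly from job and shop-state features. The attributes, flow-time model and kernel parameters are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic job attributes: [processing time, queue length, shop load].
X = rng.uniform(0.0, 1.0, size=(60, 3))
# Hypothetical flow-time: nonlinear in the attributes, plus noise.
flow = (2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) + X[:, 2] ** 2
        + 0.05 * rng.standard_normal(60))

def rbf_kernel(A, B, gamma=2.0):
    """Gaussian (RBF) kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Regularized kernel least squares: solve (K + lam*I) alpha = y.
K = rbf_kernel(X, X)
lam = 1e-3
alpha = np.linalg.solve(K + lam * np.eye(len(X)), flow)

# Predicted flow-times for five new jobs (used for due date assignment).
X_new = rng.uniform(0.0, 1.0, size=(5, 3))
pred = rbf_kernel(X_new, X) @ alpha
train_rmse = float(np.sqrt(np.mean((K @ alpha - flow) ** 2)))
```

A due date would then be assigned as release time plus the predicted flow-time, possibly with a safety allowance.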
Li, Qian-qian; Wu, Li-jun; Liu, Wei; Cao, Jin-li; Duan, Jia; Huang, Yue; Min, Shun-geng
2012-02-01
In the present study, sucrose was used as a chiral selector to detect the molar fractions of R-metalaxyl and S-ibuprofen, exploiting the UV spectral difference caused by the interaction of the R- and S-isomers with sucrose. The quantitative model of the molar fraction of R-metalaxyl was established by partial least squares (PLS) regression and the robustness of the model was evaluated with 6 independent validation samples. The determination coefficient R2 and the standard error of the calibration set (SEC) were 99.98% and 0.003, respectively. The correlation coefficient between estimated and specified values, the standard error and the relative standard deviation (RSD) of the independent validation samples were 0.9998, 0.0004 and 0.054%, respectively. The quantitative model of the molar fraction of S-ibuprofen was likewise established by PLS and its robustness evaluated. The determination coefficient R2 and the standard error of the calibration set (SEC) were 99.82% and 0.007, respectively. The correlation coefficient between estimated and specified values of the independent validation samples was 0.9981, the standard error of prediction (SEP) was 0.002 and the relative standard deviation (RSD) was 0.2%. The results demonstrate that sucrose is an ideal chiral selector for building a stable regression model to determine enantiomeric composition.
Directory of Open Access Journals (Sweden)
Margaretha Ohyver
2014-12-01
Full Text Available Multicollinearity and outliers are common problems when estimating a regression model. Multicollinearity occurs when there are high correlations among predictor variables, leading to difficulties in separating the effects of each independent variable on the response variable. If outliers are present in the data to be analyzed, the normality assumption of the regression will be violated and the results of the analysis may be incorrect or misleading. Both of these problems occurred in the data on the room occupancy rate of hotels in Kendari. The purpose of this study is to find a model for these data that is free of multicollinearity and outliers, and to determine the factors that affect the room occupancy rate of hotels in Kendari. The methods used are the Continuous Wavelet Transformation and Partial Least Squares. The result of this research is a regression model that is free of multicollinearity and in which the presence of outliers has been resolved.
Fishery landing forecasting using EMD-based least square support vector machine models
Shabri, Ani
2015-05-01
In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and least squares support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. The hybrid is formulated specifically for modeling fishery landings, whose time series are highly nonlinear, non-stationary and seasonal, and can hardly be modelled properly and forecasted accurately by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landing is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.
Dutta, Gaurav
2014-10-01
Strong subsurface attenuation leads to distortion of the amplitudes and phases of seismic waves propagating inside the earth. Conventional acoustic reverse time migration (RTM) and least-squares reverse time migration (LSRTM) do not account for this distortion, which can lead to defocusing of migration images in highly attenuative geologic environments. To correct for this distortion, we used a linearized inversion method, denoted as Qp-LSRTM. During the least-squares iterations, we used a linearized viscoacoustic modeling operator for forward modeling. The adjoint equations were derived using the adjoint-state method for back-propagating the residual wavefields. The merit of this approach compared with conventional RTM and LSRTM was that Qp-LSRTM compensated for the amplitude loss due to attenuation and could produce images with better-balanced amplitudes and more resolution below highly attenuative layers. Numerical tests on synthetic and field data illustrated the advantages of Qp-LSRTM over RTM and LSRTM when the recorded data had strong attenuation effects. Similar to standard LSRTM, the sensitivity tests for background velocity and Qp errors revealed that the liability of this method is the requirement for smooth and accurate migration velocity and attenuation models.
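The viscoacoustic propagators and adjoint-state derivation are beyond a snippet, but the least-squares-iteration skeleton shared by LSRTM and Qp-LSRTM — migrate the data residual with the adjoint operator and update the reflectivity — can be shown with a toy dense matrix standing in for the linearized (Born) modeling operator. Everything below (operator, model size, sparse reflectivity) is a hypothetical stand-in, not seismic modeling.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear "demigration" operator L standing in for Born modeling
# (in Qp-LSRTM this would be the linearized viscoacoustic operator).
n_data, n_model = 120, 40
L = rng.standard_normal((n_data, n_model))
m_true = np.zeros(n_model)
m_true[[8, 19, 30]] = [1.0, -0.7, 0.5]   # sparse "reflectivity"
d = L @ m_true                            # noise-free observed data

# Steepest-descent least-squares iterations: m += step * L^T (d - L m),
# with an exact line search along the gradient direction.
m = np.zeros(n_model)
for _ in range(200):
    r = d - L @ m                         # data residual
    g = L.T @ r                           # adjoint ("migration") of residual
    step = (g @ g) / ((L @ g) @ (L @ g))  # exact line search
    m += step * g

rel_misfit = np.linalg.norm(d - L @ m) / np.linalg.norm(d)
```

The first iterate (m = step * L^T d) plays the role of the conventional migration image; further iterations deblur it toward the true reflectivity.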
Least-squares reverse time migration with local Radon-based preconditioning
Dutta, Gaurav
2017-03-08
Least-squares migration (LSM) can produce images with better-balanced amplitudes and fewer artifacts than standard migration. The conventional objective function used for LSM minimizes the L2-norm of the data residual between the predicted and the observed data. However, for field-data applications in which the recorded data are noisy and undersampled, the conventional formulation of LSM fails to provide the desired uplift in the quality of the inverted image. We have developed a least-squares reverse time migration (LSRTM) method using local Radon-based preconditioning to overcome the low signal-to-noise ratio (S/N) problem of noisy or severely undersampled data. A high-resolution local Radon transform of the reflectivity is used, and sparseness constraints are imposed on the inverted reflectivity in the local Radon domain. The sparseness constraint is that the inverted reflectivity is sparse in the Radon domain, with each location of the subsurface represented by a limited number of geologic dips. The forward and inverse mapping of the reflectivity to and from the local Radon domain is done through 3D Fourier-based discrete Radon transform operators. The weights for the preconditioning vary locally based on the relative amplitudes of the local dips, or are assigned using quantile measures. Numerical tests on synthetic and field data validate the effectiveness of our approach in producing images with good S/N and fewer aliasing artifacts when compared with standard RTM or standard LSRTM.
Q-Least Squares Reverse Time Migration with Viscoacoustic Deblurring Filters
Chen, Yuqing
2017-08-02
Viscoacoustic least-squares reverse time migration (Q-LSRTM) linearly inverts for the subsurface reflectivity model from lossy data. Compared to the conventional migration methods, it can compensate for the amplitude loss in the migrated images because of the strong subsurface attenuation and can produce reflectors that are accurately positioned in depth. However, the adjoint Q propagators used for backward propagating the residual data are also attenuative. Thus, the inverted images from Q-LSRTM are often observed to have lower resolution when compared to the benchmark acoustic LSRTM images from acoustic data. To increase the resolution and accelerate the convergence of Q-LSRTM, we propose using viscoacoustic deblurring filters as a preconditioner for Q-LSRTM. These filters can be estimated by matching a simulated migration image to its reference reflectivity model. Numerical tests on synthetic and field data demonstrate that Q-LSRTM combined with viscoacoustic deblurring filters can produce images with higher resolution and more balanced amplitudes than images from acoustic RTM, acoustic LSRTM and Q-LSRTM when there is strong attenuation in the background medium. The proposed preconditioning method is also shown to improve the convergence rate of Q-LSRTM by more than 30 percent in some cases and significantly compensate for the lossy artifacts in RTM images.
Metafitting: Weight optimization for least-squares fitting of PTTI data
Douglas, Rob J.; Boulanger, J.-S.
1995-01-01
For precise time intercomparisons between a master frequency standard and a slave time scale, we have found it useful to quantitatively compare different fitting strategies by examining the standard uncertainty in time or average frequency. This is particularly useful when designing procedures that use intermittent intercomparisons, with some parameterized fit used to interpolate or extrapolate from the calibrating intercomparisons. We use the term 'metafitting' for the choices that are made before a fitting procedure is operationally adopted. We present methods for calculating the standard uncertainty of general weighted least-squares fits, and a method for optimizing the weights for a general noise model suitable for many PTTI applications. We present the results of metafitting procedures for a regular schedule of (hypothetical) high-accuracy frequency calibrations of a maser time scale. We have identified a cumulative series of improvements that give a significant reduction of the expected standard uncertainty, compared to the simplest procedure of resetting the maser synthesizer after each calibration. The metafitting improvements presented include the optimum choice of weights for the calibration runs, optimized over a period of a week or 10 days.
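The standard uncertainty of a weighted least-squares fit comes directly from the normal equations: for a linear model y = X beta with weight matrix W equal to the inverse noise covariance, beta_hat = (X^T W X)^{-1} X^T W y and Cov(beta_hat) = (X^T W X)^{-1}. A minimal sketch, assuming a hypothetical offset-plus-drift clock model and invented per-run uncertainties:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic clock intercomparisons: offset = x0 + rate * t, with
# per-run measurement uncertainties sigma_i (heteroscedastic).
t = np.linspace(0.0, 10.0, 25)               # days
sigma = rng.uniform(0.5, 2.0, size=t.size)   # ns, per calibration run
truth = 3.0 + 0.8 * t                        # ns, hypothetical offset + drift
y = truth + sigma * rng.standard_normal(t.size)

X = np.column_stack([np.ones_like(t), t])    # design matrix [1, t]
W = np.diag(1.0 / sigma ** 2)                # optimal weights = 1/sigma^2

# Weighted least squares: beta = (X^T W X)^{-1} X^T W y.
XtWX = X.T @ W @ X
beta = np.linalg.solve(XtWX, X.T @ W @ y)

# Standard uncertainties of the fitted offset and rate.
cov = np.linalg.inv(XtWX)
std_unc = np.sqrt(np.diag(cov))
```

"Metafitting" in the abstract's sense amounts to evaluating std_unc (or the extrapolated time uncertainty derived from cov) for candidate weighting schemes before adopting one operationally.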
A scaled Lagrangian method for performing a least squares fit of a model to plant data
International Nuclear Information System (INIS)
Crisp, K.E.
1988-01-01
Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of a sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)
A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.
Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio
2015-12-01
Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter, which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or non-bifurcation can be an important step in improving automated methods for atherosclerotic plaque quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction and classification, providing a lumen area quantification, geometrical features of the cross-sectional lumen and labeled slices. The classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean difference of lumen area of 0.11 mm2 compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features.
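The orthogonal least squares feature-selection step can be sketched as a greedy forward procedure: at each step, pick the candidate feature whose component orthogonal to the already-selected features explains the most residual energy. The feature matrix and labels below are synthetic stand-ins, not OCT geometrical features.

```python
import numpy as np

rng = np.random.default_rng(5)

# 40 candidate features for 100 "frames"; only 3 actually drive the target.
X = rng.standard_normal((100, 40))
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + X[:, 29] + 0.1 * rng.standard_normal(100)

def ols_select(X, y, n_select):
    """Greedy orthogonal least squares forward selection."""
    selected, Q = [], []          # chosen indices and orthonormal basis
    r = y.astype(float).copy()    # residual not yet explained
    for _ in range(n_select):
        best, best_gain, best_q = None, -1.0, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            v = X[:, j].astype(float).copy()
            for q in Q:                       # orthogonalize vs chosen set
                v -= (q @ v) * q
            norm = np.linalg.norm(v)
            if norm < 1e-10:
                continue
            q = v / norm
            gain = (q @ r) ** 2               # residual energy explained
            if gain > best_gain:
                best, best_gain, best_q = j, gain, q
        selected.append(best)
        Q.append(best_q)
        r = r - (best_q @ r) * best_q         # deflate the residual
    return selected

chosen = ols_select(X, y, 3)
```

In the paper's pipeline, the features surviving this ranking would then feed the supervised classifier (e.g. AdaBoost).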
Directory of Open Access Journals (Sweden)
Tomáš Masák
2017-09-01
Full Text Available Principal component analysis (PCA) is a popular dimensionality reduction and data visualization method. Sparse PCA (SPCA) is its extensively studied and NP-hard-to-solve modification. In the past decade, many different algorithms were proposed to perform SPCA. We build upon the work of Zou et al. (2006), who recast the SPCA problem into the regression framework and proposed to induce sparsity with the l1 penalty. Instead, we propose to drop the l1 penalty and promote sparsity by re-weighting the l2-norm. Our algorithm thus consists mainly of solving weighted ridge regression problems. We show that the algorithm basically attempts to find a solution to a penalized least squares problem with a non-convex penalty that resembles the l0-norm more closely. We also apply the algorithm to analyze the voting records of the Chamber of Deputies of the Parliament of the Czech Republic. We show not only why SPCA is more appropriate for analyzing this type of data, but also discuss whether the variable selection property can be utilized as an additional piece of information, for example to create voting calculators automatically.
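The key move — dropping the l1 penalty and re-weighting the l2-norm — can be sketched as iteratively reweighted ridge regression: each coefficient's penalty weight is set to 1/(b_j^2 + eps), so large coefficients are barely penalized while small ones are driven toward zero, approximating an l0-like penalty as eps shrinks. This is a generic regression sketch under invented data, not the paper's SPCA formulation.

```python
import numpy as np

rng = np.random.default_rng(6)

n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[2, 7]] = [3.0, -2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Iteratively reweighted ridge: minimize ||y - X b||^2 + lam * sum w_j b_j^2
# with w_j = 1/(b_j^2 + eps), which mimics an l0-like penalty.
lam, eps = 1.0, 1e-6
b = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)   # plain ridge start
for _ in range(30):
    w = 1.0 / (b ** 2 + eps)
    b = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)

support = np.flatnonzero(np.abs(b) > 1e-3)
```

Each iteration is a closed-form weighted ridge solve, which is what makes the approach cheap compared to solving an l0-penalized problem directly.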
Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi
2010-01-01
The purpose of this study is to derive quantitative assessment indicators of the human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least squares method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, with 10 s stationary intervals, with their neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
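The parameter-estimation step can be sketched with a standard recursive least squares update (shown here with a forgetting factor; the paper's fixed trace method instead rescales the covariance each step to keep its trace constant). The torque model, gains and sway signal below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ankle-torque data: tau = KP*theta + KD*theta_dot + noise.
KP_true, KD_true = 12.0, 3.0
dt = 0.01
t = np.arange(500) * dt
theta = 0.1 * np.sin(2 * np.pi * 0.5 * t)     # rad, 0.5 Hz sway
theta_dot = np.gradient(theta, dt)            # rad/s
tau = (KP_true * theta + KD_true * theta_dot
       + 0.001 * rng.standard_normal(t.size))

# Recursive least squares with forgetting factor lam.
lam = 0.99
est = np.zeros(2)                # running estimate of [KP, KD]
P = 1e3 * np.eye(2)              # covariance-like matrix
for k in range(t.size):
    phi = np.array([theta[k], theta_dot[k]])      # regressor
    gain = P @ phi / (lam + phi @ P @ phi)
    est = est + gain * (tau[k] - phi @ est)       # innovation update
    P = (P - np.outer(gain, phi @ P)) / lam
```

Tracking the time course of est (rather than its final value) is what lets the study follow how the gains change during a lean-and-return trial.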
Elastic least-squares reverse time migration with velocities and density perturbation
Qu, Yingming; Li, Jinli; Huang, Jianping; Li, Zhenchun
2018-02-01
Elastic least-squares reverse time migration (LSRTM) based on the non-density-perturbation assumption can generate falsely migrated interfaces caused by density variations. We perform an elastic LSRTM scheme with density variations for multicomponent seismic data to produce high-quality images in the Vp, Vs and ρ components. However, the migrated images may suffer from crosstalk artefacts caused by P- and S-wave coupling in elastic LSRTM, no matter what model parametrization is used. We propose an elastic LSRTM with density variations based on wave-mode separation to reduce these crosstalk artefacts, using P- and S-wave decoupled elastic velocity-stress equations to derive the demigration equations and the gradient formulae with respect to Vp, Vs and ρ. Numerical experiments with synthetic data demonstrate the capability and superiority of the proposed method. The imaging results suggest that our method produces images of higher quality and has a faster residual convergence rate. Sensitivity analysis of migration velocity, migration density and stochastic noise verifies the robustness of the proposed method for field data.
Sequential least-square reconstruction of instantaneous pressure field around a body from TR-PIV
Jeon, Young Jin; Gomit, G.; Earl, T.; Chatellier, L.; David, L.
2018-02-01
A procedure is introduced to obtain the instantaneous pressure field around a wing from time-resolved particle image velocimetry (TR-PIV) and particle image accelerometry (PIA). The instantaneous fields of velocity and material acceleration are provided by the recently introduced multi-frame PIV method, fluid trajectory evaluation based on ensemble-averaged cross-correlation (FTEE). The integration domain is divided into several subdomains in accordance with their local reliability. The near-edge and near-body regions are determined from the recorded image of the wing. The instantaneous wake region is assigned by a combination of a self-defined criterion and binary morphological processes. The pressure is reconstructed by minimizing the difference between measured and reconstructed pressure gradients in a least-squares sense. This is solved sequentially in decreasing order of the reliability of each subdomain, to prevent propagation of error from the less reliable near-body region to the free stream. The procedure is numerically assessed using synthetically generated 2D particle images based on a numerical simulation. Volumetric pressure fields are then evaluated from tomographic TR-PIV of the flow around a 30-degree-inclined NACA0015 airfoil. The possibility of using a different scheme to evaluate material acceleration for a specific subdomain is presented. Moreover, this 3D application allows investigation of the effect of the third component of the pressure gradient, by which the wake region seems to be affected.
Least Squares Estimators for Unit Root Processes with Locally Stationary Disturbance
Directory of Open Access Journals (Sweden)
Junichi Hirukawa
2012-01-01
Full Text Available The random walk is used as a model expressing fairness and efficiency in various financial phenomena. The random walk belongs to the class of unit root processes, which are nonstationary. Due to this nonstationarity, the least squares estimator (LSE) of the random walk does not satisfy asymptotic normality. However, it is well known that the sequence of partial sum processes of a random walk weakly converges to standard Brownian motion; this result is the so-called functional central limit theorem (FCLT). We can derive the limiting distribution of the LSE of a unit root process from the FCLT. The FCLT result has been extended to unit root processes with locally stationary process (LSP) innovations. This model includes two different types of nonstationarity. Since the LSP innovation has a time-varying spectral structure, it is suitable for describing empirical financial time series data. Here we derive the limiting distributions of the LSE of unit root, near unit root and general integrated processes with LSP innovations. The testing problem between unit root and near unit root is also discussed. Furthermore, we suggest two kinds of extensions of the LSE, which include various famous estimators as special cases.
Energy Technology Data Exchange (ETDEWEB)
Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov
2016-02-25
Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares integrating with nonlinear kernel fusion (RLS-KF) algorithm is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with area under precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR), based on 10-fold cross-validation. The performance can further be improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can further be improved by using the recalculated kernel. • Top predictions can be validated by experimental data.
A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.
Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue
2016-07-29
The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters a false-lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design is proposed, utilizing the truncated singular value decomposition method. The algorithm was applied to the BPSK, BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals, and the approximation results of the CCRWs are presented. Furthermore, the performance of the approximation results is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some performance degradation in the tracking jitter compared to the original CCRW discriminator. However, the performance improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals.
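The numerical core of such a design — a least-squares fit solved with a truncated SVD pseudoinverse, which discards the small singular values that would otherwise amplify noise in an ill-conditioned design problem — can be sketched generically. The matrix below is a synthetic ill-conditioned stand-in, not an actual correlator design matrix.

```python
import numpy as np

rng = np.random.default_rng(8)

# Ill-conditioned design matrix (singular values spanning 12 decades),
# as can occur when fitting a reference waveform to a target shape.
Uo, _ = np.linalg.qr(rng.standard_normal((80, 30)))   # reduced QR: 80x30
Vo, _ = np.linalg.qr(rng.standard_normal((30, 30)))
s = np.logspace(0, -12, 30)
A = (Uo * s) @ Vo.T
b = rng.standard_normal(80)

def tsvd_solve(A, b, rcond=1e-6):
    """Least squares via truncated SVD: drop sigma_i < rcond * sigma_max."""
    U, sv, Vt = np.linalg.svd(A, full_matrices=False)
    keep = sv > rcond * sv[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / sv[keep])

x_tsvd = tsvd_solve(A, b)
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # keeps the tiny sigmas
```

The truncated solution trades a small residual increase for a drastically smaller (and thus more noise-robust) coefficient vector.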
Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M
2013-08-29
A number of imaging studies have reported neuroanatomical correlates of human intelligence based on various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively by considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and cortical measurements calculated in native space from each subject to determine how much the combination of various cortical measures explained human intelligence. Since the cortical measures are thought to be not independent but highly inter-related, we applied partial least squares (PLS) regression, one of the most promising multivariate analysis approaches, to overcome multicollinearity among the cortical measures. Our results showed that 30% of FSIQ was explained by the first latent variable extracted from the PLS regression analysis. Although it is difficult to relate the first latent variable to specific anatomy, we found that cortical thickness measures had a substantial impact on the PLS model, supporting it as the most significant factor accounting for FSIQ. Our results strongly suggest that the new predictor combining different morphometric properties of complex cortical structure is well suited for predicting human intelligence.
Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization
Directory of Open Access Journals (Sweden)
José R. Casar
2011-09-01
Full Text Available The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold, and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or simply imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques, based on the standard hyperbolic and circular positioning algorithms, that specifically consider the accuracies of the different measurements to obtain a better estimate of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost, but also achieve greater robustness to inaccuracies in channel modeling.
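The weighted circular-positioning idea can be sketched as linearized multilateration: squared-range equations are differenced against a reference anchor to become linear in the position, and each row is weighted by the accuracy of the ranges involved. The anchor layout, noise levels and the crude weighting rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(9)

# Anchor nodes (e.g., access points) and the true target position.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                    [10.0, 10.0], [5.0, 12.0]])
target = np.array([3.0, 4.0])

# RSS-derived range estimates with per-anchor accuracy sigma_i.
sigma = np.array([0.1, 0.1, 0.5, 1.0, 1.0])
ranges = (np.linalg.norm(anchors - target, axis=1)
          + sigma * rng.standard_normal(len(anchors)))

# Linearized circular positioning relative to anchor 0:
#   2 (a_i - a_0) . p = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2
a0, r0 = anchors[0], ranges[0]
A = 2.0 * (anchors[1:] - a0)
b = r0 ** 2 - ranges[1:] ** 2 + (anchors[1:] ** 2).sum(1) - (a0 ** 2).sum()

# Weighted least squares: rows built from accurate ranges weigh more
# (a crude weight; a proper one would also include the range magnitudes).
W = np.diag(1.0 / (sigma[1:] ** 2 + sigma[0] ** 2))
p_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
p_ols = np.linalg.lstsq(A, b, rcond=None)[0]   # unweighted, for comparison
```

With heterogeneous range accuracies, the weighted estimate typically tracks the target more closely than the unweighted one, which is the effect the abstract reports experimentally.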
Aimran, Ahmad Nazim; Ahmad, Sabri; Afthanorhan, Asyraf; Awang, Zainudin
2017-05-01
Structural equation modeling (SEM) is a second-generation statistical analysis technique developed for analyzing the inter-relationships among multiple variables in a model. Previous studies have shown that there seems to be at least an implicit agreement about the factors that should drive the choice between covariance-based structural equation modeling (CB-SEM) and partial least squares path modeling (PLS-PM). PLS-PM appears to be the method preferred by previous scholars because of its less stringent assumptions and the desire to avoid the perceived difficulties of CB-SEM. Along with this issue there has been increasing debate among researchers on the use of CB-SEM and PLS-PM in studies. The present study assesses the performance of CB-SEM and PLS-PM in a confirmatory study, with findings that will contribute to the body of knowledge on SEM. Maximum likelihood (ML) was chosen as the estimator for CB-SEM and was expected to be more powerful than PLS-PM. Based on a balanced experimental design, multivariate normal data with specified population parameters and sample sizes were generated using Pro-Active Monte Carlo simulation, and the data were analyzed using AMOS for CB-SEM and SmartPLS for PLS-PM. The Comparative Bias Index (CBI), construct relationships, average variance extracted (AVE), composite reliability (CR) and the Fornell-Larcker criterion were used to study the performance of each estimator. The findings conclude that CB-SEM performed notably better than PLS-PM in estimation for large sample sizes (100 and above), particularly in terms of estimation accuracy and consistency.
Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.
2012-08-01
Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. Hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models, and daily CO concentrations have been predicted based on the same four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performance of the models. It was concluded that the errors decrease after size reduction, and the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
Marugán-Lobón, Jesús; Buscalioni, Angela D
2006-01-01
While rostral variation has been the subject of detailed avian evolutionary research, avian skull organization, characterized by a flexed or extended appearance of the skull, has become neglected by mainstream evolutionary inquiries. This study aims to recapture its significance, evaluating possible functional, phylogenetic and developmental factors that may underlie it. In order to estimate which elements of the skull intervene in patterning the skull, and how, we tested the statistical interplay between a series of classical mid-sagittal angular measurements (mostly endocranial) and newly obtained skull metrics based on landmark superimposition methods (exclusively exocranial shape), by means of the statistical-morphometric technique of two-block partial least squares. As classic literature anticipated, we found that the external appearance of the skull corresponds to the way in which the plane of the caudal cranial base is oriented, in connection with the orientations of the plane of the foramen magnum and of the lateral semicircular canal. The pattern of covariation found between metrics conveys flexed or extended appearances of the skull implicitly within a single, statistically significant dimension of covariation. Marked shape changes with which the angles covary concentrate at the supraoccipital bone, the cranial base and the antorbital window, whereas the plane measuring the orientation of the anterior portion of the rostrum does not intervene. Statistical covariance between elements of the caudal cranial base and the occiput implies that morphological integration underlies avian skull macroevolutionary organization as a by-product of the regional concordance of such correlated elements within the early embryonic chordal domain of mesodermic origin.
Soil Salinity Retrieval from Advanced Multi-Spectral Sensor with Partial Least Square Regression
Directory of Open Access Journals (Sweden)
Xingwang Fan
2015-01-01
Full Text Available Improper use of land resources may result in severe soil salinization. Timely monitoring and early warning of soil salinity are urgently needed for sustainable development. This paper addresses the possibility and potential of the Advanced Land Imager (ALI) for mapping soil salinity. In situ field spectra and soil salinity data were collected in the Yellow River Delta, China. Statistical analysis demonstrated the importance of the ALI blue and near-infrared (NIR) bands for soil salinity. A partial least squares regression (PLSR) model was established between soil salinity and ALI-convolved field spectra. The model estimated soil salinity with an R2 (coefficient of determination), RPD (ratio of prediction to deviation), bias, standard deviation (SD) and root mean square error (RMSE) of 0.749, 3.584, 0.036 g∙kg−1, 0.778 g∙kg−1 and 0.779 g∙kg−1, respectively. The model was then applied to atmospherically corrected ALI data. Soil salinity was underestimated for moderately saline (soil salinity within 2–4 g∙kg−1) and highly saline (soil salinity >4 g∙kg−1) soils. The underestimates increased with the degree of soil salinization, with a maximum value of ~4 g∙kg−1. The major contribution to the underestimation (>80%) may result from data inaccuracy rather than model ineffectiveness. Uncertainty analysis confirmed that improper atmospheric correction contributed a rather conservative uncertainty of 1.3 g∙kg−1. Field sampling within remote sensing pixels was probably the major source responsible for the underestimation. Our study demonstrates the effectiveness of the PLSR model in retrieving soil salinity from new-generation multi-spectral sensors, which is very valuable for achieving worldwide soil salinity mapping with low cost and considerable accuracy.
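A single-response PLSR calibration of the kind described above can be sketched with the NIPALS algorithm. The rank-one "spectra" and salinity coefficients below are synthetic and chosen so that a one-component model fits exactly; a real calibration would use measured band reflectances and report RPD from cross-validated prediction error:

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal single-response PLS regression via NIPALS with deflation."""
    X = np.array(X, dtype=float)
    y = np.array(y, dtype=float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    X -= x_mean
    y = y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)          # weight vector
        t = X @ w                       # scores
        tt = t @ t
        p = X.T @ t / tt                # loadings
        qk = (y @ t) / tt
        X -= np.outer(t, p)             # deflate X
        y = y - qk * t                  # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients
    return x_mean, y_mean, B

# Idealized rank-one "field spectra": one PLS component captures everything.
rng = np.random.default_rng(1)
driver = rng.normal(size=50)                      # latent salinity driver (invented)
spectra = np.outer(driver, rng.normal(size=12))   # 50 samples x 12 bands
salinity = 1.5 * driver + 2.0
xm, ym, B = pls1(spectra, salinity, n_components=1)
pred = (spectra - xm) @ B + ym
r2 = 1.0 - np.sum((salinity - pred) ** 2) / np.sum((salinity - salinity.mean()) ** 2)
```

In practice the number of components is selected by cross-validation rather than known in advance.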
Directory of Open Access Journals (Sweden)
Man Zhu
2017-03-01
Full Text Available Determination of ship maneuvering models is a key task in ship maneuverability prediction. Among the principal approaches to estimating ship maneuvering models, system identification combined with full-scale or free-running model tests is preferred. In this contribution, real-time system identification programs using recursive identification methods, such as recursive least squares (RLS), are employed for on-line identification of ship maneuvering models. However, this method depends heavily on the objects of study and the initial values of the identified parameters. To overcome this, an intelligent technique, support vector machines (SVM), is first used to estimate initial values of the identified parameters from finite samples. As real measured motion data of the Mariner class ship always involve noise from sensors and external disturbances, the zigzag simulation test data include a substantial quantity of Gaussian white noise. The wavelet method and empirical mode decomposition (EMD) are used to filter the noise-corrupted data, respectively. The choice of the number of samples for SVM to determine initial values of the identified parameters is extensively discussed and analyzed. With de-noised motion data as input-output training samples, parameters of the ship maneuvering models are estimated using RLS and SVM-RLS, respectively. The comparison between identification results and the true parameter values demonstrates that the identified ship maneuvering models from both RLS and SVM-RLS show reasonable agreement with simulated motions of the ship, and that increasing the number of samples for SVM positively affects the identification results. Furthermore, SVM-RLS using data de-noised by EMD shows the highest accuracy and best convergence.
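The RLS recursion at the core of the approach above has a standard form: a gain computed from the information matrix, a correction by the prediction error, and a covariance update. The sketch below uses an invented two-parameter linear "maneuvering model" rather than the actual hydrodynamic coefficients:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)           # gain vector
    theta = theta + k * (y - phi @ theta)   # correct by the prediction error
    P = (P - np.outer(k, Pphi)) / lam       # covariance update
    return theta, P

# Toy linear model y = phi . theta with invented coefficients.
theta_true = np.array([0.8, -0.4])
theta = np.zeros(2)                         # a poor initial guess; SVM would supply
P = 1e3 * np.eye(2)                         # a better one in the SVM-RLS scheme
rng = np.random.default_rng(2)
for _ in range(200):
    phi = rng.normal(size=2)                # regressor, e.g. lagged motion variables
    theta, P = rls_update(theta, P, phi, phi @ theta_true)
```

Replacing the zero initial `theta` with an SVM-based estimate is what the abstract calls SVM-RLS; the recursion itself is unchanged.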
Li, Jiangtong; Luo, Yongdao; Dai, Honglin
2018-01-01
Water is the source of life and the essential foundation of all life. With industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) is becoming the predominant technology; however, in some special cases PLSR produces considerable errors. To solve this problem, the traditional principal component regression (PCR) method was improved in this paper using the principle of PLSR. The experimental results show that for some special experimental data sets, the improved PCR method performs better than PLSR. PCR and PLSR are the focus of this paper. First, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, optimized principal components that carry most of the original data information are extracted using the principle of PLSR. Second, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral data set is calculated by both PLSR and the improved PCR, and the two results are analyzed and compared: the improved PCR and PLSR are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can therefore be used in UV spectral analysis of water, with the improved PCR preferred near the detection limit.
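Standard PCR, the baseline that the paper modifies, projects the centered spectra onto leading principal components and regresses the response on the scores. A minimal numpy sketch on synthetic low-rank "absorbance" data (the paper's improved component selection via the PLSR principle is not reproduced here):

```python
import numpy as np

def pcr(X, y, n_components):
    """Principal component regression: SVD truncation, then OLS on the scores."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                  # retained principal directions
    gamma, *_ = np.linalg.lstsq(Xc @ V, y - y_mean, rcond=None)
    return x_mean, y_mean, V @ gamma         # coefficients in the original space

# Synthetic rank-three "absorbance" data: three components suffice exactly.
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 10))
y = X @ rng.normal(size=10) + 1.0
xm, ym, coef = pcr(X, y, n_components=3)
pred = (X - xm) @ coef + ym
```

PCR selects components by explained variance in X alone; the paper's improvement is to pick components using their relevance to y, as PLSR does.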
Directory of Open Access Journals (Sweden)
Vasileios A. Tzanakakis
2014-12-01
Full Text Available Partial Least Squares Regression (PLSR) can integrate a great number of variables and overcome collinearity problems, which makes it suitable for intensive agronomical practices such as land application. In the present study a PLSR model was developed to predict important management goals, including biomass production and nutrient recovery (i.e., nitrogen and phosphorus), associated with treatment potential, environmental impacts, and economic benefits. Effluent loading and a considerable number of soil parameters commonly monitored in effluent-irrigated lands were considered as potential predictor variables during model development. All data were derived from a three-year field trial including plantations of four different plant species (Acacia cyanophylla, Eucalyptus camaldulensis, Populus nigra, and Arundo donax) irrigated with pre-treated domestic effluent. The PLSR method was very effective despite the small sample size and the wide nature of the data set (with many highly correlated inputs and several highly correlated responses). Through the PLSR method the number of initial predictor variables was reduced, and only a few variables remained and were included in the final PLSR model. The important input variables retained were: effluent loading, electrical conductivity (EC), available phosphorus (Olsen-P), Na+, Ca2+, Mg2+, K+, SAR, and NO3−-N. Among these variables, effluent loading, EC, and nitrates had the greatest contribution to the final PLSR model. PLSR is highly compatible with intensive agronomical practices such as land application, in which a large number of highly collinear and noisy input variables are monitored to assess plant species performance and to detect impacts on the environment.
Rumondor, Alfred C F; Taylor, Lynne S
2010-10-15
Among the different experimental methods that can be used to quantify the evolution of drug crystallinity in polymer-containing amorphous solid dispersions, powder X-ray diffractometry (PXRD) is commonly considered a frontline method. In order to achieve accurate quantification of the percent drug crystallinity in the system, calibration curves have to be constructed using appropriate calibration samples and calculation methods. This can be non-trivial in the case of partially crystalline solid dispersions, where the calibration samples must capture the multiphase nature of the systems and the mathematical model must be robust enough to accommodate subtle and not-so-subtle changes in the diffractograms. The purpose of this study was to compare two different calculation and model-building methods to quantify the proportion of crystalline drug in amorphous solid dispersions containing different ratios of drug and amorphous polymer. The first method predicts the % drug crystallinity from the ratio of the area underneath the Bragg peaks to the total area of the diffractogram. The second method is multivariate analysis using a Partial Least-Squares (PLS) multivariate regression method. It was found that PLS analysis provided far better accuracy and prediction of % drug crystallinity in the sample. Through the application of PLS, root-mean-squared error of estimation (RMSEE) values of 2.2%, 1.9%, and 4.7% drug crystallinity were achieved for samples containing 25%, 50%, and 75% polymer, respectively, compared to values of 11.2%, 17.0%, and 23.6% for the area model. In addition, construction of a PLS model enables further analysis of the data, including identification of outliers and non-linearity in the data, as well as insight, through analysis of the loadings, into which factors are most important in correlating PXRD diffractograms with % crystallinity of the drug. Copyright 2010 Elsevier B.V. All rights reserved.
Eliminating negative VTEC in global ionosphere maps using inequality-constrained least squares
Zhang, Hongping; Xu, Peiliang; Han, Wenhui; Ge, Maorong; Shi, Chuang
2013-03-01
Currently, ground-based Global Navigation Satellite System (GNSS) stations of the International GNSS Service (IGS) are distributed unevenly around the world. Most of them are located on the mainland, while only a few are scattered on islands in the oceans. As a consequence, many unreasonable zero values (in fact negative values) appear in the Vertical Total Electron Content (VTEC) of European Space Agency (ESA) and Center for Orbit Determination in Europe (CODE) IONEX products, especially in 2008 and 2009 when solar activity was rather quiet. To improve this situation, we directly implement non-negative physical constraints on the ionosphere for global ionosphere maps (GIM) with spherical harmonic functions. Mathematically, we propose an inequality-constrained least squares method that imposes non-negative inequality constraints in the areas where negative VTEC values may occur to reconstruct GIM models. We then apply the new method to process the IGS data of 2008. The results show that the new algorithm efficiently eliminates the unwanted negative VTEC values, which can otherwise often be seen in the current CODE and ESA GIM products in both middle and high latitude areas of the Southern Hemisphere (45°S–90°S) and the Northern Hemisphere (50°N–90°N). About 64% of GPS receivers' DCBs have been significantly improved. Finally, we compare the GIM results obtained with and without the inequality constraints, which clearly shows that the result with inequality constraints is significantly better. The inequality-constrained GIM result is also highly consistent with the final IGS products in terms of root mean squared (RMS) error and mean VTEC.
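Imposing nonnegativity in a least squares fit, as done above for VTEC, turns the problem into a constrained one. The sketch below solves the simpler textbook case, min ||Ax - b||^2 subject to x >= 0, by projected gradient descent; note the paper constrains the mapped VTEC values over selected regions, which corresponds to general linear inequality constraints on the spherical harmonic coefficients rather than the simple bounds shown here:

```python
import numpy as np

def nonneg_lstsq(A, b, n_iter=5000):
    """Projected gradient for min ||Ax - b||^2 subject to x >= 0."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - step * grad, 0.0)    # gradient step, then project
    return x

# Invented well-posed problem whose true solution has some coefficients at zero.
rng = np.random.default_rng(4)
A = rng.normal(size=(30, 5))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])
x_hat = nonneg_lstsq(A, A @ x_true)
```

Production solvers would use an active-set or interior-point method; the projection step is the part that enforces the physical constraint.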
[MEG]PLS: A pipeline for MEG data analysis and partial least squares statistics.
Cheung, Michael J; Kovačević, Natasa; Fatima, Zainab; Mišić, Bratislav; McIntosh, Anthony R
2016-01-01
The emphasis of modern neurobiological theories has recently shifted from the independent function of brain areas to their interactions in the context of whole-brain networks. As a result, neuroimaging methods and analyses have also increasingly focused on network discovery. Magnetoencephalography (MEG) is a neuroimaging modality that captures neural activity with a high degree of temporal specificity, providing detailed, time-varying maps of neural activity. Partial least squares (PLS) analysis is a multivariate framework that can be used to isolate distributed spatiotemporal patterns of neural activity that differentiate groups or cognitive tasks, to relate neural activity to behavior, and to capture large-scale network interactions. Here we introduce [MEG]PLS, a MATLAB-based platform that streamlines MEG data preprocessing, source reconstruction and PLS analysis in a single unified framework. [MEG]PLS facilitates MRI preprocessing, including segmentation and coregistration; MEG preprocessing, including filtering, epoching, and artifact correction; MEG sensor analysis, in both the time and frequency domains; and MEG source analysis, including multiple head models and beamforming algorithms. It combines these with a suite of PLS analyses. The pipeline is open-source and modular, utilizing functions from FieldTrip (Donders, NL), AFNI (NIMH, USA), SPM8 (UCL, UK) and PLScmd (Baycrest, CAN), which are extensively supported and continually developed by their respective communities. [MEG]PLS is flexible, providing both a graphical user interface and command-line options, depending on the needs of the user. A visualization suite allows multiple types of data and analyses to be displayed and includes 4-D montage functionality. [MEG]PLS is freely available under the GNU public license (http://meg-pls.weebly.com). Copyright © 2015 Elsevier Inc. All rights reserved.
Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction
Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan
2009-01-01
Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernel calculation. The work develops general multi-dimensional least squares nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian patterns. Performance assessments are made by comparing the LS-NUFFT based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer-simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that, for a particular conventional kernel function, using its corresponding deapodization function as the scaling factor within the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than the Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and
Boccard, Julien; Rudaz, Serge
2016-05-12
Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables while only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the reliability of the ANOVA decomposition; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect; and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. Copyright © 2016 Elsevier B.V. All rights reserved.
Rebillat, Marc; Schoukens, Maarten
2018-05-01
Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and easy to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters, so that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
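The linear-in-parameters property mentioned above means a PHM (static polynomial branches, each followed by an FIR filter) can be estimated by stacking delayed powers of the input into a regression matrix and solving a regularized LS problem. The sketch below uses a plain ridge penalty as a simplified stand-in for the kernel-based regularization the article develops, with invented branch coefficients:

```python
import numpy as np

def hammerstein_regressor(u, order, mem):
    """Columns u[n-j]**k for k = 1..order, j = 0..mem-1 (linear-in-parameters form)."""
    n = len(u)
    cols = []
    for k in range(1, order + 1):
        v = u ** k                      # static branch nonlinearity
        for j in range(mem):
            col = np.zeros(n)
            col[j:] = v[:n - j]         # delayed copy feeding the branch FIR filter
            cols.append(col)
    return np.column_stack(cols)

rng = np.random.default_rng(5)
u = rng.normal(size=400)
Phi = hammerstein_regressor(u, order=2, mem=3)
theta_true = np.array([0.9, 0.3, -0.2, 0.5, 0.1, 0.05])  # two branches x 3 FIR taps
y = Phi @ theta_true                                     # noise-free output

# Ridge-regularized LS solve; lam is a made-up regularization weight.
lam = 1e-6
theta_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
```

With noisy data, structured kernels on the impulse responses (rather than the identity penalty used here) are what make the regularized LS approach competitive.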
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including ineffectiveness at handling non-Gaussian noise and sensitivity to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
Identifying model error in metabolic flux analysis - a generalized least squares approach.
Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G
2016-09-13
The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
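Framing MFA as generalized least squares means weighting the residuals by the inverse measurement covariance; the resulting parameter covariance is what enables the t-tests discussed above. A toy sketch with an invented design matrix and heteroscedastic errors (not an actual stoichiometric model):

```python
import numpy as np

def gls(A, b, cov):
    """Generalized least squares: x = (A' W A)^-1 A' W b with W = inv(cov)."""
    W = np.linalg.inv(cov)
    lhs = A.T @ W @ A
    x = np.linalg.solve(lhs, A.T @ W @ b)
    x_cov = np.linalg.inv(lhs)   # parameter covariance, the basis for t-tests
    return x, x_cov

# Toy overdetermined system with heteroscedastic measurement errors.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
cov = np.diag([0.1, 0.1, 1.0, 1.0])
x_true = np.array([1.0, 2.0])
x_hat, x_cov = gls(A, A @ x_true, cov)
t_stats = x_hat / np.sqrt(np.diag(x_cov))   # large |t| suggests a significant estimate
```

Comparing such t-statistics between real data and model-simulated "ideal" data is the paper's route to separating measurement error from lack of model fit.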
Bayesian inference for data assimilation using Least-Squares Finite Element methods
International Nuclear Information System (INIS)
Dwight, Richard P
2010-01-01
It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow. The approach was shown to be effective without regularization terms and can handle substantial noise in the experimental data without filtering. Of great practical importance is that, unlike other data assimilation techniques, it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse-problem framework, so that, for example, the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling errors in the simulation code do not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. The Bayesian interpretation also better justifies the choice of method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.
A method based on moving least squares for XRII image distortion correction
International Nuclear Information System (INIS)
Yan Shiju; Wang Chengtao; Ye Ming
2007-01-01
This paper presents a novel integrated method to correct geometric distortions of XRII (x-ray image intensifier) images. The method has been compared, in terms of the mean-squared residual error measured at control and intermediate points, with two traditional local methods and a traditional global method. The proposed method is based on moving least squares (MLS) and polynomial fitting. Extensive experiments were performed on simulated and real XRII images. In simulation, the effects of pincushion distortion, sigmoidal distortion, local distortion, noise, and the number of control points were tested. The traditional local methods were sensitive to pincushion and sigmoidal distortion. The traditional global method was only sensitive to sigmoidal distortion. The proposed method was sensitive to neither pincushion nor sigmoidal distortion. Its sensitivity to local distortion was lower than or comparable with that of the traditional global method. Its sensitivity to noise was higher than that of all three traditional methods; nevertheless, provided the standard deviation of the noise was not greater than 0.1 pixels, the accuracy of the proposed method was still higher than that of the traditional methods. The sensitivity of the proposed method to the number of control points was much lower than that of the traditional methods. Provided that a proper cutoff radius is chosen, the accuracy of the proposed method is higher than that of the traditional methods. Experiments on real images, carried out using a 9 in. XRII, showed that the residual error of the proposed method (0.2544±0.2479 pixels) is lower than that of the traditional global method (0.4223±0.3879 pixels) and the local methods (0.4555±0.3518 pixels and 0.3696±0.4019 pixels, respectively)
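The core MLS idea above is a locally weighted polynomial fit: for each image point, nearby control pairs get large weights and the corrected position comes from the resulting local map. A minimal 2-D sketch with an affine basis and Gaussian weights (the grid, the distortion matrix, and the bandwidth `h` are invented; the distortion is purely affine so the fit is exact):

```python
import numpy as np

def mls_correct(ctrl_obs, ctrl_true, pts, h=30.0):
    """Moving least squares: a locally weighted affine fit around every query point."""
    basis = np.column_stack([np.ones(len(ctrl_obs)), ctrl_obs])    # [1, x, y]
    out = np.zeros((len(pts), 2))
    for i, p in enumerate(pts):
        w = np.exp(-np.sum((ctrl_obs - p) ** 2, axis=1) / h ** 2)  # Gaussian weights
        Bw = basis * w[:, None]
        coef = np.linalg.solve(Bw.T @ basis, Bw.T @ ctrl_true)     # weighted normal eqs
        out[i] = np.array([1.0, p[0], p[1]]) @ coef
    return out

# Synthetic control grid; observed = distorted coordinates, true = ideal grid.
gx, gy = np.meshgrid(np.linspace(0.0, 100.0, 6), np.linspace(0.0, 100.0, 6))
ctrl_true = np.column_stack([gx.ravel(), gy.ravel()])
M = np.array([[1.02, 0.01], [-0.01, 0.98]])
shift = np.array([2.0, -1.0])
ctrl_obs = ctrl_true @ M + shift
pts = np.array([[50.0, 50.0], [10.0, 80.0]])
corrected = mls_correct(ctrl_obs, ctrl_true, pts)
expected = (pts - shift) @ np.linalg.inv(M)
```

Real pincushion or sigmoidal distortion is not globally affine, which is exactly why the moving (per-point) fit outperforms a single global polynomial.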
Current identification in vacuum circuit breakers as a least squares problem*
Directory of Open Access Journals (Sweden)
Ghezzi Luca
2013-01-01
Full Text Available In this work, a magnetostatic inverse problem is solved in order to reconstruct the electric current distribution inside high-voltage vacuum circuit breakers from measurements of the outside magnetic field. The (rectangular) final algebraic linear system is solved in the least squares sense, using a regularized singular value decomposition of the system matrix. An approximate distribution of the electric current is thus returned, without the theoretical problem encountered with optical methods of matching light to temperature and finally to current density. The feasibility is justified from the computational point of view, as the (industrial) goal is to evaluate whether, or to what extent in terms of accuracy, a given experimental set-up (number and noise level of sensors) is adequate to work as a “magnetic camera” for a given circuit breaker.
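Regularization by truncated singular value decomposition, as used above, simply discards the small singular values that would amplify sensor noise. A minimal sketch on an invented rank-deficient "sensor" matrix with consistent, noise-free measurements:

```python
import numpy as np

def tsvd_solve(A, b, rel_tol=1e-8):
    """Least squares via truncated SVD: discard singular values below rel_tol * s_max."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

rng = np.random.default_rng(6)
A = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 6))   # rank-deficient system matrix
x = rng.normal(size=6)
b = A @ x                                               # consistent "measurements"
x_hat = tsvd_solve(A, b)
```

With noisy data the truncation threshold (or a smoother Tikhonov filter) trades resolution of the current distribution against noise amplification, which is the "magnetic camera" adequacy question the abstract raises.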
The characterization of the infrasonic noise field and its effects on least squares estimation
Galbraith, Joseph
Localization of the source of an acoustic wave propagating through the atmosphere is not a new problem. Location methods date back to World War I, when sound location was used to determine enemy artillery positions. Since the drafting of the Comprehensive Nuclear-Test-Ban Treaty in 1996 there has been increased interest in the accurate location of distant sources using infrasound. A standard method of acoustic source location is triangulation of the source from multi-array back azimuth estimates. For waves traveling long distances through the atmosphere, the most appropriate method of estimating the back azimuth is the least squares estimate (LSE). Under the assumption of an acoustic signal corrupted with additive Gaussian, white, uncorrelated noise, the LSE is theoretically the minimum-variance, unbiased estimate of the slowness vector. The infrasonic noise field present at most arrays is known to violate the assumption of white, uncorrelated noise. The following work characterizes the noise field at two infrasound arrays operated by the University of Alaska Fairbanks. The power distribution and coherence of the noise fields were determined from atmospheric pressure measurements collected from 2003-2006. The estimated power distribution and coherence of the noise field were not those of the white, uncorrelated noise field assumed in the analytic derivation of the LSE of the slowness vector. The performance of the LSE of azimuth and trace velocity under the empirically derived noise field was numerically compared to its performance under the standard noise assumptions. The effect of violating the correlation assumption was also investigated. The inclusion of clutter in the noise field introduced a dependence of the performance of the LSE on the relative signal amplitude. If the signal-to-clutter ratio was above 10 dB, the parameter estimates made with the correlated noise field were comparable to the estimates made with uncorrelated noise. From the results of these numerical
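The LSE of the slowness vector discussed above fits a plane-wave model to the relative arrival times across the array; back azimuth and trace velocity then follow from the fitted vector. A noise-free sketch with an invented array geometry (real estimates would use cross-correlation delays and reflect the noise-field issues the thesis studies):

```python
import numpy as np

# Sensor coordinates (east-north, km) and a plane wave with known back azimuth.
r = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.5, 1.2], [-0.8, 0.6]])
az_true = 60.0                       # back azimuth in degrees, clockwise from north
c = 0.34                             # trace velocity, km/s (typical of infrasound)
a = np.deg2rad(az_true)
s_true = -np.array([np.sin(a), np.cos(a)]) / c   # slowness vector, s/km
tau = r @ s_true                     # noise-free arrival times relative to the origin

# Least squares estimate of the slowness vector from the arrival times (tau ~ r @ s).
s_hat, *_ = np.linalg.lstsq(r, tau, rcond=None)
az_hat = np.rad2deg(np.arctan2(-s_hat[0], -s_hat[1])) % 360.0
v_hat = 1.0 / np.linalg.norm(s_hat)  # trace velocity estimate
```

With correlated or clutter-dominated noise, the ordinary LS fit above is no longer minimum-variance, which is the thesis's central point.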
Energy Technology Data Exchange (ETDEWEB)
Mouton, Nicolas; Devos, Olivier; Sliwa, Michel [Université Lille-Nord de France, LASIR, CNRS-UMR 8516, F-59655 Villeneuve d‘Ascq (France); Juan, Anna de [Departament de Química Analítica, Facultat de Química, Martí i Franquès, 1-11, E-08028 Barcelona (Spain); Ruckebusch, Cyril, E-mail: Cyril.ruckebusch@univ-lille1.fr [Université Lille-Nord de France, LASIR, CNRS-UMR 8516, F-59655 Villeneuve d‘Ascq (France)
2013-07-25
Highlights: • Femtosecond transient absorption spectroscopy investigating a complex photodynamic scheme. • Combining experiments obtained from two photo-active systems with complementary pathways. • Multiset hybrid hard- and soft-multivariate curve resolution incorporating reaction quantum yields.
Mouton, Nicolas; Devos, Olivier; Sliwa, Michel; de Juan, Anna; Ruckebusch, Cyril
2013-07-25
The main advantage of the multivariate curve resolution - alternating least squares method (MCR-ALS) is its ability to act as a multiset analysis method, combining data coming from different experiments to provide a complete and more accurate description of a chemical system. Exploiting this multiset side, the combination of experiments obtained from two photo-active systems with complementary pathways and monitored by femtosecond UV-vis transient absorption spectroscopy is presented in this work. A multiset hard- and soft-multivariate curve resolution model (HS-MCR) was built, allowing the description of the spectrokinetic features of the entire system. Additionally, reaction quantum yields were incorporated in the hard model in order to describe branching ratios for intermediate species. The photodynamics of salicylidene aniline (SA) was investigated as a case study. The overall reaction scheme involves two competitive, parallel pathways. On the one hand, a photoinduced excited-state intramolecular proton transfer (ESIPT) followed by a cis-trans isomerization leads to the so-called photochromic form of the molecule, which absorbs in the visible. The formation of the photochromic species is well characterized in the literature. On the other hand, a competing reaction, a complex internal rotation of the molecule, takes place. The rotation mechanism is based on a trans-cis isomerization. This work aimed at providing a detailed spectrokinetic characterization of both reaction pathways for SA. For this purpose, the photodynamics of two molecules of identical parent structures and different substituent patterns were investigated by femtosecond transient absorption spectroscopy. For SA, the mechanism described above involving the two parallel pathways was observed, whereas for the derivative form of SA, the photochromic reaction was blocked because of the replacement of an H atom by a methyl group. The application of MCR approaches made it possible to obtain transient
Energy Technology Data Exchange (ETDEWEB)
Boccard, Julien, E-mail: julien.boccard@unige.ch; Rudaz, Serge
2016-05-12
Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. - Highlights: • A new method is proposed for the analysis of Omics data generated using design of
Least-squares Migration and Full Waveform Inversion with Multisource Frequency Selection
Huang, Yunsong
2013-09-01
Multisource Least-Squares Migration (LSM) of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. But for the marine acquisition geometry this approach faces the challenge of erroneous misfit due to the mismatch between the limited number of live traces/shot recorded in the field and the much larger number of traces generated by the finite-difference modeling method. To tackle this mismatch problem, I present a frequency selection strategy with LSM of supergathers. The key idea is, at each LSM iteration, to assign a unique frequency band to each shot gather, so that the spectral overlap among those shots—and therefore their crosstalk—is zero. Consequently, each receiver can unambiguously identify and then discount the superfluous sources—those that are not associated with the receiver in marine acquisition. To compare with standard migration, I apply the proposed method to the 2D SEG/EAGE salt model and obtain better resolved images computed at about 1/8 the cost; results for the 3D SEG/EAGE salt model, with an Ocean Bottom Seismometer (OBS) survey, show a speedup of 40×. This strategy is next extended to multisource Full Waveform Inversion (FWI) of supergathers for marine streamer data, with the same advantages of computational efficiency and storage savings. In the Finite-Difference Time-Domain (FDTD) method, to mitigate spectral leakage due to delayed onsets of sine waves detected at receivers, I double the simulation time and retain only the second half of the simulated records. To compare with standard FWI, I apply the proposed method to a 2D velocity model of the SEG/EAGE salt and to Gulf of Mexico (GOM) field data, and obtain speedups of about 4× and 8×. Formulas are then derived for the resolution limits of various constituent wavepaths pertaining to FWI: diving waves, primary reflections, diffractions, and multiple reflections. They suggest that inverting multiples can provide some low and intermediate
An application of partial least squares for identifying dietary patterns in bone health.
Yang, Tiffany C; Aucott, Lorna S; Duthie, Garry G; Macdonald, Helen M
2017-12-01
In a large cohort of older women, a mechanism-driven statistical technique for assessing dietary patterns that considers a potential nutrient pathway found two dietary patterns associated with lumbar spine and femoral neck bone mineral density. A "healthy" dietary pattern was observed to be beneficial for bone mineral density. Dietary patterns represent a broader, more realistic representation of how foods are consumed, compared to individual food or nutrient analyses. Partial least-squares (PLS) is a data-reduction technique for identifying dietary patterns that maximizes the correlation between foods and nutrients hypothesized to be on the path to disease; it is more hypothesis-driven than previous methods and has not been applied to the study of dietary patterns in relation to bone health. Women from the Aberdeen Prospective Osteoporosis Screening Study (2007-2011, n = 2129, age = 66 years (2.2)) provided dietary intake data using a food frequency questionnaire; 37 food groups were created. We applied PLS to the 37 food groups and 9 chosen response variables (calcium, potassium, vitamin C, vitamin D, protein, alcohol, magnesium, phosphorus, zinc) to identify dietary patterns associated with bone mineral density (BMD) cross-sectionally. Multivariable regression was used to assess the relationship between the retained dietary patterns and BMD at the lumbar spine and femoral neck, adjusting for age, body mass index, physical activity level, smoking, and national deprivation category. Five dietary patterns were identified, explaining 25% of the variation in food groups and 77% in the response variables. Two dietary patterns were positively associated with lumbar spine (per unit increase in factor 2: 0.012 g/cm² [95% CI: 0.006, 0.01]; factor 4: 0.007 g/cm² [95% CI: 0.00001, 0.01]) and femoral neck (factor 2: 0.006 g/cm² [95% CI: 0.002, 0.01]; factor 4: 0.008 g/cm² [95% CI: 0.003, 0.01]) BMD. Dietary pattern 2 was characterized by high intakes of milk
Kernelized partial least squares for feature reduction and classification of gene microarray data
Directory of Open Access Journals (Sweden)
Land Walker H
2011-12-01
Abstract Background The primary objectives of this paper are: 1. to apply Statistical Learning Theory (SLT), specifically Partial Least Squares (PLS) and Kernelized PLS (K-PLS), to the universal "feature-rich/case-poor" (also known as "large p small n", or "high-dimension, low-sample size") microarray problem by eliminating those features (or probes) that do not contribute to the "best" chromosome bio-markers for lung cancer, and 2. to quantitatively measure and verify (by an independent means) the efficacy of this PLS process. A secondary objective is to integrate these significant improvements in diagnostic and prognostic biomedical applications into the clinical research arena. That is, to devise a framework for converting SLT results into direct, useful clinical information for patient care or pharmaceutical research. We therefore propose, and preliminarily evaluate, a process whereby PLS, K-PLS, and Support Vector Machines (SVM) may be integrated with the accepted and well-understood traditional biostatistical "gold standard", the Cox Proportional Hazard model and Kaplan-Meier survival analysis methods. Specifically, this new combination will be illustrated with both PLS and Kaplan-Meier followed by PLS and Cox Hazard Ratios (CHR), and can be easily extended to both the K-PLS and SVM paradigms. Finally, these previously described processes are contained in the Fine Feature Selection (FFS) component of our overall feature reduction/evaluation process, which consists of the following components: 1. coarse feature reduction, 2. fine feature selection and 3. classification (as described in this paper) and prediction. Results Our results for PLS and K-PLS showed that these techniques, as part of our overall feature reduction process, performed well on noisy microarray data. The best performance was a good 0.794 Area Under a Receiver Operating Characteristic (ROC) Curve (AUC) for classification of recurrence prior to or after 36 months and a strong 0.869 AUC for
Directory of Open Access Journals (Sweden)
Kim Chang
2007-07-01
Abstract Background A reverse engineering of a gene regulatory network with a large number of genes and a limited number of experimental data points is a computationally challenging task. In particular, reverse engineering using linear systems is an underdetermined and ill-conditioned problem, i.e. the amount of microarray data is limited and the solution is very sensitive to noise in the data. Therefore, the reverse engineering of gene regulatory networks with a large number of genes and a limited number of data points requires a rigorous optimization algorithm. Results This study presents a novel algorithm for reverse engineering with linear systems. The proposed algorithm is a combination of orthogonal least squares, second-order derivatives for network pruning, and Bayesian model comparison. In this study, the entire network is decomposed into a set of small networks that are defined as unit networks. The algorithm provides each unit network with P(D|Hi), which is used as a confidence level. A unit network with a higher P(D|Hi) has a higher confidence that the unit network is correctly elucidated. Thus, the proposed algorithm is able to locate true positive interactions using P(D|Hi), which is a unique property of the proposed algorithm. The algorithm is evaluated with synthetic and Saccharomyces cerevisiae expression data using the dynamic Bayesian network. With synthetic data, it is shown that the performance of the algorithm depends on the number of genes, the noise level, and the number of data points. With yeast expression data, it is shown that there is a remarkable number of known physical or genetic events among all interactions elucidated by the proposed algorithm. The performance of the algorithm is compared with the Sparse Bayesian Learning algorithm using both synthetic and Saccharomyces cerevisiae expression data sets. The comparison experiments show that the algorithm produces sparser solutions with fewer false positives than Sparse Bayesian
Global Least-Squares Analysis of the IR Rotation-Vibration Spectrum of HCl
Tellinghuisen, Joel
2005-01-01
Many data-analysis problems can be addressed in different ways, ranging from a series of related "local" fitting problems to a single comprehensive "global" analysis. The global approach has become a powerful one for fitting data to moderately complex models by using library functions, and the methods are illustrated for the analysis of the HCl IR…
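The local-versus-global distinction above can be shown with a toy example (not the article's actual HCl analysis): two simulated datasets share one physical parameter, and fitting them jointly in a single least-squares problem lets all the data constrain the shared parameter at once. The model and numbers below are invented for illustration, with scipy assumed available:

```python
import numpy as np
from scipy.optimize import least_squares

# two simulated "spectra" sharing a common slope (the global parameter)
x = np.linspace(0.0, 10.0, 50)
y1 = 2.0 * x + 1.0                 # dataset 1: shared slope, local intercept 1.0
y2 = 2.0 * x + 3.0                 # dataset 2: shared slope, local intercept 3.0

def residuals(p):
    slope, b1, b2 = p              # slope is global; b1, b2 are local parameters
    return np.concatenate([slope * x + b1 - y1,
                           slope * x + b2 - y2])

# one global fit recovers the shared slope and both local intercepts
fit = least_squares(residuals, x0=[1.0, 0.0, 0.0])
print(fit.x)
```

Fitting each dataset separately would instead produce two independent slope estimates, with no guarantee of consistency between them.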
Energy Technology Data Exchange (ETDEWEB)
Bloechle, B.; Manteuffel, T.; McCormick, S.; Starke, G.
1996-12-31
Many physical phenomena are modeled as scalar second-order elliptic boundary value problems with discontinuous coefficients. The first-order system least-squares (FOSLS) methodology is an alternative to standard mixed finite element methods for such problems. The occurrence of singularities at interface corners and cross-points requires that care be taken when implementing the least-squares finite element method in the FOSLS context. We introduce two methods of handling the challenges resulting from singularities. The first method is based on a weighted least-squares functional and results in non-conforming finite elements. The second method is based on the use of singular basis functions and results in conforming finite elements. We also share numerical results comparing the two approaches.
DEFF Research Database (Denmark)
Sørensen, Helle Aagaard; Petersen, Marianne Kjerstine; Jacobsen, Susanne
2004-01-01
Rapid methods for the identification of wheat varieties and their end-use quality have been developed. The methods combine the analysis of wheat protein extracts by mass spectrometry with partial least-squares regression in order to predict the variety or end-use quality of unknown wheat samples....... The whole process takes ~30 min. Extracts of alcohol-soluble storage proteins (gliadins) from wheat were analysed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Partial least-squares regression was subsequently applied using these mass spectra for making models...... that could predict the wheat variety or end-use quality. Previously, an artificial neural network was used to identify wheat varieties based on their protein mass spectra profiles. The present study showed that partial least-squares regression is at least as useful as neural networks for this identification...
DEFF Research Database (Denmark)
Sørensen, Helle Aagaard; Petersen, Marianne Kjerstine; Jacobsen, Susanne
2004-01-01
that could predict the wheat variety or end-use quality. Previously, an artificial neural network was used to identify wheat varieties based on their protein mass spectra profiles. The present study showed that partial least-squares regression is at least as useful as neural networks for this identification......Rapid methods for the identification of wheat varieties and their end-use quality have been developed. The methods combine the analysis of wheat protein extracts by mass spectrometry with partial least-squares regression in order to predict the variety or end-use quality of unknown wheat samples...
Sinha, Mrinal
2015-08-19
We propose an interferometric least-squares migration method that can significantly reduce migration artifacts due to statics and errors in the near-surface velocity model. We first choose a reference reflector whose topography is well known from, e.g., well logs. Reflections from this reference layer are correlated with the traces associated with reflections from deeper interfaces to get crosscorrelograms. These crosscorrelograms are then migrated using interferometric least-squares migration (ILSM). In this way statics and velocity errors at the near surface are largely eliminated for the examples in our paper.
Joint 2D-DOA and Frequency Estimation for L-Shaped Array Using Iterative Least Squares Method
Directory of Open Access Journals (Sweden)
Ling-yun Xu
2012-01-01
We introduce an iterative least squares method (ILS) for estimating the 2D-DOA and frequency based on an L-shaped array. The ILS iteratively finds the direction matrix and the delay matrix; the 2D-DOA and frequency can then be obtained by the least squares method. Without spectral peak searching and pairing, this algorithm works well and pairs the parameters automatically. Moreover, our algorithm has better performance than the conventional ESPRIT algorithm and the propagator method. The useful behavior of the proposed algorithm is verified by simulations.
Directory of Open Access Journals (Sweden)
Mingjun Zhang
2015-12-01
A novel thruster fault identification method for an autonomous underwater vehicle is presented in this article. It uses the proposed peak region energy method to extract the fault feature and the proposed least square grey relational grade method to estimate the fault degree. The peak region energy method is developed from the fusion feature modulus maximum method. It applies the fusion feature modulus maximum method to get the fusion feature and then regards the maximum of the peak region energy in the convolution results of the fusion feature as the fault feature. The least square grey relational grade method is developed from the grey relational analysis algorithm. It determines the fault degree interval by the grey relational analysis algorithm and then estimates the fault degree in the interval by the least square algorithm. Pool experiments on the experimental prototype are conducted to verify the effectiveness of the proposed methods. The experimental results show that the fault feature extracted by the peak region energy method is monotonic in the fault degree while the one extracted by the fusion feature modulus maximum method is not. The least square grey relational grade method can further obtain an estimate between adjacent standard fault degrees, while the estimate of the grey relational analysis algorithm is just one of the standard fault degrees.
DEFF Research Database (Denmark)
Tscherning, Carl Christian
2015-01-01
The method of Least-Squares Collocation (LSC) may be used for the modeling of the anomalous gravity potential (T) and for the computation (prediction) of quantities related to T by a linear functional. Errors may also be estimated. However, when using an isotropic covariance function or equivalen...
Benitez Amado, Jose; Henseler, Jörg; Castillo, Ana
Partial least squares (PLS) path modeling has been widely used in the field of Information Systems (IS) for decades. The usage of and prescriptions for performing PLS path modeling have recently been examined, debated, and improved, which has generated substantial changes,
Fernandez Pierna, J.A.; Lin, L.; Wahl, F.; Faber, N.M.; Massart, D.L.
2003-01-01
The prediction uncertainty is studied when using a multivariate partial least squares regression (PLSR) model constructed with reference values that contain a sizeable measurement error. Several approximate expressions for calculating a sample-specific standard error of prediction have been proposed
Bulcock, J. W.
The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
Knol, Dirk L.; ten Berge, Jos M.F.
1987-01-01
An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977).
Brus, D.J.; Gruijter, de J.J.
2011-01-01
This paper introduces and demonstrates design-based Generalized Least Squares (GLS) estimation of spatial means at selected time points from data collected in repeated soil surveys with partial overlap, such as a rotating and a supplemented panel. The linear time trend of the spatial means can then
Beauducel, Andre; Herzberg, Philipp Yorck
2006-01-01
This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…
Directory of Open Access Journals (Sweden)
Fei Jin
2013-05-01
This paper studies the generalized spatial two-stage least squares (GS2SLS) estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the efficiency of estimators asymptotically, but the bias might be large in finite samples, making the inference inaccurate. We consider the case in which the number of instruments K increases with, but at a rate slower than, the sample size, and derive the approximate mean square errors (MSE) that account for the trade-offs between bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for the optimal choice of K can be based on the approximate MSEs. Monte Carlo experiments are provided to show the performance of our procedure for choosing K.
International Nuclear Information System (INIS)
Pontaza, J.P.; Reddy, J.N.
2004-01-01
We consider least-squares finite element models for the numerical solution of the non-stationary Navier-Stokes equations governing viscous incompressible fluid flows. The paper presents a formulation where the effects of space and time are coupled, resulting in a true space-time least-squares minimization procedure, as opposed to a space-time decoupled formulation where a least-squares minimization procedure is performed in space at each time step. The formulation is first presented for the linear advection-diffusion equation and then extended to the Navier-Stokes equations. The formulation has no time step stability restrictions and is spectrally accurate in both space and time. To allow the use of practical C⁰ element expansions in the resulting finite element model, the Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity as an additional independent variable and the least-squares method is used to develop the finite element model of the governing equations. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method in matrix-free form. Spectral convergence of the L² least-squares functional and L² error norms in space-time is verified using a smooth solution to the two-dimensional non-stationary incompressible Navier-Stokes equations. Numerical results are presented for impulsively started lid-driven cavity flow, oscillatory lid-driven cavity flow, transient flow over a backward-facing step, and flow around a circular cylinder; the results demonstrate the predictive capability and robustness of the proposed formulation.
Even though the space-time coupled formulation is emphasized, we also present the formulation and numerical results for least-squares
Borin, Alessandra; Ferrão, Marco Flôres; Mello, Cesar; Maretto, Danilo Althmann; Poppi, Ronei Jesus
2006-10-02
This paper proposes the use of the least-squares support vector machine (LS-SVM) as an alternative multivariate calibration method for the simultaneous quantification of some common adulterants (starch, whey or sucrose) found in powdered milk samples, using near-infrared spectroscopy with direct measurements by diffuse reflectance. Due to the spectral differences of the three adulterants, a nonlinear behavior is present when all groups of adulterants are in the same data set, making the use of linear methods such as partial least squares regression (PLSR) difficult. Excellent models were built using LS-SVM, with low prediction errors and superior performance relative to PLSR. These results show that it is possible to build robust models to quantify common adulterants in powdered milk using near-infrared spectroscopy and LS-SVM as a nonlinear multivariate calibration procedure.
A least-squares finite-element Sn method for solving first-order neutron transport equation
International Nuclear Information System (INIS)
Ju Haitao; Wu Hongchun; Zhou Yongqiang; Cao Liangzhi; Yao Dong; Xian, Chun-Yu
2007-01-01
A discrete-ordinates finite-element method for solving the two-dimensional first-order neutron transport equation is derived using the least-squares variation. It avoids the singularity in void regions of methods derived from the second-order equation, which contain the inversion of the cross-section. Unlike applying the standard Galerkin variation to the first-order equation, the least-squares variation results in a symmetric matrix, which can be solved easily and effectively. To eliminate the discontinuity of the angular flux on the vacuum boundary in the spherical harmonics method, the angle variable is discretized by the discrete ordinates method. A two-dimensional transport simulation code was developed and applied to some benchmark problems with unstructured geometry. The numerical results verified the validity of this method.
Wu, J. C.; Tang, H. W.; Chen, Y. Q.; Li, Y. X.
2006-07-01
In this paper, the velocities of 154 stations obtained in the 2001 and 2003 GPS survey campaigns are used to formulate a continuous velocity field by the least-squares collocation method. The strain rate field obtained by the least-squares collocation method shows clearer deformation patterns than that of the conventional discrete triangle method. The significant deformation zones obtained are mainly located in three places: to the north of Tangshan, between Tianjin and Shijiazhuang, and to the north of Datong, which agree with the locations of the Holocene active deformation zones identified by geological investigations. The maximum shear strain rate is located at latitude 38.6°N and longitude 116.8°E, with a magnitude of 0.13 ppm/a. The strain rate field obtained can be used for earthquake prediction research in the North China Basin.
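Least-squares collocation, as used above, predicts a correlated signal at unobserved points from a covariance model of the field plus observation noise, and also yields a prediction error covariance. A one-dimensional toy sketch (the Gaussian covariance function and all numbers are illustrative assumptions, not from the paper):

```python
import numpy as np

def gauss_cov(d, c0=1.0, L=1.0):
    # isotropic Gaussian covariance as a function of distance d
    return c0 * np.exp(-(d / L) ** 2)

# noisy observations of a smooth field (simulated)
x_obs = np.linspace(0.0, 5.0, 20)
y_obs = np.sin(x_obs)
noise_var = 1e-4

# covariance of observations (signal + noise) and signal-observation covariance
C_oo = gauss_cov(np.abs(x_obs[:, None] - x_obs[None, :])) + noise_var * np.eye(x_obs.size)
x_new = np.array([1.3, 2.7])
C_no = gauss_cov(np.abs(x_new[:, None] - x_obs[None, :]))

# collocation prediction and its error covariance
y_pred = C_no @ np.linalg.solve(C_oo, y_obs)
C_nn = gauss_cov(np.abs(x_new[:, None] - x_new[None, :]))
err_cov = C_nn - C_no @ np.linalg.solve(C_oo, C_no.T)
```

For a velocity field, the same algebra applies per component with a 2D distance argument; strain rates then follow by differentiating the predicted field.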
Directory of Open Access Journals (Sweden)
Wayan Somayasa
2013-05-01
A functional central limit theorem for a sequence of partial sums processes of the least squares residuals of a spatial linear regression model, in which the observations are sampled according to a probability measure, is established. Under mild assumptions on the model, the limit of the sequence of least squares residual partial sums processes is explicitly derived. It is shown that the limit process, which is a function of the Brownian sheet, depends on the regression functions and the probability measure under which the design is constructed. Several examples of the limit processes when the model is true are presented. Lower and upper bounds for boundary crossing probabilities of signal-plus-noise models, when the noises come from the residual partial sums processes, are also investigated.
Mingjun Zhang; Baoji Yin; Xing Liu; Jia Guo
2015-01-01
A novel thruster fault identification method for autonomous underwater vehicle is presented in this article. It uses the proposed peak region energy method to extract fault feature and uses the proposed least square grey relational grade method to estimate fault degree. The peak region energy method is developed from fusion feature modulus maximum method. It applies the fusion feature modulus maximum method to get fusion feature and then regards the maximum of peak region energy in the convol...
Faria A.V.; Macedo Jr. F.C.; Marsaioli A.J.; Ferreira M.M.C.; Cendes F.
2011-01-01
High resolution proton nuclear magnetic resonance spectroscopy (¹H MRS) can be used to detect biochemical changes in vitro caused by distinct pathologies. It can reveal distinct metabolic profiles of brain tumors, although the accurate analysis and classification of different spectra remains a challenge. In this study, the pattern recognition method partial least squares discriminant analysis (PLS-DA) was used to classify 11.7 T ¹H MRS spectra of brain tissue extracts from patients with brain ...
Jongguk Lim; Giyoung Kim; Changyeun Mo; Kyoungmin Oh; Hyeonchae Yoo; Hyeonheui Ham; Moon S. Kim
2017-01-01
The purpose of this study is to use near-infrared reflectance (NIR) spectroscopy equipment to nondestructively and rapidly discriminate Fusarium-infected hulled barley. Both normal hulled barley and Fusarium-infected hulled barley were scanned using a NIR spectrometer with a wavelength range of 1175 to 2170 nm. Multiple mathematical pretreatments were applied to the reflectance spectra obtained for Fusarium discrimination and the multivariate analysis method of partial least squares discri...
International Nuclear Information System (INIS)
Yang, Zong-Chang
2014-01-01
Highlights: • Introduce a finite Fourier-series model for evaluating the monthly movement of annual average solar insolation. • Present a forecast method for predicting this movement based on the Fourier-series model extended in the least-squares sense. • Show that the movement is well described by a low number of harmonics, with an approximately 6-term Fourier series. • Predict the movement best with fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporating the least-squares method, the introduced Fourier-series model is extended to predict the movement. The extended Fourier-series forecasting model obtains its optimum Fourier coefficients in the least-squares sense based on its previous monthly movements. The proposed method is applied to experimental data and yields satisfying results for different cities (states). It is found that the monthly movement of annual average solar insolation is well described by a low number of harmonics, with an approximately 6-term Fourier series, and the extended Fourier forecasting model predicts the movement best with fewer than 6 Fourier terms
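The extended Fourier-series model described above (a truncated Fourier series whose coefficients are found in the least-squares sense from past monthly values and then evaluated forward in time) can be sketched as follows; the monthly period of 12 is standard, but the synthetic series is an invented stand-in for insolation data:

```python
import numpy as np

def fourier_design(t, n_harmonics, period=12.0):
    # design matrix: constant term plus cos/sin pairs up to n_harmonics
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.column_stack(cols)

t = np.arange(48.0)                       # four years of monthly values (synthetic)
y = 5.0 + 2.0 * np.cos(2.0 * np.pi * t / 12.0) + 0.5 * np.sin(4.0 * np.pi * t / 12.0)

coef, *_ = np.linalg.lstsq(fourier_design(t, 6), y, rcond=None)  # least-squares fit
t_future = np.arange(48.0, 60.0)          # forecast the next 12 months
forecast = fourier_design(t_future, 6) @ coef
```

Because the fitted coefficients define the series for any t, forecasting is just evaluating the same design matrix at future time points.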
DEFF Research Database (Denmark)
Jamali, Ali; Rahman, Alias Abdul; Antón Castro, Francesc/François
2016-01-01
, the buildings and their rooms must be surveyed. One of the most utilized methods of indoor surveying is laser scanning. The laser scanning method allows taking accurate and detailed measurements. However, terrestrial laser scanning is costly and time-consuming. In this paper, several techniques for indoor 3D...... in horizontal angles for short distances in indoor environments. The range finder's horizontal angle sensor was calibrated using a least-squares adjustment algorithm, a polynomial kernel, interval analysis and homotopy continuation.
Liu, Cun-Xi; Li, Ze-Rong; Zhou, Chong-Wen; Li, Xiang-Yuan
2009-05-01
Owing to their significance in kinetic modeling of the oxidation and combustion mechanisms of hydrocarbons, a fast and relatively accurate method was developed for the prediction of ΔfH°298 of alkyl peroxides. By this method, a raw ΔfH°298 value is calculated from the optimized geometry and vibration frequencies at the B3LYP/6-31G(d,p) level, and an accurate ΔfH°298 value is then obtained by a least-squares procedure. The least-squares procedure is a six-parameter linear equation and is validated by a leave-one-out technique, giving a cross-validation squared correlation coefficient q² of 0.97 and a squared correlation coefficient of 0.98 for the final model. Calculated results demonstrate that the least-squares calibration leads to a remarkable reduction of error and to accurate ΔfH°298 values within the chemical accuracy of 8 kJ mol⁻¹, except for (CH3)2CHCH2CH2CH2OOH, which has an error of 8.69 kJ mol⁻¹. Comparison with results by CBS-Q, CBS-QB3, G2, and G3 revealed that B3LYP/6-31G(d,p) in combination with a least-squares calibration is reliable for the accurate prediction of the standard enthalpies of formation of alkyl peroxides. Standard entropies at 298 K and heat capacities in the temperature range 300-1500 K for alkyl peroxides were also calculated using the rigid rotor-harmonic oscillator approximation. 2008 Wiley Periodicals, Inc.
DEFF Research Database (Denmark)
Madsen, H.; Mikkelsen, Peter Steen; Rosbjerg, Dan
2002-01-01
, the mean value of the exceedance magnitudes, and the coefficient of L-variation (LCV) are considered as regional variables. A generalized least squares (GLS) regression model that explicitly accounts for intersite correlation and sampling uncertainties is applied for evaluating the regional heterogeneity...... of the PDS parameters. For the parameters that show a significant regional variability the GLS model is subsequently adopted for describing the variability from physiographic and climatic characteristics. For determination of a proper regional parent distribution L-moment analysis is applied...
Plata, Maria R.; Koch, Cosima; Wechselberger, Patrick; Herwig, Christoph; Lendl, Bernhard
2013-01-01
A fast and simple method to control variations in carbohydrate composition of Saccharomyces cerevisiae, baker's yeast, during fermentation was developed using mid-infrared (mid-IR) spectroscopy. The method allows for precise and accurate determinations with minimal or no sample preparation and reagent consumption based on mid-IR spectra and partial least squares (PLS) regression. The PLS models were developed employing the results from reference analysis of the yeast cells. The reference anal...
International Nuclear Information System (INIS)
Aspinall, J.
1982-01-01
A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least squares algorithm to find the optimal value of a parameter vector, where optimality is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is one such nonlinear problem.
Adib, Arash; Poorveis, Davood; Mehraban, Farid
2018-03-01
In this research, two equations are considered as examples of hyperbolic and elliptic equations, and two finite element methods are applied to solve them. The purpose of this research is to select a suitable method for solving each of the two equations. Burgers' equation is a hyperbolic, pure advection (diffusion-free) equation; it is one-dimensional and unsteady. A sudden shock wave is introduced into the model, and this wave moves without deformation. Laplace's equation is an elliptic equation; it is steady and two-dimensional. The solution of Laplace's equation in an earth dam is considered: by solving it, the head pressure and the seepage values in the X and Y directions are calculated at different points of the dam, and finally the water table in the dam is shown. For Burgers' equation, the least-squares method can capture the movement of the wave (with some oscillation), but the Galerkin method cannot capture it correctly; the best approach for Burgers' equation is to discretize space with the least-squares finite element method and time with a forward difference. For Laplace's equation, both the Galerkin and least-squares methods reproduce the water table in the earth dam correctly.
International Nuclear Information System (INIS)
Guo, Yin; Nazarian, Ehsan; Ko, Jeonghan; Rajurkar, Kamlakar
2014-01-01
Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trends of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set.
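The two-stage idea, fit first, then downweight points with large residuals, can be sketched generically. This is a minimal illustration of robust weighted least squares with made-up data and Cauchy-type weights, not the paper's exact ARX estimator:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i @ beta)^2."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([2.0, 0.5]) + 0.1 * rng.normal(size=200)
y[:5] += 10.0                       # inject a few gross outliers

# Stage 1: ordinary LS (all weights equal) -- biased by the outliers
b1 = wls(X, y, np.ones(200))
# Stage 2: downweight points with large stage-1 residuals (Cauchy-type weights,
# scaled by a robust MAD estimate of the residual spread)
r = y - X @ b1
scale = 1.4826 * np.median(np.abs(r))
b2 = wls(X, y, 1.0 / (1.0 + (r / scale) ** 2))
```

The second-stage estimate is close to the true coefficients even though the first-stage fit is pulled toward the outliers.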
Liu, Fei; Wang, Li; He, Yong
2008-11-01
The determination of citric acid in lemon vinegar was carried out using visible and near-infrared (Vis/NIR) spectroscopy combined with least squares support vector machine (LS-SVM) modeling. Five concentration levels (100%, 80%, 60%, 40% and 20%) of lemon vinegar were studied. The calibration set consisted of 225 samples (45 samples for each level), and the remaining 75 samples formed the validation set. Partial least squares (PLS) analysis was employed for the calibration models as well as for extraction of certain latent variables (LVs) and effective wavelengths (EWs). Different preprocessing methods were compared in the PLS models, including smoothing, standard normal variate (SNV), and the first and second derivatives. The selected LVs and EWs were employed as inputs to develop LS-SVM models. The optimal prediction results were achieved by the LV-LS-SVM model, with a correlation coefficient (r), root mean square error of prediction (RMSEP) and bias for the validation set of 0.9990, 0.1972 and -0.0334, respectively. Moreover, the EW-LS-SVM model was also acceptable and slightly better than all PLS models. The results indicate that Vis/NIR spectroscopy can be utilized as a parsimonious and efficient way to determine citric acid in lemon vinegar based on the LS-SVM method.
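The PLS step used here for latent-variable extraction can be sketched compactly. The following NIPALS-style PLS1 routine is one common formulation, shown on synthetic data rather than the paper's spectra; with as many components as predictors it coincides with ordinary least squares:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 regression via NIPALS (one common formulation; details vary
    between implementations). Returns coefficients B and intercept b0."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                      # score vector
        tt = t @ t
        p = Xc.T @ t / tt               # X loading
        q.append(t @ yc / tt)           # y loading
        W.append(w)
        P.append(p)
        Xc = Xc - np.outer(t, p)        # deflate X and y
        yc = yc - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, y.mean() - X.mean(axis=0) @ B

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 5))            # synthetic "spectra", 5 variables
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + 0.01 * rng.normal(size=60)
B, b0 = pls1_fit(X, y, n_comp=5)        # full rank: recovers the OLS fit
pred = X @ B + b0
```

In practice one would choose `n_comp` well below the number of wavelengths by cross-validation; the full-rank case is shown only because its answer is checkable.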
Energy Technology Data Exchange (ETDEWEB)
Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)
2006-02-15
This paper presents an implementation of the least absolute value (LAV) power system state estimator based on obtaining a sequence of solutions to the L1-regression problem using an iteratively reweighted least squares (IRLS-L1) method. The proposed implementation avoids reformulating the regression problem into standard linear programming (LP) form and consequently does not require the use of common LP methods, such as those based on the simplex method or interior-point methods. It is shown that the IRLS-L1 method is equivalent to solving a sequence of linear weighted least squares (LS) problems. Thus, its implementation presents little additional effort since the sparse LS solver is common to existing LS state estimators. Studies on the termination criteria of the IRLS-L1 method have been carried out to determine a procedure for which the proposed estimator is more computationally efficient than a previously proposed non-linear iteratively reweighted least squares (IRLS) estimator. Indeed, it is revealed that the proposed method is a generalization of the previously reported IRLS estimator, but is based on more rigorous theory. (author)
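The core IRLS idea for L1 regression, each pass solves a weighted LS problem whose weights are the reciprocals of the current residual magnitudes, fits in a few lines. A generic dense illustration (not the paper's sparse state-estimation code; the data and the residual floor `eps` are assumptions):

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    """Least absolute value regression via iteratively reweighted least squares.
    Each iteration solves the weighted normal equations with w_i = 1/|r_i|."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)          # ordinary LS start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)   # floor avoids division by 0
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)        # (A^T W A) x = A^T W b
    return x

rng = np.random.default_rng(2)
A = np.column_stack([np.ones(100), rng.normal(size=100)])
b = A @ np.array([1.0, 3.0]) + 0.05 * rng.normal(size=100)
b[::10] += 5.0                     # gross measurement errors ("bad data")
x_l1 = irls_l1(A, b)               # robust to the gross errors
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # pulled off by them
```

This shows the robustness motivation for LAV estimation: the L1 fit effectively rejects the contaminated measurements that bias the plain LS fit.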
Shotorban, Babak
2010-04-01
The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
International Nuclear Information System (INIS)
Sanchez Miro, J.J.; Pena, J.
1991-01-01
This report offers scientists and technical staff a numerical tool: an interactive FORTRAN program for making linear least-squares fits to any set of empirical observations. Instead of solving the system of equations directly, the method is based on orthogonal functions (for the discrete case). The procedure also includes optional facilities for change of variable, direct interpolation, a nonlinear correlation factor, point 'weights', confidence intervals (Scheffé, Miller, Student), and plotting of results. (Author). 10 refs
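The orthogonal-function approach, fitting in a basis that keeps the normal equations well conditioned instead of solving them directly in the monomial basis, survives in modern libraries. A small sketch using NumPy's Legendre-basis fit on illustrative data (not the report's FORTRAN code):

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical empirical observations following a quadratic law
x = np.linspace(-1.0, 1.0, 41)
y = 0.5 + 2.0 * x - 1.5 * x**2

# legfit solves the least-squares problem in the orthogonal Legendre basis,
# which is far better conditioned than Vandermonde/monomial fitting at high degree
coef = legendre.legfit(x, y, deg=2)
y_fit = legendre.legval(x, coef)     # evaluate the fitted series
```

For noisy data a weight vector can be passed via `legfit(..., w=weights)`, mirroring the point-'weights' facility the report describes.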
Czech Academy of Sciences Publication Activity Database
Hnětynková, I.; Plešinger, Martin; Sima, D.M.; Strakoš, Z.; Huffel van, S.
2011-01-01
Roč. 32, č. 3 (2011), s. 748-770 ISSN 0895-4798 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA ČR(CZ) GA201/09/0917 Program:GA Institutional research plan: CEZ:AV0Z10300504 Keywords : total least squares * multiple right-hand sides * linear approximation problems * orthogonally invariant problems * orthogonal regression * errors-in-variables modeling Subject RIV: BA - General Mathematics Impact factor: 1.368, year: 2011
Stilbs, P
1998-11-01
Use of prior knowledge with regard to the number of components in an image or NMR data set makes possible a full analysis and separation of correlated sets of such data. It is demonstrated that a diffusional NMR microscopy image set can readily be separated into its components, with the extra benefit of a global least-squares fit over the whole image of the respective diffusional rates. As outlined, the computational approach (CORE processing) is also applicable to various multidimensional NMR data sets and is suggested as a potentially powerful tool in functional MRI. Copyright 1998 Academic Press.
Lu, Weiying; Jiang, Qianqian; Shi, Haiming; Niu, Yuge; Gao, Boyan; Yu, Liangli Lucy
2014-09-17
Lycium barbarum L. fruits (Chinese wolfberries) were differentiated for their cultivation locations and the cultivars by ultraperformance liquid chromatography coupled with mass spectrometry (UPLC-MS) and flow injection mass spectrometric (FIMS) fingerprinting techniques combined with chemometrics analyses. The partial least-squares-discriminant analysis (PLS-DA) was applied to the data projection and supervised learning with validation. The samples formed clusters in the projected data. The prediction accuracies by PLS-DA with bootstrapped Latin partition validation were greater than 90% for all models. The chemical profiles of Chinese wolfberries were also obtained. The differentiation techniques might be utilized for Chinese wolfberry authentication.
Directory of Open Access Journals (Sweden)
Andrzej Banachowicz
2017-12-01
Full Text Available Different calculation methods and configurations of navigation systems can be used in algorithms of navigational parameter fusion and estimation. The article presents a comparison of two methods of fusing a dead-reckoning position with one from a positioning system: the least squares method and the Kalman filter. In both methods the minimization of the sum of squared measurement deviations is the optimization criterion. Both methods of fusing navigational position measurements are illustrated using data recorded during actual sea trials. With the same probabilistic model of dead-reckoning navigation, the fusion of DR results with positioning data gives similar outcomes.
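The similarity of outcomes is no accident: for a single epoch with two independent position estimates, the least-squares (inverse-variance) fusion and the Kalman measurement update produce the same weighted mean. A 1-D sketch with made-up numbers (not the sea-trial data):

```python
import numpy as np

def fuse(x_dr, var_dr, x_fix, var_fix):
    """Inverse-variance (least-squares) fusion of two position estimates.
    For one static epoch this equals the Kalman measurement update with
    gain K = var_dr / (var_dr + var_fix)."""
    K = var_dr / (var_dr + var_fix)
    x = x_dr + K * (x_fix - x_dr)        # weighted mean of the two estimates
    var = (1.0 - K) * var_dr             # fused variance is always smaller
    return x, var

# Hypothetical 1-D example: dead-reckoning estimate vs. positioning-system fix
x, var = fuse(x_dr=100.0, var_dr=4.0, x_fix=103.0, var_fix=1.0)
```

With `var_dr=4` and `var_fix=1` the gain is 0.8, so the fused position sits four-fifths of the way toward the more precise fix, and the fused variance (0.8) is below both inputs.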
International Nuclear Information System (INIS)
Iwama, N.; Inoue, A.; Tsukishima, T.; Sato, M.; Kawahata, K.
1981-07-01
A new procedure for the maximum entropy spectral estimation is studied for the purpose of data processing in Fourier transform spectroscopy. The autoregressive model fitting is examined under a least squares criterion based on the Yule-Walker equations. An AIC-like criterion is suggested for selecting the model order. The principal advantage of the new procedure lies in the enhanced frequency resolution particularly for small values of the maximum optical path-difference of the interferogram. The usefulness of the procedure is ascertained by some numerical simulations and further by experiments with respect to a highly coherent submillimeter wave and the electron cyclotron emission from a stellarator plasma. (author)
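The autoregressive fit from the Yule-Walker equations reduces to a small linear solve against the sample autocovariances. A generic sketch on a synthetic AR(2) series (not the interferogram data of the paper):

```python
import numpy as np

def yule_walker(x, order):
    """Least-squares AR(p) fit from the Yule-Walker equations:
    solve the Toeplitz system R a = r for the AR coefficients a."""
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) process: x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
rng = np.random.default_rng(3)
e = rng.normal(size=20000)
x = np.zeros(20000)
for t in range(2, 20000):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]

a = yule_walker(x, order=2)      # recovers approximately [0.75, -0.5]
```

An AIC-like criterion, as the paper suggests, would repeat this fit over a range of orders and penalize the residual variance by the parameter count.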
Directory of Open Access Journals (Sweden)
Arlinah Abd Rashid
2016-06-01
Full Text Available The goods and services tax (GST) in Malaysia was implemented in 2015 as a tax reform program to generate a stable source of revenue. This study explores respondents' behaviour towards GST one week post-implementation. Partial least squares (PLS) modelling was used to establish the relationship between acceptance, knowledge and feelings towards GST as well as household quality of life. There is a positive relationship between the antecedents and quality of life. Acceptance of GST exerts a significant relationship towards feelings and quality of life. The study concludes that Malaysians, in general, accept GST, which ensures a better quality of life in the future.
Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K
2013-08-01
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least-squares QR (LSQR) decomposition, which is a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
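Tikhonov-regularized least squares itself can be written as an ordinary LS solve on an augmented system. A deterministic toy sketch (a 2×2 ill-conditioned system, not a photoacoustic forward model, and without the paper's LSQR-based parameter selection):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized LS via the augmented system:
    minimize ||A x - b||^2 + lam^2 ||x||^2  ==  lstsq on [[A],[lam*I]]."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

A = np.array([[1.0, 0.0],
              [0.0, 1e-4]])                 # singular values 1 and 1e-4
x_true = np.array([1.0, 1.0])
b = A @ x_true + np.array([0.0, 1e-2])      # small noise on the weak equation

x_naive, *_ = np.linalg.lstsq(A, b, rcond=None)   # noise amplified ~100x
x_reg = tikhonov(A, b, lam=1e-2)                  # amplification suppressed
```

The unregularized solution inflates the noisy weak component to about 101, while the regularized one damps that direction, trading a small bias for a drastic reduction of the noise-driven error; choosing `lam` well is exactly the parameter-selection problem the paper addresses.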
Henao-Escobar, W; Domínguez-Renedo, O; Alonso-Lomillo, M A; Arcos-Martínez, M J
2015-10-01
This work presents the simultaneous determination of cadaverine, histamine, putrescine and tyramine by square wave voltammetry using a boron-doped diamond electrode. A multivariate calibration method based on partial least squares regression has allowed the resolution of the highly overlapped voltammetric signals obtained for the analyzed biogenic amines. Prediction errors lower than 9% were obtained when the concentrations of quaternary mixtures were calculated. The developed procedure has been applied to the analysis of ham samples, whose results are in good agreement with those obtained using the standard HPLC method. Copyright © 2015 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Baker, K.L.
2005-01-01
This article details a multigrid algorithm that is suitable for least-squares wave-front reconstruction of Shack-Hartmann and shearing interferometer wave-front sensors. The algorithm detailed in this article is shown to scale with the number of subapertures in the same fashion as fast Fourier transform techniques, making it suitable for use in applications requiring a large number of subapertures and high Strehl ratio systems such as for high spatial frequency characterization of high-density plasmas, optics metrology, and multiconjugate and extreme adaptive optics systems
Han, Bangxing; Peng, Huasheng; Yan, Hui
2016-01-01
Mugua is a common Chinese herbal medicine. There are three main medicinal origin places in China (Xuancheng City, Anhui Province; Qijiang District, Chongqing City; and Yichang City, Hubei Province) and a food-grade origin place (Linyi City, Shandong Province). The aim was to construct a qualitative analytical method to identify the origin of medicinal Mugua by near-infrared spectroscopy (NIRS). A partial least squares discriminant analysis (PLSDA) model was established after the spectra of Mugua derived from five different origins were preprocessed, and hierarchical cluster analysis was performed. The PLSDA model was successfully established: according to the relationship between the origin-related importance scores and wavenumber, together with K-means cluster analysis, Muguas derived from different origins were effectively identified. NIRS technology can quickly and accurately identify the origin of Mugua, providing a new method and technology for the identification of Chinese medicinal materials. After preprocessing by D1 + autoscale, more peaks appeared in the near-infrared spectrum of Mugua; five latent variable scores reflected the information related to the origin place of Mugua; and the origins of Mugua were well distinguished by K-means cluster analysis. Abbreviations used: TCM: Traditional Chinese Medicine; NIRS: near-infrared spectroscopy; SG: Savitzky-Golay smoothing; D1: first derivative; D2: second derivative; SNV: standard normal variate transformation; MSC: multiplicative scatter correction; PLSDA: partial least squares discriminant analysis; LV: latent variable; VIP: variable importance in projection scores.
Steinbock, Michael J.; Hyde, Milo W.
2012-10-01
Adaptive optics is used in applications such as laser communication, remote sensing, and laser weapon systems to estimate and correct for atmospheric distortions of propagated light in real time. Within an adaptive optics system, a reconstruction process interprets the raw wavefront sensor measurements and calculates an estimate of the unwrapped phase function to be sent through a control law and applied to a wavefront correction device. This research is focused on adaptive optics using a self-referencing interferometer wavefront sensor, which directly measures the wrapped wavefront phase; its measurements must therefore be reconstructed for use on a continuous-facesheet deformable mirror. In testing and evaluating a novel class of branch-point-tolerant wavefront reconstructors based on the post-processing congruence operation technique, an increase in Strehl ratio compared to a traditional least squares reconstructor was noted even in non-scintillated fields. To investigate this further, this paper uses wave-optics simulations to eliminate many of the variables of a hardware adaptive optics system, so as to focus on the reconstruction techniques alone. The simulation results are provided along with a discussion of the physical reasoning for this phenomenon. For any application using a self-referencing interferometer wavefront sensor with low signal levels or high localized wavefront gradients, understanding this phenomenon is critical when applying a traditional least squares wavefront reconstructor.
Kala, Abhishek K; Tiwari, Chetan; Mikler, Armin R; Atkinson, Samuel F
2017-01-01
The primary aim of the study reported here was to determine the effectiveness of utilizing local spatial variations in environmental data to uncover the statistical relationships between West Nile Virus (WNV) risk and environmental factors. Because least squares regression methods do not account for spatial autocorrelation and non-stationarity of the type of spatial data analyzed for studies that explore the relationship between WNV and environmental determinants, we hypothesized that a geographically weighted regression model would help us better understand how environmental factors are related to WNV risk patterns without the confounding effects of spatial non-stationarity. We examined commonly mapped environmental factors using both ordinary least squares regression (LSR) and geographically weighted regression (GWR). Both types of models were applied to examine the relationship between WNV-infected dead bird counts and various environmental factors for those locations. The goal was to determine which approach yielded a better predictive model. LSR efforts led to identifying three environmental variables that were statistically significantly related to WNV-infected dead birds (adjusted R² = 0.61): stream density, road density, and land surface temperature. GWR efforts increased the explanatory value of these three environmental variables with better spatial precision (adjusted R² = 0.71). The spatial granularity resulting from the geographically weighted approach provides a better understanding of how environmental spatial heterogeneity is related to WNV risk as implied by WNV-infected dead birds, which should allow improved planning of public health management strategies.
Lee, Soo Yee; Mediani, Ahmed; Maulidiani, Maulidiani; Khatib, Alfi; Ismail, Intan Safinar; Zawawi, Norhasnida; Abas, Faridah
2018-01-01
Neptunia oleracea is a plant consumed as a vegetable and which has been used as a folk remedy for several diseases. Herein, two regression models (partial least squares, PLS; and random forest, RF) in a metabolomics approach were compared and applied to the evaluation of the relationship between phenolics and bioactivities of N. oleracea. In addition, the effects of different extraction conditions on the phenolic constituents were assessed by pattern recognition analysis. Comparison of the PLS and RF showed that RF exhibited poorer generalization and hence poorer predictive performance. Both the regression coefficient of PLS and the variable importance of RF revealed that quercetin and kaempferol derivatives, caffeic acid and vitexin-2-O-rhamnoside were significant towards the tested bioactivities. Furthermore, principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) results showed that sonication and absolute ethanol are the preferable extraction method and ethanol ratio, respectively, to produce N. oleracea extracts with high phenolic levels and therefore high DPPH scavenging and α-glucosidase inhibitory activities. Both PLS and RF are useful regression models in metabolomics studies. This work provides insight into the performance of different multivariate data analysis tools and the effects of different extraction conditions on the extraction of desired phenolics from plants. © 2017 Society of Chemical Industry.
International Nuclear Information System (INIS)
Rukolaine, Sergey A.
2010-01-01
Optimal shape design problems of steady-state radiative heat transfer are considered. The optimal shape design problem (in the three-dimensional space) is formulated as an inverse one, i.e., in the form of an operator equation of the first kind with respect to a surface to be optimized. The operator equation is reduced to a minimization problem via a least-squares objective functional. The minimization problem has to be solved numerically. Gradient minimization methods need the gradient of a functional to be minimized. In this paper the shape gradient of the least-squares objective functional is derived with the help of the shape sensitivity analysis and adjoint problem method. In practice a surface to be optimized may be (or, most likely, is to be) given in a parametric form by a finite number of parameters. In this case the objective functional is, in fact, a function in a finite-dimensional space and the shape gradient becomes an ordinary gradient. The gradient of the objective functional, in the case that the surface to be optimized is given in a finite-parametric form, is derived from the shape gradient. A particular case, that a surface to be optimized is a 'two-dimensional' polyhedral one, is considered. The technique, developed in the paper, is applied to a synthetic problem of designing a 'two-dimensional' radiant enclosure.
Peyton Jones, James C; Muske, Kenneth R
2009-10-01
Linear look-up tables are widely used to approximate and characterize complex nonlinear functional relationships between system input and output. However, both the initial calibration and subsequent real-time adaptation of these tables can be time consuming and prone to error as a result of the large number of table parameters typically required to map the system and the uncertainties and noise in the experimental data on which the calibration is based. In this paper, a new method is presented for identifying or adapting the look-up table parameters using a recursive least-squares based approach. The new method differs from standard recursive least squares algorithms because it exploits the structure of the look-up table equations in order to perform the identification process in a way that is highly computationally and memory efficient. The technique can therefore be implemented within the constraints of typical embedded applications. In the present study, the technique is applied to the identification of the volumetric efficiency look-up table commonly used in gasoline engine fueling strategies. The technique is demonstrated on a Ford 2.0L I4 Duratec engine using time-delayed feedback from a sensor in the exhaust manifold in order to adapt the table parameters online.
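The recursive least-squares machinery behind such online adaptation can be shown generically. This is a textbook RLS estimator with a forgetting factor on synthetic data, not the structure-exploiting table variant the paper develops, and the map being identified is made up:

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting factor lam."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = delta * np.eye(n)      # inverse-covariance-like matrix
        self.lam = lam

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)            # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Hypothetical 2-parameter static map (not an actual fueling table)
rng = np.random.default_rng(5)
true = np.array([0.9, 0.3])
est = RLS(2)
for _ in range(500):
    phi = rng.normal(size=2)                          # regressor sample
    est.update(phi, phi @ true + 0.01 * rng.normal()) # noisy measurement
```

Each update costs O(n²) with no matrix inversion, which is what makes the approach viable inside embedded engine-control loops; the forgetting factor lets the estimate track slow drift.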
Kuzishchin, V. F.; Merzlikina, E. I.; Van Va, Hoang
2017-11-01
The problem of tuning PID and PI algorithms by least-squares approximation of the frequency response of a linear algorithm to that of a suboptimal algorithm is considered. The advantage of the method is that the parameter values are obtained in a single calculation cycle. Recommendations are given on how to choose the parameters of the least-squares method taking the plant dynamics into consideration; these parameters are the filter time constant, the approximation frequency range and the correction coefficient for the time-delay parameter. The problem is considered for integrating plants in some practical cases (the level control system in a boiler drum). The transfer function of the suboptimal algorithm is determined relative to a disturbance acting at the point where the control action enters, as is typical for thermal plants. The recommendations take into account that the overshoot of the transient response to a setpoint change is also limited. For comparison, the systems under consideration are also tuned by the classical method with a limited frequency oscillation index. The results given in the paper can be used by specialists tuning control systems with integrating plants.
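The single-cycle property follows because a PI controller's frequency response C(jω) = Kp + Ki/(jω) is linear in (Kp, Ki), so matching it to a target response over a frequency band is one least-squares solve. A sketch with a made-up target response (not the boiler-drum case from the paper):

```python
import numpy as np

# Frequency grid over an assumed approximation band (rad/s)
w = np.linspace(0.1, 2.0, 40)

# Hypothetical suboptimal-controller frequency response to be matched
target = 2.0 + 0.7 / (1j * w)

# PI response is linear in (Kp, Ki): columns [1, 1/(jw)]
A = np.column_stack([np.ones_like(w), 1.0 / (1j * w)])

# Stack real and imaginary parts so the LS problem is real-valued
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([target.real, target.imag])
(kp, ki), *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
```

For a PID fit one extra column `1j * w` (the derivative term, usually filtered) would be appended, and the choice of band `w` plays the role of the approximation frequency range discussed above.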
Directory of Open Access Journals (Sweden)
L. X. Peng
2014-01-01
Full Text Available Based on the first-order shear deformation theory (FSDT) and the moving least-squares approximation, a new meshless model for studying the geometrically nonlinear problem of ribbed rectangular plates is presented. Considering the plate and the ribs separately, the displacement field and the stress and strain of the plate and the ribs are obtained from the moving least-squares approximation, the von Karman large deflection theory, and the FSDT. The ribs are attached to the plate by enforcing a displacement compatibility condition along the connections between the ribs and the plate. The virtual strain energy formulations of the plate and the ribs are derived separately, and the nonlinear equilibrium equation of the entire ribbed plate is given by the virtual work principle. In the new meshless model for ribbed plates there is no limitation on the rib position; for example, the ribs need not be placed along the mesh lines of the plate as they must be in FEM, and a change of rib position does not require remeshing of the plate. The proposed model is compared with FEM models from the literature and with ANSYS in several numerical examples, confirming the accuracy of the model.
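The moving least-squares approximation underlying such meshless models can be sketched in 1-D; with a linear basis it reproduces linear fields exactly, which is the consistency property these methods rely on. The weight function, support radius, and data below are illustrative assumptions (the plate model itself uses 2-D shape functions and FSDT kinematics, not reproduced here):

```python
import numpy as np

def mls_eval(x_eval, nodes, values, radius):
    """1-D moving least-squares approximation with a linear basis and a
    compactly supported weight w(s) = (1 - s^2)^2 for s = |x - node|/radius."""
    out = np.empty_like(x_eval)
    for i, x in enumerate(x_eval):
        s = np.abs(x - nodes) / radius
        w = np.where(s < 1.0, (1.0 - s**2) ** 2, 0.0)          # nodal weights
        P = np.column_stack([np.ones_like(nodes), nodes - x])  # shifted basis
        M = P.T @ (w[:, None] * P)                             # moment matrix
        rhs = P.T @ (w * values)
        out[i] = np.linalg.solve(M, rhs)[0]   # basis at x is [1, 0]
    return out

nodes = np.linspace(0.0, 1.0, 21)
vals = 2.0 + 3.0 * nodes            # a linear field, reproduced exactly by MLS
approx = mls_eval(np.array([0.33, 0.7]), nodes, vals, radius=0.15)
```

Because the nodes are scattered points rather than a mesh, moving or adding nodes (like moving a rib) changes only which nodes fall in each support radius, with no remeshing step.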
Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae
2017-06-01
The paper presents a way to obtain an intelligent miniaturized three-axis accelerometric sensor, based on on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Because this error is a strongly nonlinear function of both the environmental temperature and the acceleration exciting the sensor, its correction cannot be done off-line and requires an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes, off-line, the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different operating temperatures. A final analysis of the error level after compensation identifies the best variant of the matrix in the error model. The paper presents the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error model on each axis by the least squares method, and the validation of the obtained models against experimental values. For all three detection channels, a reduction of almost two orders of magnitude in the absolute maximum acceleration error due to environmental temperature variation was obtained.
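As a hedged illustration of the identification step (the error surface, coefficient values, and test grid below are invented, not the paper's data): an error model that is nonlinear in temperature and acceleration but linear in its unknown coefficients can be identified per axis by ordinary least squares on the experimental test data.

```python
import numpy as np

# Fit a per-axis error model e(T, a) from (temperature, acceleration, error)
# test data; the model is linear in its coefficients, so LS applies directly.
rng = np.random.default_rng(0)
T = rng.uniform(-20, 60, 200)            # operating temperature (deg C)
a = rng.uniform(-10, 10, 200)            # applied acceleration (m/s^2)
e = 0.02 * T + 0.001 * T * a - 0.05 * a  # hypothetical "true" error surface

# design matrix of basis functions chosen for the error model
X = np.column_stack([T, T * a, a])
coef, *_ = np.linalg.lstsq(X, e, rcond=None)
e_hat = X @ coef                          # model prediction, used on-line to compensate
print(np.max(np.abs(e - e_hat)))          # near zero for this exact synthetic model
```

On-line compensation then amounts to evaluating the fitted model at the measured temperature and subtracting it from the raw reading.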
International Nuclear Information System (INIS)
Xu, Yu-Lin.
1988-01-01
The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic-section equation and the area theorem, and are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by a first-order Taylor expansion, is inadequate for the orbit problem. A completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to this problem. The normal equations were first solved by Newton's scheme. Newton's method was then modified, by combination with the method of steepest descent and other algorithms, to yield a definitive solution in cases where the normal approach fails. Practical examples show that the modified Newton scheme always leads to a final solution. The weighting of observations, the orthogonal parameters, and the efficiency of a set of adjustment parameters are also considered.
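A minimal sketch (not Jefferys' algorithm, and on a toy exponential model rather than an orbit) of the "modified Newton" idea described above: take Gauss-Newton steps on a nonlinear least squares problem, and when a step fails to reduce the residual, damp it toward steepest descent until progress resumes.

```python
import numpy as np

# Levenberg-style damped Gauss-Newton for fitting y = a*exp(b*x).
def fit(x, y, p0, n_iter=50):
    p = np.asarray(p0, float)
    lam = 1e-3                                         # damping factor
    for _ in range(n_iter):
        a, b = p
        r = y - a * np.exp(b * x)                      # residual vector
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        while lam < 1e12:
            step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
            a2, b2 = p + step
            if np.sum((y - a2 * np.exp(b2 * x)) ** 2) < np.sum(r ** 2):
                p = p + step                           # accept step, relax damping
                lam = max(lam / 2, 1e-12)
                break
            lam *= 10                                  # damp toward steepest descent
        else:
            return p                                   # no further progress: converged
    return p

x = np.linspace(0, 1, 30)
y = 2.0 * np.exp(-1.5 * x)                             # synthetic noise-free data
a_hat, b_hat = fit(x, y, p0=[1.0, -1.0])
print(a_hat, b_hat)                                    # ~2.0, ~-1.5
```

The large-damping limit makes the step proportional to the steepest-descent direction, which is what rescues iterations where plain Newton/Gauss-Newton diverges.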
Lawi, Armin; Adhitya, Yudhi
2018-03-01
The objective of this research is to determine the quality of cocoa beans from the morphology of their digital images. Samples of cocoa beans were scattered on bright white paper under controlled lighting, and a compact digital camera was used to capture the images, which were then processed to extract their morphological parameters. Classification begins with an analysis of the cocoa bean images based on morphological feature extraction; the extracted morphological (physical) feature parameters are Area, Perimeter, Major Axis Length, Minor Axis Length, Aspect Ratio, Circularity, Roundness, and Feret Diameter. The cocoa beans are classified into four groups: Normal Beans, Broken Beans, Fractured Beans, and Skin Damaged Beans. The classification model used in this paper is the Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM), a proposed improvement of SVM using an ensemble method, in which the separating hyperplanes are obtained by a least squares approach and the multiclass procedure uses the One-Against-All method. The proposed model classified the four classes with an accuracy of 99.705% using the morphological feature input parameters.
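An illustrative sketch (not the authors' MELS-SVM implementation) of the two ingredients the model combines: hyperplanes obtained by a least squares fit, and a one-against-all scheme for the multiclass decision. Toy 2-D features stand in for the morphological parameters, and the four clusters stand in for the four bean classes.

```python
import numpy as np

# Four synthetic, well-separated "bean classes" in a 2-D feature space.
rng = np.random.default_rng(1)
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]])
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat(np.arange(4), 30)

Xb = np.hstack([X, np.ones((X.shape[0], 1))])          # add bias column
W = []
for k in range(4):                                      # one hyperplane per class
    t = np.where(y == k, 1.0, -1.0)                     # one-against-all targets
    w, *_ = np.linalg.lstsq(Xb, t, rcond=None)          # least-squares hyperplane
    W.append(w)
scores = Xb @ np.array(W).T
pred = scores.argmax(axis=1)                            # most positive hyperplane wins
print((pred == y).mean())                               # ~1.0 on this separable toy set
```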
Kochiya, Yuko; Hirabayashi, Akari; Ichimaru, Yuhei
2017-09-16
To evaluate the dynamic nature of nocturnal heart rate variability, RR intervals recorded with a wearable heart rate sensor were analyzed using the Least Square Cosine Spectrum Method. Six 1-year-old infants participated in the study. A wearable heart rate sensor was placed on the chest to measure RR intervals and 3-axis acceleration. The heartbeat time series was analyzed in 30 s windows using the Least Square Cosine Spectrum Method, and an original parameter quantifying the regularity of the respiratory-related heart rate rhythm was extracted, referred to as "RA (RA-COSPEC: Respiratory Area obtained by COSPEC)." The RA value is higher when a cosine curve fits the original data series well. The time course of RA showed cyclic changes with a significant rhythm during the night. The mean cycle length of RA was 70 ± 15 min, shorter than the cycle observed in young adults in our previous study. When RA exceeded a threshold of 3, HR was significantly lower than when RA was below 3. The regularity of the heart rate rhythm thus showed dynamic changes during the night in 1-year-old infants. The significant decrease of HR at times of higher RA suggests increased parasympathetic activity; we suspect that higher RA reflects a regular respiratory pattern during the night. This analysis system may be useful for quantitative assessment of the regularity and dynamic changes of nocturnal heart rate variability in infants.
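A sketch of the core fitting step (window length, trial frequency, and the variance-ratio score below are illustrative, not the authors' exact RA definition): within a window, fit a cosine of a trial respiratory frequency to the RR series by least squares and score how well the curve explains the data.

```python
import numpy as np

# Least-squares cosine fit: project the windowed series onto cos/sin at a
# trial frequency and return the fraction of variance explained.
def cosine_fit_power(t, x, freq):
    x = x - x.mean()
    A = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    fitted = A @ coef
    return np.sum(fitted ** 2) / np.sum(x ** 2)

t = np.arange(0, 30, 0.5)                          # a 30 s analysis window
rr = 0.45 + 0.02 * np.sin(2 * np.pi * 0.4 * t)     # RSA-like oscillation at 0.4 Hz
p_resp = cosine_fit_power(t, rr, 0.4)              # at the respiratory frequency
p_off = cosine_fit_power(t, rr, 0.15)              # at an off frequency
print(p_resp, p_off)                               # high vs. low regularity score
```

A regular respiratory rhythm yields a score near 1 at the respiratory frequency, which is the intuition behind a high RA.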
Energy Technology Data Exchange (ETDEWEB)
Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining adequate flux. One solution is to employ computational imaging techniques such as a magnified Coded Source Imaging (CSI) system, in which a coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by enlarging the coded mask and/or increasing the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this limitation, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we showed via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, those validation experiments were limited to simplistic neutron sources. In this work, we model the flux distribution of a real neutron source and incorporate this model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
International Nuclear Information System (INIS)
Gawand, Hemangi Laxman; Bhattacharjee, A. K.; Roy, Kallol
2017-01-01
In industrial plants such as nuclear power plants, system operations are performed by embedded controllers orchestrated by Supervisory Control and Data Acquisition (SCADA) software. A targeted attack (also termed a control aware attack) on the controller/SCADA software can lead a control system to operate in an unsafe mode or sometimes to complete shutdown of the plant. Such malware attacks can result in tremendous cost to the organization for recovery, cleanup, and maintenance activity. SCADA systems in operational mode generate huge log files. These files are useful in analysis of the plant behavior and diagnostics during an ongoing attack. However, they are bulky and difficult for manual inspection. Data mining techniques such as least squares approximation and computational methods can be used in the analysis of logs and to take proactive actions when required. This paper explores methodologies and algorithms so as to develop an effective monitoring scheme against control aware cyber attacks. It also explains soft computation techniques such as the computational geometric method and least squares approximation that can be effective in monitor design. This paper provides insights into diagnostic monitoring of its effectiveness by attack simulations on a four-tank model and using computation techniques to diagnose it. Cyber security of instrumentation and control systems used in nuclear power plants is of paramount importance and hence could be a possible target of such attacks.
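A hedged illustration of using least squares approximation on logged process data for monitoring (data, model, threshold, and variable names are all invented for the sketch, and this is far simpler than the paper's four-tank diagnostics): fit a linear model of a sensor value against time from normal-operation logs, then flag samples whose residual exceeds a threshold.

```python
import numpy as np

# Synthetic "log": a slowly rising tank level with a small oscillation,
# plus an injected control-aware anomaly from sample 80 onward.
t = np.arange(100.0)
level = 50.0 + 0.1 * t + 0.2 * np.sin(0.3 * t)
level[80:] += 5.0

A = np.column_stack([t[:60], np.ones(60)])            # train on early, normal data
coef, *_ = np.linalg.lstsq(A, level[:60], rcond=None)
residual = level - (coef[0] * t + coef[1])            # deviation from the LS model
threshold = 5 * residual[:60].std()                   # conservative alarm threshold
alarms = np.where(np.abs(residual) > threshold)[0]
print(alarms.min())                                   # first alarm at sample 80
```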
Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling
2017-11-01
Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing are usually destructive. Previous studies demonstrated that visible diffuse reflectance spectroscopy can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model; in this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. Partial Least Squares Discriminant Analysis (PLS-DA) is commonly used to identify blood species with spectroscopic methods, while the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both the PLS-DA and LSSVM methods were used for human blood discrimination. Compared with PLS-DA, LSSVM enhanced the performance of the identification models. The overall results show that LSSVM is more feasible for distinguishing human from animal blood, and demonstrate that it is a reliable, robust, effective, and accurate method for human blood identification.
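A minimal sketch of the Kennard-Stone selection named above (feature dimensions and sample counts are invented; real inputs would be spectra): iteratively pick the sample farthest, in feature space, from the set already chosen, starting from the two most mutually distant samples.

```python
import numpy as np

# Kennard-Stone calibration-set selection.
def kennard_stone(X, n_select):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(D.argmax(), D.shape)      # two most distant samples
    chosen = [i, j]
    while len(chosen) < n_select:
        rest = [k for k in range(len(X)) if k not in chosen]
        # for each candidate, its distance to the nearest already-chosen sample
        d_min = D[np.ix_(rest, chosen)].min(axis=1)
        chosen.append(rest[int(d_min.argmax())])      # farthest-from-set wins
    return chosen

rng = np.random.default_rng(2)
X = rng.random((20, 3))                               # 20 toy "spectra", 3 features
cal = kennard_stone(X, 6)
print(cal)                                            # indices of a well-spread calibration set
```

Unlike random selection, KS guarantees the calibration set spans the extremes of the measured feature space.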
Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy
International Nuclear Information System (INIS)
Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.
2007-01-01
Cs-137 brachytherapy has been performed in Madagascar since 2005. Treatment-time calculation for the prescribed dose is done manually. A Monte Carlo method Python library written at the Madagascar INSTN is used experimentally to calculate the dose distribution in and around the tumour. A first validation of the code was done by comparing the library's curves with those from the Nucletron company. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan images, for individualized and more accurate treatment-time calculation for a prescribed dose.
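A very simplified sketch of the Monte Carlo idea (not the INSTN library): sample photon path lengths from an isotropic point source and tally interactions in radial shells, assuming exponential attenuation with an invented effective coefficient for Cs-137 photons in water.

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 0.086                                  # assumed attenuation coeff (1/cm), illustrative
n = 100_000
# distance to first interaction for each photon: r = -ln(U)/mu
r = -np.log(rng.random(n)) / mu

edges = np.linspace(0, 10, 11)              # 1 cm radial shells out to 10 cm
counts, _ = np.histogram(r, bins=edges)
shell_vol = 4 / 3 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
dose_profile = counts / shell_vol           # interactions per cm^3 (arbitrary units)
print(dose_profile)                         # falls off steeply with distance
```

A real dose engine would also transport scattered photons and score deposited energy per voxel, but the sample-and-tally loop above is the kernel that a PC grid parallelizes trivially.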
DEFF Research Database (Denmark)
Yildiz, H.; Forsberg, René; Ågren, J.
2012-01-01
The remove-compute-restore (RCR) technique for regional geoid determination implies that both topography and low-degree global geopotential model signals are removed before computation and restored after Stokes' integration or Least Squares Collocation (LSC) solution. The Least Squares Modificati...
Directory of Open Access Journals (Sweden)
Dilip C Nath
2011-07-01
Full Text Available The Quasi-Least Squares (QLS) method is useful for different correlation structures in conjunction with Generalized Estimating Equations (GEE). The purpose of this work is to compare the regression parameters estimated under different correlation structures by the GEE and QLS methods. The comparison of estimated regression parameters was performed on a clinical trial data set studying the effect of drug treatment (metformin with pioglitazone vs. gliclazide with pioglitazone) in type 2 diabetes patients. With QLS, the correlation coefficient of postprandial blood sugar (PPBS) under a tridiagonal correlation structure is 0.008, whereas GEE failed to produce it. It was found that the combination of metformin with pioglitazone is more effective than the combination of gliclazide with pioglitazone.
Directory of Open Access Journals (Sweden)
Qingsong Xu
2014-01-01
Full Text Available Extreme learning machine (ELM) is a learning algorithm for single-hidden-layer feedforward neural networks designed for extremely fast learning. However, the performance of ELM in structural impact localization has not yet been studied. In this paper, a comparison of ELM with the least squares support vector machine (LSSVM) is presented for impact localization on a plate structure with surface-mounted piezoelectric sensors. Both basic and kernel-based ELM regression models have been developed for location prediction, and comparative studies of the basic ELM, kernel-based ELM, and LSSVM models are carried out. Results show that the kernel-based ELM requires the shortest learning time and is capable of producing suboptimal localization accuracy among the three models. Hence, ELM paves a promising way in structural impact detection.
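A compact sketch of basic ELM regression as described above (network size, activation, and the toy target are invented for illustration): random, untrained input weights for the single hidden layer, then output weights by least squares via the pseudo-inverse of the hidden-layer output matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * x).ravel()                       # target function to learn

L = 40                                          # hidden neurons
W = rng.standard_normal((1, L))                 # random input weights, never trained
b = rng.standard_normal(L)                      # random biases
H = np.tanh(x @ W + b)                          # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y                    # output weights: one least-squares solve
y_hat = H @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(rmse)                                     # small training error
```

The single non-iterative solve is what makes ELM training so fast compared with gradient-based network training or the kernel solves of LSSVM.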
A Meshless Local Petrov-Galerkin Shepard and Least-Squares Method Based on Duo Nodal Supports
Directory of Open Access Journals (Sweden)
Xiaoying Zhuang
2014-01-01
Full Text Available The meshless Shepard and least-squares (MSLS) interpolation is a newly developed partition-of-unity (PU) based method which removes the difficulties of many other meshless methods, such as the lack of the Kronecker delta property. The MSLS interpolation is efficient to compute and retains compatibility for any basis function used. In this paper, we extend the MSLS interpolation to the local Petrov-Galerkin weak form and adopt the duo nodal support domain. In the new formulation, there is no need for the singular weight functions required in the original MSLS, nor for a background mesh for integration. Numerical examples demonstrate the effectiveness and robustness of the present method.
Energy Technology Data Exchange (ETDEWEB)
Vincent M. Laboure; Yaqi Wang; Mark D. DeHart
2016-05-01
In this paper, we study the Least-Squares (LS) PN form of the transport equation compatible with voids in the context of Continuous Finite Element Methods (CFEM). We first derive weakly imposed boundary conditions which make the LS weak formulation equivalent to the Self-Adjoint Angular Flux (SAAF) variational formulation with a void treatment, in the particular case of constant cross-sections and a uniform mesh. We then implement this method in Rattlesnake with the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework, using a spherical harmonics (PN) expansion to discretize in angle. We test our implementation using the Method of Manufactured Solutions (MMS) and find the expected convergence behavior in both angle and space. Lastly, we investigate the impact of the global non-conservation of LS by comparing the method with SAAF on a heterogeneous test problem.
Suliman, Mohamed Abdalla Elhag
2016-12-19
This paper proposes a new approach for finding the regularization parameter for linear least squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The approach is derived to select the regularization parameter in a way that minimizes the mean squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most scenarios of discrete ill-posed problems. Moreover, the proposed approach has the lowest run-time and offers the highest robustness among all the tested methods.
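Background for the problem being addressed, as a sketch of standard Tikhonov regularization rather than the authors' perturbation-based selector (matrix, noise level, and parameter grid are invented): for an ill-conditioned model matrix, the reconstruction error depends strongly on the regularization parameter, and selecting it well is the whole game.

```python
import numpy as np

# Build a synthetic discrete ill-posed problem with rapidly decaying SVs.
rng = np.random.default_rng(5)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)                 # decaying singular values
A = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-3 * rng.standard_normal(n)    # noisy data

# Sweep the Tikhonov parameter and record the reconstruction error.
errs = {}
for lam in 10.0 ** np.arange(-12, 1):
    x_lam = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    errs[lam] = np.linalg.norm(x_lam - x_true)
best = min(errs, key=errs.get)
print(best, errs[best])                           # lambda with the smallest error
```

In practice `x_true` is unknown, which is why parameter-selection rules, such as the MSE-minimizing rule the paper proposes, are needed.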
International Nuclear Information System (INIS)
Chen Qiang; Ren Xuemei; Na Jing
2011-01-01
Highlights: (1) Model uncertainty of the system is approximated by a multiple-kernel LSSVM. (2) Approximation errors and disturbances are compensated in the controller design. (3) Asymptotic anti-synchronization is achieved under model uncertainty and disturbances. Abstract: In this paper, we propose a robust anti-synchronization scheme based on multiple-kernel least squares support vector machine (MK-LSSVM) modeling for two uncertain chaotic systems. The multiple-kernel regression, a linear combination of basic kernels, is designed to approximate the system uncertainties by constructing a multiple-kernel Lagrangian function and computing the corresponding regression parameters. A robust feedback control based on the MK-LSSVM model is then presented, and an improved update law is employed to estimate the unknown bound of the approximation error. The proposed control scheme guarantees asymptotic convergence of the anti-synchronization errors in the presence of system uncertainties and external disturbances. Numerical examples show the effectiveness of the proposed method.
Zhang, Xue-Xi; Yin, Jian-Hua; Mao, Zhi-Hua; Xia, Yang
2015-06-01
Fourier transform infrared imaging (FTIRI) combined with chemometric algorithms has strong potential to extract complex chemical information from biological tissues. FTIRI and partial least squares discriminant analysis (PLS-DA) were used to differentiate healthy and osteoarthritic (OA) cartilage for the first time. A PLS model was built on a calibration matrix of spectra randomly selected from the FTIRI spectral datasets of healthy and lesioned cartilage. Leave-one-out cross-validation was performed in the PLS model, and the fitting coefficient between actual and predicted categorical values of the calibration matrix reached 0.95. In the calibration and prediction matrices, the percentages of correctly identified healthy and lesioned cartilage spectra were 100% and 90.24%, respectively. These results demonstrate that FTIRI combined with PLS-DA is a promising approach for the categorical identification of healthy and OA cartilage specimens.
Lim, Jongguk; Kim, Giyoung; Mo, Changyeun; Oh, Kyoungmin; Yoo, Hyeonchae; Ham, Hyeonheui; Kim, Moon S
2017-09-30
The purpose of this study is to use near-infrared reflectance (NIR) spectroscopy equipment to nondestructively and rapidly discriminate Fusarium-infected hulled barley. Both normal hulled barley and Fusarium-infected hulled barley were scanned by using a NIR spectrometer with a wavelength range of 1175 to 2170 nm. Multiple mathematical pretreatments were applied to the reflectance spectra obtained for Fusarium discrimination, and the multivariate analysis method of partial least squares discriminant analysis (PLS-DA) was used for discriminant prediction. The PLS-DA prediction model developed by applying the second-order derivative pretreatment to the reflectance spectra obtained from the side of hulled barley without crease achieved 100% accuracy in discriminating the normal hulled barley and the Fusarium-infected hulled barley. These results demonstrated the feasibility of rapid discrimination of Fusarium-infected hulled barley by combining multivariate analysis with the NIR spectroscopic technique, which is utilized as a nondestructive detection method.
Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping
2011-04-01
In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Loula, A.F.D.; Toledo, E.M.; Franca, L.P.; Garcia, E.L.M.
1989-08-01
A variationally consistent finite element formulation for constrained problems, free from shear or membrane locking, is applied to axisymmetric shells subjected to arbitrary loading. The governing equations are written according to Love's classical theory for the bending of axisymmetric thin and moderately thick shells, accounting for shear deformation. The mixed variational formulation in terms of stresses and displacements presented here consists of the classical Galerkin method plus mesh-dependent least-squares-type terms, employed with equal-order finite element polynomials. The additional terms enhance the stability and accuracy of the original Galerkin method, as already proven theoretically and confirmed through numerical experiments. Numerical results for several examples are presented to demonstrate the good stability and accuracy of the formulation. (author)
Fadel, Maya Abou; de Juan, Anna; Vezin, Hervé; Duponchel, Ludovic
2016-12-01
Electron paramagnetic resonance (EPR) spectroscopy is a powerful technique that is able to characterize radicals formed in kinetic reactions. However, spectral characterization of individual chemical species is often limited or even unmanageable due to the severe kinetic and spectral overlap among species in kinetic processes. Therefore, we applied, for the first time, multivariate curve resolution-alternating least squares (MCR-ALS) method to EPR time evolving data sets to model and characterize the different constituents in a kinetic reaction. Here we demonstrate the advantage of multivariate analysis in the investigation of radicals formed along the kinetic process of hydroxycoumarin in alkaline medium. Multiset analysis of several EPR-monitored kinetic experiments performed in different conditions revealed the individual paramagnetic centres as well as their kinetic profiles. The results obtained by MCR-ALS method demonstrate its prominent potential in analysis of EPR time evolved spectra. Copyright © 2016 Elsevier B.V. All rights reserved.
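A bare-bones sketch of the alternating least squares core of MCR-ALS (without EPR specifics, and without the non-negativity and other constraints a real MCR-ALS run would impose; all data below are synthetic): factor a time-resolved spectral matrix D (time × field) into kinetic profiles C and component spectra S, D ≈ C·Sᵀ, by alternating least squares solves.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 5, 40)[:, None]
C_true = np.hstack([np.exp(-t), 1 - np.exp(-t)])        # decaying and rising species
x = np.linspace(0, 1, 60)
S_true = np.vstack([np.exp(-((x - 0.3) / 0.05) ** 2),   # two synthetic "spectra"
                    np.exp(-((x - 0.7) / 0.05) ** 2)])
D = C_true @ S_true                                      # time x field data matrix

C = rng.random((40, 2))                                  # random initial kinetic profiles
for _ in range(50):
    S = np.linalg.lstsq(C, D, rcond=None)[0]             # spectra given profiles
    C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T       # profiles given spectra
residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(residual)                                          # essentially zero for rank-2 data
```

The recovered factors are only determined up to permutation and scaling; the constraints used in practice (non-negativity, unimodality, closure) are what pin down chemically meaningful profiles.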
Directory of Open Access Journals (Sweden)
Ming Yang
2018-03-01
Full Text Available In this paper, an on-line parameter identification algorithm that iteratively computes the numerical values of inertia and load torque is proposed. Since inertia and load torque are strongly coupled variables, owing to the degenerate-rank problem, it is hard to estimate them accurately in cases where the load torque varies or no sufficiently accurate a priori knowledge of the inertia is available. This paper eliminates this problem and realizes ideal on-line inertia identification regardless of load condition and initial error. The algorithm integrates a full-order Kalman observer with Recursive Least Squares, and introduces adaptive controllers to enhance robustness, giving better performance when iteratively computing the load torque and the moment of inertia. A theoretical sensitivity analysis of the proposed algorithm is conducted. Compared with traditional methods, the validity of the proposed algorithm is demonstrated by simulation and experimental results.
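A stand-alone sketch of the Recursive Least Squares half of the scheme (the Kalman observer and adaptive controllers are omitted, and the plant values are invented): identify 1/J and the load torque from speed and torque samples of a toy discrete-time mechanical model w[k+1] = w[k] + (Ts/J)·(tau[k] − tau_load).

```python
import numpy as np

Ts, J, tau_load = 0.001, 0.01, 0.3                 # invented plant values
n = 500
tau = 1.0 + 0.5 * np.sin(0.05 * np.arange(n))      # exciting torque input
w = np.zeros(n + 1)
for k in range(n):                                 # simulate the plant
    w[k + 1] = w[k] + Ts / J * (tau[k] - tau_load)

theta = np.zeros(2)                                # estimates of [Ts/J, Ts*tau_load/J]
P = 1e6 * np.eye(2)                                # large initial covariance
for k in range(n):
    phi = np.array([tau[k], -1.0])                 # regressor
    y = w[k + 1] - w[k]                            # measured speed increment
    K = P @ phi / (1.0 + phi @ P @ phi)            # RLS gain (forgetting factor 1)
    theta += K * (y - phi @ theta)                 # update estimates
    P = P - np.outer(K, phi @ P)                   # update covariance

J_hat = Ts / theta[0]
tau_load_hat = theta[1] / theta[0]
print(J_hat, tau_load_hat)                         # ~0.01 and ~0.3
```

The sinusoidal torque provides the persistent excitation that decouples the two parameters, which is exactly the degenerate-rank concern the abstract raises.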
Directory of Open Access Journals (Sweden)
Bhanu Pratap Soni
2016-12-01
Full Text Available This paper proposes an effective supervised learning approach for static security assessment of a large power system. The approach employs a least squares support vector machine (LS-SVM) to rank contingencies and predict the system severity level. The severity of a contingency is measured by two scalar performance indices (PIs): the line MVA performance index (PIMVA) and the voltage-reactive power performance index (PIVQ). The SVM works in two steps: in Step I, both standard indices (PIMVA and PIVQ) are estimated under different operating scenarios; in Step II, contingency ranking is carried out based on the values of the PIs. The effectiveness of the proposed methodology is demonstrated on the IEEE 39-bus (New England) system. The approach can be a beneficial tool for fast and accurate security assessment and contingency analysis at the energy management center.
Sugiyama, Masashi; Yamada, Makoto; von Bünau, Paul; Suzuki, Taiji; Kanamori, Takafumi; Kawanabe, Motoaki
2011-03-01
Methods for directly estimating the ratio of two probability density functions have been actively explored recently since they can be used for various data processing tasks such as non-stationarity adaptation, outlier detection, and feature selection. In this paper, we develop a new method which incorporates dimensionality reduction into a direct density-ratio estimation procedure. Our key idea is to find a low-dimensional subspace in which densities are significantly different and perform density-ratio estimation only in this subspace. The proposed method, D(3)-LHSS (Direct Density-ratio estimation with Dimensionality reduction via Least-squares Hetero-distributional Subspace Search), is shown to overcome the limitation of baseline methods. Copyright © 2010 Elsevier Ltd. All rights reserved.
Bestari, T. A. S.; Supian, S.; Purwani, S.
2018-03-01
The Cimanuk River in Garut District, West Java, whose upper course lies on Mount Papandayan, is an important water source in the daily life of the people of Garut. In 2016, however, a flash flood hit the river, leaving 26 people dead and 23 missing; it flattened settlements and inundated a school and a hospital. BPLHD Jawa Barat regarded this disaster as caused by the degraded upper course of the Cimanuk River. The 2016 flash flood also paralyzed the economic sector. The least squares method was selected to analyze the post-disaster economic condition of the affected residents, after the mathematical equations were determined by the Cobb-Douglas method. By estimating the proportional value of the damage, the results are expected to give stakeholders a view of which sectors were hit worst, so that priorities can be set in redevelopment.
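An illustrative sketch of the two steps the abstract names (all economic data below are invented): a Cobb-Douglas production model Y = A·K^α·L^β is log-linearized and its parameters are then fitted by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50
K = rng.uniform(1, 10, n)                      # capital input
L = rng.uniform(1, 10, n)                      # labor input
A_true, alpha, beta = 2.0, 0.6, 0.3
Y = A_true * K ** alpha * L ** beta            # noise-free output for the sketch

# log Y = log A + alpha*log K + beta*log L : linear in the parameters
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
A_hat = np.exp(coef[0])
print(A_hat, coef[1], coef[2])                 # recovers ~2.0, 0.6, 0.3
```

Fitting the model before and after a disaster gives the kind of proportional damage comparison the study describes.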
Directory of Open Access Journals (Sweden)
Tiannan Ma
2016-12-01
Full Text Available Accurate forecasting of icing thickness has great significance for ensuring the security and stability of the power grid. In order to improve the forecasting accuracy, this paper proposes an icing forecasting system based on the fireworks algorithm and weighted least square support vector machine (W-LSSVM. The method of the fireworks algorithm is employed to select the proper input features with the purpose of eliminating redundant influence. In addition, the aim of the W-LSSVM model is to train and test the historical data-set with the selected features. The capability of this proposed icing forecasting model and framework is tested through simulation experiments using real-world icing data from the monitoring center of the key laboratory of anti-ice disaster, Hunan, South China. The results show that the proposed W-LSSVM-FA method has a higher prediction accuracy and it may be a promising alternative for icing thickness forecasting.
Wang, Qi
2015-08-01
This paper analyzes the effect of random noise on the measurement of central positions of white-light correlograms with the least-squares method. Measurements of two types of central positions, the central position of the envelope (CPE) and the central position of the central fringe (CPCF), are investigated. Two types of random noise, intensity noise and position noise, are considered. Analytic expressions for random error due to intensity noise (REIN) and random error due to position noise (REPN) are derived. The theoretical results are compared with the random errors estimated from computer simulations. Random errors of CPE measurement are compared with those of CPCF measurement. Relationships are investigated between the random errors and the wavelength of the light source. The REPN of CPCF measurement has been found to be independent of the wavelength of the light source and the amplitude of the central fringe.
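One common least-squares route to the central position of the envelope (CPE) exploits the fact that the logarithm of a Gaussian envelope is a parabola whose vertex is the center. The sketch below (synthetic correlogram envelope; scan range, width, and noise level are invented) is one such estimator, not necessarily the paper's exact formulation:

```python
import numpy as np

# synthetic white-light correlogram envelope with a known center
z = np.linspace(-5, 5, 101)             # scanner position (illustrative units)
z0_true, w = 0.37, 1.5
env = np.exp(-(z - z0_true) ** 2 / (2 * w ** 2))
rng = np.random.default_rng(2)
env_noisy = env + rng.normal(0, 1e-3, z.size)   # additive intensity noise

# least-squares CPE estimate: ln E(z) is a parabola a*z^2 + b*z + c
# whose vertex -b/(2a) is the envelope center
mask = env_noisy > 0.1                  # keep samples well above the noise
a, b, c = np.polyfit(z[mask], np.log(env_noisy[mask]), 2)
z0_hat = -b / (2 * a)
```

Repeating this over many noise realizations gives an empirical random error that can be compared against analytic REIN/REPN expressions of the kind the paper derives.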
Lamp system with a single second-lens newly designed by using the least square method for 4 LEDs
Jo, Jae Heung; Ryu, Jae Myung; Hong, Chun Gang
2014-05-01
It is common for many companies to use multiple LEDs to enhance the brightness of an LED lamp; in general, four LEDs are used in LED lamp systems. Moreover, a second-lens must be used to obtain straight, uniform illumination from the LED light. Where four LEDs are used, four second-lenses are conventionally assembled as well: the four second-lens units are manufactured from a single mold and assembled together with the LEDs. This study, however, introduces a new method of using the least square method to obtain uniform illumination with a divergence angle of 40 degrees from a single, newly designed injection-molded lens. Thanks to this optical design with a single lens, the assembly process of the LED lamp system was simplified by eliminating the complicated assembly procedure. The illumination non-uniformity of this newly designed lamp system was less than 14.1%.
Agahi, Hossein; Zarafshani, Kiumars; Behjat, Amir-Mohsen
The purpose of this study was to describe the effect of crop insurance on agricultural production among dry wheat farmers in Kermanshah province. The population of this study consisted of dry wheat farmers. Data were collected using stratified multi-stage cluster sampling and face-to-face interviews with 251 farmers in three different climatic regions (tropical, temperate, and cold) during the 2003-2004 crop year. The procedure used for determining farmers' technical efficiency was Corrected Ordinary Least Squares (COLS). Findings revealed that crop insurance has a positive effect in the temperate and tropical regions. However, the production difference between insured and uninsured farmers in the cold region was non-significant. It is therefore concluded that technical efficiency of agricultural production in Kermanshah province is a function of crop insurance as well as other variables such as crop management practices, personal characteristics, and fair distribution of agricultural inputs.
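Corrected Ordinary Least Squares shifts an OLS production frontier up by the largest residual so that no observation lies above it; each farm's technical efficiency is then its distance below the corrected frontier. A minimal numpy sketch on invented log-linear data (functional form, sample size, and inefficiency distribution are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.uniform(0, 2, n)                       # log input (e.g. seed, labor)
u = rng.exponential(0.2, n)                    # one-sided inefficiency >= 0
y = 1.0 + 0.8 * x - u                          # log output below the frontier

# Corrected Ordinary Least Squares (COLS):
# 1) fit OLS; 2) shift the intercept up by the largest residual so the
# frontier envelops the data; 3) efficiency = exp(-(distance to frontier))
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
beta_cols = beta.copy()
beta_cols[0] += resid.max()                    # corrected intercept
eff = np.exp(y - X @ beta_cols)                # technical efficiency in (0, 1]
```

The best-performing farm sits exactly on the frontier (efficiency 1); everyone else scores strictly below it.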
Fujii, Satoshi; Sato, Shinobu; Fukuda, Keisuke; Okinaga, Toshinori; Ariyoshi, Wataru; Usui, Michihiko; Nakashima, Keisuke; Nishihara, Tatsuji; Takenaka, Shigeori
2016-01-01
Diagnosis of periodontal disease by Fourier transform infrared (FT-IR) microscopic technique was achieved for saliva samples. Twenty-two saliva samples, collected from 10 patients with periodontal disease and 12 normal volunteers, were pre-processed and analyzed by FT-IR microscopy. We found that the periodontal samples showed a larger raw IR spectrum than the control samples. In addition, the shape of the second derivative spectrum was clearly different between the periodontal and control samples. Furthermore, the amount of saliva content and the mixture ratio were different between the two samples. Partial least squares discriminant analysis was used for the discrimination of periodontal samples based on the second derivative spectrum. The leave-one-out cross-validation discrimination accuracy was 94.3%. Thus, these results show that periodontal disease may be diagnosed by analyzing saliva samples with FT-IR microscopy.
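Partial least squares discriminant analysis is ordinary PLS regression against a 0/1 class label followed by thresholding the predicted score. A compact NIPALS-based PLS1 sketch on simulated "spectra" (the class-specific bands are invented; this is not the study's FT-IR pipeline):

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    # NIPALS PLS1: extract score/loading pairs from centered data
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        p = Xc.T @ t / (t @ t)
        qk = yc @ t / (t @ t)
        Xc -= np.outer(t, p)                  # deflate X
        yc -= qk * t                          # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)       # regression vector
    return B, X.mean(0), y.mean()

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# toy "second-derivative spectra": two classes differ in a few bands
rng = np.random.default_rng(4)
n, p = 40, 60
X = rng.normal(0, 1, (n, p))
labels = np.repeat([0.0, 1.0], n // 2)
X[labels == 1, 10:15] += 2.0                  # class-specific spectral feature
B, xm, ym = pls1_fit(X, labels, n_comp=2)
pred = (pls1_predict(X, B, xm, ym) > 0.5).astype(float)
accuracy = (pred == labels).mean()
```

In the paper this classifier was evaluated by leave-one-out cross-validation rather than the training accuracy computed here.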
Williams, Charles A.; Richardson, Randall M.
1988-01-01
A nonlinear weighted least-squares analysis was performed for a synthetic elastic layer over a viscoelastic half-space model of strike-slip faulting. Also, an inversion of strain rate data was attempted for the locked portions of the San Andreas fault in California. Based on an eigenvector analysis of synthetic data, it is found that the only parameter which can be resolved is the average shear modulus of the elastic layer and viscoelastic half-space. The other parameters were obtained by performing a suite of inversions for the fault. The inversions on data from the northern San Andreas resulted in predicted parameter ranges similar to those produced by inversions on data from the whole fault.
International Nuclear Information System (INIS)
Carbonniere, Philippe; Begue, Didier; Dargelos, Alain; Pouchan, Claude
2004-01-01
In this work we present an attractive least-squares fitting procedure which allows for the calculation of a quartic force field by jointly using energy, gradient, and Hessian data obtained from electronic wave function calculations on a suitably chosen grid of points. We use experimental design to select the grid points: a 'simplex-sum' of Box and Behnken grid was chosen for its efficiency and accuracy. We illustrate the numerical implementation of the method by using the energy and gradient data for H2O and H2CO. The B3LYP/cc-pVTZ quartic force field obtained from 11 and 44 simplex-sum configurations shows excellent agreement with the classical 44 and 168 energy calculations.
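The advantage of fitting energies and gradients jointly is that every grid point contributes one energy equation plus one equation per gradient component, so far fewer electronic-structure calculations are needed for the same number of force-field coefficients. A one-dimensional least-squares sketch (the quartic coefficients are invented, not an H2O force field):

```python
import numpy as np

# true 1D quartic potential (stand-in for one normal-mode cut)
c_true = np.array([0.0, 0.0, 0.5, -0.1, 0.02])      # increasing powers of x
V  = lambda x: np.polyval(c_true[::-1], x)           # energy
dV = lambda x: np.polyval(np.polyder(c_true[::-1]), x)  # gradient

# small grid; each point yields an energy row AND a gradient row
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
A_energy = np.vander(x, 5, increasing=True)          # rows: x^0 .. x^4
A_grad = np.column_stack([np.zeros_like(x), np.ones_like(x),
                          2 * x, 3 * x**2, 4 * x**3])  # d/dx of the basis
A = np.vstack([A_energy, A_grad])                    # stacked LS system
b = np.concatenate([V(x), dV(x)])
c_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Five points give ten equations for five unknowns; with energies alone, at least five (and in practice many more) points would be required.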
International Nuclear Information System (INIS)
Burns, W.A.; Mankiewicz, P.J.; Bence, A.E.; Page, D.S.; Parker, K.R.
1997-01-01
A method was developed to allocate polycyclic aromatic hydrocarbons (PAHs) in sediment samples to the PAH sources from which they came. The method uses principal-component analysis to identify possible sources and a least-squares model to find the source mix that gives the best fit of 36 PAH analytes in each sample. The method identified 18 possible PAH sources in a large set of field data collected in Prince William Sound, Alaska, USA, after the 1989 Exxon Valdez oil spill, including diesel oil, diesel soot, spilled crude oil in various weathering states, natural background, creosote, and combustion products from human activities and forest fires. Spill oil was generally found to be a small increment of the natural background in subtidal sediments, whereas combustion products were often the predominant sources for subtidal PAHs near sites of past or present human activity. The method appears to be applicable to other situations, including other spills
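The per-sample fitting step can be posed as a nonnegative least-squares problem, since source contributions cannot be negative. A sketch with three invented "fingerprints" over six analytes (the study's actual model used 36 PAH analytes and 18 candidate sources):

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical normalized PAH fingerprints of 3 sources over 6 analytes
sources = np.array([
    [0.40, 0.30, 0.15, 0.10, 0.04, 0.01],   # e.g. diesel soot (illustrative)
    [0.05, 0.10, 0.30, 0.30, 0.15, 0.10],   # e.g. weathered crude
    [0.20, 0.20, 0.20, 0.20, 0.10, 0.10],   # e.g. natural background
]).T                                         # analytes x sources

true_mix = np.array([2.0, 0.5, 1.0])         # source loadings in one sample
sample = sources @ true_mix                  # observed analyte concentrations

# best-fit nonnegative source mix for this sample
mix_hat, resid_norm = nnls(sources, sample)
```

With noise-free data the loadings are recovered exactly; with field data the residual norm indicates how well the candidate source set explains the sample.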
Directory of Open Access Journals (Sweden)
Mohamed G. Egila
2016-12-01
Full Text Available This paper presents a proposed design for analyzing electrocardiography (ECG) signals. The methodology employs a high-pass least-squares linear-phase Finite Impulse Response (FIR) filtering technique to filter out the baseline-wander noise embedded in the input ECG signal. The Discrete Wavelet Transform (DWT) is utilized as a feature extraction methodology to extract a reduced feature set from the input ECG signal. The design uses a back-propagation neural network classifier to classify the input ECG signal. The system is implemented on a Xilinx 3AN-XC3S700AN Field Programmable Gate Array (FPGA) board, and a system simulation has been done. The design is compared with other designs, achieving a total accuracy of 97.8% and a reduction in resource utilization in the FPGA implementation.
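A least-squares linear-phase FIR high-pass of this kind can be designed directly with `scipy.signal.firls`; the sampling rate, band edges, tap count, and weighting below are illustrative choices for baseline-wander removal, not the paper's design:

```python
import numpy as np
from scipy.signal import firls, freqz

fs = 360.0                               # a typical ECG sampling rate (Hz)
numtaps = 1001                           # odd length -> Type I linear phase
# least-squares fit of the amplitude response: stop 0-0.5 Hz (baseline
# wander), pass 1.5 Hz and above; stopband errors weighted 100x
bands   = [0.0, 0.5, 1.5, fs / 2]
desired = [0.0, 0.0, 1.0, 1.0]
taps = firls(numtaps, bands, desired, weight=[100.0, 1.0], fs=fs)

w, h = freqz(taps, worN=8192, fs=fs)
gain_dc   = np.abs(h[0])                              # response at 0 Hz
gain_10hz = np.abs(h[np.argmin(np.abs(w - 10.0))])    # response in passband
```

Linear phase matters here: QRS morphology is preserved because all frequencies are delayed equally by (numtaps - 1) / 2 samples.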
Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza
2018-03-01
In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
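The reweighting step that makes the ML estimate robust has a simple closed form: under a scaled t-distribution, observation i receives weight (ν + 1) / (ν + r_i²/s²), so gross outliers are automatically downweighted. A numpy sketch with the degree of freedom held fixed and no AR part (the full algorithm in the paper also estimates ν and the AR coefficients):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
t = np.linspace(0, 10, n)
X = np.column_stack([np.ones(n), np.sin(np.pi * t),
                     np.cos(np.pi * t)])                # Fourier regression
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + 0.1 * rng.standard_t(df=3, size=n)  # heavy-tailed noise
y[::50] += 5.0                                          # gross outliers

# iteratively reweighted least squares for t-distributed errors (nu fixed):
# weight_i = (nu + 1) / (nu + r_i^2 / s^2) downweights large residuals
nu, s2 = 3.0, 0.01
beta = np.linalg.lstsq(X, y, rcond=None)[0]             # OLS start
for _ in range(50):
    r = y - X @ beta
    w = (nu + 1) / (nu + r**2 / s2)
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)          # weighted LS step
    s2 = np.sum(w * r**2) / n                           # scale update
```

Plain OLS on the same data is visibly biased by the injected outliers, while the reweighted estimate stays close to the true coefficients.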
Directory of Open Access Journals (Sweden)
Ning Wang
2014-01-01
Full Text Available This paper developed a rapid and nondestructive method for quantitative analysis of a cheaper adulterant (wheat flour) in oat flour by NIR spectroscopy and chemometrics. Reflectance FT-NIR spectra in the range of 4000 to 12000 cm−1 of 300 oat flour objects adulterated with wheat flour were measured. The doping levels of wheat flour ranged from 5% to 50% (w/w). To ensure the generalization performance of the method, both the oat and the wheat flour samples were collected from different producing areas and an incomplete unbalanced randomized block (IURB) design was performed to include the significant variations that may be encountered in future samples. Partial least squares regression (PLSR) was used to develop calibration models for predicting the levels of wheat flour. Different preprocessing methods including smoothing, taking second-order derivative (D2), and standard normal variate (SNV) transformation were investigated to improve the model accuracy of PLS. The root mean squared error of Monte Carlo cross-validation (RMSEMCCV) and root mean squared error of prediction (RMSEP) were 1.921 and 1.975 (%, w/w) by D2-PLS, respectively. The results indicate that NIR and chemometrics can provide a rapid method for quantitative analysis of wheat flour in oat flour.
Wang, Ning; Zhang, Xingxiang; Yu, Zhuo; Li, Guodong; Zhou, Bin
2014-01-01
This paper developed a rapid and nondestructive method for quantitative analysis of a cheaper adulterant (wheat flour) in oat flour by NIR spectroscopy and chemometrics. Reflectance FT-NIR spectra in the range of 4000 to 12000 cm(-1) of 300 oat flour objects adulterated with wheat flour were measured. The doping levels of wheat flour ranged from 5% to 50% (w/w). To ensure the generalization performance of the method, both the oat and the wheat flour samples were collected from different producing areas and an incomplete unbalanced randomized block (IURB) design was performed to include the significant variations that may be encountered in future samples. Partial least squares regression (PLSR) was used to develop calibration models for predicting the levels of wheat flour. Different preprocessing methods including smoothing, taking second-order derivative (D2), and standard normal variate (SNV) transformation were investigated to improve the model accuracy of PLS. The root mean squared error of Monte Carlo cross-validation (RMSEMCCV) and root mean squared error of prediction (RMSEP) were 1.921 and 1.975 (%, w/w) by D2-PLS, respectively. The results indicate that NIR and chemometrics can provide a rapid method for quantitative analysis of wheat flour in oat flour.
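The RMSEMCCV figure of merit comes from Monte Carlo cross-validation: many random calibration/validation splits, each yielding a validation RMSE, which are then averaged. A self-contained sketch with simulated single-band "spectra" and a ridge-stabilized least-squares calibration standing in for PLS (all sizes, band shapes, and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 120, 50
conc = rng.uniform(5, 50, n)                       # adulterant level (%, w/w)
S = np.exp(-((np.arange(p) - 25) / 6.0) ** 2)      # hypothetical band shape
spectra = conc[:, None] * S + rng.normal(0, 0.5, (n, p))

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

# Monte Carlo cross-validation: repeated random calibration/validation splits
errors = []
for _ in range(100):
    idx = rng.permutation(n)
    cal, val = idx[:80], idx[80:]
    # ridge-stabilized least-squares calibration (stand-in for PLS)
    A = spectra[cal]
    coef = np.linalg.solve(A.T @ A + 1e-3 * np.eye(p), A.T @ conc[cal])
    errors.append(rmse(conc[val], spectra[val] @ coef))
rmsemccv = np.mean(errors)
```

Because every split draws a fresh validation set, RMSEMCCV is less sensitive to one lucky partition than a single train/test split.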
Directory of Open Access Journals (Sweden)
Prabhat K. Koner
2016-09-01
Full Text Available Global sea-surface temperature (SST) retrievals from MODIS-measured brightness temperatures, generated using regression methods, have been available to users for more than a decade and are used extensively for a wide range of atmospheric and oceanic studies. However, as evidenced by a number of studies, there are indications that the retrieval quality and cloud detection are somewhat sub-optimal. To improve the performance of both of these aspects, we endorse a new physical deterministic algorithm, based on truncated total least squares (TTLS), using multiple channels and parameters, in conjunction with a hybrid cloud detection scheme using a radiative transfer model atop a functional spectral difference method. The TTLS method is a new addition that improves the information content of the retrieval compared to our previous work using modified total least squares (MTLS), which is feasible because more measurements are available, allowing a larger retrieval vector. A systematic study is conducted to ascertain the appropriate channel selection for SST retrieval from the 16 thermal infrared channels available from the MODIS instrument. Additionally, since atmospheric aerosol is a well-known source of degraded quality of SST retrieval, we include aerosol profiles from numerical weather prediction in the forward simulation and include the total column density of all aerosols in the retrieval vector of our deterministic inverse method. We used a slightly modified version of our earlier reported cloud detection algorithm, namely CEM (cloud and error mask), for this study. Time series analysis of more than a million match-ups shows that our new algorithm (TTLS+CEM) can reduce RMSE by ~50% while increasing data coverage by ~50% compared to the operationally available MODIS SST.
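Truncated total least squares accounts for noise in both the design matrix and the observations by taking an SVD of the augmented matrix [A | b] and keeping only the leading singular directions. A minimal numpy sketch (toy dimensions and noise levels, not the SST retrieval operator):

```python
import numpy as np

def ttls(A, b, k):
    # truncated total least squares: SVD of the augmented matrix [A | b],
    # keep the first k singular directions, solve from the discarded block
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T
    V12 = V[:n, k:]          # upper part of the discarded singular vectors
    V22 = V[n:, k:]          # last row of the same block
    return -V12 @ V22.T @ np.linalg.inv(V22 @ V22.T)

rng = np.random.default_rng(7)
m, n = 200, 3
x_true = np.array([1.0, -2.0, 0.5])
A_clean = rng.normal(0, 1, (m, n))
A = A_clean + rng.normal(0, 0.01, (m, n))      # noise in the "design" too
b = A_clean @ x_true + rng.normal(0, 0.01, m)  # noisy observations
x_hat = ttls(A, b, k=n).ravel()
```

Choosing k below the full rank regularizes ill-conditioned problems, which is the sense in which TTLS improves the usable information content of the retrieval over an unregularized solve.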
Energy Technology Data Exchange (ETDEWEB)
Aziz, A., E-mail: aziz@gonzaga.edu [Department of Mechanical Engineering, School of Engineering and Applied Science, Gonzaga University, Spokane, WA 99258 (United States); Bouaziz, M.N. [Department of Mechanical Engineering, University of Medea, BP 164, Medea 26000 (Algeria)
2011-08-15
Highlights: • Analytical solutions for a rectangular fin with temperature dependent heat generation and thermal conductivity. • Graphs give temperature distributions and fin efficiency. • Comparison of analytical and numerical solutions. • Method of least squares used for the analytical solutions. - Abstract: Approximate but highly accurate solutions for the temperature distribution, fin efficiency, and optimum fin parameter for a constant area longitudinal fin with temperature dependent internal heat generation and thermal conductivity are derived analytically. The method of least squares recently used by the authors is applied to treat the two nonlinearities, one associated with the temperature dependent internal heat generation and the other due to temperature dependent thermal conductivity. The solution is built from the classical solution for a fin with uniform internal heat generation and constant thermal conductivity. The results are presented graphically and compared with the direct numerical solutions. The analytical solutions retain their accuracy (within 1% of the numerical solution) even when there is a 60% increase in thermal conductivity and internal heat generation at the base temperature from their corresponding values at the sink temperature. The present solution is simple (involves hyperbolic functions only) compared with the fairly complex approximate solutions based on the homotopy perturbation method, variational iteration method, and the double series regular perturbation method and offers high accuracy. The simple analytical expressions for the temperature distribution, the fin efficiency and the optimum fin parameter are convenient for use by engineers dealing with the design and analysis of heat generating fins operating with a large temperature difference between the base and the environment.
International Nuclear Information System (INIS)
Lv, You; Liu, Jizhen; Yang, Tingting; Zeng, Deliang
2013-01-01
Real operation data of power plants are inclined to be concentrated in some local areas because of the operators' habits and control system design. In this paper, a novel least squares support vector machine (LSSVM)-based ensemble learning paradigm is proposed to predict NOx emission of a coal-fired boiler using real operation data. In view of the plant data characteristics, a soft fuzzy c-means cluster algorithm is proposed to decompose the original data and guarantee the diversity of individual learners. Subsequently the base LSSVM is trained in each individual subset to solve the subtask. Finally, partial least squares (PLS) is applied as the combination strategy to eliminate the collinear and redundant information of the base learners. Considering that the fuzzy membership also has an effect on the ensemble output, the membership degree is added as one of the variables of the combiner. The single LSSVM and other ensemble models using different decomposition and combination strategies are also established to make a comparison. The result shows that the new soft FCM-LSSVM-PLS ensemble method can predict NOx emission accurately. Besides, because of the divide-and-conquer frame, the total time consumed in searching the parameters and in training also decreases evidently. - Highlights: • A novel LSSVM ensemble model to predict NOx emissions is presented. • LSSVM is used as the base learner and PLS is employed as the combiner. • The model is applied to process data from a 660 MW coal-fired boiler. • The generalization ability of the model is enhanced. • The time consumed in training and parameter searching decreases sharply
Grate, J W; Patrash, S J; Kaganovet, S N; Abraham, M H; Wise, B M; Gallagher, N B
2001-11-01
In previous work, it was shown that, in principle, vapor descriptors could be derived from the responses of an array of polymer-coated acoustic wave devices. This new chemometric classification approach was based on polymer/vapor interactions following the well-established linear solvation energy relationships (LSERs) and the surface acoustic wave (SAW) transducers being mass sensitive. Mathematical derivations were included and were supported by simulations. In this work, an experimental data set of polymer-coated SAW vapor sensors is investigated. The data set includes 20 diverse polymers tested against 18 diverse organic vapors. It is shown that interfacial adsorption can influence the response behavior of sensors with nonpolar polymers in response to hydrogen-bonding vapors; however, in general, most sensor responses are related to vapor interactions with the polymers. It is also shown that polymer-coated SAW sensor responses can be empirically modeled with LSERs, deriving an LSER for each individual sensor based on its responses to the 18 vapors. Inverse least-squares methods are used to develop models that correlate and predict vapor descriptors from sensor array responses. Successful correlations can be developed by multiple linear regression (MLR), principal components regression (PCR), and partial least-squares (PLS) regression. MLR yields the best fits to the training data, however cross-validation shows that prediction of vapor descriptors for vapors not in the training set is significantly more successful using PCR or PLS. In addition, the optimal dimension of the PCR and PLS models supports the dimensionality of the LSER formulation and SAW response models.
Directory of Open Access Journals (Sweden)
Ondrej eLibiger
2015-12-01
Full Text Available It is now feasible to examine the composition and diversity of microbial communities (i.e., 'microbiomes') that populate different human organs and orifices using DNA sequencing and related technologies. To explore the potential links between changes in microbial communities and various diseases in the human body, it is essential to test associations involving different species within and across microbiomes, environmental settings and disease states. Although a number of statistical techniques exist for carrying out relevant analyses, it is unclear which of these techniques exhibit the greatest statistical power to detect associations given the complexity of most microbiome datasets. We compared the statistical power of principal component regression, partial least squares regression, regularized regression, distance-based regression, Hill's diversity measures, and a modified test implemented in the popular and widely used microbiome analysis methodology 'Metastats' across a wide range of simulated scenarios involving changes in feature abundance between two sets of metagenomic samples. For this purpose, simulation studies were used to change the abundance of microbial species in a real dataset from a published study examining human hands. Each technique was applied to the same data, and its ability to detect the simulated change in abundance was assessed. We hypothesized that a small subset of methods would outperform the rest in terms of the statistical power. Indeed, we found that the Metastats technique modified to accommodate multivariate analysis and partial least squares regression yielded high power under the models and data sets we studied. The statistical power of diversity measure-based tests, distance-based regression and regularized regression was significantly lower. Our results provide insight into powerful analysis strategies that utilize information on species counts from large microbiome data sets exhibiting skewed frequency distributions.
Directory of Open Access Journals (Sweden)
Yanda Christian
2018-01-01
Full Text Available The acceleration of national development has increased the number of construction projects in Indonesia, including road projects. The contractor, as the service provider for construction work, must have a detailed implementation schedule and a project cost budget plan so that the work is not subject to delays and cost overrun. The main cause of cost overrun is error in cost estimation. This study discusses modeling to increase the accuracy of cost estimation, as well as the development of factors that can improve that accuracy. Research variables were validated by experts using the Analytical Hierarchy Process (AHP) method, and modeling was carried out using the Structural Equation Modeling-Partial Least Squares (SEM-PLS) method on project contractors of the Public Works Department of Central Kalimantan Province and the National Road Implementation Center XI Work Unit of Central Kalimantan, with project contract values of 20 to 50 billion rupiah in 2016. The variable validation shows that estimator competence, survey, availability of information, calculation of cost estimation, and company internals are the variables that influence estimation. The modeling equation obtained is AEB = 0.129 KE + 0.466 S + 0.191 KI + 0.153 PEB + 0.069 IP + 0.181 ζ. Cost estimation is developed by improving each influential indicator in each variable and applying development strategies to increase estimation accuracy based on SWOT analysis. Keywords : Analytical Hierarchy Process (AHP), cost estimation, road, Structural Equation Modeling-Partial Least Squares (SEM-PLS), SWOT analysis.
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-04-01
The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.
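The core of such a combination is weighted least squares with the noise covariance of each dataset: stacking the weighted normal equations also yields the full covariance matrix of the combined estimate, which is what enables the quality control described above. A small numpy sketch on invented linear systems (diagonal covariances for brevity; the real problem uses full covariance matrices and SRBF design matrices):

```python
import numpy as np

rng = np.random.default_rng(8)
# two noisy "datasets" observing the same 3 model coefficients
x_true = np.array([2.0, -1.0, 0.5])
A1 = rng.normal(0, 1, (30, 3)); C1 = 0.02 * np.eye(30)   # GGM-like, low noise
A2 = rng.normal(0, 1, (50, 3)); C2 = 0.05 * np.eye(50)   # terrestrial-like
y1 = A1 @ x_true + rng.multivariate_normal(np.zeros(30), C1)
y2 = A2 @ x_true + rng.multivariate_normal(np.zeros(50), C2)

# weighted least-squares combination: sum the normal equations, each
# weighted by the inverse noise covariance of its dataset
N = A1.T @ np.linalg.solve(C1, A1) + A2.T @ np.linalg.solve(C2, A2)
u = A1.T @ np.linalg.solve(C1, y1) + A2.T @ np.linalg.solve(C2, y2)
x_hat = np.linalg.solve(N, u)
Cx = np.linalg.inv(N)        # noise covariance of the combined estimate
```

The inverse normal matrix Cx is exactly the "full noise covariance matrix for quality control and further data processing" the abstract refers to, here for a toy problem.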
Directory of Open Access Journals (Sweden)
Américo José dos Santos Reis
2005-01-01
Full Text Available By definition, the genetic effects obtained from a circulant diallel table are random. However, because of the methods of analysis, those effects have been considered as fixed. Two different statistical approaches were applied. One assumed the model to be fixed and obtained solutions through the ordinary least square (OLS method. The other assumed a mixed model and estimated the fixed effects (BLUE by generalized least squares (GLS and the best linear unbiased predictor (BLUP of the random effects. The goal of this study was to evaluate the consequences when considering these effects as fixed or random, using the coefficient of correlation between the responses of observed and non-observed hybrids. Crossings were made between S1 inbred lines from two maize populations developed at Universidade Federal de Goiás, the UFG-Samambaia "Dent" and UFG-Samambaia "Flint". A circulant inter-group design was applied, and there were five (s = 5 crossings for each parent. The predictions were made using a reduced model. Diallels with different sizes of s (from 2 to 5 were simulated, and the coefficients of correlation were obtained using two different approaches for each size of s. In the first approach, the observed hybrids were included in both the estimation of the genetic parameters and the coefficient of correlation, while in the second a cross-validation process was employed. In this process, the set of hybrids was divided in two groups: one group, comprising 75% of the original group, to estimate the genetic parameters, and a second one, consisting of the remaining 25%, to validate the predictions. In all cases, a bootstrap process with 200 resamplings was used to generate the empirical distribution of the correlation coefficient. This coefficient showed a decrease as the value of s decreased. The cross-validation method allowed to estimate the bias magnitude in evaluating the correlation coefficient using the same hybrids, to predict the genetic
Dong, Yuting; Zhang, Lu; Balz, Timo; Luo, Heng; Liao, Mingsheng
2018-03-01
Radargrammetry is a powerful tool to construct digital surface models (DSMs), especially in heavily vegetated and mountainous areas where SAR interferometry (InSAR) technology suffers from decorrelation problems. In radargrammetry, the most challenging step is to produce an accurate disparity map through massive image matching, from which terrain height information can be derived using a rigorous sensor orientation model. However, precise stereoscopic SAR (StereoSAR) image matching is a very difficult task in mountainous areas due to the presence of speckle noise and dissimilar geometric/radiometric distortions. In this article, an adaptive-window least squares matching (AW-LSM) approach with an enhanced epipolar geometric constraint is proposed to robustly identify homologous points after compensation for radiometric discrepancies and geometric distortions. The matching procedure consists of two stages. In the first stage, the right image is re-projected into the left image space to generate epipolar images using rigorous imaging geometries enhanced with elevation information extracted from prior DEM data, e.g. SRTM DEM, instead of the mean height of the mapped area. Consequently, the dissimilarities in geometric distortions between the left and right images are largely reduced, and the residual disparity corresponds to the height difference between the true ground surface and the prior DEM. In the second stage, massive per-pixel matching between StereoSAR epipolar images identifies the residual disparity. To ensure the reliability and accuracy of the matching results, we develop an iterative matching scheme in which classic cross-correlation matching is used to obtain initial results, followed by least squares matching (LSM) to refine them. An adaptively resizing search window strategy is adopted during the dense matching step to help find the right matching points. The feasibility and effectiveness of the proposed approach are demonstrated using
Hancewicz, Thomas M; Xiao, Chunhong; Zhang, Shuliang; Misra, Manoj
2013-12-01
In vivo confocal Raman spectroscopy has become the measurement technique of choice for skin health and skin care related communities as a way of measuring functional chemistry aspects of skin that are key indicators for care and treatment of various skin conditions. Chief among these techniques are stratum corneum water content, a critical health indicator for severe skin condition related to dryness, and natural moisturizing factor components that are associated with skin protection and barrier health. In addition, in vivo Raman spectroscopy has proven to be a rapid and effective method for quantifying component penetration in skin for topically applied skin care formulations. The benefit of such a capability is that noninvasive analytical chemistry can be performed in vivo in a clinical setting, significantly simplifying studies aimed at evaluating product performance. This presumes, however, that the data and analysis methods used are compatible and appropriate for the intended purpose. The standard analysis method used by most researchers for in vivo Raman data is ordinary least squares (OLS) regression. The focus of work described in this paper is the applicability of OLS for in vivo Raman analysis with particular attention given to use for non-ideal data that often violate the inherent limitations and deficiencies associated with proper application of OLS. We then describe a newly developed in vivo Raman spectroscopic analysis methodology called multivariate curve resolution-augmented ordinary least squares (MCR-OLS), a relatively simple route to addressing many of the issues with OLS. The method is compared with the standard OLS method using the same in vivo Raman data set and using both qualitative and quantitative comparisons based on model fit error, adherence to known data constraints, and performance against calibration samples. A clear improvement is shown in each comparison for MCR-OLS over standard OLS, thus supporting the premise that the MCR
International Nuclear Information System (INIS)
Mori, Takamasa; Nakagawa, Masayuki; Kaneko, Kunio.
1996-05-01
A code system has been developed to produce neutron cross section libraries for the MVP continuous energy Monte Carlo code from an evaluated nuclear data library in the ENDF format. The code system consists of 9 computer codes and can process nuclear data in the latest ENDF-6 format. By using the present system, MVP neutron cross section libraries for important nuclides in reactor core analyses, shielding and fusion neutronics calculations have been prepared from the JENDL-3.1, JENDL-3.2, JENDL-FUSION file and ENDF/B-VI databases. This report describes the format of the MVP neutron cross section library, the details of each code in the code system, and how to use them. (author)
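Library preparation of this kind typically involves putting pointwise cross sections from an evaluation onto a common energy grid with lin-lin interpolation. A minimal sketch with invented (energy, cross-section) pairs, not the actual MVP library format:

```python
import numpy as np

def union_grid(xs_a, xs_b):
    """Place two pointwise (energy, sigma) reactions on a shared union
    energy grid with lin-lin interpolation, as continuous-energy
    Monte Carlo libraries typically require."""
    e_a, s_a = xs_a
    e_b, s_b = xs_b
    grid = np.union1d(e_a, e_b)
    return grid, np.interp(grid, e_a, s_a), np.interp(grid, e_b, s_b)

# Invented elastic and capture cross sections (energy in eV, sigma in barns).
elastic = (np.array([1.0, 10.0, 100.0]), np.array([5.0, 4.0, 3.0]))
capture = (np.array([1.0, 50.0, 100.0]), np.array([2.0, 0.5, 0.1]))
e, sig_el, sig_cap = union_grid(elastic, capture)
total = sig_el + sig_cap  # total cross section on the shared grid
```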
Turner, D A; Von Behren, P L; Ruggie, N T; Hauser, R G; Denes, P; Ali, A; Messer, J V; Fordham, E W; Groch, M W
1982-06-01
Least-square phase analysis (LSPA) of radionuclide cineangiograms demonstrates the sequence of onset of inward ventricular movement noninvasively. To validate the method and explore its ability to identify abnormal initial sites of ventricular activation, LSPA was applied to 14 patients with pacemakers (one with electrodes in two locations) (group 1) and three patients with recurrent ventricular tachycardia (VT) (group 2) who had undergone electrophysiologic endocardial mapping. The segment in which the site of initial ventricular activation was located was correctly identified in 13 of 15 paced studies and in two of three group 2 patients during VT. Pacing increased the duration of spread of onset of inward ventricular movement, and the duration of spread of onset correlated well with the duration of the QRS (r = 0.80). The sequence of onset of inward ventricular movement during VT was similar to the sequence of depolarization in all three group 2 patients. These preliminary results suggest that the sequence of onset of ventricular contraction as depicted by LSPA is a valid representation of the actual contraction sequence and that LSPA of radionuclide cineangiograms correctly identifies abnormal sites of initial ventricular activation.
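LSPA amounts to a least-squares fit of the first Fourier harmonic to each regional time-activity curve; the fitted phase angle then orders the onset of inward wall motion across regions. A sketch on a synthetic curve (the clinical processing chain is assumed, not reproduced):

```python
import numpy as np

def ls_phase(counts, t, period):
    """Least-squares fit counts ~ c0 + a*cos(w t) + b*sin(w t);
    the phase of the first harmonic orders onset of wall motion."""
    w = 2 * np.pi / period
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    c0, a, b = np.linalg.lstsq(A, counts, rcond=None)[0]
    return np.arctan2(b, a)  # a*cos + b*sin = R*cos(w t - phi)

# Synthetic time-activity curve over one cardiac cycle, known phase lag.
t = np.linspace(0.0, 1.0, 32, endpoint=False)
true_phase = 0.7
counts = 100 + 20 * np.cos(2 * np.pi * t - true_phase)
phase = ls_phase(counts, t, period=1.0)
```

Applied pixel-by-pixel to a gated study, sorting the resulting phase angles yields the activation-sequence map the abstract describes.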
Directory of Open Access Journals (Sweden)
Mardawia M Panrereng
2015-06-01
In recent years, underwater acoustic communication systems have been developed by many researchers, and the scale of the challenges involved has drawn growing research interest to the field. The underwater channel is a difficult communication medium because of attenuation, absorption, and multipath caused by the continuous motion of the water; in shallow water, multipath arises from reflections off the surface and the sea floor. The need for fast data transmission over limited bandwidth makes Orthogonal Frequency Division Multiplexing (OFDM) a solution for high-rate communication, here with Binary Phase-Shift Keying (BPSK) modulation. Channel estimation aims to characterize the impulse response of the propagation channel by transmitting pilot symbols. With the Least Squares (LS) method, the resulting Mean Square Error (MSE) tends to be larger than with the Minimum Mean Square Error (MMSE) method. In terms of Bit Error Rate (BER), however, the two channel estimation methods show no significant difference, differing by roughly one SNR increment.
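The LS-versus-MMSE comparison in the abstract can be illustrated on a toy pilot-based channel estimate. The identity channel-correlation matrix and the SNR below are assumptions made to keep the sketch minimal:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, snr_db = 4096, 10           # subcarriers and pilot SNR (assumed)
sigma2 = 10 ** (-snr_db / 10)      # noise variance for unit-power pilots

x = np.ones(n_sub)                 # BPSK pilot symbols (all +1)
h = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub))
y = h * x + noise                  # received pilot observations

# LS estimate: per-subcarrier division; ignores the noise statistics.
h_ls = y / x

# MMSE estimate with an assumed identity channel correlation matrix:
# the general form R (R + sigma2*I)^-1 h_ls collapses to a shrinkage.
h_mmse = h_ls / (1 + sigma2)

mse_ls = np.mean(np.abs(h_ls - h) ** 2)
mse_mmse = np.mean(np.abs(h_mmse - h) ** 2)
```

The shrinkage buys a lower MSE, but at this noise level the gap is modest, which is consistent with the abstract's finding that the BER difference between the two estimators is small.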
Olivares, A.; Górriz, J. M.; Ramírez, J.; Olivares, G.
2011-02-01
Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed.
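A minimal LMS filter of the kind compared in this study might look as follows; the 4-tap system-identification setup is hypothetical, chosen only to show the stochastic-gradient update:

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """LMS adaptive filter: estimate desired signal d from input x
    via stochastic-gradient updates on the tap weights."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = d[n] - w @ u                    # instantaneous error
        w += 2 * mu * e * u                 # gradient step
    return w

# Hypothetical setup: identify a known 4-tap system from noisy data.
rng = np.random.default_rng(2)
x = rng.normal(size=5000)
h_true = np.array([0.8, -0.4, 0.2, 0.1])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))
w = lms(x, d)
```

RLS replaces the fixed step `mu` with a recursively updated inverse correlation matrix, converging faster at higher per-sample cost, which is the trade-off such comparative studies weigh.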
Monfared, Ali Momenpour T; Tiwari, Vidhu S; Tripathi, Markandey M; Anis, Hanan
2013-02-01
Heparin is the most widely used anticoagulant for the prevention of blood clots in patients undergoing certain types of surgery, including open heart surgery and dialysis. Precise monitoring of the amount of heparin in a patient's blood is crucial for reducing morbidity and mortality in surgical environments. Based upon these considerations, we have used Raman spectroscopy in conjunction with partial least squares (PLS) analysis to measure heparin concentration at clinically relevant levels, i.e., less than 10 United States Pharmacopeia units per milliliter (USP/ml) in serum. The PLS calibration model was constructed from the Raman spectra of different concentrations of heparin in serum. It showed a high coefficient of determination (R² > 0.91) between the spectral data and the heparin level in serum, along with a low root mean square error of prediction of ~4 USP/ml. It enabled the detection of extremely low concentrations of heparin in serum (~8 USP/ml), as desirable in a clinical environment. The proposed optical method has the potential of being implemented as a point-of-care testing procedure during surgeries, where the interest is to rapidly monitor low concentrations of heparin in a patient's blood.
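The PLS calibration idea can be sketched with a bare-bones PLS1 (NIPALS) implementation on synthetic rank-3 "spectra"; the latent-factor setup below is invented and merely stands in for the heparin/serum data:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Bare-bones PLS1 (NIPALS): returns regression vector B and the
    centering terms needed for prediction."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)                       # weight vector
        t = Xc @ w                                   # scores
        tt = t @ t
        p = Xc.T @ t / tt                            # X loadings
        qk = yc @ t / tt                             # y loading
        Xc, yc = Xc - np.outer(t, p), yc - qk * t    # deflate
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# Invented calibration set: rank-3 "spectra"; y plays the analyte
# concentration (a stand-in for heparin level in USP/ml).
rng = np.random.default_rng(3)
T = rng.normal(size=(60, 3))             # latent scores
L = rng.normal(size=(200, 3))            # spectral loadings
X = T @ L.T
y = T @ np.array([1.0, 0.5, -0.3])
B, xm, ym = pls1_fit(X, y, n_comp=3)
rmsep = np.sqrt(np.mean((pls1_predict(X, B, xm, ym) - y) ** 2))
```

The root mean square error of prediction computed on held-out samples is the figure of merit the abstract quotes (~4 USP/ml).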
Anderson, R. B.; Morris, Richard V.; Clegg, S. M.; Humphries, S. D.; Wiens, R. C.; Bell, J. F., III; Mertzman, S. A.
2010-01-01
The ChemCam instrument [1] on the Mars Science Laboratory (MSL) rover will be used to obtain the chemical composition of surface targets within 7 m of the rover using Laser Induced Breakdown Spectroscopy (LIBS). ChemCam analyzes atomic emission spectra (240-800 nm) from a plasma created by a pulsed Nd:KGW 1067 nm laser. The LIBS spectra can be used in a semiquantitative way to rapidly classify targets (e.g., basalt, andesite, carbonate, sulfate, etc.) and in a quantitative way to estimate their major and minor element chemical compositions. Quantitative chemical analysis from LIBS spectra is complicated by a number of factors, including chemical matrix effects [2]. Recent work has shown promising results using multivariate techniques such as partial least squares (PLS) regression and artificial neural networks (ANN) to predict elemental abundances in samples [e.g. 2-6]. To develop, refine, and evaluate analysis schemes for LIBS spectra of geologic materials, we collected spectra of a diverse set of well-characterized natural geologic samples and are comparing the predictive abilities of PLS, cascade correlation ANN (CC-ANN) and multilayer perceptron ANN (MLP-ANN) analysis procedures.
Heddam, Salim; Kisi, Ozgur
2018-04-01
In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and the M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data consisted of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI, cfs), which were used as inputs to the LSSVM, MARS and M5T models. The three models were applied to each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model differed from one station to another.
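Of the three models, the LSSVM is the simplest to sketch: its equality constraints reduce training to a single linear solve rather than a quadratic program. A toy regression under assumed RBF-kernel and regularization settings (not the study's tuned values):

```python
import numpy as np

def rbf(A, B, s=1.0):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def lssvm_fit(X, y, gamma=100.0, s=1.0):
    """LS-SVM regression: solve the KKT system
    [0 1^T; 1 K + I/gamma] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]               # bias b, dual weights alpha

def lssvm_predict(Xq, X, b, alpha, s=1.0):
    return rbf(Xq, X, s) @ alpha + b

# Toy smooth target standing in for a DO-vs-inputs relationship.
X = np.linspace(0.0, 6.0, 40).reshape(-1, 1)
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y)
yhat = lssvm_predict(X, X, b, alpha)
```

In a study like this one, `X` would hold the per-day (TE, pH, SC, DI) inputs and `y` the measured DO concentration, with `gamma` and the kernel width tuned per station.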