WorldWideScience

Sample records for linear two-dimensional least-squares

  1. Simplified neural networks for solving linear least squares and total least squares problems in real time.

    Science.gov (United States)

    Cichocki, A; Unbehauen, R

    1994-01-01

    In this paper, a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and/or the LMS (Adaline) Widrow-Hoff algorithm. The algorithms can be applied to any problem that can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
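
    The row-action (Kaczmarz) update mentioned above is easy to sketch. The following is a minimal NumPy illustration, not the analog-network implementation of the paper; the relaxation factor and sweep count are illustrative assumptions.

```python
import numpy as np

def kaczmarz(A, b, sweeps=100, relax=1.0):
    """Row-action (Kaczmarz) iteration for A x = b.

    Each step projects the current iterate onto the hyperplane defined by
    one row of A; `relax` is an assumed relaxation factor.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a_i = A[i]
            r = b[i] - a_i @ x              # residual of row i
            x += relax * r * a_i / (a_i @ a_i)
    return x

# Tiny consistent example
A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
x_true = np.array([0.5, -1.0])
b = A @ x_true
print(kaczmarz(A, b))   # approaches x_true for consistent systems
```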

  2. Solving linear inequalities in a least squares sense

    Energy Technology Data Exchange (ETDEWEB)

    Bramley, R.; Winnicka, B. [Indiana Univ., Bloomington, IN (United States)

    1994-12-31

    Let $A \in \mathbb{R}^{m \times n}$ be an arbitrary real matrix, and let $b \in \mathbb{R}^m$ be a given vector. A familiar problem in computational linear algebra is to solve the system $Ax = b$ in a least squares sense; that is, to find an $x^*$ minimizing $\|Ax - b\|$, where $\|\cdot\|$ refers to the vector two-norm. Such an $x^*$ solves the normal equations $A^{T}(Ax - b) = 0$, and the optimal residual $r^* = b - Ax^*$ is unique (although $x^*$ need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of $A$ and $b$, on a vector of data $x$. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities $Ax \le b$ in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find $x^*$ minimizing $\|(Ax - b)_+\|$, where the $i$-th component of the vector $v_+$ is the maximum of zero and the $i$-th component of $v$.
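
    As a hedged illustration of the objective $\|(Ax - b)_+\|$ described above, the sketch below minimizes it by plain gradient descent; the step size and iteration count are assumptions, not the authors' algorithm.

```python
import numpy as np

def lsq_inequalities(A, b, step=None, iters=500):
    """Minimize ||(A x - b)_+||_2 by plain gradient descent (illustrative).

    (v)_+ keeps the positive part of each component, so only rows with
    A_i x > b_i (violated inequalities A x <= b) contribute to the objective.
    """
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the spectral norm
    x = np.zeros(n)
    for _ in range(iters):
        viol = np.maximum(A @ x - b, 0.0)        # (Ax - b)_+
        x -= step * (A.T @ viol)                 # gradient of 0.5*||(Ax-b)_+||^2
    return x

A = np.array([[1.0, 1.0], [-1.0, 2.0], [2.0, -1.0]])
b = np.array([1.0, 2.0, 3.0])
x = lsq_inequalities(A, b)
print(x, np.maximum(A @ x - b, 0.0))             # violations driven toward zero
```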

  3. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies

    2018-03-29

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.

  4. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a better, more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
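
    COPRA itself is not reproduced here; as a minimal sketch of the regularized least-squares (RLS) criterion the thesis builds on, the following solves the standard Tikhonov (ridge) problem for a hand-picked regularization parameter on synthetic data.

```python
import numpy as np

def ridge_solve(A, y, lam):
    """Tikhonov / ridge solution x = (A^T A + lam I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
y = A @ x_true + 0.1 * rng.standard_normal(50)

for lam in (0.0, 0.1, 1.0):          # lam = 0 recovers ordinary least squares
    x_hat = ridge_solve(A, y, lam)
    print(lam, np.linalg.norm(x_hat - x_true))
```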

  5. Use of correspondence analysis partial least squares on linear and unimodal data

    DEFF Research Database (Denmark)

    Frisvad, Jens Christian; Norsker, Merete

    1996-01-01

    Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating...... that could only be seen in two-dimensional plots, and also gave less effective predictions. PLS was the best method in the linear case treated, with fewer components and a better prediction than CA-PLS.

  6. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    Science.gov (United States)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while handling high-dimensional, large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the results prove to be satisfactory.

  7. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  8. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared-error (LMMSE) estimator, when the elements of x are statistically white.

  9. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2015-01-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared-error (LMMSE) estimator, when the elements of x are statistically white.

  10. Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition

    Science.gov (United States)

    Hafizhelmi Kamaru Zaman, Fadhlan

    2018-03-01

    Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally applied Local Orthogonal Least Squares (LOLS) model that can be used as initial feature extraction before the application of LLE. By constructing least squares regression under orthogonal constraints we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied on the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method can reduce the time required to compute LLE features while delivering better accuracy than when either LLE or OLS alone is used. Comparison against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNN), also reveals the superiority of the proposed method under the SSPP constraint.

  11. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.

  12. Weighted conditional least-squares estimation

    International Nuclear Information System (INIS)

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, better-known estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models and linear models with nested error structures are considered.
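
    A minimal sketch of the weighted least-squares idea used above, with weights taken as inverses of (assumed known) variances; the model and data are hypothetical, not the paper's branching-process setting.

```python
import numpy as np

def weighted_ls(X, y, var):
    """Weighted least squares with weights 1/var (variances assumed known)."""
    w = 1.0 / var
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])
var = 0.1 + 0.5 * X[:, 1]                     # heteroscedastic noise variance
y = X @ np.array([2.0, 0.7]) + rng.normal(0, np.sqrt(var))
print(weighted_ls(X, y, var))                 # close to [2.0, 0.7]
```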

  13. Spectral/hp least-squares finite element formulation for the Navier-Stokes equations

    International Nuclear Information System (INIS)

    Pontaza, J.P.; Reddy, J.N.

    2003-01-01

    We consider the application of least-squares finite element models combined with spectral/hp methods for the numerical solution of viscous flow problems. The paper presents the formulation, validation, and application of a spectral/hp algorithm to the numerical solution of the Navier-Stokes equations governing two- and three-dimensional stationary incompressible and low-speed compressible flows. The Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity or velocity gradients as additional independent variables and the least-squares method is used to develop the finite element model. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method. Spectral convergence of the L2 least-squares functional and L2 error norms is verified using smooth solutions to the two-dimensional stationary Poisson and incompressible Navier-Stokes equations. Numerical results for flow over a backward-facing step, steady flow past a circular cylinder, three-dimensional lid-driven cavity flow, and compressible buoyant flow inside a square enclosure are presented to demonstrate the predictive capability and robustness of the proposed formulation.

  14. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.

    2002-01-01

    Two-dimensional gel electrophoresis (2-DE) produces large amounts of data, and extraction of relevant information from these data demands a cautious and time-consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear or disappear depending on the experimental conditions. Such biomarkers are found by comparing the relative volumes of individual spots in the individual gels. Multivariate statistical analysis and modelling of 2-DE data for comparison and classification is an alternative approach utilising the combination of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary...

  15. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    Science.gov (United States)

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-03-26

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
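
    The following is a generic iteratively re-weighted least squares (IRLS) sketch with Huber weights, given only to illustrate the robust-fitting idea; the weight function, tuning constant, and data are assumptions and not the paper's fluorimetric procedure.

```python
import numpy as np

def irls(X, y, iters=20, c=1.345):
    """Iteratively re-weighted least squares with Huber weights (illustrative).

    Starts from ordinary least squares and progressively down-weights
    observations with large residuals.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12     # robust scale estimate
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)              # Huber weight function
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = 1.0 + 3.0 * x + 0.05 * rng.standard_normal(40)
y[::10] += 2.0                                        # gross outliers
X = np.column_stack([np.ones_like(x), x])
print(irls(X, y))            # less affected by the outliers than plain OLS
```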

  16. Multisplitting for linear, least squares and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Renaut, R.

    1996-12-31

    In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem, and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.

  17. Preconditioned Iterative Methods for Solving Weighted Linear Least Squares Problems

    Czech Academy of Sciences Publication Activity Database

    Bru, R.; Marín, J.; Mas, J.; Tůma, Miroslav

    2014-01-01

    Roč. 36, č. 4 (2014), A2002-A2022 ISSN 1064-8275 Institutional support: RVO:67985807 Keywords : preconditioned iterative methods * incomplete decompositions * approximate inverses * linear least squares Subject RIV: BA - General Mathematics Impact factor: 1.854, year: 2014

  18. Linearized least-square imaging of internally scattered data

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Turkiyyah, George M.; Zuberi, M. A H; Alkhalifah, Tariq Ali

    2014-01-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-squares inversion of double-scattered data helped delineate that reflector with a minimal acquisition fingerprint.

  19. Positive solution of non-square fully Fuzzy linear system of equation in general form using least square method

    Directory of Open Access Journals (Sweden)

    Reza Ezzati

    2014-08-01

    Full Text Available In this paper, we propose the least square method for computing the positive solution of a non-square fully fuzzy linear system. To this end, we use Kaufmann's arithmetic operations on fuzzy numbers [17]. We first consider the existence of an exact solution using the pseudoinverse; if it does not satisfy the positivity condition, we compute the core of the fuzzy solution vector and then obtain the right and left spreads of the positive fuzzy vector by introducing a constrained least squares problem. Using our proposed method, a non-square fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.

  20. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach

  1. BRGLM, Interactive Linear Regression Analysis by Least Square Fit

    International Nuclear Information System (INIS)

    Ringland, J.T.; Bohrer, R.E.; Sherman, M.E.

    1985-01-01

    1 - Description of program or function: BRGLM is an interactive program written to fit general linear regression models by least squares and to provide a variety of statistical diagnostic information about the fit. Stepwise and all-subsets regression can be carried out also. There are facilities for interactive data management (e.g. setting missing value flags, data transformations) and tools for constructing design matrices for the more commonly-used models such as factorials, cubic splines, and auto-regressions. 2 - Method of solution: The least squares computations are based on the orthogonal (QR) decomposition of the design matrix obtained using the modified Gram-Schmidt algorithm. 3 - Restrictions on the complexity of the problem: The current release of BRGLM allows maxima of 1000 observations, 99 variables, and 3000 words of main memory workspace. For a problem with N observations and P variables, the number of words of main memory storage required is MAX(N*(P+6), N*P+P*P+3*N, 3*P*P+6*N). Any linear model may be fit, although the in-memory workspace will have to be increased for larger problems.
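
    BRGLM itself is FORTRAN-era software; as a small, hedged illustration of the solution method it cites, the following computes a least-squares fit via a modified Gram-Schmidt QR factorization and checks it against a library solver.

```python
import numpy as np

def mgs_qr(A):
    """QR factorization by modified Gram-Schmidt (A assumed full column rank)."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def ls_via_qr(A, b):
    """Least-squares solution from the QR factors: solve R x = Q^T b."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ b)

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
print(np.allclose(ls_via_qr(A, b), np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```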

  2. Linear least squares compartmental-model-independent parameter identification in PET

    International Nuclear Information System (INIS)

    Thie, J.A.; Smith, G.T.; Hubner, K.F.

    1997-01-01

    A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity and plasma integrals -- all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoiding risks of convergence failures or false solutions in iterative least squares, and providing various visualizations of the uptake process by straight-line graphical displays. Multiparameter model-independent analyses of less understood systems are also made possible.

  3. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    Science.gov (United States)

    Sen, Symal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it renders it unacceptable/unfit to be used in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm presented in Matlab reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
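
    A minimal sketch of the underlying ideas (rank as a detector of redundancy, residual norm as a crude measure of contradiction, and the minimum-norm least-squares solution); it is not the authors' Matlab algorithm.

```python
import numpy as np

def minimum_norm_ls(A, b, tol=1e-10):
    """Minimum-norm least-squares solution plus a crude inconsistency measure.

    The numerical rank reveals redundant equations; the residual norm of the
    least-squares solution quantifies how contradictory the system is.
    """
    x, _, rank, _ = np.linalg.lstsq(A, b, rcond=tol)
    residual = np.linalg.norm(A @ x - b)
    return x, rank, residual

# Redundant third row, contradictory fourth row
A = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.0], [1.0, 0.0]])
b = np.array([1.0, 2.0, 2.0, 5.0])
x, rank, res = minimum_norm_ls(A, b)
print(x, rank, res)   # rank 2 for 4 equations; nonzero residual flags contradiction
```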

  4. Status of the Monte Carlo library least-squares (MCLLS) approach for non-linear radiation analyzer problems

    Science.gov (United States)

    Gardner, Robin P.; Xu, Libai

    2009-10-01

    The Center for Engineering Applications of Radioisotopes (CEAR) has been working for over a decade on the Monte Carlo library least-squares (MCLLS) approach for treating non-linear radiation analyzer problems including: (1) prompt gamma-ray neutron activation analysis (PGNAA) for bulk analysis, (2) energy-dispersive X-ray fluorescence (EDXRF) analyzers, and (3) carbon/oxygen tool analysis in oil well logging. This approach essentially consists of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required background libraries. These libraries are then used in the linear library least-squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. Iterations of this are used until the LLS values agree with the composition used to generate the libraries. The current status of the methods (and topics) necessary to implement the MCLLS approach is reported. This includes: (1) the Monte Carlo codes such as CEARXRF, CEARCPG, and CEARCO for forward generation of the necessary elemental library spectra for the LLS calculation for X-ray fluorescence, neutron capture prompt gamma-ray analyzers, and carbon/oxygen tools; (2) the correction of spectral pulse pile-up (PPU) distortion by Monte Carlo simulation with the code CEARIPPU; (3) generation of detector response functions (DRF) for detectors with linear and non-linear responses for Monte Carlo simulation of pulse-height spectra; and (4) the use of the differential operator (DO) technique to make the necessary iterations for non-linear responses practical. In addition to commonly analyzed single spectra, coincidence spectra or even two-dimensional (2-D) coincidence spectra can also be used in the MCLLS approach and may provide more accurate results.
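
    The linear library least-squares (LLS) step described above can be illustrated as fitting a measured spectrum by a nonnegative combination of elemental library spectra; the libraries and data below are hypothetical, and the Monte Carlo library generation is not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 3-element "libraries": columns are unit spectra over 4 energy bins.
libs = np.array([
    [0.9, 0.1, 0.0],
    [0.5, 0.6, 0.1],
    [0.1, 0.8, 0.3],
    [0.0, 0.3, 0.9],
])
true_amounts = np.array([2.0, 1.0, 0.5])
measured = libs @ true_amounts + 0.01 * np.random.default_rng(4).standard_normal(4)

# Linear library least squares with nonnegative elemental amounts
amounts, resid = nnls(libs, measured)
print(amounts)        # close to true_amounts
```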

  5. A least-squares computational "tool kit"

    International Nuclear Information System (INIS)

    Smith, D.L.

    1993-04-01

    The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.
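
    A minimal sketch of the generalized least-squares solution with a full measurement covariance matrix, in the spirit of the formal solutions outlined above; the model and covariance values are illustrative, and the codes LSIOD/GLSIOD are not reproduced.

```python
import numpy as np

def generalized_ls(A, y, C):
    """Generalized least squares: minimize (y - A p)^T C^{-1} (y - A p).

    C is the covariance matrix of the measurements; the parameter covariance
    is (A^T C^{-1} A)^{-1}.
    """
    Cinv = np.linalg.inv(C)
    cov_p = np.linalg.inv(A.T @ Cinv @ A)
    p = cov_p @ (A.T @ Cinv @ y)
    return p, cov_p

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.1, 1.9, 3.2])
C = np.array([[0.04, 0.01, 0.0],       # correlated measurement uncertainties
              [0.01, 0.04, 0.01],
              [0.0,  0.01, 0.09]])
p, cov_p = generalized_ls(A, y, C)
print(p)        # fitted parameters
print(cov_p)    # their covariance matrix
```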

  6. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    Science.gov (United States)

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
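
    For comparison, here is a short null-space-method sketch for the same equality-constrained least-squares problem; it is not the QR-updating algorithm of the article, but the two should agree on well-posed problems.

```python
import numpy as np
from scipy.linalg import null_space, lstsq

def lse_nullspace(A, b, B, d):
    """Equality-constrained least squares: min ||Ax - b|| subject to Bx = d."""
    x_p = lstsq(B, d)[0]             # any particular solution of Bx = d
    Z = null_space(B)                # basis of the null space of B
    z = lstsq(A @ Z, b - A @ x_p)[0] # unconstrained LS in the reduced variables
    return x_p + Z @ z

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 4))
b = rng.standard_normal(20)
B = np.array([[1.0, 1.0, 0.0, 0.0]])   # enforce x0 + x1 = 1
d = np.array([1.0])
x = lse_nullspace(A, b, B, d)
print(x, B @ x)                        # constraint satisfied up to round-off
```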

  7. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model

  8. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

    Full Text Available The study of dynamic equations on time scales is a new area of mathematics. Time scale theory tries to build a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. Therefore, we implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here, there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. The occurrence of such a situation corresponds to the total number of values of vertical deviation between the regression equations and the observation values of the forward and backward jump operators divided by two. We also estimated coefficients for the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We think that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.

  9. Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem

    Directory of Open Access Journals (Sweden)

    Baiyu Wang

    2014-01-01

    Full Text Available This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.

  10. A Weighted Least Squares Approach To Robustify Least Squares Estimates.

    Science.gov (United States)

    Lin, Chowhong; Davenport, Ernest C., Jr.

    This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…

  11. A constrained robust least squares approach for contaminant release history identification

    Science.gov (United States)

    Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.

    2006-04-01

    Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in the previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, it is found that CRLS gave much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.

  12. Spectrum unfolding by the least-squares methods

    International Nuclear Information System (INIS)

    Perey, F.G.

    1977-01-01

    The method of least squares is briefly reviewed, and the conditions under which it may be used are stated. From this analysis, a least-squares approach to the solution of the dosimetry neutron spectrum unfolding problem is introduced. The mathematical solution to this least-squares problem is derived from the general solution. The existence of this solution is analyzed in some detail. A χ²-test is derived for the consistency of the input data which does not require the solution to be obtained first. The fact that the problem is technically nonlinear, but should be treated in general as a linear one, is argued. Therefore, the solution should not be obtained by iteration. Two interpretations are made for the solution of the code STAY'SL, which solves this least-squares problem. The relationship of the solution to this least-squares problem to those obtained currently by other methods of solving the dosimetry neutron spectrum unfolding problem is extensively discussed. It is shown that the least-squares method does not require more input information than would be needed by current methods in order to estimate the uncertainties in their solutions. From this discussion it is concluded that the proposed least-squares method does provide the best complete solution, with uncertainties, to the problem as it is understood now. Finally, some implications of this method are mentioned regarding future work required in order to exploit its potential fully.

  13. Efficient non-linear model reduction via a least-squares Petrov-Galerkin projection and compressive tensor approximations

    KAUST Repository

    Carlberg, Kevin

    2010-10-28

    A Petrov-Galerkin projection method is proposed for reducing the dimension of a discrete non-linear static or dynamic computational model in view of enabling its processing in real time. The right reduced-order basis is chosen to be invariant and is constructed using the Proper Orthogonal Decomposition method. The left reduced-order basis is selected to minimize the two-norm of the residual arising at each Newton iteration. Thus, this basis is iteration-dependent, enables capturing of non-linearities, and leads to the globally convergent Gauss-Newton method. To avoid the significant computational cost of assembling the reduced-order operators, the residual and action of the Jacobian on the right reduced-order basis are each approximated by the product of an invariant, large-scale matrix, and an iteration-dependent, smaller one. The invariant matrix is computed using a data compression procedure that meets proposed consistency requirements. The iteration-dependent matrix is computed to enable the least-squares reconstruction of some entries of the approximated quantities. The results obtained for the solution of a turbulent flow problem and several non-linear structural dynamics problems highlight the merit of the proposed consistency requirements. They also demonstrate the potential of this method to significantly reduce the computational cost associated with high-dimensional non-linear models while retaining their accuracy. © 2010 John Wiley & Sons, Ltd.

  14. Efficient non-linear model reduction via a least-squares Petrov-Galerkin projection and compressive tensor approximations

    KAUST Repository

    Carlberg, Kevin; Bou-Mosleh, Charbel; Farhat, Charbel

    2010-01-01

    A Petrov-Galerkin projection method is proposed for reducing the dimension of a discrete non-linear static or dynamic computational model in view of enabling its processing in real time. The right reduced-order basis is chosen to be invariant and is constructed using the Proper Orthogonal Decomposition method. The left reduced-order basis is selected to minimize the two-norm of the residual arising at each Newton iteration. Thus, this basis is iteration-dependent, enables capturing of non-linearities, and leads to the globally convergent Gauss-Newton method. To avoid the significant computational cost of assembling the reduced-order operators, the residual and action of the Jacobian on the right reduced-order basis are each approximated by the product of an invariant, large-scale matrix, and an iteration-dependent, smaller one. The invariant matrix is computed using a data compression procedure that meets proposed consistency requirements. The iteration-dependent matrix is computed to enable the least-squares reconstruction of some entries of the approximated quantities. The results obtained for the solution of a turbulent flow problem and several non-linear structural dynamics problems highlight the merit of the proposed consistency requirements. They also demonstrate the potential of this method to significantly reduce the computational cost associated with high-dimensional non-linear models while retaining their accuracy. © 2010 John Wiley & Sons, Ltd.

  15. Support-Vector-based Least Squares for learning non-linear dynamics

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2002-01-01

    A function approximator is introduced that is based on least squares support vector machines (LSSVM) and on least squares (LS). The potential indicators for the LS method are chosen as the kernel functions of all the training samples similar to LSSVM. By selecting these as indicator functions the

  16. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    Science.gov (United States)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model of microalgae Botryococcus braunii sp. growth by the least-squares method. The Monod equation is a non-linear equation which can be transformed into a linear form and then solved by the least-squares linear regression method. Meanwhile, the Gauss-Newton method is an alternative approach that solves the non-linear least-squares problem directly, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for microalgae Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter estimates obtained by the non-linear least-squares method are more accurate than those of the linear least-squares method, since the SSE of the non-linear least-squares fit is smaller.
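
    A hedged sketch of the two routes described above: linear least squares on a linearized (Lineweaver-Burk type) form of the Monod equation versus non-linear least squares on the original model. The data are synthetic, not the paper's measurements, and the parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    """Monod growth model: mu = mu_max * S / (Ks + S)."""
    return mu_max * S / (Ks + S)

# Synthetic substrate/growth-rate data (illustrative only)
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
rng = np.random.default_rng(6)
mu = monod(S, 1.2, 3.0) * (1 + 0.03 * rng.standard_normal(S.size))

# Linear least squares on the reciprocal form: 1/mu = 1/mu_max + (Ks/mu_max)*(1/S)
X = np.column_stack([np.ones_like(S), 1.0 / S])
intercept, slope = np.linalg.lstsq(X, 1.0 / mu, rcond=None)[0]
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# Non-linear least squares (Gauss-Newton/Levenberg-Marquardt) on the Monod model itself
(mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[1.0, 1.0])

print(mu_max_lin, Ks_lin)   # linearized estimates
print(mu_max_nl, Ks_nl)     # usually closer to the true (1.2, 3.0)
```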

  17. FC LSEI WNNLS, Least-Square Fitting Algorithms Using B Splines

    International Nuclear Information System (INIS)

    Hanson, R.J.; Haskell, K.H.

    1989-01-01

    1 - Description of problem or function: FC allows a user to fit discrete data, in a weighted least-squares sense, using piece-wise polynomial functions represented by B-splines on a given set of knots. In addition to the least-squares fitting of the data, equality, inequality, and periodic constraints at a discrete, user-specified set of points can be imposed on the fitted curve or its derivatives. The subprograms LSEI and WNNLS solve the linearly-constrained least-squares problem. LSEI solves the class of problem with general inequality constraints, and, if requested, obtains a covariance matrix of the solution parameters. WNNLS solves the class of problem with non-negativity constraints. It is anticipated that most users will find LSEI suitable for their needs; however, users with inequalities that are single bounds on variables may wish to use WNNLS. 2 - Method of solution: The discrete data are fit by a linear combination of piece-wise polynomial curves which leads to a linear least-squares system of algebraic equations. Additional information is expressed as a discrete set of linear inequality and equality constraints on the fitted curve which leads to a linearly-constrained least-squares system of algebraic equations. The solution of this system is the main computational problem solved.
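
    FC/LSEI/WNNLS are FORTRAN routines; as a modern, unconstrained analogue of the B-spline least-squares fit they perform, the sketch below uses SciPy's make_lsq_spline with an assumed knot set and synthetic data.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(7)
x = np.linspace(-3, 3, 50)
y = np.exp(-x**2) + 0.1 * rng.standard_normal(50)

# Interior knots plus boundary knots repeated k+1 times (cubic B-splines)
k = 3
interior = [-1.0, 0.0, 1.0]
t = np.r_[(x[0],) * (k + 1), interior, (x[-1],) * (k + 1)]

spline = make_lsq_spline(x, y, t, k)        # weighted fits are possible via w=
print(spline(np.array([-2.0, 0.0, 2.0])))   # evaluate the fitted curve
```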

  18. New approach to breast cancer CAD using partial least squares and kernel-partial least squares

    Science.gov (United States)

    Land, Walker H., Jr.; Heine, John; Embrechts, Mark; Smith, Tom; Choma, Robert; Wong, Lut

    2005-04-01

    Breast cancer is second only to lung cancer as a tumor-related cause of death in women. Currently, the method of choice for the early detection of breast cancer is mammography. While sensitive to the detection of breast cancer, its positive predictive value (PPV) is low, resulting in biopsies that are only 15-34% likely to reveal malignancy. This paper explores the use of two novel approaches called Partial Least Squares (PLS) and Kernel-PLS (K-PLS) for the diagnosis of breast cancer. The approach is based on optimization of the partial least squares (PLS) algorithm for linear regression and the K-PLS algorithm for non-linear regression. Preliminary results show that both the PLS and K-PLS paradigms achieved comparable results with three separate support vector learning machines (SVLMs), where these SVLMs were known to have been trained to a global minimum. That is, the average performance of the three separate SVLMs was Az = 0.9167927, with an average partial Az (Az90) = 0.5684283. These results compare favorably with the K-PLS paradigm, which obtained an Az = 0.907 and partial Az = 0.6123. The PLS paradigm provided comparable results. Secondly, both the K-PLS and PLS paradigms outperformed the ANN in that the Az index improved by about 14% (Az ~ 0.907 compared to the ANN Az of ~ 0.8). The PRESS R-squared values for the PLS and K-PLS machine learning algorithms were 0.89 and 0.9, respectively, which is in good agreement with the other MOP values.
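
    A minimal PLS regression sketch using scikit-learn on synthetic data; the number of latent components is an assumed tuning choice, and this is not the paper's CAD pipeline or its kernel variant.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)
X = rng.standard_normal((100, 20))                       # 20 correlated-looking features
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

pls = PLSRegression(n_components=3)   # number of latent components is a tuning choice
pls.fit(X, y)
print(pls.score(X, y))                # R^2 on the training data
```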

  19. Spectral mimetic least-squares method for div-curl systems

    NARCIS (Netherlands)

    Gerritsma, Marc; Palha, Artur; Lirkov, I.; Margenov, S.

    2018-01-01

    In this paper the spectral mimetic least-squares method is applied to a two-dimensional div-curl system. A test problem is solved on orthogonal and curvilinear meshes and both h- and p-convergence results are presented. The resulting solutions will be pointwise divergence-free for these test

  20. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.

  1. Non-linear least-squares fitting for PIXE spectra

    International Nuclear Information System (INIS)

    Benamar, M.A.; Tchantchane, A.; Benouali, N.; Azbouche, A.; Tobbeche, S.

    1992-10-01

    An interactive computer program for the analysis of PIXE spectra is described. The fitting procedure consists of computing a function which approximates the experimental data. A non-linear least-squares fit is used to determine the parameters of the fit. The program takes into account the low-energy tail and the escape peaks.

  2. Least Squares Data Fitting with Applications

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela

    As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data. In a number of applications, the accuracy and efficiency of the least squares fit is central, and Per Christian Hansen, Víctor Pereyra, and Godela Scherer survey modern computational methods and illustrate them in fields ranging from engineering and environmental sciences to geophysics. Anyone working with problems of linear and nonlinear least squares fitting will find this book invaluable as a hands-on guide, with accessible text and carefully explained problems. Included are an overview of computational methods together with their properties and advantages, and topics from statistical regression analysis...

  3. A complex linear least-squares method to derive relative and absolute orientations of seismic sensors

    OpenAIRE

    F. Grigoli; Simone Cesca; Torsten Dahm; L. Krieger

    2012-01-01

    Determining the relative orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation for several acquisition setups, including linear arrays of geophones deployed in borehole installations or ocean bottom seismometers deployed at the seafloor. To solve this problem we propose a new inversion method based on a complex linear algebra approach. Relative orientation angles are retrieved by minimizing, in a least-squares sense, the l...

  4. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-12-19

    This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. Jointly, the proposed approach enjoys the lowest run-time and offers the highest level of robustness amongst all the tested methods.

  5. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-11-04

    Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent across all the angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.

  6. Total least squares for anomalous change detection

    Science.gov (United States)

    Theiler, James; Matsekh, Anna M.

    2010-04-01

    A family of subtraction-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and special cases of it are equivalent to canonical correlation analysis and optimized covariance equalization. What whitened TLSQ offers is a generalization of these algorithms with the potential for better performance.
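
    A small sketch of the plain (unwhitened) total least-squares solution via the SVD of the stacked matrix, contrasted with ordinary least squares; the data and noise levels are illustrative, and the change-detection machinery of the paper is not reproduced.

```python
import numpy as np

def tls(A, b):
    """Total least squares solution of A x ~ b via the SVD of [A, b].

    Unlike ordinary least squares, errors are allowed in A as well as in b.
    """
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(9)
x_true = np.array([1.0, -0.5])
A_clean = rng.standard_normal((200, 2))
b = A_clean @ x_true + 0.05 * rng.standard_normal(200)
A_noisy = A_clean + 0.05 * rng.standard_normal(A_clean.shape)   # errors in A too

print(np.linalg.lstsq(A_noisy, b, rcond=None)[0])   # ordinary LS, attenuated toward zero
print(tls(A_noisy, b))                               # TLS, typically closer to x_true
```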

  7. Pseudoinverse preconditioners and iterative methods for large dense linear least-squares problems

    Directory of Open Access Journals (Sweden)

    Oskar Cahueñas

    2013-05-01

    Full Text Available We address the issue of approximating the pseudoinverse of the coefficient matrix for dynamically building preconditioning strategies for the numerical solution of large dense linear least-squares problems. The new preconditioning strategies are embedded into simple and well-known iterative schemes that avoid the use of the, usually ill-conditioned, normal equations. We analyze a scheme to approximate the pseudoinverse, based on the Schulz iterative method, and also different iterative schemes, based on extensions of Richardson's method and the conjugate gradient method, that are suitable for preconditioning strategies. We present preliminary numerical results to illustrate the advantages of the proposed schemes.
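
    A minimal Newton-Schulz (Schulz) iteration for approximating the pseudoinverse, of the kind analyzed above; the initialization and iteration count are standard textbook choices, not necessarily those of the authors, and full column rank is assumed.

```python
import numpy as np

def schulz_pinv(A, iters=50):
    """Newton-Schulz iteration X_{k+1} = X_k (2 I - A X_k) for the pseudoinverse.

    Initialized with X_0 = A^T / sigma_max^2 so that the iteration converges
    (A is assumed to have full column rank).
    """
    m, _ = A.shape
    sigma_max = np.linalg.norm(A, 2)
    X = A.T / sigma_max**2
    I = np.eye(m)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

rng = np.random.default_rng(10)
A = rng.standard_normal((40, 8))
b = rng.standard_normal(40)
X = schulz_pinv(A)
print(np.allclose(X, np.linalg.pinv(A), atol=1e-8))   # True
print(X @ b)                                           # least-squares solution of A x = b
```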

  8. Linear negative magnetoresistance in two-dimensional Lorentz gases

    Science.gov (United States)

    Schluck, J.; Hund, M.; Heckenthaler, T.; Heinzel, T.; Siboni, N. H.; Horbach, J.; Pierz, K.; Schumacher, H. W.; Kazazis, D.; Gennser, U.; Mailly, D.

    2018-03-01

    Two-dimensional Lorentz gases formed by obstacles in the shape of circles, squares, and retroreflectors are reported to show a pronounced linear negative magnetoresistance at small magnetic fields. For circular obstacles at low number densities, our results agree with the predictions of a model based on classical retroreflection. In extension to the existing theoretical models, we find that the normalized magnetoresistance slope depends on the obstacle shape and increases as the number density of the obstacles is increased. The peaks are furthermore suppressed by in-plane magnetic fields as well as by elevated temperatures. These results suggest that classical retroreflection can form a significant contribution to the magnetoresistivity of two-dimensional Lorentz gases, while contributions from weak localization cannot be excluded, in particular for large obstacle densities.

  9. Non-linear HVAC computations using least square support vector machines

    International Nuclear Information System (INIS)

    Kumar, Mahendra; Kar, I.N.

    2009-01-01

    This paper aims to demonstrate application of least square support vector machines (LS-SVM) to model two complex heating, ventilating and air-conditioning (HVAC) relationships. The two applications considered are the estimation of the predicted mean vote (PMV) for thermal comfort and the generation of psychrometric chart. LS-SVM has the potential for quick, exact representations and also possesses a structure that facilitates hardware implementation. The results show very good agreement between function values computed from conventional model and LS-SVM model in real time. The robustness of LS-SVM models against input noises has also been analyzed.

  10. Making the most out of least-squares migration

    KAUST Repository

    Huang, Yunsong; Dutta, Gaurav; Dai, Wei; Wang, Xin; Schuster, Gerard T.; Yu, Jianhua

    2014-01-01

    ) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly

  11. A least-squares computational "tool kit". Nuclear data and measurements series

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.

    1993-04-01

    The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.

  12. Determination of calibration equations by means of the generalized least squares method

    International Nuclear Information System (INIS)

    Zijp, W.L.

    1984-12-01

    For the determination of two-dimensional calibration curves (e.g. in tank calibration procedures) or of three-dimensional calibration equations (e.g. for the calibration of NDA equipment for enrichment measurements) one performs measurements under well-chosen conditions, where all observables of interest (including the values of the standard material) are subject to measurement uncertainties. Moreover, correlations between several measurements may occur. This document describes the mathematical-statistical approach to determine the values of the model parameters and their covariance matrix that best fit the mathematical model for the calibration equation. The formulae are based on the method of generalized least squares, where the term generalized implies that non-linear equations in the unknown parameters and also covariance matrices of the measurement data of the calibration can be taken into account. In the general case an iteration procedure is required. No iteration is required when the model is linear in the parameters and the covariance matrices for the measurements of co-ordinates of the calibration points are proportional to each other.
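
    As a toy illustration of the generalized least-squares step described above, the sketch below fits a straight-line calibration curve to measurements with a full (correlated) covariance matrix; the model, data, and covariance values are invented for the example and are not taken from the report.

```python
import numpy as np

# Synthetic calibration data: y = a + b*x observed with correlated uncertainties.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])
Sigma = 0.04 * np.eye(5) + 0.01 * np.ones((5, 5))   # covariance matrix of the y measurements

A = np.column_stack([np.ones_like(x), x])   # design matrix of the linear calibration model
W = np.linalg.inv(Sigma)                    # generalized least-squares weight matrix
N = A.T @ W @ A                             # normal matrix
theta = np.linalg.solve(N, A.T @ W @ y)     # fitted parameters (intercept, slope)
cov_theta = np.linalg.inv(N)                # covariance matrix of the fitted parameters
print(theta)
print(cov_theta)
```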

  13. EXPALS, Least Square Fit of Linear Combination of Exponential Decay Function

    International Nuclear Information System (INIS)

    Douglas Gardner, C.

    1980-01-01

    1 - Description of problem or function: This program fits by least squares a function which is a linear combination of real exponential decay functions. The function is y(k) = summation over j of a(j) * exp(-lambda(j) * k). Values of the independent variable (k) and the dependent variable y(k) are specified as input data. Weights may be specified as input information or set by the program (w(k) = 1/y(k)). 2 - Method of solution: The Prony-Householder iteration method is used. For unequally-spaced data, a number of interpolation options are provided. This revision includes an option to call a differential correction subroutine REFINE to improve the approximation to unequally-spaced data when equal-interval interpolation is faulty. If convergence is achieved, the probable errors in the computed parameters are calculated also. 3 - Restrictions on the complexity of the problem: Generally, it is desirable to have at least 10n observations where n equals the number of terms and to input k+n significant figures if k significant figures are expected
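
    EXPALS itself uses the Prony-Householder iteration; as a rough stand-in that fits the same model y(k) = sum_j a(j)*exp(-lambda(j)*k) with the program's default weighting w(k) = 1/y(k), one can use a generic nonlinear least-squares routine. The two-term model and all numbers below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_term_decay(k, a1, lam1, a2, lam2):
    return a1 * np.exp(-lam1 * k) + a2 * np.exp(-lam2 * k)

k = np.arange(0, 40, dtype=float)
rng = np.random.default_rng(1)
y = two_term_decay(k, 5.0, 0.30, 2.0, 0.05) + rng.normal(0.0, 0.02, k.size)

# Weights w(k) = 1/y(k) correspond to standard deviations sigma(k) = sqrt(y(k)) in curve_fit.
sigma = np.sqrt(np.abs(y))
popt, pcov = curve_fit(two_term_decay, k, y, p0=[4.0, 0.2, 1.0, 0.1], sigma=sigma)
print(popt)                      # fitted (a1, lambda1, a2, lambda2)
print(np.sqrt(np.diag(pcov)))    # rough uncertainties of the fitted parameters
```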

  14. Making the most out of least-squares migration

    KAUST Repository

    Huang, Yunsong

    2014-09-01

    Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution. Given a sufficiently accurate migration velocity model, LSM can mitigate many of the above problems and can produce more resolved migration images, sometimes with more than twice the spatial resolution of standard migration. However, LSM faces two challenges: The computational cost can be an order of magnitude higher than that of standard migration, and the resulting image quality can fail to improve for migration velocity errors of about 5% or more. It is possible to obtain the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.

  15. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers that use standard regularization approaches.

  16. First-order system least squares for the pure traction problem in planar linear elasticity

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

    1996-12-31

    This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L² norms to define the FOSLS functional, is shown under certain H² regularity assumptions to admit optimal H¹-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H⁻¹ norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L² norm and for displacement in an H¹ norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

  17. Making the most out of the least (squares migration)

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution. If the migration velocity model is sufficiently accurate, then LSM can mitigate many of the above problems and lead to a more resolved migration image, sometimes with twice the spatial resolution. However, there are two problems with LSM: the cost can be an order of magnitude more than standard migration and the quality of the LSM image is no better than the standard image for velocity errors of 5% or more. We now show how to get the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.

  18. Least squares orthogonal polynomial approximation in several independent variables

    International Nuclear Information System (INIS)

    Caprari, R.S.

    1992-06-01

    This paper begins with an exposition of a systematic technique for generating orthonormal polynomials in two independent variables by application of the Gram-Schmidt orthogonalization procedure of linear algebra. It is then demonstrated how a linear least squares approximation for experimental data or an arbitrary function can be generated from these polynomials. The least squares coefficients are computed without recourse to matrix arithmetic, which ensures both numerical stability and simplicity of implementation as a self-contained numerical algorithm. The Gram-Schmidt procedure is then utilised to generate a complete set of orthogonal polynomials of fourth degree. A theory for the transformation of the polynomial representation from an arbitrary basis into the familiar sum of products form is presented, together with a specific implementation for fourth degree polynomials. Finally, the computational integrity of this algorithm is verified by reconstructing arbitrary fourth degree polynomials from their values at randomly chosen points in their domain. 13 refs., 1 tab
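
    A compact sketch of the core idea, under the assumption of a discrete inner product taken over the data points: Gram-Schmidt orthonormalizes a bivariate monomial basis, after which the least-squares coefficients are plain inner products, with no matrix solve. The degree-2 basis and synthetic data are illustrative only (the paper works to fourth degree).

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
f = 1.0 + 2.0 * x - 3.0 * x * y + 0.5 * y ** 2            # function values to approximate

# Bivariate monomial basis up to total degree 2, evaluated at the data points.
basis = [np.ones_like(x), x, y, x ** 2, x * y, y ** 2]

# Gram-Schmidt orthonormalization with respect to the discrete inner product sum_k f_k * g_k.
ortho = []
for b in basis:
    p = b.copy()
    for q in ortho:
        p -= (p @ q) * q
    ortho.append(p / np.linalg.norm(p))

coeffs = [f @ q for q in ortho]                            # least-squares coefficients, no matrix solve
approx = sum(c * q for c, q in zip(coeffs, ortho))
print(np.max(np.abs(approx - f)))                          # near machine precision: degree-2 data reproduced
```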

  19. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei; Schuster, Gerard T.

    2012-01-01

    convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce computation cost, linear phase shift encoding is applied to hundreds of shot gathers to produce dozens of planes waves. A

  20. Space-time coupled spectral/hp least-squares finite element formulation for the incompressible Navier-Stokes equations

    International Nuclear Information System (INIS)

    Pontaza, J.P.; Reddy, J.N.

    2004-01-01

    We consider least-squares finite element models for the numerical solution of the non-stationary Navier-Stokes equations governing viscous incompressible fluid flows. The paper presents a formulation where the effects of space and time are coupled, resulting in a true space-time least-squares minimization procedure, as opposed to a space-time decoupled formulation where a least-squares minimization procedure is performed in space at each time step. The formulation is first presented for the linear advection-diffusion equation and then extended to the Navier-Stokes equations. The formulation has no time step stability restrictions and is spectrally accurate in both space and time. To allow the use of practical C⁰ element expansions in the resulting finite element model, the Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity as an additional independent variable and the least-squares method is used to develop the finite element model of the governing equations. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method in matrix-free form. Spectral convergence of the L² least-squares functional and L² error norms in space-time is verified using a smooth solution to the two-dimensional non-stationary incompressible Navier-Stokes equations. Numerical results are presented for impulsively started lid-driven cavity flow, oscillatory lid-driven cavity flow, transient flow over a backward-facing step, and flow around a circular cylinder; the results demonstrate the predictive capability and robustness of the proposed formulation. Even though the space-time coupled formulation is emphasized, we also present the formulation and numerical results for least-squares

  1. Two-dimensional Fast ESPRIT Algorithm for Linear Array SAR Imaging

    Directory of Open Access Journals (Sweden)

    Zhao Yi-chao

    2015-10-01

    Full Text Available The linear array Synthetic Aperture Radar (SAR) system is a popular research tool, because it can realize three-dimensional imaging. However, owing to limitations of the aircraft platform and actual conditions, resolution improvement is difficult in the cross-track and along-track directions. In this study, a two-dimensional fast Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT) algorithm for linear array SAR imaging is proposed to overcome these limitations. This approach combines the Gerschgorin disks method and the ESPRIT algorithm to estimate the positions of scatterers in the cross-track and along-track directions. Moreover, the reflectivity of scatterers is obtained by a modified pairing method based on “region growing”, replacing the least-squares method. The simulation results demonstrate the applicability of the algorithm with high resolution, quick calculation, and good real-time response.

  2. Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation

    Directory of Open Access Journals (Sweden)

    Ahmad Bilfarsah

    2005-04-01

    Full Text Available The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate non-linearity and multicollinearity among the predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, in particular tuna fish production.

  3. 3D plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin; Dai, Wei; Huang, Yunsong; Schuster, Gerard T.

    2014-01-01

    A three dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitation of current 3D marine acquisition

  4. Making the most out of the least (squares migration)

    KAUST Repository

    Dutta, Gaurav; Huang, Yunsong; Dai, Wei; Wang, Xin; Schuster, Gerard T.

    2014-01-01

    ) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution

  5. Regularization by truncated total least squares

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Fierro, R.D; Golub, G.H

    1997-01-01

    The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use......

  6. Solution of a Complex Least Squares Problem with Constrained Phase.

    Science.gov (United States)

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
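
    One simple way to see the problem, though not the direct method of the paper: write the constrained solution as x = exp(i*phi)*u with u real, solve a real least-squares problem for each candidate phase phi, and keep the phase with the smallest residual. All sizes and the noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 6)) + 1j * rng.standard_normal((40, 6))
x_true = np.exp(1j * 0.7) * rng.uniform(0.5, 2.0, 6)        # elements share the phase 0.7 rad
b = A @ x_true + 0.01 * (rng.standard_normal(40) + 1j * rng.standard_normal(40))

best_res, best_phi, best_u = np.inf, None, None
for phi in np.linspace(0.0, np.pi, 721):                    # phase is only defined modulo pi for real u
    Ap = A * np.exp(1j * phi)
    M = np.vstack([Ap.real, Ap.imag])                       # stack real/imaginary parts so u stays real
    rhs = np.concatenate([b.real, b.imag])
    u = np.linalg.lstsq(M, rhs, rcond=None)[0]
    res = np.linalg.norm(M @ u - rhs)
    if res < best_res:
        best_res, best_phi, best_u = res, phi, u

print(best_phi)     # recovered common phase (close to 0.7)
print(best_u)       # recovered real amplitudes
```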

  7. A high order compact least-squares reconstructed discontinuous Galerkin method for the steady-state compressible flows on hybrid grids

    Science.gov (United States)

    Cheng, Jian; Zhang, Fan; Liu, Tiegang

    2018-06-01

    In this paper, a class of new high order reconstructed DG (rDG) methods based on the compact least-squares (CLS) reconstruction [23,24] is developed for simulating two-dimensional steady-state compressible flows on hybrid grids. The proposed method combines the advantages of the DG discretization with the flexibility of the compact least-squares reconstruction, which exhibits superior potential in enhancing the accuracy and reducing the computational cost compared to the underlying DG methods with the same number of degrees of freedom. To be specific, a third-order compact least-squares rDG(p1p2) method and a fourth-order compact least-squares rDG(p2p3) method are developed and investigated in this work. In this compact least-squares rDG method, the low order degrees of freedom are evolved through the underlying DG(p1) method and DG(p2) method, respectively, while the high order degrees of freedom are reconstructed through the compact least-squares reconstruction, in which the constitutive relations are built by requiring the reconstructed polynomial and its spatial derivatives on the target cell to conserve the cell averages and the corresponding spatial derivatives on the face-neighboring cells. The large sparse linear system resulting from the compact least-squares reconstruction can be solved relatively efficiently when coupled with the temporal discretization in the steady-state simulations. A number of test cases are presented to assess the performance of the high order compact least-squares rDG methods, which demonstrates their potential to be an alternative approach for the high order numerical simulations of steady-state compressible flows.

  8. Performance improvement of shunt active power filter based on non-linear least-square approach

    DEFF Research Database (Denmark)

    Terriche, Yacine

    2018-01-01

    The synchronous reference frame (SRF) approach is widely used for generating the RCC due to its simplicity and computation efficiency. However, the SRF approach needs precise information of the voltage phase, which becomes a challenge under adverse grid conditions. A typical solution to answer this need...... This paper proposes an improved open-loop strategy which is unconditionally stable and flexible. The proposed method, which is based on a non-linear least-squares (NLS) approach, can extract the fundamental voltage and estimate its phase within only half a cycle, even in the presence of odd harmonics and dc offset......

  9. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-03-01

    This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic

  10. The possibilities of least-squares migration of internally scattered seismic energy

    KAUST Repository

    Aldawood, Ali

    2015-05-26

    Approximate images of the earth’s subsurface structures are usually obtained by migrating surface seismic data. Least-squares migration, under the single-scattering assumption, is used as an iterative linearized inversion scheme to suppress migration artifacts, deconvolve the source signature, mitigate the acquisition fingerprint, and enhance the spatial resolution of migrated images. The problem with least-squares migration of primaries, however, is that it may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults or salt flanks. To alleviate this problem, we adopted a linearized inversion framework to migrate internally scattered energy. We apply the least-squares migration of first-order internal multiples to image subsurface vertical fault planes. Tests on synthetic data demonstrated the ability of the proposed method to resolve vertical fault planes, which are poorly illuminated by the least-squares migration of primaries only. The proposed scheme is robust in the presence of white Gaussian observational noise and in the case of imaging the fault planes using inaccurate migration velocities. Our results suggested that the proposed least-squares imaging, under the double-scattering assumption, still retrieved the vertical fault planes when imaging the scattered data despite a slight defocusing of these events due to the presence of noise or velocity errors.

  11. The possibilities of least-squares migration of internally scattered seismic energy

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad; Turkiyyah, George; Alkhalifah, Tariq Ali

    2015-01-01

    Approximate images of the earth’s subsurface structures are usually obtained by migrating surface seismic data. Least-squares migration, under the single-scattering assumption, is used as an iterative linearized inversion scheme to suppress migration artifacts, deconvolve the source signature, mitigate the acquisition fingerprint, and enhance the spatial resolution of migrated images. The problem with least-squares migration of primaries, however, is that it may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults or salt flanks. To alleviate this problem, we adopted a linearized inversion framework to migrate internally scattered energy. We apply the least-squares migration of first-order internal multiples to image subsurface vertical fault planes. Tests on synthetic data demonstrated the ability of the proposed method to resolve vertical fault planes, which are poorly illuminated by the least-squares migration of primaries only. The proposed scheme is robust in the presence of white Gaussian observational noise and in the case of imaging the fault planes using inaccurate migration velocities. Our results suggested that the proposed least-squares imaging, under the double-scattering assumption, still retrieved the vertical fault planes when imaging the scattered data despite a slight defocusing of these events due to the presence of noise or velocity errors.

  12. Constrained Balancing of Two Industrial Rotor Systems: Least Squares and Min-Max Approaches

    Directory of Open Access Journals (Sweden)

    Bin Huang

    2009-01-01

    Full Text Available Rotor vibrations caused by rotor mass unbalance distributions are a major source of maintenance problems in high-speed rotating machinery. Minimizing this vibration by balancing under practical constraints is quite important to industry. This paper considers the balancing of two large industrial rotor systems by constrained least squares and min-max balancing methods. In current industrial practice, the weighted least squares method has been utilized to minimize rotor vibrations for many years. One of its disadvantages is that it cannot guarantee that the maximum value of vibration is below a specified value. To achieve better balancing performance, the min-max balancing method utilizing Second Order Cone Programming (SOCP) with the maximum correction weight constraint, the maximum residual response constraint, as well as the weight splitting constraint has been utilized for effective balancing. The min-max balancing method can guarantee a maximum residual vibration value below an optimum value and is shown by simulation to significantly outperform the weighted least squares method.

  13. Deformation analysis with Total Least Squares

    Directory of Open Access Journals (Sweden)

    M. Acar

    2006-01-01

    Full Text Available Deformation analysis is one of the main research fields in geodesy. The deformation analysis process comprises measurement and analysis phases. Measurements can be collected using several techniques. The output of the evaluation of the measurements is mainly point positions. In the deformation analysis phase, the coordinate changes in the point positions are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation, where the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally a Least Squares (LS) technique is used for the transformation procedure. Another approach that can be introduced as an alternative methodology is Total Least Squares (TLS), which is a relatively new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out individually by Least Squares (LS) and Total Least Squares (TLS), respectively. The data used in this study were collected by the GPS technique in a landslide area near Istanbul. The results obtained from these two approaches have been compared.
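
    For reference, the classical total least-squares solution of an overdetermined system A·x ≈ b with errors in both A and b can be written down via the SVD; the sketch below uses generic synthetic data rather than the paper's Helmert transformation set-up.

```python
import numpy as np

rng = np.random.default_rng(4)
A_exact = rng.standard_normal((100, 3))
x_exact = np.array([1.0, -2.0, 0.5])
A = A_exact + 0.01 * rng.standard_normal(A_exact.shape)     # errors in the coefficient matrix
b = A_exact @ x_exact + 0.01 * rng.standard_normal(100)     # errors in the observations

C = np.column_stack([A, b])                  # augmented matrix [A | b]
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                                   # right singular vector of the smallest singular value
x_tls = -v[:-1] / v[-1]                      # total least-squares solution
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary least squares, for comparison
print(x_tls)
print(x_ls)
```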

  14. Resolution of the neutron transport equation by a three-dimensional least square method

    International Nuclear Information System (INIS)

    Varin, Elisabeth

    2001-01-01

    The knowledge of the space and time distribution of neutrons with a certain energy or speed allows the exploitation and control of a nuclear reactor and the assessment of the irradiation dose around an irradiated nuclear fuel storage site. The neutron density is described by a transport equation. The objective of this research thesis is to develop software for the resolution of this stationary equation in a three-dimensional Cartesian domain by means of a deterministic method. After a presentation of the transport equation, the author gives an overview of the different deterministic resolution approaches, identifies their benefits and drawbacks, and discusses the choice of the Ressel method. The least-squares method is precisely described and then applied. Numerical benchmarks are reported for validation purposes.

  15. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  16. Global Search Strategies for Solving Multilinear Least-Squares Problems

    Directory of Open Access Journals (Sweden)

    Mats Andersson

    2012-04-01

    Full Text Available The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS problem is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by the results of numerical experiments performed for some problems related to the design of filter networks.

  17. Direct integral linear least square regression method for kinetic evaluation of hepatobiliary scintigraphy

    International Nuclear Information System (INIS)

    Shuke, Noriyuki

    1991-01-01

    In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters such as the hepatic extraction or excretion rate, has been performed for the quantitative evaluation of liver function. In this analysis, unknown model parameters are usually determined using the nonlinear least-squares regression method (NLS method), where iterative calculation and initial estimates for the unknown parameters are required. As a simple alternative to the NLS method, the direct integral linear least-squares regression method (DILS method), which can determine model parameters by a simple calculation without an initial estimate, is proposed, and its applicability to the analysis of hepatobiliary scintigraphy is tested. In order to see whether the DILS method could determine model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a one-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values which were used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the best weight for minimizing the error. When using this weight, the DILS method gave parameter values close to those obtained by the NLS method, and both sets of parameter values were very close to the prefixed values. With appropriate weighting, the DILS method can provide reliable parameter estimates which are relatively insensitive to data noise. In conclusion, the DILS method can be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)

  18. Partial Least Squares tutorial for analyzing neuroimaging data

    Directory of Open Access Journals (Sweden)

    Patricia Van Roon

    2014-09-01

    Full Text Available Partial least squares (PLS) has become a respected and meaningful soft modeling analysis technique that can be applied to very large datasets where the number of factors or variables is greater than the number of observations. Current biometric studies (e.g., eye movements, EKG, body movements, EEG) are often of this nature. PLS eliminates the multiple linear regression issues of over-fitting data by finding a few underlying or latent variables (factors) that account for most of the variation in the data. In real-world applications, where linear models do not always apply, PLS can model the non-linear relationship well. This tutorial introduces two PLS methods, PLS Correlation (PLSC) and PLS Regression (PLSR), and their applications in data analysis, which are illustrated with neuroimaging examples. Both methods provide straightforward and comprehensible techniques for determining and modeling relationships between two multivariate data blocks by finding latent variables that best describe the relationships. In the examples, the PLSC will analyze the relationship between neuroimaging data such as Event-Related Potential (ERP) amplitude averages from different locations on the scalp with their corresponding behavioural data. Using the same data, the PLSR will be used to model the relationship between neuroimaging and behavioural data. This model will be able to predict future behaviour solely from available neuroimaging data. To find latent variables, Singular Value Decomposition (SVD) for PLSC and Non-linear Iterative PArtial Least Squares (NIPALS) for PLSR are implemented in this tutorial. SVD decomposes the large data block into three manageable matrices containing a diagonal set of singular values, as well as left and right singular vectors. For PLSR, NIPALS algorithms are used because they provide a more precise estimation of the latent variables. Mathematica notebooks are provided for each PLS method with clearly labeled sections and subsections. The
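
    As a quick companion to the tutorial's NIPALS-based PLSR (the tutorial itself provides Mathematica notebooks), the same kind of two-block regression can be tried in a few lines with scikit-learn on synthetic "brain vs. behaviour" blocks; the sizes and noise level are arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 200))                  # e.g., ERP amplitudes (60 subjects x 200 features)
latent = X[:, :3] @ rng.standard_normal((3, 2))
Y = latent + 0.1 * rng.standard_normal((60, 2))     # two behavioural scores driven by latent factors

pls = PLSRegression(n_components=3)                 # NIPALS-based PLS regression
pls.fit(X, Y)
print(pls.score(X, Y))                              # R^2 of the fitted two-block model
print(pls.predict(X[:5]))                           # predicted behaviour for the first five "subjects"
```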

  19. A Generalized Autocovariance Least-Squares Method for Kalman Filter Tuning

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2008-01-01

    This paper discusses a method for estimating noise covariances from process data. In linear stochastic state-space representations the true noise covariances are generally unknown in practical applications. Using estimated covariances a Kalman filter can be tuned in order to increase the accuracy...... of the state estimates. There is a linear relationship between covariances and autocovariance. Therefore, the covariance estimation problem can be stated as a least-squares problem, which can be solved as a symmetric semidefinite least-squares problem. This problem is convex and can be solved efficiently...... by interior-point methods. A numerical algorithm for solving the symmetric semidefinite least-squares problem is able to handle systems with mutually correlated process noise and measurement noise. (c) 2007 Elsevier Ltd. All rights reserved....

  20. Boosted regression trees, multivariate adaptive regression splines and their two-step combinations with multiple linear regression or partial least squares to predict blood-brain barrier passage: a case study.

    Science.gov (United States)

    Deconinck, E; Zhang, M H; Petitet, F; Dubus, E; Ijjaali, I; Coomans, D; Vander Heyden, Y

    2008-02-18

    The use of some unconventional non-linear modeling techniques, i.e. classification and regression trees and multivariate adaptive regression splines-based methods, was explored to model the blood-brain barrier (BBB) passage of drugs and drug-like molecules. The data set contains BBB passage values for 299 structurally and pharmacologically diverse drugs, originating from a structured knowledge-based database. Models were built using boosted regression trees (BRT) and multivariate adaptive regression splines (MARS), as well as their respective combinations with stepwise multiple linear regression (MLR) and partial least squares (PLS) regression in two-step approaches. The best models were obtained using combinations of MARS with either stepwise MLR or PLS. It could be concluded that the use of combinations of a linear with a non-linear modeling technique results in some improved properties compared to the individual linear and non-linear models and that, when the use of such a combination is appropriate, combinations using MARS as the non-linear technique should be preferred over those with BRT, due to some serious drawbacks of the BRT approaches. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. The least weighted squares I. The asymptotic linearity of normal equations

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2002-01-01

    Roč. 9, č. 15 (2002), s. 31-58 ISSN 1212-074X R&D Projects: GA AV ČR KSK1019101 Grant - others:GA UK(CZ) 255/2002/A EK /FSV Institutional research plan: CEZ:AV0Z1075907 Keywords : the least weighted squares * robust regression * asymptotic normality and representation Subject RIV: BA - General Mathematics

  2. Least Squares Problems with Absolute Quadratic Constraints

    Directory of Open Access Journals (Sweden)

    R. Schöne

    2012-01-01

    Full Text Available This paper analyzes linear least squares problems with absolute quadratic constraints. We develop a generalized theory following Bookstein's conic-fitting and Fitzgibbon's direct ellipse-specific fitting. Under simple preconditions, it can be shown that a minimum always exists and can be determined by a generalized eigenvalue problem. This problem is numerically reduced to an eigenvalue problem by multiplications of Givens' rotations. Finally, four applications of this approach are presented.
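
    As a concrete instance of such a constrained problem, the sketch below shows a Fitzgibbon-style direct ellipse fit, where the algebraic distance is minimized subject to the absolute quadratic constraint 4AC - B² = 1 and the minimizer comes from a generalized eigenvalue problem. This is a generic textbook construction used for illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
x = 3.0 * np.cos(t) + 1.0 + 0.02 * rng.standard_normal(200)   # noisy samples of an ellipse
y = 1.5 * np.sin(t) - 2.0 + 0.02 * rng.standard_normal(200)

D = np.column_stack([x ** 2, x * y, y ** 2, x, y, np.ones_like(x)])   # design matrix
S = D.T @ D                                                           # scatter matrix
C = np.zeros((6, 6))                                                  # constraint matrix of 4AC - B^2 = 1
C[0, 2] = C[2, 0] = 2.0
C[1, 1] = -1.0

# Generalized eigenvalue problem S a = lambda C a, solved here as eig(S^{-1} C);
# the single positive eigenvalue corresponds to the ellipse solution.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S, C))
k = np.argmax(eigvals.real)
a = eigvecs[:, k].real
print(a / np.linalg.norm(a))        # conic coefficients (A, B, C, D, E, F), defined up to scale
```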

  3. Feasibility study on the least square method for fitting non-Gaussian noise data

    Science.gov (United States)

    Xu, Wei; Chen, Wen; Liang, Yingjie

    2018-02-01

    This study investigates the feasibility of the least-squares method for fitting non-Gaussian noise data. We add different levels of the two typical non-Gaussian noises, Lévy and stretched Gaussian noises, to the exact values of selected functions, including linear, polynomial and exponential equations, and the maximum absolute and mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than the Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.

  4. Comparison of multiple linear regression, partial least squares and artificial neural networks for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids.

    Science.gov (United States)

    Fragkaki, A G; Farmaki, E; Thomaidis, N; Tsantili-Kakoulidou, A; Angelis, Y S; Koupparis, M; Georgakopoulos, C

    2012-09-21

    The comparison among different modelling techniques, such as multiple linear regression, partial least squares and artificial neural networks, has been performed in order to construct and evaluate models for prediction of gas chromatographic relative retention times of trimethylsilylated anabolic androgenic steroids. The performance of the quantitative structure-retention relationship study, using the multiple linear regression and partial least squares techniques, has been previously conducted. In the present study, artificial neural networks models were constructed and used for the prediction of relative retention times of anabolic androgenic steroids, while their efficiency is compared with that of the models derived from the multiple linear regression and partial least squares techniques. For overall ranking of the models, a novel procedure [Trends Anal. Chem. 29 (2010) 101-109] based on sum of ranking differences was applied, which permits the best model to be selected. The suggested models are considered useful for the estimation of relative retention times of designer steroids for which no analytical data are available. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Data-adapted moving least squares method for 3-D image interpolation

    International Nuclear Information System (INIS)

    Jang, Sumi; Lee, Yeon Ju; Jeong, Byeongseon; Nam, Haewon; Lee, Rena; Yoon, Jungho

    2013-01-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons. (paper)
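
    Stripped to one dimension and without the data-adapted modification of the paper, the moving least squares idea looks as follows: at every evaluation point a low-degree polynomial is fitted by weighted least squares with weights that decay away from that point, and the local fit is evaluated there. Bandwidth, degree, and test function below are arbitrary choices.

```python
import numpy as np

def mls_interpolate(x_data, y_data, x_eval, degree=2, h=0.15):
    """Classical 1-D moving least squares with a Gaussian weight of bandwidth h."""
    y_eval = np.empty_like(x_eval)
    for i, xe in enumerate(x_eval):
        w = np.exp(-((x_data - xe) / h) ** 2)             # weights decaying away from xe
        sw = np.sqrt(w)
        V = np.vander(x_data - xe, degree + 1)            # polynomial basis centred at xe
        coeff = np.linalg.lstsq(V * sw[:, None], sw * y_data, rcond=None)[0]
        y_eval[i] = coeff[-1]                             # local fit evaluated at xe (its constant term)
    return y_eval

x = np.linspace(0.0, 1.0, 25)
y = np.sin(2.0 * np.pi * x)
xq = np.linspace(0.0, 1.0, 101)
print(np.max(np.abs(mls_interpolate(x, y, xq) - np.sin(2.0 * np.pi * xq))))   # approximation error
```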

  6. An Adaptive Noise Cancellation System Based on Linear and Widely Linear Complex Valued Least Mean Square Algorithms for Removing Electrooculography Artifacts from Electroencephalography Signals

    Directory of Open Access Journals (Sweden)

    Engin Cemal MENGÜÇ

    2018-03-01

    Full Text Available In this study, an adaptive noise cancellation (ANC) system based on linear and widely linear (WL) complex valued least mean square (LMS) algorithms is designed for removing electrooculography (EOG) artifacts from electroencephalography (EEG) signals. The real valued EOG and EEG signals (Fp1 and Fp2) given in the dataset are first expressed as a complex valued signal in the complex domain. Then, using the proposed ANC system, the EOG artifacts are eliminated in the complex domain from the EEG signals. Expression of these signals in the complex domain allows us to remove EOG artifacts from two EEG channels simultaneously. Moreover, in this study, it has been shown that the complex valued EEG signal exhibits noncircular behavior, and in this case, the WL-CLMS algorithm enhances the performance of the ANC system compared to the real-valued LMS and CLMS algorithms. Simulation results support the proposed approach.
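
    A minimal complex LMS noise canceller in the spirit of (but much simpler than) the system described above; the widely linear branch is omitted and the EEG/EOG signals are replaced by synthetic stand-ins, so all sizes and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
eog = rng.standard_normal(n)                       # reference (artifact) channel
clean = np.sin(2.0 * np.pi * 0.01 * np.arange(n))  # "true" EEG component
eeg = clean + 0.8 * eog                            # primary channel contaminated by the artifact

L, mu = 4, 0.01                                    # filter length and step size
w = np.zeros(L, dtype=complex)
out = np.zeros(n, dtype=complex)
for k in range(L, n):
    x = eog[k - L + 1:k + 1][::-1].astype(complex) # reference tap vector [eog[k], ..., eog[k-L+1]]
    e = eeg[k] - np.vdot(w, x)                     # error = cleaned EEG estimate, with y = w^H x
    w = w + mu * np.conj(e) * x                    # complex LMS weight update
    out[k] = e

print(np.mean(np.abs(out[1000:] - clean[1000:]) ** 2))   # residual artifact power after convergence
```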

  7. A square-plate ultrasonic linear motor operating in two orthogonal first bending modes.

    Science.gov (United States)

    Chen, Zhijiang; Li, Xiaotian; Chen, Jianguo; Dong, Shuxiang

    2013-01-01

    A novel square-plate piezoelectric ultrasonic linear motor operated in two orthogonal first bending vibration modes (B₁) is proposed. The piezoelectric vibrator of the linear motor is simply made of a single PZT ceramic plate (size: 15 x 15 x 2 mm) poled in its thickness direction. The top surface electrode of the square ceramic plate was divided into four active areas along its two diagonal lines for exciting the two orthogonal B₁ modes. The achieved driving force and speed of the linear motor are 1.8 N and 230 mm/s, respectively, under one pair of orthogonal voltage drives of 150 V(p-p) at the resonance frequency of 92 kHz. The proposed linear motor has advantages over conventional ultrasonic linear motors, such as a relatively large driving force, a very simple working mode and structure, and low fabrication cost.

  8. Partial update least-square adaptive filtering

    CERN Document Server

    Xie, Bei

    2014-01-01

    Adaptive filters play an important role in the fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity in implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster a

  9. Solve: a non linear least-squares code and its application to the optimal placement of torsatron vertical field coils

    International Nuclear Information System (INIS)

    Aspinall, J.

    1982-01-01

    A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least-squares algorithm to find the optimal value of a parameter vector. Optimal is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is such a non-linear problem.

  10. Decaying quasi-two-dimensional viscous flow on a square domain

    DEFF Research Database (Denmark)

    Konijnenberg, J.A. van de; Flor, J.B.; Heijst, G.J.F. van

    1998-01-01

    A comparison is made between experimental, numerical and analytical results for the two-dimensional flow on a square domain. The experiments concern the flow at the interface of a two-layer stratified fluid, evoked by either stirring the fluid with a rake, or by injecting additional fluid...... at the interface. Two numerical simulations were performed with initial conditions and boundary conditions that correspond approximately with those met in the experiments. The analytical results concern the calculation of the lowest modes of a decaying Stokes flow on a square domain. At late times...... relationship between vorticity and stream function in the experiments and the simulations. (C) 1998 American Institute of Physics....

  11. Hourly cooling load forecasting using time-indexed ARX models with two-stage weighted least squares regression

    International Nuclear Information System (INIS)

    Guo, Yin; Nazarian, Ehsan; Ko, Jeonghan; Rajurkar, Kamlakar

    2014-01-01

    Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trend of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set
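
    The weighted least-squares step behind such ARX forecasting models can be sketched as follows; this single-stage toy example with a hand-made outlier weight only illustrates the mechanics and does not reproduce the paper's hourly-indexed, two-stage weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
u = rng.standard_normal(n)                          # exogenous input (e.g., a weather variable)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] + 0.2 * y[t - 2] + 0.8 * u[t - 1] + 0.05 * rng.standard_normal()
y[250] += 10.0                                      # inject one gross measurement outlier

Y = y[2:]                                           # responses y_t
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])   # regressors [y_{t-1}, y_{t-2}, u_{t-1}]
w = np.ones_like(Y)
w[np.abs(Y - np.median(Y)) > 5.0 * np.std(Y)] = 1e-3   # crude down-weighting of outlying responses
sw = np.sqrt(w)
theta = np.linalg.lstsq(Phi * sw[:, None], sw * Y, rcond=None)[0]
print(theta)                                        # weighted least-squares estimates of (a1, a2, b1)
```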

  12. Harmonically trapped dipolar fermions in a two-dimensional square lattice

    DEFF Research Database (Denmark)

    Larsen, Anne-Louise G.; Bruun, Georg

    2012-01-01

    We consider dipolar fermions in a two-dimensional square lattice and a harmonic trapping potential. The anisotropy of the dipolar interaction combined with the lattice leads to transitions between phases with density order of different symmetries. We show that the attractive part of the dipolar...

  13. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    Science.gov (United States)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.

  14. Bubble-Enriched Least-Squares Finite Element Method for Transient Advective Transport

    Directory of Open Access Journals (Sweden)

    Rajeev Kumar

    2008-01-01

    Full Text Available The least-squares finite element method (LSFEM) has received increasing attention in recent years due to advantages over the Galerkin finite element method (GFEM). The method leads to a minimization problem in the L2-norm and thus results in a symmetric and positive definite matrix, even for first-order differential equations. In addition, the method contains an implicit streamline upwinding mechanism that prevents the appearance of oscillations that are characteristic of the Galerkin method. Thus, the least-squares approach does not require explicit stabilization and the associated stabilization parameters required by the Galerkin method. A new approach, the bubble-enriched least-squares finite element method (BELSFEM), is presented and compared with the classical LSFEM. The BELSFEM requires a space-time element formulation and employs bubble functions in space and time to increase the accuracy of the finite element solution without degrading computational performance. We apply the BELSFEM and classical least-squares finite element methods to benchmark problems for 1D and 2D linear transport. The accuracy and performance are compared.

  15. Resolution of co-eluting compounds of Cannabis Sativa in comprehensive two-dimensional gas chromatography/mass spectrometry detection with Multivariate Curve Resolution-Alternating Least Squares.

    Science.gov (United States)

    Omar, Jone; Olivares, Maitane; Amigo, José Manuel; Etxebarria, Nestor

    2014-04-01

    Comprehensive Two Dimensional Gas Chromatography - Mass Spectrometry (GC × GC/qMS) analysis of Cannabis sativa extracts shows a high complexity due to the large variety of terpenes and cannabinoids and to the fact that the complete resolution of the peaks is not straightforwardly achieved. In order to support the resolution of the co-eluted peaks in the sesquiterpene and the cannabinoid chromatographic region the combination of Multivariate Curve Resolution and Alternating Least Squares algorithms was satisfactorily applied. As a result, four co-eluting areas were totally resolved in the sesquiterpene region and one in the cannabinoid region in different samples of Cannabis sativa. The comparison of the mass spectral profiles obtained for each resolved peak with theoretical mass spectra allowed the identification of some of the co-eluted peaks. Finally, the classification of the studied samples was achieved based on the relative concentrations of the resolved peaks. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying...... and satellite positioning application examples. In these application areas we are typically interested in the parameters in the model typically 2- or 3-D positions and not in predictive modelling which is often the main concern in other regression analysis applications. Adjustment is often used to obtain...... the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application both in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between...

  17. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
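
    For context, the classical discrete/discrete constrained least-squares filter that the paper builds on can be written in the frequency domain as F_hat = H*·G / (|H|² + γ|P|²), with P a Laplacian smoothness operator. The sketch below implements only this textbook baseline on synthetic data; the continuous/discrete/continuous extension and the small-kernel spatial fitting of the paper are not reproduced.

```python
import numpy as np

def cls_restore(g, h, gamma=0.01):
    """Frequency-domain constrained least-squares filter F_hat = conj(H) G / (|H|^2 + gamma |P|^2)."""
    M, N = g.shape
    H = np.fft.fft2(h, (M, N))
    p = np.zeros((M, N))                              # discrete Laplacian as the smoothness operator P
    p[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
    P = np.fft.fft2(p)
    G = np.fft.fft2(g)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(9)
f = rng.random((64, 64))                                              # "true" image
h = np.outer(np.hanning(7), np.hanning(7)); h /= h.sum()              # blur kernel
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, (64, 64))))  # circularly blurred image
g += 0.01 * rng.standard_normal(g.shape)                              # additive noise
f_hat = cls_restore(g, h)
print(np.mean((f_hat - f) ** 2))                                      # restoration error (diagnostic)
```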

  18. Linear least-squares method for global luminescent oil film skin friction field analysis

    Science.gov (United States)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.

  19. A COMPARISON OF LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR AND PARTIAL LEAST SQUARES ANALYSES (Case Study: Microarray Data)

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Full Text Available Linear regression analysis is one of the parametric statistical methods that utilize the relationship between two or more quantitative variables. In linear regression analysis, several assumptions must be met: the errors are normally distributed, there is no correlation between the errors, and the error variance is constant (homogeneous). Some constraints can cause these assumptions not to be met, for example correlation between independent variables (multicollinearity) and limits on the number of observations and independent variables that can be obtained. When the number of samples obtained is less than the number of independent variables, the data are called microarray data. Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to overcome the microarray, overfitting, and multicollinearity problems. From the above description, a study comparing the LASSO and PLS methods is warranted. This study uses coronary heart and stroke patient data, which are microarray data and contain multicollinearity. With these two characteristics of the data, where most independent variables have a weak correlation with one another, the LASSO method produces a better model than PLS, as seen from the RMSEP.
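
    A hedged sketch of such a comparison on a synthetic "wide" data set (fewer samples than predictors) is shown below with scikit-learn; the data, hyper-parameters and RMSEP comparison are stand-ins, since the coronary heart and stroke data are not reproduced here.

```python
# Hedged sketch: LASSO vs. PLS regression on a synthetic microarray-like
# data set (n < p, correlated predictors).  Data and hyper-parameters are
# invented; this does not reproduce the paper's clinical data or results.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 60, 200                                   # n < p -> "microarray" setting
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = [3, -2, 1.5, 1, -1]
y = X @ beta + 0.5 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

lasso = Lasso(alpha=0.1).fit(X_tr, y_tr)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

rmsep_lasso = mean_squared_error(y_te, lasso.predict(X_te)) ** 0.5
rmsep_pls = mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5
print(f"RMSEP  LASSO: {rmsep_lasso:.3f}   PLS: {rmsep_pls:.3f}")
```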

  20. TENSOLVE: A software package for solving systems of nonlinear equations and nonlinear least squares problems using tensor methods

    Energy Technology Data Exchange (ETDEWEB)

    Bouaricha, A. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.; Schnabel, R.B. [Colorado Univ., Boulder, CO (United States). Dept. of Computer Science

    1996-12-31

    This paper describes a modular software package for solving systems of nonlinear equations and nonlinear least squares problems, using a new class of methods called tensor methods. It is intended for small to medium-sized problems, say with up to 100 equations and unknowns, in cases where it is reasonable to calculate the Jacobian matrix or approximate it by finite differences at each iteration. The software allows the user to select between a tensor method and a standard method based upon a linear model. The tensor method models F(x) by a quadratic model, where the second-order term is chosen so that the model is hardly more expensive to form, store, or solve than the standard linear model. Moreover, the software provides two different global strategies, a line search and a two-dimensional trust region approach. Test results indicate that, in general, tensor methods are significantly more efficient and robust than standard methods on small and medium-sized problems in terms of iterations and function evaluations.

  1. Efficient Model Selection for Sparse Least-Square SVMs

    Directory of Open Access Journals (Sweden)

    Xiao-Lei Xia

    2013-01-01

    Full Text Available The Forward Least-Squares Approximation (FLSA) SVM is a newly-emerged Least-Squares SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independence of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely the Reduced FLSA-SVM (RFLSA-SVM), which has reduced computational complexity and memory requirements. The strategy of “contexts inheritance” is introduced to improve the efficiency of tuning the regularization parameter for both the FLSA-SVM and the RFLSA-SVM algorithms. Experimental results on benchmark datasets showed that, compared to the SVM and a number of its variants, the RFLSA-SVM solutions contain a reduced number of support vectors while maintaining competitive generalization abilities. With respect to the time cost of tuning the regularization parameter, the RFLSA-SVM algorithm was empirically demonstrated to be the fastest compared to the FLSA-SVM, the LS-SVM, and the SVM algorithms.
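
    For context, training a plain (dense) LS-SVM classifier reduces to solving a single linear system; the sketch below shows that baseline system with an RBF kernel. It is not the sparse FLSA/RFLSA variants described in the paper, and the data and gamma are invented.

```python
# Plain LS-SVM classifier sketch: training solves the linear system
#   [ 0      y^T          ] [ b     ]   [ 0 ]
#   [ y   Omega + I/gamma ] [ alpha ] = [ 1 ]
# with Omega_ij = y_i y_j K(x_i, x_j).  This is the dense baseline, not the
# sparse FLSA/RFLSA variants of the paper; data and gamma are invented.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, size=(30, 2)), rng.normal(1, 1, size=(30, 2))])
y = np.hstack([-np.ones(30), np.ones(30)])
gamma, sigma2 = 10.0, 1.0

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * sigma2))                     # RBF kernel matrix
Omega = np.outer(y, y) * K

n = len(y)
Asys = np.zeros((n + 1, n + 1))
Asys[0, 1:] = y
Asys[1:, 0] = y
Asys[1:, 1:] = Omega + np.eye(n) / gamma
rhs = np.hstack([0.0, np.ones(n)])

sol = np.linalg.solve(Asys, rhs)
b, alpha = sol[0], sol[1:]

def predict(Xnew):
    sq_new = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.sign(np.exp(-sq_new / (2 * sigma2)) @ (alpha * y) + b)

print("training accuracy:", (predict(X) == y).mean())
```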

  2. Bivariate least squares linear regression: Towards a unified analytic formalism. I. Functional models

    Science.gov (United States)

    Caimmi, R.

    2011-08-01

    Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts ( York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to usual quantities leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models i.e. the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter ( Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (∓ σ) for both
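
    Errors-in-both-variables straight-line fits of the kind reviewed here are available off the shelf in scipy.odr; below is a small weighted orthogonal-regression sketch on synthetic data (not the [O/H]-[Fe/H] samples of the paper), with the slope and intercept errors reported by the solver.

```python
# Weighted errors-in-both-variables (orthogonal) straight-line fit using
# scipy.odr, in the spirit of the functional models discussed above.
# Data and error bars are synthetic, not the astronomical samples.
import numpy as np
from scipy.odr import ODR, Model, RealData

rng = np.random.default_rng(3)
x_true = np.linspace(0, 10, 25)
y_true = 0.8 * x_true + 1.5
sx, sy = 0.2, 0.4                                   # errors in X and in Y
x = x_true + sx * rng.normal(size=x_true.size)
y = y_true + sy * rng.normal(size=y_true.size)

def line(beta, x):
    return beta[0] * x + beta[1]

data = RealData(x, y, sx=np.full(x.size, sx), sy=np.full(y.size, sy))
fit = ODR(data, Model(line), beta0=[1.0, 0.0]).run()
print("slope, intercept:", fit.beta)
print("std. errors     :", fit.sd_beta)
```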

  3. Pattern formation in two-dimensional square-shoulder systems

    International Nuclear Information System (INIS)

    Fornleitner, Julia; Kahl, Gerhard

    2010-01-01

    Using a highly efficient and reliable optimization tool that is based on ideas of genetic algorithms, we have systematically studied the pattern formation of the two-dimensional square-shoulder system. An overwhelming wealth of complex ordered equilibrium structures emerges from this investigation as we vary the shoulder width. With increasing pressure, three structural archetypes could be identified: cluster lattices, in which clusters of particles occupy the sites of distorted hexagonal lattices; lane formation; and compact particle arrangements with high coordination numbers. The internal complexity of these structures increases with increasing shoulder width.

  4. Pattern formation in two-dimensional square-shoulder systems

    Energy Technology Data Exchange (ETDEWEB)

    Fornleitner, Julia [Institut fuer Festkoerperforschung, Forschungszentrum Juelich, D-52425 Juelich (Germany); Kahl, Gerhard, E-mail: fornleitner@cmt.tuwien.ac.a [Institut fuer Theoretische Physik and Centre for Computational Materials Science (CMS), Technische Universitaet Wien, Wiedner Hauptstrasse 8-10, A-1040 Wien (Austria)]

    2010-03-17

    Using a highly efficient and reliable optimization tool that is based on ideas of genetic algorithms, we have systematically studied the pattern formation of the two-dimensional square-shoulder system. An overwhelming wealth of complex ordered equilibrium structures emerges from this investigation as we vary the shoulder width. With increasing pressure, three structural archetypes could be identified: cluster lattices, in which clusters of particles occupy the sites of distorted hexagonal lattices; lane formation; and compact particle arrangements with high coordination numbers. The internal complexity of these structures increases with increasing shoulder width.

  5. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
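
    The flavour of this approach can be illustrated with a toy regularized least-squares regression whose kernel is the sum of a wide and a narrow Gaussian kernel; the bandwidths and regularization parameter below are illustrative assumptions, not the paper's settings.

```python
# Toy regularized least-squares regression in a sum of two RKHSs: the kernel
# is the sum of a wide and a narrow Gaussian kernel, so low- and high-
# frequency parts of the target can both be captured.  Bandwidths and the
# regularization parameter are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + 0.3 * np.sin(40 * np.pi * x) + 0.05 * rng.normal(size=x.size)

def gauss_kernel(a, b, s):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * s ** 2))

s_wide, s_narrow, lam = 0.3, 0.02, 1e-3
K = gauss_kernel(x, x, s_wide) + gauss_kernel(x, x, s_narrow)   # sum-space kernel

# Regularized least squares: (K + lam * n * I) c = y
c = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)

x_test = np.linspace(0, 1, 200)
K_test = gauss_kernel(x_test, x, s_wide) + gauss_kernel(x_test, x, s_narrow)
y_hat = K_test @ c
print("training RMSE:", np.sqrt(np.mean((K @ c - y) ** 2)))
```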

  6. Estimating the kinetic parameters of activated sludge storage using weighted non-linear least-squares and accelerating genetic algorithm.

    Science.gov (United States)

    Fang, Fang; Ni, Bing-Jie; Yu, Han-Qing

    2009-06-01

    In this study, weighted non-linear least-squares analysis and accelerating genetic algorithm are integrated to estimate the kinetic parameters of substrate consumption and storage product formation of activated sludge. A storage product formation equation is developed and used to construct the objective function for the determination of its production kinetics. The weighted least-squares analysis is employed to calculate the differences in the storage product concentration between the model predictions and the experimental data as the sum of squared weighted errors. The kinetic parameters for the substrate consumption and the storage product formation are estimated to be the maximum heterotrophic growth rate of 0.121/h, the yield coefficient of 0.44 mg COD_X/mg COD_S (COD, chemical oxygen demand) and the substrate half saturation constant of 16.9 mg/L, respectively, by minimizing the objective function using a real-coding-based accelerating genetic algorithm. Also, the fraction of substrate electrons diverted to the storage product formation is estimated to be 0.43 mg COD_STO/mg COD_S. The validity of our approach is confirmed by the results of independent tests and the kinetic parameter values reported in literature, suggesting that this approach could be useful to evaluate the product formation kinetics of mixed cultures like activated sludge. More importantly, as this integrated approach could estimate the kinetic parameters rapidly and accurately, it could be applied to other biological processes.
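
    The weighted non-linear least-squares part of such a workflow can be sketched with a gradient-based solver; the Monod-type model, heteroscedastic weights and data below are invented, and scipy's trust-region solver stands in for the accelerating genetic algorithm used in the paper.

```python
# Hedged sketch of weighted non-linear least squares: fitting a Monod-type
# saturation curve r = mu_max * S / (Ks + S) with residuals weighted by
# 1/sigma.  A gradient-based scipy solver is used instead of the paper's
# accelerating genetic algorithm; all data are synthetic.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
S = np.linspace(1, 100, 20)                        # substrate concentration, mg/L
mu_max_true, Ks_true = 0.12, 17.0
sigma = 0.002 + 0.02 * (S / S.max())               # heteroscedastic noise level
r_obs = mu_max_true * S / (Ks_true + S) + sigma * rng.normal(size=S.size)

def weighted_residuals(p):
    mu_max, Ks = p
    return (mu_max * S / (Ks + S) - r_obs) / sigma  # weighted errors

fit = least_squares(weighted_residuals, x0=[0.05, 5.0], bounds=([0, 0], [1, 100]))
print("estimated mu_max, Ks:", fit.x)
print("weighted SSE        :", 2 * fit.cost)
```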

  7. Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model

    Directory of Open Access Journals (Sweden)

    WANG Bin

    2015-06-01

    Full Text Available Based on the Newton-Gauss iterative algorithm for weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model utilizes the standardized residuals to construct the weight factor function, and a robust estimate of the square root of the variance component is obtained by introducing the median method. Therefore, robustness in both the observation and structure spaces can be achieved simultaneously. To obtain the standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression of the cofactor matrix of the WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiment indicates that the model proposed in this paper exhibits satisfactory robustness in handling gross errors in WTLS; the estimated parameters show no significant difference from the results of WTLS without gross errors. Therefore, it is superior to a robust weighted total least squares model constructed directly with the residuals.
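
    As a point of reference for the WTLS family, a plain (unweighted, non-robust) total least-squares solve can be written via the SVD of the augmented matrix [A b]; the sketch below shows only that baseline, not the weighted or robust schemes of the paper, and the data are invented.

```python
# Baseline total least-squares (TLS) solve via the SVD of [A  b]: both the
# coefficient matrix and the observations are treated as noisy.  This is the
# plain, unweighted, non-robust TLS, shown only as a contrast to the robust
# WTLS scheme described above; data are invented.
import numpy as np

rng = np.random.default_rng(6)
m, n = 50, 3
A_true = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5])
A = A_true + 0.02 * rng.normal(size=(m, n))        # errors in the data matrix
b = A_true @ x_true + 0.02 * rng.normal(size=m)    # errors in the observations

# TLS solution from the right singular vector of the smallest singular value
_, _, Vt = np.linalg.svd(np.hstack([A, b[:, None]]))
v = Vt[-1]
x_tls = -v[:n] / v[n]

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print("TLS solution:", x_tls)
print("LS  solution:", x_ls)
```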

  8. Flow Applications of the Least Squares Finite Element Method

    Science.gov (United States)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  9. Least-squares Minimization Approaches to Interpret Total Magnetic Anomalies Due to Spheres

    Science.gov (United States)

    Abdelrahman, E. M.; El-Araby, T. M.; Soliman, K. S.; Essa, K. S.; Abo-Ezz, E. R.

    2007-05-01

    We have developed three different least-squares approaches to determine successively: the depth, magnetic angle, and amplitude coefficient of a buried sphere from a total magnetic anomaly. By defining the anomaly value at the origin and the nearest zero-anomaly distance from the origin on the profile, the problem of depth determination is transformed into the problem of finding a solution of a nonlinear equation of the form f(z)=0. Knowing the depth and applying the least-squares method, the magnetic angle and amplitude coefficient are determined using two simple linear equations. In this way, the depth, magnetic angle, and amplitude coefficient are determined individually from all observed total magnetic data. The method is applied to synthetic examples with and without random errors and tested on a field example from Senegal, West Africa. In all cases, the depth solutions are in good agreement with the actual ones.

  10. Comparison between results of solution of Burgers' equation and Laplace's equation by Galerkin and least-square finite element methods

    Science.gov (United States)

    Adib, Arash; Poorveis, Davood; Mehraban, Farid

    2018-03-01

    In this research, two equations are considered as examples of hyperbolic and elliptic equations, and two finite element methods are applied to solve them. The purpose of this research is the selection of a suitable method for solving each of the two equations. Burgers' equation is a hyperbolic equation: a pure advection (diffusion-free) equation that is one-dimensional and unsteady. A sudden shock wave is introduced to the model; this wave moves without deformation. Laplace's equation, in contrast, is an elliptic equation that is steady and two-dimensional. The solution of Laplace's equation in an earth dam is considered: by solving Laplace's equation, the pressure head and the seepage values in the X and Y directions are calculated at different points of the earth dam, and finally the water table in the earth dam is shown. For Burgers' equation, the least-squares method can capture the movement of the wave (with some oscillation), but the Galerkin method cannot show it correctly; the best approach for solving Burgers' equation is to discretize space with the least-squares finite element method and time with a forward difference. For Laplace's equation, both the Galerkin and least-squares methods reproduce the water table in the earth dam correctly.

  11. A comparison of two least-squared random coefficient autoregressive models: with and without autocorrelated errors

    OpenAIRE

    Autcha Araveeporn

    2013-01-01

    This paper compares a Least-Squared Random Coefficient Autoregressive (RCA) model with a Least-Squared RCA model based on Autocorrelated Errors (RCA-AR). We looked at only the first order models, denoted RCA(1) and RCA(1)-AR(1). The efficiency of the Least-Squared method was checked by applying the models to Brownian motion and Wiener process, and the efficiency followed closely the asymptotic properties of a normal distribution. In a simulation study, we compared the performance of RCA(1) an...

  12. Least-squares fit of a linear combination of functions

    Directory of Open Access Journals (Sweden)

    Niraj Upadhyay

    2013-12-01

    Full Text Available We propose that given a data-set $S=\{(x_i,y_i)\mid i=1,2,\dots,n\}$ and real-valued functions $\{f_\alpha(x)\mid \alpha=1,2,\dots,m\}$, the least-squares fit vector $A=\{a_\alpha\}$ for $y=\sum_\alpha a_{\alpha}f_\alpha(x)$ is $A = (F^TF)^{-1}F^TY$, where $[F_{i\alpha}]=[f_\alpha(x_i)]$. We test this formalism by deriving the algebraic expressions of the regression coefficients in $y = ax + b$ and in $y = ax^2 + bx + c$. As a practical application, we successfully arrive at the coefficients in the semi-empirical mass formula of nuclear physics. The formalism is {\it generic} - it has the potential of being applicable to any {\it type} of $\{x_i\}$ as long as there exist appropriate $\{f_\alpha\}$. The method can be exploited with a CAS or an object-oriented language and is excellently suitable for parallel-processing.
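
    The closed form $A = (F^TF)^{-1}F^TY$ translates directly into a few lines of numpy; the sketch below fits the quadratic case $y = ax^2 + bx + c$ mentioned in the abstract, on synthetic data.

```python
# Direct transcription of A = (F^T F)^{-1} F^T Y for the quadratic case
# y = a x^2 + b x + c  (basis functions f_1 = x^2, f_2 = x, f_3 = 1).
# Synthetic data; lstsq is used instead of an explicit inverse for stability.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-3, 3, 40)
y = 2.0 * x**2 - 1.0 * x + 0.5 + 0.3 * rng.normal(size=x.size)

F = np.column_stack([x**2, x, np.ones_like(x)])    # F[i, alpha] = f_alpha(x_i)
A, *_ = np.linalg.lstsq(F, y, rcond=None)          # least-squares fit vector
print("a, b, c =", A)

# Same answer from the normal equations (F^T F) A = F^T Y:
A_normal = np.linalg.solve(F.T @ F, F.T @ y)
print("normal-equations check:", np.allclose(A, A_normal))
```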

  13. Prediction of retention indices for frequently reported compounds of plant essential oils using multiple linear regression, partial least squares, and support vector machine.

    Science.gov (United States)

    Yan, Jun; Huang, Jian-Hua; He, Min; Lu, Hong-Bing; Yang, Rui; Kong, Bo; Xu, Qing-Song; Liang, Yi-Zeng

    2013-08-01

    Retention indices for frequently reported compounds of plant essential oils on three different stationary phases were investigated. Multivariate linear regression, partial least squares, and support vector machine, combined with a new variable selection approach called random-frog recently proposed by our group, were employed to model quantitative structure-retention relationships. Internal and external validations were performed to ensure stability and predictive ability. All three methods could obtain an acceptable model, and the optimal results were obtained by the support vector machine based on a small number of informative descriptors, with squared cross-validation correlation coefficients of 0.9726, 0.9759, and 0.9331 on the dimethylsilicone stationary phase, the dimethylsilicone phase with 5% phenyl groups, and the PEG stationary phase, respectively. The performances of two variable selection approaches, random-frog and the genetic algorithm, are compared. The importance of the variables was found to be consistent when estimated from correlation coefficients in multivariate linear regression equations and from selection probability in model spaces. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang; Schuster, Gerard T.

    2013-01-01

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual

  15. Gauss’s, Cholesky’s and Banachiewicz’s Contributions to Least Squares

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Wasniewski, Jerzy

    This paper describes historically Gauss’s contributions to the area of Least Squares. Also mentioned are Cholesky’s and Banachiewicz’s contributions to linear algebra. The material given is backup information to a Tutorial given at PPAM 2011 to honor Cholesky on the hundredth anniversary of his …

  16. A new stabilized least-squares imaging condition

    International Nuclear Information System (INIS)

    Vivas, Flor A; Pestana, Reynam C; Ursin, Bjørn

    2009-01-01

    The classical deconvolution imaging condition consists of dividing the upgoing wave field by the downgoing wave field and summing over all frequencies and sources. The least-squares imaging condition consists of summing the cross-correlation of the upgoing and downgoing wave fields over all frequencies and sources, and dividing the result by the total energy of the downgoing wave field. This procedure is more stable than using the classical imaging condition, but it still requires stabilization in zones where the energy of the downgoing wave field is small. To stabilize the least-squares imaging condition, the energy of the downgoing wave field is replaced by its average value computed in a horizontal plane in poorly illuminated regions. Applications to the Marmousi and Sigsbee2A data sets show that the stabilized least-squares imaging condition produces better images than the least-squares and cross-correlation imaging conditions
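
    A one-line-per-image-point toy of the three imaging conditions described above is sketched below; the "wavefields" are random stand-ins, and the stabilization is implemented with a simple energy floor rather than the horizontal-plane average proposed in the paper.

```python
# Toy comparison of the imaging conditions described above, per image point:
#   cross-correlation: sum_w U D*
#   least squares:     sum_w U D* / sum_w |D|^2
#   stabilized LS:     same, but the denominator is floored where the
#                      downgoing energy is small (a simple epsilon here; the
#                      paper instead uses a horizontal-plane average).
# U and D are random stand-ins for upgoing/downgoing wavefields.
import numpy as np

rng = np.random.default_rng(8)
n_freq, n_x = 64, 200
D = rng.normal(size=(n_freq, n_x)) + 1j * rng.normal(size=(n_freq, n_x))
D[:, 100:] *= 1e-4                                  # poorly illuminated zone
U = 0.3 * D + 0.05 * (rng.normal(size=D.shape) + 1j * rng.normal(size=D.shape))

xcorr = np.real(np.sum(U * np.conj(D), axis=0))
energy = np.sum(np.abs(D) ** 2, axis=0)

img_ls = xcorr / energy                             # unstable where energy ~ 0
eps = 0.01 * energy.mean()
img_stab = xcorr / np.maximum(energy, eps)          # stabilized version

print("max |LS| in weak zone        :", np.abs(img_ls[100:]).max())
print("max |stabilized| in weak zone:", np.abs(img_stab[100:]).max())
```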

  17. Plane-wave least-squares reverse-time migration

    KAUST Repository

    Dai, Wei

    2013-06-03

    A plane-wave least-squares reverse-time migration (LSRTM) is formulated with a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of plane-wave prestack LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer stable convergence even when the migration velocity has bulk errors of up to 5%; (2) to significantly reduce computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. Unlike phase-shift encoding with random time shifts applied to each shot gather, plane-wave encoding can be effectively applied to data with a marine streamer geometry. (3) Plane-wave prestack LSRTM can provide higher-quality images than standard reverse-time migration. Numerical tests on the Marmousi2 model and a marine field data set are performed to illustrate the benefits of plane-wave LSRTM. Empirical results show that LSRTM in the plane-wave domain, compared to standard reverse-time migration, produces images efficiently with fewer artifacts and better spatial resolution. Moreover, the prestack image ensemble accommodates more unknowns, making it more robust than conventional least-squares migration in the presence of migration velocity errors. © 2013 Society of Exploration Geophysicists.

  18. Squares of Random Linear Codes

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego

    2015-01-01

    Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise …

  19. The consistency of ordinary least-squares and generalized least-squares polynomial regression on characterizing the mechanomyographic amplitude versus torque relationship

    International Nuclear Information System (INIS)

    Herda, Trent J; Ryan, Eric D; Costa, Pablo B; DeFreitas, Jason M; Walter, Ashley A; Stout, Jeffrey R; Beck, Travis W; Cramer, Joel T; Housh, Terry J; Weir, Joseph P

    2009-01-01

    The primary purpose of this study was to examine the consistency of ordinary least-squares (OLS) and generalized least-squares (GLS) polynomial regression analyses utilizing linear, quadratic and cubic models on either five or ten data points that characterize the mechanomyographic amplitude (MMG_RMS) versus isometric torque relationship. The secondary purpose was to examine the consistency of OLS and GLS polynomial regression utilizing only linear and quadratic models (excluding cubic responses) on either ten or five data points. Eighteen participants (mean ± SD age = 24 ± 4 yr) completed ten randomly ordered isometric step muscle actions from 5% to 95% of the maximal voluntary contraction (MVC) of the right leg extensors during three separate trials. MMG_RMS was recorded from the vastus lateralis during the MVCs and each submaximal muscle action. MMG_RMS versus torque relationships were analyzed on a subject-by-subject basis using OLS and GLS polynomial regression. When using ten data points, only 33% and 27% of the subjects were fitted with the same model (utilizing linear, quadratic and cubic models) across all three trials for OLS and GLS, respectively. After eliminating the cubic model, there was an increase to 55% of the subjects being fitted with the same model across all trials for both OLS and GLS regression. Using only five data points (instead of ten data points), 55% of the subjects were fitted with the same model across all trials for OLS and GLS regression. Overall, OLS and GLS polynomial regression models were only able to consistently describe the torque-related patterns of response for MMG_RMS in 27–55% of the subjects across three trials. Future studies should examine alternative methods for improving the consistency and reliability of the patterns of response for the MMG_RMS versus isometric torque relationship.

  20. Regularized Partial Least Squares with an Application to NMR Spectroscopy

    OpenAIRE

    Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletic-Savatic, Mirjana

    2012-01-01

    High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexi...

  1. Performance improvement of shunt active power filter based on non-linear least-square approach

    DEFF Research Database (Denmark)

    Terriche, Yacine

    2018-01-01

    Nowadays, shunt active power filters (SAPFs) have become a popular solution for power quality issues. A crucial issue in controlling SAPFs, which is highly correlated with their accuracy, flexibility and dynamic behavior, is generating the reference compensating current (RCC). The synchronous reference frame (SRF) approach is widely used for generating the RCC due to its simplicity and computational efficiency. However, the SRF approach needs precise information on the voltage phase, which becomes a challenge under adverse grid conditions. A typical solution to answer this need … This paper proposes an improved open-loop strategy which is unconditionally stable and flexible. The proposed method, which is based on a non-linear least squares (NLS) approach, can extract the fundamental voltage and estimate its phase within only half a cycle, even in the presence of odd harmonics and dc offset …
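
    Estimating the fundamental amplitude, phase and dc offset from roughly half a cycle of samples can be posed as a small least-squares problem once the grid frequency is known; the generic sketch below illustrates that idea on a synthetic waveform and is not the paper's exact NLS formulation.

```python
# Sketch: extract the fundamental component (amplitude, phase) and dc offset
# from roughly half a cycle of voltage samples by least squares.  The model
# v(t) ~ a*cos(w t) + b*sin(w t) + c is linear in (a, b, c) for a known grid
# frequency w.  Generic illustration only, not the paper's NLS method; the
# unmodelled 3rd harmonic leaves a small residual bias over half a cycle.
import numpy as np

f0, fs = 50.0, 10_000.0                     # grid and sampling frequencies
t = np.arange(int(0.5 * fs / f0)) / fs      # half a fundamental cycle
v = (230 * np.sqrt(2) * np.cos(2 * np.pi * f0 * t + 0.4)
     + 30 * np.cos(2 * np.pi * 3 * f0 * t)  # 3rd-harmonic distortion
     + 15)                                  # dc offset

w = 2 * np.pi * f0
H = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(H, v, rcond=None)[0]

amplitude = np.hypot(a, b)
phase = np.arctan2(-b, a)
print(f"amplitude ~ {amplitude:.1f} V, phase ~ {phase:.3f} rad, dc ~ {c:.1f} V")
```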

  2. COMPARISON OF PARTIAL LEAST SQUARES REGRESSION METHOD ALGORITHMS: NIPALS AND PLS-KERNEL AND AN APPLICATION

    Directory of Open Access Journals (Sweden)

    ELİF BULUT

    2013-06-01

    Full Text Available Partial Least Squares Regression (PLSR) is a multivariate statistical method that consists of partial least squares and multiple linear regression analysis. Explanatory variables, X, exhibiting multicollinearity are reduced to components which explain a great amount of the covariance between the explanatory and response variables. These components are few in number and do not suffer from multicollinearity. Multiple linear regression analysis is then applied to those components to model the response variable Y. There are various PLSR algorithms. In this study, the NIPALS and PLS-Kernel algorithms are studied and illustrated on a real data set.
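
    A compact NIPALS-style PLS1 (single response) sketch is shown below to make the component-extraction and deflation steps concrete; it is an unoptimized illustration, not the PLS-Kernel variant and not the study's own code, and the data are synthetic.

```python
# Compact NIPALS-style PLS1 sketch (single response y): extract weight,
# score and loading vectors, deflate X and y, and form the regression
# coefficients B = W (P^T W)^{-1} q.  Illustrative and unoptimized.
import numpy as np

def pls1_nipals(X, y, n_comp):
    X = X - X.mean(axis=0)                       # centre the data
    y = y - y.mean()
    n, p = X.shape
    W, P, Q = np.zeros((p, n_comp)), np.zeros((p, n_comp)), np.zeros(n_comp)
    for a in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)                   # weight vector
        t = X @ w                                # scores
        tt = t @ t
        p_a = X.T @ t / tt                       # X loadings
        q_a = y @ t / tt                         # y loading
        X = X - np.outer(t, p_a)                 # deflation
        y = y - t * q_a
        W[:, a], P[:, a], Q[a] = w, p_a, q_a
    return W @ np.linalg.solve(P.T @ W, Q)       # coefficients for centred X

rng = np.random.default_rng(9)
X = rng.normal(size=(50, 10))
beta = np.zeros(10); beta[:3] = [1.0, -2.0, 0.5]
y = X @ beta + 0.1 * rng.normal(size=50)
print("PLS1 coefficients:", np.round(pls1_nipals(X, y, n_comp=3), 2))
```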

  3. Vectorized Matlab Codes for Linear Two-Dimensional Elasticity

    Directory of Open Access Journals (Sweden)

    Jonas Koko

    2007-01-01

    Full Text Available A vectorized Matlab implementation for the linear finite element is provided for the two-dimensional linear elasticity with mixed boundary conditions. Vectorization means that there is no loop over triangles. Numerical experiments show that our implementation is more efficient than the standard implementation with a loop over all triangles.

  4. Optimization Method of Fusing Model Tree into Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Yu Fang

    2017-01-01

    Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of the data in many fields because of its own features: multiple independent variables, multiple dependent variables and non-linearity. However, a Model Tree (MT), which is made up of many piecewise multiple-linear segments, adapts well to nonlinear functions. Based on this, a new method combining PLS and MT to analyse and predict data is proposed: an MT is built from the principal components and the explanatory variables (the dependent variable) extracted by PLS, and residual information is repeatedly extracted to build further model trees until a satisfactory accuracy condition is met. Using data on the monarch drug of the maxingshigan decoction for treating asthma or cough, and two sample sets from the UCI Machine Learning Repository, the experimental results show that the explanatory and predictive ability of the new method is improved.

  5. Parameter Estimation of Permanent Magnet Synchronous Motor Using Orthogonal Projection and Recursive Least Squares Combinatorial Algorithm

    Directory of Open Access Journals (Sweden)

    Iman Yousefi

    2015-01-01

    Full Text Available This paper presents parameter estimation of a Permanent Magnet Synchronous Motor (PMSM) using a combinatorial algorithm. The nonlinear fourth-order state-space model of the PMSM is selected and rewritten in linear regression form without linearization. Noise is imposed on the system in order to provide a realistic condition, and then the combinatorial Orthogonal Projection Algorithm and Recursive Least Squares (OPA&RLS) method is applied to the linear regression form of the system. Results of this method are compared to those of the Orthogonal Projection Algorithm (OPA) and Recursive Least Squares (RLS) methods to validate the feasibility of the proposed method. Simulation results validate the efficacy of the proposed algorithm.
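
    For reference, a generic recursive least-squares (RLS) update loop for a model written in linear regression form is sketched below; the regressor and "true" parameter vector are invented and this is not the fourth-order PMSM model of the paper, only the RLS building block it combines with OPA.

```python
# Generic recursive least-squares (RLS) loop for y_k = phi_k^T theta + noise.
# The regressor and true parameters are invented; this is only the RLS
# building block, not the PMSM model or the OPA&RLS combination itself.
import numpy as np

rng = np.random.default_rng(10)
theta_true = np.array([0.8, -0.4, 1.2])
n_par, lam = len(theta_true), 0.99               # forgetting factor

theta = np.zeros(n_par)
P = 1e3 * np.eye(n_par)                          # large initial covariance
for k in range(500):
    phi = rng.normal(size=n_par)                 # regressor at step k
    y = phi @ theta_true + 0.05 * rng.normal()   # noisy measurement
    # RLS gain and update
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam

print("estimated parameters:", np.round(theta, 3))
```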

  6. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
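
    For a single right-hand side, an inequality-constrained (non-negative) least-squares solve is available directly in scipy; the combinatorial reorganization described above targets the case of very many right-hand sides and is not reproduced here.

```python
# Single-RHS non-negative least squares with scipy.  The combinatorial
# speed-ups of the record above concern many right-hand sides at once and
# are not shown here; the data are invented.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(11)
A = rng.normal(size=(30, 5))
x_true = np.array([0.0, 2.0, 0.0, 1.5, 0.3])     # non-negative ground truth
b = A @ x_true + 0.05 * rng.normal(size=30)

x_nnls, residual_norm = nnls(A, b)
print("NNLS solution :", np.round(x_nnls, 3))
print("residual norm :", round(residual_norm, 3))
```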

  7. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei

    2012-06-15

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist in a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  8. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei; Fowler, Paul J.; Schuster, Gerard T.

    2012-01-01

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist in a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  9. Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting

    Science.gov (United States)

    Yan, Y. T.; Cai, Y.

    2006-03-01

    A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting dominant SVD modes of the derivative matrix that responds to the variations of the variables, the converging process of the Least-Square fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs as well as all BPM gains and BPM cross-plane couplings through Least-Square fitting of the phase advances and the Local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
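
    The basic idea of keeping only the dominant SVD modes of the response (derivative) matrix in a least-squares update can be shown in a few lines; the matrix and right-hand side below are random stand-ins, not the PEP-II optics response matrix or the MIA machinery.

```python
# SVD-enhanced least-squares step sketch: keep only the dominant SVD modes
# of the response matrix J when solving J dx ~ r, suppressing poorly
# determined directions.  J and r are random stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(12)
m, n = 80, 20
J = rng.normal(size=(m, n))
J[:, -1] = J[:, 0] + 1e-6 * rng.normal(size=m)   # nearly degenerate direction
r = rng.normal(size=m)

U, s, Vt = np.linalg.svd(J, full_matrices=False)
keep = s > 1e-3 * s[0]                           # select dominant modes only
dx = Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])

print("modes kept:", keep.sum(), "of", n)
print("norm of update:", np.linalg.norm(dx))
```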

  10. On Solution of Total Least Squares Problems with Multiple Right-hand Sides

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, I.; Plešinger, Martin; Strakoš, Zdeněk

    2008-01-01

    Roč. 8, č. 1 (2008), s. 10815-10816 ISSN 1617-7061 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : total least squares problem * multiple right-hand sides * linear approximation problem Subject RIV: BA - General Mathematics

  11. A comparison of least squares linear regression and measurement error modeling of warm/cold multipole correlation in SSC prototype dipole magnets

    International Nuclear Information System (INIS)

    Pollock, D.; Kim, K.; Gunst, R.; Schucany, W.

    1993-05-01

    Linear estimation of cold magnetic field quality based on warm multipole measurements is being considered as a quality control method for SSC production magnet acceptance. To investigate prediction uncertainties associated with such an approach, axial-scan (Z-scan) magnetic measurements from SSC Prototype Collider Dipole Magnets (CDM's) have been studied. This paper presents a preliminary evaluation of the explanatory ability of warm measurement multipole variation on the prediction of cold magnet multipoles. Two linear estimation methods are presented: least-squares regression, which uses the assumption of fixed independent variable (x_i) observations, and the measurement error model, which includes measurement error in the x_i's. The influence of warm multipole measurement errors on predicted cold magnet multipole averages is considered. MSD QA is studying warm/cold correlation to answer several magnet quality control questions. How well do warm measurements predict cold (2kA) multipoles? Does sampling error significantly influence estimates of the linear coefficients (slope, intercept and residual standard error)? Is estimation error for the predicted cold magnet average small compared to typical variation along the Z-Axis? What fraction of the multipole RMS tolerance is accounted for by individual magnet prediction uncertainty?

  12. BER analysis of regularized least squares for BPSK recovery

    KAUST Repository

    Ben Atitallah, Ismail; Thrampoulidis, Christos; Kammoun, Abla; Al-Naffouri, Tareq Y.; Hassibi, Babak; Alouini, Mohamed-Slim

    2017-01-01

    This paper investigates the problem of recovering an n-dimensional BPSK signal x_0 ∈ {−1, 1}^n from an m-dimensional measurement vector y = Ax_0 + z, where A and z are assumed to be Gaussian with iid entries. We consider two variants of decoders based on the regularized least squares followed by hard-thresholding: the case where the convex relaxation is from {−1, 1}^n to ℝ^n and the box constrained case where the relaxation is to [−1, 1]^n. For both cases, we derive an exact expression of the bit error probability when n and m grow simultaneously large at a fixed ratio. For the box constrained case, we show that there exists a critical value of the SNR, above which the optimal regularizer is zero. On the other side, the regularization can further improve the performance of the box relaxation at low to moderate SNR regimes. We also prove that the optimal regularizer in the bit error rate sense for the unboxed case is nothing but the MMSE detector.

  13. BER analysis of regularized least squares for BPSK recovery

    KAUST Repository

    Ben Atitallah, Ismail

    2017-06-20

    This paper investigates the problem of recovering an n-dimensional BPSK signal x_0 ∈ {−1, 1}^n from an m-dimensional measurement vector y = Ax_0 + z, where A and z are assumed to be Gaussian with iid entries. We consider two variants of decoders based on the regularized least squares followed by hard-thresholding: the case where the convex relaxation is from {−1, 1}^n to ℝ^n and the box constrained case where the relaxation is to [−1, 1]^n. For both cases, we derive an exact expression of the bit error probability when n and m grow simultaneously large at a fixed ratio. For the box constrained case, we show that there exists a critical value of the SNR, above which the optimal regularizer is zero. On the other side, the regularization can further improve the performance of the box relaxation at low to moderate SNR regimes. We also prove that the optimal regularizer in the bit error rate sense for the unboxed case is nothing but the MMSE detector.
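
    A small simulation of the two decoders described above is sketched below: a ridge-regularized LS relaxation over ℝ^n and a box-constrained regularized LS over [−1, 1]^n, each followed by hard thresholding. The sizes, SNR and regularizer are illustrative, and the box-constrained problem is solved with a simple projected-gradient loop, which the paper analyses but does not prescribe.

```python
# Simulation sketch of the two BPSK decoders: (i) ridge-relaxed LS + sign(),
# (ii) box-constrained regularized LS (projected gradient) + sign().
# Sizes, SNR and lambda are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(13)
n, m, snr, lam = 128, 96, 5.0, 1.0
x0 = rng.choice([-1.0, 1.0], size=n)
A = rng.normal(size=(m, n)) / np.sqrt(m)
z = rng.normal(size=m) * np.sqrt(np.mean((A @ x0) ** 2) / snr)
y = A @ x0 + z

# (i) unconstrained ridge relaxation + hard thresholding
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
ber_ridge = np.mean(np.sign(x_ridge) != x0)

# (ii) box-constrained regularized LS via projected gradient + thresholding
x = np.zeros(n)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # 1 / Lipschitz constant
for _ in range(500):
    grad = A.T @ (A @ x - y) + lam * x
    x = np.clip(x - step * grad, -1.0, 1.0)
ber_box = np.mean(np.sign(x) != x0)

print(f"BER  ridge relaxation: {ber_ridge:.3f}   box relaxation: {ber_box:.3f}")
```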

  14. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2013-12-01

    Full Text Available The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model through remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by the covariance matrix descriptor encoding the moving information, then is classified into a normal or an abnormal frame. Experiments are conducted, on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset, to demonstrate the promising results of the proposed online LS-OC-SVM method.

  15. Quantitative analysis of Fe and Co in Co-substituted magnetite using XPS: The application of non-linear least squares fitting (NLLSF)

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hongmei, E-mail: hmliu@gig.ac.cn [CAS Key Laboratory of Mineralogy and Metallogeny/Guangdong Provincial Key Laboratory of Mineral Physics and Materials, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou, 510640 (China); Wei, Gaoling [Guangdong Key Laboratory of Agricultural Environment Pollution Integrated Control, Guangdong Institute of Eco-Environmental and Soil Sciences, Guangzhou, 510650 (China); Xu, Zhen [School of Materials Science and Engineering, Central South University, Changsha, 410012 (China); Liu, Peng; Li, Ying [CAS Key Laboratory of Mineralogy and Metallogeny/Guangdong Provincial Key Laboratory of Mineral Physics and Materials, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou, 510640 (China); University of Chinese Academy of Sciences, Beijing, 100049 (China)

    2016-12-15

    Highlights: • XPS and Auger peak overlapping complicates Co-substituted magnetite quantification. • Disturbance of Auger peaks was eliminated by non-linear least squares fitting. • Fitting greatly improved the accuracy of quantification for Co and Fe. • Catalytic activity of magnetite was enhanced with the increase of Co substitution. - Abstract: Quantitative analysis of Co and Fe using X-ray photoelectron spectroscopy (XPS) is important for the evaluation of the catalytic ability of Co-substituted magnetite. However, the overlap of XPS peaks and Auger peaks for Co and Fe complicates quantification. In this study, non-linear least squares fitting (NLLSF) was used to calculate the relative Co and Fe contents of a series of synthesized Co-substituted magnetite samples with different Co doping levels. NLLSF separated the XPS peaks of Co 2p and Fe 2p from the Auger peaks of Fe and Co, respectively. Compared with a control group without fitting, the accuracy of quantification of Co and Fe was greatly improved after elimination by NLLSF of the disturbance of the Auger peaks. A catalysis study confirmed that the catalytic activity of magnetite was enhanced with increasing Co substitution. This study confirms the effectiveness and accuracy of the NLLSF method in XPS quantitative calculation of Fe and Co coexisting in a material.
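
    The essence of NLLSF for overlapping contributions is fitting a sum of peak shapes to the measured spectrum; the toy sketch below fits two Gaussians on a linear background with scipy. Real XPS quantification uses measured reference line shapes and proper backgrounds (e.g. Shirley); the Gaussian model, energies and intensities here are invented.

```python
# Toy non-linear least-squares fit (NLLSF) of two overlapping peaks on a
# linear background, the basic operation used to separate photoelectron and
# Auger contributions.  Gaussian shapes and all numbers are invented; real
# XPS work uses reference line shapes and proper background models.
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(E, a1, mu1, s1, a2, mu2, s2, b0, b1):
    return (a1 * np.exp(-(E - mu1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(E - mu2) ** 2 / (2 * s2 ** 2))
            + b0 + b1 * E)

rng = np.random.default_rng(14)
E = np.linspace(700, 740, 400)                      # binding-energy grid, eV
true = (900, 711.0, 1.8, 400, 716.0, 2.2, 50, 0.1)  # two overlapping peaks
counts = two_peaks(E, *true) + rng.normal(scale=10, size=E.size)

p0 = (800, 710, 2, 300, 717, 2, 40, 0)              # rough starting values
popt, pcov = curve_fit(two_peaks, E, counts, p0=p0)
areas = popt[0] * popt[2], popt[3] * popt[5]        # peak areas up to a common factor
print("fitted peak positions:", round(popt[1], 2), round(popt[4], 2))
print("relative peak areas  :", round(areas[0] / areas[1], 2))
```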

  16. FPGA-based electrocardiography (ECG) signal analysis system using least-square linear phase finite impulse response (FIR) filter

    Directory of Open Access Journals (Sweden)

    Mohamed G. Egila

    2016-12-01

    Full Text Available This paper presents a proposed design for analyzing electrocardiography (ECG) signals. The methodology employs a high-pass least-square linear-phase Finite Impulse Response (FIR) filtering technique to filter out the baseline wander noise embedded in the input ECG signal to the system. The Discrete Wavelet Transform (DWT) was utilized as a feature extraction methodology to extract the reduced feature set from the input ECG signal. The design uses a back-propagation neural network classifier to classify the input ECG signal. The system is implemented on a Xilinx 3AN-XC3S700AN Field Programmable Gate Array (FPGA) board, and a system simulation has been done. The design is compared with some other designs, achieving a total accuracy of 97.8% and a reduction in resource utilization in the FPGA implementation.
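
    A least-squares linear-phase FIR high-pass of this kind can be designed with scipy.signal.firls; the sampling rate, band edges, weights and number of taps below are illustrative choices, not the coefficients used in the FPGA design.

```python
# Least-squares, linear-phase FIR high-pass design for baseline-wander
# removal with scipy.signal.firls.  All design parameters are illustrative
# assumptions; the large tap count reflects the very narrow transition band.
import numpy as np
from scipy.signal import firls, freqz

fs = 250.0                                  # sampling rate, Hz (assumed)
numtaps = 1501                              # odd length -> exactly linear phase
# Stop 0-0.3 Hz (baseline wander), pass 0.8 Hz up to Nyquist, weight the stopband
taps = firls(numtaps, bands=[0, 0.3, 0.8, fs / 2],
             desired=[0, 0, 1, 1], weight=[10, 1], fs=fs)

# Inspect the response at a drift frequency and inside the ECG band
w, h = freqz(taps, worN=[0.2, 10.0], fs=fs)
print("gain at 0.2 Hz :", round(abs(h[0]), 4))
print("gain at 10 Hz  :", round(abs(h[1]), 4))
```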

  17. The shape gradient of the least-squares objective functional in optimal shape design problems of radiative heat transfer

    International Nuclear Information System (INIS)

    Rukolaine, Sergey A.

    2010-01-01

    Optimal shape design problems of steady-state radiative heat transfer are considered. The optimal shape design problem (in the three-dimensional space) is formulated as an inverse one, i.e., in the form of an operator equation of the first kind with respect to a surface to be optimized. The operator equation is reduced to a minimization problem via a least-squares objective functional. The minimization problem has to be solved numerically. Gradient minimization methods need the gradient of a functional to be minimized. In this paper the shape gradient of the least-squares objective functional is derived with the help of the shape sensitivity analysis and adjoint problem method. In practice a surface to be optimized may be (or, most likely, is to be) given in a parametric form by a finite number of parameters. In this case the objective functional is, in fact, a function in a finite-dimensional space and the shape gradient becomes an ordinary gradient. The gradient of the objective functional, in the case that the surface to be optimized is given in a finite-parametric form, is derived from the shape gradient. A particular case, that a surface to be optimized is a 'two-dimensional' polyhedral one, is considered. The technique, developed in the paper, is applied to a synthetic problem of designing a 'two-dimensional' radiant enclosure.

  18. A primer on linear models

    CERN Document Server

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  19. Solution of Large Systems of Linear Equations in the Presence of Errors. A Constructive Criticism of the Least Squares Method

    Energy Technology Data Exchange (ETDEWEB)

    Nygaard, K

    1968-09-15

    From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken as to which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties that are due to conventional mathematical formulations (zero determinant, linear dependence) and that are not inherent in the physical problem as such. The method is therefore especially well suited to unfolding of spectra.

  20. Solution of Large Systems of Linear Equations in the Presence of Errors. A Constructive Criticism of the Least Squares Method

    International Nuclear Information System (INIS)

    Nygaard, K.

    1968-09-01

    From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken as to which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties that are due to conventional mathematical formulations (zero determinant, linear dependence) and that are not inherent in the physical problem as such. The method is therefore especially well suited to unfolding of spectra.

  1. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  2. Power system state estimation using an iteratively reweighted least squares method for sequential L1-regression

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2006-02-15

    This paper presents an implementation of the least absolute value (LAV) power system state estimator based on obtaining a sequence of solutions to the L1-regression problem using an iteratively reweighted least squares (IRLS-L1) method. The proposed implementation avoids reformulating the regression problem into standard linear programming (LP) form and consequently does not require the use of common methods of LP, such as those based on the simplex method or interior-point methods. It is shown that the IRLS-L1 method is equivalent to solving a sequence of linear weighted least squares (LS) problems. Thus, its implementation presents little additional effort since the sparse LS solver is common to existing LS state estimators. Studies on the termination criteria of the IRLS-L1 method have been carried out to determine a procedure for which the proposed estimator is more computationally efficient than a previously proposed non-linear iteratively reweighted least squares (IRLS) estimator. Indeed, it is revealed that the proposed method is a generalization of the previously reported IRLS estimator, but is based on more rigorous theory. (author)
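
    The core of IRLS for an L1 (least-absolute-value) fit is a loop of weighted least-squares solves with weights 1/max(|residual|, eps); a generic numpy sketch is given below on invented data with a few gross errors, which is the idea of the estimator rather than the power-system state-estimation implementation itself.

```python
# Generic IRLS sketch for L1 (least-absolute-value) regression: each pass
# solves a weighted LS problem with weights 1/max(|residual|, eps).  This is
# the idea only, not the power-system state-estimation code; data invented.
import numpy as np

rng = np.random.default_rng(15)
m, n = 100, 4
A = rng.normal(size=(m, n))
x_true = np.array([1.0, -0.5, 2.0, 0.3])
b = A @ x_true + 0.05 * rng.normal(size=m)
b[:5] += 5.0                                    # a few gross errors (bad data)

x = np.linalg.lstsq(A, b, rcond=None)[0]        # ordinary LS start
eps = 1e-6
for _ in range(30):
    w = 1.0 / np.maximum(np.abs(b - A @ x), eps)
    Aw = A * w[:, None]                         # row-weighted system
    x = np.linalg.solve(A.T @ Aw, Aw.T @ b)     # weighted normal equations

print("LS  start :", np.round(np.linalg.lstsq(A, b, rcond=None)[0], 3))
print("IRLS (L1) :", np.round(x, 3))
```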

  3. Iterative least-squares solvers for the Navier-Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Bochev, P. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    In the recent years finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context least-squares methods offer significant theoretical and practical advantages in the algorithmic design, which makes resulting methods suitable, among other things, for large-scale numerical simulations.

  4. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

    Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can also deal with their stochastic and deterministic elements using only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.

  5. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai

    2017-03-08

    We use elastic least-squares reverse time migration (LSRTM) to invert for the reflectivity images of P- and S-wave impedances. Elastic LSRTM solves the linearized elastic-wave equations for forward modeling and the adjoint equations for backpropagating the residual wavefield at each iteration. Numerical tests on synthetic data and field data reveal the advantages of elastic LSRTM over elastic reverse time migration (RTM) and acoustic LSRTM. For our examples, the elastic LSRTM images have better resolution and amplitude balancing, fewer artifacts, and less crosstalk compared with the elastic RTM images. The images are also better focused and have better reflector continuity for steeply dipping events compared to the acoustic LSRTM images. Similar to conventional least-squares migration, elastic LSRTM also requires an accurate estimation of the P- and S-wave migration velocity models. However, the problem remains that, when there are moderate errors in the velocity model and strong multiples, LSRTM will produce migration noise stronger than that seen in the RTM images.

  6. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai; Schuster, Gerard T.

    2017-01-01

    We use elastic least-squares reverse time migration (LSRTM) to invert for the reflectivity images of P- and S-wave impedances. Elastic LSRTM solves the linearized elastic-wave equations for forward modeling and the adjoint equations for backpropagating the residual wavefield at each iteration. Numerical tests on synthetic data and field data reveal the advantages of elastic LSRTM over elastic reverse time migration (RTM) and acoustic LSRTM. For our examples, the elastic LSRTM images have better resolution and amplitude balancing, fewer artifacts, and less crosstalk compared with the elastic RTM images. The images are also better focused and have better reflector continuity for steeply dipping events compared to the acoustic LSRTM images. Similar to conventional least-squares migration, elastic LSRTM also requires an accurate estimation of the P- and S-wave migration velocity models. However, the problem remains that, when there are moderate errors in the velocity model and strong multiples, LSRTM will produce migration noise stronger than that seen in the RTM images.

  7. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight matrix.

  8. Microprocessor-controlled system for automatic acquisition of potentiometric data and their non-linear least-squares fit in equilibrium studies.

    Science.gov (United States)

    Gampp, H; Maeder, M; Zuberbühler, A D; Kaden, T A

    1980-06-01

    A microprocessor-controlled potentiometric titration apparatus for equilibrium studies is described. The microprocessor controls the stepwise addition of reagent, monitors the pH until it becomes constant and stores the constant value. The data are recorded on magnetic tape by a cassette recorder with an RS232 input-output interface. A non-linear least-squares program based on Marquardt's modification of the Newton-Gauss method is discussed and its performance in the calculation of equilibrium constants is exemplified. An HP 9821 desk-top computer accepts the data from the magnetic tape recorder. In addition to a fully automatic fitting procedure, the program allows manual adjustment of the parameters. Three examples are discussed with regard to performance and reproducibility.
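
    The Marquardt-modified Newton-Gauss fit described above predates modern numerical libraries; a rough present-day analogue (not the HP 9821 program itself) is SciPy's Levenberg-Marquardt driver. The 1:1 binding model, parameter values and noise level below are purely hypothetical.

        import numpy as np
        from scipy.optimize import least_squares

        def model(params, c):
            K, s = params                        # association constant, scale factor
            return s * K * c / (1.0 + K * c)     # fractional saturation of a 1:1 complex

        def residuals(params, c, y):
            return model(params, c) - y

        rng = np.random.default_rng(2)
        c = np.linspace(0.01, 2.0, 25)                      # titrant concentration
        y = model([4.0, 1.0], c) + 0.01 * rng.normal(size=c.size)

        fit = least_squares(residuals, x0=[1.0, 0.5], args=(c, y), method="lm")
        print(fit.x)                                        # estimates close to [4.0, 1.0]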

  9. An Incremental Weighted Least Squares Approach to Surface Light Fields

    Science.gov (United States)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.

  10. Fast Dating Using Least-Squares Criteria and Algorithms.

    Science.gov (United States)

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that

  11. Strong source heat transfer simulations based on a Galerkin/Gradient-least-squares method

    International Nuclear Information System (INIS)

    Franca, L.P.; Carmo, E.G.D. do.

    1989-05-01

    Heat conduction problems with temperature-dependent strong sources are modeled by an equation with a Laplacian term, a linear term and a given source distribution term. When the linear-temperature-dependent source term is much larger than the Laplacian term, we have a singular perturbation problem. In this case, boundary layers are formed to satisfy the Dirichlet boundary conditions. Although this is an elliptic equation, the standard Galerkin method solution is contaminated by spurious oscillations in the neighborhood of the boundary layers. Herein we employ a Galerkin/Gradient-least-squares method which eliminates all pathological phenomena of the Galerkin method. The method is constructed by adding to the Galerkin method a mesh-dependent term obtained by the least-squares form of the gradient of the Euler-Lagrange equation. Error estimates and numerical simulations in one and multiple dimensions are given that attest to the good stability and accuracy properties of the method [pt

  12. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology

    International Nuclear Information System (INIS)

    Han, Jubong; Lee, K.B.; Lee, Jong-Man; Park, Tae Soon; Oh, J.S.; Oh, Pil-Jei

    2016-01-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs describe one's incomplete knowledge of correction factors, which are treated as nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in basically the same way as with the least-squares function used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. - Highlights: • A new method proposed to incorporate Type B uncertainty into least-squares method. • The method constructed from the likelihood function and PDFs of Type B uncertainty. • A case study performed to compare results from the new and the conventional method. • Fitted parameters are consistent but with larger uncertainties in the new method.
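
    One simple way to realize the extended-likelihood idea for a Gaussian Type B component is to treat the common correction as an extra parameter constrained by a pseudo-observation; minimizing over it is then equivalent to profiling it out. The sketch below is only that reading of the abstract, with hypothetical numbers, and is not the authors' exact procedure.

        import numpy as np

        # straight-line fit y = a + b*x with one shared offset "delta" (Type B,
        # standard uncertainty u_B); extended objective:
        #   sum_i (y_i - a - b*x_i - delta)^2 / s^2  +  delta^2 / u_B^2
        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 10.0, 20)
        s, u_B = 0.2, 0.5                       # Type A and Type B standard uncertainties
        y = 1.0 + 0.3 * x + 0.4 + s * rng.normal(size=x.size)   # 0.4 is the "true" offset

        A = np.column_stack([np.ones_like(x), x, np.ones_like(x)]) / s   # params: a, b, delta
        rhs = y / s
        A = np.vstack([A, [0.0, 0.0, 1.0 / u_B]])       # pseudo-observation delta = 0
        rhs = np.append(rhs, 0.0)

        theta, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        cov = np.linalg.inv(A.T @ A)                    # parameter covariance
        print(theta)                                    # estimates of a, b, delta
        print(np.sqrt(np.diag(cov)))                    # intercept uncertainty inflated by u_B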

  13. An improved conjugate gradient scheme to the solution of least squares SVM.

    Science.gov (United States)

    Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya

    2005-03-01

    The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of the LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparison with other existing algorithms.
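
    For context, the standard (unreduced) LS-SVM training problem is itself a single linear system in the bias and the multipliers; the letter's contribution is a cheaper reduced system solved by conjugate gradients, which is not reproduced here. The RBF kernel, hyperparameters and toy data below are hypothetical.

        import numpy as np

        def lssvm_train(X, y, gamma=10.0, sigma=1.0):
            """Train an LS-SVM classifier by solving one (n+1)x(n+1) linear system."""
            n = X.shape[0]
            sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
            K = np.exp(-sq / (2.0 * sigma ** 2))            # RBF kernel matrix
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = y
            A[1:, 0] = y
            A[1:, 1:] = np.outer(y, y) * K + np.eye(n) / gamma
            rhs = np.concatenate([[0.0], np.ones(n)])
            sol = np.linalg.solve(A, rhs)
            return sol[0], sol[1:]                          # bias b, multipliers alpha

        def lssvm_predict(X_train, y, b, alpha, X_test, sigma=1.0):
            sq = np.sum((X_test[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
            K = np.exp(-sq / (2.0 * sigma ** 2))
            return np.sign(K @ (alpha * y) + b)

        rng = np.random.default_rng(4)
        X = rng.normal(size=(40, 2))
        y = np.where(X[:, 0] + X[:, 1] > 0.0, 1.0, -1.0)
        b, alpha = lssvm_train(X, y)
        print(np.mean(lssvm_predict(X, y, b, alpha, X) == y))   # training accuracy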

  14. Source allocation by least-squares hydrocarbon fingerprint matching

    Energy Technology Data Exchange (ETDEWEB)

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.
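
    A loose, purely illustrative analogue of constrained least-squares allocation is a non-negative least-squares fit of a sample against candidate source fingerprints; the synthetic matrix below is hypothetical and is unrelated to the actual PWS data or the study's geologic and chemical constraints.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(5)
        S = np.abs(rng.normal(size=(50, 4)))          # 50 analytes x 4 candidate sources
        true_x = np.array([0.6, 0.0, 0.3, 0.1])       # true mixing proportions
        sample = S @ true_x + 0.01 * rng.normal(size=50)

        x, _ = nnls(S, sample)                        # non-negative least-squares allocation
        print(x / x.sum())                            # recovered source proportions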

  15. Consistency of the least weighted squares under heteroscedasticity

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2011-01-01

    Roč. 2011, č. 47 (2011), s. 179-206 ISSN 0023-5954 Grant - others:GA UK(CZ) GA402/09/055 Institutional research plan: CEZ:AV0Z10750506 Keywords : Regression * Consistency * The least weighted squares * Heteroscedasticity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/visek-consistency of the least weighted squares under heteroscedasticity.pdf

  16. Classification of Hyperspectral Images Using Kernel Fully Constrained Least Squares

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-11-01

    Full Text Available As a widely used classifier, sparse representation classification (SRC) has shown good performance for hyperspectral image classification. Recent works have highlighted that it is the collaborative representation mechanism under SRC that makes SRC a highly effective technique for classification purposes. If the dimensionality and the discrimination capacity of a test pixel are high, other norms (e.g., the ℓ2-norm) can be used to regularize the coding coefficients instead of the sparsity-inducing ℓ1-norm. In this paper, we show that in the kernel space the nonnegative constraint can also play the same role, and thus suggest the investigation of kernel fully constrained least squares (KFCLS) for hyperspectral image classification. Furthermore, in order to improve the classification performance of KFCLS by incorporating spatial-spectral information, we investigate two kinds of spatial-spectral methods using two regularization strategies: (1) the coefficient-level regularization strategy, and (2) the class-level regularization strategy. Experimental results conducted on four real hyperspectral images demonstrate the effectiveness of the proposed KFCLS, and show which way to incorporate spatial-spectral information efficiently in the regularization framework.

  17. Two-dimensional heat flow analysis applied to heat sterilization of ponderosa pine and Douglas-fir square timbers

    Science.gov (United States)

    William T. Simpson

    2004-01-01

    Equations for a two-dimensional finite difference heat flow analysis were developed and applied to ponderosa pine and Douglas-fir square timbers to calculate the time required to heat the center of the squares to target temperature. The squares were solid piled, which made their surfaces inaccessible to the heating air, and thus surface temperatures failed to attain...

  18. Spin-orbit coupling, electron transport and pairing instabilities in two-dimensional square structures

    Energy Technology Data Exchange (ETDEWEB)

    Kocharian, Armen N. [Department of Physics, California State University, Los Angeles, CA 90032 (United States); Fernando, Gayanath W.; Fang, Kun [Department of Physics, University of Connecticut, Storrs, Connecticut 06269 (United States); Palandage, Kalum [Department of Physics, Trinity College, Hartford, Connecticut 06106 (United States); Balatsky, Alexander V. [AlbaNova University Center Nordita, SE-106 91 Stockholm (Sweden)

    2016-05-15

    Rashba spin-orbit effects and electron correlations in the two-dimensional cylindrical lattices of square geometries are assessed using mesoscopic two-, three- and four-leg ladder structures. Here the electron transport properties are systematically calculated by including the spin-orbit coupling in tight binding and Hubbard models threaded by a magnetic flux. These results highlight important aspects of possible symmetry breaking mechanisms in square ladder geometries driven by the combined effect of a magnetic gauge field, spin-orbit interaction and temperature. The observed persistent current, spin and charge polarizations in the presence of spin-orbit coupling are driven by separation of electron and hole charges and opposite spins in real-space. The modeled spin-flip processes on the pairing mechanism induced by the spin-orbit coupling in assembled nanostructures (as arrays of clusters) engineered in various two-dimensional multi-leg structures provide an ideal playground for understanding spatial charge and spin density inhomogeneities leading to electron pairing and spontaneous phase separation instabilities in unconventional superconductors. Such studies also fall under the scope of current challenging problems in superconductivity and magnetism, topological insulators and spin dependent transport associated with numerous interfaces and heterostructures.

  19. Spin-orbit coupling, electron transport and pairing instabilities in two-dimensional square structures

    Directory of Open Access Journals (Sweden)

    Armen N. Kocharian

    2016-05-01

    Full Text Available Rashba spin-orbit effects and electron correlations in the two-dimensional cylindrical lattices of square geometries are assessed using mesoscopic two-, three- and four-leg ladder structures. Here the electron transport properties are systematically calculated by including the spin-orbit coupling in tight binding and Hubbard models threaded by a magnetic flux. These results highlight important aspects of possible symmetry breaking mechanisms in square ladder geometries driven by the combined effect of a magnetic gauge field, spin-orbit interaction and temperature. The observed persistent current, spin and charge polarizations in the presence of spin-orbit coupling are driven by separation of electron and hole charges and opposite spins in real-space. The modeled spin-flip processes on the pairing mechanism induced by the spin-orbit coupling in assembled nanostructures (as arrays of clusters) engineered in various two-dimensional multi-leg structures provide an ideal playground for understanding spatial charge and spin density inhomogeneities leading to electron pairing and spontaneous phase separation instabilities in unconventional superconductors. Such studies also fall under the scope of current challenging problems in superconductivity and magnetism, topological insulators and spin dependent transport associated with numerous interfaces and heterostructures.

  20. note: The least square nucleolus is a general nucleolus

    OpenAIRE

    Elisenda Molina; Juan Tejada

    2000-01-01

    This short note proves that the least square nucleolus (Ruiz et al. (1996)) and the lexicographical solution (Sakawa and Nishizaki (1994)) select the same imputation in each game with nonempty imputation set. As a consequence the least square nucleolus is a general nucleolus (Maschler et al. (1992)).

  1. Prediction of toxicity of nitrobenzenes using ab initio and least squares support vector machines

    International Nuclear Information System (INIS)

    Niazi, Ali; Jameh-Bozorghi, Saeed; Nori-Shargh, Davood

    2008-01-01

    A quantitative structure-property relationship (QSPR) study is suggested for the prediction of the toxicity (IGC₅₀) of nitrobenzenes. Ab initio theory was used to calculate some quantum chemical descriptors including electrostatic potentials and local charges at each atom, HOMO and LUMO energies, etc. Modeling of the IGC₅₀ of nitrobenzenes as a function of molecular structure was established by means of least squares support vector machines (LS-SVM). This model was applied for the prediction of the toxicity (IGC₅₀) of nitrobenzenes that were not included in the modeling procedure. The resulting model showed high prediction ability, with a root mean square error of prediction of 0.0049 for LS-SVM. The results show that the introduction of LS-SVM with quantum chemical descriptors drastically enhances predictive ability in QSAR studies, and is superior to multiple linear regression and partial least squares

  2. Conjugate gradient and cross-correlation based least-square reverse time migration and its application

    Science.gov (United States)

    Sun, Xiao-Dong; Ge, Zhong-Hui; Li, Zhen-Chun

    2017-09-01

    Although conventional reverse time migration can be perfectly applied to structural imaging, it lacks the capability of enabling detailed delineation of a lithological reservoir due to irregular illumination. To obtain reliable reflectivity of the subsurface, it is necessary to solve the imaging problem using inversion. Least-squares reverse time migration (LSRTM) (also known as linearized reflectivity inversion) aims to obtain relatively high-resolution, amplitude-preserving imaging by including the inverse of the Hessian matrix. In practice, the conjugate gradient algorithm is proven to be an efficient iterative method for enabling the use of LSRTM. The velocity gradient can be derived from a cross-correlation between observed data and simulated data, making LSRTM independent of the wavelet signature and thus more robust in practice. Tests on synthetic and marine data show that LSRTM has good potential for use in reservoir description and four-dimensional (4D) seismic imaging compared to traditional RTM and Fourier finite difference (FFD) migration. This paper investigates the first-order approximation of LSRTM, which is also known as the linear Born approximation. However, for more complex geological structures a higher-order approximation should be considered to improve imaging quality.
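
    Detached from any wave-equation operator, the conjugate-gradient step the abstract refers to can be illustrated on a generic linear least-squares problem with CGLS, which applies CG to the normal equations without forming them explicitly. The sketch below is only that generic illustration; the operator, data and iteration counts are hypothetical.

        import numpy as np

        def cgls(A, b, n_iter=50, tol=1e-10):
            """CG applied to the normal equations A^T A x = A^T b (CGLS)."""
            x = np.zeros(A.shape[1])
            r = b - A @ x
            s = A.T @ r
            p = s.copy()
            gamma = s @ s
            for _ in range(n_iter):
                q = A @ p
                alpha = gamma / (q @ q)
                x += alpha * p
                r -= alpha * q
                s = A.T @ r
                gamma_new = s @ s
                if np.sqrt(gamma_new) < tol:
                    break
                p = s + (gamma_new / gamma) * p
                gamma = gamma_new
            return x

        rng = np.random.default_rng(6)
        A = rng.normal(size=(100, 10))
        x_true = rng.normal(size=10)
        b = A @ x_true + 0.01 * rng.normal(size=100)
        print(np.linalg.norm(cgls(A, b) - x_true))     # small recovery error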

  3. Application of least-squares method to decay heat evaluation

    International Nuclear Information System (INIS)

    Schmittroth, F.; Schenter, R.E.

    1976-01-01

    Generalized least-squares methods are applied to decay-heat experiments and summation calculations to arrive at evaluated values and uncertainties for the fission-product decay-heat from the thermal fission of ²³⁵U. Emphasis is placed on a proper treatment of both statistical and correlated uncertainties in the least-squares method

  4. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure-component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure-component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%
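
    The unweighted, textbook two-step CLS procedure behind this abstract can be sketched as follows; the paper's additions (weighting, nonzero intercepts, pathlength handling) are not reproduced, and the synthetic spectra are hypothetical.

        import numpy as np

        rng = np.random.default_rng(7)
        pure = np.abs(rng.normal(size=(2, 200)))              # two pure-component spectra

        # calibration: mixture spectra of standards with known concentrations
        C_cal = rng.uniform(0.1, 1.0, size=(15, 2))
        A_cal = C_cal @ pure + 0.01 * rng.normal(size=(15, 200))
        K_hat = np.linalg.lstsq(C_cal, A_cal, rcond=None)[0]  # estimated pure spectra

        # prediction: concentrations of an "unknown" mixture spectrum
        c_true = np.array([0.3, 0.7])
        a_unk = c_true @ pure + 0.01 * rng.normal(size=200)
        c_hat = np.linalg.lstsq(K_hat.T, a_unk, rcond=None)[0]
        print(c_hat)                                          # close to [0.3, 0.7]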

  5. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, Dongliang; Zhan, Ge; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated

  6. And still, a new beginning: the Galerkin least-squares gradient method

    International Nuclear Information System (INIS)

    Franca, L.P.; Carmo, E.G.D. do

    1988-08-01

    A finite element method is proposed to solve a scalar singular diffusion problem. The method is constructed by adding to the standard Galerkin method a mesh-dependent term obtained from the least-squares form of the gradient of the Euler-Lagrange equation. For the one-dimensional homogeneous problem the method is designed to yield a nodally exact solution. An error estimate shows that the method converges optimally for any value of the singular parameter. Numerical results demonstrate the good stability and accuracy properties of the method. (author) [pt

  7. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    Science.gov (United States)

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

    An analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on the assumption that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol and ethanol-ethyl lactate solutions show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.

  8. Least-squares dual characterization for ROI assessment in emission tomography

    International Nuclear Information System (INIS)

    Ben Bouallègue, F; Mariano-Goulart, D; Crouzet, J F; Dubois, A; Buvat, I

    2013-01-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff. (paper)

  9. Least-squares dual characterization for ROI assessment in emission tomography

    Science.gov (United States)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  10. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    KAUST Repository

    Migliorati, Giovanni; Nobile, Fabio; Tempone, Raul

    2015-01-01

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability

  11. Generalized least squares and empirical Bayes estimation in regional partial duration series index-flood modeling

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan

    1997-01-01

    A regional estimation procedure that combines the index-flood concept with an empirical Bayes method for inferring regional information is introduced. The model is based on the partial duration series approach with generalized Pareto (GP) distributed exceedances. The prior information of the model parameters is inferred from regional data using generalized least squares (GLS) regression. Two different Bayesian T-year event estimators are introduced: a linear estimator that requires only some moments of the prior distributions to be specified and a parametric estimator that is based on specified...

  12. Time-domain least-squares migration using the Gaussian beam summation method

    Science.gov (United States)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-04-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  13. LSL: a logarithmic least-squares adjustment method

    International Nuclear Information System (INIS)

    Stallmann, F.W.

    1982-01-01

    To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding has been constructed some time ago and tentatively named LSL

  14. Sparse least-squares reverse time migration using seislets

    KAUST Repository

    Dutta, Gaurav

    2015-08-19

    We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.

  15. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
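
    The quantization rule itself is compact enough to sketch: if a new input lies within a distance eps_q of an existing center, its LMS update is merged into that center's coefficient instead of growing the dictionary. The kernel width, step size, quantization radius and toy signal below are hypothetical, and none of the paper's convergence analysis is reproduced.

        import numpy as np

        def qklms(X, d, eta=0.5, sigma=0.2, eps_q=0.1):
            """Minimal quantized kernel LMS with an RBF kernel."""
            centers, coeffs = [X[0]], [eta * d[0]]
            preds = np.zeros(len(d))
            for n in range(1, len(d)):
                x = X[n]
                C = np.array(centers)
                k = np.exp(-np.sum((C - x) ** 2, axis=1) / (2.0 * sigma ** 2))
                preds[n] = np.dot(coeffs, k)
                e = d[n] - preds[n]
                dist = np.linalg.norm(C - x, axis=1)
                j = np.argmin(dist)
                if dist[j] <= eps_q:
                    coeffs[j] += eta * e        # merge update into the nearest center
                else:
                    centers.append(x)
                    coeffs.append(eta * e)      # admit a new center
            return preds, centers

        # toy static nonlinearity: d = sin(3x) + noise
        rng = np.random.default_rng(8)
        X = rng.uniform(-1.0, 1.0, size=(500, 1))
        d = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=500)
        preds, centers = qklms(X, d)
        print(len(centers), np.mean((d[250:] - preds[250:]) ** 2))   # dictionary size, late MSE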

  16. First-order system least squares and the energetic variational approach for two-phase flow

    Science.gov (United States)

    Adler, J. H.; Brannick, J.; Liu, C.; Manteuffel, T.; Zikatanov, L.

    2011-07-01

    This paper develops a first-order system least-squares (FOSLS) formulation for equations of two-phase flow. The main goal is to show that this discretization, along with numerical techniques such as nested iteration, algebraic multigrid, and adaptive local refinement, can be used to solve these types of complex fluid flow problems. In addition, from an energetic variational approach, it can be shown that an important quantity to preserve in a given simulation is the energy law. We discuss the energy law and inherent structure for two-phase flow using the Allen-Cahn interface model and indicate how it is related to other complex fluid models, such as magnetohydrodynamics. Finally, we show that, using the FOSLS framework, one can still satisfy the appropriate energy law globally while using well-known numerical techniques.

  17. Group-wise partial least square regression

    NARCIS (Netherlands)

    Camacho, José; Saccenti, Edoardo

    2018-01-01

    This paper introduces the group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique where the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are

  18. Prediction of clinical depression scores and detection of changes in whole-brain using resting-state functional MRI data with partial least squares regression.

    Directory of Open Access Journals (Sweden)

    Kosuke Yoshida

    Full Text Available In diagnostic applications of statistical machine learning methods to brain imaging data, common problems include data high-dimensionality and co-linearity, which often cause over-fitting and instability. To overcome these problems, we applied partial least squares (PLS) regression to resting-state functional magnetic resonance imaging (rs-fMRI) data, creating a low-dimensional representation that relates symptoms to brain activity and that predicts clinical measures. Our experimental results, based upon data from clinically depressed patients and healthy controls, demonstrated that PLS and its kernel variants provided significantly better prediction of clinical measures than ordinary linear regression. Subsequent classification using predicted clinical scores distinguished depressed patients from healthy controls with 80% accuracy. Moreover, loading vectors for latent variables enabled us to identify brain regions relevant to depression, including the default mode network, the right superior frontal gyrus, and the superior motor area.
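
    As a generic stand-in for the kind of analysis described (not the authors' rs-fMRI pipeline or its kernel variants), scikit-learn's PLSRegression can be cross-validated on synthetic high-dimensional, collinear data; the data shapes and latent structure below are hypothetical.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        # 60 "subjects" x 500 correlated features driven by 3 latent factors
        rng = np.random.default_rng(9)
        latent = rng.normal(size=(60, 3))
        X = latent @ rng.normal(size=(3, 500)) + 0.5 * rng.normal(size=(60, 500))
        y = latent @ np.array([2.0, -1.0, 0.5]) + 0.3 * rng.normal(size=60)

        pls = PLSRegression(n_components=3)
        y_hat = cross_val_predict(pls, X, y, cv=5)
        print(np.corrcoef(y, np.ravel(y_hat))[0, 1])    # out-of-sample correlation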

  19. Least-squares methods for identifying biochemical regulatory networks from noisy measurements

    Directory of Open Access Journals (Sweden)

    Heslop-Harrison Pat

    2007-01-01

    Full Text Available Abstract Background We consider the problem of identifying the dynamic interactions in biochemical networks from noisy experimental data. Typically, approaches for solving this problem make use of an estimation algorithm such as the well-known linear Least-Squares (LS) estimation technique. We demonstrate that when time-series measurements are corrupted by white noise and/or drift noise, more accurate and reliable identification of network interactions can be achieved by employing an estimation algorithm known as Constrained Total Least Squares (CTLS). The Total Least Squares (TLS) technique is a generalised least squares method to solve an overdetermined set of equations whose coefficients are noisy. The CTLS is a natural extension of TLS to the case where the noise components of the coefficients are correlated, as is usually the case with time-series measurements of concentrations and expression profiles in gene networks. Results The superior performance of the CTLS method in identifying network interactions is demonstrated on three examples: a genetic network containing four genes, a network describing p53 activity and mdm2 messenger RNA interactions, and a recently proposed kinetic model for interleukin (IL)-6 and (IL)-12b messenger RNA expression as a function of ATF3 and NF-κB promoter binding. For the first example, the CTLS significantly reduces the errors in the estimation of the Jacobian for the gene network. For the second, the CTLS reduces the errors from the measurements that are corrupted by white noise and the effect of neglected kinetics. For the third, it allows the correct identification, from noisy data, of the negative regulation of (IL)-6 and (IL)-12b by ATF3. Conclusion The significant improvements in performance demonstrated by the CTLS method under the wide range of conditions tested here, including different levels and types of measurement noise and different numbers of data points, suggests that its application will enable

  20. Estimation of the Seemingly Unrelated Regression (SUR) Model with the Generalized Least Square (GLS) Method

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2015-04-01

    Full Text Available Regression analysis is a statistical tool that is used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain a good estimation in regression analysis is the ordinary least squares (OLS) method. The least squares method is used to estimate the parameters of one or more regressions, but relationships among the errors in the responses of the different equations are not allowed. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model using the GLS method to world gasoline demand data. The author finds that SUR using GLS is better than OLS because SUR produces smaller errors than OLS.
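
    A bare-bones GLS sketch for a stacked two-equation SUR system is given below, assuming the cross-equation error covariance is known; in practice (and presumably in this record) it is first estimated from OLS residuals, a feasible-GLS step omitted here. All data are synthetic and hypothetical.

        import numpy as np

        def gls(X, y, Omega):
            """GLS via Cholesky whitening: minimizes (y - X b)' Omega^{-1} (y - X b)."""
            L = np.linalg.cholesky(Omega)
            Xw = np.linalg.solve(L, X)                  # whitened design
            yw = np.linalg.solve(L, y)                  # whitened response
            return np.linalg.lstsq(Xw, yw, rcond=None)[0]

        rng = np.random.default_rng(10)
        n = 50
        X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
        X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
        X = np.block([[X1, np.zeros((n, 2))], [np.zeros((n, 2)), X2]])   # stacked SUR design
        Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])      # cross-equation error covariance
        Omega = np.kron(Sigma, np.eye(n))
        E = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)
        y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + np.concatenate([E[:, 0], E[:, 1]])
        print(gls(X, y, Omega))                         # close to [1, 2, -1, 0.5]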

  1. Estimation of the Seemingly Unrelated Regression (SUR) Model with the Generalized Least Square (GLS) Method

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2014-06-01

    Full Text Available Regression analysis is a statistical tool that is used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain a good estimation in regression analysis is the ordinary least squares (OLS) method. The least squares method is used to estimate the parameters of one or more regressions, but relationships among the errors in the responses of the different equations are not allowed. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model using the GLS method to world gasoline demand data. The author finds that SUR using GLS is better than OLS because SUR produces smaller errors than OLS.

  2. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Youngsoo [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.; Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carlberg, Kevin Thomas [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.

    2017-09-01

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.

  3. Internal displacement and strain measurement using digital volume correlation: a least-squares framework

    International Nuclear Information System (INIS)

    Pan, Bing; Wu, Dafang; Wang, Zhaoyang

    2012-01-01

    As a novel tool for quantitative 3D internal deformation measurement throughout the interior of a material or tissue, digital volume correlation (DVC) has increasingly gained attention and application in the fields of experimental mechanics, material research and biomedical engineering. However, the practical implementation of DVC involves important challenges such as implementation complexity, calculation accuracy and computational efficiency. In this paper, a least-squares framework is presented for 3D internal displacement and strain field measurement using DVC. The proposed DVC combines a practical linear-intensity-change model with an easy-to-implement iterative least-squares (ILS) algorithm to retrieve 3D internal displacement vector field with sub-voxel accuracy. Because the linear-intensity-change model is capable of accounting for both the possible intensity changes and the relative geometric transform of the target subvolume, the presented DVC thus provides the highest sub-voxel registration accuracy and widest applicability. Furthermore, as the ILS algorithm uses only first-order spatial derivatives of the deformed volumetric image, the developed DVC thus significantly reduces computational complexity. To further extract 3D strain distributions from the 3D discrete displacement vectors obtained by the ILS algorithm, the presented DVC employs a pointwise least-squares algorithm to estimate the strain components for each measurement point. Computer-simulated volume images with controlled displacements are employed to investigate the performance of the proposed DVC method in terms of mean bias error and standard deviation error. Results reveal that the present technique is capable of providing accurate measurements in an easy-to-implement manner, and can be applied to practical 3D internal displacement and strain calculation. (paper)

  4. Optimistic semi-supervised least squares classification

    DEFF Research Database (Denmark)

    Krijthe, Jesse H.; Loog, Marco

    2017-01-01

    The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples. In this work we study a simple self-learning approach to semi-supervised learning applied to the least squares classifier. We show that a soft-label and a hard-label variant ...

  5. Numerical prediction of turbulent heat transfer augmentation in an annular fuel channel with two-dimensional square ribs

    International Nuclear Information System (INIS)

    Takase, Kazuyuki

    1996-01-01

    The square-ribbed fuel rod for high temperature gas-cooled reactors was developed in order to enhance the turbulent heat transfer in comparison with the standard fuel rod. To evaluate the heat transfer performance of the square-ribbed fuel rod, the turbulent heat transfer coefficients in an annular fuel channel with repeated two-dimensional square ribs were analyzed numerically on a fully developed incompressible flow using the k - ε turbulence model and the two-dimensional axisymmetrical coordinate system. Numerical analyses were carried out for a range of Reynolds numbers from 3000 to 20000 and ratios of square-rib pitch to height of 10, 20 and 40, respectively. The predicted values of the heat transfer coefficients agreed within an error of 10% for the square-rib pitch to height ratio of 10, 20% for 20 and 25% for 40, respectively, with the heat transfer empirical correlations obtained from the experimental data. It was concluded by the present study that the effect of the heat transfer augmentation by square ribs could be predicted sufficiently by the present numerical simulations and also a part of its mechanism could be explained by means of the change in the turbulence kinematic energy distribution along the flow direction. (author)

  6. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
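
    A single-level sketch of the weighted least-squares projection can be written in a few lines: draw samples from the arcsine (Chebyshev) density, attach the matching weights, and solve a weighted LS problem in a Legendre basis. The target function, degree and sample size are hypothetical, and the record's multilevel and adaptive machinery is not reproduced.

        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(11)
        f = lambda x: np.exp(x) * np.sin(4.0 * x)
        deg, n_samples = 10, 200

        x = np.cos(np.pi * rng.uniform(size=n_samples))   # arcsine-distributed sample points
        w = np.pi * np.sqrt(1.0 - x ** 2) / 2.0           # uniform density / sampling density

        V = legendre.legvander(x, deg)                    # Vandermonde in the Legendre basis
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)

        x_test = np.linspace(-1.0, 1.0, 1000)
        print(np.max(np.abs(legendre.legval(x_test, coef) - f(x_test))))   # test-grid error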

  7. Two-dimensional strain gradient damage modeling: a variational approach

    Science.gov (United States)

    Placidi, Luca; Misra, Anil; Barchiesi, Emilio

    2018-06-01

    In this paper, we formulate a linear elastic second gradient isotropic two-dimensional continuum model accounting for irreversible damage. The failure is defined as the condition in which the damage parameter reaches 1, at least in one point of the domain. The quasi-static approximation is done, i.e., the kinetic energy is assumed to be negligible. In order to deal with dissipation, a damage dissipation term is considered in the deformation energy functional. The key goal of this paper is to apply a non-standard variational procedure to exploit the damage irreversibility argument. As a result, we derive not only the equilibrium equations but, notably, also the Karush-Kuhn-Tucker conditions. Finally, numerical simulations for exemplary problems are discussed as some constitutive parameters are varying, with the inclusion of a mesh-independence evidence. Element-free Galerkin method and moving least square shape functions have been employed.

  8. Application of Least-Squares Spectral Element Methods to Polynomial Chaos

    NARCIS (Netherlands)

    Vos, P.E.J.; Gerritsma, M.I.

    2006-01-01

    This paper describes the application of the least-squares spectral element method to polynomial chaos for solving stochastic partial differential equations. The method is described in detail and a comparison is presented between the least-squares projection and the conventional Galerkin projection.

  9. Two-dimensional differential transform method for solving linear and non-linear Schroedinger equations

    International Nuclear Information System (INIS)

    Ravi Kanth, A.S.V.; Aruna, K.

    2009-01-01

    In this paper, we propose a reliable algorithm to develop exact and approximate solutions for the linear and nonlinear Schroedinger equations. The approach rests mainly on the two-dimensional differential transform method, which is one of the approximate methods. The method can easily be applied to many linear and nonlinear problems and is capable of reducing the size of the computational work. Exact solutions can also be achieved from the known forms of the series solutions. Several illustrative examples are given to demonstrate the effectiveness of the present method.

  10. Numerical prediction of augmented turbulent heat transfer in an annular fuel channel with repeated two-dimensional square ribs

    International Nuclear Information System (INIS)

    Takase, K.

    1996-01-01

    The square-ribbed fuel rod for high temperature gas-cooled reactors was designed and developed so as to enhance the turbulent heat transfer in comparison with the previous standard fuel rod. The turbulent heat transfer characteristics in an annular fuel channel with repeated two-dimensional square ribs were analysed numerically on a fully developed incompressible flow using the k-ε turbulence model and the two-dimensional axisymmetrical coordinate system. Numerical analyses were carried out under the conditions of Reynolds numbers from 3000 to 20000 and ratios of square-rib pitch to height of 10, 20 and 40 respectively. The predictions of the heat transfer coefficients agreed well within an error of 10% for the square-rib pitch to height ratio of 10, 20% for 20 and 25% for 40 respectively, with the heat transfer empirical correlations obtained from the experimental data due to the simulated square-ribbed fuel rods. Therefore it was found that the effect of heat transfer augmentation due to the square ribs could be predicted by the present numerical simulations and the mechanism could be explained by the change in the turbulence kinematic energy distribution along the flow direction. (orig.)

  11. First-order system least-squares for second-order elliptic problems with discontinuous coefficients: Further results

    Energy Technology Data Exchange (ETDEWEB)

    Bloechle, B.; Manteuffel, T.; McCormick, S.; Starke, G.

    1996-12-31

    Many physical phenomena are modeled as scalar second-order elliptic boundary value problems with discontinuous coefficients. The first-order system least-squares (FOSLS) methodology is an alternative to standard mixed finite element methods for such problems. The occurrence of singularities at interface corners and cross-points requires that care be taken when implementing the least-squares finite element method in the FOSLS context. We introduce two methods of handling the challenges resulting from singularities. The first method is based on a weighted least-squares functional and results in non-conforming finite elements. The second method is based on the use of singular basis functions and results in conforming finite elements. We also share numerical results comparing the two approaches.

  12. Multi-source least-squares migration of marine data

    KAUST Repository

    Wang, Xin

    2012-11-04

    Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its IO cost is significantly decreased.

  13. Fitting of two and three variate polynomials from experimental data through the least squares method

    International Nuclear Information System (INIS)

    Sanchez-Miro, J.J.; Sanz-Martin, J.C.

    1994-01-01

    Obtaining polynomial fittings from observational data in two and three dimensions is an interesting and practical task. Such an arduous problem suggests the development of an automatic code. The main novelty we provide lies in the generalization of the classical least squares method in three FORTRAN 77 programs usable in any sampling problem. Furthermore, we introduce the orthogonal 2D-Legendre functions in the fitting process. These FORTRAN 77 programs are equipped with options to calculate the standard indicators of approximation quality, generalized to two and three dimensions (nonlinear correlation factor, confidence intervals, quadratic mean error, and so on). The aim of this paper is to remedy the absence of fitting algorithms for more than one independent variable in mathematical libraries.
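
    As a rough illustration of the kind of fit these programs perform, the sketch below (Python/NumPy rather than the authors' FORTRAN 77 code; the data and degrees are invented for the example) builds a two-dimensional Legendre design matrix and solves the fit in the least-squares sense, reporting the quadratic mean error.

```python
# Hedged sketch: 2D least-squares polynomial fit on a Legendre basis.
# The sampled function, noise level and degrees are illustrative assumptions.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
z = 1.0 + 2.0 * x - 0.5 * x * y + 0.1 * rng.normal(size=x.size)   # noisy samples

deg = (2, 2)                                  # polynomial degree in x and in y
A = L.legvander2d(x, y, deg)                  # design matrix of 2D Legendre terms
coef, *_ = np.linalg.lstsq(A, z, rcond=None)  # classical least-squares solution

residual = z - A @ coef
print("quadratic mean error:", np.sqrt(np.mean(residual**2)))
```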

  14. Least-squares methods involving the H{sup -1} inner product

    Energy Technology Data Exchange (ETDEWEB)

    Pasciak, J.

    1996-12-31

    Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H{sup -1} norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H{sup -1} inner product.

  15. Influence of linear-energy-dependent density of states on two-band superconductors: Three-square-well model approach

    International Nuclear Information System (INIS)

    Ogbuu, O.A.; Abah, O.C.; Asomba, G.C.; Okoye, C.M.I.

    2011-01-01

    We derive the transition temperature and the isotope effect exponent of a two-band superconductor, employing the Bogoliubov-Valatin formalism and assuming a three-square-well potential. The effect of a linear-energy-dependent electronic DOS in superconductors is considered. Within the framework of the Bogoliubov-Valatin two-band formalism, we obtain expressions for the transition temperature and the isotope effect exponent using a linear-energy-dependent electronic density of states and a three-square-well potential model. Our results show that the approach can account for a wide range of values of the transition temperature and the isotope effect exponent. The relevance of the present calculations to MgB2 is analyzed.

  16. RCS Leak Rate Calculation with High Order Least Squares Method

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Kang, Young Kyu; Kim, Yang Ki

    2010-01-01

    As part of the action items for the application of Leak Before Break (LBB), the RCS Leak Rate Calculation Program has been upgraded in Kori units 3 and 4. Real-time monitoring by operators requires periodic calculation together with a corresponding noise-reduction scheme. Similar work had already been carried out in Korea, where real-time RCS leak rate calculation programs were upgraded and used in UCN units 3 and 4 and YGN units 1 and 2. In those programs, a linear regression method was used to reduce the noise in the signals. Linear regression is a powerful noise-reduction method, but the system is not static: alternative flow paths produce mixed trend patterns in the input signal values. Under these conditions, the signal trend and the linear-regression average do not follow the same pattern. In this study, a higher-order least-squares method is used to follow the trend of the signal, and the order of the calculation is rearranged. The resulting calculation produces a reasonable trend, and the procedure is physically consistent.
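
    The idea of following a non-static trend with a higher-order fit can be illustrated with a short sketch (hypothetical signal, not the plant software): a first-order (linear regression) fit is compared with a third-order least-squares fit on a signal whose trend changes over the window.

```python
# Hedged sketch: linear-regression average vs. higher-order least-squares trend.
# The signal shape and noise level are invented for illustration only.
import numpy as np

t = np.linspace(0.0, 1.0, 200)
trend = 5.0 + 3.0 * t - 8.0 * t**2 + 4.0 * t**3           # mixed trend pattern
noisy = trend + np.random.default_rng(1).normal(0.0, 0.2, t.size)

linear_fit = np.polyval(np.polyfit(t, noisy, 1), t)       # order-1 least squares
cubic_fit = np.polyval(np.polyfit(t, noisy, 3), t)        # higher-order least squares

print("max deviation, linear:", np.max(np.abs(linear_fit - trend)))
print("max deviation, cubic :", np.max(np.abs(cubic_fit - trend)))
```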

  17. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity

  18. Least squares reverse time migration of controlled order multiples

    Science.gov (United States)

    Liu, Y.

    2016-12-01

    Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different-order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders, minimizing the difference between Born-modeling-predicted multiples and the specific-order multiples separated from the observed data in order to attenuate the crosstalk. This method is denoted as least-squares reverse time migration of controlled-order multiples (LSRTM-CM). Our numerical examples demonstrate that LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Nature Science Foundation of China (Grant Nos. 41430321 and 41374138).
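
    One way to write the kind of controlled-order objective described above (the notation is ours and is only a sketch of the idea, not necessarily the authors' exact formulation) is

    $$ J_i(m_i) \;=\; \tfrac{1}{2}\,\bigl\lVert L_i\, m_i - d_i \bigr\rVert_2^2 , $$

    where $d_i$ denotes the $i$-th order multiples separated from the observed data and $L_i$ is a Born modeling operator that predicts $i$-th order multiples; minimizing each $J_i$ separately, rather than fitting all multiples at once, is what suppresses the crosstalk between different orders.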

  19. Discrete least squares polynomial approximation with random evaluations - application to PDEs with Random parameters

    KAUST Repository

    Nobile, Fabio

    2015-01-07

    We consider a general problem F(u, y) = 0 where u is the unknown solution, possibly Hilbert space valued, and y a set of uncertain parameters. We specifically address the situation in which the parameter-to-solution map u(y) is smooth, but y could be very high (or even infinite) dimensional. In particular, we are interested in cases in which F is a differential operator, u a Hilbert space valued function and y a distributed, space and/or time varying, random field. We aim at reconstructing the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial expansions, for the output of computer experiments. In the case of PDEs with random parameters, the metamodel is then used to approximate statistics of the output quantity. We discuss the stability of discrete least squares on random points and show convergence estimates both in expectation and in probability. We also present possible strategies to select, either a priori or by adaptive algorithms, sequences of approximating polynomial spaces that allow one to reduce, and in some cases break, the curse of dimensionality.
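
    A one-dimensional toy version of this reconstruction (our own example, far simpler than the paper's PDE setting) fits a smooth map from noise-free random evaluations by discrete least squares on a polynomial space:

```python
# Hedged sketch: discrete least squares on random points for a smooth map u(y).
# The target map, sample size and polynomial degree are illustrative assumptions.
import numpy as np
from numpy.polynomial import legendre as L

u = lambda y: np.exp(-y) * np.sin(3.0 * y)         # stand-in parameter-to-solution map
rng = np.random.default_rng(2)
y_samples = rng.uniform(-1.0, 1.0, 100)            # random evaluation points
obs = u(y_samples)                                 # noise-free observations

degree = 8
V = L.legvander(y_samples, degree)                 # polynomial space (Legendre basis)
coef, *_ = np.linalg.lstsq(V, obs, rcond=None)     # discrete least-squares projection

y_test = np.linspace(-1.0, 1.0, 400)
print("max error:", np.max(np.abs(L.legval(y_test, coef) - u(y_test))))
```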

  20. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    Science.gov (United States)

    Olivares, A.; Górriz, J. M.; Ramírez, J.; Olivares, G.

    2011-02-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed.
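
    For readers unfamiliar with the adaptive-filtering ingredient, the sketch below (toy signals, not the authors' sensor-fusion pipeline) shows a plain LMS filter tracking a slowly accumulating bias in an integrated rate-sensor angle, using the accelerometer-derived angle as the reference.

```python
# Hedged sketch: LMS-based drift compensation on synthetic gyro/accelerometer angles.
# Signal models, noise levels and the step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
true_angle = np.sin(np.linspace(0.0, 10.0, n))            # reference motion
gyro_angle = true_angle + 3e-4 * np.arange(n)             # accumulating integration drift
accel_angle = true_angle + rng.normal(0.0, 0.05, n)       # noisy but drift-free

mu = 0.01          # LMS step size
w = 0.0            # adaptive estimate of the gyro drift
fused = np.empty(n)
for k in range(n):
    err = accel_angle[k] - (gyro_angle[k] - w)   # innovation against the reference
    w -= mu * err                                # LMS weight update
    fused[k] = gyro_angle[k] - w

print("MSE gyro :", np.mean((gyro_angle - true_angle) ** 2))
print("MSE fused:", np.mean((fused - true_angle) ** 2))
```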

  1. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    International Nuclear Information System (INIS)

    Olivares, A; Olivares, G; Górriz, J M; Ramírez, J

    2011-01-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed

  2. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

    Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
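
    As a point of reference, the direct (non-iterative) route to such a problem simply scales the rows by the square roots of the weights; the sketch below (invented data) shows this route and hints at why very ill-conditioned weights make the computation delicate.

```python
# Hedged sketch: weighted least squares min ||W^(1/2)(Ax - b)|| by row scaling.
# The matrix, right-hand side and badly scaled weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)
w = np.logspace(0, 12, 50)                 # weights spanning 12 orders of magnitude

sw = np.sqrt(w)
x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
print(x)
```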

  3. Regression model of support vector machines for least squares prediction of crystallinity of cracking catalysts by infrared spectroscopy

    International Nuclear Information System (INIS)

    Comesanna Garcia, Yumirka; Dago Morales, Angel; Talavera Bustamante, Isneri

    2010-01-01

    The recent introduction of the least squares support vector machines method for regression purposes in the field of Chemometrics has provided several advantages to linear and nonlinear multivariate calibration methods. The objective of this paper is to propose the use of the least squares support vector machine as an alternative multivariate calibration method for the prediction of the percentage of crystallinity of fluidized catalytic cracking catalysts by means of Fourier transform mid-infrared spectroscopy. A linear kernel was used in the calculations of the regression model, and the optimization of its gamma parameter was carried out using the leave-one-out cross-validation procedure. The root mean square error of prediction was used to measure the performance of the model. The accuracy of the results obtained with the application of the method is in accordance with the uncertainty of the X-ray powder diffraction reference method. To assess the generalization capability of the developed method, a comparison study was carried out between the results achieved with the new model and those obtained with linear calibration methods. The developed method can be easily implemented in refinery laboratories.
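
    The core of an LS-SVM regression with a linear kernel is a single linear system; the sketch below (synthetic data, not the catalyst spectra) solves that system for a given `gamma`, the parameter the paper tunes by leave-one-out cross-validation.

```python
# Hedged sketch: least-squares SVM regression with a linear kernel.
# Training data, dimensions and the gamma value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 6))                     # stand-in predictor matrix
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0, -1.0]) + rng.normal(0.0, 0.1, 40)

gamma = 10.0                                     # regularization parameter
K = X @ X.T                                      # linear kernel matrix
n = X.shape[0]
top = np.concatenate(([0.0], np.ones(n)))
body = np.hstack((np.ones((n, 1)), K + np.eye(n) / gamma))
system = np.vstack((top, body))                  # [[0, 1^T], [1, K + I/gamma]]
sol = np.linalg.solve(system, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

predict = lambda Xnew: (Xnew @ X.T) @ alpha + b
print("training RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```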

  4. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  5. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    40 CFR 761.308 (Protection of Environment, 2010-07-01), under § 761.79(b)(3): Sample selection by random number generation on any two-dimensional square grid. For the area created in accordance with paragraph (a) of this section, select two random numbers: one each for ...

  6. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  7. Weighted least squares phase unwrapping based on the wavelet transform

    Science.gov (United States)

    Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia

    2007-01-01

    The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. It usually leads to a large, sparse system of linear equations, which is commonly solved with the Gauss-Seidel relaxation iterative method. That approach, however, is impractical because of its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it requires an additional weight restriction operator that is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into coarse and fine resolution levels, and an equivalent equation system with a better convergence condition is obtained. Fast convergence on the separate coarse resolution levels speeds up the overall convergence rate. Simulated experiments show that the proposed method converges faster and provides better results than the multigrid method.
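
    In common notation (ours, not necessarily the paper's), the functional being minimized weights the squared differences between the unwrapped-phase gradients and the wrapped-phase gradients,

    $$ J(\phi) \;=\; \sum_{i,j} w^{x}_{i,j}\left(\phi_{i+1,j}-\phi_{i,j}-\Delta^{x}_{i,j}\right)^{2} \;+\; \sum_{i,j} w^{y}_{i,j}\left(\phi_{i,j+1}-\phi_{i,j}-\Delta^{y}_{i,j}\right)^{2}, $$

    where $\Delta^{x}$ and $\Delta^{y}$ are the wrapped phase differences and $w^{x}$, $w^{y}$ the quality weights; setting the derivatives with respect to each $\phi_{i,j}$ to zero is what produces the large sparse linear system discussed above.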

  8. Tensor hypercontraction. II. Least-squares renormalization

    Science.gov (United States)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.

  9. Linear least squares approach for evaluating crack tip fracture parameters using isochromatic and isoclinic data from digital photoelasticity

    Science.gov (United States)

    Patil, Prataprao; Vyasarayani, C. P.; Ramji, M.

    2017-06-01

    In this work, the digital photoelasticity technique is used to estimate crack tip fracture parameters for different crack configurations. Conventionally, only the isochromatic data surrounding the crack tip are used for SIF estimation, but with the advent of digital photoelasticity, the pixel-wise availability of both isoclinic and isochromatic data can be exploited for SIF estimation in a novel way. A linear least-squares approach is proposed to estimate the mixed-mode crack tip fracture parameters by solving the multi-parameter stress field equation, and the stress intensity factor (SIF) is extracted from the estimated fracture parameters. The isochromatic and isoclinic data around the crack tip are obtained using the ten-step phase shifting technique, and the adaptive quality guided phase unwrapping algorithm (AQGPU) is used to obtain the unwrapped data. The mixed-mode fracture parameters, especially the SIF, are estimated for specimen configurations such as single edge notch (SEN), center crack and a straight crack ahead of an inclusion using the proposed algorithm. The experimental SIF values estimated with the proposed method are compared with analytical/finite element analysis (FEA) results and are found to be in good agreement.

  10. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    Science.gov (United States)

    Khawaja, Taimoor Saleem

    A high-belief, low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel anomaly detector is suggested based on LS-SVMs. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior

  11. Analysis of a plane stress wave by the moving least squares method

    Directory of Open Access Journals (Sweden)

    Wojciech Dornowski

    2014-08-01

    A meshless method based on the moving least squares approximation is applied to stress wave propagation analysis. Two kinds of node meshes, a randomly generated mesh and a regular mesh, are used. The nearest-neighbour problem is handled through a triangulation that satisfies minimum edge-length conditions; it is found that this choice of neighbours significantly improves the solution accuracy. The reflection of stress waves from the free edge is modelled using fictitious nodes outside the plate. The comparison with finite difference results also demonstrates the accuracy of the proposed approach. Keywords: civil engineering, meshless method, moving least squares method, elastic waves
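
    The essence of a moving least squares approximation is a locally weighted fit evaluated at each point; a one-dimensional sketch (our own toy setup, not the paper's elastodynamic solver) is given below.

```python
# Hedged sketch: 1D moving least squares with a Gaussian weight and a linear basis.
# Node distribution, weight width and the sampled function are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
nodes = np.sort(rng.uniform(0.0, 1.0, 60))        # randomly generated node mesh
values = np.sin(2.0 * np.pi * nodes)

def mls(x_eval, h=0.08):
    out = np.empty_like(x_eval)
    for i, x in enumerate(x_eval):
        w = np.exp(-((nodes - x) / h) ** 2)                    # local weights
        P = np.column_stack((np.ones_like(nodes), nodes - x))  # linear basis at x
        A = P.T @ (w[:, None] * P)                             # weighted normal matrix
        rhs = P.T @ (w * values)
        out[i] = np.linalg.solve(A, rhs)[0]                    # local fit evaluated at x
    return out

x = np.linspace(0.05, 0.95, 200)
print("max error:", np.max(np.abs(mls(x) - np.sin(2.0 * np.pi * x))))
```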

  12. Two-body relativistic scattering with an O(1,1)-symmetric square-well potential

    International Nuclear Information System (INIS)

    Arshansky, R.; Horwitz, L.P.

    1984-01-01

    Scattering theory in the framework of a relativistic manifestly covariant quantum mechanics is applied to the relativistic analog of the nonrelativistic one-dimensional square-well potential, a two-body O(1,1)-symmetric hyperbolic square well in one space and one time dimension. The unitary S matrix is explicitly obtained. For well sizes large compared to the de Broglie wavelength of the reduced motion system, simple formulas are obtained for the associated sequence of resonances. This sequence has equally spaced levels and constant widths for higher resonances, and linearly increasing widths for lower-lying levels

  13. Window least squares method applied to statistical noise smoothing of positron annihilation data

    International Nuclear Information System (INIS)

    Adam, G.; Adam, S.; Barbiellini, B.; Hoffmann, L.; Manuel, A.A.; Peter, M.

    1993-06-01

    The paper deals with the off-line processing of experimental data obtained with the two-dimensional angular correlation of electron-positron annihilation radiation (2D-ACAR) technique on high-temperature superconductors. A piecewise continuous window least squares (WLS) method devoted to the statistical noise smoothing of 2D-ACAR data, under close control of the crystal reciprocal-lattice periodicity, is derived. A reliability evaluation of the constant local weight WLS smoothing formula (CW-WLSF) shows that consistent processing of 2D-ACAR data by CW-WLSF is possible. CW-WLSF analysis of 2D-ACAR data collected on untwinned YBa2Cu3O7-δ single crystals yields a significantly improved signature of the Fermi surface ridge at second Umklapp processes and resolves, for the first time, the ridge signature at third Umklapp processes. (author). 24 refs, 9 figs

  14. Nonnegative least-squares image deblurring: improved gradient projection approaches

    Science.gov (United States)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
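
    The projected Landweber iteration mentioned above is just a gradient step followed by projection onto the nonnegative orthant; the sketch below (a small 1D deconvolution, not an imaging code, with an invented blur kernel) shows the un-accelerated version, where early stopping plays the role of regularization.

```python
# Hedged sketch: projected Landweber for nonnegative least squares,
#   x <- max(0, x + tau * A^T (b - A x)).
# The blur kernel, test signal and iteration count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 100
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()
A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same") for i in range(n)]).T
x_true = np.zeros(n); x_true[30:35] = 1.0; x_true[60] = 2.0
b = A @ x_true + rng.normal(0.0, 0.01, n)

tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step length within the convergence bound
x = np.zeros(n)
for _ in range(500):                          # early stopping acts as regularization
    x = np.maximum(0.0, x + tau * A.T @ (b - A @ x))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```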

  15. Least Squares Methods for Equidistant Tree Reconstruction

    OpenAIRE

    Fahey, Conor; Hosten, Serkan; Krieger, Nathan; Timpe, Leslie

    2008-01-01

    UPGMA is a heuristic method identifying the least squares equidistant phylogenetic tree given empirical distance data among $n$ taxa. We study this classic algorithm using the geometry of the space of all equidistant trees with $n$ leaves, also known as the Bergman complex of the graphical matroid for the complete graph $K_n$. We show that UPGMA performs an orthogonal projection of the data onto a maximal cell of the Bergman complex. We also show that the equidistant tree with the least (Eucl...

  16. Optimally weighted least-squares steganalysis

    Science.gov (United States)

    Ker, Andrew D.

    2007-02-01

    Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.

  17. Multilevel solvers of first-order system least-squares for Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Chen-Yao G. [National Chung Cheng Univ., Chia-Yi (Taiwan, Province of China)

    1996-12-31

    Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined to be the sum of the L{sup 2}-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

  18. Spectral line shapes in linear absorption and two-dimensional spectroscopy with skewed frequency distributions

    NARCIS (Netherlands)

    Farag, Marwa H.; Hoenders, Bernhard J.; Knoester, Jasper; Jansen, Thomas L. C.

    2017-01-01

    The effect of Gaussian dynamics on the line shapes in linear absorption and two-dimensional correlation spectroscopy is well understood as the second-order cumulant expansion provides exact spectra. Gaussian solvent dynamics can be well analyzed using slope line analysis of two-dimensional

  19. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai; Schuster, Gerard T.

    2016-01-01

    Elastic least-squares reverse time migration (LSRTM) is used to invert synthetic particle-velocity data and crosswell pressure field data. The migration images consist of both the P- and Svelocity perturbation images. Numerical tests on synthetic and field data illustrate the advantages of elastic LSRTM over elastic reverse time migration (RTM). In addition, elastic LSRTM images are better focused and have better reflector continuity than do the acoustic LSRTM images.

  20. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai

    2016-09-06

    Elastic least-squares reverse time migration (LSRTM) is used to invert synthetic particle-velocity data and crosswell pressure field data. The migration images consist of both the P- and Svelocity perturbation images. Numerical tests on synthetic and field data illustrate the advantages of elastic LSRTM over elastic reverse time migration (RTM). In addition, elastic LSRTM images are better focused and have better reflector continuity than do the acoustic LSRTM images.

  1. 8th International Conference on Partial Least Squares and Related Methods

    CERN Document Server

    Vinzi, Vincenzo; Russolillo, Giorgio; Saporta, Gilbert; Trinchera, Laura

    2016-01-01

    This volume presents state-of-the-art theories, new developments, and important applications of Partial Least Squares (PLS) methods. The text begins with the invited communications of current leaders in the field, who cover the history of PLS, an overview of methodological issues, and recent advances in regression and multi-block approaches. The rest of the volume comprises selected, reviewed contributions from the 8th International Conference on Partial Least Squares and Related Methods held in Paris, France, on 26-28 May 2014. They are organized in four coherent sections: 1) new developments in genomics and brain imaging, 2) new and alternative methods for multi-table and path analysis, 3) advances in partial least squares regression (PLSR), and 4) partial least squares path modeling (PLS-PM) breakthroughs and applications. PLS methods are very versatile methods that are now used in areas as diverse as engineering, life science, sociology, psychology, brain imaging, genomics, and business, among both academics ...

  2. 3D plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2014-08-05

    A three-dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitation of current 3D marine acquisition geometries, a cylindrical-wave encoding is adopted for the narrow-azimuth streamer data. To account for the mispositioning of reflectors due to errors in the velocity model, a regularized LSM is devised so that each plane-wave or cylindrical-wave gather gives rise to an individual migration image, and a regularization term is included to encourage similarities between the migration images of similar encoding schemes. Both synthetic and field results show that: 1) plane-wave or cylindrical-wave encoding LSM can achieve both computational and I/O savings compared to shot-domain LSM; however, plane-wave LSM is still about 5 times more expensive than plane-wave migration; 2) the regularized LSM is more robust than LSM with one reflectivity model common to all the plane-wave or cylindrical-wave gathers.

  3. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood, at a fraction of the computational effort

  4. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations

    KAUST Repository

    Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of nonlinear least squares in which penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function, which is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of the functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.
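
    Stripped of the penalized-spline machinery, fitting an ODE parameter by nonlinear least squares can be sketched as below (a toy exponential-decay model, not the paper's HIV model; SciPy's solver and optimizer stand in for the authors' implementation).

```python
# Hedged sketch: nonlinear least-squares estimation of one ODE parameter.
# The decay model, noise level and starting guess are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 5.0, 30)
true_k = 0.8
rng = np.random.default_rng(8)
y_obs = np.exp(-true_k * t_obs) + rng.normal(0.0, 0.02, t_obs.size)

def residuals(theta):
    k = theta[0]
    sol = solve_ivp(lambda t, y: -k * y, (0.0, 5.0), [1.0], t_eval=t_obs)
    return sol.y[0] - y_obs

fit = least_squares(residuals, x0=[0.3])
print("estimated k:", fit.x[0])     # should land close to true_k
```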

  5. A least-squares minimization approach for model parameters estimate by using a new magnetic anomaly formula

    Science.gov (United States)

    Abo-Ezz, E. R.; Essa, K. S.

    2016-04-01

    A new linear least-squares approach is proposed to interpret magnetic anomalies of buried structures by using a new magnetic anomaly formula. The approach depends on solving different sets of algebraic linear equations in order to invert the depth (z), amplitude coefficient (K), and magnetization angle (θ) of buried structures from magnetic data. The utility and validity of the proposed approach have been demonstrated on various reliable synthetic data sets with and without noise. In addition, the method has been applied to field data sets from the USA and India. The best-fit anomaly has been delineated by estimating the root-mean-square (rms) error. The adequacy of the approach is judged by comparing the obtained results with other available geological or geophysical information.

  6. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    Science.gov (United States)

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  7. On oscillation and nonoscillation of two-dimensional linear differential systems

    Czech Academy of Sciences Publication Activity Database

    Lomtatidze, A.; Šremr, Jiří

    2013-01-01

    Roč. 20, č. 3 (2013), s. 573-600 ISSN 1072-947X Institutional support: RVO:67985840 Keywords : two-dimensional system of linear ODE * oscillation * nonoscillation Subject RIV: BA - General Mathematics Impact factor: 0.340, year: 2013 http://www.degruyter.com/view/j/gmj.2013.20.issue-3/gmj-2013-0025/gmj-2013-0025.xml?format=INT

  9. On the use of a penalized least squares method to process kinematic full-field measurements

    International Nuclear Information System (INIS)

    Moulart, Raphaël; Rotinat, René

    2014-01-01

    This work is aimed at exploring the performance of an alternative procedure to smooth and differentiate full-field displacement measurements. After recalling the strategies currently used by the experimental mechanics community, a short overview of the available smoothing algorithms is drawn up, and the requirements that such an algorithm has to fulfil to be applicable to processing kinematic measurements are listed. A comparative study of the chosen algorithm, the 2D penalized least squares method, and two other commonly implemented strategies is performed. The results obtained by penalized least squares are comparable in quality to those produced by the two other algorithms, while the penalized least squares method appears to be the fastest and the most flexible. Unlike the other two methods considered, penalized least squares makes it possible to automatically choose the parameter governing the amount of smoothing to apply. Unfortunately, it appears that this automation is not suitable for the proposed application since it does not lead to optimal strain maps. Finally, it is possible with this technique to perform the differentiation to obtain strain maps before smoothing them (whereas smoothing is normally applied to displacement maps before differentiation), which can lead in some cases to a more effective reconstruction of the strain fields. (paper)
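
    The one-dimensional analogue of such a penalized least-squares smoother (often called a Whittaker smoother; the data and the penalty weight below are invented) makes the role of the smoothing parameter explicit.

```python
# Hedged sketch: penalized least squares with a second-difference penalty,
#   minimize ||u_s - u||^2 + lam * ||D2 u_s||^2  =>  (I + lam D2^T D2) u_s = u.
# The noisy "displacement" profile and lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(9)
x = np.linspace(0.0, 1.0, 300)
u = 0.02 * np.sin(4.0 * np.pi * x) + rng.normal(0.0, 0.002, x.size)

n = u.size
D2 = np.diff(np.eye(n), n=2, axis=0)          # second-order difference operator
lam = 50.0                                    # smoothing parameter
u_smooth = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, u)

strain = np.gradient(u_smooth, x)             # differentiate the smoothed field
print(strain[:5])
```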

  10. Multi-source least-squares migration of marine data

    KAUST Repository

    Wang, Xin; Schuster, Gerard T.

    2012-01-01

    Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes

  11. Weighted least-squares criteria for electrical impedance tomography

    International Nuclear Information System (INIS)

    Kallman, J.S.; Berryman, J.G.

    1992-01-01

    Methods are developed for design of electrical impedance tomographic reconstruction algorithms with specified properties. Assuming a starting model with constant conductivity or some other specified background distribution, an algorithm with the following properties is found: (1) the optimum constant for the starting model is determined automatically; (2) the weighted least-squares error between the predicted and measured power dissipation data is as small as possible; (3) the variance of the reconstructed conductivity from the starting model is minimized; (4) potential distributions with the largest volume integral of gradient squared have the least influence on the reconstructed conductivity, and therefore distributions most likely to be corrupted by contact impedance effects are deemphasized; (5) cells that dissipate the most power during the current injection tests tend to deviate least from the background value. The resulting algorithm maps the reconstruction problem into a vector space where the contribution to the inversion from the background conductivity remains invariant, while the optimum contributions in orthogonal directions are found. For a starting model with nonconstant conductivity, the reconstruction algorithm has analogous properties

  12. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    Science.gov (United States)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  13. Laterally structured ripple and square phases with one and two dimensional thickness modulations in a model bilayer system.

    Science.gov (United States)

    Debnath, Ananya; Thakkar, Foram M; Maiti, Prabal K; Kumaran, V; Ayappa, K G

    2014-10-14

    Molecular dynamics simulations of bilayers in a surfactant/co-surfactant/water system with explicit solvent molecules show formation of topologically distinct gel phases depending upon the bilayer composition. At low temperatures, the bilayers transform from the tilted gel phase, Lβ', to the one dimensional (1D) rippled, Pβ' phase as the surfactant concentration is increased. More interestingly, we observe a two dimensional (2D) square phase at higher surfactant concentration which, upon heating, transforms to the gel Lβ' phase. The thickness modulations in the 1D rippled and square phases are asymmetric in two surfactant leaflets and the bilayer thickness varies by a factor of ∼2 between maximum and minimum. The 1D ripple consists of a thinner interdigitated region of smaller extent alternating with a thicker non-interdigitated region. The 2D ripple phase is made up of two superimposed square lattices of maximum and minimum thicknesses with molecules of high tilt forming a square lattice translated from the lattice formed with the thickness minima. Using Voronoi diagrams we analyze the intricate interplay between the area-per-head-group, height modulations and chain tilt for the different ripple symmetries. Our simulations indicate that composition plays an important role in controlling the formation of low temperature gel phase symmetries and rippling accommodates the increased area-per-head-group of the surfactant molecules.

  14. Decision-Directed Recursive Least Squares MIMO Channels Tracking

    Directory of Open Access Journals (Sweden)

    Karami Ebrahim

    2006-01-01

    A new approach for joint data estimation and channel tracking for multiple-input multiple-output (MIMO) channels is proposed based on the decision-directed recursive least squares (DD-RLS) algorithm. The RLS algorithm is commonly used for equalization, and its application to channel estimation is a novel idea. In this paper, after defining the weighted least squares cost function, it is minimized and the RLS MIMO channel estimation algorithm is derived. The proposed algorithm, combined with the decision-directed algorithm (DDA), is then extended to blind-mode operation. In terms of computational complexity as a function of the number of transmitter and receiver antennas, the proposed algorithm is very efficient. Through various simulations, the mean square error (MSE) of the tracking of the proposed algorithm for different joint detection algorithms is compared with the Kalman filtering approach, one of the most well-known channel tracking algorithms. It is shown that the performance of the proposed algorithm is very close to that of the Kalman estimator, and that in blind-mode operation it provides better performance with much lower complexity, without requiring knowledge of the channel model.

  15. Least squares analysis of fission neutron standard fields

    International Nuclear Information System (INIS)

    Griffin, P.J.; Williams, J.G.

    1997-01-01

    A least squares analysis of fission neutron standard fields has been performed using the latest dosimetry cross sections. Discrepant nuclear data are identified, and adjusted spectra for the 252Cf spontaneous fission and 235U thermal fission fields are presented.

  16. Driving performance of a two-dimensional homopolar linear DC motor

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.; Yamaguchi, M.; Kano, Y. [Tokyo University of Agriculture and Technology, Tokyo (Japan)

    1998-05-01

    This paper presents a novel two-dimensional homopolar linear DC motor (LDM) which can realize two-dimensional (2-D) motion. For position control purposes, two kinds of position detecting methods are proposed. The position in one direction is detected by means of a capacitive sensor, which makes the output of the sensor partially immune to variation of the gap between electrodes. The position in the other direction is obtained by exploiting the position-dependent property of the driving coil inductance, instead of using an independent sensor. Position control is implemented on the motor and the 2-D tracking performance is analyzed. Experiments show that the motor demonstrates satisfactory driving performance, with the 2-D tracking error remaining within 5.5% when the angular frequency of the reference signal is 3.14 rad/s. 7 refs., 17 figs., 2 tabs.

  17. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-01-01

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.

  18. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Manteuffel, T.A [Univ. of Colorado, Boulder, CO (United States); Ressel, K.J. [Interdisciplinary Project Center for Supercomputing, Zurich (Switzerland); Starkes, G. [Universtaet Karlsruhe (Germany)

    1996-12-31

    The least-squares finite element framework for the neutron transport equation introduced previously is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and a P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.

  19. On the generalization of linear least mean squares estimation to quantum systems with non-commutative outputs

    Energy Technology Data Exchange (ETDEWEB)

    Amini, Nina H. [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); CNRS, Laboratoire des Signaux et Systemes (L2S) CentraleSupelec, Gif-sur-Yvette (France); Miao, Zibo; Pan, Yu; James, Matthew R. [Australian National University, ARC Centre for Quantum Computation and Communication Technology, Research School of Engineering, Canberra, ACT (Australia); Mabuchi, Hideo [Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States)

    2015-12-15

    The purpose of this paper is to study the problem of generalizing the Belavkin-Kalman filter to the case where the classical measurement signal is replaced by a fully quantum non-commutative output signal. We formulate a least mean squares estimation problem that involves a non-commutative system as the filter processing the non-commutative output signal. We solve this estimation problem within the framework of non-commutative probability. Also, we find the necessary and sufficient conditions which make these non-commutative estimators physically realizable. These conditions are restrictive in practice. (orig.)

  20. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, which means it is minimally sensitive to any perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the parameters that are allowed to perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on the parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other nonlinear. The methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.
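
    The regularization idea in the third approach can be illustrated in a few lines (invented ill-conditioned problem; this is generic Tikhonov-style regularization, not the paper's full robust-optimization framework).

```python
# Hedged sketch: ordinary vs. Tikhonov-regularized least squares on an
# ill-conditioned design matrix; mu limits the sensitivity of the solution.
# The Vandermonde design and mu value are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(10)
t = np.linspace(0.0, 1.0, 40)
A = np.vander(t, 12, increasing=True)            # ill-conditioned design
x_true = rng.normal(size=12)
b = A @ x_true + rng.normal(0.0, 1e-3, t.size)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)               # plain least squares
mu = 1e-6                                                  # regularization weight
x_reg = np.linalg.solve(A.T @ A + mu * np.eye(12), A.T @ b)

print("norm of LS solution :", np.linalg.norm(x_ls))
print("norm of regularized :", np.linalg.norm(x_reg))
```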

  1. Least squares methodology applied to LWR-PV damage dosimetry, experience and expectations

    International Nuclear Information System (INIS)

    Wagschal, J.J.; Broadhead, B.L.; Maerker, R.E.

    1979-01-01

    The development of an advanced methodology for Light Water Reactor (LWR) Pressure Vessel (PV) damage dosimetry applications is the subject of an ongoing EPRI-sponsored research project at ORNL. This methodology includes a generalized least squares approach to the combination of data. The data include measured foil activations, evaluated cross sections and calculated fluxes. The uncertainties associated with the data, as well as with the calculational methods, are an essential component of this methodology. Activation measurements in two NBS benchmark neutron fields (252Cf and ISNF) and in a prototypic reactor field (Oak Ridge Pool Critical Assembly, PCA) are being analyzed using a generalized least squares method. The sensitivity of the results to the representation of the uncertainties (covariances) was carefully checked. Cross-element covariances were found to be of utmost importance.
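
    The standard generalized least-squares combination that such a methodology builds on can be written (our notation, a sketch of the usual estimator rather than the report's own equations) as

    $$ \hat{x} \;=\; x_0 + C_0 G^{\mathsf T}\!\left(G C_0 G^{\mathsf T} + V\right)^{-1}\!\left(m - G x_0\right), $$

    where $x_0$ and $C_0$ are the prior parameters (fluxes, cross sections) and their covariance, $m$ and $V$ the measured activations and their covariance, and $G$ the sensitivity matrix; the corresponding posterior covariance is $C = C_0 - C_0 G^{\mathsf T}(G C_0 G^{\mathsf T}+V)^{-1} G C_0$.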

  2. Comparison of some nonlinear smoothing methods

    International Nuclear Information System (INIS)

    Bell, P.R.; Dillon, R.S.

    1977-01-01

    Due to the poor quality of many nuclear medicine images, computer-driven smoothing procedures are frequently employed to enhance the diagnostic utility of these images. While linear methods were first tried, it was discovered that nonlinear techniques produced superior smoothing with little detail suppression. We have compared four methods: Gaussian smoothing (linear), two-dimensional least-squares smoothing (linear), two-dimensional least-squares bounding (nonlinear), and two-dimensional median smoothing (nonlinear). The two dimensional least-squares procedures have yielded the most satisfactorily enhanced images, with the median smoothers providing quite good images, even in the presence of widely aberrant points
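
    A compact way to see the contrast (synthetic image, not nuclear-medicine data; a zeroth-order local least-squares fit reduces to a moving average and stands in here for the two-dimensional least-squares smoother) is sketched below.

```python
# Hedged sketch: 2D "least-squares" (moving-average) smoothing vs. 2D median
# smoothing on a noisy image containing one widely aberrant point.
# The test image, noise level and window size are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

rng = np.random.default_rng(11)
img = np.add.outer(np.sin(np.linspace(0.0, np.pi, 64)),
                   np.cos(np.linspace(0.0, np.pi, 64)))
noisy = img + rng.normal(0.0, 0.1, img.shape)
noisy[32, 32] = 25.0                               # widely aberrant point

ls_smooth = uniform_filter(noisy, size=3)          # 3x3 constant least-squares fit
median_smooth = median_filter(noisy, size=3)       # 3x3 median (nonlinear)

print("max error, least-squares:", np.abs(ls_smooth - img).max())
print("max error, median      :", np.abs(median_smooth - img).max())
```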

  3. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2012-02-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed, incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces.

  4. Validating the Galerkin least-squares finite element methods in predicting mixing flows in stirred tank reactors

    International Nuclear Information System (INIS)

    Johnson, K.; Bittorf, K.J.

    2002-01-01

    A novel approach for computer aided modeling and optimizing mixing process has been developed using Galerkin least-squares finite element technology. Computer aided mixing modeling and analysis involves Lagrangian and Eulerian analysis for relative fluid stretching, and energy dissipation concepts for laminar and turbulent flows. High quality, conservative, accurate, fluid velocity, and continuity solutions are required for determining mixing quality. The ORCA Computational Fluid Dynamics (CFD) package, based on a finite element formulation, solves the incompressible Reynolds Averaged Navier Stokes (RANS) equations. Although finite element technology has been well used in areas of heat transfer, solid mechanics, and aerodynamics for years, it has only recently been applied to the area of fluid mixing. ORCA, developed using the Galerkin Least-Squares (GLS) finite element technology, provides another formulation for numerically solving the RANS based and LES based fluid mechanics equations. The ORCA CFD package is validated against two case studies. The first, a free round jet, demonstrates that the CFD code predicts the theoretical velocity decay rate, linear expansion rate, and similarity profile. From proper prediction of fundamental free jet characteristics, confidence can be derived when predicting flows in a stirred tank, as a stirred tank reactor can be considered a series of free jets and wall jets. (author)

  5. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To evaluate the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbors (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
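
    The sketch below shows one way an NNLS code can be turned into a minimum-class-residual classifier, in the spirit described above; the dictionary, labels and feature dimensions are made-up placeholders, not the JAFFE/LBP setup:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def nnls_classify(D, labels, x):
        """D: (d, n) dictionary of training features, labels: (n,), x: (d,) test sample."""
        coef, _ = nnls(D, x)                  # non-negative code of x over the dictionary
        best_class, best_err = None, np.inf
        for c in np.unique(labels):
            mask = labels == c
            err = np.linalg.norm(x - D[:, mask] @ coef[mask])  # class-wise reconstruction error
            if err < best_err:
                best_class, best_err = c, err
        return best_class

    # toy usage with random "features" standing in for LBP descriptors
    rng = np.random.default_rng(2)
    D = rng.random((30, 12))
    labels = np.repeat([0, 1, 2], 4)
    print(nnls_classify(D, labels, D[:, 5] + 0.01 * rng.random(30)))  # expected: class 1
    ```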

  6. Moving least squares simulation of free surface flows

    DEFF Research Database (Denmark)

    Felter, C. L.; Walther, Jens Honore; Henriksen, Christian

    2014-01-01

    In this paper a Moving Least Squares method (MLS) for the simulation of 2D free surface flows is presented. The emphasis is on the governing equations, the boundary conditions, and the numerical implementation. The compressible viscous isothermal Navier–Stokes equations are taken as the starting ...

  7. Multivariate calibration with least-squares support vector machines.

    NARCIS (Netherlands)

    Thissen, U.M.J.; Ustun, B.; Melssen, W.J.; Buydens, L.M.C.

    2004-01-01

    This paper proposes the use of least-squares support vector machines (LS-SVMs) as a relatively new nonlinear multivariate calibration method, capable of dealing with ill-posed problems. LS-SVMs are an extension of "traditional" SVMs that have been introduced recently in the field of chemistry and
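
    For readers unfamiliar with LS-SVMs, a bare-bones regression sketch following the usual formulation (RBF kernel, dual variables obtained from a single linear system) is given below; it is not the calibration code used in the paper, and the kernel and hyperparameters are arbitrary:

    ```python
    import numpy as np

    def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
        """Solve the LS-SVM dual system for regression (RBF kernel)."""
        n = X.shape[0]
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / (2 * sigma ** 2))
        A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K + np.eye(n) / gamma]])
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:], X, sigma       # bias, dual coefficients, training data

    def lssvm_predict(model, Xnew):
        b, alpha, Xtr, sigma = model
        sq = ((Xnew[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2)) @ alpha + b

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, (40, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=40)
    print(lssvm_predict(lssvm_fit(X, y), np.array([[0.5]])))  # close to sin(1.5)
    ```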

  8. Magnetic phase transition induced by electrostatic gating in two-dimensional square metal-organic frameworks

    Science.gov (United States)

    Wang, Yun-Peng; Li, Xiang-Guo; Liu, Shuang-Long; Fry, James N.; Cheng, Hai-Ping

    2018-03-01

    We investigate theoretically magnetism and magnetic phase transitions induced by electrostatic gating of two-dimensional square metal-organic framework compounds. We find that electrostatic gating can induce phase transitions between homogeneous ferromagnetic and various spin-textured antiferromagnetic states. Electronic structure and Wannier function analysis can reveal hybridizations between transition-metal d orbitals and conjugated π orbitals in the organic framework. Mn-containing compounds exhibit a strong d-π hybridization that leads to partially occupied spin-minority bands, in contrast to compounds containing transition-metal ions other than Mn, for which the electronic structure around the Fermi energy is only slightly spin split due to weak d-π hybridization and the magnetic interaction is of the Ruderman-Kittel-Kasuya-Yosida type. We use a ferromagnetic Kondo lattice model to understand the phase transition in Mn-containing compounds in terms of carrier density and illuminate the complexity and the potential to control two-dimensional magnetization.

  9. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches to video coding exploit temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA) without the need to transmit any overhead.

  10. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2017-01-15

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  11. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    International Nuclear Information System (INIS)

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-01-01

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  12. Least-mean-square spatial filter for IR sensors.

    Science.gov (United States)

    Takken, E H; Friedman, D; Milton, A F; Nitzberg, R

    1979-12-15

    A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.

  13. Canards in a minimal piecewise-linear square-wave burster

    Energy Technology Data Exchange (ETDEWEB)

    Desroches, M.; Krupa, M. [Inria Sophia-Antipolis Méditerranée Research Centre, MathNeuro Project-Team 2004 route des Lucioles BP 93, 06902 Valbonne Cedex (France); Fernández-García, S., E-mail: soledad@us.es [Departamento EDAN, University of Seville, Facultad de Matemáticas C/ Tarfia, s/n., 41012 Sevilla (Spain)

    2016-07-15

    We construct a piecewise-linear (PWL) approximation of the Hindmarsh-Rose (HR) neuron model that is minimal, in the sense that the vector field has the least number of linearity zones, in order to reproduce all the dynamics present in the original HR model with classical parameter values. This includes square-wave bursting and also special trajectories called canards, which possess long repelling segments and organise the transitions between stable bursting patterns with n and n + 1 spikes, also referred to as spike-adding canard explosions. We propose a first approximation of the smooth HR model, using a continuous PWL system, and show that its fast subsystem cannot possess a homoclinic bifurcation, which is necessary to obtain proper square-wave bursting. We then relax the assumption of continuity of the vector field across all zones, and we show that we can obtain a homoclinic bifurcation in the fast subsystem. We use the recently developed canard theory for PWL systems in order to reproduce the spike-adding canard explosion feature of the HR model as studied, e.g., in Desroches et al., Chaos 23(4), 046106 (2013).

  14. Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    T. Panigrahi

    2012-01-01

    Full Text Available In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS and block incremental least mean square (BILMS by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out and found to have similar performances to those offered by conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
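
    A toy sketch of a block LMS update of the kind extended in this paper (single node, no network; the filter length, block size and step size are arbitrary choices):

    ```python
    import numpy as np

    def block_lms(x, d, num_taps=4, block=8, mu=0.01):
        """Update an FIR weight vector once per block of samples."""
        w = np.zeros(num_taps)
        for start in range(num_taps - 1, len(x) - block, block):
            grad = np.zeros(num_taps)
            for n in range(start, start + block):
                u = x[n - num_taps + 1:n + 1][::-1]  # regressor [x[n], x[n-1], ...]
                grad += (d[n] - w @ u) * u           # accumulate error * input
            w += mu * grad / block                   # one update per block
        return w

    rng = np.random.default_rng(4)
    x = rng.normal(size=2000)
    true_w = np.array([0.8, -0.4, 0.2, 0.1])
    d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.normal(size=len(x))
    print(block_lms(x, d))  # approaches true_w
    ```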

  15. The crux of the method: assumptions in ordinary least squares and logistic regression.

    Science.gov (United States)

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.

  16. Prediction of octanol-water partition coefficients of organic compounds by multiple linear regression, partial least squares, and artificial neural network.

    Science.gov (United States)

    Golmohammadi, Hassan

    2009-11-30

    A quantitative structure-property relationship (QSPR) study was performed to develop models that relate the structures of 141 organic compounds to their octanol-water partition coefficients (log P(o/w)). A genetic algorithm was applied as a variable selection tool. Modeling of log P(o/w) of these compounds as a function of theoretically derived descriptors was established by multiple linear regression (MLR), partial least squares (PLS), and artificial neural network (ANN). The best selected descriptors that appear in the models are: atomic charge weighted partial positively charged surface area (PPSA-3), fractional atomic charge weighted partial positive surface area (FPSA-3), minimum atomic partial charge (Qmin), molecular volume (MV), total dipole moment of the molecule (mu), maximum antibonding contribution of a molecular orbital in the molecule (MAC), and maximum free valency of a C atom in the molecule (MFV). The results obtained showed the ability of the developed artificial neural network to predict the partition coefficients of organic compounds and revealed the superiority of the ANN over the MLR and PLS models. Copyright 2009 Wiley Periodicals, Inc.

  17. Flexible aluminum tubes and a least square multi-objective non-linear optimization scheme

    International Nuclear Information System (INIS)

    Endelt, Benny; Nielsen, Karl Brian; Olsen, Soeren

    2004-01-01

    The automotive industry currently uses rubber hoses as the media carrier between, e.g., the radiator and the engine, and the basic idea is to replace the rubber hoses with flexible aluminum tubes. Good quality is defined through several quality measurements; in the current case the key objective is to produce a flexible convolution through optimization of the tool geometry, but the process should also be stable, and process stability is evaluated through Forming Limit Diagrams. The defined objectives are typically conflicting, so the optimized configuration represents a trade-off between the individual objectives, in this case flexibility versus process stability. The optimization problem is solved by iteratively minimizing the objective function. A second-order least-squares scheme is used for the approximation of the quadratic model, the change in the design parameters is evaluated through a trust-region scheme, and box constraints are introduced within the trust-region framework. Furthermore, the objective function is minimized by applying a non-monotone scheme, and the trust-region subproblem is solved by applying the Cholesky factorization scheme. An optimal bell-shaped geometry is identified and the design is verified experimentally

  18. Emulating facial biomechanics using multivariate partial least squares surrogate models.

    Science.gov (United States)

    Wu, Tim; Martens, Harald; Hunter, Peter; Mithraratne, Kumar

    2014-11-01

    A detailed biomechanical model of the human face driven by a network of muscles is a useful tool in relating muscle activities to facial deformations. However, lengthy computational times often hinder its applications in practical settings. The objective of this study is to replace the precise but computationally demanding biomechanical model with a much faster multivariate meta-model (surrogate model), such that a significant speedup (to real-time interactive speed) can be achieved. Using a multilevel fractional factorial design, the parameter space of the biomechanical system was probed at a set of sample points chosen to satisfy maximal rank optimality and volume filling. The input-output relationship at these sampled points was then statistically emulated using linear and nonlinear, cross-validated, partial least squares regression models. It was demonstrated that these surrogate models can mimic facial biomechanics efficiently and reliably in real time. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    Science.gov (United States)

    Liu, L. H.; Tan, J. Y.

    2007-02-01

    A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. Except for the collocation points which are used to construct the trial functions, a number of auxiliary points are also adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with the other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media.

  20. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    International Nuclear Information System (INIS)

    Liu, L.H.; Tan, J.Y.

    2007-01-01

    A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. Except for the collocation points which are used to construct the trial functions, a number of auxiliary points are also adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with the other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media

  1. A square-plate piezoelectric linear motor operating in two orthogonal and isomorphic face-diagonal-bending modes.

    Science.gov (United States)

    Ci, Penghong; Chen, Zhijiang; Liu, Guoxi; Dong, Shuxiang

    2014-01-01

    We report a piezoelectric linear motor made of a single Pb(Zr,Ti)O3 square-plate, which operates in two orthogonal and isomorphic face-diagonal-bending modes to produce precision linear motion. A 15 × 15 × 2 mm prototype was fabricated, and the motor generated a driving force of up to 1.8 N and a speed of 170 mm/s under an applied voltage of 100 Vpp at the resonance frequency of 136.5 kHz. The motor shows such advantages as large driving force under relatively low driving voltage, simple structure, and stable motion because of its isomorphic face-diagonal-bending mode.

  2. Dual stacked partial least squares for analysis of near-infrared spectra

    Energy Technology Data Exchange (ETDEWEB)

    Bi, Yiming [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Xie, Qiong, E-mail: yimbi@163.com [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Zhao, Yuhui [School of Economics and Business, Northeastern University at Qinhuangdao, 066000 Qinhuangdao City (China); Li, Changwen [Food Research Institute of Tianjin Tasly Group, 300410 Tianjin (China)

    2013-08-20

    Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced that only a subset of all available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in many fields, such as Mid-infrared spectra data and Raman spectra data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm contains two steps of stacked regression and Partial Least Squares (PLS), termed Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules of the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was also involved to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with the single model, the new ensemble model can provide more robust prediction result and can be considered an alternative choice for quantitative analytical applications.

  3. Dual stacked partial least squares for analysis of near-infrared spectra

    International Nuclear Information System (INIS)

    Bi, Yiming; Xie, Qiong; Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie; Zhao, Yuhui; Li, Changwen

    2013-01-01

    Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced that only a subset of all available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in many fields, such as Mid-infrared spectra data and Raman spectra data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm contains two steps of stacked regression and Partial Least Squares (PLS), termed Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules of the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was also involved to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with the single model, the new ensemble model can provide more robust prediction result and can be considered an alternative choice for quantitative analytical applications

  4. Correlation of rocket propulsion fuel properties with chemical composition using comprehensive two-dimensional gas chromatography with time-of-flight mass spectrometry followed by partial least squares regression analysis.

    Science.gov (United States)

    Kehimkar, Benjamin; Hoggard, Jamin C; Marney, Luke C; Billingsley, Matthew C; Fraga, Carlos G; Bruno, Thomas J; Synovec, Robert E

    2014-01-31

    There is an increased need to more fully assess and control the composition of kerosene-based rocket propulsion fuels such as RP-1. In particular, it is critical to make better quantitative connections among the following three attributes: fuel performance (thermal stability, sooting propensity, engine specific impulse, etc.), fuel properties (such as flash point, density, kinematic viscosity, net heat of combustion, and hydrogen content), and the chemical composition of a given fuel, i.e., amounts of specific chemical compounds and compound classes present in a fuel as a result of feedstock blending and/or processing. Recent efforts in predicting fuel chemical and physical behavior through modeling put greater emphasis on attaining detailed and accurate fuel properties and fuel composition information. Often, one-dimensional gas chromatography (GC) combined with mass spectrometry (MS) is employed to provide chemical composition information. Building on approaches that used GC-MS, but to glean substantially more chemical information from these complex fuels, we recently studied the use of comprehensive two dimensional (2D) gas chromatography combined with time-of-flight mass spectrometry (GC×GC-TOFMS) using a "reversed column" format: RTX-wax column for the first dimension, and a RTX-1 column for the second dimension. In this report, by applying chemometric data analysis, specifically partial least-squares (PLS) regression analysis, we are able to readily model (and correlate) the chemical compositional information provided by use of GC×GC-TOFMS to RP-1 fuel property information such as density, kinematic viscosity, net heat of combustion, and so on. Furthermore, we readily identified compounds that contribute significantly to measured differences in fuel properties based on results from the PLS models. We anticipate this new chemical analysis strategy will have broad implications for the development of high fidelity composition-property models, leading to an

  5. Dynamics of two-dimensional vortex system in a strong square pinning array at the second matching field

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Qing-Bao [Department of Physics, Lishui University, Lishui 323000 (China); Luo, Meng-Bo, E-mail: Luomengbo@zju.edu.cn [Department of Physics, Zhejiang University, Hangzhou 310027 (China)

    2013-10-30

    We study the dynamics of a two-dimensional vortex system in a strong square pinning array at the second matching field. Two kinds of depinning behaviors, a continuous depinning transition at weak pinning and a discontinuous one at strong pinning, are found. We show that the two different kinds of vortex depinning transitions can be identified in transport as a function of the pinning strength and temperature. Moreover, interstitial vortex state can be probed from the transport properties of vortices.

  6. Linear and nonlinear viscous flow in two-dimensional fluids

    International Nuclear Information System (INIS)

    Gravina, D.; Ciccotti, G.; Holian, B.L.

    1995-01-01

    We report on molecular dynamics simulations of shear viscosity η of a dense two-dimensional fluid as a function of the shear rate γ. We find an analytic dependence of η on γ, and do not find any evidence whatsoever of divergence in the Green-Kubo (GK) value that would be caused by the well-known long-time tail for the shear-stress autocorrelation function, as predicted by the mode-coupling theory. In accordance with the linear response theory, the GK value of η agrees remarkably well with nonequilibrium values at small shear rates. (c) 1995 The American Physical Society

  7. Estimation of active pharmaceutical ingredients content using locally weighted partial least squares and statistical wavelength selection.

    OpenAIRE

    Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji

    2011-01-01

    Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because physical and chemical properties of a measuring object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS) wh...

  8. A decentralized square root information filter/smoother

    Science.gov (United States)

    Bierman, G. J.; Belzer, M. R.

    1985-01-01

    A number of developments have recently led to considerable interest in the decentralization of linear least squares estimators. The developments are partly related to the impending emergence of VLSI technology, the realization of parallel processing, and the need for algorithmic ways to speed the solution of dynamically decoupled, high-dimensional estimation problems. A new method is presented for combining Square Root Information Filter (SRIF) estimates obtained from independent data sets. The new method involves an orthogonal transformation, and an information matrix filter 'homework' problem discussed by Schweppe (1973) is generalized. The employed SRIF orthogonal transformation methodology has been described by Bierman (1977).

  9. Least Squares Approach to the Alignment of the Generic High Precision Tracking System

    Science.gov (United States)

    de Renstrom, Pawel Brückman; Haywood, Stephen

    2006-04-01

    A least squares method to solve a generic alignment problem of a high-granularity tracking system is presented. The algorithm is based on an analytical linear expansion and allows for multiple nested fits; e.g., imposing a common vertex for groups of particle tracks is of particular interest. We present a consistent and complete recipe to impose constraints on either implicit or explicit parameters. The method has been applied to the full simulation of a subset of the ATLAS silicon tracking system. The ultimate goal is to determine ≈35,000 degrees of freedom (DoFs). We present a limited-scale exercise exploring various aspects of the solution.

  10. Least-squares resolution of gamma-ray spectra in environmental samples

    International Nuclear Information System (INIS)

    Kanipe, L.G.; Seale, S.K.; Liggett, W.S.

    1977-08-01

    The use of ALPHA-M, a least squares computer program for analyzing NaI (Tl) gamma spectra of environmental samples, is evaluated. Included is a comprehensive set of program instructions, listings, and flowcharts. Two other programs, GEN4 and SIMSPEC, are also described. GEN4 is used to create standard libraries for ALPHA-M, and SIMSPEC is used to simulate spectra for ALPHA-M analysis. Tests to evaluate the standard libraries selected for use in analyzing environmental samples are provided. An evaluation of the results of sample analyses is discussed

  11. An Inverse Function Least Square Fitting Approach of the Buildup Factor for Radiation Shielding Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Je [Sejong Univ., Seoul (Korea, Republic of); Alkhatee, Sari; Roh, Gyuhong; Lee, Byungchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Dose and energy absorption buildup factors are widely used in shielding analysis. The dose rate of the medium is the main concern for the dose buildup factor, whereas energy absorption is the important parameter for the energy absorption buildup factor. The ANSI/ANS-6.4.3-1991 standard data are widely used, with interpolation and extrapolation performed by means of an approximation method. Recently, Yoshida's geometric progression (GP) formulae have also become popular and are already implemented in the QAD code. In the QAD code, the two buildup factors are denoted DOSE, for the standard air exposure response, and ENG, for the response of the energy absorbed in the material itself. In this paper, a new least-squares fitting method is suggested to obtain reliable buildup factors from the data proposed since 1991. In total, four datasets of air exposure buildup factors are used for evaluation, including the ANSI/ANS-6.4.3-1991, Taylor, Berger, and GP data. The standard deviations of the fitted data are analyzed based on the results. An inverse least-squares fitting method is proposed in this study in order to reduce the fitting uncertainties: it adopts an inverse function rather than the original function, depending on the slope of the dataset. Some quantitative comparisons are provided for concrete and lead. This study is focused on the least-squares fitting of existing buildup factors to be utilized in point-kernel codes for radiation shielding analysis. The inverse least-squares fitting method is suggested to obtain more reliable results for concave-shaped datasets such as concrete; in the concrete case, the variance and residuals are decreased significantly. The convex-shaped case of lead, however, can be handled by the usual least-squares fitting method. In the future, more datasets will be tested using least-squares fitting, and the fitted data could be implemented in existing point-kernel codes.
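
    For orientation, a generic least-squares fit of tabulated buildup factors is sketched below; the data values are synthetic placeholders and the paper's specific inverse-function scheme is not reproduced:

    ```python
    import numpy as np

    mfp = np.array([0.5, 1, 2, 4, 7, 10, 15, 20])  # depth in mean free paths
    B = np.array([1.6, 2.1, 3.4, 6.0, 10.5, 15.8, 24.9, 35.2])  # placeholder buildup factors

    # fit log(B) with a low-order polynomial in mfp (one simple, common choice)
    coeffs = np.polyfit(mfp, np.log(B), deg=2)
    B_fit = np.exp(np.polyval(coeffs, mfp))

    residuals = B - B_fit
    print("fitted coefficients:", coeffs)
    print("residual standard deviation:", residuals.std(ddof=coeffs.size))
    ```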

  12. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    Science.gov (United States)

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters, which increases the computation speed by up to four times per iteration in most under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form, and it is proven to be equivalent, not only analytically but also numerically. Equivalent alternative forms for other minimization methods, such as Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
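
    The measurement-space alternative form described above can be checked numerically with the push-through (Sherman-Morrison-Woodbury) identity; the sketch below uses random stand-in matrices, not the diffuse optical tomography operators:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_meas, n_par = 40, 1200                    # heavily under-determined problem
    J = rng.normal(size=(n_meas, n_par))        # Jacobian (sensitivity) matrix
    r = rng.normal(size=n_meas)                 # data-model misfit
    W = np.diag(rng.uniform(0.5, 2.0, n_meas))  # data weight (inverse covariance)
    L = np.diag(rng.uniform(0.5, 2.0, n_par))   # parameter weight (regularization)

    # parameter-space form: requires solving an (n_par x n_par) system
    dx_big = np.linalg.solve(J.T @ W @ J + L, J.T @ W @ r)

    # measurement-space form: only an (n_meas x n_meas) system is solved
    Linv_Jt = np.linalg.solve(L, J.T)
    dx_small = Linv_Jt @ np.linalg.solve(J @ Linv_Jt + np.linalg.inv(W), r)

    print(np.allclose(dx_big, dx_small))        # True, up to round-off
    ```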

  13. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    Science.gov (United States)

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.

  14. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.

  15. Error propagation of partial least squares for parameters optimization in NIR modeling

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was presented in terms of both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrate how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters, which could provide significant guidance for the selection of modeling parameters in other multivariate calibration models.

  16. Error propagation of partial least squares for parameters optimization in NIR modeling.

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water content in corn and geniposide content in Gardenia was presented in terms of both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrate how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters, which could provide significant guidance for the selection of modeling parameters in other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.

  17. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    Science.gov (United States)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for the identification of time-varying systems than for time-invariant systems. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland's compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adopted for the proposed estimator to replace the time-consuming n-fold cross validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and handling short data records. A laboratory experiment further validates the proposed estimator.

  18. Application of pulse pile-up correction spectrum to the library least-squares method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon [Kyungpook National Univ., Daegu (Korea, Republic of)

    2006-12-15

    The Monte Carlo simulation code CEARPPU has been developed and updated to provide pulse pile-up correction spectra for high counting rate cases. For neutron activation analysis, CEARPPU correction spectra were used in the library least-squares method to give better isotopic activity results than conventional library least-squares fitting with uncorrected spectra.

  19. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    Science.gov (United States)

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.

  20. Performance Evaluation of the Ordinary Least Square (OLS) and ...

    African Journals Online (AJOL)

    Nana Kwasi Peprah

    Department of Geomatic Engineering, University of Mines and Technology, ... precise, accurate and can be used to execute any engineering works due to ..... and Ordinary Least Squares Methods”, Journal of Geomatics and Planning, Vol ... Technology”, Unpublished BSc Project Report, University of Mines and Technology ...

  1. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  2. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age
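
    The three slope estimates discussed above have simple closed forms, compared here on synthetic data with errors in both variables (not the aerosol measurements from the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    true_x = rng.uniform(0, 10, 200)
    x = true_x + rng.normal(0, 0.5, 200)  # error in the "independent" variable too
    y = 2.0 * true_x + 1.0 + rng.normal(0, 0.5, 200)

    sxx, syy = np.var(x), np.var(y)
    sxy = np.cov(x, y, bias=True)[0, 1]

    slope_ols = sxy / sxx                                 # ordinary least squares
    slope_gm = np.sign(sxy) * np.sqrt(syy / sxx)          # geometric mean regression
    slope_orth = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)  # orthogonal

    print(slope_ols, slope_gm, slope_orth)  # OLS is biased low when x carries error
    ```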

  3. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    Science.gov (United States)

    Xu, Yu-Lin

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, which are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and either linear in the adjustment parameters or linearized by developing them in Taylor series by first-order approximation, is inadequate in our orbit problem. D.C. Brown proposed an algorithm solving a more general least squares adjustment problem in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W.H Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied in our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in the case the normal approach fails, by combination with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution. The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also

  4. A FORTRAN program for a least-square fitting

    International Nuclear Information System (INIS)

    Yamazaki, Tetsuo

    1978-01-01

    A practical FORTRAN program for a least-squares fitting is presented. Although the method is quite usual, the program calculates not only the most satisfactory set of values of unknowns but also the plausible errors associated with them. As an example, a measured lateral absorbed-dose distribution in water for a narrow 25-MeV electron beam is fitted to a Gaussian distribution. (auth.)
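
    The record describes a FORTRAN routine; a modern equivalent sketch in Python is shown below, fitting a Gaussian by least squares and reporting parameter uncertainties from the covariance matrix (the data are synthetic, not the measured dose profile):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    rng = np.random.default_rng(7)
    x = np.linspace(-5, 5, 61)
    y = gaussian(x, 10.0, 0.0, 1.2) + rng.normal(0, 0.2, x.size)

    popt, pcov = curve_fit(gaussian, x, y, p0=[8.0, 0.5, 1.0])
    perr = np.sqrt(np.diag(pcov))  # "plausible errors" on the fitted unknowns
    for name, val, err in zip(["amp", "mu", "sigma"], popt, perr):
        print(f"{name} = {val:.3f} +/- {err:.3f}")
    ```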

  5. Simulation of Two-Fluid Flows by the Least-Squares Finite Element Method Using a Continuum Surface Tension Model

    Science.gov (United States)

    Wu, Jie; Yu, Sheng-Tao; Jiang, Bo-nan

    1996-01-01

    In this paper a numerical procedure for simulating two-fluid flows is presented. This procedure is based on the Volume of Fluid (VOF) method proposed by Hirt and Nichols and the continuum surface force (CSF) model developed by Brackbill, et al. In the VOF method fluids of different properties are identified through the use of a continuous field variable (color function). The color function assigns a unique constant (color) to each fluid. The interfaces between different fluids are distinct due to sharp gradients of the color function. The evolution of the interfaces is captured by solving the convective equation of the color function. The CSF model is used as a means to treat surface tension effect at the interfaces. Here a modified version of the CSF model, proposed by Jacqmin, is used to calculate the tension force. In the modified version, the force term is obtained by calculating the divergence of a stress tensor defined by the gradient of the color function. In its analytical form, this stress formulation is equivalent to the original CSF model. Numerically, however, the use of the stress formulation has some advantages over the original CSF model, as it bypasses the difficulty in approximating the curvatures of the interfaces. The least-squares finite element method (LSFEM) is used to discretize the governing equation systems. The LSFEM has proven to be effective in solving incompressible Navier-Stokes equations and pure convection equations, making it an ideal candidate for the present applications. The LSFEM handles all the equations in a unified manner without any additional special treatment such as upwinding or artificial dissipation. Various bench mark tests have been carried out for both two dimensional planar and axisymmetric flows, including a dam breaking, oscillating and stationary bubbles and a conical liquid sheet in a pressure swirl atomizer.

  6. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang

    2013-12-06

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free-surface becomes an extended virtual source where the downgoing free-surface multiples more fully illuminate the subsurface compared to the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required and the ringy time series for each source is automatically deconvolved. If the multiples can be perfectly separated from the primaries, numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if there is imperfect separation and the multiples are strongly interfering with the primaries then LSRTMM images show no significant advantage over the primary migration images. In some cases, they can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows higher signal-to-noise imaging of the salt bottom and top compared to standard RTM images. This is likely attributed to the fact that the target body is just below the sea bed so that the deep water multiples do not have strong interference with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean bottom seismic data shows that LSM of primaries and LSRTMM provides significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. This can lead to lower

  7. Nonlinear Least Square Based on Control Direction by Dual Method and Its Application

    Directory of Open Access Journals (Sweden)

    Zhengqing Fu

    2016-01-01

    Full Text Available A direction-controlled nonlinear least squares (NLS) estimation algorithm using the primal-dual method is proposed. The least squares model is transformed into a primal-dual model, so that the direction of iteration can be controlled through duality, and the iterative algorithm is designed. The ill-conditioned Hilbert matrix is processed with the new model, the least squares estimate, and the ridge estimate. The main research method is to combine qualitative and quantitative analysis: the deviation of the estimated values from the true values and the fluctuation of the estimated residuals of the different methods are used for qualitative analysis, and the root mean square error (RMSE) is used for quantitative analysis. The experimental results show that the new model has the smallest residual error and the minimum root mean square error, so the new estimation model is effective and of high precision. Real data from the Jining area are used in phase-unwrapping experiments, and a comparison with other classical unwrapping algorithms shows that better precision can be achieved with the proposed algorithm.
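
    As a small side illustration of why the ill-conditioned Hilbert system is a useful test case, the sketch below contrasts plain least-squares and ridge estimates using RMSE; the paper's direction-controlled primal-dual algorithm itself is not reproduced:

    ```python
    import numpy as np
    from scipy.linalg import hilbert

    n = 10
    A = hilbert(n)                              # classic ill-conditioned test matrix
    x_true = np.ones(n)
    rng = np.random.default_rng(8)
    b = A @ x_true + 1e-6 * rng.normal(size=n)  # slightly noisy right-hand side

    x_ls = np.linalg.lstsq(A, b, rcond=None)[0]                    # least-squares estimate
    lam = 1e-8
    x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)  # ridge estimate

    rmse = lambda x: np.sqrt(np.mean((x - x_true) ** 2))
    print(rmse(x_ls), rmse(x_ridge))            # ridge is far less sensitive to the noise
    ```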

  8. Transfer of optical signals around bends in two-dimensional linear photonic networks

    International Nuclear Information System (INIS)

    Nikolopoulos, G M

    2015-01-01

    The ability to navigate light signals in two-dimensional networks of waveguide arrays is a prerequisite for the development of all-optical integrated circuits for information processing and networking. In this article, we present a theoretical analysis of bending losses in linear photonic lattices with engineered couplings, and discuss possible ways for their minimization. In contrast to previous work in the field, the lattices under consideration operate in the linear regime, in the sense that discrete solitons cannot exist. The present results suggest that the functionality of linear waveguide networks can be extended to operations that go beyond the recently demonstrated point-to-point transfer of signals, such as blocking, routing, logic functions, etc. (paper)

  9. Linear and nonlinear integer models for constrained two-stage two-dimensional knapsack problems

    Directory of Open Access Journals (Sweden)

    Horacio Hideki Yanasse

    2013-01-01

    Full Text Available In this work we review some linear and nonlinear integer models for generating two-stage two-dimensional guillotine cutting patterns, including the constrained, unconstrained, exact and non-exact cases. These problems are particular cases of the two-dimensional knapsack problem. We also present new models for generating these cutting patterns, based on adaptations and extensions of models that generate one-group constrained two-dimensional cutting patterns. Two-stage patterns arise in different cutting processes, for instance in the furniture industry and in wooden hardboard production. The models are useful for the research and development of more efficient solution methods that exploit particular structures, model decomposition, model relaxations, etc. They are also useful for evaluating the performance of heuristics, since they allow (at least for problems of moderate size) an estimate of the optimality gap of solutions obtained by heuristics. To illustrate the application of the models, we analyze the results of some computational experiments with instances from the literature and others generated randomly. The results were produced using a well-known commercial software package and show that the computational effort required to solve the models can vary considerably.

  10. Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons

    Science.gov (United States)

    1989-03-25

    ... error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σ_{i=1}^{n} (y_i − ȳ)², can be partitioned into error and regression sums ... of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given ...
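
    As a minimal numerical illustration of the decomposition above (assuming an ordinary least-squares straight-line fit and made-up data, not the report's model), the identity SSTO = SSE + SSR can be checked directly:

```python
import numpy as np

# Straight-line least-squares fit and the decomposition SSTO = SSE + SSR.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

A = np.column_stack([np.ones_like(x), x])          # model y = b0 + b1 * x
b0, b1 = np.linalg.lstsq(A, y, rcond=None)[0]
y_hat = b0 + b1 * x

ssto = np.sum((y - y.mean()) ** 2)                 # total sum of squares
sse = np.sum((y - y_hat) ** 2)                     # error sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)              # regression sum of squares

print(ssto, sse + ssr)                             # equal up to round-off
```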

  11. Beam-hardening correction by a surface fitting and phase classification by a least square support vector machine approach for tomography images of geological samples

    Science.gov (United States)

    Khan, F.; Enzmann, F.; Kersten, M.

    2015-12-01

    In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e. the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high dimensional training data set.
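
    A minimal sketch of the first stage (a best-fit quadratic surface removed from a slice by least squares) is given below; the synthetic image, grid size and bowl-shaped offset are assumptions, and the paper's own Matlab implementation remains the authoritative version.

```python
import numpy as np

def fit_quadratic_surface(img):
    """Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    to a 2-D grey-scale image; returns the fitted surface."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    A = np.column_stack([np.ones(img.size), x.ravel(), y.ravel(),
                         (x ** 2).ravel(), (x * y).ravel(), (y ** 2).ravel()])
    coeffs = np.linalg.lstsq(A, img.ravel(), rcond=None)[0]
    return (A @ coeffs).reshape(ny, nx)

# Synthetic slice: a flat "sample" plus a bowl-shaped beam-hardening offset.
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
bh_offset = 1e-4 * ((xx - nx / 2) ** 2 + (yy - ny / 2) ** 2)
slice_img = 100.0 + bh_offset

surface = fit_quadratic_surface(slice_img)
corrected = slice_img - surface      # residual image with the BH trend removed
print(np.ptp(corrected))             # close to zero: the quadratic trend is captured
```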

  12. Conduction in rectangular quasi-one-dimensional and two-dimensional random resistor networks away from the percolation threshold.

    Science.gov (United States)

    Kiefer, Thomas; Villanueva, Guillermo; Brugger, Jürgen

    2009-08-01

    In this study we investigate electrical conduction in finite rectangular random resistor networks in quasi-one and two dimensions far away from the percolation threshold p(c) by the use of a bond percolation model. Various topologies such as parallel linear chains in one dimension, as well as square and triangular lattices in two dimensions, are compared as a function of the geometrical aspect ratio. In particular we propose a linear approximation for conduction in two-dimensional systems far from p(c), which is useful for engineering purposes. We find that the same scaling function, which can be used for finite-size scaling of percolation thresholds, also applies to describe conduction away from p(c). This is in contrast to the quasi-one-dimensional case, which is highly nonlinear. A qualitative analysis of the range within which the linear approximation is legitimate is given. A brief link to real applications is made by taking into account a statistical distribution of the resistors in the network. Our results are of potential interest in fields such as nanostructured or composite materials and sensing applications.

  13. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation methods (least error squares and filtering)

    International Nuclear Information System (INIS)

    Gillet, M.

    1986-07-01

    This thesis presents a study for the surveillance of the primary coolant circuit inventory of a pressurized water reactor. A reference model is developed with a view to an automatic system ensuring detection and diagnosis in real time. The methods used for the present application are statistical tests and a method related to pattern recognition. The estimation of detected failures, which is difficult owing to the non-linearity of the problem, is treated by least-squares methods of the predictor or corrector type, and by filtering. It is in this framework that a new optimized method with superlinear convergence is developed, and that a segmented linearization of the model is introduced, with a view to multiple filtering. (in French)

  14. Attenuation compensation in least-squares reverse time migration using the visco-acoustic wave equation

    KAUST Repository

    Dutta, Gaurav

    2013-08-20

    Attenuation leads to distortion of amplitude and phase of seismic waves propagating inside the earth. Conventional acoustic and least-squares reverse time migration do not account for this distortion which leads to defocusing of migration images in highly attenuative geological environments. To account for this distortion, we propose to use the visco-acoustic wave equation for least-squares reverse time migration. Numerical tests on synthetic data show that least-squares reverse time migration with the visco-acoustic wave equation corrects for this distortion and produces images with better balanced amplitudes compared to the conventional approach. © 2013 SEG.

  15. Numerical Control Machine Tool Fault Diagnosis Using Hybrid Stationary Subspace Analysis and Least Squares Support Vector Machine with a Single Sensor

    Directory of Open Access Journals (Sweden)

    Chen Gao

    2017-03-01

    Full Text Available Tool fault diagnosis in numerical control (NC machines plays a significant role in ensuring manufacturing quality. However, current methods of tool fault diagnosis lack accuracy. Therefore, in the present paper, a fault diagnosis method was proposed based on stationary subspace analysis (SSA and least squares support vector machine (LS-SVM using only a single sensor. First, SSA was used to extract stationary and non-stationary sources from multi-dimensional signals without the need for independency and without prior information of the source signals, after the dimensionality of the vibration signal observed by a single sensor was expanded by phase space reconstruction technique. Subsequently, 10 dimensionless parameters in the time-frequency domain for non-stationary sources were calculated to generate samples to train the LS-SVM. Finally, the measured vibration signals from tools of an unknown state and their non-stationary sources were separated by SSA to serve as test samples for the trained SVM. The experimental validation demonstrated that the proposed method has better diagnosis accuracy than three previous methods based on LS-SVM alone, Principal component analysis and LS-SVM or on SSA and Linear discriminant analysis.

  16. Track Circuit Fault Diagnosis Method based on Least Squares Support Vector

    Science.gov (United States)

    Cao, Yan; Sun, Fengru

    2018-01-01

    In order to improve the troubleshooting efficiency and accuracy of the track circuit, a track circuit fault diagnosis method was researched. Firstly, the least squares support vector machine was applied to design the multi-fault classifier of the track circuit, and then measured track data were used as training samples to verify the feasibility of the method. Finally, the results of BP neural network fault diagnosis methods and of the method used in this paper were compared. The results show that the track fault classifier based on the least squares support vector machine can effectively diagnose the five track circuit faults with less computing time.

  17. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    Science.gov (United States)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
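
    The following sketch illustrates a generic weighted least-squares fit with per-point weighting factors; the toy load schedule and the simple two-level weighting rule are assumptions standing in for the paper's count-based factors.

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min_x || W^(1/2) (A x - b) ||_2 for per-point weights w."""
    sw = np.sqrt(w)
    x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
# Toy "calibration schedule": 20 single-component loadings per component,
# followed by 20 combined loadings of both components.
loads_1 = np.column_stack([rng.uniform(0, 1, 20), np.zeros(20)])
loads_2 = np.column_stack([np.zeros(20), rng.uniform(0, 1, 20)])
loads_12 = rng.uniform(0, 1, size=(20, 2))
A = np.vstack([loads_1, loads_2, loads_12])
b = A @ np.array([2.0, -0.5]) + 0.01 * rng.standard_normal(len(A))

# Assumed weighting rule (a stand-in for the paper's count-based factors):
# weight 1 for single-component points, 0.5 for combined loadings.
n_loaded = np.count_nonzero(A, axis=1)
w = np.where(n_loaded <= 1, 1.0, 0.5)

print(weighted_least_squares(A, b, w))   # close to [2.0, -0.5]
```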

  18. Application of the Least Squares Method in Axisymmetric Biharmonic Problems

    Directory of Open Access Journals (Sweden)

    Vasyl Chekurin

    2016-01-01

    Full Text Available An approach for solving axisymmetric biharmonic boundary value problems for a semi-infinite cylindrical domain was developed in the paper. On the lateral surface of the domain homogeneous Neumann boundary conditions are prescribed. On the remaining part of the domain's boundary four different sets of biharmonic boundary data are considered. To solve the formulated biharmonic problems, the method of least squares on the boundary, combined with the method of homogeneous solutions, was used. That enabled reducing the problems to infinite systems of linear algebraic equations which can be solved with the use of the reduction method. Convergence of the solution obtained with the developed approach was studied numerically on some characteristic examples. The developed approach can be used, in particular, to solve axisymmetric elasticity problems for cylindrical bodies whose heights are equal to or exceed their diameters, when normal and tangential tractions are prescribed on their lateral surface and various types of boundary conditions, in stresses, in displacements, or mixed, are given on the cylinder's end faces.

  19. Analysis of quantile regression as alternative to ordinary least squares

    OpenAIRE

    Ibrahim Abdullahi; Abubakar Yahaya

    2015-01-01

    In this article, an alternative to ordinary least squares (OLS) regression based on an analytical solution in the Statgraphics software is considered, and this alternative is none other than the quantile regression (QR) model. We also present a goodness of fit statistic as well as approximate distributions of the associated test statistics for the parameters. Furthermore, we suggest a goodness of fit statistic called the least absolute deviation (LAD) coefficient of determination. The procedure is well ...

  20. Improved implementation algorithms of the two-dimensional nonseparable linear canonical transform.

    Science.gov (United States)

    Ding, Jian-Jiun; Pei, Soo-Chang; Liu, Chun-Lin

    2012-08-01

    The two-dimensional nonseparable linear canonical transform (2D NSLCT), which is a generalization of the fractional Fourier transform and the linear canonical transform, is useful for analyzing optical systems. However, since the 2D NSLCT has 16 parameters and is very complicated, it is a great challenge to implement it in an efficient way. In this paper, we improved the previous work and propose an efficient way to implement the 2D NSLCT. The proposed algorithm can minimize the numerical error arising from interpolation operations and requires fewer chirp multiplications. The simulation results show that, compared with the existing algorithm, the proposed algorithms can implement the 2D NSLCT more accurately and the required computation time is also less.

  1. Weighted least-square approach for simultaneous measurement of multiple reflective surfaces

    Science.gov (United States)

    Tang, Shouhong; Bills, Richard E.; Freischlad, Klaus

    2007-09-01

    Phase shifting interferometry (PSI) is a highly accurate method for measuring the nanometer-scale relative surface height of a semi-reflective test surface. PSI is effectively used in conjunction with Fizeau interferometers for optical testing, hard disk inspection, and semiconductor wafer flatness. However, commonly-used PSI algorithms are unable to produce an accurate phase measurement if more than one reflective surface is present in the Fizeau interferometer test cavity. Examples of test parts that fall into this category include lithography mask blanks and their protective pellicles, and plane parallel optical beam splitters. The plane parallel surfaces of these parts generate multiple interferograms that are superimposed in the recording plane of the Fizeau interferometer. When using wavelength shifting in PSI the phase shifting speed of each interferogram is proportional to the optical path difference (OPD) between the two reflective surfaces. The proposed method is able to differentiate each underlying interferogram from each other in an optimal manner. In this paper, we present a method for simultaneously measuring the multiple test surfaces of all underlying interferograms from these superimposed interferograms through the use of a weighted least-square fitting technique. The theoretical analysis of weighted least-square technique and the measurement results will be described in this paper.

  2. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation (least squares and filtering) methods

    International Nuclear Information System (INIS)

    Gillet, M.

    1986-07-01

    This thesis presents a study for the surveillance of the primary circuit water inventory of a pressurized water reactor. A reference model is developed for an automatic system ensuring detection and real-time diagnosis. The methods adapted to our application are statistical tests and a pattern recognition method. The estimation of the detected anomalies is treated by the least squares fit method and by filtering. A new projected optimization method with superlinear convergence is developed in this framework, and a segmented linearization of the model is introduced, with a view to multiple filtering. 46 refs. (in French)

  3. Explorative data analysis of two-dimensional electrophoresis gels

    DEFF Research Database (Denmark)

    Schultz, J.; Gottlieb, D.M.; Petersen, Marianne Kjerstine

    2004-01-01

    Methods for classification of two-dimensional (2-DE) electrophoresis gels based on multivariate data analysis are demonstrated. Two-dimensional gels of ten wheat varieties are analyzed and it is demonstrated how to classify the wheat varieties in two qualities, and a method for initial screening of gels is presented. First, an approach is demonstrated in which no prior knowledge of the separated proteins is used. Alignment of the gels followed by a simple transformation of data makes it possible to analyze the gels in an automated explorative manner by principal component analysis, to determine if the gels should be further analyzed. A more detailed approach is done by analyzing spot volume lists by principal components analysis and partial least square regression. The use of spot volume data offers a means to investigate the spot pattern and link the classified protein patterns to distinct spots...

  4. Growth kinetics of borided layers: Artificial neural network and least square approaches

    Science.gov (United States)

    Campos, I.; Islas, M.; Ramírez, G.; VillaVelázquez, C.; Mota, C.

    2007-05-01

    The present study evaluates the growth kinetics of the boride layer Fe2B in AISI 1045 steel, by means of neural networks and the least square techniques. The Fe2B phase was formed at the material surface using the paste boriding process. The surface boron potential was modified considering different boron paste thicknesses, with exposure times of 2, 4 and 6 h, and treatment temperatures of 1193, 1223 and 1273 K. The neural network and the least square models were set by the layer thickness of the Fe2B phase, and assuming that the growth of the boride layer follows a parabolic law. The reliability of the techniques used is compared with a set of experiments at a temperature of 1223 K with 5 h of treatment time and boron potentials of 2, 3, 4 and 5 mm. The results of the Fe2B layer thicknesses show a mean error of 5.31% for the neural network and 3.42% for the least square method.
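
    A minimal sketch of a least-squares estimate of the parabolic growth constant (u² = K·t) is shown below; the thickness values are illustrative, not the measured Fe2B data.

```python
import numpy as np

# Parabolic growth law: u^2 = K * t  (u = boride layer thickness, t = time).
# Least-squares estimate of K through the origin: K = sum(t * u^2) / sum(t^2).
t = np.array([2.0, 4.0, 6.0]) * 3600.0     # exposure times in seconds
u = np.array([45e-6, 65e-6, 80e-6])        # layer thicknesses in metres (illustrative)

K = np.sum(t * u ** 2) / np.sum(t ** 2)    # growth constant, m^2/s
u_5h = np.sqrt(K * 5.0 * 3600.0)           # predicted thickness after 5 h
print(K, u_5h)
```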

  5. Unweighted least squares phase unwrapping by means of multigrid techniques

    Science.gov (United States)

    Pritt, Mark D.

    1995-11-01

    We present a multigrid algorithm for unweighted least squares phase unwrapping. This algorithm applies Gauss-Seidel relaxation schemes to solve the Poisson equation on smaller, coarser grids and transfers the intermediate results to the finer grids. This approach forms the basis of our multigrid algorithm for weighted least squares phase unwrapping, which is described in a separate paper. The key idea of our multigrid approach is to maintain the partial derivatives of the phase data in separate arrays and to correct these derivatives at the boundaries of the coarser grids. This maintains the boundary conditions necessary for rapid convergence to the correct solution. Although the multigrid algorithm is an iterative algorithm, we demonstrate that it is nearly as fast as the direct Fourier-based method. We also describe how to parallelize the algorithm for execution on a distributed-memory parallel processor computer or a network-cluster of workstations.
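
    As a rough sketch of the relaxation step mentioned above, the snippet below applies plain Gauss-Seidel sweeps to a discrete 2-D Poisson equation on a single grid; the multigrid transfer operators and the derivative bookkeeping described in the abstract are not reproduced, and the grid size, source term and zero Dirichlet boundaries are assumptions.

```python
import numpy as np

def gauss_seidel_poisson(phi, rho, sweeps):
    """In-place Gauss-Seidel sweeps for the 2-D Poisson equation
    (5-point discrete Laplacian of phi equals rho, grid spacing 1),
    with the boundary values of phi held fixed."""
    for _ in range(sweeps):
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                phi[i, j] = 0.25 * (phi[i - 1, j] + phi[i + 1, j]
                                    + phi[i, j - 1] + phi[i, j + 1]
                                    - rho[i, j])
    return phi

n = 33
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0                      # point source
phi = gauss_seidel_poisson(np.zeros((n, n)), rho, sweeps=200)
print(phi[n // 2, n // 2])
```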

  6. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    Science.gov (United States)

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  7. On the Numerical Solution of the Elliptic Monge—Ampère Equation in Dimension Two: A Least-Squares Approach

    Science.gov (United States)

    Dean, Edward J.; Glowinski, Roland

    During his outstanding career, Olivier Pironneau has addressed the solution of a large variety of problems from the Natural Sciences, Engineering and Finance to name a few, an evidence of his activity being the many articles and books he has written. It is the opinion of these authors, and former collaborators of O. Pironneau (cf. [DGP91]), that this chapter is well-suited to a volume honoring him. Indeed, the two pillars of the solution methodology that we are going to describe are: (1) a nonlinear least squares formulation in an appropriate Hilbert space, and (2) a mixed finite element approximation, reminiscent of the one used in [DGP91] and [GP79] for solving the Stokes and Navier-Stokes equations in their stream function-vorticity formulation; the contributions of O. Pironneau on the two above topics are well-known world wide. Last but not least, we will show that the solution method discussed here can be viewed as a solution method for a non-standard variant of the incompressible Navier-Stokes equations, an area where O. Pironneau has many outstanding and celebrated contributions (cf. [Pir89], for example).

  8. A least-squares minimisation approach to depth determination from numerical second horizontal self-potential anomalies

    Science.gov (United States)

    Abdelrahman, El-Sayed Mohamed; Soliman, Khalid; Essa, Khalid Sayed; Abo-Ezz, Eid Ragab; El-Araby, Tarek Mohamed

    2009-06-01

    This paper develops a least-squares minimisation approach to determine the depth of a buried structure from numerical second horizontal derivative anomalies obtained from self-potential (SP) data using filters of successive window lengths. The method is based on using a relationship between the depth and a combination of observations at symmetric points with respect to the coordinate of the projection of the centre of the source in the plane of the measurement points with a free parameter (graticule spacing). The problem of depth determination from second derivative SP anomalies has been transformed into the problem of finding a solution to a non-linear equation of the form f(z)=0. Formulas have been derived for horizontal cylinders, spheres, and vertical cylinders. Procedures are also formulated to determine the electric dipole moment and the polarization angle. The proposed method was tested on synthetic noisy and real SP data. In the case of the synthetic data, the least-squares method determined the correct depths of the sources. In the case of practical data (SP anomalies over a sulfide ore deposit, Sariyer, Turkey and over a Malachite Mine, Jefferson County, Colorado, USA), the estimated depths of the buried structures are in good agreement with the results obtained from drilling and surface geology.

  9. Face recognition based on two-dimensional discriminant sparse preserving projection

    Science.gov (United States)

    Zhang, Dawei; Zhu, Shanan

    2018-04-01

    In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model the manifold structure of the data, 2DDSPP constructs the within-class affinity graph and the between-class affinity graph by constrained least squares (LS) and an l1 norm minimization problem, respectively. Operating directly on image matrices, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of the samples, while keeping samples from different classes apart. The experimental results on the PIE and AR face databases show that 2DDSPP can achieve better recognition performance.

  10. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given

  11. Application of the Least Median Square-Minimum Covariance Determinant (LMS-MCD) Method in Principal Component Regression

    Directory of Open Access Journals (Sweden)

    I PUTU EKA IRAWAN

    2013-11-01

    Full Text Available Principal Component Regression is a method to overcome multicollinearity by combining principal component analysis with regression analysis. The calculation of classical principal component analysis is based on the regular covariance matrix. The covariance matrix is optimal if the data originate from a multivariate normal distribution, but it is very sensitive to the presence of outliers. The Least Median Square-Minimum Covariance Determinant (LMS-MCD) method is used as an alternative to overcome this problem. The purpose of this research is to compare Principal Component Regression (RKU) and the Least Median Square-Minimum Covariance Determinant (LMS-MCD) method in dealing with outliers. In this study, the LMS-MCD method has a bias and mean square error (MSE) smaller than those of the RKU parameters. Based on the difference of parameter estimators, there are still tests in which the difference of parameter estimators for the LMS-MCD method is greater than for the RKU method.

  12. A deterministic iterative least-squares algorithm for beam weight optimization in conformal radiotherapy

    International Nuclear Information System (INIS)

    Chen Yan; Michalski, Darek; Houser, Christopher; Galvin, James M.

    2002-01-01

    Currently, inverse treatment planning in conformal radiotherapy is, in part, a trial-and-error process due to the interplay of many competing criteria for obtaining a clinically acceptable dose distribution. A new method is developed for beam weight optimization that incorporates clinically relevant nonlinear and linear constraints. The process is driven by a nonlinear, quasi-quadratic objective function and the solution space is defined by a set of linear constraints. At each step of iteration, the optimization problem is linearized by a self-consistent approximation that is local to the existing dose distribution. The dose distribution is then improved by solving a series of constrained least-squares problems using an established method until all prescribed constraints are satisfied. This differs from the current approaches in that it does not rely on the search for the global minimum of a specific objective function. Essentially, our proposed objective function can be construed as a functional that comprises a class of dose-based quadratic objective functions. Empirical adjustment for appropriate model parameters in the construction of objective function is minimized, since these parameters are in effect adaptively adjusted during optimization. The method is robust in solving difficult clinical cases using either aperture or pencil beam based planning techniques for intensity-modulated radiation therapy. (author)
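
    As a much-simplified stand-in for one constrained least-squares step (not the authors' quasi-quadratic objective or their linear constraint set), non-negative least squares can be used to keep beam weights physical; the dose matrix and prescription below are made up.

```python
import numpy as np
from scipy.optimize import nnls

# Toy problem: find non-negative beam weights w so that the delivered dose
# D @ w approaches a uniform prescription d (made-up numbers throughout).
rng = np.random.default_rng(2)
n_voxels, n_beams = 200, 10
D = rng.uniform(0.0, 1.0, size=(n_voxels, n_beams))   # dose per unit beam weight
d = np.full(n_voxels, 60.0)                            # prescribed dose

w, residual = nnls(D, d)         # least squares subject to w >= 0
print(w)
print(residual)
```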

  13. DEM4-26, Least Square Fit for IBM PC by Deming Method

    International Nuclear Information System (INIS)

    Rinard, P.M.; Bosler, G.E.

    1989-01-01

    1 - Description of program or function: DEM4-26 is a generalized least square fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option copying the plot to the printer. 2 - Method of solution: Deming's method
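
    A minimal sketch of the linear case of Deming's method (errors in both variables, known error-variance ratio) is shown below; it covers only the straight-line fit, not the other functions built into DEM4-26, and the data and variance ratio are assumptions.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming (errors-in-both-variables) least-squares fit of y = a + b*x.
    delta is the assumed ratio of the y-error variance to the x-error variance."""
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    b = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                     + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    a = ym - b * xm
    return a, b

x = np.array([0.5, 1.0, 1.6, 2.1, 2.9, 3.5])
y = np.array([1.1, 2.0, 3.1, 4.3, 5.8, 7.1])
print(deming_fit(x, y))          # intercept and slope
```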

  14. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    International Nuclear Information System (INIS)

    Verdoolaege, Geert

    2015-01-01

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices

  15. On the treatment of ill-conditioned cases in the Monte Carlo library least-squares approach for inverse radiation analyzers

    Science.gov (United States)

    Meric, Ilker; Johansen, Geir A.; Holstad, Marie B.; Mattingly, John; Gardner, Robin P.

    2012-05-01

    Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately.
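
    For comparison, here is a minimal sketch of the truncated singular value decomposition mentioned above, applied to a generic ill-conditioned least-squares system with two highly correlated columns (stand-ins for correlated libraries; the matrices are synthetic, not gamma-ray library spectra).

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Least-squares solution of A x = b keeping only the k largest
    singular values (truncated SVD regularization)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.zeros_like(s)
    inv_s[:k] = 1.0 / s[:k]
    return Vt.T @ (inv_s * (U.T @ b))

# Ill-conditioned system: two nearly identical columns (highly correlated
# "libraries") plus one independent column.
rng = np.random.default_rng(3)
col = rng.random(100)
A = np.column_stack([col, col + 1e-8 * rng.random(100), rng.random(100)])
x_true = np.array([1.0, 2.0, 0.5])
b = A @ x_true + 1e-3 * rng.standard_normal(100)

print(tsvd_solve(A, b, k=2))                  # stable, correlated columns lumped
print(np.linalg.lstsq(A, b, rcond=None)[0])   # can be wildly uncertain
```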

  16. On the treatment of ill-conditioned cases in the Monte Carlo library least-squares approach for inverse radiation analyzers

    International Nuclear Information System (INIS)

    Meric, Ilker; Johansen, Geir A; Holstad, Marie B; Mattingly, John; Gardner, Robin P

    2012-01-01

    Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data could, on the other hand, be performed either through the single peak analysis or the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads to, through the minimization of the chi-square value, a set of linear equations which has to be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix will become ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning will also be unavoidable whenever the sample contains trace amounts of certain elements or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately. (paper)

  17. Applying the methodology of Design of Experiments to stability studies: a Partial Least Squares approach for evaluation of drug stability.

    Science.gov (United States)

    Jordan, Nika; Zakrajšek, Jure; Bohanec, Simona; Roškar, Robert; Grabnar, Iztok

    2018-05-01

    The aim of the present research is to show that the methodology of Design of Experiments can be applied to stability data evaluation, as they can be seen as multi-factor and multi-level experimental designs. Linear regression analysis is usually an approach for analyzing stability data, but multivariate statistical methods could also be used to assess drug stability during the development phase. Data from a stability study for a pharmaceutical product with hydrochlorothiazide (HCTZ) as an unstable drug substance was used as a case example in this paper. The design space of the stability study was modeled using Umetrics MODDE 10.1 software. We showed that a Partial Least Squares model could be used for a multi-dimensional presentation of all data generated in a stability study and for determination of the relationship among factors that influence drug stability. It might also be used for stability predictions and potentially for the optimization of the extent of stability testing needed to determine shelf life and storage conditions, which would be time and cost-effective for the pharmaceutical industry.

  18. Quantitative analysis of Ni2+/Ni3+ in Li[NixMnyCoz]O2 cathode materials: Non-linear least-squares fitting of XPS spectra

    Science.gov (United States)

    Fu, Zewei; Hu, Juntao; Hu, Wenlong; Yang, Shiyu; Luo, Yunfeng

    2018-05-01

    Quantitative analysis of Ni2+/Ni3+ using X-ray photoelectron spectroscopy (XPS) is important for evaluating the crystal structure and electrochemical performance of lithium-nickel-cobalt-manganese oxide (Li[NixMnyCoz]O2, NMC). However, quantitative analysis based on Gaussian/Lorentzian (G/L) peak fitting suffers from challenges of reproducibility and effectiveness. In this study, Ni2+ and Ni3+ standard samples and a series of NMC samples with different Ni doping levels were synthesized. The Ni2+/Ni3+ ratios in NMC were quantitatively analyzed by non-linear least-squares fitting (NLLSF). Two Ni 2p overall spectra of synthesized Li[Ni0.33Mn0.33Co0.33]O2 (NMC111) and bulk LiNiO2 were used as the Ni2+ and Ni3+ reference standards. Compared to G/L peak fitting, the fitting parameters required no adjustment, meaning that the spectral fitting process was free from operator dependence and the reproducibility was improved. Comparison of the residual standard deviation (STD) showed that the fitting quality of NLLSF was superior to that of G/L peak fitting. Overall, these findings confirmed the reproducibility and effectiveness of the NLLSF method in XPS quantitative analysis of the Ni2+/Ni3+ ratio in Li[NixMnyCoz]O2 cathode materials.

  19. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common for all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to the velocity errors, 2) the regularized plane-wave LSM is more robust in the presence of velocity errors, and 3) LSM achieves both computational and IO saving by plane-wave encoding compared to shot-domain LSM for the models tested.

  20. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, Dongliang

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.

  1. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    International Nuclear Information System (INIS)

    Xu, Yu-Lin.

    1988-01-01

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, which are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and either linear in the adjustment parameters or linearized by developing them in Taylor series by first-order approximation, is inadequate in the orbit problem. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied in this problem. The normal equations were first solved by Newton's scheme. Newton's method was modified to yield a definitive solution in the case the normal approach fails, by combination with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution. The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered

  2. A non-linear piezoelectric actuator calibration using N-dimensional Lissajous figure

    Science.gov (United States)

    Albertazzi, A.; Viotti, M. R.; Veiga, C. L. N.; Fantin, A. V.

    2016-08-01

    Piezoelectric translators (PZTs) are very often used as phase shifters in interferometry. However, they typically present a non-linear behavior and strong hysteresis. The use of an additional resistive or capacitive sensor makes it possible to linearize the response of the PZT by feedback control. This approach works well, but makes the device more complex and expensive. A less expensive approach uses a non-linear calibration. In this paper, the authors used data from at least five interferograms to form N-dimensional Lissajous figures to establish the actual relationship between the applied voltages and the resulting phase shifts [1]. N-dimensional Lissajous figures are formed when N sinusoidal signals are combined in an N-dimensional space, where one signal is assigned to each axis. It can be verified that the resulting N-dimensional ellipse lies in a 2D plane. By fitting an ellipse equation to the resulting 2D ellipse it is possible to accurately compute the resulting phase value for each interferogram. In this paper, the relationship between the resulting phase shift and the applied voltage is simultaneously established for a set of 12 increments by a fourth degree polynomial. The results in speckle interferometry show that, after two or three iterations, the calibration error is usually smaller than 1°.
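
    A minimal sketch of the ellipse-fitting step for the two-signal (2D) case is given below; it uses a generic unit-norm conic fit rather than the authors' N-dimensional procedure, and the amplitudes, offsets and phase shift are assumptions.

```python
import numpy as np

def fit_conic_lsq(x, y):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2-D
    points, with the unit-norm constraint on the coefficient vector."""
    D = np.column_stack([x ** 2, x * y, y ** 2, x, y, np.ones_like(x)])
    # The smallest right singular vector minimizes ||D p|| subject to ||p|| = 1.
    return np.linalg.svd(D, full_matrices=False)[2][-1]

# Two phase-shifted sinusoidal signals trace out a Lissajous ellipse.
frame_phase = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
shift = 0.7                                   # assumed phase shift between signals
x = 1.0 + 0.8 * np.cos(frame_phase)
y = 1.2 + 0.9 * np.cos(frame_phase + shift)

p = fit_conic_lsq(x, y)
print(p)                                      # conic (ellipse) coefficients
```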

  3. Cognitive assessment in mathematics with the least squares distance method.

    Science.gov (United States)

    Ma, Lin; Çetin, Emre; Green, Kathy E

    2012-01-01

    This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on the attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, the results demonstrated the validity of the cognitive attributes in terms of the revised set of 17 attributes. The girls performed similarly to the boys on the attributes. The students from the two eastern regions significantly underperformed on most attributes.

  4. Application of new least-squares methods for the quantitative infrared analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.

    1982-01-01

    Improvements have been made in previous least-squares regression analyses of infrared spectra for the quantitative estimation of concentrations of multicomponent mixtures. Spectral baselines are fitted by least-squares methods, and overlapping spectral features are accounted for in the fitting procedure. Selection of peaks above a threshold value reduces computation time and data storage requirements. Four weighted least-squares methods incorporating different baseline assumptions were investigated using FT-IR spectra of the three pure xylene isomers and their mixtures. By fitting only regions of the spectra that follow Beer's Law, accurate results can be obtained using three of the fitting methods even when baselines are not corrected to zero. Accurate results can also be obtained using one of the fits even in the presence of Beer's Law deviations. This is a consequence of pooling the weighted results for each spectral peak such that the greatest weighting is automatically given to those peaks that adhere to Beer's Law. It has been shown with the xylene spectra that semiquantitative results can be obtained even when all the major components are not known or when expected components are not present. This improvement over previous methods greatly expands the utility of quantitative least-squares analyses
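
    A minimal classical least-squares sketch in the spirit of the above (a mixture spectrum modelled as a linear combination of pure-component spectra plus a linear baseline) is shown below; the synthetic Gaussian bands and concentrations are assumptions, not the xylene data.

```python
import numpy as np

rng = np.random.default_rng(4)
wn = np.linspace(0.0, 1.0, 400)                     # spectral axis (arbitrary units)
band = lambda c, w: np.exp(-0.5 * ((wn - c) / w) ** 2)

# Pure-component spectra and a synthetic mixture obeying Beer's law,
# plus a linear baseline and a little noise.
pure = np.column_stack([band(0.3, 0.02), band(0.5, 0.03), band(0.7, 0.02)])
conc_true = np.array([0.2, 0.5, 0.3])
mixture = pure @ conc_true + (0.05 + 0.02 * wn) + 1e-3 * rng.standard_normal(wn.size)

# Design matrix: pure spectra plus baseline offset and slope terms.
A = np.column_stack([pure, np.ones_like(wn), wn])
coeffs = np.linalg.lstsq(A, mixture, rcond=None)[0]
print(coeffs[:3])                                   # estimated concentrations
```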

  5. Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation

    International Nuclear Information System (INIS)

    Blonigan, Patrick J.; Wang, Qiqi

    2014-01-01

    Highlights: •Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. •The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. •Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. - Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters

  6. Novel target design algorithm for two-dimensional optical storage (TwoDOS)

    NARCIS (Netherlands)

    Huang, Li; Chong, T.C.; Vijaya Kumar, B.V.K.; Kobori, H.

    2004-01-01

    In this paper we introduce the Hankel transform based channel model of Two-Dimensional Optical Storage (TwoDOS) system. Based on this model, the two-dimensional (2D) minimum mean-square error (MMSE) equalizer has been derived and applied to some simple but common cases. The performance of the 2D

  7. Commutative discrete filtering on unstructured grids based on least-squares techniques

    International Nuclear Information System (INIS)

    Haselbacher, Andreas; Vasilyev, Oleg V.

    2003-01-01

    The present work is concerned with the development of commutative discrete filters for unstructured grids and contains two main contributions. First, building on the work of Marsden et al. [J. Comp. Phys. 175 (2002) 584], a new commutative discrete filter based on least-squares techniques is constructed. Second, a new analysis of the discrete commutation error is carried out. The analysis indicates that the discrete commutation error is not only dependent on the number of vanishing moments of the filter weights, but also on the order of accuracy of the discrete gradient operator. The results of the analysis are confirmed by grid-refinement studies

  8. Non-stationary least-squares complex decomposition for microseismic noise attenuation

    Science.gov (United States)

    Chen, Yangkang

    2018-06-01

    Microseismic data processing and imaging are crucial for subsurface real-time monitoring during the hydraulic fracturing process. Unlike active-source seismic events or large-scale earthquake events, a microseismic event is usually of very small magnitude, which makes its detection challenging. The biggest problem with microseismic data is the low signal-to-noise ratio. Because of the small energy difference between the effective microseismic signal and the ambient noise, the effective signals are usually buried in strong random noise. I propose a microseismic denoising algorithm that is based on decomposing a microseismic trace into an ensemble of components using least-squares inversion. Based on the predictive property of the useful microseismic event along the time direction, the random noise can be filtered out via least-squares fitting of multiple damping exponential components. The method is flexible and almost automated, since the only parameter that needs to be defined is a decomposition number. I use some synthetic and real data examples to demonstrate the potential of the algorithm in processing complicated microseismic data sets.
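
    A simplified sketch of the idea (least-squares fitting of a small dictionary of damped exponential components, with the fit taken as the denoised trace) is given below; the damping rates and frequencies are fixed by assumption here, whereas the actual algorithm determines its components adaptively.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 500)
clean = np.exp(-3.0 * t) * np.cos(2.0 * np.pi * 20.0 * t)   # synthetic "event"
trace = clean + 0.3 * rng.standard_normal(t.size)           # buried in random noise

# Fixed dictionary of damped cosine/sine components (an assumption); the fit
# itself is a linear least-squares inversion for the component amplitudes.
freqs = [10.0, 20.0, 30.0]
damps = [1.0, 3.0, 6.0]
atoms = [np.exp(-d * t) * trig(2.0 * np.pi * f * t)
         for d in damps for f in freqs for trig in (np.cos, np.sin)]
A = np.column_stack(atoms)

coeffs = np.linalg.lstsq(A, trace, rcond=None)[0]
denoised = A @ coeffs
print(np.linalg.norm(trace - clean), np.linalg.norm(denoised - clean))
```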

  9. Pure and entangled N=4 linear supermultiplets and their one-dimensional sigma-models

    International Nuclear Information System (INIS)

    Gonzales, Marcelo; Iga, Kevin; Khodaee, Sadi; Toppan, Francesco

    2012-01-01

    “Pure” homogeneous linear supermultiplets (minimal and non-minimal) of the N=4-extended one-dimensional supersymmetry algebra are classified. “Pure” means that they admit at least one graphical presentation (the corresponding graph/graphs are known as “Adinkras”). We further prove the existence of “entangled” linear supermultiplets which do not admit a graphical presentation, by constructing an explicit example of an entangled N=4 supermultiplet with field content (3, 8, 5). It interpolates between two inequivalent pure N=4 supermultiplets with the same field content. The one-dimensional N=4 sigma-model with a three-dimensional target based on the entangled supermultiplet is presented. The distinction between the notion of equivalence for pure supermultiplets and the notion of equivalence for their associated graphs (Adinkras) is discussed. Discrete properties such as “chirality” and “coloring” can discriminate different supermultiplets. The tools used in our classification include, among others, the notion of field content, connectivity symbol, commuting group, node choice group, and so on.

  10. Modeling and control of PEMFC based on least squares support vector machines

    International Nuclear Information System (INIS)

    Li Xi; Cao Guangyi; Zhu Xinjian

    2006-01-01

    The proton exchange membrane fuel cell (PEMFC) is one of the most important power supplies. The operating temperature of the stack is an important controlled variable, which impacts the performance of the PEMFC. In order to improve the generating performance of the PEMFC, prolong its life and guarantee the safety, credibility and low cost of the PEMFC system, it must be controlled efficiently. A nonlinear predictive control algorithm based on a least squares support vector machine (LS-SVM) model is presented in this paper for a family of complex systems with severe nonlinearity, such as the PEMFC. The nonlinear off-line model of the PEMFC is built by an LS-SVM model with a radial basis function (RBF) kernel so as to implement nonlinear predictive control of the plant. During PEMFC operation, the off-line model is linearized at each sampling instant, and the generalized predictive control (GPC) algorithm is applied to the predictive control of the plant. Experimental results demonstrate the effectiveness and advantages of this approach.
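
    A generic LS-SVM regression sketch with an RBF kernel is given below, solving the standard dual linear system; it is not the paper's PEMFC model or its predictive controller, and the toy data, kernel width and regularization constant are assumptions.

```python
import numpy as np

def lssvm_train(X, y, gamma=100.0, sigma=0.5):
    """Train LS-SVM regression with an RBF kernel by solving the dual system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                     # bias b, dual coefficients alpha

def lssvm_predict(Xq, X, b, alpha, sigma=0.5):
    sq = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) @ alpha + b

# Toy nonlinear plant map learned from noisy samples.
rng = np.random.default_rng(6)
X = rng.uniform(-1.0, 1.0, size=(60, 2))
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(60)

b, alpha = lssvm_train(X, y)
print(lssvm_predict(np.array([[0.2, -0.4]]), X, b, alpha))
```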

  11. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976

  12. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.

  13. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.

  14. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav; Sinha, Mrinal; Schuster, Gerard T.

    2014-01-01

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.

  15. Measurement of 3-D Vibrational Motion by Dynamic Photogrammetry Using Least-Square Image Matching for Sub-Pixel Targeting to Improve Accuracy

    Science.gov (United States)

    Lee, Hyoseong; Rhee, Huinam; Oh, Jae Hong; Park, Jin Ho

    2016-01-01

    This paper deals with an improved methodology to measure three-dimensional dynamic displacements of a structure by digital close-range photogrammetry. A series of stereo images of a vibrating structure fitted with targets is taken at specified intervals by using two daily-use cameras. A new methodology is proposed to accurately trace the spatial displacement of each target in three-dimensional space. This method combines correlation and least-squares image matching so that sub-pixel targeting can be obtained to increase the measurement accuracy. Collinearity and space resection theory are used to determine the interior and exterior orientation parameters. To verify the proposed method, experiments have been performed to measure displacements of a cantilevered beam excited by an electrodynamic shaker, which is vibrating in a complex configuration with mixed bending and torsional motions simultaneously at multiple frequencies. The results by the present method showed good agreement with the measurements by two laser displacement sensors. The proposed methodology only requires inexpensive daily-use cameras, and can remotely detect the dynamic displacement of a structure vibrating in a complex three-dimensional deflection shape up to sub-pixel accuracy. It has abundant potential applications to various fields, e.g., remote vibration monitoring of an inaccessible or dangerous facility. PMID:26978366

  16. Skeletonized Least Squares Wave Equation Migration

    KAUST Repository

    Zhan, Ge

    2010-10-17

    The theory for skeletonized least squares wave equation migration (LSM) is presented. The key idea is that, for an assumed velocity model, the source-side Green's function and the geophone-side Green's function are computed by a numerical solution of the wave equation. Only the early arrivals of these Green's functions are saved and skeletonized to form the migration Green's function (MGF) by convolution. Then the migration image is obtained by a dot product between the recorded shot gathers and the MGF for every trial image point. The key to an efficient implementation of iterative LSM is that at each conjugate gradient iteration, the MGF is reused and no new finite-difference (FD) simulations are needed to get the updated migration image. It is believed that this procedure combined with phase-encoded multi-source technology will allow for the efficient computation of wave equation LSM images in less time than that of conventional reverse time migration (RTM).

  17. Three-dimensional rail-current distribution near the armature of simple, square-bore, two-rail railguns

    International Nuclear Information System (INIS)

    Beno, J.H.

    1991-01-01

    In this paper the vector potential is solved as a three-dimensional boundary-value problem for a conductor geometry consisting of square-bore railgun rails and a stationary armature. The conductors are assumed to be infinitely conducting, and perfect contact is assumed between the rails and the armature. From the vector potential solution, the surface current distribution is inferred.

  18. A Monte Carlo Library Least Square approach in the Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) process in bulk coal samples

    Science.gov (United States)

    Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis

    2017-01-01

    A new Monte-Carlo Library Least Square (MCLLS) approach for treating the non-linear radiation analysis problem in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the 3H(2H,n)4He reaction. The prompt gamma ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as neutron moderator along with iron and lead as neutron and gamma ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event-by-event. The GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Square (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
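
    The library least-squares step can be pictured as expressing the measured spectrum as a non-negative combination of single-element library spectra. The sketch below uses made-up Gaussian "libraries" and scipy's non-negative least squares in place of the actual GEANT4-generated libraries and MCLLS machinery.

```python
# Sketch of a linear library least-squares fit: the measured spectrum is modeled
# as a non-negative combination of single-element library spectra.
import numpy as np
from scipy.optimize import nnls

channels = np.arange(1024)

def peak(center, width):
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

# Hypothetical single-element libraries (columns); real ones would come from simulation.
library = np.column_stack([peak(200, 8), peak(450, 10), peak(700, 12)])
true_amounts = np.array([5.0, 2.0, 1.0])
measured = library @ true_amounts + 0.05 * np.random.default_rng(1).normal(size=channels.size)

amounts, residual_norm = nnls(library, measured)   # non-negative least squares
print(amounts)                                      # close to true_amounts
```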

  19. Attenuation compensation for least-squares reverse time migration using the viscoacoustic-wave equation

    KAUST Repository

    Dutta, Gaurav

    2014-10-01

    Strong subsurface attenuation leads to distortion of amplitudes and phases of seismic waves propagating inside the earth. Conventional acoustic reverse time migration (RTM) and least-squares reverse time migration (LSRTM) do not account for this distortion, which can lead to defocusing of migration images in highly attenuative geologic environments. To correct for this distortion, we used a linearized inversion method, denoted as Qp-LSRTM. During the least-squares iterations, we used a linearized viscoacoustic modeling operator for forward modeling. The adjoint equations were derived using the adjoint-state method for back propagating the residual wavefields. The merit of this approach compared with conventional RTM and LSRTM was that Qp-LSRTM compensated for the amplitude loss due to attenuation and could produce images with better balanced amplitudes and higher resolution below highly attenuative layers. Numerical tests on synthetic and field data illustrated the advantages of Qp-LSRTM over RTM and LSRTM when the recorded data had strong attenuation effects. Similar to standard LSRTM, the sensitivity tests for background velocity and Qp errors revealed that the liability of this method is the requirement for smooth and accurate migration velocity and attenuation models.

  20. Attenuation compensation for least-squares reverse time migration using the viscoacoustic-wave equation

    KAUST Repository

    Dutta, Gaurav; Schuster, Gerard T.

    2014-01-01

    Strong subsurface attenuation leads to distortion of amplitudes and phases of seismic waves propagating inside the earth. Conventional acoustic reverse time migration (RTM) and least-squares reverse time migration (LSRTM) do not account for this distortion, which can lead to defocusing of migration images in highly attenuative geologic environments. To correct for this distortion, we used a linearized inversion method, denoted as Qp-LSRTM. During the least-squares iterations, we used a linearized viscoacoustic modeling operator for forward modeling. The adjoint equations were derived using the adjoint-state method for back propagating the residual wavefields. The merit of this approach compared with conventional RTM and LSRTM was that Qp-LSRTM compensated for the amplitude loss due to attenuation and could produce images with better balanced amplitudes and higher resolution below highly attenuative layers. Numerical tests on synthetic and field data illustrated the advantages of Qp-LSRTM over RTM and LSRTM when the recorded data had strong attenuation effects. Similar to standard LSRTM, the sensitivity tests for background velocity and Qp errors revealed that the liability of this method is the requirement for smooth and accurate migration velocity and attenuation models.

  1. Prediction of long-residue properties of potential blends from mathematically mixed infrared spectra of pure crude oils by partial least-squares regression models

    NARCIS (Netherlands)

    de Peinder, P.; Visser, T.; Petrauskas, D.D.; Salvatori, F.; Soulimani, F.; Weckhuysen, B.M.

    2009-01-01

    Research has been carried out to determine the feasibility of partial least-squares (PLS) regression models to predict the long-residue (LR) properties of potential blends from infrared (IR) spectra that have been created by linearly co-adding the IR spectra of crude oils. The study is the follow-up

  2. Analysis of total least squares in estimating the parameters of a mortar trajectory

    Energy Technology Data Exchange (ETDEWEB)

    Lau, D.L.; Ng, L.C.

    1994-12-01

    Least Squares (LS) is a method of curve fitting used with the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided modestly improved results, on the order of 10%, over the LS method.
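
    For readers unfamiliar with the distinction, the following sketch contrasts ordinary LS with the classical SVD construction of TLS on a generic linear model with noise in both the data matrix and the observation vector; the mortar-trajectory observation equations themselves are not reproduced.

```python
# Sketch comparing ordinary least squares (error only in b) with total least
# squares (error in both A and b), using the classical SVD construction of TLS.
import numpy as np

rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
A_clean = rng.normal(size=(100, 2))
b_clean = A_clean @ x_true

A = A_clean + 0.05 * rng.normal(size=A_clean.shape)   # noisy data matrix
b = b_clean + 0.05 * rng.normal(size=b_clean.shape)   # noisy observations

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# TLS: smallest right singular vector of the augmented matrix [A | b].
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:-1] / v[-1]

print("LS :", x_ls)
print("TLS:", x_tls)
```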

  3. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

    Science.gov (United States)

    Samiei-Esfahany, Sami; Hanssen, Ramon F.

    2012-01-01

    The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

  4. Adaptive Noise Canceling Menggunakan Algoritma Least Mean Square (Lms)

    OpenAIRE

    Nardiana, Anita; Sumaryono, Sari Sujoko

    2011-01-01

    Noise is inevitable in communication systems. In some cases, noise can disturb the signal. It is very annoying as the received signal is jumbled with the noise itself. To reduce or remove noise, low-pass, high-pass or band-pass filters can solve the problem, but these methods cannot reach a maximum standard. One of the alternatives to solve the problem is by using an adaptive filter. An adaptive algorithm frequently used is the Least Mean Square (LMS) algorithm, which is compatible with Finite Impulse Response (FIR) filters. T...
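
    A minimal LMS noise-canceller sketch, with an invented correlated noise path and step size, is shown below; it follows the standard Widrow-Hoff update rather than any specific configuration from the paper.

```python
# Sketch of an LMS adaptive noise canceller: a reference noise input is filtered
# adaptively and subtracted from the primary (signal + noise) channel.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.01 * t)                    # desired signal
reference = rng.normal(size=n)                           # noise reference input
noise = np.convolve(reference, [0.6, 0.3, 0.1], mode="full")[:n]  # assumed noise path
primary = signal + noise                                 # what the sensor records

order, mu = 8, 0.01                                      # filter length and step size
w = np.zeros(order)
output = np.zeros(n)
for k in range(order, n):
    x = reference[k - order + 1:k + 1][::-1]             # most recent reference samples
    e = primary[k] - np.dot(w, x)                        # error = cleaned signal estimate
    w += 2 * mu * e * x                                  # LMS (Widrow-Hoff) update
    output[k] = e

print(np.mean((output[1000:] - signal[1000:]) ** 2))     # residual noise power after adaptation
```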

  5. Online Identification of Multivariable Discrete Time Delay Systems Using a Recursive Least Square Algorithm

    Directory of Open Access Journals (Sweden)

    Saïda Bedoui

    2013-01-01

    Full Text Available This paper addresses the problem of simultaneous identification of linear discrete-time delay multivariable systems. This problem involves both the estimation of the time delays and of the dynamic parameter matrices. In fact, we suggest a new formulation of this problem that defines the time delays and the dynamic parameters in the same estimated vector and builds the corresponding observation vector. Then, we use this formulation to propose a new method to identify the time delays and the parameters of these systems using the least squares approach. Convergence conditions and statistical properties of the proposed method are also developed. Simulation results are presented to illustrate the performance of the proposed method. An application of the developed approach to a compact disc player arm is also suggested in order to validate the simulation results.
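
    The joint delay/parameter formulation of the paper is not reproduced here, but the recursive least squares machinery it builds on can be sketched on a simple ARX-type regression with invented parameters:

```python
# Generic recursive least squares (RLS) update on a simple ARX-type regression;
# the paper's joint time-delay/parameter formulation is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([0.7, -0.2, 0.5])        # unknown parameters (assumed for the example)
n = 500
u = rng.normal(size=n)                          # input signal
y = np.zeros(n)
for k in range(2, n):
    phi = np.array([y[k - 1], y[k - 2], u[k - 1]])
    y[k] = phi @ theta_true + 0.01 * rng.normal()

theta = np.zeros(3)
P = 1000.0 * np.eye(3)                          # large initial covariance
for k in range(2, n):
    phi = np.array([y[k - 1], y[k - 2], u[k - 1]])
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (y[k] - phi @ theta)
    P = P - np.outer(gain, phi @ P)

print(theta)                                    # close to theta_true
```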

  6. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    KAUST Repository

    Migliorati, Giovanni

    2015-08-28

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise / offset. We prove conditions between the number of sampling points and the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including uniform and Chebyshev.
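
    As a concrete (and much simplified) instance of the setting studied here, the sketch below fits a polynomial by discrete least squares to noisy evaluations of a target function at uniformly distributed random points; the target, noise level and degree are arbitrary choices for illustration.

```python
# Sketch of discrete least-squares approximation of a target function from noisy
# evaluations at random points drawn from a uniform sampling measure on [-1, 1].
import numpy as np

rng = np.random.default_rng(0)
target = lambda x: np.exp(x) * np.sin(3 * x)

m, degree = 200, 8                              # number of samples, polynomial degree
x = rng.uniform(-1.0, 1.0, size=m)              # random evaluation points
y = target(x) + 0.01 * rng.normal(size=m)       # noisy pointwise evaluations

V = np.polynomial.legendre.legvander(x, degree) # Legendre basis (well conditioned on [-1, 1])
coef = np.linalg.lstsq(V, y, rcond=None)[0]     # discrete least-squares projection

x_test = np.linspace(-1.0, 1.0, 1000)
approx = np.polynomial.legendre.legval(x_test, coef)
print(np.max(np.abs(approx - target(x_test)))) # approximation error in the sup norm
```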

  7. Application of principal component regression and partial least squares regression in ultraviolet spectrum water quality detection

    Science.gov (United States)

    Li, Jiangtong; Luo, Yongdao; Dai, Honglin

    2018-01-01

    Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution is becoming more and more frequent, which directly affects human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) is becoming the predominant technique; however, in some special cases, PLSR analysis produces considerable errors. In order to solve this problem, the traditional principal component regression (PCR) analysis method was improved by using the principle of PLSR in this paper. The experimental results show that for some special experimental data sets, the improved PCR analysis method performs better than PLSR. PCR and PLSR are the focus of this paper. Firstly, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted by using the principle of PLSR. Secondly, the linear regression analysis on the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral data set is calculated by PLSR and the improved PCR, and the two results are analyzed and compared: the improved PCR and PLSR are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can be used in ultraviolet spectral analysis of water, but for data near the detection limit, the improved PCR gives better results than PLSR.
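
    The improved PCR variant described above is not specified in detail here, so the sketch below only contrasts standard PCR (PCA followed by linear regression) with PLSR on synthetic spectra using scikit-learn; the data and component count are invented for illustration.

```python
# Sketch contrasting principal component regression (PCR) with partial least
# squares regression (PLSR) on synthetic spectra; not the paper's improved PCR.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 80, 200
concentration = rng.uniform(0.0, 1.0, size=n_samples)           # hypothetical analyte level
spectra = np.outer(concentration, np.linspace(1.0, 2.0, n_wavelengths))
spectra += 0.05 * rng.normal(size=spectra.shape)                 # measurement noise

n_components = 5
pcr = make_pipeline(PCA(n_components=n_components), LinearRegression())
pls = PLSRegression(n_components=n_components)

print("PCR  cross-validated R^2:", cross_val_score(pcr, spectra, concentration, cv=5).mean())
print("PLSR cross-validated R^2:", cross_val_score(pls, spectra, concentration, cv=5).mean())
```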

  8. Q-Least Squares Reverse Time Migration with Viscoacoustic Deblurring Filters

    KAUST Repository

    Chen, Yuqing; Dutta, Gaurav; Dai, Wei; Schuster, Gerard T.

    2017-01-01

    Viscoacoustic least-squares reverse time migration (Q-LSRTM) linearly inverts for the subsurface reflectivity model from lossy data. Compared to the conventional migration methods, it can compensate for the amplitude loss in the migrated images because of the strong subsurface attenuation and can produce reflectors that are accurately positioned in depth. However, the adjoint Q propagators used for backward propagating the residual data are also attenuative. Thus, the inverted images from Q-LSRTM are often observed to have lower resolution when compared to the benchmark acoustic LSRTM images from acoustic data. To increase the resolution and accelerate the convergence of Q-LSRTM, we propose using viscoacoustic deblurring filters as a preconditioner for Q-LSRTM. These filters can be estimated by matching a simulated migration image to its reference reflectivity model. Numerical tests on synthetic and field data demonstrate that Q-LSRTM combined with viscoacoustic deblurring filters can produce images with higher resolution and more balanced amplitudes than images from acoustic RTM, acoustic LSRTM and Q-LSRTM when there is strong attenuation in the background medium. The proposed preconditioning method is also shown to improve the convergence rate of Q-LSRTM by more than 30 percent in some cases and significantly compensate for the lossy artifacts in RTM images.

  9. Q-Least Squares Reverse Time Migration with Viscoacoustic Deblurring Filters

    KAUST Repository

    Chen, Yuqing

    2017-08-02

    Viscoacoustic least-squares reverse time migration (Q-LSRTM) linearly inverts for the subsurface reflectivity model from lossy data. Compared to the conventional migration methods, it can compensate for the amplitude loss in the migrated images because of the strong subsurface attenuation and can produce reflectors that are accurately positioned in depth. However, the adjoint Q propagators used for backward propagating the residual data are also attenuative. Thus, the inverted images from Q-LSRTM are often observed to have lower resolution when compared to the benchmark acoustic LSRTM images from acoustic data. To increase the resolution and accelerate the convergence of Q-LSRTM, we propose using viscoacoustic deblurring filters as a preconditioner for Q-LSRTM. These filters can be estimated by matching a simulated migration image to its reference reflectivity model. Numerical tests on synthetic and field data demonstrate that Q-LSRTM combined with viscoacoustic deblurring filters can produce images with higher resolution and more balanced amplitudes than images from acoustic RTM, acoustic LSRTM and Q-LSRTM when there is strong attenuation in the background medium. The proposed preconditioning method is also shown to improve the convergence rate of Q-LSRTM by more than 30 percent in some cases and significantly compensate for the lossy artifacts in RTM images.

  10. Fabrication of long linear arrays of plastic optical fibers with squared ends for the use of code mark printing lithography

    Science.gov (United States)

    Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya

    2017-05-01

    Two-dimensional code marks are often used for production management. In particular, in the production lines of liquid-crystal-display panels and similar devices, data on fabrication processes such as the production number and process conditions are written on each substrate or device in detail and used for quality management. For this reason, lithography systems specialized in code mark printing have been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, the development of a low-cost exposure system using light emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly expected. In past research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Based on this progress, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. At most 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120°C for 7 min. Fiber sizes were homogeneous within 496 ± 4 μm. In addition, the average light leak was improved from 34.4 to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. Square matrix arrays necessary for printing code marks will be obtained by stacking the newly fabricated linear arrays.

  11. Seismic time-lapse imaging using Interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal

    2016-09-06

    One of the problems with 4D surveys is that the environmental conditions change over time so that the experiment is insufficiently repeatable. To mitigate this problem, we propose the use of interferometric least-squares migration (ILSM) to estimate the migration image for the baseline and monitor surveys. Here, a known reflector is used as the reference reflector for ILSM. Results with synthetic and field data show that ILSM can eliminate artifacts caused by non-repeatability in time-lapse surveys.

  12. Seismic time-lapse imaging using Interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal; Schuster, Gerard T.

    2016-01-01

    One of the problems with 4D surveys is that the environmental conditions change over time so that the experiment is insufficiently repeatable. To mitigate this problem, we propose the use of interferometric least-squares migration (ILSM) to estimate the migration image for the baseline and monitor surveys. Here, a known reflector is used as the reference reflector for ILSM. Results with synthetic and field data show that ILSM can eliminate artifacts caused by non-repeatability in time-lapse surveys.

  13. Partial least squares path modeling basic concepts, methodological issues and applications

    CERN Document Server

    Noonan, Richard

    2017-01-01

    This edited book presents the recent developments in partial least squares-path modeling (PLS-PM) and provides a comprehensive overview of the current state of the most advanced research related to PLS-PM. The first section of this book emphasizes the basic concepts and extensions of the PLS-PM method. The second section discusses the methodological issues that are the focus of the recent development of the PLS-PM method. The third part discusses the real world application of the PLS-PM method in various disciplines. The contributions from expert authors in the field of PLS focus on topics such as the factor-based PLS-PM, the perfect match between a model and a mode, quantile composite-based path modeling (QC-PM), ordinal consistent partial least squares (OrdPLSc), non-symmetrical composite-based path modeling (NSCPM), modern view for mediation analysis in PLS-PM, a multi-method approach for identifying and treating unobserved heterogeneity, multigroup analysis (PLS-MGA), the assessment of the common method b...

  14. The Total Least Squares Problem in AX approximate to B: A New Classification with the Relationship to the Classical Works

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, I.; Plešinger, Martin; Sima, D.M.; Strakoš, Z.; Huffel van, S.

    2011-01-01

    Vol. 32, No. 3 (2011), pp. 748-770 ISSN 0895-4798 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA ČR(CZ) GA201/09/0917 Program:GA Institutional research plan: CEZ:AV0Z10300504 Keywords: total least squares * multiple right-hand sides * linear approximation problems * orthogonally invariant problems * orthogonal regression * errors-in-variables modeling Subject RIV: BA - General Mathematics Impact factor: 1.368, year: 2011

  15. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2018-03-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
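
    The full ECME algorithm with an AR(p) colored-noise model is beyond a short example, but the core iteratively reweighted least squares idea with Student's t weights can be sketched as follows; the design matrix, degree of freedom and outlier pattern are invented for illustration.

```python
# Minimal sketch of iteratively reweighted least squares with Student-t weights
# for outlier-robust linear regression; the AR(p) colored-noise part and the full
# ECME algorithm of the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta_true = np.array([0.5, 2.0, -1.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)
y[::25] += 3.0                                   # a few gross outliers

nu = 3.0                                         # degrees of freedom (held fixed here)
beta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary LS start
for _ in range(50):
    r = y - X @ beta
    sigma2 = np.mean(r ** 2)                     # crude scale estimate for the sketch
    w = (nu + 1.0) / (nu + r ** 2 / sigma2)      # t-distribution weights downweight outliers
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print(beta)                                      # close to beta_true despite the outliers
```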

  16. Third-order least squares modelling of milling state term for improved computation of stability boundaries

    Directory of Open Access Journals (Sweden)

    C.G. Ozoegwu

    2016-01-01

    Full Text Available The general least squares model for the milling process state term is presented. A discrete map for milling stability analysis that is based on the third-order case of the presented general least squares milling state term model is first studied and compared with its third-order counterpart that is based on the interpolation theory. Both the numerical rate of convergence and the chatter stability results of the two maps are compared using the single degree of freedom (1DOF) milling model. The numerical rate of convergence of the presented third-order model is also studied using the two degree of freedom (2DOF) milling process model. The comparison showed that stability results from the two maps agree closely, but the presented map requires fewer calculations, leading to about 30% savings in computational time (CT). It is seen in earlier works that the accuracy of milling stability analysis using the full-discretization method rises from the first-order theory to the second-order theory and continues to rise to the third-order theory. The present work confirms this trend. In conclusion, the method presented in this work will enable fast and accurate computation of stability diagrams for use by machinists.

  17. Robust anti-synchronization of uncertain chaotic systems based on multiple-kernel least squares support vector machine modeling

    International Nuclear Information System (INIS)

    Chen Qiang; Ren Xuemei; Na Jing

    2011-01-01

    Highlights: (1) Model uncertainty of the system is approximated by a multiple-kernel LSSVM. (2) Approximation errors and disturbances are compensated for in the controller design. (3) Asymptotic anti-synchronization is achieved under model uncertainty and disturbances. Abstract: In this paper, we propose a robust anti-synchronization scheme based on multiple-kernel least squares support vector machine (MK-LSSVM) modeling for two uncertain chaotic systems. The multiple-kernel regression, which is a linear combination of basic kernels, is designed to approximate system uncertainties by constructing a multiple-kernel Lagrangian function and computing the corresponding regression parameters. Then, a robust feedback control based on MK-LSSVM modeling is presented and an improved update law is employed to estimate the unknown bound of the approximation error. The proposed control scheme can guarantee the asymptotic convergence of the anti-synchronization errors in the presence of system uncertainties and external disturbances. Numerical examples are provided to show the effectiveness of the proposed method.

  18. Three-dimensional study of flow past a square cylinder at low Reynolds numbers

    International Nuclear Information System (INIS)

    Saha, A.K.; Biswas, G.; Muralidhar, K.

    2003-01-01

    The spatial evolution of vortices and the transition to three-dimensionality in the wake of a square cylinder have been numerically studied. A Reynolds number range between 150 and 500 has been considered. Starting from the two-dimensional Karman vortex street, the transition to three-dimensionality is found to take place at a Reynolds number between 150 and 175. The three-dimensional wake of the square cylinder has been characterized using indicators appropriate for the wake of a bluff body as described by earlier workers. In these terms, the secondary vortices of Mode-A are seen to persist over the Reynolds number range of 175-240. At about a Reynolds number of 250, Mode-B secondary vortices are present, these having predominantly small-scale structures. The transitional flow around a square cylinder exhibits an intermittent low-frequency modulation due to the formation of a large-scale irregularity in the near-wake, called vortex dislocation. The superposition of vortex dislocation and the Mode-A vortices leads to a new pattern, labelled as Mode-A with dislocations. The results for the square cylinder are in good accordance with the three-dimensional modes of transition that are well known in the circular cylinder wake. In the case of a circular cylinder, the transition from periodic vortex shedding to Mode-A is characterized by a discontinuity in the Strouhal number-Reynolds number relationship at a Reynolds number of about 190. The transition from Mode-A to Mode-B is characterized by a second discontinuity in the frequency law at a Reynolds number of ∼250. The numerical computations of the present study with a square cylinder show that the values of the Strouhal number and the time-averaged drag coefficient are closely associated with each other over the range of Reynolds numbers of interest and reflect the spatial structure of the wake.

  19. Least-squares reverse time migration of marine data with frequency-selection encoding

    KAUST Repository

    Dai, Wei; Huang, Yunsong; Schuster, Gerard T.

    2013-01-01

    The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share

  20. Handbook of Partial Least Squares Concepts, Methods and Applications

    CERN Document Server

    Vinzi, Vincenzo Esposito; Henseler, Jörg

    2010-01-01

    This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.

  1. Partial Least Square with Savitzky Golay Derivative in Predicting Blood Hemoglobin Using Near Infrared Spectrum

    Directory of Open Access Journals (Sweden)

    Mohd Idrus Mohd Nazrul Effendy

    2018-01-01

    Full Text Available Near infrared spectroscopy (NIRS) is a reliable technique that is widely used in medical fields. A partial least squares model was developed to predict blood hemoglobin concentration using NIRS. The aims of this paper are (i) to develop a predictive model for near infrared spectroscopic analysis in blood hemoglobin prediction, (ii) to establish the relationship between blood hemoglobin and the near infrared spectrum using a predictive model, and (iii) to evaluate the predictive accuracy of the model based on the root mean squared error (RMSE) and the coefficient of determination rp2. Partial least squares with first-order Savitzky-Golay (SG) derivative preprocessing (PLS-SGd1) showed higher prediction performance, with RMSE = 0.7965 and rp2 = 0.9206 in K-fold cross validation. The optimum number of latent variables (LV) and frame length (f) were 32 and 27 nm, respectively. These findings suggest that the relationship between blood hemoglobin and the near infrared spectrum is strong, and that partial least squares with a first-order SG derivative is able to predict blood hemoglobin from near infrared spectral data.
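
    A sketch of the preprocessing-plus-regression chain, first-order Savitzky-Golay derivative followed by PLS regression, is given below using synthetic stand-in spectra; the frame length, number of latent variables and hemoglobin values are illustrative and not those reported above.

```python
# Sketch of first-order Savitzky-Golay derivative preprocessing followed by PLS
# regression; spectra and hemoglobin values are synthetic stand-ins for NIR data.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_points = 60, 300
hemoglobin = rng.uniform(10.0, 17.0, size=n_samples)          # g/dL, synthetic
wavelengths = np.linspace(0.0, 1.0, n_points)
spectra = np.outer(hemoglobin, np.exp(-((wavelengths - 0.5) ** 2) / 0.02))
spectra += 0.2 * rng.normal(size=spectra.shape)               # measurement noise
spectra += rng.uniform(0.0, 1.0, size=(n_samples, 1))         # random additive baseline

# First-order SG derivative removes the additive baseline (frame length is a tuning choice).
d1 = savgol_filter(spectra, window_length=27, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=8)
print("cross-validated R^2:", cross_val_score(pls, d1, hemoglobin, cv=5).mean())
```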

  2. Enhanced least squares Monte Carlo method for real-time decision optimizations for evolving natural hazards

    DEFF Research Database (Denmark)

    Anders, Annett; Nishijima, Kazuyoshi

    The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach takes its basis in the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option prices. In Anders & Nishijima (2011) the LSM is adapted for a real-time operational decision problem; however, it is found that further improvement is required with regard to the computational efficiency in order to facilitate its use in practice. This is the focus of the present paper. The idea behind the improvement of the computational efficiency is to "best utilize" the least squares method; i.e. the least squares method is applied for estimating the expected utility for terminal decisions, conditional on realizations of the underlying random phenomena at the respective times, in a parametric way. The implementation...
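
    For orientation, the sketch below shows the classical Longstaff & Schwartz least squares Monte Carlo backward induction for a Bermudan put, which is the method the paper builds on; the civil-engineering adaptation and the efficiency enhancement discussed above are not reproduced, and all market parameters are invented.

```python
# Sketch of the classical Longstaff-Schwartz least squares Monte Carlo method for
# a Bermudan put; the paper's real-time decision adaptation is not shown.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0      # assumed market parameters
n_paths, n_steps = 20000, 50
dt = T / n_steps

# Simulate geometric Brownian motion paths (prices at times dt, 2*dt, ..., T).
z = rng.normal(size=(n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1))

cashflow = np.maximum(K - S[:, -1], 0.0)               # exercise value at maturity
for k in range(n_steps - 2, -1, -1):
    cashflow *= np.exp(-r * dt)                        # discount continuation values to time k
    itm = K - S[:, k] > 0.0                            # regress only on in-the-money paths
    if not np.any(itm):
        continue
    coef = np.polyfit(S[itm, k], cashflow[itm], deg=2) # least squares continuation value
    continuation = np.polyval(coef, S[itm, k])
    exercise = K - S[itm, k]
    cashflow[itm] = np.where(exercise > continuation, exercise, cashflow[itm])

price = np.exp(-r * dt) * cashflow.mean()              # discount from t = dt back to t = 0
print("LSM price estimate:", price)
```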

  3. Generalized Spatial Two Stage Least Squares Estimation of Spatial Autoregressive Models with Autoregressive Disturbances in the Presence of Endogenous Regressors and Many Instruments

    Directory of Open Access Journals (Sweden)

    Fei Jin

    2013-05-01

    Full Text Available This paper studies the generalized spatial two stage least squares (GS2SLS estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the efficiency of estimators asymptotically, but the bias might be large in finite samples, making the inference inaccurate. We consider the case that the number of instruments K increases with, but at a rate slower than, the sample size, and derive the approximate mean square errors (MSE that account for the trade-offs between the bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for the optimal K selection can be based on the approximate MSEs. Monte Carlo experiments are provided to show the performance of our procedure of choosing K.
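
    As background for the GS2SLS discussion, a bare-bones two-stage least squares estimate for a single endogenous regressor is sketched below; the spatial lags, autoregressive disturbances and many-instrument corrections studied in the paper are deliberately left out, and all data are simulated.

```python
# Bare-bones two-stage least squares (2SLS) for one endogenous regressor; the
# spatial and many-instrument aspects of the paper are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 3))                       # instruments
u = rng.normal(size=n)                            # structural error
x_endog = z @ np.array([1.0, 0.5, -0.5]) + 0.8 * u + rng.normal(size=n)  # correlated with u
y = 1.0 + 2.0 * x_endog + u                       # true coefficient on x_endog is 2.0

# Stage 1: project the endogenous regressor on the instruments (plus a constant).
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]

# Stage 2: regress y on the fitted values.
X2 = np.column_stack([np.ones(n), x_hat])
beta_2sls = np.linalg.lstsq(X2, y, rcond=None)[0]

# Naive OLS for comparison (biased because x_endog is correlated with u).
X = np.column_stack([np.ones(n), x_endog])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("2SLS:", beta_2sls, " OLS:", beta_ols)
```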

  4. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    Science.gov (United States)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to model the linear and nonlinear relations between the spectra and the components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to remove unwanted information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, in the case of some nonlinear relation between spectra and components, the OSC-RBF-PLS model gave more satisfactory results than the OSC-PLS model, which indicated that OSC was helpful to remove extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.

  5. ANYOLS, Least Square Fit by Stepwise Regression

    International Nuclear Information System (INIS)

    Atwoods, C.L.; Mathews, S.

    1986-01-01

    Description of program or function: ANYOLS is a stepwise program which fits data using ordinary or weighted least squares. Variables are selected for the model in a stepwise way based on a user-specified input criterion or a user-written subroutine. The order in which variables are entered can be influenced by user-defined forcing priorities. Instead of stepwise selection, ANYOLS can try all possible combinations of any desired subset of the variables. Automatic output for the final model in a stepwise search includes plots of the residuals, 'studentized' residuals, and leverages; if the model is not too large, the output also includes partial regression and partial leverage plots. A data set may be re-used so that several selection criteria can be tried. Flexibility is increased by allowing the substitution of user-written subroutines for several default subroutines.

  6. Least Squares Inference on Integrated Volatility and the Relationship between Efficient Prices and Noise

    DEFF Research Database (Denmark)

    Nolte, Ingmar; Voev, Valeri

    The expected value of sums of squared intraday returns (realized variance) gives rise to a least squares regression which adapts itself to the assumptions of the noise process and allows for a joint inference on integrated volatility (IV), noise moments and price-noise relations. In the iid noise...

  7. A least squares calculational method: application to e±-H elastic scattering

    International Nuclear Information System (INIS)

    Das, J.N.; Chakraborty, S.

    1989-01-01

    The least squares calculational method proposed by Das has been applied to the e±-H elastic scattering problem at intermediate energies. Some important conclusions are made on the basis of the calculation. (author). 7 refs., 2 tabs.

  8. Accumulation of unstable periodic orbits and the stickiness in the two-dimensional piecewise linear map.

    Science.gov (United States)

    Akaishi, A; Shudo, A

    2009-12-01

    We investigate the stickiness of the two-dimensional piecewise linear map with a family of marginally unstable periodic orbits (FMUPOs), and show that a series of unstable periodic orbits accumulating to the FMUPOs plays a significant role in giving rise to the power-law correlation of trajectories. We can explicitly specify the sticky zone in which unstable periodic orbits whose stability increases algebraically exist, and find that there exists a hierarchy in the accumulating periodic orbits. In particular, the periodic orbits with linearly increasing stability play the role of fundamental cycles as in hyperbolic systems, which allows us to apply the method of cycle expansion. We also study the recurrence time distribution, especially discussing the position and size of the recurrence region. Following the definition adopted in one-dimensional maps, we show that the recurrence time distribution has an exponential part in the short-time regime and an asymptotic power-law part. The analysis of the crossover time T_c^* between these two regimes implies T_c^* ≈ -log[μ(R)], where μ(R) denotes the area of the recurrence region.

  9. Non-linear least squares curve fitting of a simple theoretical model to radioimmunoassay dose-response data using a mini-computer

    International Nuclear Information System (INIS)

    Wilkins, T.A.; Chadney, D.C.; Bryant, J.; Palmstroem, S.H.; Winder, R.L.

    1977-01-01

    Using the simple univalent antigen univalent-antibody equilibrium model the dose-response curve of a radioimmunoassay (RIA) may be expressed as a function of Y, X and the four physical parameters of the idealised system. A compact but powerful mini-computer program has been written in BASIC for rapid iterative non-linear least squares curve fitting and dose interpolation with this function. In its simplest form the program can be operated in an 8K byte mini-computer. The program has been extensively tested with data from 10 different assay systems (RIA and CPBA) for measurement of drugs and hormones ranging in molecular size from thyroxine to insulin. For each assay system the results have been analysed in terms of (a) curve fitting biases and (b) direct comparison with manual fitting. In all cases the quality of fitting was remarkably good in spite of the fact that the chemistry of each system departed significantly from one or more of the assumptions implicit in the model used. A mathematical analysis of departures from the model's principal assumption has provided an explanation for this somewhat unexpected observation. The essential features of this analysis are presented in this paper together with the statistical analyses of the performance of the program. From these and the results obtained to date in the routine quality control of these 10 assays, it is concluded that the method of curve fitting and dose interpolation presented in this paper is likely to be of general applicability. (orig.)
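
    Modern scientific environments make this kind of fit routine; the sketch below performs iterative non-linear least squares on a synthetic dose-response curve with scipy, using a four-parameter logistic as a generic stand-in for the paper's univalent antigen-antibody model, and then inverts the fitted curve for dose interpolation. All numbers are invented.

```python
# Sketch of iterative non-linear least squares fitting of an RIA-style dose-response
# curve; a four-parameter logistic is used as a generic stand-in for the paper's model.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at infinite dose,
    # c: dose giving half-maximal response, b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

rng = np.random.default_rng(0)
dose = np.logspace(-2, 2, 12)                                  # standards
true = four_pl(dose, 10000.0, 1.2, 1.5, 500.0)                 # counts, say
counts = true * (1.0 + 0.02 * rng.normal(size=dose.size))      # ~2% counting noise

params, cov = curve_fit(four_pl, dose, counts, p0=[counts.max(), 1.0, 1.0, counts.min()])
print("fitted parameters:", params)

# Dose interpolation for an unknown sample: invert the fitted curve.
a, b, c, d = params
unknown_counts = 4000.0
dose_estimate = c * ((a - d) / (unknown_counts - d) - 1.0) ** (1.0 / b)
print("interpolated dose:", dose_estimate)
```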

  10. Two-dimensional linear and nonlinear Talbot effect from rogue waves.

    Science.gov (United States)

    Zhang, Yiqi; Belić, Milivoj R; Petrović, Milan S; Zheng, Huaibin; Chen, Haixia; Li, Changbiao; Lu, Keqing; Zhang, Yanpeng

    2015-03-01

    We introduce two-dimensional (2D) linear and nonlinear Talbot effects. They are produced by propagating periodic 2D diffraction patterns and can be visualized as 3D stacks of Talbot carpets. The nonlinear Talbot effect originates from 2D rogue waves and forms in a bulk 3D nonlinear medium. The recurrences of an input rogue wave are observed at the Talbot length and at the half-Talbot length, with a π phase shift; no other recurrences are observed. Differing from the nonlinear Talbot effect, the linear effect displays the usual fractional Talbot images as well. We also find that the smaller the period of incident rogue waves, the shorter the Talbot length. Increasing the beam intensity increases the Talbot length, but above a threshold this leads to a catastrophic self-focusing phenomenon which destroys the effect. We also find that the Talbot recurrence can be viewed as a self-Fourier transform of the initial periodic beam that is automatically performed during propagation. In particular, linear Talbot effect can be viewed as a fractional self-Fourier transform, whereas the nonlinear Talbot effect can be viewed as the regular self-Fourier transform. Numerical simulations demonstrate that the rogue-wave initial condition is sufficient but not necessary for the observation of the effect. It may also be observed from other periodic inputs, provided they are set on a finite background. The 2D effect may find utility in the production of 3D photonic crystals.

  11. Multisource Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-12-01

    Least-squares migration has been shown to be able to produce high quality migration images, but its computational cost is considered to be too high for practical imaging. In this dissertation, a multisource least-squares reverse time migration algorithm (LSRTM) is proposed to increase by up to 10 times the computational efficiency by utilizing the blended sources processing technique. There are three main chapters in this dissertation. In Chapter 2, the multisource LSRTM algorithm is implemented with random time-shift and random source polarity encoding functions. Numerical tests on the 2D HESS VTI data show that the multisource LSRTM algorithm suppresses migration artifacts, balances the amplitudes, improves image resolution, and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution, and fewer migration artifacts compared to conventional RTM. The empirical results suggest that the multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with similar or less computational cost. The caveat is that LSRTM image is sensitive to large errors in the migration velocity model. In Chapter 3, the multisource LSRTM algorithm is implemented with frequency selection encoding strategy and applied to marine streamer data, for which traditional random encoding functions are not applicable. The frequency-selection encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content. Therefore, the receivers can distinguish the wavefield from each shot according to the frequencies. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is

  12. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    Science.gov (United States)

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…

  13. A new finite element formulation for CFD:VIII. The Galerkin/least-squares method for advective-diffusive equations

    International Nuclear Information System (INIS)

    Hughes, T.J.R.; Hulbert, G.M.; Franca, L.P.

    1988-10-01

    Galerkin/least-squares finite element methods are presented for advective-diffusive equations. Galerkin/least-squares represents a conceptual simplification of SUPG, and is in fact applicable to a wide variety of other problem types. A convergence analysis and error estimates are presented. (author)

  14. Geometric Least Square Models for Deriving [0,1]-Valued Interval Weights from Interval Fuzzy Preference Relations Based on Multiplicative Transitivity

    Directory of Open Access Journals (Sweden)

    Xuan Yang

    2015-01-01

    Full Text Available This paper presents a geometric least square framework for deriving [0,1]-valued interval weights from interval fuzzy preference relations. By analyzing the relationship among [0,1]-valued interval weights, multiplicatively consistent interval judgments, and planes, a geometric least square model is developed to derive a normalized [0,1]-valued interval weight vector from an interval fuzzy preference relation. Based on the difference ratio between two interval fuzzy preference relations, a geometric average difference ratio between one interval fuzzy preference relation and the others is defined and employed to determine the relative importance weights for individual interval fuzzy preference relations. A geometric least square based approach is further put forward for solving group decision making problems. An individual decision numerical example and a group decision making problem with the selection of enterprise resource planning software products are furnished to illustrate the effectiveness and applicability of the proposed models.

  15. Solution of the two-dimensional spectral factorization problem

    Science.gov (United States)

    Lawton, W. M.

    1985-01-01

    An approximation theorem is proven which solves a classic problem in two-dimensional (2-D) filter theory. The theorem shows that any continuous two-dimensional spectrum can be uniformly approximated by the squared modulus of a recursively stable finite trigonometric polynomial supported on a nonsymmetric half-plane.

  16. A Monte Carlo Investigation of the Box-Cox Model and a Nonlinear Least Squares Alternative.

    OpenAIRE

    Showalter, Mark H

    1994-01-01

    This paper reports a Monte Carlo study of the Box-Cox model and a nonlinear least squares alternative. Key results include the following: the transformation parameter in the Box-Cox model appears to be inconsistently estimated in the presence of conditional heteroskedasticity; the constant term in both the Box-Cox and the nonlinear least squares models is poorly estimated in small samples; conditional mean forecasts tend to underestimate their true value in the Box-Cox model when the transfor...

  17. A rigid-body least-squares program with angular and translation scan facilities

    CERN Document Server

    Kutschabsky, L

    1981-01-01

    The described computer program, written in CERN Fortran, is designed to enlarge the convergence radius of the rigid-body least-squares method by allowing a stepwise change of the angular and/or translational parameters within a chosen range. (6 refs).

  18. Harmonic tidal analysis at a few stations using the least squares method

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Das, V.K.; Bahulayan, N.

    Using the least squares method, harmonic analysis has been performed on hourly water level records of 29 days at several stations depicting different types of non-tidal noise. For a tidal record at Mormugao, which was free from storm surges (low...

  19. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    Science.gov (United States)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model on-line for real-time control. This identification method makes it straightforward for Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of the Linear Quadratic Gaussian control associated with this tip-tilt disturbance model identification method is verified by experimental data, processed in replay mode by simulation.
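
    A hedged sketch of Levenberg-Marquardt identification is given below: a damped-sinusoid stand-in for a tip-tilt disturbance is fitted with scipy's least_squares using method="lm". The model form, parameters and noise level are assumptions for illustration, not the paper's disturbance model.

```python
# Sketch of identifying a vibration/disturbance model by non-linear least squares
# with the Levenberg-Marquardt algorithm; the damped sinusoid and all numbers are
# illustrative, not the paper's tip-tilt model.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)

def model(p, t):
    amp, freq, damping, phase = p
    return amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t + phase)

p_true = np.array([1.0, 12.0, 1.5, 0.3])
measured = model(p_true, t) + 0.02 * rng.normal(size=t.size)

residuals = lambda p: model(p, t) - measured
# Initial guess close to the truth: frequency fits are sensitive to the starting point.
fit = least_squares(residuals, x0=[0.9, 11.5, 1.2, 0.2], method="lm")
print(fit.x)                                       # close to p_true
```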

  20. Improving imaging quality using least-squares reverse time migration: application to data from Bohai basin

    KAUST Repository

    Zhang, Hao; Liu, Qiancheng; Wu, Jizhong

    2017-01-01

    Least-squares reverse time migration (LSRTM) is a seismic imaging technique based on linear inversion, which usually aims to improve the quality of seismic image through removing the acquisition footprint, suppressing migration artifacts, and enhancing resolution. LSRTM has been shown to produce migration images with better quality than those computed by conventional migration. In this paper, our derivation of LSRTM approximates the near-incident reflection coefficient with the normal-incident reflection coefficient, which shows that the reflectivity term defined is related to the normal-incident reflection coefficient and the background velocity. With reflected data, LSRTM is mainly sensitive to impedance perturbations. According to an approximate relationship between them, we reformulate the perturbation related system into a reflection-coefficient related one. Then, we seek the inverted image through linearized iteration. In the proposed algorithm, we only need the migration velocity for LSRTM considering that the density changes gently when compared with migration velocity. To validate our algorithms, we first apply it to a synthetic case and then a field data set. Both applications illustrate that our imaging results are of good quality.

  1. Improving imaging quality using least-squares reverse time migration: application to data from Bohai basin

    KAUST Repository

    Zhang, Hao

    2017-07-07

    Least-squares reverse time migration (LSRTM) is a seismic imaging technique based on linear inversion, which usually aims to improve the quality of seismic image through removing the acquisition footprint, suppressing migration artifacts, and enhancing resolution. LSRTM has been shown to produce migration images with better quality than those computed by conventional migration. In this paper, our derivation of LSRTM approximates the near-incident reflection coefficient with the normal-incident reflection coefficient, which shows that the reflectivity term defined is related to the normal-incident reflection coefficient and the background velocity. With reflected data, LSRTM is mainly sensitive to impedance perturbations. According to an approximate relationship between them, we reformulate the perturbation related system into a reflection-coefficient related one. Then, we seek the inverted image through linearized iteration. In the proposed algorithm, we only need the migration velocity for LSRTM considering that the density changes gently when compared with migration velocity. To validate our algorithms, we first apply it to a synthetic case and then a field data set. Both applications illustrate that our imaging results are of good quality.

  2. Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression

    Energy Technology Data Exchange (ETDEWEB)

    Verdoolaege, G., E-mail: geert.verdoolaege@ugent.be [Department of Applied Physics, Ghent University, B-9000 Ghent (Belgium); Laboratory for Plasma Physics, Royal Military Academy, B-1000 Brussels (Belgium); Shabbir, A. [Department of Applied Physics, Ghent University, B-9000 Ghent (Belgium); Max Planck Institute for Plasma Physics, Boltzmannstr. 2, 85748 Garching (Germany); Hornung, G. [Department of Applied Physics, Ghent University, B-9000 Ghent (Belgium)

    2016-11-15

    Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.

  3. A Hybrid Least Square Support Vector Machine Model with Parameters Optimization for Stock Forecasting

    Directory of Open Access Journals (Sweden)

    Jian Chai

    2015-01-01

    Full Text Available This paper proposes an EMD-LSSVM (empirical mode decomposition least squares support vector machine) model to analyze the CSI 300 index. A WD-LSSVM (wavelet denoising least squares support vector machine) is also proposed as a benchmark to compare with the performance of EMD-LSSVM. Since parameter selection is vital to the performance of the model, different optimization methods are used, including simplex, GS (grid search), PSO (particle swarm optimization), and GA (genetic algorithm). Experimental results show that the EMD-LSSVM model with the GS algorithm outperforms the other methods in predicting stock market movement direction.

  4. Uniform two-dimensional square assemblies from conjugated block copolymers driven by π–π interactions with controllable sizes

    Energy Technology Data Exchange (ETDEWEB)

    Han, Liang; Wang, Meijing; Jia, Xiangmeng; Chen, Wei; Qian, Hujun; He, Feng

    2018-02-28

    Two-dimensional (2-D) micro- and nano-architectures are attractive because of their unique properties caused by their ultrathin and flat morphologies. However, the formation of highly symmetrical 2-D supramolecular structures with considerable control is still a major challenge. Here, we present a simple approach for the preparation of regular and homogeneous 2-D fluorescent square noncrystallization micelles with the conjugated diblock copolymers PPV12-b-P2VPn through a process of dissolving-cooling-aging. The scale of the formed micelles could be controlled by the ratio of the PPV/P2VP blocks and the concentration of the solution. The forming process of the platelet square micelles was analyzed by UV-Vis, DLS and SLS, while the molecular arrangement was characterized by GIXD. The results revealed that the micelles of PPV12-b-P2VPn initially form 1-D structures and then grow into 2-D structures in solution, and the growth is driven by intermolecular π-π interactions with the PPV12 blocks. The formation of 2-D square micelles is induced by the herringbone arrangement of the molecules, which is closely related to the presence of the branched alkyl chains attached to the conjugated PPV12 cores.

  5. Application of partial least squares near-infrared spectral classification in diabetic identification

    Science.gov (United States)

    Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang

    2014-11-01

    In order to identify diabetic patients using the tongue's near-infrared (NIR) spectrum, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. 39 sample data of tongue-tip NIR spectra are collected from healthy people and diabetic patients, respectively. After pretreatment of the reflectivity, the spectral data are set as the independent variable matrix and the classification information as the dependent variable matrix. The samples were divided into two groups, i.e. 53 samples as the calibration set and 25 as the prediction set, and PLS is then used to build the classification model. The model constructed from the 53 samples has a correlation of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples have a correlation of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification of healthy people and diabetic patients.
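
    A hedged sketch of this kind of PLS-based discrimination is shown below, using scikit-learn's PLSRegression on synthetic spectra with a 0/1 class label and a 0.5 decision threshold. The matrix sizes, number of latent variables and data are illustrative assumptions, not the authors' tongue-spectra dataset or preprocessing.

```python
# Minimal PLS-DA sketch: regress a 0/1 class label on spectra with PLS and
# threshold the prediction at 0.5 (synthetic spectra, illustrative only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 78, 200                            # samples x spectral points (assumed sizes)
X = rng.standard_normal((n, p))
y = (rng.random(n) > 0.5).astype(float)   # 0 = healthy, 1 = diabetic (labels)
X += y[:, None] * 0.5                     # inject a weak class-related offset

# 53 calibration samples, 25 prediction samples, mirroring the abstract
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=25, random_state=0)

pls = PLSRegression(n_components=5)
pls.fit(Xtr, ytr)
yhat = (pls.predict(Xte).ravel() > 0.5).astype(float)
print("test accuracy:", (yhat == yte).mean())
```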

  6. Identification method for gas-liquid two-phase flow regime based on singular value decomposition and least square support vector machine

    International Nuclear Information System (INIS)

    Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo

    2007-01-01

    Aiming at the non-stationary characteristics of differential pressure fluctuation signals of gas-liquid two-phase flow, and at the slow convergence and the liability of dropping into local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and the Least Squares Support Vector Machine (LS-SVM) is presented. First of all, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Functions (IMFs), from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime feature vector fed to the LS-SVM classifier, and flow regimes are identified from the output of the classifier. The identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)

  7. Convergence of Inner-Iteration GMRES Methods for Rank-Deficient Least Squares Problems

    Czech Academy of Sciences Publication Activity Database

    Morikuni, Keiichi; Hayami, K.

    2015-01-01

    Roč. 36, č. 1 (2015), s. 225-250 ISSN 0895-4798 Institutional support: RVO:67985807 Keywords : least squares problem * iterative methods * preconditioner * inner-outer iteration * GMRES method * stationary iterative method * rank-deficient problem Subject RIV: BA - General Mathematics Impact factor: 1.883, year: 2015

  8. Least-squares approximation of an improper correlation matrix by a proper one

    NARCIS (Netherlands)

    Knol, Dirk L.; ten Berge, Jos M.F.

    1989-01-01

    An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based upon a solution for Mosier's oblique Procrustes rotation problem offered by ten Berge and Nevels. A necessary and

  9. PRIM: An Efficient Preconditioning Iterative Reweighted Least Squares Method for Parallel Brain MRI Reconstruction.

    Science.gov (United States)

    Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou

    2018-02-08

    The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. While the joint total variation (JTV) regularized model has been demonstrated to be a powerful tool for increasing sampling speed in pMRI, the major bottleneck is the inefficiency of the optimization method. Whereas all present state-of-the-art optimizations for the JTV model only reach a sublinear convergence rate, in this paper we squeeze out more performance by proposing a linearly convergent optimization method for the JTV model. The proposed method is based on the Iterative Reweighted Least Squares algorithm. Due to the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI in terms of both accuracy and efficiency compared with state-of-the-art methods.
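
    To make the algorithmic core concrete, here is a generic, unpreconditioned iteratively reweighted least squares loop for a robust ℓ1-type objective; it is only a sketch of the IRLS principle under synthetic data, not the preconditioned JTV solver or the MRI reconstruction pipeline described above.

```python
# Minimal sketch of Iteratively Reweighted Least Squares (IRLS) for the robust
# problem  min_x ||A x - b||_1, solved as a sequence of weighted least-squares
# problems (generic illustration only).
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-6):
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # ordinary LS starting point
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)      # reweighting from residuals
        Aw = A * w[:, None]                       # row-weighted design matrix
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))   # weighted normal equations
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(200)
b[:10] += 5.0                                     # gross outliers
print("error per coefficient:", np.round(irls_l1(A, b) - x_true, 3))
```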

  10. Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA

    Czech Academy of Sciences Publication Activity Database

    Pohl, Zdeněk; Tichý, Milan; Kadlec, Jiří

    2008-01-01

    Roč. 2008, č. 2008 (2008), s. 1-11 ISSN 1687-6172 R&D Projects: GA MŠk(CZ) 1M0567 EU Projects: European Commission(XE) 027611 - AETHER Program:FP6 Institutional research plan: CEZ:AV0Z10750506 Keywords : DSP * Least-squares lattice * order estimation * exponential forgetting factor estimation * FPGA implementation * scheduling * dynamic reconfiguration * microblaze Subject RIV: IN - Informatics, Computer Science Impact factor: 1.055, year: 2008 http://library.utia.cas.cz/separaty/2008/ZS/pohl-tichy-kadlec-implementation%20of%20the%20least-squares%20lattice%20with%20order%20and%20forgetting%20factor%20estimation%20for%20fpga.pdf

  11. Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction

    International Nuclear Information System (INIS)

    Seung Gyou, Baek; Han Gyu, Joo; Un Chul, Lee

    2007-01-01

    A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a 2-dimensional 13 term source expansion. In order to achieve better approximation of the source distribution, the least square fitting method is employed. The 8 exponential terms represent a part of the analytically obtained homogeneous solution and the 8 coefficients are determined by imposing constraints on the 4 surface average currents and 4 corner point fluxes. The surface average currents determined from a transverse-integrated nodal solution are used directly whereas the corner point fluxes are determined during the course of the reconstruction by employing an iterative scheme that would realize the corner point balance condition. The outgoing current based corner point flux determination scheme is newly introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)

  12. A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Baoguo Yu

    2016-01-01

    Full Text Available In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), it is usually required to determine the parameters of the radio signal propagation model before estimating the distance between an anchor node and an unknown node from their communication RSSI value, and finally a localization algorithm is used to estimate the location of the unknown node. However, this localization method, though high in localization accuracy, has weaknesses such as a complex working procedure and poor system versatility. To address these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least squares criterion to estimate the parameters of the radio signal propagation model and thereby reduces the amount of computation in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the high localization accuracy requirement. In conclusion, the proposed method is of definite practical value.
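
    The least-squares estimation of the propagation-model parameters can be sketched as below for the common log-distance path-loss model; the model form, parameter names and synthetic measurements are assumptions for illustration rather than the paper's exact formulation.

```python
# Minimal sketch: estimate log-distance path-loss parameters (P0, n) from
# reference measurements by linear least squares, then range a node from its
# RSSI (synthetic numbers, illustrative only).
import numpy as np

# Assumed model: RSSI(d) = P0 - 10 * n * log10(d)   (d in metres, RSSI in dBm)
rng = np.random.default_rng(4)
d = rng.uniform(1.0, 50.0, 100)
rssi = -40.0 - 10 * 2.7 * np.log10(d) + rng.normal(0, 2.0, d.size)

# Linear in the unknowns [P0, n]
X = np.column_stack([np.ones_like(d), -10 * np.log10(d)])
(P0, n_exp), *_ = np.linalg.lstsq(X, rssi, rcond=None)
print(f"P0 = {P0:.1f} dBm, path-loss exponent n = {n_exp:.2f}")

# Invert the fitted model to estimate the range for a new RSSI reading.
rssi_new = -75.0
d_est = 10 ** ((P0 - rssi_new) / (10 * n_exp))
print(f"estimated distance: {d_est:.1f} m")
```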

  13. A reconstruction algorithm for three-dimensional object-space data using spatial-spectral multiplexing

    Science.gov (United States)

    Wu, Zhejun; Kudenov, Michael W.

    2017-05-01

    This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, thus leading to an underdetermined linear system that is hard to recover uniquely. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of such B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results is within 5.15%.
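
    The type of smoothness-regularized least-squares solve described here can be sketched generically as follows; the random system, second-difference penalty and regularization weight are stand-in assumptions, not the paper's B-spline parameterization or optical model.

```python
# Minimal sketch: least-squares recovery with a smoothness penalty,
#   x* = argmin ||A x - b||^2 + lam * ||D x||^2,
# where D is a second-difference operator (generic stand-in only).
import numpy as np

rng = np.random.default_rng(5)
m, n = 80, 120                                  # underdetermined system
A = rng.standard_normal((m, n))
x_true = np.sin(np.linspace(0, 3 * np.pi, n))   # smooth ground truth
b = A @ x_true + 0.05 * rng.standard_normal(m)

D = np.diff(np.eye(n), n=2, axis=0)             # second-difference matrix, (n-2) x n
lam = 10.0

# Solve the regularized normal equations (A^T A + lam D^T D) x = A^T b
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```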

  14. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.

  15. Least-squares migration of multisource data with a deblurring filter

    KAUST Repository

    Dai, Wei; Wang, Xin; Schuster, Gerard T.

    2011-01-01

    Least-squares migration (LSM) has been shown to be able to produce high-quality migration images, but its computational cost is considered to be too high for practical imaging. We have developed a multisource least-squares migration algorithm (MLSM) to increase the computational efficiency by using the blended sources processing technique. To expedite convergence, a multisource deblurring filter is used as a preconditioner to reduce the data residual. This MLSM algorithm is applicable with Kirchhoff migration, wave-equation migration, or reverse time migration, and the gain in computational efficiency depends on the choice of migration method. Numerical results with Kirchhoff LSM on the 2D SEG/EAGE salt model show that an accurate image is obtained by migrating a supergather of 320 phase-encoded shots. When the encoding functions are the same for every iteration, the input/output cost of MLSM is reduced by 320 times. Empirical results show that the crosstalk noise introduced by blended sources is more effectively reduced when the encoding functions are changed at every iteration. The analysis of signal-to-noise ratio (S/N) suggests that not too many iterations are needed to enhance the S/N to an acceptable level. Therefore, when implemented with wave-equation migration or reverse time migration methods, the MLSM algorithm can be more efficient than the conventional migration method. © 2011 Society of Exploration Geophysicists.

  17. Applying ISO 11929:2010 Standard to detection limit calculation in least-squares based multi-nuclide gamma-ray spectrum evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Kanisch, G., E-mail: guenter.kanisch@hanse.net

    2017-05-21

    The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow resolving interferences between radionuclide activities also when calculating detection limits, where the latter can be improved by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget is introduced, which for each radionuclide gives uncertainty budgets for seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
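
    The central weighted linear least-squares step can be sketched as follows; the mock efficiency matrix, uncertainties and activities are illustrative assumptions, and the ISO 11929 decision-threshold and detection-limit iteration is deliberately not reproduced.

```python
# Minimal sketch of weighted linear least squares with uncertainty propagation:
# activities a solve y ≈ X a, with weights taken from the peak-rate uncertainties.
import numpy as np

rng = np.random.default_rng(6)
# X[i, j]: assumed count rate in peak i per unit activity of nuclide j
X = np.abs(rng.standard_normal((6, 2))) + 0.1
a_true = np.array([3.0, 1.5])
u = 0.05 + 0.1 * rng.random(6)                # standard uncertainties of peak rates
y = X @ a_true + u * rng.standard_normal(6)   # measured net peak count rates

W = np.diag(1.0 / u**2)                       # weights = 1 / u_i^2
cov = np.linalg.inv(X.T @ W @ X)              # covariance matrix of the estimates
a_hat = cov @ (X.T @ W @ y)

for j, (a, s) in enumerate(zip(a_hat, np.sqrt(np.diag(cov)))):
    print(f"nuclide {j}: activity = {a:.3f} +/- {s:.3f} (arb. units)")
```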

  18. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    Science.gov (United States)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms can adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses in terms of mean and mean square performance for the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. Besides, the simulation results also demonstrate a good match with our proposed analytical expressions.
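
    For readers unfamiliar with the underlying recursions, a single-node recursive least squares filter with a fixed exponential forgetting factor is sketched below; the diffusion and variable-forgetting-factor extensions discussed above build on these updates. The data and parameter values are synthetic assumptions.

```python
# Minimal sketch of recursive least squares (RLS) with a fixed exponential
# forgetting factor lam, for a single node (illustrative only).
import numpy as np

def rls(X, d, lam=0.98, delta=100.0):
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                 # inverse correlation matrix estimate
    for x, dk in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)           # gain vector
        e = dk - w @ x                    # a priori error
        w = w + k * e                     # weight update
        P = (P - np.outer(k, Px)) / lam   # Riccati-type inverse-correlation update
    return w

rng = np.random.default_rng(7)
w_true = np.array([0.5, -1.0, 2.0])
X = rng.standard_normal((500, 3))
d = X @ w_true + 0.05 * rng.standard_normal(500)
print("estimated weights:", np.round(rls(X, d), 3))
```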

  19. Mitigation of defocusing by statics and near-surface velocity errors by interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal

    2015-08-19

    We propose an interferometric least-squares migration method that can significantly reduce migration artifacts due to statics and errors in the near-surface velocity model. We first choose a reference reflector whose topography is well known from, e.g., well logs. Reflections from this reference layer are correlated with the traces associated with reflections from deeper interfaces to get crosscorrelograms. These crosscorrelograms are then migrated using interferometric least-squares migration (ILSM). In this way statics and velocity errors at the near surface are largely eliminated for the examples in our paper.

  20. 'AJUSTAR': an interactive processor for fitting, by means of least squares, one-variable polynomials (arbitrary degree) to experimental points

    International Nuclear Information System (INIS)

    Sanchez Miro, J.J.; Pena, J.

    1991-01-01

    This report offers scientists and technical staff a numerical tool consisting of an interactive FORTRAN program for linear least-squares fitting of one-variable polynomials to any set of empirical observations. The method is based on orthogonal functions (for the discrete case) instead of directly solving the system of normal equations. The procedure also includes the optional facilities of: change of variable, direct interpolation, a non-linear correlation factor, point 'weights', confidence intervals (Scheffe, Miller, Student), and plotting of the results. (Author). 10 refs
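
    A modern equivalent of the core fitting step might look like the sketch below, which performs a weighted least-squares polynomial fit in an orthogonal (Legendre) basis using numpy; it is only an illustrative stand-in under assumed data, not a port of the original FORTRAN program.

```python
# Minimal sketch: weighted least-squares fit of a degree-4 polynomial in an
# orthogonal (Legendre) basis, using numpy's polynomial classes.
import numpy as np
from numpy.polynomial import Legendre, Polynomial

rng = np.random.default_rng(8)
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 2.0 * x - 3.0 * x**3 + 0.05 * rng.standard_normal(x.size)
w = np.ones_like(x)                       # per-point weights (all equal here)

fit = Legendre.fit(x, y, deg=4, w=w)      # least-squares fit in the Legendre basis
print("coefficients in the power basis:",
      np.round(fit.convert(kind=Polynomial).coef, 3))
print("interpolated value at x = 0.3:", fit(0.3))
```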

  1. Least-Squares Approximation of an Improper Correlation Matrix by a Proper One.

    Science.gov (United States)

    Knol, Dirk L.; ten Berge, Jos M. F.

    1989-01-01

    An algorithm, based on a solution for C. I. Mosier's oblique Procrustes rotation problem, is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. Results are of interest for missing value and tetrachoric correlation, indefinite matrix correlation, and constrained…

  2. Application of the tuning algorithm with the least squares approximation to the suboptimal control algorithm for integrating objects

    Science.gov (United States)

    Kuzishchin, V. F.; Merzlikina, E. I.; Van Va, Hoang

    2017-11-01

    The problem of tuning PID and PI algorithms by approximating, in the least squares sense, the frequency response of the linear algorithm to that of the suboptimal algorithm is considered. The advantage of the method is that the parameter values are obtained in a single cycle of calculation. Recommendations on how to choose the parameters of the least squares method, taking the plant dynamics into consideration, are given. The parameters mentioned are the time constant of the filter, the approximation frequency range and the correction coefficient for the time delay parameter. The problem is considered for integrating plants in some practical cases (the level control system of a boiler drum). The transfer function of the suboptimal algorithm is determined with respect to a disturbance acting at the point where the control action enters, which is typical for thermal plants. The recommendations also take into account that the overshoot of the transient response to a setpoint change is limited. For comparison, the systems under consideration are also tuned by the classical method with a limited frequency oscillation index. The results given in the paper can be used by specialists dealing with the tuning of control systems for integrating plants.

  3. Avian skull morphological evolution: exploring exo- and endocranial covariation with two-block partial least squares.

    Science.gov (United States)

    Marugán-Lobón, Jesús; Buscalioni, Angela D

    2006-01-01

    While rostral variation has been the subject of detailed avian evolutionary research, avian skull organization, characterized by a flexed or extended appearance of the skull, has eventually become neglected by mainstream evolutionary inquiries. This study aims to recapture its significance, evaluating possible functional, phylogenetic and developmental factors that may be underlying it. In order to estimate which, and how, elements of the skull intervene in patterning the skull, we tested the statistical interplay between a series of old mid-sagittal angular measurements (mostly endocranial) in combination with newly obtained skull metrics based on landmark superimposition methods (exclusively exocranial shape), by means of the statistical-morphometric technique of two-block partial least squares. As classic literature anticipated, we found that the external appearance of the skull corresponds to the way in which the plane of the caudal cranial base is oriented, in connection with the orientations of the plane of the foramen magnum and of the lateral semicircular canal. The pattern of covariation found between metrics conveys flexed or extended appearances of the skull implicitly within a single and statistically significant dimension of covariation. Marked shape changes with which angles covary concentrate at the supraoccipital bone, the cranial base and the antorbital window, whereas the plane measuring the orientation of the anterior portion of the rostrum does not intervene. Statistical covariance between elements of the caudal cranial base and the occiput implies that morphological integration underlies avian skull macroevolutionary organization as a by-product of the regional concordance of such correlated elements within the early embryonic chordal domain of mesodermic origin.

  4. An improved partial least-squares regression method for Raman spectroscopy

    Science.gov (United States)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve the BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and then the importance of each variable of the sorted list is evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our Improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and Genetic Algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either a similar or a better performance compared to the genetic algorithm.

  5. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.

  6. Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-05-01

    We propose to apply a novel incoherent dictionary learning (IDL) algorithm for regularizing the least-squares inversion in seismic imaging. The IDL is proposed to overcome the drawback of the traditional dictionary learning algorithm of losing partial texture information. Firstly, the noisy image is divided into overlapping image patches, and some random patches are extracted for dictionary learning. Then, we apply the IDL technique to minimize the coherency between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from those sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to the regularization of seismic images from least-squares reverse time migration shows successful performance.

  7. Small-kernel, constrained least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.

  8. Distribución de las transformaciones lineales de los residuos mínimos cuadrados studentizados internamente = Distribution of linear transformations of internally studentized least squares residuals

    Directory of Open Access Journals (Sweden)

    Seppo Pynnönem

    2012-03-01

    that the resulting ratio, U/S, has a distribution that is free of the nuisance unknown scale parameter. External Studentization refers to a ratio in which the numerator and denominator are independent, while internal Studentization refers to a ratio in which they are dependent. The advantage of internal Studentization is that typically one can use a single common scale estimator, while in external Studentization every single residual is scaled by a different scale estimator to gain the independence. With normal regression errors the joint distribution of an arbitrary (linearly independent) subset of internally Studentized residuals is well documented. However, in some applications a linear combination of internally Studentized residuals may be useful. Their boundedness is well documented, but the distribution seems not to have been derived in the literature. This paper contributes to the existing literature by deriving the joint distribution of an arbitrary linear transformation of internally Studentized residuals from ordinary least squares regression with a spherical error distribution. All major versions of commonly utilized internally Studentized regression residuals in the literature are obtained as special cases of the linear transformation

  9. Error analysis of some Galerkin - least squares methods for the elasticity equations

    International Nuclear Information System (INIS)

    Franca, L.P.; Stenberg, R.

    1989-05-01

    We consider the recent technique of stabilizing mixed finite element methods by augmenting the Galerkin formulation with least squares terms calculated separately on each element. The error analysis is performed in a unified manner yielding improved results for some methods introduced earlier. In addition, a new formulation is introduced and analyzed

  10. Tunable all-angle negative refraction and photonic band gaps in two-dimensional plasma photonic crystals with square-like Archimedean lattices

    International Nuclear Information System (INIS)

    Zhang, Hai-Feng; Liu, Shao-Bin; Jiang, Yu-Chi

    2014-01-01

    In this paper, the tunable all-angle negative refraction and photonic band gaps (PBGs) in two types of two-dimensional (2D) plasma photonic crystals (PPCs) composed of homogeneous plasma and dielectric (GaAs) with square-like Archimedean lattices (ladybug and bathroom lattices) are theoretically investigated for the TM wave, based on a modified plane wave expansion method. The type-1 structure consists of dielectric rods immersed in the plasma background, and the complementary structure is named the type-2 PPC. Theoretical simulations demonstrate that both types of PPCs with square-like Archimedean lattices have advantages in obtaining a higher cut-off frequency, larger PBGs, a larger number of PBGs and larger relative bandwidths compared to the conventional square lattice when the filling factor or the radius of the inserted rods is the same. The influences of the plasma frequency and the radius of the inserted rod on the properties of the PBGs for both types of PPCs are also discussed in detail. The calculated results show that the PBGs can be manipulated by the parameters mentioned above. The possibility of all-angle negative refraction in these two types of PPCs at low bands is also discussed. Our calculations reveal that the all-angle negative refraction phenomena can be observed in the first two TM bands, and that the frequency range of all-angle negative refraction can be tuned by changing the plasma frequency. These properties can be used to design optical switches and sensors

  11. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    Full Text Available The discrete Fourier transform (DFT) based maximum likelihood (ML) algorithm is an important part of single-sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above the threshold value, it will lie very close to the Cramer-Rao lower bound (CRLB), which depends on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) not only retains excellent capabilities for generalizing and fitting but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate on the Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm can make a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.

  12. Fitting the two-compartment model in DCE-MRI by linear inversion.

    Science.gov (United States)

    Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P

    2016-09-01

    Model fitting of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. A second-order linear differential equation for the measured concentrations was derived in which the model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation time for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times, and to more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
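
    The linear-inversion idea, i.e. recasting the compartment ODE so that its parameters appear as linear coefficients, can be illustrated on a simpler one-compartment model as sketched below; the arterial input function, parameter values and integration scheme are assumptions for illustration, not the paper's two-compartment formulation.

```python
# Minimal sketch of linear inversion for the one-compartment model
#   dc/dt = Ktrans * ca(t) - kep * c(t),  c(0) = 0,
# which after integration becomes linear in (Ktrans, kep):
#   c(t) = Ktrans * int_0^t ca - kep * int_0^t c.
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral, same length as y (starts at 0)."""
    dt = np.diff(t)
    return np.concatenate([[0.0], np.cumsum(0.5 * dt * (y[1:] + y[:-1]))])

t = np.linspace(0, 300, 301)                   # seconds
ca = 5.0 * (t / 30.0) * np.exp(1 - t / 30.0)   # synthetic arterial input function

Ktrans_true, kep_true = 0.25 / 60, 0.60 / 60   # assumed true values, 1/s
c = np.zeros_like(t)                           # forward-simulate the tissue curve
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    c[i] = c[i - 1] + dt * (Ktrans_true * ca[i - 1] - kep_true * c[i - 1])

rng = np.random.default_rng(9)
c_noisy = c + 0.01 * rng.standard_normal(c.size)

# Linear least-squares inversion of the integrated equation
A = np.column_stack([cumtrapz(ca, t), -cumtrapz(c_noisy, t)])
(Ktrans_hat, kep_hat), *_ = np.linalg.lstsq(A, c_noisy, rcond=None)
print(f"Ktrans: true {Ktrans_true*60:.3f}, est {Ktrans_hat*60:.3f} (1/min)")
print(f"kep:    true {kep_true*60:.3f}, est {kep_hat*60:.3f} (1/min)")
```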

  13. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    Science.gov (United States)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.

  14. Comparison of preconditioned generalized conjugate gradient methods to two-dimensional neutron and photon transport equation

    International Nuclear Information System (INIS)

    Chen, G.S.

    1997-01-01

    We apply and compare preconditioned generalized conjugate gradient methods to solve the linear system of equations that arises from the two-dimensional neutron and photon transport equation in this paper. Several subroutines are developed on the basis of preconditioned generalized conjugate gradient methods for the time-independent, two-dimensional neutron and photon transport equation in transport theory. The following generalized conjugate gradient methods are used: TFQMR (transpose-free quasi-minimal residual algorithm), CGS (conjugate gradient squared algorithm), Bi-CGSTAB (bi-conjugate gradient stabilized algorithm) and QMRCGSTAB (quasi-minimal residual variant of the bi-conjugate gradient stabilized algorithm). These subroutines are connected to the computer program DORT. Several problems are tested on a personal computer with an Intel Pentium CPU. (author)

  15. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme is based on the least squares Monte Carlo method, which was proposed by Longstaff and Schwartz (2001) for the pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem...

  16. Tracer particles in two-dimensional elastic networks diffuse logarithmically slow

    International Nuclear Information System (INIS)

    Lizana, Ludvig; Ambjörnsson, Tobias; Lomholt, Michael A

    2017-01-01

    Several experiments on tagged molecules or particles in living systems suggest that they move anomalously slowly—their mean squared displacement (MSD) increases more slowly than linearly with time. Leading models aimed at understanding these experiments predict that the MSD grows as a power law with a growth exponent that is smaller than unity. However, in some experiments the growth is so slow (fitted exponent ∼0.1–0.2) that they hint towards other mechanisms at play. In this paper, we theoretically demonstrate how in-plane collective modes excited by thermal fluctuations in a two-dimensional membrane lead to a logarithmic time dependence for the tracer particle's MSD. (paper)

  17. Canonical Least-Squares Monte Carlo Valuation of American Options: Convergence and Empirical Pricing Analysis

    Directory of Open Access Journals (Sweden)

    Xisheng Yu

    2014-01-01

    Full Text Available The paper by Liu (2010) introduces a method termed the canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide the convergence results of CLM and numerically examine the convergence properties. Then, a comparative analysis is empirically conducted using a large sample of the S&P 100 Index (OEX) puts and IBM puts. The results on convergence show that choosing the shifted Legendre polynomials with four regressors is more appropriate considering the pricing accuracy and the computational cost. With this choice, the CLM method is empirically demonstrated to be superior to the benchmark methods of the binomial tree and finite differences with historical volatilities.
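
    For context, the plain Longstaff-Schwartz least-squares Monte Carlo algorithm that CLM builds on can be sketched as follows for an American put with an ordinary polynomial regression basis; the market parameters are arbitrary assumptions and the canonical (entropy-constrained) modification itself is not implemented.

```python
# Minimal sketch of least-squares Monte Carlo (Longstaff-Schwartz) pricing of
# an American put with an ordinary polynomial regression basis.
import numpy as np

def lsm_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=50_000, deg=3, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)

    # Simulate GBM paths (antithetic variates for variance reduction).
    z = rng.standard_normal((paths // 2, steps))
    z = np.vstack([z, -z])
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(increments, axis=1))

    cash = np.maximum(K - S[:, -1], 0.0)            # payoff at maturity
    for t in range(steps - 2, -1, -1):
        cash *= disc                                # discount one step back
        itm = K - S[:, t] > 0.0                     # regress on ITM paths only
        if itm.any():
            coeff = np.polyfit(S[itm, t], cash[itm], deg)
            continuation = np.polyval(coeff, S[itm, t])
            exercise = K - S[itm, t]
            cash[itm] = np.where(exercise > continuation, exercise, cash[itm])
    return disc * cash.mean()                        # discount back to t = 0

print("LSM American put price ~", round(lsm_american_put(), 3))
```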

  18. Influence of the least-squares phase on optical vortices in strongly scintillated beams

    CSIR Research Space (South Africa)

    Chen, M

    2009-06-01

    Full Text Available , the average total number of vortices is reduced further. However, the reduction becomes smaller for each succes- sive step. This indicates that the ability of getting rid of optical vortices by removing the least-squares phase becomes progressively less...

  19. Expectile smoothing: new perspectives on asymmetric least squares. An application to life expectancy

    NARCIS (Netherlands)

    Schnabel, S.K.

    2011-01-01

    While initially motivated by a demographic application, this thesis develops methodology for expectile estimation. To this end, the basic model for expectile curves using least asymmetrically weighted squares (LAWS) is first introduced, as well as methods for smoothing in this context. The simple

  1. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, based on the training symbols, the theoretical received sequence is composed. Next the least squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  2. Joint 2D-DOA and Frequency Estimation for L-Shaped Array Using Iterative Least Squares Method

    Directory of Open Access Journals (Sweden)

    Ling-yun Xu

    2012-01-01

    Full Text Available We introduce an iterative least squares (ILS) method for estimating the 2D-DOA and frequency based on an L-shaped array. The ILS iteratively finds the direction matrix and the delay matrix; then the 2D-DOA and frequency can be obtained by the least squares method. Without spectral peak searching and pairing, this algorithm works well and pairs the parameters automatically. Moreover, our algorithm has better performance than the conventional ESPRIT algorithm and the propagator method. The useful behavior of the proposed algorithm is verified by simulations.

  3. The Use of Alternative Regression Methods in Social Sciences and the Comparison of Least Squares and M Estimation Methods in Terms of the Determination of Coefficient

    Science.gov (United States)

    Coskuntuncel, Orkun

    2013-01-01

    The purpose of this study is two-fold; the first aim being to show the effect of outliers on the widely used least squares regression estimator in social sciences. The second aim is to compare the classical method of least squares with the robust M-estimator using the "determination of coefficient" (R²). For this purpose,…

  4. Analysis of the Two-Regime Method on Square Meshes

    KAUST Repository

    Flegg, Mark B.

    2014-01-01

    The two-regime method (TRM) has been recently developed for optimizing stochastic reaction-diffusion simulations [M. Flegg, J. Chapman, and R. Erban, J. Roy. Soc. Interface, 9 (2012), pp. 859-868]. It is a multiscale (hybrid) algorithm which uses stochastic reaction-diffusion models with different levels of detail in different parts of the computational domain. The coupling condition on the interface between different modeling regimes of the TRM was previously derived for one-dimensional models. In this paper, the TRM is generalized to higher-dimensional reaction-diffusion systems. Coupling Brownian dynamics models with compartment-based models on regular (square) two-dimensional lattices is studied in detail. In this case, the interface between different modeling regimes contains either flat parts or right-angle corners. Both cases are studied in the paper. For flat interfaces, it is shown that the one-dimensional theory can be used along the line perpendicular to the TRM interface. In the direction tangential to the interface, two choices of the TRM parameters are presented. Their applicability depends on the compartment size and the time step used in the molecular-based regime. The two-dimensional generalization of the TRM is also discussed in the case of corners. © 2014 Society for Industrial and Applied Mathematics.

  5. Least median of squares filtering of locally optimal point matches for compressible flow image registration

    International Nuclear Information System (INIS)

    Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert-determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. (paper)
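
    The least-median-of-squares principle used for the filtering step can be illustrated in its simplest form, robust line fitting by random sampling of minimal subsets, as sketched below; the data are synthetic and the paper's block-matching displacement model and forward search are not reproduced.

```python
# Minimal sketch of least median of squares (LMedS) line fitting: sample many
# two-point subsets, fit a line to each, and keep the line with the smallest
# median squared residual (robust to a large fraction of outliers).
import numpy as np

def lmeds_line(x, y, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    best = (np.inf, None)
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        med = np.median((y - (slope * x + intercept)) ** 2)   # median squared residual
        if med < best[0]:
            best = (med, (slope, intercept))
    return best[1]

rng = np.random.default_rng(10)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + 0.2 * rng.standard_normal(200)
y[:60] += rng.uniform(5, 20, 60)          # 30% gross outliers
print("LMedS (slope, intercept):", np.round(lmeds_line(x, y), 3))
```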

  6. Circular contour retrieval in real-world conditions by higher order statistics and an alternating-least squares algorithm

    Science.gov (United States)

    Jiang, Haiping; Marot, Julien; Fossati, Caroline; Bourennane, Salah

    2011-12-01

    In real-world conditions, contours are most often blurred in digital images because of acquisition conditions such as movement, the light transmission environment, and defocus. Among image segmentation methods, the Hough transform requires a computational load which increases with the number of noise pixels, level set methods also require a high computational load, and some other methods assume that the contours are one pixel wide. For the first time, we retrieve the characteristics of multiple, possibly concentric, blurred circles. We consider a correlated noise environment, to get closer to real-world conditions. For this, we model a blurred circle by a few parameters--center coordinates, radius, and spread--which characterize its mean position and gray level variations. We derive the signal model which results from signal generation on a circular antenna. Linear antennas provide the center coordinates. To retrieve the circle radii, we adapt the second-order statistics TLS-ESPRIT method for a non-correlated noise environment, and propose a novel version of TLS-ESPRIT based on higher-order statistics for a correlated noise environment. Then, we derive a least-squares criterion and propose an alternating least-squares algorithm to retrieve simultaneously all spread values of the concentric circles. Experiments performed on hand-made and real-world images show that the proposed methods outperform the Hough transform and a level set method dedicated to blurred contours in terms of computational load. Moreover, the proposed model and optimization method provide information on the contour gray level variations.

  7. Exploring two-dimensional electron gases with two-dimensional Fourier transform spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Paul, J.; Dey, P.; Karaiskaj, D., E-mail: karaiskaj@usf.edu [Department of Physics, University of South Florida, 4202 East Fowler Ave., Tampa, Florida 33620 (United States); Tokumoto, T.; Hilton, D. J. [Department of Physics, University of Alabama at Birmingham, Birmingham, Alabama 35294 (United States); Reno, J. L. [CINT, Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)

    2014-10-07

    The dephasing of the Fermi edge singularity excitations in two modulation doped single quantum wells of 12 nm and 18 nm thickness and in-well carrier concentration of ∼4 × 10¹¹ cm⁻² was carefully measured using spectrally resolved four-wave mixing (FWM) and two-dimensional Fourier transform (2DFT) spectroscopy. Although the absorption at the Fermi edge is broad at this doping level, the spectrally resolved FWM shows narrow resonances. Two peaks are observed separated by the heavy hole/light hole energy splitting. Temperature dependent “rephasing” (S₁) 2DFT spectra show a rapid linear increase of the homogeneous linewidth with temperature. The dephasing rate increases faster with temperature in the narrower 12 nm quantum well, likely due to an increased carrier-phonon scattering rate. The S₁ 2DFT spectra were measured using co-linear, cross-linear, and co-circular polarizations. Distinct 2DFT lineshapes were observed for co-linear and cross-linear polarizations, suggesting the existence of polarization dependent contributions. The “two-quantum coherence” (S₃) 2DFT spectra for the 12 nm quantum well show a single peak for both co-linear and co-circular polarizations.

  8. An Improved Generalized Predictive Control in a Robust Dynamic Partial Least Square Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2015-01-01

    Full Text Available To tackle the sensitivity to outliers in system identification, a new robust dynamic partial least squares (PLS) model based on an outlier detection method is proposed in this paper. An improved radial basis function network (RBFN) is adopted to construct the predictive model from the input and output datasets, and a hidden Markov model (HMM) is applied to detect the outliers. After the outliers are removed, a more robust dynamic PLS model is obtained. In addition, an improved generalized predictive control (GPC) with tuning weights under the dynamic PLS framework is proposed to deal with the interaction caused by model mismatch. The results of two simulations demonstrate the effectiveness of the proposed method.

  9. Optimal Control Strategies in a Two Dimensional Differential Game Using Linear Equation under a Perturbed System

    Directory of Open Access Journals (Sweden)

    Musa Danjuma SHEHU

    2008-06-01

    Full Text Available This paper lays emphasis on the formulation of two-dimensional differential games via optimal control theory and the consideration of control systems whose dynamics are described by a system of ordinary differential equations in the form of a linear equation under the influence of two controls U(·) and V(·). Based on this, strategies are constructed. Hence we determine the optimal strategy for a control, say U(·), under a perturbation generated by the second control V(·) within a given manifold M.

  10. Least Squares Inference on Integrated Volatility and the Relationship between Efficient Prices and Noise

    OpenAIRE

    Nolte, Ingmar; Voev, Valeri

    2009-01-01

    The expected value of sums of squared intraday returns (realized variance) gives rise to a least squares regression which adapts itself to the assumptions of the noise process and allows for a joint inference on integrated volatility (IV), noise moments and price-noise relations. In the iid noise case we derive the asymptotic variance of the regression parameter estimating the IV, show that it is consistent and compare its asymptotic efficiency against alternative consistent IV measures. In case of...

  11. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    Science.gov (United States)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and that more weight should be given to those classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to the least squares classification errors of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  12. Optimizing gradient conditions in online comprehensive two-dimensional reversed-phase liquid chromatography by use of the linear solvent strength model

    DEFF Research Database (Denmark)

    Græsbøll, Rune; Janssen, Hans-Gerd; Christensen, Jan H.

    2017-01-01

    The linear solvent strength model was used to predict coverage in online comprehensive two-dimensional reversed-phase liquid chromatography. The prediction model uses a parallelogram to describe the separation space covered with peaks in a system with limited orthogonality. The corners of the par......The linear solvent strength model was used to predict coverage in online comprehensive two-dimensional reversed-phase liquid chromatography. The prediction model uses a parallelogram to describe the separation space covered with peaks in a system with limited orthogonality. The corners...... of the parallelogram are assumed to behave like chromatographic peaks and the position of these pseudo-compounds was predicted. A mix of 25 polycyclic aromatic compounds were used as a test. The precision of the prediction, span 0-25, was tested by varying input parameters, and was found to be acceptable with root...... factors were low, or when gradient conditions affected parameters not included in the model, e.g. second dimension gradient time affects the second dimension equilibration time. The concept shows promise as a tool for gradient optimization in online comprehensive two-dimensional liquid chromatography...

  13. Linear regression in astronomy. II

    Science.gov (United States)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  14. Single Directional SMO Algorithm for Least Squares Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Xigao Shao

    2013-01-01

    Full Text Available Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization (SMO)-type decomposition methods is proposed. With the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from that of existing methods, but the training speed is faster than existing ones.

  15. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach

    International Nuclear Information System (INIS)

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2013-01-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. - Highlights: ► Water quality data spans short time scales leading to significant model uncertainty. ► Assessment of uncertainty essential for informed decision making in water
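
    A minimal sketch of the progression the study describes, from ordinary to weighted least squares and on to a sampling-based view of parameter uncertainty, is shown below on synthetic build-up data; the model form, weights and noise levels are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic build-up data: pollutant load vs. antecedent dry days, with
# heteroscedastic noise standing in for limited, highly variable field data.
days = rng.uniform(1.0, 14.0, 25)
load = 3.0 * np.log(days) + rng.normal(0.0, 0.3 * np.sqrt(days), 25)
X = np.column_stack([np.ones_like(days), np.log(days)])

# Ordinary least squares: a single point estimate, no spread.
beta_ols, *_ = np.linalg.lstsq(X, load, rcond=None)

# Weighted least squares: down-weight the noisier (longer dry period) samples.
w = 1.0 / days
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ load)

# A crude Bayesian/Monte Carlo flavour: sample the coefficient uncertainty and
# propagate it to a prediction, giving a distribution rather than a single value.
sigma2 = np.sum(w * (load - X @ beta_wls) ** 2) / (len(load) - 2)
cov = sigma2 * np.linalg.inv(X.T @ W @ X)
samples = rng.multivariate_normal(beta_wls, cov, 2000)
pred_7days = samples @ np.array([1.0, np.log(7.0)])
print(beta_ols, beta_wls, pred_7days.mean(), pred_7days.std())
```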

  16. Attenuation compensation in least-squares reverse time migration using the visco-acoustic wave equation

    KAUST Repository

    Dutta, Gaurav; Lu, Kai; Wang, Xin; Schuster, Gerard T.

    2013-01-01

    Attenuation leads to distortion of amplitude and phase of seismic waves propagating inside the earth. Conventional acoustic and least-squares reverse time migration do not account for this distortion which leads to defocusing of migration images

  17. Comparison between the basic least squares and the Bayesian approach for elastic constants identification

    Science.gov (United States)

    Gogu, C.; Haftka, R.; LeRiche, R.; Molimard, J.; Vautrin, A.; Sankar, B.

    2008-11-01

    The basic formulation of the least squares method, based on the L2 norm of the misfit, is still widely used today for identifying elastic material properties from experimental data. An alternative statistical approach is the Bayesian method. We seek here situations with significant difference between the material properties found by the two methods. For a simple three bar truss example we illustrate three such situations in which the Bayesian approach leads to more accurate results: different magnitude of the measurements, different uncertainty in the measurements and correlation among measurements. When all three effects add up, the Bayesian approach can have a large advantage. We then compared the two methods for identification of elastic constants from plate vibration natural frequencies.

  18. Densis. Densimetric representation of two-dimensional matrices

    International Nuclear Information System (INIS)

    Los Arcos Merino, J.M.

    1978-01-01

    Densis is a Fortran V program which allows off-line control of a Calcomp digital plotter, to represent a two-dimensional matrix of numerical elements in the form of a variable shading intensity map in two colours. Each matrix element is associated to a square of a grid which is traced over by lines whose number is a function of the element value according to a selected scale. Program features, subroutine structure and running instructions, are described. Some typical results, for gamma-gamma coincidence experimental data and a sampled two-dimensional function, are indicated. (author)

  19. Two biased estimation techniques in linear regression: Application to aircraft

    Science.gov (United States)

    Klein, Vladislav

    1988-01-01

    Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
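
    For readers unfamiliar with biased estimators, the sketch below contrasts ordinary least squares with principal components regression on deliberately collinear synthetic data; scikit-learn is assumed, and neither the mixed-estimation variant nor the flight-test data is reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Collinear regressors: x2 is nearly a copy of x1, as happens when inputs
# are not independently excited during a maneuver.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)
X = np.column_stack([x1, x2, rng.normal(size=n)])
y = 1.0 * x1 + 1.0 * x2 + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

# Ordinary least squares: the coefficients on x1/x2 are poorly determined.
print(LinearRegression().fit(X, y).coef_)

# Principal components regression: regress on the leading components only,
# accepting a small bias in exchange for a large variance reduction.
pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
print(pcr.named_steps["linearregression"].coef_)
```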

  20. Two Dimensional Symmetric Correlation Functions of the S Operator and Two Dimensional Fourier Transforms: Considering the Line Coupling for P and R Lines of Linear Molecules

    Science.gov (United States)

    Ma, Q.; Boulet, C.; Tipping, R. H.

    2014-01-01

    The refinement of the Robert-Bonamy (RB) formalism by considering the line coupling for isotropic Raman Q lines of linear molecules developed in our previous study [Q. Ma, C. Boulet, and R. H. Tipping, J. Chem. Phys. 139, 034305 (2013)] has been extended to infrared P and R lines. In these calculations, the main task is to derive diagonal and off-diagonal matrix elements of the Liouville operator iS1 - S2 introduced in the formalism. When one considers the line coupling for isotropic Raman Q lines where their initial and final rotational quantum numbers are identical, the derivations of off-diagonal elements do not require extra correlation functions of the Ŝ operator and their Fourier transforms except for those used in deriving diagonal elements. In contrast, the derivations for infrared P and R lines become more difficult because they require many new correlation functions and their Fourier transforms. By introducing two dimensional correlation functions labeled by two tensor ranks and making variable changes to become even functions, the derivations only require their two-dimensional Fourier transforms evaluated at two modulation frequencies characterizing the averaged energy gap and the frequency detuning between the two coupled transitions. With the coordinate representation, it is easy to accurately derive these two dimensional correlation functions. Meanwhile, by using the sampling theory one is able to effectively evaluate their two dimensional Fourier transforms. Thus, the obstacles in considering the line coupling for P and R lines have been overcome. Numerical calculations have been carried out for the half-widths of both the isotropic Raman Q lines and the infrared P and R lines of C2H2 broadened by N2. In comparison with values derived from the RB formalism, new calculated values are significantly reduced and become closer to measurements.

  1. Kinetic microplate bioassays for relative potency of antibiotics improved by partial Least Square (PLS) regression.

    Science.gov (United States)

    Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Almeida, Túlia de Souza Botelho; Lourenço, Felipe Rebello

    2016-05-01

    Microbiological assays are widely used to estimate the relative potencies of antibiotics in order to guarantee the efficacy, safety, and quality of drug products. Despite the advantages of turbidimetric bioassays compared to other methods, they have limitations concerning the linearity and range of the dose-response curve determination. Here, we proposed to use partial least squares (PLS) regression to overcome these limitations and to improve the prediction of relative potencies of antibiotics. Kinetic-reading microplate turbidimetric bioassays for apramycin and vancomycin were performed using Escherichia coli (ATCC 8739) and Bacillus subtilis (ATCC 6633), respectively. Microbial growth was measured as absorbance up to 180 and 300 min for the apramycin and vancomycin turbidimetric bioassays, respectively. Conventional dose-response curves (absorbance or area under the microbial growth curve vs. log of antibiotic concentration) showed significant regression; however, there was significant deviation from linearity. Thus, they could not be used for relative potency estimations. PLS regression allowed us to construct a predictive model for estimating the relative potencies of apramycin and vancomycin without over-fitting, and it improved the linear range of the turbidimetric bioassay. In addition, PLS regression provided predictions of relative potencies equivalent to those obtained from official agar diffusion methods. Therefore, we conclude that PLS regression may be used to estimate the relative potencies of antibiotics with significant advantages compared to conventional dose-response curve determination. Copyright © 2016 Elsevier B.V. All rights reserved.
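
    The idea of regressing the whole kinetic curve on log concentration can be sketched with scikit-learn's PLSRegression on synthetic growth curves, as below; the data, component count and error metric are placeholders, not the published calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)

# Synthetic stand-in for kinetic turbidimetric data: each row is a microbial
# growth curve (absorbance vs. time) whose shape depends on log-concentration.
n_samples, n_times = 48, 60
log_conc = rng.uniform(-1.0, 1.0, n_samples)
t = np.linspace(0.0, 5.0, n_times)
X = 1.0 / (1.0 + np.exp(-(t[None, :] - 2.5 - log_conc[:, None]) * 2.0))
X += 0.01 * rng.normal(size=X.shape)

# PLS uses the whole curve instead of a single dose-response summary, which is
# how the linear range of the bioassay gets extended.
pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X, log_conc, cv=6).ravel()
rmse = np.sqrt(np.mean((pred - log_conc) ** 2))
print(f"cross-validated RMSE of log-concentration: {rmse:.3f}")
```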

  2. Application of least squares support vector regression and linear multiple regression for modeling removal of methyl orange onto tin oxide nanoparticles loaded on activated carbon and activated carbon prepared from Pistacia atlantica wood.

    Science.gov (United States)

    Ghaedi, M; Rahimi, Mahmoud Reza; Ghaedi, A M; Tyagi, Inderjeet; Agarwal, Shilpi; Gupta, Vinod Kumar

    2016-01-01

    Two novel and eco-friendly adsorbents, namely tin oxide nanoparticles loaded on activated carbon (SnO2-NP-AC) and activated carbon prepared from the wood of the tree Pistacia atlantica (AC-PAW), were used for the rapid removal and fast adsorption of methyl orange (MO) from the aqueous phase. The dependency of MO removal on various adsorption influential parameters was modeled and optimized using multiple linear regression (MLR) and least squares support vector regression (LSSVR). The optimal parameters for the LSSVR model were found to be a γ value of 0.76 and a σ² of 0.15. For the test data set, a mean square error (MSE) of 0.0010 and a coefficient of determination (R²) of 0.976 were obtained for the LSSVR model, and an MSE of 0.0037 and an R² of 0.897 were obtained for the MLR model. The adsorption equilibrium data were well fitted by the Langmuir isotherm model, and the kinetic data by the second-order and intra-particle diffusion models, respectively. A small amount of the proposed SnO2-NP-AC and AC-PAW (0.015 g and 0.08 g) is sufficient for successful rapid removal of methyl orange (>95%). The maximum adsorption capacity for SnO2-NP-AC and AC-PAW was 250 mg g⁻¹ and 125 mg g⁻¹, respectively. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Structure-activity relationship study of oxindole-based inhibitors of cyclin-dependent kinases based on least-squares support vector machines

    International Nuclear Information System (INIS)

    Li Jiazhong; Liu Huanxiang; Yao Xiaojun; Liu Mancang; Hu Zhide; Fan Botao

    2007-01-01

    The least-squares support vector machine (LS-SVM), an effective modified algorithm of the support vector machine, was used to build structure-activity relationship (SAR) models to classify oxindole-based inhibitors of cyclin-dependent kinases (CDKs) based on their activity. Each compound was depicted by structural descriptors that encode constitutional, topological, geometrical, electrostatic and quantum-chemical features. The forward stepwise linear discriminant analysis method was used to search the descriptor space and select the structural descriptors responsible for activity. The linear discriminant analysis (LDA) and nonlinear LS-SVM methods were employed to build classification models, and the best results were obtained by the LS-SVM method, with prediction accuracies on the test set of 100% and 90.91% for CDK1 and CDK2, respectively, compared with 95.45% and 86.36% for the LDA models. This paper provides an effective method to screen CDK inhibitors.

  4. Speed control of induction motor using fuzzy recursive least squares technique

    OpenAIRE

    Santiago Sánchez; Eduardo Giraldo

    2008-01-01

    A simple adaptive controller design is presented in this paper; the control system uses adaptive fuzzy logic and sliding modes, and is trained with the recursive least squares technique. The problem of parameter variation is solved with the adaptive controller; the use of an internal PI regulator means that speed control of the induction motor is achieved through the stator currents instead of the input voltage. The rotor-flux oriented coordinate system model is used to develop and test the c...

  5. Analysis of neutron and x-ray reflectivity data by constrained least-squares methods

    DEFF Research Database (Denmark)

    Pedersen, J.S.; Hamley, I.W.

    1994-01-01

    ... The coefficients in the series are determined by constrained nonlinear least-squares methods, in which the smoothest solution that agrees with the data is chosen. In the second approach the profile is expressed as a series of sine and cosine terms. A smoothness constraint is used which reduces the coefficients ...

  6. Linear zonal atmospheric prediction for adaptive optics

    Science.gov (United States)

    McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael

    2000-07-01

    We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter 16-subaperture AO telescope with 5 millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks are quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster and it also converges to the solution with global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global minimum linear total phase error (approximately 0.18 rad²), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad²), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad²). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
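
    A bare-bones recursive least-squares predictor of the kind compared in this study is sketched below; the forgetting factor, lookback length and toy scalar signal are assumptions, and the sketch ignores the wavefront-sensor geometry entirely.

```python
import numpy as np

class RLSPredictor:
    """Recursive least-squares fit of a linear map from a stacked lookback of
    past samples to the next sample (a sketch, not the AO pipeline)."""

    def __init__(self, n_in, n_out, lam=0.999, delta=1e3):
        self.W = np.zeros((n_out, n_in))   # linear predictor weights
        self.P = np.eye(n_in) * delta      # inverse correlation estimate
        self.lam = lam                     # forgetting factor

    def update(self, x, d):
        # x: stacked lookback vector, d: the value actually measured next
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)       # gain vector
        e = d - self.W @ x                 # a priori prediction error
        self.W += np.outer(e, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

    def predict(self, x):
        return self.W @ x

# Toy usage: predict a noisy sinusoid (standing in for one slope channel)
# from a lookback of its last 4 samples.
rng = np.random.default_rng(3)
sig = np.sin(0.2 * np.arange(2000)) + 0.05 * rng.normal(size=2000)
rls = RLSPredictor(n_in=4, n_out=1)
for t in range(4, 1999):
    x = sig[t - 4:t][::-1].copy()
    rls.update(x, sig[t:t + 1])
print(rls.predict(sig[1995:1999][::-1].copy()), sig[1999])
```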

  7. Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine

    International Nuclear Information System (INIS)

    Xu Ruirui; Bian Guoxing; Gao Chenfeng; Chen Tianlun

    2005-01-01

    The least squares support vector machine (LS-SVM) is used to study the nonlinear time series prediction. First, the parameter γ and multi-step prediction capabilities of the LS-SVM network are discussed. Then we employ clustering method in the model to prune the number of the support values. The learning rate and the capabilities of filtering noise for LS-SVM are all greatly improved.

  8. Estimation of active pharmaceutical ingredients content using locally weighted partial least squares and statistical wavelength selection.

    Science.gov (United States)

    Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji

    2011-12-15

    Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because physical and chemical properties of a measuring object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS), which utilizes a newly defined similarity between samples, is proposed to estimate active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to the conventional PLS using wavelengths selected on the basis of variable importance on the projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  10. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc, designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  11. A Galerkin least squares approach to viscoelastic flow.

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails for relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A Pressure Poisson equation is used when the velocity and pressure are sought to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems to be suitable as a general-use algorithm.

  12. Least-squares reverse time migration with radon preconditioning

    KAUST Repository

    Dutta, Gaurav

    2016-09-06

    We present a least-squares reverse time migration (LSRTM) method using Radon preconditioning to regularize noisy or severely undersampled data. A high resolution local radon transform is used as a change of basis for the reflectivity and sparseness constraints are applied to the inverted reflectivity in the transform domain. This reflects the prior that for each location of the subsurface the number of geological dips is limited. The forward and the adjoint mapping of the reflectivity to the local Radon domain and back are done through 3D Fourier-based discrete Radon transform operators. The sparseness is enforced by applying weights to the Radon domain components which either vary with the amplitudes of the local dips or are thresholded at given quantiles. Numerical tests on synthetic and field data validate the effectiveness of the proposed approach in producing images with improved SNR and reduced aliasing artifacts when compared with standard RTM or LSRTM.

  13. Status of software for PGNAA bulk analysis by the Monte Carlo - Library Least-Squares (MCLLS) approach

    International Nuclear Information System (INIS)

    Gardner, R.P.; Zhang, W.; Metwally, W.A.

    2005-01-01

    The Center for Engineering Applications of Radioisotopes (CEAR) has been working for about ten years on the Monte Carlo - Library Least-Squares (MCLLS) approach for treating the nonlinear inverse analysis problem for PGNAA bulk analysis. This approach consists essentially of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required libraries. These libraries are then used in the linear Library Least-Squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. The other libraries include all sources of background which includes: (1) gamma-rays emitted by the neutron source, (2) prompt gamma-rays produced in the analyzer construction materials, (3) natural gamma-rays from K-40 and the uranium and thorium decay chains, and (4) prompt and decay gamma-rays produced in the NaI detector by neutron activation. A number of unforeseen problems have arisen in pursuing this approach including: (1) the neutron activation of the most common detector (NaI) used in bulk analysis PGNAA systems, (2) the nonlinearity of this detector, and (3) difficulties in obtaining detector response functions for this (and other) detectors. These problems have been addressed by CEAR recently and have either been solved or are almost solved at the present time. Development of Monte Carlo simulation for all of the libraries has been finished except the prompt gamma-ray library from the activation of the NaI detector. Treatment for the coincidence schemes for Na and particularly I must be first determined to complete the Monte Carlo simulation of this last library. (author)

  14. Matter-wave two-dimensional solitons in crossed linear and nonlinear optical lattices

    International Nuclear Information System (INIS)

    Luz, H. L. F. da; Gammal, A.; Abdullaev, F. Kh.; Salerno, M.; Tomio, Lauro

    2010-01-01

    The existence of multidimensional matter-wave solitons in a crossed optical lattice (OL) with a linear optical lattice (LOL) in the x direction and a nonlinear optical lattice (NOL) in the y direction, where the NOL can be generated by a periodic spatial modulation of the scattering length using an optically induced Feshbach resonance, is demonstrated. In particular, we show that such crossed LOLs and NOLs allow for stabilizing two-dimensional solitons against decay or collapse for both attractive and repulsive interactions. The solutions for the soliton stability are investigated analytically, by using a multi-Gaussian variational approach, with the Vakhitov-Kolokolov necessary criterion for stability; and numerically, by using the relaxation method and direct numerical time integrations of the Gross-Pitaevskii equation. Very good agreement of the results corresponding to both treatments is observed.

  15. Matter-wave two-dimensional solitons in crossed linear and nonlinear optical lattices

    Science.gov (United States)

    da Luz, H. L. F.; Abdullaev, F. Kh.; Gammal, A.; Salerno, M.; Tomio, Lauro

    2010-10-01

    The existence of multidimensional matter-wave solitons in a crossed optical lattice (OL) with a linear optical lattice (LOL) in the x direction and a nonlinear optical lattice (NOL) in the y direction, where the NOL can be generated by a periodic spatial modulation of the scattering length using an optically induced Feshbach resonance, is demonstrated. In particular, we show that such crossed LOLs and NOLs allow for stabilizing two-dimensional solitons against decay or collapse for both attractive and repulsive interactions. The solutions for the soliton stability are investigated analytically, by using a multi-Gaussian variational approach, with the Vakhitov-Kolokolov necessary criterion for stability; and numerically, by using the relaxation method and direct numerical time integrations of the Gross-Pitaevskii equation. Very good agreement of the results corresponding to both treatments is observed.

  16. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis

    DEFF Research Database (Denmark)

    Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel

    2014-01-01

    Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for 2 5-wk periods ... The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 ...

  17. Modelling of the thermal parameters of high-power linear laser-diode arrays. Two-dimensional transient model

    International Nuclear Information System (INIS)

    Bezotosnyi, V V; Kumykov, Kh Kh

    1998-01-01

    A two-dimensional transient thermal model of an injection laser is developed. This model makes it possible to analyse the temperature profiles in pulsed and cw stripe lasers with an arbitrary width of the stripe contact, and also in linear laser-diode arrays. This can be done for any durations and repetition rates of the pump pulses. The model can also be applied to two-dimensional laser-diode arrays operating quasicontinuously. An analysis is reported of the influence of various structural parameters of a diode array on the thermal regime of a single laser. The temperature distributions along the cavity axis are investigated for different variants of mounting a crystal on a heat sink. It is found that the temperature drop along the cavity length in cw and quasi-cw laser diodes may exceed 20%. (lasers)

  18. Discrete least squares polynomial approximation with random evaluations - application to PDEs with Random parameters

    KAUST Repository

    Nobile, Fabio

    2015-01-01

    the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial

  19. Modeling geochemical datasets for source apportionment: Comparison of least square regression and inversion approaches.

    Digital Repository Service at National Institute of Oceanography (India)

    Tripathy, G.R.; Das, Anirban.

    used methods, the Least Square Regression (LSR) and Inverse Modeling (IM), to determine the contributions of (i) solutes from different sources to global river water, and (ii) various rocks to a glacial till. The purpose of this exercise is to compare...

  20. Quasi-two-dimensional holography

    International Nuclear Information System (INIS)

    Kutzner, J.; Erhard, A.; Wuestenberg, H.; Zimpfer, J.

    1980-01-01

    Acoustical holography with numerical reconstruction by area scanning is memory- and time-intensive. Based on our experience with linear holography, we derived a scanning scheme for evaluating two-dimensional flaw sizes. In most practical cases it is sufficient to determine the exact depth extension of a flaw, whereas the accuracy of the length extension is less critical. For this reason the so-called quasi-two-dimensional holography is applicable. The sound field, produced by special probes, is divergent in the inclined plane and slightly focused in the perpendicular plane using cylindrical lenses. (orig.) [de

  1. Speed control of induction motor using fuzzy recursive least squares technique

    Directory of Open Access Journals (Sweden)

    Santiago Sánchez

    2008-12-01

    Full Text Available A simple adaptive controller design is presented in this paper; the control system uses adaptive fuzzy logic and sliding modes, and is trained with the recursive least squares technique. The problem of parameter variation is solved with the adaptive controller; the use of an internal PI regulator means that speed control of the induction motor is achieved through the stator currents instead of the input voltage. The rotor-flux oriented coordinate system model is used to develop and test the control system.

  2. The derivation of vector magnetic fields from Stokes profiles - Integral versus least squares fitting techniques

    Science.gov (United States)

    Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.

    1987-01-01

    The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 Å at high spectral resolution (45 mÅ), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about ±20°.

  3. Study of the convergence behavior of the complex kernel least mean square algorithm.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2013-09-01

    The complex kernel least mean square (CKLMS) algorithm is recently derived and allows for online kernel adaptive learning for complex data. Kernel adaptive methods can be used in finding solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves considering the circularity of the complex input signals and their effect on nonlinear learning. Simulations are used for verifying the analysis results.

  4. Two-dimensional square ternary Cu2MX4 (M = Mo, W; X = S, Se) monolayers and nanoribbons predicted from density functional theory

    KAUST Repository

    Gan, Liyong

    2014-03-19

    Two-dimensional (2D) materials often adopt a hexagonal lattice. We report on a class of 2D materials, Cu2MX4 (M = Mo, W; X = S, Se), that has a square lattice. Up to three monolayers, the systems are kinetically stable. All of them are semiconductors with band gaps from 2.03 to 2.48 eV. Specifically, the states giving rise to the valence band maximum are confined to the Cu and X atoms, while those giving rise to the conduction band minimum are confined to the M atoms, suggesting that spontaneous charge separation occurs. The semiconductive nature makes the materials promising for transistors, optoelectronics, and solar energy conversion. Moreover, the ferromagnetism on the edges of square Cu2MX4 nanoribbons opens applications in spintronics.

  5. Two-dimensional square ternary Cu2MX4 (M = Mo, W; X = S, Se) monolayers and nanoribbons predicted from density functional theory

    KAUST Repository

    Gan, Liyong; Schwingenschlögl, Udo

    2014-01-01

    Two-dimensional (2D) materials often adopt a hexagonal lattice. We report on a class of 2D materials, Cu2MX4 (M = Mo, W; X = S, Se), that has a square lattice. Up to three monolayers, the systems are kinetically stable. All of them are semiconductors with band gaps from 2.03 to 2.48 eV. Specifically, the states giving rise to the valence band maximum are confined to the Cu and X atoms, while those giving rise to the conduction band minimum are confined to the M atoms, suggesting that spontaneous charge separation occurs. The semiconductive nature makes the materials promising for transistors, optoelectronics, and solar energy conversion. Moreover, the ferromagnetism on the edges of square Cu2MX4 nanoribbons opens applications in spintronics.

  6. A weak Galerkin least-squares finite element method for div-curl systems

    Science.gov (United States)

    Li, Jichun; Ye, Xiu; Zhang, Shangyou

    2018-06-01

    In this paper, we introduce a weak Galerkin least-squares method for solving div-curl problem. This finite element method leads to a symmetric positive definite system and has the flexibility to work with general meshes such as hybrid mesh, polytopal mesh and mesh with hanging nodes. Error estimates of the finite element solution are derived. The numerical examples demonstrate the robustness and flexibility of the proposed method.

  7. semPLS: Structural Equation Modeling Using Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Armin Monecke

    2012-05-01

    Full Text Available Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM, which is especially suited for situations when data is not normally distributed. PLS path modelling is referred to as a soft-modeling technique with minimum demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore it contains modular methods for computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well known mobile phone dataset from marketing research is used to demonstrate the features of the package.

  8. Correlation of two-dimensional echocardiography and pathologic findings in porcine valve dysfunction.

    Science.gov (United States)

    Forman, M B; Phelan, B K; Robertson, R M; Virmani, R

    1985-02-01

    Two-dimensional echocardiographic findings in porcine valve dysfunction were compared with pathologic findings in 10 patients (12 valves). Three specific echocardiographic findings were identified in patients with regurgitant lesions: prolapse, fracture and flail leaflets. Prolapse was associated pathologically with thinning of the leaflets, longitudinal tears close to the ring margin and acid mucopolysaccharide accumulation. Valve fracture was seen with and without prolapse and was accompanied pathologically by small pinpoint perforations or tears of the leaflet. A flail leaflet was seen with a linear tear of the free margin and was associated with calcific deposits. Mild degrees of fracture seen pathologically were missed on the echocardiographic study in five patients. Thickening or calcification, when present in moderate or severe amounts, was correctly identified by echocardiography. When all abnormal features were considered collectively, two-dimensional echocardiography correctly identified at least one of them in all patients. Therefore, two-dimensional echocardiography may prove useful in assessing the source of valvular regurgitation in patients with bioprosthetic valves.

  9. Equalization of Loudspeaker and Room Responses Using Kautz Filters: Direct Least Squares Design

    Directory of Open Access Journals (Sweden)

    Karjalainen Matti

    2007-01-01

    Full Text Available DSP-based correction of loudspeaker and room responses is becoming an important part of improving sound reproduction. Such response equalization (EQ) is based on using a digital filter in cascade with the reproduction channel to counteract the response errors introduced by loudspeakers and room acoustics. Several FIR and IIR filter design techniques have been proposed for equalization purposes. In this paper we investigate Kautz filters, an interesting class of IIR filters, from the point of view of direct least squares EQ design. Kautz filters can be seen as generalizations of FIR filters and their frequency-warped counterparts. They provide a flexible means to obtain desired frequency resolution behavior, which allows low filter orders even for complex corrections. Kautz filters also have the desirable property of avoiding the inversion of dips in the transfer function into sharp, long-ringing resonances in the equalizer. Furthermore, the direct least squares design is applicable to nonminimum-phase EQ design and allows using a desired target response. The proposed method is demonstrated by case examples with measured and synthetic loudspeaker and room responses.

  10. Optical nonlinearities associated to applied electric fields in parabolic two-dimensional quantum rings

    International Nuclear Information System (INIS)

    Duque, C.M.; Morales, A.L.; Mora-Ramos, M.E.; Duque, C.A.

    2013-01-01

    The linear and nonlinear optical absorption as well as the linear and nonlinear corrections to the refractive index are calculated in a disc shaped quantum dot under the effect of an external magnetic field and parabolic and inverse square confining potentials. The exact solutions for the two-dimensional motion of the conduction band electrons are used as the basis for a perturbation-theory treatment of the effect of a static applied electric field. In general terms, the variation of one of the different potential energy parameters – for a fixed configuration of the remaining ones – leads to either blueshifts or redshifts of the resonant peaks as well as to distinct rates of change for their amplitudes. -- Highlights: • Optical absorption and corrections to the refractive index in quantum dots. • Electric and magnetic field and parabolic and inverse square potentials. • Perturbation-theory treatment of the effect of the electric field. • Induced blueshifts or redshifts of the resonant peaks are studied. • Evolution of rates of change for amplitudes of resonant peaks

  11. Optical nonlinearities associated to applied electric fields in parabolic two-dimensional quantum rings

    Energy Technology Data Exchange (ETDEWEB)

    Duque, C.M., E-mail: cduque@fisica.udea.edu.co [Instituto de Física, Universidad de Antioquia, AA 1226, Medellín (Colombia); Morales, A.L. [Instituto de Física, Universidad de Antioquia, AA 1226, Medellín (Colombia); Mora-Ramos, M.E. [Instituto de Física, Universidad de Antioquia, AA 1226, Medellín (Colombia); Facultad de Ciencias, Universidad Autónoma del Estado de Morelos, Ave. Universidad 1001, CP 62209, Cuernavaca, Morelos (Mexico); Duque, C.A. [Instituto de Física, Universidad de Antioquia, AA 1226, Medellín (Colombia)

    2013-11-15

    The linear and nonlinear optical absorption as well as the linear and nonlinear corrections to the refractive index are calculated in a disc shaped quantum dot under the effect of an external magnetic field and parabolic and inverse square confining potentials. The exact solutions for the two-dimensional motion of the conduction band electrons are used as the basis for a perturbation-theory treatment of the effect of a static applied electric field. In general terms, the variation of one of the different potential energy parameters – for a fixed configuration of the remaining ones – leads to either blueshifts or redshifts of the resonant peaks as well as to distinct rates of change for their amplitudes. -- Highlights: • Optical absorption and corrections to the refractive index in quantum dots. • Electric and magnetic field and parabolic and inverse square potentials. • Perturbation-theory treatment of the effect of the electric field. • Induced blueshifts or redshifts of the resonant peaks are studied. • Evolution of rates of change for amplitudes of resonant peaks.

  12. Direct Linear System Identification Method for Multistory Three-dimensional Building Structure with General Eccentricity

    OpenAIRE

    Shintani, Kenichirou; Yoshitomi, Shinta; Takewaki, Izuru

    2017-01-01

    A method of physical parameter system identification (SI) is proposed here for three-dimensional (3D) building structures with in-plane rigid floors in which the stiffness and damping coefficients of each structural frame in the 3D building structure are identified from the measured floor horizontal accelerations. A batch processing least-squares estimation method for many discrete time domain measured data is proposed for the direct identification of the stiffness and damping coefficients of...

  13. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    Science.gov (United States)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  14. Least Squares Estimate of the Initial Phases in STFT based Speech Enhancement

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Krawczyk-Becker, Martin; Gerkmann, Timo

    2015-01-01

    In this paper, we consider single-channel speech enhancement in the short time Fourier transform (STFT) domain. We suggest to improve an STFT phase estimate by estimating the initial phases. The method is based on the harmonic model and a model for the phase evolution over time. The initial phases are estimated by setting up a least squares problem between the noisy phase and the model for phase evolution. Simulations on synthetic and speech signals show a decreased error on the phase when an estimate of the initial phase is included compared to using the noisy phase as an initialisation. The error on the phase is decreased at input SNRs from -10 to 10 dB. Reconstructing the signal using the clean amplitude, the mean squared error is decreased and the PESQ score is increased.
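
    One plausible reading of the initial-phase estimation step is sketched below: the harmonic phase model is subtracted from the noisy STFT phases of one harmonic, and the residual is averaged on the unit circle. The wrapping treatment and all parameter values are assumptions and may differ from the paper's exact least-squares formulation.

```python
import numpy as np

def estimate_initial_phase(noisy_phase, harmonic, omega0, hop):
    """Estimate a harmonic's initial phase from its noisy STFT phases.

    Model: phase[n] = phi0 + harmonic * omega0 * hop * n for frames n = 0..N-1.
    To cope with 2*pi wrapping, the residuals are averaged on the unit circle,
    a common surrogate for the unwrapped least-squares solution."""
    n = np.arange(len(noisy_phase))
    residual = noisy_phase - harmonic * omega0 * hop * n
    return np.angle(np.mean(np.exp(1j * residual)))

# Toy check: recover a known initial phase from noisy, wrapped observations
rng = np.random.default_rng(4)
omega0, hop, phi0 = 2 * np.pi * 140 / 16000, 256, 1.2
frames = np.arange(20)
clean = phi0 + 3 * omega0 * hop * frames
noisy = np.angle(np.exp(1j * (clean + 0.3 * rng.normal(size=20))))
print(estimate_initial_phase(noisy, 3, omega0, hop), "vs", phi0)
```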

  15. Temporal gravity field modeling based on least square collocation with short-arc approach

    Science.gov (United States)

    ran, jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, and two main objectives are discussed. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of the massive computational resources it requires. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information to improve the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced based on the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed from the Geoforschungszentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  16. Underwater Acoustic Channel Estimation for Shallow Water Using the Least Square (LS) and Minimum Mean Square Error (MMSE) Methods

    Directory of Open Access Journals (Sweden)

    Mardawia M Panrereng

    2015-06-01

    Full Text Available In recent years, underwater acoustic communication systems have been developed by many researchers. The scale of the challenges involved has made researchers increasingly interested in working in this field. The underwater channel is a difficult communication medium because of attenuation, absorption, and multipath caused by the constant motion of the water. In shallow water, multipath is caused by reflections from the sea surface and the seabed. The need for fast data transmission over limited bandwidth makes Orthogonal Frequency Division Multiplexing (OFDM) a solution for high-rate communication, with modulation using Binary Phase-Shift Keying (BPSK). Channel estimation aims to characterize the impulse response of the propagation channel by transmitting pilot symbols. With channel estimation using the Least Square (LS) method, the resulting Mean Square Error (MSE) tends to be larger than with channel estimation using the Minimum Mean Square Error (MMSE) method. In terms of the Bit Error Rate (BER), the performance of LS and MMSE channel estimation shows no significant difference, differing by about one SNR step between the two estimation methods.
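
    A toy comparison of pilot-based LS and MMSE channel estimation of the kind described above is sketched below for a single OFDM symbol with BPSK pilots on every subcarrier; the channel model, the fully known channel correlation matrix and the noise level are idealized assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64                                   # OFDM subcarriers, all used as pilots here
snr_db = 10
sigma2 = 10 ** (-snr_db / 10)

# A short random multipath channel (4 taps) and its frequency response
h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8.0)
F = np.fft.fft(np.eye(4), N, axis=0)     # N x 4 partial DFT matrix
H = F @ h

# BPSK pilots and the received symbols with additive white Gaussian noise
X = rng.choice([-1.0, 1.0], N)
noise = np.sqrt(sigma2 / 2.0) * (rng.normal(size=N) + 1j * rng.normal(size=N))
Y = H * X + noise

# Least-squares estimate: simply divide out the pilots
H_ls = Y / X

# MMSE estimate: shrink the LS estimate using the (assumed known) channel
# correlation matrix; with |X| = 1 the noise covariance on H_ls is sigma2 * I.
R_hh = 0.25 * F @ F.conj().T             # i.i.d. taps of variance 1/4
H_mmse = R_hh @ np.linalg.solve(R_hh + sigma2 * np.eye(N), H_ls)

for name, Hhat in [("LS", H_ls), ("MMSE", H_mmse)]:
    print(name, "MSE:", np.mean(np.abs(Hhat - H) ** 2))
```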

  17. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    Science.gov (United States)

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  18. A negative-norm least-squares method for time-harmonic Maxwell equations

    KAUST Repository

    Copeland, Dylan M.

    2012-04-01

    This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.

  19. Comment on "Fringe projection profilometry with nonparallel illumination: a least-squares approach"

    Science.gov (United States)

    Wang, Zhaoyang; Bi, Hongbo

    2006-07-01

    We comment on the recent Letter by Chen and Quan [Opt. Lett. 30, 2101 (2005)] in which a least-squares approach was proposed to cope with the nonparallel illumination in fringe projection profilometry. It is noted that the previous mathematical derivations of the fringe pitch and carrier phase functions on the reference plane were incorrect. In addition, we suggest that the variation of carrier phase along the vertical direction should be considered.

  20. Corporate Social Responsibility and Financial Performance: A Two Least Regression Approach

    Directory of Open Access Journals (Sweden)

    Alexander Olawumi Dabor

    2017-12-01

    Full Text Available The objective of this study is to investigate the causality between corporate social responsibility and firm financial performance. The study employed two least squares regression approaches. Fifty-two firms were selected using the scientific method. The findings revealed that corporate social responsibility and firm performance in the manufacturing sector are mutually related at the 5% level. The study recommended that management of manufacturing companies in Nigeria should spend on CSR to boost profitability and corporate image.

  1. On the Dynamics of Two-Dimensional Capillary-Gravity Solitary Waves with a Linear Shear Current

    Directory of Open Access Journals (Sweden)

    Dali Guo

    2014-01-01

    Full Text Available The numerical study of the dynamics of two-dimensional capillary-gravity solitary waves on a linear shear current is presented in this paper. The numerical method is based on time-dependent conformal mapping. The stability of different kinds of solitary waves is considered. Both the depression wave and the large-amplitude elevation wave are found to be stable, while the small-amplitude elevation wave is unstable to small perturbations and finally evolves into a depression wave with tails, which is similar to the irrotational capillary-gravity waves.

  2. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca

    2013-01-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  3. Non-intrusive low-rank separated approximation of high-dimensional stochastic models

    KAUST Repository

    Doostan, Alireza

    2013-08-01

    This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.

  4. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-01-01

    with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a better, more stable solution when used

  5. Phase-unwrapping algorithm by a rounding-least-squares approach

    Science.gov (United States)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method, with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and user-free, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
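
    For context, the classical unweighted least-squares unwrapping baseline (a DCT-based Poisson solver in the style of Ghiglia and Romero) that such algorithms are usually compared against can be written compactly as below; this is not the proposed rounding-based method, only the standard global least-squares step it builds on.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    # Wrap phase values into [-pi, pi)
    return (p + np.pi) % (2.0 * np.pi) - np.pi

def unwrap_lsq(psi):
    """Unweighted least-squares phase unwrapping via a DCT Poisson solver."""
    M, N = psi.shape
    # Wrapped phase differences: the "measured" gradient field
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))
    dy[:-1, :] = wrap(np.diff(psi, axis=0))
    # Divergence of the wrapped gradient: right-hand side of the Poisson equation
    sx = np.zeros_like(dx); sx[:, 1:] = dx[:, :-1]
    sy = np.zeros_like(dy); sy[1:, :] = dy[:-1, :]
    rho = (dx - sx) + (dy - sy)
    # Solve the Neumann Poisson problem with a 2D cosine transform
    rho_hat = dctn(rho, norm="ortho")
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                    # avoid dividing by zero at the DC term
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0                  # the result is defined up to a constant
    return idctn(phi_hat, norm="ortho")

# Toy check on a wrapped quadratic phase surface
yy, xx = np.mgrid[0:128, 0:128]
true_phase = 2e-3 * ((xx - 64.0) ** 2 + (yy - 64.0) ** 2)
rec = unwrap_lsq(wrap(true_phase))
print(np.max(np.abs((rec - rec.mean()) - (true_phase - true_phase.mean()))))
```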

  6. An Introduction to Kristof's Theorem for Solving Least-Square Optimization Problems Without Calculus.

    Science.gov (United States)

    Waller, Niels

    2018-01-01

    Kristof's Theorem (Kristof, 1970) describes a matrix trace inequality that can be used to solve a wide class of least-squares optimization problems without calculus. Considering its generality, it is surprising that Kristof's Theorem is rarely used in statistics and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof's (1964, 1970) writings. In this article, I describe the underlying logic of Kristof's Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem's proof. I then show how Kristof's Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, 2017) code to perform the calculations described in the text.
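
    A well-known one-term special case of Kristof's trace inequality states that, over orthogonal matrices T, the maximum of tr(AT) equals the sum of the singular values of A and is attained at T = V U^T, where A = U S V^T is the SVD. The short numerical check below, with arbitrary sizes and seeds, illustrates this special case only; it does not reproduce the general theorem or the derivations in the article.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
U, s, Vt = np.linalg.svd(A)

T_opt = Vt.T @ U.T                         # maximizer of tr(A @ T) over orthogonal T
print("trace at maximizer    :", np.trace(A @ T_opt))
print("sum of singular values:", s.sum())

# random orthogonal matrices never exceed the bound
for _ in range(5):
    Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
    assert np.trace(A @ Q) <= s.sum() + 1e-10
```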

  7. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • Introduces a finite Fourier-series model for evaluating the monthly movement of annual average solar insolation. • Presents a forecast method for predicting this movement based on the Fourier-series model extended in the least-squares sense. • Shows that the movement is well described by a low number of harmonics, approximately a 6-term Fourier series. • Predicts the movement with the best fit using fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measurement parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporated with the least-squares method, the introduced Fourier-series model is extended to predict this movement. The extended Fourier-series forecasting model obtains its optimal Fourier coefficients, in the least-squares sense, from the previous monthly movements. The proposed method is applied to experimental data and yields satisfying results for different cities (states). The results indicate that the monthly movement of annual average solar insolation is well described by a low number of harmonics, approximately a 6-term Fourier series, and that the extended Fourier forecasting model predicts it best with fewer than 6 Fourier terms.
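
    The extended least-squares Fourier model described above amounts to fitting a truncated Fourier series by ordinary least squares on a sine/cosine design matrix and then evaluating the fitted series at future times. The sketch below shows this idea on synthetic monthly data; the period (12 months), number of harmonics, and data are illustrative assumptions, not the study's data or exact procedure.

```python
import numpy as np

def fourier_design(t, period, n_harmonics):
    """Design matrix [1, cos(k w t), sin(k w t)] for k = 1..n_harmonics."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.column_stack(cols)

# monthly series over several years (toy data standing in for insolation values)
rng = np.random.default_rng(3)
t = np.arange(60.0)                               # 5 years of monthly samples
y = (5.0 + 2.0 * np.sin(2 * np.pi * t / 12)
     + 0.5 * np.cos(4 * np.pi * t / 12)
     + 0.1 * rng.standard_normal(t.size))

X = fourier_design(t, period=12.0, n_harmonics=6)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares Fourier coefficients

# forecast the next 12 months with the fitted finite Fourier series
t_next = np.arange(60.0, 72.0)
forecast = fourier_design(t_next, 12.0, 6) @ coef
print(np.round(forecast, 2))
```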

  8. Two-step superresolution approach for surveillance face image through radial basis function-partial least squares regression and locality-induced sparse representation

    Science.gov (United States)

    Jiang, Junjun; Hu, Ruimin; Han, Zhen; Wang, Zhongyuan; Chen, Jun

    2013-10-01

    Face superresolution (SR), or face hallucination, refers to the technique of generating a high-resolution (HR) face image from a low-resolution (LR) one with the help of a set of training examples. It aims at transcending the limitations of electronic imaging systems. Applications of face SR include video surveillance, in which the individual of interest is often far from cameras. A two-step method is proposed to infer a high-quality and HR face image from a low-quality and LR observation. First, we establish the nonlinear relationship between LR face images and HR ones, according to radial basis function and partial least squares (RBF-PLS) regression, to transform the LR face into the global face space. Then, a locality-induced sparse representation (LiSR) approach is presented to enhance the local facial details once all the global faces for each LR training face are constructed. A comparison with some state-of-the-art SR methods shows the superiority of the proposed two-step approach, RBF-PLS global face regression followed by LiSR-based local patch reconstruction. Experiments also demonstrate its effectiveness under both simulated and real conditions.
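
    Only the first (global) step of the two-step approach lends itself to a compact sketch: map LR feature vectors through radial basis functions and regress the HR feature vectors on them with partial least squares. The code below does this with synthetic vectors using scikit-learn's PLSRegression; the feature sizes, kernel width, number of components, and the synthetic LR-to-HR map are assumptions, and the LiSR local-patch step is omitted entirely.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rbf_features(X, centers, gamma):
    """Radial basis function features k(x, c) = exp(-gamma * ||x - c||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
n_train, d_lr, d_hr = 200, 64, 256              # illustrative LR/HR feature sizes
X_lr = rng.standard_normal((n_train, d_lr))     # stand-ins for vectorized LR faces
W = rng.standard_normal((d_lr, d_hr))
X_hr = np.tanh(X_lr) @ W                        # synthetic nonlinear LR -> HR map

centers = X_lr[rng.choice(n_train, 50, replace=False)]
K_train = rbf_features(X_lr, centers, gamma=0.01)

pls = PLSRegression(n_components=20).fit(K_train, X_hr)        # RBF-PLS regression
x_new = rng.standard_normal((1, d_lr))
hr_global = pls.predict(rbf_features(x_new, centers, 0.01))    # global HR estimate
print(hr_global.shape)
```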

  9. A method for the selection of a functional form for a thermodynamic equation of state using weighted linear least squares stepwise regression

    Science.gov (United States)

    Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.

    1976-01-01

    A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.
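
    In the spirit of the stepwise procedure described above, the sketch below performs greedy forward selection of candidate terms, refitting a weighted linear least-squares model at each step and keeping the term that most reduces the weighted residual sum of squares. The selection criterion, weights, and toy data are illustrative; the original work uses formal significance tests to add and remove terms.

```python
import numpy as np

def weighted_lstsq(X, y, w):
    """Weighted least squares via rescaling rows by the square roots of the weights."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

def forward_stepwise(X, y, w, n_terms):
    """Greedy forward selection of the columns that most reduce the weighted RSS."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_terms):
        best, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            beta = weighted_lstsq(X[:, cols], y, w)
            rss = np.sum(w * (y - X[:, cols] @ beta) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        selected.append(best)
        remaining.remove(best)
    return selected, weighted_lstsq(X[:, selected], y, w)

# toy example: pick 3 informative columns out of 10 candidate terms
rng = np.random.default_rng(5)
X = rng.standard_normal((100, 10))
y = 2 * X[:, 1] - 3 * X[:, 4] + X[:, 7] + 0.05 * rng.standard_normal(100)
w = rng.uniform(0.5, 2.0, size=100)               # e.g. inverse-variance weights
terms, beta = forward_stepwise(X, y, w, n_terms=3)
print("selected terms:", terms)
```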

  10. Determination of carbohydrates present in Saccharomyces cerevisiae using mid-infrared spectroscopy and partial least squares regression.

    Science.gov (United States)

    Plata, Maria R; Koch, Cosima; Wechselberger, Patrick; Herwig, Christoph; Lendl, Bernhard

    2013-10-01

    A fast and simple method to control variations in the carbohydrate composition of Saccharomyces cerevisiae, baker's yeast, during fermentation was developed using mid-infrared (mid-IR) spectroscopy. The method allows for precise and accurate determinations with minimal or no sample preparation and reagent consumption, based on mid-IR spectra and partial least squares (PLS) regression. The PLS models were developed employing the results from reference analysis of the yeast cells. The reference analyses quantify the amount of trehalose, glucose, glycogen, and mannan in S. cerevisiae. The selection and optimization of sample pretreatment steps, such as the disruption of the yeast cells and the hydrolysis of mannan and glycogen to obtain monosaccharides, were carried out. Trehalose, glucose, and mannose were determined using high-performance liquid chromatography coupled with a refractive index detector, and total carbohydrates were measured using the phenol-sulfuric acid method. Linear concentration range, accuracy, precision, limit of detection (LOD), and limit of quantification (LOQ) were examined to check the reliability of the chromatographic method for each analyte.
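
    A minimal PLS calibration of the kind described above can be sketched with scikit-learn: spectra are the predictors, the reference concentration is the response, and the model is assessed by cross-validation. The synthetic spectra, number of latent variables, and the RMSECV figure of merit below are assumptions for illustration, not the paper's data or validation protocol.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# synthetic stand-ins: rows are mid-IR spectra, y is a reference concentration
rng = np.random.default_rng(6)
n_samples, n_wavenumbers = 80, 400
concentration = rng.uniform(0, 10, n_samples)                 # e.g. trehalose, g/L
peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 150) / 10) ** 2)
spectra = (concentration[:, None] * peak
           + 0.05 * rng.standard_normal((n_samples, n_wavenumbers)))

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, concentration, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - concentration) ** 2))        # cross-validated error
print("RMSECV:", rmsecv)
```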

  11. Mappings with closed range and finite dimensional linear spaces

    International Nuclear Information System (INIS)

    Iyahen, S.O.

    1984-09-01

    This paper looks at two settings, each of continuous linear mappings of linear topological spaces. In one setting, the domain space is fixed while the range space varies over a class of linear topological spaces. In the second setting, the range space is fixed while the domain space similarly varies. The interest is in when the requirement that the mappings have a closed range implies that the domain or range space is finite dimensional. Positive results are obtained for metrizable spaces. (author)

  12. DIFFERENTIATION OF AURANTII FRUCTUS IMMATURUS AND FRUCTUS PONICIRI TRIFOLIATAE IMMATURUS BY FLOW-INJECTION WITH ULTRAVIOLET SPECTROSCOPIC DETECTION AND PROTON NUCLEAR MAGNETIC RESONANCE USING PARTIAL LEAST-SQUARES DISCRIMINANT ANALYSIS.

    Science.gov (United States)

    Zhang, Mengliang; Zhao, Yang; Harrington, Peter de B; Chen, Pei

    2016-03-01

    Two simple fingerprinting methods, flow-injection coupled to ultraviolet spectroscopy and proton nuclear magnetic resonance, were used for discriminating between Aurantii fructus immaturus and Fructus poniciri trifoliatae immaturus. Both methods were combined with partial least-squares discriminant analysis. In the flow-injection method, four data representations were evaluated: total ultraviolet absorbance chromatograms; averaged ultraviolet spectra; absorbance at 193, 205, 225, and 283 nm; and absorbance at 225 and 283 nm. Prediction rates of 100% were achieved for all data representations by partial least-squares discriminant analysis using leave-one-sample-out cross-validation. The prediction rate for the proton nuclear magnetic resonance data by partial least-squares discriminant analysis with leave-one-sample-out cross-validation was also 100%. A new validation set of data was collected by flow-injection with ultraviolet spectroscopic detection two weeks later and predicted by the partial least-squares discriminant analysis models constructed from the initial data representations with no parameter changes. The classification rates were 95% with the total ultraviolet absorbance chromatogram dataset and 100% with the other three datasets. Flow-injection with ultraviolet detection and proton nuclear magnetic resonance are simple, high-throughput, and low-cost methods for discrimination studies.
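
    Partial least-squares discriminant analysis with leave-one-sample-out cross-validation, as used above, can be sketched by regressing one-hot class indicators on the fingerprint matrix and assigning each left-out sample to the class with the largest predicted response. The synthetic two-class data, class labels, and number of components below are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def plsda_loo(X, labels, n_components=3):
    """PLS-DA: regress one-hot class indicators on X, classify by largest response."""
    classes, y_idx = np.unique(labels, return_inverse=True)
    Y = np.eye(len(classes))[y_idx]                    # one-hot indicator matrix
    correct = 0
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], Y[train])
        pred = model.predict(X[test]).argmax(axis=1)
        correct += np.sum(pred == y_idx[test])
    return correct / len(labels)

# synthetic two-class "fingerprints" standing in for the UV / NMR data
rng = np.random.default_rng(7)
X = np.vstack([rng.standard_normal((20, 50)) + 0.8,
               rng.standard_normal((20, 50)) - 0.8])
labels = np.array(["AFI"] * 20 + ["FPTI"] * 20)
print("LOO prediction rate:", plsda_loo(X, labels))
```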

  13. A Bayesian least-squares support vector machine method for predicting the remaining useful life of a microwave component

    Directory of Open Access Journals (Sweden)

    Fuqiang Sun

    2017-01-01

    Rapid and accurate lifetime prediction of critical components in a system is important to maintaining the system’s reliable operation. To this end, many lifetime prediction methods have been developed to handle various failure-related data collected in different situations. Among these methods, machine learning and Bayesian updating are the most popular ones. In this article, a Bayesian least-squares support vector machine method that combines the least-squares support vector machine with Bayesian inference is developed for predicting the remaining useful life of a microwave component. A degradation model describing the change in the component’s power gain over time is developed, and point and interval remaining useful life estimates are obtained considering a predefined failure threshold. In our case study, the radial basis function neural network approach is also implemented for comparison purposes. The results indicate that the Bayesian least-squares support vector machine method is more precise and stable in predicting the remaining useful life of this type of component.
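
    The non-Bayesian core of the method above is least-squares support vector machine regression, which reduces to solving a single linear KKT system built from a kernel matrix. The sketch below fits such a model to a toy degradation trend and extrapolates it; the RBF kernel, hyperparameters, and data are assumptions, and the Bayesian updating of the hyperparameters and the interval estimates are omitted.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, gamma=1.0, C=10.0):
    """LS-SVM regression: solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(X_new, X, b, alpha, gamma=1.0):
    return rbf_kernel(X_new, X, gamma) @ alpha + b

# toy degradation-like data: power gain drifting over time with noise
rng = np.random.default_rng(8)
t = np.linspace(0, 10, 60)[:, None]
gain = 30.0 - 0.4 * t.ravel() - 0.02 * t.ravel() ** 2 + 0.1 * rng.standard_normal(60)
b, alpha = lssvm_fit(t, gain, gamma=0.5, C=100.0)
future = np.linspace(10, 15, 20)[:, None]
print(lssvm_predict(future, t, b, alpha, gamma=0.5)[:3])   # extrapolated gain values
```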

  14. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    Science.gov (United States)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
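
    The key computational idea, a least-squares fit that exploits the direct-product structure instead of forming the full design matrix, can be shown in two dimensions: minimizing ||kron(A, B) vec(C) - vec(Y)|| is equivalent to minimizing ||B C A^T - Y||_F, whose solution factorizes through the pseudoinverses of A and B. The sketch below checks this against the explicit Kronecker solve on a small toy grid; sizes and bases are arbitrary, and this is not the potfit-like algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Direct-product fit on a 2-mode grid: find C minimizing || B C A^T - Y ||_F
# without ever forming kron(A, B).
n1, n2, p1, p2 = 40, 50, 6, 7
A = np.vander(np.linspace(-1, 1, n1), p1, increasing=True)   # basis on mode 1
B = np.vander(np.linspace(-1, 1, n2), p2, increasing=True)   # basis on mode 2
Y = rng.standard_normal((n2, n1))                            # grid of values (toy)

C = np.linalg.pinv(B) @ Y @ np.linalg.pinv(A).T              # factorized LS solution

# check against the explicit (and much more expensive) Kronecker solve,
# using the identity kron(A, B) vec(C) = vec(B C A^T) with column-major vec
c_full, *_ = np.linalg.lstsq(np.kron(A, B), Y.flatten(order="F"), rcond=None)
print(np.allclose(C.flatten(order="F"), c_full))             # True
```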

  15. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    Science.gov (United States)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
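
    The point of the record above is that the RLS recursion is a Kalman filter for a constant state observed through a time-varying regressor. A minimal sketch, with an optional forgetting factor lam (lam = 1 recovers the pure Kalman/RLS case), is given below; the toy streaming-regression data are illustrative.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive-least-squares step; with lam = 1 this is a Kalman filter
    for a constant state theta observed through y = x^T theta + noise."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)             # gain (Kalman gain)
    theta = theta + (k * (y - x.T @ theta)).ravel()
    P = (P - k @ x.T @ P) / lam                 # covariance update
    return theta, P

# toy regression: estimate 3 coefficients from a stream of observations
rng = np.random.default_rng(10)
true_theta = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
P = 1e3 * np.eye(3)                             # large initial uncertainty
for _ in range(500):
    x = rng.standard_normal(3)
    y = x @ true_theta + 0.1 * rng.standard_normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)                                    # close to true_theta
```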

  16. Gravity Search Algorithm hybridized Recursive Least Square method for power system harmonic estimation

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Singh

    2017-06-01

    This paper presents a new hybrid method based on the Gravity Search Algorithm (GSA) and Recursive Least Squares (RLS), known as GSA-RLS, to solve harmonic estimation problems for time-varying power signals in the presence of different noises. GSA is based on Newton’s law of gravity and mass interactions. In the proposed method, the searcher agents are a collection of masses that interact with each other using Newton’s laws of gravity and motion. The basic GSA strategy is combined sequentially with the RLS algorithm in an adaptive way to update the unknown parameters (weights) of the harmonic signal. Simulation and practical validation are carried out by testing the proposed algorithm on real-time data obtained from a heavy paper industry. The performance of the proposed algorithm is compared with other recently reported algorithms such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Bacteria Foraging Optimization (BFO), Fuzzy-BFO (F-BFO) hybridized with Least Squares (LS), and BFO hybridized with RLS, which reveals that the proposed GSA-RLS algorithm is the best in terms of accuracy, convergence and computational time.
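
    The GSA component above searches nonlinear parameters; once the harmonic frequencies are treated as known, estimating amplitudes and phases is a linear least-squares problem on a sine/cosine basis. The sketch below shows only that linear part on a synthetic distorted power signal; the fundamental frequency, harmonic orders, and noise level are assumptions, and neither GSA nor the recursive update is included.

```python
import numpy as np

fs, f0 = 5000.0, 50.0                          # sampling rate and fundamental (Hz)
harmonics = [1, 3, 5, 7]
t = np.arange(1000) / fs

# synthetic distorted power signal with noise
rng = np.random.default_rng(11)
signal = (1.00 * np.sin(2 * np.pi * 1 * f0 * t + 0.3)
          + 0.20 * np.sin(2 * np.pi * 3 * f0 * t + 1.0)
          + 0.10 * np.sin(2 * np.pi * 5 * f0 * t - 0.5)
          + 0.05 * np.sin(2 * np.pi * 7 * f0 * t + 2.0)
          + 0.02 * rng.standard_normal(t.size))

# linear least squares on a sin/cos basis (frequencies assumed known)
X = np.column_stack([f(2 * np.pi * k * f0 * t)
                     for k in harmonics for f in (np.sin, np.cos)])
w, *_ = np.linalg.lstsq(X, signal, rcond=None)
amps = np.hypot(w[0::2], w[1::2])              # amplitude of each harmonic
phases = np.arctan2(w[1::2], w[0::2])          # phase of each harmonic
print(dict(zip(harmonics, np.round(amps, 3))))
```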

  17. Extreme Learning Machine and Moving Least Square Regression Based Solar Panel Vision Inspection

    Directory of Open Access Journals (Sweden)

    Heng Liu

    2017-01-01

    In recent years, learning-based machine intelligence has attracted considerable attention across science and engineering. Particularly in the field of automatic industry inspection, machine-learning-based vision inspection plays an increasingly important role in defect identification and feature extraction. Through learning from image samples, many features of industrial objects, such as shapes, positions, and orientation angles, can be obtained and then used to determine whether or not there is a defect. However, robustness and speed are not easily achieved with such an inspection approach. In this work, for solar panel vision inspection, we present an extreme learning machine (ELM) and moving least squares regression based approach to identify solder joint defects and detect the panel position. Firstly, histogram peaks distribution (HPD) and fractional calculus are applied for image preprocessing. Then an ELM-based identification of defective solder joints is discussed in detail. Finally, the moving least squares regression (MLSR) algorithm is introduced for solar panel position determination. Experimental results and comparisons show that the proposed ELM and MLSR based inspection method is efficient in both detection accuracy and processing speed.
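
    Moving least squares regression, used above for panel position determination, fits a low-order polynomial around each query point with distance-dependent weights and keeps the locally fitted value. The 1-D sketch below applies it to a noisy edge-like profile; the Gaussian weight, bandwidth, and data are illustrative assumptions rather than the paper's 2-D procedure.

```python
import numpy as np

def mls_predict(x0, x, y, h=0.1, degree=2):
    """Moving least squares: at each query point, fit a local weighted polynomial."""
    x0 = np.atleast_1d(x0)
    out = np.empty_like(x0, dtype=float)
    for i, xq in enumerate(x0):
        w = np.exp(-((x - xq) / h) ** 2)          # Gaussian weights centred at xq
        X = np.vander(x - xq, degree + 1, increasing=True)
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        out[i] = beta[0]                          # local fit evaluated at xq
    return out

# toy 1-D "edge profile" such as an intensity transition along a panel boundary
rng = np.random.default_rng(12)
x = np.linspace(0, 1, 200)
y = np.tanh((x - 0.5) / 0.05) + 0.05 * rng.standard_normal(x.size)
xq = np.linspace(0, 1, 50)
smooth = mls_predict(xq, x, y, h=0.05)
print(np.round(smooth[::10], 2))
```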

  18. Two-dimensional calculus

    CERN Document Server

    Osserman, Robert

    2011-01-01

    The basic component of several-variable calculus, two-dimensional calculus is vital to mastery of the broader field. This extensive treatment of the subject offers the advantage of a thorough integration of linear algebra and materials, which aids readers in the development of geometric intuition. An introductory chapter presents background information on vectors in the plane, plane curves, and functions of two variables. Subsequent chapters address differentiation, transformations, and integration. Each chapter concludes with problem sets, and answers to selected exercises appear at the end of the book.

  19. FEAST: a two-dimensional non-linear finite element code for calculating stresses

    International Nuclear Information System (INIS)

    Tayal, M.

    1986-06-01

    The computer code FEAST calculates stresses, strains, and displacements. The code is two-dimensional: either plane or axisymmetric calculations can be done. The code models elastic, plastic, creep, and thermal strains and stresses. Cracking can also be simulated. The finite element method is used to solve equations describing the following fundamental laws of mechanics: equilibrium, compatibility, constitutive relations, yield criterion, and flow rule. FEAST combines several unique features that permit large time-steps in even severely non-linear situations. The features include a special formulation permitting many finite elements to simultaneously cross the boundary from elastic to plastic behaviour; accommodation of large drops in yield strength due to changes in local temperature; and a three-step predictor-corrector method for plastic analyses. These features reduce computing costs. Comparisons against twenty analytical solutions and against experimental measurements show that predictions of FEAST are generally accurate to ± 5%.

  20. Stable Galerkin versus equal-order Galerkin least-squares elements for the Stokes flow problem

    International Nuclear Information System (INIS)

    Franca, L.P.; Frey, S.L.; Sampaio, R.

    1989-11-01

    Numerical experiments are performed for the Stokes flow problem employing a stable Galerkin method and a Galerkin/least-squares method with equal-order elements. Error estimates for the methods tested herein are reviewed. The numerical results presented attest to the good stability properties of all methods examined herein. (A.C.A.S.)