WorldWideScience

Sample records for least-square deconvolution lsd

  1. Imaging of stellar surfaces with the Occamian approach and the least-squares deconvolution technique

    Science.gov (United States)

    Järvinen, S. P.; Berdyugina, S. V.

    2010-10-01

    Context. We present in this paper a new technique for the indirect imaging of stellar surfaces (Doppler imaging, DI), when low signal-to-noise spectral data have been improved by the least-squares deconvolution (LSD) method and inverted into temperature maps with the Occamian approach. We apply this technique to both simulated and real data and investigate its applicability for different stellar rotation rates and noise levels in data. Aims: Our goal is to boost the signal of spots in spectral lines and to reduce the effect of photon noise without losing the temperature information in the lines. Methods: We simulated data from a test star, to which we added different amounts of noise, and employed the inversion technique based on the Occamian approach with and without LSD. In order to be able to infer a temperature map from LSD profiles, we applied the LSD technique for the first time to both the simulated observations and theoretical local line profiles, which remain dependent on temperature and limb angles. We also investigated how the excitation energy of individual lines affects the obtained solution by using three submasks that have lines with low, medium, and high excitation energy levels. Results: We show that our novel approach enables us to overcome the limitations of the two-temperature approximation, which was previously employed for LSD profiles, and to obtain true temperature maps with stellar atmosphere models. The resulting maps agree well with those obtained using the inversion code without LSD, provided the data are noiseless. However, using LSD is only advisable for poor signal-to-noise data. Further, we show that the Occamian technique, both with and without LSD, approaches the surface temperature distribution reasonably well for an adequate spatial resolution. Thus, the stellar rotation rate has a great influence on the result. For instance, in a slowly rotating star, closely situated spots are usually recovered blurred and unresolved, which
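The core LSD step in the abstract above reduces, in its simplest form, to an ordinary least-squares solve: the spectrum is modeled as a mask matrix (known line positions and weights) applied to one common profile. The following toy construction is ours, purely illustrative, and not the authors' code:

```python
import numpy as np

# Minimal least-squares deconvolution (LSD) sketch: model the observed
# spectrum Y as M @ Z, where the mask matrix M encodes line positions and
# weights and Z is the common (average) line profile to be recovered.

def lsd_profile(spectrum, mask_matrix):
    """Solve (M^T M) Z = M^T Y for the mean profile Z (white noise assumed)."""
    M = mask_matrix
    return np.linalg.solve(M.T @ M, M.T @ spectrum)

# Toy spectrum: two lines of different strength sharing one 5-point profile.
n_pix, n_v = 60, 5
profile = np.array([0.0, 0.3, 1.0, 0.3, 0.0])        # common line shape
M = np.zeros((n_pix, n_v))
for pos, weight in [(15, 0.8), (40, 0.5)]:            # line centers, weights
    M[pos:pos + n_v, :] += weight * np.eye(n_v)
rng = np.random.default_rng(0)
y = M @ profile + 0.01 * rng.standard_normal(n_pix)   # noisy "observation"
z = lsd_profile(y, M)                                 # recovered mean profile
```

Because both lines contribute to the same unknowns, the recovered profile has a higher effective S/N than either line alone — the mechanism the abstract exploits.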

  2. Least-squares dual characterization for ROI assessment in emission tomography

    International Nuclear Information System (INIS)

    Ben Bouallègue, F; Mariano-Goulart, D; Crouzet, J F; Dubois, A; Buvat, I

    2013-01-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Building on Louis's work on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff. (paper)

  3. Least-squares dual characterization for ROI assessment in emission tomography

    Science.gov (United States)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Building on Louis's work on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performances of LSD characterization are at least as good as those of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  4. Making the most out of least-squares migration

    KAUST Repository

    Huang, Yunsong; Dutta, Gaurav; Dai, Wei; Wang, Xin; Schuster, Gerard T.; Yu, Jianhua

    2014-01-01

    Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution.

  5. A new stabilized least-squares imaging condition

    International Nuclear Information System (INIS)

    Vivas, Flor A; Pestana, Reynam C; Ursin, Bjørn

    2009-01-01

    The classical deconvolution imaging condition consists of dividing the upgoing wave field by the downgoing wave field and summing over all frequencies and sources. The least-squares imaging condition consists of summing the cross-correlation of the upgoing and downgoing wave fields over all frequencies and sources, and dividing the result by the total energy of the downgoing wave field. This procedure is more stable than using the classical imaging condition, but it still requires stabilization in zones where the energy of the downgoing wave field is small. To stabilize the least-squares imaging condition, the energy of the downgoing wave field is replaced by its average value computed in a horizontal plane in poorly illuminated regions. Applications to the Marmousi and Sigsbee2A data sets show that the stabilized least-squares imaging condition produces better images than the least-squares and cross-correlation imaging conditions
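The imaging conditions compared above can be sketched in a 1-D toy (our variable names and random stand-in fields, not the paper's implementation): per image point, the least-squares condition divides the summed cross-correlation by the downgoing energy, and the stabilized variant substitutes an average energy where that energy is small:

```python
import numpy as np

# Sketch of the stabilized least-squares imaging condition:
# I = sum_freq U * conj(D) / sum_freq |D|^2, with the downgoing energy
# replaced by its average value wherever it falls below a threshold.

def ls_imaging_condition(U, D, threshold=0.1):
    """U, D: complex (n_freq, n_x) upgoing/downgoing wave fields."""
    num = np.sum(U * np.conj(D), axis=0).real
    den = np.sum(np.abs(D) ** 2, axis=0)
    avg = den.mean()                                  # average energy level
    den_stab = np.where(den < threshold * avg, avg, den)
    return num / den_stab

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
U = 0.3 * D                     # upgoing field for a constant reflectivity
image = ls_imaging_condition(U, D)
```

Where the downgoing energy is adequate the toy recovers the reflectivity exactly; stabilized points are merely damped instead of blowing up, which is the point of the substitution.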

  6. Making the most out of the least (squares migration)

    KAUST Repository

    Dutta, Gaurav; Huang, Yunsong; Dai, Wei; Wang, Xin; Schuster, Gerard T.

    2014-01-01

    Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution.

  7. LSD-based analysis of high-resolution stellar spectra

    Science.gov (United States)

    Tsymbal, V.; Tkachenko, A.; Van Reeth, T.

    2014-11-01

    We present a generalization of the method of least-squares deconvolution (LSD), a powerful tool for extracting high-S/N average line profiles from stellar spectra. The method is generalized by extending it towards multiprofile LSD and by introducing the possibility of correcting the line strengths in the initial mask. We illustrate the new approach with two examples: (a) the detection of asteroseismic signatures in low-S/N spectra of single stars, and (b) the disentangling of spectra of multiple stellar objects. The analysis is applied to spectra obtained with 2-m class telescopes in the course of spectroscopic ground-based support for space missions such as CoRoT and Kepler. Rather high S/N is usually required, so smaller telescopes can only compete successfully with more advanced ones when a technique is applied that substantially increases the S/N of the spectra they observe. Since LSD profiles can reconstruct what is common to all the spectral line profiles, the method should be particularly useful in practice for faint stars observed with 2-m class telescopes whose spectra show remarkable line-profile variations (LPVs).

  8. Constrained least squares regularization in PET

    International Nuclear Information System (INIS)

    Choudhury, K.R.; O'Sullivan, F.O.

    1996-01-01

    Standard reconstruction methods used in tomography produce images with undesirable negative artifacts in background and in areas of high local contrast. While sophisticated statistical reconstruction methods can be devised to correct for these artifacts, their computational implementation is excessive for routine operational use. This work describes a technique for rapid computation of approximate constrained least squares regularization estimates. The unique feature of the approach is that it involves no iterative projection or backprojection steps. This contrasts with the familiar computationally intensive algorithms based on algebraic reconstruction (ART) or expectation-maximization (EM) methods. Experimentation with the new approach for deconvolution and mixture analysis shows that the root mean square error quality of estimators based on the proposed algorithm matches and usually dominates that of more elaborate maximum likelihood, at a fraction of the computational effort

  9. Making the most out of least-squares migration

    KAUST Repository

    Huang, Yunsong

    2014-09-01

    Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution. Given a sufficiently accurate migration velocity model, LSM can mitigate many of the above problems and can produce more resolved migration images, sometimes with more than twice the spatial resolution of standard migration. However, LSM faces two challenges: The computational cost can be an order of magnitude higher than that of standard migration, and the resulting image quality can fail to improve for migration velocity errors of about 5% or more. It is possible to obtain the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.
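The linear inversion that LSM performs can be sketched generically. In the sketch below a small random matrix stands in for the Born modeling operator, so this is only a schematic of the idea, not a seismic code: least-squares migration iterates migrations of the data residual, whereas standard migration applies the adjoint once.

```python
import numpy as np

# Schematic least-squares migration: with a linear forward operator L,
# solve min_m ||L m - d||^2 by steepest descent, using L.T as the
# migration (adjoint) operator.

def lsm(L, d, n_iter=2000, step=None):
    m = np.zeros(L.shape[1])
    if step is None:
        step = 1.0 / np.linalg.norm(L, 2) ** 2   # stable step size
    for _ in range(n_iter):
        r = L @ m - d                  # data residual
        m -= step * (L.T @ r)          # migrate residual, update image
    return m

rng = np.random.default_rng(0)
L = rng.standard_normal((80, 20))      # stand-in for the modeling operator
m_true = rng.standard_normal(20)       # "reflectivity"
d = L @ m_true                         # noise-free "data"
m_standard = L.T @ d                   # "standard migration": one adjoint
m_lsm = lsm(L, d)                      # iterated least-squares image
```

The iterated image converges to the true model, while the single-adjoint image is blurred by L.T @ L — a toy version of the resolution gain (and extra cost) the abstract describes.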

  10. Making the most out of the least (squares migration)

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution. If the migration velocity model is sufficiently accurate, then LSM can mitigate many of the above problems and lead to a more resolved migration image, sometimes with twice the spatial resolution. However, there are two problems with LSM: the cost can be an order of magnitude more than standard migration and the quality of the LSM image is no better than the standard image for velocity errors of 5% or more. We now show how to get the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.

  11. Weighted conditional least-squares estimation

    International Nuclear Information System (INIS)

    Booth, J.G.

    1987-01-01

    A two-stage estimation procedure is proposed that generalizes the concept of conditional least squares. The method is instead based upon the minimization of a weighted sum of squares, where the weights are inverses of estimated conditional variance terms. Some general conditions are given under which the estimators are consistent and jointly asymptotically normal. More specific details are given for ergodic Markov processes with stationary transition probabilities. A comparison is made with the ordinary conditional least-squares estimators for two simple branching processes with immigration. The relationship between weighted conditional least squares and other, more well-known, estimators is also investigated. In particular, it is shown that in many cases estimated generalized least-squares estimators can be obtained using the weighted conditional least-squares approach. Applications to stochastic compartmental models, and linear models with nested error structures are considered
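The two-stage idea — an ordinary fit followed by a re-fit weighted by inverse conditional variances — can be illustrated with a generic heteroscedastic regression. This is our simplified illustration, not the paper's branching-process setting, and for brevity the variances are taken as known rather than estimated:

```python
import numpy as np

# Weighted least squares with inverse-variance weights (generic sketch).

def wls(X, y, w):
    """Solve (X^T W X) beta = X^T W y with W = diag(w)."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])
sigma = 0.1 * x                           # heteroscedastic noise, sd ~ x
y = 2.0 + 3.0 * x + sigma * rng.standard_normal(n)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]   # stage 1: ordinary LS
w = 1.0 / sigma ** 2                              # stage 2: inverse variances
beta_wls = wls(X, y, w)                           # weighted re-fit
```

In practice the weights would come from estimated conditional variance terms, as the abstract describes; with correct weights the weighted estimator is the more efficient of the two.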

  12. Deconvoluting double Doppler spectra

    International Nuclear Information System (INIS)

    Ho, K.F.; Beling, C.D.; Fung, S.; Chan, K.L.; Tang, H.W.

    2001-01-01

    The successful deconvolution of data from double Doppler broadening of annihilation radiation (D-DBAR) spectroscopy is a promising area of endeavour aimed at producing momentum distributions of a quality comparable to those of the angular correlation technique. The deconvolution procedure we test in the present study is the constrained generalized least-squares method. Trials with computer-simulated D-DBAR spectra are generated and deconvoluted in order to find the best form of regularizer and regularization parameter. For these trials the Neumann (reflective) boundary condition is used, which gives a single matrix operation in Fourier space. Experimental D-DBAR spectra are also subjected to the same type of deconvolution after a background subtraction has been carried out, using a symmetrized resolution function obtained from an 85Sr source with wide coincidence windows. (orig.)
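The Fourier-space least-squares deconvolution can be sketched with a plain Tikhonov regularizer and periodic (rather than the paper's reflective) boundaries; all quantities below are made up for illustration and the constraint machinery is omitted:

```python
import numpy as np

# Regularized least-squares deconvolution per frequency:
# data = h (*) f + noise  ->  F = conj(H) D / (|H|^2 + lam).

def deconvolve(data, kernel, lam=1e-6):
    H = np.fft.fft(kernel)
    D = np.fft.fft(data)
    F = np.conj(H) * D / (np.abs(H) ** 2 + lam)
    return np.fft.ifft(F).real

n = 128
x = np.arange(n)
f = np.exp(-0.5 * ((x - 64) / 8.0) ** 2)              # "true" distribution
g = np.exp(-0.5 * (np.minimum(x, n - x) / 3.0) ** 2)  # wrapped Gaussian blur
g /= g.sum()
data = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real  # circularly blurred
f_est = deconvolve(data, g)
```

The regularization parameter lam trades noise amplification against resolution, which is exactly the choice the trials in the abstract are designed to calibrate.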

  13. Fast Dating Using Least-Squares Criteria and Algorithms.

    Science.gov (United States)

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    of the most sophisticated methods, while their computing time is much faster. We apply these algorithms on a large data set comprising 1194 strains of Influenza virus from the pdm09 H1N1 Human pandemic. Again the results show that these algorithms provide a very fast alternative with results similar to those of other computer programs. These algorithms are implemented in the LSD software (least-squares dating), which can be downloaded from http://www.atgc-montpellier.fr/LSD/, along with all our data sets and detailed results. An Online Appendix, providing additional algorithm descriptions, tables, and figures can be found in the Supplementary Material available on Dryad at http://dx.doi.org/10.5061/dryad.968t3. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  14. Method of LSD profile asymmetry for estimating the center of mass velocities of pulsating stars

    Science.gov (United States)

    Britavskiy, N.; Pancino, E.; Tsymbal, V.; Romano, D.; Cacciari, C.; Clementini, C.

    2016-05-01

    We present a radial velocity analysis of 20 solar neighborhood RR Lyrae stars and 3 Population II Cepheids. High-resolution spectra were observed with either TNG/SARG or VLT/UVES over varying phases. To estimate the center-of-mass (barycentric) velocities of the program stars, we utilized two independent methods. First, the 'classic' method was employed, which is based on RR Lyrae radial velocity curve templates. Second, we present a new method that uses absorption-line profile asymmetry to determine both the pulsation and the barycentric velocities, even with a low number of high-resolution spectra and in cases where the phase of the observations is uncertain. This new method is based on a least-squares deconvolution (LSD) of the line profiles in order to analyze the line asymmetry that occurs in the spectra of pulsating stars. By applying this method to our sample stars we attain accurate measurements (±2 km s^-1) of the pulsation component of the radial velocity. This results in determination of the barycentric velocity to within 5 km s^-1 even with a low number of high-resolution spectra. A detailed investigation of LSD profile asymmetry shows the variable nature of the projection factor at different pulsation phases, which should be taken into account in the detailed spectroscopic analysis of pulsating stars.

  15. LSD. Specialized Information Service.

    Science.gov (United States)

    Do It Now Foundation, Phoenix, AZ.

    The document presents a collection of articles about LSD. The first article discusses the increasingly popular use of blotter acid (tiny squares of absorbent paper soaked in liquid LSD). Article 2 furthers this look at the newer LSD formats and describes rumors of lick-'n-stick stamps and color-transfer tattoos as examples of techniques aimed at…

  16. Least-squares model-based halftoning

    Science.gov (United States)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach
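The one-dimensional least-squares criterion can be illustrated by brute force on a tiny row. This is a sketch of the objective only — the paper uses the Viterbi algorithm and an explicit printer model, both omitted here, and the "eye" filter below is an assumed toy kernel:

```python
import numpy as np
from itertools import product

# Toy 1-D least-squares halftoning: choose the binary row b minimizing
# || v (*) (b - g) ||^2, where v is a crude low-pass "eye" filter and g
# is the gray-scale row. Brute force is feasible for 8 pixels (2^8 rows).

def halftone_ls(gray, eye):
    best, best_err = None, np.inf
    for bits in product([0.0, 1.0], repeat=gray.size):
        b = np.array(bits)
        err = np.sum(np.convolve(b - gray, eye, mode="same") ** 2)
        if err < best_err:
            best, best_err = b, err
    return best

eye = np.array([0.25, 0.5, 0.25])   # assumed visual blur kernel
gray = np.full(8, 0.5)              # uniform mid-gray row
b = halftone_ls(gray, eye)
```

For mid-gray, the minimizer is the alternating checker pattern: after the visual blur its interior error vanishes, which is the perceptual criterion the abstract optimizes.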

  17. Partial update least-square adaptive filtering

    CERN Document Server

    Xie, Bei

    2014-01-01

    Adaptive filters play an important role in the fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity in implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster a

  18. Simplified neural networks for solving linear least squares and total least squares problems in real time.

    Science.gov (United States)

    Cichocki, A; Unbehauen, R

    1994-01-01

    In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and/or the LMS (Adaline) Widrow-Hoff algorithm. The algorithms can be applied to any problem that can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
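The row-action projection (Kaczmarz) scheme that these algorithms build on can be sketched in a few lines — the classical digital iteration, not the paper's analog-network version. Each step projects the current estimate onto the hyperplane defined by one equation:

```python
import numpy as np

# Cyclic Kaczmarz iteration for a consistent linear system A x = b.

def kaczmarz(A, b, n_sweeps=200):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # Project x onto the hyperplane { x : a . x = b[i] }.
            x += (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 4))
x_true = rng.standard_normal(4)
x_est = kaczmarz(A, A @ x_true)     # consistent right-hand side
```

Each update touches a single row of A, which is what makes the scheme attractive for low-cost or parallel hardware realizations of the kind the abstract proposes.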

  19. A Weighted Least Squares Approach To Robustify Least Squares Estimates.

    Science.gov (United States)

    Lin, Chowhong; Davenport, Ernest C., Jr.

    This study developed a robust linear regression technique based on the idea of weighted least squares. In this technique, a subsample of the full data of interest is drawn, based on a measure of distance, and an initial set of regression coefficients is calculated. The rest of the data points are then taken into the subsample, one after another,…

  20. Spectrum unfolding by the least-squares methods

    International Nuclear Information System (INIS)

    Perey, F.G.

    1977-01-01

    The method of least squares is briefly reviewed, and the conditions under which it may be used are stated. From this analysis, a least-squares approach to the solution of the dosimetry neutron spectrum unfolding problem is introduced. The mathematical solution to this least-squares problem is derived from the general solution. The existence of this solution is analyzed in some detail. A χ²-test is derived for the consistency of the input data which does not require the solution to be obtained first. The fact that the problem is technically nonlinear, but should be treated in general as a linear one, is argued. Therefore, the solution should not be obtained by iteration. Two interpretations are made for the solution of the code STAY'SL, which solves this least-squares problem. The relationship of the solution to this least-squares problem to those obtained currently by other methods of solving the dosimetry neutron spectrum unfolding problem is extensively discussed. It is shown that the least-squares method does not require more input information than would be needed by current methods in order to estimate the uncertainties in their solutions. From this discussion it is concluded that the proposed least-squares method does provide the best complete solution, with uncertainties, to the problem as it is understood now. Finally, some implications of this method are mentioned regarding future work required in order to exploit its potential fully
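The least-squares adjustment described here can be sketched in the familiar linear-update form. The notation and toy numbers below are ours, not those of STAY'SL: a prior spectrum p with covariance P is adjusted against measured activities a = R @ phi with covariance V, and the same inverse yields a consistency chi-square without iterating.

```python
import numpy as np

# Schematic generalized-least-squares spectrum adjustment:
#   phi  = p + P R^T (R P R^T + V)^-1 (a - R p)
#   chi2 = (a - R p)^T (R P R^T + V)^-1 (a - R p)

def gls_adjust(p, P, R, a, V):
    S = R @ P @ R.T + V
    resid = a - R @ p
    chi2 = resid @ np.linalg.solve(S, resid)   # data consistency test
    K = P @ R.T @ np.linalg.inv(S)             # gain matrix
    phi = p + K @ resid                        # adjusted spectrum
    P_post = P - K @ R @ P                     # reduced covariance
    return phi, P_post, chi2

rng = np.random.default_rng(4)
n, m = 6, 3
phi_true = np.abs(rng.standard_normal(n)) + 1.0
R = np.abs(rng.standard_normal((m, n)))        # toy response matrix
P = 0.1 * np.eye(n)                            # prior covariance
V = 0.01 * np.eye(m)                           # measurement covariance
p = phi_true + 0.1 * rng.standard_normal(n)    # prior spectrum guess
a = R @ phi_true                               # "measured" activities
phi, P_post, chi2 = gls_adjust(p, P, R, a, V)
```

Note the solution is a single linear update, matching the abstract's argument that the problem should be treated as linear and not solved by iteration, and that the output includes uncertainties (P_post) alongside the adjusted spectrum.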

  1. Deformation analysis with Total Least Squares

    Directory of Open Access Journals (Sweden)

    M. Acar

    2006-01-01

    Deformation analysis is one of the main research fields in geodesy. The deformation analysis process comprises measurement and analysis phases. Measurements can be collected using several techniques; the output of their evaluation is mainly point positions. In the deformation analysis phase, the coordinate changes in the point positions are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation, where the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally a Least Squares (LS) technique is used for the transformation procedure. An alternative methodology is Total Least Squares (TLS), a relatively new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out by Least Squares (LS) and Total Least Squares (TLS), respectively. The data used in this study were collected by the GPS technique in a landslide area near Istanbul. The results obtained from the two approaches are compared.
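The LS/TLS contrast can be illustrated on a simple errors-in-variables fit — a generic textbook sketch via the SVD, not the paper's 3-D Helmert procedure. When the coordinates on both sides carry noise, ordinary LS is biased toward zero, while TLS (smallest right singular vector of the augmented matrix) remains consistent:

```python
import numpy as np

# Total least squares via the SVD of the augmented matrix [X | y].

def tls_fit(X, y):
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]                        # smallest singular direction
    return -v[:-1] / v[-1]            # coefficients from [X | y] v ~ 0

rng = np.random.default_rng(5)
n = 500
x_true = rng.uniform(0.0, 10.0, n)
y_true = 2.0 * x_true                 # true relation, slope 2, no intercept
X = (x_true + 2.0 * rng.standard_normal(n))[:, None]   # noisy coordinates
y_obs = y_true + 2.0 * rng.standard_normal(n)          # noisy observations
beta_ls = np.linalg.lstsq(X, y_obs, rcond=None)[0]
beta_tls = tls_fit(X, y_obs)
```

With equal noise in both coordinates, the TLS slope lands much closer to the true value than the attenuated LS slope — the motivation for TLS in geodetic transformations where both coordinate sets are measured.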

  2. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang; Schuster, Gerard T.

    2013-01-01

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual

  3. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

    The study of dynamic equations on time scales is a new area of mathematics; a time scale builds a bridge between the real numbers and the integers. Two derivatives on a time scale have been introduced, called the delta and nabla derivatives: the delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider obtaining the parameters of a regression equation over integer values through a time scale. We therefore implemented the least-squares method according to the derivative definitions of the time scale and obtained the coefficients of the model. Here there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. The occurrence of such a situation corresponds to the total of the vertical deviations between the regression equations and the observation values of the forward and backward jump operators, divided by two. We also estimated coefficients for the model using the ordinary least-squares method. As a result, we give an introduction to the least-squares method on time scales. We believe that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.

  4. Least Squares Data Fitting with Applications

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela

    As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data. ... In a number of applications, the accuracy and efficiency of the least squares fit is central, and Per Christian Hansen, Víctor Pereyra, and Godela Scherer survey modern computational methods and illustrate them in fields ranging from engineering and environmental sciences to geophysics. Anyone working ... with problems of linear and nonlinear least squares fitting will find this book invaluable as a hands-on guide, with accessible text and carefully explained problems. Included are • an overview of computational methods together with their properties and advantages • topics from statistical regression analysis ...

  5. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-11-04

    Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced from common image gathers. The advantage is that it offers stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computational cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term that penalizes the image difference between nearby angles is used to keep the prestack image consistent across all angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method for improving image quality and producing common image gathers.

  6. A least-squares computational ''tool kit''

    International Nuclear Information System (INIS)

    Smith, D.L.

    1993-04-01

    The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications

  7. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Ying-Xu [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); Mjøs, Svein Are, E-mail: svein.mjos@kj.uib.no [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); David, Fabrice P.A. [Bioinformatics and Biostatistics Core Facility, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL) and Swiss Institute of Bioinformatics (SIB), Lausanne (Switzerland); Schmid, Adrien W. [Proteomics Core Facility, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland)

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.
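The least-squares resolution step can be miniaturized as follows. The isotope patterns and amounts below are made-up numbers for two hypothetical species; the actual tool operates on full high-resolution LC-MS data and a generated compound library:

```python
import numpy as np

# Resolve an observed spectrum as a least-squares mixture of theoretical
# isotope distributions. Columns of A: isotope patterns of two hypothetical
# lipid species sampled on a common m/z grid (illustrative values only).
A = np.array([
    [0.60, 0.00],
    [0.25, 0.55],
    [0.10, 0.30],
    [0.05, 0.10],
    [0.00, 0.05],
])
true_amounts = np.array([3.0, 1.5])
spectrum = A @ true_amounts                       # noise-free mixed spectrum
amounts, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
```

Because the theoretical patterns are linearly independent, the least-squares solve attributes the observed intensities unambiguously to the candidate species — the same principle the algorithm applies across chromatograms and high-resolution spectra.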

  8. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    International Nuclear Information System (INIS)

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P.A.; Schmid, Adrien W.

    2016-01-01

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.
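The least-squares resolution step described above can be sketched as a non-negative least-squares fit of an observed spectrum against theoretical isotope distributions. The "library" columns and amounts below are invented for illustration and this is not the paper's implementation, but it shows the linear-algebra core of resolving species contributions:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical isotope-distribution library for two lipid species, sampled
# on a common m/z grid (one column per species); values are illustrative.
A = np.array([
    [1.00, 0.00],
    [0.55, 1.00],
    [0.15, 0.48],
    [0.03, 0.12],
])

true_amounts = np.array([2.0, 0.5])
spectrum = A @ true_amounts          # synthetic, noise-free observed intensities

# Non-negative least squares recovers each species' contribution.
amounts, resid = nnls(A, spectrum)
print(amounts)                       # ~ [2.0, 0.5] for this noise-free example
```

With real data the fit would be applied per scan and combined with the chromatographic dimension, but the per-spectrum problem is exactly this kind of constrained linear least squares.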

  9. Iterative least-squares solvers for the Navier-Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Bochev, P. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context, least-squares methods offer significant theoretical and practical advantages in algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.

  10. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a convolution of the source distribution with the beamformer's point-spread function, and that the point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of the acoustic sources 2 (DAMAS2), the Fourier-based non-negative least squares, and the Richardson-Lucy. This investigation examines the matter with computer simulations and measurements.

  11. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can deal with their stochastic and deterministic elements using only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
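For orientation, the basic (unweighted, single-right-hand-side) total least squares problem has a classical SVD solution. The sketch below is that textbook solution, not the Newton iteration or the multivariate weighted formulation of the paper above; the data are synthetic:

```python
import numpy as np

def tls(A, b):
    """Classical SVD-based total least squares for A x ~ b,
    where both A and b may contain errors."""
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 2))
x_true = np.array([1.5, -0.7])
b = A @ x_true                      # noise-free for a clean check
x_hat = tls(A, b)
print(x_hat)                        # recovers [1.5, -0.7] on noise-free data
```

The weighted and multivariate versions replace this closed form with an iterative solver (Lagrange multipliers or, as above, Newton's method), but the error model is the same.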

  12. Least-squares variance component estimation

    NARCIS (Netherlands)

    Teunissen, P.J.G.; Amiri-Simkooei, A.R.

    2007-01-01

    Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight

  13. Regularization by truncated total least squares

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Fierro, R.D; Golub, G.H

    1997-01-01

The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use ...

  14. Magnetic fields of HgMn stars

    DEFF Research Database (Denmark)

    Hubrig, S.; González, J. F.; Ilyin, I.

    2012-01-01

Context. The frequent presence of weak magnetic fields on the surface of spotted late-B stars with HgMn peculiarity in binary systems has been controversial during the last two decades. Recent studies of magnetic fields in these stars using the least-squares deconvolution (LSD) technique have failed to detect magnetic fields, indicating an upper limit on the longitudinal field between 8 and 15 G. In these LSD studies, the assumption was made that all spectral lines are identical in shape and can be described by a scaled mean profile. Aims. We re-analyse the available spectropolarimetric material ...

  15. Consistency of the least weighted squares under heteroscedasticity

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2011-01-01

    Roč. 2011, č. 47 (2011), s. 179-206 ISSN 0023-5954 Grant - others:GA UK(CZ) GA402/09/055 Institutional research plan: CEZ:AV0Z10750506 Keywords : Regression * Consistency * The least weighted squares * Heteroscedasticity Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/visek-consistency of the least weighted squares under heteroscedasticity.pdf

  16. Comparison of least-squares vs. maximum likelihood estimation for standard spectrum technique of β−γ coincidence spectrum analysis

    International Nuclear Information System (INIS)

    Lowrey, Justin D.; Biegalski, Steven R.F.

    2012-01-01

    The spectrum deconvolution analysis tool (SDAT) software code was written and tested at The University of Texas at Austin utilizing the standard spectrum technique to determine activity levels of Xe-131m, Xe-133m, Xe-133, and Xe-135 in β–γ coincidence spectra. SDAT was originally written to utilize the method of least-squares to calculate the activity of each radionuclide component in the spectrum. Recently, maximum likelihood estimation was also incorporated into the SDAT tool. This is a robust statistical technique to determine the parameters that maximize the Poisson distribution likelihood function of the sample data. In this case it is used to parameterize the activity level of each of the radioxenon components in the spectra. A new test dataset was constructed utilizing Xe-131m placed on a Xe-133 background to compare the robustness of the least-squares and maximum likelihood estimation methods for low counting statistics data. The Xe-131m spectra were collected independently from the Xe-133 spectra and added to generate the spectra in the test dataset. The true independent counts of Xe-131m and Xe-133 are known, as they were calculated before the spectra were added together. Spectra with both high and low counting statistics are analyzed. Studies are also performed by analyzing only the 30 keV X-ray region of the β–γ coincidence spectra. Results show that maximum likelihood estimation slightly outperforms least-squares for low counting statistics data.
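The comparison above can be illustrated with a toy two-component version of the standard spectrum technique: fit component activities first by ordinary least squares, then by maximizing the Poisson likelihood. The standard spectra, activities and three-channel layout below are invented; this is not the SDAT code:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical standard spectra (counts per unit activity), one column
# per radionuclide component, three detector channels.
S = np.array([[40.0,  2.0],
              [10.0, 30.0],
              [ 1.0, 12.0]])
a_true = np.array([3.0, 1.5])
y = S @ a_true                     # idealized, noise-free observed counts

# Least-squares estimate of the activities.
a_ls, *_ = np.linalg.lstsq(S, y, rcond=None)

# Poisson maximum likelihood: minimize the negative log-likelihood
# sum(mu - y*log(mu)) with mu = S @ a.
def nll(a):
    mu = S @ a
    return np.sum(mu - y * np.log(mu))

a_ml = minimize(nll, x0=np.ones(2), bounds=[(1e-9, None)] * 2).x
print(a_ls, a_ml)                  # both near [3.0, 1.5] on noise-free data
```

On noise-free data the two estimators agree; the paper's point is that with low counting statistics (where Gaussian assumptions behind least squares break down) the Poisson maximum-likelihood estimate is the more faithful model.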

  17. note: The least square nucleolus is a general nucleolus

    OpenAIRE

    Elisenda Molina; Juan Tejada

    2000-01-01

    This short note proves that the least square nucleolus (Ruiz et al. (1996)) and the lexicographical solution (Sakawa and Nishizaki (1994)) select the same imputation in each game with nonempty imputation set. As a consequence the least square nucleolus is a general nucleolus (Maschler et al. (1992)).

  18. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-03-01

    This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic

  19. Application of least-squares method to decay heat evaluation

    International Nuclear Information System (INIS)

    Schmittroth, F.; Schenter, R.E.

    1976-01-01

    Generalized least-squares methods are applied to decay-heat experiments and summation calculations to arrive at evaluated values and uncertainties for the fission-product decay-heat from the thermal fission of 235 U. Emphasis is placed on a proper treatment of both statistical and correlated uncertainties in the least-squares method
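The "proper treatment of correlated uncertainties" above is what distinguishes generalized least squares from the ordinary kind: the full measurement covariance enters both the estimate and the evaluated parameter uncertainties. A minimal sketch with invented numbers (not ²³⁵U decay-heat data):

```python
import numpy as np

# Toy generalized least-squares (GLS) fit: straight line through three
# correlated measurements. All numbers are illustrative.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])           # design matrix (intercept + slope)
y = np.array([1.0, 2.1, 2.9])        # measurements
cov = np.array([[0.10, 0.02, 0.00],
                [0.02, 0.10, 0.02],
                [0.00, 0.02, 0.10]]) # correlated measurement covariance

W = np.linalg.inv(cov)               # weight matrix = inverse covariance
N = A.T @ W @ A
x_hat = np.linalg.solve(N, A.T @ W @ y)   # GLS parameter estimate
cov_x = np.linalg.inv(N)                  # evaluated parameter covariance
print(x_hat)
```

Setting the off-diagonal covariance terms to zero recovers ordinary weighted least squares; keeping them is what lets evaluated uncertainties reflect the correlations between experiments.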

  20. Validated ultra-performance liquid chromatography-tandem mass spectrometry method for analyzing LSD, iso-LSD, nor-LSD, and O-H-LSD in blood and urine.

    Science.gov (United States)

    Chung, Angela; Hudson, John; McKay, Gordon

    2009-06-01

    The Royal Canadian Mounted Police Forensic Science and Identification Services was looking for a confirmatory method for lysergic acid diethylamide (LSD). As a result, an ultra-performance liquid chromatography-tandem mass spectrometry method was validated for the confirmation and quantitation of LSD, iso-LSD, N-demethyl-LSD (nor-LSD), and 2-oxo-3-hydroxy-LSD (O-H-LSD). Relative retention time and ion ratios were used as identification parameters. Limits of detection (LOD) in blood were 5 pg/mL for LSD and iso-LSD and 10 pg/mL for nor-LSD and O-H-LSD. In urine, the LOD was 10 pg/mL for all analytes. Limits of quantitation (LOQ) in blood and urine were 20 pg/mL for LSD and iso-LSD and 50 pg/mL for nor-LSD and O-H-LSD. The method was linear, accurate, and precise from 10 to 2000 pg/mL in blood and 20 to 2000 pg/mL in urine for LSD and iso-LSD and from 20 to 2000 pg/mL in blood and 50 to 2000 pg/mL in urine for nor-LSD and O-H-LSD with a coefficient of determination (R(2)) > or = 0.99. The method was applied to blinded biological control samples and biological samples taken from a suspected LSD user. This is the first reported detection of O-H-LSD in blood from a suspected LSD user.

  1. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, Dongliang; Zhan, Ge; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated

  2. Classification of Ultrasonic NDE Signals Using the Expectation Maximization (EM) and Least Mean Square (LMS) Algorithms

    International Nuclear Information System (INIS)

    Kim, Dae Won

    2005-01-01

Ultrasonic inspection methods are widely used for detecting flaws in materials. The signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular methods involves the extraction of an appropriate set of features followed by the use of a neural network for the classification of the signals in the feature space. This paper describes an alternative approach which uses the least mean square (LMS) method and the expectation maximization (EM) algorithm with model-based deconvolution, employed for classifying nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different, yet they must be discriminated to prevent serious failures such as water contamination or explosion. Model-based deconvolution is described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximization (SAGE) algorithm in conjunction with the Newton-Raphson method, which uses the Hessian parameter, resulting in fast convergence when estimating the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and show reasonable performance.

  3. VizieR Online Data Catalog: Polarisation of a sample of late M dwarfs (Morin+, 2010)

    Science.gov (United States)

    Morin, J.; Donati, J.-F.; Petit, P.; Delfosse, X.; Forveille, T.; Jardine, M. M.

    2010-06-01

    We have collected 174 pairs of Stokes I (unpolarised) and V (circularly polarised) spectra with ESPaDOnS at CFHT (2003ASPC..307...41D) between June 2006 and July 2009. All spectra were reduced using the Libre-Esprit pipeline, and the mean I and V line profiles were extracted using the Least-Squares Deconvolution (LSD) technique (Donati et al., 1997MNRAS.291..658D). (3 data files).
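LSD as cited here (Donati et al. 1997) amounts to a linear least-squares fit of one common mean profile to all lines in a mask. The toy below, with an invented line list and noise-free data, shows only the linear-algebra core; real LSD works in velocity space and applies inverse-variance weights:

```python
import numpy as np

# Toy least-squares deconvolution (LSD): the spectrum is modelled as a line
# mask (known positions and depths) convolved with one common mean profile Z,
# and Z is recovered by linear least squares.
n_pix, n_v = 400, 21
line_pos = [80, 150, 210, 300, 350]        # line positions in pixels (invented)
depths = [0.9, 0.5, 0.7, 0.4, 0.8]         # mask line depths (invented)

M = np.zeros((n_pix, n_v))                 # design matrix: spectrum = M @ Z
for p, w in zip(line_pos, depths):
    for k in range(n_v):
        M[p - n_v // 2 + k, k] += w

v = np.arange(n_v) - n_v // 2
z_true = -0.3 * np.exp(-0.5 * (v / 3.0) ** 2)   # mean profile (a Gaussian dip)
spectrum = M @ z_true                           # noise-free synthetic spectrum

z_lsd, *_ = np.linalg.lstsq(M, spectrum, rcond=None)
print(np.allclose(z_lsd, z_true))               # True for noise-free data
```

The gain of the real technique comes from the same mechanism: thousands of weak lines share one profile, so the least-squares solution averages down the photon noise.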

  4. LSL: a logarithmic least-squares adjustment method

    International Nuclear Information System (INIS)

    Stallmann, F.W.

    1982-01-01

To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding was constructed some time ago and tentatively named LSL

  5. New approach to breast cancer CAD using partial least squares and kernel-partial least squares

    Science.gov (United States)

    Land, Walker H., Jr.; Heine, John; Embrechts, Mark; Smith, Tom; Choma, Robert; Wong, Lut

    2005-04-01

Breast cancer is second only to lung cancer as a tumor-related cause of death in women. Currently, the method of choice for the early detection of breast cancer is mammography. While sensitive to the detection of breast cancer, its positive predictive value (PPV) is low, resulting in biopsies that are only 15-34% likely to reveal malignancy. This paper explores the use of two novel approaches, Partial Least Squares (PLS) and Kernel-PLS (K-PLS), for the diagnosis of breast cancer. The approach is based on optimization of the partial least squares (PLS) algorithm for linear regression and the K-PLS algorithm for non-linear regression. Preliminary results show that both the PLS and K-PLS paradigms achieved results comparable with three separate support vector learning machines (SVLMs), where these SVLMs were known to have been trained to a global minimum. That is, the average performance of the three separate SVLMs was Az = 0.9167927, with an average partial Az (Az90) = 0.5684283. These results compare favorably with the K-PLS paradigm, which obtained an Az = 0.907 and partial Az = 0.6123. The PLS paradigm provided comparable results. Secondly, both the K-PLS and PLS paradigms outperformed the ANN in that the Az index improved by about 14% (Az ~ 0.907 compared to the ANN Az of ~ 0.8). The "Press R squared" values for the PLS and K-PLS machine learning algorithms were 0.89 and 0.9, respectively, which is in good agreement with the other MOP values.

  6. Sparse least-squares reverse time migration using seislets

    KAUST Repository

    Dutta, Gaurav

    2015-08-19

    We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.

  7. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
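The quantization idea above is easy to see in code: an input near an existing center updates that center's coefficient instead of growing the network. This sketch uses a 1-D function-estimation toy with invented kernel width, step size and quantization size; it is an illustration of the idea, not the paper's implementation:

```python
import numpy as np

def gauss(x, c, sigma=0.5):
    # Gaussian kernel; sigma is an illustrative choice.
    return np.exp(-(x - c) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(2)
xs = rng.uniform(-3, 3, 1000)
ds = np.sin(xs)                    # target function values

centers, alphas = [], []
eta, eps_q = 0.5, 0.1              # step size, quantization size (invented)
for xi, di in zip(xs, ds):
    y = sum(a * gauss(xi, c) for a, c in zip(alphas, centers))
    e = di - y                     # prediction error
    if centers:
        j = int(np.argmin([abs(xi - c) for c in centers]))
        if abs(xi - centers[j]) <= eps_q:
            alphas[j] += eta * e   # quantization: update the nearest center
            continue
    centers.append(xi)             # otherwise grow the network
    alphas.append(eta * e)

print(len(centers))                # far fewer centers than the 1000 samples
```

Because new centers must be more than eps_q from all existing ones, the network size is bounded by the covering number of the input range rather than by the number of samples.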

  8. Group-wise partial least square regression

    NARCIS (Netherlands)

    Camacho, José; Saccenti, Edoardo

    2018-01-01

    This paper introduces the group-wise partial least squares (GPLS) regression. GPLS is a new sparse PLS technique where the sparsity structure is defined in terms of groups of correlated variables, similarly to what is done in the related group-wise principal component analysis. These groups are

  9. Optimistic semi-supervised least squares classification

    DEFF Research Database (Denmark)

    Krijthe, Jesse H.; Loog, Marco

    2017-01-01

    The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples. In this work we study a simple self-learning approach to semi-supervised learning applied to the least squares classifier. We show that a soft-label and a hard-label variant ...

  10. Application of Least-Squares Spectral Element Methods to Polynomial Chaos

    NARCIS (Netherlands)

    Vos, P.E.J.; Gerritsma, M.I.

    2006-01-01

This paper describes the application of the Least-Squares Spectral Element Method to polynomial chaos to solve stochastic partial differential equations. The method is described in detail and a comparison is presented between the least-squares projection and the conventional Galerkin projection.

  11. FC LSEI WNNLS, Least-Square Fitting Algorithms Using B Splines

    International Nuclear Information System (INIS)

    Hanson, R.J.; Haskell, K.H.

    1989-01-01

1 - Description of problem or function: FC allows a user to fit discrete data, in a weighted least-squares sense, using piece-wise polynomial functions represented by B-Splines on a given set of knots. In addition to the least-squares fitting of the data, equality, inequality, and periodic constraints at a discrete, user-specified set of points can be imposed on the fitted curve or its derivatives. The subprograms LSEI and WNNLS solve the linearly-constrained least-squares problem. LSEI solves the class of problem with general inequality constraints, and, if requested, obtains a covariance matrix of the solution parameters. WNNLS solves the class of problem with non-negativity constraints. It is anticipated that most users will find LSEI suitable for their needs; however, users with inequalities that are single bounds on variables may wish to use WNNLS. 2 - Method of solution: The discrete data are fit by a linear combination of piece-wise polynomial curves which leads to a linear least-squares system of algebraic equations. Additional information is expressed as a discrete set of linear inequality and equality constraints on the fitted curve which leads to a linearly-constrained least-squares system of algebraic equations. The solution of this system is the main computational problem solved
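The unconstrained core of what FC does, a least-squares B-spline fit on a fixed knot set, is available directly in SciPy; the constraint handling (LSEI/WNNLS-style equality and inequality constraints) is not shown here, and the data and knots below are invented:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)
k = 3                                           # cubic B-splines
# Interior knots plus repeated boundary knots, as make_lsq_spline expects.
t = np.concatenate([[x[0]] * (k + 1),
                    np.linspace(1.0, 5.0, 5),
                    [x[-1]] * (k + 1)])
spl = make_lsq_spline(x, y, t, k=k)             # weighted LS fit (weights default to 1)
print(np.max(np.abs(spl(x) - y)))               # small residual for a smooth target
```

The knot vector plays the same role as FC's user-supplied knots: it fixes the piecewise-polynomial basis, and the fit reduces to a linear least-squares system in the B-spline coefficients.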

  12. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the regularized Non-Negative Least Squares (NNLS) method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give superior results to the regularized least-squares algorithm, with significantly less computer processing time
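The Richardson-Lucy iteration referred to above multiplies the current estimate by the back-projected ratio of observed to predicted data. A minimal 1-D sketch with an invented Gaussian resolution function and two invented sharp lines (not CDBS data):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=200):
    # Multiplicative RL updates; the flat positive start and the 1e-12 floor
    # guard against division by zero.
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

truth = np.zeros(100)
truth[40], truth[60] = 5.0, 3.0                     # two sharp lines (invented)
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
psf /= psf.sum()                                    # normalized resolution function
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(restored[40], restored[60])                   # peaks sharpen back toward the lines
```

The multiplicative form keeps the estimate non-negative, which is part of why RL behaves well on count-like spectra.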

  13. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei

    2012-06-15

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist in a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  14. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei; Fowler, Paul J.; Schuster, Gerard T.

    2012-01-01

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist in a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  15. Multi-source least-squares migration of marine data

    KAUST Repository

    Wang, Xin

    2012-11-04

    Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its IO cost is significantly decreased.

  16. Support-Vector-based Least Squares for learning non-linear dynamics

    NARCIS (Netherlands)

    de Kruif, B.J.; de Vries, Theodorus J.A.

    2002-01-01

    A function approximator is introduced that is based on least squares support vector machines (LSSVM) and on least squares (LS). The potential indicators for the LS method are chosen as the kernel functions of all the training samples similar to LSSVM. By selecting these as indicator functions the

  17. Least-squares methods involving the H{sup -1} inner product

    Energy Technology Data Exchange (ETDEWEB)

    Pasciak, J.

    1996-12-31

Least-squares methods have been shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H{sup -1} norm. Such norms give rise to improved convergence estimates and better approximation for problems with low-regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H{sup -1} inner product.

  18. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei; Schuster, Gerard T.

    2012-01-01

convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce computation cost, linear phase shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A

  19. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity

  20. Least squares reverse time migration of controlled order multiples

    Science.gov (United States)

    Liu, Y.

    2016-12-01

Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders to minimize the difference between Born-modeling-predicted multiples and specific-order multiples from observed data, in order to attenuate the crosstalk. This method is denoted the least-squares reverse time migration of controlled order multiples (LSRTM-CM). Our numerical examples demonstrate that LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Natural Science Foundation of China (Grant Nos. 41430321 and 41374138).

  1. Total least squares for anomalous change detection

    Science.gov (United States)

    Theiler, James; Matsekh, Anna M.

    2010-04-01

    A family of subtraction-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and special cases of it are equivalent to canonical correlation analysis and optimized covariance equalization. What whitened TLSQ offers is a generalization of these algorithms with the potential for better performance.
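    The core contrast between ordinary and total least squares can be sketched as follows (a generic single-response illustration, not the paper's stacked-space detector; all names and values are ours). TLSQ admits errors in both sets of measurements, so the fitted relation is the direction of least variance of the stacked data, obtained from an SVD:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x_true = np.array([2.0, -1.0])
A_clean = rng.normal(size=(n, 2))
b_clean = A_clean @ x_true
# Errors-in-variables: noise on BOTH A and b, the setting TLSQ assumes.
A = A_clean + 0.05 * rng.normal(size=A_clean.shape)
b = b_clean + 0.05 * rng.normal(size=n)

# Ordinary least squares treats A as exact.
x_ols, *_ = np.linalg.lstsq(A, b, rcond=None)

# Total least squares: smallest right singular vector of the stacked [A | b].
_, _, Vt = np.linalg.svd(np.hstack([A, b[:, None]]))
v = Vt[-1]                      # direction of least variance
x_tls = -v[:-1] / v[-1]

# Residual of each row w.r.t. the fitted relation; in a change-detection
# setting, the rows (pixels) with the largest residuals would be flagged.
resid = np.abs(np.hstack([A, b[:, None]]) @ v)
```

    The anomaly score is the magnitude of each row's residual against the fitted subspace, mirroring the "largest residuals" criterion in the abstract.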

  2. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, in recovering both the blurring operator and the true image, makes the problem difficult to handle. We show that, by imposing appropriate constraints on the variables and with well-chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.

  3. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    Science.gov (United States)

    Olivares, A.; Górriz, J. M.; Ramírez, J.; Olivares, G.

    2011-02-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed.

  4. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    International Nuclear Information System (INIS)

    Olivares, A; Olivares, G; Górriz, J M; Ramírez, J

    2011-01-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed
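    As a generic illustration of the recursive least-squares building block compared in these studies (a textbook RLS system identifier, not the authors' sensor-fusion architecture; the system coefficients and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([0.6, -0.3, 0.1])        # unknown FIR system to identify
x = rng.normal(size=2000)                   # input signal
# Desired signal: filtered input plus measurement noise.
d = np.convolve(x, w_true, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))

def rls(x, d, order=3, lam=0.99, delta=100.0):
    """Recursive least squares: exponentially weighted LS solved recursively."""
    w = np.zeros(order)
    P = delta * np.eye(order)               # inverse correlation estimate
    u = np.zeros(order)
    for n in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[n]                         # regressor [x_n, x_{n-1}, ...]
        k = P @ u / (lam + u @ P @ u)       # gain vector
        e = d[n] - w @ u                    # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

w_hat = rls(x, d)
```

    The forgetting factor lam < 1 is what lets RLS track a slowly drifting bias, the property exploited for dynamic-bias compensation in the abstract.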

  5. Solution of a Complex Least Squares Problem with Constrained Phase.

    Science.gov (United States)

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
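    The phase-constrained problem can be illustrated by brute force (this grid search is our illustration, not the paper's direct method): writing x = exp(i*phi)*r with r real reduces the problem, for each fixed phase phi, to an ordinary real least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))
# Construct b from a ground-truth x whose elements share one phase.
r_true, phi_true = np.array([1.0, 2.0, 0.5]), 0.7
b = A @ (r_true * np.exp(1j * phi_true)) \
    + 0.01 * (rng.normal(size=8) + 1j * rng.normal(size=8))

def phase_constrained_ls(A, b, n_grid=3600):
    """For each candidate phase phi, x = exp(i*phi)*r with r real turns
    min ||A x - b|| into a real least-squares problem in r; keep the best phi.
    Phases in [0, pi) suffice since phi + pi corresponds to negating r."""
    best = (np.inf, None, None)
    for phi in np.linspace(0.0, np.pi, n_grid, endpoint=False):
        Ar = A * np.exp(1j * phi)
        M = np.vstack([Ar.real, Ar.imag])   # stack real and imaginary parts
        y = np.concatenate([b.real, b.imag])
        r, *_ = np.linalg.lstsq(M, y, rcond=None)
        res = np.linalg.norm(A @ (r * np.exp(1j * phi)) - b)
        if res < best[0]:
            best = (res, phi, r)
    return best

res, phi, r = phase_constrained_ls(A, b)
```

    The paper's contribution is a direct (non-iterative) route to this same minimizer, avoiding the grid search used here for clarity.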

  6. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

    Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  7. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  8. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals.

  9. Least Squares Methods for Equidistant Tree Reconstruction

    OpenAIRE

    Fahey, Conor; Hosten, Serkan; Krieger, Nathan; Timpe, Leslie

    2008-01-01

    UPGMA is a heuristic method identifying the least squares equidistant phylogenetic tree given empirical distance data among $n$ taxa. We study this classic algorithm using the geometry of the space of all equidistant trees with $n$ leaves, also known as the Bergman complex of the graphical matroid for the complete graph $K_n$. We show that UPGMA performs an orthogonal projection of the data onto a maximal cell of the Bergman complex. We also show that the equidistant tree with the least (Eucl...

  10. Optimally weighted least-squares steganalysis

    Science.gov (United States)

    Ker, Andrew D.

    2007-02-01

    Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.

  11. Multilevel solvers of first-order system least-squares for Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Chen-Yao G. [National Chung Cheng Univ., Chia-Yi (Taiwan, Province of China)

    1996-12-31

    Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined to be the sum of the L{sup 2}-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

  12. The possibilities of least-squares migration of internally scattered seismic energy

    KAUST Repository

    Aldawood, Ali

    2015-05-26

    Approximate images of the earth’s subsurface structures are usually obtained by migrating surface seismic data. Least-squares migration, under the single-scattering assumption, is used as an iterative linearized inversion scheme to suppress migration artifacts, deconvolve the source signature, mitigate the acquisition fingerprint, and enhance the spatial resolution of migrated images. The problem with least-squares migration of primaries, however, is that it may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults or salt flanks. To alleviate this problem, we adopted a linearized inversion framework to migrate internally scattered energy. We apply the least-squares migration of first-order internal multiples to image subsurface vertical fault planes. Tests on synthetic data demonstrated the ability of the proposed method to resolve vertical fault planes, which are poorly illuminated by the least-squares migration of primaries only. The proposed scheme is robust in the presence of white Gaussian observational noise and in the case of imaging the fault planes using inaccurate migration velocities. Our results suggested that the proposed least-squares imaging, under the double-scattering assumption, still retrieved the vertical fault planes when imaging the scattered data despite a slight defocusing of these events due to the presence of noise or velocity errors.

  13. The possibilities of least-squares migration of internally scattered seismic energy

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad; Turkiyyah, George; Alkhalifah, Tariq Ali

    2015-01-01

    Approximate images of the earth’s subsurface structures are usually obtained by migrating surface seismic data. Least-squares migration, under the single-scattering assumption, is used as an iterative linearized inversion scheme to suppress migration artifacts, deconvolve the source signature, mitigate the acquisition fingerprint, and enhance the spatial resolution of migrated images. The problem with least-squares migration of primaries, however, is that it may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults or salt flanks. To alleviate this problem, we adopted a linearized inversion framework to migrate internally scattered energy. We apply the least-squares migration of first-order internal multiples to image subsurface vertical fault planes. Tests on synthetic data demonstrated the ability of the proposed method to resolve vertical fault planes, which are poorly illuminated by the least-squares migration of primaries only. The proposed scheme is robust in the presence of white Gaussian observational noise and in the case of imaging the fault planes using inaccurate migration velocities. Our results suggested that the proposed least-squares imaging, under the double-scattering assumption, still retrieved the vertical fault planes when imaging the scattered data despite a slight defocusing of these events due to the presence of noise or velocity errors.

  14. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai; Schuster, Gerard T.

    2016-01-01

    Elastic least-squares reverse time migration (LSRTM) is used to invert synthetic particle-velocity data and crosswell pressure field data. The migration images consist of both the P- and S-velocity perturbation images. Numerical tests on synthetic and field data illustrate the advantages of elastic LSRTM over elastic reverse time migration (RTM). In addition, elastic LSRTM images are better focused and have better reflector continuity than do the acoustic LSRTM images.

  15. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai

    2016-09-06

    Elastic least-squares reverse time migration (LSRTM) is used to invert synthetic particle-velocity data and crosswell pressure field data. The migration images consist of both the P- and S-velocity perturbation images. Numerical tests on synthetic and field data illustrate the advantages of elastic LSRTM over elastic reverse time migration (RTM). In addition, elastic LSRTM images are better focused and have better reflector continuity than do the acoustic LSRTM images.

  16. 8th International Conference on Partial Least Squares and Related Methods

    CERN Document Server

    Vinzi, Vincenzo; Russolillo, Giorgio; Saporta, Gilbert; Trinchera, Laura

    2016-01-01

    This volume presents state-of-the-art theories, new developments, and important applications of Partial Least Squares (PLS) methods. The text begins with the invited communications of current leaders in the field, who cover the history of PLS, an overview of methodological issues, and recent advances in regression and multi-block approaches. The rest of the volume comprises selected, reviewed contributions from the 8th International Conference on Partial Least Squares and Related Methods held in Paris, France, on 26-28 May 2014. They are organized in four coherent sections: 1) new developments in genomics and brain imaging, 2) new and alternative methods for multi-table and path analysis, 3) advances in partial least squares regression (PLSR), and 4) partial least squares path modeling (PLS-PM) breakthroughs and applications. PLS methods are very versatile methods that are now used in areas as diverse as engineering, life science, sociology, psychology, brain imaging, genomics, and business among both academics ...

  17. Detection of metabolites of lysergic acid diethylamide (LSD) in human urine specimens: 2-oxo-3-hydroxy-LSD, a prevalent metabolite of LSD.

    Science.gov (United States)

    Poch, G K; Klette, K L; Hallare, D A; Manglicmot, M G; Czarny, R J; McWhorter, L K; Anderson, C J

    1999-03-05

    Seventy-four urine specimens previously found to contain lysergic acid diethylamide (LSD) by gas chromatography-mass spectrometry (GC-MS) were analyzed by a new procedure for the LSD metabolite 2-oxo-3-hydroxy-LSD (O-H-LSD) using a Finnigan LC-MS-MS system. This procedure proved to be less complex and faster to perform, and provided cleaner chromatographic characteristics, than the method currently utilized by the Navy Drug Screening Laboratories for the extraction of LSD from urine by GC-MS. All of the specimens used in the study screened positive for LSD by radioimmunoassay (Roche Abuscreen). Analysis by GC-MS revealed detectable amounts of LSD in all of the specimens. In addition, iso-lysergic acid diethylamide (iso-LSD), a byproduct of LSD synthesis, was quantitated in 64 of the specimens. Utilizing the new LC-MS-MS method, low levels of N-desmethyl-LSD (nor-LSD), another identified LSD metabolite, were detected in some of the specimens. However, all 74 specimens contained O-H-LSD at significantly higher concentrations than LSD, iso-LSD, or nor-LSD alone. The O-H-LSD concentration ranged from 732 to 112,831 pg/ml (mean 16,340 pg/ml) by quantification with an internal standard. The ratio of O-H-LSD to LSD ranged from 1.1 to 778.1 (mean 42.9). The presence of O-H-LSD at substantially higher concentrations than LSD suggests that analysis for O-H-LSD as the target analyte by LC-MS-MS will provide a much longer window of detection for LSD use than analysis of the parent compound, LSD.

  18. Small-kernel constrained-least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-10-01

    Constrained least-squares image restoration, first proposed by Hunt twenty years ago, is a linear image restoration technique in which the restoration filter is derived by maximizing the smoothness of the restored image while satisfying a fidelity constraint related to how well the restored image matches the actual data. The traditional derivation and implementation of the constrained least-squares restoration filter is based on an incomplete discrete/discrete system model which does not account for the effects of spatial sampling and image reconstruction. For many imaging systems, these effects are significant and should not be ignored. In a recent paper Park demonstrated that a derivation of the Wiener filter based on the incomplete discrete/discrete model can be extended to a more comprehensive end-to-end, continuous/discrete/continuous model. In a similar way, in this paper, we show that a derivation of the constrained least-squares filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model and, by so doing, an improved restoration filter is derived. Building on previous work by Reichenbach and Park for the Wiener filter, we also show that this improved constrained least-squares restoration filter can be efficiently implemented as a small-kernel convolution in the spatial domain.
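    The classical discrete/discrete CLS filter that this work extends can be sketched in the frequency domain (a generic illustration with a hand-picked regularization weight gamma and a toy scene; the paper's continuous/discrete/continuous variant and small-kernel implementation are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                                 # toy scene: a bright square

# Gaussian blur PSF, applied circularly via the FFT.
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
g = np.real(np.fft.ifft2(H * np.fft.fft2(img))) + 0.002 * rng.normal(size=img.shape)

# Discrete Laplacian as the smoothness (regularizing) operator.
lap = np.zeros((64, 64))
lap[0, 0] = 4
lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
C = np.fft.fft2(lap)

# CLS restoration filter: conj(H) / (|H|^2 + gamma |C|^2), Hunt's form.
gamma = 1e-2                                            # fidelity/smoothness trade-off
F_hat = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2) * np.fft.fft2(g)
restored = np.real(np.fft.ifft2(F_hat))
```

    The smoothness term gamma*|C|^2 caps noise amplification where |H| is small; the paper's point is that deriving this filter from an end-to-end system model, and truncating it to a small spatial kernel, improves on the naive discrete/discrete version.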

  19. Solving linear inequalities in a least squares sense

    Energy Technology Data Exchange (ETDEWEB)

    Bramley, R.; Winnicka, B. [Indiana Univ., Bloomington, IN (United States)

    1994-12-31

    Let A ∈ R^(m×n) be an arbitrary real matrix, and let b ∈ R^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ||Ax − b||, where ||·|| refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ||(Ax − b)_+||, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
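    Since the objective 0.5*||(Ax − b)_+||^2 is convex and differentiable with gradient A^T (Ax − b)_+, a minimal sketch (our illustration; the abstract does not specify the authors' algorithm) is plain gradient descent on the violated inequalities:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(20, 5))
x_feas = rng.normal(size=5)
b = A @ x_feas + np.abs(rng.normal(size=20))   # feasible by construction: A x_feas <= b

def ls_inequalities(A, b, iters=20000, lr=None):
    """Minimize 0.5 * ||(A x - b)_+||^2 by gradient descent; the positive
    part (.)_+ zeroes satisfied inequalities, so only violations contribute."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 1/L for this objective
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        viol = np.maximum(A @ x - b, 0.0)       # positive part of the residual
        if viol.max() == 0.0:                   # all inequalities satisfied
            break
        x -= lr * (A.T @ viol)
    return x

x = ls_inequalities(A, b)
```

    When the system is consistent, as here, the minimum value is zero and the iterate settles on a feasible point; for inconsistent systems the same iteration converges to a least-violation solution.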

  20. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    Science.gov (United States)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model for the growth of the microalga Botryococcus braunii sp. by the least-squares method. The Monod equation is nonlinear, but it can be transformed into linear form and solved by least-squares linear regression. Alternatively, the Gauss-Newton method solves the nonlinear least-squares problem directly, obtaining the Monod parameter values by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalga Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter values obtained by the nonlinear least-squares method are more accurate than those from the linear least-squares method, since its SSE is smaller.
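    The two routes can be sketched as follows (a generic illustration with made-up data and parameter values; the linearization shown is the Lineweaver-Burk form 1/mu = (Ks/mu_max)(1/S) + 1/mu_max, one common way to linearize Monod kinetics):

```python
import numpy as np

rng = np.random.default_rng(5)
mu_max, Ks = 1.2, 0.5                          # ground truth (illustrative values)
S = np.linspace(0.05, 3.0, 30)                 # substrate concentrations
mu = mu_max * S / (Ks + S) * (1 + 0.03 * rng.normal(size=S.size))  # noisy growth rates

# --- Linearized fit: regress 1/mu on 1/S, then back out the parameters.
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# --- Nonlinear fit: Gauss-Newton on r(theta) = mu - model(S; theta),
#     warm-started from the linear estimate.
theta = np.array([mu_max_lin, Ks_lin])
for _ in range(50):
    m, k = theta
    model = m * S / (k + S)
    r = mu - model
    J = np.column_stack([S / (k + S),          # d model / d mu_max
                         -m * S / (k + S) ** 2])  # d model / d Ks
    theta += np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step

mu_max_nl, Ks_nl = theta
```

    The linear fit minimizes error in the transformed (reciprocal) space, which distorts the noise; the Gauss-Newton fit minimizes the SSE in the original space, which is why its SSE is never larger.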

  1. Global Search Strategies for Solving Multilinear Least-Squares Problems

    Directory of Open Access Journals (Sweden)

    Mats Andersson

    2012-04-01

    The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. The MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows for moving from one local minimizer to a better one. The efficiency of this strategy is illustrated by the results of numerical experiments performed for some problems related to the design of filter networks.

  2. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded

  3. Fruit fly optimization based least square support vector regression for blind image restoration

    Science.gov (United States)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restorations require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for many real image processing tasks, and recovery must then proceed as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Because the PSF and noise energy differ from case to case, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. The least squares support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR, whose two parameters are optimized through FOA. The fitness function of FOA is calculated from the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method can obtain a satisfactory restoration effect. Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up the restoration rate and

  4. Bubble-Enriched Least-Squares Finite Element Method for Transient Advective Transport

    Directory of Open Access Journals (Sweden)

    Rajeev Kumar

    2008-01-01

    The least-squares finite element method (LSFEM) has received increasing attention in recent years due to advantages over the Galerkin finite element method (GFEM). The method leads to a minimization problem in the L2-norm and thus results in a symmetric and positive definite matrix, even for first-order differential equations. In addition, the method contains an implicit streamline upwinding mechanism that prevents the appearance of oscillations that are characteristic of the Galerkin method. Thus, the least-squares approach does not require explicit stabilization and the associated stabilization parameters required by the Galerkin method. A new approach, the bubble-enriched least-squares finite element method (BELSFEM), is presented and compared with the classical LSFEM. The BELSFEM requires a space-time element formulation and employs bubble functions in space and time to increase the accuracy of the finite element solution without degrading computational performance. We apply the BELSFEM and classical least-squares finite element methods to benchmark problems for 1D and 2D linear transport. The accuracy and performance are compared.

  5. Multi-source least-squares migration of marine data

    KAUST Repository

    Wang, Xin; Schuster, Gerard T.

    2012-01-01

    Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes

  6. Weighted least-squares criteria for electrical impedance tomography

    International Nuclear Information System (INIS)

    Kallman, J.S.; Berryman, J.G.

    1992-01-01

    Methods are developed for design of electrical impedance tomographic reconstruction algorithms with specified properties. Assuming a starting model with constant conductivity or some other specified background distribution, an algorithm with the following properties is found: (1) the optimum constant for the starting model is determined automatically; (2) the weighted least-squares error between the predicted and measured power dissipation data is as small as possible; (3) the variance of the reconstructed conductivity from the starting model is minimized; (4) potential distributions with the largest volume integral of gradient squared have the least influence on the reconstructed conductivity, and therefore distributions most likely to be corrupted by contact impedance effects are deemphasized; (5) cells that dissipate the most power during the current injection tests tend to deviate least from the background value. The resulting algorithm maps the reconstruction problem into a vector space where the contribution to the inversion from the background conductivity remains invariant, while the optimum contributions in orthogonal directions are found. For a starting model with nonconstant conductivity, the reconstruction algorithm has analogous properties

  7. 3D plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin; Dai, Wei; Huang, Yunsong; Schuster, Gerard T.

    2014-01-01

    A three dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitation of current 3D marine acquisition

  8. Decision-Directed Recursive Least Squares MIMO Channels Tracking

    Directory of Open Access Journals (Sweden)

    Karami Ebrahim

    2006-01-01

    A new approach for joint data estimation and channel tracking for multiple-input multiple-output (MIMO) channels is proposed based on the decision-directed recursive least squares (DD-RLS) algorithm. The RLS algorithm is commonly used for equalization, and its application to channel estimation is a novel idea. In this paper, after defining the weighted least squares cost function, it is minimized and eventually the RLS MIMO channel estimation algorithm is derived. The proposed algorithm combined with the decision-directed algorithm (DDA) is then extended for blind mode operation. In terms of computational complexity versus the number of transmitter and receiver antennas, the proposed algorithm is very efficient. Through various simulations, the mean square error (MSE) of the tracking of the proposed algorithm for different joint detection algorithms is compared with the Kalman filtering approach, which is one of the best-known channel tracking algorithms. It is shown that the performance of the proposed algorithm is very close to the Kalman estimator, and that in blind mode operation it presents better performance with much lower complexity, without requiring knowledge of the channel model.

  9. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2-regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
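    The ℓ2-regularized least-squares problem to which the BPR solution converges has the familiar Tikhonov closed form below (a generic sketch with an arbitrary, hand-picked regularizer on a synthetic ill-conditioned system; the BPR selection rule itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(6)
# Ill-conditioned A: geometrically decaying singular values.
U, _ = np.linalg.qr(rng.normal(size=(50, 50)))
V, _ = np.linalg.qr(rng.normal(size=(10, 10)))
s = 10.0 ** -(np.arange(10) / 2)               # singular values 1 ... ~3e-5
A = U[:, :10] @ np.diag(s) @ V.T
x_true = rng.normal(size=10)
b = A @ x_true + 1e-4 * rng.normal(size=50)

def tikhonov(A, b, lam):
    """Closed-form l2-regularized LS: (A^T A + lam*I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

x_plain = tikhonov(A, b, 0.0)   # ordinary LS: noise amplified along small s_i
x_reg = tikhonov(A, b, 1e-6)    # small regularizer stabilizes the solution
```

    Any positive regularizer shrinks the solution norm at the cost of a larger data residual; choosing the trade-off point well is exactly the parameter selection problem the paper addresses.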

  10. Least squares analysis of fission neutron standard fields

    International Nuclear Information System (INIS)

    Griffin, P.J.; Williams, J.G.

    1997-01-01

    A least squares analysis of fission neutron standard fields has been performed using the latest dosimetry cross sections. Discrepant nuclear data are identified, and adjusted spectra for the {sup 252}Cf spontaneous fission and {sup 235}U thermal fission fields are presented.

  11. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Manteuffel, T.A. [Univ. of Colorado, Boulder, CO (United States)]; Ressel, K.J. [Interdisciplinary Project Center for Supercomputing, Zurich (Switzerland)]; Starkes, G. [Universitaet Karlsruhe (Germany)]

    1996-12-31

    The least-squares finite element framework for the neutron transport equation is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term using trilinear finite elements on a uniform tessellation into cubes.

  12. LC-ESI-MS/MS on an ion trap for the determination of LSD, iso-LSD, nor-LSD and 2-oxo-3-hydroxy-LSD in blood, urine and vitreous humor.

    Science.gov (United States)

    Favretto, Donata; Frison, Giampietro; Maietti, Sergio; Ferrara, Santo Davide

    2007-07-01

    A method has been developed for the simultaneous determination of lysergic acid diethylamide (LSD), its epimer iso-LSD, and its main metabolites nor-LSD and 2-oxo-3-hydroxy-LSD in blood, urine, and, for the first time, vitreous humor samples. The method is based on liquid/liquid extraction and liquid chromatography-multiple mass spectrometry detection in an ion trap mass spectrometer under positive ion electrospray ionization conditions. Five microliters of sample are injected and the analysis time is 12 min. The method is specific, selective, and sensitive, and achieves limits of quantification of 20 pg/ml for both LSD and nor-LSD in blood, urine, and vitreous humor. No significant interfering substance or ion suppression was identified for LSD, iso-LSD, and nor-LSD. The interassay reproducibilities for LSD at 20 pg/ml and 2 ng/ml in urine were 8.3 and 5.6%, respectively. Within-run precision using control samples at 20 pg/ml and 2 ng/ml was 6.9 and 3.9%. Mean recoveries of two concentrations spiked into drug-free samples were in the range 60-107% in blood, 50-105% in urine, and 65-105% in vitreous humor. The method was successfully applied to the forensic determination of postmortem LSD levels in the biological fluids of a multidrug abuser; for the first time, LSD could be detected in vitreous humor.

  13. Positive solution of non-square fully Fuzzy linear system of equation in general form using least square method

    Directory of Open Access Journals (Sweden)

    Reza Ezzati

    2014-08-01

    Full Text Available In this paper, we propose the least square method for computing the positive solution of a non-square fully fuzzy linear system. To this end, we use Kaffman's arithmetic operations on fuzzy numbers \cite{17}. We first consider the existence of an exact solution using the pseudoinverse; if the positivity condition is not satisfied, we compute the core of the fuzzy vector and then obtain the right and left spreads of the positive fuzzy vector by introducing a constrained least squares problem. Using our proposed method, a non-square fully fuzzy linear system of equations always has a solution. Finally, we illustrate the efficiency of the proposed method by solving some numerical examples.
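
    For the crisp (non-fuzzy) core of such a non-square system, the pseudoinverse least-squares solution referred to above can be sketched with toy numbers:

```python
import numpy as np

# Overdetermined (non-square) crisp system A x = b; the least-squares
# solution is given by the Moore-Penrose pseudoinverse.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

x_pinv = np.linalg.pinv(A) @ b                    # pseudoinverse solution
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # equivalent LS solve
```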

  14. Metabolism of lysergic acid diethylamide (LSD) to 2-oxo-3-hydroxy LSD (O-H-LSD) in human liver microsomes and cryopreserved human hepatocytes.

    Science.gov (United States)

    Klette, K L; Anderson, C J; Poch, G K; Nimrod, A C; ElSohly, M A

    2000-10-01

    The metabolism of lysergic acid diethylamide (LSD) to 2-oxo-3-hydroxy lysergic acid diethylamide (O-H-LSD) was investigated in liver microsomes and cryopreserved hepatocytes from humans. Previous studies have demonstrated that O-H-LSD is present in human urine at concentrations 16-43 times greater than LSD, the parent compound. Additionally, these studies have determined that O-H-LSD is not generated during the specimen extraction and analytical processes or due to parent compound degradation in aqueous urine samples. However, these studies have not been conclusive in demonstrating that O-H-LSD is uniquely produced during in vivo metabolism. Phase I drug metabolism was investigated by incubating human liver microsomes and cryopreserved human hepatocytes with LSD. The reaction was quenched at various time points, and the aliquots were extracted using liquid partitioning and analyzed by liquid chromatography-mass spectrometry. O-H-LSD was positively identified in all human liver microsomal and human hepatocyte fractions incubated with LSD. In addition, O-H-LSD was not detected in any microsomal or hepatocyte fraction not treated with LSD, nor in LSD specimens devoid of microsomes or hepatocytes. This study provides definitive evidence that O-H-LSD is produced as a metabolic product following incubation of human liver microsomes and hepatocytes with LSD.

  15. Moving least squares simulation of free surface flows

    DEFF Research Database (Denmark)

    Felter, C. L.; Walther, Jens Honore; Henriksen, Christian

    2014-01-01

    In this paper a Moving Least Squares method (MLS) for the simulation of 2D free surface flows is presented. The emphasis is on the governing equations, the boundary conditions, and the numerical implementation. The compressible viscous isothermal Navier–Stokes equations are taken as the starting ...

  16. Multivariate calibration with least-squares support vector machines.

    NARCIS (Netherlands)

    Thissen, U.M.J.; Ustun, B.; Melssen, W.J.; Buydens, L.M.C.

    2004-01-01

    This paper proposes the use of least-squares support vector machines (LS-SVMs) as a relatively new nonlinear multivariate calibration method, capable of dealing with ill-posed problems. LS-SVMs are an extension of "traditional" SVMs that have been introduced recently in the field of chemistry and

  17. Linearized least-square imaging of internally scattered data

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Turkiyyah, George M.; Zuberi, M. A H; Alkhalifah, Tariq Ali

    2014-01-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-square inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-square inversion of double-scattered data helped delineate that reflector with minimal acquisition fingerprint.
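
    Linearized least-squares inversion of this kind iteratively minimizes ||Lm - d||^2 for a modeling operator L. A generic conjugate-gradient least-squares (CGLS) loop, applied to a toy blurring operator standing in for the migration operator (not the authors' code), looks like:

```python
import numpy as np

def cgls(A, b, n_iter=100, tol=1e-12):
    """Conjugate-gradient least squares for min ||A x - b||^2."""
    x = np.zeros(A.shape[1])
    r = b.copy()              # data residual
    s = A.T @ r               # gradient direction
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol:   # converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Toy "modeling operator": a tridiagonal blur smearing a single reflector.
n = 30
A = np.eye(n) + 0.3 * np.diag(np.ones(n - 1), 1) + 0.3 * np.diag(np.ones(n - 1), -1)
m_true = np.zeros(n)
m_true[10] = 1.0              # one reflector
d = A @ m_true                # "observed" data
m_hat = cgls(A, d)            # least-squares image recovers the sharp reflector
```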

  18. Source allocation by least-squares hydrocarbon fingerprint matching

    Energy Technology Data Exchange (ETDEWEB)

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment using the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.
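
    Constrained least-squares allocation of this kind can be sketched with a non-negativity constraint on the mixing proportions; the fingerprint matrix and mixture below are invented for illustration, not taken from the study:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical end-member source fingerprints (e.g. seep oil, coal, shale);
# rows: analyte concentrations. Mixing proportions must be non-negative.
sources = np.array([[10.0, 1.0, 4.0],
                    [ 2.0, 8.0, 3.0],
                    [ 1.0, 2.0, 9.0],
                    [ 5.0, 5.0, 1.0]])
true_mix = np.array([0.6, 0.1, 0.3])
sample = sources @ true_mix          # synthetic field sample

# Non-negative least squares recovers the source proportions.
mix, resid = nnls(sources, sample)
```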

  19. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in images and motion trajectory in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than a full-search, quarter-pel block matching algorithm (BMA), without the need to transmit any overhead.
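
    The backward-adaptive idea can be sketched in one dimension: prediction coefficients are fitted by least squares over a causal training window, so a decoder can recompute them without side information. The signal and parameters here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy slowly varying signal (stand-in for a slow-motion video trace).
t = np.arange(300)
x = np.sin(0.05 * t) + 0.01 * rng.standard_normal(300)

order, win = 4, 50   # predictor order and causal training window length
errs = []
for k in range(win + order, len(x)):
    # Build the LS system from the causal window only (backward adaptation:
    # the decoder can form the same system, so no coefficients are sent).
    rows = np.array([x[i - order:i][::-1] for i in range(k - win, k)])
    targets = x[k - win:k]
    a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    pred = a @ x[k - order:k][::-1]      # predict the current sample
    errs.append(x[k] - pred)             # prediction residual to be coded

mse = np.mean(np.square(errs))
```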

  20. Efficient Model Selection for Sparse Least-Square SVMs

    Directory of Open Access Journals (Sweden)

    Xiao-Lei Xia

    2013-01-01

    Full Text Available The Forward Least-Squares Approximation (FLSA) SVM is a newly emerged Least-Squares SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independence of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely the Reduced FLSA-SVM (RFLSA-SVM), which has reduced computational complexity and memory requirements. The strategy of "contexts inheritance" is introduced to improve the efficiency of tuning the regularization parameter for both the FLSA-SVM and the RFLSA-SVM algorithms. Experimental results on benchmark datasets showed that, compared to the SVM and a number of its variants, the RFLSA-SVM solutions contain a reduced number of support vectors, while maintaining competitive generalization abilities. With respect to the time cost of tuning the regularization parameter, the RFLSA-SVM algorithm was empirically demonstrated to be the fastest among the FLSA-SVM, LS-SVM, and SVM algorithms.

  1. Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu

    2017-01-15

    This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.

  2. Least-mean-square spatial filter for IR sensors.

    Science.gov (United States)

    Takken, E H; Friedman, D; Milton, A F; Nitzberg, R

    1979-12-15

    A new least-mean-square filter is defined for signal-detection problems. The technique is proposed for scanning IR surveillance systems operating in poorly characterized but primarily low-frequency clutter interference. Near-optimal detection of point-source targets is predicted both for continuous-time and sampled-data systems.
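
    A minimal least-mean-square clutter whitener in the spirit of the abstract: each sample is predicted from its recent past and the prediction error is output, which suppresses slowly varying (low-frequency) clutter. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
clutter = np.sin(0.01 * np.arange(n))          # slowly varying clutter
x = clutter + 0.05 * rng.standard_normal(n)    # sensor samples

mu, taps = 0.01, 8    # LMS step size and filter length
w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    u = x[k - taps:k]          # past samples predict the correlated clutter
    e = x[k] - w @ u           # prediction error = whitened output
    w += mu * u * e            # LMS weight update
    out[k] = e                 # clutter-suppressed output
```

    After convergence the output variance is far below the clutter-dominated input variance, leaving point-target-like innovations to be detected.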

  3. The crux of the method: assumptions in ordinary least squares and logistic regression.

    Science.gov (United States)

    Long, Rebecca G

    2008-10-01

    Logistic regression has increasingly become the tool of choice when analyzing data with a binary dependent variable. While resources relating to the technique are widely available, clear discussions of why logistic regression should be used in place of ordinary least squares regression are difficult to find. The current paper compares and contrasts the assumptions of ordinary least squares with those of logistic regression and explains why logistic regression's looser assumptions make it adept at handling violations of the more important assumptions in ordinary least squares.
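
    The contrast is easy to demonstrate numerically: an OLS line fitted to a binary outcome produces fitted "probabilities" outside [0, 1], while a logistic fit cannot. The data are synthetic, and the logistic fit uses plain gradient ascent for self-containment:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-4, 4, 200)
p = 1.0 / (1.0 + np.exp(-2.0 * x))           # true success probability
y = (rng.random(200) < p).astype(float)      # binary dependent variable

# Ordinary least squares: fit y = b0 + b1*x directly to the 0/1 outcome.
X = np.column_stack([np.ones_like(x), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
ols_pred = X @ beta_ols                      # can leave [0, 1]

# Logistic regression by gradient ascent on the log-likelihood.
beta = np.zeros(2)
for _ in range(5000):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - mu) / len(y)
logit_pred = 1.0 / (1.0 + np.exp(-X @ beta)) # always inside (0, 1)
```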

  4. Development and validation of an LC-MS/MS method to quantify lysergic acid diethylamide (LSD), iso-LSD, 2-oxo-3-hydroxy-LSD, and nor-LSD and identify novel metabolites in plasma samples in a controlled clinical trial.

    Science.gov (United States)

    Dolder, Patrick C; Liechti, Matthias E; Rentsch, Katharina M

    2018-02-01

    Lysergic acid diethylamide (LSD) is a widely used recreational drug. The aim of this study was to develop and validate a liquid chromatography tandem mass spectrometry (LC-MS/MS) method for the quantification of LSD, iso-LSD, 2-oxo-3-hydroxy LSD (O-H-LSD), and nor-LSD in plasma samples from 24 healthy subjects after controlled administration of 100 μg LSD in a clinical trial. In addition, metabolites that have been recently described in in vitro studies, including lysergic acid monoethylamide (LAE), lysergic acid ethyl-2-hydroxyethylamide (LEO), 2-oxo-LSD, trioxylated-LSD, and 13/14-hydroxy-LSD, should be identified. Separation of LSD and its metabolites was achieved on a reversed phase chromatography column after turbulent-flow online extraction. For the identification and quantification, a triple-stage quadrupole LC-MS/MS instrument was used. The validation data showed slight matrix effects for LSD, iso-LSD, O-H-LSD, or nor-LSD. Mean intraday and interday accuracy and precision were 105%/4.81% and 105%/4.35% for LSD, 98.7%/5.75% and 99.4%/7.21% for iso-LSD, 106%/4.54% and 99.4%/7.21% for O-H-LSD, and 107%/5.82% and 102%/5.88% for nor-LSD, respectively. The limit of quantification was 0.05 ng/mL for LSD, iso-LSD, and nor-LSD and 0.1 ng/mL for O-H-LSD. The limit of detection was 0.01 ng/mL for all compounds. The method described herein was accurate, precise, and the calibration range within the range of expected plasma concentrations. LSD was quantified in the plasma samples of the 24 subjects of the clinical trial, whereas iso-LSD, O-H-LSD, nor-LSD, LAE, LEO, 13/14-hydroxy-LSD, and 2-oxo-LSD could only sporadically be detected but were too low for quantification. © 2017 Wiley Periodicals, Inc.

  5. Plane-wave least-squares reverse-time migration

    KAUST Repository

    Dai, Wei

    2013-06-03

    A plane-wave least-squares reverse-time migration (LSRTM) is formulated with a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of plane-wave prestack LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer stable convergence even when the migration velocity has bulk errors of up to 5%; (2) to significantly reduce computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. Unlike phase-shift encoding with random time shifts applied to each shot gather, plane-wave encoding can be effectively applied to data with a marine streamer geometry. (3) Plane-wave prestack LSRTM can provide higher-quality images than standard reverse-time migration. Numerical tests on the Marmousi2 model and a marine field data set are performed to illustrate the benefits of plane-wave LSRTM. Empirical results show that LSRTM in the plane-wave domain, compared to standard reverse-time migration, produces images efficiently with fewer artifacts and better spatial resolution. Moreover, the prestack image ensemble accommodates more unknowns, making it more robust than conventional least-squares migration in the presence of migration velocity errors. © 2013 Society of Exploration Geophysicists.

  6. Is LSD toxic?

    Science.gov (United States)

    Nichols, David E; Grob, Charles S

    2018-03-01

    LSD (lysergic acid diethylamide) was discovered almost 75 years ago, and has been the object of episodic controversy since then. While initially explored as an adjunctive psychiatric treatment, its recreational use by the general public has persisted and on occasion has been associated with adverse outcomes, particularly when the drug is taken under suboptimal conditions. LSD's potential to cause psychological disturbance (bad trips) has long been understood, and has rarely been associated with accidental deaths and suicide. From a physiological perspective, however, LSD is known to be non-toxic and medically safe when taken at standard dosages (50-200μg). The scientific literature, along with recent media reports, has unfortunately implicated "LSD toxicity" in five cases of sudden death. On close examination, however, two of these fatalities were associated with ingestion of massive overdoses; two occurred in individuals with psychological agitation after taking standard doses of LSD who were then placed in maximal physical restraint positions (hogtied) by police, following which they suffered fatal cardiovascular collapse; and one was a case of extreme hyperthermia leading to death that was likely caused by a drug substituted for LSD with strong effects on central nervous system temperature regulation (e.g. 25I-NBOMe). Given the renewed interest in the therapeutic potential of LSD and other psychedelic drugs, it is important that an accurate understanding be established of the true causes of such fatalities that had been erroneously attributed to LSD toxicity, including massive overdoses, excessive physical restraints, and psychoactive drugs other than LSD. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Nonlinear least-squares fitting for PIXE spectra

    International Nuclear Information System (INIS)

    Benamar, M.A.; Tchantchane, A.; Benouali, N.; Azbouche, A.; Tobbeche, S.

    1992-10-01

    An interactive computer program for the analysis of PIXE spectra is described. The fitting procedure consists of computing a function which approximates the experimental data. A nonlinear least-squares fit is used to determine the parameters of the fit. The program takes into account the low-energy tail and the escape peaks.
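
    A minimal sketch of nonlinear least-squares peak fitting of this kind, using a single Gaussian peak; a real PIXE model would add the low-energy tail and escape-peak terms mentioned above, and all values here are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, amp, mu, sigma):
    """Single X-ray peak model (tail and escape-peak terms omitted)."""
    return amp * np.exp(-0.5 * ((E - mu) / sigma) ** 2)

rng = np.random.default_rng(6)
E = np.linspace(0.0, 10.0, 400)                     # energy axis, keV
counts = gaussian(E, 100.0, 6.4, 0.15) + rng.poisson(2.0, E.size)

# Nonlinear least-squares fit of the peak parameters from an initial guess.
popt, pcov = curve_fit(gaussian, E, counts, p0=[80.0, 6.0, 0.2])
```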

  8. Least Squares Problems with Absolute Quadratic Constraints

    Directory of Open Access Journals (Sweden)

    R. Schöne

    2012-01-01

    Full Text Available This paper analyzes linear least squares problems with absolute quadratic constraints. We develop a generalized theory following Bookstein's conic-fitting and Fitzgibbon's direct ellipse-specific fitting. Under simple preconditions, it can be shown that a minimum always exists and can be determined by a generalized eigenvalue problem. This problem is numerically reduced to an eigenvalue problem by multiplications of Givens' rotations. Finally, four applications of this approach are presented.

  9. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    Science.gov (United States)

    Liu, L. H.; Tan, J. Y.

    2007-02-01

    A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. Except for the collocation points which are used to construct the trial functions, a number of auxiliary points are also adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with the other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media.

  10. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    International Nuclear Information System (INIS)

    Liu, L.H.; Tan, J.Y.

    2007-01-01

    A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. Except for the collocation points which are used to construct the trial functions, a number of auxiliary points are also adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with the other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media
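
    The moving least-squares approximation used to construct the trial functions can be sketched in one dimension: at each evaluation point a low-order polynomial is fitted by weighted least squares, with weights centered on that point. The node layout, weight function, and radius below are illustrative:

```python
import numpy as np

def mls_eval(x_eval, nodes, values, radius=0.07):
    """Moving least squares: local weighted quadratic fit around each point."""
    out = np.empty_like(x_eval)
    for j, xq in enumerate(x_eval):
        d = nodes - xq
        w = np.exp(-(d / radius) ** 2)                 # Gaussian weight at xq
        B = np.column_stack([np.ones_like(d), d, d ** 2])
        BTW = B.T * w                                  # weighted basis
        coeffs = np.linalg.solve(BTW @ B, BTW @ values)
        out[j] = coeffs[0]                             # quadratic basis at d = 0
    return out

nodes = np.linspace(0.0, 1.0, 25)        # node set (uniform stand-in)
values = np.sin(2 * np.pi * nodes)       # nodal data
x_eval = np.linspace(0.1, 0.9, 41)
approx = mls_eval(x_eval, nodes, values)
```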

  11. Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model

    Directory of Open Access Journals (Sweden)

    WANG Bin

    2015-06-01

    Full Text Available Based on the Newton-Gauss iterative algorithm for weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model utilizes the standardized residuals to construct the weight factor function, and a robust estimate of the square root of the variance component is obtained by introducing the median method. Therefore, robustness in both the observation and structure spaces can be achieved simultaneously. To obtain the standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression of the cofactor matrix of the WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiment indicates that the model proposed in this paper exhibits satisfactory robustness for the gross-error handling problem of WTLS; the obtained parameters show no significant difference from the results of WTLS without gross errors. It is therefore superior to a robust weighted total least squares model constructed directly from residuals.

  12. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA

  13. A least-squares computational "tool kit". Nuclear data and measurements series

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.

    1993-04-01

    The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.
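
    The generalized least-squares solution at the heart of such a tool kit, with a known data covariance matrix, can be sketched as follows (synthetic data; the covariance structure is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
# Linear model d = A p + noise, with a known diagonal data covariance C.
A = np.column_stack([np.ones(20), np.linspace(0.0, 1.0, 20)])
p_true = np.array([1.0, 2.0])
sigmas = np.linspace(0.01, 0.1, 20)          # heteroscedastic uncertainties
C = np.diag(sigmas ** 2)
d = A @ p_true + sigmas * rng.standard_normal(20)

# Generalized least squares: p = (A^T C^-1 A)^-1 A^T C^-1 d,
# with parameter covariance matrix (A^T C^-1 A)^-1.
Cinv = np.linalg.inv(C)
cov_p = np.linalg.inv(A.T @ Cinv @ A)
p_hat = cov_p @ A.T @ Cinv @ d
```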

  14. Magnetic field and velocity of early M dwarfs (Morin+, 2008)

    Science.gov (United States)

    Morin, J.; Donati, J.-F.; Petit, P.; Delfosse, X.; Forveille, T.; Albert, L.; Auriere, M.; Cabanac, R.; Dintrans, B.; Fares, R.; Gastine, T.; Jardine, M. M.; Lignieres, F.; Paletou, F.; Ramirez Velez, J. C.; Theado, S.

    2010-06-01

    We have collected 107 pairs of Stokes I (unpolarised) and V (circularly polarised) spectra with the twin instruments ESPaDOnS at CFHT (2003ASPC..307...41D) and NARVAL at TBL between January 2006 and February 2008. All spectra were reduced using the Libre-Esprit pipeline, and the mean I and V line profiles were extracted using the Least-Squares Deconvolution (LSD) technique (1997MNRAS.291..658D). The star V374 Peg (2008MNRAS.384...77M) is also included in the discussion and in table1.dat. (2 data files).
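
    Least-Squares Deconvolution itself models the observed spectrum as a line mask convolved with one common profile Z and solves for Z by weighted least squares. A toy version with a hypothetical mask of five non-overlapping lines (all positions, depths, and noise levels invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
# Velocity grid for the common profile Z and a synthetic "true" profile.
n_v, n_pix = 21, 400
v = np.linspace(-10.0, 10.0, n_v)
z_true = -0.5 * np.exp(-0.5 * (v / 3.0) ** 2)   # one absorption profile shape

# Line mask: (pixel position, relative depth) of assumed spectral lines.
lines = [(60, 0.9), (140, 0.5), (210, 0.7), (300, 1.0), (350, 0.4)]
M = np.zeros((n_pix, n_v))
for pos, depth in lines:
    M[pos:pos + n_v, :] += depth * np.eye(n_v)  # each line is a scaled copy of Z

sigma = 0.05
spectrum = 1.0 + M @ z_true + sigma * rng.standard_normal(n_pix)

# Weighted least-squares solution: Z = (M^T S^2 M)^-1 M^T S^2 (Y - 1),
# where S^2 holds the inverse variances of the spectrum pixels.
S2 = np.eye(n_pix) / sigma ** 2
z_lsd = np.linalg.solve(M.T @ S2 @ M, M.T @ S2 @ (spectrum - 1.0))
```

    The recovered mean profile z_lsd has a higher signal-to-noise ratio than any single line, which is the point of applying LSD before polarimetric or Doppler-imaging analysis.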

  15. Neutrino astronomy at Mont Blanc: from LSD to LSD-2

    International Nuclear Information System (INIS)

    Saavedra, O.; Aglietta, M.; Badino, G.

    1988-01-01

    In this paper we present the upgrading of the LSD experiment, presently running in the Mont Blanc Laboratory. The data recorded during the period when supernova 1987A exploded are analysed in detail. The research program of LSD-2, the same experiment as LSD but with a higher sensitivity in the search for neutrino bursts from collapsing stars, is also discussed

  16. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
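
    The standard Levenberg-Marquardt algorithm referred to above interpolates between Gauss-Newton and gradient descent through a damping parameter. A compact sketch for a two-parameter exponential fit (synthetic data and starting values):

```python
import numpy as np

def residuals(p, t, y):
    return y - p[0] * np.exp(-p[1] * t)

def jacobian(p, t):
    # Jacobian of the residuals with respect to the parameters (amp, rate).
    return np.column_stack([-np.exp(-p[1] * t),
                            p[0] * t * np.exp(-p[1] * t)])

rng = np.random.default_rng(9)
t = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(40)

p = np.array([1.0, 1.0])   # starting guess
lam = 1e-3                 # damping parameter
for _ in range(100):
    r = residuals(p, t, y)
    J = jacobian(p, t)
    # Damped normal equations: (J^T J + lam I) step = -J^T r
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residuals(p + step, t, y) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.3   # accept: move toward Gauss-Newton
    else:
        lam *= 10.0                    # reject: move toward gradient descent
```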

  17. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared-error estimator (LMMSE), when the elements of x are statistically white.

  18. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2015-01-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared-error estimator (LMMSE), when the elements of x are statistically white.

  19. Modern Clinical Research on LSD.

    Science.gov (United States)

    Liechti, Matthias E

    2017-10-01

All modern clinical studies using the classic hallucinogen lysergic acid diethylamide (LSD) in healthy subjects or patients in the last 25 years are reviewed herein. There were five recent studies in healthy participants and one in patients. In a controlled setting, LSD acutely induced bliss, audiovisual synesthesia, altered meaning of perceptions, derealization, depersonalization, and mystical experiences. These subjective effects of LSD were mediated by the 5-HT2A receptor. LSD increased feelings of closeness to others, openness, trust, and suggestibility. LSD impaired the recognition of sad and fearful faces, reduced left amygdala reactivity to fearful faces, and enhanced emotional empathy. LSD increased the emotional response to music and the meaning of music. LSD acutely produced deficits in sensorimotor gating, similar to observations in schizophrenia. LSD had weak autonomic stimulant effects and elevated plasma cortisol, prolactin, and oxytocin levels. Resting-state functional magnetic resonance studies showed that LSD acutely reduced the integrity of functional brain networks and increased connectivity between networks that normally are more dissociated. LSD increased functional thalamocortical connectivity and functional connectivity of the primary visual cortex with other brain areas. The latter effect was correlated with subjective hallucinations. LSD acutely induced global increases in brain entropy that were associated with greater trait openness 14 days later. In patients with anxiety associated with life-threatening disease, anxiety was reduced for 2 months after two doses of LSD. In medical settings, no complications of LSD administration were observed. These data should contribute to further investigations of the therapeutic potential of LSD in psychiatry.

  20. Error propagation of partial least squares for parameters optimization in NIR modeling

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

A novel methodology is proposed to determine the error propagation of partial least squares (PLS) during parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55%, and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS during parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models.

  1. Error propagation of partial least squares for parameters optimization in NIR modeling.

    Science.gov (United States)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

A novel methodology is proposed to determine the error propagation of partial least squares (PLS) during parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55%, and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS during parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
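As a rough illustration of the PLS models discussed above, a minimal single-response PLS fit (NIPALS form) can be written in NumPy. The function name and toy data below are invented for illustration; none of the paper's datasets, pretreatments, or error-weight calculations are reproduced:

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """Minimal single-response PLS (NIPALS); returns regression coefficients
    for centered X, so predictions are (X - X.mean(0)) @ coef + y.mean()."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)        # weight vector
        t = Xc @ w                       # latent-variable scores
        p = Xc.T @ t / (t @ t)           # X loadings
        qk = yc @ t / (t @ t)            # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# Demo: with all latent variables and noiseless data, PLS reproduces y exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
coef = pls1_nipals(X, y, n_components=5)
y_hat = (X - X.mean(axis=0)) @ coef + y.mean()
```

The number of latent variables passed to `pls1_nipals` is exactly the kind of modeling parameter whose effect on error the paper traces.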

  2. Application of pulse pile-up correction spectrum to the library least-squares method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon [Kyungpook National Univ., Daegu (Korea, Republic of)

    2006-12-15

The Monte Carlo simulation code CEARPPU has been developed and updated to provide pulse pile-up correction spectra for high counting rate cases. For neutron activation analysis, CEARPPU correction spectra were used in the library least-squares method to give better isotopic activity results than the conventional library least-squares fitting with uncorrected spectra.
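The library least-squares idea used above, fitting a measured spectrum as a linear combination of known single-isotope library spectra, can be sketched as follows; the 8-channel library and the two "isotopes" are made-up numbers for illustration, not CEARPPU output:

```python
import numpy as np

# Library least-squares: model the measured spectrum as lib @ activities,
# where each column of lib is a known library spectrum (channels x isotopes).
lib = np.array([
    [0., 1., 5., 9., 5., 1., 0., 0.],   # hypothetical isotope A response
    [0., 0., 1., 2., 5., 9., 5., 1.],   # hypothetical isotope B response
]).T

true_act = np.array([3.0, 1.5])          # "activities" to recover
measured = lib @ true_act                # noiseless toy spectrum

# Least-squares estimate of the isotopic activities.
act_hat, *_ = np.linalg.lstsq(lib, measured, rcond=None)
```

In the paper's setting, the measured spectrum would first be corrected for pile-up (or the library built from pile-up-corrected spectra) before this fit is performed.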

3. Estimation of the Seemingly Unrelated Regression (SUR) Model Using the Generalized Least Squares (GLS) Method

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2015-04-01

Full Text Available Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables, so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. The least squares method estimates the parameters of one or more regression equations, but it does not allow for correlation among the errors across equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with GLS estimation to world gasoline demand data and finds that SUR using GLS is better than OLS because it produces smaller errors.

4. Estimation of the Seemingly Unrelated Regression (SUR) Model Using the Generalized Least Squares (GLS) Method

    Directory of Open Access Journals (Sweden)

    Ade Widyaningsih

    2014-06-01

Full Text Available Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables, so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. The least squares method estimates the parameters of one or more regression equations, but it does not allow for correlation among the errors across equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with GLS estimation to world gasoline demand data and finds that SUR using GLS is better than OLS because it produces smaller errors.
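The GLS estimator underlying the SUR fit can be sketched for a single equation with heteroscedastic errors; the data are synthetic and the error covariance is assumed known here, whereas a full SUR procedure would estimate the cross-equation covariance from OLS residuals:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy heteroscedastic regression: OLS ignores the error covariance,
# GLS whitens with it (Omega assumed known for illustration).
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([2.0, -1.0])
sig = np.linspace(0.1, 2.0, n)          # error std dev varies per row
y = X @ beta_true + sig * rng.standard_normal(n)

Omega_inv = np.diag(1.0 / sig**2)       # inverse error covariance
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

The GLS normal equations `(X' Ω⁻¹ X) β = X' Ω⁻¹ y` are exactly what SUR applies, with Ω built from the estimated cross-equation error covariance.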

  5. An on-line modified least-mean-square algorithm for training neurofuzzy controllers.

    Science.gov (United States)

    Tan, Woei Wan

    2007-04-01

    The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
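The normalized least-mean-square update that this identification strategy builds on can be sketched as follows (plain NLMS only; the paper's fuzzy modification for reducing learning interference is not reproduced, and the FIR "plant" is an invented example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Normalized LMS identification of an unknown FIR plant h.
h = np.array([0.5, -0.3, 0.2])           # hypothetical plant to identify
w = np.zeros(3)                          # adaptive weights
mu, eps = 0.5, 1e-6                      # step size, small regularizer

x = rng.standard_normal(2000)            # persistently exciting input
for n in range(3, len(x)):
    u = x[n-3:n][::-1]                   # regressor (most recent sample first)
    e = h @ u - w @ u                    # a-priori error (noiseless plant output)
    w += mu * e * u / (eps + u @ u)      # normalized LMS update
```

Normalizing by `u @ u` makes the effective step size insensitive to the input power, which is what distinguishes NLMS from plain LMS.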

  6. Performance Evaluation of the Ordinary Least Square (OLS) and ...

    African Journals Online (AJOL)

    Nana Kwasi Peprah

Department of Geomatic Engineering, University of Mines and Technology, ... precise, accurate and can be used to execute any engineering works due to ..... and Ordinary Least Squares Methods”, Journal of Geomatics and Planning, Vol ... Technology”, Unpublished BSc Project Report, University of Mines and Technology ...

  7. A study of the real-time deconvolution of digitized waveforms with pulse pile up for digital radiation spectroscopy

    International Nuclear Information System (INIS)

    Guo Weijun; Gardner, Robin P.; Mayo, Charles W.

    2005-01-01

    Two new real-time approaches have been developed and compared to the least-squares fit approach for the deconvolution of experimental waveforms with pile-up pulses. The single pulse shape chosen is typical for scintillators such as LSO and NaI(Tl). Simulated waveforms with pulse pile up were also generated and deconvolved to compare these three different approaches under cases where the single pulse component has a constant shape and the digitization error dominates. The effects of temporal separation and amplitude ratio between pile-up component pulses were also investigated and statistical tests were applied to quantify the consistency of deconvolution results for each case. Monte Carlo simulation demonstrated that applications of these pile-up deconvolution techniques to radiation spectroscopy are effective in extending the counting-rate range while preserving energy resolution for scintillation detectors

  8. Tensor hypercontraction. II. Least-squares renormalization

    Science.gov (United States)

    Parrish, Robert M.; Hohenstein, Edward G.; Martínez, Todd J.; Sherrill, C. David

    2012-12-01

    The least-squares tensor hypercontraction (LS-THC) representation for the electron repulsion integral (ERI) tensor is presented. Recently, we developed the generic tensor hypercontraction (THC) ansatz, which represents the fourth-order ERI tensor as a product of five second-order tensors [E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, J. Chem. Phys. 137, 044103 (2012)], 10.1063/1.4732310. Our initial algorithm for the generation of the THC factors involved a two-sided invocation of overlap-metric density fitting, followed by a PARAFAC decomposition, and is denoted PARAFAC tensor hypercontraction (PF-THC). LS-THC supersedes PF-THC by producing the THC factors through a least-squares renormalization of a spatial quadrature over the otherwise singular 1/r12 operator. Remarkably, an analytical and simple formula for the LS-THC factors exists. Using this formula, the factors may be generated with O(N^5) effort if exact integrals are decomposed, or O(N^4) effort if the decomposition is applied to density-fitted integrals, using any choice of density fitting metric. The accuracy of LS-THC is explored for a range of systems using both conventional and density-fitted integrals in the context of MP2. The grid fitting error is found to be negligible even for extremely sparse spatial quadrature grids. For the case of density-fitted integrals, the additional error incurred by the grid fitting step is generally markedly smaller than the underlying Coulomb-metric density fitting error. The present results, coupled with our previously published factorizations of MP2 and MP3, provide an efficient, robust O(N^4) approach to both methods. Moreover, LS-THC is generally applicable to many other methods in quantum chemistry.

  9. A Generalized Autocovariance Least-Squares Method for Kalman Filter Tuning

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2008-01-01

This paper discusses a method for estimating noise covariances from process data. In linear stochastic state-space representations the true noise covariances are generally unknown in practical applications. Using estimated covariances, a Kalman filter can be tuned in order to increase the accuracy of the state estimates. There is a linear relationship between covariances and autocovariance; therefore, the covariance estimation problem can be stated as a least-squares problem, which can be solved as a symmetric semidefinite least-squares problem. This problem is convex and can be solved efficiently by interior-point methods. A numerical algorithm for solving the symmetric semidefinite least-squares problem is able to handle systems with mutually correlated process noise and measurement noise. (c) 2007 Elsevier Ltd. All rights reserved.

  10. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    Science.gov (United States)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained when solving high-dimensional, large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the result proved to be a satisfactory one.
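The computational appeal of an LS-SVM is that training reduces to one linear system rather than a quadratic program. A generic single-layer LS-SVM classifier in its regression-on-labels form is sketched below; this is not the paper's two-layer KPCA variant, and the RBF kernel and `gamma` value are arbitrary choices for the toy data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-class data, labels in {-1, +1}.
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]

gamma = 10.0                                                  # regularization
K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))     # RBF kernel matrix

# LS-SVM dual: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.r_[0.0, y])
b, alpha = sol[0], sol[1:]

def predict(Z):
    Kz = np.exp(-np.sum((Z[:, None] - X[None]) ** 2, axis=-1))
    return np.sign(Kz @ alpha + b)
```

Replacing the SVM's inequality constraints with equality constraints is what turns training into this single solve, at the cost of losing sparseness, which is the issue the paper's linear programming form addresses.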

  11. A FORTRAN program for a least-square fitting

    International Nuclear Information System (INIS)

    Yamazaki, Tetsuo

    1978-01-01

    A practical FORTRAN program for a least-squares fitting is presented. Although the method is quite usual, the program calculates not only the most satisfactory set of values of unknowns but also the plausible errors associated with them. As an example, a measured lateral absorbed-dose distribution in water for a narrow 25-MeV electron beam is fitted to a Gaussian distribution. (auth.)
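The same kind of fit, parameter values plus plausible errors, can be sketched in Python by exploiting the fact that the logarithm of a Gaussian is a parabola, so a linear least-squares polynomial fit returns both the coefficients and their covariance; the "dose profile" data here are synthetic, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "lateral dose profile": a Gaussian plus small multiplicative noise.
x = np.linspace(-3, 3, 61)
amp, mu, sigma = 1.0, 0.2, 0.8
y = amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) * (1 + 0.01 * rng.standard_normal(61))

# log(y) is a parabola, so a linear least-squares fit recovers the Gaussian
# parameters; cov=True returns the coefficient covariance, from which the
# "plausible errors" of the fitted coefficients follow.
coef, cov = np.polyfit(x, np.log(y), 2, cov=True)
sigma_fit = np.sqrt(-1.0 / (2.0 * coef[0]))
mu_fit = -coef[1] / (2.0 * coef[0])
coef_err = np.sqrt(np.diag(cov))        # 1-sigma coefficient uncertainties
```

A general nonlinear least-squares fit (as the FORTRAN program performs) would iterate on the Gaussian model directly and report errors from the local covariance in the same way.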

  12. Least squares orthogonal polynomial approximation in several independent variables

    International Nuclear Information System (INIS)

    Caprari, R.S.

    1992-06-01

    This paper begins with an exposition of a systematic technique for generating orthonormal polynomials in two independent variables by application of the Gram-Schmidt orthogonalization procedure of linear algebra. It is then demonstrated how a linear least squares approximation for experimental data or an arbitrary function can be generated from these polynomials. The least squares coefficients are computed without recourse to matrix arithmetic, which ensures both numerical stability and simplicity of implementation as a self contained numerical algorithm. The Gram-Schmidt procedure is then utilised to generate a complete set of orthogonal polynomials of fourth degree. A theory for the transformation of the polynomial representation from an arbitrary basis into the familiar sum of products form is presented, together with a specific implementation for fourth degree polynomials. Finally, the computational integrity of this algorithm is verified by reconstructing arbitrary fourth degree polynomials from their values at randomly chosen points in their domain. 13 refs., 1 tab
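The two ingredients described above, Gram-Schmidt orthonormalization of a polynomial basis and least-squares coefficients computed as plain inner products with no matrix arithmetic, can be sketched in one dimension (the paper works in two independent variables):

```python
import numpy as np

# Discrete Gram-Schmidt: orthonormalize the monomials 1, x, x^2, ... over the
# sample points; least-squares coefficients are then plain inner products.
x = np.linspace(-1, 1, 40)
degree = 4
basis = []
for k in range(degree + 1):
    v = x ** k
    for u in basis:                      # subtract projections on earlier vectors
        v = v - (u @ v) * u
    basis.append(v / np.linalg.norm(v))

f = 2 * x**4 - x**2 + 0.5               # function to approximate
coeffs = np.array([u @ f for u in basis])
f_hat = sum(c * u for c, u in zip(coeffs, basis))
```

Because `f` is itself a degree-4 polynomial, the projection reproduces it exactly; for arbitrary data the same inner products give the least-squares approximation, which is the numerical-stability advantage the paper emphasizes.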

  13. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang

    2013-12-06

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free-surface becomes an extended virtual source where the downgoing free-surface multiples more fully illuminate the subsurface compared to the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required and the ringy time series for each source is automatically deconvolved. If the multiples can be perfectly separated from the primaries, numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if there is imperfect separation and the multiples are strongly interfering with the primaries then LSRTMM images show no significant advantage over the primary migration images. In some cases, they can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows higher signal-to-noise imaging of the salt bottom and top compared to standard RTM images. This is likely attributed to the fact that the target body is just below the sea bed so that the deep water multiples do not have strong interference with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean bottom seismic data shows that LSM of primaries and LSRTMM provides significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. 
This can lead to lower resolution in the migration images.

  14. Genie in a blotter: A comparative study of LSD and LSD analogues' effects and user profile.

    Science.gov (United States)

    Coney, Leigh D; Maier, Larissa J; Ferris, Jason A; Winstock, Adam R; Barratt, Monica J

    2017-05-01

This study aimed to describe self-reported patterns of use and effects of lysergic acid diethylamide (LSD) analogues (AL-LAD, 1P-LSD, and ETH-LAD) and the characteristics of those who use them. Data were drawn from an anonymous self-selected online survey of people who use drugs (Global Drug Survey 2016; N = 96,894) that measured perceived drug effects of LSD and its analogues. Most LSD analogue users (91%) had also tried LSD. The proportion of U.K. and U.S. respondents reporting LSD analogue use in the last 12 months was higher than for LSD only. LSD analogue users described the effects as psychedelic (93%), over half (55%) obtained it online, and almost all (99%) reported an oral route of administration. The modal duration (8 hr) and time to peak (2 hr) of LSD analogues were not significantly different from LSD. Ratings for pleasurable high, strength of effect, comedown, urge to use more drugs, value for money, and risk of harm following use were significantly lower for LSD analogues compared with LSD. LSD analogues were reported as similar in time to peak and duration as LSD but weaker in strength, pleasurable high, and comedown. Future studies should seek to replicate these findings with chemical confirmation and dose measurement. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Nonlinear Least Square Based on Control Direction by Dual Method and Its Application

    Directory of Open Access Journals (Sweden)

    Zhengqing Fu

    2016-01-01

Full Text Available A direction-controlled nonlinear least squares (NLS) estimation algorithm using the primal-dual method is proposed. The least squares model is transformed into a primal-dual model; the direction of iteration can then be controlled through duality, and the iterative algorithm is designed accordingly. The ill-conditioned Hilbert matrix is processed with the new model, the least squares estimate, and the ridge estimate. The main research method combines qualitative and quantitative analysis: the deviation between the estimated values and the true values and the fluctuation of the estimated residuals are used for qualitative analysis, while the root mean square error (RMSE) is used for quantitative analysis. The experimental results show that the proposed model has the smallest residual error and the minimum root mean square error, so the new estimation model is effective and precise. Real data from the Jining area are used in phase-unwrapping experiments, and a comparison with other classical unwrapping algorithms shows that better precision can be achieved with the proposed algorithm.
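Why a Hilbert system is a useful benchmark can be sketched as follows: tiny data perturbations wreck the plain least squares solve, while even simple regularization restores a usable estimate. The ridge parameter `lam` below is hand-picked for illustration and is not the paper's dual-controlled iteration:

```python
import numpy as np

rng = np.random.default_rng(5)

# 8x8 Hilbert matrix: a classically "morbid" (ill-conditioned) system.
n = 8
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
y = H @ x_true + 1e-6 * rng.standard_normal(n)   # tiny noise, hugely amplified

x_ls = np.linalg.solve(H, y)                     # naive solve blows up
lam = 1e-8                                       # illustrative ridge parameter
x_ridge = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

rmse_ls = np.sqrt(np.mean((x_ls - x_true) ** 2))
rmse_ridge = np.sqrt(np.mean((x_ridge - x_true) ** 2))
```

The RMSE comparison mirrors the abstract's quantitative criterion: the regularized estimate sits far closer to the true solution than the naive least squares solve.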

  16. Spectral mimetic least-squares method for div-curl systems

    NARCIS (Netherlands)

    Gerritsma, Marc; Palha, Artur; Lirkov, I.; Margenov, S.

    2018-01-01

In this paper the spectral mimetic least-squares method is applied to a two-dimensional div-curl system. A test problem is solved on orthogonal and curvilinear meshes and both h- and p-convergence results are presented. The resulting solutions will be pointwise divergence-free for these test problems.

  17. Nonnegative least-squares image deblurring: improved gradient projection approaches

    Science.gov (United States)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
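The projected Landweber iteration discussed above, a gradient step on the least-squares functional followed by projection onto the nonnegative orthant, can be sketched on a 1-D deblurring toy problem (unaccelerated PL only; the paper's step-length selection and line-search accelerations are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D deblurring toy problem: Gaussian blur matrix A, nonnegative signal.
n = 60
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(n)
x_true[20] = 1.0
x_true[35:40] = 0.5
y = A @ x_true + 1e-3 * rng.standard_normal(n)

# Projected Landweber: gradient step on ||Ax - y||^2, then clip to x >= 0.
tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step size inside the convergence bound
x = np.zeros(n)
for _ in range(500):                         # early stopping acts as regularization
    x = np.maximum(0.0, x + tau * (A.T @ (y - A @ x)))
```

The fixed iteration count stands in for the semi-convergence behavior the abstract describes: stopping early regularizes, while the slow per-iteration progress is precisely what the proposed accelerations attack.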

  18. Development and validation of an LC-MS/MS method to quantify lysergic acid diethylamide (LSD), iso-LSD, 2-oxo-3-hydroxy-LSD, and nor-LSD and identify novel metabolites in plasma samples in a controlled clinical trial

    OpenAIRE

    Dolder, Patrick C.; Liechti, Matthias E.; Rentsch, Katharina M.

    2018-01-01

    Lysergic acid diethylamide (LSD) is a widely used recreational drug. The aim of this study was to develop and validate a liquid chromatography tandem mass spectrometry (LC-MS/MS) method for the quantification of LSD, iso-LSD, 2-oxo-3-hydroxy LSD (O-H-LSD), and nor-LSD in plasma samples from 24 healthy subjects after controlled administration of 100 μg LSD in a clinical trial. In addition, metabolites that have been recently described in in vitro studies, including lysergic acid monoethylamide...

  19. Multisplitting for linear, least squares and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Renaut, R.

    1996-12-31

In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem, and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.

  20. Attenuation compensation in least-squares reverse time migration using the visco-acoustic wave equation

    KAUST Repository

    Dutta, Gaurav

    2013-08-20

    Attenuation leads to distortion of amplitude and phase of seismic waves propagating inside the earth. Conventional acoustic and least-squares reverse time migration do not account for this distortion which leads to defocusing of migration images in highly attenuative geological environments. To account for this distortion, we propose to use the visco-acoustic wave equation for least-squares reverse time migration. Numerical tests on synthetic data show that least-squares reverse time migration with the visco-acoustic wave equation corrects for this distortion and produces images with better balanced amplitudes compared to the conventional approach. © 2013 SEG.

  1. Track Circuit Fault Diagnosis Method based on Least Squares Support Vector

    Science.gov (United States)

    Cao, Yan; Sun, Fengru

    2018-01-01

In order to improve the troubleshooting efficiency and accuracy of the track circuit, a track circuit fault diagnosis method was researched. Firstly, the least squares support vector machine was applied to design a multi-fault classifier for the track circuit; then measured track data were used as training samples to verify the feasibility of the method. Finally, the results of fault diagnosis methods based on a BP neural network and of the method used in this paper were compared. The results show that the track fault classifier based on the least squares support vector machine can effectively diagnose the five track circuit fault types with less computing time.

  2. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    Science.gov (United States)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
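The weighted least-squares machinery itself is standard and can be sketched as follows; the weights below simply downweight one point for illustration and are not the paper's load-count weighting rule, and the data are invented:

```python
import numpy as np

# Weighted least squares: each data point gets a weight in [0, 1];
# solve the weighted normal equations (X^T W X) beta = X^T W y.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([0.1, 1.0, 2.1, 5.0])       # last point pulls the fit upward
w = np.array([1.0, 1.0, 1.0, 0.1])       # downweight it
W = np.diag(w)
beta_w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
beta_u = np.linalg.lstsq(X, y, rcond=None)[0]
```

Reducing a point's weight reduces its influence on the fitted coefficients, which is exactly how the proposed scheme boosts single-component loadings relative to heavily combined ones.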

  3. A comparison of two least-squared random coefficient autoregressive models: with and without autocorrelated errors

    OpenAIRE

    Autcha Araveeporn

    2013-01-01

    This paper compares a Least-Squared Random Coefficient Autoregressive (RCA) model with a Least-Squared RCA model based on Autocorrelated Errors (RCA-AR). We looked at only the first order models, denoted RCA(1) and RCA(1)-AR(1). The efficiency of the Least-Squared method was checked by applying the models to Brownian motion and Wiener process, and the efficiency followed closely the asymptotic properties of a normal distribution. In a simulation study, we compared the performance of RCA(1) an...

  4. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    Science.gov (United States)

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

An analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on the assumption that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.
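The CLS baseline that the polynomial models are compared against can be sketched as follows; the "spectra" are made-up numbers, and the model is the plain linear Beer-Lambert form without the paper's polynomial terms:

```python
import numpy as np

# Classical least squares (CLS): mixture absorbance a = S c, where the columns
# of S are pure-component spectra and c holds the concentrations.
S = np.array([
    [0.1, 0.9],
    [0.5, 0.5],
    [0.9, 0.1],
    [0.4, 0.2],
])                                       # 4 wavelengths x 2 components
c_true = np.array([0.3, 0.7])
a = S @ c_true                           # toy mixture absorbance

# Least-squares estimate of the concentrations from the mixture spectrum.
c_hat, *_ = np.linalg.lstsq(S, a, rcond=None)
```

When unmodeled H-bonded species distort the measured absorbances, this linear model misfits, which is the failure mode that motivates LSP and TLSP.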

  5. Preconditioned Iterative Methods for Solving Weighted Linear Least Squares Problems

    Czech Academy of Sciences Publication Activity Database

    Bru, R.; Marín, J.; Mas, J.; Tůma, Miroslav

    2014-01-01

    Roč. 36, č. 4 (2014), A2002-A2022 ISSN 1064-8275 Institutional support: RVO:67985807 Keywords : preconditioned iterative methods * incomplete decompositions * approximate inverses * linear least squares Subject RIV: BA - General Mathematics Impact factor: 1.854, year: 2014

  6. Analysis of quantile regression as alternative to ordinary least squares

    OpenAIRE

    Ibrahim Abdullahi; Abubakar Yahaya

    2015-01-01

    In this article, an alternative to ordinary least squares (OLS) regression based on analytical solution in the Statgraphics software is considered, and this alternative is no other than quantile regression (QR) model. We also present goodness of fit statistic as well as approximate distributions of the associated test statistics for the parameters. Furthermore, we suggest a goodness of fit statistic called the least absolute deviation (LAD) coefficient of determination. The procedure is well ...
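A least-absolute-deviation fit of the kind suggested above can be approximated by iteratively reweighted least squares; this is a common textbook scheme, not the Statgraphics procedure the article uses, and the outlier-contaminated data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(8)

# LAD (median regression) line fit via iteratively reweighted least squares:
# weighting each residual by 1/|residual| turns the L1 problem into a
# sequence of weighted L2 problems.
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.standard_normal(n)
y[:5] += 30.0                            # gross outliers that would wreck OLS

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS starting point
for _ in range(50):
    r = np.abs(y - X @ beta)
    w = 1.0 / np.maximum(r, 1e-6)               # floor avoids division by zero
    Xw = w[:, None] * X
    beta = np.linalg.solve(X.T @ Xw, X.T @ (w * y))
```

The robustness to the five outliers illustrates why LAD-based measures, like the suggested LAD coefficient of determination, behave differently from their OLS counterparts.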

  7. Spectral/hp least-squares finite element formulation for the Navier-Stokes equations

    International Nuclear Information System (INIS)

    Pontaza, J.P.; Reddy, J.N.

    2003-01-01

    We consider the application of least-squares finite element models combined with spectral/hp methods for the numerical solution of viscous flow problems. The paper presents the formulation, validation, and application of a spectral/hp algorithm to the numerical solution of the Navier-Stokes equations governing two- and three-dimensional stationary incompressible and low-speed compressible flows. The Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity or velocity gradients as additional independent variables and the least-squares method is used to develop the finite element model. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method. Spectral convergence of the L2 least-squares functional and L2 error norms is verified using smooth solutions to the two-dimensional stationary Poisson and incompressible Navier-Stokes equations. Numerical results for flow over a backward-facing step, steady flow past a circular cylinder, three-dimensional lid-driven cavity flow, and compressible buoyant flow inside a square enclosure are presented to demonstrate the predictive capability and robustness of the proposed formulation.

  8. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    Science.gov (United States)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y - E_Y = (X - E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
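
    The SVD-based ‘closed form’ solution has a well-known single-right-hand-side analogue (the classical Golub-Van Loan construction). The sketch below illustrates that case on synthetic data; it is not the authors' multivariate algorithm:

```python
import numpy as np

def tls(X, y):
    """Total least squares for X b ~= y with errors in both X and y:
    the solution comes from the right singular vector of [X | y]
    associated with the smallest singular value."""
    n = X.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([X, y]))
    v = Vt[-1]                # null-like direction: proportional to (b, -1)
    return -v[:n] / v[n]

rng = np.random.default_rng(1)
X_true = rng.normal(size=(300, 2))
b_true = np.array([1.5, -0.5])
X_obs = X_true + 0.01 * rng.normal(size=X_true.shape)   # errors in coefficients
y_obs = X_true @ b_true + 0.01 * rng.normal(size=300)   # errors in observations
b_tls = tls(X_obs, y_obs)
```

    Unlike ordinary least squares, this treats the two data sets symmetrically, which is exactly the "symmetry" issue the case study raises for coordinate transformations.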

  9. Growth kinetics of borided layers: Artificial neural network and least square approaches

    Science.gov (United States)

    Campos, I.; Islas, M.; Ramírez, G.; VillaVelázquez, C.; Mota, C.

    2007-05-01

    The present study evaluates the growth kinetics of the boride layer Fe2B in AISI 1045 steel, by means of neural networks and the least square techniques. The Fe2B phase was formed at the material surface using the paste boriding process. The surface boron potential was modified considering different boron paste thicknesses, with exposure times of 2, 4 and 6 h, and treatment temperatures of 1193, 1223 and 1273 K. The neural network and the least square models were set by the layer thickness of the Fe2B phase, and assuming that the growth of the boride layer follows a parabolic law. The reliability of the techniques used is compared with a set of experiments at a temperature of 1223 K with 5 h of treatment time and boron potentials of 2, 3, 4 and 5 mm. The results of the Fe2B layer thicknesses show a mean error of 5.31% for the neural network and 3.42% for the least square method.
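
    Since the layer is assumed to follow a parabolic law, d² = K·t, the least-squares part reduces to a one-parameter linear fit through the origin. A minimal sketch with illustrative thickness values (not the paper's measurements):

```python
import numpy as np

# Hypothetical layer thicknesses d (micrometres) at exposure times t (s);
# the parabolic law d^2 = K * t is linear in t, so K has a closed-form
# least-squares solution for a line through the origin.
t = np.array([7200.0, 14400.0, 21600.0])        # 2, 4 and 6 h exposures
d = np.array([45.0, 64.0, 78.0])                # illustrative, not the paper's data
K = float(np.sum(t * d**2) / np.sum(t * t))     # slope of d^2 vs t through origin
d_pred = np.sqrt(K * t)                         # fitted layer thicknesses
```

    The fitted growth constant `K` (here in µm²/s) is the quantity whose temperature dependence the kinetic study characterizes.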

  10. Unweighted least squares phase unwrapping by means of multigrid techniques

    Science.gov (United States)

    Pritt, Mark D.

    1995-11-01

    We present a multigrid algorithm for unweighted least squares phase unwrapping. This algorithm applies Gauss-Seidel relaxation schemes to solve the Poisson equation on smaller, coarser grids and transfers the intermediate results to the finer grids. This approach forms the basis of our multigrid algorithm for weighted least squares phase unwrapping, which is described in a separate paper. The key idea of our multigrid approach is to maintain the partial derivatives of the phase data in separate arrays and to correct these derivatives at the boundaries of the coarser grids. This maintains the boundary conditions necessary for rapid convergence to the correct solution. Although the multigrid algorithm is an iterative algorithm, we demonstrate that it is nearly as fast as the direct Fourier-based method. We also describe how to parallelize the algorithm for execution on a distributed-memory parallel processor computer or a network-cluster of workstations.
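
    The Gauss-Seidel relaxation used on each grid can be sketched on a single grid for the discrete Poisson equation; the coarse-to-fine transfers and the derivative bookkeeping of the full multigrid algorithm are omitted:

```python
import numpy as np

def gauss_seidel_poisson(rho, n_iter=2000):
    """Gauss-Seidel relaxation for the 2-D discrete Poisson equation
    laplacian(phi) = rho (unit grid spacing, zero Dirichlet boundary).
    In-place sweeps make this Gauss-Seidel rather than Jacobi."""
    phi = np.zeros_like(rho)
    for _ in range(n_iter):
        for i in range(1, rho.shape[0] - 1):
            for j in range(1, rho.shape[1] - 1):
                phi[i, j] = 0.25 * (phi[i - 1, j] + phi[i + 1, j]
                                    + phi[i, j - 1] + phi[i, j + 1] - rho[i, j])
    return phi

# Manufactured solution: build rho from a known phi and recover it.
rng = np.random.default_rng(2)
n = 10
phi_true = np.zeros((n, n))
phi_true[1:-1, 1:-1] = rng.normal(size=(n - 2, n - 2))
rho = np.zeros((n, n))
rho[1:-1, 1:-1] = (phi_true[:-2, 1:-1] + phi_true[2:, 1:-1]
                   + phi_true[1:-1, :-2] + phi_true[1:-1, 2:]
                   - 4.0 * phi_true[1:-1, 1:-1])
phi = gauss_seidel_poisson(rho)
```

    On fine grids this smoother alone converges slowly for low-frequency error, which is precisely why the paper restricts it to coarser grids and transfers the results upward.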

  11. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology

    International Nuclear Information System (INIS)

    Han, Jubong; Lee, K.B.; Lee, Jong-Man; Park, Tae Soon; Oh, J.S.; Oh, Pil-Jei

    2016-01-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (Probability Density Functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, which are treated as nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as the least-squares function is used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from our procedure with those from conventional methods. - Highlights: • A new method is proposed to incorporate Type B uncertainty into the least-squares method. • The method is constructed from the likelihood function and PDFs of Type B uncertainty. • A case study is performed to compare results from the new and the conventional method. • Fitted parameters are consistent but with larger uncertainties in the new method.

  12. BRGLM, Interactive Linear Regression Analysis by Least Square Fit

    International Nuclear Information System (INIS)

    Ringland, J.T.; Bohrer, R.E.; Sherman, M.E.

    1985-01-01

    1 - Description of program or function: BRGLM is an interactive program written to fit general linear regression models by least squares and to provide a variety of statistical diagnostic information about the fit. Stepwise and all-subsets regression can also be carried out. There are facilities for interactive data management (e.g. setting missing value flags, data transformations) and tools for constructing design matrices for the more commonly used models such as factorials, cubic splines, and auto-regressions. 2 - Method of solution: The least squares computations are based on the orthogonal (QR) decomposition of the design matrix obtained using the modified Gram-Schmidt algorithm. 3 - Restrictions on the complexity of the problem: The current release of BRGLM allows maxima of 1000 observations, 99 variables, and 3000 words of main memory workspace. For a problem with N observations and P variables, the number of words of main memory storage required is MAX(N*(P+6), N*P+P*P+3*N, 3*P*P+6*N). Any linear model may be fit, although the in-memory workspace will have to be increased for larger problems.
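
    The least-squares step via modified Gram-Schmidt QR can be sketched as follows; this is a generic textbook implementation for illustration, not BRGLM's code:

```python
import numpy as np

def mgs_qr(A):
    """Thin QR factorization by modified Gram-Schmidt: each new column is
    orthogonalized against the already-computed Q columns one at a time."""
    V = A.astype(float).copy()
    m, n = V.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(V[:, k])
        Q[:, k] = V[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ V[:, j]
            V[:, j] -= R[k, j] * Q[:, k]
    return Q, R

def lstsq_via_qr(A, b):
    """Least-squares solution of A x ~= b from the QR factors:
    solve the triangular system R x = Q^T b."""
    Q, R = mgs_qr(A)
    return np.linalg.solve(R, Q.T @ b)

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 5))
b = rng.normal(size=50)
x = lstsq_via_qr(A, b)
```

    Modified (rather than classical) Gram-Schmidt is the usual choice in regression software because it loses far less orthogonality in floating point.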

  13. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given

  14. A constrained robust least squares approach for contaminant release history identification

    Science.gov (United States)

    Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.

    2006-04-01

    Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS gives much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
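
    The classical counterpart mentioned above, nonnegative least squares, can be sketched on a toy release-history problem. The exponential response matrix below is an assumption for illustration only, not the paper's transport model:

```python
import numpy as np
from scipy.optimize import nnls

# Toy release-history recovery: downstream concentrations c = G s, with a
# hypothetical lower-triangular exponential response matrix G and a
# nonnegative pulse release s_true.
n = 30
G = np.array([[np.exp(-0.1 * (i - j)) if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
s_true = np.zeros(n)
s_true[5:10] = 1.0                      # a pulse release between steps 5 and 9
rng = np.random.default_rng(3)
c = G @ s_true + 1e-4 * rng.normal(size=n)
s_est, residual = nnls(G, c)            # nonnegativity built into the solver
```

    The nonnegativity constraint is physically motivated (a source cannot release negative mass); CRLS keeps this constraint while additionally guarding against errors in G itself.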

  15. Application of the Least Median Square-Minimum Covariance Determinant (LMS-MCD) Method in Principal Component Regression

    Directory of Open Access Journals (Sweden)

    I PUTU EKA IRAWAN

    2013-11-01

    Principal component regression is a method that overcomes multicollinearity by combining principal component analysis with regression analysis. Classical principal component analysis is based on the regular covariance matrix. The covariance matrix is optimal if the data originate from a multivariate normal distribution, but it is very sensitive to the presence of outliers. The Least Median Square-Minimum Covariance Determinant (LMS-MCD) method is used as an alternative to overcome this problem. The purpose of this research is to compare Principal Component Regression (RKU) with the Least Median Square-Minimum Covariance Determinant (LMS-MCD) method in dealing with outliers. In this study, the LMS-MCD method has smaller bias and mean square error (MSE) than the RKU parameter estimates. Tests based on the differences between parameter estimators show larger differences for the LMS-MCD method than for the RKU method.

  16. Modern Clinical Research on LSD

    OpenAIRE

    Liechti, Matthias E.

    2017-01-01

    All modern clinical studies using the classic hallucinogen lysergic acid diethylamide (LSD) in healthy subjects or patients in the last 25 years are reviewed herein. There were five recent studies in healthy participants and one in patients. In a controlled setting, LSD acutely induced bliss, audiovisual synesthesia, altered meaning of perceptions, derealization, depersonalization, and mystical experiences. These subjective effects of LSD were mediated by the 5-HT2A receptor. LSD increased fe...

  17. Least-squares methods for identifying biochemical regulatory networks from noisy measurements

    Directory of Open Access Journals (Sweden)

    Heslop-Harrison Pat

    2007-01-01

    Abstract Background We consider the problem of identifying the dynamic interactions in biochemical networks from noisy experimental data. Typically, approaches for solving this problem make use of an estimation algorithm such as the well-known linear Least-Squares (LS) estimation technique. We demonstrate that when time-series measurements are corrupted by white noise and/or drift noise, more accurate and reliable identification of network interactions can be achieved by employing an estimation algorithm known as Constrained Total Least Squares (CTLS). The Total Least Squares (TLS) technique is a generalised least squares method to solve an overdetermined set of equations whose coefficients are noisy. The CTLS is a natural extension of TLS to the case where the noise components of the coefficients are correlated, as is usually the case with time-series measurements of concentrations and expression profiles in gene networks. Results The superior performance of the CTLS method in identifying network interactions is demonstrated on three examples: a genetic network containing four genes, a network describing p53 activity and mdm2 messenger RNA interactions, and a recently proposed kinetic model for interleukin-6 (IL-6) and interleukin-12b (IL-12b) messenger RNA expression as a function of ATF3 and NF-κB promoter binding. For the first example, the CTLS significantly reduces the errors in the estimation of the Jacobian for the gene network. For the second, the CTLS reduces the errors from the measurements that are corrupted by white noise and the effect of neglected kinetics. For the third, it allows the correct identification, from noisy data, of the negative regulation of IL-6 and IL-12b by ATF3. Conclusion The significant improvements in performance demonstrated by the CTLS method under the wide range of conditions tested here, including different levels and types of measurement noise and different numbers of data points, suggests that its application will enable

  18. Constrained Balancing of Two Industrial Rotor Systems: Least Squares and Min-Max Approaches

    Directory of Open Access Journals (Sweden)

    Bin Huang

    2009-01-01

    Rotor vibrations caused by rotor mass unbalance distributions are a major source of maintenance problems in high-speed rotating machinery. Minimizing this vibration by balancing under practical constraints is quite important to industry. This paper considers balancing of two large industrial rotor systems by constrained least squares and min-max balancing methods. In current industrial practice, the weighted least squares method has been utilized to minimize rotor vibrations for many years. One of its disadvantages is that it cannot guarantee that the maximum value of vibration is below a specified value. To achieve better balancing performance, the min-max balancing method utilizing Second Order Cone Programming (SOCP) with the maximum correction weight constraint, the maximum residual response constraint, and the weight splitting constraint has been utilized for effective balancing. The min-max balancing method can guarantee a maximum residual vibration value below an optimum value and is shown by simulation to significantly outperform the weighted least squares method.

  19. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations

    KAUST Repository

    Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of nonlinear least squares in which penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.

  20. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    International Nuclear Information System (INIS)

    Verdoolaege, Geert

    2015-01-01

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices

  1. An animal model of schizophrenia based on chronic LSD administration: old idea, new results.

    Science.gov (United States)

    Marona-Lewicka, Danuta; Nichols, Charles D; Nichols, David E

    2011-09-01

    Many people who take LSD experience a second temporal phase of LSD intoxication that is qualitatively different, and was described by Daniel Freedman as "clearly a paranoid state." We have previously shown that the discriminative stimulus effects of LSD in rats also occur in two temporal phases, with initial effects mediated by activation of 5-HT(2A) receptors (LSD30), and the later temporal phase mediated by dopamine D2-like receptors (LSD90). Surprisingly, we have now found that non-competitive NMDA antagonists produced full substitution in LSD90 rats, but only in older animals, whereas in LSD30, or in younger animals, these drugs did not mimic LSD. Chronic administration of low doses of LSD (>3 months, 0.16 mg/kg every other day) induces a behavioral state characterized by hyperactivity and hyperirritability, increased locomotor activity, anhedonia, and impairment in social interaction that persists at the same magnitude for at least three months after cessation of LSD treatment. These behaviors, which closely resemble those associated with psychosis in humans, are not induced by withdrawal from LSD; rather, they are the result of neuroadaptive changes occurring in the brain during the chronic administration of LSD. These persistent behaviors are transiently reversed by haloperidol and olanzapine, but are insensitive to MDL-100907. Gene expression analysis data show that chronic LSD treatment produced significant changes in multiple neurotransmitter system-related genes, including those for serotonin and dopamine. Thus, we propose that chronic treatment of rats with low doses of LSD can serve as a new animal model of psychosis that may mimic the development and progression of schizophrenia, as well as model the established disease better than current acute drug administration models utilizing amphetamine or NMDA antagonists such as PCP. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Analysis of a plane stress wave by the moving least squares method

    Directory of Open Access Journals (Sweden)

    Wojciech Dornowski

    2014-08-01

    A meshless method based on the moving least squares approximation is applied to stress wave propagation analysis. Two kinds of node meshes, a randomly generated mesh and a regular mesh, are used. The nearest-neighbour problem is solved using a triangulation that satisfies minimum edge-length conditions. It is found that this method of choosing neighbours significantly improves the solution accuracy. The reflection of stress waves from the free edge is modelled using fictitious nodes (outside the plate). The comparison with finite difference results also demonstrates the accuracy of the proposed approach. Keywords: civil engineering, meshless method, moving least squares method, elastic waves
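
    A minimal 1-D sketch of the moving least squares approximation, with a Gaussian weight and a local linear basis (both common choices); the paper's 2-D plate setting and wave dynamics are not reproduced:

```python
import numpy as np

def mls_eval(x_nodes, u_nodes, x, h=0.05):
    """Moving least squares in 1-D: at each evaluation point, fit a local
    linear polynomial with Gaussian weights centred there, and take the
    value of that local fit at the point itself."""
    out = np.empty_like(x)
    for k, xk in enumerate(x):
        w = np.exp(-((x_nodes - xk) / h) ** 2)       # weights move with xk
        P = np.column_stack([np.ones_like(x_nodes), x_nodes - xk])
        A = P.T @ (w[:, None] * P)                   # weighted normal equations
        rhs = P.T @ (w * u_nodes)
        coef = np.linalg.solve(A, rhs)
        out[k] = coef[0]                             # local fit evaluated at xk
    return out

x_nodes = np.linspace(0.0, 1.0, 41)                  # a regular node mesh
u_nodes = np.sin(2.0 * np.pi * x_nodes)
x_eval = np.linspace(0.1, 0.9, 17)
u_mls = mls_eval(x_nodes, u_nodes, x_eval)
```

    Because the weighted fit is recomputed at every evaluation point, the reconstruction is smooth even on the randomly generated meshes the paper uses.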

  4. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common to all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to velocity errors; 2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and 3) LSM achieves both computational and I/O savings by plane-wave encoding compared to shot-domain LSM for the models tested.

  5. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, Dongliang

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from the Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.

  6. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model
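
    The underlying regularized problem can be sketched with a fixed Tikhonov parameter; the paper's actual contribution, selecting that parameter through a bounded artificial perturbation, is not reproduced here:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Regularized least squares min ||A x - b||^2 + lam^2 ||x||^2,
    solved through the augmented normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# A discrete ill-posed problem: an equispaced Vandermonde matrix, whose
# singular values decay rapidly, so tiny data noise destroys the naive solve.
n = 12
A = np.vander(np.linspace(0.0, 1.0, n), n, increasing=True)
x_true = np.ones(n)
rng = np.random.default_rng(5)
b_noisy = A @ x_true + 1e-6 * rng.normal(size=n)
x_reg = tikhonov(A, b_noisy, lam=1e-3)   # lam chosen by hand for this sketch
```

    The whole difficulty, and the subject of the paper, is choosing `lam` automatically: too small and noise is amplified, too large and the solution is over-smoothed.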

  7. Application of new least-squares methods for the quantitative infrared analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.

    1982-01-01

    Improvements have been made in previous least-squares regression analyses of infrared spectra for the quantitative estimation of concentrations of multicomponent mixtures. Spectral baselines are fitted by least-squares methods, and overlapping spectral features are accounted for in the fitting procedure. Selection of peaks above a threshold value reduces computation time and data storage requirements. Four weighted least-squares methods incorporating different baseline assumptions were investigated using FT-IR spectra of the three pure xylene isomers and their mixtures. By fitting only regions of the spectra that follow Beer's Law, accurate results can be obtained using three of the fitting methods even when baselines are not corrected to zero. Accurate results can also be obtained using one of the fits even in the presence of Beer's Law deviations. This is a consequence of pooling the weighted results for each spectral peak such that the greatest weighting is automatically given to those peaks that adhere to Beer's Law. It has been shown with the xylene spectra that semiquantitative results can be obtained even when all the major components are not known or when expected components are not present. This improvement over previous methods greatly expands the utility of quantitative least-squares analyses
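
    The idea of fitting the baseline jointly by least squares can be sketched with synthetic Gaussian bands. The band positions, widths, and noise level below are assumptions for illustration, and the paper's weighting and pooling of spectral peaks are omitted:

```python
import numpy as np

# CLS-style fit in which a linear spectral baseline is estimated jointly
# with the component concentrations (synthetic two-component spectrum).
wn = np.linspace(0.0, 100.0, 300)                  # wavenumber axis, arbitrary units
pure = np.array([np.exp(-((wn - c) / 6.0) ** 2) for c in (30.0, 60.0)])
conc_true = np.array([0.7, 0.25])
baseline = 0.1 + 0.002 * wn                        # uncorrected linear baseline
rng = np.random.default_rng(6)
spectrum = conc_true @ pure + baseline + 1e-3 * rng.normal(size=wn.size)
# Design matrix: pure-component spectra plus constant and slope baseline terms.
D = np.column_stack([pure.T, np.ones_like(wn), wn])
coef, *_ = np.linalg.lstsq(D, spectrum, rcond=None)
conc_est = coef[:2]                                # concentrations; coef[2:] is baseline
```

    Including the baseline columns in the design matrix is what lets the fit succeed even though the spectrum was never corrected to a zero baseline.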

  8. Least squares shadowing sensitivity analysis of a modified Kuramoto–Sivashinsky equation

    International Nuclear Information System (INIS)

    Blonigan, Patrick J.; Wang, Qiqi

    2014-01-01

    Highlights: •Modifying the Kuramoto–Sivashinsky equation and changing its boundary conditions make it an ergodic dynamical system. •The modified Kuramoto–Sivashinsky equation exhibits distinct dynamics for three different ranges of system parameters. •Least squares shadowing sensitivity analysis computes accurate gradients for a wide range of system parameters. - Abstract: Computational methods for sensitivity analysis are invaluable tools for scientists and engineers investigating a wide range of physical phenomena. However, many of these methods fail when applied to chaotic systems, such as the Kuramoto–Sivashinsky (K–S) equation, which models a number of different chaotic systems found in nature. The following paper discusses the application of a new sensitivity analysis method developed by the authors to a modified K–S equation. We find that least squares shadowing sensitivity analysis computes accurate gradients for solutions corresponding to a wide range of system parameters

  9. First-order system least-squares for second-order elliptic problems with discontinuous coefficients: Further results

    Energy Technology Data Exchange (ETDEWEB)

    Bloechle, B.; Manteuffel, T.; McCormick, S.; Starke, G.

    1996-12-31

    Many physical phenomena are modeled as scalar second-order elliptic boundary value problems with discontinuous coefficients. The first-order system least-squares (FOSLS) methodology is an alternative to standard mixed finite element methods for such problems. The occurrence of singularities at interface corners and cross-points requires that care be taken when implementing the least-squares finite element method in the FOSLS context. We introduce two methods of handling the challenges resulting from singularities. The first method is based on a weighted least-squares functional and results in non-conforming finite elements. The second method is based on the use of singular basis functions and results in conforming finite elements. We also share numerical results comparing the two approaches.

  10. Feasibility study on the least square method for fitting non-Gaussian noise data

    Science.gov (United States)

    Xu, Wei; Chen, Wen; Liang, Yingjie

    2018-02-01

    This study investigates the feasibility of the least squares method for fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial, and exponential equations, and the maximum absolute error and the mean square error are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
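
    The contrast the study draws can be reproduced in miniature: an ordinary least-squares line fit under Gaussian noise versus Cauchy noise, the latter used here as a heavy-tailed stand-in for Lévy-type noise:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 500)
X = np.column_stack([np.ones_like(x), x])
y_exact = 1.0 + 2.0 * x

def ls_slope(noise):
    """Ordinary least-squares slope of a line fit to the noisy data."""
    beta, *_ = np.linalg.lstsq(X, y_exact + noise, rcond=None)
    return beta[1]

slope_gauss = ls_slope(0.05 * rng.normal(size=x.size))
slope_cauchy = ls_slope(0.05 * rng.standard_cauchy(size=x.size))
# The Gaussian-noise slope is reliably near 2; the Cauchy-noise slope can be
# dragged arbitrarily far off by a single extreme sample, illustrating why
# least squares degrades under heavy-tailed noise.
```
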

  11. An Incremental Weighted Least Squares Approach to Surface Lights Fields

    Science.gov (United States)

    Coombe, Greg; Lastra, Anselmo

    An Image-Based Rendering (IBR) approach to appearance modelling enables the capture of a wide variety of real physical surfaces with complex reflectance behaviour. The challenges with this approach are handling the large amount of data, rendering the data efficiently, and previewing the model as it is being constructed. In this paper, we introduce the Incremental Weighted Least Squares approach to the representation and rendering of spatially and directionally varying illumination. Each surface patch consists of a set of Weighted Least Squares (WLS) node centers, which are low-degree polynomial representations of the anisotropic exitant radiance. During rendering, the representations are combined in a non-linear fashion to generate a full reconstruction of the exitant radiance. The rendering algorithm is fast, efficient, and implemented entirely on the GPU. The construction algorithm is incremental, which means that images are processed as they arrive instead of in the traditional batch fashion. This human-in-the-loop process enables the user to preview the model as it is being constructed and to adapt to over-sampling and under-sampling of the surface appearance.

  12. Non-stationary least-squares complex decomposition for microseismic noise attenuation

    Science.gov (United States)

    Chen, Yangkang

    2018-06-01

    Microseismic data processing and imaging are crucial for real-time subsurface monitoring during the hydraulic fracturing process. Unlike active-source seismic events or large-scale earthquake events, a microseismic event is usually of very small magnitude, which makes its detection challenging. The main difficulty with microseismic data is the low signal-to-noise ratio. Because of the small energy difference between the effective microseismic signal and the ambient noise, the effective signals are usually buried in strong random noise. I propose a microseismic denoising algorithm that decomposes a microseismic trace into an ensemble of components using least-squares inversion. Based on the predictive property of a useful microseismic event along the time direction, the random noise can be filtered out via least-squares fitting of multiple damping exponential components. The method is flexible and almost automated, since the only parameter that needs to be defined is the decomposition number. I use synthetic and real data examples to demonstrate the potential of the algorithm in processing complicated microseismic data sets.
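
A toy version of this decomposition, assuming (as a simplification) a fixed dictionary of damping rates so that the inversion for component amplitudes reduces to linear least squares; the trace, rates, and noise level are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)

# Synthetic "trace": two damped exponential components plus random noise
clean = 1.0 * np.exp(-3.0 * t) + 0.5 * np.exp(-10.0 * t)
trace = clean + 0.05 * rng.standard_normal(t.size)

# Dictionary of candidate damping exponentials
# (the decomposition number here is len(rates))
rates = np.array([1.0, 3.0, 10.0, 30.0])
G = np.exp(-np.outer(t, rates))          # basis matrix, one column per component

# Least-squares inversion for the component amplitudes; the projection
# onto the few smooth components discards most of the random noise
amps, *_ = np.linalg.lstsq(G, trace, rcond=None)
denoised = G @ amps
```

The denoised trace is the projection of the data onto a small set of smooth components, which is why the incoherent noise is suppressed.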

  13. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.
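
The normalized cross-correlation objective itself is compact; a sketch (with a toy Gaussian "trace" as data) showing that it ignores amplitude scaling but penalizes a phase shift:

```python
import numpy as np

def ncc_misfit(pred, obs):
    """Normalized cross-correlation objective: 1 - <p_hat, d_hat>,
    where p_hat, d_hat are unit-normalized traces. Sensitive only to
    similarity (shape/phase), not to absolute amplitude."""
    p = pred / np.linalg.norm(pred)
    d = obs / np.linalg.norm(obs)
    return 1.0 - np.dot(p, d)

t = np.linspace(0.0, 1.0, 500)
obs = np.exp(-((t - 0.5) / 0.05) ** 2)       # toy "observed" trace

scaled = 3.7 * obs                           # wrong amplitude, right shape
shifted = np.exp(-((t - 0.55) / 0.05) ** 2)  # right amplitude, wrong phase
```

An amplitude error costs nothing under this misfit, while a time shift does, which is the property that relaxes the amplitude-matching requirement described above.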

  14. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav; Sinha, Mrinal; Schuster, Gerard T.

    2014-01-01

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.

  15. Skeletonized Least Squares Wave Equation Migration

    KAUST Repository

    Zhan, Ge

    2010-10-17

    The theory for skeletonized least squares wave equation migration (LSM) is presented. The key idea is that, for an assumed velocity model, the source-side Green's function and the geophone-side Green's function are computed by a numerical solution of the wave equation. Only the early arrivals of these Green's functions are saved and skeletonized to form the migration Green's function (MGF) by convolution. The migration image is then obtained by a dot product between the recorded shot gathers and the MGF for every trial image point. The key to an efficient implementation of iterative LSM is that at each conjugate gradient iteration, the MGF is reused and no new finite-difference (FD) simulations are needed to get the updated migration image. It is believed that this procedure, combined with phase-encoded multi-source technology, will allow for the efficient computation of wave equation LSM images in less time than that of conventional reverse time migration (RTM).

  16. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai

    2017-03-08

    We use elastic least-squares reverse time migration (LSRTM) to invert for the reflectivity images of P- and S-wave impedances. Elastic LSRTM solves the linearized elastic-wave equations for forward modeling and the adjoint equations for backpropagating the residual wavefield at each iteration. Numerical tests on synthetic data and field data reveal the advantages of elastic LSRTM over elastic reverse time migration (RTM) and acoustic LSRTM. For our examples, the elastic LSRTM images have better resolution and amplitude balancing, fewer artifacts, and less crosstalk compared with the elastic RTM images. The images are also better focused and have better reflector continuity for steeply dipping events compared to the acoustic LSRTM images. Similar to conventional least-squares migration, elastic LSRTM also requires an accurate estimation of the P- and S-wave migration velocity models. However, the problem remains that, when there are moderate errors in the velocity model and strong multiples, LSRTM will produce migration noise stronger than that seen in the RTM images.

  17. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai; Schuster, Gerard T.

    2017-01-01

    We use elastic least-squares reverse time migration (LSRTM) to invert for the reflectivity images of P- and S-wave impedances. Elastic LSRTM solves the linearized elastic-wave equations for forward modeling and the adjoint equations for backpropagating the residual wavefield at each iteration. Numerical tests on synthetic data and field data reveal the advantages of elastic LSRTM over elastic reverse time migration (RTM) and acoustic LSRTM. For our examples, the elastic LSRTM images have better resolution and amplitude balancing, fewer artifacts, and less crosstalk compared with the elastic RTM images. The images are also better focused and have better reflector continuity for steeply dipping events compared to the acoustic LSRTM images. Similar to conventional least-squares migration, elastic LSRTM also requires an accurate estimation of the P- and S-wave migration velocity models. However, the problem remains that, when there are moderate errors in the velocity model and strong multiples, LSRTM will produce migration noise stronger than that seen in the RTM images.

  18. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2017-11-15

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and we selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.

  19. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency band of the data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly deal with the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and we selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long-wavelength structure without low-frequency information in the recorded data.

  20. Effectiveness of Additive Spline for the Partial Least Squares Method in Regression Model Estimation

    Directory of Open Access Journals (Sweden)

    Ahmad Bilfarsah

    2005-04-01

    Full Text Available The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity among the predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, especially tuna fish production.

  1. Analysis of total least squares in estimating the parameters of a mortar trajectory

    Energy Technology Data Exchange (ETDEWEB)

    Lau, D.L.; Ng, L.C.

    1994-12-01

    Least Squares (LS) is a method of curve fitting used under the assumption that error exists only in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided slightly improved results, about 10%, over the LS method.
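
A compact sketch of the LS/TLS contrast on a synthetic one-parameter problem, using the classical SVD construction of the TLS solution (the mortar-trajectory data themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
n, slope = 200, 2.0
a_true = np.linspace(0.0, 10.0, n)

# TLS setting: noise in the data matrix AND in the observation vector
A = (a_true + 0.1 * rng.standard_normal(n)).reshape(-1, 1)
b = slope * a_true + 0.1 * rng.standard_normal(n)

# Ordinary least squares (error assumed only in b)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Total least squares via the SVD of the augmented matrix [A | b]:
# the solution comes from the right singular vector of the smallest
# singular value
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
x_tls = -Vt[-1, :-1] / Vt[-1, -1]
```

With errors in both `A` and `b`, the TLS estimate corrects the attenuation bias that plain LS incurs; at this noise level both land close to the true slope.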

  2. LSD enhances suggestibility in healthy volunteers.

    Science.gov (United States)

    Carhart-Harris, R L; Kaelen, M; Whalley, M G; Bolstridge, M; Feilding, A; Nutt, D J

    2015-02-01

    Lysergic acid diethylamide (LSD) has a history of use as a psychotherapeutic aid in the treatment of mood disorders and addiction, and it was also explored as an enhancer of mind control. The present study sought to test the effect of LSD on suggestibility in a modern research study. Ten healthy volunteers were administered with intravenous (i.v.) LSD (40-80 μg) in a within-subject placebo-controlled design. Suggestibility and cued mental imagery were assessed using the Creative Imagination Scale (CIS) and a mental imagery test (MIT). CIS and MIT items were split into two versions (A and B), balanced for 'efficacy' (i.e. A ≈ B) and counterbalanced across conditions (i.e. 50 % completed version 'A' under LSD). The MIT and CIS were issued 110 and 140 min, respectively, post-infusion, corresponding with the peak drug effects. Volunteers gave significantly higher ratings for the CIS (p = 0.018), but not the MIT (p = 0.11), after LSD than placebo. The magnitude of suggestibility enhancement under LSD was positively correlated with trait conscientiousness measured at baseline (p = 0.0005). These results imply that the influence of suggestion is enhanced by LSD. Enhanced suggestibility under LSD may have implications for its use as an adjunct to psychotherapy, where suggestibility plays a major role. That cued imagery was unaffected by LSD implies that suggestions must be of a sufficient duration and level of detail to be enhanced by the drug. The results also imply that individuals with high trait conscientiousness are especially sensitive to the suggestibility-enhancing effects of LSD.

  3. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

    Science.gov (United States)

    Samiei-Esfahany, Sami; Hanssen, Ramon F.

    2012-01-01

    The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS, we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

  4. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow-curves has been developed. A non-linear function describing a single glow-peak is fitted to experimental points using the least squares Levenberg-Marquardt method. The main advantage of GlowFit is in its ability to resolve complex TL glow-curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in the so-called pattern files. GlowFit is a Microsoft Windows-operated user-friendly program. Its graphic interface enables easy intuitive manipulation of glow-peaks, at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow-curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)
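
A sketch of the same fitting strategy using `scipy.optimize.curve_fit`, whose default method for unbounded problems is Levenberg-Marquardt. The single-peak expression below is the standard first-order (Randall-Wilkins type) approximation; the two-peak synthetic curve and all parameter values are invented for illustration, not taken from GlowFit:

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant in eV/K

def first_order_peak(T, Im, E, Tm):
    # First-order single glow peak: Im = peak height,
    # E = activation energy (eV), Tm = peak temperature (K)
    x = (E / (K_B * T)) * (T - Tm) / Tm
    return Im * np.exp(1.0 + x
                       - (T / Tm) ** 2 * np.exp(x) * (1.0 - 2.0 * K_B * T / E)
                       - 2.0 * K_B * Tm / E)

# Synthetic glow curve: two strongly overlapping peaks plus a little noise
rng = np.random.default_rng(7)
T = np.linspace(350.0, 550.0, 400)
y = (first_order_peak(T, 1.0, 1.1, 430.0)
     + first_order_peak(T, 0.6, 1.3, 480.0)
     + 0.005 * rng.standard_normal(T.size))

def two_peaks(T, Im1, E1, Tm1, Im2, E2, Tm2):
    return first_order_peak(T, Im1, E1, Tm1) + first_order_peak(T, Im2, E2, Tm2)

# Levenberg-Marquardt fit from user-supplied initial guesses, playing the
# role of GlowFit's pattern files
p0 = [0.8, 1.0, 425.0, 0.5, 1.2, 485.0]
popt, _ = curve_fit(two_peaks, T, y, p0=p0)
```

As in GlowFit, decent initial parameter values are what make the deconvolution of overlapping peaks tractable; a poor starting point can send the local optimizer to a wrong minimum.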

  5. Adaptive Noise Canceling Using the Least Mean Square (LMS) Algorithm

    OpenAIRE

    Nardiana, Anita; Sumaryono, Sari Sujoko

    2011-01-01

    Noise is inevitable in communication systems. In some cases, noise can disturb the signal, which is very annoying as the received signal is jumbled with the noise itself. To reduce or remove noise, lowpass, highpass, or bandpass filters can be applied, but these methods cannot reach optimal performance. One of the alternatives to solve the problem is an adaptive filter. An adaptive algorithm frequently used is the Least Mean Square (LMS) algorithm, which is compatible with Finite Impulse Response (FIR) filters. T...
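
A minimal LMS noise canceller, assuming access to a reference noise input as in the classical adaptive-noise-canceling setup; the filter length, step size, and the noise path are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
N, taps, mu = 4000, 8, 0.01
signal = np.sin(2 * np.pi * 0.01 * np.arange(N))

# Reference input: the noise source; primary input: signal + filtered noise
ref = rng.standard_normal(N)
corrupted = signal + np.convolve(ref, [0.8, -0.4], mode="full")[:N]

w = np.zeros(taps)          # adaptive FIR filter weights
cleaned = np.zeros(N)
for n in range(taps - 1, N):
    x = ref[n - taps + 1: n + 1][::-1]   # most-recent-first reference window
    e = corrupted[n] - w @ x             # error = estimate of the clean signal
    w += 2 * mu * e * x                  # LMS weight update
    cleaned[n] = e
```

The filter learns the unknown noise path, so the error signal converges toward the underlying sinusoid rather than toward zero.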

  6. Range resolution improvement in passive bistatic radars using nested FM channels and least squares approach

    Science.gov (United States)

    Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.

    2015-05-01

    One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve the radar performance has been offered as a solution to this problem. However, it degrades detection performance because of the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress the side-lobes. The two-dimensional matched filter output of a PBR is further analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results in an FM-based PBR system are presented.
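
The successive-projection step here is the classical Kaczmarz update: projecting the current iterate onto the hyperplane a_i . x = b_i. A sketch on a synthetic consistent system (the radar-specific hyperplanes are not reproduced):

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=100):
    """Cyclic projections onto the hyperplanes a_i . x = b_i.
    For a consistent system this converges because each hyperplane
    is a closed convex set (the POCS argument in the abstract)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            # Orthogonal projection of x onto {z : a_i . z = b_i}
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(9)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x_hat = kaczmarz(A, A @ x_true)
```

Each projection is cheap (one row at a time), which is what makes this attractive for large matched-filter outputs.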

  7. LSD and Genetic Damage

    Science.gov (United States)

    Dishotsky, Norman I.; And Others

    1971-01-01

    Reviews studies of the effects of lysergic acid diethylamide (LSD) on man and other organisms. Concludes that pure LSD injected in moderate doses does not cause chromosome or detectable genetic damage and is not a teratogen or carcinogen. (JM)

  8. The paradoxical psychological effects of lysergic acid diethylamide (LSD).

    Science.gov (United States)

    Carhart-Harris, R L; Kaelen, M; Bolstridge, M; Williams, T M; Williams, L T; Underwood, R; Feilding, A; Nutt, D J

    2016-05-01

    Lysergic acid diethylamide (LSD) is a potent serotonergic hallucinogen or psychedelic that modulates consciousness in a marked and novel way. This study sought to examine the acute and mid-term psychological effects of LSD in a controlled study. A total of 20 healthy volunteers participated in this within-subjects study. Participants received LSD (75 µg, intravenously) on one occasion and placebo (saline, intravenously) on another, in a balanced order, with at least 2 weeks separating sessions. Acute subjective effects were measured using the Altered States of Consciousness questionnaire and the Psychotomimetic States Inventory (PSI). A measure of optimism (the Revised Life Orientation Test), the Revised NEO Personality Inventory, and the Peter's Delusions Inventory were issued at baseline and 2 weeks after each session. LSD produced robust psychological effects; including heightened mood but also high scores on the PSI, an index of psychosis-like symptoms. Increased optimism and trait openness were observed 2 weeks after LSD (and not placebo) and there were no changes in delusional thinking. The present findings reinforce the view that psychedelics elicit psychosis-like symptoms acutely yet improve psychological wellbeing in the mid to long term. It is proposed that acute alterations in mood are secondary to a more fundamental modulation in the quality of cognition, and that increased cognitive flexibility subsequent to serotonin 2A receptor (5-HT2AR) stimulation promotes emotional lability during intoxication and leaves a residue of 'loosened cognition' in the mid to long term that is conducive to improved psychological wellbeing.

  9. The consistency of ordinary least-squares and generalized least-squares polynomial regression on characterizing the mechanomyographic amplitude versus torque relationship

    International Nuclear Information System (INIS)

    Herda, Trent J; Ryan, Eric D; Costa, Pablo B; DeFreitas, Jason M; Walter, Ashley A; Stout, Jeffrey R; Beck, Travis W; Cramer, Joel T; Housh, Terry J; Weir, Joseph P

    2009-01-01

    The primary purpose of this study was to examine the consistency of ordinary least-squares (OLS) and generalized least-squares (GLS) polynomial regression analyses utilizing linear, quadratic and cubic models on either five or ten data points that characterize the mechanomyographic amplitude (MMGRMS) versus isometric torque relationship. The secondary purpose was to examine the consistency of OLS and GLS polynomial regression utilizing only linear and quadratic models (excluding cubic responses) on either ten or five data points. Eighteen participants (mean ± SD age = 24 ± 4 yr) completed ten randomly ordered isometric step muscle actions from 5% to 95% of the maximal voluntary contraction (MVC) of the right leg extensors during three separate trials. MMGRMS was recorded from the vastus lateralis during the MVCs and each submaximal muscle action. MMGRMS versus torque relationships were analyzed on a subject-by-subject basis using OLS and GLS polynomial regression. When using ten data points, only 33% and 27% of the subjects were fitted with the same model (utilizing linear, quadratic and cubic models) across all three trials for OLS and GLS, respectively. After eliminating the cubic model, there was an increase to 55% of the subjects being fitted with the same model across all trials for both OLS and GLS regression. Using only five data points (instead of ten data points), 55% of the subjects were fitted with the same model across all trials for OLS and GLS regression. Overall, OLS and GLS polynomial regression models were only able to consistently describe the torque-related patterns of response for MMGRMS in 27–55% of the subjects across three trials. Future studies should examine alternative methods for improving the consistency and reliability of the patterns of response for the MMGRMS versus isometric torque relationship.
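
A sketch of the kind of per-subject model comparison involved, using OLS polynomial fits and the extra-sum-of-squares F statistic; the data are synthetic and the selection rule is only illustrative of how linear, quadratic, and cubic candidates are compared:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 10
torque = np.linspace(5.0, 95.0, n)        # ten step contractions, % MVC

# Synthetic amplitude-vs-torque data with a mild quadratic trend
mmg = 0.5 + 0.02 * torque + 1e-4 * torque**2 + 0.05 * rng.standard_normal(n)

def sse(deg):
    """Residual sum of squares of an OLS polynomial fit of given degree."""
    resid = mmg - np.polyval(np.polyfit(torque, mmg, deg), torque)
    return float(np.sum(resid ** 2))

def f_add_term(deg):
    """Extra-sum-of-squares F statistic for raising the order to `deg`."""
    full = sse(deg)
    return (sse(deg - 1) - full) / (full / (n - deg - 1))
```

A large F for the quadratic term but a small one for the cubic term would select the quadratic model; rerunning this on resampled trials shows how easily the selected order can change, which is the consistency problem the study quantifies.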

  10. Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting

    Science.gov (United States)

    Yan, Y. T.; Cai, Y.

    2006-03-01

    A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the Least-Square fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs, as well as all BPM gains and BPM cross-plane couplings, through Least-Square fitting of the phase advances and the local Green's functions, as well as the coupling ellipses among BPMs. The local Green's functions are specified by four local transfer matrix components: R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances, and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
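
A minimal version of the mode-selection idea: solve the least-squares step keeping only the dominant SVD modes of the derivative matrix. The matrix here is random with one deliberately near-degenerate direction, purely for illustration:

```python
import numpy as np

def svd_least_squares(A, b, n_modes):
    """Solve min ||A x - b|| keeping only the n_modes dominant SVD modes,
    discarding ill-conditioned directions that destabilize the update."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = n_modes
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 10))
A[:, -1] = A[:, 0] + 1e-8 * rng.standard_normal(50)   # near-degenerate column
x_true = np.ones(10)
b = A @ x_true

x = svd_least_squares(A, b, n_modes=9)   # drop the ill-conditioned mode
```

Dropping the weakest mode barely changes the fit to `b` but removes the direction along which the solution would blow up, which is what keeps the iterative fit convergent for a large system.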

  11. An Inverse Function Least Square Fitting Approach of the Buildup Factor for Radiation Shielding Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Je [Sejong Univ., Seoul (Korea, Republic of); Alkhatee, Sari; Roh, Gyuhong; Lee, Byungchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Dose absorption and energy absorption buildup factors are widely used in shielding analysis. The dose rate of the medium is the main concern for the dose buildup factor, whereas energy absorption is the important parameter for the energy absorption buildup factor. The ANSI/ANS-6.4.3-1991 standard data are widely used, based on interpolation and extrapolation by means of an approximation method. Recently, Yoshida's geometric progression (GP) formulae have also become popular, and they are already implemented in the QAD code. In the QAD code, the two buildup factors are denoted DOSE, for the standard air exposure response, and ENG, for the response of the energy absorbed in the material itself. In this paper, a new least-squares fitting method is suggested to obtain reliable buildup factors from the data proposed since 1991. A total of four datasets of air exposure buildup factors are used for evaluation, including the ANSI/ANS-6.4.3-1991, Taylor, Berger, and GP data. The standard deviations of the fitted data are analyzed based on the results. A new inverse least-squares fitting method is proposed in this study in order to reduce the fitting uncertainties. It adopts an inverse function rather than the original function, chosen according to the distribution slope of the dataset. Some quantitative comparisons are provided for concrete and lead. This study is focused on the least-squares fitting of existing buildup factors to be utilized in point-kernel codes for radiation shielding analysis. The inverse least-squares fitting method is suggested to obtain more reliable results for concave-shaped datasets such as concrete; in the concrete case, the variance and residual decrease significantly. The convex-shaped case of lead, however, can be handled by the usual least-squares fitting method. In the future, more datasets will be tested using least-squares fitting, and the fitted data could be implemented in existing point-kernel codes.
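
A toy demonstration of the inverse-function idea, with data constructed as the reciprocal of a quadratic so that the inverse fit recovers it exactly (real buildup-factor data would not be this clean; the shapes and ranges below are invented):

```python
import numpy as np

# "Concave" dataset: the reciprocal of a quadratic in the penetration depth
x = np.linspace(1.0, 20.0, 40)            # e.g. mean free paths
y = 1.0 / (0.5 + 0.1 * x + 0.02 * x**2)

def poly_fit_eval(u, v, deg=2):
    """Ordinary least-squares polynomial fit evaluated at the sample points."""
    return np.polyval(np.polyfit(u, v, deg), u)

direct = poly_fit_eval(x, y)               # fit y itself
inverse = 1.0 / poly_fit_eval(x, 1.0 / y)  # fit 1/y, then invert back

def max_rel_err(f):
    return float(np.max(np.abs(f - y) / y))
```

For this concave shape, fitting the inverse function linearizes the problem and the residual collapses, mirroring the concrete case in the abstract; for data that are already polynomial-like (the lead case), the direct fit suffices.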

  12. Gauss’s, Cholesky’s and Banachiewicz’s Contributions to Least Squares

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Wasniewski, Jerzy

    This paper gives a historical description of Gauss's contributions to the area of Least Squares. Also mentioned are Cholesky's and Banachiewicz's contributions to linear algebra. The material given is backup information to a tutorial given at PPAM 2011 to honor Cholesky on the hundredth anniversary of his...

  13. Seismic time-lapse imaging using Interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal

    2016-09-06

    One of the problems with 4D surveys is that the environmental conditions change over time so that the experiment is insufficiently repeatable. To mitigate this problem, we propose the use of interferometric least-squares migration (ILSM) to estimate the migration image for the baseline and monitor surveys. Here, a known reflector is used as the reference reflector for ILSM. Results with synthetic and field data show that ILSM can eliminate artifacts caused by non-repeatability in time-lapse surveys.

  14. Seismic time-lapse imaging using Interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal; Schuster, Gerard T.

    2016-01-01

    One of the problems with 4D surveys is that the environmental conditions change over time so that the experiment is insufficiently repeatable. To mitigate this problem, we propose the use of interferometric least-squares migration (ILSM) to estimate the migration image for the baseline and monitor surveys. Here, a known reflector is used as the reference reflector for ILSM. Results with synthetic and field data show that ILSM can eliminate artifacts caused by non-repeatability in time-lapse surveys.

  15. Partial least squares path modeling basic concepts, methodological issues and applications

    CERN Document Server

    Noonan, Richard

    2017-01-01

    This edited book presents the recent developments in partial least squares-path modeling (PLS-PM) and provides a comprehensive overview of the current state of the most advanced research related to PLS-PM. The first section of this book emphasizes the basic concepts and extensions of the PLS-PM method. The second section discusses the methodological issues that are the focus of the recent development of the PLS-PM method. The third part discusses the real world application of the PLS-PM method in various disciplines. The contributions from expert authors in the field of PLS focus on topics such as the factor-based PLS-PM, the perfect match between a model and a mode, quantile composite-based path modeling (QC-PM), ordinal consistent partial least squares (OrdPLSc), non-symmetrical composite-based path modeling (NSCPM), modern view for mediation analysis in PLS-PM, a multi-method approach for identifying and treating unobserved heterogeneity, multigroup analysis (PLS-MGA), the assessment of the common method b...

  16. Multivariate least-squares methods applied to the quantitative spectral analysis of multicomponent samples

    International Nuclear Information System (INIS)

    Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.

    1985-01-01

    In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure-component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure-component spectra to determine which vibrations experience nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%.
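
A sketch of the calibration/prediction structure described above, with synthetic Gaussian-band "spectra", a constant baseline playing the role of the nonzero intercept, and two components; every number below is invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n_wl, n_std = 60, 8

# Known concentrations of two components in the calibration standards
C = rng.uniform(0.1, 1.0, size=(n_std, 2))

# Synthetic pure-component spectra (Gaussian bands) and mixture spectra
wl = np.linspace(0.0, 1.0, n_wl)
K_true = np.vstack([np.exp(-((wl - 0.3) / 0.10) ** 2),
                    np.exp(-((wl - 0.6) / 0.15) ** 2)])
A = C @ K_true + 0.2                          # constant nonzero baseline
A += 0.001 * rng.standard_normal(A.shape)     # measurement noise

# Calibration: regress absorbances on [C | 1], estimating the
# pure-component spectra AND the baseline (the nonzero intercept)
X = np.column_stack([C, np.ones(n_std)])
K_hat = np.linalg.lstsq(X, A, rcond=None)[0]  # rows: comp 1, comp 2, baseline

# Prediction: least-squares concentrations for an unknown mixture
c_unknown = np.array([0.4, 0.7])
a_unknown = c_unknown @ K_true + 0.2
pred = np.linalg.lstsq(K_hat[:2].T, a_unknown - K_hat[2], rcond=None)[0]
```

All spectral channels contribute to the concentration estimate, which is the over-determination that improves precision in the abstract's method.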

  17. Chronic LSD alters gene expression profiles in the mPFC relevant to schizophrenia.

    Science.gov (United States)

    Martin, David A; Marona-Lewicka, Danuta; Nichols, David E; Nichols, Charles D

    2014-08-01

    Chronic administration of lysergic acid diethylamide (LSD) every other day to rats results in a variety of abnormal behaviors. These build over the 90 day course of treatment and can persist at full strength for at least several months after cessation of treatment. The behaviors are consistent with those observed in animal models of schizophrenia and include hyperactivity, reduced sucrose-preference, and decreased social interaction. In order to elucidate molecular changes that underlie these aberrant behaviors, we chronically treated rats with LSD and performed RNA-sequencing on the medial prefrontal cortex (mPFC), an area highly associated with both the actions of LSD and the pathophysiology of schizophrenia and other psychiatric illnesses. We observed widespread changes in the neurogenetic state of treated animals four weeks after cessation of LSD treatment. QPCR was used to validate a subset of gene expression changes observed with RNA-Seq, and confirmed a significant correlation between the two methods. Functional clustering analysis indicates differentially expressed genes are enriched in pathways involving neurotransmission (Drd2, Gabrb1), synaptic plasticity (Nr2a, Krox20), energy metabolism (Atp5d, Ndufa1) and neuropeptide signaling (Npy, Bdnf), among others. Many processes identified as altered by chronic LSD are also implicated in the pathogenesis of schizophrenia, and genes affected by LSD are enriched with putative schizophrenia genes. Our results provide a relatively comprehensive analysis of mPFC transcriptional regulation in response to chronic LSD, and indicate that the long-term effects of LSD may bear relevance to psychiatric illnesses, including schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    T. Panigrahi

    2012-01-01

    Full Text Available In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS) and block incremental least mean square (BILMS), by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out, and they are found to have performances similar to those offered by the conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
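
    A single-node sketch of the block idea (one weight update per block of L samples rather than per sample) might look as follows; the filter length, block size, step size, and noiseless data model are illustrative choices, and the distributed diffusion/incremental communication is omitted:

```python
import numpy as np

# Single-node block-LMS sketch: the weight vector is updated once per
# block of L samples instead of once per sample, the idea the BDLMS and
# BILMS algorithms extend to a network of nodes.
rng = np.random.default_rng(1)

M, L, mu = 4, 8, 0.05                  # taps, block size, step size
w_true = np.array([1.0, -0.5, 0.25, 0.1])
w = np.zeros(M)

x = rng.normal(size=4000)
for start in range(M - 1, x.size - L, L):
    grad = np.zeros(M)
    for n in range(start, start + L):
        u = x[n - M + 1:n + 1][::-1]   # regressor, most recent sample first
        d = w_true @ u                 # desired signal (noise-free here)
        grad += (d - w @ u) * u        # accumulate gradient over the block
    w = w + (mu / L) * grad            # one update per block

err = np.linalg.norm(w - w_true)
```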

  19. Linear least squares compartmental-model-independent parameter identification in PET

    International Nuclear Information System (INIS)

    Thie, J.A.; Smith, G.T.; Hubner, K.F.

    1997-01-01

    A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity and plasma integrals -- all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines the parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoidance of the risks of convergence failures or false solutions in iterative least squares, and provision of various visualizations of the uptake process by straight-line graphical displays. Multiparameter model-independent analyses of less well understood systems are also made possible.
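
    The essence of the linear macroparameter fitting can be sketched with a toy two-term model, where the activity is a linear combination of the plasma curve and its running integral; the model form and all values below are assumptions for illustration, not the paper's compartmental topologies:

```python
import numpy as np

# Toy version of the linear macroparameter idea: express the measured
# activity as a linear combination of the plasma curve and its running
# integral, so the macroparameters drop out of a multiple linear
# regression instead of iterative nonlinear fitting.
t = np.linspace(0.0, 60.0, 121)                 # minutes
Cp = 5.0 * np.exp(-0.1 * t)                     # plasma activity (synthetic)
int_Cp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))]
)                                               # trapezoidal running integral

K_true, V_true = 0.02, 0.3                      # macroparameters (assumed)
A = K_true * int_Cp + V_true * Cp               # "measured" tissue activity

# Multiple linear regression, as a spreadsheet would do it.
X = np.column_stack([int_Cp, Cp])
(K_hat, V_hat), *_ = np.linalg.lstsq(X, A, rcond=None)
```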

  20. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies

    2018-03-29

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
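
    The minimization principle can be sketched in a few lines: restrict the solution to a trial subspace V and solve a weighted least-squares problem for its coefficients. Small random matrices stand in for a parameterized discretized system, and the weights model the paper's weighting function:

```python
import numpy as np

# Sketch of the weighted least-squares principle behind LSPG: choose
# the coefficients y of a trial subspace V that minimize a weighted
# l2-norm of the residual b - A V y.
rng = np.random.default_rng(2)

n, k = 20, 4
A = rng.normal(size=(n, n)) + n * np.eye(n)     # well-conditioned stand-in
b = rng.normal(size=n)
V = np.linalg.qr(rng.normal(size=(n, k)))[0]    # orthonormal subspace basis
w = rng.uniform(0.5, 2.0, size=n)               # positive weights
W = np.diag(w)

# Weighted least-squares solve restricted to the subspace.
y_c, *_ = np.linalg.lstsq(W @ A @ V, W @ b, rcond=None)
x_lspg = V @ y_c

# At the minimizer, the weighted residual is orthogonal to range(W A V).
res = W @ (b - A @ x_lspg)
opt = np.linalg.norm((W @ A @ V).T @ res)
```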

  1. Least squares methodology applied to LWR-PV damage dosimetry, experience and expectations

    International Nuclear Information System (INIS)

    Wagschal, J.J.; Broadhead, B.L.; Maerker, R.E.

    1979-01-01

    The development of an advanced methodology for Light Water Reactor (LWR) Pressure Vessel (PV) damage dosimetry applications is the subject of an ongoing EPRI-sponsored research project at ORNL. This methodology includes a generalized least squares approach to a combination of data. The data include measured foil activations, evaluated cross sections and calculated fluxes. The uncertainties associated with the data as well as with the calculational methods are an essential component of this methodology. Activation measurements in two NBS benchmark neutron fields (252Cf and ISNF) and in a prototypic reactor field (Oak Ridge Pool Critical Assembly - PCA) are being analyzed using a generalized least squares method. The sensitivity of the results to the representation of the uncertainties (covariances) was carefully checked. Cross-element covariances were found to be of utmost importance.

  2. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-01-01

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.
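
    The constrained least-squares step can be illustrated with a generic equality-constrained problem solved through its KKT system: change a truncated set of expansion moments as little as possible while keeping the zeroth and first moments fixed. The moment values and damping target below are invented, and the method's positivity constraint on the scattering matrix is not reproduced:

```python
import numpy as np

# Equality-constrained least squares via the KKT system:
# minimize ||x - target||^2 subject to C x = d.
x0 = np.array([1.0, 0.6, 0.45, 0.31, 0.22])   # truncated moments (assumed)

# Equality constraints: moments 0 and 1 keep their current values.
C = np.zeros((2, 5))
C[0, 0] = 1.0
C[1, 1] = 1.0
d = np.array([1.0, 0.6])

# Target: damp the higher-order moments slightly.
target = x0.copy()
target[2:] *= 0.9

# KKT system: [[I, C^T], [C, 0]] [x; lambda] = [target; d]
n, m = x0.size, d.size
KKT = np.block([[np.eye(n), C.T], [C, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([target, d]))
x_new = sol[:n]
```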

  3. Least-squares reverse time migration of marine data with frequency-selection encoding

    KAUST Repository

    Dai, Wei; Huang, Yunsong; Schuster, Gerard T.

    2013-01-01

    The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share

  4. Handbook of Partial Least Squares Concepts, Methods and Applications

    CERN Document Server

    Vinzi, Vincenzo Esposito; Henseler, Jörg

    2010-01-01

    This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.

  5. On the use of a penalized least squares method to process kinematic full-field measurements

    International Nuclear Information System (INIS)

    Moulart, Raphaël; Rotinat, René

    2014-01-01

    This work is aimed at exploring the performances of an alternative procedure to smooth and differentiate full-field displacement measurements. After recalling the strategies currently used by the experimental mechanics community, a short overview of the available smoothing algorithms is drawn up and the requirements that such an algorithm has to fulfil to be applicable to process kinematic measurements are listed. A comparative study of the chosen algorithm is performed including the 2D penalized least squares method and two other commonly implemented strategies. The results obtained by penalized least squares are comparable in terms of quality to those produced by the two other algorithms, while the penalized least squares method appears to be the fastest and the most flexible. Unlike both the other considered methods, it is possible with penalized least squares to automatically choose the parameter governing the amount of smoothing to apply. Unfortunately, it appears that this automation is not suitable for the proposed application since it does not lead to optimal strain maps. Finally, it is possible with this technique to perform the derivation to obtain strain maps before smoothing them (while the smoothing is normally applied to displacement maps before the differentiation), which can lead in some cases to a more effective reconstruction of the strain fields. (paper)
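
    A minimal 1D analogue of the penalized least-squares smoother minimizes ||z - y||^2 + lam * ||D2 z||^2, where D2 is the second-difference operator; the paper's method is 2D and includes automatic selection of the smoothing parameter, both omitted in this sketch:

```python
import numpy as np

# Minimal 1D penalized least-squares (Whittaker-type) smoother.
# The normal equations give z = (I + lam * D2^T D2)^{-1} y.
def penalized_ls_smooth(y, lam):
    m = len(y)
    D = np.diff(np.eye(m), n=2, axis=0)        # (m-2) x m second differences
    return np.linalg.solve(np.eye(m) + lam * D.T @ D, y)

t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(3)
noisy = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)
smooth = penalized_ls_smooth(noisy, lam=50.0)

# Smoothing strictly reduces the second-difference (roughness) energy.
rough_before = np.sum(np.diff(noisy, 2) ** 2)
rough_after = np.sum(np.diff(smooth, 2) ** 2)
```

    Larger values of lam trade fidelity to the data for smoothness, which is exactly the parameter whose automatic choice is discussed above.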

  6. Partial Least Square with Savitzky Golay Derivative in Predicting Blood Hemoglobin Using Near Infrared Spectrum

    Directory of Open Access Journals (Sweden)

    Mohd Idrus Mohd Nazrul Effendy

    2018-01-01

    Full Text Available Near infrared spectroscopy (NIRS) is a reliable technique that is widely used in medical fields. Partial least squares was developed to predict blood hemoglobin concentration using NIRS. The aims of this paper are (i) to develop a predictive model for near infrared spectroscopic analysis in blood hemoglobin prediction, (ii) to establish the relationship between blood hemoglobin and the near infrared spectrum using a predictive model, and (iii) to evaluate the predictive accuracy of the model based on the root mean squared error (RMSE) and the coefficient of determination (rp2). Partial least squares with first-order Savitzky-Golay (SG) derivative preprocessing (PLS-SGd1) showed the better prediction performance, with RMSE = 0.7965 and rp2 = 0.9206 in K-fold cross-validation. The optimum number of latent variables (LV) and frame length (f) were 32 and 27 nm, respectively. These findings suggest that the relationship between blood hemoglobin and the near infrared spectrum is strong, and that partial least squares with the first-order SG derivative is able to predict blood hemoglobin from near infrared spectral data.
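
    The preprocessing idea can be sketched as a Savitzky-Golay first-derivative filter built from a local polynomial least-squares fit, followed by a linear least-squares calibration on the derivative spectra. Plain least squares stands in for PLS to keep the sketch short, and the window length, polynomial order, and synthetic "spectra" are all assumptions:

```python
import numpy as np

# Savitzky-Golay first derivative: fit a low-order polynomial to each
# sliding window by least squares and read off the slope at the center.
def savgol_deriv1(y, window=11, order=2):
    half = window // 2
    offsets = np.arange(-half, half + 1)
    V = np.vander(offsets, order + 1, increasing=True)   # columns 1, t, t^2
    coef = np.linalg.pinv(V)[1]        # weights giving d/dt at window center
    return np.convolve(y, coef[::-1], mode="same")

wl = np.linspace(0.0, 1.0, 120)
rng = np.random.default_rng(4)
conc = rng.uniform(5.0, 15.0, size=30)                  # "hemoglobin" levels
band = np.exp(-((wl - 0.5) ** 2) / 0.02)
spectra = conc[:, None] * band + 2.0                    # additive offset baseline

D = np.array([savgol_deriv1(s) for s in spectra])       # derivative removes offsets
Xd = np.column_stack([D, np.ones(conc.size)])
beta, *_ = np.linalg.lstsq(Xd, conc, rcond=None)
rmse = np.sqrt(np.mean((Xd @ beta - conc) ** 2))
```

    The derivative step annihilates the constant baseline, which is one reason such preprocessing improves calibration on real spectra.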

  7. Enhanced least squares Monte Carlo method for real-time decision optimizations for evolving natural hazards

    DEFF Research Database (Denmark)

    Anders, Annett; Nishijima, Kazuyoshi

    The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach takes basis in the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option...... prices. In Anders & Nishijima (2011) the LSM is adapted for a real-time operational decision problem; however it is found that further improvement is required in regard to the computational efficiency, in order to facilitate it for practice. This is the focus in the present paper. The idea behind...... the improvement of the computational efficiency is to “best utilize” the least squares method; i.e. least squares method is applied for estimating the expected utility for terminal decisions, conditional on realizations of underlying random phenomena at respective times in a parametric way. The implementation...

  8. A deconvolution technique for processing small intestinal transit data

    Energy Technology Data Exchange (ETDEWEB)

    Brinch, K. [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital, University Hospital of Copenhagen (Denmark); Larsson, H.B.W. [Danish Research Center of Magnetic Resonance, Hvidovre Hospital, University Hospital of Copenhagen (Denmark); Madsen, J.L. [Department of Clinical Physiology and Nuclear Medicine, Hvidovre Hospital, University Hospital of Copenhagen (Denmark)

    1999-03-01

    The deconvolution technique can be used to compute small intestinal impulse response curves from scintigraphic data. Previously suggested approaches, however, are sensitive to noise in the data. We investigated whether deconvolution based on a new simple iterative convolving technique can be recommended. Eight healthy volunteers ingested a meal that contained indium-111 diethylene triamine penta-acetic acid labelled water and technetium-99m stannous colloid labelled omelette. Imaging was performed at 30-min intervals until all radioactivity was located in the colon. A Fermi function, (1 + e^(-αβ))/(1 + e^((t-α)β)), was chosen to characterize the small intestinal impulse response function. By changing only two parameters, α and β, it is possible to obtain configurations from nearly a square function to nearly a monoexponential function. The small intestinal input function was obtained from the gastric emptying curve and convolved with the Fermi function. The sum of least squares was used to find the α and β yielding the best fit of the convolved curve to the observed small intestinal time-activity curve. Finally, a small intestinal mean transit time was calculated from the Fermi function referred to. In all cases, we found an excellent fit of the convolved curve to the observed small intestinal time-activity curve, that is, the Fermi function reflected the small intestinal impulse response curve. The small intestinal mean transit time of the liquid marker (median 2.02 h) was significantly shorter than that of the solid marker (median 2.99 h; P<0.02). The iterative convolving technique seems to be an attractive alternative to ordinary approaches for the processing of small intestinal transit data. (orig.) With 2 figs., 13 refs.
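
    The convolve-and-fit idea can be sketched as follows: convolve a gastric-emptying input with the two-parameter Fermi function and pick (α, β) by least squares against the small-intestinal curve. A coarse grid search stands in for the paper's fitting loop, and the input curve, time grid, and parameter ranges are synthetic:

```python
import numpy as np

# Fit the two Fermi-function parameters by least squares on the
# convolved curve, using an exhaustive grid search.
t = np.arange(0.0, 12.0, 0.5)          # hours, 30-min frames
dt = 0.5

def fermi(t, alpha, beta):
    return (1.0 + np.exp(-alpha * beta)) / (1.0 + np.exp((t - alpha) * beta))

gastric_input = 0.5 * np.exp(-0.5 * t)               # emptying rate (assumed)
alpha_true, beta_true = 2.0, 3.0
observed = np.convolve(gastric_input, fermi(t, alpha_true, beta_true))[:t.size] * dt

best = None
for alpha in np.arange(0.5, 4.01, 0.25):
    for beta in np.arange(0.5, 6.01, 0.25):
        model = np.convolve(gastric_input, fermi(t, alpha, beta))[:t.size] * dt
        sse = np.sum((observed - model) ** 2)
        if best is None or sse < best[0]:
            best = (sse, alpha, beta)

sse_best, alpha_hat, beta_hat = best
```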

  9. ANYOLS, Least Square Fit by Stepwise Regression

    International Nuclear Information System (INIS)

    Atwoods, C.L.; Mathews, S.

    1986-01-01

    Description of program or function: ANYOLS is a stepwise program which fits data using ordinary or weighted least squares. Variables are selected for the model in a stepwise way based on a user-specified input criterion or a user-written subroutine. The order in which variables are entered can be influenced by user-defined forcing priorities. Instead of stepwise selection, ANYOLS can try all possible combinations of any desired subset of the variables. Automatic output for the final model in a stepwise search includes plots of the residuals, 'studentized' residuals, and leverages; if the model is not too large, the output also includes partial regression and partial leverage plots. A data set may be re-used so that several selection criteria can be tried. Flexibility is increased by allowing the substitution of user-written subroutines for several default subroutines.

  10. Least Squares Inference on Integrated Volatility and the Relationship between Efficient Prices and Noise

    DEFF Research Database (Denmark)

    Nolte, Ingmar; Voev, Valeri

    The expected value of sums of squared intraday returns (realized variance) gives rise to a least squares regression which adapts itself to the assumptions of the noise process and allows for a joint inference on integrated volatility (IV), noise moments and price-noise relations. In the iid noise...

  11. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    Science.gov (United States)

    Sen, Symal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions made to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
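
    A small numerical illustration of the issues the algorithm targets: a system with one redundant equation and one highly contradictory one, a crude per-equation inconsistency index, and the minimum-norm least-squares solution of the pruned system via the pseudoinverse. The 3x-median threshold is an arbitrary stand-in for the paper's inconsistency index:

```python
import numpy as np

# Detect a contradictory equation by its residual at the overall
# least-squares solution, prune it, and re-solve with minimum norm.
A = np.array([[1.0, 1.0],
              [2.0, 2.0],             # redundant: twice row 0
              [1.0, -1.0],
              [1.0, 1.0]])            # same left-hand side as row 0 ...
b = np.array([2.0, 4.0, 0.0, 10.0])  # ... but a wildly different right side

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
res = np.abs(A @ x_ls - b)                       # per-equation residuals
contradictory = res > 3.0 * np.median(res)       # crude contradiction test

keep = ~contradictory
x_clean = np.linalg.pinv(A[keep]) @ b[keep]      # minimum-norm LS solution
```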

  12. A least squares calculational method: application to e±-H elastic scattering

    International Nuclear Information System (INIS)

    Das, J.N.; Chakraborty, S.

    1989-01-01

    The least squares calculational method proposed by Das has been applied to e±-H elastic scattering problems at intermediate energies. Some important conclusions are made on the basis of the calculation. (author). 7 refs., 2 tabs.

  13. Regularized Partial Least Squares with an Application to NMR Spectroscopy

    OpenAIRE

    Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletic-Savatic, Mirjana

    2012-01-01

    High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexi...

  14. Multisource Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-12-01

    Least-squares migration has been shown to be able to produce high quality migration images, but its computational cost is considered to be too high for practical imaging. In this dissertation, a multisource least-squares reverse time migration algorithm (LSRTM) is proposed to increase by up to 10 times the computational efficiency by utilizing the blended sources processing technique. There are three main chapters in this dissertation. In Chapter 2, the multisource LSRTM algorithm is implemented with random time-shift and random source polarity encoding functions. Numerical tests on the 2D HESS VTI data show that the multisource LSRTM algorithm suppresses migration artifacts, balances the amplitudes, improves image resolution, and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution, and fewer migration artifacts compared to conventional RTM. The empirical results suggest that the multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with similar or less computational cost. The caveat is that LSRTM image is sensitive to large errors in the migration velocity model. In Chapter 3, the multisource LSRTM algorithm is implemented with frequency selection encoding strategy and applied to marine streamer data, for which traditional random encoding functions are not applicable. The frequency-selection encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content. Therefore, the receivers can distinguish the wavefield from each shot according to the frequencies. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is

  15. Ordinary Least Squares and Quantile Regression: An Inquiry-Based Learning Approach to a Comparison of Regression Methods

    Science.gov (United States)

    Helmreich, James E.; Krog, K. Peter

    2018-01-01

    We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…

  16. A new finite element formulation for CFD:VIII. The Galerkin/least-squares method for advective-diffusive equations

    International Nuclear Information System (INIS)

    Hughes, T.J.R.; Hulbert, G.M.; Franca, L.P.

    1988-10-01

    Galerkin/least-squares finite element methods are presented for advective-diffusive equations. Galerkin/least-squares represents a conceptual simplification of SUPG, and is in fact applicable to a wide variety of other problem types. A convergence analysis and error estimates are presented. (author) [pt]

  17. The LSD1-Type Zinc Finger Motifs of Pisum sativa LSD1 Are a Novel Nuclear Localization Signal and Interact with Importin Alpha

    OpenAIRE

    He, Shanping; Huang, Kuowei; Zhang, Xu; Yu, Xiangchun; Huang, Ping; An, Chengcai

    2011-01-01

    BACKGROUND: Genetic studies of the Arabidopsis mutant lsd1 highlight the important role of LSD1 in the negative regulation of plant programmed cell death (PCD). Arabidopsis thaliana LSD1 (AtLSD1) contains three LSD1-type zinc finger motifs, which are involved in the protein-protein interaction. METHODOLOGY/PRINCIPAL FINDINGS: To further understand the function of LSD1, we have analyzed cellular localization and functional localization domains of Pisum sativa LSD1 (PsLSD1), which is a homolog ...

  18. A Monte Carlo Investigation of the Box-Cox Model and a Nonlinear Least Squares Alternative.

    OpenAIRE

    Showalter, Mark H

    1994-01-01

    This paper reports a Monte Carlo study of the Box-Cox model and a nonlinear least squares alternative. Key results include the following: the transformation parameter in the Box-Cox model appears to be inconsistently estimated in the presence of conditional heteroskedasticity; the constant term in both the Box-Cox and the nonlinear least squares models is poorly estimated in small samples; conditional mean forecasts tend to underestimate their true value in the Box-Cox model when the transfor...

  19. A rigid-body least-squares program with angular and translation scan facilities

    CERN Document Server

    Kutschabsky, L

    1981-01-01

    The described computer program, written in CERN Fortran, is designed to enlarge the convergence radius of the rigid-body least-squares method by allowing a stepwise change of the angular and/or translational parameters within a chosen range. (6 refs).

  20. Harmonic tidal analysis at a few stations using the least squares method

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Das, V.K.; Bahulayan, N.

    Using the least squares method, harmonic analysis has been performed on hourly water level records of 29 days at several stations depicting different types of non-tidal noise. For a tidal record at Mormugao, which was free from storm surges (low...

  1. A high order compact least-squares reconstructed discontinuous Galerkin method for the steady-state compressible flows on hybrid grids

    Science.gov (United States)

    Cheng, Jian; Zhang, Fan; Liu, Tiegang

    2018-06-01

    In this paper, a class of new high order reconstructed DG (rDG) methods based on the compact least-squares (CLS) reconstruction [23,24] is developed for simulating the two dimensional steady-state compressible flows on hybrid grids. The proposed method combines the advantages of the DG discretization with the flexibility of the compact least-squares reconstruction, which exhibits its superior potential in enhancing the level of accuracy and reducing the computational cost compared to the underlying DG methods with respect to the same number of degrees of freedom. To be specific, a third-order compact least-squares rDG(p1p2) method and a fourth-order compact least-squares rDG(p2p3) method are developed and investigated in this work. In this compact least-squares rDG method, the low order degrees of freedom are evolved through the underlying DG(p1) method and DG(p2) method, respectively, while the high order degrees of freedom are reconstructed through the compact least-squares reconstruction, in which the constitutive relations are built by requiring the reconstructed polynomial and its spatial derivatives on the target cell to conserve the cell averages and the corresponding spatial derivatives on the face-neighboring cells. The large sparse linear system resulting from the compact least-squares reconstruction can be solved relatively efficiently when it is coupled with the temporal discretization in the steady-state simulations. A number of test cases are presented to assess the performance of the high order compact least-squares rDG methods, which demonstrates their potential to be an alternative approach for the high order numerical simulations of steady-state compressible flows.

  2. Time-domain least-squares migration using the Gaussian beam summation method

    Science.gov (United States)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-04-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  3. Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression

    Energy Technology Data Exchange (ETDEWEB)

    Verdoolaege, G., E-mail: geert.verdoolaege@ugent.be [Department of Applied Physics, Ghent University, B-9000 Ghent (Belgium); Laboratory for Plasma Physics, Royal Military Academy, B-1000 Brussels (Belgium); Shabbir, A. [Department of Applied Physics, Ghent University, B-9000 Ghent (Belgium); Max Planck Institute for Plasma Physics, Boltzmannstr. 2, 85748 Garching (Germany); Hornung, G. [Department of Applied Physics, Ghent University, B-9000 Ghent (Belgium)

    2016-11-15

    Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.

  4. A Hybrid Least Square Support Vector Machine Model with Parameters Optimization for Stock Forecasting

    Directory of Open Access Journals (Sweden)

    Jian Chai

    2015-01-01

    Full Text Available This paper proposes an EMD-LSSVM (empirical mode decomposition least squares support vector machine) model to analyze the CSI 300 index. A WD-LSSVM (wavelet denoising least squares support vector machine) is also proposed as a benchmark to compare with the performance of EMD-LSSVM. Since parameter selection is vital to the performance of the model, different optimization methods are used, including simplex, GS (grid search), PSO (particle swarm optimization), and GA (genetic algorithm). Experimental results show that the EMD-LSSVM model with the GS algorithm outperforms the other methods in predicting stock market movement direction.
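
    The LS-SVM core is attractively simple: training reduces to one linear system in the bias b and the dual coefficients alpha. The RBF width and regularization gamma below are fixed illustrative choices, whereas the paper tunes such parameters with grid search, PSO, or GA:

```python
import numpy as np

# Bare-bones least squares support vector machine (LS-SVM) regression.
X = np.linspace(-2.0, 2.0, 40)[:, None]
y = np.sin(2.0 * X[:, 0])

def rbf(A, B, sigma=0.5):
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

gamma = 100.0
n = X.shape[0]
Omega = rbf(X, X)

# LS-SVM dual system: [[0, 1^T], [1, Omega + I/gamma]] [b; alpha] = [0; y]
top = np.concatenate([[0.0], np.ones(n)])
body = np.hstack([np.ones((n, 1)), Omega + np.eye(n) / gamma])
sol = np.linalg.solve(np.vstack([top, body]), np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

pred = Omega @ alpha + b            # fitted values on the training inputs
rmse = np.sqrt(np.mean((pred - y) ** 2))
```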

  5. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    Science.gov (United States)

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.

  6. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
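
    The multi-scale idea can be illustrated with kernel ridge regression on a sum of a wide and a narrow Gaussian kernel, so a low-frequency trend and a high-frequency ripple are captured together. This is plain kernel ridge with hand-picked widths, not the paper's exact sum-space algorithm or its parameter selection:

```python
import numpy as np

# Kernel ridge regression with a sum of two Gaussian kernels of
# different scales versus the wide kernel alone.
x = np.linspace(0.0, 1.0, 80)
y = x + 0.2 * np.sin(20 * np.pi * x)    # low-freq trend + high-freq ripple

def gauss_kernel(a, b, s):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * s ** 2))

K_wide = gauss_kernel(x, x, 0.5)                 # large-scale kernel
K_sum = K_wide + gauss_kernel(x, x, 0.02)        # plus small-scale kernel
lam = 1e-6

def ridge_fit(K):
    # Kernel ridge fitted values: K (K + lam I)^{-1} y
    return K @ np.linalg.solve(K + lam * np.eye(x.size), y)

rmse_wide = np.sqrt(np.mean((ridge_fit(K_wide) - y) ** 2))
rmse_sum = np.sqrt(np.mean((ridge_fit(K_sum) - y) ** 2))
```

    The wide kernel alone cannot represent the ripple, so its fit error stays near the ripple amplitude, while the sum kernel captures both components.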

  7. Prediction of toxicity of nitrobenzenes using ab initio and least squares support vector machines

    International Nuclear Information System (INIS)

    Niazi, Ali; Jameh-Bozorghi, Saeed; Nori-Shargh, Davood

    2008-01-01

    A quantitative structure-property relationship (QSPR) study is suggested for the prediction of toxicity (IGC50) of nitrobenzenes. Ab initio theory was used to calculate quantum chemical descriptors, including electrostatic potentials and local charges at each atom, HOMO and LUMO energies, etc. Modeling of the IGC50 of nitrobenzenes as a function of molecular structure was established by means of least squares support vector machines (LS-SVM). The model was applied to predict the toxicity (IGC50) of nitrobenzenes that were not included in the modeling procedure. The resulting model showed high prediction ability, with a root mean square error of prediction of 0.0049 for LS-SVM. The results show that applying LS-SVM to quantum chemical descriptors drastically enhances predictive ability in QSAR studies, outperforming multiple linear regression and partial least squares.
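
    For readers unfamiliar with LS-SVM, its training reduces to a single linear system (the standard Suykens-style formulation). The sketch below uses synthetic descriptors, not the nitrobenzene data of the record.

```python
import numpy as np

# LS-SVM regression: solve the KKT system
#   [ 0   1^T          ] [b]   [0]
#   [ 1   K + I/gamma  ] [a] = [y]
# then predict with f(x) = sum_i a_i k(x, x_i) + b.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 3))             # stand-in "descriptors"
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal(30)

gamma, sigma = 100.0, 2.0
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-D2 / (2 * sigma ** 2))           # RBF kernel matrix

n = len(y)
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
b, a = sol[0], sol[1:]
y_hat = K @ a + b                            # in-sample predictions
```

    Unlike the classical SVM, no quadratic program is needed: the equality-constrained least-squares loss turns the dual into this one dense solve.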

  8. The least weighted squares I. The asymptotic linearity of normal equations

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2002-01-01

    Roč. 9, č. 15 (2002), s. 31-58 ISSN 1212-074X R&D Projects: GA AV ČR KSK1019101 Grant - others:GA UK(CZ) 255/2002/A EK /FSV Institutional research plan: CEZ:AV0Z1075907 Keywords : the least weighted squares * robust regression * asymptotic normality and representation Subject RIV: BA - General Mathematics

  9. Least-squares Minimization Approaches to Interpret Total Magnetic Anomalies Due to Spheres

    Science.gov (United States)

    Abdelrahman, E. M.; El-Araby, T. M.; Soliman, K. S.; Essa, K. S.; Abo-Ezz, E. R.

    2007-05-01

    We have developed three different least-squares approaches to determine successively: the depth, magnetic angle, and amplitude coefficient of a buried sphere from a total magnetic anomaly. By defining the anomaly value at the origin and the nearest zero-anomaly distance from the origin on the profile, the problem of depth determination is transformed into the problem of finding a solution of a nonlinear equation of the form f(z)=0. Knowing the depth and applying the least-squares method, the magnetic angle and amplitude coefficient are determined using two simple linear equations. In this way, the depth, magnetic angle, and amplitude coefficient are determined individually from all observed total magnetic data. The method is applied to synthetic examples with and without random errors and tested on a field example from Senegal, West Africa. In all cases, the depth solutions are in good agreement with the actual ones.

  10. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
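
    The single-level building block can be sketched in a few lines: draw samples from the Chebyshev (arcsine) density on [-1, 1], a standard choice of "optimal" sampling distribution, and solve a weighted least-squares problem in a Legendre basis. The multilevel combination itself is not shown.

```python
import numpy as np

# Weighted least-squares polynomial approximation with random samples
# drawn from the arcsine density and weights proportional to 1/density.
rng = np.random.default_rng(3)
n, m = 10, 60                                # polynomial degree, sample count
x = np.cos(np.pi * rng.uniform(0, 1, m))     # samples ~ arcsine density
w = np.sqrt(1 - x ** 2)                      # weights ~ 1/density (up to a constant)
f = np.exp(x)                                # target function values

V = np.polynomial.legendre.legvander(x, n)   # Legendre design matrix
sw = np.sqrt(w)
coef = np.linalg.lstsq(sw[:, None] * V, sw * f, rcond=None)[0]

t = np.linspace(-1, 1, 200)
err = np.max(np.abs(np.polynomial.legendre.legval(t, coef) - np.exp(t)))
```

    With m only a small multiple of the space dimension n + 1, the weighted Gram matrix is well conditioned with high probability, which is the quasi-optimality property the record describes.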

  11. LSD1/KDM1 isoform LSD1+8a contributes to neural differentiation in small cell lung cancer

    Directory of Open Access Journals (Sweden)

    Takanobu Jotatsu

    2017-03-01

    Small cell lung cancer (SCLC) is an aggressive neuroendocrine tumor characterized by rapid progression. The mechanisms that lead to a shift from initial therapeutic sensitivity to ultimate therapeutic resistance are poorly understood. Although the SCLC genomic landscape led to the discovery of promising agents targeting genetic alterations that were already under investigation, results have been disappointing. No advance in targeted therapeutics has been achieved for over 30 years. Therefore, the underlying disease biology and novel targets urgently require a better understanding. Epigenetic regulation is deeply involved in the cellular plasticity that can shift tumor cells to the malignant phenotype. We have focused on a histone modifier, LSD1, which is overexpressed in SCLC and is a potent therapeutic target. Interestingly, the LSD1 splice variant LSD1+8a, whose expression has been reported to be restricted to neural tissue, was detected and was involved in the expression of neuroendocrine marker genes in SCLC cell lines. Cells with high expression of LSD1+8a were resistant to cisplatin (CDDP) and an LSD1 inhibitor. Moreover, suppression of LSD1+8a inhibited cell proliferation, indicating that LSD1+8a could play a critical role in SCLC. These findings suggest that LSD1+8a should be considered a novel therapeutic target in SCLC.

  12. Convergence of Inner-Iteration GMRES Methods for Rank-Deficient Least Squares Problems

    Czech Academy of Sciences Publication Activity Database

    Morikuni, Keiichi; Hayami, K.

    2015-01-01

    Roč. 36, č. 1 (2015), s. 225-250 ISSN 0895-4798 Institutional support: RVO:67985807 Keywords : least squares problem * iterative methods * preconditioner * inner-outer iteration * GMRES method * stationary iterative method * rank-deficient problem Subject RIV: BA - General Mathematics Impact factor: 1.883, year: 2015

  13. Least-squares approximation of an improper correlation matrix by a proper one

    NARCIS (Netherlands)

    Knol, Dirk L.; ten Berge, Jos M.F.

    1989-01-01

    An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based upon a solution for Mosier's oblique Procrustes rotation problem offered by ten Berge and Nevels. A necessary and

  14. Acute LSD effects on response inhibition neural networks.

    Science.gov (United States)

    Schmidt, A; Müller, F; Lenz, C; Dolder, P C; Schmid, Y; Zanchi, D; Lang, U E; Liechti, M E; Borgwardt, S

    2017-10-02

    Recent evidence shows that the serotonin 2A receptor (5-hydroxytryptamine2A receptor, 5-HT2AR) is critically involved in the formation of visual hallucinations and cognitive impairments in lysergic acid diethylamide (LSD)-induced states and neuropsychiatric diseases. However, the interaction between 5-HT2AR activation, cognitive impairments and visual hallucinations is still poorly understood. This study explored the effect of 5-HT2AR activation on response inhibition neural networks in healthy subjects by using LSD and further tested whether brain activation during response inhibition under LSD exposure was related to LSD-induced visual hallucinations. In a double-blind, randomized, placebo-controlled, cross-over study, LSD (100 µg) and placebo were administered to 18 healthy subjects. Response inhibition was assessed using a functional magnetic resonance imaging Go/No-Go task. LSD-induced visual hallucinations were measured using the 5 Dimensions of Altered States of Consciousness (5D-ASC) questionnaire. Relative to placebo, LSD administration impaired inhibitory performance and reduced brain activation in the right middle temporal gyrus, superior/middle/inferior frontal gyrus and anterior cingulate cortex and in the left superior frontal and postcentral gyrus and cerebellum. Parahippocampal activation during response inhibition was differently related to inhibitory performance after placebo and LSD administration. Finally, activation in the left superior frontal gyrus under LSD exposure was negatively related to LSD-induced cognitive impairments and visual imagery. Our findings show that 5-HT2AR activation by LSD leads to a hippocampal-prefrontal cortex-mediated breakdown of inhibitory processing, which might subsequently promote the formation of LSD-induced visual imageries. These findings help to better understand the neuropsychopharmacological mechanisms of visual hallucinations in LSD-induced states and neuropsychiatric disorders.

  15. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. First, the process requires inverting the usually ill-conditioned covariance matrix of the received signals. Second, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively; the linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers that use standard regularization approaches.
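
    The core ingredient, a regularized least-squares solution that tames an ill-conditioned system, can be sketched generically; the paper's regularization-parameter rule and the array-processing details are omitted here.

```python
import numpy as np

# Ridge-regularized least squares on an ill-conditioned system:
# regularization trades a small bias for a large reduction in
# noise amplification.
rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
s = np.logspace(0, -8, 50)                   # singular values over 8 decades
A = U * s                                    # ill-conditioned matrix
x_true = rng.standard_normal(50)
b = A @ x_true + 1e-4 * rng.standard_normal(50)

x_plain = np.linalg.lstsq(A, b, rcond=None)[0]
x_reg = np.linalg.solve(A.T @ A + 1e-6 * np.eye(50), A.T @ b)

err_plain = np.linalg.norm(x_plain - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

    The unregularized solution amplifies the noise by up to 1/s_min, while the ridge term caps the amplification at roughly 1/(2*sqrt(lambda)).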

  16. Crystal Structure of an LSD-Bound Human Serotonin Receptor.

    Science.gov (United States)

    Wacker, Daniel; Wang, Sheng; McCorvy, John D; Betz, Robin M; Venkatakrishnan, A J; Levit, Anat; Lansu, Katherine; Schools, Zachary L; Che, Tao; Nichols, David E; Shoichet, Brian K; Dror, Ron O; Roth, Bryan L

    2017-01-26

    The prototypical hallucinogen LSD acts via serotonin receptors, and here we describe the crystal structure of LSD in complex with the human serotonin receptor 5-HT2B. The complex reveals conformational rearrangements to accommodate LSD, providing a structural explanation for the conformational selectivity of LSD's key diethylamide moiety. LSD dissociates exceptionally slowly from both 5-HT2BR and 5-HT2AR, a major target for its psychoactivity. Molecular dynamics (MD) simulations suggest that LSD's slow binding kinetics may be due to a "lid" formed by extracellular loop 2 (EL2) at the entrance to the binding pocket. A mutation predicted to increase the mobility of this lid greatly accelerates LSD's binding kinetics and selectively dampens LSD-mediated β-arrestin2 recruitment. This study thus reveals an unexpected binding mode of LSD; illuminates key features of its kinetics, stereochemistry, and signaling; and provides a molecular explanation for LSD's actions at human serotonin receptors. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Immunoassay screening of lysergic acid diethylamide (LSD) and its confirmation by HPLC and fluorescence detection following LSD ImmunElute extraction.

    Science.gov (United States)

    Grobosch, T; Lemm-Ahlers, U

    2002-04-01

    In all, 3872 urine specimens were screened for lysergic acid diethylamide (LSD) using the CEDIA DAU LSD assay. Forty-eight samples, mainly from psychiatric patients or drug abusers, were found to be LSD positive, but only 13 (27%) of these could be confirmed by high-performance liquid chromatography with fluorescence detection (HPLC-FLD) following immunoaffinity extraction (IAE). Additional analysis for LSD using the DPC Coat-a-Count RIA was performed to compare the two immunoassay screening methods. Complete agreement between the DPC RIA assay and HPLC-FLD results was observed at concentrations below a cutoff concentration of 500 pg/mL. Samples that were LSD positive in the CEDIA DAU assay but not confirmed by HPLC-FLD were also investigated for interfering compounds using the REMEDI HS drug-profiling system. REMEDI HS analysis identified 15 compounds (parent drugs and metabolites) that are believed to cross-react in the CEDIA DAU LSD assay: ambroxol, prilocaine, pipamperone, diphenhydramine, metoclopramide, amitriptyline, doxepin, atracurium, bupivacaine, doxylamine, lidocaine, mepivacaine, promethazine, ranitidine, and tramadol. The IAE/HPLC-FLD combination is rapid, easy to perform, and reliable. It can reduce costs when standard, rather than more advanced, HPLC equipment is used, especially for labs that perform analyses for LSD infrequently. The chromatographic analysis of LSD, nor-LSD, and iso-LSD is not influenced by any of the tested cross-reacting compounds, even at a concentration of 100 ng/mL.

  18. Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA

    Czech Academy of Sciences Publication Activity Database

    Pohl, Zdeněk; Tichý, Milan; Kadlec, Jiří

    2008-01-01

    Roč. 2008, č. 2008 (2008), s. 1-11 ISSN 1687-6172 R&D Projects: GA MŠk(CZ) 1M0567 EU Projects: European Commission(XE) 027611 - AETHER Program:FP6 Institutional research plan: CEZ:AV0Z10750506 Keywords : DSP * Least-squares lattice * order estimation * exponential forgetting factor estimation * FPGA implementation * scheduling * dynamic reconfiguration * microblaze Subject RIV: IN - Informatics, Computer Science Impact factor: 1.055, year: 2008 http://library.utia.cas.cz/separaty/2008/ZS/pohl-tichy-kadlec-implementation%20of%20the%20least-squares%20lattice%20with%20order%20and%20forgetting%20factor%20estimation%20for%20fpga.pdf

  19. On Solution of Total Least Squares Problems with Multiple Right-hand Sides

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, I.; Plešinger, Martin; Strakoš, Zdeněk

    2008-01-01

    Roč. 8, č. 1 (2008), s. 10815-10816 ISSN 1617-7061 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : total least squares problem * multiple right-hand sides * linear approximation problem Subject RIV: BA - General Mathematics

  20. A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Baoguo Yu

    2016-01-01

    In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), the parameters of the radio signal propagation model must usually be determined before the distance between an anchor node and an unknown node can be estimated from their communication RSSI value; a localization algorithm is then used to estimate the location of the unknown node. This localization approach, though highly accurate, suffers from a complex working procedure and poor system versatility. To address these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least-squares criterion to estimate the parameters of the radio signal propagation model, thereby reducing the amount of computation in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the high localization accuracy requirement. In conclusion, the proposed method is of definite practical value.
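
    The self-calibration step can be illustrated with the common log-distance path-loss model, RSSI = A - 10*n*log10(d) — an assumption for this sketch, not taken from the paper. Known anchor-to-anchor distances and RSSI values then give a linear least-squares problem for A and n.

```python
import numpy as np

# Least-squares fit of path-loss parameters (A, n) from known
# anchor distances, then distance estimation for an unknown node.
rng = np.random.default_rng(5)
d = rng.uniform(1.0, 30.0, 40)               # known anchor distances (m)
A_true, n_true = -40.0, 2.5
rssi = A_true - 10 * n_true * np.log10(d) + 0.5 * rng.standard_normal(40)

X = np.column_stack([np.ones_like(d), -10 * np.log10(d)])
A_hat, n_hat = np.linalg.lstsq(X, rssi, rcond=None)[0]

def dist(r):                                 # invert the fitted model
    return 10 ** ((A_hat - r) / (10 * n_hat))

d_est = dist(rssi[0])                        # distance to one node from its RSSI
```

    Because the model is linear in A and n after taking log10 of the distance, no iterative fitting is needed, which is what keeps the calibration cheap.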

  1. LSD in pubic hair in a fatality.

    Science.gov (United States)

    Gaulier, Jean-Michel; Maublanc, Julie; Lamballais, Florence; Bargel, Sophie; Lachâtre, Gérard

    2012-05-10

    Lysergic acid diethylamide (LSD) is a potent hallucinogen that is active at very low dosage, and its determination in body fluids in a forensic context may present some difficulties, even more so in hair. A dedicated liquid chromatography-electrospray-tandem mass spectrometry (LC-ES-MS/MS) assay in hair was used to document the case of a 24-year-old man found dead after a party. Briefly, after a decontamination step, a 50 mg sample of the victim's pubic hair was cut into small pieces and analyzed for LSD. An LSD concentration of 0.66 pg/mg of pubic hair was observed. However, this result remains difficult to interpret owing to the concomitant presence of LSD in the victim's post mortem blood and urine, the lack of previously reported LSD concentrations in hair, and the absence of data about LSD incorporation and stability in pubic hair. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
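
    A plain constrained ALS loop of the kind the record builds on looks as follows, with nonnegativity enforced by clipping on synthetic rank-3 data; the patented bias-offsetting step is not reproduced.

```python
import numpy as np

# Alternating least squares for D ~ C @ S with nonnegativity
# constraints: alternately solve for S and C, clipping negatives.
rng = np.random.default_rng(6)
C_true = rng.uniform(0, 1, (40, 3))
S_true = rng.uniform(0, 1, (3, 25))
D = C_true @ S_true                          # noiseless rank-3 data

C = rng.uniform(0, 1, (40, 3))               # random initial factors
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)

rel_err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

    When the true components are nearly collinear, many factor pairs (C, S) fit D almost equally well; this non-uniqueness is the ambiguity the patent's bias-control mechanism is designed to steer.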

  3. Least-squares migration of multisource data with a deblurring filter

    KAUST Repository

    Dai, Wei; Wang, Xin; Schuster, Gerard T.

    2011-01-01

    Least-squares migration (LSM) has been shown to be able to produce high-quality migration images, but its computational cost is considered to be too high for practical imaging. We have developed a multisource least-squares migration algorithm (MLSM) to increase the computational efficiency by using the blended sources processing technique. To expedite convergence, a multisource deblurring filter is used as a preconditioner to reduce the data residual. This MLSM algorithm is applicable with Kirchhoff migration, wave-equation migration, or reverse time migration, and the gain in computational efficiency depends on the choice of migration method. Numerical results with Kirchhoff LSM on the 2D SEG/EAGE salt model show that an accurate image is obtained by migrating a supergather of 320 phase-encoded shots. When the encoding functions are the same for every iteration, the input/output cost of MLSM is reduced by 320 times. Empirical results show that the crosstalk noise introduced by blended sources is more effectively reduced when the encoding functions are changed at every iteration. The analysis of signal-to-noise ratio (S/N) suggests that not too many iterations are needed to enhance the S/N to an acceptable level. Therefore, when implemented with wave-equation migration or reverse time migration methods, the MLSM algorithm can be more efficient than the conventional migration method. © 2011 Society of Exploration Geophysicists.
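
    The contrast between conventional migration (applying the adjoint) and least-squares migration (inverting the operator) can be seen on a toy 1-D problem, with a blurring matrix standing in for the wave-equation modeling operator:

```python
import numpy as np

# Toy least-squares "migration": d = L m with a blurring operator L.
# Migration is the adjoint image L.T @ d; LSM solves min ||L m - d||.
n = 80
kernel = [(-1, 0.25), (0, 0.5), (1, 0.25)]
L = np.zeros((n, n))
for i in range(n):
    for k, c in kernel:
        L[i, (i + k) % n] = c

m_true = np.zeros(n)
m_true[[20, 45, 46]] = [1.0, 0.8, -0.6]      # reflectivity "spikes"
d = L @ m_true                               # simulated data

m_mig = L.T @ d                              # conventional migration (blurred)
m_lsm = np.linalg.lstsq(L, d, rcond=None)[0] # least-squares migration
```

    At field scale L is far too large to invert directly, which is why the paper solves the normal equations iteratively and uses the deblurring filter as a preconditioner.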

  5. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    Science.gov (United States)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms can adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean-square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve performance superior to that of the existing DRLS algorithm with fixed forgetting factor when applied to distributed parameter and spectrum estimation. The simulation results also demonstrate a good match with our analytical expressions.
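
    A single-node sketch of RLS with a variable forgetting factor driven by the a posteriori error; the adaptation rule is a generic illustration, and the diffusion/network combination step of the paper is omitted.

```python
import numpy as np

# Recursive least squares with a variable forgetting factor: forget
# faster (small lambda) when the a posteriori error is large, slower
# when it is small, to track an abrupt parameter change.
rng = np.random.default_rng(7)
w_true = np.array([1.0, -0.5, 0.3])
w = np.zeros(3)
P = 1e3 * np.eye(3)
lam = 0.99
for t in range(400):
    if t == 200:
        w_true = np.array([-0.7, 0.2, 0.9])  # abrupt parameter change
    u = rng.standard_normal(3)
    d = w_true @ u + 0.01 * rng.standard_normal()
    e = d - w @ u                            # a priori error
    k = P @ u / (lam + u @ P @ u)
    w = w + k * e
    P = (P - np.outer(k, u) @ P) / lam
    ep = d - w @ u                           # a posteriori error
    lam = min(0.999, max(0.90, 1.0 - 0.5 * ep * ep))
```

    A fixed forgetting factor forces a compromise between tracking speed and steady-state accuracy; letting the error signal set lambda is what removes that compromise.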

  6. Mitigation of defocusing by statics and near-surface velocity errors by interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal

    2015-08-19

    We propose an interferometric least-squares migration method that can significantly reduce migration artifacts due to statics and errors in the near-surface velocity model. We first choose a reference reflector whose topography is well known from, e.g., well logs. Reflections from this reference layer are correlated with the traces associated with reflections from deeper interfaces to obtain crosscorrelograms. These crosscorrelograms are then migrated using interferometric least-squares migration (ILSM). In this way, statics and velocity errors at the near surface are largely eliminated for the examples in our paper.

  7. Least-Squares Approximation of an Improper Correlation Matrix by a Proper One.

    Science.gov (United States)

    Knol, Dirk L.; ten Berge, Jos M. F.

    1989-01-01

    An algorithm, based on a solution for C. I. Mosier's oblique Procrustes rotation problem, is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. Results are of interest for missing value and tetrachoric correlation, indefinite matrix correlation, and constrained…
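
    For comparison, the same repair can be carried out with Higham's alternating-projections method (a different algorithm from the Procrustes-based one in the record), which projects alternately onto the positive semidefinite cone and the unit-diagonal set:

```python
import numpy as np

# Nearest correlation matrix by alternating projections with a
# Dykstra correction (dS), starting from an indefinite "improper"
# correlation matrix.
def nearest_corr(A, iters=200):
    Y, dS = A.copy(), np.zeros_like(A)
    for _ in range(iters):
        R = Y - dS
        vals, vecs = np.linalg.eigh(R)
        Xp = (vecs * np.clip(vals, 0, None)) @ vecs.T  # PSD projection
        dS = Xp - R
        Y = Xp.copy()
        np.fill_diagonal(Y, 1.0)                       # unit-diagonal projection
    return Y

A = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, 0.3],
              [0.7, 0.3, 1.0]])                        # indefinite: not a valid correlation matrix
C = nearest_corr(A)
```

    The input matrix here has a negative eigenvalue (its determinant is negative), so it cannot be a correlation matrix; the output is the closest matrix in the Frobenius norm that is.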

  8. An improved partial least-squares regression method for Raman spectroscopy

    Science.gov (United States)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS with a novel selection mechanism. The proposed method sorts the weighted regression coefficients, and the importance of each variable in the sorted list is then evaluated using the root mean square error of prediction (RMSEP) criterion at each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed similar or better performance compared to the genetic algorithm.

  9. Partial Least Squares tutorial for analyzing neuroimaging data

    Directory of Open Access Journals (Sweden)

    Patricia Van Roon

    2014-09-01

    Partial least squares (PLS) has become a respected and meaningful soft-modeling analysis technique that can be applied to very large datasets where the number of factors or variables is greater than the number of observations. Current biometric studies (e.g., eye movements, EKG, body movements, EEG) are often of this nature. PLS eliminates the over-fitting issues of multiple linear regression by finding a few underlying or latent variables (factors) that account for most of the variation in the data. In real-world applications, where linear models do not always apply, PLS can model non-linear relationships well. This tutorial introduces two PLS methods, PLS Correlation (PLSC) and PLS Regression (PLSR), and their applications in data analysis, illustrated with neuroimaging examples. Both methods provide straightforward and comprehensible techniques for determining and modeling relationships between two multivariate data blocks by finding latent variables that best describe the relationships. In the examples, PLSC analyzes the relationship between neuroimaging data, such as Event-Related Potential (ERP) amplitude averages from different locations on the scalp, and the corresponding behavioural data. Using the same data, PLSR is used to model the relationship between neuroimaging and behavioural data; this model can predict future behaviour solely from available neuroimaging data. To find latent variables, Singular Value Decomposition (SVD) for PLSC and Non-linear Iterative PArtial Least Squares (NIPALS) for PLSR are implemented in this tutorial. SVD decomposes the large data block into three manageable matrices containing a diagonal set of singular values, as well as left and right singular vectors. For PLSR, NIPALS algorithms are used because they provide a more precise estimation of the latent variables. Mathematica notebooks are provided for each PLS method, with clearly labeled sections and subsections.
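
    The PLSC computation amounts to an SVD of the cross-covariance between the two blocks; a minimal sketch on synthetic stand-ins for the neuroimaging and behavioural blocks:

```python
import numpy as np

# PLS Correlation: SVD of the (column-centred) brain-by-behaviour
# cross-covariance yields paired latent variables.
rng = np.random.default_rng(8)
n = 50
latent = rng.standard_normal(n)              # shared latent factor
X = np.outer(latent, rng.standard_normal(10)) + 0.1 * rng.standard_normal((n, 10))
Y = np.outer(latent, rng.standard_normal(4)) + 0.1 * rng.standard_normal((n, 4))

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
lx, ly = Xc @ U[:, 0], Yc @ Vt[0]            # first pair of latent scores

r = np.corrcoef(lx, ly)[0, 1]                # correlation of the latent pair
```

    The singular values s rank the latent-variable pairs by the amount of cross-block covariance each explains, which is how the tutorial's saliences are ordered.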

  10. Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem

    Directory of Open Access Journals (Sweden)

    Baiyu Wang

    2014-01-01

    This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.
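
    The forward approximation the method relies on can be sketched in 1-D: at each evaluation point, fit a local linear polynomial under Gaussian weights centred there. The inverse-source-term machinery of the paper is not shown.

```python
import numpy as np

# 1-D moving least squares: a weighted local linear fit whose
# weights move with the evaluation point t.
def mls(t_eval, x, f, h=0.1):
    out = np.empty_like(t_eval)
    for j, t in enumerate(t_eval):
        w = np.sqrt(np.exp(-((x - t) / h) ** 2))
        B = np.column_stack([np.ones_like(x), x - t])
        coef = np.linalg.lstsq(w[:, None] * B, w * f, rcond=None)[0]
        out[j] = coef[0]                     # local fit evaluated at t
    return out

x = np.linspace(0, 1, 40)                    # scattered sample sites
f = np.sin(np.pi * x)                        # sampled function values
t = np.linspace(0, 1, 21)
err = np.max(np.abs(mls(t, x, f) - np.sin(np.pi * t)))
```

    Unlike a global polynomial fit, the approximation is local and mesh-free, which is what makes it attractive for the collocation scheme in the record.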

  11. Simultaneous determination of LSD and 2-oxo-3-hydroxy LSD in hair and urine by LC-MS/MS and its application to forensic cases.

    Science.gov (United States)

    Jang, Moonhee; Kim, Jihyun; Han, Inhoi; Yang, Wonkyung

    2015-11-10

    Lysergic acid diethylamide (LSD) is administered in low dosages, which makes its detection in biological matrices a major challenge in forensic toxicology. In this study, two sensitive and reliable methods based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) were established and validated for the simultaneous determination of LSD and its metabolite, 2-oxo-3-hydroxy-LSD (O-H-LSD), in hair and urine. Target analytes in hair were extracted using methanol at 38°C for 15 h and analyzed by LC-MS/MS. For urine sample preparation, liquid-liquid extraction was performed. Limits of detection (LODs) in hair were 0.25 pg/mg for LSD and 0.5 pg/mg for O-H-LSD. In urine, LODs were 0.01 and 0.025 ng/ml for LSD and O-H-LSD, respectively. Method validation results showed good linearity and acceptable precision and accuracy. The developed methods were applied to authentic specimens from two legal cases of LSD ingestion, and allowed identification and quantification of LSD and O-H-LSD in the specimens. In the two cases, LSD concentrations in hair were 1.27 and 0.95 pg/mg; O-H-LSD was detected in one case, but its concentration was below the limit of quantification. In urine samples collected from the two suspects 8 and 3 h after ingestion, LSD concentrations were 0.48 and 2.70 ng/ml, respectively, while O-H-LSD concentrations were 4.19 and 25.2 ng/ml, respectively. These methods can be used for documenting LSD intake in clinical and forensic settings. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Space-time coupled spectral/hp least-squares finite element formulation for the incompressible Navier-Stokes equations

    International Nuclear Information System (INIS)

    Pontaza, J.P.; Reddy, J.N.

    2004-01-01

    We consider least-squares finite element models for the numerical solution of the non-stationary Navier-Stokes equations governing viscous incompressible fluid flows. The paper presents a formulation where the effects of space and time are coupled, resulting in a true space-time least-squares minimization procedure, as opposed to a space-time decoupled formulation where a least-squares minimization procedure is performed in space at each time step. The formulation is first presented for the linear advection-diffusion equation and then extended to the Navier-Stokes equations. The formulation has no time step stability restrictions and is spectrally accurate in both space and time. To allow the use of practical C0 element expansions in the resulting finite element model, the Navier-Stokes equations are expressed as an equivalent set of first-order equations by introducing vorticity as an additional independent variable and the least-squares method is used to develop the finite element model of the governing equations. High-order element expansions are used to construct the discrete model. The discrete model thus obtained is linearized by Newton's method, resulting in a linear system of equations with a symmetric positive definite coefficient matrix that is solved in a fully coupled manner by a preconditioned conjugate gradient method in matrix-free form. Spectral convergence of the L2 least-squares functional and L2 error norms in space-time is verified using a smooth solution to the two-dimensional non-stationary incompressible Navier-Stokes equations. Numerical results are presented for impulsively started lid-driven cavity flow, oscillatory lid-driven cavity flow, transient flow over a backward-facing step, and flow around a circular cylinder; the results demonstrate the predictive capability and robustness of the proposed formulation. 
Even though the space-time coupled formulation is emphasized, we also present the formulation and numerical results for least-squares

  13. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, while pointing out a gap between the condition necessary to achieve optimality in theory and the condition that in practice yields the optimal convergence rate.

  14. Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration

    Science.gov (United States)

    Wu, Juan; Bai, Min

    2018-05-01

    We propose to apply a novel incoherent dictionary learning (IDL) algorithm for regularizing the least-squares inversion in seismic imaging. IDL is designed to overcome the drawback of traditional dictionary learning algorithms, which lose partial texture information. First, the noisy image is divided into overlapping image patches, and some random patches are extracted for dictionary learning. Then, we apply IDL to minimize the coherence between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from the sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to the regularization of seismic images from least-squares reverse time migration shows successful performance.

  15. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-12-19

    This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. Jointly, the proposed approach enjoys the lowest run-time and offers the highest level of robustness amongst all the tested methods.
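
    The paper's perturbation-based parameter selection is not reproduced here, but the underlying regularized least-squares machinery can be sketched as follows (Tikhonov form; the Hilbert-like test matrix, noise level and lambda values are assumptions):

```python
# Sketch of Tikhonov-regularized least squares for a discrete ill-posed
# problem; the Hilbert-like test matrix, noise level and lambda values are
# assumptions (the paper's MSE-based parameter selection is not reproduced).
import numpy as np

def tikhonov_solution(A, b, lam):
    """x = argmin ||Ax - b||^2 + lam ||x||^2, via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
n = 8
# Hilbert-like matrix: rapidly decaying singular values (ill-posed).
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
b = A @ np.ones(n) + 1e-6 * rng.standard_normal(n)

# Increasing lam shrinks the solution norm and damps noise amplification;
# choosing lam well is exactly what the proposed approach automates.
sol_norms = [np.linalg.norm(tikhonov_solution(A, b, lam))
             for lam in (1e-8, 1e-4, 1e-1)]
```

    The regularization parameter trades data fit against solution size; the abstract's contribution is an automatic, MSE-motivated choice of that parameter.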

  16. Small-kernel, constrained least-squares restoration of sampled image data

    Science.gov (United States)

    Hazra, Rajeeb; Park, Stephen K.

    1992-01-01

    Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
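
    For orientation, here is a sketch of the textbook discrete/discrete CLS filter that the paper generalizes; the continuous/discrete/continuous extension and the small spatial-domain kernel are not reproduced, and the 1-D test signal, box PSF and gamma are assumptions.

```python
# Textbook discrete/discrete constrained least-squares (CLS) restoration
# filter in the frequency domain; the paper's continuous/discrete/continuous
# extension and small-kernel implementation are not reproduced.
import numpy as np

def cls_restore(blurred, psf, gamma=1e-3):
    """F = H* G / (|H|^2 + gamma |C|^2), with C a discrete Laplacian
    acting as the smoothness constraint."""
    n = len(blurred)
    H = np.fft.fft(psf, n)                    # zero-padded PSF spectrum
    lap = np.zeros(n)
    lap[0], lap[1], lap[-1] = -2.0, 1.0, 1.0  # circular discrete Laplacian
    C = np.fft.fft(lap)
    G = np.fft.fft(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
    return np.real(np.fft.ifft(F))

n = 64
signal = np.zeros(n)
signal[20:30] = 1.0                           # rectangular test pulse
psf = np.ones(5) / 5.0                        # 5-point box blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, n)))
restored = cls_restore(blurred, psf)
```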

  17. Error analysis of some Galerkin - least squares methods for the elasticity equations

    International Nuclear Information System (INIS)

    Franca, L.P.; Stenberg, R.

    1989-05-01

    We consider the recent technique of stabilizing mixed finite element methods by augmenting the Galerkin formulation with least squares terms calculated separately on each element. The error analysis is performed in a unified manner, yielding improved results for some methods introduced earlier. In addition, a new formulation is introduced and analyzed.

  18. COMPARISON OF PARTIAL LEAST SQUARES REGRESSION METHOD ALGORITHMS: NIPALS AND PLS-KERNEL AND AN APPLICATION

    Directory of Open Access Journals (Sweden)

    ELİF BULUT

    2013-06-01

    Full Text Available Partial Least Squares Regression (PLSR) is a multivariate statistical method that combines partial least squares and multiple linear regression analysis. Explanatory variables, X, exhibiting multicollinearity are reduced to components which explain a large amount of the covariance between the explanatory and response variables. These components are few in number and do not suffer from the multicollinearity problem. Multiple linear regression analysis is then applied to those components to model the response variable Y. There are various PLSR algorithms. In this study, the NIPALS and PLS-Kernel algorithms are studied and illustrated on a real data set.
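
    A minimal single-response NIPALS sketch, assuming mean-centered data; the study's real data set and the PLS-Kernel variant are not reproduced here.

```python
# Minimal PLS1 via NIPALS for a single response, assuming mean-centered
# data; the study's real data set and the PLS-Kernel variant are not
# reproduced here.
import numpy as np

def pls1_nipals(X, y, n_components):
    """Return regression coefficients of PLS1 fitted by NIPALS."""
    Xk, yk = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                  # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xk @ w                     # score vector
        tt = t @ t
        p = Xk.T @ t / tt              # X loading
        q = (yk @ t) / tt              # y loading
        Xk = Xk - np.outer(t, p)       # deflate X
        yk = yk - q * t                # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # Coefficients expressed in terms of the original X.
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 5))
X -= X.mean(axis=0)
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta                           # noiseless linear response
B = pls1_nipals(X, y, n_components=5)  # full rank: recovers beta
```

    With fewer components than variables, the same routine gives the multicollinearity-resistant fits the abstract describes; with all components it reduces to ordinary least squares.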

  19. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    Full Text Available The discrete Fourier transform (DFT)-based maximum likelihood (ML) algorithm is an important part of single sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above the threshold value, it will lie very close to the Cramer-Rao lower bound (CRLB), which depends on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) not only retains excellent generalization and fitting capabilities but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate the Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm makes a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.

  20. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which...... is proposed by Longstaff and Schwartz (2001) for pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem...

  1. Least-squares resolution of gamma-ray spectra in environmental samples

    International Nuclear Information System (INIS)

    Kanipe, L.G.; Seale, S.K.; Liggett, W.S.

    1977-08-01

    The use of ALPHA-M, a least squares computer program for analyzing NaI(Tl) gamma spectra of environmental samples, is evaluated. Included is a comprehensive set of program instructions, listings, and flowcharts. Two other programs, GEN4 and SIMSPEC, are also described. GEN4 is used to create standard libraries for ALPHA-M, and SIMSPEC is used to simulate spectra for ALPHA-M analysis. Tests to evaluate the standard libraries selected for use in analyzing environmental samples are provided. An evaluation of the results of sample analyses is discussed.

  2. Canonical Least-Squares Monte Carlo Valuation of American Options: Convergence and Empirical Pricing Analysis

    Directory of Open Access Journals (Sweden)

    Xisheng Yu

    2014-01-01

    Full Text Available The paper by Liu (2010) introduces a method termed the canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide the convergence results of CLM and numerically examine the convergence properties. Then, a comparative analysis is empirically conducted using a large sample of S&P 100 Index (OEX) puts and IBM puts. The results on convergence show that choosing the shifted Legendre polynomials with four regressors is more appropriate considering the pricing accuracy and the computational cost. With this choice, the CLM method is empirically demonstrated to be superior to the benchmark methods of binomial tree and finite difference with historical volatilities.
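
    The least-squares Monte Carlo backbone that CLM builds on can be sketched as plain Longstaff-Schwartz with an ordinary quadratic regression basis; the entropy model and shifted-Legendre details of CLM are not reproduced, and the market parameters below are standard textbook assumptions.

```python
# Plain Longstaff-Schwartz least-squares Monte Carlo for an American put,
# with an ordinary quadratic regression basis; the CLM entropy model and
# shifted-Legendre basis of the paper are not reproduced.
import numpy as np

def lsmc_american_put(S0, K, r, sigma, T, n_steps, n_paths, seed=3):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((n_paths, n_steps))
    # Geometric Brownian motion paths (risk-neutral drift).
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)     # value at maturity
    for t in range(n_steps - 2, -1, -1):
        payoff *= disc                          # discount one step back
        itm = K - S[:, t] > 0                   # in-the-money paths only
        if itm.sum() > 3:
            # Regress continuation value on the basis {1, S, S^2}.
            coef = np.polyfit(S[itm, t], payoff[itm], 2)
            cont = np.polyval(coef, S[itm, t])
            exercise = np.maximum(K - S[itm, t], 0.0)
            ex_now = exercise > cont
            payoff[np.where(itm)[0][ex_now]] = exercise[ex_now]
    return disc * payoff.mean()

price = lsmc_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2,
                          T=1.0, n_steps=50, n_paths=20000)
```

    With these parameters the estimate should land near the well-known Longstaff-Schwartz benchmark of about 4.47 for this put.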

  3. Weighted least-square approach for simultaneous measurement of multiple reflective surfaces

    Science.gov (United States)

    Tang, Shouhong; Bills, Richard E.; Freischlad, Klaus

    2007-09-01

    Phase shifting interferometry (PSI) is a highly accurate method for measuring the nanometer-scale relative surface height of a semi-reflective test surface. PSI is effectively used in conjunction with Fizeau interferometers for optical testing, hard disk inspection, and semiconductor wafer flatness. However, commonly used PSI algorithms are unable to produce an accurate phase measurement if more than one reflective surface is present in the Fizeau interferometer test cavity. Examples of test parts that fall into this category include lithography mask blanks and their protective pellicles, and plane parallel optical beam splitters. The plane parallel surfaces of these parts generate multiple interferograms that are superimposed in the recording plane of the Fizeau interferometer. When using wavelength shifting in PSI, the phase shifting speed of each interferogram is proportional to the optical path difference (OPD) between the two reflective surfaces. The proposed method is able to differentiate each underlying interferogram from the others in an optimal manner. In this paper, we present a method for simultaneously measuring the multiple test surfaces of all underlying interferograms from these superimposed interferograms through the use of a weighted least-squares fitting technique. The theoretical analysis of the weighted least-squares technique and the measurement results are described.

  4. Data-adapted moving least squares method for 3-D image interpolation

    International Nuclear Information System (INIS)

    Jang, Sumi; Lee, Yeon Ju; Jeong, Byeongseon; Nam, Haewon; Lee, Rena; Yoon, Jungho

    2013-01-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve on the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998): each slice is first assumed to be unknown and then interpolated by each method, and the performance of each interpolation method is assessed statistically. The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
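
    A one-dimensional moving least squares sketch conveys the core idea; the paper's data-adapted, three-dimensional variant is not reproduced, and the Gaussian weight and its scale h are assumptions.

```python
# One-dimensional moving least squares sketch; the paper's 3-D,
# data-adapted variant is not reproduced. The Gaussian weight and its
# scale h are assumptions.
import numpy as np

def mls_eval(x_eval, x_data, y_data, degree=2, h=0.2):
    """At each evaluation point, fit a local polynomial by weighted
    least squares (Gaussian weights of scale h) and evaluate it there."""
    out = np.empty_like(x_eval)
    for i, x0 in enumerate(x_eval):
        w = np.exp(-((x_data - x0) / h) ** 2)
        V = np.vander(x_data - x0, degree + 1)   # basis centered at x0
        Vw = V * w[:, None]
        coef = np.linalg.solve(V.T @ Vw, Vw.T @ y_data)
        out[i] = coef[-1]                        # constant term = value at x0
    return out

x = np.linspace(0.0, 1.0, 11)                    # coarse "slice" positions
y = np.sin(2 * np.pi * x)
xq = np.linspace(0.0, 1.0, 101)                  # fine interpolation grid
yq = mls_eval(xq, x, y)
```

    By construction the scheme reproduces polynomials up to the chosen degree exactly, which is the property the local best-approximation argument above relies on.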

  5. Influence of the least-squares phase on optical vortices in strongly scintillated beams

    CSIR Research Space (South Africa)

    Chen, M

    2009-06-01

    Full Text Available , the average total number of vortices is reduced further. However, the reduction becomes smaller for each successive step. This indicates that the ability to get rid of optical vortices by removing the least-squares phase becomes progressively less...

  6. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  7. Expectile smoothing: new perspectives on asymmetric least squares. An application to life expectancy

    NARCIS (Netherlands)

    Schnabel, S.K.

    2011-01-01

    While initially motivated by a demographic application, this thesis develops methodology for expectile estimation. To this end, the basic model for expectile curves using least asymmetrically weighted squares (LAWS) is first introduced, as well as methods for smoothing in this context. The simple

  8. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Directory of Open Access Journals (Sweden)

    Weiqiang Pan

    2015-03-01

    Full Text Available In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
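
    A rough sketch of the data-aided idea, with a grid search standing in for the paper's iterative outer loop while the inner least-squares step solves for the channel gain in closed form; the pilot waveform, gain, Doppler value and noise level are assumptions.

```python
# Rough sketch of the data-aided least-squares idea: a grid search over the
# Doppler scaling stands in for the paper's iterative outer loop, while the
# inner least-squares step solves for the channel gain in closed form.
import numpy as np

def doppler_ls(r, pilot, n, deltas):
    """Return the (delta, gain) pair minimizing ||r - a * s_delta||^2."""
    t = np.arange(n)
    best_res, best_d, best_a = np.inf, None, None
    for d in deltas:
        # Outer step: resample the pilot under candidate Doppler scaling d.
        s = np.interp(t * (1.0 + d), np.arange(len(pilot)), pilot)
        a = (s @ r) / (s @ s)              # inner LS step: channel gain
        res = np.sum((r - a * s) ** 2)
        if res < best_res:
            best_res, best_d, best_a = res, d, a
    return best_d, best_a

rng = np.random.default_rng(6)
n = 1000
pilot = np.cos(2 * np.pi * 0.05 * np.arange(1100))
rx = 0.8 * np.cos(2 * np.pi * 0.05 * np.arange(n) * (1.0 + 1e-3))
rx += 0.01 * rng.standard_normal(n)        # flat channel, gain 0.8, mild noise
d_hat, a_hat = doppler_ls(rx, pilot, n, np.linspace(0.0, 2e-3, 81))
```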

  9. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.

  10. Joint 2D-DOA and Frequency Estimation for L-Shaped Array Using Iterative Least Squares Method

    Directory of Open Access Journals (Sweden)

    Ling-yun Xu

    2012-01-01

    Full Text Available We introduce an iterative least squares method (ILS) for estimating the 2D-DOA and frequency based on an L-shaped array. The ILS iteratively finds the direction matrix and the delay matrix; the 2D-DOA and frequency can then be obtained by the least squares method. Without spectral peak searching and pairing, this algorithm works well and pairs the parameters automatically. Moreover, our algorithm has better performance than the conventional ESPRIT algorithm and the propagator method. The useful behavior of the proposed algorithm is verified by simulations.

  11. Response of cluster headache to psilocybin and LSD.

    Science.gov (United States)

    Sewell, R Andrew; Halpern, John H; Pope, Harrison G

    2006-06-27

    The authors interviewed 53 cluster headache patients who had used psilocybin or lysergic acid diethylamide (LSD) to treat their condition. Twenty-two of 26 psilocybin users reported that psilocybin aborted attacks; 25 of 48 psilocybin users and 7 of 8 LSD users reported cluster period termination; 18 of 19 psilocybin users and 4 of 5 LSD users reported remission period extension. Research on the effects of psilocybin and LSD on cluster headache may be warranted.

  12. Least median of squares filtering of locally optimal point matches for compressible flow image registration

    International Nuclear Information System (INIS)

    Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert-determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration.

  13. Ordinary least square regression, orthogonal regression, geometric mean regression and their applications in aerosol science

    International Nuclear Information System (INIS)

    Leng Ling; Zhang Tianyi; Kleinman, Lawrence; Zhu Wei

    2007-01-01

    Regression analysis, especially the ordinary least squares method which assumes that errors are confined to the dependent variable, has seen a fair share of its applications in aerosol science. The ordinary least squares approach, however, could be problematic due to the fact that atmospheric data often does not lend itself to calling one variable independent and the other dependent. Errors often exist for both measurements. In this work, we examine two regression approaches available to accommodate this situation. They are orthogonal regression and geometric mean regression. Comparisons are made theoretically as well as numerically through an aerosol study examining whether the ratio of organic aerosol to CO would change with age.
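
    The three slope estimators compared in the abstract can be sketched directly; with noise in both variables, OLS is attenuated toward zero while geometric mean and orthogonal regression are not. The simulated errors-in-variables data (true slope 2, equal noise in x and y) are an assumption.

```python
# OLS, geometric mean and orthogonal (total least squares) slopes on
# simulated errors-in-variables data; the data-generating model is an
# assumption made for illustration.
import numpy as np

def ols_slope(x, y):
    """Slope minimizing vertical errors (y regressed on x)."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def gmr_slope(x, y):
    """Geometric mean regression: sign(cov) * sqrt(var(y) / var(x))."""
    return np.sign(np.cov(x, y, bias=True)[0, 1]) * np.sqrt(np.var(y) / np.var(x))

def orthogonal_slope(x, y):
    """Orthogonal regression slope from the principal eigenvector of
    the sample covariance matrix."""
    evals, evecs = np.linalg.eigh(np.cov(x, y, bias=True))
    v = evecs[:, -1]                     # direction of largest variance
    return v[1] / v[0]

rng = np.random.default_rng(4)
t = rng.standard_normal(2000)
x = t + 0.3 * rng.standard_normal(2000)  # both variables carry error
y = 2.0 * t + 0.3 * rng.standard_normal(2000)
```

    On this data the OLS slope falls noticeably below 2, while the orthogonal slope stays near 2, illustrating why the choice of regression matters when neither variable is error-free.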

  14. Least median of squares and iteratively re-weighted least squares as robust linear regression methods for fluorimetric determination of α-lipoic acid in capsules in ideal and non-ideal cases of linearity.

    Science.gov (United States)

    Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F

    2018-03-26

    This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
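
    A generic IRLS sketch with L1-style reweighting illustrates the robustness the authors exploit; the fluorimetric calibration data and the paper's exact LMS/IRLS weight functions are not reproduced, and the toy line and outlier are assumptions.

```python
# Generic IRLS sketch with L1-style reweighting on a toy calibration line
# containing one gross outlier; the paper's data and exact weight
# functions are not reproduced.
import numpy as np

def irls(X, y, n_iter=100, delta=1e-6):
    """Iteratively re-weighted least squares with weights
    w_i = 1 / max(|r_i|, delta), i.e. an L1-type robust line fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), delta)      # down-weight outliers
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0                  # ideal calibration line
y[7] += 30.0                       # one gross outlier
X = np.column_stack([np.ones_like(x), x])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)   # dragged by the outlier
b_irls = irls(X, y)                              # stays near (1, 2)
```

    The outlier visibly corrupts the OLS slope and intercept, while the reweighted fit essentially recovers the uncontaminated line, mirroring the improvement in regression parameters the abstract reports.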

  15. Least Squares Inference on Integrated Volatility and the Relationship between Efficient Prices and Noise

    OpenAIRE

    Nolte, Ingmar; Voev, Valeri

    2009-01-01

    The expected value of sums of squared intraday returns (realized variance) gives rise to a least squares regression which adapts itself to the assumptions of the noise process and allows for a joint inference on integrated volatility (IV), noise moments and price-noise relations. In the iid noise case we derive the asymptotic variance of the regression parameter estimating the IV, show that it is consistent and compare its asymptotic efficiency against alternative consistent IV measures. In case of...

  16. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    Science.gov (United States)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weights should be given to those classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weights are given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  17. Non-linear HVAC computations using least square support vector machines

    International Nuclear Information System (INIS)

    Kumar, Mahendra; Kar, I.N.

    2009-01-01

    This paper aims to demonstrate the application of least squares support vector machines (LS-SVM) to model two complex heating, ventilating and air-conditioning (HVAC) relationships. The two applications considered are the estimation of the predicted mean vote (PMV) for thermal comfort and the generation of the psychrometric chart. LS-SVM has the potential for quick, exact representations and also possesses a structure that facilitates hardware implementation. The results show very good agreement between function values computed from the conventional models and the LS-SVM models in real time. The robustness of the LS-SVM models against input noise has also been analyzed.
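
    In the LS-SVM formulation, training reduces to a single linear system rather than a quadratic program. A minimal regression sketch with an RBF kernel follows, fitting a smooth nonlinear toy response as a stand-in for a PMV-like relation; the kernel width, gamma and the data are assumptions.

```python
# Minimal LS-SVM regression sketch (Suykens-style formulation): training
# reduces to one linear system instead of a QP. Kernel width, gamma and
# the tanh toy response are assumptions.
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] and
    return an RBF-kernel predictor."""
    n = len(X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma             # ridge term from gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    def predict(Xq):
        d2q = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2q / (2.0 * sigma ** 2)) @ alpha + b
    return predict

X = np.linspace(-3.0, 3.0, 40)[:, None]
y = np.tanh(X[:, 0])               # smooth nonlinear toy response
predict = lssvm_fit(X, y)
```

    The single dense solve is what gives LS-SVM the quick, hardware-friendly structure the abstract highlights.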

  18. Single Directional SMO Algorithm for Least Squares Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Xigao Shao

    2013-01-01

    Full Text Available Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of the working set in sequential minimal optimization (SMO)-type decomposition methods is proposed. With the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the classification accuracy of the new method is not largely different from that of existing methods, but the training speed is faster.

  19. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach

    International Nuclear Information System (INIS)

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2013-01-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. - Highlights: ► Water quality data spans short time scales leading to significant model uncertainty. ► Assessment of uncertainty essential for informed decision making in water
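
    The Bayesian and Monte Carlo layers of the study are not reproduced here, but the weighted least squares building block can be sketched directly; the build-up "data" and the weights below are purely illustrative assumptions.

```python
# Weighted least squares building block; the study's Bayesian and Monte
# Carlo layers are not reproduced, and the data/weights are illustrative.
import numpy as np

def weighted_least_squares(X, y, w):
    """beta = (X^T W X)^{-1} X^T W y: noisier points get less influence."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

dry_days = np.array([1.0, 2.0, 4.0, 7.0, 14.0])      # antecedent dry days
load = np.array([1.2, 1.9, 3.1, 4.8, 8.2])           # observed build-up (toy)
w = np.array([5.0, 5.0, 3.0, 2.0, 1.0])              # e.g. replicate counts
X = np.column_stack([np.ones_like(dry_days), dry_days])
beta = weighted_least_squares(X, load, w)            # [intercept, slope]
```

    In the Bayesian variant the weights themselves become uncertain quantities, which is what allows the limited-data uncertainty to propagate into the prediction.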

  20. Attenuation compensation in least-squares reverse time migration using the visco-acoustic wave equation

    KAUST Repository

    Dutta, Gaurav; Lu, Kai; Wang, Xin; Schuster, Gerard T.

    2013-01-01

    Attenuation leads to distortion of the amplitude and phase of seismic waves propagating inside the earth. Conventional acoustic and least-squares reverse time migration do not account for this distortion, which leads to defocusing of migration images.

  1. Internal displacement and strain measurement using digital volume correlation: a least-squares framework

    International Nuclear Information System (INIS)

    Pan, Bing; Wu, Dafang; Wang, Zhaoyang

    2012-01-01

    As a novel tool for quantitative 3D internal deformation measurement throughout the interior of a material or tissue, digital volume correlation (DVC) has increasingly gained attention and application in the fields of experimental mechanics, material research and biomedical engineering. However, the practical implementation of DVC involves important challenges such as implementation complexity, calculation accuracy and computational efficiency. In this paper, a least-squares framework is presented for 3D internal displacement and strain field measurement using DVC. The proposed DVC combines a practical linear-intensity-change model with an easy-to-implement iterative least-squares (ILS) algorithm to retrieve the 3D internal displacement vector field with sub-voxel accuracy. Because the linear-intensity-change model is capable of accounting for both the possible intensity changes and the relative geometric transform of the target subvolume, the presented DVC thus provides the highest sub-voxel registration accuracy and widest applicability. Furthermore, as the ILS algorithm uses only first-order spatial derivatives of the deformed volumetric image, the developed DVC thus significantly reduces computational complexity. To further extract 3D strain distributions from the 3D discrete displacement vectors obtained by the ILS algorithm, the presented DVC employs a pointwise least-squares algorithm to estimate the strain components for each measurement point. Computer-simulated volume images with controlled displacements are employed to investigate the performance of the proposed DVC method in terms of mean bias error and standard deviation error. Results reveal that the present technique is capable of providing accurate measurements in an easy-to-implement manner, and can be applied to practical 3D internal displacement and strain calculation.
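
    The pointwise least-squares strain step can be sketched as a local affine fit to the displacement field; the DVC correlation stage and the ILS sub-voxel registration are not reproduced, and the synthetic affine displacement field is an assumption.

```python
# Pointwise least-squares strain estimation: fit u = a + G (x - c) to the
# displacement vectors in a neighborhood; the symmetric part of G is the
# small-strain tensor at that point. The synthetic field is an assumption.
import numpy as np

def pointwise_strain(points, disp, center, radius):
    """Fit an affine model over a spherical neighborhood by least
    squares and return the small-strain tensor 0.5 * (G + G^T)."""
    mask = np.linalg.norm(points - center, axis=1) <= radius
    A = np.hstack([np.ones((mask.sum(), 1)), points[mask] - center])
    coef, *_ = np.linalg.lstsq(A, disp[mask], rcond=None)
    G = coef[1:].T                       # G[i, j] = du_i / dx_j
    return 0.5 * (G + G.T)

rng = np.random.default_rng(5)
pts = rng.uniform(0.0, 1.0, size=(500, 3))           # measurement points
F = np.array([[0.010, 0.002, 0.000],
              [0.002, -0.005, 0.000],
              [0.000, 0.000, 0.003]])
disp = pts @ F.T                                     # exact affine field
eps = pointwise_strain(pts, disp, pts[0], radius=0.4)
```

    For an exactly affine field the local fit recovers the imposed gradient, so the estimated strain matches the symmetric part of F.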

  2. Flow Applications of the Least Squares Finite Element Method

    Science.gov (United States)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  3. Dual stacked partial least squares for analysis of near-infrared spectra

    Energy Technology Data Exchange (ETDEWEB)

    Bi, Yiming [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Xie, Qiong, E-mail: yimbi@163.com [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Zhao, Yuhui [School of Economics and Business, Northeastern University at Qinhuangdao, 066000 Qinhuangdao City (China); Li, Changwen [Food Research Institute of Tianjin Tasly Group, 300410 Tianjin (China)

    2013-08-20

    Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced so that only a subset of all available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in many fields, such as Mid-infrared spectra data and Raman spectra data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm contains two steps of stacked regression and Partial Least Squares (PLS), termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules of the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was also involved to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with a single model, the new ensemble model can provide more robust prediction results and can be considered an alternative choice for quantitative analytical applications.

  4. Dual stacked partial least squares for analysis of near-infrared spectra

    International Nuclear Information System (INIS)

    Bi, Yiming; Xie, Qiong; Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie; Zhao, Yuhui; Li, Changwen

    2013-01-01

    Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced in which only a subset of the available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in other fields, such as mid-infrared and Raman spectral data. -- Abstract: A new ensemble learning algorithm is presented for the quantitative analysis of near-infrared spectra. The algorithm combines two stacking steps with Partial Least Squares (PLS) regression and is termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules for the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was introduced to select a subset of the available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with a single model, the new ensemble model provides more robust predictions and can be considered an alternative for quantitative analytical applications
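
The dual-stacking idea in the two records above can be sketched in a few lines. This is a toy illustration, not the authors' code: plain least squares stands in for PLS, and the sub-interval split, weighting rule, and all data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the dual-stacking idea: sub-models on spectral sub-intervals
# (inner stack), then a selectively weighted combination (outer stack).
# Plain least squares stands in for PLS; all sizes and rules are illustrative.
X = rng.normal(size=(60, 40))                 # 60 "spectra", 40 wavelengths
w_true = np.zeros(40); w_true[[5, 18, 33]] = [1.0, -0.5, 2.0]
y = X @ w_true + 0.05 * rng.normal(size=60)
X_cal, y_cal, X_val, y_val = X[:40], y[:40], X[40:], y[40:]

intervals = np.array_split(np.arange(40), 4)  # inner stack: 4 sub-intervals

def fit(Xs, ys):                              # LS sub-model with intercept
    return np.linalg.lstsq(np.c_[Xs, np.ones(len(Xs))], ys, rcond=None)[0]

coefs = [fit(X_cal[:, idx], y_cal) for idx in intervals]

def sub_preds(Xs):
    return np.column_stack([Xs[:, idx] @ c[:-1] + c[-1]
                            for idx, c in zip(intervals, coefs)])

# Outer stack with a selective weighting rule: weight by inverse calibration
# error and drop the worse half of the sub-models.
err = np.mean((sub_preds(X_cal) - y_cal[:, None]) ** 2, axis=0)
w = 1.0 / err
w[err > np.median(err)] = 0.0
w /= w.sum()

rmse = np.sqrt(np.mean((sub_preds(X_val) @ w - y_val) ** 2))
```

The selective rule here (drop sub-models with above-median calibration error) is one plausible reading of "only a subset of all available sub-models is used for model fusion".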

  5. Deconvolution of H-alpha profiles measured by Thomson scattering collecting optics

    International Nuclear Information System (INIS)

    LeBlanc, B.; Grek, B.

    1986-01-01

    This paper discusses how optically fast multichannel Thomson scattering collecting optics can be used for H-alpha emission profile measurements. A technique is discussed that exploits the fact that a particular volume element of the overall field of view can be seen by many channels, depending on its location. It is applied to measurements made on PDX with the vertically viewing TVTS collecting optics (56 channels). The authors found that for this case about 28 Fourier modes are optimal for representing the spatial behavior of the plasma emissivity. The coefficients for these modes are obtained by a least-squares fit to the data subject to certain constraints. The important constraints are non-negative emissivity, assumed up-down symmetry, and zero emissivity beyond the liners. H-alpha deconvolutions are presented for diverted and circular discharges
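
The constrained fit described above (many channels viewing overlapping volume elements, non-negative emissivity) can be imitated with a generic projected-gradient least-squares solve. The geometry, sizes, and solver below are illustrative stand-ins, not the PDX analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Constrained least-squares inversion of line-integrated measurements:
# each of 56 channels sees many overlapping volume elements, and the
# recovered emissivity is kept non-negative. Projected gradient descent
# stands in for the paper's constrained fitting; all numbers are made up.
n_cells, n_chan = 30, 56
A = np.clip(rng.normal(0.2, 0.3, size=(n_chan, n_cells)), 0.0, None)
eps_true = np.exp(-0.5 * ((np.arange(n_cells) - 22) / 3.0) ** 2)  # edge-peaked
b = A @ eps_true + 0.01 * rng.normal(size=n_chan)

x = np.zeros(n_cells)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the gradient Lipschitz constant
for _ in range(5000):
    x = np.clip(x - step * A.T @ (A @ x - b), 0.0, None)   # project onto x >= 0
```

The clip after each gradient step enforces the non-negativity constraint; a symmetry constraint could be added the same way, by projecting onto symmetric profiles.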

  6. Speed control of induction motor using fuzzy recursive least squares technique

    OpenAIRE

    Santiago Sánchez; Eduardo Giraldo

    2008-01-01

    A simple adaptive controller design is presented in this paper. The control system uses adaptive fuzzy logic and sliding modes, and is trained with the recursive least squares technique. The problem of parameter variation is solved with the adaptive controller; the use of an internal PI regulator means that the speed of the induction motor is controlled through the stator currents instead of the input voltage. The rotor-flux oriented coordinate system model is used to develop and test the c...

  7. Analysis of neutron and x-ray reflectivity data by constrained least-squares methods

    DEFF Research Database (Denmark)

    Pedersen, J.S.; Hamley, I.W.

    1994-01-01

    The coefficients in the series are determined by constrained nonlinear least-squares methods, in which the smoothest solution that agrees with the data is chosen. In the second approach the profile is expressed as a series of sine and cosine terms. A smoothness constraint is used which reduces the coefficients...

  8. Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine

    International Nuclear Information System (INIS)

    Xu Ruirui; Bian Guoxing; Gao Chenfeng; Chen Tianlun

    2005-01-01

    The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then a clustering method is employed in the model to prune the number of support values. Both the learning rate and the noise-filtering capability of the LS-SVM are greatly improved.
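
For reference, the LS-SVM regression at the core of such studies reduces to a single linear solve. The sketch below is a standard LS-SVM one-step-ahead predictor (without the paper's clustering-based pruning); the data, embedding, and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# LS-SVM regression for one-step-ahead prediction of a noisy sine, solved
# via the usual linear system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
s = np.sin(0.1 * np.arange(200)) + 0.05 * rng.normal(size=200)
emb = 4                                  # embedding (lag) dimension
X = np.array([s[i:i + emb] for i in range(200 - emb)])
y = s[emb:]
Xtr, ytr, Xte, yte = X[:150], y[:150], X[150:], y[150:]

def kernel(A, B, sigma=1.0):             # RBF kernel matrix
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

n, gamma = len(ytr), 100.0
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = kernel(Xtr, Xtr) + np.eye(n) / gamma   # gamma: regularization
sol = np.linalg.solve(M, np.r_[0.0, ytr])
b, alpha = sol[0], sol[1:]

pred = kernel(Xte, Xtr) @ alpha + b      # predictions on held-out samples
mse = np.mean((pred - yte) ** 2)
```

Every training sample becomes a support value here; the clustering step mentioned in the abstract would shrink the set of centers carrying nonzero alpha.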

  9. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc against its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.

  10. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based approach to structural equation modeling that has been adopted in social and psychological research for its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc against its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.

  11. A Galerkin least squares approach to viscoelastic flow.

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.

  12. Least-squares reverse time migration with Radon preconditioning

    KAUST Repository

    Dutta, Gaurav

    2016-09-06

    We present a least-squares reverse time migration (LSRTM) method using Radon preconditioning to regularize noisy or severely undersampled data. A high resolution local Radon transform is used as a change of basis for the reflectivity, and sparseness constraints are applied to the inverted reflectivity in the transform domain. This reflects the prior that for each location of the subsurface the number of geological dips is limited. The forward and the adjoint mapping of the reflectivity to the local Radon domain and back are done through 3D Fourier-based discrete Radon transform operators. The sparseness is enforced by applying weights to the Radon domain components which either vary with the amplitudes of the local dips or are thresholded at given quantiles. Numerical tests on synthetic and field data validate the effectiveness of the proposed approach in producing images with improved SNR and reduced aliasing artifacts when compared with standard RTM or LSRTM.

  13. Modern status of the LSD experiment

    International Nuclear Information System (INIS)

    Dadykin, V.L.; Zatsepin, G.T.; Korchagin, V.B.

    1989-01-01

    The possibility of an experiment with the LSD neutrino detector to measure the solar neutrino flux in the ≳7 MeV energy range is considered. The main sources of background, their characteristics, and the spectrum of the energy yield of γ quanta in the LSD counter are presented

  14. Discrete least squares polynomial approximation with random evaluations - application to PDEs with Random parameters

    KAUST Repository

    Nobile, Fabio

    2015-01-01

    the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial

  15. Modeling geochemical datasets for source apportionment: Comparison of least square regression and inversion approaches.

    Digital Repository Service at National Institute of Oceanography (India)

    Tripathy, G.R.; Das, Anirban.

    used methods, the Least Square Regression (LSR) and Inverse Modeling (IM), to determine the contributions of (i) solutes from different sources to global river water, and (ii) various rocks to a glacial till. The purpose of this exercise is to compare...

  16. Speed control of induction motor using fuzzy recursive least squares technique

    Directory of Open Access Journals (Sweden)

    Santiago Sánchez

    2008-12-01

    Full Text Available A simple adaptive controller design is presented in this paper. The control system uses adaptive fuzzy logic and sliding modes, and is trained with the recursive least squares technique. The problem of parameter variation is solved with the adaptive controller; the use of an internal PI regulator means that the speed of the induction motor is controlled through the stator currents instead of the input voltage. The rotor-flux oriented coordinate system model is used to develop and test the control system.

  17. Power system state estimation using an iteratively reweighted least squares method for sequential L{sub 1}-regression

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2006-02-15

    This paper presents an implementation of the least absolute value (LAV) power system state estimator based on obtaining a sequence of solutions to the L{sub 1}-regression problem using an iteratively reweighted least squares (IRLS{sub L1}) method. The proposed implementation avoids reformulating the regression problem into standard linear programming (LP) form and consequently does not require the use of common methods of LP, such as those based on the simplex method or interior-point methods. It is shown that the IRLS{sub L1} method is equivalent to solving a sequence of linear weighted least squares (LS) problems. Thus, its implementation presents little additional effort since the sparse LS solver is common to existing LS state estimators. Studies on the termination criteria of the IRLS{sub L1} method have been carried out to determine a procedure for which the proposed estimator is more computationally efficient than a previously proposed non-linear iteratively reweighted least squares (IRLS) estimator. Indeed, it is revealed that the proposed method is a generalization of the previously reported IRLS estimator, but is based on more rigorous theory. (author)
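
The key point of the abstract, that L1 (least absolute value) regression can be obtained by repeatedly solving weighted least-squares problems with an ordinary LS solver, can be sketched as follows. The data, outlier pattern, and damping constant are illustrative, not the power-system formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# LAV (L1) regression by iteratively reweighted least squares: each pass
# solves a weighted LS problem with weights 1/|residual|, so every iterate
# reuses a standard LS solve, as the abstract notes.
A = np.c_[np.ones(100), rng.normal(size=(100, 2))]
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.1 * rng.normal(size=100)
b[::10] += 5.0                               # gross errors, e.g. bad measurements

x = np.linalg.lstsq(A, b, rcond=None)[0]     # plain LS start
for _ in range(50):
    w = 1.0 / np.maximum(np.abs(b - A @ x), 1e-8)   # IRLS_L1 weights, damped
    AtW = A.T * w                                   # A^T W with W = diag(w)
    x = np.linalg.solve(AtW @ A, AtW @ b)           # weighted normal equations
```

The damping floor 1e-8 keeps the weights finite when a residual approaches zero; the LAV fit ends up essentially unaffected by the injected outliers, which is the robustness property the estimator is chosen for.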

  18. Study of the convergence behavior of the complex kernel least mean square algorithm.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2013-09-01

    The complex kernel least mean square (CKLMS) algorithm was recently derived and allows online kernel adaptive learning for complex data. Kernel adaptive methods can be used to find solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves that account for the circularity of the complex input signals and its effect on nonlinear learning. Simulations are used to verify the analysis results.
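
A real-valued kernel LMS sketch conveys the flavor of the algorithm analyzed above; the complex-data CKLMS and its Wirtinger-calculus derivation are beyond a toy example, and the kernel width, step size, and target function here are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Real-valued kernel LMS (a simplified, real-data analogue of CKLMS):
# online functional gradient descent in an RBF reproducing kernel space.
def rbf(a, b, sigma=0.5):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

eta = 0.2                      # step size
centers, weights, errs = [], [], []
for _ in range(400):
    u = rng.uniform(-1, 1, size=2)
    d = np.sin(3 * u[0]) * u[1]                     # nonlinear target
    y = sum(wt * rbf(c, u) for c, wt in zip(centers, weights))
    e = d - y
    centers.append(u)                               # KLMS update: new center...
    weights.append(eta * e)                         # ...weighted by the error
    errs.append(e ** 2)

early, late = np.mean(errs[:50]), np.mean(errs[-50:])
```

The growing center list is the known drawback of KLMS-type methods; sparsification rules bound it, and the convergence analysis in the record above predicts how the squared-error curve decays.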

  19. A weak Galerkin least-squares finite element method for div-curl systems

    Science.gov (United States)

    Li, Jichun; Ye, Xiu; Zhang, Shangyou

    2018-06-01

    In this paper, we introduce a weak Galerkin least-squares method for solving the div-curl problem. This finite element method leads to a symmetric positive definite system and has the flexibility to work with general meshes, such as hybrid meshes, polytopal meshes, and meshes with hanging nodes. Error estimates for the finite element solution are derived. Numerical examples demonstrate the robustness and flexibility of the proposed method.

  20. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy, including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying and satellite positioning application examples. In these application areas we are typically interested in the parameters of the model, typically 2- or 3-D positions, and not in predictive modelling, which is often the main concern in other regression analysis applications. Adjustment is often used to obtain ... the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between ...

  1. semPLS: Structural Equation Modeling Using Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Armin Monecke

    2012-05-01

    Full Text Available Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM that is especially suited to situations when data are not normally distributed. PLS path modelling is referred to as a soft modeling technique with minimal demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.

  2. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

    Directory of Open Access Journals (Sweden)

    Monika Pinchas

    2016-02-01

    Full Text Available Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), where the output and input probability density functions (pdfs) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared with the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable to the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable over the whole range of SNR down to 7 dB. In addition, we obtain new closed-form approximated expressions for the conditional expectation and the mean square error (MSE).

  3. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2013-12-01

    Full Text Available The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), together with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, then is classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method.

  4. Equalization of Loudspeaker and Room Responses Using Kautz Filters: Direct Least Squares Design

    Directory of Open Access Journals (Sweden)

    Karjalainen Matti

    2007-01-01

    Full Text Available DSP-based correction of loudspeaker and room responses is becoming an important part of improving sound reproduction. Such response equalization (EQ) is based on using a digital filter in cascade with the reproduction channel to counteract the response errors introduced by loudspeakers and room acoustics. Several FIR and IIR filter design techniques have been proposed for equalization purposes. In this paper we investigate Kautz filters, an interesting class of IIR filters, from the point of view of direct least squares EQ design. Kautz filters can be seen as generalizations of FIR filters and their frequency-warped counterparts. They provide a flexible means to obtain a desired frequency-resolution behavior, which allows low filter orders even for complex corrections. Kautz filters also have the desirable property of not inverting dips in the transfer function into sharp, long-ringing resonances in the equalizer. Furthermore, the direct least squares design is applicable to nonminimum-phase EQ design and allows the use of a desired target response. The proposed method is demonstrated by case examples with measured and synthetic loudspeaker and room responses.

  5. Hourly cooling load forecasting using time-indexed ARX models with two-stage weighted least squares regression

    International Nuclear Information System (INIS)

    Guo, Yin; Nazarian, Ehsan; Ko, Jeonghan; Rajurkar, Kamlakar

    2014-01-01

    Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trend of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set
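
A generic two-stage weighted least-squares pass, robust to outliers in the spirit of the abstract, might look like the following. The regressors, weight rule, and all numbers are illustrative stand-ins, not the paper's hourly-indexed ARX models.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two-stage weighted least squares: an ordinary LS pass supplies residuals,
# whose size sets the second-stage weights so outliers count less (a generic
# stand-in for the paper's coefficient-estimation scheme).
n = 200
X = np.c_[np.ones(n), rng.uniform(0, 24, size=n)]     # e.g. hour-of-day regressor
beta_true = np.array([50.0, 3.0])
y = X @ beta_true + rng.normal(0, 2.0, size=n)
y[:8] += 40.0                                          # a few outlying loads

beta1 = np.linalg.lstsq(X, y, rcond=None)[0]           # stage 1: plain LS
r = y - X @ beta1
s = 1.4826 * np.median(np.abs(r - np.median(r)))       # robust scale (MAD)
w = 1.0 / (1.0 + (r / (3 * s)) ** 2)                   # downweight large residuals
Xw = X * w[:, None]
beta2 = np.linalg.solve(X.T @ Xw, Xw.T @ y)            # stage 2: weighted LS
```

The MAD-based weights are one common choice; the paper's actual weighting is tuned to cooling-load and weather patterns.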

  6. Strong-source heat transfer simulations based on a Galerkin/gradient-least-squares method

    International Nuclear Information System (INIS)

    Franca, L.P.; Carmo, E.G.D. do.

    1989-05-01

    Heat conduction problems with temperature-dependent strong sources are modeled by an equation with a Laplacian term, a linear term, and a given source distribution term. When the linear temperature-dependent source term is much larger than the Laplacian term, we have a singular perturbation problem. In this case, boundary layers are formed to satisfy the Dirichlet boundary conditions. Although this is an elliptic equation, the standard Galerkin solution is contaminated by spurious oscillations in the neighborhood of the boundary layers. Herein we employ a Galerkin/gradient-least-squares method which eliminates all pathological phenomena of the Galerkin method. The method is constructed by adding to the Galerkin method a mesh-dependent term obtained from the least-squares form of the gradient of the Euler-Lagrange equation. Error estimates and numerical simulations in one and multiple dimensions are given that attest to the good stability and accuracy properties of the method [pt

  7. Parameter Estimation of Permanent Magnet Synchronous Motor Using Orthogonal Projection and Recursive Least Squares Combinatorial Algorithm

    Directory of Open Access Journals (Sweden)

    Iman Yousefi

    2015-01-01

    Full Text Available This paper presents parameter estimation of a Permanent Magnet Synchronous Motor (PMSM) using a combinatorial algorithm. The nonlinear fourth-order state-space model of the PMSM is selected. This model is rewritten in linear regression form without linearization. Noise is imposed on the system to provide a realistic condition, and then the combinatorial Orthogonal Projection Algorithm and Recursive Least Squares (OPA&RLS) method is applied to the system in the linear regression form. Results of this method are compared to those of the Orthogonal Projection Algorithm (OPA) and Recursive Least Squares (RLS) methods to validate the feasibility of the proposed method. Simulation results validate the efficacy of the proposed algorithm.
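
The RLS half of the OPA&RLS scheme, applied to a model rewritten in linear-regression form, can be sketched generically; the regressors, noise level, and forgetting factor below are invented and are not the PMSM model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Recursive least squares on a model rewritten in linear-regression form
# y = phi^T theta + noise, as done for the PMSM (toy regressors here).
theta_true = np.array([0.5, -1.2, 2.0])
theta = np.zeros(3)
P = 1e3 * np.eye(3)        # large initial covariance -> weak prior
lam = 0.99                 # forgetting factor
for _ in range(500):
    phi = rng.normal(size=3)
    y = phi @ theta_true + 0.01 * rng.normal()
    k = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + k * (y - phi @ theta)      # parameter update
    P = (P - np.outer(k, phi @ P)) / lam       # covariance update
```

The forgetting factor lets the estimate track slow parameter variation, which is the motivation for recursive estimation in motor drives.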

  8. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    Science.gov (United States)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  9. Least Squares Estimate of the Initial Phases in STFT based Speech Enhancement

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Krawczyk-Becker, Martin; Gerkmann, Timo

    2015-01-01

    In this paper, we consider single-channel speech enhancement in the short time Fourier transform (STFT) domain. We suggest to improve an STFT phase estimate by estimating the initial phases. The method is based on the harmonic model and a model for the phase evolution over time. The initial phases are estimated by setting up a least squares problem between the noisy phase and the model for phase evolution. Simulations on synthetic and speech signals show a decreased error on the phase when an estimate of the initial phase is included, compared to using the noisy phase as an initialisation. The error on the phase is decreased at input SNRs from -10 to 10 dB. Reconstructing the signal using the clean amplitude, the mean squared error is decreased and the PESQ score is increased.
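
The least-squares initial-phase idea can be illustrated on a single sinusoid with a linear phase-evolution model; this is a simplified scalar analogue of the harmonic-model setup, with invented numbers.

```python
import numpy as np

rng = np.random.default_rng(6)

# Least-squares estimate of an initial phase under a linear phase-evolution
# model phi(n) = phi0 + omega*n. The noisy wrapped phase is unwrapped and
# then fit by ordinary least squares.
phi0, omega = 1.1, 0.3
n = np.arange(40)
wrapped = np.angle(np.exp(1j * (phi0 + omega * n + 0.2 * rng.normal(size=n.size))))

phi_u = np.unwrap(wrapped)                       # remove 2*pi jumps before fitting
A = np.c_[np.ones(n.size), n]                    # design matrix [1, n]
est = np.linalg.lstsq(A, phi_u, rcond=None)[0]   # [phi0_hat, omega_hat]
```

Fitting the model rather than trusting the noisy phase sample at n = 0 is what reduces the initial-phase error, mirroring the mechanism in the paper.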

  10. Underwater Acoustic Channel Estimation for Shallow Water Using the Least Square (LS) and Minimum Mean Square Error (MMSE) Methods

    Directory of Open Access Journals (Sweden)

    Mardawia M Panrereng

    2015-06-01

    Full Text Available In recent years, underwater acoustic communication systems have been developed by many researchers, and the magnitude of the challenges involved has made the field increasingly attractive for research. The underwater channel is a difficult communication medium because of attenuation, absorption, and multipath caused by the constant motion of the water. In shallow water, multipath arises from reflections off the surface and the sea bottom. The need for fast data transmission over a limited bandwidth makes Orthogonal Frequency Division Multiplexing (OFDM) a solution for high-rate communication, with Binary Phase-Shift Keying (BPSK) modulation. Channel estimation aims to characterize the impulse response of the propagation channel by transmitting pilot symbols. With the Least Square (LS) estimation method, the resulting Mean Square Error (MSE) tends to be larger than with the Minimum Mean Square Error (MMSE) method. In terms of Bit Error Rate (BER), however, the performance of the LS and MMSE channel estimates shows no significant difference, differing by about one SNR step between the two estimation methods.
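
The LS-versus-MMSE comparison in the record above can be reproduced in miniature for pilot-based OFDM channel estimation. The channel model, SNR, and known tap-power profile below are idealized assumptions, not the paper's underwater channel.

```python
import numpy as np

rng = np.random.default_rng(7)

# Pilot-based OFDM channel estimation: LS divides received pilots by the
# transmitted ones; MMSE additionally uses the channel correlation and the
# noise power (idealized here: the tap-power profile is assumed known).
K, L = 64, 8
X = np.where(rng.random(K) < 0.5, -1.0, 1.0)              # BPSK pilot symbols
taps = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-0.5 * np.arange(L))
H = np.fft.fft(taps, K)                                    # true channel response
noise_var = np.mean(np.abs(H) ** 2) / 10.0                 # ~10 dB SNR
noise = np.sqrt(noise_var / 2) * (rng.normal(size=K) + 1j * rng.normal(size=K))
Y = H * X + noise

H_ls = Y / X                                               # least-squares estimate

# MMSE: H_mmse = R (R + sigma^2 I)^{-1} H_ls, with R = F diag(p) F^H built
# from the (assumed known) exponential tap-power profile p.
p = 2.0 * np.exp(-np.arange(L))
F = np.exp(-2j * np.pi * np.outer(np.arange(K), np.arange(L)) / K)
R = F @ np.diag(p) @ F.conj().T
H_mmse = R @ np.linalg.solve(R + noise_var * np.eye(K), H_ls)

mse_ls = np.mean(np.abs(H_ls - H) ** 2)
mse_mmse = np.mean(np.abs(H_mmse - H) ** 2)
```

As the abstract reports, the MMSE estimate attains a lower MSE than LS because the correlation prior suppresses noise outside the channel's effective subspace.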

  11. A negative-norm least-squares method for time-harmonic Maxwell equations

    KAUST Repository

    Copeland, Dylan M.

    2012-04-01

    This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.

  12. And still, a new beginning: the Galerkin least-squares gradient method

    International Nuclear Information System (INIS)

    Franca, L.P.; Carmo, E.G.D. do

    1988-08-01

    A finite element method is proposed to solve a scalar singular diffusion problem. The method is constructed by adding to the standard Galerkin method a mesh-dependent term obtained from the least-squares form of the gradient of the Euler-Lagrange equation. For the one-dimensional homogeneous problem the method is designed to reproduce the nodally exact solution. An error estimate shows that the method converges optimally for any value of the singular parameter. Numerical results demonstrate the good stability and accuracy properties of the method. (author) [pt

  13. LSD Acutely Impairs Fear Recognition and Enhances Emotional Empathy and Sociality.

    Science.gov (United States)

    Dolder, Patrick C; Schmid, Yasmin; Müller, Felix; Borgwardt, Stefan; Liechti, Matthias E

    2016-10-01

    Lysergic acid diethylamide (LSD) is used recreationally and has been evaluated as an adjunct to psychotherapy to treat anxiety in patients with life-threatening illness. LSD is well known to induce perceptual alterations, but it is unknown whether LSD alters emotional processing in ways that can support psychotherapy. We investigated the acute effects of LSD on emotional processing using the Face Emotion Recognition Task (FERT) and Multifaceted Empathy Test (MET). The effects of LSD on social behavior were tested using the Social Value Orientation (SVO) test. Two similar placebo-controlled, double-blind, random-order, crossover studies were conducted using 100 μg LSD in 24 subjects and 200 μg LSD in 16 subjects. All of the subjects were healthy and mostly hallucinogen-naive 25- to 65-year-old volunteers (20 men, 20 women). LSD produced feelings of happiness, trust, and closeness to others, enhanced explicit and implicit emotional empathy on the MET, and impaired the recognition of sad and fearful faces on the FERT. LSD enhanced the participants' desire to be with other people and increased their prosocial behavior on the SVO test. These effects of LSD on emotion processing and sociality may be useful for LSD-assisted psychotherapy.

  14. Comment on "Fringe projection profilometry with nonparallel illumination: a least-squares approach"

    Science.gov (United States)

    Wang, Zhaoyang; Bi, Hongbo

    2006-07-01

    We comment on the recent Letter by Chen and Quan [Opt. Lett. 30, 2101 (2005)] in which a least-squares approach was proposed to cope with nonparallel illumination in fringe projection profilometry. It is noted that the previous mathematical derivations of the fringe pitch and carrier phase functions on the reference plane were incorrect. In addition, we suggest that the variation of the carrier phase along the vertical direction should be considered.

  15. LSD Flashbacks: An Overview of the Literature for Counselors.

    Science.gov (United States)

    Silling, S. Marc

    1980-01-01

    Surveyed the literature to delineate the etiology of LSD flashbacks. Concluded that adverse experiences while using LSD are predictive of flashbacks; physiological effects of LSD use may linger after the drug has been metabolized; and individuals who have flashbacks are highly suggestible and play a flashback "role."

  16. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    Science.gov (United States)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
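    The kernelized TARMA scheme itself is involved, but its recursive core, exponentially weighted recursive least squares, is compact. The sketch below tracks a slowly drifting AR(1) coefficient with a forgetting factor λ < 1; the data, parameter values, and function names are synthetic and illustrative, not taken from the paper.

```python
import numpy as np

def ewrls_step(theta, P, x, y, lam=0.98):
    """Exponentially weighted RLS update; lam < 1 discounts old data so the
    estimate can track time-varying parameters."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, Px)) / lam    # discounted covariance update
    return theta, P

# Track a slowly drifting AR(1) coefficient a_t in y_t = a_t * y_{t-1} + e_t.
rng = np.random.default_rng(3)
T = 2000
a = 0.3 + 0.4 * np.arange(T) / T       # drifts from 0.3 to 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = a[t] * y[t - 1] + 0.1 * rng.standard_normal()

theta = np.zeros(1)
P = 1e3 * np.eye(1)                    # diffuse initial covariance
est = np.zeros(T)
for t in range(1, T):
    theta, P = ewrls_step(theta, P, np.array([y[t - 1]]), y[t])
    est[t] = theta[0]
```

    With λ = 0.98 the effective memory is roughly 1/(1 − λ) ≈ 50 samples, short enough to follow the drift while still averaging out the noise.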

  17. Crystal Structure of an LSD-Bound Human Serotonin Receptor

    Energy Technology Data Exchange (ETDEWEB)

    Wacker, Daniel; Wang, Sheng; McCorvy, John D.; Betz, Robin M.; Venkatakrishnan, A.J.; Levit, Anat; Lansu, Katherine; Schools, Zachary L.; Che, Tao; Nichols, David E.; Shoichet, Brian K.; Dror, Ron O.; Roth, Bryan L. (UNCSM); (UNC); (Stanford); (Stanford-MED); (UCSF)

    2017-01-01

    The prototypical hallucinogen LSD acts via serotonin receptors, and here we describe the crystal structure of LSD in complex with the human serotonin receptor 5-HT2B. The complex reveals conformational rearrangements to accommodate LSD, providing a structural explanation for the conformational selectivity of LSD’s key diethylamide moiety. LSD dissociates exceptionally slowly from both 5-HT2BR and 5-HT2AR, the latter being a major target for its psychoactivity. Molecular dynamics (MD) simulations suggest that LSD’s slow binding kinetics may be due to a “lid” formed by extracellular loop 2 (EL2) at the entrance to the binding pocket. A mutation predicted to increase the mobility of this lid greatly accelerates LSD’s binding kinetics and selectively dampens LSD-mediated β-arrestin2 recruitment. This study thus reveals an unexpected binding mode of LSD; illuminates key features of its kinetics, stereochemistry, and signaling; and provides a molecular explanation for LSD’s actions at human serotonin receptors.

  18. Phase-unwrapping algorithm by a rounding-least-squares approach

    Science.gov (United States)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates on the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and requires no user intervention, it could be used in metrological interferometry and automatic real-time fringe-projection applications.
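    The full algorithm operates on 2-D gradients of the phase jumps; as a minimal illustration of the rounding idea only, the 1-D sketch below recovers integer 2π jump counts by rounding successive differences of the wrapped phase. The function name and test signal are hypothetical, not from the paper.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Recover integer 2*pi jump counts by rounding the difference between
    successive wrapped samples; a cumulative sum then undoes the wrapping."""
    jumps = np.round(np.diff(wrapped) / (2 * np.pi))
    return wrapped - 2 * np.pi * np.concatenate(([0.0], np.cumsum(jumps)))

# A smooth phase ramp, wrapped into (-pi, pi], then unwrapped again.
phi = np.linspace(0.0, 6 * np.pi, 200)
wrapped = np.angle(np.exp(1j * phi))
recovered = unwrap_1d(wrapped)
```

    In the noiseless case the rounding is exact, so the ramp is recovered to machine precision; the paper's contribution is making the analogous rounding step robust in 2-D via a global least-squares fit.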

  19. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long-time-averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm on several dynamical systems exhibiting both periodic and chaotic oscillations.

  20. 3D plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2014-08-05

    A three-dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitations of current 3D marine acquisition geometries, a cylindrical-wave encoding is adopted for the narrow-azimuth streamer data. To account for the mispositioning of reflectors due to errors in the velocity model, a regularized LSM is devised so that each plane-wave or cylindrical-wave gather gives rise to an individual migration image, and a regularization term is included to encourage similarity between the migration images of similar encoding schemes. Both synthetic and field results show that: 1) plane-wave or cylindrical-wave encoding LSM can achieve both computational and I/O savings compared to shot-domain LSM; however, plane-wave LSM is still about 5 times more expensive than plane-wave migration; 2) the regularized LSM is more robust than LSM with one reflectivity model common to all the plane-wave or cylindrical-wave gathers.

  1. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Yang, Zong-Chang

    2014-01-01

    Highlights: • Introduces a finite Fourier-series model for evaluating the monthly movement of annual average solar insolation. • Presents a forecast method for predicting this movement, based on the Fourier-series model extended in the least-squares sense. • Shows that the movement is well described by a low number of harmonics, approximately a 6-term Fourier series. • Predicts the movement best with fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measured parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science, and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation, and it is extended in the least-squares sense for forecasting. Conventional Fourier analysis, the most common analysis method in the frequency domain, cannot be applied directly for prediction. Incorporating the least-squares method, the introduced Fourier-series model is extended to predict the movement. The extended Fourier-series forecasting model obtains its optimum Fourier coefficients in the least-squares sense from previous monthly movements. The proposed method is applied to experiments and yields satisfying results for the different cities (states). The results indicate that the monthly movement of annual average solar insolation is well described by a low number of harmonics, approximately a 6-term Fourier series, and is predicted best with fewer than 6 Fourier terms.
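    The least-squares extension of a finite Fourier series amounts to solving one linear system. A minimal sketch, assuming a synthetic monthly series with an annual period of 12 samples; the data, harmonic count, and function names here are illustrative, not the paper's.

```python
import numpy as np

def fourier_design(t, n_harmonics, period=12.0):
    """Design matrix for a truncated Fourier series: constant + cos/sin pairs."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# Synthetic "monthly insolation" series: annual cycle plus a weak second
# harmonic and noise (hypothetical stand-in data).
rng = np.random.default_rng(0)
t = np.arange(48.0)                          # 4 years of monthly samples
y = 5.0 + 2.0 * np.sin(2 * np.pi * t / 12) + 0.5 * np.cos(4 * np.pi * t / 12)
y += 0.05 * rng.standard_normal(t.size)

# Least-squares Fourier coefficients, then a 12-month-ahead forecast obtained
# by evaluating the fitted series at future time points.
X = fourier_design(t, n_harmonics=3)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
t_future = np.arange(48.0, 60.0)
forecast = fourier_design(t_future, 3) @ coef
```

    Because the model is linear in the coefficients, forecasting is simply evaluation of the fitted series beyond the observed window, which is the sense in which the least-squares extension makes Fourier analysis predictive.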

  2. Use of correspondence analysis partial least squares on linear and unimodal data

    DEFF Research Database (Denmark)

    Frisvad, Jens Christian; Norsker, Merete

    1996-01-01

    Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating...... that could only be seen in two-dimensional plots, and also less effective predictions. PLS was the best method in the linear case treated, with fewer components and a better prediction than CA-PLS....

  3. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    Science.gov (United States)

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.

  4. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Youngsoo [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.; Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carlberg, Kevin Thomas [Sandia National Laboratories (SNL-CA), Livermore, CA (United States). Extreme-scale Data Science and Analytics Dept.

    2017-09-01

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.

  5. A Bayesian least-squares support vector machine method for predicting the remaining useful life of a microwave component

    Directory of Open Access Journals (Sweden)

    Fuqiang Sun

    2017-01-01

    Rapid and accurate lifetime prediction of critical components in a system is important to maintaining the system’s reliable operation. To this end, many lifetime prediction methods have been developed to handle various failure-related data collected in different situations. Among these methods, machine learning and Bayesian updating are the most popular ones. In this article, a Bayesian least-squares support vector machine method that combines least-squares support vector machine with Bayesian inference is developed for predicting the remaining useful life of a microwave component. A degradation model describing the change in the component’s power gain over time is developed, and the point and interval remaining useful life estimates are obtained considering a predefined failure threshold. In our case study, the radial basis function neural network approach is also implemented for comparison purposes. The results indicate that the Bayesian least-squares support vector machine method is more precise and stable in predicting the remaining useful life of this type of component.
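    The least-squares support vector machine at the core of this method reduces training to a single linear system in the dual variables. Below is a minimal sketch of standard LS-SVM regression with an RBF kernel, using synthetic stand-in data rather than the article's degradation measurements; the Bayesian updating layer is omitted, and all names and parameter values are illustrative.

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                      # alpha, b

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# Fit a smooth 1-D degradation-like curve (synthetic stand-in data).
Xtr = np.linspace(0, 4, 40)[:, None]
ytr = 10.0 - 0.5 * Xtr.ravel() ** 1.5
alpha, b = lssvm_fit(Xtr, ytr)
pred = lssvm_predict(Xtr, alpha, b, Xtr)
```

    Unlike a standard SVM, which solves a quadratic program, the equality-constrained LS-SVM formulation has a closed-form dual solution, which is what makes it convenient to wrap in Bayesian inference.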

  6. COMPARISON OF LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR AND PARTIAL LEAST SQUARES ANALYSES (Case Study: Microarray Data)

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Linear regression analysis is a parametric statistical method that utilizes the relationship between two or more quantitative variables. In linear regression analysis, several assumptions must be met: the errors are normally distributed, uncorrelated, and have constant (homogeneous) variance. Some conditions prevent these assumptions from being met, for example, correlation between independent variables (multicollinearity) and constraints on the number of observations relative to the number of independent variables. When the number of samples obtained is less than the number of independent variables, the data are called microarray data. Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to handle microarray data, overfitting, and multicollinearity. Given the above, a study comparing the LASSO and PLS methods is warranted. This study uses data on coronary heart disease and stroke patients, which are microarray data exhibiting multicollinearity. For these data, in which most independent variables are weakly correlated, the LASSO method produces a better model than PLS, as seen from the lower RMSEP.

  7. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    Science.gov (United States)

    Borodachev, S. M.

    2016-06-01

    The simple derivation of recursive least squares (RLS) method equations is given as special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates application of RLS to multicollinearity problem.
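    The correspondence can be checked numerically: a Kalman filter for a constant state (transition F = I, process noise Q = 0, scalar observation noise R) performs algebraically the same update as RLS. The sketch below runs both side by side on synthetic data; all names and values are illustrative, not from the paper.

```python
import numpy as np

def rls_step(theta, P, x, y):
    """Standard RLS update for y = x @ theta + noise."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)              # gain vector
    return theta + k * (y - x @ theta), P - np.outer(k, Px)

def kalman_step(m, S, H, z, R=1.0):
    """Kalman measurement update for a constant state (F = I, Q = 0)."""
    SH = S @ H
    K = SH / (H @ SH + R)
    return m + K * (z - H @ m), S - np.outer(K, SH)

rng = np.random.default_rng(1)
true_theta = np.array([2.0, -1.0, 0.5])
theta = np.zeros(3)
m = np.zeros(3)
P = 100.0 * np.eye(3)                    # large P/S encodes a diffuse prior
S = 100.0 * np.eye(3)
for _ in range(100):
    x = rng.standard_normal(3)
    y = x @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, x, y)
    m, S = kalman_step(m, S, x, y)       # identical algebra, same trajectory
```

    With F = I and Q = 0 the Kalman gain reduces term by term to the RLS gain, so the two estimates coincide at every step, which is exactly the special-case relationship the abstract describes.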

  8. Gravity Search Algorithm hybridized Recursive Least Square method for power system harmonic estimation

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Singh

    2017-06-01

    This paper presents a new hybrid method based on the Gravity Search Algorithm (GSA) and Recursive Least Squares (RLS), known as GSA-RLS, to solve harmonic estimation problems for time-varying power signals in the presence of different noises. GSA is based on Newton's law of gravity and mass interactions. In the proposed method, the searcher agents are a collection of masses that interact with each other using Newton's laws of gravity and motion. The basic GSA strategy is combined sequentially with the RLS algorithm in an adaptive way to update the unknown parameters (weights) of the harmonic signal. Simulation and practical validation are performed by testing the proposed algorithm with real-time data obtained from a heavy paper industry. The performance of the proposed algorithm is compared with other recently reported algorithms, such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Bacteria Foraging Optimization (BFO), Fuzzy-BFO (F-BFO) hybridized with Least Squares (LS), and BFO hybridized with RLS, which reveals that the proposed GSA-RLS algorithm is the best in terms of accuracy, convergence, and computational time.

  9. Extreme Learning Machine and Moving Least Square Regression Based Solar Panel Vision Inspection

    Directory of Open Access Journals (Sweden)

    Heng Liu

    2017-01-01

    In recent years, learning-based machine intelligence has attracted much attention across science and engineering. Particularly in automatic industrial inspection, machine-learning-based vision inspection plays an increasingly important role in defect identification and feature extraction. By learning from image samples, many features of industrial objects, such as shapes, positions, and orientation angles, can be obtained and then used to determine whether a defect is present. However, robustness and speed are not easily achieved in such inspection. In this work, for solar panel vision inspection, we present an extreme learning machine (ELM) and moving least squares regression based approach to identify solder-joint defects and detect the panel position. First, histogram peaks distribution (HPD) and fractional calculus are applied for image preprocessing. Then, ELM-based identification of defective solder joints is discussed in detail. Finally, the moving least squares regression (MLSR) algorithm is introduced for determining the solar panel position. Experimental results and comparisons show that the proposed ELM- and MLSR-based inspection method is efficient in both detection accuracy and processing speed.
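    Moving least squares regression fits a weighted local model at each query point. Below is a minimal 1-D sketch with a Gaussian weight and a local linear basis; the bandwidth, data, and function names are illustrative (the article applies MLSR to panel-position coordinates, not to this toy curve).

```python
import numpy as np

def mls_eval(x0, x, y, h=0.5):
    """Moving least squares: at each query point, fit a weighted local line
    (Gaussian weights centred on the query) and evaluate it there."""
    w = np.exp(-((x - x0) / h) ** 2)
    B = np.column_stack([np.ones_like(x), x - x0])   # local linear basis at x0
    coef = np.linalg.solve(B.T @ (w[:, None] * B), B.T @ (w * y))
    return coef[0]                                   # fitted value at x0

x = np.linspace(0, 3, 60)
y = x ** 2                                           # noiseless test curve
fit = np.array([mls_eval(x0, x, y) for x0 in x])
```

    Because the weights move with the query point, the method adapts to local structure while each individual fit stays a tiny linear least-squares problem.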

  10. Automated extraction of lysergic acid diethylamide (LSD) and N-demethyl-LSD from blood, serum, plasma, and urine samples using the Zymark RapidTrace with LC/MS/MS confirmation.

    Science.gov (United States)

    de Kanel, J; Vickery, W E; Waldner, B; Monahan, R M; Diamond, F X

    1998-05-01

    A forensic procedure for the quantitative confirmation of lysergic acid diethylamide (LSD) and the qualitative confirmation of its metabolite, N-demethyl-LSD, in blood, serum, plasma, and urine samples is presented. The Zymark RapidTrace was used to perform fully automated solid-phase extractions of all specimen types. After extract evaporation, confirmations were performed using liquid chromatography (LC) followed by positive electrospray ionization (ESI+) mass spectrometry/mass spectrometry (MS/MS) without derivatization. Quantitation of LSD was accomplished using LSD-d3 as an internal standard. The limit of quantitation (LOQ) for LSD was 0.05 ng/mL. The limit of detection (LOD) for both LSD and N-demethyl-LSD was 0.025 ng/mL. The recovery of LSD was greater than 95% at levels of 0.1 ng/mL and 2.0 ng/mL. For LSD at 1.0 ng/mL, the within-run and between-run (different day) relative standard deviation (RSD) was 2.2% and 4.4%, respectively.

  11. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    Science.gov (United States)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete-return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 × 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using the nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.

  12. Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition

    Science.gov (United States)

    Hafizhelmi Kamaru Zaman, Fadhlan

    2018-03-01

    Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has been previously used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally applied Local Orthogonal Least Squares (LOLS) model that can be used as initial feature extraction before the application of LLE. By constructing least-squares regression under orthogonality constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method reduces the time required to compute LLE features while delivering better accuracy than when either LLE or OLS alone is used. Comparisons against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNN), also reveal the superiority of the proposed method under the SSPP constraint.

  13. LSD enhances the emotional response to music.

    Science.gov (United States)

    Kaelen, M; Barrett, F S; Roseman, L; Lorenz, R; Family, N; Bolstridge, M; Curran, H V; Feilding, A; Nutt, D J; Carhart-Harris, R L

    2015-10-01

    There is renewed interest in the therapeutic potential of psychedelic drugs such as lysergic acid diethylamide (LSD). LSD was used extensively in the 1950s and 1960s as an adjunct in psychotherapy, reportedly enhancing emotionality. Music is an effective tool to evoke and study emotion and is considered an important element in psychedelic-assisted psychotherapy; however, the hypothesis that psychedelics enhance the emotional response to music has yet to be investigated in a modern placebo-controlled study. The present study sought to test the hypothesis that music-evoked emotions are enhanced under LSD. Ten healthy volunteers listened to five different tracks of instrumental music during each of two study days, a placebo day followed by an LSD day, separated by 5-7 days. Subjective ratings were completed after each music track and included a visual analogue scale (VAS) and the nine-item Geneva Emotional Music Scale (GEMS-9). Results demonstrated that the emotional response to music is enhanced by LSD, especially the emotions "wonder", "transcendence", "power" and "tenderness". These findings reinforce the long-held assumption that psychedelics enhance music-evoked emotion, and provide tentative and indirect support for the notion that this effect can be harnessed in the context of psychedelic-assisted psychotherapy. Further research is required to test this link directly.

  14. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
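    For the equality-constrained case, each least-squares problem reduces to one KKT linear system; the sketch below shows that baseline (the combinatorial reorganization for inequality constraints and many observation vectors, which is the paper's actual contribution, is not reproduced here, and all names and data are illustrative).

```python
import numpy as np

def equality_constrained_lstsq(A, b, C, d):
    """Solve min ||A x - b||^2 subject to C x = d via the KKT system
    [[A^T A, C^T], [C, 0]] [x; lam] = [A^T b; d]."""
    n = A.shape[1]
    p = C.shape[0]
    kkt = np.block([[A.T @ A, C.T], [C, np.zeros((p, p))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(kkt, rhs)[:n]    # discard the multipliers lam

# Fit three coefficients whose sum is constrained to 1 (a common side condition).
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 3))
b = rng.standard_normal(50)
C = np.ones((1, 3))
d = np.array([1.0])
x = equality_constrained_lstsq(A, b, C, d)
```

    Solving one such system is cheap; the combinatorial speedup in the paper comes from sharing matrix factorizations across many observation vectors that activate the same constraint sets.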

  15. Stable Galerkin versus equal-order Galerkin least-squares elements for the Stokes flow problem

    International Nuclear Information System (INIS)

    Franca, L.P.; Frey, S.L.; Sampaio, R.

    1989-11-01

    Numerical experiments are performed for the Stokes flow problem employing a stable Galerkin method and a Galerkin/least-squares method with equal-order elements. Error estimates for the methods tested herein are reviewed. The numerical results presented attest to the good stability properties of all methods examined herein.

  16. Cognitive assessment in mathematics with the least squares distance method.

    Science.gov (United States)

    Ma, Lin; Çetin, Emre; Green, Kathy E

    2012-01-01

    This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of the cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes. Students from the two eastern regions significantly underperformed on most attributes.

  17. Recursive N-way partial least squares for brain-computer interface.

    Directory of Open Access Journals (Sweden)

    Andrey Eliseyev

    In this article, tensor-input/tensor-output blockwise Recursive N-way Partial Least Squares (RNPLS) regression is considered. It combines multi-way tensor decomposition with a consecutive calculation scheme and allows blockwise treatment of tensor data arrays of huge dimensions, as well as adaptive modeling of time-dependent processes with tensor variables. A numerical study of the algorithm is undertaken. The RNPLS algorithm demonstrates fast and stable convergence of the regression coefficients. Applied to Brain-Computer Interface system calibration, the algorithm provides an efficient adjustment of the decoding model. Combining online adaptation with easy interpretation of results, the method can be effectively applied in a variety of multi-modal neural activity flow modeling tasks.

  18. Medium Band Least Squares Estimation of Fractional Cointegration in the Presence of Low-Frequency Contamination

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Varneskov, Rasmus T.

    The medium band least squares (MBLS) estimator uses sample-dependent trimming of frequencies in the vicinity of the origin to account for such contamination. Consistency and asymptotic normality of the MBLS estimator are established, a feasible inference procedure is proposed, and rigorous tools for assessing...

  19. Multisource least-squares migration of marine streamer and land data with frequency-division encoding

    KAUST Repository

    Huang, Yunsong; Schuster, Gerard T.

    2012-01-01

    Multisource migration of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. The accompanying crosstalk noise, in addition to the migration footprint, can be reduced by least-squares inversion. But the application of this approach to marine streamer data is hampered by the mismatch between the limited number of live traces/shot recorded in the field and the pervasive number of traces generated by the finite-difference modelling method. This leads to a strong mismatch in the misfit function and results in strong artefacts (crosstalk) in the multisource least-squares migration image. To eliminate this noise, we present a frequency-division multiplexing (FDM) strategy with iterative least-squares migration (ILSM) of supergathers. The key idea is, at each ILSM iteration, to assign a unique frequency band to each shot gather. In this case there is no overlap in the crosstalk spectrum of each migrated shot gather m(x, ω_i), so the spectral crosstalk product m(x, ω_i)m(x, ω_j) ∝ δ_{i,j} is zero unless i = j. Our results in applying this method to 2D marine data for a SEG/EAGE salt model show better resolved images than standard migration computed at about 1/10th of the cost. Similar results are achieved after applying this method to synthetic data for a 3D SEG/EAGE salt model, except the acquisition geometry is similar to that of a marine OBS survey. Here, the speedup of this method over conventional migration is more than 10. We conclude that multisource migration for a marine geometry can be successfully achieved by a frequency-division encoding strategy, as long as crosstalk-prone sources are segregated in their spectral content. This is both the strength and the potential limitation of this method. © 2012 European Association of Geoscientists & Engineers.

  20. Multisource least-squares migration of marine streamer and land data with frequency-division encoding

    KAUST Repository

    Huang, Yunsong

    2012-05-22

    Multisource migration of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. The accompanying crosstalk noise, in addition to the migration footprint, can be reduced by least-squares inversion. But the application of this approach to marine streamer data is hampered by the mismatch between the limited number of live traces per shot recorded in the field and the pervasive number of traces generated by the finite-difference modelling method. This leads to a strong mismatch in the misfit function and results in strong artefacts (crosstalk) in the multisource least-squares migration image. To eliminate this noise, we present a frequency-division multiplexing (FDM) strategy with iterative least-squares migration (ILSM) of supergathers. The key idea is, at each ILSM iteration, to assign a unique frequency band to each shot gather. In this case there is no overlap in the crosstalk spectrum of each migrated shot gather m(x, ω_i), so the spectral crosstalk product m(x, ω_i)m(x, ω_j) is proportional to δ_ij and therefore vanishes unless i = j. Our results in applying this method to 2D marine data for a SEG/EAGE salt model show better resolved images than standard migration computed at about 1/10th of the cost. Similar results are achieved after applying this method to synthetic data for a 3D SEG/EAGE salt model, except the acquisition geometry is similar to that of a marine OBS survey. Here, the speedup of this method over conventional migration is more than 10. We conclude that multisource migration for a marine geometry can be successfully achieved by a frequency-division encoding strategy, as long as crosstalk-prone sources are segregated in their spectral content. This is both the strength and the potential limitation of this method. © 2012 European Association of Geoscientists & Engineers.
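The orthogonality argument above can be illustrated with a toy 1D sketch (not a migration code): two "shots" confined to disjoint frequency bands have an exactly vanishing cross-term, so their blend can be demultiplexed without crosstalk. Band placement and signal content below are arbitrary assumptions.

```python
import numpy as np

# Frequency-division idea in 1D: signals band-limited to disjoint bands are
# orthogonal (zero spectral crosstalk product) and separate exactly after blending.
rng = np.random.default_rng(4)
n = 256
spec = rng.normal(size=n) + 1j * rng.normal(size=n)

band1 = np.zeros(n, bool); band1[10:40] = True   # band assigned to shot 1
band2 = np.zeros(n, bool); band2[60:90] = True   # band assigned to shot 2

s1 = np.fft.ifft(np.where(band1, spec, 0))
s2 = np.fft.ifft(np.where(band2, spec, 0))
blended = s1 + s2                                # "supergather" of both shots

# Demultiplex by band-pass filtering the blend: shot 1 is recovered exactly
r1 = np.fft.ifft(np.where(band1, np.fft.fft(blended), 0))
cross = np.vdot(s1, s2)                          # crosstalk product, ~0 by Parseval
print(abs(cross), np.abs(r1 - s1).max())
```

Both printed values are at floating-point noise level, which is the 1D analogue of the δ_ij statement in the abstract.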

  1. CHARACTERIZING THE PSYCHOLOGICAL STATE PRODUCED BY LSD.

    Science.gov (United States)

    KATZ, MARTIN M.; AND OTHERS

    The development and components of lysergic acid diethylamide (LSD)-produced psychological states are investigated. The subjects were paid volunteers from the Patuxent Institution, a treatment center for emotionally unstable criminal offenders. In one study, groups of 23 subjects received LSD, an amphetamine, or a placebo. In the second study, 11…

  2. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    Science.gov (United States)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.

  3. Least-squares fit of a linear combination of functions

    Directory of Open Access Journals (Sweden)

    Niraj Upadhyay

    2013-12-01

    Full Text Available We propose that given a data set $S=\{(x_i,y_i)\mid i=1,2,\dots,n\}$ and real-valued functions $\{f_\alpha(x)\mid \alpha=1,2,\dots,m\}$, the least-squares fit vector $A=\{a_\alpha\}$ for $y=\sum_\alpha a_\alpha f_\alpha(x)$ is $A = (F^TF)^{-1}F^TY$, where $[F_{i\alpha}]=[f_\alpha(x_i)]$. We test this formalism by deriving the algebraic expressions of the regression coefficients in $y = ax + b$ and in $y = ax^2 + bx + c$. As a practical application, we successfully arrive at the coefficients in the semi-empirical mass formula of nuclear physics. The formalism is {\it generic}: it has the potential of being applicable to any {\it type} of $\{x_i\}$ as long as there exist appropriate $\{f_\alpha\}$. The method can be exploited with a CAS or an object-oriented language and is excellently suited to parallel processing.
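The estimator $A = (F^TF)^{-1}F^TY$ can be sketched directly in NumPy; the quadratic test case below uses illustrative data (not from the paper) and recovers the coefficients of $y = ax^2 + bx + c$.

```python
import numpy as np

def lstsq_fit(x, y, basis):
    """Coefficients A minimizing ||F A - y||^2, with F[i, a] = basis[a](x[i])."""
    F = np.column_stack([f(x) for f in basis])
    # Normal equations A = (F^T F)^{-1} F^T y; np.linalg.lstsq is the
    # numerically safer alternative for ill-conditioned bases.
    return np.linalg.solve(F.T @ F, F.T @ y)

# Noiseless quadratic: the fit recovers the known coefficients
x = np.linspace(-2.0, 2.0, 50)
y = 3.0 * x**2 - 1.5 * x + 0.5
A = lstsq_fit(x, y, [lambda t: t**2, lambda t: t, lambda t: np.ones_like(t)])
print(np.round(A, 6))  # ≈ [3, -1.5, 0.5]
```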

  4. Robust Homography Estimation Based on Nonlinear Least Squares Optimization

    Directory of Open Access Journals (Sweden)

    Wei Mou

    2014-01-01

    Full Text Available The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but perfect matching cannot always be achieved. To deal with this problem, we propose a nonlinear least squares optimization approach to compute the homography such that false matches have little or no effect on the computed homography. Unlike normal homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be simultaneously identified while the homography is computed. Experiments show that the proposed approach performs well even in the presence of a large number of outliers.
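A minimal sketch of the general idea: robust homography fitting by nonlinear least squares with a sublinear (soft-L1) loss, so gross outliers carry little weight. This uses only geometric reprojection error, not the paper's descriptor-similarity term, and all data, parameter values, and the choice of `scipy.optimize.least_squares` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def project(h, pts):
    """Apply a 3x3 homography (8 free params, H[2,2] = 1) to Nx2 points."""
    H = np.append(h, 1.0).reshape(3, 3)
    q = (H @ np.c_[pts, np.ones(len(pts))].T).T
    return q[:, :2] / q[:, 2:3]

def fit_homography(src, dst):
    res = lambda h: (project(h, src) - dst).ravel()
    h0 = np.eye(3).ravel()[:8]                 # start from the identity map
    # soft_l1 loss grows sublinearly, damping the pull of gross outliers
    return least_squares(res, h0, loss='soft_l1', f_scale=1.0).x

rng = np.random.default_rng(0)
H_true = np.array([1.0, 0.1, 5.0, -0.05, 0.9, -3.0, 1e-4, -2e-4])
src = rng.uniform(-50, 50, size=(40, 2))
dst = project(H_true, src)
dst[:5] += rng.uniform(20, 40, size=(5, 2))    # 5 grossly wrong matches
h = fit_homography(src, dst)
print(np.abs(project(h, src[5:]) - dst[5:]).max())  # inlier error stays small
```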

  5. Commutative discrete filtering on unstructured grids based on least-squares techniques

    International Nuclear Information System (INIS)

    Haselbacher, Andreas; Vasilyev, Oleg V.

    2003-01-01

    The present work is concerned with the development of commutative discrete filters for unstructured grids and contains two main contributions. First, building on the work of Marsden et al. [J. Comp. Phys. 175 (2002) 584], a new commutative discrete filter based on least-squares techniques is constructed. Second, a new analysis of the discrete commutation error is carried out. The analysis indicates that the discrete commutation error is not only dependent on the number of vanishing moments of the filter weights, but also on the order of accuracy of the discrete gradient operator. The results of the analysis are confirmed by grid-refinement studies

  6. Proton Exchange Membrane Fuel Cell Modelling Using Moving Least Squares Technique

    Directory of Open Access Journals (Sweden)

    Radu Tirnovan

    2009-07-01

    Full Text Available The proton exchange membrane fuel cell (PEMFC), with its low polluting emissions, is a strong alternative to traditional electrical power sources for automotive applications or for small stationary consumers. This paper presents a numerical method for fuel cell modelling based on moving least squares (MLS). Experimental data have been used to develop an approximate model of the PEMFC as a function of the current density, air inlet pressure, and operating temperature of the fuel cell. The method can be applied to the modelling of other fuel cell sub-systems, such as the compressor, and can be used for off-line or on-line identification of the PEMFC stack.
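A hedged 1D sketch of the moving least squares idea: at each query point a local linear polynomial is fit by weighted least squares, with a weight function that moves with the query point. The data are synthetic, not PEMFC measurements, and the Gaussian weight and bandwidth are arbitrary choices.

```python
import numpy as np

def mls(xq, xd, yd, h=0.05):
    """Moving least squares: local linear fit around each query point."""
    out = np.empty_like(xq)
    for i, x in enumerate(xq):
        w = np.exp(-((xd - x) / h) ** 2)       # weight function centred on x
        B = np.column_stack([np.ones_like(xd), xd - x])
        # weighted normal equations (B^T W B) c = B^T W y
        c = np.linalg.solve(B.T @ (w[:, None] * B), B.T @ (w * yd))
        out[i] = c[0]                          # local polynomial evaluated at x
    return out

xd = np.linspace(0.0, 1.0, 40)
yd = np.sin(2 * np.pi * xd)                    # stand-in for measured data
xq = np.linspace(0.1, 0.9, 9)
err = np.abs(mls(xq, xd, yd) - np.sin(2 * np.pi * xq)).max()
print(err)  # small approximation error at interior query points
```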

  7. SPECTROPOLARIMETRY OF THE CLASSICAL T TAURI STAR BP TAU

    International Nuclear Information System (INIS)

    Chen, Wei; Johns-Krull, Christopher M.

    2013-01-01

    We implement a least-squares deconvolution (LSD) code to study magnetic fields on cool stars. We first apply our code to high-resolution optical echelle spectra of 53 Cam (a magnetic Ap star) and three well-studied cool stars (Arcturus, 61 Cyg A, and ξ Boo A) as well as the Sun (by observing the asteroid Vesta) as tests of the code and the instrumentation. Our analysis is based on several hundred photospheric lines spanning the wavelength range 5000 Å to 9000 Å. We then apply our LSD code to six nights of data on the Classical T Tauri Star BP Tau. A maximum longitudinal field of 370 ± 80 G is detected from the photospheric lines on BP Tau. A 1.8 kG dipole tilted at 129° with respect to the rotation axis and a 1.4 kG octupole tilted at 104° with respect to the rotation axis, both with a filling factor of 0.25, best fit our LSD Stokes V profiles. Measurements of several emission lines (He I 5876 Å, Ca II 8498 Å, and 8542 Å) show the presence of strong magnetic fields in the line formation regions of these lines, which are believed to be the base of the accretion footpoints. The field strength measured from these lines shows night-to-night variability consistent with rotation of the star
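The LSD step itself reduces to one weighted least-squares solve, Z = (MᵀS²M)⁻¹MᵀS²Y in the notation of Donati et al. (1997), where M encodes the line mask and S² the inverse noise variances. A toy sketch with a synthetic mask and mean profile (line positions, weights, and noise level are made up for illustration):

```python
import numpy as np

nv, npix = 21, 400
line_pos = np.array([60, 150, 230, 310])    # pixel centres of mask lines
weights  = np.array([1.0, 0.6, 0.8, 0.4])   # mask line weights

# M[i, k]: contribution of mean-profile bin k to spectrum pixel i
M = np.zeros((npix, nv))
for p, w in zip(line_pos, weights):
    for k in range(nv):
        M[p - nv // 2 + k, k] += w

z_true = np.exp(-0.5 * ((np.arange(nv) - nv // 2) / 3.0) ** 2)  # common profile
rng = np.random.default_rng(1)
y = M @ z_true + rng.normal(0, 0.01, npix)   # noisy synthetic spectrum
S2 = np.eye(npix) / 0.01**2                  # inverse-variance weights

z = np.linalg.solve(M.T @ S2 @ M, M.T @ S2 @ y)  # LSD mean profile
print(np.abs(z - z_true).max())                  # profile recovered to noise level
```

Averaging over the mask lines is what boosts the effective signal-to-noise ratio relative to any single line.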

  8. SPECTROPOLARIMETRY OF THE CLASSICAL T TAURI STAR BP TAU

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Wei; Johns-Krull, Christopher M., E-mail: wc2@rice.edu, E-mail: cmj@rice.edu [Department of Physics and Astronomy, Rice University, Houston, TX 77005 (United States)

    2013-10-20

    We implement a least-squares deconvolution (LSD) code to study magnetic fields on cool stars. We first apply our code to high-resolution optical echelle spectra of 53 Cam (a magnetic Ap star) and three well-studied cool stars (Arcturus, 61 Cyg A, and ξ Boo A) as well as the Sun (by observing the asteroid Vesta) as tests of the code and the instrumentation. Our analysis is based on several hundred photospheric lines spanning the wavelength range 5000 Å to 9000 Å. We then apply our LSD code to six nights of data on the Classical T Tauri Star BP Tau. A maximum longitudinal field of 370 ± 80 G is detected from the photospheric lines on BP Tau. A 1.8 kG dipole tilted at 129° with respect to the rotation axis and a 1.4 kG octupole tilted at 104° with respect to the rotation axis, both with a filling factor of 0.25, best fit our LSD Stokes V profiles. Measurements of several emission lines (He I 5876 Å, Ca II 8498 Å, and 8542 Å) show the presence of strong magnetic fields in the line formation regions of these lines, which are believed to be the base of the accretion footpoints. The field strength measured from these lines shows night-to-night variability consistent with rotation of the star.

  9. Medicinal chemistry insights in the discovery of novel LSD1 inhibitors.

    Science.gov (United States)

    Wang, Xueshun; Huang, Boshi; Suzuki, Takayoshi; Liu, Xinyong; Zhan, Peng

    2015-01-01

    LSD1 is an epigenetic modulator associated with transcriptional regulation of genes involved in a broad spectrum of key cellular processes, and its activity is often altered under pathological conditions. LSD1 inhibitors are considered to be candidates for therapy of cancer, viral diseases and neurodegeneration. Many LSD1 inhibitors with various scaffolds have been disclosed, and a few potent molecules are in different stages of clinical development. In this review, we summarize recent biological findings on the roles of LSD1 and the current understanding of the clinical significance of LSD1, and focus on the medicinal chemistry strategies used in the design and development of LSD1 inhibitors as drug-like epigenetic modulators since 2012, including a brief consideration of structure-activity relationships.

  10. Lsd1 Ablation Triggers Metabolic Reprogramming of Brown Adipose Tissue

    Directory of Open Access Journals (Sweden)

    Delphine Duteil

    2016-10-01

    Full Text Available Previous work indicated that lysine-specific demethylase 1 (Lsd1 can positively regulate the oxidative and thermogenic capacities of white and beige adipocytes. Here we investigate the role of Lsd1 in brown adipose tissue (BAT and find that BAT-selective Lsd1 ablation induces a shift from oxidative to glycolytic metabolism. This shift is associated with downregulation of BAT-specific and upregulation of white adipose tissue (WAT)-selective gene expression. This results in the accumulation of di- and triacylglycerides and culminates in a profound whitening of BAT in aged Lsd1-deficient mice. Further studies show that Lsd1 maintains BAT properties via a dual role. It activates BAT-selective gene expression in concert with the transcription factor Nrf1 and represses WAT-selective genes through recruitment of the CoREST complex. In conclusion, our data uncover Lsd1 as a key regulator of gene expression and metabolic function in BAT.

  11. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-10-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  12. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    Science.gov (United States)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  13. Least square method of estimation of ecological half-lives of radionuclides in sediments

    International Nuclear Information System (INIS)

    Ranade, A.K.; Pandey, M.; Datta, D.; Ravi, P.M.

    2012-01-01

    Long term behavior of radionuclides in the environment is an important issue for estimating probable radiological consequences and associated risks. It is also useful for evaluating the potential use of contaminated areas and the possible effectiveness of remediation activities. The long term behavior is quantified by means of the ecological half-life, a parameter that aggregates all processes except radioactive decay which cause a decrease of activity in a specific medium. The processes contributing to the ecological half-life depend upon the environmental conditions of the medium involved. A fitting model based on a least-squares regression approach was used to evaluate the ecological half-life. This least-squares method has to be run several times to evaluate the number of ecological half-lives present in the medium for the radionuclide. The case study data considered here are for 137Cs in Mumbai Harbour Bay and show the trend of 137Cs over the years at a location in the bay. The first iteration of the model gives an ecological half-life of 4.94 y; subsequent runs, assessed by a goodness-of-fit test, determine whether additional ecological half-lives are present. The paper presents a methodology for evaluating the ecological half-life and exemplifies it with a case study of 137Cs. (author)
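The single-component fit described above can be sketched as an ordinary least-squares regression on log activity. The synthetic record below merely reproduces the quoted 4.94 y ecological half-life and assumes the 30.17 y physical half-life of 137Cs; it is not the Mumbai Harbour Bay data.

```python
import numpy as np

t = np.arange(0.0, 12.0)              # sampling epochs (years)
lam_eco = np.log(2) / 4.94            # ecological rate (case-study value)
lam_phys = np.log(2) / 30.17          # 137Cs radioactive decay constant (1/y)
A = 100.0 * np.exp(-(lam_eco + lam_phys) * t)   # synthetic activity record

# One least-squares pass: the slope of log A gives the total decline rate;
# subtracting the physical decay constant leaves the ecological component.
slope, intercept = np.polyfit(t, np.log(A), 1)
t_eco = np.log(2) / (-slope - lam_phys)
print(round(t_eco, 2))  # -> 4.94
```

In the paper's iterative scheme, a poor goodness of fit for this single-exponential model would trigger a refit with additional exponential components.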

  14. Short-term traffic flow prediction model using particle swarm optimization–based combined kernel function-least squares support vector machine combined with chaos theory

    Directory of Open Access Journals (Sweden)

    Qiang Shang

    2016-08-01

    Full Text Available Short-term traffic flow prediction is an important part of intelligent transportation systems research and applications. For further improving the accuracy of short-time traffic flow prediction, a novel hybrid prediction model (multivariate phase space reconstruction–combined kernel function-least squares support vector machine), based on multivariate phase space reconstruction and a combined kernel function-least squares support vector machine, is proposed. The C-C method is used to determine the optimal time delay and the optimal embedding dimension of the traffic variables' (flow, speed, and occupancy) time series for phase space reconstruction. The G-P method is selected to calculate the correlation dimension of the attractor, which is an important index for judging the chaotic characteristics of the traffic variables' series. The optimal input form of the combined kernel function-least squares support vector machine model is determined by multivariate phase space reconstruction, and the model's parameters are optimized by the particle swarm optimization algorithm. Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The experimental results suggest that the new proposed model yields better predictions compared with similar models (combined kernel function-least squares support vector machine, multivariate phase space reconstruction–generalized kernel function-least squares support vector machine, and phase space reconstruction–combined kernel function-least squares support vector machine), which indicates that the new proposed model exhibits stronger prediction ability and robustness.

  15. Multisource least-squares reverse-time migration with structure-oriented filtering

    Science.gov (United States)

    Fan, Jing-Wen; Li, Zhen-Chun; Zhang, Kai; Zhang, Min; Liu, Xue-Tong

    2016-09-01

    The technology of simultaneous-source acquisition of seismic data excited by several sources can significantly improve the data collection efficiency. However, direct imaging of simultaneous-source data or blended data may introduce crosstalk noise and affect the imaging quality. To address this problem, we introduce a structure-oriented filtering operator as preconditioner into the multisource least-squares reverse-time migration (LSRTM). The structure-oriented filtering operator is a nonstationary filter along structural trends that suppresses crosstalk noise while maintaining structural information. The proposed method uses the conjugate-gradient method to minimize the mismatch between predicted and observed data, while effectively attenuating the interference noise caused by exciting several sources simultaneously. Numerical experiments using synthetic data suggest that the proposed method can suppress the crosstalk noise and produce highly accurate images.
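The conjugate-gradient data-misfit minimization at the core of least-squares migration can be sketched generically as CGLS: conjugate gradients applied to the normal equations of ||Gm − d||². Here a small random matrix stands in for the Born modelling operator, and no structure-oriented preconditioner is included.

```python
import numpy as np

def cgls(G, d, niter=60):
    """Conjugate gradient least squares: minimize ||G m - d||^2."""
    m = np.zeros(G.shape[1])
    r = d - G @ m                 # data residual
    s = G.T @ r                   # gradient of the misfit (up to sign)
    p = s.copy()
    gamma = s @ s
    for _ in range(niter):
        q = G @ p
        alpha = gamma / (q @ q)   # exact line search along p
        m += alpha * p
        r -= alpha * q
        s = G.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

rng = np.random.default_rng(5)
G = rng.normal(size=(120, 60))    # toy stand-in for the modelling operator
m_true = rng.normal(size=60)      # "reflectivity" model
d = G @ m_true                    # noiseless "observed" data
m = cgls(G, d)
print(np.abs(m - m_true).max())   # converges to the true model
```

In LSRTM, `G` and `G.T` would be the (adjoint) wave-equation modelling and migration operators applied matrix-free, and a preconditioner such as the paper's structure-oriented filter would be applied to `p` each iteration.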

  16. 21 CFR 862.3580 - Lysergic acid diethylamide (LSD) test system.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Lysergic acid diethylamide (LSD) test system. 862... Test Systems § 862.3580 Lysergic acid diethylamide (LSD) test system. (a) Identification. A lysergic acid diethylamide (LSD) test system is a device intended to measure lysergic acid diethylamide, a...

  17. Analysis of psilocin, bufotenine and LSD in hair.

    Science.gov (United States)

    Martin, Rafaela; Schürenkamp, Jennifer; Gasse, Angela; Pfeiffer, Heidi; Köhler, Helga

    2015-03-01

    A method for the simultaneous extraction of the hallucinogens psilocin, bufotenine, lysergic acid diethylamide (LSD) as well as iso-LSD, nor-LSD and O-H-LSD from hair with hydrochloric acid and methanol is presented. Clean-up of the hair extracts is performed with solid phase extraction using a mixed-mode cation exchanger. Extracts are measured with liquid chromatography coupled with electrospray tandem mass spectrometry. The method was successfully validated according to the guidelines of the 'Society of Toxicological and Forensic Chemistry' (GTFCh). To obtain reference material hair was soaked in a solution of the analytes in dimethyl sulfoxide/methanol to allow incorporation into the hair. These fortified hair samples were used for method development and can be employed as quality controls. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. An improved conjugate gradient scheme to the solution of least squares SVM.

    Science.gov (United States)

    Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya

    2005-03-01

    The least-squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparisons with other existing algorithms.
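The linear system in question (the Suykens-style LS-SVM dual) can be written down and solved directly with a dense solver, which is what iterative schemes like the one above accelerate. The RBF kernel, hyperparameters, and two-blob data below are illustrative assumptions.

```python
import numpy as np

# LS-SVM classifier as one linear system:
#   [ 0      y^T          ] [ b     ]   [ 0 ]
#   [ y   Omega + I/gamma ] [ alpha ] = [ 1 ],  Omega_ij = y_i y_j K(x_i, x_j)

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))              # RBF kernel matrix
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = np.outer(y, y) * K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    return sol[0], sol[1:]                        # bias b, dual coefficients alpha

def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
    d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    return np.sign(K @ (alpha * y) + b)

rng = np.random.default_rng(2)
X = np.r_[rng.normal(-2, 0.4, (20, 2)), rng.normal(2, 0.4, (20, 2))]
y = np.r_[-np.ones(20), np.ones(20)]
b, alpha = lssvm_train(X, y)
acc = (lssvm_predict(X, y, alpha, b, X) == y).mean()
print(acc)  # separable blobs are classified correctly
```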

  19. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    Science.gov (United States)

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

    In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address such a problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. As different from the traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Alterations of consciousness and mystical-type experiences after acute LSD in humans.

    Science.gov (United States)

    Liechti, Matthias E; Dolder, Patrick C; Schmid, Yasmin

    2017-05-01

    Lysergic acid diethylamide (LSD) is used recreationally and in clinical research. Acute mystical-type experiences that are acutely induced by hallucinogens are thought to contribute to their potential therapeutic effects. However, no data have been reported on LSD-induced mystical experiences and their relationship to alterations of consciousness. Additionally, LSD dose- and concentration-response functions with regard to alterations of consciousness are lacking. We conducted two placebo-controlled, double-blind, cross-over studies using oral administration of 100 and 200 μg LSD in 24 and 16 subjects, respectively. Acute effects of LSD were assessed using the 5 Dimensions of Altered States of Consciousness (5D-ASC) scale after both doses and the Mystical Experience Questionnaire (MEQ) after 200 μg. On the MEQ, 200 μg LSD induced mystical experiences that were comparable to those in patients who underwent LSD-assisted psychotherapy but were fewer than those reported for psilocybin in healthy subjects or patients. On the 5D-ASC scale, LSD produced higher ratings of blissful state, insightfulness, and changed meaning of percepts after 200 μg compared with 100 μg. Plasma levels of LSD were not positively correlated with its effects, with the exception of ego dissolution at 100 μg. Mystical-type experiences were infrequent after LSD, possibly because of the set and setting used in the present study. LSD may produce greater or different alterations of consciousness at 200 μg (i.e., a dose that is currently used in psychotherapy in Switzerland) compared with 100 μg (i.e., a dose used in imaging studies). Ego dissolution may reflect plasma levels of LSD, whereas more robustly induced effects of LSD may not result in such associations.

  1. LSD1 is Required for Hair Cell Regeneration in Zebrafish.

    Science.gov (United States)

    He, Yingzi; Tang, Dongmei; Cai, Chengfu; Chai, Renjie; Li, Huawei

    2016-05-01

    Lysine-specific demethylase 1 (LSD1/KDM1A) plays an important role in complex cellular processes such as differentiation, proliferation, apoptosis, and cell cycle progression. It has recently been demonstrated that during development, downregulation of LSD1 inhibits cell proliferation, modulates the expression of cell cycle regulators, and reduces hair cell formation in the zebrafish lateral line, which suggests that LSD1-mediated epigenetic regulation plays a key role in the development of hair cells. However, the role of LSD1 in hair cell regeneration after hair cell loss remains poorly understood. Here, we demonstrate the effect of LSD1 on hair cell regeneration following neomycin-induced hair cell loss. We show that the LSD1 inhibitor trans-2-phenylcyclopropylamine (2-PCPA) significantly decreases the regeneration of hair cells in zebrafish after neomycin damage. In addition, immunofluorescent staining demonstrates that 2-PCPA administration suppresses supporting cell proliferation and alters cell cycle progression. Finally, in situ hybridization shows that 2-PCPA significantly downregulates the expression of genes related to Wnt/β-catenin and Fgf activation. Altogether, our data suggest that downregulation of LSD1 significantly decreases hair cell regeneration after neomycin-induced hair cell loss through inactivation of the Wnt/β-catenin and Fgf signaling pathways. Thus, LSD1 plays a critical role in hair cell regeneration and might represent a novel biomarker and potential therapeutic approach for the treatment of hearing loss.

  2. EXPALS, Least Square Fit of Linear Combination of Exponential Decay Function

    International Nuclear Information System (INIS)

    Douglas Gardner, C.

    1980-01-01

    1 - Description of problem or function: This program fits by least squares a function which is a linear combination of real exponential decay functions. The function is y(k) = summation over j of a(j) * exp(-lambda(j) * k). Values of the independent variable (k) and the dependent variable y(k) are specified as input data. Weights may be specified as input information or set by the program (w(k) = 1/y(k)). 2 - Method of solution: The Prony-Householder iteration method is used. For unequally-spaced data, a number of interpolation options are provided. This revision includes an option to call a differential correction subroutine REFINE to improve the approximation to unequally-spaced data when equal-interval interpolation is faulty. If convergence is achieved, the probable errors in the computed parameters are calculated also. 3 - Restrictions on the complexity of the problem: Generally, it is desirable to have at least 10n observations where n equals the number of terms and to input k+n significant figures if k significant figures are expected
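The Prony step can be sketched as two linear least-squares solves: one for the coefficients of the linear recurrence satisfied by equally spaced samples of $y(k)=\sum_j a_j e^{-\lambda_j k}$, and one for the amplitudes once the decay rates are known. This omits EXPALS's interpolation options and the REFINE differential correction; the test signal is illustrative.

```python
import numpy as np

def prony(y, n):
    """Fit y_k = sum_j a_j * mu_j^k (n terms) on equally spaced samples."""
    m = len(y) - n
    # Linear prediction: y[k+n] = -(c_1 y[k+n-1] + ... + c_n y[k])
    H = np.column_stack([y[n - 1 - j : n - 1 - j + m] for j in range(n)])
    c, *_ = np.linalg.lstsq(H, -y[n:], rcond=None)
    mu = np.roots(np.r_[1.0, c]).real            # real decay factors assumed
    V = mu[None, :] ** np.arange(len(y))[:, None]
    a, *_ = np.linalg.lstsq(V, y, rcond=None)    # amplitudes by least squares
    return a, -np.log(mu)                        # amplitudes, decay constants

k = np.arange(20.0)
y = 2.0 * np.exp(-0.3 * k) + 5.0 * np.exp(-1.0 * k)
a, lam = prony(y, 2)
order = np.argsort(lam)
print(np.round(lam[order], 4), np.round(a[order], 4))  # ≈ [0.3, 1.0], [2, 5]
```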

  3. Sulfur Speciation of Crude Oils by Partial Least Squares Regression Modeling of Their Infrared Spectra

    NARCIS (Netherlands)

    de Peinder, P.; Visser, T.; Wagemans, R.W.P.; Blomberg, J.; Chaabani, H.; Soulimani, F.; Weckhuysen, B.M.

    2013-01-01

    Research has been carried out to determine the feasibility of partial least-squares regression (PLS) modeling of infrared (IR) spectra of crude oils as a tool for fast sulfur speciation. The study is a continuation of a previously developed method to predict long and short residue properties of

  4. The current strain distribution in the North China Basin of eastern China by least-squares collocation

    Science.gov (United States)

    Wu, J. C.; Tang, H. W.; Chen, Y. Q.; Li, Y. X.

    2006-07-01

    In this paper, the velocities of 154 stations obtained in 2001 and 2003 GPS survey campaigns are applied to formulate a continuous velocity field by the least-squares collocation method. The strain rate field obtained by the least-squares collocation method shows more clear deformation patterns than that of the conventional discrete triangle method. The significant deformation zones obtained are mainly located in three places, to the north of Tangshan, between Tianjing and Shijiazhuang, and to the north of Datong, which agree with the places of the Holocene active deformation zones obtained by geological investigations. The maximum shear strain rate is located at latitude 38.6°N and longitude 116.8°E, with a magnitude of 0.13 ppm/a. The strain rate field obtained can be used for earthquake prediction research in the North China Basin.

  5. Dark Classics in Chemical Neuroscience: Lysergic Acid Diethylamide (LSD).

    Science.gov (United States)

    Nichols, David E

    2018-03-01

    Lysergic acid diethylamide (LSD) is one of the most potent psychoactive agents known, producing dramatic alterations of consciousness after submilligram (≥20 μg) oral doses. Following the accidental discovery of its potent psychoactive effects in 1943, it was supplied by Sandoz Laboratories as an experimental drug that might be useful as an adjunct for psychotherapy, or to give psychiatrists insight into the mental processes in their patients. The finding of serotonin in the mammalian brain in 1953, and its structural resemblance to LSD, quickly led to ideas that serotonin in the brain might be involved in mental disorders, initiating rapid research interest in the neurochemistry of serotonin. LSD proved to be physiologically very safe and nonaddictive, with a very low incidence of adverse events when used in controlled experiments. Widely hailed by psychiatry as a breakthrough in the 1950s and early 1960s, clinical research with LSD ended by about 1970, when it was formally placed into Schedule 1 of the Controlled Substances Act of 1970 following its growing popularity as a recreational drug. Within the past 5 years, clinical research with LSD has begun in Europe, but there has been none in the United States. LSD is proving to be a powerful tool to help understand brain dynamics when combined with modern brain imaging methods. It remains to be seen whether therapeutic value for LSD can be confirmed in controlled clinical trials, but promising results have been obtained in small pilot trials of depression, anxiety, and addictions using psilocybin, a related psychedelic molecule.

  6. Least square methods and covariance matrix applied to the relative efficiency calibration of a Ge(Li) detector

    International Nuclear Information System (INIS)

    Geraldo, L.P.; Smith, D.L.

    1989-01-01

    The methodology of covariance matrices and least squares methods has been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices which serve to properly represent uncertainties of experimental data are discussed. Calibration data fitting using least squares methods has been performed for a particular experimental data set. (author) [pt
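    The fitting step can be sketched as a generalized least-squares solve that propagates a full data covariance matrix into the parameter covariance (a minimal illustration with invented calibration numbers, not the authors' data or procedure):

```python
import numpy as np

def gls_fit(A, y, C):
    """Generalized least squares: minimize (y - A b)^T C^{-1} (y - A b).

    Returns the parameter vector and its covariance matrix.
    """
    Ci = np.linalg.inv(C)
    M = A.T @ Ci @ A
    beta = np.linalg.solve(M, A.T @ Ci @ y)
    return beta, np.linalg.inv(M)

# toy calibration: log-efficiency linear in log-energy, correlated uncertainties
logE = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = 2.0 - 0.8 * logE                             # exact "measurements"
A = np.column_stack([np.ones_like(logE), logE])  # design matrix
C = 0.01 * (np.eye(5) + 0.3 * np.ones((5, 5)))   # correlated covariance
beta, cov_beta = gls_fit(A, y, C)
```

    The off-diagonal terms of `C` are what distinguish this from an ordinary weighted fit: correlated uncertainties change the parameter covariance even when the data are exact.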

  7. Masses and fission barriers of nuclei in the LSD model

    Energy Technology Data Exchange (ETDEWEB)

    Pomorski, Krzysztof

    2009-07-01

    The recently developed Lublin-Strasbourg Drop (LSD) model, together with microscopic corrections, is very successful in describing many features of nuclei. In addition to the classical liquid drop model terms, the LSD contains a curvature term proportional to A{sup 1/3}. The r.m.s. deviation of the LSD binding energies of 2766 isotopes with Z,N>7 from the experimental ones is only 0.698 MeV. It turns out that the LSD model also gives a satisfactory prediction of fission barrier heights. In addition, it was found that taking into account the deformation dependence of the congruence energy proposed by Myers and Swiatecki brings the LSD-model barrier heights significantly closer to the experimental data for light isotopes, while the fission barriers for heavy nuclei remain nearly unchanged and agree well with experiment. It was also shown that the saddle point masses of transactinides from {sup 232}Th to {sup 250}Cf evaluated using the LSD differ by less than 0.67 MeV from the experimental data.

  8. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To verify the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbors (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
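    The classifier idea (code a test sample as a non-negative combination of training samples and assign the class whose columns reconstruct it best) can be sketched generically; the tiny dictionary and function name below are invented for illustration and are not the paper's features or data:

```python
import numpy as np
from scipy.optimize import nnls

def nnls_classify(D, labels, x):
    """Classify x by its non-negative least-squares code over dictionary D.

    D has one training sample per column; the predicted class is the one
    whose columns alone reconstruct x with the smallest residual.
    """
    coef, _ = nnls(D, x)
    best, best_res = None, np.inf
    for c in sorted(set(labels)):
        mask = np.array([lab == c for lab in labels], dtype=float)
        res = np.linalg.norm(x - D @ (coef * mask))
        if res < best_res:
            best, best_res = c, res
    return best

# toy "features": class-0 samples lie along e1, class-1 samples along e2
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.9],
              [0.0, 0.1, 0.1, 0.0]])
labels = [0, 0, 1, 1]
pred = nnls_classify(D, labels, np.array([0.95, 0.05, 0.02]))
```

    In practice the columns of `D` would be LBP or raw-pixel feature vectors of training faces.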

  9. Intelligent Quality Prediction Using Weighted Least Square Support Vector Regression

    Science.gov (United States)

    Yu, Yaojun

    A novel quality prediction method with a mobile time window is proposed for small-batch production processes, based on weighted least squares support vector regression (LS-SVR). The design steps and learning algorithm are also addressed. In the method, weighted LS-SVR is taken as the intelligent kernel, with which the small-batch learning problem is solved well: nearer samples in the history data are given larger weights, while farther samples are given smaller weights. A typical machining process of cutting bearing outer races is carried out and the real measured data are used in a comparative experiment. The experimental results demonstrate that the prediction error of the weighted LS-SVR based model is only 20%-30% of that of the standard LS-SVR based one under the same conditions. It provides a better candidate for quality prediction of small-batch production processes.
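    A weighted LS-SVR can be sketched in its standard dual form, where per-sample weights enter the diagonal of the KKT system (a generic numpy illustration with an invented decaying-weight scheme, not the paper's model or data):

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def weighted_lssvr_fit(x, y, weights, gamma=100.0, sigma=0.5):
    """Weighted LS-SVR in dual form: solve the linear KKT system
    [0  1^T     ] [b    ]   [0]
    [1  K + V^-1] [alpha] = [y],   V = diag(gamma * weights).
    """
    n = y.size
    K = rbf_kernel(x, x, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * weights))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvr_predict(x_train, b, alpha, x_new, sigma=0.5):
    return rbf_kernel(x_new, x_train, sigma) @ alpha + b

# mobile time window: recent samples get weight ~1, older ones decay
x = np.linspace(0.0, 4.0, 40)
y = np.sin(x)
weights = 0.9 ** np.arange(40)[::-1]    # index 39 is the newest sample
b, alpha = weighted_lssvr_fit(x, y, weights)
yhat = lssvr_predict(x, b, alpha, x)
```

    Larger weights shrink the corresponding diagonal regularization, so the fit tracks recent samples more closely than old ones.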

  10. Reversal learning enhanced by lysergic acid diethylamide (LSD)

    Science.gov (United States)

    King, A.R.; Martin, I.L.; Arabella Melville, K.

    1974-01-01

    1 Small doses of lysergic acid diethylamide (LSD) (12.5-50 μg/kg) consistently facilitated learning of a brightness discrimination reversal. 2 2-Bromo-lysergic acid diethylamide (BOL-148), a structural analogue of LSD, with similar peripheral anti-5-hydroxytryptamine activity but no psychotomimetic properties, had no effect in this learning situation at a similar dose (25 μg/kg). 3 LSD, but not BOL-148, caused a small but significant increase in brain 5-hydroxytryptamine levels, but had no effect on the levels of catecholamines in the brain at 25 μg/kg. PMID:4458849

  11. [The substance experience, a history of LSD].

    Science.gov (United States)

    Beck, François; Bonnet, Nicolas

    2013-04-01

    This article reviews recent knowledge on LSD from various disciplines, including pharmacology, sociology and epidemiology. d-Lysergic acid diethylamide (LSD) is a particularly powerful hallucinogenic substance. It produces perceptual distortions and auditory, visual and tactile hallucinations. Rarely used (only 1.7% of people aged 15-64 have tried it in their lifetime), this very powerful drug generates strong apprehension in the general population, but ethnographic studies show that its image is rather good among illicit drug users. This representation relies both on the effects of the substance itself and on the history of LSD, which is closely linked to the counterculture of the years 1960-1970. © 2013 médecine/sciences – Inserm / SRMS.

  12. Development and validation of a rapid turboflow LC-MS/MS method for the quantification of LSD and 2-oxo-3-hydroxy LSD in serum and urine samples of emergency toxicological cases.

    Science.gov (United States)

    Dolder, Patrick C; Liechti, Matthias E; Rentsch, Katharina M

    2015-02-01

    Lysergic acid diethylamide (LSD) is a widely used recreational drug. The aim of the present study was to develop a quantitative turboflow LC-MS/MS method for rapid quantification of LSD and its main metabolite 2-oxo-3-hydroxy LSD (O-H-LSD) in serum and urine in emergency toxicological cases, without time-consuming extraction steps. The method was developed on an ion-trap LC-MS/MS instrument coupled to a turbulent-flow extraction system. The validation data showed no significant matrix effects, and no ion suppression was observed in serum or urine. Mean intraday accuracy and precision for LSD were 101 and 6.84% in urine samples and 97.40 and 5.89% in serum, respectively. For O-H-LSD, the respective values were 97.50 and 4.99% in urine and 107 and 4.70% in serum. Mean interday accuracy and precision for LSD were 100 and 8.26% in urine and 101 and 6.56% in serum, respectively. For O-H-LSD, the respective values were 101 and 8.11% in urine and 99.8 and 8.35% in serum. The lower limit of quantification for LSD was determined to be 0.1 ng/ml. LSD concentrations in serum were expected to be up to 8 ng/ml and O-H-LSD concentrations in urine up to 250 ng/ml. The new method was accurate and precise in the range of expected serum and urine concentrations in patients with a suspected LSD intoxication. Until now, the method has been applied in five cases of suspected LSD intoxication, where intake of the drug was verified four times with LSD concentrations in serum in the range of 1.80-14.70 ng/ml and once with an LSD concentration of 1.25 ng/ml in urine. In the serum of two patients, the O-H-LSD concentration was determined to be 0.99 and 0.45 ng/ml. In the urine of a third patient, the O-H-LSD concentration was 9.70 ng/ml.

  13. Characterization of behavioral and endocrine effects of LSD on zebrafish.

    Science.gov (United States)

    Grossman, Leah; Utterback, Eli; Stewart, Adam; Gaikwad, Siddharth; Chung, Kyung Min; Suciu, Christopher; Wong, Keith; Elegante, Marco; Elkhayat, Salem; Tan, Julia; Gilder, Thomas; Wu, Nadine; Dileo, John; Cachat, Jonathan; Kalueff, Allan V

    2010-12-25

    Lysergic acid diethylamide (LSD) is a potent hallucinogenic drug that strongly affects animal and human behavior. Although adult zebrafish (Danio rerio) are emerging as a promising neurobehavioral model, the effects of LSD on zebrafish have not been investigated previously. Several behavioral paradigms (the novel tank, observation cylinder, light-dark box, open field, T-maze, social preference and shoaling tests), as well as modern video-tracking tools and whole-body cortisol assay were used to characterize the effects of acute LSD in zebrafish. While lower doses (5-100 microg/L) did not affect zebrafish behavior, 250 microg/L LSD increased top dwelling and reduced freezing in the novel tank and observation cylinder tests, also affecting spatiotemporal patterns of activity (as assessed by 3D reconstruction of zebrafish traces and ethograms). LSD evoked mild thigmotaxis in the open field test, increased light behavior in the light-dark test, reduced the number of arm entries and freezing in the T-maze and social preference test, without affecting social preference. In contrast, LSD affected zebrafish shoaling (increasing the inter-fish distance in a group), and elevated whole-body cortisol levels. Overall, our findings show sensitivity of zebrafish to LSD action, and support the use of zebrafish models to study hallucinogenic drugs of abuse. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  14. LSD Increases Primary Process Thinking via Serotonin 2A Receptor Activation

    Directory of Open Access Journals (Sweden)

    Rainer Kraehenmann

    2017-11-01

    Full Text Available Rationale: Stimulation of serotonin 2A (5-HT2A) receptors by lysergic acid diethylamide (LSD) and related compounds such as psilocybin has previously been shown to increase primary process thinking – an ontologically and evolutionarily early, implicit, associative, and automatic mode of thinking which typically occurs during altered states of consciousness such as dreaming. However, it is still largely unknown whether LSD induces primary process thinking under placebo-controlled, standardized experimental conditions and whether these effects are related to subjective experience and 5-HT2A receptor activation. Therefore, this study aimed to test the hypotheses that LSD increases primary process thinking and that primary process thinking depends on 5-HT2A receptor activation and is related to subjective drug effects. Methods: Twenty-five healthy subjects performed an audio-recorded mental imagery task 7 h after drug administration during three drug conditions: placebo, LSD (100 mcg orally), and LSD together with the 5-HT2A receptor antagonist ketanserin (40 mg orally). The main outcome variable in this study was primary index (PI), a formal measure of primary process thinking in the imagery reports. State of consciousness was evaluated using the Altered State of Consciousness (5D-ASC) rating scale. Results: LSD, compared with placebo, significantly increased primary index (p < 0.001, Bonferroni-corrected). The LSD-induced increase in primary index was positively correlated with LSD-induced disembodiment (p < 0.05, Bonferroni-corrected) and blissful state (p < 0.05, Bonferroni-corrected) on the 5D-ASC. Both LSD-induced increases in primary index and changes in state of consciousness were fully blocked by ketanserin. Conclusion: LSD induces primary process thinking via activation of 5-HT2A receptors and in relation to disembodiment and blissful state. Primary process thinking appears to crucially organize inner experiences during both dreams and psychedelic states of consciousness.

  15. LSD Increases Primary Process Thinking via Serotonin 2A Receptor Activation

    Science.gov (United States)

    Kraehenmann, Rainer; Pokorny, Dan; Aicher, Helena; Preller, Katrin H.; Pokorny, Thomas; Bosch, Oliver G.; Seifritz, Erich; Vollenweider, Franz X.

    2017-01-01

    Rationale: Stimulation of serotonin 2A (5-HT2A) receptors by lysergic acid diethylamide (LSD) and related compounds such as psilocybin has previously been shown to increase primary process thinking – an ontologically and evolutionarily early, implicit, associative, and automatic mode of thinking which typically occurs during altered states of consciousness such as dreaming. However, it is still largely unknown whether LSD induces primary process thinking under placebo-controlled, standardized experimental conditions and whether these effects are related to subjective experience and 5-HT2A receptor activation. Therefore, this study aimed to test the hypotheses that LSD increases primary process thinking and that primary process thinking depends on 5-HT2A receptor activation and is related to subjective drug effects. Methods: Twenty-five healthy subjects performed an audio-recorded mental imagery task 7 h after drug administration during three drug conditions: placebo, LSD (100 mcg orally) and LSD together with the 5-HT2A receptor antagonist ketanserin (40 mg orally). The main outcome variable in this study was primary index (PI), a formal measure of primary process thinking in the imagery reports. State of consciousness was evaluated using the Altered State of Consciousness (5D-ASC) rating scale. Results: LSD, compared with placebo, significantly increased primary index (p < 0.001, Bonferroni-corrected). The LSD-induced increase in primary index was positively correlated with LSD-induced disembodiment (p < 0.05, Bonferroni-corrected) and blissful state (p < 0.05, Bonferroni-corrected) on the 5D-ASC. Both LSD-induced increases in primary index and changes in state of consciousness were fully blocked by ketanserin. Conclusion: LSD induces primary process thinking via activation of 5-HT2A receptors and in relation to disembodiment and blissful state. Primary process thinking appears to crucially organize inner experiences during both dreams and psychedelic states of consciousness. PMID:29167644

  16. Geometric Least Square Models for Deriving [0,1]-Valued Interval Weights from Interval Fuzzy Preference Relations Based on Multiplicative Transitivity

    Directory of Open Access Journals (Sweden)

    Xuan Yang

    2015-01-01

    Full Text Available This paper presents a geometric least square framework for deriving [0,1]-valued interval weights from interval fuzzy preference relations. By analyzing the relationship among [0,1]-valued interval weights, multiplicatively consistent interval judgments, and planes, a geometric least square model is developed to derive a normalized [0,1]-valued interval weight vector from an interval fuzzy preference relation. Based on the difference ratio between two interval fuzzy preference relations, a geometric average difference ratio between one interval fuzzy preference relation and the others is defined and employed to determine the relative importance weights for individual interval fuzzy preference relations. A geometric least square based approach is further put forward for solving group decision making problems. A numerical example of individual decision making and a group decision making problem concerning the selection of enterprise resource planning software products are furnished to illustrate the effectiveness and applicability of the proposed models.

  17. Pseudoinverse preconditioners and iterative methods for large dense linear least-squares problems

    Directory of Open Access Journals (Sweden)

    Oskar Cahueñas

    2013-05-01

    Full Text Available We address the issue of approximating the pseudoinverse of the coefficient matrix for dynamically building preconditioning strategies for the numerical solution of large dense linear least-squares problems. The new preconditioning strategies are embedded into simple and well-known iterative schemes that avoid the use of the, usually ill-conditioned, normal equations. We analyze a scheme to approximate the pseudoinverse, based on Schulz iterative method, and also different iterative schemes, based on extensions of Richardson's method, and the conjugate gradient method, that are suitable for preconditioning strategies. We present preliminary numerical results to illustrate the advantages of the proposed schemes.
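    The Schulz iteration mentioned above can be sketched in a few lines of numpy (a minimal illustration under a standard scaling of the starting guess, not the authors' preconditioning code):

```python
import numpy as np

def schulz_pinv(A, iters=60):
    """Approximate the Moore-Penrose pseudoinverse by Schulz iteration:
    X_{k+1} = X_k (2 I - A X_k), with X_0 = A^T / (||A||_1 ||A||_inf).

    The scaling of X_0 ensures the eigenvalues of A X_0 lie in (0, 1],
    which guarantees convergence for full-column-rank A.
    """
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # tall, full column rank (almost surely)
X = schulz_pinv(A)
```

    In a preconditioning setting one would stop after a few iterations: a crude approximation of the pseudoinverse is often enough to accelerate the outer iterative solver.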

  18. Battery state-of-charge estimation using approximate least squares

    Science.gov (United States)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.

  19. Application of partial least squares near-infrared spectral classification in diabetic identification

    Science.gov (United States)

    Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang

    2014-11-01

    In order to identify diabetic patients using tongue near-infrared (NIR) spectra, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. 39 sample data of tongue-tip NIR spectra were harvested from healthy people and diabetic patients, respectively. After pretreatment of the reflectivity, the spectral data are set as the independent variable matrix and the classification information as the dependent variable matrix. The samples were divided into two groups, i.e., 53 samples as the calibration set and 25 as the prediction set, and PLS was then used to build the classification model. The model constructed from the 53 samples has a correlation of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples have a correlation of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification of healthy people and diabetic patients.
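    PLS classification of this kind (a continuous PLS score thresholded into classes) can be sketched with a small PLS1 implementation; the synthetic two-class "spectra" below are invented for illustration and are not the study's data:

```python
import numpy as np

def pls1_train(X, y, n_comp):
    """PLS1 regression coefficients via NIPALS-style deflation (one response)."""
    Xd, yd = X.copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd                       # weight vector for this component
        w /= np.linalg.norm(w)
        t = Xd @ w                          # scores
        tt = t @ t
        p = Xd.T @ t / tt                   # X loadings
        W.append(w); P.append(p); q.append(yd @ t / tt)
        Xd -= np.outer(t, p)                # deflate
        yd -= q[-1] * t
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(q))

# two-class toy "spectra": labels coded -1/+1, classify by sign of the score
rng = np.random.default_rng(3)
X0 = rng.normal(-2.0, 0.5, size=(20, 6))
X1 = rng.normal(+2.0, 0.5, size=(20, 6))
X = np.vstack([X0, X1])
y = np.r_[-np.ones(20), np.ones(20)]
Xc = X - X.mean(axis=0)                     # PLS assumes centered data
B = pls1_train(Xc, y, n_comp=2)
pred = np.sign(Xc @ B)
```

    With wavelength counts far exceeding sample counts, as in NIR spectroscopy, the low number of latent components is what makes the regression well-posed.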

  20. Exploring the limits of cryospectroscopy: Least-squares based approaches for analyzing the self-association of HCl

    Science.gov (United States)

    De Beuckeleer, Liene I.; Herrebout, Wouter A.

    2016-02-01

    To rationalize the concentration dependent behavior observed for a large spectral data set of HCl recorded in liquid argon, least-squares based numerical methods are developed and validated. In these methods, for each wavenumber a polynomial is used to mimic the relation between monomer concentrations and measured absorbances. Least-squares fitting of higher degree polynomials tends to overfit and thus leads to compensation effects where a contribution due to one species is compensated for by a negative contribution of another. The compensation effects are corrected for by carefully analyzing, using AIC and BIC information criteria, the differences observed between consecutive fittings when the degree of the polynomial model is systematically increased, and by introducing constraints prohibiting negative absorbances to occur for the monomer or for one of the oligomers. The method developed should allow other, more complicated self-associating systems to be analyzed with a much higher accuracy than before.
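    The degree-selection logic described above (fit polynomials of increasing degree and compare information criteria) can be sketched generically; the synthetic data and the Gaussian-error AIC form below are our illustrative assumptions, not the authors' spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.3, x.size)

def poly_aic(deg):
    """AIC of a degree-`deg` least-squares polynomial fit (Gaussian errors)."""
    coeffs = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    n_params = deg + 1
    return x.size * np.log(rss / x.size) + 2 * n_params

aics = {d: poly_aic(d) for d in range(6)}
best_degree = min(aics, key=aics.get)
```

    The 2k penalty is what prevents the overfitting (and the compensation effects between species) that a raw least-squares fit of ever-higher degree would produce; BIC simply replaces 2k with k·log(n).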

  1. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    Science.gov (United States)

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  2. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  3. Neurotoxicity and LSD treatment: a follow-up study of 151 patients in Denmark.

    Science.gov (United States)

    Larsen, Jens Knud

    2016-06-01

    LSD was introduced in psychiatry in the 1950s. Between 1960 and 1973, nearly 400 patients were treated with LSD in Denmark. By 1964, one homicide, two suicides and four suicide attempts had been reported. In 1986 the Danish LSD Damages Law was passed after complaints by only one patient. According to the Law, all 154 applicants received financial compensation for LSD-inflicted harm. The Danish State Archives has preserved the case material of 151 of the 154 applicants. Most of the patients suffered from severe side effects of the LSD treatment many years afterwards. In particular, two-thirds of the patients had flashbacks. With the recent interest in LSD therapy, we should consider the neurotoxic potential of LSD. © The Author(s) 2016.

  4. A least squares principle unifying finite element, finite difference and nodal methods for diffusion theory

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1987-01-01

    A least squares principle is described which uses a penalty function treatment of boundary and interface conditions. Appropriate choices of the trial functions and vectors employed in a dual representation of an approximate solution established complementary principles for the diffusion equation. A geometrical interpretation of the principles provides weighted residual methods for diffusion theory, thus establishing a unification of least squares, variational and weighted residual methods. The complementary principles are used with either a trial function for the flux or a trial vector for the current to establish for regular meshes a connection between finite element, finite difference and nodal methods, which can be exact if the mesh pitches are chosen appropriately. Whereas the coefficients in the usual nodal equations have to be determined iteratively, those derived via the complementary principles are given explicitly in terms of the data. For the further development of the connection between finite element, finite difference and nodal methods, some hybrid variational methods are described which employ both a trial function and a trial vector. (author)

  5. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah; Cohen, Albert; Migliorati, Giovanni; Nobile, Fabio; Tempone, Raul

    2015-01-01

    It has been shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone

  6. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms have been firstly tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data has been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.
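    Among the algorithms listed, the Wiener-type filter is the simplest to sketch; the following is a generic frequency-domain illustration on a synthetic two-echo trace (invented data and parameters, not the report's experimental set-up):

```python
import numpy as np

def wiener_deconvolve(y, h, snr=1e6):
    """Frequency-domain Wiener deconvolution: G = H* / (|H|^2 + 1/snr)."""
    n = y.size
    H = np.fft.rfft(h, n)
    Y = np.fft.rfft(y, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.irfft(G * Y, n)

# two reflector "echoes" blurred by a smooth pulse (circular convolution)
n = 128
x = np.zeros(n)
x[30], x[70] = 1.0, 0.5
h = np.zeros(n)
h[:3] = np.array([1.0, 2.0, 1.0]) / 4.0
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n)
x_hat = wiener_deconvolve(y, h, snr=1e6)
```

    The 1/snr term regularizes the division where |H| is small; note that this classical filter assumes a spatially invariant, minimum-phase-friendly pulse, exactly the assumptions the report identifies as problematic for real ultrasonic signals.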

  7. Emulating facial biomechanics using multivariate partial least squares surrogate models.

    Science.gov (United States)

    Wu, Tim; Martens, Harald; Hunter, Peter; Mithraratne, Kumar

    2014-11-01

    A detailed biomechanical model of the human face driven by a network of muscles is a useful tool in relating the muscle activities to facial deformations. However, lengthy computational times often hinder its applications in practical settings. The objective of this study is to replace precise but computationally demanding biomechanical model by a much faster multivariate meta-model (surrogate model), such that a significant speedup (to real-time interactive speed) can be achieved. Using a multilevel fractional factorial design, the parameter space of the biomechanical system was probed from a set of sample points chosen to satisfy maximal rank optimality and volume filling. The input-output relationship at these sampled points was then statistically emulated using linear and nonlinear, cross-validated, partial least squares regression models. It was demonstrated that these surrogate models can mimic facial biomechanics efficiently and reliably in real-time. Copyright © 2014 John Wiley & Sons, Ltd.

  8. Risk and Management Control: A Partial Least Square Modelling Approach

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

    Risk and economic theory go many years back (e.g. to Keynes & Knight 1921), and risk/uncertainty is one of the explanations for the existence of the firm (Coase, 1937). The financial crisis of recent years has re-accentuated risk and the need of coherence...... and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct a valid feed forward but also predictions for decision making including risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data...... and an external attitude dimension. The results have important implications both for management control research and for management control systems design, for the way accountants consider the element of risk in their different tasks, both operational and strategic. Specifically, it seems that different risk

  9. Optimization Method of Fusing Model Tree into Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Yu Fang

    2017-01-01

    Full Text Available Partial Least Squares (PLS) cannot adapt to the characteristics of data in many fields because of its own features: multiple independent variables, multiple dependent variables, and non-linearity. However, a Model Tree (MT), which is made up of many multiple-linear segments, adapts well to nonlinear functions. Based on this, a new method combining PLS and MT for analyzing and predicting data is proposed: it builds an MT from the principal components and explanatory variables extracted by PLS, and repeatedly extracts residual information to build further Model Trees until a satisfactory accuracy condition is met. Using data on the monarch drug of the maxingshigan decoction used to treat asthma and cough, together with two sample sets from the UCI Machine Learning Repository, the experimental results show that the explanatory and predictive ability of the new method is improved.

  10. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    Science.gov (United States)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. Using a biometric system based on facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fear, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification of facial expression. The MELS-SVM model, applied to our 185 different expression images of 10 persons, showed a high accuracy of 99.998% using the RBF kernel.

  11. LSD Now: 1973

    Science.gov (United States)

    Chunko, John A.

    1973-01-01

    LSD NOW is a nationwide, statistical survey and analysis of hallucinogenic drug use by individuals presently in formal educational surroundings. Analysis, concentrating on the extent and rationale related to the use of such drugs, now offers a deeper and more meaningful understanding of a particular facet of the drug culture. This understanding…

  12. Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions

    International Nuclear Information System (INIS)

    Zhu Menghua; Liu Lianggang; Qi Dongxu; You Zhong; Xu Aoao

    2009-01-01

    In this paper, a least-squares fitting method with cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in gamma-ray spectra. The derived procedure is simple and automatic. The results show that this method is better than the convolution method, with a sufficient reduction of statistical fluctuation. (authors)
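The record names the technique but not its mechanics. As a hedged sketch (not the authors' code), a cubic B-spline least-squares fit that smooths statistical fluctuations in a spectrum-like signal can be written with SciPy's `make_lsq_spline`; the peak shape, noise level, and knot placement below are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
true = np.exp(-0.5 * ((x - 5.0) / 0.8) ** 2)      # a smooth spectral "peak"
y = true + rng.normal(0.0, 0.05, x.size)          # add statistical fluctuations

# cubic (k=3) B-spline basis: boundary knots repeated k+1 times, plus
# evenly spaced interior knots chosen coarser than the data spacing
k = 3
t = np.r_[[x[0]] * (k + 1), np.linspace(1.0, 9.0, 15), [x[-1]] * (k + 1)]
spline = make_lsq_spline(x, y, t, k=k)            # least-squares coefficients
smoothed = spline(x)
```

Because the spline has far fewer degrees of freedom than there are channels, the fit averages out the noise while still tracking the peak.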

  13. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    KAUST Repository

    Migliorati, Giovanni; Nobile, Fabio; Tempone, Raul

    2015-01-01

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability

  14. Discrete least squares polynomial approximation with random evaluations - application to PDEs with Random parameters

    KAUST Repository

    Nobile, Fabio

    2015-01-07

    We consider a general problem F(u, y) = 0 where u is the unknown solution, possibly Hilbert space valued, and y a set of uncertain parameters. We specifically address the situation in which the parameter-to-solution map u(y) is smooth, but y could be very high (or even infinite) dimensional. In particular, we are interested in cases in which F is a differential operator, u a Hilbert space valued function, and y a distributed, space- and/or time-varying, random field. We aim at reconstructing the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial expansions, for the output of computer experiments. In the case of PDEs with random parameters, the metamodel is then used to approximate statistics of the output quantity. We discuss the stability of discrete least squares on random points and show convergence estimates both in expectation and in probability. We also present possible strategies to select, either a priori or by adaptive algorithms, sequences of approximating polynomial spaces that allow one to reduce, and in some cases break, the curse of dimensionality.

  15. Polynomial curve fitting for control rod worth using least square numerical analysis

    International Nuclear Information System (INIS)

    Muhammad Husamuddin Abdul Khalil; Mark Dennis Usang; Julia Abdul Karim; Mohd Amin Sharifuldin Salleh

    2012-01-01

    RTP must have sufficient excess reactivity to compensate for negative reactivity feedback effects such as those caused by the fuel temperature and power defects of reactivity and by fuel burn-up, and to allow full-power operation for a predetermined period of time. To compensate for this excess reactivity, it is necessary to introduce an amount of negative reactivity by adjusting or controlling the control rods at will. Control rod worth depends largely upon the value of the neutron flux at the location of the rod and is reflected by a polynomial curve. The purpose of this paper is to work out the polynomial curve fitting using least-squares numerical techniques via a MATLAB-compatible language. (author)
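As a minimal illustration of the idea (with a hypothetical S-shaped integral rod-worth curve, not RTP data), a least-squares polynomial fit in a MATLAB-like style is a one-liner with NumPy's `polyfit`:

```python
import numpy as np

# hypothetical integral control-rod worth (pcm) vs. rod position (fraction
# withdrawn): an S-curve following the axial flux shape, NOT measured RTP data
u = np.linspace(0.0, 1.0, 11)                       # rod position, 0..1
worth = 3000.0 * (u - np.sin(2.0 * np.pi * u) / (2.0 * np.pi))

coeffs = np.polyfit(u, worth, deg=5)                # least-squares polynomial fit
fitted = np.polyval(coeffs, u)
```

Normalizing the rod position to [0, 1] before fitting keeps the Vandermonde system well conditioned.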

  16. Decentralized Gauss-Newton method for nonlinear least squares on wide area network

    Science.gov (United States)

    Liu, Lanchao; Ling, Qing; Han, Zhu

    2014-10-01

    This paper presents a decentralized approach to the Gauss-Newton (GN) method for nonlinear least squares (NLLS) on a wide area network (WAN). In a multi-agent system, centralized GN for NLLS requires the global GN Hessian matrix to be available at a central computing unit, which may incur large communication overhead. In the proposed decentralized alternative, each agent only needs its local GN Hessian matrix to update iterates in cooperation with its neighbors. The detailed formulation of decentralized NLLS on a WAN is given, and the iteration at each agent is defined. The convergence property of the decentralized approach is analyzed, and numerical results validate the effectiveness of the proposed algorithm.
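The decentralized formulation is not reproduced in the record; the centralized GN iteration it builds on, x ← x − (JᵀJ)⁻¹Jᵀr, can be sketched on a toy exponential-fitting problem (the model and starting point are arbitrary choices, not from the paper):

```python
import numpy as np

def gauss_newton(resid, jac, x0, iters=30):
    """Plain (centralized) Gauss-Newton for min ||resid(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        J, r = jac(x), resid(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)   # GN Hessian approx: J^T J
    return x

# toy NLLS problem: fit y = a*exp(b*t) to noise-free data
t = np.linspace(0.0, 1.0, 30)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)

resid = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.column_stack([np.exp(x[1] * t),
                                 x[0] * t * np.exp(x[1] * t)])
a_hat, b_hat = gauss_newton(resid, jac, [1.5, -1.0])
```

In the decentralized variant described in the record, the global JᵀJ is never formed; each agent updates with its local block plus information exchanged with neighbors.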

  17. Transrepressive function of TLX requires the histone demethylase LSD1.

    Science.gov (United States)

    Yokoyama, Atsushi; Takezawa, Shinichiro; Schüle, Roland; Kitagawa, Hirochika; Kato, Shigeaki

    2008-06-01

    TLX is an orphan nuclear receptor (also called NR2E1) that regulates the expression of target genes by functioning as a constitutive transrepressor. The physiological significance of TLX in the cytodifferentiation of neural cells in the brain is known. However, the corepressors supporting the transrepressive function of TLX have yet to be identified. In this report, Y79 retinoblastoma cells were subjected to biochemical techniques to purify proteins that interact with TLX, and we identified LSD1 (also called KDM1), which appears to form a complex with CoREST and histone deacetylase 1. LSD1 interacted with TLX directly through its SWIRM and amine oxidase domains. LSD1 potentiated the transrepressive function of TLX through its histone demethylase activity as determined by a luciferase assay using a genomically integrated reporter gene. LSD1 and TLX were recruited to a TLX-binding site in the PTEN gene promoter, accompanied by the demethylation of H3K4me2 and deacetylation of H3. Knockdown of either TLX or LSD1 derepressed expression of the endogenous PTEN gene and inhibited cell proliferation of Y79 cells. Thus, the present study suggests that LSD1 is a prime corepressor for TLX.

  18. Prediction of earth rotation parameters based on improved weighted least squares and autoregressive model

    Directory of Open Access Journals (Sweden)

    Sun Zhangzhen

    2012-08-01

    Full Text Available In this paper, an improved weighted least squares (WLS) method, together with an autoregressive (AR) model, is proposed to improve the prediction accuracy of earth rotation parameters (ERP). Four weighting schemes are developed and the optimal power e for determining the weight elements is studied. The results show that the improved WLS-AR model can improve ERP prediction accuracy effectively, and that for different ERP prediction intervals, different weighting schemes should be chosen.
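A minimal sketch of the weighting idea, assuming a power-law scheme with exponent e applied to a noise-free linear toy series (not real ERP data, and not necessarily one of the paper's four schemes):

```python
import numpy as np

t = np.arange(50, dtype=float)
y = 2.0 + 0.3 * t                          # toy "ERP" series with a linear trend

e = 2.0                                    # weighting power to be tuned
w = ((t + 1.0) / t.size) ** e              # newer observations weighted more
A = np.column_stack([np.ones_like(t), t])

# weighted normal equations: (A^T W A) beta = A^T W y
W = np.diag(w)
beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

In the WLS-AR scheme, the WLS extrapolation is then combined with an AR model fitted to the residuals; that second stage is not shown here.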

  19. Development and validation of an ultra-fast and sensitive microflow liquid chromatography-tandem mass spectrometry (MFLC-MS/MS) method for quantification of LSD and its metabolites in plasma and application to a controlled LSD administration study in humans.

    Science.gov (United States)

    Steuer, Andrea E; Poetzsch, Michael; Stock, Lorena; Eisenbeiss, Lisa; Schmid, Yasmin; Liechti, Matthias E; Kraemer, Thomas

    2017-05-01

    Lysergic acid diethylamide (LSD) is a semi-synthetic hallucinogen that has gained popularity as a recreational drug and has been investigated as an adjunct to psychotherapy. Analysis of LSD represents a major challenge in forensic toxicology due to its instability, low drug concentrations, and short detection windows in biological samples. A new, fast, and sensitive microflow liquid chromatography (MFLC) tandem mass spectrometry method for the validated quantification of LSD, iso-LSD, 2-oxo-3-hydroxy-LSD (oxo-HO-LSD), and N-desmethyl-LSD (nor-LSD) was developed in plasma and applied to a controlled pharmacokinetic (PK) study in humans to test whether LSD metabolites would offer longer detection windows. Five hundred microlitres of plasma were extracted by solid-phase extraction. Analysis was performed on a Sciex Eksigent MFLC system coupled to a Sciex 5500 QTrap. The method was validated according to (inter)national guidelines. MFLC allowed for separation of the mentioned analytes within 3 minutes and limits of quantification of 0.01 ng/mL. Validation criteria were fulfilled for all analytes. PK data could be calculated for LSD, iso-LSD, and oxo-HO-LSD in all participants. Additionally, hydroxy-LSD (HO-LSD) and HO-LSD glucuronide could be qualitatively detected and PK determined in 11 and 8 subjects, respectively. Nor-LSD was only sporadically detected. Elimination half-lives of iso-LSD (median 12 h) and LSD metabolites (median 9, 7.4, 12, and 11 h for oxo-HO-LSD, HO-LSD, HO-LSD-gluc, and nor-LSD, respectively) exceeded those of LSD (median 4.2 h). However, screening for metabolites to increase detection windows in plasma does not seem constructive due to their very low concentrations. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.

    Science.gov (United States)

    Huang, Sheng-Juan; Yang, Guang-Hong

    2017-09-01

    This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.
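The fuzzy-system machinery aside, the minimum norm least squares solution itself is the Moore-Penrose pseudoinverse solution. A self-contained sketch (toy matrix, not taken from the paper):

```python
import numpy as np

# underdetermined system: infinitely many least-squares solutions exist;
# the Moore-Penrose pseudoinverse selects the one of minimum Euclidean norm
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([1.0, 1.0])

x_mn = np.linalg.pinv(A) @ b               # minimum norm least squares solution
```

`np.linalg.lstsq(A, b, rcond=None)[0]` returns the same vector; adding any null-space component of A gives another solution with strictly larger norm.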

  1. Increased Global Functional Connectivity Correlates with LSD-Induced Ego Dissolution.

    Science.gov (United States)

    Tagliazucchi, Enzo; Roseman, Leor; Kaelen, Mendel; Orban, Csaba; Muthukumaraswamy, Suresh D; Murphy, Kevin; Laufs, Helmut; Leech, Robert; McGonigle, John; Crossley, Nicolas; Bullmore, Edward; Williams, Tim; Bolstridge, Mark; Feilding, Amanda; Nutt, David J; Carhart-Harris, Robin

    2016-04-25

    Lysergic acid diethylamide (LSD) is a non-selective serotonin-receptor agonist that was first synthesized in 1938 and identified as (potently) psychoactive in 1943. Psychedelics have been used by indigenous cultures for millennia [1]; however, because of LSD's unique potency and the timing of its discovery (coinciding with a period of major discovery in psychopharmacology), it is generally regarded as the quintessential contemporary psychedelic [2]. LSD has profound modulatory effects on consciousness and was used extensively in psychological research and psychiatric practice in the 1950s and 1960s [3]. In spite of this, however, there have been no modern human imaging studies of its acute effects on the brain. Here we studied the effects of LSD on intrinsic functional connectivity within the human brain using fMRI. High-level association cortices (partially overlapping with the default-mode, salience, and frontoparietal attention networks) and the thalamus showed increased global connectivity under the drug. The cortical areas showing increased global connectivity overlapped significantly with a map of serotonin 2A (5-HT2A) receptor densities (the key site of action of psychedelic drugs [4]). LSD also increased global integration by inflating the level of communication between normally distinct brain networks. The increase in global connectivity observed under LSD correlated with subjective reports of "ego dissolution." The present results provide the first evidence that LSD selectively expands global connectivity in the brain, compromising the brain's modular and "rich-club" organization and, simultaneously, the perceptual boundaries between the self and the environment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. LSD: Still with Us after All These Years.

    Science.gov (United States)

    Henderson, Leigh A., Ed.; Glass, William J., Ed.

    This volume offers insight for parents, counselors, and educators as to why young people in the 1990s are using LSD--its appeal, the experience, and where kids are getting it. Current studies and anecdotes are woven with recent statistics to create a clear picture of contemporary LSD use. The introduction offers some history and background on the…

  3. Weighted least squares phase unwrapping based on the wavelet transform

    Science.gov (United States)

    Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia

    2007-01-01

    The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. It usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation iterative method is usually used to solve this large system; however, it is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in the separate coarse resolution levels speeds up the overall system convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.

  4. Quantification of LSD in illicit samples by high performance liquid chromatography

    Directory of Open Access Journals (Sweden)

    Pablo Alves Marinho

    2010-12-01

    Full Text Available In the present study, a method using high performance liquid chromatography to quantify LSD in blotter papers seized in Minas Gerais was optimized and validated. Linearity, precision, recovery, limits of detection and quantification, and selectivity were the parameters used to evaluate performance. The samples were extracted with methanol:water (1:1) in an ultrasound bath. Linearity between 0.05 and 20.00 μg/mL (0.5 and 200.0 μg of LSD/blotter) was observed, with satisfactory mean intra- and inter-assay precision (RSDr = 4.4% and RSDR = 6.4%, respectively) and with mean recoveries of 83.4% and 84.9% at the levels of 1.00 and 20.00 μg/mL (10 and 200 μg LSD/blotter). The limits of detection and quantification were 0.01 and 0.05 μg/mL, respectively (0.1 and 0.5 μg of LSD/blotter). Seized blotter samples (n = 22) were analyzed and a mean value of 67.55 μg of LSD/blotter (RSD = 27.5%) was found. Thus, the method showed satisfactory analytical performance and proved suitable as an analytical tool for LSD determination in illicit samples seized by police forces.

  5. Altered network hub connectivity after acute LSD administration

    Directory of Open Access Journals (Sweden)

    Felix Müller

    Full Text Available LSD is an ambiguous substance, said to mimic psychosis and to improve mental health in people suffering from anxiety and depression. Little is known about the neuronal correlates of altered states of consciousness induced by this substance. Limited previous studies indicated profound changes in the functional connectivity of resting-state networks after the administration of LSD. The current investigation attempts to replicate and extend those findings in an independent sample. In a double-blind, randomized, cross-over study, 100 μg LSD and placebo were orally administered to 20 healthy participants. Resting-state brain activity was assessed by functional magnetic resonance imaging. Within-network and between-network connectivity measures of ten established resting-state networks were compared between drug conditions. Complementary analyses were conducted using resting-state networks as sources in seed-to-voxel analyses. Acute LSD administration significantly decreased functional connectivity within visual, sensorimotor, and auditory networks and the default mode network. While between-network connectivity was widely increased and all investigated networks were affected to some extent, seed-to-voxel analyses consistently indicated increased connectivity between networks and subcortical (thalamus, striatum) and cortical (precuneus, anterior cingulate cortex) hub structures. These latter observations are consistent with findings on the importance of hubs in psychopathological states, especially in psychosis, and could underlie therapeutic effects of hallucinogens as proposed by a recent model. Keywords: LSD, fMRI, Functional connectivity, Networks, Hubs

  6. BER analysis of regularized least squares for BPSK recovery

    KAUST Repository

    Ben Atitallah, Ismail; Thrampoulidis, Christos; Kammoun, Abla; Al-Naffouri, Tareq Y.; Hassibi, Babak; Alouini, Mohamed-Slim

    2017-01-01

    This paper investigates the problem of recovering an n-dimensional BPSK signal x0 ∈ {−1, 1}n from an m-dimensional measurement vector y = Ax0 + z, where A and z are assumed to be Gaussian with iid entries. We consider two variants of decoders based on regularized least squares followed by hard thresholding: the case where the convex relaxation is from {−1, 1}n to ℝn, and the box-constrained case where the relaxation is to [−1, 1]n. For both cases, we derive an exact expression for the bit error probability when n and m grow simultaneously large at a fixed ratio. For the box-constrained case, we show that there exists a critical value of the SNR above which the optimal regularizer is zero. On the other hand, regularization can further improve the performance of the box relaxation in the low to moderate SNR regimes. We also prove that the optimal regularizer in the bit error rate sense for the unboxed case is nothing but the MMSE detector.
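A hedged numerical sketch of the unboxed decoder analyzed above (regularized least squares followed by hard thresholding); the dimensions, noise level, and regularizer are arbitrary choices, and the paper's asymptotic BER expressions are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam = 100, 200, 0.1
x0 = rng.choice([-1.0, 1.0], size=n)            # BPSK signal
A = rng.normal(size=(m, n)) / np.sqrt(m)        # iid Gaussian sensing matrix
y = A @ x0 + 0.05 * rng.normal(size=m)          # noisy measurements

# regularized least squares, then hard-threshold to {-1, +1}
x_rls = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
x_hat = np.sign(x_rls)
ber = np.mean(x_hat != x0)
```

At this (high) SNR and oversampling ratio m/n = 2, the empirical bit error rate is essentially zero; the paper studies how the optimal lam varies with SNR.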

  8. LC-MS analysis of human urine specimens for 2-oxo-3-hydroxy LSD: method validation for potential interferants and stability study of 2-oxo-3-hydroxy LSD under various storage conditions.

    Science.gov (United States)

    Klette, Kevin L; Horn, Carl K; Stout, Peter R; Anderson, Cynthia J

    2002-01-01

    2-Oxo-3-hydroxy lysergic acid diethylamide (O-H-LSD), a major LSD metabolite, has previously been demonstrated to be a superior marker for identifying LSD use compared with the parent drug, LSD. Specifically, O-H-LSD analyzed using liquid chromatography-mass spectrometry has been reported to be present in urine at concentrations 16 to 43 times greater than LSD. To further support forensic application of this procedure, the specificity of the assay was assessed using compounds that have structural and chemical properties similar to O-H-LSD, common over-the-counter products, prescription drugs and some of their metabolites, and other drugs of abuse. Of the wide range of compounds studied, none were found to interfere with the detection of O-H-LSD or the internal standard 2-oxo-3-hydroxy lysergic acid methyl propylamide. The stability of O-H-LSD was investigated from 0 to 9 days at various temperatures, pH conditions, and exposures to fluorescent light. Additionally, the effect of long-term frozen storage and pH was investigated from 0 to 60 days. There was no significant loss of O-H-LSD under both refrigerated and frozen conditions within the normal human physiological pH range of urine (4.6-8.4). However, significant loss of O-H-LSD was observed in samples prepared at pH 4.6-8.4 and stored at room temperature or higher (24-50 degrees C).

  9. Lysine-specific demethylase 1 (LSD1) destabilizes p62 and inhibits autophagy in gynecologic malignancies.

    Science.gov (United States)

    Chao, Angel; Lin, Chiao-Yun; Chao, An-Ning; Tsai, Chia-Lung; Chen, Ming-Yu; Lee, Li-Yu; Chang, Ting-Chang; Wang, Tzu-Hao; Lai, Chyong-Huey; Wang, Hsin-Shih

    2017-09-26

    Lysine-specific demethylase 1 (LSD1) - also known as KDM1A - is the first identified histone demethylase. LSD1 is highly expressed in numerous human malignancies and has recently emerged as a target for anticancer drugs. Owing to the presence of several functional domains, we speculated that LSD1 could have additional functions other than histone demethylation. P62 - also termed sequestosome 1 (SQSTM1) - plays a key role in malignant transformation, apoptosis, and autophagy. Here, we show that high LSD1 expression promotes tumorigenesis in gynecologic malignancies. Notably, LSD1 inhibition with either siRNA or pharmacological agents activates autophagy. Mechanistically, LSD1 decreases p62 protein stability in a demethylation-independent manner. Inhibition of LSD1 reduces both tumor growth and p62 protein degradation in vivo. The combination of LSD1 inhibition and p62 knockdown exerts additive anticancer effects. We conclude that LSD1 destabilizes p62 and inhibits autophagy in gynecologic cancers. LSD1 inhibition reduces malignant cell growth and activates autophagy. The combination of LSD1 inhibition and autophagy blockade displays an additive inhibitory effect on cancer cell viability. A better understanding of the role played by p62 will shed more light on the anticancer effects of LSD1 inhibitors.

  10. PROPOSED MODIFICATIONS OF K2-TEMPERATURE RELATION AND LEAST SQUARES ESTIMATES OF BOD (BIOCHEMICAL OXYGEN DEMAND) PARAMETERS

    Science.gov (United States)

    A technique is presented for finding the least squares estimates for the ultimate biochemical oxygen demand (BOD) and rate coefficient for the BOD reaction without resorting to complicated computer algorithms or subjective graphical methods. This may be used in stream water quali...
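The record is truncated before the technique itself; for orientation only, the quantities being estimated are the ultimate BOD L and the rate coefficient k in the first-order model y(t) = L(1 − e^(−kt)). A generic nonlinear least-squares fit of that model (hypothetical data and a standard SciPy routine, not the paper's simplified technique) looks like:

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical BOD time series (mg/L) over incubation days
days = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0])
bod = np.array([6.2, 10.8, 14.1, 16.5, 18.2, 20.3, 21.6])

# first-order BOD model: y(t) = L * (1 - exp(-k*t))
model = lambda tt, L, k: L * (1.0 - np.exp(-k * tt))
(L_hat, k_hat), _ = curve_fit(model, days, bod, p0=(20.0, 0.2))
```

The paper's contribution is obtaining comparable least-squares estimates without such an iterative solver or graphical methods.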

  11. Identification of downstream metastasis-associated target genes regulated by LSD1 in colon cancer cells.

    Science.gov (United States)

    Chen, Jiang; Ding, Jie; Wang, Ziwei; Zhu, Jian; Wang, Xuejian; Du, Jiyi

    2017-03-21

    This study aims to identify downstream target genes regulated by lysine-specific demethylase 1 (LSD1) in colon cancer cells and to investigate the molecular mechanisms by which LSD1 influences invasion and metastasis of colon cancer. We obtained the expression changes of downstream target genes after small interfering RNA knockdown of LSD1 and after LSD1 overexpression via gene expression profiling in two human colon cancer cell lines. An Affymetrix Human Transcriptome Array 2.0 was used to identify differentially expressed genes (DEGs). We screened out LSD1-target genes associated with proliferation, metastasis, and invasion from the DEGs via Gene Ontology and Pathway Studio. Subsequently, four key genes (CABYR, FOXF2, TLE4, and CDH1) were computationally predicted as metastasis-related LSD1-target genes. ChIP-PCR was applied after RT-PCR and Western blot validations to detect the occupancy of LSD1 at its target gene promoters. A total of 3633 DEGs were significantly upregulated, and 4642 DEGs were downregulated in LSD1-silenced SW620 cells. A total of 4047 DEGs and 4240 DEGs were upregulated and downregulated in LSD1-overexpressed HT-29 cells, respectively. RT-PCR and Western blot validated the microarray analysis results. ChIP assay results demonstrated that LSD1 might be a negative regulator of the target genes CABYR and CDH1. The expression level of LSD1 is negatively correlated with mono- and dimethylation of histone H3 lysine 4 (H3K4) at the LSD1-target gene promoter regions. No significant mono- or dimethylation of H3 lysine 9 was detected at the promoter regions of CABYR and CDH1. LSD1 depletion contributed to the upregulation of CABYR and CDH1 through enhancing the dimethylation of H3K4 at the LSD1-target gene promoters. LSD1 overexpression mediated the downregulation of CABYR and CDH1 expression through decreasing the mono- and dimethylation of H3K4 at the LSD1-target gene promoters in colon cancer cells. CABYR and CDH1 might be potential LSD1-target genes in colon

  12. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    International Nuclear Information System (INIS)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo; Martinet, Philippe

    2008-01-01

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to lead the robot through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector, based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity in the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which used to require ad hoc searching in other works. As a result, the end-effector tracking error, which damped least squares inherits by introducing damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots
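The genetic-algorithm tuning and per-singular-vector damping are beyond a short sketch, but the core damped least-squares update, Δq = Jᵀ(JJᵀ + λ²I)⁻¹e, can be shown on a 2-link planar arm (the geometry, gains, and constant λ are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def dls_step(J, err, lam=0.1):
    """Damped least squares step: dq = J^T (J J^T + lam^2 I)^-1 err."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(m), err)

def fk(q):
    """Forward kinematics of a 2-link planar arm with unit link lengths."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def jac(q):
    """Analytic Jacobian of fk."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])

target = np.array([1.2, 0.8])          # reachable point (|target| < 2)
q = np.array([0.3, 0.3])               # initial joint angles
for _ in range(200):
    q = q + dls_step(jac(q), target - fk(q))
```

The damping term λ² bounds the joint-velocity magnitude near singular configurations at the cost of a small tracking error, the trade-off the paper's adaptive damping is designed to minimize.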

  14. Direct integral linear least square regression method for kinetic evaluation of hepatobiliary scintigraphy

    International Nuclear Information System (INIS)

    Shuke, Noriyuki

    1991-01-01

    In hepatobiliary scintigraphy, kinetic model analysis, which provides kinetic parameters such as the hepatic extraction or excretion rate, has been performed for the quantitative evaluation of liver function. In this analysis, unknown model parameters are usually determined using the nonlinear least squares regression method (NLS method), in which iterative calculation and initial estimates for the unknown parameters are required. As a simple alternative to the NLS method, the direct integral linear least squares regression method (DILS method), which can determine model parameters by a simple calculation without an initial estimate, is proposed and tested for applicability to the analysis of hepatobiliary scintigraphy. In order to see whether the DILS method could determine model parameters as well as the NLS method, and to determine an appropriate weight for the DILS method, simulated theoretical data based on prefixed parameters were fitted to a one-compartment model using both the DILS method with various weightings and the NLS method. The parameter values obtained were then compared with the prefixed values used for data generation. The effect of various weights on the error of the parameter estimates was examined, and the inverse of time was found to be the best weight for minimizing the error. When using this weight, the DILS method gave parameter values close to those obtained by the NLS method, and both sets of parameter values were very close to the prefixed values. With appropriate weighting, the DILS method could provide reliable parameter estimates that are relatively insensitive to data noise. In conclusion, the DILS method can be used as a simple alternative to the NLS method, providing reliable parameter estimates. (author)
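A minimal sketch of the integral-linearization idea behind a DILS-type method, for a one-compartment decay model with noise-free synthetic data (no input function, unweighted, so a simplification of the paper's setting): integrating y' = −ky gives y(t) = y(0) − k∫₀ᵗ y ds, so y is linear in its own cumulative integral and both parameters follow from a single linear least-squares solve, with no iteration or initial estimate:

```python
import numpy as np

k_true, y0_true = 0.4, 10.0
t = np.linspace(0.0, 10.0, 101)
y = y0_true * np.exp(-k_true * t)               # noise-free compartment curve

# cumulative integral of y by the trapezoidal rule
cumint = np.concatenate(([0.0],
                         np.cumsum((y[1:] + y[:-1]) / 2.0 * np.diff(t))))

# linear model y = y0 - k * cumint  ->  ordinary least squares
A = np.column_stack([np.ones_like(t), cumint])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y0_hat, k_hat = coef[0], -coef[1]
```

With noisy data, the weighting discussed in the abstract (inverse of time) would be applied to this linear regression.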

  15. LSD Dimensions: Use and Reuse of Linked Statistical Data

    NARCIS (Netherlands)

    Meroño-Peñuela, Albert

    2014-01-01

    RDF Data Cube (QB) has boosted the publication of Linked Statistical Data (LSD) on the Web, making them linkable to other related datasets and concepts following the Linked Data paradigm. In this demo we present LSD Dimensions, a web based application that monitors the usage of dimensions and codes

  16. A Generalized Least Squares Regression Approach for Computing Effect Sizes in Single-Case Research: Application Examples

    Science.gov (United States)

    Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.

    2011-01-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
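
    A minimal sketch of the idea: GLS on a phase-coded single-case series with AR(1) errors, implemented here as a Cochrane-Orcutt transform (an assumption for illustration; the paper's exact estimator may differ, and the simulated data are invented).

```python
import numpy as np

rng = np.random.default_rng(1)
n_a, n_b, rho = 20, 20, 0.5
# AR(1) errors, as typical for time-series single-case data
e = np.zeros(n_a + n_b)
for i in range(1, e.size):
    e[i] = rho * e[i - 1] + rng.normal(0.0, 0.5)
phase = np.r_[np.zeros(n_a), np.ones(n_b)]        # 0 = baseline, 1 = treatment
y = 2.0 + 3.0 * phase + e                          # true treatment effect = 3

X = np.column_stack([np.ones_like(phase), phase])
# Step 1: OLS, then estimate the lag-1 autocorrelation of the residuals
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ beta_ols
rho_hat = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)
# Step 2: Cochrane-Orcutt transform removes the AR(1) structure; OLS on the
# transformed data is then a feasible GLS estimate of the phase effect
y_t = y[1:] - rho_hat * y[:-1]
X_t = X[1:] - rho_hat * X[:-1]
beta_gls, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
# Illustrative standardized effect size: phase effect over residual SD
es = beta_gls[1] / np.std(r, ddof=2)
```
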

  17. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    Science.gov (United States)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
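
    For the TOA case, the squared-range linearization that CWLS builds on can be sketched as a plain (unconstrained) least-squares solve. The anchor layout and noise level are invented for illustration, and the actual CWLS estimator additionally enforces the constraint R = ‖x‖² that is relaxed here.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
x_true = np.array([3.0, 7.0])
rng = np.random.default_rng(2)
d = np.linalg.norm(anchors - x_true, axis=1) + rng.normal(0.0, 0.01, len(anchors))

# Squared-range model: d_i^2 = R - 2 a_i.x + ||a_i||^2, with R = ||x||^2
# treated as a free third unknown (plain LS; CWLS would constrain it)
A = np.column_stack([-2.0 * anchors, np.ones(len(anchors))])
b = d ** 2 - np.sum(anchors ** 2, axis=1)
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
x_hat = theta[:2]
```
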

  18. Obtention of the parameters of the Voigt function using the least square fit method

    International Nuclear Information System (INIS)

    Flores Ll, H.; Cabral P, A.; Jimenez D, H.

    1990-01-01

    The fundamental parameters of the Voigt function are determined: the Lorentzian width (Γ_L) and the Gaussian width (Γ_G), with an error below 1% for almost all cases in the intervals 0.01 ≤ Γ_L/Γ_G ≤ 1 and 0.3 ≤ Γ_G/Γ_L ≤ 1. This is achieved using the least squares fit method with an algebraic function, yielding a simple way to obtain the fundamental parameters of the Voigt function used in many spectroscopies. (Author)
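
    The paper's algebraic fitting function is not reproduced here, but the underlying task — recovering Γ_L and Γ_G from a profile by least squares — can be sketched with SciPy's voigt_profile, using the standard relations Γ_G = 2σ√(2 ln 2) and Γ_L = 2γ. The profile parameters below are arbitrary test values.

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

x = np.linspace(-5.0, 5.0, 200)
sigma_true, gamma_true = 0.8, 0.3
y = voigt_profile(x, sigma_true, gamma_true)      # noiseless synthetic line

# Least squares fit of the Voigt shape to the sampled profile
(sigma, gamma), _ = curve_fit(lambda xx, s, g: voigt_profile(xx, s, g),
                              x, y, p0=(1.0, 0.1))
fwhm_G = 2.0 * sigma * np.sqrt(2.0 * np.log(2.0))  # Gaussian width Γ_G
fwhm_L = 2.0 * gamma                               # Lorentzian width Γ_L
```
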

  19. Evaluation of unconfined-aquifer parameters from pumping test data by nonlinear least squares

    Science.gov (United States)

    Heidari, Manoutchehr; Wench, Allen

    1997-05-01

    Nonlinear least squares (NLS) with automatic differentiation was used to estimate aquifer parameters from drawdown data obtained from published pumping tests conducted in homogeneous, water-table aquifers. The method is based on a technique that seeks to minimize the squares of residuals between observed and calculated drawdown subject to bounds that are placed on the parameter of interest. The analytical model developed by Neuman for flow to a partially penetrating well of infinitesimal diameter situated in an infinite, homogeneous and anisotropic aquifer was used to obtain calculated drawdown. NLS was first applied to synthetic drawdown data from a hypothetical but realistic aquifer to demonstrate that the relevant hydraulic parameters (storativity, specific yield, and horizontal and vertical hydraulic conductivity) can be evaluated accurately. Next the method was used to estimate the parameters at three field sites with widely varying hydraulic properties. NLS produced unbiased estimates of the aquifer parameters that are close to the estimates obtained with the same data using a visual curve-matching approach. Small differences in the estimates are a consequence of subjective interpretation introduced in the visual approach.

  20. Deconvolution of Positrons' Lifetime spectra

    International Nuclear Information System (INIS)

    Calderin Hidalgo, L.; Ortega Villafuerte, Y.

    1996-01-01

    In this paper, we explain the iterative method previously developed for the deconvolution of Doppler broadening spectra using mathematical optimization theory. We also begin adapting and applying this method to the deconvolution of positron lifetime annihilation spectra.

  1. Comparison between results of solution of Burgers' equation and Laplace's equation by Galerkin and least-square finite element methods

    Science.gov (United States)

    Adib, Arash; Poorveis, Davood; Mehraban, Farid

    2018-03-01

    In this research, two equations are considered as examples of hyperbolic and elliptic equations, and two finite element methods are applied to solve them. The purpose of this research is to select the suitable method for solving each of the two equations. Burgers' equation is a hyperbolic equation: a pure advection (without diffusion) equation, one-dimensional and unsteady. A sudden shock wave is introduced to the model; this wave moves without deformation. Laplace's equation, in turn, is an elliptic equation, steady and two-dimensional. The solution of Laplace's equation in an earth dam is considered. By solving Laplace's equation, the head pressure and the value of seepage in the X and Y directions are calculated at different points of the earth dam. Finally, the water table in the earth dam is shown. For Burgers' equation, the least-squares method can show the movement of the wave with oscillation, but the Galerkin method cannot show it correctly (the best approach for solving Burgers' equation is to discretize space by the least-squares finite element method and time by the forward difference). For Laplace's equation, both the Galerkin and least-squares methods show the water table correctly in the earth dam.

  2. INTRAVAL project phase 2. Analysis of STRIPA 3D data by a deconvolution technique

    International Nuclear Information System (INIS)

    Ilvonen, M.; Hautojaervi, A.; Paatero, P.

    1994-09-01

    The data analysed in this report were obtained in tracer experiments performed from a specially excavated drift in good granite rock at the level of 360 m below the ground in the Stripa mine. Tracer transport paths from the injection points to the collecting sheets at the tunnel walls were tens of meters long. Data for six tracers that arrived in measurable concentrations were elaborated by different means of data analysis to reveal the transport behaviour of solutes in the rock fractures. Techniques like direct inversion of the data, Fourier analysis, Singular Value Decomposition (SVD) and non-negative least squares fitting (NNLS) were employed. A newly developed code based on a general-purpose approach for solving deconvolution-type or integral equation problems, Extreme Value Estimation (EVE), proved to be a very helpful tool in deconvolving impulse responses from the injection flow rates and break-through curves of tracers and assessing the physical confidence of the results. (23 refs., 33 figs.)
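
    The NNLS step mentioned above can be sketched as follows: discretize the convolution of the injection rate with an unknown, non-negative impulse response as a lower-triangular Toeplitz system and solve with scipy.optimize.nnls. The kernel shapes and time grid are illustrative assumptions, not the Stripa data.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

dt = 1.0
t = np.arange(0.0, 50.0, dt)
h_true = t * np.exp(-t / 5.0)              # assumed impulse response shape
h_true /= h_true.sum() * dt                # normalize to unit area
u = np.exp(-t / 10.0)                      # assumed injection flow rate
U = toeplitz(u, np.zeros_like(u)) * dt     # discrete convolution matrix
y = U @ h_true                             # simulated breakthrough curve

# Deconvolve: solve y = U h in the least-squares sense subject to h >= 0
h_hat, resid = nnls(U, y)
```

    The non-negativity constraint is what keeps the deconvolved impulse response physically meaningful (a residence-time distribution cannot go negative).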

  3. First-order system least-squares for the Helmholtz equation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.

    1996-12-31

    We apply the FOSLS methodology to the exterior Helmholtz equation Δp + k²p = 0. Several least-squares functionals, some of which include both H⁻¹(Ω) and L²(Ω) terms, are examined. We show that in a special subspace of [H(div; Ω) ∩ H(curl; Ω)] × H¹(Ω), each of these functionals is equivalent, independent of k, to a scaled H¹(Ω) norm of p and u = ∇p. This special subspace does not include the oscillatory near-nullspace components c·e^{ik(αx+βy)}, where c is a complex vector and where α² + β² = 1. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair (α, β), and which approximate c·e^{ik(αx+βy)} well on the coarser levels where bilinears cannot. We use several pairs of these parameters on each of these coarser levels so that several coarse grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution for Maxwell's equations will also be presented.

  4. Least Square Fast Learning Network for modeling the combustion efficiency of a 300WM coal-fired boiler.

    Science.gov (United States)

    Li, Guoqiang; Niu, Peifeng; Wang, Huaibao; Liu, Yongchao

    2014-03-01

    This paper presents a novel artificial neural network with a very fast learning speed, all of whose weights and biases are determined by applying the Least Square method twice, so it is called the Least Square Fast Learning Network (LSFLN). A further difference from conventional neural networks is that the output neurons of the LSFLN not only receive information from the hidden-layer neurons, but also receive the external information directly from the input neurons. To test the validity of the LSFLN, it is applied to 6 classical regression problems and also employed to model the functional relation between the combustion efficiency and the operating parameters of a 300WM coal-fired boiler. Experimental results show that, compared with other methods, the LSFLN achieves much better regression precision and generalization ability with far fewer hidden neurons and at a much faster learning speed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. AN ANIMAL MODEL OF SCHIZOPHRENIA BASED ON CHRONIC LSD ADMINISTRATION: OLD IDEA, NEW RESULTS

    OpenAIRE

    Marona-Lewicka, Danuta; Nichols, Charles D.; Nichols, David E.

    2011-01-01

    Many people who take LSD experience a second temporal phase of LSD intoxication that is qualitatively different, and was described by Daniel Freedman as “clearly a paranoid state.” We have previously shown that the discriminative stimulus effects of LSD in rats also occur in two temporal phases, with initial effects mediated by activation of 5-HT2A receptors (LSD30), and the later temporal phase mediated by dopamine D2-like receptors (LSD90). Surprisingly, we have now found that non-competiti...

  6. Classification of Hyperspectral Images Using Kernel Fully Constrained Least Squares

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-11-01

    Full Text Available As a widely used classifier, sparse representation classification (SRC) has shown good performance for hyperspectral image classification. Recent works have highlighted that it is the collaborative representation mechanism underlying SRC that makes SRC a highly effective technique for classification purposes. If the dimensionality and the discrimination capacity of a test pixel are high, other norms (e.g., the ℓ2-norm) can be used to regularize the coding coefficients instead of the sparsity-inducing ℓ1-norm. In this paper, we show that in the kernel space the nonnegativity constraint can also play the same role, and thus suggest the investigation of kernel fully constrained least squares (KFCLS) for hyperspectral image classification. Furthermore, in order to improve the classification performance of KFCLS by incorporating spatial-spectral information, we investigate two kinds of spatial-spectral methods using two regularization strategies: (1) the coefficient-level regularization strategy, and (2) the class-level regularization strategy. Experimental results conducted on four real hyperspectral images demonstrate the effectiveness of the proposed KFCLS, and show how to incorporate spatial-spectral information efficiently in the regularization framework.

  7. RCS Leak Rate Calculation with High Order Least Squares Method

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Kang, Young Kyu; Kim, Yang Ki

    2010-01-01

    As part of the action items for the Application of Leak Before Break (LBB), the RCS Leak Rate Calculation Program has been upgraded in Kori units 3 and 4. For real-time monitoring by operators, periodic calculation is needed, and a corresponding noise-reduction scheme is used. This topic has been an issue in Korea, so real-time RCS Leak Rate Calculation Programs have been upgraded and used in UCN units 3 and 4 and YGN units 1 and 2. For reduction of the noise in the signals, the Linear Regression Method was used in those programs. Linear regression is a powerful method for noise reduction, but the system is not static, with some alternative flow paths, and this produces mixed trend patterns in the input signal values. In this condition, the trend of the signal and the linear-regression average do not follow entirely the same pattern. In this study, a high-order least squares method is used to follow the trend of the signal, and the order of calculation is rearranged. The resulting calculation yields a reasonable trend, and the procedure is physically consistent.
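
    The core idea — replacing linear regression with a higher-order least squares fit so the smoothed signal can follow a curved trend — can be sketched with numpy.polyfit on a synthetic drifting signal (the signal model and noise level are illustrative assumptions, not plant data).

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 24.0, 200)                 # hours
trend = 0.05 * t ** 2 - 0.8 * t + 50.0          # slowly curving true trend
y = trend + rng.normal(0.0, 1.0, t.size)        # noisy measurements

smooth1 = np.polyval(np.polyfit(t, y, 1), t)    # first-order (linear) regression
smooth3 = np.polyval(np.polyfit(t, y, 3), t)    # higher-order least squares

mse1 = np.mean((smooth1 - trend) ** 2)
mse3 = np.mean((smooth3 - trend) ** 2)
```

    The higher-order fit tracks the curved trend that the first-order fit biases away from, while still averaging out the noise.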

  8. Application of the Least Squares Method in Axisymmetric Biharmonic Problems

    Directory of Open Access Journals (Sweden)

    Vasyl Chekurin

    2016-01-01

    Full Text Available An approach for solving axisymmetric biharmonic boundary value problems for a semi-infinite cylindrical domain is developed in this paper. On the lateral surface of the domain, homogeneous Neumann boundary conditions are prescribed. On the remaining part of the domain's boundary, four different sets of biharmonic boundary data are considered. To solve the formulated biharmonic problems, the method of least squares on the boundary combined with the method of homogeneous solutions is used. This reduces the problems to infinite systems of linear algebraic equations which can be solved by the reduction method. Convergence of the solution obtained with the developed approach was studied numerically on some characteristic examples. The developed approach can be used in particular to solve axisymmetric elasticity problems for cylindrical bodies whose heights are equal to or exceed their diameters, when normal and tangential tractions are prescribed on the lateral surface and various types of boundary conditions — in stresses, in displacements, or mixed — are given on the cylinder's end faces.

  9. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.

  10. A least-squares minimisation approach to depth determination from numerical second horizontal self-potential anomalies

    Science.gov (United States)

    Abdelrahman, El-Sayed Mohamed; Soliman, Khalid; Essa, Khalid Sayed; Abo-Ezz, Eid Ragab; El-Araby, Tarek Mohamed

    2009-06-01

    This paper develops a least-squares minimisation approach to determine the depth of a buried structure from numerical second horizontal derivative anomalies obtained from self-potential (SP) data using filters of successive window lengths. The method is based on using a relationship between the depth and a combination of observations at symmetric points with respect to the coordinate of the projection of the centre of the source in the plane of the measurement points with a free parameter (graticule spacing). The problem of depth determination from second derivative SP anomalies has been transformed into the problem of finding a solution to a non-linear equation of the form f(z)=0. Formulas have been derived for horizontal cylinders, spheres, and vertical cylinders. Procedures are also formulated to determine the electric dipole moment and the polarization angle. The proposed method was tested on synthetic noisy and real SP data. In the case of the synthetic data, the least-squares method determined the correct depths of the sources. In the case of practical data (SP anomalies over a sulfide ore deposit, Sariyer, Turkey and over a Malachite Mine, Jefferson County, Colorado, USA), the estimated depths of the buried structures are in good agreement with the results obtained from drilling and surface geology.

  11. Third-order least squares modelling of milling state term for improved computation of stability boundaries

    Directory of Open Access Journals (Sweden)

    C.G. Ozoegwu

    2016-01-01

    Full Text Available A general least squares model for the milling process state term is presented. A discrete map for milling stability analysis, based on the third-order case of the presented general least squares milling state term model, is first studied and compared with its third-order counterpart based on interpolation theory. Both the numerical rate of convergence and the chatter stability results of the two maps are compared using the single degree of freedom (1DOF) milling model. The numerical rate of convergence of the presented third-order model is also studied using the two degree of freedom (2DOF) milling process model. The comparison showed that the stability results from the two maps agree closely, but the presented map requires fewer calculations, leading to about 30% savings in computational time (CT). Earlier works have shown that the accuracy of milling stability analysis using the full-discretization method rises from the first-order to the second-order theory and continues to rise with the third-order theory. The present work confirms this trend. In conclusion, the method presented in this work will enable fast and accurate computation of stability diagrams for use by machinists.

  12. Quantification of anaesthetic effects on atrial fibrillation rate by partial least-squares

    International Nuclear Information System (INIS)

    Cervigón, R; Moreno, J; Pérez-Villacastín, J; Reilly, R B; Castells, F

    2012-01-01

    The mechanism underlying atrial fibrillation (AF) remains poorly understood: it is not settled whether multiple wandering propagation wavelets drift through both atria or whether hierarchical models apply. Some pharmacological drugs, known as antiarrhythmics, modify the cardiac ionic currents supporting the fibrillation process within the atria and may modify the AF propagation dynamics, terminating the fibrillation process. Other medications, theoretically non-antiarrhythmic, may slightly affect the fibrillation process through undefined mechanisms. We evaluated whether the most commonly used anaesthetic agent, propofol, affects AF patterns. Partial least-squares (PLS) analysis was performed to reduce the significant noise into the main latent variables and to find the differences between groups. The final results showed an excellent discrimination between groups, with slow atrial activity during the propofol infusion. (paper)

  13. Wavelength detection in FBG sensor networks using least squares support vector regression

    Science.gov (United States)

    Chen, Jing; Jiang, Hao; Liu, Tundong; Fu, Xiaoli

    2014-04-01

    A wavelength detection method for a wavelength division multiplexing (WDM) fiber Bragg grating (FBG) sensor network is proposed based on least squares support vector regression (LS-SVR). As a kind of promising machine learning technique, LS-SVR is employed to approximate the inverse function of the reflection spectrum. The LS-SVR detection model is established from the training samples, and then the Bragg wavelength of each FBG can be directly identified by inputting the measured spectrum into the well-trained model. We also discuss the impact of the sample size and the preprocess of the input spectrum on the performance of the training effectiveness. The results demonstrate that our approach is effective in improving the accuracy for sensor networks with a large number of FBGs.
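
    LS-SVR replaces the SVM quadratic program with a single linear (KKT) system. A self-contained 1-D sketch with an RBF kernel follows; the wavelength-vs-spectrum mapping is stood in for by a sine curve, and the kernel width and regularization constant are arbitrary choices, not the paper's tuned values.

```python
import numpy as np

def lssvr_fit(X, y, gamma=100.0, sigma=0.5):
    # LS-SVM regression: solve the KKT system
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2.0 * sigma ** 2))
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvr_predict(Xq, X, b, alpha, sigma=0.5):
    K = np.exp(-((Xq[:, None] - X[None, :]) ** 2) / (2.0 * sigma ** 2))
    return K @ alpha + b

X = np.linspace(0.0, 2.0 * np.pi, 40)   # stand-in for spectral samples
y = np.sin(X)                           # stand-in for the target wavelength map
b, alpha = lssvr_fit(X, y)
y_hat = lssvr_predict(X, X, b, alpha)
```

    Because the training step is just one linear solve, refitting when new calibration samples arrive is cheap — one reason LS-SVR suits sensor networks with many FBGs.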

  14. A Collocation Method by Moving Least Squares Applicable to European Option Pricing

    Directory of Open Access Journals (Sweden)

    M. Amirfakhrian

    2016-05-01

    Full Text Available The subject of the present inquiry is the numerical pricing of European options. To assess the numerical prices of European options, a scheme independent of any kind of mesh, powered instead by moving least squares (MLS) estimation, is constructed. In practical terms, first the time variable is discretized and then an MLS-powered method is applied for spatial approximation. As these steps, unlike other methods, do not rely on a mesh, one can firmly claim the scheme is to be categorized among mesh-less methods. At the end of the paper, various experiments are offered to demonstrate how efficient and powerful the introduced approach is.

  15. Least-squares approximation of an improper by a proper correlation matrix using a semi-infinite convex program

    NARCIS (Netherlands)

    Knol, Dirk L.; ten Berge, Jos M.F.

    1987-01-01

    An algorithm is presented for the best least-squares fitting correlation matrix approximating a given missing value or improper correlation matrix. The proposed algorithm is based on a solution for C. I. Mosier's oblique Procrustes rotation problem offered by J. M. F. ten Berge and K. Nevels (1977).

  16. Regression model of support vector machines for least squares prediction of crystallinity of cracking catalysts by infrared spectroscopy

    International Nuclear Information System (INIS)

    Comesanna Garcia, Yumirka; Dago Morales, Angel; Talavera Bustamante, Isneri

    2010-01-01

    The recent introduction of the least squares support vector machines method for regression purposes in the field of chemometrics has provided several advantages to linear and nonlinear multivariate calibration methods. The objective of this paper is to propose the use of the least squares support vector machine as an alternative multivariate calibration method for the prediction of the percentage of crystallinity of fluidized catalytic cracking catalysts by means of Fourier transform mid-infrared spectroscopy. A linear kernel was used in the calculations of the regression model. The optimization of its gamma parameter was carried out using the leave-one-out cross-validation procedure. The root mean square error of prediction was used to measure the performance of the model. The accuracy of the results obtained with the application of the method is in accordance with the uncertainty of the X-ray powder diffraction reference method. To assess the generalization capability of the developed method, a comparison study was carried out between the results achieved with the new model and those reached through the application of linear calibration methods. The developed method can be easily implemented in refinery laboratories.

  17. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    Science.gov (United States)

    Samsudin, R.; Saad, P.; Shabri, A.

    2010-06-01

    This paper proposes a novel hybrid forecasting model that combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM model performs the time series forecasting. In this study, the application of GLSSVM to monthly river flow forecasting for the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models, using long-term observations of monthly river flow discharge. The standard statistics, the root mean square error (RMSE) and the coefficient of correlation (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.

  18. The quantitation of 2-oxo-3-hydroxy lysergic acid diethylamide (O-H-LSD) in human urine specimens, a metabolite of LSD: comparative analysis using liquid chromatography-selected ion monitoring mass spectrometry and liquid chromatography-ion trap mass spectrometry.

    Science.gov (United States)

    Poch, G K; Klette, K L; Anderson, C

    2000-04-01

    This paper compares the potential forensic application of two sensitive and rapid procedures (liquid chromatography-mass spectrometry and liquid chromatography-ion trap mass spectrometry) for the detection and quantitation of 2-oxo-3-hydroxy lysergic acid diethylamide (O-H-LSD), a major LSD metabolite. O-H-LSD calibration curves for both procedures were linear over the concentration range 0-8,000 pg/mL with correlation coefficients (r²) greater than 0.99. The observed limit of detection (LOD) and limit of quantitation (LOQ) for O-H-LSD in both procedures was 400 pg/mL. Sixty-eight human urine specimens that had previously been found to contain LSD by gas chromatography-mass spectrometry were reanalyzed by both procedures for LSD and O-H-LSD. These specimens contained a mean concentration of O-H-LSD approximately 16 times higher than the LSD concentration. Because both LC methods produce similar results, either procedure can be readily adapted to O-H-LSD analysis for use in high-volume drug-testing laboratories. In addition, the possibility of significantly increasing the LSD detection time window by targeting this major LSD metabolite for analysis may influence other drug-free workplace programs to test for LSD.

  19. Temporal gravity field modeling based on least square collocation with short-arc approach

    Science.gov (United States)

    Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, with two main objectives. First, we seek the optimal method to estimate the accelerometer parameters, and second, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced based on the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed by the Geoforschungszentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  20. A complex linear least-squares method to derive relative and absolute orientations of seismic sensors

    OpenAIRE

    F. Grigoli; Simone Cesca; Torsten Dahm; L. Krieger

    2012-01-01

    Determining the relative orientation of the horizontal components of seismic sensors is a common problem that limits data analysis and interpretation for several acquisition setups, including linear arrays of geophones deployed in borehole installations or ocean bottom seismometers deployed at the seafloor. To solve this problem we propose a new inversion method based on a complex linear algebra approach. Relative orientation angles are retrieved by minimizing, in a least-squares sense, the l...

  1. Least Squares Approach to the Alignment of the Generic High Precision Tracking System

    Science.gov (United States)

    de Renstrom, Pawel Brückman; Haywood, Stephen

    2006-04-01

    A least squares method to solve a generic alignment problem of a high-granularity tracking system is presented. The algorithm is based on an analytical linear expansion and allows for multiple nested fits; e.g. imposing a common vertex for groups of particle tracks is of particular interest. We present a consistent and complete recipe to impose constraints on either implicit or explicit parameters. The method has been applied to the full simulation of a subset of the ATLAS silicon tracking system. The ultimate goal is to determine ≈35,000 degrees of freedom (DoFs). We present a limited-scale exercise exploring various aspects of the solution.
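
    Imposing constraints on fit parameters, as the alignment recipe above does, can be sketched in miniature as an equality-constrained least-squares problem solved through its KKT system (the 3-parameter toy problem and the sum constraint below are invented for illustration):

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    # minimize ||Ax - b||^2 subject to Cx = d, via the KKT system
    # [[A^T A, C^T], [C, 0]] [x; lam] = [A^T b; d]
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.r_[A.T @ b, d]
    return np.linalg.solve(K, rhs)[:n]

rng = np.random.default_rng(4)
A = rng.normal(size=(20, 3))            # toy "measurement" design matrix
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true                          # noiseless observations
C = np.array([[1.0, 1.0, 1.0]])         # constrain the parameter sum
d = np.array([6.0])                     # consistent with x_true
x = constrained_lsq(A, b, C, d)
```

    In the real alignment problem the same structure appears at vastly larger scale (≈35,000 DoFs), where solving the normal equations efficiently becomes the central difficulty.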

  2. Combining Approach in Stages with Least Squares for fits of data in hyperelasticity

    Science.gov (United States)

    Beda, Tibi

    2006-10-01

    The present work concerns a method of continuous approximation by block of a continuous function; a method of approximation combining the Approach in Stages with the finite domains Least Squares. An identification procedure by sub-domains: basic generating functions are determined step-by-step permitting their weighting effects to be felt. This procedure allows one to be in control of the signs and to some extent of the optimal values of the parameters estimated, and consequently it provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed in rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).

  3. Increased Global Functional Connectivity Correlates with LSD-Induced Ego Dissolution

    NARCIS (Netherlands)

    Tagliazucchi, E.; Roseman, Leor; Kaelen, Mendel; Orban, Csaba; Muthukumaraswamy, Suresh D; Murphy, Kevin; Laufs, Helmut; Leech, Robert; McGonigle, John; Crossley, Nicolas; Bullmore, Edward; Williams, Tim; Bolstridge, Mark; Feilding, Amanda; Nutt, David J; Carhart-Harris, Robin

    2016-01-01

    Lysergic acid diethylamide (LSD) is a non-selective serotonin-receptor agonist that was first synthesized in 1938 and identified as (potently) psychoactive in 1943. Psychedelics have been used by indigenous cultures for millennia [1]; however, because of LSD's unique potency and the timing of its

  4. Machine Learning Approaches to Image Deconvolution

    OpenAIRE

    Schuler, Christian

    2017-01-01

    Image blur is a fundamental problem in both photography and scientific imaging. Even the most well-engineered optics are imperfect, and finite exposure times cause motion blur. To reconstruct the original sharp image, the field of image deconvolution tries to recover recorded photographs algorithmically. When the blur is known, this problem is called non-blind deconvolution. When the blur is unknown and has to be inferred from the observed image, it is called blind deconvolution. The key to r...

  5. Respiratory mechanics by least squares fitting in mechanically ventilated patients: application on flow-limited COPD patients.

    Science.gov (United States)

    Volta, Carlo A; Marangoni, Elisabetta; Alvisi, Valentina; Capuzzo, Maurizia; Ragazzi, Riccardo; Pavanelli, Lina; Alvisi, Raffaele

    2002-01-01

    Although computerized methods of analyzing respiratory system mechanics, such as the least squares fitting (LSF) method, have been used in various patient populations, no conclusive data are available in patients with chronic obstructive pulmonary disease (COPD), probably because they may develop expiratory flow limitation (EFL), which suggests that respiratory mechanics be determined only during inspiration. The setting was the eight-bed multidisciplinary ICU of a teaching hospital; the patients were eight non-flow-limited postvascular-surgery patients and eight flow-limited COPD patients. Patients were sedated, paralyzed for diagnostic purposes, and ventilated in volume control ventilation with a constant inspiratory flow rate. Data on resistance, compliance, and dynamic intrinsic positive end-expiratory pressure (PEEPi,dyn) obtained by applying the LSF method during inspiration, expiration, and the overall breathing cycle were compared with those obtained by the traditional method (constant flow, end-inspiratory occlusion). Our results indicate that (a) the presence of EFL markedly decreases the precision of resistance and compliance values measured by the LSF method, (b) the determination of respiratory variables during inspiration allows the calculation of respiratory mechanics in flow-limited COPD patients, and (c) the LSF method is able to detect the presence of PEEPi,dyn if only inspiratory data are used.
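The LSF method described above regresses measured airway pressure on flow and volume via the single-compartment equation of motion. A minimal sketch with synthetic data (all parameter values illustrative, not taken from the study; a half-sine flow profile is used so that resistance and the pressure offset remain separable):

```python
import numpy as np

# Least-squares fitting (LSF) of the single-compartment equation of motion,
#   Paw(t) = R * flow(t) + V(t) / C + P0,
# to synthetic inspiratory data. Parameter values are illustrative only.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)               # one 1-s inspiration
dt = t[1] - t[0]
flow = 0.6 * np.sin(np.pi * t)               # L/s (varying, keeps R and P0 separable)
volume = np.cumsum(flow) * dt                # L, integrated flow
R_true, C_true, P0_true = 10.0, 0.05, 5.0    # cmH2O/L/s, L/cmH2O, cmH2O
paw = R_true * flow + volume / C_true + P0_true + rng.normal(0.0, 0.1, t.size)

# Regress pressure on [flow, volume, 1] to recover R, elastance 1/C, and P0.
A = np.column_stack([flow, volume, np.ones_like(t)])
(R_est, E_est, P0_est), *_ = np.linalg.lstsq(A, paw, rcond=None)
C_est = 1.0 / E_est
```

Restricting the rows of `A` to the inspiratory samples is exactly the "inspiration-only" variant the study argues for in flow-limited patients.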

  6. Classifying Physical Morphology of Cocoa Beans Digital Images using Multiclass Ensemble Least-Squares Support Vector Machine

    Science.gov (United States)

    Lawi, Armin; Adhitya, Yudhi

    2018-03-01

    The objective of this research is to determine the quality of cocoa beans through the morphology of their digital images. Samples of cocoa beans were scattered on a bright white paper under a controlled lighting condition, and a compact digital camera was used to capture the images, which were then processed to extract their morphological parameters. Classification begins with an analysis of the cocoa bean images based on morphological feature extraction, using the parameters Area, Perimeter, Major Axis Length, Minor Axis Length, Aspect Ratio, Circularity, Roundness, and Feret Diameter. The cocoa beans are classified into 4 groups: Normal Beans, Broken Beans, Fractured Beans, and Skin Damaged Beans. The classification model used in this paper is the Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM), a proposed improvement of SVM using an ensemble method in which the separating hyperplanes are obtained by a least-squares approach and the multiclass procedure uses the One-Against-All method. The proposed model classified the four classes with an accuracy of 99.705% using the morphological feature input parameters.

  7. SECOND ORDER LEAST SQUARE ESTIMATION ON ARCH(1) MODEL WITH BOX-COX TRANSFORMED DEPENDENT VARIABLE

    Directory of Open Access Journals (Sweden)

    Herni Utami

    2014-03-01

    Full Text Available Box-Cox transformation is often used to reduce heterogeneity and to achieve a symmetric distribution of the response variable. In this paper, we estimate the parameters of the Box-Cox transformed ARCH(1) model using the second-order least squares method, and we study the consistency and asymptotic normality of the second-order least squares (SLS) estimators. SLS estimation was introduced by Wang (2003, 2004) to estimate the parameters of nonlinear regression models with independent and identically distributed errors.

  8. A library least-squares approach for scatter correction in gamma-ray tomography

    Science.gov (United States)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

    Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of the transmission and scatter components in any given detector in the UoB gamma-ray tomography system.
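The LLS idea, modelling a measured detector spectrum as a linear combination of pre-recorded library spectra and solving for the component weights by least squares, can be sketched as follows (the two library spectra here are synthetic stand-ins, not UoB data):

```python
import numpy as np

# Library least-squares (LLS): model a measured spectrum as a linear
# combination of library spectra, here one transmission component
# (a photopeak) and one scatter component (a continuum). Illustrative only.
channels = np.arange(256)
transmission = np.exp(-0.5 * ((channels - 180) / 8.0) ** 2)   # photopeak
scatter = np.exp(-channels / 60.0)                            # continuum
library = np.column_stack([transmission, scatter])

# Synthetic measurement: 3 parts transmission, 1.5 parts scatter, plus noise.
rng = np.random.default_rng(1)
measured = 3.0 * transmission + 1.5 * scatter + rng.normal(0, 0.01, channels.size)

# Least-squares estimate of the component weights.
weights, *_ = np.linalg.lstsq(library, measured, rcond=None)
```

The fitted weights give the estimated transmission and scatter contributions in the detector, which is the quantity the correction needs.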

  9. A least-squares minimization approach for model parameters estimate by using a new magnetic anomaly formula

    Science.gov (United States)

    Abo-Ezz, E. R.; Essa, K. S.

    2016-04-01

    A new linear least-squares approach is proposed to interpret magnetic anomalies of buried structures by using a new magnetic anomaly formula. The approach solves different sets of algebraic linear equations in order to invert the depth (z), amplitude coefficient (K), and magnetization angle (θ) of buried structures from magnetic data. The utility and validity of the proposed approach have been demonstrated on various reliable synthetic data sets, with and without noise. In addition, the method has been applied to field data sets from the USA and India. The best-fitting anomaly has been delineated by estimating the root-mean-square (rms) error, and the approach is assessed by comparing the obtained results with other available geological and geophysical information.
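The core numerical step of such an approach, solving an overdetermined linear system for the unknown coefficients and judging the fit by its rms misfit, can be sketched generically (the basis functions and values below are hypothetical, not the paper's anomaly formula):

```python
import numpy as np

# Generic linear least-squares inversion: anomaly values T observed along a
# profile are modelled as A @ p with p the unknown coefficients, and p is
# found from the normal equations A^T A p = A^T T. Shapes are illustrative.
x = np.linspace(-50.0, 50.0, 41)                 # profile coordinates (m)
A = np.column_stack([1.0 / (x**2 + 10.0**2),     # hypothetical shape terms
                     x / (x**2 + 10.0**2) ** 1.5,
                     np.ones_like(x)])           # regional offset
p_true = np.array([500.0, 2000.0, 3.0])
rng = np.random.default_rng(2)
T = A @ p_true + rng.normal(0, 0.05, x.size)

p_est = np.linalg.solve(A.T @ A, A.T @ T)        # normal equations
rms = np.sqrt(np.mean((A @ p_est - T) ** 2))     # misfit used to judge the fit
```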

  10. Brightness-normalized Partial Least Squares Regression for hyperspectral data

    International Nuclear Information System (INIS)

    Feilhauer, Hannes; Asner, Gregory P.; Martin, Roberta E.; Schmidtlein, Sebastian

    2010-01-01

    Developed in the field of chemometrics, Partial Least Squares Regression (PLSR) has become an established technique in vegetation remote sensing. PLSR was primarily designed for laboratory analysis of prepared material samples. Under field conditions in vegetation remote sensing, the performance of the technique may be negatively affected by differences in brightness due to amount and orientation of plant tissues in canopies or the observing conditions. To minimize these effects, we introduced brightness normalization to the PLSR approach and tested whether this modification improves the performance under changing canopy and observing conditions. This test was carried out using high-fidelity spectral data (400-2510 nm) to model observed leaf chemistry. The spectral data was combined with a canopy radiative transfer model to simulate effects of varying canopy structure and viewing geometry. Brightness normalization enhanced the performance of PLSR by dampening the effects of canopy shade, thus providing a significant improvement in predictions of leaf chemistry (up to 3.6% additional explained variance in validation) compared to conventional PLSR. Little improvement was made on effects due to variable leaf area index, while minor improvement (mostly not significant) was observed for effects of variable viewing geometry. In general, brightness normalization increased the stability of model fits and regression coefficients for all canopy scenarios. Brightness-normalized PLSR is thus a promising approach for application on airborne and space-based imaging spectrometer data.
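Brightness normalization itself is a simple per-spectrum rescaling applied before the regression; a sketch of the idea on synthetic spectra (not the study's data):

```python
import numpy as np

# Brightness normalization ahead of PLSR: each reflectance spectrum is
# divided by its overall brightness (vector norm), so amplitude differences
# due to illumination or canopy shade are removed before regression.
rng = np.random.default_rng(3)
wavelengths = np.linspace(400, 2510, 211)
shape = 0.3 + 0.2 * np.sin(wavelengths / 300.0)   # common spectral shape
brightness = rng.uniform(0.5, 1.5, size=20)       # per-pixel shade factor
spectra = brightness[:, None] * shape[None, :]    # 20 pixels x 211 bands

normalized = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

# After normalization, pixels of the same material collapse onto a single
# spectral shape regardless of their original brightness.
spread = normalized.std(axis=0).max()
```

The normalized spectra, rather than the raw ones, are then passed to the PLSR fit.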

  11. Least squares deconvolution for leak detection with a pseudo random binary sequence excitation

    Science.gov (United States)

    Nguyen, Si Tran Nguyen; Gong, Jinzhe; Lambert, Martin F.; Zecchin, Aaron C.; Simpson, Angus R.

    2018-01-01

    Leak detection and localisation is critical for water distribution system pipelines. This paper examines the use of the time-domain impulse response function (IRF) for leak detection and localisation in a pressurised water pipeline with a pseudo random binary sequence (PRBS) signal excitation. Compared to the conventional step wave generated using a single fast operation of a valve closure, a PRBS signal offers advantageous correlation properties, in that the signal has very low autocorrelation for lags different from zero and low cross correlation with other signals including noise and other interference. These properties result in a significant improvement in the IRF signal to noise ratio (SNR), leading to more accurate leak localisation. In this paper, the estimation of the system IRF is formulated as an optimisation problem in which the l2 norm of the IRF is minimised to suppress the impact of noise and interference sources. Both numerical and experimental data are used to verify the proposed technique. The resultant estimated IRF provides not only accurate leak location estimation, but also good sensitivity to small leak sizes due to the improved SNR.
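The least squares deconvolution step, estimating the IRF h from the excitation u and response y by solving y ≈ Xh with X the convolution (Toeplitz) matrix of the input, can be sketched as follows (the random ±1 excitation and first-order IRF below are stand-ins, not a true maximal-length PRBS or a pipeline model):

```python
import numpy as np

# Least-squares deconvolution of an impulse response from a binary
# excitation: y = X h + noise, with X the convolution matrix of the input.
rng = np.random.default_rng(4)
n, m = 300, 20                               # input length, IRF length
u = rng.integers(0, 2, n) * 2.0 - 1.0        # pseudo-random +/-1 sequence
h_true = np.exp(-np.arange(m) / 4.0)         # assumed impulse response
y = np.convolve(u, h_true)[:n] + rng.normal(0, 0.05, n)

# Build the convolution matrix column by column (column k is the input
# delayed by k samples) and solve for h in the least-squares sense, which
# suppresses noise that is uncorrelated with the excitation.
X = np.column_stack([np.concatenate([np.zeros(k), u[: n - k]]) for k in range(m)])
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because a binary sequence with low autocorrelation keeps X well conditioned, the estimated IRF stays close to the true one even with measurement noise, which is the SNR advantage the abstract describes.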

  12. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    Science.gov (United States)

    Xu, Yu-Lin

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, which are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and either linear in the adjustment parameters or linearized by developing them in Taylor series to first-order approximation, is inadequate in our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem, in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied in our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in cases where the normal approach fails, by combination with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution.
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also

  13. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…

  14. LSD treatment in Scandinavia: emphasizing indications and short-term treatment outcomes of 151 patients in Denmark.

    Science.gov (United States)

    Larsen, Jens Knud

    2017-10-01

    New research has suggested the clinical use of lysergic acid diethylamide (LSD) and psilocybin in selected patient populations. However, concerns about the clinical use of LSD were advanced in a large Danish follow-up study that assessed 151 LSD-treated psychiatric patients approximately 25 years after their treatment in the 1960s. The purpose of the present study was to give a retrospective account of the short-term outcome of LSD treatment in these 151 Danish psychiatric patients. The LSD case material in the Danish State Archives consists of the medical case records of 151 LSD-treated patients who lodged complaints and received economic compensation under the LSD Damages Law. The author carefully read and reviewed the LSD case material. LSD was used to treat a wide spectrum of mental disorders. Independent of diagnosis, 52 patients improved and 48 patients worsened acutely with the LSD treatment. In a subgroup of 82 neurotic patients, the LSD dose-index (number of treatments multiplied by the maximal LSD dose) indicated the risk of acute worsening. In another subgroup of 19 patients with obsessive-compulsive neurosis, five patients later underwent psychosurgery. A small subgroup of 12 patients was treated with psilocybin. The long-term outcome was poor in most of the patients. Despite the significant limitations of a retrospective design, this material warrants caution in the treatment of mental health patients. The use of LSD and psilocybin in mental health patients may be associated with serious short- and long-term side effects. Until further trials with rigorous designs have cleared these drugs of their potential harms, their clinical utility in these groups of patients remains unclarified.

  15. A task specific uncertainty analysis method for least-squares-based form characterization of ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    Ren, M J; Cheung, C F; Kong, L B

    2012-01-01

    In the measurement of ultra-precision freeform surfaces, least-squares-based form characterization methods are widely used to evaluate the form error of the measured surfaces. Although many methodologies have been proposed in recent years to improve the efficiency of the characterization process, relatively little research has been conducted on the analysis of associated uncertainty in the characterization results which may result from those characterization methods being used. As a result, this paper presents a task specific uncertainty analysis method with application in the least-squares-based form characterization of ultra-precision freeform surfaces. That is, the associated uncertainty in the form characterization results is estimated when the measured data are extracted from a specific surface with specific sampling strategy. Three factors are considered in this study which include measurement error, surface form error and sample size. The task specific uncertainty analysis method has been evaluated through a series of experiments. The results show that the task specific uncertainty analysis method can effectively estimate the uncertainty of the form characterization results for a specific freeform surface measurement

  16. Does model fit decrease the uncertainty of the data in comparison with a general non-model least squares fit?

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    The information entropy is taken as a measure of knowledge about the object, and the reduced univariate variance as a common measure of uncertainty. Covariances in model versus non-model least squares fits are discussed

  17. Smart Antennas for Interference Mitigation with the Least Mean Square Algorithm

    Directory of Open Access Journals (Sweden)

    Rahmad Hidayat

    2017-06-01

    Full Text Available A smart antenna is essentially an antenna array with signal-processing capability for transmitting/receiving information adaptively. This capability must be studied further to find the best adaptive algorithm for the desired beamforming behaviour. This paper aims to provide a study and analysis of the influence of the Least Mean Square (LMS) algorithm on the nulling-beam adjustment of the radiation pattern of a smart antenna array, and of its role in interference mitigation. Beamformer performance was simulated over 250 iterations with the Matlab tool on an AWGN (Additive White Gaussian Noise) channel, and the simulation parameters were varied to compare two values of the step size m in the LMS algorithm for several numbers of antenna elements. The influence of the step size m is seen in the number of iterations carried out before the minimum error noise is reached: increasing the step size reduces the number of iterations, to 60 on average. In the amplitude response pattern after beamforming, the main signal position (0 dB) lies exactly at 30°, and 15 nulling positions are produced for 16 antenna elements. The interference sources are removed/suppressed by placing nulls in their directions, at 60° and -40°, with levels of around -115 dB each.
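The LMS update at the heart of the algorithm, w(k+1) = w(k) + μ·e(k)·x(k), can be sketched in a simple system-identification setting that shows the same step-size trade-off (all values illustrative, not the paper's antenna simulation):

```python
import numpy as np

# Minimal LMS adaptive filter sketch: the weights move along the negative
# gradient of the instantaneous squared error. The step size mu trades
# convergence speed against steady-state error and stability.
rng = np.random.default_rng(5)
n_taps, n_iter, mu = 4, 250, 0.05
w_opt = np.array([0.5, -0.3, 0.2, 0.1])     # unknown system to identify

w = np.zeros(n_taps)
errors = []
for _ in range(n_iter):
    x = rng.normal(0, 1, n_taps)            # input snapshot
    d = w_opt @ x + rng.normal(0, 0.01)     # desired signal over an AWGN channel
    e = d - w @ x                           # instantaneous error
    w = w + mu * e * x                      # LMS update
    errors.append(e * e)
```

A larger `mu` reaches the error floor in fewer iterations but raises that floor, which mirrors the step-size comparison reported in the abstract.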

  18. Comparison of remove-compute-restore and least squares modification of Stokes' formula techniques to quasi-geoid determination over the Auvergne test area

    DEFF Research Database (Denmark)

    Yildiz, H.; Forsberg, René; Ågren, J.

    2012-01-01

    The remove-compute-restore (RCR) technique for regional geoid determination implies that both topography and low-degree global geopotential model signals are removed before computation and restored after Stokes' integration or Least Squares Collocation (LSC) solution. The Least Squares Modification...... area. All methods showed a reasonable agreement with GPS-levelling data, on the order of 3-3.5 cm in the central region having relatively smooth topography, which is consistent with the accuracies of GPS and levelling. When a 1-parameter fit is used, the FFT method using kernel modification performs......

  19. Crystal structure of histone demethylase LSD1 and tranylcypromine at 2.25 Å

    International Nuclear Information System (INIS)

    Mimasu, Shinya; Sengoku, Toru; Fukuzawa, Seketsu; Umehara, Takashi; Yokoyama, Shigeyuki

    2008-01-01

    Transcriptional activity and chromatin structure accessibility are correlated with the methylation of specific histone residues. Lysine-specific demethylase 1 (LSD1) is the first discovered histone demethylase; it demethylates Lys4 or Lys9 of histone H3, using FAD. Among the known monoamine oxidase inhibitors, tranylcypromine (Parnate) showed the most potent inhibitory effect on LSD1. Recently, the crystal structure of LSD1 and tranylcypromine was solved at 2.75 Å, revealing a five-membered ring fused to the flavin of LSD1. In this study, we refined the crystal structure of the LSD1-tranylcypromine complex to 2.25 Å. The five-membered ring model did not fit the electron density completely, giving Rwork/Rfree values of 0.226/0.254. On the other hand, the N(5) adduct gave the lowest Rwork/Rfree values, 0.218/0.248, among the tested models. These results imply that the LSD1-tranylcypromine complex is not composed entirely of the five-membered adduct, but partially contains an intermediate, such as the N(5) adduct

  20. Efficacy and enlightenment: LSD psychotherapy and the Drug Amendments of 1962.

    Science.gov (United States)

    Oram, Matthew

    2014-04-01

    The decline in therapeutic research with lysergic acid diethylamide (LSD) in the United States over the course of the 1960s has commonly been attributed to the growing controversy surrounding its recreational use. However, research difficulties played an equal role in LSD psychotherapy's demise, as they frustrated researchers' efforts to clearly establish the efficacy of treatment. Once the Kefauver Harris Drug Amendments of 1962 introduced the requirement that proof of efficacy be established through controlled clinical trials before a drug could be approved to market, the value of clinical research became increasingly dependent on the scientific rigor of the trial's design. LSD psychotherapy's complex method of utilizing drug effects to catalyze a psychological treatment clashed with the controlled trial methodology on both theoretical and practical levels, making proof of efficacy difficult to obtain. Through a close examination of clinical trials performed after 1962, this article explores how the new emphasis on controlled clinical trials frustrated the progress of LSD psychotherapy research by focusing researchers' attention on trial design to the detriment of their therapeutic method. This analysis provides a new perspective on the death of LSD psychotherapy and explores the implications of the Drug Amendments of 1962.

  1. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    Science.gov (United States)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one specific domain and do not exploit its multi-domain nature. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise, along with any artificial noise that may have been introduced during the spatial denoising. With the filter window width appropriately selected according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm was evaluated on a set of Hyperion imageries acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
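A Savitzky-Golay filter of the kind used in the spectral stage is a running local least-squares polynomial fit; its weights are the first row of the polynomial projection matrix. A minimal sketch on a synthetic spectrum (not Hyperion data; window and order values are illustrative):

```python
import numpy as np

# Savitzky-Golay smoothing as a running local least-squares polynomial fit:
# the filter weights evaluate the local fit at the window centre.
def savgol_coeffs(window, order):
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)   # columns [1, x, x^2, ...]
    # Row 0 of the pseudo-inverse maps the window samples to the fitted
    # polynomial's value at x = 0 (the window centre).
    return np.linalg.pinv(A)[0]

rng = np.random.default_rng(6)
band = np.linspace(0, 1, 200)
clean = np.exp(-0.5 * ((band - 0.5) / 0.1) ** 2)   # absorption-like feature
noisy = clean + rng.normal(0, 0.05, band.size)

# Cubic fit over an 11-point window, applied by convolution.
smoothed = np.convolve(noisy, savgol_coeffs(11, 3)[::-1], mode="same")
```

Because the local fit is a polynomial rather than a flat average, the filter smooths noise while keeping the shape of absorption features, which is why the window width is chosen from prior knowledge of feature widths.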

  2. Transrepressive Function of TLX Requires the Histone Demethylase LSD1

    Science.gov (United States)

    Yokoyama, Atsushi; Takezawa, Shinichiro; Schüle, Roland; Kitagawa, Hirochika; Kato, Shigeaki

    2008-01-01

    TLX is an orphan nuclear receptor (also called NR2E1) that regulates the expression of target genes by functioning as a constitutive transrepressor. The physiological significance of TLX in the cytodifferentiation of neural cells in the brain is known. However, the corepressors supporting the transrepressive function of TLX have yet to be identified. In this report, Y79 retinoblastoma cells were subjected to biochemical techniques to purify proteins that interact with TLX, and we identified LSD1 (also called KDM1), which appears to form a complex with CoREST and histone deacetylase 1. LSD1 interacted with TLX directly through its SWIRM and amine oxidase domains. LSD1 potentiated the transrepressive function of TLX through its histone demethylase activity as determined by a luciferase assay using a genomically integrated reporter gene. LSD1 and TLX were recruited to a TLX-binding site in the PTEN gene promoter, accompanied by the demethylation of H3K4me2 and deacetylation of H3. Knockdown of either TLX or LSD1 derepressed expression of the endogenous PTEN gene and inhibited cell proliferation of Y79 cells. Thus, the present study suggests that LSD1 is a prime corepressor for TLX. PMID:18391013

  3. An Improved Generalized Predictive Control in a Robust Dynamic Partial Least Square Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2015-01-01

    Full Text Available To tackle the sensitivity to outliers in system identification, a new robust dynamic partial least squares (PLS) model based on an outlier detection method is proposed in this paper. An improved radial basis function network (RBFN) is adopted to construct the predictive model from the input and output datasets, and a hidden Markov model (HMM) is applied to detect the outliers. After the outliers are removed, a more robust dynamic PLS model is obtained. In addition, an improved generalized predictive control (GPC) with tuning weights under the dynamic PLS framework is proposed to deal with the interaction caused by model mismatch. The results of two simulations demonstrate the effectiveness of the proposed method.

  4. Interaction between LSD and dopamine D2/3 binding sites in pig brain.

    Science.gov (United States)

    Minuzzi, Luciano; Nomikos, George G; Wade, Mark R; Jensen, Svend B; Olsen, Aage K; Cumming, Paul

    2005-06-15

    The psychoactive properties of the hallucinogen LSD have frequently been attributed to high affinity interactions with serotonin 5HT2 receptors in brain. Possible effects of LSD on dopamine D2/3 receptor availability have not previously been investigated in living brain. Therefore, we used PET to map the binding potential (pB) of [11C]raclopride in brain of three pigs, first in a baseline condition, and again at 1 and 4 h after administration of LSD (2.5 microg/kg, i.v.). There was a progressive treatment effect in striatum, where the pB was significantly reduced by 19% at 4 h after LSD administration. Concomitant maps of cerebral blood flow did not reveal significant changes in perfusion during this interval. Subsequent in vitro studies showed that LSD displaced [3H]raclopride (2 nM) from pig brain cryostat sections with an IC50 of 275 nM according to a one-site model. Fitting of a two-site model to the data suggested the presence of a component of the displacement curves with a subnanomolar IC50, comprising 20% of the total [3H]raclopride binding. In microdialysis experiments, LSD at similar and higher doses did not evoke changes in the interstitial concentration of dopamine or its acidic metabolites in rat striatum. Together, these results are consistent with a direct interaction between LSD and a portion of dopamine D2/3 receptors in pig brain, possibly contributing to the psychopharmacology of LSD. (c) 2005 Wiley-Liss, Inc.

  5. A scaled Lagrangian method for performing a least squares fit of a model to plant data

    International Nuclear Information System (INIS)

    Crisp, K.E.

    1988-01-01

    Due to measurement errors, even a perfect mathematical model will not be able to match all the corresponding plant measurements simultaneously. A further discrepancy may be introduced if an un-modelled change in conditions occurs within the plant which should have required a corresponding change in model parameters - e.g. a gradual deterioration in the performance of some component(s). Taking both these factors into account, what is required is that the overall discrepancy between the model predictions and the plant data is kept to a minimum. This process is known as 'model fitting'. A method is presented for minimising any function which consists of the sum of squared terms, subject to any constraints. Its most obvious application is in the process of model fitting, where a weighted sum of squares of the differences between model predictions and plant data is the function to be minimised. When implemented within existing Central Electricity Generating Board computer models, it will perform a least squares fit of a model to plant data within a single job submission. (author)
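The weighted sum-of-squares objective described above has a closed-form minimizer whenever the model is linear in its parameters; a minimal sketch with a hypothetical linear plant model (not a CEGB model, and without the constraint handling the paper's method includes):

```python
import numpy as np

# Model fitting as weighted least squares: minimize
#   sum_i w_i * (model_i(p) - data_i)^2
# with weights w_i = 1 / variance_i, here for a model linear in p.
rng = np.random.default_rng(7)
load = np.linspace(0.2, 1.0, 25)                 # plant operating points
A = np.column_stack([load, np.ones_like(load)])  # model: eff = a*load + b
p_true = np.array([0.15, 0.80])
sigma = np.where(load < 0.5, 0.02, 0.005)        # less accurate at low load
data = A @ p_true + rng.normal(0.0, sigma)

W = np.diag(1.0 / sigma**2)                      # weight = 1/variance
p_est = np.linalg.solve(A.T @ W @ A, A.T @ W @ data)
```

Down-weighting the noisier measurements is what keeps a few poor instruments from dragging the fitted parameters away from the bulk of the plant data.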

  6. Performance improvement of shunt active power filter based on non-linear least-square approach

    DEFF Research Database (Denmark)

    Terriche, Yacine

    2018-01-01

    The synchronous reference frame (SRF) approach is widely used for generating the RCC due to its simplicity and computational efficiency. However, the SRF approach needs precise information of the voltage phase, which becomes a challenge under adverse grid conditions. A typical solution to answer this need...... This paper proposes an improved open-loop strategy which is unconditionally stable and flexible. The proposed method, which is based on a non-linear least squares (NLS) approach, can extract the fundamental voltage and estimate its phase within only half a cycle, even in the presence of odd harmonics and dc offset......

  7. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying a rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES and SHAKE. For certain configurations, however, there is no need to perform such analyses, since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions, given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, in terms of both deterministic and probabilistic representations: (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities

  8. Acute effects of LSD on amygdala activity during processing of fearful stimuli in healthy subjects.

    Science.gov (United States)

    Mueller, F; Lenz, C; Dolder, P C; Harder, S; Schmid, Y; Lang, U E; Liechti, M E; Borgwardt, S

    2017-04-04

    Lysergic acid diethylamide (LSD) induces profound changes in various mental domains, including perception, self-awareness and emotions. We used functional magnetic resonance imaging (fMRI) to investigate the acute effects of LSD on the neural substrate of emotional processing in humans. Using a double-blind, randomised, cross-over study design, placebo or 100 μg LSD were orally administered to 20 healthy subjects before the fMRI scan, taking into account the subjective and pharmacological peak effects of LSD. The plasma levels of LSD were determined immediately before and after the scan. The study (including the a priori-defined study end point) was registered at ClinicalTrials.gov before study start (NCT02308969). The administration of LSD significantly reduced reactivity of the left amygdala and the right medial prefrontal cortex relative to placebo during the presentation of fearful faces, and the LSD-induced amygdala response to fearful stimuli was significantly associated with the LSD-induced subjective drug effects. These findings suggest that LSD modulates the engagement of brain regions that mediate emotional processing.

  9. An estimation of the height system bias parameter N (0) using least squares collocation from observed gravity and GPS-levelling data

    DEFF Research Database (Denmark)

    Sadiq, Muhammad; Tscherning, Carl C.; Ahmad, Zulfiqar

    2009-01-01

    This paper deals with the analysis of gravity anomaly and precise levelling in conjunction with GPS-Levelling data for the computation of a gravimetric geoid and an estimate of the height system bias parameter N-o for the vertical datum in Pakistan by means of the least squares collocation technique...... covariance parameters has facilitated achieving gravimetric height anomalies in a global geocentric datum. The residual terrain modeling (RTM) technique has been used in combination with the EGM96 for the reduction and smoothing of the gravity data. A value for the bias parameter N-o has been estimated...... with reference to the local GPS-Levelling datum that appears to be 0.705 m with 0.07 m mean square error. The gravimetric height anomalies were compared with height anomalies obtained from GPS-Levelling stations using least squares collocation with and without bias adjustment. The bias adjustment minimizes...

  10. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.

    2002-01-01

    Two-dimensional gel electrophoresis (2-DE) produces large amounts of data and extraction of relevant information from these data demands a cautious and time consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear or disappear depending on the experimental conditions. Such biomarkers are found by comparing the relative volumes of individual spots in the individual gels. Multivariate statistical analysis and modelling of 2-DE data for comparison and classification is an alternative approach utilising the combination of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary...
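
As an illustration of the strategy, a minimal one-component PLS regression (NIPALS-style) can be written directly in NumPy; the "spots" and response below are synthetic stand-ins, and a real analysis would extract several components and then run variable selection.

```python
import numpy as np

# Minimal one-component PLS regression sketch (synthetic data).
# Columns of X play the role of spot volumes; y is a response variable.

def pls1_fit(X, y):
    w = X.T @ y
    w /= np.linalg.norm(w)        # weight vector (direction of max covariance)
    t = X @ w                     # scores
    p = X.T @ t / (t @ t)         # X loadings (used for deflation with >1 component)
    q = (y @ t) / (t @ t)         # y loading
    return w, p, q

def pls1_predict(Xnew, w, q):
    return (Xnew @ w) * q

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = np.zeros(10)
beta[3] = 2.0                     # only "spot" 3 carries information
y = X @ beta
w, p, q = pls1_fit(X, y)
yhat = pls1_predict(X, w, q)
# The largest PLS weight points at the informative spot:
print(int(np.argmax(np.abs(w))))  # 3
```

Inspecting the weight vector w is the simplest form of the variable selection the abstract mentions: uninformative spots receive weights near zero.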

  11. Baseline configuration for GNSS attitude determination with an analytical least-squares solution

    International Nuclear Information System (INIS)

    Chang, Guobin; Wang, Qianxin; Xu, Tianhe

    2016-01-01

    The GNSS attitude determination using carrier phase measurements with 4 antennas is studied on condition that the integer ambiguities have been resolved. The solution to the nonlinear least-squares is often obtained iteratively, however an analytical solution can exist for specific baseline configurations. The main aim of this work is to design this class of configurations. Both single and double difference measurements are treated which refer to the dedicated and non-dedicated receivers respectively. More realistic error models are employed in which the correlations between different measurements are given full consideration. The desired configurations are worked out. The configurations are rotation and scale equivariant and can be applied to both the dedicated and non-dedicated receivers. For these configurations, the analytical and optimal solution for the attitude is also given together with its error variance–covariance matrix. (paper)

  12. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-01-01

    optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through

  13. Interaction of electron neutrino with LSD detector

    Science.gov (United States)

    Ryazhskaya, O. G.; Semenov, S. V.

    2016-06-01

    The interaction of the electron neutrino flux originating in the rotational collapse mechanism on the first stage of a Supernova burst with the LSD detector components, such as 56Fe (a large amount of this metal is included as shielding material) and the liquid scintillator CnH2n+2, is investigated. Both charged and neutral channels of the neutrino reaction with 12C and 56Fe are considered. Experimental data giving the possibility to extract information for the calculation of nuclear matrix elements are used. The number of signals produced in LSD by the neutrino pulse of Supernova 1987A is determined. The obtained results are in good agreement with experimental data.

  14. Validating the Galerkin least-squares finite element methods in predicting mixing flows in stirred tank reactors

    International Nuclear Information System (INIS)

    Johnson, K.; Bittorf, K.J.

    2002-01-01

    A novel approach for computer-aided modeling and optimization of mixing processes has been developed using Galerkin least-squares finite element technology. Computer-aided mixing modeling and analysis involves Lagrangian and Eulerian analysis for relative fluid stretching, and energy dissipation concepts for laminar and turbulent flows. High-quality, conservative, accurate fluid velocity and continuity solutions are required for determining mixing quality. The ORCA Computational Fluid Dynamics (CFD) package, based on a finite element formulation, solves the incompressible Reynolds Averaged Navier-Stokes (RANS) equations. Although finite element technology has been well used in areas of heat transfer, solid mechanics, and aerodynamics for years, it has only recently been applied to the area of fluid mixing. ORCA, developed using the Galerkin Least-Squares (GLS) finite element technology, provides another formulation for numerically solving the RANS-based and LES-based fluid mechanics equations. The ORCA CFD package is validated against two case studies. The first, a free round jet, demonstrates that the CFD code predicts the theoretical velocity decay rate, linear expansion rate, and similarity profile. From proper prediction of fundamental free jet characteristics, confidence can be derived when predicting flows in a stirred tank, as a stirred tank reactor can be considered a series of free jets and wall jets. (author)

  15. Kennard-Stone combined with least square support vector machine method for noncontact discriminating human blood species

    Science.gov (United States)

    Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling

    2017-11-01

    Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can realize noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can be greatly beneficial for improving the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used in the identification of blood species with spectroscopy methods, while the Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both the PLSDA method and the LSSVM method were used for human blood discrimination. Compared with the results of the PLSDA method, the LSSVM method enhanced the performance of the identification models. The overall results show that the LSSVM method is more feasible for identifying human and animal blood species, and sufficiently demonstrate that LSSVM is a reliable, robust, effective and accurate method for human blood identification.
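
The Kennard-Stone selection step mentioned above can be sketched as follows: starting from the two most distant samples, each new sample maximises its minimal distance to the samples already chosen, so the calibration set spans the measurement space. The data here are random stand-ins for spectra.

```python
import numpy as np

# Sketch of the Kennard-Stone algorithm for calibration-set selection.
# Rows of X would be spectra in the application described above.

def kennard_stone(X, k):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)
    picked = [i, j]                        # seed with the two farthest samples
    while len(picked) < k:
        rest = [m for m in range(len(X)) if m not in picked]
        # next sample: the one farthest from its nearest already-picked neighbour
        nxt = max(rest, key=lambda m: d[m, picked].min())
        picked.append(nxt)
    return picked

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 3))
sel = kennard_stone(X, 10)
print(len(sel), len(set(sel)))  # 10 10
```

The remaining samples form the validation set, which is why KS tends to give a more representative calibration set than random selection.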

  16. The Multivariate Regression Statistics Strategy to Investigate Content-Effect Correlation of Multiple Components in Traditional Chinese Medicine Based on a Partial Least Squares Method.

    Science.gov (United States)

    Peng, Ying; Li, Su-Ning; Pei, Xuexue; Hao, Kun

    2018-03-01

    A multivariate regression statistics strategy was developed to clarify the multi-component content-effect correlation of panax ginseng saponins extract and predict the pharmacological effect from component content. In example 1, firstly, we compared pharmacological effects between panax ginseng saponins extract and individual saponin combinations. Secondly, we examined the anti-platelet aggregation effect in seven different saponin combinations of ginsenoside Rb1, Rg1, Rh, Rd, Ra3 and notoginsenoside R1. Finally, the correlation between anti-platelet aggregation and the content of multiple components was analyzed by a partial least squares algorithm. In example 2, firstly, 18 common peaks were identified in ten different batches of panax ginseng saponins extracts from different origins. Then, we investigated the anti-myocardial ischemia reperfusion injury effects of the ten different panax ginseng saponins extracts. Finally, the correlation between the fingerprints and the cardioprotective effects was analyzed by a partial least squares algorithm. In both examples 1 and 2, the relationship between component content and pharmacological effect was modeled well by the partial least squares regression equations. Importantly, the predicted effect curve was close to the observed data points marked on the partial least squares regression model. This study has given evidence that the multi-component content is promising information for predicting the pharmacological effects of traditional Chinese medicine.

  17. Least-squares wave-front reconstruction of Shack-Hartmann sensors and shearing interferometers using multigrid techniques

    International Nuclear Information System (INIS)

    Baker, K.L.

    2005-01-01

    This article details a multigrid algorithm that is suitable for least-squares wave-front reconstruction of Shack-Hartmann and shearing interferometer wave-front sensors. The algorithm detailed in this article is shown to scale with the number of subapertures in the same fashion as fast Fourier transform techniques, making it suitable for use in applications requiring a large number of subapertures and high Strehl ratio systems, such as for high spatial frequency characterization of high-density plasmas, optics metrology, and multiconjugate and extreme adaptive optics systems.

  18. A new method of measuring centre-of-mass velocities of radially pulsating stars from high-resolution spectroscopy

    Science.gov (United States)

    Britavskiy, N.; Pancino, E.; Tsymbal, V.; Romano, D.; Fossati, L.

    2018-03-01

    We present a radial velocity analysis of 20 solar neighbourhood RR Lyrae and three Population II Cepheid variables. We obtained high-resolution, moderate-to-high signal-to-noise ratio spectra for most stars; these spectra covered different pulsation phases for each star. To estimate the gamma (centre-of-mass) velocities of the programme stars, we use two independent methods. The first, `classic' method is based on RR Lyrae radial velocity curve templates. The second method determines both pulsational and gamma velocities from the analysis of absorption-line profile asymmetry, using the least-squares deconvolution (LSD) technique to analyse the line asymmetry that occurs in the spectra. We obtain measurements of the pulsation component of the radial velocity with an accuracy of ±3.5 km s-1. The gamma velocity was determined with an accuracy of ±10 km s-1, even for those stars having a small number of spectra. The main advantage of this method is the possibility of estimating the gamma velocity even from a single spectroscopic observation with uncertain pulsation phase. A detailed investigation of LSD profile asymmetry shows that the projection factor p varies as a function of the pulsation phase - this is a key parameter, which converts observed spectral line radial velocity variations into photospheric pulsation velocities. As a by-product of our study, we present 41 densely spaced synthetic grids of LSD profile bisectors based on atmospheric models of RR Lyr covering all pulsation phases.
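
In its simplest linear form, the least-squares deconvolution (LSD) step reduces to a weighted least-squares solve: with the line mask arranged as a convolution matrix M and inverse-variance weights W, the common profile is Z = (MᵀWM)⁻¹MᵀWY. The mask positions, line depths and noise level below are invented for illustration.

```python
import numpy as np

# LSD sketch: the observed spectrum Y (as 1 - flux) is modelled as the
# line mask convolved with one common profile Z; solve for Z by
# weighted least squares. All numbers below are synthetic.

nv, npix = 21, 400
v = np.arange(nv) - nv // 2                    # common-profile velocity grid
true_Z = 0.6 * np.exp(-0.5 * (v / 3.0) ** 2)   # "average" line profile

line_pos = [60, 150, 270, 330]                 # pixel positions of mask lines
depths = [0.9, 0.5, 0.7, 0.3]                  # mask line depths
M = np.zeros((npix, nv))
for p, dpt in zip(line_pos, depths):
    M[p - nv // 2 : p + nv // 2 + 1, :] += dpt * np.eye(nv)

rng = np.random.default_rng(2)
Y = M @ true_Z + 0.01 * rng.standard_normal(npix)   # noisy spectrum
W = np.eye(npix) / 0.01**2                          # inverse-variance weights

Z = np.linalg.solve(M.T @ W @ M, M.T @ W @ Y)       # LSD profile
print(np.abs(Z - true_Z).max() < 0.05)              # True
```

Because every masked line contributes to Z, the noise on the recovered profile is smaller than on any individual line, which is exactly why LSD boosts the signal-to-noise ratio of the asymmetry analysis.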

  19. LSD-induced entropic brain activity predicts subsequent personality change.

    Science.gov (United States)

    Lebedev, A V; Kaelen, M; Lövdén, M; Nilsson, J; Feilding, A; Nutt, D J; Carhart-Harris, R L

    2016-09-01

    Personality is known to be relatively stable throughout adulthood. Nevertheless, it has been shown that major life events with high personal significance, including experiences engendered by psychedelic drugs, can have an enduring impact on some core facets of personality. In the present, balanced-order, placebo-controlled study, we investigated biological predictors of post-lysergic acid diethylamide (LSD) changes in personality. Nineteen healthy adults underwent resting state functional MRI scans under LSD (75µg, I.V.) and placebo (saline I.V.). The Revised NEO Personality Inventory (NEO-PI-R) was completed at screening and 2 weeks after LSD/placebo. Scanning sessions consisted of three 7.5-min eyes-closed resting-state scans, one of which involved music listening. A standardized preprocessing pipeline was used to extract measures of sample entropy, which characterizes the predictability of an fMRI time-series. Mixed-effects models were used to evaluate drug-induced shifts in brain entropy and their relationship with the observed increases in the personality trait openness at the 2-week follow-up. Overall, LSD had a pronounced global effect on brain entropy, increasing it in both sensory and hierarchically higher networks across multiple time scales. These shifts predicted enduring increases in trait openness. Moreover, the predictive power of the entropy increases was greatest for the music-listening scans and when "ego-dissolution" was reported during the acute experience. These results shed new light on how LSD-induced shifts in brain dynamics and concomitant subjective experience can be predictive of lasting changes in personality. Hum Brain Mapp 37:3203-3213, 2016. © 2016 Wiley Periodicals, Inc.
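
Sample entropy, the predictability measure used above, can be sketched directly: it is the negative log of the conditional probability that sequences matching for m points also match for m+1 points within tolerance r. The m and r values below are conventional defaults, not the study's settings, and the signals are synthetic.

```python
import numpy as np

# Sample entropy (SampEn) sketch. Higher values mean a less predictable
# time-series. Assumes at least some template matches exist.

def sampen(x, m=2, r=0.2):
    x = np.asarray(x, float)
    r = r * x.std()
    def count(mm):
        # same number of templates (len(x) - m) for lengths m and m + 1
        templ = np.array([x[i:i + mm] for i in range(len(x) - m)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        n = len(templ)
        return (d[np.triu_indices(n, 1)] <= r).sum()
    B, A = count(m), count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(3)
noise = rng.standard_normal(300)             # unpredictable signal
regular = np.sin(np.arange(300) * 0.5)       # predictable signal
print(sampen(noise) > sampen(regular))       # True
```

In the study's setting each voxel or network fMRI time-series would take the place of these toy signals, yielding one entropy value per region and time scale.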

  20. Lysergic acid diethylamide (LSD) administration selectively downregulates serotonin2 receptors in rat brain.

    Science.gov (United States)

    Buckholtz, N S; Zhou, D F; Freedman, D X; Potter, W Z

    1990-04-01

    A dosage regimen of lysergic acid diethylamide (LSD) that reliably produces behavioral tolerance in rats was evaluated for effects on neurotransmitter receptor binding in rat brain using a variety of radioligands selective for amine receptor subtypes. Daily administration of LSD [130 micrograms/kg (0.27 mumol/kg) intraperitoneally (IP)] for 5 days produced a decrease in serotonin2 (5-hydroxytryptamine2, 5-HT2) binding in cortex (measured 24 hours after the last drug administration) but did not affect binding to other receptor systems (5-HT1A, 5-HT1B, beta-adrenergic, alpha 1- or alpha 2-adrenergic, D2-dopaminergic) or to a recognition site for 5-HT uptake. The decrease was evident within 3 days of LSD administration but was not demonstrable after the first LSD dose. Following 5 days of LSD administration, the decrease was still present 48 hours, but not 96 hours, after the last administration. The indole hallucinogen psilocybin [1.0 mg/kg (3.5 mumol/kg) for 8 days] also produced a significant decrease in 5-HT2 binding, but neither the nonhallucinogenic analog bromo-LSD [1.3 mg/kg (2.4 mumol/kg) for 5 days] nor mescaline [10 mg/kg (40.3 mumol/kg) for 5 or 10 days] affected 5-HT2 binding. These observations suggest that LSD and other indole hallucinogens may act as 5-HT2 agonists at postsynaptic 5-HT2 receptors. Decreased 5-HT2 binding strikingly parallels the development and loss of behavioral tolerance seen with repeated LSD administration, but the decreased binding per se cannot explain the gamut of behavioral tolerance and cross-tolerance phenomena among the indole and phenylethylamine hallucinogens.

  1. On the Performance of Maximum Likelihood versus Means and Variance Adjusted Weighted Least Squares Estimation in CFA

    Science.gov (United States)

    Beauducel, Andre; Herzberg, Philipp Yorck

    2006-01-01

    This simulation study compared maximum likelihood (ML) estimation with weighted least squares means and variance adjusted (WLSMV) estimation. The study was based on confirmatory factor analyses with 1, 2, 4, and 8 factors, based on 250, 500, 750, and 1,000 cases, and on 5, 10, 20, and 40 variables with 2, 3, 4, 5, and 6 categories. There was no…

  2. Interaction of D-LSD with binding sites in brain: a study in vivo and in vitro

    International Nuclear Information System (INIS)

    Ebersole, B.L.J.

    1985-01-01

    The localization of [3H]-d-lysergic acid diethylamide ([3H]LSD) binding sites in the mouse brain was compared in vivo and in vitro. Radioautography of brain sections incubated with [3H]LSD in vitro revealed substantial specific [3H]LSD binding in cortical layers III-IV and areas CA1 and dentate gyrus in hippocampus. In contrast, in brain sections from animals that received [3H]LSD in vivo, binding in hippocampus was scant and diffuse, although the pattern of labeling in cortex was similar to that seen in vitro. The low specific binding in hippocampus relative to cortex was confirmed by homogenate filtration studies of brain areas from mice that received injections of [3H]LSD. Time-course studies established that peak specific binding at ten minutes was the same in cortex and hippocampus. At all times, binding in hippocampus was about one-third of that in cortex; in contrast, the concentration of free [3H]LSD did not vary between regions. This finding was unexpected, because binding studies in vitro in membrane preparations indicated that the density and affinity of [3H]LSD binding sites were similar in both brain regions. Saturation binding studies in vivo showed that the lower amount of [3H]LSD binding in hippocampus was attributable to a lower density of sites labeled by [3H]LSD. The pharmacological identity of [3H]LSD binding sites in vivo may be relevant to the hallucinogenic properties of LSD and of other related hallucinogens.

  3. Determination of carbohydrates present in Saccharomyces cerevisiae using mid-infrared spectroscopy and partial least squares regression

    OpenAIRE

    Plata, Maria R.; Koch, Cosima; Wechselberger, Patrick; Herwig, Christoph; Lendl, Bernhard

    2013-01-01

    A fast and simple method to control variations in carbohydrate composition of Saccharomyces cerevisiae, baker's yeast, during fermentation was developed using mid-infrared (mid-IR) spectroscopy. The method allows for precise and accurate determinations with minimal or no sample preparation and reagent consumption based on mid-IR spectra and partial least squares (PLS) regression. The PLS models were developed employing the results from reference analysis of the yeast cells. The reference anal...

  4. Estimation of active pharmaceutical ingredients content using locally weighted partial least squares and statistical wavelength selection.

    OpenAIRE

    Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji

    2011-01-01

    Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because physical and chemical properties of a measuring object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS) wh...

  5. Due Date Assignment in a Dynamic Job Shop with the Orthogonal Kernel Least Squares Algorithm

    Science.gov (United States)

    Yang, D. H.; Hu, L.; Qian, Y.

    2017-06-01

    Meeting due dates is a key goal in the manufacturing industries. This paper proposes a method for due date assignment (DDA) by using the Orthogonal Kernel Least Squares Algorithm (OKLSA). A simulation model is built to imitate the production process of a highly dynamic job shop. Several factors describing job characteristics and system state are extracted as attributes to predict job flow-times. A number of experiments under conditions of varying dispatching rules and 90% shop utilization level have been carried out to evaluate the effectiveness of OKLSA applied for DDA. The prediction performance of OKLSA is compared with those of five conventional DDA models and back-propagation neural network (BPNN). The experimental results indicate that OKLSA is statistically superior to other DDA models in terms of mean absolute lateness and root mean squares lateness in most cases. The only exception occurs when the shortest processing time rule is used for dispatching jobs, the difference between OKLSA and BPNN is not statistically significant.

  6. Least Squares Neural Network-Based Wireless E-Nose System Using an SnO₂ Sensor Array.

    Science.gov (United States)

    Shahid, Areej; Choi, Jong-Hyeok; Rana, Abu Ul Hassan Sarwar; Kim, Hyun-Seok

    2018-05-06

    Over the last few decades, the development of the electronic nose (E-nose) for detection and quantification of dangerous and odorless gases, such as methane (CH₄) and carbon monoxide (CO), using an array of SnO₂ gas sensors has attracted considerable attention. This paper addresses sensor cross sensitivity by developing a classifier and estimator using an artificial neural network (ANN) and least squares regression (LSR), respectively. Initially, the ANN was implemented using a feedforward pattern recognition algorithm to learn the collective behavior of an array as the signature of a particular gas. In the second phase, the classified gas was quantified by minimizing the mean square error using LSR. The combined approach produced 98.7% recognition probability, with 95.5 and 94.4% estimated gas concentration accuracies for CH₄ and CO, respectively. The classifier and estimator parameters were deployed in a remote microcontroller for the actualization of a wireless E-nose system.
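
The two-stage scheme above (classify the gas, then quantify it by least squares) can be sketched with synthetic sensor data; the sensitivities and noise levels below are invented, and the ANN classification stage is assumed to have already identified the gas.

```python
import numpy as np

# Sketch of the least-squares estimation stage: map the responses of a
# 4-sensor array to a gas concentration with ordinary least squares.
# Sensor sensitivities and noise are invented for illustration.

rng = np.random.default_rng(6)
conc = rng.uniform(10, 100, size=40)              # training concentrations (ppm)
sens = np.array([0.8, 1.1, 0.5, 0.9])             # per-sensor sensitivity (assumed)
R = np.outer(conc, sens) + 0.5 * rng.standard_normal((40, 4))  # array responses

X = np.column_stack([R, np.ones(len(R))])         # responses + bias term
coef, *_ = np.linalg.lstsq(X, conc, rcond=None)   # minimise mean square error

r_new = 55.0 * sens                               # noiseless response to 55 ppm
est = np.append(r_new, 1.0) @ coef
print(abs(est - 55.0) < 5.0)                      # True
```

Fitting the inverse map on the whole array, rather than on a single sensor, is what mitigates the cross-sensitivity the abstract describes.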

  7. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  8. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms have first been tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data have been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.
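
Of the families listed, the Wiener-type inverse filter is the simplest to sketch: spectral division damped by a noise-to-signal constant q, so that near-zeros of the transducer response are not amplified. The pulse shape and reflector positions below are synthetic.

```python
import numpy as np

# Wiener-type deconvolution sketch for an ultrasonic A-scan.
# h is an invented transducer pulse; x holds two close reflectors.

def wiener_deconvolve(y, h, q=1e-2):
    Y, H = np.fft.rfft(y), np.fft.rfft(h, n=len(y))
    G = np.conj(H) / (np.abs(H) ** 2 + q)        # Wiener inverse filter
    return np.fft.irfft(Y * G, n=len(y))

n = 512
x = np.zeros(n)
x[100], x[130] = 1.0, 0.6                        # two close reflectors
t = np.arange(64)
h = np.sin(2 * np.pi * t / 16) * np.exp(-t / 12) # synthetic transducer pulse
y = np.convolve(x, h)[:n]                        # blurred A-scan
xhat = wiener_deconvolve(y, h)
print(int(np.argmax(xhat)))                      # strongest reflector near sample 100
```

The constant q trades resolution against noise amplification; adaptive and minimum-variance methods mentioned in the abstract refine exactly this trade-off.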

  9. Blind source deconvolution for deep Earth seismology

    Science.gov (United States)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component with a weighting scheme based on their deviation from this shape; we then use this shape as an estimation of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which utilizes the fact that most natural signals have an underlying sparseness in an appropriate basis - in this case, impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.

  10. d-Lysergic Acid Diethylamide (LSD) as a Model of Psychosis: Mechanism of Action and Pharmacology.

    Science.gov (United States)

    De Gregorio, Danilo; Comai, Stefano; Posa, Luca; Gobbi, Gabriella

    2016-11-23

    d-Lysergic Acid Diethylamide (LSD) is known for its hallucinogenic properties and psychotic-like symptoms, especially at high doses. It is indeed used as a pharmacological model of psychosis in preclinical research. The goal of this review was to understand the mechanism of action of the psychotic-like effects of LSD. We searched Pubmed, Web of Science, Scopus, Google Scholar and articles' reference lists for preclinical studies regarding the mechanism of action involved in the psychotic-like effects induced by LSD. LSD's mechanism of action is pleiotropic, primarily mediated by the serotonergic system in the Dorsal Raphe, binding the 5-HT2A receptor as a partial agonist and the 5-HT1A receptor as an agonist. At higher doses, LSD also modulates the Ventral Tegmental Area by stimulating dopamine D₂, Trace Amine-Associated Receptor 1 (TAAR₁) and 5-HT2A receptors. More studies clarifying the mechanism of action of the psychotic-like symptoms or psychosis induced by LSD in humans are needed. LSD's effects are mediated by a pleiotropic mechanism involving serotonergic, dopaminergic, and glutamatergic neurotransmission. Thus, LSD-induced psychosis is a useful model to test the therapeutic efficacy of potential novel antipsychotic drugs, particularly drugs with a dual serotonergic and dopaminergic (DA) mechanism or acting on TAAR₁ receptors.

  11. The Role of Programmed Cell Death Regulator LSD1 in Nematode-Induced Syncytium Formation

    Directory of Open Access Journals (Sweden)

    Mateusz Matuszkiewicz

    2018-03-01

    Cyst-forming plant-parasitic nematodes are common pests of many crops. They inject secretions into host cells to induce the developmental and metabolic reprogramming that leads to the formation of a syncytium, which is the sole food source for growing nematodes. As in other host-parasite models, avirulence leads to rapid and local programmed cell death (PCD) known as the hypersensitive response (HR), whereas in the case of virulence, PCD is still observed but is limited to only some cells. Several regulators of PCD were analyzed to understand the role of PCD in compatible plant–nematode interactions. Thus, Arabidopsis plants carrying recessive mutations in LESION SIMULATING DISEASE1 (LSD1) family genes were subjected to nematode infection assays with juveniles of Heterodera schachtii. LSD1 is a negative and conditional regulator of PCD, and fewer and smaller syncytia were induced in the roots of lsd1 mutants than in wild-type Col-0 plants. Mutation in LSD ONE LIKE2 (LOL2) revealed a pattern of susceptibility to H. schachtii antagonistic to lsd1. Syncytia induced on lsd1 roots compared to Col-0 showed significantly retarded growth, modified cell wall structure, increased vesiculation, and some myelin-like bodies present at 7 and 12 days post-infection. To place these data in a wider context, RNA-sequencing analysis of infected and uninfected roots was conducted. During nematode infection, the number of transcripts with changed expression in lsd1 was approximately three times smaller than in wild-type plants (1440 vs. 4206 differentially expressed genes, respectively). LSD1-dependent PCD in roots is thus a highly regulated process in compatible plant–nematode interactions. Two genes identified in this analysis, coding for AUTOPHAGY-RELATED PROTEIN 8F and 8H, were down-regulated in syncytia in the presence of LSD1 and showed an increased susceptibility to nematode infection contrasting with the lsd1 phenotype. Our data indicate that molecular regulators

  12. The Role of Programmed Cell Death Regulator LSD1 in Nematode-Induced Syncytium Formation

    Science.gov (United States)

    Matuszkiewicz, Mateusz; Sobczak, Miroslaw; Cabrera, Javier; Escobar, Carolina; Karpiński, Stanislaw; Filipecki, Marcin

    2018-01-01

    Cyst-forming plant-parasitic nematodes are common pests of many crops. They inject secretions into host cells to induce the developmental and metabolic reprogramming that leads to the formation of a syncytium, which is the sole food source for growing nematodes. As in other host-parasite models, avirulence leads to rapid and local programmed cell death (PCD) known as the hypersensitive response (HR), whereas in the case of virulence, PCD is still observed but is limited to only some cells. Several regulators of PCD were analyzed to understand the role of PCD in compatible plant–nematode interactions. Thus, Arabidopsis plants carrying recessive mutations in LESION SIMULATING DISEASE1 (LSD1) family genes were subjected to nematode infection assays with juveniles of Heterodera schachtii. LSD1 is a negative and conditional regulator of PCD, and fewer and smaller syncytia were induced in the roots of lsd1 mutants than in wild-type Col-0 plants. Mutation in LSD ONE LIKE2 (LOL2) revealed a pattern of susceptibility to H. schachtii antagonistic to lsd1. Syncytia induced on lsd1 roots compared to Col0 showed significantly retarded growth, modified cell wall structure, increased vesiculation, and some myelin-like bodies present at 7 and 12 days post-infection. To place these data in a wider context, RNA-sequencing analysis of infected and uninfected roots was conducted. During nematode infection, the number of transcripts with changed expression in lsd1 was approximately three times smaller than in wild-type plants (1440 vs. 4206 differentially expressed genes, respectively). LSD1-dependent PCD in roots is thus a highly regulated process in compatible plant–nematode interactions. Two genes identified in this analysis, coding for AUTOPHAGY-RELATED PROTEIN 8F and 8H were down-regulated in syncytia in the presence of LSD1 and showed an increased susceptibility to nematode infection contrasting with lsd1 phenotype. Our data indicate that molecular regulators belonging to the

  13. Analysis of Shift and Deformation of Planar Surfaces Using the Least Squares Plane

    Directory of Open Access Journals (Sweden)

    Hrvoje Matijević

    2006-12-01

    Full Text Available Modern methods of measurement developed on the basis of advanced reflectorless distance measurement have paved the way for easier detection and analysis of shift and deformation. A large quantity of collected data points will often require a mathematical model of the surface that best fits them. Although this can be a complex task, in the case of planar surfaces it is easily done, enabling further processing and analysis of measurement results. The paper describes the fitting of a plane to a set of collected points by least squares, with outliers previously excluded via the RANSAC algorithm. Based on that, a method for the analysis of the deformation and shift of planar surfaces is also described.
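
    The plane-fitting step described above can be sketched as an orthogonal least-squares fit via the SVD of the centred point cloud (a minimal illustration on synthetic data, not the paper's implementation; the RANSAC outlier-exclusion stage is omitted):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point cloud.

    The plane minimising the sum of squared orthogonal distances
    passes through the centroid; its normal is the right singular
    vector of the centred data with the smallest singular value.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Synthetic points on the plane z = 1 + 0.1*x - 0.2*y, plus noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, size=(200, 2))
z = 1 + 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + rng.normal(0, 0.01, 200)
pts = np.column_stack([xy, z])

c, n = fit_plane(pts)
d = np.abs((pts - c) @ n)     # orthogonal point-to-plane distances
print(d.max())                 # on the order of the noise level
```

    Shift and deformation analysis then reduces to comparing the fitted plane parameters, or the residual distances `d`, between measurement epochs.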

  14. Least squares approach for initial data recovery in dynamic data-driven applications simulations

    KAUST Repository

    Douglas, C.

    2010-12-01

    In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.
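
    The objective function described above — a measurement misfit plus a penalisation term toward the prior initial data — has a closed-form minimiser in the linear case. A toy sketch (all names, operators, and values are illustrative, not the authors' formulation):

```python
import numpy as np

# The "initial data" x is observed only at a few sensor locations
# (rows of H); a penalty keeps the update close to the prior x_prior.
# Minimise  J(x) = ||H x - m||^2 + beta * ||x - x_prior||^2,
# whose normal equations give
#   x = (H^T H + beta I)^{-1} (H^T m + beta x_prior).
n = 50
x_true = np.sin(np.linspace(0, np.pi, n))   # unknown initial condition
x_prior = np.zeros(n)                        # crude prior guess
sensors = np.arange(0, n, 5)                 # measurement locations
H = np.eye(n)[sensors]                       # selection operator
m = H @ x_true                               # noiseless measurements

beta = 1e-3
x_hat = np.linalg.solve(H.T @ H + beta * np.eye(n),
                        H.T @ m + beta * x_prior)

# At sensor locations the update nearly reproduces the data;
# elsewhere the penalty pulls the solution toward the prior.
err_at_sensors = np.abs(x_hat[sensors] - m).max()
print(err_at_sensors)
```

    Varying `beta` during the simulation mirrors the paper's idea of adjusting the penalisation term as new measurements arrive.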

  15. Resolution of the neutron transport equation by a three-dimensional least square method

    International Nuclear Information System (INIS)

    Varin, Elisabeth

    2001-01-01

    The knowledge of the space and time distribution of neutrons of a given energy or speed allows the exploitation and control of a nuclear reactor and the assessment of the irradiation dose around an irradiated nuclear fuel storage site. The neutron density is described by a transport equation. The objective of this research thesis is to develop software for the resolution of this stationary equation in a three-dimensional Cartesian domain by means of a deterministic method. After a presentation of the transport equation, the author gives an overview of the different deterministic resolution approaches, identifies their benefits and drawbacks, and discusses the choice of the Ressel method. The least squares method is described in detail and then applied. Numerical benchmarks are reported for validation purposes.

  16. Estimating the kinetic parameters of activated sludge storage using weighted non-linear least-squares and accelerating genetic algorithm.

    Science.gov (United States)

    Fang, Fang; Ni, Bing-Jie; Yu, Han-Qing

    2009-06-01

    In this study, weighted non-linear least-squares analysis and an accelerating genetic algorithm are integrated to estimate the kinetic parameters of substrate consumption and storage product formation of activated sludge. A storage product formation equation is developed and used to construct the objective function for the determination of its production kinetics. The weighted least-squares analysis is employed to express the differences in storage product concentration between the model predictions and the experimental data as the sum of squared weighted errors. By minimizing the objective function with a real-coding-based accelerating genetic algorithm, the kinetic parameters for substrate consumption and storage product formation are estimated as a maximum heterotrophic growth rate of 0.121/h, a yield coefficient of 0.44 mg CODX/mg CODS (COD, chemical oxygen demand), and a substrate half-saturation constant of 16.9 mg/L. Also, the fraction of substrate electrons diverted to storage product formation is estimated to be 0.43 mg CODSTO/mg CODS. The validity of this approach is confirmed by the results of independent tests and by kinetic parameter values reported in the literature, suggesting that it could be useful for evaluating the product formation kinetics of mixed cultures like activated sludge. More importantly, as this integrated approach estimates the kinetic parameters rapidly and accurately, it could be applied to other biological processes.
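
    A hedged sketch of the weighted non-linear fitting idea, using SciPy's trust-region least-squares solver in place of the paper's accelerating genetic algorithm, and a generic Monod-type rate law; the "true" values below merely echo those quoted in the abstract, and none of this is the authors' model or code:

```python
import numpy as np
from scipy.optimize import least_squares

def monod(params, S):
    """Generic Monod-type rate law: r = mu_max * S / (Ks + S)."""
    mu_max, Ks = params
    return mu_max * S / (Ks + S)

rng = np.random.default_rng(1)
S = np.linspace(1, 200, 30)                    # substrate conc., mg/L
true = (0.121, 16.9)                           # values echoing the abstract
r_obs = monod(true, S) * (1 + rng.normal(0, 0.02, S.size))

# Weighted residuals: weighting by 1/observation makes the fit
# minimise relative rather than absolute errors.
w = 1.0 / np.maximum(r_obs, 1e-6)

def weighted_residuals(params):
    return w * (monod(params, S) - r_obs)

fit = least_squares(weighted_residuals, x0=(0.3, 50.0), bounds=(0, np.inf))
mu_hat, Ks_hat = fit.x
print(mu_hat, Ks_hat)   # close to the generating values
```

    A global stochastic search such as the paper's genetic algorithm is less sensitive to the starting guess `x0` than this local solver, which is the main motivation for the hybrid approach.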

  17. Unlocking interpretation in near infrared multivariate calibrations by orthogonal partial least squares.

    Science.gov (United States)

    Stenlund, Hans; Johansson, Erik; Gottfries, Johan; Trygg, Johan

    2009-01-01

    Near infrared spectroscopy (NIR) was developed primarily for applications such as the quantitative determination of nutrients in the agricultural and food industries. Examples include the determination of water, protein, and fat within complex samples such as grain and milk. Because of its useful properties, NIR analysis has spread to other areas such as chemistry and pharmaceutical production. NIR spectra consist of infrared overtones and combinations thereof, making interpretation of the results complicated. It can be very difficult to assign peaks to known constituents in the sample. Thus, multivariate analysis (MVA) has been crucial in translating spectral data into information, mainly for predictive purposes. Orthogonal partial least squares (OPLS), a new MVA method, has prediction and modeling properties similar to those of other MVA techniques, e.g., partial least squares (PLS), a method with a long history of use for the analysis of NIR data. OPLS provides an intrinsic algorithmic improvement for the interpretation of NIR data. In this report, four sets of NIR data were analyzed to demonstrate the improved interpretation provided by OPLS. The first two sets included simulated data to demonstrate the overall principles; the third set comprised a statistically replicated design of experiments (DoE), to demonstrate how instrumental difference could be accurately visualized and correctly attributed to Wood's anomaly phenomena; the fourth set was chosen to challenge the MVA by using data relating to powder mixing, a crucial step in the pharmaceutical industry prior to tabletting. Improved interpretation by OPLS was demonstrated for all four examples, as compared to alternative MVA approaches. It is expected that OPLS will be used mostly in applications where improved interpretation is crucial; one such area is process analytical technology (PAT). PAT involves fewer independent samples, i.e., batches, than would be associated with agricultural applications; in

  18. LSD, 5-HT (serotonin), and the evolution of a behavioral assay.

    Science.gov (United States)

    Appel, James B; West, William B; Buggy, James

    2004-01-01

    Research in our laboratory, supported by NIDA and facilitated by Roger Brown, has indicated that serotonergic neuronal systems are involved in the discriminative stimulus effects of LSD. However, the only compounds that fully antagonize the LSD cue act at both serotonin (5-HT) and dopamine (DA) receptors. In addition, substitution for LSD in standard drug vs. no-drug (DND) discriminations does not necessarily predict either similar mechanisms of action or hallucinogenic potency because 'false positives' occur when animals are given drugs such as lisuride (LHM), quipazine, or, possibly, yohimbine. These effects can be greatly reduced by using drug vs. drug (D-D), drug vs. drug vs. no drug (D-ND), or drug vs. 'other' drug (saline, cocaine, pentobarbital) training procedures. Additional studies, in which drugs were administered directly into the cerebral ventricles or specific brain areas, suggest that structures containing terminal fields of serotonergic neurons might be involved in the stimulus effects of LSD.

  19. Non-stationary covariance function modelling in 2D least-squares collocation

    Science.gov (United States)

    Darbeheshti, N.; Featherstone, W. E.

    2009-06-01

    Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compare stationary and non-stationary covariance functions in 2D LSC on the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
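
    Standard (stationary) LSC prediction, the baseline the paper improves on, reduces to a covariance-weighted linear solve. A minimal sketch with an assumed Gaussian covariance model (all names, the covariance form, and the parameter values are illustrative, not the paper's data or model):

```python
import numpy as np

def cov(a, b, C0=1.0, L=0.5):
    """Stationary, isotropic Gaussian covariance C(d) = C0 * exp(-(d/L)^2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return C0 * np.exp(-(d / L) ** 2)

rng = np.random.default_rng(2)
obs_xy = rng.uniform(0, 2, size=(100, 2))               # observation sites
signal = lambda p: np.sin(2 * p[:, 0]) * np.cos(2 * p[:, 1])
obs = signal(obs_xy) + rng.normal(0, 0.01, 100)          # noisy field values

# LSC prediction at new points: s_p = C_pm (C_mm + D)^{-1} l,
# where D is the noise covariance of the observations.
new_xy = np.array([[1.0, 1.0], [0.5, 1.5]])
Cmm = cov(obs_xy, obs_xy) + 0.01**2 * np.eye(100)
Cpm = cov(new_xy, obs_xy)
pred = Cpm @ np.linalg.solve(Cmm, obs)
print(pred, signal(new_xy))
```

    The paper's kernel convolution method replaces the single global kernel above with spatially varying kernels, so that the effective correlation length `L` adapts to, e.g., rough mountainous versus smooth plain regions.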

  20. Amplitude differences least squares method applied to temporal cardiac beat alignment

    International Nuclear Information System (INIS)

    Correa, R O; Laciar, E; Valentinuzzi, M E

    2007-01-01

    High resolution averaged ECG is an important diagnostic technique in post-infarcted and/or chagasic patients with high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested on high resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). Results show that LSAD produced a lower alignment error in all records contaminated with white noise, while in those blurred by power line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative.
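
    The LSAD idea can be sketched as a sliding sum of squared amplitude differences between beat and template, with the fiducial point taken at the minimising lag (a toy synthetic pulse, not the authors' data or code):

```python
import numpy as np

def lsad_align(beat, template):
    """Return the lag minimising the sum of squared sample differences
    between the template and each window of the beat (LSAD criterion)."""
    n, m = len(beat), len(template)
    sse = [np.sum((beat[k:k + m] - template) ** 2)
           for k in range(n - m + 1)]
    return int(np.argmin(sse))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
template = np.exp(-((t - 0.5) / 0.05) ** 2)   # idealised QRS-like pulse
beat = np.zeros(600)
beat[137:337] = template                       # true fiducial offset: 137
beat += rng.normal(0, 0.05, 600)               # white measurement noise

print(lsad_align(beat, template))
```

    Cross-correlation alignment would instead take the lag maximising `beat[k:k+m] @ template`; the two criteria differ when noise or baseline shifts change the amplitude scale of individual beats.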