Analysis of scintigrams by singular value decomposition (SVD) technique
Energy Technology Data Exchange (ETDEWEB)
Savolainen, S.E.; Liewendahl, B.K. (Helsinki Univ. (Finland). Dept. of Physics)
1994-05-01
The singular value decomposition (SVD) method is presented as a potential tool for analyzing gamma camera images. Mathematically, image analysis is a study of matrices, since the standard scintigram is a digitized matrix representation of the recorded photon fluence from the radioactivity of the object. Each matrix element (pixel) contains a number equal to the counts detected at that position of the object. The analysis of images can thus be reduced to the analysis of the singular values of the matrix decomposition. In the present study the clinical usefulness of SVD was tested by analyzing two different kinds of scintigrams: brain images obtained by single photon emission tomography (SPET), and planar images of the liver and spleen. It is concluded that SVD can be applied to the analysis of gamma camera images, and that it provides an objective method for interpreting the clinically relevant information contained in them. In image filtering, SVD provides results comparable to conventional filtering. In addition, the study of singular values can be used for semiquantitation of radionuclide images, as exemplified by brain SPET studies and liver-spleen planar studies. (author)
2017-09-27
This report covers approximation of the truncated SVD (tSVD), approximate eigenvectors and eigenvalues from Krylov subspaces, a motivating example, corruption of subspaces with noise, principal angles for quantifying subspace overlap, a measure of corruption, and sufficient conditions on the tSVD. We assume without any loss of generality that N ≥ M; otherwise simply replace A with A^T. Then any N × M matrix A can be factorized as A = UΣV^T.
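The factorization above is easy to verify numerically; a minimal sketch with NumPy (the matrix is an arbitrary example, not taken from the report):

```python
import numpy as np

# Minimal numerical check of the factorization A = U Sigma V^T for an
# N x M matrix with N >= M. The matrix below is an arbitrary example.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))               # N = 6, M = 4

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Columns of U and rows of Vt are orthonormal; singular values are
# non-negative and sorted in non-increasing order.
assert np.allclose(U.T @ U, np.eye(4))
assert np.allclose(Vt @ Vt.T, np.eye(4))
assert np.all(s[:-1] >= s[1:]) and np.all(s >= 0)

# Reconstruction: A = U diag(s) V^T up to floating-point error.
assert np.allclose(U @ np.diag(s) @ Vt, A)
```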
Directory of Open Access Journals (Sweden)
Huaishuo Xiao
2017-11-01
In order to identify various kinds of combined power quality disturbances, singular value decomposition (SVD) and the improved total least squares-estimation of signal parameters via rotational invariance techniques (TLS-ESPRIT) are combined as the basis of disturbance identification in this paper. SVD is applied to identify the catastrophe points of disturbance intervals, based on which the disturbance intervals are segmented. The improved TLS-ESPRIT, optimized by a singular value norm method, is then used to analyze each data segment and extract the amplitude, frequency, attenuation coefficient, and initial phase of the various kinds of disturbances. Multi-group combined disturbance test signals are constructed in MATLAB, and the proposed method is also tested on measured data from the IEEE Power and Energy Society (PES) database. The test results show that the proposed method has higher accuracy than conventional TLS-ESPRIT and can be used in the identification of measured data.
Riwinoto
2012-01-01
Clustering methods are now being applied in DNA research, where the data are obtained through microarray techniques. Using the SVD technique, the dimensionality of the data is reduced, which simplifies the computation. This paper presents the results of unsupervised clustering of genes from yeast data using the quantum clustering method. For comparison, clustering with the Support Vector Clustering method was also performed. In addition, data h...
Directory of Open Access Journals (Sweden)
Zhu Dongxiao
2010-06-01
Background: Comparative analysis of gene expression profiling across multiple biological categories, such as different species or different kinds of tissue, promises to enhance the fundamental understanding of the universality as well as the specialization of mechanisms and related biological themes. Grouping together genes with a similar expression pattern, i.e. exhibiting co-expression, is a starting point in understanding and analyzing gene expression data. Recent literature advocates gene-module-level analysis in order to understand biological network design and system behaviors in disease and life processes; however, practical difficulties often lie in the implementation of existing methods. Results: Using the singular value decomposition (SVD) technique, we developed a new computational tool, named svdPPCS (SVD-based Pattern Pairing and Chart Splitting), to identify conserved and divergent co-expression modules of two sets of microarray experiments. In the proposed method, gene modules are identified by splitting the two-way chart coordinated with a pair of left singular vectors factorized from the gene expression matrices of the two biological categories. Importantly, the cutoffs are determined by a data-driven algorithm using the well-defined statistic, SVD-p. The implementation was illustrated on two time series microarray data sets generated from samples of accessory gland (ACG) and malpighian tubule (MT) tissues of the line W118 of Drosophila. Two conserved modules and six divergent modules, each of which has a unique characteristic profile across tissue kinds and aging processes, were identified. The number of genes contained in these modules ranged from five to a few hundred. Three to over a hundred GO terms were over-represented in individual modules with FDR ... Conclusions: svdPPCS is a novel computational tool for the comparative analysis of transcriptional profiling. It especially fits the comparison of time
Singular value decomposition for collaborative filtering on a GPU
Kato, Kimikazu; Hosino, Tikara
2010-06-01
Collaborative filtering predicts customers' unknown preferences from known preferences. In collaborative filtering computations, a singular value decomposition (SVD) is needed to reduce the size of a large-scale matrix so that the burden of the next computation phase is decreased. In this application, SVD means a rough approximate factorization of a given matrix into smaller matrices. Webb (a.k.a. Simon Funk) presented an effective algorithm for computing such an SVD for the open competition known as the "Netflix Prize". The algorithm uses an iterative method so that the approximation error improves at each step of the iteration. We give a GPU version of Webb's algorithm. Our algorithm is implemented in CUDA and is shown to be efficient by experiment.
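A minimal single-machine sketch of the Funk-style iterative approximation (not the paper's CUDA implementation; the rank, learning rate, regularization, and epoch count are all illustrative):

```python
import numpy as np

# Sketch of Funk/Webb-style approximate SVD: factor a ratings matrix
# R ~ P @ Q.T by gradient descent on the observed entries only.
rng = np.random.default_rng(1)
n_users, n_items, rank = 30, 20, 3
R = rng.standard_normal((n_users, rank)) @ rng.standard_normal((rank, n_items))
observed = [(u, i) for u in range(n_users) for i in range(n_items) if (u + i) % 2 == 0]

P = 0.1 * rng.standard_normal((n_users, rank))
Q = 0.1 * rng.standard_normal((n_items, rank))
lr, reg = 0.02, 0.001

def rmse():
    return float(np.sqrt(np.mean([(R[u, i] - P[u] @ Q[i]) ** 2 for u, i in observed])))

before = rmse()
for _ in range(200):                        # each sweep improves the fit
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]
        P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                      Q[i] + lr * (err * P[u] - reg * Q[i]))
after = rmse()
```

The GPU version parallelizes exactly this inner update loop; the sequential sketch only shows the error-driven iteration the abstract refers to.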
Advances in audio watermarking based on singular value decomposition
Dhar, Pranab Kumar
2015-01-01
This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides imperceptible watermarked sounds. Then, an audio watermarking method in the fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload, and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to demonstrate authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors, of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter ... nonlinear functions. Recommendations concerning the use of pilot points and singular value decomposition in real-world groundwater model calibration are finally given. (c) 2008 Elsevier Ltd. All rights reserved.
Biclustering via Sparse Singular Value Decomposition
Lee, Mihee
2010-02-16
Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering, i.e. identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard-structured matrix approximation to the data matrix. The desired checkerboard structure is achieved by forcing both the left and right singular vectors to be sparse, that is, to have many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed on the least squares regressions to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets. © 2010, The International Biometric Society.
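One way to read the idea is as an alternating, soft-thresholded power iteration. A toy rank-1 sketch (with fixed thresholds standing in for the paper's data-driven penalty selection):

```python
import numpy as np

# Toy sketch of the SSVD idea for one rank-1 layer: alternate updates of the
# left/right singular vectors, soft-thresholding each to induce sparsity.
# The fixed threshold 0.5 replaces the paper's data-driven penalty selection.
def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(2)
u0 = np.zeros(40); u0[:8] = 1.0            # planted sparse "checkerboard" block
v0 = np.zeros(30); v0[:6] = 1.0
X = 5.0 * np.outer(u0, v0) + 0.1 * rng.standard_normal((40, 30))

u = np.linalg.svd(X)[0][:, 0]              # start from the ordinary SVD
for _ in range(20):
    v = soft(X.T @ u, 0.5); v /= np.linalg.norm(v)
    u = soft(X @ v, 0.5);   u /= np.linalg.norm(u)

# u and v come out sparse, supported on the planted 8 x 6 block.
```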
On reliability of singular-value decomposition in attractor reconstruction
International Nuclear Information System (INIS)
Palus, M.; Dvorak, I.
1990-12-01
The applicability of singular-value decomposition for reconstructing a strange attractor from a one-dimensional chaotic time series, as proposed by Broomhead and King, is extensively tested and discussed. Previously published doubts about its reliability are confirmed: singular-value decomposition, by nature a linear method, has only limited power when nonlinear structures are studied. (author). 29 refs, 9 figs
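The Broomhead-King construction itself is easy to sketch: form a trajectory matrix from the scalar series and inspect its singular values. A noisy sine stands in for the chaotic series here, purely to illustrate the mechanics:

```python
import numpy as np

# Sketch of the Broomhead-King procedure: embed a scalar time series in a
# trajectory matrix and examine its singular-value spectrum.
t = np.arange(2000) * 0.05
x = np.sin(t) + 0.01 * np.random.default_rng(3).standard_normal(t.size)

m = 10                                                    # embedding window
traj = np.lib.stride_tricks.sliding_window_view(x, m)     # shape (1991, 10)
traj = traj - traj.mean(axis=0)

s = np.linalg.svd(traj, compute_uv=False)
# A sine needs only two components; the remaining singular values sit near
# the noise floor, which is how the "deterministic" subspace is selected.
ratios = s / s[0]
```

The linearity criticized in the abstract is visible here: the selection rule looks only at variance along linear directions of the embedding space.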
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2013-01-01
We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then the singular values are embedded in a detail subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the MSFs is a difficult problem, so a multiobjective ant colony optimization is used to determine them. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from a high probability of false positive detection of the watermarks.
Heriyanto, Mohammad; Srigutomo, Wahyu
2017-01-01
Object detection with a multistatic array using singular value decomposition
Hallquist, Aaron T.; Chambers, David H.
2014-07-01
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across a surface and that travels down the surface. The detection system converts the return signals from a time domain to a frequency domain, resulting in frequency return signals. The detection system then performs a singular value decomposition for each frequency to identify singular values for each frequency. The detection system then detects the presence of a subsurface object based on a comparison of the identified singular values to expected singular values when no subsurface object is present.
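A toy version of the per-frequency test can be sketched as follows. All data here are synthetic, and modeling a target as a rank-1 contribution to the transmit-receive matrix is an assumption for illustration, not the patent's signal model:

```python
import numpy as np

# Per-frequency singular-value detection sketch: for each frequency bin, form
# the multistatic response matrix (transmitters x receivers), take its
# singular values, and flag a target when the leading singular value rises
# above a background (no-object) reference.
rng = np.random.default_rng(4)
n_tx, n_rx, n_freq = 8, 8, 16

def response(has_target):
    # Background clutter plus, optionally, a rank-1 scatterer contribution.
    M = 0.1 * (rng.standard_normal((n_freq, n_tx, n_rx))
               + 1j * rng.standard_normal((n_freq, n_tx, n_rx)))
    if has_target:
        a = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)
        b = rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)
        M += np.outer(a, b)[None, :, :]    # same scatterer at every frequency
    return M

background = np.linalg.svd(response(False), compute_uv=False)  # (n_freq, 8)
measured = np.linalg.svd(response(True), compute_uv=False)

# Detection statistic: leading singular value per frequency vs. background.
detect = measured[:, 0] > 3.0 * background[:, 0].mean()
```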
Image Fakery Detection Based on Singular Value Decomposition
Directory of Open Access Journals (Sweden)
T. Basaruddin
2009-11-01
The growth of image processing technology nowadays makes it easier for users to modify and fake images. Image fakery is the manipulation of part or all of an image, either in content or context, with the help of digital image processing techniques. Fake images are hard to recognize because they look so natural. Yet numerical computation techniques are able to detect the evidence of faking. This research successfully applied the singular value decomposition method to detect image fakery. The image preprocessing algorithm prior to the detection process yields two vectors orthogonal to the singular value vector, which are important for detecting fake images. Experiments on images in several conditions successfully detected the fake images with a threshold value of 0.2. Singular value decomposition-based detection of image fakery can be used to accurately investigate fake images modified from original images.
Edward Jero, S; Ramu, Palaniappan; Ramakrishnan, S
2014-10-01
ECG steganography provides secure transmission of secret information, such as patient personal information, through ECG signals. This paper proposes an approach that uses the discrete wavelet transform to decompose signals and singular value decomposition (SVD) to embed the secret information into the decomposed ECG signal. The novelty of the proposed method is to embed the watermark using SVD into the two-dimensional (2D) ECG image. The embedding of secret information in a selected sub-band of the decomposed ECG is achieved by replacing the singular values of the decomposed cover image with the singular values of the secret data. The performance assessment of the proposed approach identifies the sub-band best suited to hiding secret data and the signal degradation that would affect diagnosability. Performance is measured using metrics such as the Kullback-Leibler divergence (KL), percentage residual difference (PRD), peak signal-to-noise ratio (PSNR), and bit error rate (BER). A dynamic location selection approach for embedding the singular values is also discussed. The proposed approach is demonstrated on the MIT-BIH database, and the observations validate that HH is the ideal sub-band in which to hide data. It is also observed that the signal degradation (less than 0.6%) is very low in the proposed approach, even with secret data as large as the sub-band size. Thus it does not affect diagnosability and is reliable for transmitting patient information.
Using many pilot points and singular value decomposition in groundwater model calibration
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
over the model area. Singular value decomposition (SVD) of the normal matrix is used to reduce the large number of pilot point parameters to a smaller number of so-called super parameters that can be estimated by nonlinear regression from the available observations. A number of eigenvectors corresponding to significant eigenvalues (resulting from the decomposition) is used to transform the model from having many pilot point parameters to having a few super parameters. A synthetic case model is used to analyze and demonstrate the application of the presented method of model parameterization ...
Energy Technology Data Exchange (ETDEWEB)
Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)
2017-02-15
Gear vibration signals are nonlinear and non-stationary, and gear fault diagnosis under variable conditions has always been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on intrinsic time-scale decomposition (ITD), singular value decomposition (SVD), and support vector machine (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of the gearbox into several proper rotation components (PRCs). Subsequently, singular value decomposition is applied to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, a support vector machine is applied to classify the gear fault type. According to the experimental results, the performance of ITD-SVD exceeds that of the time-frequency analysis methods EMD and WPT combined with SVD for feature extraction, and the SVM classifier outperforms K-nearest neighbors (K-NN) and back propagation (BP). Moreover, the proposed approach can accurately diagnose and identify the different gear fault types under variable conditions.
Biplot and Singular Value Decomposition Macros for Excel©
Directory of Open Access Journals (Sweden)
Ilya A. Lipkovich
2002-06-01
The biplot display is a graph of row and column markers obtained from data that form a two-way table. The markers are calculated from the singular value decomposition of the data matrix. The biplot display may be used with many multivariate methods to display relationships between variables and objects. It is commonly used in ecological applications to plot relationships between species and sites. This paper describes a set of Excel macros that may be used to draw a biplot display based on results from principal components analysis, correspondence analysis, canonical discriminant analysis, metric multidimensional scaling, redundancy analysis, canonical correlation analysis, or canonical correspondence analysis. The macros allow for a variety of transformations of the data prior to the singular value decomposition and scaling of the markers following the decomposition.
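The marker computation itself reduces to a few lines. A sketch outside Excel (NumPy, with an illustrative data matrix; alpha controls how the singular values are split between row and column markers):

```python
import numpy as np

# Biplot markers from the SVD: row markers G and column markers H whose
# product G @ H.T reproduces the rank-2 approximation of the data matrix.
rng = np.random.default_rng(5)
X = rng.standard_normal((12, 5))
X = X - X.mean(axis=0)                    # column-center, as for PCA biplots

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k, alpha = 2, 1.0                         # 2-D display; alpha=1 is a "form" biplot
G = U[:, :k] * s[:k] ** alpha             # row (object) markers
H = Vt[:k].T * s[:k] ** (1.0 - alpha)     # column (variable) markers

# G @ H.T equals the best rank-2 approximation of X.
X2 = (U[:, :2] * s[:2]) @ Vt[:2]
assert np.allclose(G @ H.T, X2)
```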
On the use of the singular value decomposition for text retrieval
Energy Technology Data Exchange (ETDEWEB)
Husbands, P.; Simon, H.D.; Ding, C.
2000-12-04
The use of the Singular Value Decomposition (SVD) has been proposed for text retrieval in several recent works. This technique uses the SVD to project very high dimensional document and query vectors into a low dimensional space. In this new space it is hoped that the underlying structure of the collection is revealed, thus enhancing retrieval performance. Theoretical results have provided some evidence for this claim, and to some extent experiments have confirmed it. However, these studies have mostly used small test collections and simplified document models. In this work we investigate the use of the SVD on large document collections. We show that, if interpreted as a mechanism for representing the terms of the collection, this technique alone is insufficient for dealing with the variability in term occurrence. Section 2 introduces the text retrieval concepts necessary for our work. A short description of our experimental architecture is presented in Section 3. Section 4 describes how term occurrence variability affects the SVD and then shows how the decomposition influences retrieval performance. A possible way of improving SVD-based techniques is presented in Section 5, and conclusions are given in Section 6.
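The basic projection mechanism can be sketched on a toy term-document matrix. The matrix contents and term labels are made up, and this is the generic latent-semantic-indexing recipe rather than the paper's experimental setup:

```python
import numpy as np

# Toy latent-semantic-indexing sketch: project term-document and query
# vectors into a k-dimensional space via the truncated SVD.
#                 d0  d1  d2  d3
A = np.array([[3., 1., 0., 0.],   # term "svd"
              [1., 3., 0., 0.],   # term "matrix"
              [0., 0., 2., 1.],   # term "protein"
              [0., 0., 1., 2.]])  # term "gene"

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T         # document coordinates, shape (4, k)

def project_query(q):
    # Fold a query vector into the k-dim space: q_hat = Sigma_k^-1 U_k^T q
    return np.diag(1.0 / s[:k]) @ U[:, :k].T @ q

q = np.array([1., 1., 0., 0.])             # query: "svd matrix"
qk = project_query(q)
sims = docs @ qk / (np.linalg.norm(docs, axis=1) * np.linalg.norm(qk))
# the two linear-algebra documents (d0, d1) score highest
```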
Singular value decomposition methods for wave propagation analysis
Czech Academy of Sciences Publication Activity Database
Santolík, Ondřej; Parrot, M.; Lefeuvre, F.
2003-01-01
Roč. 38, č. 1 (2003), s. 10-1-10-13 ISSN 0048-6604 R&D Projects: GA ČR GA205/01/1064 Grant - others:Barrande(CZ) 98039/98055 Institutional research plan: CEZ:AV0Z3042911; CEZ:MSM 113200004 Keywords : wave propagation * singular value decomposition Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.832, year: 2003
Biplot and Singular Value Decomposition Macros for Excel©
Lipkovich, Ilya A.; Smith, Eric P.
2002-01-01
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2017-12-01
Among the emergent applications of digital watermarking are copyright protection and proof of ownership. Recently, Makbol and Khoo (2013) proposed for these applications a new robust blind image watermarking scheme based on the redundant discrete wavelet transform (RDWT) and the singular value decomposition (SVD). In this paper, we present two ambiguity attacks on this algorithm showing that it fails when used for robustness applications such as owner identification, proof of ownership, and transaction tracking. Keywords: Ambiguity attack, Image watermarking, Singular value decomposition, Redundant discrete wavelet transform
Energy Technology Data Exchange (ETDEWEB)
Emery, L.
1999-04-13
Magnet errors and off-center orbits through sextupoles perturb the dispersion and beta functions in a storage ring (SR), which affects machine performance. In a large ring such as the Advanced Photon Source (APS), the magnet errors are difficult to determine with beam-based methods. Also, non-zero orbits through sextupoles result from user requests for steering at light source points. For expediency, a singular value decomposition (SVD) matrix method analogous to orbit correction was adopted to make global corrections to these functions using the strengths of several quadrupoles as correcting elements. The direct response matrix is calculated from the model of the perfect lattice. The inverse is calculated by SVD with a selected number of singular vectors. The resulting improvement in the lattice functions and machine performance will be presented.
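The correction scheme described here amounts to a truncated pseudo-inverse of a response matrix. A generic sketch with a synthetic matrix (in the real machine, R comes from the lattice model and the observations are measured lattice functions):

```python
import numpy as np

# Truncated-SVD correction sketch: given a response matrix R (change in the
# measured functions per unit change of each corrector), compute corrector
# settings from a pseudo-inverse built from a limited number of singular
# vectors, exactly as in SVD-based orbit correction.
rng = np.random.default_rng(6)
R = rng.standard_normal((40, 8))           # 40 observation points, 8 quadrupoles

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 6                                      # keep only well-conditioned directions
R_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

true_dk = 0.01 * rng.standard_normal(8)    # "unknown" quadrupole errors
perturbation = R @ true_dk                 # resulting lattice-function distortion
correction = -R_pinv @ perturbation        # settings that cancel most of it

residual = np.linalg.norm(perturbation + R @ correction)
```

Dropping the smallest singular values trades a small residual for robustness: the discarded directions are the ones where measurement noise would otherwise be amplified by 1/s.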
Heriyanto, M.; Srigutomo, W.
2017-07-01
Exploration of natural or energy resources requires geophysical surveys to determine the subsurface structure, for example by the DC resistivity method. In this research, field and synthetic data acquired with the Schlumberger configuration were used. One-dimensional (1-D) DC resistivity inversion was carried out using Singular Value Decomposition (SVD) and Levenberg-Marquardt (LM) techniques to obtain a layered resistivity structure. We have developed software that performs both inversion methods, accompanied by a user-friendly interface. The two methods were compared with respect to the number of iterations, robustness to noise, computation time, and inversion results. The SVD inversion was faster and produced better results than LM. The inversion showed that both methods are appropriate for interpreting subsurface resistivity structure.
Directory of Open Access Journals (Sweden)
Te-Jen Su
2007-01-01
In this letter, a clonal selection algorithm (CSA) with the singular value decomposition (SVD) method is investigated for the realization of two-dimensional (2D) infinite-impulse response (IIR) filters with arbitrary magnitude responses. The CSA is applied to optimize the sampled frequencies of the transition band of the digital filter, producing a planar response matrix of a 2D IIR digital filter. By using the SVD, 2D magnitude specifications can be decomposed into a pair of 1D filters, and thus the problem of designing a 2D digital filter can be reduced to that of designing a pair of 1D digital filters, or even only one 1D digital filter. The simulation results show that the proposed method achieves better minimum attenuation between the passband and stopband.
Directory of Open Access Journals (Sweden)
P. K. Dhar
2017-06-01
Digital watermarking has drawn extensive attention for copyright protection of multimedia data. This paper introduces a blind audio watermarking scheme in the discrete cosine transform (DCT) domain based on singular value decomposition (SVD), exponential operation (EO), and logarithm operation (LO). In our proposed scheme, the original audio is first segmented into non-overlapping frames and DCT is applied to each frame. Low-frequency DCT coefficients are divided into sub-bands and the power of each sub-band is calculated. EO is performed on the sub-band with the highest power of the DCT coefficients of each frame. SVD is applied to the exponential coefficients, represented in matrix form, of the sub-band with the highest power. Watermark information is embedded into the largest singular value using a quantization function. Simulation results indicate that the proposed watermarking scheme is highly robust against different attacks. In addition, it has a high data payload and shows low error probability rates. Moreover, it provides good performance in terms of imperceptibility, robustness, and data payload compared with some recent state-of-the-art watermarking methods.
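The quantization step on the largest singular value can be sketched with a standard quantization-index-modulation rule. The step size delta and the coefficient block are illustrative, and the paper's exact quantization function may differ:

```python
import numpy as np

# QIM-style embedding sketch: quantize the largest singular value of a
# coefficient block so that the parity of its quantization cell encodes one
# watermark bit; extraction re-quantizes and reads the parity back.
delta = 0.5                                # quantization step (illustrative)

def embed_bit(block, bit):
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    q = int(np.floor(s[0] / delta))
    if q % 2 != bit:                       # pick a cell whose parity encodes the bit
        q += 1
    s = s.copy()
    s[0] = (q + 0.5) * delta               # move to the centre of that cell
    return U @ np.diag(s) @ Vt

def extract_bit(block):
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.floor(s[0] / delta)) % 2

rng = np.random.default_rng(8)
block = np.diag([6., 3., 2., 1.]) + 0.01 * rng.standard_normal((4, 4))
for bit in (0, 1):
    assert extract_bit(embed_bit(block, bit)) == bit
```

Because the singular value is perturbed by at most about delta, the distortion is bounded, which is what keeps such schemes imperceptible while surviving moderate attacks.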
Men, Kuo; Quan, Hong; Yang, Peipei; Cao, Ting; Li, Weihao
2010-04-01
The frequency-domain magnetic resonance spectroscopy (MRS) is achieved by the Fast Fourier Transform (FFT) of the time-domain signals. Usually we are only interested in the portion lying in a frequency band of the whole spectrum. A method based on the singular value decomposition (SVD) and frequency-selection is presented in this article. The method quantifies the spectrum lying in the interested frequency band and reduces the interference of the parts lying out of the band in a computationally efficient way. Comparative experiments with the standard time-domain SVD method indicate that the method introduced in this article is accurate and timesaving in practical situations.
Use of multiple singular value decompositions to analyze complex intracellular calcium ion signals
Martinez, Josue G.
2009-12-01
We compare calcium ion (Ca(2+)) signaling between two exposures; the data are present as movies or, more prosaically, time series of images. This paper describes novel uses of singular value decompositions (SVD) and weighted versions of them (WSVD) to extract the signals from such movies, in a way that is semi-automatic and tuned closely to the actual data and their many complexities. These complexities include the following. First, the images themselves are of no interest: all interest focuses on the behavior of individual cells across time, so the cells need to be segmented in an automated manner. Second, the cells themselves have 100+ pixels, forming 100+ curves measured over time, so data compression is required to extract the features of these curves. Third, some of the pixels in some of the cells are subject to image saturation due to bit depth limits, and this saturation needs to be accounted for if one is to normalize the images in a reasonably unbiased manner. Finally, the Ca(2+) signals have oscillations or waves that vary with time, and these signals need to be extracted. Thus, our aim is to show how to use multiple weighted and standard singular value decompositions to detect, extract and clarify the Ca(2+) signals. Our signal extraction methods then lead to simple although finely focused statistical methods to compare Ca(2+) signals across experimental conditions.
Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials
2017-04-06
Technical report by J. B. Fahnline, R. L. Campbell, S. A. Hambric, and M. R..., covering October 2004 - April 2017.
Stoica, Petre; Sandgren, Niclas; Selén, Yngve; Vanhamme, Leentje; Van Huffel, Sabine
2003-11-01
In several applications of NMR spectroscopy the user is interested only in the components lying in a small frequency band of the spectrum. A frequency selective analysis deals precisely with this kind of NMR spectroscopy: parameter estimation of only those spectroscopic components that lie in a preselected frequency band of the NMR data spectrum, with as little interference as possible from the out-of-band components and in a computationally efficient way. In this paper we introduce a frequency-domain singular value decomposition (SVD)-based method for frequency selective spectroscopy that is computationally simple, statistically accurate, and which has a firm theoretical basis. To illustrate the good performance of the proposed method we present a number of numerical examples for both simulated and in vitro NMR data.
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeler from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors, of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter combinations corresponding to the chosen eigenvectors are multiplied to obtain the pilot point values. The model can thus be transformed from having many pilot point parameters to having a few super parameters that can be estimated by nonlinear regression on the basis of the available observations. (This ...
Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen
2017-12-27
Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD, and the posterior probability of each marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP
Large-scale genomic prediction using singular value decomposition of the genotype matrix.
Ødegård, Jørgen; Indahl, Ulf; Strandén, Ismo; Meuwissen, Theo H E
2018-02-28
For marker effect models and genomic animal models, computational requirements increase with the number of loci and the number of genotyped individuals, respectively. In the latter case, the inverse genomic relationship matrix (GRM) is typically needed, which is demanding to compute for large datasets. Thus, there is a great need for dimensionality-reduction methods that can analyze massive genomic data. For this purpose, we developed reduced-dimension singular value decomposition (SVD) based models for genomic prediction. Fast SVD is performed by analyzing different chromosomes/genome segments in parallel and/or by restricting SVD to a limited core of genotyped individuals, producing chromosome- or segment-specific principal components (PC). Given a limited effective population size, nearly all the genetic variation can be effectively captured by a limited number of PC. Genomic prediction can then be performed either by PC ridge regression (PCRR) or by genomic animal models using an inverse GRM computed from the chosen PC (PCIG). In the latter case, computation of the inverse GRM will be feasible for any number of genotyped individuals and can be readily produced row- or element-wise. Using simulated data, we show that PCRR and PCIG models, using chromosome-wise SVD of a core sample of individuals, are appropriate for genomic prediction in a larger population, and result in virtually identical predicted breeding values to those of the original full-dimension genomic model (r = 1.000). Compared with other algorithms (e.g. the algorithm for proven and young animals, APY), the (chromosome-wise SVD-based) PCRR and PCIG models were more robust to the size of the core sample, giving nearly identical results even down to 500 core individuals. The method was also successfully tested on a large multi-breed dataset. SVD can be used for dimensionality reduction of large genomic datasets. After SVD, genomic prediction using dense genomic data and many genotyped individuals
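The PCRR step can be sketched for a single "chromosome" as follows: SVD of a core sample of genotypes, retention of PCs capturing most of the variation, ridge regression on the PC scores, and projection of new individuals onto the same PCs. This is a hedged toy illustration under simulated genotypes; the variance threshold, `lam`, and all variable names are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_core, n_new, m = 300, 100, 1000
X = rng.standard_normal((n_core + n_new, m))     # centred genotype codes (toy)
beta = rng.standard_normal(m) * 0.05
y_core = X[:n_core] @ beta + rng.standard_normal(n_core)

# SVD of the core genotype matrix; keep PCs explaining ~99% of the variation
U, s, Vt = np.linalg.svd(X[:n_core], full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1
T_core = U[:, :k] * s[:k]                        # PC scores of core animals
T_new = X[n_core:] @ Vt[:k].T                    # project new animals on same PCs

lam = 10.0                                       # ridge parameter (illustrative)
alpha = np.linalg.solve(T_core.T @ T_core + lam * np.eye(k), T_core.T @ y_core)
gebv_new = T_new @ alpha                         # predicted breeding values
```

With a real, limited effective population size the retained `k` would be far smaller than the marker count, which is where the computational saving comes from.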
Application of generalized singular value decomposition to ionospheric tomography
Directory of Open Access Journals (Sweden)
K. Bhuyan
2004-11-01
Full Text Available The electron density distribution of the low- and mid-latitude ionosphere has been investigated by the computerized tomography technique using a Generalized Singular Value Decomposition (GSVD based algorithm. Model ionospheric total electron content (TEC data obtained from the International Reference Ionosphere 2001 and slant relative TEC data measured at a chain of three stations receiving transit satellite transmissions in Alaska, USA are used in this analysis. The issue of optimum efficiency of the GSVD algorithm in the reconstruction of ionospheric structures is being addressed through simulation of the equatorial ionization anomaly (EIA, in addition to its application to investigate complicated ionospheric density irregularities. Results show that the Generalized Cross Validation approach to find the regularization parameter and the corresponding solution gives a very good reconstructed image of the low-latitude ionosphere and the EIA within it. Provided that some minimum norm is fulfilled, the GSVD solution is found to be least affected by considerations, such as pixel size and number of ray paths. The method has also been used to investigate the behaviour of the mid-latitude ionosphere under magnetically quiet and disturbed conditions.
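The regularization idea behind such reconstructions can be shown with plain truncated SVD, the simpler relative of the GSVD used in the paper (the GSVD additionally involves a regularization operator). This is a toy sketch, not the paper's algorithm: the "projection matrix" and truncation level are ours.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Regularized solution of the ill-posed system A x ≈ b, keeping only the
    k largest singular triplets and discarding noise-amplifying small ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# toy 'tomography': a smooth 1-D density observed through noisy ray sums
rng = np.random.default_rng(2)
x_true = np.exp(-0.5 * ((np.arange(50) - 25) / 6.0) ** 2)
A = rng.random((80, 50))                   # ill-conditioned projection matrix
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_k = tsvd_solve(A, b, k=10)
```

Choosing `k` (or, in the paper, the regularization parameter via Generalized Cross Validation) trades resolution against noise amplification.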
Robust regularized singular value decomposition with application to mortality data
Zhang, Lingsong
2013-09-01
We develop a robust regularized singular value decomposition (RobRSVD) method for analyzing two-way functional data. The research is motivated by the application of modeling human mortality as a smooth two-way function of age group and year. The RobRSVD is formulated as a penalized loss minimization problem where a robust loss function is used to measure the reconstruction error of a low-rank matrix approximation of the data, and an appropriately defined two-way roughness penalty function is used to ensure smoothness along each of the two functional domains. By viewing the minimization problem as two conditional regularized robust regressions, we develop a fast iterative reweighted least squares algorithm to implement the method. Our implementation naturally incorporates missing values. Furthermore, our formulation allows rigorous derivation of leave-one-row/column-out cross-validation and generalized cross-validation criteria, which enable computationally efficient data-driven penalty parameter selection. The advantages of the new robust method over nonrobust ones are shown via extensive simulation studies and the mortality rate application. © Institute of Mathematical Statistics, 2013.
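The iteratively reweighted least squares idea can be sketched for a single rank-1 component: alternate weighted regressions for the left and right vectors, with Huber-type weights that downweight large residuals. This omits the paper's roughness penalties and missing-value handling; `delta`, the toy data, and the outlier pattern are our assumptions.

```python
import numpy as np

def robust_rank1(X, delta=1.0, n_iter=50):
    """Rank-1 fit u v' by Huber-weighted alternating least squares (an IRLS
    scheme in the spirit of RobRSVD, without its smoothness penalties)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    u, v = U[:, 0] * s[0], Vt[0]                       # classical SVD start
    for _ in range(n_iter):
        R = X - np.outer(u, v)
        W = np.minimum(1.0, delta / np.maximum(np.abs(R), 1e-12))  # Huber weights
        u = np.sum(W * X * v, axis=1) / np.maximum(np.sum(W * v**2, axis=1), 1e-12)
        v = np.sum(W * X * u[:, None], axis=0) / np.maximum(
            np.sum(W * u[:, None]**2, axis=0), 1e-12)
    return u, v

rng = np.random.default_rng(9)
a, b = rng.standard_normal(40), rng.standard_normal(30)
X_clean = np.outer(a, b)
X = X_clean.copy()
X[rng.integers(0, 40, 10), rng.integers(0, 30, 10)] += 10.0  # gross outliers
u, v = robust_rank1(X)
```

The gross outliers receive weights near `delta/10`, so the recovered rank-1 term stays close to the uncontaminated surface, unlike a plain SVD fit.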
Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.
2018-01-01
In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, the EMD is performed on the output of SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e. one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attack, specific attack, and brute-force attack. Simulation results are presented in support of the proposed idea.
Bidirectional texture function image super-resolution using singular value decomposition.
Dong, Wei; Shen, Hui-Liang; Pan, Zhi-Wei; Xin, John H
2017-04-01
The bidirectional texture function (BTF) is widely employed to achieve realistic digital reproduction of real-world material appearance. In practice, a BTF measurement device usually does not use high-resolution (HR) cameras in data collection, considering the high equipment cost and huge data space required. The limited image resolution consequently leads to the loss of texture details in BTF data. This paper proposes a fast BTF image super-resolution (SR) algorithm to deal with this issue. The algorithm uses singular value decomposition (SVD) to separate the collected low-resolution (LR) BTF data into intrinsic textures and eigen-apparent bidirectional reflectance distribution functions (eigen-ABRDFs) and then improves the resolution of the intrinsic textures via image SR. The HR BTFs can be finally obtained by fusing the reconstructed HR intrinsic textures with the LR eigen-ABRDFs. Experimental results show that the proposed algorithm outperforms the state-of-the-art single-image SR algorithms in terms of reconstruction accuracy. In addition, thanks to the employment of SVD, the proposed algorithm is computationally efficient and robust to noise corruption.
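The SVD separation step above can be mimicked on synthetic data: stacking one image per view/light direction as columns of a matrix, the SVD splits it into spatial components and per-view coefficients, loosely mirroring the intrinsic-texture / eigen-ABRDF split (the super-resolution stage itself is not reproduced here; the toy sizes and noise level are our choices).

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_views, rank = 400, 60, 5
spatial = rng.standard_normal((n_pix, rank))      # "intrinsic textures"
coeffs = rng.standard_normal((rank, n_views))     # "eigen-ABRDF" weights
B = spatial @ coeffs + 0.01 * rng.standard_normal((n_pix, n_views))

U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = 5                                             # number of components kept
B_k = (U[:, :k] * s[:k]) @ Vt[:k]                 # rank-k reconstruction
rel_err = np.linalg.norm(B - B_k) / np.linalg.norm(B)
```

Only the `k` spatial components (columns of `U`) need to be super-resolved, which is why the SVD split makes the overall algorithm fast.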
Multi-Label Classification by Semi-Supervised Singular Value Decomposition.
Jing, Liping; Shen, Chenyang; Yang, Liu; Yu, Jian; Ng, Michael K
2017-10-01
Multi-label problems arise in various domains, including automatic multimedia data categorization, and have generated significant interest in the computer vision and machine learning communities. However, existing methods do not adequately address two key challenges: exploiting correlations between labels, and compensating for scarce or missing labelled data. In this paper, we propose a semi-supervised singular value decomposition (SVD) to handle these two challenges. The proposed model takes advantage of nuclear norm regularization on the SVD to effectively capture the label correlations. Meanwhile, it introduces manifold regularization on the mapping to capture the intrinsic structure among data, which reduces the amount of labelled data required while improving classification performance. Furthermore, we designed an efficient algorithm to solve the proposed model based on the alternating direction method of multipliers, so it can efficiently handle large-scale data sets. Experimental results on synthetic and real-world multimedia data sets demonstrate that the proposed method exploits the label correlations and obtains better label prediction results than the state-of-the-art methods.
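Nuclear-norm regularization of the kind used above is typically handled with singular value thresholding, the proximal operator of the nuclear norm and a standard ingredient of ADMM solvers. A minimal sketch (the function name and test matrix are ours; the full semi-supervised model is not reproduced):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau * ||X||_*,
    shrinking every singular value by tau and zeroing the small ones."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt
```

Applied inside an ADMM loop, this operator is what drives the iterate toward a low-rank matrix, encoding the assumption that label assignments are correlated.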
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-01
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and/or parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by
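CSS and PCC as described above can be computed from any (weight-scaled) Jacobian of simulated values with respect to the parameters. A minimal sketch, assuming unit weights and a generic Jacobian rather than PEST's model-run machinery; note how two nearly collinear parameters produce an absolute PCC near 1.00:

```python
import numpy as np

def css_pcc(J):
    """Composite scaled sensitivities (one per parameter) and parameter
    correlation coefficients from a scaled Jacobian J (n_obs x n_params)."""
    n_obs = J.shape[0]
    css = np.sqrt(np.sum(J**2, axis=0) / n_obs)
    C = np.linalg.inv(J.T @ J)            # parameter covariance, up to s^2
    d = np.sqrt(np.diag(C))
    pcc = C / np.outer(d, d)
    return css, pcc

rng = np.random.default_rng(10)
a = rng.standard_normal(100)
# columns 0 and 1 are nearly collinear; column 2 is independent
J = np.column_stack([a, a + 0.01 * rng.standard_normal(100), rng.standard_normal(100)])
css, pcc = css_pcc(J)
```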
Yang, Honggang; Lin, Huibin; Ding, Kang
2018-05-01
The performance of sparse feature extraction by the commonly used K-Singular Value Decomposition (K-SVD) method depends largely on the signal segment selected in rolling bearing diagnosis; moreover, computation is relatively slow and the dictionary becomes highly redundant when the fault signal is long. A new sliding window denoising K-SVD (SWD-KSVD) method is proposed, which uses only one small segment of the time-domain signal containing impacts to perform sliding window dictionary learning, and selects an optimal pattern carrying the oscillating information of the rolling bearing fault according to a maximum variance principle. An inner product operation between the optimal pattern and the whole fault signal is performed to enhance the signature of the impacts' occurrence moments. Lastly, the signal is reconstructed at the peak points of the inner product to realize the extraction of the rolling bearing fault features. Both simulation and experiments verify that the method extracts the fault features effectively.
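The inner-product enhancement step can be illustrated on a synthetic bearing-fault signal: correlating a short impact-bearing pattern with the whole signal produces sharp peaks at the impact moments. In this hedged sketch the pattern is simply cut from the signal rather than learned by sliding-window K-SVD, and the fault period, ringing frequency, and noise level are arbitrary choices.

```python
import numpy as np

fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
impulses = np.zeros(t.size)
impulses[::100] = 1.0                                   # one impact every 0.1 s
ringing = np.exp(-200 * t[:50]) * np.sin(2 * np.pi * 120 * t[:50])
rng = np.random.default_rng(11)
signal = np.convolve(impulses, ringing)[: t.size] + 0.1 * rng.standard_normal(t.size)

pattern = signal[:50]                                   # one impact-bearing segment
score = np.correlate(signal, pattern, mode="same")      # inner product at each shift
```

The peaks of `score` mark the impact occurrence moments at which the signal would then be reconstructed.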
The Analysis of Two-Way Functional Data Using Two-Way Regularized Singular Value Decompositions
Huang, Jianhua Z.
2009-12-01
Two-way functional data consist of a data matrix whose row and column domains are both structured, for example, temporally or spatially, as when the data are time series collected at different locations in space. We extend one-way functional principal component analysis (PCA) to two-way functional data by introducing regularization of both left and right singular vectors in the singular value decomposition (SVD) of the data matrix. We focus on a penalization approach and solve the nontrivial problem of constructing proper two-way penalties from one-way regression penalties. We introduce conditional cross-validated smoothing parameter selection whereby left-singular vectors are cross-validated conditional on right-singular vectors, and vice versa. The concept can be realized as part of an alternating optimization algorithm. In addition to the penalization approach, we briefly consider two-way regularization with basis expansion. The proposed methods are illustrated with one simulated and two real data examples. Supplemental materials available online show that several "natural" approaches to penalized SVDs are flawed and explain why. © 2009 American Statistical Association.
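The alternating optimization can be sketched for one smooth rank-1 component: fixing one singular vector, the other solves a ridge-type system with a second-difference roughness penalty, and the two updates alternate. This is a simplified illustration (fixed penalty weights, no cross-validated parameter selection); the toy sine/cosine surface and `lam_u`, `lam_v` values are ours.

```python
import numpy as np

def second_diff(n):
    """Second-difference matrix; its quadratic form penalizes roughness."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def regularized_rank1(X, lam_u=1.0, lam_v=1.0, n_iter=50):
    """One smooth rank-1 term by alternating penalized regressions:
    minimize ||X - u v'||^2 + lam_u u'D'Du + lam_v v'D'Dv."""
    n, m = X.shape
    Pu = second_diff(n).T @ second_diff(n)
    Pv = second_diff(m).T @ second_diff(m)
    v = np.linalg.svd(X, full_matrices=False)[2][0]    # unpenalized start
    u = np.zeros(n)
    for _ in range(n_iter):
        u = np.linalg.solve((v @ v) * np.eye(n) + lam_u * Pu, X @ v)
        v = np.linalg.solve((u @ u) * np.eye(m) + lam_v * Pv, X.T @ u)
    return u, v

rng = np.random.default_rng(12)
tu = np.sin(np.linspace(0, np.pi, 60))
tv = np.cos(np.linspace(0, 2 * np.pi, 40))
X = np.outer(tu, tv) + 0.1 * rng.standard_normal((60, 40))
u, v = regularized_rank1(X)
```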
Iris identification system based on Fourier coefficients and singular value decomposition
Somnugpong, Sawet; Phimoltares, Suphakant; Maneeroj, Saranya
2011-12-01
Nowadays, both personal identification and classification are very important. To identify a person for security applications, physical or behavioral characteristics of high uniqueness may be analyzed; biometrics has become the most widely used approach to personal identification, and many types of biometric information are currently in use. In this work, the iris is considered because of its uniqueness and collectability. A common problem of iris recognition systems is the limited space available to store data in a variety of environments. This work proposes an iris recognition system with a small feature vector, reducing the space complexity. Each iris is represented in the frequency domain and classified with a neural network model. First, the Fast Fourier Transform (FFT) is used to compute the discrete Fourier coefficients of the iris data. Once the iris data are transformed into a frequency-domain matrix, Singular Value Decomposition (SVD) is used to reduce the complex matrix to a single vector. These vectors are then input to neural networks for the classification step. The merit of this technique is that the feature vector is smaller than that of other existing techniques, while maintaining an acceptable level of accuracy.
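The FFT-then-SVD reduction described above compresses an H×W image into min(H, W) values. A minimal sketch of that feature extraction (the classifier stage is omitted; the image here is random, standing in for a segmented iris):

```python
import numpy as np

def iris_feature_vector(img):
    """Compact feature vector: singular values of the 2-D Fourier magnitude.
    Reduces an HxW image to min(H, W) numbers instead of H*W pixels."""
    F = np.abs(np.fft.fft2(img))
    return np.linalg.svd(F, compute_uv=False)

rng = np.random.default_rng(5)
img = rng.random((64, 64))      # stand-in for a preprocessed iris image
f = iris_feature_vector(img)
```

The resulting vector is sorted in descending order, so an even shorter feature vector can be obtained by keeping only the leading values.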
Directory of Open Access Journals (Sweden)
Vahid Faghih Dinevari
2016-01-01
Full Text Available Wireless capsule endoscopy (WCE) is a new noninvasive instrument which allows direct observation of the gastrointestinal tract to diagnose its related diseases. Because of the large number of images obtained from capsule endoscopy per patient, physicians need considerable time to review them all, so a system that detects diseases automatically would be worthwhile. In this paper, a new method is presented for automatic detection of tumors in WCE images. The method utilizes the advantages of the discrete wavelet transform (DWT) and singular value decomposition (SVD) algorithms to extract features from different color channels of the WCE images. The extracted features are therefore invariant to rotation and can describe multiresolution characteristics of the WCE images. To classify the WCE images, the support vector machine (SVM) method is applied to a data set of 400 normal and 400 tumor WCE images. The experimental results show proper performance of the proposed algorithm for detection and isolation of tumor images, achieving at best 94% sensitivity, 93% specificity, and 93.5% accuracy in the RGB color space.
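The DWT+SVD feature extraction can be sketched per color channel. As a hedged stand-in for a proper wavelet library, a one-level Haar transform is hand-rolled here (it assumes even image dimensions); the number of retained singular values `k` and the random test image are our choices, and the SVM stage is omitted.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar-style transform (a minimal stand-in for the DWT)."""
    a = (x[0::2] + x[1::2]) / 2.0         # row-pair averages
    d = (x[0::2] - x[1::2]) / 2.0         # row-pair differences
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2.0
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2.0
    return np.hstack([a2, d2])

def wce_features(img_rgb, k=8):
    """k leading singular values of each Haar-transformed color channel."""
    feats = []
    for c in range(img_rgb.shape[2]):
        s = np.linalg.svd(haar2d(img_rgb[:, :, c]), compute_uv=False)
        feats.extend(s[:k])
    return np.array(feats)

img = np.random.default_rng(13).random((32, 32, 3))   # stand-in WCE frame
feats = wce_features(img)
```

Because singular values are invariant to orthogonal transformations of the rows and columns, features built this way are insensitive to image rotation, as the abstract notes.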
International Nuclear Information System (INIS)
Kulisek, Jonathan A.; Anderson, Kevin K.; Casella, Andrew M.; Gesh, Christopher J.; Warren, Glen A.
2013-01-01
This study investigates the use of a Lead Slowing-Down Spectrometer (LSDS) for the direct and independent measurement of fissile isotopes in light-water nuclear reactor fuel assemblies. The current study applies MCNPX, a Monte Carlo radiation transport code, to simulate the measurement of the assay of the used nuclear fuel assemblies in the LSDS. An empirical model has been developed based on the calibration of the LSDS to responses generated from the simulated assay of six well-characterized fuel assemblies. The effects of self-shielding are taken into account by using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the self-shielding functions from the assay of assemblies in the calibration set. The performance of the empirical algorithm was tested on version 1 of the Next-Generation Safeguards Initiative (NGSI) used fuel library consisting of 64 assemblies, as well as a set of 27 diversion assemblies, both of which were developed by Los Alamos National Laboratory. The potential for direct and independent assay of the sum of the masses of Pu-239 and Pu-241 to within 2%, on average, has been demonstrated
Biton, Yaacov; Rabinovitch, Avinoam; Braunstein, Doron; Aviram, Ira; Campbell, Katherine; Mironov, Sergey; Herron, Todd; Jalife, José; Berenfeld, Omer
2018-01-01
Cardiac fibrillation is a major clinical and societal burden. Rotors may drive fibrillation in many cases, but their role and patterns are often masked by complex propagation. We used Singular Value Decomposition (SVD), which ranks patterns of activation hierarchically, together with Wiener-Granger causality analysis (WGCA), which analyses direction of information among observations, to investigate the role of rotors in cardiac fibrillation. We hypothesized that combining SVD analysis with WGCA should reveal whether rotor activity is the dominant driving force of fibrillation even in cases of high complexity. Optical mapping experiments were conducted in neonatal rat cardiomyocyte monolayers (diameter, 35 mm), which were genetically modified to overexpress the delayed rectifier K+ channel IKr only in one half of the monolayer. Such monolayers have been shown previously to sustain fast rotors confined to the IKr overexpressing half and driving fibrillatory-like activity in the other half. SVD analysis of the optical mapping movies revealed a hierarchical pattern in which the primary modes corresponded to rotor activity in the IKr overexpressing region and the secondary modes corresponded to fibrillatory activity elsewhere. We then applied WGCA to evaluate the directionality of influence between modes in the entire monolayer using clear and noisy movies of activity. We demonstrated that the rotor modes influence the secondary fibrillatory modes, but influence was detected also in the opposite direction. To more specifically delineate the role of the rotor in fibrillation, we decomposed separately the respective SVD modes of the rotor and fibrillatory domains. In this case, WGCA yielded more information from the rotor to the fibrillatory domains than in the opposite direction. In conclusion, SVD analysis reveals that rotors can be the dominant modes of an experimental model of fibrillation. Wiener-Granger causality on modes of the rotor domains confirms their
Analysis and modelling of septic shock microarray data using Singular Value Decomposition.
Allanki, Srinivas; Dixit, Madhulika; Thangaraj, Paul; Sinha, Nandan Kumar
2017-06-01
Being a high-throughput technique, microarrays have generated enormous amounts of data, and there is a need for more efficient techniques of analysis, in terms of speed and accuracy. Finding the differentially expressed genes based on just fold change and p-value might not extract all the vital biological signals that occur at a lower gene expression level. Besides this, numerous mathematical models have been developed to predict the clinical outcome from microarray data, while very few, if any, aim to predict the vital genes that are important in a disease progression. Such models help a basic researcher narrow down and concentrate on a promising set of genes, which leads to the discovery of gene-based therapies. In this article, as a first objective, we have used the lesser-known Singular Value Decomposition (SVD) technique to build a microarray data analysis tool that works with gene expression patterns and the intrinsic structure of the data in an unsupervised manner. We have re-analysed microarray data over the clinical course of septic shock from Cazalis et al. (2014) and have shown that our proposed analysis provides additional information compared to the conventional method. As a second objective, we developed a novel mathematical model that predicts a set of vital genes in the disease progression; it works by generating samples in the continuum between health and disease, using a simple normal-distribution-based random number generator. We also verify that most of the predicted genes are indeed related to septic shock. Copyright © 2017 Elsevier Inc. All rights reserved.
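The unsupervised SVD analysis of an expression matrix can be sketched as follows: each right singular vector is an "eigengene" (an expression pattern across samples), and the squared singular values give the fraction of variance each pattern explains. The simulated data below (genes following one linear time course plus noise) is our illustration, not the Cazalis et al. dataset.

```python
import numpy as np

rng = np.random.default_rng(6)
n_genes, n_samples = 500, 12
# each gene follows a common time course with its own loading, plus noise
base = rng.standard_normal((n_genes, 1)) @ np.linspace(-1, 1, n_samples)[None, :]
X = base + 0.3 * rng.standard_normal((n_genes, n_samples))
X -= X.mean(axis=1, keepdims=True)                 # center each gene

U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)                # fraction per mode
top_pattern = Vt[0]                                # dominant eigengene
```

Genes with large loadings on a dominant eigengene (large entries of `U[:, 0]`) are candidates for biological relevance even when their individual fold changes are modest.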
Huang, Kuan-Ju; Shih, Wei-Yeh; Chang, Jui Chung; Feng, Chih Wei; Fang, Wai-Chi
2013-01-01
This paper presents a pipelined VLSI design of a fast singular value decomposition (SVD) processor for a real-time electroencephalography (EEG) system based on on-line recursive independent component analysis (ORICA). Since SVD is used frequently in computations of the real-time EEG system, a low-latency and high-accuracy SVD processor is essential. During the EEG system process, the proposed SVD processor aims to solve the diagonal, inverse and inverse-square-root matrices of the target matrices in real time. Generally, SVD requires a huge amount of computation in hardware implementation. Therefore, this work proposes a novel design concept for data-flow updating to assist the pipelined VLSI implementation. The SVD processor can greatly improve the feasibility of real-time EEG system applications such as brain-computer interfaces (BCIs). The proposed architecture is implemented using TSMC 90 nm CMOS technology. The raw EEG sample rate is 128 Hz. The core size of the SVD processor is 580 × 580 μm², and the operating frequency is 20 MHz. It consumes 0.774 mW of power per execution of the 8-channel EEG system.
Application of reiteration of Hankel singular value decomposition in quality control
Staniszewski, Michał; Skorupa, Agnieszka; Boguszewicz, Łukasz; Michalczuk, Agnieszka; Wereszczyński, Kamil; Wicher, Magdalena; Konopka, Marek; Sokół, Maria; Polański, Andrzej
2017-07-01
Medical centres are obliged to store past medical records, including the results of quality assurance (QA) tests of the medical equipment, which is especially useful in checking the reproducibility of medical devices and procedures. Analysis of multivariate time series is an important part of quality control of NMR data. In this work we propose an anomaly detection tool based on the Reiteration of Hankel Singular Value Decomposition method. The method was compared with external software, and comparable results were obtained.
DEFF Research Database (Denmark)
Haldrup, Kristoffer
2014-01-01
The development of new X-ray light sources, XFELs, with unprecedented time and brilliance characteristics has led to the availability of very large datasets with high time resolution and superior signal strength. The chaotic nature of the emission processes in such sources as well as entirely nov...... on singular-value decomposition of no-signal subsets of acquired datasets in combination with model inputs and appears generally applicable to time-resolved X-ray diffuse scattering experiments....
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is shuffled and divided into blocks of the same size, and the singular value decomposition is then applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of each block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust against several image-processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
Romppanen, Sari; Häkkänen, Heikki; Kaski, Saara
2017-01-01
Laser-induced breakdown spectroscopy (LIBS) has been used in analysis of rare earth element (REE) ores from the geological formation of Norra Kärr Alkaline Complex in southern Sweden. Yttrium has been detected in eudialyte (Na15Ca6(Fe,Mn)3Zr3Si(Si25O73)(O,OH,H2O)3(OH,Cl)2) and catapleiite (Ca/Na2ZrSi3O9·2H2O). Singular value decomposition (SVD) has been employed in classification of the minerals in the rock samples and maps representing the mineralogy in the sampled area have been construc...
Nuclear power plant sensor fault detection using singular value
Indian Academy of Sciences (India)
The validation process consists of two steps: (i) residual generation and (ii) fault detection by residual evaluation. Singular value decomposition (SVD) and Euclidean distance (ED) methods are used to generate the residual and evaluate the fault on the residual space, respectively. This paper claims that SVD-based fault ...
Quantum singular-value decomposition of nonsparse low-rank matrices
Rebentrost, Patrick; Steffens, Adrian; Marvian, Iman; Lloyd, Seth
2018-01-01
We present a method to exponentiate nonsparse indefinite low-rank matrices on a quantum computer. Given access to the elements of the matrix, our method allows one to determine the singular values and their associated singular vectors in time exponentially faster in the dimension of the matrix than known classical algorithms. The method extends to non-Hermitian and nonsquare matrices via matrix embedding. Moreover, our method preserves the phase relations between the singular spaces allowing for efficient algorithms that require operating on the entire singular-value decomposition of a matrix. As an example of such an algorithm, we discuss the Procrustes problem of finding a closest isometry to a given matrix.
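The Procrustes problem mentioned above has a classical SVD solution that is easy to state: the isometry closest in Frobenius norm to a matrix A = U S V' is W = U V'. A minimal classical sketch (the quantum algorithm itself is of course not reproduced here; the test matrix is arbitrary):

```python
import numpy as np

def closest_isometry(A):
    """Orthogonal Procrustes: the isometry W (W'W = I) minimizing ||W - A||_F
    is W = U V' from the SVD A = U S V'."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3))
W = closest_isometry(A)
```

The solution uses only the singular vectors, discarding the singular values, which is why algorithms for this problem need access to the full singular-value decomposition rather than just the spectrum.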
International Nuclear Information System (INIS)
Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo
2007-01-01
To address the non-stationary characteristics of differential pressure fluctuation signals of gas-liquid two-phase flow, and the slow convergence and liability to local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and Least Squares Support Vector Machine (LS-SVM) is presented. First, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Function (IMF) components, from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrixes, the singular values are obtained. Finally, the singular values serve as the flow regime characteristic vector input to the LS-SVM classifier, and flow regimes are identified by the output of the classifier. The identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)
Singh, Phool; Yadav, A. K.; Singh, Kehar; Saini, Indu
2017-01-01
A new scheme for image encryption is proposed, using fractional Hartley transform followed by Arnold transform and singular value decomposition in the frequency domain. As the plaintext is an amplitude image, the mask used in the spatial domain is a random phase mask (RPM). The proposed scheme has been validated for grayscale images and is sensitive to the encryption parameters such as the order of the Arnold transform and the fractional orders of the Hartley transform. We have also evaluated the scheme's resistance to the well-known noise and occlusion attacks.
Singh, Phool; Yadav, A. K.; Singh, Kehar
2017-04-01
A novel scheme for image encryption of phase images is proposed, using fractional Hartley transform followed by Arnold transform and singular value decomposition in the frequency domain. Since the plaintext is a phase image, the mask used in the spatial domain is a random amplitude mask. The proposed scheme has been validated for grayscale images and is sensitive to the encryption parameters such as the order of the Arnold transform and the fractional orders of the Hartley transform. We have also evaluated the scheme's resistance to the well-known noise and occlusion attacks.
International Nuclear Information System (INIS)
Erba, M.; Mattioli, M.; Segui, J.L.
1997-10-01
This paper addresses the problem of removing sawtooth oscillations from multichannel plasma data in a self-consistent way, thereby preserving transients that have a different physical origin. The technique which does this is called the Generalized Singular Value Decomposition (GSVD), and its properties are discussed. Using the GSVD, we analyze spatially resolved electron temperature measurements from the Tore Supra tokamak, made in transient regimes that are perturbed either by the laser blow-off injection of impurities or by pellet injection. Non-local transport issues are briefly discussed. (author)
On low-rank updates to the singular value and Tucker decompositions
Energy Technology Data Exchange (ETDEWEB)
O'Hara, M J
2009-10-06
The singular value decomposition is widely used in signal processing and data mining. Since the data often arrives in a stream, the problem of updating matrix decompositions under low-rank modification has been widely studied. Brand developed a technique in 2006 that has many advantages. However, the technique does not directly approximate the updated matrix, but rather its previous low-rank approximation added to the new update, which needs justification. Further, the technique is still too slow for large information processing problems. We show that the technique minimizes the change in error per update, so if the error is small initially it remains small. We show that an updating algorithm for large sparse matrices should be sub-linear in the matrix dimension in order to be practical for large problems, and demonstrate a simple modification to the original technique that meets the requirements.
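A thin-SVD update of the kind the abstract analyzes can be sketched as follows. This is a Brand-style append-a-column update (an illustration of the class of methods discussed, not the paper's exact algorithm): the new column is split into a part inside the current column space and an orthogonal residual, and a small core matrix is re-decomposed.

```python
import numpy as np

# Brand-style thin-SVD update: given the thin SVD A = U diag(s) Vt,
# compute the SVD of [A, c] without redecomposing A from scratch.
def svd_append_column(U, s, Vt, c):
    p = U.T @ c                     # part of c inside the current column space
    r = c - U @ p                   # residual orthogonal to that space
    rho = np.linalg.norm(r)
    j = r / rho
    k = len(s)
    K = np.zeros((k + 1, k + 1))    # small core matrix; its SVD gives the update
    K[:k, :k] = np.diag(s)
    K[:k, k] = p
    K[k, k] = rho
    Uk, s_new, Vtk = np.linalg.svd(K)
    U_new = np.hstack([U, j[:, None]]) @ Uk
    W = np.zeros((k + 1, Vt.shape[1] + 1))
    W[:k, :-1] = Vt
    W[k, -1] = 1.0
    return U_new, s_new, Vtk @ W

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
c = rng.standard_normal(6)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U2, s2, Vt2 = svd_append_column(U, s, Vt, c)
assert np.allclose(U2 @ np.diag(s2) @ Vt2, np.hstack([A, c[:, None]]))
```

The cost is dominated by the SVD of the small (k+1)-square core matrix, which is what makes updating attractive when the retained rank k is far below the matrix dimension.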
Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A
2015-09-21
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and
Emge, Darren K.; Adalı, Tülay
2014-06-01
As the availability and use of imaging methodologies continue to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differ, implying that the observation lengths of the modalities can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Through simulations, we show the inter-modal relationships across the different modalities revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law-enforcement and security regimes.
International Nuclear Information System (INIS)
Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.
2017-01-01
Highlights: • A novel approach to classify the fault pattern using data-driven methods. • Application of a robust reconstruction method (SVD) to identify the faulty sensor. • Analysing fault patterns for many sensors using SDF with low time complexity. • An efficient data-driven model is designed to reduce false and missed alarms. - Abstract: A mathematical model with two layers is developed using data-driven methods for thermocouple sensor fault detection and classification in Nuclear Power Plants (NPP). The Singular Value Decomposition (SVD) based method is applied to detect the faulty sensor from a data set of all sensors, at the first layer. In the second layer, the Symbolic Dynamic Filter (SDF) is employed to classify the fault pattern. If SVD detects any false fault, it is re-evaluated by the SDF, i.e., the model has two layers of checking to balance the false alarms. The proposed fault detection and classification method is compared with Principal Component Analysis. Two case studies are taken from the Fast Breeder Test Reactor (FBTR) to prove the efficiency of the proposed method.
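The SVD-based detection layer can be illustrated with a toy sketch (not the authors' exact model): a low-rank subspace of normal sensor behaviour is learned from healthy data, and a sample is flagged when its residual outside that subspace is large.

```python
import numpy as np

# Toy SVD-based sensor fault detection: 6 correlated "sensors" whose normal
# behaviour lies in a rank-2 subspace; a biased sensor produces a large residual.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
normal = np.column_stack([np.sin(t + k) for k in range(6)])
normal += 0.01 * rng.standard_normal(normal.shape)

U, s, Vt = np.linalg.svd(normal, full_matrices=False)
r = 2                                  # rank of the normal-behaviour subspace
P = Vt[:r].T @ Vt[:r]                  # projector onto that subspace

def residual(x):
    return np.linalg.norm(x - P @ x)   # distance from the normal subspace

healthy = np.array([np.sin(2.0 + k) for k in range(6)])
faulty = healthy.copy()
faulty[3] += 5.0                       # bias fault on sensor 3
assert residual(faulty) > residual(healthy)
```

A threshold on this residual yields the first-layer alarm; the paper then passes candidates to the SDF layer to suppress false alarms.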
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and avoids modification of the external sound field by the control sources, by approximating the sources as monopole and radial dipole transducers. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and to separate the effect of control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
Compressive spectral image super-resolution by using singular value decomposition
Marquez, M.; Mejia, Y.; Arguello, Henry
2017-12-01
Compressive sensing (CS) has been recently applied to the acquisition and reconstruction of spectral images (SI). This field is known as compressive spectral imaging (CSI). The attainable resolution of SI depends on the sensor characteristics, whose cost increases in proportion to the resolution. Super-resolution (SR) approaches are usually applied to low-resolution (LR) CSI systems to improve the quality of the reconstructions by solving two consecutive optimization problems. In contrast, this work aims at reconstructing a high resolution (HR) SI from LR compressive measurements by solving a single convex optimization problem based on the fusion of CS and SR techniques. Furthermore, the truncated singular value decomposition is used to alleviate the computational complexity of the inverse reconstruction problem. The proposed method is tested by using the coded aperture snapshot spectral imager (CASSI), and the results are compared to HR-SI images directly reconstructed from LR-SI images by using an SR algorithm via sparse representation. In particular, a gain of up to 1.5 dB of PSNR is attained with the proposed method.
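The truncated SVD used above to tame the inverse problem has a simple optimality property (Eckart-Young): the rank-k truncation is the best rank-k approximation in the least-squares sense, with error equal to the energy of the discarded singular values. A short NumPy sketch:

```python
import numpy as np

# Best rank-k approximation via the truncated SVD (Eckart-Young theorem).
def truncated_svd(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))
s_all = np.linalg.svd(A, compute_uv=False)
for k in (5, 10, 20):
    err = np.linalg.norm(A - truncated_svd(A, k))      # Frobenius norm
    # the error equals the l2 norm of the discarded singular values
    assert np.isclose(err, np.sqrt((s_all[k:] ** 2).sum()))
```

Dropping small singular values both regularizes the reconstruction and reduces the size of the system to be inverted, which is the computational benefit exploited in the paper.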
Leblond, Frederic; Tichauer, Kenneth M; Pogue, Brian W
2010-11-29
The spatial resolution and recovered contrast of images reconstructed from diffuse fluorescence tomography data are limited by the high scattering properties of light propagation in biological tissue. As a result, the image reconstruction process can be exceedingly vulnerable to inaccurate prior knowledge of tissue optical properties and stochastic noise. In light of these limitations, the optimal source-detector geometry for a fluorescence tomography system is non-trivial, requiring analytical methods to guide design. Analysis of the singular value decomposition of the matrix to be inverted for image reconstruction is one potential approach, providing key quantitative metrics, such as singular image mode spatial resolution and singular data mode frequency as a function of singular mode. In the present study, these metrics are used to analyze the effects of different sources of noise and model errors as related to image quality in the form of spatial resolution and contrast recovery. The image quality is demonstrated to be inherently noise-limited even when detection geometries were increased in complexity to allow maximal tissue sampling, suggesting that detection noise characteristics outweigh detection geometry for achieving optimal reconstructions.
Nuclear power plant sensor fault detection using singular value ...
Indian Academy of Sciences (India)
In this paper, a method is proposed to detect and identify any degradation of sensor performance. The validation process consists of two steps: (i) residual generation and (ii) fault detection by residual evaluation. Singular value decomposition (SVD) and Euclidean distance (ED) methods are used to generate the residual ...
Construction Method of Regularization by Singular Value Decomposition of Design Matrix
Directory of Open Access Journals (Sweden)
LIN Dongfang
2016-08-01
Full Text Available Tikhonov regularization introduces a regularization parameter and a stabilizing functional to mitigate ill-conditioning. When the stabilizing functional is expressed as a two-norm constraint, the regularization method is equivalent to ridge estimation. Analysis of the variance and bias of the ridge estimate shows that ridge estimation improves the ill-conditioning but introduces additional bias, lowering the reliability of the estimate. We find that correcting the larger singular values cannot decrease the variance effectively and introduces more bias, whereas correcting the smaller singular values decreases the variance effectively. We therefore choose the eigenvectors associated with the smaller singular values to construct the regularization matrix. This adjusts the correction of the singular values, decreases both variance and bias, and finally yields a more reliable estimate.
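The connection between ridge estimation and the SVD of the design matrix can be made explicit with filter factors: each singular component is damped by sigma_i/(sigma_i^2 + lambda), so the small singular values responsible for the variance blow-up are suppressed. A NumPy sketch of this standard identity:

```python
import numpy as np

# Ridge/Tikhonov solution written through the SVD of the design matrix A:
# x = sum_i  sigma_i / (sigma_i^2 + lam) * (u_i . b) * v_i
def ridge_via_svd(A, b, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s ** 2 + lam)            # filter factor per singular value
    return Vt.T @ (f * (U.T @ b))

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
x = ridge_via_svd(A, b, 0.5)
# identical to the normal-equations form (A^T A + lam I)^{-1} A^T b
x_ref = np.linalg.solve(A.T @ A + 0.5 * np.eye(10), A.T @ b)
assert np.allclose(x, x_ref)
```

The paper's construction goes further by building the regularization matrix from eigenvectors of the small singular values only, so that large singular components are left uncorrected.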
Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an
2017-09-01
High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region of the captured video are reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is initiated to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
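The reshape-then-project pipeline described above can be sketched on synthetic data (all names and the signal model are illustrative): frames modulated by a hidden "sound" signal are stacked as columns, the SVD supplies the orthonormal image bases, and projecting each frame onto the dominant basis recovers the modulation.

```python
import numpy as np

# Synthetic video: a fixed spatial pattern modulated by a sound-like signal.
rng = np.random.default_rng(3)
n_frames, h, w = 200, 8, 8
t = np.arange(n_frames)
signal = np.sin(2 * np.pi * 0.05 * t)            # hidden modulation
pattern = rng.standard_normal((h, w))            # spatial pattern of the motion
frames = signal[:, None, None] * pattern + 0.1 * rng.standard_normal((n_frames, h, w))

X = frames.reshape(n_frames, -1).T               # pixels x frames matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
oib = U[:, 0]                                    # dominant orthonormal image basis
recovered = oib @ X                              # one projection value per frame

# the projection tracks the hidden signal (up to sign and scale)
corr = np.corrcoef(recovered, signal)[0, 1]
assert abs(corr) > 0.95
```

In the paper the OIBs are learned once from an initial segment and subsequent subimages are projected online, which is what makes the approach fast.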
Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering
Directory of Open Access Journals (Sweden)
Oliynyk Andriy
2012-08-01
Full Text Available Abstract Background Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Results Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or null human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of the resulting clusters independently of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from premotor cortex of Macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is
Automatic online spike sorting with singular value decomposition and fuzzy C-mean clustering.
Oliynyk, Andriy; Bonifazzi, Claudio; Montani, Fernando; Fadiga, Luciano
2012-08-08
Understanding how neurons contribute to perception, motor functions and cognition requires the reliable detection of spiking activity of individual neurons during a number of different experimental conditions. An important problem in computational neuroscience is thus to develop algorithms to automatically detect and sort the spiking activity of individual neurons from extracellular recordings. While many algorithms for spike sorting exist, the problem of accurate and fast online sorting still remains a challenging issue. Here we present a novel software tool, called FSPS (Fuzzy SPike Sorting), which is designed to optimize: (i) fast and accurate detection, (ii) offline sorting and (iii) online classification of neuronal spikes with very limited or null human intervention. The method is based on a combination of Singular Value Decomposition for fast and highly accurate pre-processing of spike shapes, unsupervised Fuzzy C-mean, high-resolution alignment of extracted spike waveforms, optimal selection of the number of features to retain, automatic identification of the number of clusters, and quantitative quality assessment of the resulting clusters independently of their size. After being trained on a short testing data stream, the method can reliably perform supervised online classification and monitoring of single neuron activity. The generalized procedure has been implemented in our FSPS spike sorting software (available free for non-commercial academic applications at the address: http://www.spikesorting.com) using LabVIEW (National Instruments, USA). We evaluated the performance of our algorithm both on benchmark simulated datasets with different levels of background noise and on real extracellular recordings from premotor cortex of Macaque monkeys. The results of these tests showed an excellent accuracy in discriminating low-amplitude and overlapping spikes under strong background noise. The performance of our method is competitive with respect to other robust spike
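The SVD pre-processing step used by FSPS can be illustrated on synthetic waveforms (templates and noise levels are invented for the sketch): spike waveforms are projected onto the leading right singular vectors, yielding a low-dimensional feature space in which clustering (fuzzy C-means in the paper) separates the units.

```python
import numpy as np

# Two synthetic spike templates plus noise; SVD features separate the units.
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 32)
shape_a = np.exp(-((t - 0.3) ** 2) / 0.01)       # template of unit A
shape_b = -np.exp(-((t - 0.5) ** 2) / 0.02)      # template of unit B
waves = np.vstack([shape_a + 0.05 * rng.standard_normal(32) for _ in range(50)] +
                  [shape_b + 0.05 * rng.standard_normal(32) for _ in range(50)])

# SVD of the mean-subtracted waveform matrix gives the principal directions
U, s, Vt = np.linalg.svd(waves - waves.mean(axis=0), full_matrices=False)
features = waves @ Vt[:2].T                      # 2-D feature vector per spike

# the two units separate cleanly along the first feature axis
a, b = features[:50, 0], features[50:, 0]
assert (a.mean() - b.mean()) ** 2 > 4 * (a.var() + b.var())
```

Retaining only a few singular components is what makes the subsequent clustering and online classification fast.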
Multifractal singular value decomposition (MSVD) for extraction of marine gravity anomaly
LYU, Wenchao; Zhu, Benduo; Qiu, Yan
2015-04-01
The concept of singularity is used to characterize different types of nonlinear natural processes, including volcanic eruptions, faults, cloud formation, landslides, rainfall, hurricanes, flooding, earthquakes, wildfires, oil fields and mineralization. Singularity often results in anomalous amounts of energy release or material accumulation within a narrow spatial-temporal interval. The marine gravitational field has multifractal features, which show different scale-invariant properties in the regional and local fields. The SVD can be used in geophysical data processing for signal and noise separation, in radar processing for enhancing weak signals in vertical seismic profiles (VSP), in multicomponent seismic polarization filters, and for evaluating the amount of wavy reflections in ground-penetrating radar (GPR) images of base surge deposits. With the SVD, a matrix X can be decomposed into a series of eigenvalues. These eigenvalues conform to a fractal or multifractal distribution described by a power-law function. The multifractal SVD can be used for feature extraction and anomaly identification in marine gravity investigation. This paper aims to analyze marine gravitation data using the SVD and multifractal methods. It also aims to define more clearly the spatial relationship between marine mineralization and the deep geological structures by extracting the marine gravitation information at a particular frequency, providing valuable in-depth evidence for predicting new deposits and deep tectonics.
International Nuclear Information System (INIS)
Xu Li-Qing; Hu Li-Qun; Li Er-Zhong; Chen Kai-Yun; Liu Zhi-Yuan; Chen Ye-Bin; Zhang Ji-Zong; Zhou Rui-Jie; Yang Mao; Mao Song-Tao; Duan Yan-Min
2012-01-01
In this paper, the singular value decomposition (SVD) method is applied as a filter before the tomographic inversion of soft-X-ray emission. Series of 'filtered' signals including specific chronos and topos are obtained. (Here, chronos and topos are the decomposed temporal vectors and the decomposed spatial vectors, respectively.) Given a specific magnetic flux function with coupled m = 1 and m = 2 modes, the line-integrated soft-X-ray signals at all chords have been obtained. The m = 1 and m = 2 modes have then been identified by tomography of simulated 'filtered' signals extracted by the SVD method. Finally, using the experimental line-integrated soft-X-ray signals, a dominant m = 2 mode of complex magnetohydrodynamic (MHD) activity during internal soft disruption is observed. This result demonstrates that the m = 2 mode plays an important role in internal disruption (here, m is the poloidal mode number). (physics of gases, plasmas, and electric discharges)
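The chronos/topos decomposition and the SVD filtering step can be sketched on synthetic chord data (mode shapes and frequencies are invented for illustration): for a space-time matrix X of chords by time samples, the SVD X = sum_k s_k topos_k chronos_k^T separates spatial profiles (left singular vectors, topos) from their time traces (right singular vectors, chronos), and truncation acts as the filter.

```python
import numpy as np

# Synthetic chord signals: two space-time modes plus noise.
rng = np.random.default_rng(5)
n_chords, n_time = 20, 400
x = np.linspace(-1, 1, n_chords)
t = np.arange(n_time)
topo1, chrono1 = np.cos(np.pi * x / 2), np.sin(2 * np.pi * 0.01 * t)
topo2, chrono2 = np.sin(np.pi * x), np.sin(2 * np.pi * 0.03 * t)
X = 5 * np.outer(topo1, chrono1) + np.outer(topo2, chrono2)
X += 0.05 * rng.standard_normal(X.shape)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
# keeping the two dominant singular triplets is the 'SVD filter'
X_filt = U[:, :2] @ np.diag(s[:2]) @ Vt[:2]
assert np.linalg.norm(X - X_filt) < 0.1 * np.linalg.norm(X)
```

Tomographic inversion is then run on the filtered signals, so that weak coherent modes are not swamped by uncorrelated noise.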
Implicit-shifted Symmetric QR Singular Value Decomposition of 3x3 Matrices
2016-04-01
due to the loss of information from constructing AᵀA explicitly. For the tests we performed, the implicit QR SVD is the fastest in single precision and comparable...
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
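The randomization idea the abstract applies to the generalized SVD can be illustrated with a minimal randomized SVD in the Halko style (this is the plain-SVD version, shown only to convey the dimension-reduction principle): the range of A is sketched with a random test matrix, and a small factored problem is decomposed instead of the full one.

```python
import numpy as np

# Minimal randomized SVD: sketch the range of A, then decompose a small matrix.
def randomized_svd(A, k, oversample=10, rng=None):
    rng = rng or np.random.default_rng()
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for the range of A
    B = Q.T @ A                             # small (k+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(6)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))   # rank-5 matrix
U, s, Vt = randomized_svd(A, 5, rng=rng)
assert np.allclose(U @ np.diag(s) @ Vt, A)   # exact for an exactly low-rank matrix
```

For a matrix of exact rank k the sketch captures the range almost surely, so the reconstruction is exact; for noisy matrices the oversampling parameter controls the approximation quality.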
Statistical analysis of effective singular values in matrix rank determination
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using the SVD (singular value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance tests. Threshold bounds for perturbations due to finite-precision and i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons with other previously known approaches are given.
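The thresholding problem can be sketched numerically (the threshold below is a common random-matrix heuristic, not the paper's exact confidence bound): singular values below a noise-dependent level are treated as perturbations of zero, and the count of those above it is the effective rank.

```python
import numpy as np

# Effective-rank estimation: count singular values above a noise-level threshold.
def effective_rank(A, sigma_noise):
    m, n = A.shape
    s = np.linalg.svd(A, compute_uv=False)
    # noise singular values concentrate below sigma * (sqrt(m) + sqrt(n))
    tau = sigma_noise * (np.sqrt(m) + np.sqrt(n))
    return int((s > tau).sum())

rng = np.random.default_rng(7)
A = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))   # true rank 3
noisy = A + 0.01 * rng.standard_normal((100, 80))
assert effective_rank(noisy, 0.01) == 3
```

The paper refines this picture by deriving statistically justified confidence regions, so that the cut-off depends on the chosen significance level rather than a fixed heuristic.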
Directory of Open Access Journals (Sweden)
Jinyu Lu
2014-01-01
Full Text Available A novel iris biometric watermarking scheme is proposed, focusing on iris recognition instead of the traditional watermark, to increase the security of digital products. The iris image is preprocessed first, generating the iris biometric template from a person's eye images. The templates are then subjected to the discrete cosine transform (DCT), and the DCT coefficients are encoded with BCH error-control coding. The host image is correspondingly divided into four equal areas. The BCH codes are embedded in the singular values of each area's DCT coefficients. Numerical results reveal that the proposed method can extract the watermark effectively and illustrate its security and robustness.
Analysis of local ionospheric time varying characteristics with singular value decomposition
DEFF Research Database (Denmark)
Jakobsen, Jakob Anders; Knudsen, Per; Jensen, Anna B. O.
2010-01-01
in Denmark located in the midlatitude region. The station separation between the three stations is 132–208 km (the time series of the TEC can be freely downloaded at http://www.heisesgade.dk). For each year, a SVD has been performed on the TEC time series in order to identify the three time varying (daily...... filter processing making it more robust, but can also be used as starting values in the initialization phase in case of gaps in the data stream. Furthermore, the models can be used to detect variations from the normal local ionospheric activity....
Khoshbin, Fatemeh; Bonakdari, Hossein; Hamed Ashraf Talesh, Seyed; Ebtehaj, Isa; Zaji, Amir Hossein; Azimi, Hamed
2016-06-01
In the present article, the adaptive neuro-fuzzy inference system (ANFIS) is employed to model the discharge coefficient in rectangular sharp-crested side weirs. The genetic algorithm (GA) is used for the optimum selection of membership functions, while the singular value decomposition (SVD) method helps in computing the linear parameters of the ANFIS results section (GA/SVD-ANFIS). The effect of each dimensionless parameter on discharge coefficient prediction is examined in five different models to conduct sensitivity analysis by applying the above-mentioned dimensionless parameters. Two different sets of experimental data are utilized to examine the models and obtain the best model. The study results indicate that the model designed through GA/SVD-ANFIS predicts the discharge coefficient with a good level of accuracy (mean absolute percentage error = 3.362 and root mean square error = 0.027). Moreover, comparing this method with existing equations and the multi-layer perceptron-artificial neural network (MLP-ANN) indicates that the GA/SVD-ANFIS method has superior performance in simulating the discharge coefficient of side weirs.
Directory of Open Access Journals (Sweden)
Haichao Cai
2015-01-01
Full Text Available When detecting ultrasonic flaws in thick-walled pipes, the flaw echo signals are often contaminated by the scanning-system frequency and background noise. In particular, when the thick-walled pipe defect is small, the echo signal amplitude is often drowned in noise, which affects the extraction of the defect signal and the accuracy of position determination. This paper presents a modified S-transform domain singular value decomposition method for the analysis of ultrasonic flaw echo signals. The scale rule of the Gaussian window function in the S-transform is changed to improve the time-frequency resolution. Singular value decomposition is then applied to the time-frequency matrix obtained from the S-transform to determine the singular entropy of the effective echo signal and realize adaptive filtering. Experiments show that this method removes not only high-frequency but also low-frequency noise, improving the signal-to-noise ratio of the echo signal.
Directory of Open Access Journals (Sweden)
Patricia López-Rodríguez
2014-12-01
Full Text Available Radar high-resolution range profiles are widely used by the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on the angle between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and to evaluate the performance, simulated range profiles are used as the recognition database, while the test samples comprise actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities in shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising.
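The angle-between-subspaces criterion has a standard SVD formulation: if Qa and Qb are orthonormal bases of two subspaces, the singular values of QaᵀQb are the cosines of the principal angles. A minimal sketch:

```python
import numpy as np

# Principal angles between the column spaces of A and B via the SVD.
def principal_angles(A, B):
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cos = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.arccos(cos)

e1 = np.array([[1.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0]])
diag = np.array([[1.0], [1.0], [0.0]]) / np.sqrt(2)
assert np.isclose(principal_angles(e1, e2)[0], np.pi / 2)   # orthogonal lines
assert np.isclose(principal_angles(e1, diag)[0], np.pi / 4) # 45-degree lines
```

A test profile is then assigned to the aircraft whose stored subspace makes the smallest principal angle with it.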
SVD Based Image Processing Applications: State of The Art, Contributions and Research Challenges
Sadek, Rowayda A.
2012-01-01
Singular Value Decomposition (SVD) has recently emerged as a new paradigm for processing different types of images. SVD is an attractive algebraic transform for image processing applications. The paper proposes an experimental survey of the SVD as an efficient transform in image processing applications. Despite the well-known fact that SVD offers attractive properties in imaging, the exploration of its properties in various image applications is currently in its infancy. Since the SVD ha...
Jiang, Jiaqi; Gu, Rongbao
2016-04-01
This paper generalizes the method of traditional singular value decomposition entropy by incorporating the order q of the Rényi entropy. We analyze the predictive power of the entropy based on the trajectory matrix using Shanghai Composite Index (SCI) and Dow Jones Index (DJI) data in both a static test and a dynamic test. In the static test on the SCI, the global Granger causality tests all turn out to be significant regardless of the order selected. But the entropy fails to show much predictive power in the American stock market. In the dynamic test, we find that the predictive power can be significantly improved for the SCI by our generalized method, but not for the DJI. This suggests that noise and errors affect the SCI more frequently than the DJI. Results obtained using different lengths of the sliding window also corroborate this finding.
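The entropy in question can be sketched as follows (the normalization of the singular spectrum and the window length are illustrative choices, not necessarily the paper's): a trajectory matrix of lagged windows is built from the series, its singular values are normalized to a distribution, and the Rényi entropy of order q is computed, recovering the Shannon form as q approaches 1.

```python
import numpy as np

# SVD entropy of a time series via its trajectory (Hankel) matrix,
# with a Renyi-order generalization (q -> 1 gives the Shannon form).
def svd_entropy(series, window, q=1.0):
    X = np.array([series[i:i + window] for i in range(len(series) - window + 1)])
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()                        # normalized singular spectrum
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -(p * np.log(p)).sum()      # Shannon limit
    return np.log((p ** q).sum()) / (1.0 - q)

rng = np.random.default_rng(8)
noise = rng.standard_normal(500)
sine = np.sin(0.1 * np.arange(500))
# a structured signal concentrates the singular spectrum, lowering the entropy
assert svd_entropy(sine, 20) < svd_entropy(noise, 20)
```

Low entropy indicates a concentrated singular spectrum, i.e. predictable structure, which is the quantity tracked over the sliding window in the dynamic test.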
International Nuclear Information System (INIS)
Yang Jia; Ge Liangquan; Xiong Shengqing
2010-01-01
From the features of the spectral shape of Chang'e-1 gamma-ray spectrometer (CE1-GRS) data, it is difficult to determine elemental compositions on the lunar surface. To address this problem, this paper proposes using the noise-adjusted singular value decomposition (NASVD) method to extract orthogonal spectral components from CE1-GRS data. The peak signals in the spectra of the lower-order layers corresponding to the observed spectrum of each lunar region are then analyzed. Elemental compositions of each lunar region can be determined based on whether the energy corresponding to each peak signal equals the energy of the characteristic gamma-ray line emissions of specific elements. The result shows that a number of elements such as U, Th, K, Fe, Ti, Si, O, Al, Mg, Ca and Na are qualitatively determined by this method. (authors)
Wang, Qingzhu; Chen, Xiaoming; Zhu, Yihai
2017-09-01
Existing image compression and encryption methods have several shortcomings: they have low reconstruction accuracy and are unsuitable for three-dimensional (3D) images. To overcome these limitations, this paper proposes a tensor-based approach adopting tensor compressive sensing and the tensor discrete fractional random transform (TDFRT). The source video images are measured by three key-controlled sensing matrices. Subsequently, the resulting tensor image is further encrypted using a 3D cat map and the proposed TDFRT, which is based on higher-order singular value decomposition. A multiway projection algorithm is designed to reconstruct the video images. The proposed algorithm can greatly reduce the data volume and improve the efficiency of data transmission and key distribution. The simulation results validate the good compression performance, efficiency, and security of the proposed algorithm.
Das, Siddhartha; Siopsis, George; Weedbrook, Christian
2018-02-01
With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers under certain assumptions regarding distribution of data and availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method of nonsparse low rank matrices and forms an important subroutine in our Gaussian process regression algorithm.
Sun, Qi; Fu, Shujun
2017-09-20
Fringe orientation is an important feature of fringe patterns and has a wide range of applications, such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noises, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computation of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns, which demonstrate that the RSVD produces the best estimation results while requiring relatively less time.
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Rao, M.M.M.; SuryaPrakash, S.; Chandramouli, P.; Murthy, K.S.R.
An integrated, user-friendly, interactive Ocean Application Package has been developed utilizing the well-known statistical technique called Singular Value Decomposition (SVD) to achieve image and data compression in the MATLAB environment...
Fahnline, John B.
2003-10-01
In many acoustic design problems, it would be useful to be able to compute fluid-coupled resonance frequencies, mode shapes, and their associated damping levels. Unfortunately, conventional eigenvalue solution procedures are either computationally inefficient, unreliable, or limited in applicability. Sophisticated methods for identifying modal parameters using the singular value decomposition have recently emerged in the area of experimental modal analysis, where the available data typically consist of velocity-to-force transfer function data as a function of frequency for several drive point locations. Here, these techniques are shown to be even more effective for coupled finite element/boundary element solutions because full matrices of transfer function data can be computed as a function of frequency. This allows the modes to be completely separated from each other, such that the modal parameters can be identified using simple methods designed for single degree of freedom systems. Several benchmark example problems are solved numerically, including a baffled circular plate, an unbaffled rectangular plate, and a spring-mounted piston coupled to fluid within a rigid-walled pipe.
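The way singular values of the transfer-function matrix separate modes can be sketched on a toy two-degree-of-freedom system (the frequencies, damping, and mode shapes below are hypothetical; this is the complex-mode-indicator-function style of analysis, not the paper's full coupled FE/BE procedure):

```python
import numpy as np

w = np.linspace(0.1, 3.0, 600)               # frequency grid (rad/s)
wn = np.array([1.0, 2.0])                    # hypothetical natural frequencies
zeta = 0.02                                  # modal damping ratio
phi = np.array([[1.0, 1.0], [1.0, -1.0]])    # hypothetical mode shapes (columns)

# Full 2x2 matrix of transfer functions at every frequency:
# H(w) = sum_r phi_r phi_r^T / (wn_r^2 - w^2 + 2 i zeta wn_r w)
H = np.zeros((w.size, 2, 2), dtype=complex)
for r in range(2):
    den = wn[r] ** 2 - w ** 2 + 2j * zeta * wn[r] * w
    H += np.outer(phi[:, r], phi[:, r]) / den[:, None, None]

# The first singular value across frequency peaks at each resonance.
s1 = np.linalg.svd(H, compute_uv=False)[:, 0]
mask = (s1[1:-1] > s1[:-2]) & (s1[1:-1] > s1[2:])
peaks = w[np.where(mask)[0] + 1]
```

With well-separated, lightly damped modes, `peaks` recovers both natural frequencies, which is what allows each mode to be fitted with single-degree-of-freedom methods.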
Directory of Open Access Journals (Sweden)
Arif Fadllullah
2016-02-01
Ant-based document clustering is a clustering method that measures text document similarity based on the shortest path between nodes (trial phase) and determines the optimal clusters from the sequence of document similarities (dividing phase). The trial phase of ant algorithms takes a very long time to build document vectors because of the high-dimensional Document-Term Matrix (DTM). In this paper, we propose a document clustering method that optimizes dimension reduction using Singular Value Decomposition-Principal Component Analysis (SVDPCA) and ant algorithms. SVDPCA reduces the size of the DTM by converting the term frequencies of the conventional DTM to principal-component scores in a Document-PC Matrix (DPCM). The ant algorithms then cluster documents using the vector space model on the dimension-reduced DPCM. Experimental results on 506 news documents in Indonesian demonstrated that the proposed method worked well, optimizing dimension reduction by up to 99.7%. We could efficiently speed up the execution time of the trial phase while maintaining clustering quality; the best F-measure achieved in the experiments was 0.88 (88%).
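A sketch of the SVD/PCA dimension-reduction step alone (the ant-based clustering itself is omitted; the matrix sizes and Poisson term counts are illustrative assumptions, not the paper's corpus):

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical Document-Term Matrix: 100 documents, 1000 terms.
dtm = rng.poisson(0.3, size=(100, 1000)).astype(float)

# Center the term columns, then project documents onto the leading
# principal components obtained from the SVD of the centered matrix.
Xc = dtm - dtm.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
dpcm = U[:, :k] * s[:k]        # Document-PC Matrix: 100 x 10 PC scores

# Fraction of variance retained by the k leading components.
retained = float(np.sum(s[:k] ** 2) / np.sum(s ** 2))
```

The 1000-dimensional term vectors shrink to 10 principal-component scores per document, which is the input the clustering phase would operate on.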
Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T
2004-12-01
The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance and to compare with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method, as compared with previous methods, include its efficiency of computation, its ability to achieve adequate regularization to reproduce less noisy solutions, and the fact that it does not require prior knowledge of the noise condition. The proposed method is applied to actual patient study cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.
Image Denoising Using Singular Value Difference in the Wavelet Domain
Directory of Open Access Journals (Sweden)
Min Wang
2018-01-01
The singular value (SV) difference is the difference between the singular values of a noisy image and those of the original image; it varies regularly with noise intensity. This paper proposes an image denoising method using the singular value difference in the wavelet domain. First, the SV difference model is generated for different noise variances in the three directions of the wavelet transform, and the noise variance of a new image is estimated from the diagonal part. Next, the single-level discrete 2-D wavelet transform is used to decompose each noisy image into its low-frequency and high-frequency parts. Then, singular value decomposition (SVD) is used to obtain the SVs of the three high-frequency parts. Finally, the three denoised high-frequency parts are reconstructed by SVD from the SV difference, and the final denoised image is obtained using the inverse wavelet transform. Experiments show the effectiveness of this method compared with relevant existing methods.
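The premise that the SV difference varies regularly with noise intensity can be checked with a small numpy experiment (the "image" below is a synthetic stand-in, and the wavelet decomposition is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 64)
img = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))  # stand-in image
s_clean = np.linalg.svd(img, compute_uv=False)

# The SV difference grows regularly with the noise level, which is what
# allows it to be modeled as a function of the noise variance.
diffs = []
for sigma in (0.05, 0.10, 0.20):
    noisy = img + sigma * rng.standard_normal(img.shape)
    s_noisy = np.linalg.svd(noisy, compute_uv=False)
    diffs.append(float(np.linalg.norm(s_noisy - s_clean)))
```

Doubling the noise standard deviation roughly doubles the SV difference, so a model fitted at known noise levels can be inverted for a new image.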
Classification of subsurface objects using singular values derived from signal frames
Chambers, David H; Paglieroni, David W
2014-05-06
The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N×N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
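A minimal numpy sketch of this feature-extraction pipeline (the transceiver count, sample length, and choice of sub-band bin are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N, samples = 8, 128     # hypothetical transceiver count and samples per signal

# Real-valued return signals for all N x N transmitter-receiver pairs.
returns = rng.standard_normal((N, N, samples))

# Transform each return into a complex-valued spectrum (e.g., via an FFT).
spectra = np.fft.rfft(returns, axis=-1)

# Form the N x N complex matrix for one user-designated sub-band sample and
# take its singular values as the feature vector for that sub-band.
M = spectra[:, :, 12]
features = np.linalg.svd(M, compute_uv=False)
```

The singular values are invariant to unitary mixing across transmit/receive channels, which is what makes them attractive as sub-band features.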
Yuan, Rui; Lv, Yong; Song, Gangbing
2018-04-16
Rolling bearings are important components in rotary machinery systems. In multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. However, the adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also adopted to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode mixing problems in multi-fault frequency extraction. By aligning the IMF sets into a third-order tensor, high-order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is then used to determine the effective IMFs, from which the characteristic frequencies of the multiple faults can be extracted. Numerical simulations and an application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals.
Directory of Open Access Journals (Sweden)
Morris Brian J
2011-05-01
Background: The quantification of experimentally-induced alterations in biological pathways remains a major challenge in systems biology. One example of this is the quantitative characterization of alterations in defined, established metabolic pathways from complex metabolomic data. At present, the disruption of a given metabolic pathway is inferred from metabolomic data by observing an alteration in the level of one or more individual metabolites present within that pathway. Not only is this approach open to subjectivity, as metabolites participate in multiple pathways, but it also ignores useful information available through the pairwise correlations between metabolites. This extra information may be incorporated using a higher-level approach that looks for alterations between a pair of correlation networks. In this way experimentally-induced alterations in metabolic pathways can be quantitatively defined by characterizing group differences in metabolite clustering. Taking this approach increases the objectivity of interpreting alterations in metabolic pathways from metabolomic data. Results: We present and justify a new technique for comparing pairs of networks; in our case these networks are based on the same set of nodes and there are two distinct types of weighted edges. The algorithm is based on the Generalized Singular Value Decomposition (GSVD), which may be regarded as an extension of Principal Component Analysis to the case of two data sets. We show how the GSVD can be interpreted as a technique for reordering the two networks in order to reveal clusters that are exclusive to only one. Here we apply this algorithm to a new set of metabolomic data from the prefrontal cortex (PFC) of a translational model relevant to schizophrenia, rats treated subchronically with the N-methyl-D-aspartic acid (NMDA) receptor antagonist phencyclidine (PCP). This provides us with a means to quantify which predefined metabolic pathways (Kyoto
Czech Academy of Sciences Publication Activity Database
Vohradský, Jiří; Branny, Pavel; Thompson, CH. J.
2007-01-01
Roč. 7, č. 21 (2007), s. 3866-3853 ISSN 1615-9853 R&D Projects: GA ČR GA310/07/1009; GA ČR GA310/04/0804 Grant - others:XE(XE) EC Integrated Project ActinoGEN, LSHM-CT-2004-005224 Institutional research plan: CEZ:AV0Z50200510 Source of funding: R - rámcový projekt EK Keywords : development * gene expression profiling * singular value decomposition Subject RIV: EE - Microbiology, Virology Impact factor: 5.479, year: 2007
A High Performance QDWH-SVD Solver using Hardware Accelerators
Sukkari, Dalal E.
2015-04-08
This paper describes a new high performance implementation of the QR-based Dynamically Weighted Halley Singular Value Decomposition (QDWH-SVD) solver on multicore architecture enhanced with multiple GPUs. The standard QDWH-SVD algorithm was introduced by Nakatsukasa and Higham (SIAM SISC, 2013) and combines three successive computational stages: (1) the polar decomposition calculation of the original matrix using the QDWH algorithm, (2) the symmetric eigendecomposition of the resulting polar factor to obtain the singular values and the right singular vectors and (3) the matrix-matrix multiplication to get the associated left singular vectors. A comprehensive test suite highlights the numerical robustness of the QDWH-SVD solver. Although it performs up to two times more flops when computing all singular vectors compared to the standard SVD solver algorithm, our new high performance implementation on a single GPU results in up to 3.8x improvements for asymptotic matrix sizes, compared to the equivalent routines from existing state-of-the-art open-source and commercial libraries. However, when only singular values are needed, QDWH-SVD is penalized by performing up to 14 times more flops. The singular-value-only implementation of QDWH-SVD on a single GPU can still run up to 18% faster than the best existing equivalent routines. Integrating mixed precision techniques in the solver can additionally provide up to 40% improvement at the price of losing a few digits of accuracy, compared to full double precision floating point arithmetic. We further leverage the single GPU QDWH-SVD implementation by introducing the first multi-GPU SVD solver to study the scalability of the QDWH-SVD framework.
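The three-stage structure (polar decomposition, symmetric eigendecomposition, matrix multiply) can be sketched with a simplified, fixed-weight Halley iteration in place of the dynamically weighted, QR-based polar stage; this is not the QDWH algorithm itself, only the polar-then-eigendecomposition pattern it builds on:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((6, 4))

# Stage 1: polar decomposition A = W H via a fixed-weight Halley iteration
# (QDWH uses dynamically chosen weights and QR-based steps instead).
X = A / np.linalg.norm(A, 2)           # scale so all singular values are <= 1
I = np.eye(A.shape[1])
for _ in range(30):
    Y = X.T @ X
    X = X @ (3 * I + Y) @ np.linalg.inv(I + 3 * Y)
W = X                                   # orthonormal polar factor
H = W.T @ A                             # symmetric positive semidefinite factor

# Stage 2: eigendecomposition of H yields singular values / right vectors.
lam, V = np.linalg.eigh(H)              # ascending eigenvalues
s = lam[::-1]                           # descending singular values
Vd = V[:, ::-1]

# Stage 3: recover the left singular vectors.
U = (A @ Vd) / s
```

Each Halley step maps every singular value σ to σ(3 + σ²)/(1 + 3σ²), driving them all to 1, so X converges to the orthonormal polar factor of A.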
Directory of Open Access Journals (Sweden)
Jie Gao
2016-05-01
Singular value decomposition (SVD) is a widely used and powerful tool for signal extraction under noise. Noise attenuation relies on the selection of the effective singular values, because these values are significant features of the useful signal. Traditional methods of selecting the effective singular values (or selecting the useful components to rebuild the faulty signal) seek the maximum peak of the differential spectrum of singular values. However, owing to the small number of singular values they select, these methods lead to over-denoising. In order to obtain a more appropriate number of effective singular values, preserving the components of the original signal as much as possible, this paper uses a difference curvature spectrum of incremental singular entropy to determine the number of effective singular values. The position where the difference between two peaks in the spectrum first declines to the greatest degree is then found, and this position is regarded as the boundary between the singular values of noise and those of the useful signal. The experimental results showed that the modified method can accurately extract a non-stationary bearing fault signal under real background noise.
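For reference, the baseline criterion that the paper sets out to improve, placing the boundary at the maximum peak of the differential spectrum of singular values, can be sketched as follows (the signal, noise level, and window length are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
clean = np.sin(2 * np.pi * 13 * t)
signal = clean + 0.5 * rng.standard_normal(t.size)

# Trajectory (Hankel) matrix of the noisy signal.
L = 40
H = np.lib.stride_tricks.sliding_window_view(signal, L)    # shape (361, 40)
U, s, Vt = np.linalg.svd(H, full_matrices=False)

# Baseline criterion: the maximum peak of the differential spectrum of the
# singular values marks the boundary between signal and noise subspaces.
k = int(np.argmax(-np.diff(s))) + 1

# Rank-k reconstruction, then anti-diagonal averaging back to a 1-D signal.
Hk = U[:, :k] * s[:k] @ Vt[:k]
rec = np.zeros(signal.size)
cnt = np.zeros(signal.size)
for i in range(Hk.shape[0]):
    rec[i:i + L] += Hk[i]
    cnt[i:i + L] += 1
rec /= cnt
```

For a single sinusoid the criterion finds k = 2 (a sine occupies two singular components of its Hankel matrix); the paper's point is that on richer signals this count is often too small, removing useful components along with the noise.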
Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu
2015-06-18
Forward looking radar imaging is a practical and challenging problem for aircraft landing in adverse weather. Deconvolution methods can realize forward looking imaging, but they often lead to noise amplification in the radar image. In this paper, a forward looking radar imaging method based on deconvolution is presented for aircraft landing in adverse weather. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. Then, we convert the forward looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition (TSVD) method. The key issue of selecting the truncation parameter is addressed using the generalized cross validation approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward looking radar imaging.
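A minimal numpy sketch of TSVD regularization for a deconvolution problem (the beam matrix, noise level, and fixed truncation threshold are illustrative assumptions; the paper selects the truncation index by generalized cross validation):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = np.zeros(n)
x[[30, 60]] = 1.0                       # two hypothetical point targets

# Stand-in antenna-beam convolution matrix (a Gaussian blur).
i = np.arange(n)
beam = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 4.0) ** 2)
y = beam @ x + 1e-3 * rng.standard_normal(n)   # noisy measurement

U, s, Vt = np.linalg.svd(beam)

# Naive inversion divides by tiny singular values and amplifies the noise.
naive = Vt.T @ ((U.T @ y) / s)

# TSVD keeps only components with singular values above a cutoff.
k = int(np.sum(s > 1e-2 * s[0]))
tsvd = Vt.T[:, :k] @ ((U[:, :k].T @ y) / s[:k])

err_naive = float(np.linalg.norm(naive - x))
err_tsvd = float(np.linalg.norm(tsvd - x))
```

Dropping the small-singular-value components trades a little resolution for a large reduction in amplified noise, which is exactly the angular-resolution/noise trade the truncation parameter controls.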
Hoogestraat, D; Al-Shamery, K
2010-03-03
The observation of periodic responses after absorption of ultrashort laser pulses in condensed media and at solid interfaces is a common phenomenon in various time-resolved spectroscopic methods using laser pulses shorter than the period of the coherently excited vibrations. Normally these signals have to be separated from strong, slowly decaying backgrounds related to the creation of nonequilibrium carriers. The recording normally either covers only a short period of time or lacks the temporal resolution needed to obtain the good signal-to-noise ratio necessary for observing the vibrations. The standard method used for analyzing the data is a curve-fitting routine applied to the time-domain data. However, its disadvantage is the necessity to estimate the number of spectral components before fitting. This paper introduces the conditions under which linear prediction and singular value decomposition, in combination with an iterative nonlinear fit in the time and spectral domains, may extract an unknown number of spectral components including amplitude, lifetime, frequency and phase. Such information is essential to unambiguously evaluate the dominant optical excitation process, the phase of the initial displacement, the symmetry of the excited vibrational mode and the specific vibration generation process.
Yongqiang Liu
2003-01-01
It was suggested in a recent statistical correlation analysis that predictability of monthly-seasonal precipitation could be improved by using coupled singular value decomposition (SVD) patterns between soil moisture and precipitation instead of their values at individual locations. This study provides predictive evidence for this suggestion by comparing skills of two...
Stable computation of generalized singular values
Energy Technology Data Exchange (ETDEWEB)
Drmac, Z.; Jessup, E.R. [Univ. of Colorado, Boulder, CO (United States)
1996-12-31
We study floating-point computation of the generalized singular value decomposition (GSVD) of a general matrix pair (A, B), where A and B are real matrices with the same numbers of columns. The GSVD is a powerful analytical and computational tool. For instance, the GSVD is an implicit way to solve the generalized symmetric eigenvalue problem Kx = λMx, where K = AᵀA and M = BᵀB. Our goal is to develop stable numerical algorithms for the GSVD that are capable of computing the singular value approximations with the high relative accuracy that the perturbation theory says is possible. We assume that the singular values are well-determined by the data, i.e., that small relative perturbations δA and δB (pointwise rounding errors, for example) cause in each singular value σ of (A, B) only a small relative perturbation |δσ|/σ.
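The implicit connection to the generalized symmetric eigenvalue problem can be illustrated numerically (this sketch assumes SciPy is available; note that forming K and M explicitly squares the condition number, which is precisely the accuracy loss stable GSVD algorithms are designed to avoid):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))
B = rng.standard_normal((6, 5))    # same number of columns as A

# Generalized singular values sigma of (A, B) satisfy A'A x = sigma^2 B'B x,
# i.e. the symmetric-definite pencil (K, M) with K = A'A and M = B'B.
K, M = A.T @ A, B.T @ B
lam, X = eigh(K, M)                # ascending generalized eigenvalues
sigma = np.sqrt(lam)               # generalized singular values of (A, B)

# Ratio characterization: sigma_i = ||A x_i|| / ||B x_i|| at eigenvector x_i.
ratios = np.linalg.norm(A @ X, axis=0) / np.linalg.norm(B @ X, axis=0)
```

For each generalized eigenvector x, xᵀKx = λ xᵀMx, so ‖Ax‖/‖Bx‖ = √λ, which is the sanity check the last line performs.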
Application of SVM and SVD Technique Based on EMD to the Fault Diagnosis of the Rotating Machinery
Directory of Open Access Journals (Sweden)
Junsheng Cheng
2009-01-01
Targeting the characteristic that periodic impulses usually occur when rotating machinery exhibits local faults, and the limitations of singular value decomposition (SVD) techniques, the SVD technique based on empirical mode decomposition (EMD) is applied to fault feature extraction from rotating machinery vibration signals. The EMD method is used to decompose the vibration signal into a number of intrinsic mode functions (IMFs), from which the initial feature vector matrices can be formed automatically. By applying the SVD technique to the initial feature vector matrices, the singular values of the matrices are obtained, which can be used as the fault feature vectors for a support vector machine (SVM) classifier. Analysis results from gear and roller bearing vibration signals show that the fault diagnosis method based on EMD, SVD and SVM can extract fault features effectively and classify working conditions and fault patterns of gears and roller bearings accurately, even when the number of samples is small.
Unitary embedding for data hiding with the SVD
Bergman, Clifford; Davidson, Jennifer
2005-03-01
Steganography is the study of data hiding for the purpose of covert communication. A secret message is inserted into a cover file so that the very existence of the message is not apparent. Most current steganography algorithms insert data in the spatial or transform domains; common transforms include the discrete cosine transform, the discrete Fourier transform, and the discrete wavelet transform. In this paper, we present a data-hiding algorithm that exploits a decomposition representation of the data instead of a frequency-based transformation of the data. The decomposition transform used is the singular value decomposition (SVD). The SVD of a matrix A is a decomposition A = USVᵀ in which S is a nonnegative diagonal matrix and U and V are orthogonal matrices. We show how to use the orthogonal matrices in the SVD as a vessel in which to embed information. Several challenges had to be addressed to accomplish this, and we show that information hiding using the SVD can be just as effective as using transform-based techniques. Furthermore, different problems arise when using the SVD than when using a transform-based technique. We have applied the SVD to image data, but the technique can be formulated for other data types such as audio and video.
Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.
Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang
2016-01-01
Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized for low discriminative power in representing documents, although its representative quality has been validated. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. First, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Second, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Third, two corpora, one Chinese and one English, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods.
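The basic LSI mechanics underlying all of these variants, projecting documents into a truncated singular subspace and comparing them there, can be sketched as follows (the term-document matrix is a toy example):

```python
import numpy as np

# Tiny hypothetical term-document matrix (rows: terms, columns: documents).
# Documents 0 and 2 share vocabulary; documents 1 and 3 share a different one.
X = np.array([[2., 0., 1., 0.],
              [1., 0., 1., 0.],
              [0., 3., 0., 2.],
              [0., 1., 0., 1.]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the k-dim latent space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_same = cos(docs[0], docs[2])      # shared vocabulary -> high similarity
sim_diff = cos(docs[0], docs[1])      # disjoint vocabulary -> near zero
```

Interdocument similarity is then measured by cosine in the latent space rather than over raw term counts; SVD on clusters refines how that latent space is built per cluster.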
Zhao, Ming; Jia, Xiaodong
2017-09-01
Singular value decomposition (SVD), as an effective signal denoising tool, has been attracting considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, it is shown that the singular values mainly reflect the energy of decomposed SCs, therefore traditional SVD denoising approaches are essentially energy-based, which tend to highlight the high-energy regular components in the measured signal, while ignoring the weak feature caused by early fault. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels, rather than energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC in the reconstruction of the denoised signal. In this way, some weak but informative SCs could be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated by both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interferences.
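The truncated linear weighting mechanics can be sketched in a few lines (the per-component information scores below are random stand-ins for the paper's periodic modulation intensity index, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((20, 12))                 # stand-in signal matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Hypothetical per-component information scores in [0, 1].
scores = rng.uniform(0.0, 1.0, size=s.size)

# Truncated linear weighting: components scoring above `hi` are kept fully,
# those below `lo` are discarded, and those in between scale proportionally.
lo, hi = 0.3, 0.8
w = np.clip((scores - lo) / (hi - lo), 0.0, 1.0)

# Reweighted reconstruction from the weighted singular components.
X_rsvd = (U * (w * s)) @ Vt
```

Unlike energy-based truncation, the weights follow the information scores, so a weak but highly informative component can survive while a high-energy regular component is attenuated.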
Image fusion via nonlocal sparse K-SVD dictionary learning.
Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang
2016-03-01
Image fusion aims to merge two or more images captured via various sensors of the same scene to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering a great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion by the use of a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K times singular value decomposition that is commonly used in the literature), and abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images produced by the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
Lin, Dan; Hossack, John A.
2012-01-01
A general filtering method, called the singular value filter (SVF), is presented as a framework for principal component analysis (PCA) based filter design in medical ultrasound imaging. The SVF approach operates by projecting the original data onto a new set of bases determined from PCA using singular value decomposition (SVD). The shape of the SVF weighting function, which relates the singular value spectrum of the input data to the filtering coefficients assigned to each basis function, is designed in accordance with a signal model and statistical assumptions regarding the underlying source signals. In this paper, we applied SVF for the specific application of clutter artifact rejection in diagnostic ultrasound imaging. SVF was compared to a conventional PCA-based filtering technique, which we refer to as the blind source separation (BSS) method, as well as a simple frequency-based finite impulse response (FIR) filter used as a baseline for comparison. The performance of each filter was quantified in simulated lesion images as well as experimental cardiac ultrasound data. SVF was demonstrated, in both simulation and experimental results and over a wide range of imaging conditions, to outperform the BSS and FIR filtering methods in terms of contrast-to-noise ratio (CNR) and motion tracking performance. In experimental mouse heart data, SVF provided excellent artifact suppression with an average CNR improvement of 1.8 dB. Filtering was achieved using complex pulse-echo received data and non-binary filter coefficients. PMID:21693416
New Collaborative Filtering Algorithms Based on SVD++ and Differential Privacy
Directory of Open Access Journals (Sweden)
Zhengzheng Xian
2017-01-01
Collaborative filtering technology has been widely used in recommender systems, and its implementation is supported by the large amount of real and reliable user data from the big-data era. However, with the increase of users' information-security awareness, these data are reduced or their quality becomes worse. Singular Value Decomposition (SVD) is one of the common matrix factorization methods used in collaborative filtering; it introduces the bias information of users and items and is realized using algebraic feature extraction. SVD++, a derivative model of SVD, achieves better predictive accuracy due to the addition of implicit feedback information. Differential privacy is defined very strictly and can be proved, and it has become an effective measure for solving the problem of attackers indirectly deducing personal privacy information using background knowledge. In this paper, differential privacy is applied to the SVD++ model through three approaches: gradient perturbation, objective-function perturbation, and output perturbation. Through theoretical derivation and experimental verification, the proposed algorithms can better protect the privacy of the original data while ensuring predictive accuracy. In addition, an effective scheme is given that can measure the privacy protection strength and predictive accuracy, and a reasonable range for selection of the differential privacy parameter is provided.
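The gradient-perturbation approach can be sketched on a plain matrix factorization model (a simplification of SVD++: no bias terms or implicit feedback, and the Laplace noise scale is a hypothetical constant rather than one calibrated to a privacy budget ε and sensitivity):

```python
import numpy as np

rng = np.random.default_rng(6)
n_users, n_items, k = 30, 20, 4
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # toy ratings

P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
lr, reg = 0.01, 0.02
noise_scale = 0.1                             # hypothetical noise level

for _ in range(500):
    E = R - P @ Q.T                           # prediction error
    gP = -E @ Q + reg * P
    gQ = -E.T @ P + reg * Q
    # Gradient perturbation: Laplace noise is added to every gradient step.
    gP += rng.laplace(0.0, noise_scale, gP.shape)
    gQ += rng.laplace(0.0, noise_scale, gQ.shape)
    P -= lr * gP
    Q -= lr * gQ

rmse = float(np.sqrt(np.mean((R - P @ Q.T) ** 2)))
```

The noise injected at each step masks any single rating's influence on the learned factors; the privacy/accuracy trade-off is governed by the noise scale.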
A New Adaptive Gamma Correction Based Algorithm Using DWT-SVD for Non-Contrast CT Image Enhancement.
Kallel, Fathi; Ben Hamida, Ahmed
2017-12-01
The performances of medical image processing techniques, in particular CT scans, are usually affected by poor contrast quality introduced by some medical imaging devices. This suggests the use of contrast enhancement methods as a solution to adjust the intensity distribution of the dark image. In this paper, an advanced adaptive and simple algorithm for dark medical image enhancement is proposed. This approach is principally based on adaptive gamma correction using discrete wavelet transform with singular-value decomposition (DWT-SVD). In a first step, the technique decomposes the input medical image into four frequency sub-bands by using DWT and then estimates the singular-value matrix of the low-low (LL) sub-band image. In a second step, an enhanced LL component is generated using an adequate correction factor and inverse singular value decomposition (SVD). In a third step, for an additional improvement of LL component, obtained LL sub-band image from SVD enhancement stage is classified into two main classes (low contrast and moderate contrast classes) based on their statistical information and therefore processed using an adaptive dynamic gamma correction function. In fact, an adaptive gamma correction factor is calculated for each image according to its class. Finally, the obtained LL sub-band image undergoes inverse DWT together with the unprocessed low-high (LH), high-low (HL), and high-high (HH) sub-bands for enhanced image generation. Different types of non-contrast CT medical images are considered for performance evaluation of the proposed contrast enhancement algorithm based on adaptive gamma correction using DWT-SVD (DWT-SVD-AGC). Results show that our proposed algorithm performs better than other state-of-the-art techniques.
Fault diagnosis for tilting-pad journal bearing based on SVD and LMD
Directory of Open Access Journals (Sweden)
Zhang Xiaotao
2016-01-01
Aiming at fault diagnosis for the recently developed tilting-pad journal bearing with fluid support, a new method based on singular value decomposition (SVD) and local mean decomposition (LMD) is proposed. First, phase space reconstruction of the Hankel matrix and the SVD method are used as a pre-filtering unit to reduce the random noise in the original signal. Then the purified signal is decomposed by LMD into a series of production functions (PFs). Based on the PFs, a time-frequency map and marginal spectrum can be obtained for fault diagnosis. Finally, the method is applied to numerical simulation and practical experiment data. The results show that the proposed method can effectively detect fault features of the tilting-pad journal bearing.
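The SVD pre-filtering step (phase-space reconstruction into a Hankel matrix followed by rank truncation and anti-diagonal averaging) can be sketched as follows; the embedding order and retained rank here are illustrative choices, not the paper's settings:

```python
import numpy as np

def hankel_svd_denoise(x, order, rank):
    """Embed the signal in a Hankel matrix, keep the largest `rank`
    singular values, and reconstruct by anti-diagonal averaging."""
    n = len(x)
    rows = n - order + 1
    H = np.array([x[i:i + order] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Average each anti-diagonal of the low-rank matrix back into a signal
    y = np.zeros(n)
    counts = np.zeros(n)
    for i in range(rows):
        y[i:i + order] += Hr[i]
        counts[i:i + order] += 1
    return y / counts

t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 10 * t)            # a pure tone is rank-2 in Hankel form
noisy = clean + 0.5 * np.random.default_rng(1).standard_normal(400)
denoised = hankel_svd_denoise(noisy, order=50, rank=2)
```

The purified signal would then be passed to LMD for decomposition into PFs.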
A Fast SVD-Hidden-nodes based Extreme Learning Machine for Large-Scale Data Analytics.
Deng, Wan-Yu; Bai, Zuo; Huang, Guang-Bin; Zheng, Qing-Hua
2016-05-01
Big dimensional data is a growing trend that is emerging in many real-world contexts, extending from web mining, gene expression analysis, and protein-protein interaction to high-frequency financial data. Nowadays, there is a growing consensus that increasing dimensionality impedes the performance of classifiers, which is termed the "peaking phenomenon" in the field of machine intelligence. To address the issue, dimensionality reduction is commonly employed as a preprocessing step on big dimensional data before building the classifiers. In this paper, we propose an Extreme Learning Machine (ELM) approach for large-scale data analytics. In contrast to existing approaches, we embed hidden nodes that are designed using singular value decomposition (SVD) into the classical ELM. These SVD nodes in the hidden layer are shown to capture the underlying characteristics of big dimensional data well, exhibiting excellent generalization performance. The drawback of using SVD on the entire dataset, however, is the high computational complexity involved. To address this, a fast divide-and-conquer approximation scheme is introduced to maintain computational tractability on high-volume data. The resultant algorithm is labeled here Fast Singular Value Decomposition-Hidden-nodes based Extreme Learning Machine, or FSVD-H-ELM in short. In FSVD-H-ELM, instead of identifying the SVD hidden nodes directly from the entire dataset, SVD hidden nodes are derived from multiple random subsets of data sampled from the original dataset. Comprehensive experiments and comparisons are conducted to assess FSVD-H-ELM against other state-of-the-art algorithms. The results demonstrate the superior generalization performance and efficiency of FSVD-H-ELM. Copyright © 2016 Elsevier Ltd. All rights reserved.
Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression
Halim Boukaram, Wajih
2017-09-14
We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.
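The one-sided Jacobi building block can be illustrated with a serial NumPy sketch. The paper's contribution is the batched GPU implementation across the memory hierarchy; the sketch below only shows the underlying algorithm, in which pairs of columns of U = A·V are rotated until they are mutually orthogonal:

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD: rotate column pairs until all columns are
    mutually orthogonal; the column norms are then the singular values
    and the accumulated rotations form V."""
    U = A.astype(float).copy()
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                # Jacobi rotation that orthogonalizes columns p and q
                zeta = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if zeta >= 0 else -1.0
                t = sgn / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ rot
                V[:, [p, q]] = V[:, [p, q]] @ rot
        if off < tol:
            break
    sig = np.linalg.norm(U, axis=0)
    order = np.argsort(sig)[::-1]        # descending, like LAPACK
    return U[:, order] / sig[order], sig[order], V[:, order]

A = np.random.default_rng(2).standard_normal((6, 4))
Uj, sj, Vj = one_sided_jacobi_svd(A)
```

Because each rotation touches only two columns, many such updates can proceed independently, which is the inherent parallelism the batched GPU kernels exploit.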
Juang, Jer-Nan; Kim, Hye-Young; Junkins, John L.
2003-01-01
A new star pattern recognition method is developed using singular value decomposition of a measured unit column vector matrix in a measurement frame and the corresponding cataloged vector matrix in a reference frame. It is shown that singular values and right singular vectors are invariant with respect to coordinate transformation and robust under uncertainty. One advantage of singular value comparison is that a pairing process for individual measured and cataloged stars is not necessary, and the attitude estimation and pattern recognition process are not separated. An associated method for mission catalog design is introduced and simulation results are presented.
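The claimed invariance is easy to verify numerically: the singular values of a matrix of star unit vectors are unchanged by any rotation of the coordinate frame. A minimal check, with randomly generated stars and rotation standing in for the measured and cataloged frames:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unit vectors to 5 "stars", stacked as columns of a 3x5 matrix
B = rng.standard_normal((3, 5))
B /= np.linalg.norm(B, axis=0)

# A proper rotation (orthogonal, det = +1) standing in for the unknown
# attitude transformation between body and reference frames
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

s_ref = np.linalg.svd(B, compute_uv=False)
s_rot = np.linalg.svd(Q @ B, compute_uv=False)
# The singular values match regardless of the frame
```

Since Q @ B = (Q U) Σ Vᵀ is itself a valid SVD, the right singular vectors are also unchanged (up to sign), which is what allows matching without a star-by-star pairing step.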
Polynomial computation of Hankel singular values
Kwakernaak, H.
1992-01-01
A revised and improved version of a polynomial algorithm, published by N.J. Young (1990) for the computation of the singular values and vectors of the Hankel operator defined by a linear time-invariant system with a rational transfer matrix, is presented. Tentative numerical experiments
Directory of Open Access Journals (Sweden)
Wenjun Chen
2016-03-01
A nuclear magnetic resonance (NMR) experiment for measurement of time-dependent magnetic fields was introduced. To improve the signal-to-interference-plus-noise ratio (SINR) of NMR data, a new method for interference cancellation and noise reduction (ICNR) based on singular value decomposition (SVD) was proposed. The singular values corresponding to the radio frequency interference (RFI) signal were identified in terms of the correlation between the FID data and the reference data, and then the RFI and noise were suppressed by setting the corresponding singular values to zero. The validity of the algorithm was verified by processing the measured NMR data. The results indicated that this method significantly suppresses RFI and random noise while preserving the FID signal well. At present, the major limitation of the proposed SVD-based ICNR technique is that the threshold value for interference cancellation needs to be manually selected. Finally, the inversion waveform of the applied alternating magnetic field was given by fitting the processed experimental data.
Chen, Wenjun; Ma, Hong; Yu, De; Zhang, Hua
2016-03-04
A nuclear magnetic resonance (NMR) experiment for measurement of time-dependent magnetic fields was introduced. To improve the signal-to-interference-plus-noise ratio (SINR) of NMR data, a new method for interference cancellation and noise reduction (ICNR) based on singular value decomposition (SVD) was proposed. The singular values corresponding to the radio frequency interference (RFI) signal were identified in terms of the correlation between the FID data and the reference data, and then the RFI and noise were suppressed by setting the corresponding singular values to zero. The validity of the algorithm was verified by processing the measured NMR data. The results indicated that this method significantly suppresses RFI and random noise while preserving the FID signal well. At present, the major limitation of the proposed SVD-based ICNR technique is that the threshold value for interference cancellation needs to be manually selected. Finally, the inversion waveform of the applied alternating magnetic field was given by fitting the processed experimental data.
Change Detection in SAR Images Based on Deep Semi-NMF and SVD Networks
Directory of Open Access Journals (Sweden)
Feng Gao
2017-05-01
With the development of Earth observation programs, more and more multi-temporal synthetic aperture radar (SAR) data are available from remote sensing platforms. It is therefore desirable to develop unsupervised methods for SAR image change detection. Recently, deep learning-based methods have displayed promising performance for remote sensing image analysis. However, these methods perform well only when the number of training samples is sufficiently large. In this paper, a novel simple method for SAR image change detection is proposed. The proposed method uses two singular value decomposition (SVD) analyses to learn the non-linear relations between multi-temporal images. By this means, it can generate more representative feature expressions with fewer samples, and it is therefore simple to design and easy to train. Firstly, deep semi-nonnegative matrix factorization (Deep Semi-NMF) is utilized to select pixels that have a high probability of being changed or unchanged as samples. Next, image patches centered at these sample pixels are generated from the input multi-temporal SAR images. Then, we build SVD networks, which are comprised of two SVD convolutional layers and one histogram feature generation layer. Finally, pixels in both multi-temporal SAR images are classified by the SVD networks, and the final change map is obtained. Experimental results on three SAR datasets demonstrate the effectiveness and robustness of the proposed method.
SVD analysis in application to full waveform inversion of multicomponent seismic data
International Nuclear Information System (INIS)
Silvestrov, Ilya; Tcheverda, Vladimir
2011-01-01
The inverse problem of recovering the Earth's interior from multi-shot/multi-offset multicomponent seismic data is considered in this work. This problem may be posed as a nonlinear operator equation, and local derivative-based techniques are commonly used for its solution. Such a method is known in seismic processing as 'full-waveform inversion'. The major properties of the inversion process are governed by the Frechet derivative of the forward map. We show and study these properties by means of singular value decomposition (SVD) truncation. This decomposition depends strongly on the acquisition system and on the parameterization of the problem. We show that it is very important to study the inverse problem in each particular case, otherwise unreliable results may be obtained. Surface and cross-well acquisition systems are considered in this work. Appropriate parameterizations for them are determined, and the typical behavior of the inverse problem solution is studied.
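Truncating the SVD is the standard regularization for such ill-posed linearized problems: directions with small singular values, which amplify data noise, are simply discarded. A minimal sketch with a synthetic ill-conditioned forward operator (all sizes, spectra, and noise levels are illustrative, not seismic parameters):

```python
import numpy as np

def tsvd_solve(A, b, rank):
    """Truncated-SVD solution: invert only the `rank` largest singular
    values and discard the noise-dominated small-singular-value modes."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:rank].T @ ((U[:, :rank].T @ b) / s[:rank])

rng = np.random.default_rng(4)
n = 12
# Synthetic ill-conditioned forward operator with decaying spectrum
Uo, _ = np.linalg.qr(rng.standard_normal((n, n)))
Vo, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)                        # condition number ~1e11
A = Uo @ np.diag(s) @ Vo.T
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy data

x_tsvd = tsvd_solve(A, b, rank=4)
x_naive = np.linalg.solve(A, b)   # noise blown up by 1/sigma_min
```

The truncated solution trades a small bias (the discarded components of x_true) for a huge reduction in noise amplification, which is why the truncation level must be studied for each acquisition geometry and parameterization.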
A Note on Inclusion Intervals of Matrix Singular Values
Cui, Shu-Yu; Tian, Gui-Xian
2012-01-01
We establish an inclusion relation between two known inclusion intervals of matrix singular values in some special case. In addition, based on the use of positive scale vectors, a known inclusion interval of matrix singular values is also improved.
SVD-based filter design for the trajectory feedback of CLIC
Pfingstner, J; Schulte, D; Snuverink, J
2011-01-01
The trajectory feedback of the Compact Linear Collider (CLIC) is an essential mitigation method for ground motion effects at CLIC. In this paper, significant improvements to the design of this feedback are presented. The new controller is based on a singular value decomposition (SVD) of the orbit response matrix to decouple the inputs and outputs of the accelerator. For each decoupled channel, one independent controller is designed by utilising ground motion and noise models. This new design allows a relaxation of the required resolution of the beam position monitors from 10 to 50 nm. At the same time, the suppression of ground motion effects is improved. As a consequence, the tight tolerances for the allowable luminosity loss due to ground motion effects in CLIC can be met. The presented methods can easily be adapted to other accelerators in order to loosen sensor tolerances and to efficiently suppress ground motion effects.
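The decoupling idea can be sketched as follows: transform the orbit reading with Uᵀ, apply one scalar gain per singular channel, and map the correction back with V. The toy response matrix, static disturbance, and single shared gain below are illustrative stand-ins, not CLIC parameters or the paper's per-channel controllers:

```python
import numpy as np

rng = np.random.default_rng(5)
R = rng.standard_normal((8, 6))      # toy orbit response (BPMs x correctors)
U, s, Vt = np.linalg.svd(R, full_matrices=False)

def svd_controller(y, gain=0.5):
    """Decouple the BPM reading with U^T, apply one scalar gain per
    singular channel, and map back to corrector space with V."""
    return -Vt.T @ (gain * (U.T @ y) / s)

x = rng.standard_normal(6)           # initial orbit perturbation
y = R @ x                            # measured orbit
for _ in range(20):
    x = x + svd_controller(y)        # apply correction
    y = R @ x                        # re-measure
```

In the decoupled coordinates each channel contracts independently by (1 - gain) per iteration, so the orbit error decays geometrically; the paper designs a separate dynamic controller per channel instead of one static gain.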
A Fault Diagnosis Model Based on LCD-SVD-ANN-MIV and VPMCD for Rotating Machinery
Directory of Open Access Journals (Sweden)
Songrong Luo
2016-01-01
The fault diagnosis process is essentially a class discrimination problem. However, traditional class discrimination methods such as SVM and ANN fail to capitalize on the interactions among the feature variables. Variable predictive model-based class discrimination (VPMCD) can adequately use these interactions, but feature extraction and selection greatly affect the accuracy and stability of the VPMCD classifier. Aiming at the nonstationary characteristics of vibration signals from rotating machinery with local faults, a singular value decomposition (SVD) technique based on local characteristic-scale decomposition (LCD) was developed to extract the feature variables. Subsequently, combining an artificial neural network (ANN) with mean impact value (MIV), ANN-MIV was proposed as a feature selection approach to select more suitable feature variables as the input vector of the VPMCD classifier. At the end of this paper, a novel fault diagnosis model based on LCD-SVD-ANN-MIV and VPMCD is proposed and validated by an experimental application to roller bearing fault diagnosis. The results show that the proposed method is effective and noise tolerant, and the comparative results demonstrate that it is superior to the other methods in diagnosis speed, diagnosis success rate, and diagnosis stability.
A Note on Inclusion Intervals of Matrix Singular Values
Directory of Open Access Journals (Sweden)
Shu-Yu Cui
2012-01-01
We establish an inclusion relation between two known inclusion intervals of matrix singular values in some special case. In addition, based on the use of positive scale vectors, a known inclusion interval of matrix singular values is also improved.
Using singular value decomposition of component eigenmodes for interface reduction
Tournaire, Hadrien; Renaud, Franck; Dion, Jean Luc
2018-02-01
The aim of this paper is to describe the development of a reduced order model for modal analysis in a design context. The design process of most industrial systems is based on the re-utilization of certain components. Here, we propose a reduction method involving component eigenmodes to recover the modal behaviour of an assembled structure. The contribution of this work is that it uses component eigenmodes to build an interface reduction basis. Lastly, the reduction methodology proposed is compared to the Craig and Bampton method by applying it to two case studies of which one is an industrial model of an open rotor blade.
Generalized reduced rank tests using the singular value decomposition
Kleibergen, F.R.; Paap, R.
2002-01-01
We propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: necessity of a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson (1951), sensitivity to the ordering of the variables for the LDU
Generalized Reduced Rank Tests using the Singular Value Decomposition
F.R. Kleibergen (Frank); R. Paap (Richard)
2003-01-01
textabstractWe propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: necessity of a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson (1951), sensitivity to the ordering of the variables
International comparisons of road safety using Singular Value Decomposition.
Oppe, S.
2001-01-01
There is a general interest in the comparison of road safety developments in different countries. Comparisons have been made, based on absolute levels of accident or fatality risk or on the rate of change of functions regarding risk, the number of accidents, fatalities or injuries over time. Such
Generalized Reduced Rank Tests using the Singular Value Decomposition
Kleibergen, F.R.; Paap, R.
2006-01-01
We propose a novel statistic to test the rank of a matrix. The rank statistic overcomes deficiencies of existing rank statistics, like: a Kronecker covariance matrix for the canonical correlation rank statistic of Anderson [Annals of Mathematical Statistics (1951), 22, 327-351] sensitivity to the
High Performance Polar Decomposition on Distributed Memory Systems
Sukkari, Dalal E.
2016-08-08
The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. We investigate the latter scenario, which performs more floating-point operations but exposes at the same time more parallelism, and therefore, runs closer to the theoretical peak performance of the system, thanks to more compute-bound matrix operations. Profiling results show the performance scalability of QDWH for calculating the polar decomposition using around 9200 MPI processes on well and ill-conditioned matrices of 100K×100K problem size. We study then the performance impact of the QDWH-based polar decomposition as a pre-processing step toward calculating the SVD itself. The new distributed-memory implementation of the QDWH-SVD solver achieves up to five-fold speedup against current state-of-the-art vendor SVD implementations. © Springer International Publishing Switzerland 2016.
A singular value sensitivity approach to robust eigenstructure assignment
DEFF Research Database (Denmark)
Søgaard-Andersen, Per; Trostmann, Erik; Conrad, Finn
1986-01-01
A design technique for improving the feedback properties of multivariable state feedback systems designed using eigenstructure assignment is presented. Based on a singular value analysis of the feedback properties a design parameter adjustment procedure is outlined. This procedure allows...
Bullinaria, John A; Levy, Joseph P
2012-09-01
In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.
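The core pipeline described above, pointwise-mutual-information vectors followed by SVD-based dimensionality reduction and cosine comparison, can be sketched on a toy co-occurrence matrix; the counts and the 2-dimensional reduction are illustrative stand-ins for corpus-scale statistics:

```python
import numpy as np

def ppmi(C):
    """Positive pointwise mutual information of a co-occurrence count
    matrix: log p(w,c) / (p(w) p(c)), with negatives clamped to zero."""
    total = C.sum()
    pw = C.sum(axis=1, keepdims=True) / total
    pc = C.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore"):
        pmi = np.log((C / total) / (pw * pc))
    return np.maximum(pmi, 0.0)

# Toy counts: words 0,1 share contexts 0,1; words 2,3 share contexts 2,3
C = np.array([[10.,  8.,  0.,  1.],
              [ 9., 11.,  1.,  0.],
              [ 0.,  1., 12.,  9.],
              [ 1.,  0., 10., 13.]])
U, s, Vt = np.linalg.svd(ppmi(C))
vecs = U[:, :2] * s[:2]              # 2-D SVD-reduced semantic vectors

def cos(a, b):
    """Cosine similarity between two semantic vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

Words that share contexts end up close in the reduced space, which is the property the TOEFL-style evaluation tasks exploit.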
Using dynamic mode decomposition for real-time background/foreground separation in video
Kutz, Jose Nathan; Grosek, Jacob; Brunton, Steven; Fu, Xing; Pendergrass, Seth
2017-06-06
The technique of dynamic mode decomposition (DMD) is disclosed herein for the purpose of robustly separating video frames into background (low-rank) and foreground (sparse) components in real-time. Foreground/background separation is achieved at the computational cost of just one singular value decomposition (SVD) and one linear equation solve, thus producing results orders of magnitude faster than robust principal component analysis (RPCA). Additional techniques, including techniques for analyzing the video for multi-resolution time-scale components, and techniques for reusing computations to allow processing of streaming video in real time, are also described herein.
A Hybrid EEMD-Based SampEn and SVD for Acoustic Signal Processing and Fault Diagnosis
Directory of Open Access Journals (Sweden)
Zhi-Xin Yang
2016-04-01
Acoustic signals are an ideal source of diagnosis data thanks to their intrinsic non-directional coverage, sensitivity to incipient defects, and insensitivity to structural resonance characteristics. However, prevailing signal de-noising and feature extraction methods suffer from high computational cost, low signal-to-noise ratio (S/N), and difficulty in extracting the compound acoustic emissions of various failure types. To address these challenges, we propose a hybrid signal processing technique to depict the embedded signal using generally effective features. The ensemble empirical mode decomposition (EEMD) is adopted as the fundamental pre-processor, integrated with the sample entropy (SampEn), singular value decomposition (SVD), and statistical feature processing (SFP) methods. SampEn and SVD are identified as the condition indicators for periodical and irregular signals, respectively. Moreover, such a hybrid module is self-adaptive and robust to different signals, which ensures the generality of its performance. The hybrid signal processor is further integrated with a probabilistic classifier, the pairwise-coupled relevance vector machine (PCRVM), to construct a new fault diagnosis system. Experimental verifications on industrial equipment show that the proposed diagnostic system is superior to prior methods in computational efficiency and in its capability of simultaneously processing non-stationary and nonlinear condition monitoring signals.
Singular value correlation functions for products of Wishart random matrices
International Nuclear Information System (INIS)
Akemann, Gernot; Kieburg, Mario; Wei, Lu
2013-01-01
We consider the product of M square random matrices with complex elements and no further symmetry, where all matrix elements of each factor have a Gaussian distribution. This generalizes the classical Wishart–Laguerre Gaussian unitary ensemble with M = 1. In this paper, we first compute the joint probability distribution for the singular values of the product matrix when the matrix size N and the number M are fixed but arbitrary. This leads to a determinantal point process which can be realized in two different ways. First, it can be written as a one-matrix singular value model with a non-standard Jacobian, or second, for M ⩾ 2, as a two-matrix singular value model with a set of auxiliary singular values and a weight proportional to the Meijer G-function. For both formulations, we determine all singular value correlation functions in terms of the kernels of biorthogonal polynomials which we explicitly construct. They are given in terms of the hypergeometric and Meijer G-functions, generalizing the Laguerre polynomials for M = 1. Our investigation was motivated by applications in telecommunication of multi-layered scattering multiple-input and multiple-output channels. We present the ergodic mutual information at finite N for such a channel model with M − 1 layers of scatterers as an example. (paper)
Directory of Open Access Journals (Sweden)
Abhijeet Ravankar
2016-05-01
Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the computation. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted by the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Points (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
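The SVD step of such line extraction is a total-least-squares fit: the direction of a cluster of range points is the first right singular vector of the centred point cloud. A minimal sketch (the hopping and reverse-hop logic of the full algorithm is omitted, and the synthetic scan points are illustrative):

```python
import numpy as np

def fit_line_svd(points):
    """Total-least-squares line fit: the direction is the first right
    singular vector of the centred point cloud, the line passes
    through the centroid."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return centroid, Vt[0]

rng = np.random.default_rng(7)
t = np.linspace(0, 5, 50)
# Synthetic LRS points along y = 2x + 1 with small sensor noise
pts = np.c_[t, 2 * t + 1] + 0.01 * rng.standard_normal((50, 2))
c, d = fit_line_svd(pts)
slope = d[1] / d[0]
```

Unlike ordinary least squares, this fit minimizes perpendicular distances, so it behaves well even for near-vertical walls in a scan.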
Multiscale singular value manifold for rotating machinery fault diagnosis
Energy Technology Data Exchange (ETDEWEB)
Feng, Yi; Lu, BaoChun; Zhang, Deng Feng [School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing (China)]
2017-01-15
The time-frequency distribution of a vibration signal can be considered as an image that contains more information than the signal in the time domain. Manifold learning is a novel theory for image recognition that can also be applied to rotating machinery fault pattern recognition based on time-frequency distributions. However, the vibration signal of rotating machinery in a fault condition contains cyclical transient impulses with different phases, which are detrimental to image recognition for the time-frequency distribution. To eliminate the effects of phase differences and extract the inherent features of time-frequency distributions, a multiscale singular value manifold method is proposed. The obtained low-dimensional multiscale singular value manifold features can reveal the differences between fault patterns and are applicable to classification and diagnosis. Experimental verification proves that the performance of the proposed method is superior in rotating machinery fault diagnosis.
Nuclear power plant sensor fault detection using singular value ...
Indian Academy of Sciences (India)
Shyamapada Mandal
2017-07-27
Jul 27, 2017 ... Hotelling T2-statistic and Q-statistic. Results obtained with the proposed method are compared with the existing PCA-based method and elaborated in section 5. The final conclusion of the paper and recommendations for future research work are given in section 6. 2. Proposed method. The SVD is an ...
Algorithms for large scale singular value analysis of spatially variant tomography systems
International Nuclear Information System (INIS)
Cao-Huu, Tuan; Brownell, G.; Lachiver, G.
1996-01-01
The problem of determining the eigenvalues of large matrices occurs often in the design and analysis of modern tomography systems. As there is an interest in solving systems containing an ever-increasing number of variables, current research effort is being made to create more robust solvers which do not depend on some special feature of the matrix for convergence (e.g. block circulant), and to improve the speed of already known and understood solvers so that solving even larger systems in a reasonable time becomes viable. Our standard techniques for singular value analysis are based on sparse matrix factorization and are not applicable when the input matrices are large because the algorithms cause too much fill. Fill refers to the increase of non-zero elements in the LU decomposition of the original matrix A (the system matrix). So we have developed iterative solutions that are based on sparse direct methods. Data motion and preconditioning techniques are critical for performance. This conference paper describes our algorithmic approaches for large-scale singular value analysis of spatially variant imaging systems, and in particular of PCR2, a cylindrical three-dimensional PET imager built at the Massachusetts General Hospital (MGH) in Boston. We recommend the desirable features and challenges for the next generation of parallel machines for optimal performance of our solver
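For matrices too large to factorize, the leading singular values can be computed iteratively from matrix-vector products alone, which avoids fill entirely. A minimal power-iteration sketch (production solvers use Lanczos-type methods with preconditioning; the dense test matrix here is only for checking the result):

```python
import numpy as np

def top_singular_value(matvec, rmatvec, n, iters=200, seed=0):
    """Power iteration on A^T A using only matrix-vector products, so
    the system matrix A never needs to be formed or factorized."""
    v = np.random.default_rng(seed).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = rmatvec(matvec(v))       # one application of A^T A
        v = w / np.linalg.norm(w)
    return np.linalg.norm(matvec(v))

A = np.random.default_rng(9).standard_normal((100, 60))
est = top_singular_value(lambda v: A @ v, lambda u: A.T @ u, 60)
exact = np.linalg.svd(A, compute_uv=False)[0]
```

Since only `matvec`/`rmatvec` callbacks are needed, the same routine works for sparse or matrix-free system operators of tomography models.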
SVD-BASED TRANSMIT BEAMFORMING FOR VARIOUS MODULATIONS WITH CONVOLUTION ENCODING
Directory of Open Access Journals (Sweden)
M. Raja
2011-09-01
This paper presents a new beamforming technique using singular value decomposition (SVD) for closed-loop multiple-input, multiple-output (MIMO) wireless systems with various modulation schemes such as BPSK, 16-QAM, 16-PSK, DPSK and PAM, along with a convolutional encoder. The channel matrix is decomposed into a number of independent orthogonal modes of excitation, referred to as the eigenmodes of the channel. Transmit precoding is performed by multiplying the input symbols by a unitary matrix to produce the transmit beamforming, and the precoded symbols are transmitted over a Rayleigh fading channel. At the receiver, combining is performed using a maximum ratio combiner (MRC), and receiver shaping retrieves the original input symbols by multiplying the received signal by the conjugate transpose of the unitary matrix. Furthermore, expressions for the average bit error rate (BER) for M-PSK and M-QAM are derived. The superiority of the proposed scheme is demonstrated by simulation results and by comparison with other beamforming methods.
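The precoding and receiver shaping described above reduce the MIMO channel to parallel scalar eigenmode channels. A noiseless NumPy sketch (modulation mapping, coding, and MRC combining omitted; the 4x4 Rayleigh channel is illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
nt, nr = 4, 4
# Rayleigh-fading channel: i.i.d. unit-variance complex Gaussian entries
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
U, s, Vh = np.linalg.svd(H)

symbols = np.array([1, -1, 1, 1], dtype=complex)   # BPSK symbols
x = Vh.conj().T @ symbols            # transmit precoding with V
y = H @ x                            # channel (noiseless for clarity)
r = U.conj().T @ y                   # receiver shaping with U^H
# r equals the symbols scaled by the channel's singular values,
# i.e. the eigenmode gains
recovered = r / s
```

With noise added, each eigenmode becomes an independent scalar channel with SNR proportional to its squared singular value, which is what the derived BER expressions average over.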
Comparison of DCT, SVD and BFOA based multimodal biometric watermarking system
Directory of Open Access Journals (Sweden)
S. Anu H. Nair
2015-12-01
Digital image watermarking is a major domain for hiding biometric information, in which the watermark data are concealed inside a host image with imperceptible change to the picture. Owing to advances in digital image watermarking, the majority of research aims at reliable improvements in robustness to prevent attacks. A reversible invisible watermarking scheme is used for a fingerprint and iris multimodal biometric system. A novel approach is used for fusing the different biometric modalities: the individual unique modalities of fingerprint and iris are extracted and fused using different fusion techniques. The performance of the fusion techniques is evaluated, and the Discrete Wavelet Transform fusion method is identified as the best. The best fused biometric template is then watermarked into a cover image. Various watermarking techniques, namely the Discrete Cosine Transform (DCT), Singular Value Decomposition (SVD) and the Bacterial Foraging Optimization Algorithm (BFOA), are applied to the fused biometric feature image. The performance of the watermarking systems is compared using different metrics. The watermarked images are found to be robust against different attacks, and the biometric template can be recovered for the BFOA watermarking technique.
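An SVD watermarking scheme of the classic Liu-Tan type, in which the watermark perturbs the host's singular values, can be sketched as follows. Whether this matches the exact SVD embedding used in the paper is an assumption, and the random 16x16 "images" are placeholders:

```python
import numpy as np

def embed(host, mark, alpha=0.05):
    """Liu-Tan style SVD watermarking: perturb the host's singular
    values with the watermark and re-synthesize the image; the side
    information (Uw, Vwt, s) acts as the extraction key."""
    U, s, Vt = np.linalg.svd(host)
    Uw, sw, Vwt = np.linalg.svd(np.diag(s) + alpha * mark)
    watermarked = U @ np.diag(sw) @ Vt
    return watermarked, (Uw, Vwt, s)

def extract(watermarked, key, alpha=0.05):
    """Recover the watermark from the singular values of the
    watermarked image and the stored key."""
    Uw, Vwt, s = key
    sw = np.linalg.svd(watermarked, compute_uv=False)
    D = Uw @ np.diag(sw) @ Vwt          # reconstructs S + alpha * mark
    return (D - np.diag(s)) / alpha

rng = np.random.default_rng(10)
host = rng.uniform(0, 255, (16, 16))    # placeholder cover image
mark = rng.standard_normal((16, 16))    # placeholder biometric template
wm, key = embed(host, mark)
rec = extract(wm, key)
```

Embedding in the singular values is what gives such schemes their robustness: common attacks change the singular values of the image only slightly.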
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on spotlight synthetic aperture radar (SAR) imaging for point scattering targets based on tensor modeling. In a real-world scenario, scatterers usually distribute in a block sparse pattern, a feature that has scarcely been utilized in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. Multi-linear block sparsity is introduced into higher-order singular value decomposition (SVD) together with a dictionary construction procedure. Simulation experiments on ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbance which often degrade the imaging quality of conventional methods. The computational resource requirements are further investigated: the complexity analysis shows that the present method consumes fewer resources than the classic matching pursuit method. Imaging results on practically measured data also demonstrate the effectiveness of the algorithm developed in this paper.
Directory of Open Access Journals (Sweden)
Nicolas M Bertagnolli
To search for evolutionary forces that might act upon transcript length, we use the singular value decomposition (SVD) to identify the length distribution functions of sets and subsets of human and yeast transcripts from profiles of mRNA abundance levels across gel electrophoresis migration distances that were previously measured by DNA microarrays. We show that the SVD identifies the transcript length distribution functions as "asymmetric generalized coherent states" from the DNA microarray data and with no a priori assumptions. Comparing subsets of human and yeast transcripts of the same gene ontology annotations, we find that in both disparate eukaryotes, transcripts involved in protein synthesis or mitochondrial metabolism are significantly shorter than typical, and in particular, significantly shorter than those involved in glucose metabolism. Comparing the subsets of human transcripts that are overexpressed in glioblastoma multiforme (GBM) or normal brain tissue samples from The Cancer Genome Atlas, we find that GBM maintains normal brain overexpression of significantly short transcripts, enriched in transcripts that are involved in protein synthesis or mitochondrial metabolism, but suppresses normal overexpression of significantly longer transcripts, enriched in transcripts that are involved in glucose metabolism and brain activity. These global relations among transcript length, cellular metabolism and tumor development suggest a previously unrecognized physical mode for tumor and normal cells to differentially regulate metabolism in a transcript length-dependent manner. The identified distribution functions support a previous hypothesis from mathematical modeling of evolutionary forces that act upon transcript length in the manner of the restoring force of the harmonic oscillator.
Application of Hilbert space decomposition to acoustical inverse problems
Lehman, Sean K.
2003-04-01
In a recently developed theory, the forward integral acoustic scattering operator is cast into the formalism of a Hilbert space operator which projects the continuous-space scattering object into the discrete measurement space. By determining the singular value decomposition (SVD) of the forward scattering operator, one obtains optimal, orthonormal bases for each of these spaces in the form of the singular vectors. In formulating the inverse scattering problem, it is best to express the unknown object distribution in terms of an expansion of the singular vectors. Using this expansion, the best reconstruction, in an LMS sense, can be obtained from measured scattered field data. We present reconstruction results using this new theory. [Work performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.]
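The expansion-based reconstruction described in this record can be sketched with a truncated-SVD pseudoinverse. The forward operator, dimensions, noise level and singular value cutoff below are illustrative stand-ins, not the paper's acoustic scattering operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward operator: maps a 40-sample "object" to 25 noisy
# "measurements" (a stand-in for the discretized scattering operator).
G = rng.standard_normal((25, 40))
x_true = np.sin(np.linspace(0, 3 * np.pi, 40))
y = G @ x_true + 1e-3 * rng.standard_normal(25)

# The SVD of the forward operator gives orthonormal bases for the
# measurement space (U) and the object space (V).
U, s, Vt = np.linalg.svd(G, full_matrices=False)

# Expand the unknown in the right singular vectors, keeping only the
# components whose singular values exceed a cutoff (truncated SVD).
k = np.sum(s > 1e-6 * s[0])           # retained singular components
coeffs = (U[:, :k].T @ y) / s[:k]     # expansion coefficients
x_rec = Vt[:k].T @ coeffs             # best LMS reconstruction
```

With fewer measurements than unknowns, `x_rec` is the minimum-norm least-squares solution; regularization enters through the choice of cutoff.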
Directory of Open Access Journals (Sweden)
Akira R Kinjo
Full Text Available Position-specific scoring matrices (PSSMs) are useful for detecting weak homology in protein sequence analysis, and they are thought to contain some essential signatures of the protein families. In order to elucidate what kind of ingredients constitute such family-specific signatures, we apply singular value decomposition to a set of PSSMs and examine the properties of dominant right and left singular vectors. The first right singular vectors were correlated with various amino acid indices including relative mutability, amino acid composition in the protein interior, hydropathy, or turn propensity, depending on the protein. A significant correlation between the first left singular vector and a measure of site conservation was observed. It is shown that the contribution of the first singular component to the PSSMs acts to disfavor residues that are potentially, but falsely, functionally important at conserved sites. The second right singular vectors were highly correlated with hydrophobicity scales, and the corresponding left singular vectors with contact numbers of protein structures. It is suggested that sequence alignment with a PSSM is essentially equivalent to threading supplemented with functional information. In addition, singular vectors may be useful for analyzing and annotating the characteristics of conserved sites in protein families.
Radar Micro-Doppler Feature Extraction Using the Singular Value Decomposition
Wit, J.J.M. de; Harmanny, R.I.A.; Molchanov, P.
2014-01-01
The micro-Doppler spectrogram depends on parts of a target moving and rotating in addition to the main body motion (e.g., spinning rotor blades) and is thus characteristic of the type of target. In this study, the micro-Doppler spectrogram is exploited to distinguish between birds and
Wrapper Feature Extraction for Time Series Classification Using Singular Value Decomposition
Hui, Zhang; Tu, Bao Ho; Kawasaki, Saori
2005-01-01
Time series classification is an important aspect of time series mining. Recently, time series classification has attracted increasing interest in various domains. However, the high dimensionality of time series makes classification a difficult problem. The so-called curse of dimensionality not only slows down the process of classification but also decreases the classification quality. Many dimensionality reduction techniques have been proposed to circumvent the curse of...
Directory of Open Access Journals (Sweden)
AMBIKA DORAISAMY
2017-06-01
Full Text Available A digital watermark is defined as inaudible data, permanently embedded in a speech file for authenticating the secret data. The main goal of this paper is to embed a watermark in the speech signal without any degradation. Here, hybrid watermarking is performed based on three techniques: the Discrete Cosine Transform (DCT) with Singular Value Decomposition (SVD), and the Discrete Wavelet Transform (DWT); it is optimized by separating the speech and silent regions using a voice activity detection algorithm. Performance was evaluated based on the Peak Signal to Noise Ratio (PSNR) and Normalized Cross Correlation (NCC). The results show that the optimization method performs better than the existing algorithm and is robust against different kinds of attacks. They also show that the algorithm is efficient in terms of robustness, security, and imperceptibility, and that the watermarked signal is perceptually similar to the original audio signal.
Directory of Open Access Journals (Sweden)
Thaned Rojsiraphisal
2009-01-01
Full Text Available Sea surface height (SSH) and sea surface temperature (SST) in the North Indian Ocean are affected predominantly by the seasonally reversing monsoons and in turn feed back on monsoon variability. In this study, a set of data generated from a data-assimilative ocean model is used to examine coherent spatiotemporal modes of variability of winds and surface parameters using a frequency domain technique, the Multiple Taper Method with Singular Value Decomposition (MTM-SVD). The analysis shows significant variability at annual and semiannual frequencies in these fields individually and jointly. The joint variability of winds and SSH is significant at the interannual (2-3 year) timescale related to the ENSO mode, with a "dipole"-like spatial pattern. Joint variability with SST showed similar but somewhat weaker behavior. Winds appear to be the driver of variability in both SSH and SST at these frequency bands. This offers prospects for long-lead projections of the North Indian Ocean climate.
The comparison between SVD-DCT and SVD-DWT digital image watermarking
Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas
2018-03-01
With the internet, anyone can publish their creations as digital data simply, inexpensively, and in a form easily accessed by everyone. However, a problem appears when someone else claims that a creation is their property or modifies some part of it. This makes copyright protection necessary; one example is the watermarking of digital images. Applying a watermarking technique to digital data, especially images, enables total invisibility when the watermark is inserted into a carrier image: the carrier image does not undergo any decrease in quality, and the inserted image is not affected by attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. In this case, a trade-off occurs between the invisibility and the robustness of the image watermarking. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in low frequencies is robust to Gaussian blur, rescaling, and JPEG compression attacks, while embedding in high frequencies is robust to Gaussian noise.
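The SVD-in-subband embedding shared by such schemes can be sketched as follows. This is a minimal illustration, not the paper's configuration: a hand-rolled one-level Haar DWT stands in for the general DWT, the payload is four values added to the top singular values of the LL band, and the scaling factor 0.05 follows the "< 0.1" rule of thumb above. Extraction here is non-blind (it needs the original singular values):

```python
import numpy as np

def haar2d(a):
    """One-level 2-D Haar DWT; returns (LL, (LH, HL, HH))."""
    h = (a[0::2] + a[1::2]) / 2.0          # row averages
    g = (a[0::2] - a[1::2]) / 2.0          # row differences
    LL = (h[:, 0::2] + h[:, 1::2]) / 2.0
    HL = (h[:, 0::2] - h[:, 1::2]) / 2.0
    LH = (g[:, 0::2] + g[:, 1::2]) / 2.0
    HH = (g[:, 0::2] - g[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def ihaar2d(LL, bands):
    """Exact inverse of haar2d."""
    LH, HL, HH = bands
    h = np.empty((LL.shape[0], 2 * LL.shape[1]))
    g = np.empty_like(h)
    h[:, 0::2], h[:, 1::2] = LL + HL, LL - HL
    g[:, 0::2], g[:, 1::2] = LH + HH, LH - HH
    a = np.empty((2 * h.shape[0], h.shape[1]))
    a[0::2], a[1::2] = h + g, h - g
    return a

rng = np.random.default_rng(1)
carrier = rng.uniform(0, 255, (32, 32))    # stand-in carrier image
wmark = rng.uniform(0, 1, 4)               # small watermark payload
alpha = 0.05                               # scaling factor < 0.1

LL, bands = haar2d(carrier)
U, s, Vt = np.linalg.svd(LL)
s_w = s.copy()
s_w[:4] += alpha * wmark                   # embed in singular values
marked = ihaar2d(U @ np.diag(s_w) @ Vt, bands)

# Extraction: recompute the LL-band singular values and subtract.
LL_m, _ = haar2d(marked)
w_rec = (np.linalg.svd(LL_m, compute_uv=False)[:4] - s[:4]) / alpha
```

Because the perturbation is tiny relative to the gaps between the leading singular values, their ordering is preserved and the payload is recovered almost exactly, while the pixel-domain change stays far below one gray level.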
Burns, Jack O.; Tauscher, Keith; Rapetti, David; Mirocha, Jordan; Switzer, Eric
2018-01-01
We have designed a complete data analysis pipeline for constraining Cosmic Dawn physics using sky-averaged spectra in the VHF range (40-200 MHz) obtained either from the ground (e.g., the Experiment to Detect Global Epoch of Reionization Signal, EDGES; and the Cosmic Twilight Polarimeter, CTP) or from orbit above the lunar farside (e.g., the Dark Ages Radio Explorer, DARE). In the case of DARE, we avoid Earth-based RFI, ionospheric effects, and radio solar emissions (when observing at night). To extract the 21-cm spectrum, we parametrize the cosmological signal and systematics with two separate sets of modes defined through Singular Value Decomposition (SVD) of training set curves. The training set for the 21-cm spin-flip brightness temperatures is composed of theoretical models of the first stars, galaxies and black holes created by varying physical parameters within the ares code. The systematics training set is created using sky and beam data to model the beam-weighted foregrounds (which are about four orders of magnitude larger than the signal) as well as expected lab data to model receiver systematics. To constrain physical parameters determining the 21-cm spectrum, we apply to the extracted signal a series of consecutive fitting techniques including two usages of a Markov Chain Monte Carlo (MCMC) algorithm. Importantly, our pipeline efficiently utilizes the significant differences between the foreground and the 21-cm signal in spatial and spectral variations. In addition, it incorporates for the first time polarization data, dramatically improving the constraining power. We are currently validating this end-to-end pipeline using detailed simulations of the signal, foregrounds and instruments. This work was directly supported by the NASA Solar System Exploration Research Virtual Institute cooperative agreement number 80ARC017M0006 and funding from the NASA Ames Research Center cooperative agreement NNX16AF59G.
Directory of Open Access Journals (Sweden)
Ivaniš Predrag
2004-01-01
Full Text Available This paper presents a combination of Channel Optimized Vector Quantization based on the LBG algorithm and subchannel power allocation for MIMO systems with Singular Value Decomposition and a limited number of active subchannels. The proposed algorithm is designed to enable maximal throughput with a bit error rate below some target level when the backward channel capacity is limited. The effect of errors in the backward channel is also considered.
Hutterer, Victoria; Ramlau, Ronny
2018-03-01
The new generation of extremely large telescopes includes adaptive optics systems to correct for atmospheric blurring. In this paper, we present a new method of wavefront reconstruction from non-modulated pyramid wavefront sensor data. The approach is based on a simplified sensor model represented as the finite Hilbert transform of the incoming phase. Due to the non-compactness of the finite Hilbert transform operator, the classical theory for singular systems is not applicable. Nevertheless, we can express the Moore–Penrose inverse as a singular-value-type expansion with weighted Chebyshev polynomials.
A Jacobi-Davidson like SVD method
Hochstenbach, Michiel Erik
2000-01-01
We discuss a new method for the iterative computation of a portion of the singular values and vectors of a large sparse matrix. Similar to the Jacobi-Davidson method for the eigenvalue problem, we compute in each step a correction by approximately solving a correction equation. We give a few variants of
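The goal of such solvers — a few dominant singular triplets of a matrix without forming a dense factorization — can be illustrated with a much simpler power iteration on AᵀA. This is only a sketch of the objective, not the Jacobi-Davidson scheme itself (subspace methods with a correction equation converge far faster, especially for interior or clustered singular values):

```python
import numpy as np

def top_singular_triplet(A, iters=500, seed=0):
    """Dominant singular triplet of A by power iteration on A^T A
    (a stand-in for subspace methods such as Jacobi-Davidson type
    SVD solvers, which are far more efficient on large sparse A)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A.T @ (A @ v)          # one step of power iteration
        v = w / np.linalg.norm(w)
    sigma = np.linalg.norm(A @ v)  # Rayleigh-type estimate of sigma_1
    u = (A @ v) / sigma
    return u, sigma, v

A = np.diag([5.0, 3.0, 1.0]) @ np.random.default_rng(1).standard_normal((3, 4))
u, sigma, v = top_singular_triplet(A)
```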
Middleton, Beth A.
2014-01-01
A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in the ecological research of today. More recent refinements have brought debates on the relative roles of microbes, invertebrates and the environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.
Status of the Belle SVD detector
Abe, R; Alimonti, G; Asano, Y; Bakich, A; Banas, E; Bozek, A; Browder, T E; Dragic, J; Everton, C W; Fukunaga, C; Gordon, A; Guler, H; Haba, J; Hara, K; Hara, T; Hastings, N; Hazumi, M; Heenan, E; Higuchi, T; Hojo, T; Ishino, H; Iwai, G; Jalocha, P; Kaneko, J; Kapusta, P; Kawasaki, T; Korotushenko, K; Lange, J; Li, Y; Marlow, D; Matsubara, T; Miyake, H; Moffitt, L; Moloney, G R; Mori, S; Nagashima, Y; Nakadaira, T; Nakamura, T; Natkaniec, Z; Okuno, S; Olsen, S; Ostrowicz, W; Palka, H; Peak, L; Rozanka, M; Ryuko, J; Sevior, M E; Shimada, K; Stanic, S; Sumisawa, K; Stock, R; Swain, S; Tajima, H; Takahashi, S; Tagomori, H; Takasaki, F; Tamura, N; Tanaka, J; Tanaka, M; Taylor, G N; Tomura, T; Trabelsi, K; Tsuboyama, T; Tsujita, Y; Varner, G; Varvell, K E; Watanabe, Y; Yamada, Y; Yamamoto, H; Yokoyama, M; Zhao, H; Zontar, D
2002-01-01
The Belle spectrometer was designed for studies of B meson decays at an asymmetric e⁺e⁻ collider operating at the Υ(4S) resonance. One of its crucial components, a silicon vertex detector (SVD), is placed just outside a cylindrical beryllium beam-pipe. After a year of Belle operation, an upgraded version of the SVD was installed during the regular summer shutdown. The new SVD follows the same design, with a few important improvements. The rad-soft readout electronics was replaced by a rad-tolerant version, allowing a longer detector lifetime. A new radiation and temperature monitoring system was developed and implemented. A saw-shaped inner surface was introduced in the beam-pipe to prevent potential synchrotron radiation damage. The upgraded detector started operating successfully in October 2000.
Passive forensics for copy-move image forgery using a method based on DCT and SVD.
Zhao, Jie; Guo, Jichang
2013-12-10
As powerful image editing tools are widely used, the demand for identifying the authenticity of an image has much increased. Copy-move forgery is one of the most frequently used tampering techniques. Most existing techniques to expose this forgery need improved robustness against common post-processing operations, and they fail to precisely locate the tampered region, especially when there are large similar or flat regions in the image. In this paper, a robust method based on DCT and SVD is proposed to detect this specific artifact. Firstly, the suspicious image is divided into fixed-size overlapping blocks and the 2D-DCT is applied to each block; the DCT coefficients are then quantized by a quantization matrix to obtain a more robust representation of each block. Secondly, each quantized block is divided into non-overlapping sub-blocks and SVD is applied to each sub-block; features are then extracted, reducing the dimension of each block, using the largest singular value of each sub-block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are matched by a predefined shift-frequency threshold. Experimental results demonstrate that our proposed method can effectively detect multiple copy-move forgeries and precisely locate the duplicated regions, even when an image has been distorted by Gaussian blurring, AWGN, JPEG compression and their mixed operations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
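The block-matching skeleton of such detectors can be sketched in a heavily simplified form. This is not the paper's pipeline: the quantization matrix and per-sub-block SVD features are replaced by plain scalar quantization of low-frequency DCT coefficients, the shift-frequency vote by a simple distance check, and all parameters (block size 8, quantization step 10, minimum offset 8) are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    C = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1)
               * np.arange(n)[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def find_duplicates(img, bs=8, q=10.0, min_dist=8):
    """Flag pairs of distant overlapping blocks with identical
    quantized low-frequency 2-D DCT coefficients."""
    C = dct_matrix(bs)
    feats = []
    for r in range(img.shape[0] - bs + 1):
        for c in range(img.shape[1] - bs + 1):
            d = C @ img[r:r + bs, c:c + bs] @ C.T       # 2-D DCT of block
            f = tuple(np.round(d[:4, :4].ravel() / q))  # quantized low freqs
            feats.append((f, (r, c)))
    feats.sort(key=lambda t: t[0])                      # lexicographic sort
    pairs = []
    for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
        far = max(abs(p1[0] - p2[0]), abs(p1[1] - p2[1])) >= min_dist
        if f1 == f2 and far:                            # match distant twins
            pairs.append(tuple(sorted((p1, p2))))
    return pairs

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (32, 32))
img[20:28, 20:28] = img[2:10, 2:10]        # forged copy-move region
dupes = find_duplicates(img)
```

The quantization step is what buys robustness: mildly post-processed copies still land on the same quantized feature vector, while the distance check suppresses trivial matches between neighboring blocks.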
A QDWH-Based SVD Software Framework on Distributed-Memory Manycore Systems
Sukkari, Dalal
2017-01-01
This paper presents a high performance software framework for computing a dense SVD on distributed-memory manycore systems. Originally introduced by Nakatsukasa et al. (Nakatsukasa et al. 2010; Nakatsukasa and Higham 2013), the SVD solver relies on the polar decomposition using the QR Dynamically-Weighted Halley algorithm (QDWH). Although the QDWH-based SVD algorithm performs a significant amount of extra floating-point operations compared to the traditional SVD with the one-stage bidiagonal reduction, the inherent high level of concurrency associated with Level 3 BLAS compute-bound kernels ultimately compensates for the arithmetic complexity overhead. Using the ScaLAPACK two-dimensional block cyclic data distribution with a rectangular processor topology, the resulting QDWH-SVD further reduces excessive communications during the panel factorization, while increasing the degree of parallelism during the update of the trailing submatrix, as opposed to relying on the default square processor grid. After detailing the algorithmic complexity and the memory footprint of the algorithm, we conduct a thorough performance analysis and study the impact of the grid topology on the performance by looking at the communication and computation profiling trade-offs. We report performance results against state-of-the-art existing QDWH software implementations (e.g., Elemental) and their SVD extensions on large-scale distributed-memory manycore systems based on commodity Intel x86 Haswell processors and the Knights Landing (KNL) architecture. The QDWH-SVD framework achieves speedups of up to 3-fold and 8-fold on the Haswell- and KNL-based platforms, respectively, against ScaLAPACK PDGESVD, and turns out to be a competitive alternative for well- and ill-conditioned matrices. We finally derive a performance model from these empirical results. Our QDWH-based polar decomposition and its SVD extension are freely available at https://github.com/ecrc/qdwh.git and https
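The idea of obtaining an SVD from the polar decomposition can be sketched with the plain (unweighted) Halley iteration; QDWH itself adds dynamically computed weights and a QR-based evaluation of each step for fast convergence on ill-conditioned matrices, neither of which is shown here:

```python
import numpy as np

def polar_halley(A, iters=30):
    """Orthogonal polar factor of A by the unweighted Halley iteration
    X <- X (3I + X^T X)(I + 3 X^T X)^{-1}.  QDWH accelerates exactly
    this fixed point with dynamically chosen weights."""
    X = A / np.linalg.norm(A, 2)       # scale so singular values <= 1
    I = np.eye(A.shape[1])
    for _ in range(iters):
        G = X.T @ X
        X = X @ (3 * I + G) @ np.linalg.inv(I + 3 * G)
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
W = polar_halley(A)                    # orthogonal polar factor: A = W H
H = W.T @ A                            # symmetric positive-definite part

# SVD from the polar decomposition: H = V diag(s) V^T, hence U = W V.
s, V = np.linalg.eigh(H)               # eigenvalues ascending, all > 0
U = W @ V
```

The eigendecomposition of the small symmetric factor H replaces the bidiagonalization of A, which is the source of QDWH-SVD's extra flops but also of its BLAS-3-rich concurrency.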
Directory of Open Access Journals (Sweden)
Lin Ma
2015-01-01
Full Text Available Green WLAN is a promising technique for accessing future indoor Internet services. It is designed not only for high-speed data communication purposes but also for energy efficiency. The basic strategy of green WLAN is that the access points are not always powered on, but rather work on demand. Though powering off idle access points does not affect data communication, a serious asymmetric matching problem will arise in a WLAN indoor positioning system, because the received signal strength (RSS) readings from the available access points differ between the offline and online phases. This asymmetry problem will no doubt invalidate the fingerprint algorithm used to estimate the mobile device location. Therefore, in this paper we propose a green WLAN indoor positioning system which can recover RSS readings and achieve good localization performance based on singular value thresholding (SVT) theory. By solving the nuclear norm minimization problem, SVT recovers not only the radio map but also the online RSS readings from a sparse matrix by sensing only a fraction of the RSS readings. We have implemented the method in our lab and evaluated its performance. The experimental results indicate the proposed system can recover the RSS readings and achieve good localization performance.
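The SVT iteration at the core of this approach can be sketched on synthetic data. The matrix size, rank, sampling rate and parameters (τ, δ) below are illustrative choices in the spirit of the singular value thresholding literature, not the paper's RSS radio-map setup:

```python
import numpy as np

def shrink(Y, tau):
    """Singular value shrinkage: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
n, r, p = 20, 2, 0.7
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 "radio map"
mask = rng.random((n, n)) < p           # observe ~70% of the entries

# SVT iteration: Y accumulates the residual on the sampled entries,
# X is the current low-rank estimate.
tau, delta = 5 * n, 1.2 / p
Y = np.zeros((n, n))
for _ in range(3000):
    X = shrink(Y, tau)
    Y += delta * mask * (M - X)

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

The shrinkage operator keeps only singular values above τ, so each iterate is automatically low rank; the dual update on Y enforces agreement with the observed entries.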
Robust Stability Analysis of the Space Launch System Control Design: A Singular Value Approach
Pei, Jing; Newsome, Jerry R.
2015-01-01
Classical stability analysis consists of breaking the feedback loops one at a time and determining separately how much gain or phase variation would destabilize the stable nominal feedback system. For typical launch vehicle control design, classical control techniques are generally employed. In addition to stability margins, frequency domain Monte Carlo methods are used to evaluate the robustness of the design. However, such techniques were developed for Single-Input-Single-Output (SISO) systems and do not take into consideration the off-diagonal terms in the transfer function matrix of Multi-Input-Multi-Output (MIMO) systems. Robust stability analysis techniques such as H∞ and μ are applicable to MIMO systems but have not been adopted as standard practice within the launch vehicle controls community. This paper takes advantage of a simple singular-value-based MIMO stability margin evaluation method based on work by Mukhopadhyay and Newsom and applies it to the SLS high-fidelity dynamics model. The method computes a simultaneous multi-loop gain and phase margin that can be related back to classical margins. The results presented in this paper suggest that for the SLS system, traditional SISO stability margins are similar to the MIMO margins. This additional level of verification provides confidence in the robustness of the control design.
On the singular values decoupling in the Singular Spectrum Analysis of volcanic tremor at Stromboli
Directory of Open Access Journals (Sweden)
R. Carniel
2006-01-01
Full Text Available The well-known strombolian activity at Stromboli volcano is occasionally interrupted by rarer episodes of paroxysmal activity which can lead to considerable hazard for Stromboli inhabitants and tourists. On 5 April 2003 a powerful explosion, comparable in size to the last one in 1930, covered a good part of the normally tourist-accessible summit area with bombs. This explosion was not forecast, although the island was by then effectively monitored by a dense deployment of instruments. Having tackled in a previous paper the problem of highlighting the timescale of preparation of this event, we investigate here the possibility of highlighting precursors in the volcanic tremor continuously recorded by a short-period summit seismic station. We show that a promising candidate is found by examining the degree of coupling between successive singular values that result from the Singular Spectrum Analysis of the raw seismic data. We suggest therefore that possible anomalies in the time evolution of this parameter could be indicators of volcano instability to be taken into account, e.g. in a Bayesian eruptive scenario evaluator. Obviously, further (and possibly forward) testing on other cases is needed to confirm the usefulness of this parameter.
Through-wall image enhancement using fuzzy and QR decomposition.
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less computationally complex than singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to compare the existing and proposed techniques.
Chaotic SVD method for minimizing the effect of exponential trends in detrended fluctuation analysis
Shang, Pengjian; Lin, Aijing; Liu, Liang
2009-03-01
The Detrended Fluctuation Analysis (DFA) and its extensions (MF-DFA) have been used extensively to determine possible long-range correlations in self-affine signals. However, recent studies have reported the susceptibility of DFA to trends which give rise to spurious crossovers and prevent reliable estimation of the scaling exponents. In this study, a smoothing algorithm based on the Chaotic Singular-Value Decomposition (CSVD) is proposed to minimize the effect of exponential trends and distortion in the log-log plots obtained by DFA techniques. The effectiveness of the technique is demonstrated on monofractal and multifractal data corrupted with exponential trends.
Choong, Miew Keen; Levy, David; Yan, Hong
2009-01-01
We propose a method to analyse the periodicities of gene expression profiles based on the spectral domain approach. Our spectral reconstruction method outperforms three other recently proposed methods, which do not require any prior knowledge. It is proven that an alternative method for studying cell-cycle regulation is possible even where very little prior knowledge is available. We also investigate the potential of combining signals with similar frequency components to form an overdetermined system of equations, and use least squares solution to estimate the spectral frequency. Results show that this new method is able to estimate the peak frequency more accurately.
Water quality assessment using SVD-based principal component ...
African Journals Online (AJOL)
Principal component analysis (PCA) via singular value decomposition (SVD) of hydrological data was tested for water quality assessment. Using two case studies of waste- and drinking water, PCA via SVD was able to find latent variables which explain 80.8% and 83.7% of the variance, respectively. By means of ...
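PCA via SVD, as used in this record, can be sketched on synthetic data; the "water-quality table" below is a hypothetical stand-in with two planted latent factors, not the study's hydrological data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical water-quality table: 50 samples x 6 measured parameters,
# driven by 2 latent factors plus measurement noise.
latent = rng.standard_normal((50, 2))
data = latent @ rng.standard_normal((2, 6)) + 0.1 * rng.standard_normal((50, 6))

Xc = data - data.mean(axis=0)            # center each parameter
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance explained per component
scores = U * s                           # sample scores on the PCs
loadings = Vt                            # parameter loadings
```

The fraction of variance explained by the leading components (the 80.8% and 83.7% figures above) is read directly off the squared singular values.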
Mukhopadhyay, V.; Newsom, J. R.
1982-01-01
A stability margin evaluation method in terms of simultaneous gain and phase changes in all loops of a multiloop system is presented. A universal gain-phase margin evaluation diagram is constructed by generalizing an existing method using matrix singular value properties. Using this diagram and computing the minimum singular value of the system return difference matrix over the operating frequency range, regions of guaranteed stability margins can be obtained. Singular values are computed for a wing flutter suppression and a drone lateral attitude control problem. The numerical results indicate that this method predicts quite conservative stability margins. In the second example, if the eigenvalue magnitude is used instead of the singular value as a measure of nearness to singularity, more realistic stability margins are obtained. However, this relaxed measure generally cannot guarantee global stability.
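The central computation, the minimum singular value of the return difference matrix I + L(jω) over the operating frequency range, can be sketched for a hypothetical 2×2 loop transfer matrix (the plant below is invented for illustration, not from either example in the paper):

```python
import numpy as np

def loop_tf(w):
    """Hypothetical 2x2 open-loop transfer matrix L(jw)."""
    s = 1j * w
    return np.array([[1.0 / (s + 1.0), 0.2 / (s + 2.0)],
                     [0.1 / (s + 1.0), 2.0 / (s * (s + 3.0) + 2.0)]])

freqs = np.logspace(-2, 2, 400)
# Smallest singular value of the return difference matrix at each frequency.
sigma_min = np.array([np.linalg.svd(np.eye(2) + loop_tf(w),
                                    compute_uv=False)[-1]
                      for w in freqs])
alpha = sigma_min.min()    # worst case over the frequency range
```

The worst-case value α bounds the size of simultaneous perturbations in all loops that the closed loop is guaranteed to tolerate, which is why the resulting margins are conservative compared with loop-at-a-time classical margins.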
Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection
Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu
2018-05-01
A new branch of fault detection utilizes noise, for example by enhancing, adding or estimating it, so as to improve the signal-to-noise ratio (SNR) and extract fault signatures. Among these methods, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and weak performance in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks; it is improved by two noise estimation techniques for different SNRs as well as a noise estimation strategy. Independent of any artificial setup, noise estimation by minimax thresholding is improved for the low-SNR case, which is especially effective for signature enhancement. For approximating weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Within this step, the sliding window for projecting the phase space is optimally designed by correlation minimization, and the singular order for the local reconfiguration used to estimate the noise is determined by the inflection point of the increment trend of the normalized singular entropy. Furthermore, the noise estimation strategy, i.e. how to select between the two estimation techniques and handle the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm the capability of noise estimation. Finally, the method is applied to detect a local wear fault
A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain
Directory of Open Access Journals (Sweden)
Lida Barba
2017-01-01
Full Text Available A novel method is proposed here for decomposing a nonstationary time series into components of low and high frequency. The method is based on the Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series coming from the traffic accidents domain are used. They represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an autoregressive model (SWT + MIMO-AR) and with an autoregressive neural network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
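A single level of such a Hankel-SVD low/high split can be sketched as follows (MSVD applies this recursively at multiple levels; the window length and the synthetic trend-plus-weekly-cycle series are illustrative, not the Chilean accident data):

```python
import numpy as np

def hankel_svd_split(x, L):
    """Split series x into low- and high-frequency components via the
    rank-1 SVD approximation of its L x K Hankel (trajectory) matrix."""
    K = len(x) - L + 1
    H = np.array([x[i:i + K] for i in range(L)])   # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H1 = s[0] * np.outer(U[:, 0], Vt[0])           # dominant component
    # Map the rank-1 matrix back to a series by anti-diagonal averaging.
    low = np.array([np.mean(H1[::-1].diagonal(k))
                    for k in range(-L + 1, K)])
    return low, x - low

t = np.arange(200)
x = 0.02 * t + np.sin(2 * np.pi * t / 7.0)         # trend + weekly cycle
low, high = hankel_svd_split(x, L=14)
```

The dominant singular component captures the slowly varying trend; the residual carries the oscillatory part, and each component can then be forecast separately with a MIMO model.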
The Belle II SVD data readout system
Energy Technology Data Exchange (ETDEWEB)
Thalmeier, R., E-mail: Richard.Thalmeier@oeaw.ac.at [Institute of High Energy Physics, Austrian Academy of Sciences, 1050 Vienna (Austria); Adamczyk, K. [H. Niewodniczanski Institute of Nuclear Physics, Krakow 31-342 (Poland); Aihara, H. [Department of Physics, University of Tokyo, Tokyo 113-0033 (Japan); Angelini, C. [Dipartimento di Fisica, Universita’ di Pisa, I-56127 Pisa (Italy); INFN Sezione di Pisa, I-56127 Pisa (Italy); Aziz, T.; Babu, V. [Tata Institute of Fundamental Research, Mumbai 400005 (India); Bacher, S. [H. Niewodniczanski Institute of Nuclear Physics, Krakow 31-342 (Poland); Bahinipati, S. [Indian Institute of Technology Bhubaneswar, Satya Nagar (India); Barberio, E.; Baroncelli, Ti.; Baroncelli, To. [School of Physics, University of Melbourne, Melbourne, Victoria 3010 (Australia); Basith, A.K. [Indian Institute of Technology Madras, Chennai 600036 (India); Batignani, G. [Dipartimento di Fisica, Universita’ di Pisa, I-56127 Pisa (Italy); INFN Sezione di Pisa, I-56127 Pisa (Italy); Bauer, A. [Institute of High Energy Physics, Austrian Academy of Sciences, 1050 Vienna (Austria); Behera, P.K. [Indian Institute of Technology Madras, Chennai 600036 (India); Bergauer, T. [Institute of High Energy Physics, Austrian Academy of Sciences, 1050 Vienna (Austria); Bettarini, S. [Dipartimento di Fisica, Universita’ di Pisa, I-56127 Pisa (Italy); INFN Sezione di Pisa, I-56127 Pisa (Italy); Bhuyan, B. [Indian Institute of Technology Guwahati, Assam 781039 (India); Bilka, T. [Faculty of Mathematics and Physics, Charles University, 12116 Prague (Czech Republic); Bosi, F. [INFN Sezione di Pisa, I-56127 Pisa (Italy); and others
2017-02-11
The Belle II Experiment at the High Energy Accelerator Research Organization (KEK) in Tsukuba, Japan, will explore the asymmetry between matter and antimatter and search for new physics beyond the Standard Model. 172 double-sided silicon strip detectors are arranged cylindrically in four layers around the collision point as part of a system which measures the tracks of the collision products of electrons and positrons. A total of 1748 radiation-hard APV25 chips read out 128 silicon strips each and send the analog signals by time-division multiplexing out of the radiation zone to 48 Flash Analog-to-Digital Converter modules (FADCs). Each module applies processing to the data; for example, it uses a digital finite impulse response filter to compensate for line signal distortions, and it extracts the peak timing and amplitude from a set of several data points for each hit using a neural network. We present an overview of the SVD data readout system, along with the front-end electronics, cabling, power supplies and data processing.
Wang, Chengwen; Quan, Long; Zhang, Shijie; Meng, Hongjun; Lan, Yuan
2017-03-01
The hydraulic servomechanism is a typical mechanical/hydraulic double-dynamics coupling system, whose high-stiffness control and mismatched-uncertainty input problems hinder the direct application of many advanced control approaches in the hydraulic servo field. In this paper, by introducing singular perturbation theory, the original double-dynamics coupling model of the hydraulic servomechanism is reduced to an integral-chain system, so that the popular ADRC (active disturbance rejection control) technique can be applied directly to the reduced system; the high-stiffness control and mismatched-uncertainty input problems are thereby avoided. The validity of the simplified model is analyzed and proven theoretically. The standard linear ADRC algorithm is then developed based on the obtained reduced-order model. Extensive comparative co-simulations and experiments are carried out to illustrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune
2007-01-01
When performing operational modal analysis the dynamic loading is unknown; however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in the frequency domain are estimated by analysis of simulated responses of a 4 DOF system, for which the exact modal parameters are known. This estimation approach entails modal identification of the natural eigenfrequencies, mode shapes and damping ratios by the frequency domain decomposition technique. Scaled mode shapes are determined by use of the mass change method. The problem of inverting the often singular or nearly singular transfer function matrix is solved by the singular value decomposition technique using a limited number of singular values. The dependence of the eigenfrequencies on the accuracy of the scaling factors is investigated.
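The key numerical step described above — inverting a nearly singular transfer function matrix by keeping only a limited number of singular values — can be sketched as follows. This is a minimal numpy illustration on assumed synthetic data (the matrix size, truncation rank and test load vector are all assumptions, not the authors' implementation):

```python
import numpy as np

def truncated_pinv(H, k):
    """Pseudo-inverse of H built from its k largest singular values only."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

# Assumed 4x4 transfer matrix, made nearly singular on purpose
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
H[:, 3] = H[:, 2] + 1e-10 * rng.standard_normal(4)

x_true = np.array([1.0, -2.0, 0.5, 0.0])   # assumed load vector
y = H @ x_true                              # simulated response
x_est = truncated_pinv(H, 3) @ y            # inverse filtering with 3 singular values
```

Discarding the smallest singular value suppresses the direction in which H is effectively rank-deficient; a plain inverse would amplify measurement noise along that direction enormously.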
SVD-based digital image watermarking using complex wavelet ...
Indian Academy of Sciences (India)
Keywords. Digital image watermarking; complex wavelet transform; singular value decomposition. In the watermarking scheme, SVD is applied to the image matrix; the watermark is then …
FPGA-Based Online PQD Detection and Classification through DWT, Mathematical Morphology and SVD
Directory of Open Access Journals (Sweden)
Misael Lopez-Ramirez
2018-03-01
Power quality disturbances (PQD) in electric distribution systems can be produced by the utilization of non-linear loads or by environmental circumstances, causing electrical equipment malfunction and reducing its useful life. Detecting and classifying different PQDs implies great effort in planning and structuring the monitoring system. The main disadvantage of most works in the literature is that they treat a limited number of electrical disturbances through personal computer (PC)-based computation techniques, which makes it difficult to perform online PQD classification. In this work, the novel contribution is a methodology for PQD recognition and classification through the discrete wavelet transform, mathematical morphology, singular value decomposition, and statistical analysis. Furthermore, timely and reliable classification of different disturbances is necessary; hence, a field programmable gate array (FPGA)-based integrated circuit is developed to offer a portable hardware processing unit that performs fast, online PQD classification. The obtained numerical and experimental results demonstrate that the proposed method guarantees high effectiveness during online PQD detection and classification of real voltage/current signals.
SINGULAR VALUE DECOMPOSITION IN A FACE RECOGNITION SYSTEM (DEKOMPOSISI NILAI SINGULAR PADA SISTEM PENGENALAN WAJAH)
Beni Utomo
2012-01-01
Singular Value Decomposition (SVD) is one way of expressing Principal Component Analysis (PCA). PCA itself is a process for finding the important contributors to a data set on the basis of the statistical quantities standard deviation and variance. SVD is a process for obtaining a diagonal matrix whose non-zero elements are the singular values, whose squares are the corresponding eigenvalues. The SVD of the covariance matrix C has the form C = UΣV^T, where the ma…
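The decomposition C = UΣV^T of a covariance matrix described above can be illustrated in a few lines of numpy. This is a minimal sketch on assumed toy data, not the paper's face-recognition system; for a symmetric positive semi-definite C the singular values coincide with the eigenvalues, and the columns of U are the principal component directions:

```python
import numpy as np

# Assumed toy data: rows are observations, columns are features
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
Xc = X - X.mean(axis=0)              # center the data
C = Xc.T @ Xc / (len(X) - 1)         # sample covariance matrix

U, S, Vt = np.linalg.svd(C)          # C = U @ diag(S) @ Vt
pc1 = U[:, 0]                        # first principal component direction
assert np.allclose(C, U @ np.diag(S) @ Vt)
```

Because C is symmetric, the same factors could also be obtained from an eigendecomposition; the SVD route is the one named in the abstract.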
Comon, Pierre
2014-01-01
Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor decomposition plays a central role in the identification of underdetermined mixtures. Despite some similarities, CP and Singular Value Decomposition (SVD) are quite different. More generally, tensors and matrices enjoy different properties, as pointed out in this brief survey.
Energy Technology Data Exchange (ETDEWEB)
Oang, Key Young; Yang, Cheolhee; Muniyappan, Srinivasan; Kim, Jeongho; Ihee, Hyotcherl
2017-07-01
Determination of the optimum kinetic model is an essential prerequisite for characterizing the dynamics and mechanism of a reaction. Here, we propose a simple method, termed singular value decomposition-aided pseudo principal-component analysis (SAPPA), to facilitate determination of the optimum kinetic model from time-resolved data by bypassing any need to examine candidate kinetic models. We demonstrate the wide applicability of SAPPA by examining three different sets of experimental time-resolved data and show that SAPPA can efficiently determine the optimum kinetic model. In addition, the results of SAPPA for both time-resolved X-ray solution scattering (TRXSS) and transient absorption (TA) data of the same protein reveal that global structural changes of the protein, which are probed by TRXSS, may occur more slowly than local structural changes around the chromophore, which are probed by TA spectroscopy.
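The first step underlying such SVD-aided kinetic analysis — inspecting the singular values of the time-resolved data matrix to count the significant components — can be sketched as follows. The two exponential kinetics, the component spectra and the noise level are assumed toy inputs, not data from the paper:

```python
import numpy as np

t = np.linspace(0, 10, 200)                  # time delays
x = np.linspace(0, 1, 50)                    # spectral axis
# Assumed two-component kinetics and component spectra
conc = np.stack([np.exp(-0.5 * t), np.exp(-2.0 * t)])        # (2, 200)
spec = np.stack([np.sin(2 * np.pi * x), np.cos(np.pi * x)])  # (2, 50)
data = spec.T @ conc                         # (50, 200) data matrix
data += 1e-3 * np.random.default_rng(1).standard_normal(data.shape)

s = np.linalg.svd(data, compute_uv=False)
# Singular values well above the noise floor count the kinetic components
n_components = int(np.sum(s > 10 * s[3:].mean()))
```

Here the third and later singular values sit at the noise floor, so only two components are significant; the corresponding singular vectors would then feed a kinetic fit.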
Tensor decompositions for the analysis of atomic resolution electron energy loss spectra
Energy Technology Data Exchange (ETDEWEB)
Spiegelberg, Jakob; Rusz, Ján [Department of Physics and Astronomy, Uppsala University, Box 516, S-751 20 Uppsala (Sweden); Pelckmans, Kristiaan [Department of Information Technology, Uppsala University, Box 337, S-751 05 Uppsala (Sweden)
2017-04-15
A selection of tensor decomposition techniques is presented for the detection of weak signals in electron energy loss spectroscopy (EELS) data. The focus of the analysis lies on the correct representation of the simulated spatial structure. An analysis scheme for EEL spectra combining two-dimensional and n-way decomposition methods is proposed. In particular, the performance of robust principal component analysis (ROBPCA), Tucker Decompositions using orthogonality constraints (Multilinear Singular Value Decomposition (MLSVD)) and Tucker decomposition without imposed constraints, canonical polyadic decomposition (CPD) and block term decompositions (BTD) on synthetic as well as experimental data is examined. - Highlights: • A scheme for compression and analysis of EELS or EDX data is proposed. • Several tensor decomposition techniques are presented for BSS on hyperspectral data. • Robust PCA and MLSVD are discussed for denoising of raw data.
An Implementation and Detailed Analysis of the K-SVD Image Denoising Algorithm
Directory of Open Access Journals (Sweden)
Marc Lebrun
2012-05-01
K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
Zhang, Min; Zhu, Wusheng; Yun, Wenwei; Wang, Qizhang; Cheng, Maogang; Zhang, Zhizhong; Liu, Xinfeng; Zhou, Xianju; Xu, Gelin
2015-09-15
Dysregulation of matrix metalloproteinases (MMPs) results in cerebral vasculature and blood-brain barrier dysfunction, which is associated with small vessel disease (SVD). This study aimed to evaluate correlations between matrix metalloproteinase-2 and -9 single nucleotide polymorphisms and the risk of SVD. A total of 178 patients with SVD were enrolled into this study via the Nanjing Stroke Registry Program (NSRP) from January 2010 to November 2011. SVD patients were further subtyped as isolated lacunar infarction (ILI, absent or with mild leukoaraiosis) and ischemic leukoaraiosis (ILA, with moderate or severe leukoaraiosis) according to the Fazekas scale. 100 age- and gender-matched individuals from outpatient medical examination were recruited as the control group. The genotypes of MMP-2-1306 T/C and MMP-9-1562 C/T were determined by the TaqMan method. Of the 178 SVD patients, 86 and 92 were classified as ILI and ILA, respectively. Comparison between SVD patients and controls revealed a significant correlation between SVD and hypertension, as well as a higher prevalence of hypertension in ILA. Further genotype analysis showed that the frequency of the MMP-2-1306 CC genotype was higher in ILA patients than in controls (P=0.009, χ(2) test; P=0.027, multiple testing with Bonferroni correction). Finally, logistic regression analysis with adjustment for age, sex and vascular risk factors showed that the MMP-2-1306 T/C polymorphism was an independent predictor of ILA (OR: 2.605; 95% confidence interval [CI], 1.067-6.364; P=0.036). Our findings suggest that the MMP-2-1306 T/C polymorphism is a direct risk factor for ILA. Copyright © 2015. Published by Elsevier B.V.
Pannwitz, Gunter; Haas, Bernd; Hoffmann, Bernd; Fischer, Sebastian
2009-01-01
In a closed pig establishment housing about 18,000 pigs, 2895 gilts were tested pre-export for SVD (swine vesicular disease) antibodies using the Ceditest/PrioCHECK SVDV-AB ELISA. 130 gilts (4.5%) tested positive. In addition, 561 animals of this farm were sampled at random for SVD serology. One of 241 weaners (0.4%), eight of 150 gilts (5.3%) and 18 of 170 pregnant sows (10.6%) tested ELISA SVD-antibody positive. Of the ELISA-positive samples, 23 tested positive in the VNT (virus neutralization test). Of these, 20 VNT-positive animals were re-sampled two weeks later and re-tested via ELISA and VNT in different laboratories, displaying falling titres, with one to two animals remaining VNT-positive. Epidemiological investigations and clinical examinations on site did not yield any evidence of SVD. 745 faecal samples taken from individual pigs and collected from pens tested negative in SVDV-RNA-PCR. 40 of these samples tested negative in virus isolation on cell culture. Pathological examinations of fallen pigs did not reveal any evidence of SVD either. After comparing our ELISA results with data recorded in the ELISA validation by Chenard et al. (1998), we propose that the published test performance is perhaps not currently applicable to the commercial test. Provided that SVD-antibody-negative pigs were tested, a specificity of 99.6% in weaners, 95.5% in gilts and 89.4% in pregnant sows would appear to be more appropriate for the Ceditest/PrioCHECK SVDV-AB ELISA. Details are provided for all examined pigs regarding husbandry, breed, age, weeks pregnant and previous vaccinations. The results of other serological tests on the same sera are given. Possible clustering of false-positive SVD-ELISA results is discussed.
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions
DEFF Research Database (Denmark)
Hansen, Per Christian; Jensen, Søren Holdt
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
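The simplest member of the algorithm family surveyed above is a truncated-SVD (rank-reduction) filter applied to a Hankel matrix built from the noisy signal. The sketch below uses an assumed two-sinusoid test signal rather than speech, and is not the paper's Matlab code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 32                        # signal length, embedding dimension
t = np.arange(n)
clean = np.sin(0.2 * t) + 0.5 * np.sin(0.35 * t)   # assumed test signal
noisy = clean + 0.3 * rng.standard_normal(n)

# Hankel matrix: row i holds samples i .. i+m-1, so H[i, j] = noisy[i + j]
H = np.stack([noisy[i:i + m] for i in range(n - m + 1)])
U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = 4                                 # signal subspace dimension: 2 sinusoids x 2
Hk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]            # rank-k approximation

# Average along anti-diagonals to map the matrix back to a signal
denoised = np.array([Hk[::-1].diagonal(i).mean()
                     for i in range(-Hk.shape[0] + 1, Hk.shape[1])])
```

Keeping only the k dominant singular values retains the signal subspace and discards most of the noise energy; the diagonal and triangular decompositions in the paper refine exactly this idea.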
Decomposition of Space-Variant Blur in Image Deconvolution
Czech Academy of Sciences Publication Activity Database
Šroubek, Filip; Kamenický, Jan; Lu, Y. M.
2016-01-01
Roč. 23, č. 3 (2016), s. 346-350 ISSN 1070-9908 R&D Projects: GA ČR GA13-29225S; GA MŠk 7H14004 Grant - others:GA AV ČR(CZ) M100751201 Institutional support: RVO:67985556 Keywords : space-variant convolution * singular value decomposition * alternating direction method of multipliers Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.528, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/sroubek-0456182.pdf
Directory of Open Access Journals (Sweden)
Søren Holdt Jensen
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV, and ULLIV). In addition, we show how the subspace-based algorithms can be analyzed and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
The detection of flaws in austenitic welds using the decomposition of the time-reversal operator.
Cunningham, Laura J; Mulholland, Anthony J; Tant, Katherine M M; Gachagan, Anthony; Harvey, Gerry; Bird, Colin
2016-04-01
The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm.
Downscaling atmospheric patterns to multi-site precipitation amounts in southern Scandinavia
DEFF Research Database (Denmark)
Gelati, Emiliano; Christensen, O.B.; Rasmussen, P.F.
2010-01-01
A non-homogeneous hidden Markov model (NHMM) is applied for downscaling atmospheric synoptic patterns to winter multi-site daily precipitation amounts. The implemented NHMM assumes precipitation to be conditional on a hidden weather state that follows a Markov chain, whose transition probabilities depend on current atmospheric information. The gridded atmospheric fields are summarized through the singular value decomposition (SVD) technique. SVD is applied to geopotential height and relative humidity at several pressure levels, to identify their principal spatial patterns …
Cloud detection for MIPAS using singular vector decomposition
Directory of Open Access Journals (Sweden)
J. Hurley
2009-09-01
Satellite-borne high-spectral-resolution limb sounders, such as the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) onboard ENVISAT, provide information on clouds, especially optically thin clouds, which have been difficult to observe in the past. The aim of this work is to develop, implement and test a reliable cloud detection method for infrared spectra measured by MIPAS.
Current MIPAS cloud detection methods used operationally have been developed to detect cloud effectively filling more than 30% of the measurement field-of-view (FOV), under geometric and optical considerations – and hence are limited to detecting fairly thick cloud, or large physical extents of thin cloud. In order to resolve thin clouds, a new detection method using Singular Vector Decomposition (SVD) is formulated and tested. This new SVD detection method has been applied to a year's worth of MIPAS data, and qualitatively appears to be more sensitive to thin cloud than the current operational method.
Measurement, analysis and correction of the closed orbit distortion ...
Indian Academy of Sciences (India)
The paper presents the measurement, analysis and correction of closed orbit distortion (COD) in Indus-2 at 550 MeV injection energy and 2 GeV synchrotron radiation user run energy ... In this paper, the method of global COD correction based on singular value decomposition (SVD) of the orbit response matrix is described.
A multivariate calibration procedure for the tensammetric determination of detergents
Bos, M.
1989-01-01
A multivariate calibration procedure based on singular value decomposition (SVD) and the Ho-Kashyap algorithm is used for the tensammetric determination of the cationic detergents Hyamine 1622, benzalkonium chloride (BACl), N-cetyl-N,N,N-trimethylammonium bromide (CTABr) and mixtures of CTABr and
Tokamak Plasmas : Mirnov coil data analysis for tokamak ADITYA
Indian Academy of Sciences (India)
The spatial and temporal structures of magnetic signals in the tokamak ADITYA are analysed using the recently developed singular value decomposition (SVD) technique. The analysis technique is first tested with simulated data and then applied to the ADITYA Mirnov coil data to determine the structure of the current perturbation as …
Anti-plagiarism certification be an academic mandate
Digital Repository Service at National Institute of Oceanography (India)
Lakshminarayana, S.
…-window of three to eight. The Singular Value Decomposition (SVD) method is applied to compute the effects with different sets and keywords. A uniform gradient is noticed in the case of two documents that are similar to a great extent. In all other cases, it appeared to be a…
Disentangling detector data in XFEL studies of temporally resolved solution state chemistry
DEFF Research Database (Denmark)
Brandt van Driel, Tim; Kjær, Kasper Skov; Biasin, Elisa
2015-01-01
such artefacts, Singular Value Decomposition (SVD) can be used to identify and characterize the observed detector fluctuations and assist in assigning some of them to variations in physical parameters such as X-ray energy and X-ray intensity. This paper presents a methodology for robustly identifying, separating...
Efficient two-dimensional magnetotellurics modelling using implicitly ...
Indian Academy of Sciences (India)
…integral equation methods, we have opted for the … Keywords. Finite difference; eigenmode method; multi-frequency approach. J. Earth Syst. Sci. 120, No. 4, August 2011, pp. 595–604. … (1984) used Singular Value Decomposition (SVD) for 2D forward modelling. However, the versatile…
Multi-model ensemble schemes for predicting northeast monsoon ...
Indian Academy of Sciences (India)
An attempt has been made to improve the accuracy of predicted rainfall using three different multi-model ensemble (MME) schemes, viz., simple arithmetic mean of models (EM), principal component regression (PCR) and singular value decomposition based multiple linear regression (SVD). It is found that among …
International Nuclear Information System (INIS)
Tanaka, Yuho; Uruma, Kazunori; Furukawa, Toshihiro; Nakao, Tomoki; Izumi, Kenya; Utsumi, Hiroaki
2017-01-01
This paper deals with an analysis problem for diffusion-ordered NMR spectroscopy (DOSY). DOSY is formulated as a matrix factorization problem for a given observed matrix. A well-known approach to this problem is the direct exponential curve resolution algorithm (DECRA). DECRA is based on the singular value decomposition; the advantage of this algorithm is that no initial value is required. However, because of the singular value decomposition, DECRA requires a long computing time that depends on the size of the observed matrix, which is a serious problem in practical use. This paper therefore proposes a new analysis algorithm for DOSY that achieves a short computing time. To solve the matrix factorization for DOSY without using the singular value decomposition, the proposed algorithm exploits the shape of the observed matrix: in DOSY the observed matrix is a rectangular matrix with many more columns than rows, owing to the limited measuring time, so the algorithm transforms the given observed matrix into a small matrix. The proposed algorithm then applies the eigenvalue decomposition and a difference approximation to the small matrix, and the matrix factorization problem for DOSY is solved. A simulation and a data analysis show that the proposed algorithm achieves a lower computing time than DECRA as well as analysis results similar to those of DECRA. (author)
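The size-reduction idea in the abstract — avoiding the SVD of a wide observed matrix by working with a small matrix instead — can be illustrated with the Gram-matrix route: for an N × M matrix with N ≪ M, the eigendecomposition of the N × N matrix A Aᵀ already yields the left singular vectors and the squared singular values. This is a generic numerical illustration of the principle, not the proposed DOSY algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 2000))    # wide observed matrix: many more columns than rows

# Route 1: full SVD (cost grows with the large dimension M)
s_svd = np.linalg.svd(A, compute_uv=False)

# Route 2: eigendecompose the small 8x8 Gram matrix A @ A.T
w, V = np.linalg.eigh(A @ A.T)        # eigenvalues in ascending order
s_eig = np.sqrt(w[::-1])              # singular values, descending
assert np.allclose(s_eig, s_svd)      # both routes agree
```

The columns of V are the left singular vectors (up to sign), so the expensive large-dimension factor is never decomposed directly.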
DEFF Research Database (Denmark)
Merker, Martin
The topic of this PhD thesis is graph decompositions. While there exist various kinds of decompositions, this thesis focuses on three problems concerning edge-decompositions. Given a family of graphs H we ask the following question: when can the edge-set of a graph be partitioned so that each part … k(T)-edge-connected graph whose size is divisible by the size of T admits a T-decomposition. This proves a conjecture by Barát and Thomassen from 2006. Moreover, we introduce a new arboricity notion where we restrict the diameter of the trees in a decomposition into forests. We conjecture … -connected planar graph contains two edge-disjoint 18/19-thin spanning trees. Finally, we make progress on a conjecture by Baudon, Bensmail, Przybyło, and Wozniak stating that if a graph can be decomposed into locally irregular graphs, then there exists such a decomposition with at most 3 parts. We show …
International Nuclear Information System (INIS)
Zhang Fang; Zhang Qianqian; Wang Weiguo; Zhu Chenjian; Wang Xiulin
2007-01-01
The interactions of fs DNA with the two metal complexes [Cu(phen)SO4]·2H2O and [Ni(phen)SO4]·2H2O were explored by several chemometric methods, including parallel factor analysis (PARAFAC), singular value decomposition-least squares (SVD-LS), and singular value decomposition-nonnegative least squares (SVD-NNLS) of excitation-emission matrix spectra (EEMs). The applications of SVD-LS and SVD-NNLS in this domain are discussed. The Rayleigh scatter part is avoided by zeroing and is reconstructed by linear interpolation; the importance of avoiding Rayleigh scatter is also discussed. All three methods do well in qualitative analysis. SVD-LS does best in the presence of small changes of ethidium bromide (EB). In order to get accurate results, PARAFAC and SVD-NNLS can be utilized together in quantitative analysis. All three chemometric methods indicate that the DNA binding modes of [Cu(phen)SO4]·2H2O are hydrogen-bond interaction and intercalation, while intercalation is the only DNA binding mode for [Ni(phen)SO4]·2H2O. These results are verified by the electronic absorption and emission fluorescence spectra. Just like PARAFAC, both SVD-LS and SVD-NNLS are shown to be convenient and convincing in studying the interactions between nucleic acids and complexes.
A Hybrid DWT-SVD Image-Coding System (HDWTSVD) for Color Images
Directory of Open Access Journals (Sweden)
Humberto Ochoa
2003-04-01
In this paper, we propose the HDWTSVD system to encode color images. Before encoding, the color components (RGB) are transformed into YCbCr. The Cb and Cr components are downsampled by a factor of two, both horizontally and vertically, before being sent through the encoder. A criterion based on the average standard deviation of 8x8 subblocks of the Y component is used to choose DWT or SVD for all the components. Standard test images are compressed with the proposed algorithm.
K-SVD and its non-negative variant for dictionary design
Aharon, Michal; Elad, Michael; Bruckstein, Alfred M.
2005-08-01
In recent years there has been growing interest in the study of sparse representations for signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described as sparse linear combinations of these atoms. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Dictionaries that better fit the above model can be designed either by selecting pre-specified transforms or by adapting the dictionary to a set of training signals. Both of these techniques have been considered in recent years; however, this topic is still largely open. In this paper we address the latter problem of designing dictionaries, and introduce the K-SVD algorithm for this task. We show how this algorithm can be interpreted as a generalization of the K-means clustering process, and demonstrate its behavior in both synthetic tests and applications on real data. Finally, we describe its generalization to the non-negative matrix factorization problem, which suits signals generated under an additive model with positive atoms. We present a simple yet efficient variation of the K-SVD that handles such extraction of non-negative dictionaries.
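The remark that K-SVD generalizes K-means can be made concrete in the 1-sparse special case: each training signal is approximated by a scaled copy of a single atom, and each atom is updated as the leading left singular vector of the signals assigned to it. The sketch below implements only this special case on assumed toy data (no pursuit/OMP step), not the full K-SVD of the paper:

```python
import numpy as np

def ksvd_1sparse(Y, n_atoms, n_iter=20, seed=0):
    """K-SVD restricted to 1-sparse coding: a gain-shape variant of K-means."""
    rng = np.random.default_rng(seed)
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
    for _ in range(n_iter):
        # Sparse coding step: pick the single best atom per signal
        idx = np.abs(D.T @ Y).argmax(axis=0)
        # Dictionary update step: rank-1 SVD per atom
        for j in range(n_atoms):
            members = np.flatnonzero(idx == j)
            if members.size:
                U, s, Vt = np.linalg.svd(Y[:, members], full_matrices=False)
                D[:, j] = U[:, 0]                   # leading left singular vector
    return D, idx

# Assumed toy training set: 40 random signals in R^5
Y = np.random.default_rng(2).standard_normal((5, 40))
D, idx = ksvd_1sparse(Y, n_atoms=3)
```

With the sparsity level raised above one, the coding step becomes a pursuit algorithm and the update uses residuals excluding the current atom, which is the full K-SVD.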
DEFF Research Database (Denmark)
Engholm, Rasmus
…of this thesis presents new methods of noise reduction in video. Video versions of the two-dimensional steering kernel are developed. One of the advantages of steering kernels is that edges are not blurred during the noise reduction process. Taking time asymmetry into account improves the performance … detection and image sharpness. The singular value decomposition (SVD) of an image is used to develop a new sharpness metric, SVD-G. In order to reduce the computational load, a local version of the metric is suggested, the SVD-L metric. It is shown that there is a tight relationship between the value …
DEFF Research Database (Denmark)
Dyson, Mark
2003-01-01
…Not only have design tools changed character, but also the processes associated with them. Today, the composition of problems and their decomposition into parcels of information calls for a new paradigm. This paradigm builds on the networking of agents and specialisations, and the paths of communication …
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
Energy Technology Data Exchange (ETDEWEB)
Oxberry, Geoffrey M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kostova-Vassilevska, Tanya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Arrighi, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chand, Kyle [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
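The single-pass incremental SVD mentioned above can be sketched in its simplest form: the current rank-k factors are merged with each new snapshot and re-truncated, so no snapshot is stored after it has been processed. This is a textbook-style sketch on assumed toy data, not the authors' error-controlled implementation (it keeps only the left factors and singular values, which is what POD modes require):

```python
import numpy as np

def incremental_svd(snapshots, k):
    """Single-pass rank-k SVD of a snapshot matrix, one column at a time."""
    U, s = None, None
    for a in snapshots:
        a = np.asarray(a, dtype=float)[:, None]
        if U is None:
            U, s, _ = np.linalg.svd(a, full_matrices=False)
        else:
            # [U diag(s), a] has the same left factors and singular
            # values as [previous matrix, a], since the Gram matrices agree
            B = np.hstack([U * s, a])
            Q, r, _ = np.linalg.svd(B, full_matrices=False)
            U, s = Q[:, :k], r[:k]                  # truncate back to rank k
    return U, s

# With k at least the number of snapshots, the result matches a batch SVD
X = np.random.default_rng(3).standard_normal((10, 5))
U, s = incremental_svd(X.T, k=5)
```

The per-step cost depends only on the state dimension and k, never on the total number of snapshots, which is what keeps memory per core bounded.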
Improved Analyses of the Randomized Power Method and Block Lanczos Method
Wang, Shusen; Zhang, Zhihua; Zhang, Tong
2015-01-01
The power method and block Lanczos method are popular numerical algorithms for computing the truncated singular value decomposition (SVD) and for eigenvalue decomposition problems. Especially in the literature on randomized numerical linear algebra, the power method is widely applied to improve the quality of randomized sketching, and relative-error bounds have been well established. Recently, Musco & Musco (2015) proposed a block Krylov subspace method that fully exploits the intermediate result…
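The randomized power method analyzed in the abstract can be sketched in a few lines: a random sketch is repeatedly multiplied by A Aᵀ to align it with the top singular subspace, and a small SVD of the projected matrix then yields the truncated factors. This is the standard textbook variant with assumed oversampling and iteration counts, not the paper's exact algorithm:

```python
import numpy as np

def randomized_svd(A, k, n_iter=6, oversample=5, seed=0):
    """Truncated rank-k SVD via randomized power (subspace) iteration."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k + oversample))  # random sketch
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)                  # power iteration step
        Y, _ = np.linalg.qr(Y)             # re-orthonormalize for stability
    Q, _ = np.linalg.qr(Y)
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)  # small SVD
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Assumed test matrix with a known, decaying spectrum
rng = np.random.default_rng(1)
P, _ = np.linalg.qr(rng.standard_normal((100, 30)))
Qm, _ = np.linalg.qr(rng.standard_normal((60, 30)))
A = P @ np.diag(np.logspace(0, -4, 30)) @ Qm.T
U5, s5, Vt5 = randomized_svd(A, k=5)
```

Each power iteration sharpens the sketch's alignment with the leading singular directions at a rate set by the singular value gaps, which is exactly the effect the cited relative-error bounds quantify.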
A Robust Color Image Watermarking Scheme Using Entropy and QR Decomposition
Directory of Open Access Journals (Sweden)
L. Laur
2015-12-01
The Internet has affected our everyday life drastically. Expansive volumes of information are exchanged over the Internet constantly, which causes numerous security concerns. Issues like content identification, document and image security, audience measurement, ownership, copyright and others can be settled by using digital watermarking. In this work, a robust and imperceptible non-blind color image watermarking algorithm is proposed, which benefits from the fact that the watermark can be hidden in different color channels, resulting in further robustness of the proposed technique to attacks. The given method uses algorithms such as entropy, the discrete wavelet transform, the chirp z-transform, orthogonal-triangular (QR) decomposition and singular value decomposition in order to embed the watermark in a color image. Many experiments are performed using well-known signal processing attacks such as histogram equalization, noise addition and compression. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks.
High capacity image steganography method based on framelet and compressive sensing
Xiao, Moyan; He, Zhibiao
2015-12-01
To improve the capacity and imperceptibility of image steganography, a novel high-capacity, high-imperceptibility image steganography method based on a combination of framelets and compressive sensing (CS) is put forward. First, a singular value decomposition (SVD) is applied to the measurement values obtained by applying compressive sensing to the secret data. Then the singular values are embedded, in turn, into the low-frequency coarse subbands of the framelet transform of the non-overlapping blocks into which the cover image is divided. Finally, the inverse framelet transform is applied and the blocks are combined to obtain the stego image. The experimental results show that the proposed steganography method performs well in hiding capacity, security and imperceptibility.
Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina
2014-03-01
We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm on multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information in a few coefficients but also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning algorithm including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From the results, it can be inferred that by using a sparse representation of the originally extracted multi-parametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
On the Dynamics of Rigid Manipulators
International Nuclear Information System (INIS)
Hamdan, H.; Nuseirat, A.
2001-01-01
In this paper, the dynamics of rigid robot manipulators are investigated using Selective Modal Analysis (SMA). The method allows the determination of quantitative measures of the degree of participation of the state variables in the system modes. Using Singular Value Decomposition (SVD) of appropriate matrices, we obtain useful information about the controllability and observability of the manipulator. Numerical examples illustrating the developed methods are presented. (authors) 20 refs., 3 figs., 5 tabs
On One-Point Iterations and DIIS
DEFF Research Database (Denmark)
Østerby, Ole; Sørensen, Hans Henrik Brandenborg
2009-01-01
We analyze various iteration procedures in many dimensions inspired by the SCF iteration used in first principles electronic structure calculations. We show that the simple mixing of densities can turn a divergent (or slowly convergent) iteration into a (faster) convergent process provided all th...... occur if the residual vectors are (nearly) linearly dependent. We show how to remove this linear dependence using the singular value decomposition (SVD)....
DROP: Dimensionality Reduction Optimization for Time Series
Suri, Sahaana; Bailis, Peter
2017-01-01
Dimensionality reduction is critical in analyzing increasingly high-volume, high-dimensional time series. In this paper, we revisit a now-classic study of time series dimensionality reduction operators and find that for a given quality constraint, Principal Component Analysis (PCA) uncovers representations that are over 2x smaller than those obtained via alternative techniques favored in the literature. However, as classically implemented via Singular Value Decomposition (SVD), PCA is incredi...
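The PCA-via-SVD compression this record compares against can be illustrated in a few lines (this is not the DROP system itself; function names are illustrative). Each length-n series is centered and projected onto the top-k principal components, so k coefficients stand in for n samples:

```python
import numpy as np

def pca_reduce(X, k):
    """Compress the rows of X (one time series per row) to k PCA
    coefficients: the SVD of the column-centered matrix gives the
    principal components as the rows of Vt."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:k].T, Vt[:k], mean

def pca_reconstruct(coeffs, components, mean):
    """Approximate the original series from their k coefficients."""
    return coeffs @ components + mean
```

The quality constraint in the abstract corresponds to choosing the smallest k whose reconstruction error meets the target.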
PECASE - Multi-Scale Experiments and Modeling in Wall Turbulence
2014-12-23
amplitude and singular value. The SVD of the resolvent, before Fourier decomposition is imposed, would itself naturally result in a decomposition into the... the measurement location. The flow was conditioned by passing it through a perforated plate, a honeycomb, three turbulence-reducing screens and... a small, well-resolved field from which a composite spectrum can be produced covering the whole wavenumber range, which is the focus of future
Directory of Open Access Journals (Sweden)
Hugo Lara
2014-12-01
Full Text Available The matrix completion (MC) problem has been approximated by using the nuclear norm relaxation. Some algorithms based on this strategy require the computationally expensive singular value decomposition (SVD) at each iteration. One way to avoid SVD calculations is to use alternating methods, which pursue the completion through matrix factorization with a low-rank condition. In this work an augmented Lagrangian-type alternating algorithm is proposed. The new algorithm uses duality information to define the iterations, in contrast to the solely primal LMaFit algorithm, which employs a successive over-relaxation scheme. The convergence of the method is studied, and numerical experiments are given to compare the numerical performance of both proposals.
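This is not the paper's augmented-Lagrangian algorithm, but the SVD-free alternating idea it builds on can be sketched as alternating ridge least-squares solves for two low-rank factors over the observed entries (all names and tolerances below are illustrative):

```python
import numpy as np

def als_complete(M, mask, rank, n_iter=200, lam=1e-3, seed=0):
    """Complete M from its observed entries (mask == True) by fitting
    X @ Y.T with alternating regularized least squares; no SVD needed."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.standard_normal((m, rank))
    Y = rng.standard_normal((n, rank))
    reg = lam * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):                       # update each row of X
            obs = mask[i]
            Yo = Y[obs]
            X[i] = np.linalg.solve(Yo.T @ Yo + reg, Yo.T @ M[i, obs])
        for j in range(n):                       # update each row of Y
            obs = mask[:, j]
            Xo = X[obs]
            Y[j] = np.linalg.solve(Xo.T @ Xo + reg, Xo.T @ M[obs, j])
    return X @ Y.T
```

Each subproblem is a small rank-by-rank linear solve, which is why alternating factorization avoids the per-iteration SVD cost of nuclear-norm methods.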
Singular Value Decomposition in a Face Recognition System
Directory of Open Access Journals (Sweden)
Beni Utomo
2012-11-01
Full Text Available Singular Value Decomposition (SVD) is one way to express Principal Component Analysis (PCA). PCA itself is a process for finding the important contributors of a data set based on the statistical measures of standard deviation and variance. SVD is a process for obtaining a diagonal matrix whose nonzero elements are the singular values, the square roots of the eigenvalues. The SVD of a covariance matrix C has the form C = UΣV^T, with the matrices U and V containing eigenvectors ordered from the largest variance to the smallest. The largest variance means that the eigenvector captures the features that change the most. This is the property used to form eigenfaces.
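A minimal NumPy sketch of the eigenface construction this record describes (illustrative, not from the article): the SVD of the centered image matrix yields the covariance eigenvectors ordered from largest to smallest variance, and projecting a face onto them gives its recognition features.

```python
import numpy as np

def eigenfaces(images, k):
    """Top-k eigenfaces of a stack of flattened face images (one per row).
    The rows of Vt are eigenvectors of the covariance matrix C = A^T A / n,
    ordered by decreasing variance s**2 / n."""
    mean = images.mean(axis=0)
    _, s, Vt = np.linalg.svd(images - mean, full_matrices=False)
    variances = s[:k] ** 2 / len(images)
    return Vt[:k], variances, mean

def face_features(face, basis, mean):
    """Project a (centered) face onto the eigenface basis."""
    return (face - mean) @ basis.T
```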
Tschan, Regine; Best, Christoph; Beutel, Manfred E; Knebel, Achim; Wiltink, Jörg; Dieterich, Marianne; Eckhardt-Henn, Annegret
2011-01-01
Secondary somatoform dizziness and vertigo (SVD) is an underdiagnosed and handicapping psychosomatic disorder, leading to extensive utilization of health care and maladaptive coping. Few long-term follow-up studies have focused on the assessment of risk factors and little is known about protective factors. The aim of this 1-year follow-up study was to identify neurootological patients at risk for the development of secondary SVD with respect to individual psychopathological disposition, subjective well-being and resilient coping. In a prospective interdisciplinary study, we assessed mental disorders in n=59 patients with peripheral and central vestibular disorders (n=15 benign paroxysmal positional vertigo, n=15 vestibular neuritis, n=8 Menière's disease, n=24 vestibular migraine) at baseline (T0) and 1 year after admission (T1). Psychosomatic examinations included the structured clinical interview for DSM-IV, the Vertigo Symptom Scale (VSS), and a psychometric test battery measuring resilience (RS), sense of coherence (SOC), and satisfaction with life (SWLS). Subjective well-being significantly predicted the development of secondary SVD: Patients with higher scores of RS, SOC, and SWLS at T0 were less likely to acquire secondary SVD at T1. Lifetime mental disorders correlated with a reduced subjective well-being at T0. Patients with mental comorbidity at T0 were generally more at risk for developing secondary SVD at T1. Patients' dispositional psychopathology and subjective well-being play a major predictive role for the long-term prognosis of dizziness and vertigo. To prevent secondary SVD, patients should be screened for risk and preventive factors, and offered psychotherapeutic treatment in case of insufficient coping capacity.
On the Performance of SVD-DWT Based Digital Video Watermarking Technique with Semi-Blind Detector
Directory of Open Access Journals (Sweden)
Tarzan Basaruddin
2010-10-01
Full Text Available This paper presents a watermarking technique for digital video. The proposed scheme is developed based on the work of Ganic and Chan, which took advantage of SVD and DWT. While the previous work of Chan has the blind-detector property, our aim is to develop a scheme with a semi-blind detector by using the merit of the DWT-SVD technique proposed by Ganic, which was originally applied to still images. Overall, our experimental results show that the proposed scheme has very good imperceptibility and is reasonably robust, especially under attacks such as compression, blurring, cropping, and sharpening.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Thermal decomposition of pyrite
International Nuclear Information System (INIS)
Music, S.; Ristic, M.; Popovic, S.
1992-01-01
Thermal decomposition of natural pyrite (cubic FeS₂) has been investigated using X-ray diffraction and ⁵⁷Fe Mössbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, the time of heating and the starting size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs
Lopez Villaverde, Eduardo; Robert, Sébastien; Prada, Claire
2016-07-01
In the present work, the Synthetic Transmit Aperture (STA) imaging is combined with the Decomposition of the Time Reversal Operator (DORT) method to image a coarse grained austenitic-ferritic steel using a contact transducer array. The highly heterogeneous structure of this material produces a strong scattering noise in ultrasound images. Furthermore, the surface waves guided along the array interfere with the bulk waves backscattered by defects. In order to overcome these problems, the DORT method is applied before calculating images with the STA algorithm. The method consists in analyzing in the frequency domain the singular values and singular vectors of the full array transfer matrix. This paper first presents an analysis of the singular values of different waves contained in the data acquisition, which facilitates the identification of the subspace associated with the surface guided waves for filtering operations. Then, a filtered matrix is defined where the contribution of structural noise and guided waves are reduced. Finally, in the time domain, the STA algorithm is applied to this matrix in order to calculate an image with reduced structural noise. Experiments demonstrate that this filtering improves the signal-to-noise ratio by more than 12 dB in comparison with the STA image before filtering.
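The DORT-style filtering step in this record can be illustrated with a toy NumPy sketch (not the authors' code; in the paper the analysis is done per frequency on the measured transfer matrix). The idea is to zero the singular components spanning the unwanted wave subspace before imaging:

```python
import numpy as np

def svd_filter(K, drop):
    """Remove the subspace spanned by the `drop` strongest singular
    components of the array transfer matrix K (e.g. surface guided waves
    that dominate the singular spectrum), keeping the remainder."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    s_f = s.copy()
    s_f[:drop] = 0.0                   # zero out the unwanted subspace
    return (U * s_f) @ Vt              # rebuild the filtered matrix
```

Identifying which singular values belong to guided waves versus defects is the analysis step the abstract describes; here `drop` is simply assumed known.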
Decomposition of Sodium Tetraphenylborate
International Nuclear Information System (INIS)
Barnes, M.J.
1998-01-01
The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is on the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability
Probability matrix decomposition models
Maris, E.; DeBoeck, P.; Mechelen, I. van
1996-01-01
In this paper, we consider a class of models for two-way matrices with binary entries of 0 and 1. First, we consider Boolean matrix decomposition, conceptualize it as a latent response model (LRM) and, by making use of this conceptualization, generalize it to a larger class of matrix decomposition models
Le, Thien-Phu
2017-10-01
The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.
Azimuthal decomposition of optical modes
CSIR Research Space (South Africa)
Dudley, Angela L
2012-07-01
Full Text Available This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes need two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...
Radiation decomposition of chlorates
International Nuclear Information System (INIS)
Patil, S.F.; Patil, B.T.
1980-01-01
Radiation-induced decomposition yields of chloride, hypochlorite and chlorite have been determined in chlorates of barium, calcium and strontium for different γ doses. The G-values of the products in anhydrous barium and calcium chlorates are found to be greater than in the corresponding hydrated forms, showing that the water of crystallization plays a quenching role in the radiation decomposition of chlorates. The results are explained on the assumption that the excitation energy of a fraction of the metal ions is transferred to the water molecules in the crystal lattice, resulting in the excitation or decomposition of the water molecules rather than the chlorate ions. (orig.)
Daverman, Robert J
2007-01-01
Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to eve
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2015-01-01
Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Diagnosis is usually difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse-coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.
Directory of Open Access Journals (Sweden)
Xiaomin Chen
2012-01-01
Full Text Available The SVD-aided joint transmitter and receiver design for the uplink of CDMA-based synchronous multiuser Turbo-BLAST systems is proposed in the presence of channel state information (CSI) imperfection. At the transmitter, beamforming and power allocation schemes are developed to maximize the capacity of the desired user. At the receiver, a suboptimal decorrelating scheme is first proposed to mitigate the multiuser interference (MUI) and decouple the detection of different users with imperfect CSI; then an iterative detection algorithm that takes the channel estimation error into account is designed to cancel the co-antenna interference (CAI) and further improve the bit error rate (BER) results. Simulation results show that the proposed uplink CDMA-based multiuser Turbo-BLAST model is effective, that the detection of each user is completely independent of the others after decorrelation, and that the system performance can be enhanced by the proposed beamforming and power allocation schemes. Furthermore, BER performance can be enhanced by the modified iterative detection. The effect of CSI imperfection is evaluated, and the analysis proves to be a useful tool for assessing system performance with imperfect CSI.
Photochemical decomposition of catecholamines
International Nuclear Information System (INIS)
Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.
1979-01-01
During photochemical decomposition (lambda=254 nm) adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochrome for 65, 56 and 35% respectively. In determining this conversion, photochemical instability of the aminochromes was taken into account. Irradiations were performed in such dilute solutions that the neglect of the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)
Linear analysis of rotationally invariant, radially variant tomographic imaging systems
International Nuclear Information System (INIS)
Huesmann, R.H.
1990-01-01
This paper describes a method to analyze the linear imaging characteristics of rotationally invariant, radially variant tomographic imaging systems using singular value decomposition (SVD). When the projection measurements from such a system are assumed to be samples from independent and identically distributed multi-normal random variables, the best estimate of the emission intensity is given by the unweighted least squares estimator. The noise amplification of this estimator is inversely proportional to the singular values of the normal matrix used to model projection and backprojection. After choosing an acceptable noise amplification, the new method can determine the number of parameters and hence the number of pixels that should be estimated from data acquired from an existing system with a fixed number of angles and projection bins. Conversely, for the design of a new system, the number of angles and projection bins necessary for a given number of pixels and noise amplification can be determined. In general, computing the SVD of the projection normal matrix has cubic computational complexity. However, the projection normal matrix for this class of rotationally invariant, radially variant systems has a block circulant form. A fast parallel algorithm to compute the SVD of this block circulant matrix makes the singular value analysis practical by asymptotically reducing the computational complexity of the method by a multiplicative factor equal to the number of angles squared.
Manthe, Uwe; Ellerbrock, Roman
2016-05-28
A new approach for the quantum-state resolved analysis of polyatomic reactions is introduced. Based on the singular value decomposition of the S-matrix, energy-dependent natural reaction channels and natural reaction probabilities are defined. It is shown that the natural reaction probabilities are equal to the eigenvalues of the reaction probability operator [U. Manthe and W. H. Miller, J. Chem. Phys. 99, 3411 (1993)]. Consequently, the natural reaction channels can be interpreted as uniquely defined pathways through the transition state of the reaction. The analysis can efficiently be combined with reactive scattering calculations based on the propagation of thermal flux eigenstates. In contrast to a decomposition based straightforwardly on thermal flux eigenstates, it does not depend on the choice of the dividing surface separating reactants from products. The new approach is illustrated studying a prototypical example, the H + CH4 → H2 + CH3 reaction. The natural reaction probabilities and the contributions of the different vibrational states of the methyl product to the natural reaction channels are calculated and discussed. The relation between the thermal flux eigenstates and the natural reaction channels is studied in detail.
Symmetric Tensor Decomposition
DEFF Research Database (Denmark)
Brachat, Jerome; Comon, Pierre; Mourrain, Bernard
2010-01-01
We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....
Extreme Computing for Extreme Adaptive Optics: the Key to Finding Life Outside our Solar System
Ltaief, Hatem
2018-01-01
The real-time correction of telescopic images in the search for exoplanets is highly sensitive to atmospheric aberrations. The pseudo-inverse algorithm is an efficient mathematical method to filter out these turbulences. We introduce a new partial singular value decomposition (SVD) algorithm based on QR-based Diagonally Weighted Halley (QDWH) iteration for the pseudo-inverse method of adaptive optics. The QDWH partial SVD algorithm selectively calculates the most significant singular values and their corresponding singular vectors. We develop a high performance implementation and demonstrate the numerical robustness of the QDWH-based partial SVD method. We also perform a benchmarking campaign on various generations of GPU hardware accelerators and compare against the state-of-the-art SVD implementation SGESDD from the MAGMA library. Numerical accuracy and performance results are reported using synthetic and real observational datasets from the Subaru telescope. Our implementation outperforms SGESDD by up to fivefold and fourfold performance speedups on ill-conditioned synthetic matrices and real observational datasets, respectively. The pseudo-inverse simulation code will be deployed on-sky for the Subaru telescope during observation nights scheduled for early 2018.
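The partial-SVD pseudo-inverse at the heart of this approach can be sketched as follows. This is a NumPy toy using a full SVD for clarity; the paper's contribution is computing the partial SVD directly with QDWH iterations on GPUs, which this sketch does not attempt:

```python
import numpy as np

def truncated_pinv(A, k):
    """Pseudo-inverse built from the k most significant singular triplets,
    discarding poorly conditioned directions (the turbulence-filtering
    role described in the abstract)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
```

With k equal to the full rank this reduces to the ordinary Moore-Penrose pseudo-inverse; smaller k regularizes the reconstruction.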
Singular values of the Rogers-Ramanujan continued fraction
Gee, A.C.P.; Honsbeek, M
1999-01-01
Let $z\\in\\C$ be imaginary quadratic in the upper half plane.Then the Rogers-Ramanujan continued fraction evaluated at $q=e^{2\\pi i z}$ is contained in a class field of $\\Q(z)$. Ramanujan showed that for certain values of $z$, one can write these continued fractions as nested radicals. We use the
Nuclear power plant sensor fault detection using singular value ...
Indian Academy of Sciences (India)
Shyamapada Mandal
2017-07-27
An NN scheme was designed by Zhu et al [15] for AHU-system sensor FDD. A hybrid data-driven soft-measurement and modeling method was proposed for power plant sensor condition monitoring [16]. This method includes a generalised regression neural network (GRNN), mean impact value (MIV), ...
Inverse scale space decomposition
DEFF Research Database (Denmark)
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Cacciatori, Sergio L; Marrani, Alessio
2013-01-01
By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.
Decomposition mechanisms of tertiarybutylarsine
Larsen, C. A.; Buchan, N. I.; Li, S. H.; Stringfellow, G. B.
1989-03-01
As a new source compound to replace AsH₃ for organometallic vapor phase epitaxy (OMVPE) of III/V semiconductors, tertiarybutylarsine (TBAs) has the advantages of a low decomposition temperature, lower safety hazards, and low carbon contamination in OMVPE-grown GaAs layers. The vapor pressure of TBAs was measured, and is given by log₁₀ P(Torr) = 7.500 − 1562.3/T(K). The decomposition mechanisms of TBAs were studied in a D₂ ambient using a time-of-flight mass spectrometer to analyze the gaseous products. Although a free-radical mechanism would seem the most likely, it is not the dominant route for decomposition. Instead, unimolecular processes are the preferred pathway. Two such reactions are proposed. The major step is intramolecular coupling yielding AsH and isobutane. At higher temperatures β-elimination becomes important, producing AsH₃ and isobutene. The reactions are catalyzed by GaAs surfaces, but not by silica. The temperature dependence of the reaction rates was studied, and Arrhenius parameters for the rate constants are given.
Huang, Nantian; Chen, Huaijin; Cai, Guowei; Fang, Lihua; Wang, Yuqiang
2016-11-10
Mechanical fault diagnosis of high-voltage circuit breakers (HVCBs) based on vibration signal analysis is one of the most significant issues in improving the reliability and reducing the outage cost of power systems. Because training samples and known fault types for HVCBs are limited, existing mechanical fault diagnosis methods easily misclassify new fault types, for which no training samples exist, as either the normal condition or a wrong fault type. A new mechanical fault diagnosis method for HVCBs based on variational mode decomposition (VMD) and a multi-layer classifier (MLC) is proposed to improve the accuracy of fault diagnosis. First, HVCB vibration signals during operation are measured using an acceleration sensor. Second, a VMD algorithm is used to decompose the vibration signals into several intrinsic mode functions (IMFs). The IMF matrix is divided into submatrices to compute the local singular values (LSV). The maximum singular value of each submatrix is selected as a feature for fault diagnosis. Finally, an MLC composed of two one-class support vector machines (OCSVMs) and a support vector machine (SVM) is constructed to identify the fault type. Two layers of independent OCSVMs are adopted to distinguish normal from fault conditions and known from unknown fault types, respectively. On this basis, the SVM recognizes the specific fault type. Real diagnostic experiments are conducted with a real SF₆ HVCB in normal and fault states. Three different faults (i.e., jam fault of the iron core, looseness of the base screw, and poor lubrication of the connecting lever) are simulated in a field experiment on a real HVCB to test the feasibility of the proposed method. Results show that the classification accuracy of the new method is superior to that of other traditional methods.
Peng, Bo; Kowalski, Karol
2017-09-12
The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and an efficient storage strategy for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows [Formula: see text] versus the [Formula: see text] cost of performing a single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from [Formula: see text] to [Formula: see text] with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of the coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C₆₀ molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10⁻⁴ to 10⁻³ to give an acceptable compromise between efficiency and accuracy.
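The compound two-step idea — pivoted incomplete Cholesky followed by a truncated SVD that prunes redundant Cholesky vectors — can be sketched on a generic symmetric positive semidefinite matrix standing in for the unfolded integral tensor. This is an illustrative toy, not the authors' implementation; thresholds and names are assumptions:

```python
import numpy as np

def chol_then_svd(M, chol_tol=1e-10, svd_tol=1e-10):
    """Compress a symmetric PSD matrix M in two steps: pivoted incomplete
    Cholesky M ≈ L L^T, then a truncated SVD of L. Returns B with
    M ≈ B @ B.T and (ideally) fewer columns than L."""
    d = np.diag(M).astype(float).copy()      # residual diagonal
    cols = []
    while d.max() > chol_tol:
        p = int(np.argmax(d))                # pivot on the largest residual
        col = M[:, p].astype(float).copy()
        for w in cols:                       # subtract previous contributions
            col -= w[p] * w
        w = col / np.sqrt(d[p])
        cols.append(w)
        d -= w * w
    L = np.array(cols).T                     # n x r Cholesky factor
    U, s, _ = np.linalg.svd(L, full_matrices=False)
    keep = s > svd_tol * s[0]                # truncate redundant directions
    return U[:, keep] * s[keep]              # n x r' compressed factor
```

Because M = L Lᵀ and L = U diag(s) Wᵀ, the kept factor B = U diag(s) satisfies B Bᵀ = L Lᵀ up to the truncation thresholds, mirroring the storage reduction the abstract reports.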
Directory of Open Access Journals (Sweden)
Ibgtc Bowala
2017-06-01
Full Text Available With the rapid growth of financial markets, analysts are paying more attention to prediction. Stock data are time series data of enormous volume. A feasible solution for handling the increasing amount of data is to use a cluster for parallel processing, and the Hadoop parallel computing platform is a typical representative. There are various statistical models for forecasting time series data, but accurate clusters are a prerequisite. Clustering analysis of time series data is one of the main methods for mining time series data for many other analysis processes. However, general clustering algorithms cannot handle time series data, because such data have a special structure, high dimensionality, and highly correlated values combined with a high noise level. A novel model for time series clustering is presented using BIRCH, based on piecewise SVD, leading to a novel dimension reduction approach. Highly correlated features are handled using SVD with a novel approach for dimensionality reduction in order to preserve the correlated behavior, and then BIRCH is used for clustering. The algorithm is a novel model that can handle massive time series data. Finally, this new model is successfully applied to real stock time series data from Yahoo Finance with satisfactory results.
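The piecewise-SVD reduction idea can be sketched as follows (a loose NumPy illustration with made-up series; BIRCH itself and the paper's exact reduction are not reproduced here):

```python
import numpy as np

def piecewise_svd_features(series, n_pieces=4, rank=2):
    """Sketch of piecewise-SVD dimension reduction: cut each series into
    equal pieces, reduce each piece-matrix by projecting onto its top
    right-singular vectors, and concatenate the projections."""
    X = np.asarray(series, dtype=float)        # shape (n_series, length)
    feats = []
    for P in np.array_split(X, n_pieces, axis=1):
        _, _, Vt = np.linalg.svd(P, full_matrices=False)
        feats.append(P @ Vt[:rank].T)          # scores on the dominant modes
    return np.hstack(feats)                    # (n_series, n_pieces * rank)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 64)
group_a = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal((10, 64))
group_b = np.cos(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal((10, 64))
F = piecewise_svd_features(np.vstack([group_a, group_b]))
print(F.shape)  # (20, 8)
```

The reduced features F would then be fed to a clustering algorithm such as BIRCH.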
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....
Mode decomposition evolution equations.
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2012-03-01
Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs of various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are limited to PDE based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be
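The mechanism of a PDE-based low-pass filter and its high-pass complement can be illustrated with the simplest case, the heat equation; the MoDEEs of the paper use arbitrarily high-order PDEs, so this second-order sketch only shows the principle:

```python
import numpy as np

def heat_lowpass(signal, n_steps=50, dt=0.2):
    """Second-order PDE low-pass filter: explicit Euler steps of u_t = u_xx
    on a periodic grid. The complement u - lowpass(u) acts as the matching
    high-pass filter."""
    u = np.asarray(signal, dtype=float).copy()
    for _ in range(n_steps):
        u += dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))  # discrete Laplacian
    return u

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.sin(x) + 0.3 * np.sin(20 * x)      # slow mode + fast mode
low = heat_lowpass(f)                     # keeps the slow mode
high = f - low                            # extracts the fast "mode"
print(np.max(np.abs(low - np.sin(x))) < 0.05)  # True: slow mode preserved
```

Diffusion damps high frequencies much faster than low ones, which is exactly the frequency-splitting behavior that the high-order MoDEEs sharpen and localize.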
Hydrogen peroxide catalytic decomposition
Parrish, Clyde F. (Inventor)
2010-01-01
Nitric oxide in a gaseous stream is converted to nitrogen dioxide using oxidizing species generated through the use of concentrated hydrogen peroxide fed as a monopropellant into a catalyzed thruster assembly. The hydrogen peroxide is preferably stored at stable concentration levels, i.e., approximately 50%-70% by volume, and may be increased in concentration in a continuous process preceding decomposition in the thruster assembly. The exhaust of the thruster assembly, rich in hydroxyl and/or hydroperoxy radicals, may be fed into a stream containing oxidizable components, such as nitric oxide, to facilitate their oxidation.
Regularization by truncated total least squares
DEFF Research Database (Denmark)
Hansen, Per Christian; Fierro, R.D; Golub, G.H
1997-01-01
The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use...... matrix. We express our results in terms of the singular value decomposition (SVD) of the coefficient matrix rather than the augmented matrix. This leads to insight into the filtering properties of the truncated TLS method as compared to regularized least squares solutions. In addition, we propose...
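The truncated TLS solution can be written compactly from the SVD of the augmented matrix [A b]; with k equal to the number of unknowns this reduces to the classical TLS solution (the trailing-block formula below follows the Fierro–Golub–Hansen–O'Leary construction; the data are synthetic):

```python
import numpy as np

def truncated_tls(A, b, k):
    """Truncated TLS sketch: SVD of the augmented matrix [A b], keep the
    first k singular directions, and solve using the trailing block of the
    right singular matrix."""
    m, n = A.shape
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    V = Vt.T
    V12 = V[:n, k:]                     # n x (n+1-k)
    V22 = V[n:, k:]                     # 1 x (n+1-k)
    return -V12 @ V22.T / (V22 @ V22.T)

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(50)
x = truncated_tls(A, b, k=3)            # k = n gives the standard TLS solution
print(np.round(x.ravel(), 2))
```

Choosing k smaller than n filters out the small singular directions, which is the regularizing effect the abstract describes.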
Contributions to Automated Realtime Underwater Navigation
2012-02-01
dτ (1.62), where R_vi ∈ SO(3) is the matrix describing the time-invariant rotation from instrument coordinates to vehicle coordinates. This is the...used in the KF framework. A minimum mean-squared error (MMSE) derivation of the KF (e.g., [99]) gives the Kalman gain [97, 101]: K_k = P^-_{x̃_k ỹ_k}(P...heading reference system; LADCP: Lowered Acoustic Doppler Current Profiler; GA: Geometric Algebra; LA: Linear Algebra; SVD: singular value decomposition; MMSE: minimum mean-squared error
Erbium hydride decomposition kinetics.
Energy Technology Data Exchange (ETDEWEB)
Ferrizz, Robert Matthew
2006-11-01
Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
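Redhead's peak-maximum method mentioned above can be illustrated numerically; the attempt frequency, peak temperature, and heating rate below are illustrative textbook values, not the paper's data:

```python
import numpy as np

R = 1.987e-3  # gas constant in kcal/(mol*K)

def redhead_activation_energy(T_p, beta, nu=1e13):
    """Redhead approximation for first-order desorption:
    E_A ≈ R * T_p * (ln(nu * T_p / beta) - 3.64),
    with T_p the peak temperature (K), beta the heating rate (K/s),
    and nu an assumed attempt frequency (1/s)."""
    return R * T_p * (np.log(nu * T_p / beta) - 3.64)

# illustrative: a desorption peak near 830 K at 1 K/s maps to roughly 54 kcal/mol
E = redhead_activation_energy(T_p=830.0, beta=1.0)
print(round(E, 1))
```

The logarithmic dependence on nu/beta is why modest uncertainties in the attempt frequency translate into the few-kcal/mol spread in E_A noted in the abstract.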
Decomposition methods for unsupervised learning
DEFF Research Database (Denmark)
Mørup, Morten
2008-01-01
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...
Thermal decomposition of lutetium propionate
DEFF Research Database (Denmark)
Grivel, Jean-Claude
2010-01-01
The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous ...... of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O...
AUTONOMOUS GAUSSIAN DECOMPOSITION
International Nuclear Information System (INIS)
Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian; Heiles, Carl; Hennebelle, Patrick; Goss, W. M.; Dickey, John
2015-01-01
We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes
AUTONOMOUS GAUSSIAN DECOMPOSITION
Energy Technology Data Exchange (ETDEWEB)
Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)
2015-04-15
We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
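The derivative-spectroscopy idea behind AGD's initial guesses can be sketched in a few lines (a simplified noiseless illustration; AGD's machine-learning-tuned implementation is considerably more involved):

```python
import numpy as np

def gaussian(x, a, mu, w):
    return a * np.exp(-0.5 * ((x - mu) / w) ** 2)

def derivative_guesses(x, y):
    """Derivative-spectroscopy sketch: local minima of the second derivative
    that lie below zero mark candidate Gaussian component centers."""
    d2 = np.gradient(np.gradient(y, x), x)
    idx = [i for i in range(1, len(x) - 1)
           if d2[i] < 0 and d2[i] <= d2[i - 1] and d2[i] <= d2[i + 1]]
    return x[idx]

x = np.linspace(-10, 10, 400)
y = gaussian(x, 1.0, -3.0, 1.0) + gaussian(x, 0.8, 4.0, 1.5)
centers = derivative_guesses(x, y)
print(np.round(centers, 1))  # candidate centers near -3 and 4
```

These candidate centers, widths, and amplitudes would then seed a conventional least-squares Gaussian fit.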
Decomposition of Network Communication Games
Dietzenbacher, Bas; Borm, Peter; Hendrickx, Ruud
2015-01-01
Using network control structures this paper introduces network communication games as a generalization of vertex games and edge games corresponding to communication situations and studies their decomposition into unanimity games. We obtain a relation between the dividends of the network
NRSA enzyme decomposition model data
U.S. Environmental Protection Agency — Microbial enzyme activities measured at more than 2000 US streams and rivers. These enzyme data were then used to predict organic matter decomposition and microbial...
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
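For intuition, the multiplicative Schwarz iteration the algorithms reduce to can be sketched on a linear 1D Poisson model problem (illustrative sizes; the paper's setting is general convex programming, which this sketch does not cover):

```python
import numpy as np

def schwarz_1d(n=50, n_sweeps=50):
    """Multiplicative Schwarz sketch for -u'' = 1 on (0,1), u(0) = u(1) = 0:
    two overlapping subdomains solved in sequence, each on the freshest
    residual. The additive variant would solve both subdomains on the same
    residual and add damped corrections simultaneously."""
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = np.ones(n)
    doms = [np.arange(0, n // 2 + 5), np.arange(n // 2 - 5, n)]  # 10-node overlap
    u = np.zeros(n)
    for _ in range(n_sweeps):
        for d in doms:
            r = b - A @ u
            u[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])       # local solve
    return u

u = schwarz_1d()
x = np.linspace(0, 1, 52)[1:-1]
print(np.max(np.abs(u - 0.5 * x * (1 - x))) < 1e-8)  # True: matches exact solution
```

The exact solution here is u(x) = x(1-x)/2, which the second-order stencil reproduces exactly at the nodes, so the residual error reflects only the Schwarz convergence.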
Enhancing multilingual latent semantic analysis with term alignment information.
Energy Technology Data Exchange (ETDEWEB)
Chew, Peter A.; Bader, Brett William
2008-08-01
Latent Semantic Analysis (LSA) is based on the Singular Value Decomposition (SVD) of a term-by-document matrix for identifying relationships among terms and documents from co-occurrence patterns. Among the multiple ways of computing the SVD of a rectangular matrix X, one approach is to compute the eigenvalue decomposition (EVD) of a square 2×2 composite matrix consisting of four blocks with X and X^T in the off-diagonal blocks and zero matrices in the diagonal blocks. We point out that significant value can be added to LSA by filling in some of the values in the diagonal blocks (corresponding to explicit term-to-term or document-to-document associations) and computing a term-by-concept matrix from the EVD. For the case of multilingual LSA, we incorporate information on cross-language term alignments of the same sort used in Statistical Machine Translation (SMT). Since all elements of the proposed EVD-based approach can rely entirely on lexical statistics, hardly any price is paid for the improved empirical results. In particular, the approach, like LSA or SMT, can still be generalized to virtually any language(s); computation of the EVD takes similar resources to that of the SVD since all the blocks are sparse; and the results of EVD are just as economical as those of SVD.
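The SVD–EVD relationship the abstract builds on is easy to verify directly: the nonzero eigenvalues of the composite matrix [[0, X], [X^T, 0]] come in pairs ±σ_i, where σ_i are the singular values of X (a NumPy sanity check with an arbitrary small matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 4))          # stand-in term-by-document matrix
m, n = X.shape

# composite 2x2 block matrix: zero diagonal blocks, X and X^T off-diagonal
B = np.block([[np.zeros((m, m)), X],
              [X.T, np.zeros((n, n))]])

evals = np.linalg.eigvalsh(B)
svals = np.linalg.svd(X, compute_uv=False)

# positive eigenvalues of B equal the singular values of X
pos = np.sort(evals[evals > 1e-10])
print(np.allclose(pos, np.sort(svals)))  # True
```

Filling in the diagonal blocks, as the paper proposes, perturbs this spectrum with explicit term-term and document-document associations while keeping the matrix symmetric and sparse.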
Directory of Open Access Journals (Sweden)
Xiaoyang Liu
2017-01-01
Full Text Available In order to analyze the channel estimation performance of a near space high altitude platform station (HAPS) in a wireless communication system, the structure and formation of HAPS are studied in this paper. The traditional Least Squares (LS) channel estimation method and the Singular Value Decomposition-Linear Minimum Mean-Squared (SVD-LMMS) channel estimation method are compared and investigated. A novel channel estimation method and model are proposed. The channel estimation performance of HAPS is studied in depth. The simulation and theoretical analysis results show that the performance of the proposed method is better than that of the traditional methods. A lower Bit Error Rate (BER) and a higher Signal-to-Noise Ratio (SNR) are obtained by the proposed method compared with the LS and SVD-LMMS methods.
Zhang, P; Jones, R M
2014-01-01
Beam-excited higher order modes (HOMs) can be used to provide beam diagnostics. Here we focus on 3.9 GHz superconducting accelerating cavities. In particular we study dipole mode excitation and its application to beam position determination. In order to extract beam position information, linear regression can be used. Due to the large number of sampling points in the waveforms, statistical methods are used to effectively reduce the dimension of the system, such as singular value decomposition (SVD) and k-means clustering. These are compared with direct linear regression (DLR) on the entire waveforms. A cross-validation technique is used to study the sample-independent precision of the position predictions given by these three methods. An RMS prediction error in the beam position of approximately 50 microns can be achieved by DLR and SVD, while k-means clustering yields approximately 70 microns.
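The SVD dimension-reduction step can be sketched: project the waveforms onto a few dominant SVD modes and regress position on the scores (synthetic data below; the real HOM waveforms and calibration constants are not reproduced):

```python
import numpy as np

def svd_regression(W, y, k):
    """Reduce waveform samples to k SVD modes, then fit a linear model —
    a sketch of dimension-reduced regression for position prediction."""
    W0 = W - W.mean(axis=0)
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    Z = U[:, :k] * s[:k]                         # scores of dominant modes
    D = np.column_stack([Z, np.ones(len(y))])    # add an intercept
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return coef, Vt[:k], W.mean(axis=0)

# synthetic "waveforms": position-dependent mode amplitude plus noise
rng = np.random.default_rng(4)
pos = rng.uniform(-1, 1, 200)                    # beam positions (a.u.)
t = np.linspace(0, 1, 500)
mode = np.sin(2 * np.pi * 30 * t)
W = np.outer(pos, mode) + 0.05 * rng.standard_normal((200, 500))

coef, modes, mu = svd_regression(W, pos, k=3)
Z = (W - mu) @ modes.T
pred = np.column_stack([Z, np.ones(len(pos))]) @ coef
rms = np.sqrt(np.mean((pred - pos) ** 2))
print(rms < 0.02)  # True: sub-percent position recovery on this toy data
```

Reducing hundreds of waveform samples to a handful of SVD scores is what makes the regression well conditioned on limited calibration data.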
Time evolution of two holes in t - J chains with anisotropic couplings
Manmana, Salvatore R.; Thyen, Holger; Köhler, Thomas; Kramer, Stephan C.
Using time-dependent Matrix Product State (MPS) methods we study the real-time evolution of hole excitations in t-J chains close to filling n = 1. The dynamics in 'standard' t-J chains with SU(2) invariant spin couplings is compared to that obtained when introducing anisotropic, XXZ-type spin interactions, as realizable, e.g., by ultracold polar molecules on optical lattices. The simulations are performed with MPS implementations based on the usual singular value decomposition (SVD) as well as ones using the adaptive cross approximation (ACA) instead. The ACA can be seen as an iterative approach to SVD which is often used, e.g., in the context of finite-element methods, leading to a substantial speedup. A comparison of the performance of both algorithms in the MPS context is discussed. Financial support via DFG through CRC 1073 ("Atomic scale control of energy conversion"), project B03, is gratefully acknowledged.
Shi, L; Jones., R M
2014-01-01
erating cavities at FLASH linac, DESY, are equipped with electronics for beam position monitoring, which are based on HOM signals from special couplers. These monitors provide the beam position without additional vacuum components and at low cost. Moreover, they can be used to align the beam in the cavities to reduce the HOM effects on the beam. However, the HOMBPM (Higher Order Mode based Beam Position Monitor) shows an instability problem over time. In this paper, we will present the status of studies on this issue. Several methods are utilized to calibrate the HOMBPMs. These methods include DLR (Direct Linear Regression), and SVD (Singular Value Decomposition). We found that SVD generally is more suitable for HOMBPM calibration. We focus on the HOMBPMs at 1.3 GHz cavities. Techniques developed here are applicable to 3.9 ...
Missing value imputation in multi-environment trials: Reconsidering the Krzanowski method
Directory of Open Access Journals (Sweden)
Sergio Arciniegas-Alarcón
2016-07-01
Full Text Available We propose a new methodology for multiple imputation when faced with missing data in multi-environmental trials with genotype-by-environment interaction, based on the imputation system developed by Krzanowski that uses the singular value decomposition (SVD) of a matrix. Several different iterative variants are described; differential weights can also be included in each variant to represent the influence of different components of the SVD in the imputation process. The methods are compared through a simulation study based on three real data matrices that have values deleted randomly at different percentages, using as a measure of overall accuracy a combination of the variance between imputations and their mean square deviations relative to the deleted values. The best results are shown by two of the iterative schemes that use weights belonging to the interval [0.75, 1]. These schemes provide imputations that have higher quality when compared with other multiple imputation methods based on the Krzanowski method.
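The core of a Krzanowski-type scheme — iteratively refitting a low-rank SVD approximation and overwriting only the missing cells — can be sketched as follows (an unweighted variant with illustrative rank and data, not the paper's weighted iterations):

```python
import numpy as np

def svd_impute(X, rank=2, n_iter=200):
    """Iterative SVD imputation sketch: initialize missing cells with
    column means, then repeatedly refit a rank-`rank` SVD approximation
    and overwrite only the missing cells with its values."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[miss] = approx[miss]
    return X

rng = np.random.default_rng(5)
G = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 8))   # rank-2 "GxE" table
Xm = G.copy()
holes = rng.random(G.shape) < 0.1          # ~10% cells deleted at random
Xm[holes] = np.nan
Xf = svd_impute(Xm)
print(float(np.mean(np.abs(Xf[holes] - G[holes]))))   # small imputation error
```

The weighted variants studied in the paper blend the SVD prediction with the running estimate instead of overwriting it outright.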
Geladi, Paul; Nelson, Andrew; Lindholm-Sethson, Britta
2007-07-09
Electrical impedance gives multivariate complex number data as results. Two examples of multivariate electrical impedance data measured on lipid monolayers in different solutions give rise to matrices (16x50 and 38x50) of complex numbers. Multivariate data analysis by principal component analysis (PCA) or singular value decomposition (SVD) can be used for complex data and the necessary equations are given. The scores and loadings obtained are vectors of complex numbers. It is shown that the complex number PCA and SVD are better at concentrating information in a few components than the naïve juxtaposition method and that Argand diagrams can replace score and loading plots. Different concentrations of Magainin and Gramicidin A give different responses and also the role of the electrolyte medium can be studied. An interaction of Gramicidin A in the solution with the monolayer over time can be observed.
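Complex-valued PCA/SVD needs no special machinery in practice: NumPy's SVD operates on complex matrices directly, yielding complex scores and loadings that can be displayed in Argand diagrams (synthetic impedance-like data below, using the same 16×50 shape as the abstract's first example):

```python
import numpy as np

rng = np.random.default_rng(6)
# complex impedance-like data: 16 samples x 50 frequencies, one dominant profile
freq_profile = np.exp(1j * np.linspace(0, np.pi, 50))
strengths = rng.standard_normal(16) + 1j * rng.standard_normal(16)
Z = (np.outer(strengths, freq_profile)
     + 0.05 * (rng.standard_normal((16, 50)) + 1j * rng.standard_normal((16, 50))))

# SVD of a complex matrix: U and Vh are complex, s is real and non-negative
U, s, Vh = np.linalg.svd(Z, full_matrices=False)
scores = U[:, 0] * s[0]       # complex scores: one point per sample in an Argand plot
loading = Vh[0]               # complex loading across the 50 frequencies
print(s[0] / s.sum() > 0.7)   # True: the first component concentrates the signal
```

This concentration of information in the first few components is what the abstract contrasts with the naïve juxtaposition (stacking real and imaginary parts) approach.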
Multiview Trajectory Mapping Using Homography with Lens Distortion Correction
Directory of Open Access Journals (Sweden)
Andrea Cavallaro
2008-11-01
Full Text Available We present a trajectory mapping algorithm for a distributed camera setting that is based on statistical homography estimation accounting for the distortion introduced by camera lenses. Unlike traditional approaches based on the direct linear transformation (DLT) algorithm and singular value decomposition (SVD), the planar homography estimation is derived from renormalization. In addition to this, the algorithm explicitly introduces a correction parameter to account for the nonlinear radial lens distortion, thus improving the accuracy of the transformation. We demonstrate the proposed algorithm by generating mosaics of the observed scenes and by registering the spatial locations of moving objects (trajectories) from multiple cameras on the mosaics. Moreover, we objectively compare the transformed trajectories with those obtained by SVD and least mean square (LMS) methods on standard datasets and demonstrate the advantages of the renormalization and the lens distortion correction.
Multiview Trajectory Mapping Using Homography with Lens Distortion Correction
Directory of Open Access Journals (Sweden)
Kayumbi Gabin
2008-01-01
Full Text Available We present a trajectory mapping algorithm for a distributed camera setting that is based on statistical homography estimation accounting for the distortion introduced by camera lenses. Unlike traditional approaches based on the direct linear transformation (DLT) algorithm and singular value decomposition (SVD), the planar homography estimation is derived from renormalization. In addition to this, the algorithm explicitly introduces a correction parameter to account for the nonlinear radial lens distortion, thus improving the accuracy of the transformation. We demonstrate the proposed algorithm by generating mosaics of the observed scenes and by registering the spatial locations of moving objects (trajectories) from multiple cameras on the mosaics. Moreover, we objectively compare the transformed trajectories with those obtained by SVD and least mean square (LMS) methods on standard datasets and demonstrate the advantages of the renormalization and the lens distortion correction.
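For reference, the DLT+SVD baseline the paper improves upon can be written in a few lines (no lens-distortion correction; exact synthetic correspondences, so the recovery is essentially machine-precision):

```python
import numpy as np

def dlt_homography(src, dst):
    """Classic DLT: stack two linear equations per point pair and take the
    right-singular vector of the smallest singular value as the flattened
    3x3 homography."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# synthetic check: push points through a known homography and recover it
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 2.0]])
pts = np.column_stack([src, np.ones(len(src))]) @ H_true.T
dst = pts[:, :2] / pts[:, 2:]
H = dlt_homography(src, dst)
print(np.allclose(H, H_true, atol=1e-6))  # True
```

With noisy, lens-distorted correspondences this plain DLT degrades, which is the motivation for the renormalization and distortion-correction scheme of the paper.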
A reconstructed South Atlantic Meridional Overturning Circulation time series since 1870
Lopez, Hosmay; Goni, Gustavo; Dong, Shenfu
2017-04-01
This study reconstructs a century-long South Atlantic Meridional Overturning Circulation (SAMOC) index. The reconstruction is possible due to its covariability with sea surface temperature (SST). A singular value decomposition (SVD) method is applied to the correlation matrix of SST and SAMOC. The SVD is performed on the training period (1993 to present), for which Expendable Bathythermograph and satellite altimetry observations are available. The joint modes obtained are used in the reconstruction of a monthly SAMOC time series from 1870 to present. The reconstructed index is highly correlated with the observation-based SAMOC time series during the training period and provides a long historical estimate. It is shown that the Interdecadal Pacific Oscillation (IPO) is the leading mode of SAMOC-SST covariability, explaining 85% of it, with the Atlantic Niño accounting for less than 10%. The reconstruction shows that SAMOC has recently shifted to an anomalous positive period, consistent with a recent positive shift of the IPO.
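The SST-index SVD step can be sketched with synthetic data: form the cross-covariance between the SST field and the index over a "training" window, take its leading SVD mode, and project the SST field onto it to reconstruct the index (shapes, noise levels, and the shared driver below are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
nt, nx = 300, 40                            # months x SST grid points
pattern = np.sin(np.linspace(0, np.pi, nx)) # spatial footprint of a climate mode
driver = rng.standard_normal(nt)            # shared mode driving both fields
sst = np.outer(driver, pattern) + 0.3 * rng.standard_normal((nt, nx))
samoc = 2.0 * driver + 0.3 * rng.standard_normal(nt)

# SVD of the cross-covariance between the SST field and the index
C = sst.T @ samoc[:, None] / nt             # (nx, 1) cross-covariance
U, s, Vt = np.linalg.svd(C, full_matrices=False)
sst_mode = U[:, 0]                          # joint SST pattern
recon = sst @ sst_mode                      # project SST -> reconstructed index

r = np.corrcoef(recon, samoc)[0, 1]
print(abs(r) > 0.9)  # True: reconstruction tracks the index in the training window
```

In the paper the same projection, trained on 1993–present, is applied to the historical SST record to extend the index back to 1870.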
Zhang, P; Jones, R M; Shinton, I R R; Flisgen, T; Glock, H W
2012-01-01
We investigate the feasibility of beam position diagnostics using Higher Order Mode (HOM) signals excited by an electron beam in the third harmonic 3.9 GHz superconducting accelerating cavities at FLASH. After careful theoretical and experimental assessment of the HOM spectrum, three modal choices have been narrowed down to fulfill different diagnostics requirements. These are localized dipole beam-pipe modes, trapped cavity modes from the fifth dipole band and propagating modes from the first two dipole bands. These modes are treated with various data analysis techniques: modal identification, direct linear regression (DLR) and singular value decomposition (SVD). Promising options for beam diagnostics are found from all three modal choices. This constitutes the first prediction, subsequently confirmed by experiments, of trapped HOMs in third harmonic cavities, and also the first direct comparison of DLR and SVD in the analysis of HOM-based beam diagnostics.
Thermal decomposition of biphenyl (1963)
International Nuclear Information System (INIS)
Clerc, M.
1962-06-01
The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as triphenyl) have been measured in the vapour and liquid phases at 460 deg. C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 deg. C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase. Product and activation energy: hydrogen, 73 ± 2 kcal/mol; benzene, 76 ± 2 kcal/mol; meta-triphenyl, 53 ± 2 kcal/mol; overall biphenyl decomposition, 64 ± 2 kcal/mol. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence noticeably accelerates the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and the assay of the aromatic C-H bonds by infrared spectrophotometry. As a result the existence in tars of hydrogenated tetra-, penta- and hexaphenyl has been demonstrated. (author) [fr]
Spectral Tensor-Train Decomposition
DEFF Research Database (Denmark)
Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.
2016-01-01
discretizations of the target function. We assess the performance of the method on a range of numerical examples: a modified set of Genz functions with dimension up to 100, and functions with mixed Fourier modes or with local features. We observe significant improvements in performance over an anisotropic......The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT.......e., the “cores”) comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT...
On the hadron mass decomposition
Lorcé, Cédric
2018-02-01
We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force.
Abstract decomposition theorem and applications
Grossberg, R; Grossberg, Rami; Lessmann, Olivier
2005-01-01
Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types and existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), Excellent Classes of atomic models of a first order theory (proved by Grossberg and Hart in 1987) and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).
A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.
Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin
2017-01-01
Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most of the current work in FC has focused on the assumption of temporally stationary networks. However, recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC for understanding the formation and dissolution of networks of the brain. In this paper, we introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs) by first identifying change points at which the network connectivity across subjects shows significant changes and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects, yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change points are consistent with the a priori known ERN interval. The results show significant connectivities in medial-frontal regions, which are consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization. The proposed tensor-based method captures the topological structure of FCNs, which provides more accurate change-point detection and state summarization.
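A subspace-distance measure of the kind used for the change-point step can be sketched via principal angles (Björck–Golub cosines from an SVD; this is a generic geodesic-type distance, not necessarily the paper's exact metric):

```python
import numpy as np

def subspace_distance(A, B):
    """Distance between the column spaces of A and B via principal angles:
    the cosines of the angles are the singular values of Qa^T Qb."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    cosines = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.sqrt(np.sum(np.arccos(cosines) ** 2))   # geodesic-style distance

rng = np.random.default_rng(8)
X = rng.standard_normal((30, 3))
same = subspace_distance(X, X @ rng.standard_normal((3, 3)))   # same column space
diff = subspace_distance(X, rng.standard_normal((30, 3)))      # unrelated subspace
print(same < 1e-6 < diff)  # True
```

A spike in such a distance between low-rank subspaces at consecutive time points is what flags a candidate network change point.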
Decomposition of metal nitrate solutions
International Nuclear Information System (INIS)
Haas, P.A.; Stines, W.B.
1982-01-01
Oxides in powder form are obtained from aqueous solutions of one or more heavy metal nitrates (e.g. U, Pu, Th, Ce) by thermal decomposition at 300 to 800 deg C in the presence of about 50 to 500% molar concentration of ammonium nitrate to total metal. (author)
Decomposition of network communication games
Dietzenbacher, Bas; Borm, Peter; Hendrickx, Ruud
Using network control structures, this paper introduces a general class of network communication games and studies their decomposition into unanimity games. We obtain a relation between the dividends in any network communication game and its underlying transferable utility game, which depends on the
Kosambi and Proper Orthogonal Decomposition
Indian Academy of Sciences (India)
In 1943 Kosambi published a paper titled 'Statistics in function space' in the Journal of the Indian Mathematical Society. This paper was the first to propose the technique of statistical analysis often called proper orthogonal decomposition today. This article describes the contents of that paper and Kosambi's approach to ...
Thermal decomposition of ammonium hexachloroosmate
DEFF Research Database (Denmark)
Asanova, T I; Kantor, Innokenty; Asanov, I. P.
2016-01-01
Structural changes of (NH4)2[OsCl6] occurring during thermal decomposition in a reduction atmosphere have been studied in situ using combined energy-dispersive X-ray absorption spectroscopy (ED-XAFS) and powder X-ray diffraction (PXRD). According to PXRD, (NH4)2[OsCl6] transforms directly...
Modular Decomposition of Boolean Functions
J.C. Bioch (Cor)
2002-01-01
Modular decomposition is a thoroughly investigated topic in many areas such as switching theory, reliability theory, game theory and graph theory. Most applications can be formulated in the framework of Boolean functions. In this paper we give a unified treatment of modular
Cartoon+Texture Image Decomposition
Directory of Open Access Journals (Sweden)
Antoni Buades
2011-09-01
Full Text Available In this article we give a thorough description of the algorithm proposed in [A. Buades, T. Le, J.M. Morel and L. Vese, Fast cartoon + texture image filters, IEEE Transactions on Image Processing, 2010] for cartoon+texture decomposition using a nonlinear low-pass/high-pass filter pair.
Probability inequalities for decomposition integrals
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mesiar, Radko
2017-01-01
Vol. 315, No. 1 (2017), pp. 240-248. ISSN 0377-0427. Institutional support: RVO:67985556. Keywords: Decomposition integral * Superdecomposition integral * Probability inequalities. Subject RIV: BA - General Mathematics. OBOR OECD: Statistics and probability. Impact factor: 1.357, year: 2016. http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf
Dimensionality reduction in control and coordination of the human hand.
Vinjamuri, Ramana; Sun, Mingui; Chang, Cheng-Chun; Lee, Heung-No; Sclabassi, Robert J; Mao, Zhi-Hong
2010-02-01
The concept of kinematic synergies is proposed to address the dimensionality reduction problem in control and coordination of the human hand. This paper develops a method for extracting kinematic synergies from joint-angular-velocity profiles of hand movements. Decomposition of a limited set of synergies from numerous movements is a complex optimization problem. This paper splits the decomposition process into two stages. The first stage is to extract synergies from rapid movement tasks using singular value decomposition (SVD). A bank of template functions is then created from shifted versions of the extracted synergies. The second stage is to find weights and onset times of the synergies based on ℓ1-minimization, whose solutions provide sparse representations of hand movements using synergies.
Energy Technology Data Exchange (ETDEWEB)
Pee, J H; Kim, Y J; Kim, J Y; Cho, W S; Kim, K J [Whiteware Ceramic Center, KICET (Korea, Republic of); Seong, N E, E-mail: pee@kicet.re.kr [Recytech Korea Co., Ltd. (Korea, Republic of)
2011-10-29
Decomposition-promoting factors and the decomposition mechanism in the zinc decomposition process (ZDP) of waste hard metals, which are composed mostly of tungsten carbide and cobalt, were evaluated. Zinc volatilization was suppressed and zinc vapor pressure was maintained in the reaction graphite crucible inside an electric furnace for ZDP. Reaction for 2 h at 650 °C completely decomposed waste hard metals that were over 30 mm thick. During the separation-decomposition of waste hard metals, molten zinc alloy formed a liquid composed of a mixture of γ-β1 phases at the cobalt binder layer (reaction interface). The reacted zone expanded and the waste hard metal layer was decomposed and separated horizontally from the hard metal. Zinc used in the ZDP process was almost completely removed and collected by decantation and a volatilization-collection process at 1000 °C. The small amount of zinc remaining in the fully decomposed tungsten carbide-cobalt powder was removed using a phosphate solution, which dissolves cobalt only slowly.
Investigating hydrogel dosimeter decomposition by chemical methods
International Nuclear Information System (INIS)
Jordan, Kevin
2015-01-01
The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products
The Policy Significance of Inequality Decompositions
Kanbur, Ravi
2003-01-01
Economists are now familiar with “between” and “within” group inequality decompositions, for race, gender, spatial units, etc. But what exactly is the normative significance of the empirical results produced by these decompositions? This paper raises some basic questions about policy interpretations of decompositions that are found in the literature.
International Nuclear Information System (INIS)
Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki
2003-01-01
The purpose of this study was to investigate the feasibility of the autoregressive moving average (ARMA) model for quantification of cerebral blood flow (CBF) with dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI) in comparison with deconvolution analysis based on singular value decomposition (DA-SVD). Using computer simulations, we generated a time-dependent concentration of the contrast agent in the volume of interest (VOI) from the arterial input function (AIF) modeled as a gamma-variate function under various CBFs, cerebral blood volumes and signal-to-noise ratios (SNRs) for three different types of residue function (exponential, triangular, and box-shaped). We also considered the effects of delay and dispersion in AIF. The ARMA model and DA-SVD were used to estimate CBF values from the simulated concentration-time curves in the VOI and AIFs, and the estimated values were compared with the assumed values. We found that the CBF value estimated by the ARMA model was more sensitive to the SNR and the delay in AIF than that obtained by DA-SVD. Although the ARMA model considerably overestimated CBF at low SNRs, it estimated the CBF more accurately than did DA-SVD at high SNRs for the exponential or triangular residue function. We believe this study will contribute to an understanding of the usefulness and limitations of the ARMA model when applied to quantification of CBF with DSC-MRI. (author)
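The DA-SVD baseline referenced above can be sketched on noiseless synthetic data. The AIF shape, truncation threshold, and flow value below are illustrative assumptions, not the simulation settings of the study: the convolution matrix is built from the AIF, small singular values are discarded, and CBF is read off as the peak of the flow-scaled residue function.

```python
import numpy as np

def svd_deconvolve(aif, conc, dt, threshold=0.05):
    """Truncated-SVD deconvolution in the spirit of DA-SVD: build the
    convolution matrix from the AIF, zero singular values below
    `threshold` * s_max (noise suppression), and return the peak of the
    flow-scaled residue function as the CBF estimate."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > threshold * s[0], 1.0 / s, 0.0)
    residue = Vt.T @ (s_inv * (U.T @ conc))
    return residue.max()

# Noiseless toy problem: exponential AIF and residue with a known flow value.
dt, t = 1.0, np.arange(60.0)
aif = np.exp(-t / 5.0)
cbf_true = 0.6
conc = dt * np.convolve(aif, cbf_true * np.exp(-t / 12.0))[:60]
cbf_est = svd_deconvolve(aif, conc, dt)
print(round(cbf_est, 3))   # recovers 0.6 on clean data
```

With noisy data the threshold trades noise amplification against bias, which is exactly the sensitivity the study probes.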
Using DEDICOM for completely unsupervised part-of-speech tagging.
Energy Technology Data Exchange (ETDEWEB)
Chew, Peter A.; Bader, Brett William; Rozovskaya, Alla (University of Illinois, Urbana, IL)
2009-02-01
A standard and widespread approach to part-of-speech tagging is based on Hidden Markov Models (HMMs). An alternative approach, pioneered by Schuetze (1993), induces parts of speech from scratch using singular value decomposition (SVD). We introduce DEDICOM as an alternative to SVD for part-of-speech induction. DEDICOM retains the advantages of SVD in that it is completely unsupervised: no prior knowledge is required to induce either the tagset or the associations of terms with tags. However, unlike SVD, it is also fully compatible with the HMM framework, in that it can be used to estimate emission- and transition-probability matrices which can then be used as the input for an HMM. We apply the DEDICOM method to the CONLL corpus (CONLL 2000) and compare the output of DEDICOM to the part-of-speech tags given in the corpus, and find that the correlation (almost 0.5) is quite high. Using DEDICOM, we also estimate part-of-speech ambiguity for each term, and find that these estimates correlate highly with part-of-speech ambiguity as measured in the original corpus (around 0.88). Finally, we show how the output of DEDICOM can be evaluated and compared against the more familiar output of supervised HMM-based tagging.
A general approach to regularizing inverse problems with regional data using Slepian wavelets
Michel, Volker; Simons, Frederik J.
2017-12-01
Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth’s surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated to an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable also to those inverse problems where the data are regionally given only. In particular, wavelet-based multiscale techniques can be used. An example for the latter case is elaborated theoretically and tested on two synthetic numerical examples.
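As a generic illustration of the kind of svd-based regularization the Slepian construction feeds into, the following truncated-SVD sketch inverts a made-up smoothing operator. The kernel, grid, rank, and noise level are arbitrary choices, not the authors' setup.

```python
import numpy as np

def tsvd_regularize(K, data, r):
    """Truncated-SVD regularization: invert a compact (ill-conditioned)
    operator K using only its r dominant singular triples."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return Vt[:r].T @ ((U[:, :r].T @ data) / s[:r])

rng = np.random.default_rng(2)
x_grid = np.linspace(0.0, 1.0, 50)
# Severely ill-conditioned forward operator: a Gaussian smoothing kernel.
K = np.exp(-80.0 * (x_grid[:, None] - x_grid[None, :])**2)
x_true = np.sin(2 * np.pi * x_grid)
data = K @ x_true + 1e-6 * rng.standard_normal(50)

naive = np.linalg.pinv(K, rcond=1e-15) @ data   # near-unregularized inverse
reg = tsvd_regularize(K, data, r=10)            # keep 10 dominant modes
print(np.linalg.norm(reg - x_true) < np.linalg.norm(naive - x_true))
```

The truncation index plays the role that the regularization parameter (or wavelet scale) plays in the paper's multiscale schemes.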
Dictionary-Based Tensor Canonical Polyadic Decomposition
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
Variance decomposition in stochastic simulators
Le Maître, O. P.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
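The variance-decomposition step can be illustrated with a generic pick-freeze Sobol estimator on a toy additive function. This is a standard Saltelli-style sketch, not the paper's Poisson-process reformulation of reaction channels; the model, sample size, and seed are arbitrary.

```python
import numpy as np

def sobol_first_order(f, d, n=200_000, seed=6):
    """Pick-freeze estimator of first-order Sobol indices:
    S_i = E[f(B) * (f(AB_i) - f(A))] / Var[f], where AB_i takes column i
    from B and all other columns from A."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]        # freeze coordinate i from the second sample
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive toy model: X0 contributes 4x the variance of X1, so S ≈ [0.8, 0.2].
f = lambda X: 2.0 * X[:, 0] + X[:, 1]
S = sobol_first_order(f, d=2)
print(np.round(S, 2))
```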
Multiple Descriptions Using Sparse Decompositions
DEFF Research Database (Denmark)
Jensen, Tobias Lindstrøm; Østergaard, Jan; Dahl, Joachim
2010-01-01
In this paper, we consider the design of multiple descriptions (MDs) using sparse decompositions. In a description erasure channel only a subset of the transmitted descriptions is received. The MD problem concerns the design of the descriptions such that they individually approximate the source... first-order method to the proposed convex problem such that we can solve large-scale instances for image sequences...
Time-Frequency Decomposition of an Ultrashort Pulse: Wavelet Decomposition
Directory of Open Access Journals (Sweden)
M. Khelladi
2008-04-01
Full Text Available An efficient numerical algorithm is presented for the numerical modeling of the propagation of ultrashort pulses with arbitrary temporal and frequency characteristics through linear homogeneous dielectrics. The consequences of proper sampling of the spectral phase in pulse propagation and its influence on the efficiency of computation are discussed in detail. The numerical simulation presented here is capable of analyzing the pulse in the temporal-frequency domain. As an example, pulse propagation effects such as temporal and spectral shifts, pulse broadening effects, asymmetry and chirping in dispersive media are demonstrated for wavelet decomposition.
Automatic analysis of multichannel time series data applied to MHD fluctuations
International Nuclear Information System (INIS)
Pretty, D.G.; Blackwell, B.D.; Detering, F.; Howard, J.; Oliver, D.; Hegland, M.; Hole, M.J.; Harris, J.H.
2008-01-01
We present a data mining technique for the analysis of multichannel oscillatory time-series data and show an application using poloidal arrays of magnetic sensors installed in the H-1 heliac. The procedure is highly automated, and scales well to large datasets. In a preprocessing step, the time-series data are split into short time segments to provide time resolution, and each segment is represented by a singular value decomposition (SVD). By comparing power spectra of the temporal singular vectors, singular values are grouped into subsets which define fluctuation structures. Thresholds for the normalised energy of the fluctuation structure and the normalised entropy of the SVD are used to filter the dataset. We assume that distinct classes of fluctuations are localised in the space of phase differences (n, n+1) between each pair of nearest-neighbour channels. An expectation maximisation (EM) clustering algorithm is used to locate the distinct classes of fluctuations, and a cluster tree mapping is used to visualise the results. Different classes of fluctuations in H-1 distinguished by this procedure are shown to be associated with MHD activity around separate resonant surfaces, with corresponding toroidal and poloidal mode numbers. Equally interesting are some clusters that do not exhibit this behaviour. (author)
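The preprocessing step can be sketched on synthetic data. The channel count, sampling rate, mode pattern, and noise level below are arbitrary stand-ins for the H-1 sensor arrays: SVD a short multichannel segment, then inspect the normalised energy of the leading structure and the spectral peak of its temporal singular vector.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 256, 256
t = np.arange(n) / fs
# 8-channel segment: one coherent 20 Hz fluctuation structure plus sensor noise.
channel_weights = np.cos(np.linspace(0.0, np.pi, 8))   # toy poloidal pattern
segment = np.outer(channel_weights, np.sin(2 * np.pi * 20 * t))
segment += 0.1 * rng.standard_normal(segment.shape)

U, s, Vt = np.linalg.svd(segment, full_matrices=False)
# Normalised energy of the leading structure, and the power-spectrum peak of
# its temporal singular vector (the basis for grouping singular values).
energy = s[0]**2 / np.sum(s**2)
spectrum = np.abs(np.fft.rfft(Vt[0]))**2
peak_hz = np.fft.rfftfreq(n, d=1.0 / fs)[np.argmax(spectrum)]
print(energy > 0.5, peak_hz)
```

Singular values whose temporal vectors share such a spectral peak would be grouped into one fluctuation structure.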
Sparse matrix decompositions for clustering
Blumensath, Thomas
2014-01-01
Clustering can be understood as a matrix decomposition problem, where a feature vector matrix is represented as a product of two matrices, a matrix of cluster centres and a matrix with sparse columns, where each column assigns individual features to one of the cluster centres. This matrix factorisation is the basis of classical clustering methods, such as those based on non-negative matrix factorisation but can also be derived for other methods, such as k-means clustering. In this paper we de...
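The factorisation view can be made concrete with a small sketch: a plain k-means written as X ≈ CS, where C holds centres and S is a 0/1 assignment matrix with exactly one nonzero per column. The data, dimensions, and initialisation are invented for the demo.

```python
import numpy as np

def kmeans_as_factorization(X, init_cols, iters=20):
    """Clustering as a decomposition X ≈ C S: C holds cluster centres as
    columns, S is a sparse 0/1 matrix assigning each feature vector
    (column of X) to exactly one centre."""
    centres = X[:, init_cols].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centres[:, :, None]) ** 2).sum(axis=0)  # k x n
        assign = d.argmin(axis=0)
        for j in range(centres.shape[1]):
            if np.any(assign == j):
                centres[:, j] = X[:, assign == j].mean(axis=1)
    S = np.zeros((centres.shape[1], X.shape[1]))
    S[assign, np.arange(X.shape[1])] = 1.0
    return centres, S

rng = np.random.default_rng(4)
# Two well-separated blobs of 30 points each, as columns of a 2 x 60 matrix.
X = np.hstack([rng.standard_normal((2, 30)) + 10,
               rng.standard_normal((2, 30)) - 10])
C, S = kmeans_as_factorization(X, init_cols=[0, 59])  # naive but adequate init
print((S.sum(axis=0) == 1).all(), np.linalg.norm(X - C @ S) < 0.2 * np.linalg.norm(X))
```

Relaxing the 0/1 constraint on S (e.g. to nonnegativity) recovers the NMF-style methods mentioned in the abstract.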
Mathematical modelling of the decomposition of explosives
International Nuclear Information System (INIS)
Smirnov, Lev P
2010-01-01
Studies on mathematical modelling of the molecular and supramolecular structures of explosives and the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that solution of problems related to the decomposition kinetics of explosives requires the use of a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry instead of empirical approach.
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...
Parallel QR Decomposition for Electromagnetic Scattering Problems
National Research Council Canada - National Science Library
Boleng, Jeff
1997-01-01
This report introduces a new parallel QR decomposition algorithm. Test results are presented for several problem sizes, numbers of processors, and data from the electromagnetic scattering problem domain...
Infrared multiphoton absorption and decomposition
International Nuclear Information System (INIS)
Evans, D.K.; McAlpine, R.D.
1984-01-01
The discovery of infrared laser induced multiphoton absorption (IRMPA) and decomposition (IRMPD) by Isenor and Richardson in 1971 generated a great deal of interest in these phenomena. This interest was increased with the discovery by Ambartzumian, Letokhov, Ryabov and Chekalin that isotopically selective IRMPD was possible. One of the first speculations about these phenomena was that it might be possible to excite a particular mode of a molecule with the intense infrared laser beam and cause decomposition or chemical reaction by channels which do not predominate thermally, thus providing new synthetic routes for complex chemicals. The potential applications to isotope separation and novel chemistry stimulated efforts to understand the underlying physics and chemistry of these processes. At ICOMP I in 1977 and at ICOMP II in 1980, several authors reviewed the current understanding of IRMPA and IRMPD as well as the particular aspect of isotope separation. There continues to be a great deal of effort devoted to understanding IRMPA and IRMPD, and we briefly review some aspects of these efforts with particular emphasis on progress since ICOMP II. 31 references
Variance reduction and cluster decomposition
Liu, Keh-Fei; Liang, Jian; Yang, Yi-Bo
2018-02-01
It is a common problem in lattice QCD calculations of the mass of a hadron with an annihilation channel that the signal falls off in time while the noise remains constant. In addition, the disconnected insertion calculation of the three-point function and the calculation of the neutron electric dipole moment with the θ term suffer from a noise problem due to the √V fluctuation. We identify these problems as having the same origin, and the √V problem can be overcome by utilizing the cluster decomposition principle. We demonstrate this by considering the calculations of the glueball mass, the strangeness content in the nucleon, and the CP violation angle in the nucleon due to the θ term. It is found that for lattices with physical sizes of 4.5-5.5 fm, the statistical errors of these quantities can be reduced by a factor of 3 to 4. The systematic errors can be estimated from the Akaike information criterion. For the strangeness content, we find that the systematic error is of the same size as that of the statistical one when the cluster decomposition principle is utilized. This results in a 2 to 3 times reduction in the overall error.
Decomposition in pelagic marine ecosystems
International Nuclear Information System (INIS)
Lucas, M.I.
1986-01-01
During the decomposition of plant detritus, complex microbial successions develop which are dominated in the early stages by a number of distinct bacterial morphotypes. The microheterotrophic community rapidly becomes heterogeneous and may include cyanobacteria, fungi, yeasts and bactivorous protozoans. Microheterotrophs in the marine environment may have a biomass comparable to that of all other heterotrophs, and their significance as a resource to higher trophic orders, and in the regeneration of nutrients, particularly nitrogen, that support 'regenerated' primary production, has aroused both attention and controversy. Numerous methods have been employed to measure heterotrophic bacterial production and activity. The most widely used involve estimates of ¹⁴C-glucose uptake; the frequency of dividing cells; the incorporation of ³H-thymidine and exponential population growth in predator-reduced filtrates. Recent attempts to model decomposition processes and C and N fluxes in pelagic marine ecosystems are described. This review examines the most sensitive components and predictions of the models with particular reference to estimates of bacterial production, net growth yield and predictions of N cycling determined by ¹⁵N methodology. Direct estimates of nitrogen (and phosphorus) flux through phytoplanktonic and bacterioplanktonic communities using ¹⁵N (and ³²P) tracer methods are likely to provide more realistic measures of nitrogen flow through planktonic communities
Symmetric Decomposition of Asymmetric Games.
Tuyls, Karl; Pérolat, Julien; Lanctot, Marc; Ostrovski, Georg; Savani, Rahul; Leibo, Joel Z; Ord, Toby; Graepel, Thore; Legg, Shane
2018-01-17
We introduce new theoretical insights into two-population asymmetric games allowing for an elegant symmetric decomposition into two single population symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B) can be decomposed into its symmetric counterparts by envisioning and investigating the payoff tables (A and B) that constitute the asymmetric game, as two independent, single population, symmetric games. We reveal several surprising formal relationships between an asymmetric two-population game and its symmetric single population counterparts, which facilitate a convenient analysis of the original asymmetric game due to the dimensionality reduction of the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the symmetric counterpart game determined by payoff table A, and x is a Nash equilibrium of the symmetric counterpart game determined by payoff table B. Also the reverse holds and combinations of Nash equilibria of the counterpart games form Nash equilibria of the asymmetric game. We illustrate how these formal relationships aid in identifying and analysing the Nash structure of asymmetric games, by examining the evolutionary dynamics of the simpler counterpart games in several canonical examples.
Yousefi, Bardia; Sfarra, Stefano; Ibarra Castanedo, Clemente; Maldague, Xavier P. V.
2017-09-01
Thermal and infrared imagery has brought considerable developments to the Non-Destructive Testing (NDT) area. Here, a thermography method for NDT specimen inspection is addressed by applying a technique for computing an eigen-decomposition, referred to as Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT). The proposed approach uses a computationally cheaper alternative for estimating the covariance matrix and Singular Value Decomposition (SVD) to obtain the result of Principal Component Thermography (PCT), and ultimately segments the defects in the specimens using a color-based K-medoids clustering approach. The problem of computational expense for high-dimensional thermal image acquisition is also investigated. Three types of specimens (CFRP, Plexiglas and aluminium) were used for comparative benchmarking. The results indicate promising performance and confirm the outlined properties.
Aerial image simulation for partial coherent system with programming development in MATLAB
Hasan, Md. Nazmul; Rahman, Md. Momtazur; Udoy, Ariful Banna
2014-10-01
Aerial images can be calculated by either Abbe's method or the sum of coherent systems decomposition (SOCS) method for a partially coherent system. This paper introduces a MATLAB program that converts the analytical representation of Abbe's method to matrix form, which benefits both Abbe's method and SOCS since matrix calculation is easier than double integration over the object plane or pupil plane. First, a singular matrix P is derived from the pupil function and the effective light source in the spatial frequency domain. By applying Singular Value Decomposition (SVD) to the matrix P, eigenvalues and eigenfunctions are obtained. The aerial image can then be computed from the eigenvalues and eigenfunctions without calculating the Transmission Cross Coefficient (TCC). The final aerial image is almost identical to the original cross mask, and the intensity distribution on the image plane is almost uniform across the linewidth of the mask.
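A minimal 1-D sketch of the SOCS idea follows. This is not the paper's MATLAB code; the pupil cutoff, source points, and mask are arbitrary choices. It builds a TCC-like matrix from a coherent pupil and a discretised effective source, eigen-decomposes it, and sums eigenvalue-weighted coherent sub-images.

```python
import numpy as np

n = 64
f = np.fft.fftfreq(n)
pupil = (np.abs(f) < 0.15).astype(float)          # coherent low-pass pupil
source_shifts = [-2, -1, 0, 1, 2]                 # discretised effective source
# The "matrix P" (a TCC-like matrix): sum over source points of shifted
# pupil outer products; it is symmetric positive semi-definite.
W = sum(np.outer(np.roll(pupil, sh), np.roll(pupil, sh)) for sh in source_shifts)
lam, phi = np.linalg.eigh(W)

mask = np.zeros(n)
mask[24:40] = 1.0                                 # a simple slit opening
mask_f = np.fft.fft(mask)
# SOCS: the aerial image is a sum of eigenvalue-weighted coherent sub-images,
# each obtained by filtering the mask spectrum with one eigenfunction.
image = np.zeros(n)
for k in range(n):
    if lam[k] > 1e-12:
        image += lam[k] * np.abs(np.fft.ifft(phi[:, k] * mask_f))**2
print(image.argmax() in range(24, 40))   # intensity peaks inside the opening
```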
Early stage litter decomposition across biomes
Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. 
Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; et al.
2018-01-01
Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...
Climate history shapes contemporary leaf litter decomposition
Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford
2015-01-01
Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like those of other organisms, the traits of decomposers and their communities are shaped not just by the contemporary climate but also by their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...
Spinodal decomposition in fine grained materials
Indian Academy of Sciences (India)
Unknown
Spinodal decomposition in fine grained materials. H RAMANARAYAN and T A ABINANDANAN*. Department of Metallurgy, Indian Institute of Science, Bangalore 560 012, India. Abstract. We have used a phase field model to study spinodal decomposition in polycrystalline materials in which the grain size is of the same ...
Spinodal decomposition in fine grained materials
Indian Academy of Sciences (India)
We have used a phase field model to study spinodal decomposition in polycrystalline materials in which the grain size is of the same order of magnitude as the characteristic decomposition wavelength (λ_SD). In the spirit of phase field models, each grain (i) in our model has an order parameter (η_i) associated with it; ...
Nutrient Dynamics and Litter Decomposition in Leucaena ...
African Journals Online (AJOL)
Nutrient contents and rate of litter decomposition were investigated in Leucaena leucocephala plantation in the University of Agriculture, Abeokuta, Ogun State, Nigeria. Litter bag technique was used to study the pattern and rate of litter decomposition and nutrient release of Leucaena leucocephala. Fifty grams of oven-dried ...
Moisture controls decomposition rate in thawing tundra
C.E. Hicks-Pries; E.A.G. Schuur; S.M. Natali; J.G. Vogel
2013-01-01
Permafrost thaw can affect decomposition rates by changing environmental conditions and litter quality. As permafrost thaws, soils warm and thermokarst (ground subsidence) features form, causing some areas to become wetter while other areas become drier. We used a common substrate to measure how permafrost thaw affects decomposition rates in the surface soil in a...
Decomposition and flame structure of hydrazinium nitroformate
Louwers, J.; Parr, T.; Hanson-Parr, D.
1999-01-01
The decomposition of hydrazinium nitroformate (HNF) was studied in a hot quartz cell and by dropping small amounts of HNF on a hot plate. The species formed during the decomposition were identified by ultraviolet-visible absorption experiments. These experiments reveal that first HONO is formed. The
An analysis of scatter decomposition
Nicol, David M.; Saltz, Joel H.
1990-01-01
A formal analysis of a mapping method known as scatter decomposition (SD) is presented. SD divides an irregular domain into many equal-size pieces and distributes them modularly among processors. It is shown that, if a correlation in workload is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance; if the workload process is stationary Gaussian and the correlation function decreases linearly in distance to zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. Finally, if the correlation function decreases linearly across the entire domain, then (among all mappings that assign an equal number of domain pieces to each processor) SD minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with cases where a coarser granularity actually achieves better load balance.
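The load-balancing intuition can be checked numerically on a toy 1-D workload with a smooth spatial gradient (sizes and the load profile are arbitrary): modular scattering spreads the correlated load across processors far more evenly than a contiguous block mapping.

```python
import numpy as np

def scatter_map(n_pieces, n_procs):
    """Scatter decomposition: piece i goes to processor i mod p, so pieces
    that are close in the domain (and correlated in load) are spread out."""
    return np.arange(n_pieces) % n_procs

load = np.linspace(0.0, 1.0, 64)        # smoothly varying (correlated) workload
procs = 4
owner = scatter_map(64, procs)
scatter_loads = np.array([load[owner == p].sum() for p in range(procs)])
block_loads = load.reshape(procs, -1).sum(axis=1)   # contiguous-block mapping
print(scatter_loads.std() < block_loads.std())      # scattering balances better
```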
Multilinear operators for higher-order decompositions.
Energy Technology Data Exchange (ETDEWEB)
Kolda, Tamara Gibson
2006-04-01
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
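A minimal NumPy sketch of the Kruskal operator as the abstract describes it (the sum of outer products of corresponding columns of N matrices); the matrix shapes here are invented, and this is not Kolda's own formulation, which is operator notation rather than code.

```python
import numpy as np

def kruskal_operator(matrices):
    """Sum of outer products of corresponding columns of N matrices.

    Given A_1 (I_1 x R), ..., A_N (I_N x R), returns the I_1 x ... x I_N
    tensor sum_r a1_r o a2_r o ... o aN_r, i.e. the tensor expressed by
    a PARAFAC/CANDECOMP decomposition with R components.
    """
    R = matrices[0].shape[1]
    shape = tuple(M.shape[0] for M in matrices)
    T = np.zeros(shape)
    for r in range(R):
        outer = matrices[0][:, r]
        for M in matrices[1:]:
            outer = np.multiply.outer(outer, M[:, r])
        T += outer
    return T

# Rank-1 sanity check: one component reduces to a plain outer product.
rng = np.random.default_rng(0)
a, b, c = rng.random((4, 1)), rng.random((5, 1)), rng.random((3, 1))
T = kruskal_operator([a, b, c])
```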
Management intensity alters decomposition via biological pathways
Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory
2011-01-01
Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage-or extent-of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future
Vortex lattice theory: A linear algebra approach
Chamoun, George C.
Vortex lattices are prevalent in a large class of physical settings that are characterized by different mathematical models. We present a coherent and generalized Hamiltonian fluid-mechanics-based formulation that reduces all vortex lattices to a classic problem in linear algebra for a non-normal matrix A. Via Singular Value Decomposition (SVD), the solution lies in the null space of the matrix (i.e., we require nullity(A) > 0) and in the distribution of its singular values. We demonstrate that this approach provides a good model for various types of vortex lattices, and makes it possible to extract a rich amount of information on them. The contributions of this thesis can be classified into four main points. The first is asymmetric equilibria. A 'Brownian ratchet' construct was used which converged to asymmetric equilibria via a random walk scheme that utilized the smallest singular value of A. Distances between configurations and equilibria were measured using the Frobenius norm ||·||_F and 2-norm ||·||_2, and conclusions were made on the density of equilibria within the general configuration space. The second contribution used Shannon entropy, which we interpret as a scalar measure of the robustness, or likelihood, of lattices to occur in a physical setting. Third, an analytic model was produced for vortex street patterns on the sphere by using SVD in conjunction with expressions for the center-of-vorticity vector and angular velocity. Equilibrium curves within the configuration space were presented as a function of the geometry, and pole vortices were shown to have a critical role in the formation and destruction of vortex streets. The fourth contribution entailed a more complete perspective of the streamline topology of vortex streets, linking the bifurcations to critical points on the equilibrium curves.
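The null-space condition nullity(A) > 0 can be checked numerically from the SVD, as in this hedged NumPy sketch; the matrix below is a synthetic rank-deficient example, not an actual vortex-lattice configuration matrix.

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Orthonormal basis for the null space of A, computed from its SVD.

    Equilibria correspond to vectors x with A x = 0, i.e. right singular
    vectors whose singular values are numerically zero.
    """
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # columns span the null space

# A deliberately rank-deficient 4x3 matrix: third column = col0 + col1.
rng = np.random.default_rng(0)
A = rng.random((4, 2))
A = np.hstack([A, (A[:, 0] + A[:, 1])[:, None]])
N = null_space_basis(A)
```

The smallest singular value also measures how close a configuration is to an equilibrium, which is the quantity the random-walk scheme in the thesis drives toward zero.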
Solid state exchange reactions and thermal decomposition
Energy Technology Data Exchange (ETDEWEB)
Albarran, G.; Archundia, C.; Maddock, A.G.
1982-01-01
A further study of exchange of the cobalt atoms in solid Co(H₂O)₆(CoEDTA)₂·4H₂O has been made. The exchange is more easily measured when the compound has been γ-irradiated before heating. Without irradiation the exchange is complicated by substantial concurrent thermal decomposition. Vacuum dehydration to the tetrahydrate can be effected at 366 K without appreciable exchange. A relation between exchange, annealing of radiolytic decomposition and thermal decomposition in such compounds is suggested.
Decomposition of lake phytoplankton. 1
International Nuclear Information System (INIS)
Hansen, L.; Krog, G.F.; Soendergaard, M.
1986-01-01
Short-term (24 h) and long-term (4-6 d) decomposition of phytoplankton cells were investigated under in situ conditions in four Danish lakes. Carbon-14-labelled, dead algae were exposed to sterile or natural lake water, and the dynamics of cell lysis and bacterial utilization of the leached products were followed. The lysis process was dominated by an initial fast water extraction: within 2 to 4 h, from 4 to 34% of the labelled carbon leached from the algal cells. After 24 h, from 11 to 43% of the initial particulate carbon was found as dissolved carbon in the experiments with sterile lake water; after 4 to 6 d the leaching was from 67 to 78% of the initial ¹⁴C. The leached compounds were utilized by bacteria. A comparison of the incubations using sterile and natural water showed that a mean of 71% of the lysis products was metabolized by microorganisms within 24 h. In two experiments the uptake rate equalled the leaching rate. (author)
Decomposition of lake phytoplankton. 2
International Nuclear Information System (INIS)
Hansen, L.; Krog, G.F.; Soendergaard, M.
1986-01-01
The lysis process of phytoplankton was followed in 24 h incubations in three Danish lakes. By means of gel-chromatography it was shown that the dissolved carbon leaching from different algal groups differed in molecular weight composition. Three distinct molecular weight classes (>10,000; 700 to 10,000 and < 700 Daltons) leached from blue-green algae in almost equal proportion. The lysis products of spring-bloom diatoms included only the two smaller size classes, and the molecules between 700 and 10,000 Daltons dominated. Measurements of cell content during decomposition of the diatoms revealed polysaccharides and low molecular weight compounds to dominate the lysis products. No proteins were leached during the first 24 h after cell death. By incubating the dead algae in natural lake water, it was possible to detect a high bacterial affinity towards molecules between 700 and 10,000 Daltons, although the other size classes were also utilized. Bacterial transformation of small molecules to larger molecules could be demonstrated. (author)
Aggregation Processes with Catalysis-Driven Decomposition
International Nuclear Information System (INIS)
Xiang Rong; Zhuang Youyi; Ke Jianhong; Lin Zhenquan
2009-01-01
We propose a three-species aggregation model with catalysis-driven decomposition. Based on the mean-field rate equations, we investigate the evolution behavior of the system with the size-dependent catalysis-driven decomposition rate J(i; j; k) = Jijk^v and constant aggregation rates. The results show that the cluster size distribution of the species without decomposition can always obey the conventional scaling law in the case of 0 ≤ v ≤ 1, while the kinetic evolution of the decomposed species depends crucially on the index v. Moreover, the total size of the species without decomposition can keep a nonzero value at large times, while the total size of the decomposed species decreases exponentially with time and vanishes finally. (general)
Observation of spinodal decomposition in nuclei?
International Nuclear Information System (INIS)
Guarnera, A.; Colonna, M.; Chomaz, Ph.
1996-01-01
Multifragmentation in heavy ion collisions is investigated in the framework of mean-field theory, in order to gain information on the equation of state of nuclear matter. Spinodal decomposition in nuclei is studied. (K.A.)
Modeling Decomposition of Unconfined Rigid Polyurethane Foam
National Research Council Canada - National Science Library
Hobbs, Michael
1999-01-01
The decomposition of unconfined rigid polyurethane foam has been modeled by a kinetic bond-breaking scheme describing degradation of a primary polymer and formation of a thermally stable secondary polymer...
A Decomposition Theorem for Finite Automata.
Santa Coloma, Teresa L.; Tucci, Ralph P.
1990-01-01
Automata theory, a branch of theoretical computer science, is described. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)
Joint Matrices Decompositions and Blind Source Separation
Czech Academy of Sciences Publication Activity Database
Chabriel, G.; Kleinsteuber, M.; Moreau, E.; Shen, H.; Tichavský, Petr; Yeredor, A.
2014-01-01
Roč. 31, č. 3 (2014), s. 34-43 ISSN 1053-5888 R&D Projects: GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : joint matrices decomposition * tensor decomposition * blind source separation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 5.852, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/tichavsky-0427607.pdf
Lagrange relaxation and Dantzig-Wolfe decomposition
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui
1989-01-01
The paper concerns a large-scale linear programming problem having a block-diagonal structure with coupling constraints. It is shown that there are deep connections between the Lagrange relaxation techniques and the Dantzig-Wolfe decomposition methods.
Spatial relationships between weather and air pollution in China
Zhang, L.; Liu, Y.
2017-12-01
Spatial patterns are important features for understanding regional air quality variability. Statistical analysis tools, such as empirical orthogonal function (EOF) analysis, have been extensively used to identify and classify spatial patterns. These tools, however, do not directly reveal the related weather conditions. This study used singular value decomposition (SVD) to identify spatial air pollution index (API) patterns related to meteorological conditions in China, one of the world's regions facing catastrophic air pollution. The monthly API and four meteorological variables (precipitation, surface air temperature, humidity, and wind speed) during 2001-2012 in 42 cities in China were used. The two leading SVD spatial patterns display API anomalies with the same sign across China and with opposite signs between northern and southern China, respectively. The meteorological variables have different relationships with these patterns. For the first pattern, wind speed is the most important. The key regions, where the correlations between the API field and the wind speed's SVD time series are significant at the 99% confidence level, are found nationwide. Precipitation and air temperature are also important in the southern and northern portions of eastern China, respectively. For the second pattern, the key regions occur mainly in northern China for temperature and humidity and in southern China for wind speed. Air humidity has the largest contribution to this pattern. The weather-API relationships characterized by these spatial patterns are useful for selecting factors for statistical air quality prediction models and determining the geographic regions with high prediction skill.
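The coupled-pattern analysis the abstract describes (SVD of the cross-covariance between two fields, often called maximum covariance analysis) can be sketched as follows. Synthetic standardised anomalies stand in for the real API and wind-speed records; the city count and record length match the abstract, everything else is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months, n_cities = 144, 42        # monthly records 2001-2012, 42 cities

# Synthetic standardised anomaly fields (time x city); in the study these
# would be monthly API and, e.g., wind-speed anomalies at the same stations.
api  = rng.standard_normal((n_months, n_cities))
wind = rng.standard_normal((n_months, n_cities))

# SVD of the temporal cross-covariance matrix yields paired spatial patterns.
C = api.T @ wind / (n_months - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

api_pattern, wind_pattern = U[:, 0], Vt[0]   # leading coupled spatial patterns
api_pc  = api  @ api_pattern                 # expansion coefficient time series
wind_pc = wind @ wind_pattern

# Squared covariance fraction: share of coupling explained by mode 1.
scf = s[0] ** 2 / np.sum(s ** 2)
```

Correlating `wind_pc` against the API field at each city would then reveal the "key regions" the abstract refers to.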
Directory of Open Access Journals (Sweden)
Lee Yun-Shien
2008-03-01
Full Text Available Abstract Background The hierarchical clustering tree (HCT) with a dendrogram [1] and the singular value decomposition (SVD) with a dimension-reduced representative map [2] are popular methods for two-way sorting of the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. Results This study proposes a flipping mechanism for a conventional agglomerative HCT using rank-two ellipse (R2E) seriation, an improved SVD-based sorting algorithm by Chen [3], as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method, so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. Conclusion We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties but also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for a comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at http://gap.stat.sinica.edu.tw/Software/GAP.
Monte-Carlo error analysis in x-ray spectral deconvolution
International Nuclear Information System (INIS)
Shirk, D.G.; Hoffman, N.M.
1985-01-01
The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels.
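The SVD unfolding step, with nonnegativity enforced afterwards as the abstract describes, might look like the outline below. The response matrix, data, and truncation level are synthetic stand-ins, not the authors' setup; with m < n the truncated SVD returns the minimum-norm solution.

```python
import numpy as np

def tsvd_unfold(R, d, k):
    """Minimum-norm truncated-SVD solution of the system R x = d."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

rng = np.random.default_rng(1)
m, n = 6, 20                          # m detector channels < n energy points
R = rng.random((m, n))                # detector response matrix (made up)
x_true = np.abs(rng.standard_normal(n))
d = R @ x_true + 0.01 * rng.standard_normal(m)

x_min_norm = tsvd_unfold(R, d, k=m)   # keeps all m modes: fits the data exactly
x_reg = np.clip(tsvd_unfold(R, d, k=4), 0.0, None)  # truncate, THEN enforce x >= 0
```

Applying the nonnegativity clip after the deconvolution, rather than constraining the solve itself, is what lets the residual against each channel expose inter-channel inconsistencies.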
3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS
Directory of Open Access Journals (Sweden)
Suranjan Ganguly
2014-02-01
Full Text Available In this paper, we present a novel approach to three-dimensional face recognition based on curvature maps extracted from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum. These curvature maps are used as features for 3D face recognition. The dimension of the feature vectors is reduced using the Singular Value Decomposition (SVD) technique: from the three computed SVD components, the non-negative values of the 'S' part are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are performed, one on the Mean and Maximum curvature pair and the other on the Gaussian and Mean curvature pair, and their results are compared for recognition rate. This automated 3D face recognition system is evaluated in several settings: frontal pose with expression and illumination variation, frontal faces together with registered faces, registered faces only, and faces registered from different pose orientations about the X, Y and Z axes. The 3D face images used in this work are taken from the FRAV3D database. Pose-varying 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, after which curvature mapping is applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
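Using the ranked non-negative singular values of a curvature map as a compact feature vector can be sketched as below. The maps here are random stand-ins for real Mean/Maximum curvature maps, and the truncation length k is an assumption, not a parameter from the paper.

```python
import numpy as np

def sv_feature(curvature_map, k=20):
    """Top-k singular values of a curvature map, used as a feature vector.

    np.linalg.svd returns singular values already sorted in descending
    order, and they are non-negative by construction (the 'S' of the SVD).
    """
    s = np.linalg.svd(curvature_map, compute_uv=False)
    return s[:k]

# Two hypothetical 100x100 curvature maps (e.g. a Mean/Maximum pair).
rng = np.random.default_rng(2)
mean_curv, max_curv = rng.random((100, 100)), rng.random((100, 100))
feature = np.concatenate([sv_feature(mean_curv), sv_feature(max_curv)])
```

The concatenated vector would then be fed to the neural-network classifier in place of the full 100x100 maps.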
A DDoS Attack Detection Method Based on Hybrid Heterogeneous Multiclassifier Ensemble Learning
Directory of Open Access Journals (Sweden)
Bin Jia
2017-01-01
Full Text Available The explosive growth of network traffic on the Internet, and the multiplicity of its types, have brought new and severe challenges to DDoS attack detection. To achieve a higher True Negative Rate (TNR), accuracy, and precision, and to guarantee the robustness, stability, and universality of the detection system, in this paper we propose a DDoS attack detection method based on hybrid heterogeneous multiclassifier ensemble learning and design a heuristic detection algorithm based on Singular Value Decomposition (SVD) to construct our detection system. Experimental results show that our detection method performs well in TNR, accuracy, and precision, so our algorithm has good detection performance for DDoS attacks. Comparisons with Random Forest, k-Nearest Neighbor (k-NN), and Bagging, comprising the component classifiers used alone both with and without SVD, show that our model is superior to state-of-the-art attack detection techniques in system generalization ability, detection stability, and overall detection performance.
Schroeder, M. A.
1980-01-01
A summary of a literature review on thermal decomposition of HMX and RDX is presented. The decomposition apparently follows first-order kinetics. Recommended values for Arrhenius parameters for HMX and RDX decomposition in the gaseous and liquid phases, and for decomposition of RDX in solution in TNT, are given. The apparent importance of autocatalysis is pointed out, as are some possible complications that may be encountered in interpreting, extending, or extrapolating kinetic data for these compounds from measurements carried out below their melting points to the higher temperatures and pressures characteristic of combustion.
Vibration fatigue using modal decomposition
Mršnik, Matjaž; Slavič, Janko; Boltežar, Miha
2018-01-01
Vibration-fatigue analysis deals with the material fatigue of flexible structures operating close to natural frequencies. Based on the uniaxial stress response, calculated in the frequency domain, the high-cycle fatigue model using the S-N curve material data and the Palmgren-Miner hypothesis of damage accumulation is applied. The multiaxial criterion is used to obtain the equivalent uniaxial stress response, followed by the spectral moment approach to the cycle-amplitude probability density estimation. Vibration-fatigue analysis thus relates fatigue analysis in the frequency domain to structural dynamics. However, once the stress response within a node is obtained, the physical model of the structure dictating that response is discarded and does not propagate through the fatigue-analysis procedure. The structural model can be used to evaluate how specific dynamic properties (e.g., damping, modal shapes) affect the damage intensity. A new approach based on modal decomposition is presented in this research that directly links the fatigue-damage intensity with the dynamic properties of the system. It thus offers valuable insight into how different modes of vibration contribute to the total damage to the material. A numerical study was performed showing good agreement between results obtained using the newly presented approach and those obtained using the classical method, especially with regard to the distribution of damage intensity and critical point location. The presented approach also offers orders-of-magnitude faster calculation than the conventional procedure. Furthermore, it can be applied in a straightforward way to strain experimental modal analysis results, taking advantage of experimentally measured strains.
Aridity and decomposition processes in complex landscapes
Ossola, Alessandro; Nyman, Petter
2015-04-01
Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site, and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
Decomposition of forest products buried in landfills
International Nuclear Information System (INIS)
Wang, Xiaoming; Padgett, Jennifer M.; Powell, John S.; Barlaz, Morton A.
2013-01-01
Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g⁻¹ dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than
Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent
Czech Academy of Sciences Publication Activity Database
Wall, D.H.; Bradford, M.A.; John, M.G.St.; Trofymow, J.A.; Behan-Pelletier, V.; Bignell, D.E.; Dangerfield, J.M.; Parton, W.J.; Rusek, Josef; Voigt, W.; Wolters, V.; Gardel, H.Z.; Ayuke, F. O.; Bashford, R.; Beljakova, O.I.; Bohlen, P.J.; Brauman, A.; Flemming, S.; Henschel, J.R.; Johnson, D.L.; Jones, T.H.; Kovářová, Marcela; Kranabetter, J.M.; Kutny, L.; Lin, K.-Ch.; Maryati, M.; Masse, D.; Pokarzhevskii, A.; Rahman, H.; Sabará, M.G.; Salamon, J.-A.; Swift, M.J.; Varela, A.; Vasconcelos, H.L.; White, D.; Zou, X.
2008-01-01
Roč. 14, č. 11 (2008), s. 2661-2677 ISSN 1354-1013 Institutional research plan: CEZ:AV0Z60660521; CEZ:AV0Z60050516 Keywords : climate decomposition index * decomposition * litter Subject RIV: EH - Ecology, Behaviour Impact factor: 5.876, year: 2008
Directory of Open Access Journals (Sweden)
Sheng-Ping Yan
2014-01-01
Full Text Available We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods which are both very effective and straightforward for solving the differential equations with local fractional derivative.
Tang, Yadan; Roberts, Charles A.; Perkins, Ryan T.; Wachs, Israel E.
2016-08-01
This study revisits the classic volcano curve for HCOOH decomposition by metal catalysts by taking a modern catalysis approach. The metal catalysts (Au, Ag, Cu, Pt, Pd, Ni, Rh, Co and Fe) were prepared by H2 reduction of the corresponding metal oxides. The number of surface active sites (Ns) was determined by formic acid chemisorption. In situ IR indicated that both monodentate and bidentate/bridged surface HCOO* were present on the metals. Heats of adsorption (ΔHads) for surface HCOO* values on metals were taken from recently reported DFT calculations. Kinetics for surface HCOO* decomposition (krds) were determined with TPD spectroscopy. Steady-state specific activity (TOF = activity/Ns) for HCOOH decomposition over the metals was calculated from steady-state activity (μmol/g-s) and Ns (μmol/g). Steady-state TOFs for HCOOH decomposition weakly correlated with surface HCOO* decomposition kinetics (krds) and ΔHads of surface HCOO* intermediates. The plot of TOF vs. ΔHads for HCOOH decomposition on metal catalysts does not reproduce the classic volcano curve, but shows that TOF depends on both ΔHads and decomposition kinetics (krds) of surface HCOO* intermediates. This is the first time that the classic catalysis study of HCOOH decomposition on metallic powder catalysts has been repeated since its original publication.
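The TOF normalisation described above (TOF = activity/Ns) is simple arithmetic; with invented numbers, purely for illustration, it reads:

```python
# Turnover frequency from steady-state activity and surface site density.
# Both values below are hypothetical, not measurements from the study.
activity_umol_per_g_s = 12.0   # steady-state HCOOH decomposition rate, umol/(g*s)
Ns_umol_per_g = 150.0          # surface HCOO* sites from chemisorption, umol/g
TOF = activity_umol_per_g_s / Ns_umol_per_g   # per-site rate, 1/s
```

Normalising by Ns rather than by catalyst mass is what makes activities comparable across metals with very different dispersions, which is the point of replotting the volcano curve on a TOF basis.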
Radar rainfall image repair techniques
Directory of Open Access Journals (Sweden)
Stephen M. Wesson
2004-01-01
Full Text Available There are various quality problems associated with radar rainfall data viewed in images, including ground clutter, beam blocking and anomalous propagation, to name a few. To obtain the best rainfall estimate possible, techniques for removing ground clutter (non-meteorological echoes that degrade radar data quality) from 2-D radar rainfall image data sets are presented here. These techniques concentrate on repairing the images in both a computationally fast and accurate manner, and are nearest neighbour techniques of two sub-types: Individual Target and Border Tracing. The contaminated data are estimated through Kriging, considered the optimal technique for the spatial interpolation of Gaussian data, where the 'screening effect' that occurs with the Kriging weighting distribution around target points is exploited to ensure computational efficiency. Matrix rank reduction techniques in combination with Singular Value Decomposition (SVD) are also suggested for finding an efficient solution to the Kriging equations that can cope with near-singular systems. Rainfall estimation at ground level from radar rainfall volume scan data is of interest and importance in earth-bound applications such as hydrology and agriculture. As an extension of the above, Ordinary Kriging is applied to three-dimensional radar rainfall data to estimate rainfall rate at ground level. Keywords: ground clutter, data infilling, Ordinary Kriging, nearest neighbours, Singular Value Decomposition, border tracing, computation time, ground level rainfall estimation
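One way to apply rank reduction via SVD to a near-singular Kriging system, as the abstract suggests, is the truncated-SVD solve sketched below. The covariance matrix is a toy: two nearly coincident sample points produce nearly identical rows, which is exactly the near-singular situation described.

```python
import numpy as np

def solve_kriging_tsvd(K, rhs, rcond=1e-8):
    """Solve a (possibly near-singular) Kriging system K w = rhs via
    truncated SVD, discarding singular values below rcond * s_max."""
    U, s, Vt = np.linalg.svd(K)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ rhs) / s[keep])

# Toy covariance system: rows 1 and 2 are almost identical because two
# sample points nearly coincide, making K close to singular.
K = np.array([[1.0,   0.999, 0.5],
              [0.999, 1.0,   0.5],
              [0.5,   0.5,   1.0]])
rhs = np.array([0.8, 0.8, 0.6])
w = solve_kriging_tsvd(K, rhs)
```

Raising `rcond` trades fidelity for stability: tiny singular values that would amplify noise in the weights are simply dropped from the solution.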
Thermal decomposition process of silver behenate
International Nuclear Information System (INIS)
Liu Xianhao; Lu Shuxia; Zhang Jingchang; Cao Weiliang
2006-01-01
The thermal decomposition processes of silver behenate have been studied by infrared spectroscopy (IR), X-ray diffraction (XRD), combined thermogravimetry-differential thermal analysis-mass spectrometry (TG-DTA-MS), transmission electron microscopy (TEM) and UV-vis spectroscopy. The TG-DTA and the higher temperature IR and XRD measurements indicated that complicated structural changes took place while heating silver behenate, but there were two distinct thermal transitions. During the first transition at 138 °C, the alkyl chains of silver behenate were transformed from an ordered into a disordered state. During the second transition at about 231 °C, a structural change took place for silver behenate, which was the decomposition of silver behenate. The major products of the thermal decomposition of silver behenate were metallic silver and behenic acid. Upon heating up to 500 °C, the final product of the thermal decomposition was metallic silver. The combined TG-MS analysis showed that the gas products of the thermal decomposition of silver behenate were carbon dioxide, water, hydrogen, acetylene and some small molecule alkenes. TEM and UV-vis spectroscopy were used to investigate the process of the formation and growth of metallic silver nanoparticles
Denoising time-resolved microscopy image sequences with singular value thresholding
Energy Technology Data Exchange (ETDEWEB)
Furnival, Tom, E-mail: tjof2@cam.ac.uk; Leary, Rowan K., E-mail: rkl26@cam.ac.uk; Midgley, Paul A., E-mail: pam33@cam.ac.uk
2017-07-15
Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second. - Highlights: • Correlations in space and time are harnessed to denoise microscopy image sequences. • A robust estimator provides automated selection of the denoising parameter. • Motion tracking and automated noise estimation provides a versatile algorithm. • Application to time-resolved STEM enables study of atomic and nanoparticle dynamics.
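Singular value thresholding of the Casorati matrix (each column one vectorised frame) can be sketched as follows. In the paper the threshold comes from the unbiased risk estimator; here it is fixed by hand, and the low-rank "scene" is synthetic.

```python
import numpy as np

def svt_denoise(Y, tau):
    """Singular value thresholding: soft-threshold the singular values by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)   # shrink, zeroing noise-level components
    return (U * s_thr) @ Vt

# Casorati matrix: 256 pixels x 64 frames, rank-5 scene plus Gaussian noise.
rng = np.random.default_rng(3)
low_rank = rng.random((256, 5)) @ rng.random((5, 64))   # slowly varying scene
noisy = low_rank + 0.1 * rng.standard_normal((256, 64))

denoised = svt_denoise(noisy, tau=2.0)
err_noisy = np.linalg.norm(noisy - low_rank)
err_den = np.linalg.norm(denoised - low_rank)
```

Because the scene varies slowly across frames, its energy concentrates in a few large singular values, while the noise spreads thinly over all of them, so soft-thresholding suppresses the noise far more than the signal.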
Regularised reconstruction of sound fields with a spherical microphone array
DEFF Research Database (Denmark)
Granados Corsellas, Alba; Jacobsen, Finn; Fernandez Grande, Efren
2013-01-01
implementation might lead to disastrous reconstructions. A large number of regularisation tools based on singular value decomposition are available, and it has been found that the acoustic holography problem for certain geometries can be formulated in such a way that similarities to singular value decomposition...... become apparent. Hence, a number of regularisation methods, including truncated singular value decomposition, standard Tikhonov, constrained Tikhonov, iterative Tikhonov, Landweber and Rutishauser, have been adapted for spherical near field acoustic holography. The accuracy of the methods is examined...
Phase Noise Effect on MIMO-OFDM Systems with Common and Independent Oscillators
DEFF Research Database (Denmark)
Chen, Xiaoming; Wang, Hua; Fan, Wei
2018-01-01
In this paper, the effects of oscillator phase noises (PNs) on multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems are studied. It is shown that PNs of common oscillators at the transmitter and at the receiver have the same influence on the performance of (single-stream) beamforming MIMO-OFDM systems, yet different influences on spatial multiplexing MIMO-OFDM systems with singular value decomposition (SVD) based precoding/decoding. When each antenna is equipped with an independent oscillator, the PNs at the transmitter and at the receiver have different influences on beamforming MIMO-OFDM systems as well as spatial multiplexing MIMO-OFDM systems. Specifically, the PN effect at the transmitter (receiver) can be alleviated by having more transmit (receive) antennas for the case of independent oscillators. It is found that the independent oscillator case outperforms the common oscillator case in terms of error vector magnitude (EVM).
A Fast and High-Resolution Multi-Target Localization Approach in MIMO Radar
Directory of Open Access Journals (Sweden)
Yu Zhang
2013-09-01
Full Text Available This paper presents a fast and high-resolution estimation approach using polarization information combined with angle information for multi-target localization in bistatic multiple-input multiple-output (MIMO) radar. The propagator method (PM) is extended to jointly estimate the direction of departure (DOD), the direction of arrival (DOA) and the polarization parameters. The PM avoids the singular value decomposition (SVD) of the covariance matrix of the received signals, so the computational complexity is reduced. In addition, closely spaced targets can be well distinguished by polarization diversity. The Cramer-Rao bounds (CRBs) of the estimated parameters are derived. The position of a target is calculated based on the estimated angles. The simulation results demonstrate that the proposed approach can achieve better performance compared with conventional methods of target localization.
Phase Noise Effect on MIMO-OFDM Systems with Common and Independent Oscillators
Directory of Open Access Journals (Sweden)
Xiaoming Chen
2017-01-01
Full Text Available The effects of oscillator phase noises (PNs) on multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems are studied. It is shown that PNs of common oscillators at the transmitter and at the receiver have the same influence on the performance of (single-stream) beamforming MIMO-OFDM systems, yet different influences on spatial multiplexing MIMO-OFDM systems with singular value decomposition (SVD) based precoding/decoding. When each antenna is equipped with an independent oscillator, the PNs at the transmitter and at the receiver have different influences on beamforming MIMO-OFDM systems as well as spatial multiplexing MIMO-OFDM systems. Specifically, the PN effect at the transmitter (receiver) can be alleviated by having more transmit (receive) antennas for the case of independent oscillators. It is found that the independent oscillator case outperforms the common oscillator case in terms of error vector magnitude (EVM).
Eigenspace-based fuzzy c-means for sensing trending topics in Twitter
Muliawati, T.; Murfi, H.
2017-07-01
As information and communication technology has developed, information can increasingly be obtained through social media such as Twitter. The enormous number of internet users has triggered fast and large data flows, making manual analysis difficult or even impossible. Automated methods for data analysis are needed, one of which is topic detection and tracking. An alternative to latent Dirichlet allocation (LDA) is a soft clustering approach using Fuzzy C-Means (FCM). FCM accommodates the assumption that a document may consist of several topics. However, FCM works well on low-dimensional data but fails on high-dimensional data. Therefore, we propose an approach in which FCM works on low-dimensional data obtained by reducing the original data using singular value decomposition (SVD). Our simulations show that this approach gives better accuracy in terms of topic recall than LDA for sensing trending topics in Twitter about an event.
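The pipeline described above, dimensionality reduction by SVD followed by fuzzy clustering, can be sketched as follows. This is a minimal illustration on a toy document-term matrix rather than Twitter data; the FCM updates follow the standard formulation, and all sizes and values here are hypothetical.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns (centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                          # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))          # standard FCM update
        U /= U.sum(axis=0)
    return centers, U

# Toy document-term matrix with two topic blocks of 10 documents each
rng = np.random.default_rng(1)
docs = np.vstack([rng.random((10, 50)) + np.array([2.0] * 25 + [0.0] * 25),
                  rng.random((10, 50)) + np.array([0.0] * 25 + [2.0] * 25)])

# Reduce to k dimensions with a truncated SVD, then cluster there
k = 2
U_, s, Vt = np.linalg.svd(docs, full_matrices=False)
reduced = U_[:, :k] * s[:k]
centers, memb = fcm(reduced, c=2)
labels = memb.argmax(axis=0)
print(len(set(labels[:10])), len(set(labels[10:])))  # each block is one cluster
```

Real topic-detection experiments would start from a TF-IDF matrix and evaluate topic recall, as the abstract reports.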
The correction of linear lattice gradient errors using an AC dipole
Energy Technology Data Exchange (ETDEWEB)
Wang,G.; Bai, M.; Litvinenko, V.N.; Satogata, T.
2009-05-04
Precise measurement of optics from coherent betatron oscillations driven by ac dipoles has been demonstrated at RHIC and the Tevatron. For RHIC, the observed rms beta-beat is about 10%. Reduction of beta-beating is an essential component of performance optimization at high energy colliders. A scheme of optics correction was developed and tested in the RHIC 2008 run, using ac dipole optics for measurement and a few adjustable trim quadrupoles for correction. In this scheme, we first calculate the phase response matrix from the measured phase advance, and then apply a singular value decomposition (SVD) algorithm to the phase response matrix to find the correction quadrupole strengths. We present both simulation and some preliminary experimental results of this correction.
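The SVD step in this scheme amounts to applying the pseudoinverse of the phase response matrix to the measured phase-beat. A sketch under assumptions, with a random matrix standing in for the real machine response and hypothetical dimensions (40 phase measurements, 6 trim quadrupoles):

```python
import numpy as np

# Hypothetical phase response matrix R (phase advances vs quad strengths)
rng = np.random.default_rng(2)
R = rng.standard_normal((40, 6))
true_dk = np.array([0.3, -0.1, 0.2, 0.0, -0.4, 0.1])
dphi = R @ true_dk                      # "measured" phase-beat

# SVD pseudoinverse solve; tiny singular values are discarded so that
# noisy, nearly unconstrained directions do not blow up the solution
U, s, Vt = np.linalg.svd(R, full_matrices=False)
s_inv = np.where(s > 1e-8 * s[0], 1.0 / s, 0.0)
dk = Vt.T @ (s_inv * (U.T @ dphi))      # the applied trims would be -dk
print(np.allclose(dk, true_dk))  # True
```

With real, noisy phase data the singular-value cutoff becomes the key tuning knob; here the noise-free system is recovered exactly.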
Using linear algebra for protein structural comparison and classification.
Gomide, Janaína; Melo-Minardi, Raquel; Dos Santos, Marcos Augusto; Neshich, Goran; Meira, Wagner; Lopes, Júlio César; Santoro, Marcelo
2009-07-01
In this article, we describe a novel methodology to extract semantic characteristics from protein structures using linear algebra in order to compose structural signature vectors which may be used efficiently to compare and classify protein structures into fold families. These signatures are built from the pattern of hydrophobic intrachain interactions using Singular Value Decomposition (SVD) and Latent Semantic Indexing (LSI) techniques. Considering proteins as documents and contacts as terms, we have built a retrieval system which is able to find conserved contacts in samples of myoglobin fold family and to retrieve these proteins among proteins of varied folds with precision of up to 80%. The classifier is a web tool available at our laboratory website. Users can search for similar chains from a specific PDB, view and compare their contact maps and browse their structures using a JMol plug-in.
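The proteins-as-documents, contacts-as-terms idea above is standard Latent Semantic Indexing: project rows of the occurrence matrix into a truncated SVD space and retrieve by cosine similarity. A minimal sketch with a tiny hypothetical contact matrix (the real signatures come from hydrophobic intrachain interactions):

```python
import numpy as np

# Toy "protein x contact" matrix: proteins as documents, contacts as terms
A = np.array([[3, 2, 0, 0],    # myoglobin-like fold
              [2, 3, 1, 0],    # myoglobin-like fold
              [0, 1, 3, 2],    # different fold
              [0, 0, 2, 3]], float)

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
docs_k = U[:, :k] * s[:k]               # proteins in latent (LSI) space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Protein 0 is closer to its fold-mate (1) than to protein 3 in LSI space
print(cos(docs_k[0], docs_k[1]) > cos(docs_k[0], docs_k[3]))  # True
```

Ranking all proteins by this similarity against a query row is the retrieval step the abstract evaluates at up to 80% precision.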
Wang, Wei; Chen, Xiyuan
2018-02-23
In view of the fact that the accuracy of the third-degree Cubature Kalman Filter (CKF) used for initial alignment under large misalignment angle conditions is insufficient, an improved fifth-degree CKF algorithm is proposed in this paper. In order to make full use of the innovation in filtering, the innovation covariance matrix is calculated recursively from the innovation sequence with an exponential fading factor. Then a new adaptive error covariance matrix scaling algorithm is proposed. The Singular Value Decomposition (SVD) method is used to improve the numerical stability of the fifth-degree CKF. In order to avoid the overshoot caused by excessive scaling of the error covariance matrix during the convergence stage, the scaling scheme is terminated when the gradient of the azimuth reaches its maximum. The experimental results show that the improved algorithm has better alignment accuracy with large misalignment angles than the traditional algorithm.
Fast determination of plasma parameters
International Nuclear Information System (INIS)
Wijnands, T.J.; Parlange, F.; Joffrin, E.
1995-01-01
Fast analysis of diagnostic signals of a tokamak discharge is demonstrated using four fundamentally different techniques. A comparison between Function Parametrization (FP), Canonical Correlation Analysis (CCA) and a particular Neural Network (NN) configuration known as the Multi Layer Perceptron (MLP) is carried out, taking a unique linear model based on a Singular Value Decomposition (SVD) as a reference. The various techniques all provide functional representations of characteristic plasma parameters in terms of the values of the measurements and are based on an analysis of a large, experimentally acquired database. A brief mathematical description of the various techniques is given, followed by two particular applications to Tore Supra diagnostic data. The first problem is concerned with the identification of the plasma boundary parameters using the poloidal field and differential poloidal flux measurements. A second application involves the interpretation of line integrated data from the multichannel interfero-polarimeter to obtain the central value of the safety factor. (author) 4 refs.; 3 figs
Optimal PMU Placement with Uncertainty Using Pareto Method
Directory of Open Access Journals (Sweden)
A. Ketabi
2012-01-01
Full Text Available This paper proposes a method for optimal placement of Phasor Measurement Units (PMUs) in state estimation considering uncertainty. State estimation is first turned into an optimization exercise in which the objective function is the number of unobservable buses, determined based on Singular Value Decomposition (SVD). For the normal condition, the Differential Evolution (DE) algorithm is used to find the optimal placement of PMUs. By considering uncertainty, a multiobjective optimization exercise is then formulated. To solve it, a DE algorithm based on the Pareto optimum method is proposed. The suggested strategy is applied to the IEEE 30-bus test system in several case studies to evaluate the optimal PMU placement.
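The SVD-based observability count in the objective can be sketched as the rank deficiency of the measurement matrix, read off from its singular values. The matrix below is a hypothetical 4-state example, not the IEEE 30-bus system:

```python
import numpy as np

def num_unobservable(H, tol=1e-9):
    """Number of states not observable from measurement matrix H,
    taken as the rank deficiency revealed by the singular values."""
    s = np.linalg.svd(H, compute_uv=False)
    rank = int(np.sum(s > tol * s[0]))
    return H.shape[1] - rank

# Hypothetical 4-state system whose measurements only touch states 0-2
# (row 3 is a linear combination of rows 1 and 2, and state 3 is unmeasured)
H = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 0.0],
              [1.0, 0.0, -1.0, 0.0]])
print(num_unobservable(H))  # 2
```

A PMU placement search would rebuild `H` for each candidate placement and minimize this count.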
[Efficient method of analysis of optical spectra from kinetic studies].
Skvortsov, A N
2009-01-01
The application of principal components to the analysis of kinetic data obtained by optical spectroscopy is described. The use of singular value decomposition (SVD) for stable and reproducible generation of principal components, details of its realization, and the advantages and drawbacks of the method are discussed. The described method, with minor modifications, may be used in a wide variety of UV-spectroscopy applications in molecular biology and biophysics. The developed method was applied to study the reaction of the platinum anticancer drug cisplatin with DNA and methionine. The use of sensitive UV-spectroscopy allowed the study of the low platinum concentrations typical of biological systems. It was shown that the reactions of cisplatin with DNA and L-methionine generally follow the same pathway at both high and low concentrations.
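The core of such an analysis is an SVD of the time-by-wavelength data matrix: the number of singular values above the noise floor gives the number of spectrally distinct species. A synthetic sketch with hypothetical first-order A -> B kinetics (not the cisplatin data):

```python
import numpy as np

# Synthetic kinetic series: spectra(t) = c1(t)*spec1 + c2(t)*spec2 + noise
wl = np.linspace(0, 1, 200)
spec1 = np.exp(-((wl - 0.3) / 0.05) ** 2)       # species A band
spec2 = np.exp(-((wl - 0.6) / 0.05) ** 2)       # species B band
t = np.linspace(0, 5, 40)
c1, c2 = np.exp(-t), 1 - np.exp(-t)             # first-order A -> B
rng = np.random.default_rng(3)
D = np.outer(c1, spec1) + np.outer(c2, spec2) \
    + 1e-3 * rng.standard_normal((40, 200))

U, s, Vt = np.linalg.svd(D, full_matrices=False)
# Two significant components stand far above the noise floor
print(s[1] / s[2] > 10)  # True
```

The retained columns of `U` (kinetic profiles) and rows of `Vt` (basis spectra) are then rotated into physically meaningful concentrations and species spectra by fitting a kinetic model.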
Feature selection for high-dimensional integrated data
Zheng, Charles
2012-04-26
Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods, thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.
International Nuclear Information System (INIS)
Jiang, Hongkai; Xia, Yong; Wang, Xiaodong
2013-01-01
Defect faults on the surface of rolling bearing elements are the most frequent cause of malfunctions and breakages of electrical machines. Due to increasing demands for quality and reliability, extracting fault features from vibration signals is an important topic for fault detection in rolling bearings. In this paper, a novel adaptive lifting multiwavelet packet with a 1.5-dimension spectrum for detecting defects in rolling bearing elements is developed. The adaptive lifting multiwavelet packet is constructed to match vibration signal properties based on the minimum singular value decomposition (SVD) entropy using a genetic algorithm. A 1.5-dimension spectrum is further employed to extract rolling bearing fault characteristic frequencies from background noise. The proposed method is applied to analyze the vibration signal collected from electric locomotive rolling bearings with outer raceway and inner raceway defects. The experimental investigation shows that the method is accurate and robust in rolling bearing fault detection. (paper)
Enhanced 2D-DOA Estimation for Large Spacing Three-Parallel Uniform Linear Arrays
Directory of Open Access Journals (Sweden)
Dong Zhang
2018-01-01
Full Text Available An enhanced two-dimensional direction of arrival (2D-DOA) estimation algorithm for large spacing three-parallel uniform linear arrays (ULAs) is proposed in this paper. Firstly, we use the propagator method (PM) to get a highly accurate but ambiguous estimate of the directional cosine. Then, we use the relationship between the directional cosines to eliminate the ambiguity. This algorithm not only makes use of the elements of the three-parallel ULAs but also utilizes the connection between the directional cosines to improve the estimation accuracy. Besides, it achieves satisfactory estimation performance when the elevation angle is between 70° and 90°, and it can automatically pair the estimated azimuth and elevation angles. Furthermore, it has low complexity, without applying any eigenvalue decomposition (EVD) or singular value decomposition (SVD) to the covariance matrix. Simulation results demonstrate the effectiveness of our proposed algorithm.
Environmental Performance in Countries Worldwide: Determinant Factors and Multivariate Analysis
Directory of Open Access Journals (Sweden)
Isabel Gallego-Alvarez
2014-11-01
Full Text Available The aim of this study is to analyze the environmental performance of countries and the variables that can influence it. At the same time, we performed a multivariate analysis using the HJ-biplot, an exploratory method that looks for hidden patterns in the data, obtained from the usual singular value decomposition (SVD) of the data matrix, to contextualize the countries grouped by geographical areas and the variables relating to environmental indicators included in the environmental performance index. The sample used comprises 149 countries of different geographic areas. The findings obtained from the empirical analysis emphasize that socioeconomic factors, such as economic wealth and education, as well as institutional factors represented by the style of public administration, in particular control of corruption, are determinant factors of environmental performance in the countries analyzed. In contrast, no effect on environmental performance was found for factors relating to the internal characteristics of a country or political factors.
Dynamic normal forms and dynamic characteristic polynomial
DEFF Research Database (Denmark)
Frandsen, Gudmund Skovbjerg; Sankowski, Piotr
2011-01-01
We present the first fully dynamic algorithm for computing the characteristic polynomial of a matrix. In the generic symmetric case, our algorithm supports rank-one updates in O(n^2 log n) randomized time and queries in constant time, whereas in the general case the algorithm works in O(n^2 k log n) randomized time, where k is the number of invariant factors of the matrix. The algorithm is based on the first dynamic algorithm for computing normal forms of a matrix such as the Frobenius normal form or the tridiagonal symmetric form. The algorithm can be extended to solve the matrix eigenproblem...... with relative error 2^(-b) in additional O(n log^2 n log b) time. Furthermore, it can be used to dynamically maintain the singular value decomposition (SVD) of a generic matrix. Together with the algorithm, the hardness of the problem is studied. For the symmetric case, we present an Ω(n^2) lower bound for rank
Sparse Bayesian Learning for DOA Estimation with Mutual Coupling
Directory of Open Access Journals (Sweden)
Jisheng Dai
2015-10-01
Full Text Available Sparse Bayesian learning (SBL has given renewed interest to the problem of direction-of-arrival (DOA estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs. Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
Bipartite graph partitioning and data clustering
Energy Technology Data Exchange (ETDEWEB)
Zha, Hongyuan; He, Xiaofeng; Ding, Chris; Gu, Ming; Simon, Horst D.
2001-05-07
Many data types arising from data mining applications can be modeled as bipartite graphs; examples include terms and documents in a text corpus, customers and purchased items in market basket analysis, and reviewers and movies in a movie recommender system. In this paper, the authors propose a new data clustering method based on partitioning the underlying bipartite graph. The partition is constructed by minimizing a normalized sum of edge weights between unmatched pairs of vertices of the bipartite graph. They show that an approximate solution to the minimization problem can be obtained by computing a partial singular value decomposition (SVD) of the associated edge weight matrix of the bipartite graph. They point out the connection of their clustering algorithm to correspondence analysis used in multivariate analysis. They also briefly discuss the issue of assigning data objects to multiple clusters. In the experimental results, they apply their clustering algorithm to the problem of document clustering to illustrate its effectiveness and efficiency.
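The partial-SVD relaxation described above can be sketched as spectral co-clustering: normalize the edge-weight matrix by the vertex degrees and split rows and columns by the signs of the second singular vectors. The matrix below is a hypothetical toy graph, and only the two-way case is shown:

```python
import numpy as np

def bipartite_partition(W):
    """Two-way partition of a bipartite graph from the second singular
    vectors of the degree-normalized edge-weight matrix."""
    d1, d2 = W.sum(axis=1), W.sum(axis=0)
    Wn = W / np.sqrt(d1)[:, None] / np.sqrt(d2)[None, :]
    U, s, Vt = np.linalg.svd(Wn)
    # Signs of the second left/right singular vectors split rows and columns
    return U[:, 1] >= 0, Vt[1] >= 0

# Two weakly connected doc/term blocks
W = np.array([[3, 2, 0, 0],
              [2, 3, 0, 1],
              [0, 0, 3, 2],
              [0, 1, 2, 3]], float)
rows, cols = bipartite_partition(W)
print(rows.tolist(), cols.tolist())
```

Rows 0-1 and columns 0-1 land on one side, rows 2-3 and columns 2-3 on the other (the global sign of each vector is arbitrary, so which side is `True` may flip).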
Fang, Ming; Bowin, Carl
1992-01-01
To construct Venus' gravity disturbance field (or gravity anomaly) from the spacecraft-observer line of sight (LOS) acceleration perturbation data, both a global and a local approach can be used. The global approach, e.g., spherical harmonic coefficients, and the local approach, e.g., the integral operator method, based on geodetic techniques are generally not the same, so they must be used separately for mapping long wavelength features and short wavelength features. Harmonic spline, as an interpolation and extrapolation technique, is intrinsically suited to both global and local mapping of a potential field. Theoretically, it preserves the information of the potential field up to the bound given by the sampling theorem, regardless of whether the mapping is global or local, and it does not suffer from truncation errors. The improvement of harmonic spline methodology for global mapping is reported. New basis functions, a singular value decomposition (SVD) based modification to Parker & Shure's numerical procedure, and preliminary results are presented.
Speech/Nonspeech Detection Using Minimal Walsh Basis Functions
Directory of Open Access Journals (Sweden)
Pwint Moe
2007-01-01
Full Text Available This paper presents a new method to detect the speech/nonspeech components of a given noisy signal. Employing a combination of binary Walsh basis functions and an analysis-synthesis scheme, the original noisy speech signal is modified first. From the modified signals, the speech components are distinguished from the nonspeech components by using a simple decision scheme. The minimal number of Walsh basis functions to be applied is determined using singular value decomposition (SVD). The main advantages of the proposed method are low computational complexity, fewer parameters to be adjusted, and simple implementation. It is observed that the use of Walsh basis functions makes the proposed algorithm efficiently applicable in real-world situations where processing time is crucial. Simulation results indicate that the proposed algorithm achieves high speech and nonspeech detection rates while maintaining a low error rate for different noisy conditions.
Blind distributed estimation algorithms for adaptive networks
Bin Saeed, Muhammad O.; Zerguine, Azzedine; Zummo, Salam A.
2014-12-01
In recent years, much work has been done to develop algorithms that utilize the distributed structure of an ad hoc wireless sensor network to estimate a certain parameter of interest. However, these algorithms assume that the input regressor data is available to the sensors, which is not always the case. In such cases, blind estimation of the required parameter is needed. This work formulates two newly developed blind block-recursive algorithms based on singular value decomposition (SVD) and Cholesky factorization techniques. These adaptive algorithms are then used for blind estimation in a wireless sensor network using diffusion of data among cooperative sensors. Simulation results show that the performance greatly improves over the case where no cooperation among sensors is involved.
Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min
2008-07-01
In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is as follows: First, rectangular feature templates are constructed, centered at the Harris corners extracted from the mask, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the specific affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
Speech/Nonspeech Detection Using Minimal Walsh Basis Functions
Directory of Open Access Journals (Sweden)
Moe Pwint
2006-10-01
Full Text Available This paper presents a new method to detect the speech/nonspeech components of a given noisy signal. Employing a combination of binary Walsh basis functions and an analysis-synthesis scheme, the original noisy speech signal is modified first. From the modified signals, the speech components are distinguished from the nonspeech components by using a simple decision scheme. The minimal number of Walsh basis functions to be applied is determined using singular value decomposition (SVD). The main advantages of the proposed method are low computational complexity, fewer parameters to be adjusted, and simple implementation. It is observed that the use of Walsh basis functions makes the proposed algorithm efficiently applicable in real-world situations where processing time is crucial. Simulation results indicate that the proposed algorithm achieves high speech and nonspeech detection rates while maintaining a low error rate for different noisy conditions.
Predicting responses from Rasch measures.
Linacre, John M
2010-01-01
There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are to be estimated from the current data, but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models are proposed based on Singular Value Decomposition (SVD) and Boltzmann Machines.
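The SVD-style models referenced here share one core idea: predict ratings from a low-rank reconstruction of the (centered) rating matrix. A toy sketch on a dense hypothetical matrix; real Netflix-scale data is sparse, so the factors are fitted by iterative or gradient methods rather than a full SVD:

```python
import numpy as np

# Toy ratings: two taste groups of users; a rank-2 model captures both
R = np.array([[5, 4, 1, 1],
              [4, 5, 2, 1],
              [1, 1, 5, 4],
              [2, 1, 4, 5]], float)
mu = R.mean()
U, s, Vt = np.linalg.svd(R - mu, full_matrices=False)
k = 2
pred = mu + (U[:, :k] * s[:k]) @ Vt[:k]     # low-rank rating reconstruction
print(np.round(np.corrcoef(pred.ravel(), R.ravel())[0, 1], 3))
```

Overfit, the abstract's concern, appears here as choosing `k` too large: a full-rank reconstruction reproduces the current ratings exactly while generalizing worse to held-out ones.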
Adaptive PCA based fault diagnosis scheme in imperial smelting process.
Hu, Zhikun; Chen, Zhiwen; Gui, Weihua; Jiang, Bin
2014-09-01
In this paper, an adaptive fault detection scheme based on recursive principal component analysis (PCA) is proposed to deal with the problem of false alarms due to normal process changes in real processes. A further contribution is a fault isolation approach based on the Generalized Likelihood Ratio (GLR) test and Singular Value Decomposition (SVD), a general technique of PCA, with which off-set and scaling faults can be easily isolated with an explicit off-set fault direction and scaling fault classification. The identification of off-set and scaling faults is also addressed. The complete scheme of the PCA-based fault diagnosis procedure is proposed. The proposed scheme is first applied to the Imperial Smelting Process, and the results show that the proposed strategies can mitigate false alarms and isolate faults efficiently. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
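The PCA detection core of such a scheme can be sketched as: fit loadings by SVD of the scaled normal-operation data, then flag samples whose residual (squared prediction error) is abnormally large. All data below is synthetic and the 4-sigma off-set fault is hypothetical; the paper's recursive/adaptive machinery is omitted:

```python
import numpy as np

# Train PCA on normal operating data (two latent sources, five sensors)
rng = np.random.default_rng(4)
latent = rng.standard_normal((500, 2))
X = latent @ rng.standard_normal((2, 5)) + 0.1 * rng.standard_normal((500, 5))
mu, sd = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sd
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
P = Vt[:2].T                                  # retained loadings (2 PCs)

def spe(x):
    """Squared prediction error of one sample against the PCA model."""
    xs = (x - mu) / sd
    resid = xs - P @ (P.T @ xs)
    return resid @ resid

normal = X[0]
faulty = normal + np.array([0.0, 0.0, 4 * sd[2], 0.0, 0.0])  # off-set on sensor 2
print(spe(faulty) > 10 * spe(normal))
```

Isolation then asks which sensor direction best explains the residual, which is where the GLR test enters in the paper.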
Isothermal Decomposition of Hydrogen Peroxide Dihydrate
Loeffler, M. J.; Baragiola, R. A.
2011-01-01
We present a new method of growing pure solid hydrogen peroxide in an ultra high vacuum environment and apply it to determine thermal stability of the dihydrate compound that forms when water and hydrogen peroxide are mixed at low temperatures. Using infrared spectroscopy and thermogravimetric analysis, we quantified the isothermal decomposition of the metastable dihydrate at 151.6 K. This decomposition occurs by fractional distillation through the preferential sublimation of water, which leads to the formation of pure hydrogen peroxide. The results imply that in an astronomical environment where condensed mixtures of H2O2 and H2O are shielded from radiolytic decomposition and warmed to temperatures where sublimation is significant, highly concentrated or even pure hydrogen peroxide may form.
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika
2013-02-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.
Two Notes on Discrimination and Decomposition
DEFF Research Database (Denmark)
Nielsen, Helena Skyt
1998-01-01
1. It turns out that the Oaxaca-Blinder wage decomposition is inadequate when it comes to calculation of separate contributions for indicator variables. The contributions are not robust against a change of reference group. I extend the Oaxaca-Blinder decomposition to handle this problem. 2. The p....... The paper suggests how to use the logit model to decompose the gender difference in the probability of an occurrence. The technique is illustrated by an analysis of discrimination in child labor in rural Zambia....
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
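The modification described above is a few lines of linear algebra: eigendecompose the Hessian, take absolute values of the eigenvalues, and solve for the step. A sketch on the classic saddle f(x, y) = x^2 - y^2, where plain Newton gives an ascent direction but the modified one does not (the evaluation point is a hypothetical example):

```python
import numpy as np

def modified_newton_direction(grad, hess):
    """Newton direction with the Hessian's negative eigenvalues replaced
    by their absolute values, so the step is always a descent direction."""
    w, V = np.linalg.eigh(hess)
    w = np.maximum(np.abs(w), 1e-8)          # |eigenvalues|, guarded from 0
    return -(V @ ((V.T @ grad) / w))

# f(x, y) = x^2 - y^2 at (0.1, 0.2): indefinite Hessian, saddle at origin
grad = np.array([0.2, -0.4])
hess = np.array([[2.0, 0.0], [0.0, -2.0]])
d = modified_newton_direction(grad, hess)
print(grad @ d < 0)  # True: descent direction
```

For comparison, the unmodified Newton step `-inv(H) @ grad` at this point has positive inner product with the gradient, i.e., it points uphill.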
Multiresolution signal decomposition transforms, subbands, and wavelets
Akansu, Ali N; Haddad, Paul R
2001-01-01
The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming, which is, of course
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
Separable decompositions of bipartite mixed states
Li, Jun-Li; Qiao, Cong-Feng
2018-04-01
We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of the Bloch vectors are consistent with those of the correlation matrix, and that the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples of separable decompositions of bipartite mixed states are presented for illustration.
Convergence Analysis of a Domain Decomposition Paradigm
Energy Technology Data Exchange (ETDEWEB)
Bank, R E; Vassilevski, P S
2006-06-12
We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure.
Thermal decomposition of dinitratobis(carbamido) uranyl
International Nuclear Information System (INIS)
Kobets, L.V.; Kostyukov, N.N.; Umrejko, D.S.
1987-01-01
The thermal stability of dinitratobis(carbamido) uranyl was investigated by DTA, TG and isothermal heating at different temperatures. It was revealed that urea is not removed from the complex intact: urea loses ammonia simultaneously with melting of the substance (498 K), with formation of biuret and cyanuric acid. Ammonia removal to the gaseous phase decelerates in the 573-593 K range owing to its binding in amination products. Decomposition of the nitrate groups begins at temperatures above 570 K. The heating products were studied by vibrational spectroscopy and by chemical and X-ray phase analyses, and a mechanism of decomposition is proposed
Orthomodularity of Decompositions in a Categorical Setting
Harding, John
2006-06-01
We provide a method to construct a type of orthomodular structure known as an orthoalgebra from the direct product decompositions of an object in a category that has finite products and whose ternary product diagrams give rise to certain pushouts. This generalizes a method to construct an orthomodular poset from the direct product decompositions of familiar mathematical structures such as non-empty sets, groups, and topological spaces, as well as a method to construct an orthomodular poset from the complementary pairs of elements of a bounded modular lattice.
Type-Decomposition of an Effect Algebra
Foulis, David J.; Pulmannová, Sylvia
2010-10-01
Effect algebras (EAs) play a significant role in quantum logic, are featured in the theory of partially ordered Abelian groups, and generalize orthoalgebras, MV-algebras, orthomodular posets, orthomodular lattices, modular ortholattices, and Boolean algebras. We study centrally orthocomplete effect algebras (COEAs), i.e., EAs satisfying the condition that every family of elements that is dominated by an orthogonal family of central elements has a supremum. For COEAs, we introduce a general notion of decomposition into types; prove that a COEA factors uniquely as a direct sum of types I, II, and III; and obtain a generalization for COEAs of Ramsay’s fourfold decomposition of a complete orthomodular lattice.
High temperature decomposition of hydrogen peroxide
Parrish, Clyde F. (Inventor)
2011-01-01
Nitric oxide (NO) is oxidized into nitrogen dioxide (NO.sub.2) by the high temperature decomposition of a hydrogen peroxide solution to produce the oxidative free radicals, hydroxyl and hydroperoxyl. The hydrogen peroxide solution is impinged upon a heated surface in a stream of nitric oxide where it decomposes to produce the oxidative free radicals. Because the decomposition of the hydrogen peroxide solution occurs within the stream of the nitric oxide, rapid gas-phase oxidation of nitric oxide into nitrogen dioxide occurs.
Decomposition of aquatic plants in lakes
Energy Technology Data Exchange (ETDEWEB)
Godshalk, G.L.
1977-01-01
This study was carried out to systematically determine the effects of temperature and oxygen concentration, two environmental parameters crucial to lake metabolism in general, on decomposition of five species of aquatic vascular plants of three growth forms in a Michigan lake. Samples of dried plant material were decomposed in flasks in the laboratory under three different oxygen regimes, aerobic-to-anaerobic, strictly anaerobic, and aerated, each at 10°C and 25°C. In addition, in situ decomposition of the same species was monitored using the litter bag technique under four conditions.
Directory of Open Access Journals (Sweden)
Lin Chen
2011-09-01
Full Text Available The main purpose of this paper is to establish a signal decomposition system aimed at mixed over-voltages in power systems. In an electric power system, over-voltage presents a great threat to system safety, and the analysis and identification of over-voltages help to improve the stability and safety of power systems. Through statistical analysis of a collection of field over-voltage records, it was found that a class of complicated signals created by the mixing of multiple different over-voltages is difficult to identify correctly with current classification algorithms. In order to improve the classification and identification accuracy for over-voltages, a mixed over-voltage decomposition system based on atomic decomposition and a damped-sinusoid atom dictionary has been established. This decomposition system is optimized using particle swarm optimization and the fast Fourier transform. To handle possible faulty decomposition results during decomposition of the over-voltage signal, a double-atom decomposition algorithm is proposed in this paper. By taking three typical mixed over-voltages as examples, the validity of the algorithm is demonstrated.
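The atomic decomposition over a damped-sinusoid dictionary that this abstract builds on can be illustrated with a plain greedy matching pursuit. The sketch below uses an exhaustive toy dictionary rather than the paper's PSO/FFT-optimized double-atom search, and the signal and grid values are invented for illustration:

```python
import numpy as np

def damped_atom(t, freq, damp, phase):
    """Unit-norm damped sinusoid atom exp(-damp*t) * cos(2*pi*freq*t + phase)."""
    a = np.exp(-damp * t) * np.cos(2 * np.pi * freq * t + phase)
    return a / np.linalg.norm(a)

def matching_pursuit(signal, t, grid, n_atoms=2):
    """Greedily pick the dictionary atoms most correlated with the residual."""
    residual = signal.copy()
    picked = []
    for _ in range(n_atoms):
        best = max(grid, key=lambda p: abs(residual @ damped_atom(t, *p)))
        atom = damped_atom(t, *best)
        coef = residual @ atom
        residual = residual - coef * atom
        picked.append((best, coef))
    return picked, residual

# Invented test signal: a decaying 50 Hz component plus a sustained 150 Hz one.
t = np.linspace(0.0, 0.1, 1000)
sig = np.exp(-30 * t) * np.cos(2 * np.pi * 50 * t) + 0.3 * np.cos(2 * np.pi * 150 * t)
grid = [(f, d, 0.0) for f in (50, 100, 150) for d in (0.0, 30.0)]
picked, res = matching_pursuit(sig, t, grid)
# Two greedy picks recover both components, leaving only a small residual.
```

Each pick returns the atom's (frequency, damping, phase) parameters and its coefficient, which is exactly the kind of per-component information (amplitude, frequency, attenuation) the over-voltage classifier needs.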
Maadooliat, Mehdi
2012-08-27
Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.
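The LagSVD idea, summarizing the joint lag-distribution of an angle series through its singular values, can be sketched nonparametrically on synthetic data. The angle series below are invented; this is not the authors' web-tool code:

```python
import numpy as np

def lag_distribution_svd(angles, lag, bins=36):
    """Singular values of the binned lag-k joint distribution of an angle series.

    The 2-D histogram of (angle[i], angle[i+lag]) pairs approximates the
    bivariate lag-distribution; its singular-value profile summarizes how much
    sequential dependence the lag carries. An independent series yields a
    near rank-1 histogram (one dominant singular value).
    """
    x, y = angles[:-lag], angles[lag:]
    hist, _, _ = np.histogram2d(x, y, bins=bins,
                                range=[[-np.pi, np.pi]] * 2, density=True)
    return np.linalg.svd(hist, compute_uv=False)

rng = np.random.default_rng(0)
indep = rng.uniform(-np.pi, np.pi, 20000)                  # no lag dependence
walk = np.mod(np.cumsum(rng.normal(0, 0.3, 20000)) + np.pi,
              2 * np.pi) - np.pi                           # strong lag-1 dependence
s_indep = lag_distribution_svd(indep, lag=1)
s_walk = lag_distribution_svd(walk, lag=1)
# The dependent series spreads energy across many singular values, while the
# independent one is dominated by its first singular value.
```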
Low Temperature Decomposition Rates for Tetraphenylborate Ion
Energy Technology Data Exchange (ETDEWEB)
Walker, D.D.
1998-11-18
Previous studies indicated that palladium catalyzes rapid decomposition of alkaline tetraphenylborate slurries. Additional evidence suggests that Pd(II) reduces to Pd(0) during catalyst activation. Further use of tetraphenylborate ion in the decontamination of radioactive waste may require removal of the catalyst, or cooling to temperatures at which the decomposition reaction proceeds slowly enough not to adversely affect processing. Recent tests showed that tetraphenylborate did not react appreciably at 25 degrees Celsius over six months, suggesting the potential to avoid the decomposition at low temperatures. The lack of reaction at low temperature could reflect very slow kinetics, or may indicate a catalyst deactivation process. Previous tests in the temperature range 35 to 70 degrees Celsius provided a low-precision estimate of the activation energy of the reaction with which to predict the rate of reaction at 25 degrees Celsius. To understand the observations at 25 degrees Celsius, experiments must separate the catalyst activation step from the subsequent reaction with TPB. Tests described in this report represent an initial attempt to separate the two steps and determine the rate and activation energy of the reaction between active catalyst and TPB. The results of these tests indicate that the absence of reaction at 25 degrees Celsius was caused by failure to activate the catalyst or by the presence of a deactivating mechanism. In the presence of activated catalyst, the decomposition reaction rate is significant.
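The Arrhenius extrapolation that the 35-70 degrees Celsius data support can be sketched as follows. The rate constants below are invented placeholders, not the measured SRS values:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, t1, k2, t2):
    """Ea from two rate constants via ln(k2/k1) = -(Ea/R) * (1/t2 - 1/t1)."""
    return -R * math.log(k2 / k1) / (1.0 / t2 - 1.0 / t1)

def arrhenius_extrapolate(k_ref, t_ref, t_new, ea):
    """Rate constant at t_new, assuming Arrhenius behaviour holds."""
    return k_ref * math.exp(-(ea / R) * (1.0 / t_new - 1.0 / t_ref))

# Hypothetical rate constants (per hour) at 35 and 70 degrees Celsius.
k35, k70 = 1.0e-3, 5.0e-2
ea = activation_energy(k35, 308.15, k70, 343.15)     # about 98 kJ/mol here
k25 = arrhenius_extrapolate(k35, 308.15, 298.15, ea)
# k25 comes out a few-fold smaller than k35, illustrating the predicted slowdown.
```

A genuinely slower-than-Arrhenius rate at 25 degrees Celsius, as the report describes, would point to a separate catalyst activation or deactivation step rather than simple thermal kinetics.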
Decomposition and Nutrient Release Patterns of Pueraria ...
African Journals Online (AJOL)
We report on a study to determine the decomposition and nutrient release patterns of Pueraria phaseoloides and Flemingia macrophylla leaf residues under two rainfall regimes in southern Cameroon. Fresh leaf material of the two legume species were put in litter bags and placed on the soil surface for 120 days at ...
MADCam: The multispectral active decomposition camera
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen; Stegmann, Mikkel Bille
2001-01-01
A real-time spectral decomposition of streaming three-band image data is obtained by applying linear transformations. The Principal Components (PC), the Maximum Autocorrelation Factors (MAF), and the Maximum Noise Fraction (MNF) transforms are applied. In the presented case study the PC transform...
Detailed Chemical Kinetic Modeling of Hydrazine Decomposition
Meagher, Nancy E.; Bates, Kami R.
2000-01-01
The purpose of this research project is to develop and validate a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. Hydrazine is used extensively in aerospace propulsion, and although liquid hydrazine is not considered detonable, many fuel handling systems create multiphase mixtures of fuels and fuel vapors during their operation. Therefore, a thorough knowledge of the decomposition chemistry of hydrazine under a variety of conditions can be of value in assessing potential operational hazards in hydrazine fuel systems. To gain such knowledge, a reasonable starting point is the development and validation of a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. A reasonably complete mechanism was published in 1996; however, many of the elementary steps included had outdated rate expressions, and a thorough investigation of the behavior of the mechanism under a variety of conditions was not presented. The current work has included substantial revision of the previously published mechanism, along with a more extensive examination of the decomposition behavior of hydrazine. An attempt to validate the mechanism against the limited experimental data available has been made and was moderately successful. Further computational and experimental research into the chemistry of this fuel needs to be completed.
Decomposition of Quaternary Signed-Graphic Matroids
Pitsoulis, Leonidas; Vretta, Eleni-Maria
2015-01-01
In this work we provide a decomposition theorem for the class of quaternary and non-binary signed-graphic matroids. This generalizes previous results for binary signed-graphic matroids and graphic matroids, and it provides the theoretical basis for a recognition algorithm.
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained...
Organic matter decomposition in simulated aquaculture ponds
Torres Beristain, B.
2005-01-01
Different kinds of organic and inorganic compounds (e.g. formulated food, manures, fertilizers) are added to aquaculture ponds to increase fish production. However, a large part of these inputs is not utilized by the fish and is decomposed inside the pond. The microbiological decomposition of the
The decomposition of estuarine macrophytes under different ...
African Journals Online (AJOL)
The low levels of total oxidised nitrogen (nitrate and nitrite) released during decomposition were attributed to the inhibition of nitrification by heterotrophic bacteria under anoxic conditions. The relative levels of dissolved inorganic phosphorus (DIP) released were lower than that observed for DIN, and peaked early on in the ...
Radiolytic decomposition of dioxins in liquid wastes
International Nuclear Information System (INIS)
Zhao Changli; Taguchi, M.; Hirota, K.; Takigami, M.; Kojima, T.
2006-01-01
The dioxins, including polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs), are among the most toxic persistent organic pollutants. These chemicals have widely contaminated the air, water, and soil. They accumulate in the living body through food chains, leading to a serious public health hazard. In the present study, radiolytic decomposition of dioxins has been investigated in liquid wastes, including organic waste and waste-water. Dioxin-containing organic wastes are commonly generated in nonane or toluene. However, it was found that high radiation doses are required to completely decompose dioxins in these two solvents. The decomposition was more efficient in ethanol than in nonane or toluene. The addition of ethanol to toluene or nonane could achieve >90% decomposition of dioxins at a dose of 100 kGy. Thus, dioxin-containing organic wastes can be treated as regular organic wastes after addition of ethanol and subsequent γ-ray irradiation. On the other hand, radiolytic decomposition of dioxins occurred more readily in pure water than in waste-water, because the reactive species are largely scavenged by the organic materials dominant in waste-water. Dechlorination was not a major reaction pathway for the radiolysis of dioxin in water. In addition, the radiolytic mechanism and dechlorination pathways in liquid wastes are also discussed. (authors)
Decomposition and nutrient release patterns of Pueraria ...
African Journals Online (AJOL)
Decomposition and nutrient release patterns of Pueraria phaseoloides, Flemingia macrophylla and Chromolaena odorata leaf residues in tropical land use ... The slowest releases, irrespective of type of leaf residue, were in Ca and Mg. The study concluded that among the planted fallows, Pueraria phaseoloides had the ...
Linear, Constant-rounds Bit-decomposition
DEFF Research Database (Denmark)
Reistad, Tord; Toft, Tomas
2010-01-01
When performing secure multiparty computation, tasks may often be simple or difficult depending on the representation chosen. Hence, being able to switch representation efficiently may allow more efficient protocols. We present a new protocol for bit-decomposition: converting a ring element x ∈ ℤ_M...
Wood decomposition as influenced by invertebrates
Michael D. Ulyshen
2014-01-01
The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial...
The Algorithmic Complexity of Modular Decomposition
J.C. Bioch (Cor)
2001-01-01
Modular decomposition is a thoroughly investigated topic in many areas such as switching theory, reliability theory, game theory and graph theory. We propose an O(mn)-algorithm for the recognition of a modular set of a monotone Boolean function f with m prime implicants and n variables.
Methodologies in forensic and decomposition microbiology
Culturable microorganisms represent only 0.1-1% of the total microbial diversity of the biosphere. This has severely restricted the ability of scientists to study the microbial biodiversity associated with the decomposition of ephemeral resources in the past. Innovations in technology are bringing...
Fluidized-Bed Silane-Decomposition Reactor
Iya, Sridhar K.
1991-01-01
Fluidized-bed pyrolysis reactor produces high-purity polycrystalline silicon from silane or halosilane via efficient heterogeneous deposition of silicon on silicon seed particles. Formation of silicon dust via homogeneous decomposition of silane minimized, and deposition of silicon on wall of reactor effectively eliminated. Silicon used to construct solar cells and other semiconductor products.
A Martingale Decomposition of Discrete Markov Chains
DEFF Research Database (Denmark)
Hansen, Peter Reinhard
We consider a multivariate time series whose increments are given from a homogeneous Markov chain. We show that the martingale component of this process can be extracted by a filtering method and establish the corresponding martingale decomposition in closed-form. This representation is useful...
Nash-Williams’ cycle-decomposition theorem
DEFF Research Database (Denmark)
Thomassen, Carsten
2016-01-01
We give an elementary proof of the theorem of Nash-Williams that a graph has an edge-decomposition into cycles if and only if it does not contain an odd cut. We also prove that every bridgeless graph has a collection of cycles covering each edge at least once and at most 7 times. The two results...
Compactly supported frames for decomposition spaces
DEFF Research Database (Denmark)
Nielsen, Morten; Rasmussen, Kenneth Niemann
2012-01-01
In this article we study a construction of compactly supported frame expansions for decomposition spaces of Triebel-Lizorkin type and for the associated modulation spaces. This is done by showing that finite linear combinations of shifts and dilates of a single function with sufficient decay in b...
TP89 - SIRZ Decomposition Spectral Estimation
Energy Technology Data Exchange (ETDEWEB)
Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.
Thermal Decomposition of Aluminium Chloride Hexahydrate
Czech Academy of Sciences Publication Activity Database
Hartman, Miloslav; Trnka, Otakar; Šolcová, Olga
2005-01-01
Roč. 44, č. 17 (2005), s. 6591-6598 ISSN 0888-5885 R&D Projects: GA ČR(CZ) GA203/02/0002 Institutional research plan: CEZ:AV0Z40720504 Keywords : aluminum chloride hexahydrate * thermal decomposition * reaction kinetics Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 1.504, year: 2005
Decomposition approaches to integration without a measure
Czech Academy of Sciences Publication Activity Database
Greco, S.; Mesiar, Radko; Rindone, F.; Sipeky, L.
2016-01-01
Roč. 287, č. 1 (2016), s. 37-47 ISSN 0165-0114 Institutional support: RVO:67985556 Keywords : Choquet integral * Decision making * Decomposition integral Subject RIV: BA - General Mathematics Impact factor: 2.718, year: 2016 http://library.utia.cas.cz/separaty/2016/E/mesiar-0457408.pdf
Decomposition and aggregation in queueing networks
Huisman, Tijs; Boucherie, Richardus J.; van Dijk, Nico; van Dijk, Nico M.
2011-01-01
This chapter considers the decomposition and aggregation of multiclass queueing networks with state-dependent routing. Combining state-dependent generalisations of quasi-reversibility and biased local balance, sufficient conditions are obtained under which the stationary distribution of the network
Overlapping domain decomposition methods for elliptic quasi ...
Indian Academy of Sciences (India)
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This allows coordination of all the subsystems without the need of sharing local dynamics, objectives and constraints. To illustrate this, an example is included where dual decomposition is used to resolve power grid congestion in a distributed manner among a number of players coupled by distribution grid
Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani
2015-03-01
In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model, obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To demonstrate the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is based on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Sliding Window Empirical Mode Decomposition -its performance and quality
Directory of Open Access Journals (Sweden)
Stepien Pawel
2014-12-01
The proposed algorithm speeds up the computation (by about 10 times) with acceptable quality of decomposition. Conclusions: the Sliding Window EMD algorithm is suitable for decomposition of long signals with high sampling frequency.
Litter decomposition and nutrient dynamics of ten selected tree ...
African Journals Online (AJOL)
Litter decomposition processes in tropical rainforests are still poorly understood. Leaf litter decomposition and nutrient dynamics of ten contrasting tree species, Entandraphragma utile, Guibourtia tessmannii, Klainedoxa gabonensis, Musanga cecropioides, Panda oleosa, Plagiostyles africana, Pterocarpus soyauxii, ...
Decomposition of Amino Diazeniumdiolates (NONOates): Molecular Mechanisms
Energy Technology Data Exchange (ETDEWEB)
Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V.
2014-08-23
Although diazeniumdiolates (X[N(O)NO]-) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to slowly release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a qualitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R2N[N(O)NO]-, where R = -N(C2H5)2 (1), -N(C3H4NH2)2 (2), or -N(C2H4NH2)2 (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]- group, with apparent pKa and decomposition rate constants of 4.6 and 1 s^-1 for 1-H, 3.5 and 83 x 10^-3 s^-1 for 2-H, and 3.8 and 3.3 x 10^-3 s^-1 for 3-H. Although protonation occurs mainly on the O atoms of the functional group, only the minor R2N(H)N(O)NO tautomer (population ~0.01% for 1) undergoes the N-N heterolytic bond cleavage (k ~10^2 s^-1 for 1) leading to amine and NO. Decompositions of protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and bond cleavage. The bond cleavage rates exhibit exceptional sensitivity to the nature of the R substituents, which strongly modulate the activation entropy. At pH < 2, decomposition of all these NONOates is subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]- group.
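The proton-triggered kinetics in this abstract imply the standard single-protonation rate law k_obs = k_max / (1 + 10^(pH - pKa)). The sketch below applies it to compound 1's reported apparent pKa (4.6) and limiting rate (1 s^-1); the specific pH values are illustrative and the simple Henderson-Hasselbalch form is an assumption:

```python
def k_obs(pH, pKa, k_max):
    """Observed decomposition rate of a proton-activated NONOate (sketch).

    Assumes the simple single-protonation form: the protonated fraction
    1 / (1 + 10**(pH - pKa)) times the limiting decomposition rate.
    """
    return k_max / (1.0 + 10.0 ** (pH - pKa))

# Compound 1: apparent pKa 4.6, limiting decomposition rate 1 s^-1.
k_acidic = k_obs(4.0, 4.6, 1.0)    # largely protonated -> near-limiting rate
k_neutral = k_obs(7.0, 4.6, 1.0)   # largely deprotonated -> ~0.004 s^-1
```

At pH = pKa the observed rate is exactly half the limiting rate, which is one quick consistency check on the model.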
Wood decomposition as influenced by invertebrates.
Ulyshen, Michael D
2016-02-01
The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Implementation of domain decomposition and data decomposition algorithms in RMC code
International Nuclear Information System (INIS)
Liang, J.G.; Cai, Y.; Wang, K.; She, D.
2013-01-01
The applications of the Monte Carlo method in reactor physics analysis are somewhat restricted by the excessive memory demand of large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominate the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a strategy of 'divide and rule': problems are divided into different sub-domains to be dealt with separately, and rules are established to make sure the combined results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced
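The "data partition" half of tally data decomposition can be sketched as a simple block partition of tally indices across processes. The sketch below is a generic illustration, not RMC's actual scheme:

```python
def partition_tallies(n_tallies, n_procs):
    """Split tally indices into contiguous, nearly equal blocks, one per process.

    Each process then stores and accumulates only its own block, so per-process
    tally memory drops from O(n_tallies) to roughly O(n_tallies / n_procs).
    """
    base, extra = divmod(n_tallies, n_procs)
    bounds, start = [], 0
    for rank in range(n_procs):
        size = base + (1 if rank < extra else 0)  # spread the remainder evenly
        bounds.append((start, start + size))
        start += size
    return bounds

# 10 tallies over 3 processes -> contiguous blocks of sizes 4, 3, 3.
blocks = partition_tallies(10, 3)
```

The communication half, synchronizing scores that a particle generates for tallies owned by another process, is where the two algorithms in the abstract differ.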
Specific leaf area predicts dryland litter decomposition via two mechanisms
Liu, Guofang; Wang, Lei; Jiang, Li; Pan, Xu; Huang, Zhenying; Dong, Ming; Cornelissen, Johannes H.C.
2018-01-01
Litter decomposition plays important roles in carbon and nutrient cycling. In dryland, both microbial decomposition and abiotic degradation (by UV light or other forces) drive variation in decomposition rates, but whether and how litter traits and position determine the balance between these
Climate fails to predict wood decomposition at regional scales
Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King
2014-01-01
Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...
Biogeochemistry of Decomposition and Detrital Processing
Sanderman, J.; Amundson, R.
2003-12-01
Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, what Eijsackers and Zehnder (1990) compare to a Russian matryoshka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community gradually shifts as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification transforms a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that while the atmosphere supplied 4% of nitrogen and mineral weathering supplied none, nutrient recycling supplied 95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling. Although there is significant external input (1) and output (2) from neighboring ecosystems
ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE
Energy Technology Data Exchange (ETDEWEB)
Ketusky, E.; Subramanian, K.
2012-02-29
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration
Advanced Oxidation: Oxalate Decomposition Testing With Ozone
International Nuclear Information System (INIS)
Ketusky, E.; Subramanian, K.
2012-01-01
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing
El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.
2017-06-01
Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from Macro-Block losses due to either packet dropping or fading-motivated bit errors. Thus, robust 3D-MVV transmission over wireless channels has recently become an important research issue due to restricted resources and the presence of severe channel errors. 3D-MVV is composed of multiple video streams shot simultaneously by several cameras around a single object. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD-watermarked 3D-MVV frames are first converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It reduces the channel effects on the transmitted bit streams and also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD-watermarked 3D-MVV frames were executed. The experimental results show that the received SVD-watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and that watermark extraction remains possible in the proposed framework.
Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis.
Directory of Open Access Journals (Sweden)
Kevin Till
Full Text Available Prediction of adult performance from early-age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players were collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and a validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed that 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p < 0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification.
Self-decomposition of radiochemicals. Principles, control, observations and effects
International Nuclear Information System (INIS)
Evans, E.A.
1976-01-01
The aim of the booklet is to remind the established user of radiochemicals of the problems of self-decomposition and to inform those investigators who are new to the applications of radiotracers. The section headings are: introduction; radionuclides; mechanisms of decomposition; effects of temperature; control of decomposition; observations of self-decomposition (sections for compounds labelled with (a) carbon-14, (b) tritium, (c) phosphorus-32, (d) sulphur-35, (e) gamma- or X-ray emitting radionuclides, decomposition of labelled macromolecules); effects of impurities in radiotracer investigations; stability of labelled compounds during radiotracer studies. (U.K.)
Li, Duan; Xu, Lijun; Li, Xiaolu
2017-04-01
To measure the distances and properties of the objects within a laser footprint, a decomposition method for full-waveform light detection and ranging (LiDAR) echoes is proposed. In this method, firstly, wavelet decomposition is used to filter the noise and estimate the noise level in a full-waveform echo. Secondly, peak and inflection points of the filtered full-waveform echo are used to detect the echo components in the filtered full-waveform echo. Lastly, particle swarm optimization (PSO) is used to remove the noise-caused echo components and optimize the parameters of the most probable echo components. Simulation results show that the wavelet-decomposition-based filter achieves better SNR improvement and higher decomposition success rates than Wiener and Gaussian smoothing filters. In addition, the noise level estimated using the wavelet-decomposition-based filter is more accurate than those estimated using the other two commonly used methods. Experiments were carried out to evaluate the proposed method, which was compared with our previous method (called GS-LM for short). In the experiments, a lab-built full-waveform LiDAR system was utilized to provide eight types of full-waveform echoes scattered from three objects at different distances. Experimental results show that the proposed method has higher success rates for decomposition of full-waveform echoes and more accurate parameter estimation for echo components than GS-LM. The proposed method based on wavelet decomposition and PSO is effective for decomposing more complicated full-waveform echoes, for estimating the multi-level distances of the objects and measuring the properties of the objects in a laser footprint.
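The detect-then-refine pipeline described above can be sketched for the common Gaussian echo model. This is a minimal illustration with synthetic data: ordinary least squares (`curve_fit`) stands in for the paper's PSO step, and the waveform, noise level, and initial width guess are all assumptions, not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def gaussian_mixture(t, *params):
    """Sum of Gaussian echo components; params = (A, mu, sigma) per component."""
    out = np.zeros_like(t)
    for i in range(0, len(params), 3):
        amp, mu, sigma = params[i:i + 3]
        out += amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return out

def decompose_echo(t, echo, min_prominence=0.1):
    """Detect candidate components at waveform peaks, then jointly refine
    all (amplitude, position, width) parameters by least squares."""
    peaks, _ = find_peaks(echo, prominence=min_prominence)
    p0 = []
    for p in peaks:
        p0 += [echo[p], t[p], 2.0]           # rough initial guesses
    popt, _ = curve_fit(gaussian_mixture, t, echo, p0=p0, maxfev=20000)
    return np.asarray(popt).reshape(-1, 3)   # one (A, mu, sigma) row per echo

# Synthetic two-return waveform with additive noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 1000)
clean = gaussian_mixture(t, 1.0, 30.0, 3.0, 0.6, 55.0, 4.0)
echo = clean + rng.normal(0.0, 0.01, t.size)
components = decompose_echo(t, echo)
print(components)
```

The prominence threshold plays the role the paper assigns to noise-level estimation: components whose prominence falls below the estimated noise floor are never seeded into the fit.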
Multiple Shooting and Time Domain Decomposition Methods
Geiger, Michael; Körkel, Stefan; Rannacher, Rolf
2015-01-01
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
Thermal decomposition of irradiated casein molecules
International Nuclear Information System (INIS)
Ali, M.A.; Elsayed, A.A.
1998-01-01
Non-isothermal studies were carried out using the derivatograph, where thermogravimetry (TG) and differential thermogravimetry (DTG) measurements were used to obtain the activation energies of the first and second reactions for casein (glyco-phospho-protein) decomposition before and after exposure to 1 Gy γ-rays and up to 40 × 10⁴ μGy fast neutrons. 252Cf was used as a source of fast neutrons, associated with γ-rays. A 137Cs source was used as a pure γ-source. The activation energies for the first and second reactions for casein decomposition were found to be smaller at 400 μGy than at lower and higher fast neutron doses. However, no change in activation energies was observed after γ-irradiation. It is concluded from the present study that destruction of casein molecules by low-level fast neutron doses may lead to changes in the shelf storage period of milk.
Thermal decomposition of barium valerate in argon
DEFF Research Database (Denmark)
Torres, P.; Norby, Poul; Grivel, Jean-Claude
2015-01-01
The thermal decomposition of barium valerate (Ba(C4H9CO2)2 / Ba-pentanoate) was studied in argon by means of thermogravimetry, differential thermal analysis, IR-spectroscopy, X-ray diffraction and hot-stage optical microscopy. Melting takes place in two different steps, at 200 degrees C and 280 degrees C, and evidence was found for the solidification of the melt at 380-440 degrees C, i.e. simultaneously with the onset of decomposition. Between 400 degrees C and 520 degrees C, Ba(C4H9CO2)2 decomposes in two main steps, first into BaCO3 with release of C4H9COC4H9 (5-nonanone), whereas the final...
Domain decomposition multigrid for unstructured grids
Energy Technology Data Exchange (ETDEWEB)
Shapira, Yair
1997-01-01
A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.
Heuristic decomposition for non-hierarchic systems
Bloebaum, Christina L.; Hajela, P.
1991-01-01
Design and optimization are substantially more complex in multidisciplinary and large-scale engineering applications due to the inherently coupled interactions among disciplines. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable to nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.
Radiation decomposition of technetium-99m radiopharmaceuticals
International Nuclear Information System (INIS)
Billinghurst, M.W.; Rempel, S.; Westendorf, B.A.
1979-01-01
Technetium-99m radiopharmaceuticals are shown to be subject to autoradiation-induced decomposition, which results in increasing abundance of pertechnetate in the preparation. This autodecomposition is catalyzed by the presence of oxygen, although the removal of oxygen does not prevent its occurrence. The initial appearance of pertechnetate in the radiopharmaceutical is shown to be a function of the amount of radioactivity, the quantity of stannous ion used, and the ratio of 99mTc to total technetium in the preparation.
Information decomposition method to analyze symbolical sequences
International Nuclear Information System (INIS)
Korotkov, E.V.; Korotkova, M.A.; Kudryashov, N.A.
2003-01-01
The information decomposition (ID) method to analyze symbolical sequences is presented. This method allows us to reveal a latent periodicity of any symbolical sequence. The ID method is shown to have advantages in comparison with application of the Fourier transformation, the wavelet transform and the dynamic programming method to look for latent periodicity. Examples of the latent periods for poetic texts, DNA sequences and amino acids are presented. Possible origin of a latent periodicity for different symbolical sequences is discussed
Numerical CP Decomposition of Some Difficult Tensors
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Phan, A. H.; Cichocki, A.
2017-01-01
Roč. 317, č. 1 (2017), s. 362-370 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA14-13713S Institutional support: RVO:67985556 Keywords : Small matrix multiplication * Canonical polyadic tensor decomposition * Levenberg-Marquardt method Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/tichavsky-0468385.pdf
Decomposition of monolithic web application to microservices
Zaymus, Mikulas
2017-01-01
Solteq Oyj has an internal Wellbeing project for massage reservations. The task of this thesis was to transform the monolithic architecture of this application into microservices. The thesis starts with a detailed comparison between microservices and a monolithic application. It points out the benefits and disadvantages that microservice architecture can bring to a project. Next, it describes the theory and possible strategies that can be used in the process of decomposition of an existing monoli...
Domain decomposition methods for mortar finite elements
Energy Technology Data Exchange (ETDEWEB)
Widlund, O.
1996-12-31
In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.
Nonconformity problem in 3D Grid decomposition
Czech Academy of Sciences Publication Activity Database
Kolcun, Alexej
2002-01-01
Roč. 10, č. 1 (2002), s. 249-253 ISSN 1213-6972. [International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2002/10./. Plzeň, 04.02.2002-08.02.2002] R&D Projects: GA ČR GA105/99/1229; GA ČR GA105/01/1242 Institutional research plan: CEZ:AV0Z3086906 Keywords : structured mesh * decomposition * nonconformity Subject RIV: BA - General Mathematics
Quantum Mechanical Image of Matrices' LDU Decomposition
International Nuclear Information System (INIS)
Fan Hongyi; Yuan Shaojie
2010-01-01
For the classical transformation (q1, q2) → (Aq1 + Bq2, Cq1 + Dq2), where AD − CB ≠ 1, we find its quantum mechanical image by using the LDU decomposition of the matrix with rows (A, B) and (C, D). The explicit operators L̂, D̂, and Û are derived and their physical meaning is revealed; this also provides a new way of disentangling some exponential operators. (general)
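For a general 2 × 2 matrix, the LDU factorization used here takes an explicit form (assuming A ≠ 0; the diagonal factor is written D' below to avoid clashing with the matrix entry D, and AD − BC is the same determinant the abstract writes as AD − CB):

```latex
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
=
\underbrace{\begin{pmatrix} 1 & 0 \\ C/A & 1 \end{pmatrix}}_{L}
\underbrace{\begin{pmatrix} A & 0 \\ 0 & (AD - BC)/A \end{pmatrix}}_{D'}
\underbrace{\begin{pmatrix} 1 & B/A \\ 0 & 1 \end{pmatrix}}_{U}
```

Multiplying back confirms the identity: L D' gives rows (A, 0) and (C, (AD − BC)/A), and applying U restores the entries B and D.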
Gas hydrates forming and decomposition conditions analysis
Directory of Open Access Journals (Sweden)
А. М. Павленко
2017-07-01
Full Text Available The concept of gas hydrates has been defined; their brief description has been given; factors that affect the formation and decomposition of the hydrates have been reported; and their distribution, structure and the thermodynamic conditions determining gas hydrate formation in gas pipelines have been considered. Advantages and disadvantages of the known methods for removing gas hydrate plugs from pipelines have been analyzed, and the necessity of their further study has been proved. In addition to their negative impact on the process of gas extraction, the properties of hydrates make it possible to outline the following possible fields of their industrial use: obtaining ultrahigh pressures in confined spaces at hydrate decomposition; separating hydrocarbon mixtures by successive transfer of individual components through the hydrate given the mode; obtaining cold due to heat absorption at hydrate decomposition; elimination of an open gas fountain by means of hydrate plugs in the bore hole of the gushing gas well; seawater desalination, based on the ability of hydrates to bind only water molecules into the solid state; wastewater purification; gas storage in the hydrate state; dispersion of high-temperature fog and clouds by means of hydrates; water-hydrate emulsion injection into the productive strata to raise the oil recovery factor; obtaining cold in gas processing to cool the gas, etc.
The quantile score and its decomposition
Bentzien, Sabrina; Friederichs, Petra
2014-05-01
Forecast verification for probabilistic weather and climate predictions gains more and more importance due to the increasing number of ensemble prediction systems. The predictive performance of probabilistic forecasts is generally assessed using proper score functions, which are applied to a set of forecast-observation pairs. The propriety of a score guarantees honesty and prevents hedging. A variety of proper scores exist for different types of probabilistic forecasts. Moreover, proper scoring functions can be decomposed into the three parts reliability, resolution, and uncertainty, which describe the main characteristics of a forecasting scheme. This decomposition is well known for the Brier score and the continuous ranked probability score. This study expands the pool of verification methods for probabilistic forecasts by a decomposition of the quantile score (QS). Quantiles are suitable probabilistic measures especially for extreme forecast events, since they do not depend on an a priori defined threshold. The QS is a weighted absolute error between quantile forecasts and observations. We derive a decomposition of the QS into reliability, resolution, and uncertainty, and give a brief description of potential biases. A quantile reliability plot is presented. The quantile verification within this framework is illustrated on precipitation forecasts derived from the mesoscale ensemble prediction system COSMO-DE-EPS of the German Meteorological Service.
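The QS described above is the mean pinball loss. A minimal sketch with synthetic data (the Gaussian sample and the quantile level 0.9 are illustrative choices, not from the study) shows the score and the propriety property that the decomposition relies on, namely that the empirical τ-quantile minimizes the score:

```python
import numpy as np

def quantile_score(q_forecast, y_obs, tau):
    """Mean pinball (quantile) loss at level tau; lower is better."""
    u = np.asarray(y_obs) - np.asarray(q_forecast)
    return float(np.mean(np.where(u >= 0.0, tau * u, (tau - 1.0) * u)))

# Propriety in action: the empirical tau-quantile minimizes the score,
# so any shifted (hedged) forecast scores worse.
rng = np.random.default_rng(1)
y = rng.normal(size=10_000)
tau = 0.9
q_opt = np.quantile(y, tau)
qs_opt = quantile_score(q_opt, y, tau)
qs_high = quantile_score(q_opt + 0.5, y, tau)
qs_low = quantile_score(q_opt - 0.5, y, tau)
print(qs_opt, qs_high, qs_low)
```

The reliability/resolution/uncertainty decomposition then partitions this mean loss over bins of the forecast distribution, in direct analogy to the Brier score decomposition.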
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas
2012-05-22
Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems such as total variation minimization became increasingly important for the solution of these tasks. While being favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.
Hu, Shujuan; Chou, Jifan; Cheng, Jianbo
2018-04-01
In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from the global perspective, the authors proposed the mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations with which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of those three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model is more accurate to capture the major features of global three dimensional atmospheric motions, compared to the traditional definitions of Rossby wave, Hadley circulation and Walker circulation. The decomposition model for the first time realized the decomposition of global atmospheric circulation using three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitudes and low latitudes circulations.
Block-diagonal representations for covariance-based anomalous change detectors
Energy Technology Data Exchange (ETDEWEB)
Matsekh, Anna M [Los Alamos National Laboratory; Theiler, James P [Los Alamos National Laboratory
2010-01-01
We use singular vectors of the whitened cross-covariance matrix of two hyperspectral images and the Golub-Kahan permutations in order to obtain equivalent tridiagonal representations of the coefficient matrices for a family of covariance-based quadratic Anomalous Change Detection (ACD) algorithms. Due to the nature of the problem, these tridiagonal matrices have block-diagonal structure, which we exploit to derive analytical expressions for the eigenvalues of the coefficient matrices in terms of the singular values of the whitened cross-covariance matrix. The block-diagonal structure of the matrices of the RX, Chronochrome, symmetrized Chronochrome, Whitened Total Least Squares, Hyperbolic and Subpixel Hyperbolic Anomalous Change Detectors is revealed by the white singular value decomposition and Golub-Kahan transformations. Similarities and differences in the properties of these change detectors are illuminated by their eigenvalue spectra. We presented a methodology that provides the eigenvalue spectrum for a wide range of quadratic anomalous change detectors. Table I summarizes these results, and Fig. I illustrates them. Although their eigenvalues differ, we find that RX, HACD, Subpixel HACD, symmetrized Chronochrome, and WTLSQ share the same eigenvectors. The eigenvectors for the two variants of Chronochrome defined in (18) differ from these, and from each other, even though they share many (but not all, unless d_x = d_y) eigenvalues. We demonstrated that it is sufficient to compute the SVD of the whitened cross-covariance matrix of the data in order to almost immediately obtain the highly structured sparse coefficient matrices (and their eigenvalue spectra) of these ACD algorithms in the white SVD-transformed coordinates. Converting to the original non-white coordinates, these eigenvalues will be modified in magnitude but not in sign; that is, the number of positive, zero-valued, and negative eigenvalues is conserved.
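The common first step of all these detectors, computing the SVD of the whitened cross-covariance matrix, can be sketched numerically. The data below are synthetic stand-ins for the two coregistered hyperspectral images, and the symmetric inverse square root is just one common choice of whitening transform:

```python
import numpy as np

def whiten(X):
    """Center the columns and map the data so its sample covariance is I."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T  # symmetric inverse sqrt
    return Xc @ W

# Synthetic stand-ins for two coregistered images (n pixels, d bands)
rng = np.random.default_rng(2)
n, d = 5000, 4
X = rng.normal(size=(n, d))
Y = 0.8 * X + 0.2 * rng.normal(size=(n, d))   # correlated "second" image

Xw, Yw = whiten(X), whiten(Y)
C = Xw.T @ Yw / (n - 1)        # whitened cross-covariance matrix
U, s, Vt = np.linalg.svd(C)    # s holds the canonical correlations
print(s)
```

Because both data sets are whitened, the singular values in `s` are canonical correlations and therefore lie in [0, 1]; the abstract's eigenvalue expressions for each detector are functions of these values.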
Directory of Open Access Journals (Sweden)
M. Menvielle
2007-10-01
Full Text Available Thermospheric densities deduced from STAR accelerometer measurements onboard the CHAMP satellite are used to characterize the thermosphere and its response to space weather events. The STAR thermospheric density estimates are analysed using a Singular Value Decomposition (SVD) approach allowing one to decouple large-scale spatial and temporal variations from fast and local transients. Because SVD achieves such decomposition by using the reproducibility of orbital variations, it provides more meaningful results than any method based upon data smoothing or filtering.
SVD analysis enables us to propose a new thermosphere proxy, based on the projection coefficient of the CHAMP densities on the first singular vector. The large-scale spatial variations in the density, mostly related to altitude/latitude variations, are captured by the first singular vector; time variations are captured by the associated projection coefficient.
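The proxy construction can be sketched with a synthetic density matrix; the spatial pattern and activity series below are illustrative stand-ins for the CHAMP/STAR data, not actual measurements:

```python
import numpy as np

# Synthetic "density" matrix (positions x time): one dominant large-scale
# spatial pattern whose amplitude varies in time, plus measurement noise.
rng = np.random.default_rng(3)
n_pos, n_t = 60, 500
spatial = np.exp(-np.linspace(-2.0, 2.0, n_pos) ** 2)       # spatial pattern
activity = 1.0 + 0.3 * np.sin(np.linspace(0.0, 20.0, n_t))  # global driver
density = np.outer(spatial, activity) + 0.01 * rng.normal(size=(n_pos, n_t))

U, s, Vt = np.linalg.svd(density, full_matrices=False)
u1 = U[:, 0]
if u1.sum() < 0:          # SVD sign is arbitrary; fix a positive orientation
    u1 = -u1
proxy = u1 @ density      # projection coefficient time series = the proxy

corr = np.corrcoef(proxy, activity)[0, 1]
print(corr)               # the proxy tracks the global time variation
```

The first singular vector recovers the fixed spatial pattern, so projecting each time sample onto it yields a single time series that follows the global driver up to a constant scale, which is exactly the role the proxy plays in the study.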
The study presented here is focused on time-dependent global scale variations in the thermospheric density between 50° N and 50° S geographic latitudes. We show that the time variations in the projection coefficient do in fact represent those in the global density that are associated with magnetic activity as well as with solar EUV radiation. We also show that the NRLMSISE-00 empirical model better accounts for the density forcing by solar radiation when tuned using Mg II indices. Using the model so modified, with an additional geomagnetic parameterization corresponding to a quiet geomagnetic situation, enables one to define time reference values which are then used to evaluate the impact of geomagnetic activity. The ratio of the CHAMP density projection coefficient to the quiet model projection coefficient is a global quantity, independent of altitude and latitude, which quantifies the thermospheric density response to auroral energy deposition. It will serve as a proxy of the response of thermospheric density to
Thermodynamic anomaly in magnesium hydroxide decomposition
International Nuclear Information System (INIS)
Reis, T.A.
1983-08-01
The origin of the discrepancy in the equilibrium water vapor pressure measurements for the reaction Mg(OH)2(s) = MgO(s) + H2O(g), when determined by Knudsen effusion and static manometry at the same temperature, was investigated. For this reaction undergoing continuous thermal decomposition in Knudsen cells, Kay and Gregory observed that by extrapolating the steady-state apparent equilibrium vapor pressure measurements to zero orifice, the vapor pressure was approx. 10⁻⁴ of that previously established by Giauque and Archibald as the true thermodynamic equilibrium vapor pressure using statistical mechanical entropy calculations for the entropy of water vapor. This large difference in vapor pressures suggests the possibility of the formation in a Knudsen cell of a higher-energy MgO that is thermodynamically metastable by about 48 kJ/mol. It has been shown here that experimental results are qualitatively independent of the type of Mg(OH)2 used as a starting material, which confirms the inferences of Kay and Gregory. Thus, most forms of Mg(OH)2 are considered to be the stable thermodynamic equilibrium form. X-ray diffraction results show that during the course of the reaction only the equilibrium NaCl-type MgO is formed, and no different phases result from samples prepared in Knudsen cells. Surface area data indicate that the MgO molar surface area remains constant throughout the course of the reaction at low decomposition temperatures, and no significant annealing occurs below 400 °C. Scanning electron microscope photographs show no change in particle size or particle surface morphology. Solution calorimetric measurements indicate no inherent higher energy content in the MgO from the solid produced in Knudsen cells. The Knudsen cell vapor pressure discrepancy may reflect the formation of a transient metastable MgO or Mg(OH)2-MgO solid solution during continuous thermal decomposition in Knudsen cells.
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
Spatial Cox point processes form a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of Matérn covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.
Decomposition of Variance for Spatial Cox Processes.
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-03-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees.
Empirical mode decomposition for analyzing acoustical signals
Huang, Norden E. (Inventor)
2005-01-01
The present invention discloses a computer-implemented signal analysis method based on the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. The Empirical Mode Decomposition (EMD) and the Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal is decomposed into Intrinsic Mode Function components (IMFs). Once the invention decomposes the acoustic signal into its constituent components, operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into a Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.
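The sifting procedure at the heart of EMD can be sketched in a few lines. The following is a minimal numpy-only illustration, not the patented implementation: it substitutes linear interpolation between extrema for the usual cubic-spline envelopes, which is an assumed simplification.

```python
import numpy as np

def sift_once(x):
    """One EMD sifting pass: subtract the mean of the upper and lower
    envelopes (here, linear interpolation through local extrema)."""
    n = np.arange(len(x))
    maxima = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i] < x[i - 1] and x[i] < x[i + 1]]
    if len(maxima) < 2 or len(minima) < 2:
        return x  # too few extrema: treat x as a residual trend
    upper = np.interp(n, maxima, x[maxima])
    lower = np.interp(n, minima, x[minima])
    return x - (upper + lower) / 2.0

# toy signal: a fast oscillation riding on a slow one
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
h = x.copy()
for _ in range(10):            # repeated sifting extracts the first IMF
    h = sift_once(h)
err = np.mean((h - np.sin(2 * np.pi * 40 * t)) ** 2)
```

The first extracted IMF should closely resemble the fast oscillation, with the slow component left in the residue.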
Multiresolution signal decomposition transforms, subbands, and wavelets
Akansu, Ali N
1992-01-01
This book provides an in-depth, integrated, and up-to-date exposition of the topic of signal decomposition techniques. Application areas of these techniques include speech and image processing, machine vision, information engineering, High-Definition Television, and telecommunications. The book will serve as a major reference for those entering the field, for instructors teaching some or all of the topics in an advanced graduate course, and for researchers needing to consult an authoritative source. It is the first book to give a unified and coherent exposition of multiresolution signal decomposition.
Fringe pattern denoising via image decomposition.
Fu, Shujun; Zhang, Caiming
2012-02-01
Filtering noise from a fringe pattern is one of the key tasks in optical interferometry. In this Letter, using suitable function spaces to model the different components of a fringe pattern, we propose a new fringe pattern denoising method based on image decomposition. In our method, a fringe image is divided into three parts: low-frequency fringe, high-frequency fringe, and noise, which are processed in different spaces. An adaptive threshold in the wavelet shrinkage involved in this algorithm improves its denoising performance. Simulation and experimental results show that our algorithm obtains smooth and clean fringes with different frequencies while preserving fringe features effectively.
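The wavelet-shrinkage step mentioned in this abstract can be sketched with a one-level Haar transform and a soft threshold. This is a generic illustration, not the Letter's algorithm: the universal threshold used here stands in for the paper's adaptive threshold, and the signal is a toy low-frequency "fringe".

```python
import numpy as np

def haar_denoise(x, sigma):
    """One-level Haar wavelet shrinkage with the universal threshold
    (a stand-in for the paper's adaptive threshold); len(x) must be even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)  # soft threshold
    y = np.empty_like(x)                   # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 4 * t)          # smooth low-frequency component
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = haar_denoise(noisy, sigma=0.3)
```

Thresholding suppresses the detail band, which for a smooth signal is dominated by noise, so the mean squared error drops relative to the noisy input.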
Observation of spinodal decomposition in nuclei?
International Nuclear Information System (INIS)
Guarnera, A.; Colonna, M.; Chomaz, Ph.
1996-01-01
In the framework of the recently developed stochastic one-body descriptions it has been shown that the occurrence of nuclear multifragmentation by spinodal decomposition is characterized by typical size and time scales; in particular, the formation of nearly equal-mass fragments is expected around Z=10. A first preliminary comparison of our predictions with experimental data for Xe + Cu at 45 MeV/A and for Xe + Sn at 50 MeV/A, recently measured by the Indra collaboration, is presented. The agreement of the results with the data ultimately argues in favour of the possible occurrence of a first-order phase transition. (K.A.)
Enhanced double patterning decomposition using lines encoding
Directory of Open Access Journals (Sweden)
Khaled M. Soradi
2016-09-01
Full Text Available Double patterning photolithography (DPL) is considered one of the best solutions for enabling 32 nm/22 nm technology. In this paper, we propose a new technique for double patterning post-decomposition conflict resolution. The algorithm is based on line-position encoding followed by code pattern matching. Experimental results show that the use of encoded patterns decreases the time needed for pattern matching and increases the matching accuracy. The overall manual problem-solution time is reduced to about 1%.
Freeman-Durden Decomposition with Oriented Dihedral Scattering
Directory of Open Access Journals (Sweden)
Yan Jian
2014-10-01
Full Text Available In this paper, when the azimuth direction of polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data for agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.
Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films
Energy Technology Data Exchange (ETDEWEB)
Eloussifi, H. [GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Laboratoire de Chimie Inorganique, Faculté des Sciences de Sfax, Université de Sfax, BP 1171, 3000 Sfax (Tunisia); Farjas, J., E-mail: jordi.farjas@udg.cat [GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Roura, P. [GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Ricart, S.; Puig, T.; Obradors, X. [Institut de Ciència de Materials de Barcelona (CSIC), Campus UAB, 08193 Bellaterra, Catalonia (Spain); Dammak, M. [Laboratoire de Chimie Inorganique, Faculté des Sciences de Sfax, Université de Sfax, BP 1171, 3000 Sfax (Tunisia)
2013-10-31
We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as the powder decomposition; however, yttria and all intermediates except YF{sub 3} appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films.
Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films
International Nuclear Information System (INIS)
Eloussifi, H.; Farjas, J.; Roura, P.; Ricart, S.; Puig, T.; Obradors, X.; Dammak, M.
2013-01-01
We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as the powder decomposition; however, yttria and all intermediates except YF₃ appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films.
Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques
2016-07-05
and simulation (see Fig. 2). Fig. 1: LDI computational domain used for decomposition analysis. ... combustors. Since each proper orthogonal decomposition mode comprises multiple frequencies, specific modes of the pressure and heat release are not related ... qualitative and less efficient for identifying physical mechanisms. On the other hand, dynamic mode decomposition analysis generates a global frequency
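The dynamic mode decomposition mentioned in this record can be sketched in a few lines of exact DMD: snapshots of a known linear system are fit with a least-squares operator whose eigenvalues recover the system's dynamics. This is a generic minimal illustration, not the report's combustor analysis.

```python
import numpy as np

# Exact DMD sketch: snapshots of a known linear map x_{k+1} = A x_k;
# the DMD eigenvalues should recover the eigenvalues of A.
A = np.array([[0.9, -0.2],
              [0.2,  0.9]])          # damped rotation: complex eigenvalues
X = np.empty((2, 50))
X[:, 0] = [1.0, 0.0]
for k in range(49):
    X[:, k + 1] = A @ X[:, k]
X1, X2 = X[:, :-1], X[:, 1:]         # time-shifted snapshot matrices
Atilde = X2 @ np.linalg.pinv(X1)     # least-squares linear operator
eigs = np.linalg.eigvals(Atilde)     # DMD eigenvalues (global frequencies)
```

Each DMD eigenvalue carries a single growth rate and frequency, which is why DMD yields a global frequency per mode, in contrast to POD modes that mix frequencies.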
Caffeic acid decomposition products: antioxidants or pro-oxidants?
Andueza, S. (Susana); Manzocco, L. (Lara); Peña, M.P. (María Paz) de; Cid, C. (Concepción); Nicoli, C. (Cristina)
2009-01-01
The potential of phenol antioxidants to undergo decomposition reactions leading to the formation of products exerting pro-oxidant activity was studied. A hydroalcoholic solution containing caffeic acid was assessed for antioxidant and pro-oxidant activity during heating at 90 °C to simulate the heat maintenance of coffee brews in thermos flasks. Decomposition products were also evaluated by HPLC analysis. In the early steps of caffeic acid decomposition, a decrease in antioxidant capacity ...
Stability estimates for hybrid coupled domain decomposition methods
Steinbach, Olaf
2003-01-01
Domain decomposition methods are a well established tool for an efficient numerical solution of partial differential equations, in particular for the coupling of different model equations and of different discretization methods. Based on the approximate solution of local boundary value problems either by finite or boundary element methods, the global problem is reduced to an operator equation on the skeleton of the domain decomposition. Different variational formulations then lead to hybrid domain decomposition methods.
Characteristics of Three Decomposer Accelerators on Maize Straw Decomposition
KUANG En-jun; CHI Feng-qin; SU Qing-rui; ZHANG Jiu-ming; GAO Zhong-chao; ZHU Bao-guo
2014-01-01
In order to determine the effect of straw decomposer accelerators on maize straws in Northeast China, the mesh bag method was used to determine the decomposition characteristics of maize straw biomass and the nutrient release regularity over one year. The results showed that after 100 days, the decomposition rates of maize straw biomass were between 57.1% and 64.1%. The highest decomposition rate of 64.1% was obtained with the 3rd decomposer accelerator. The nutrient release rates ...
Parallel decomposition methods for the solution of electromagnetic scattering problems
Cwik, Tom
1992-01-01
This paper contains an overview of the methods used in decomposing solutions to scattering problems onto coarse-grained parallel processors. Initially, a short summary of relevant computer architecture is presented as background to the subsequent discussion. After the introduction of a programming model for problem decomposition, specific decompositions of finite-difference time-domain, finite element, and integral equation solutions to Maxwell's equations are presented. The paper concludes with an outline of possible software-assisted decomposition methods and a summary.
Molecular simulation of non-equilibrium methane hydrate decomposition process
Energy Technology Data Exchange (ETDEWEB)
Bagherzadeh, S.Alireza; Englezos, Peter [Department of Chemical and Biological Engineering, University of British Columbia, Vancouver, British Columbia, V6T 1Z3 (Canada); Alavi, Saman, E-mail: saman.alavi@nrc-cnrc.gc.ca [Steacie Institute for Molecular Sciences, National Research Council of Canada, 100 Sussex Dr., Ottawa, Ontario, K1A 0R6 (Canada); Ripmeester, John A., E-mail: john.ripmeester@nrc-cnrc.gc.ca [Steacie Institute for Molecular Sciences, National Research Council of Canada, 100 Sussex Dr., Ottawa, Ontario, K1A 0R6 (Canada)
2012-01-15
Graphical abstract: Highlights: > Decomposition of methane hydrate is studied with molecular dynamics simulations. > Simulations are performed under adiabatic conditions (no thermostats). > The effects of heat and mass transfer during the decomposition are observed. > Temperature gradients are established as the hydrate decomposes. > Intrinsic reaction kinetics picture of hydrate dissociation is revisited. - Abstract: We recently performed constant-energy molecular dynamics simulations of the endothermic decomposition of methane hydrate in contact with water to study phenomenologically the role of mass and heat transfer in the decomposition rate [S. Alavi, J.A. Ripmeester, J. Chem. Phys. 132 (2010) 144703]. We observed that with the progress of the decomposition front, temperature gradients are established between the remaining solid hydrate and the solution phases. In this work, we provide further quantitative macroscopic and molecular-level analysis of the methane hydrate decomposition process, with an emphasis on elucidating microscopic details and how they affect the predicted rate of methane hydrate decomposition in natural methane hydrate reservoirs. A quantitative criterion is used to characterize the decomposition of the hydrate phase at different times. Hydrate dissociation occurs in a stepwise fashion, with rows of sI cages parallel to the interface decomposing simultaneously. The correlations between decomposition times of subsequent layers of the hydrate phase are discussed.
Primary decomposition of torsion R[X]-modules
Directory of Open Access Journals (Sweden)
William A. Adkins
1994-01-01
Full Text Available This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation on any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor products.
Decomposition synthesis strategy directed to FPGA with special MTBDD representation
Opara, Adam; Kubica, Marcin
2016-12-01
This paper presents decomposition techniques to obtain partial sharing of logical resources between the logical structures associated with the single functions belonging to a multi-output function. In the BDD function representation, the decomposition is associated with the problem of single or multiple cuts of the diagram. In the paper, the authors focus on the problem of searching for functions for the joint implementation of the decomposition realized by multiple cutting of SMTBDD diagrams. During the decomposition process the key is to develop effective methods of splitting and merging MTBDD diagrams. This problem was solved by introducing a new type of diagram, the PMTBDD. The effectiveness of the developed methods has been confirmed experimentally.
Plant identity influences decomposition through more than one mechanism.
Directory of Open Access Journals (Sweden)
Jennie R McLaren
Full Text Available Plant litter decomposition is a critical ecosystem process representing a major pathway for carbon flux, but little is known about how it is affected by changes in plant composition and diversity. Single plant functional groups (graminoids, legumes, non-leguminous forbs) were removed from a grassland in northern Canada to examine the impacts of functional group identity on decomposition. Removals were conducted within two different environmental contexts (fertilization and fungicide application) to examine the context-dependency of these identity effects. We examined two different mechanisms by which the loss of plant functional groups may impact decomposition: effects of the living plant community on the decomposition microenvironment, and changes in the species composition of the decomposing litter, as well as the interaction between these mechanisms. We show that the identity of the plant functional group removed affects decomposition through both mechanisms. Removal of both graminoids and forbs slowed decomposition through changes in the decomposition microenvironment. We found non-additive effects of litter mixing, with both the direction and the identity of the functional group responsible depending on year; in 2004 graminoids positively influenced decomposition, whereas in 2006 forbs negatively influenced the decomposition rate. Although these two mechanisms act independently, their effects may be additive if both mechanisms are considered simultaneously. It is essential to understand the variety of mechanisms through which even a single ecosystem property is affected if we are to predict the future consequences of biodiversity loss.
Decomposition of intermetallics during high-energy ball-milling
International Nuclear Information System (INIS)
Kwon, Y.S.; Choi, P.P.; Kim, J.S.; Kwon, D.H.; Gerasimov, K.B.
2007-01-01
The decomposition behavior of FeSn, CoSn and CoIn₂ intermetallics under high-energy ball-milling has been investigated using X-ray diffraction, calorimetric and magnetization measurements. Upon milling, a large amount of the FeSn intermetallic decomposes into Fe₅Sn₃ and FeSn₂, where the average grain size of the product phases stays nearly constant with milling time. Similar observations are made for the CoSn intermetallic, which decomposes into Co₃Sn₂ and Sn. It is suggested that the mechanically driven decomposition of FeSn and CoSn results from local melting of powder particles due to high-temperature pulses during ball collisions. In contrast to FeSn and CoSn, CoIn₂ does not undergo decomposition upon milling. The different decomposition behaviors of the studied intermetallics may be attributed to the volume changes accompanying the decomposition process. Whereas a negative volume change is associated with the decomposition of FeSn and CoSn into their product phases, the decomposition of CoIn₂ leads to an increase in volume. Hence, high local stresses under ball collisions are expected to make the mechanically induced decomposition of FeSn and CoSn favorable but rather hinder the decomposition of CoIn₂.
DECOMPOSITION OF MANUFACTURING PROCESSES: A REVIEW
Directory of Open Access Journals (Sweden)
N.M.Z.N. Mohamed
2012-06-01
Full Text Available Manufacturing is a global activity that started during the industrial revolution in the late 19th century to cater for the large-scale production of products. Since then, manufacturing has changed tremendously through the innovations of technology, processes, materials, communication and transportation. The major challenge facing manufacturing is to produce more products using less material, less energy and less involvement of labour. To face these challenges, manufacturing companies must have a strategy and competitive priority in order for them to compete in a dynamic market. A review of the literature on the decomposition of manufacturing processes outlines three main processes, namely: high volume, medium volume and low volume. The decomposition shows that each sub process has its own characteristics and depends on the nature of the firm’s business. Two extreme processes are continuous line production (fast extreme and project shop (slow extreme. Other processes are in between these two extremes of the manufacturing spectrum. Process flow patterns become less complex with cellular, line and continuous flow compared with jobbing and project. The review also indicates that when the product is high variety and low volume, project or functional production is applied.
Kinetics of bromochloramine formation and decomposition.
Luh, Jeanne; Mariñas, Benito J
2014-01-01
Batch experiments were performed to study the kinetics of bromochloramine formation and decomposition from the reaction of monochloramine and bromide ion. The effects of pH, initial monochloramine and bromide ion concentrations, phosphate buffer concentration, and excess ammonia were evaluated. Results showed that the monochloramine decay rate increased with decreasing pH and increasing bromide ion concentration, and the concentration of bromochloramine increased to a maximum before decreasing gradually. The maximum bromochloramine concentration reached was found to decrease with increasing phosphate and ammonia concentrations. Previous models in the literature were not able to capture the decay of bromochloramine, and therefore we proposed an extended model consisting of reactions for monochloramine autodecomposition, the decay of bromamines in the presence of bromide, bromochloramine formation, and bromochloramine decomposition. Reaction rate constants were obtained through least-squares fitting to 11 data sets representing the effect of pH, bromide, monochloramine, phosphate, and excess ammonia. The reaction rate constants were then used to predict monochloramine and bromochloramine concentration profiles for all experimental conditions tested. In general, the modeled lines were found to provide good agreement with the experimental data under most conditions tested, with deviations occurring at low pH and high bromide concentrations.
Succinct and fast empirical mode decomposition
Li, Hongguang; Hu, Yue; Li, Fucai; Meng, Guang
2017-02-01
Empirical mode decomposition (EMD) has been extensively studied and widely utilized in various areas. In this paper, order-statistics filters are used to replace the traditional interpolation methods and estimate the envelopes in the EMD method. Window size selection criteria are proposed to optimize the processing results. Both simulated and experimental signals are applied to investigate the characteristics and effectiveness of the proposed method. The results demonstrate that the envelope estimation procedure provides a tremendous enhancement of the EMD method. In the proposed method, application of the order-statistics filter simplifies the process of estimating envelopes and minimizes the end effect of the traditional EMD method. Moreover, the sifting stop criterion and window width selection are optimized to improve the decomposition speed and the processed results. Hence, the proposed method is fast, time-efficient and effective, and is named succinct and fast EMD (SF-EMD) in this paper. An application to multiple-fault diagnosis on a rotor test rig verifies its potential in practical engineering.
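An order-statistics envelope estimate of the kind this abstract describes can be sketched as a sliding-window max/min followed by a smoother. The window width and the moving-average smoother below are illustrative assumptions, not the paper's selection criteria.

```python
import numpy as np

def os_envelopes(x, w):
    """Order-statistics envelope estimate: sliding-window max (upper) and
    min (lower), each smoothed by a moving average of the same width w
    (w assumed odd)."""
    r = w // 2
    pad = np.pad(x, r, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, w)
    upper, lower = win.max(axis=1), win.min(axis=1)
    ker = np.ones(w) / w
    smooth = lambda e: np.convolve(np.pad(e, r, mode="edge"), ker, mode="valid")
    return smooth(upper), smooth(lower)

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 50 * t)      # unit-amplitude oscillation
up, lo = os_envelopes(x, w=41)      # window spans about one period
```

With a window of roughly one oscillation period, the rank filters track the ±1 amplitude without any spline fitting, which is the source of the method's speed.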
Experimental study of trimethyl aluminum decomposition
Zhang, Zhi; Pan, Yang; Yang, Jiuzhong; Jiang, Zhiming; Fang, Haisheng
2017-09-01
Trimethyl aluminum (TMA) is an important precursor used for metal-organic chemical vapor deposition (MOCVD) of most Al-containing structures, in particular nitride structures. The reaction mechanism of TMA with ammonia is neither clear nor certain due to its complexity. Pyrolysis of the trimethyl metal is the start of a series of reactions and thus significantly affects the growth. Experimental study of TMA pyrolysis, however, has not yet been conducted in detail. In this paper, a reflectron time-of-flight mass spectrometer is adopted to measure the TMA decomposition from room temperature to 800 °C in a special pyrolysis furnace, activated by soft X-rays from synchrotron radiation. The results show that the generation of methyl, ethane and monomethyl aluminum (MMA) indicates the start of the pyrolysis process. In the low-temperature range from 25 °C to 700 °C, the main product is dimethyl aluminum (DMA) from the decomposition of TMA. For temperatures above 700 °C, the main products are MMA, DMA, methyl and ethane.
Empirical Mode Decomposition and Hilbert Spectral Analysis
Huang, Norden E.
1998-01-01
The difficulty facing data analysis is the lack of methods to handle nonlinear and nonstationary time series. Traditional Fourier-based analyses simply cannot be applied here. A new method for analyzing nonlinear and nonstationary data has been developed. The key part is the Empirical Mode Decomposition (EMD) method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs) that serve as the basis of the representation of the data. This decomposition method is adaptive and, therefore, highly efficient. The IMFs admit well-behaved Hilbert transforms, and yield instantaneous energy and frequency as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. Among the main conceptual innovations is the introduction of instantaneous frequencies for complicated data sets, which eliminates the need for spurious harmonics to represent nonlinear and nonstationary signals. Examples from numerical results of classical nonlinear equation systems and data representing natural phenomena are given to demonstrate the power of this new method. The classical nonlinear system data are especially interesting, for they serve to illustrate the roles played by nonlinear and nonstationary effects in the energy-frequency-time distribution.
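The instantaneous-frequency step of the Hilbert Spectral Analysis can be sketched as follows: form the analytic signal of an IMF via the FFT, then differentiate the unwrapped phase. This is the standard textbook construction (equivalent to scipy's `signal.hilbert`), shown here on a pure tone rather than a real IMF.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: zero the negative-frequency half of the
    spectrum and double the positive half."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 80 * t)                  # a single "IMF": an 80 Hz tone
z = analytic_signal(x)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
```

For a mono-component IMF the derivative of the analytic phase gives a single, well-defined frequency at every instant, which is exactly what the Hilbert Spectrum plots against time.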
Comparing structural decomposition analysis and index
International Nuclear Information System (INIS)
Hoekstra, Rutger; Van den Bergh, Jeroen C.J.M.
2003-01-01
To analyze and understand historical changes in economic, environmental, employment or other socio-economic indicators, it is useful to assess the driving forces or determinants that underlie these changes. Two techniques for decomposing indicator changes at the sector level are structural decomposition analysis (SDA) and index decomposition analysis (IDA). For example, SDA and IDA have been used to analyze changes in indicators such as energy use, CO₂ emissions, labor demand and value added. The changes in these variables are decomposed into determinants such as technological, demand, and structural effects. SDA uses information from input-output tables while IDA uses aggregate data at the sector level. The two methods have developed quite independently, which has resulted in each method being characterized by specific, unique techniques and approaches. This paper has three aims. First, the similarities and differences between the two approaches are summarized. Second, the possibility of transferring specific techniques and indices is explored. Finally, a numerical example is used to illustrate differences between the two approaches.
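A concrete index-decomposition example of the kind this paper compares: the logarithmic mean Divisia index (LMDI), a common IDA technique, splits a change in energy use E = Q × I into an activity effect and an intensity effect with no residual. The numbers below are invented for illustration, not taken from the paper.

```python
import math

def logmean(a, b):
    """Logarithmic mean, the weighting function used in LMDI."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# One-sector example: energy use E = activity Q x energy intensity I.
Q0, I0 = 100.0, 2.0     # base year
QT, IT = 130.0, 1.6     # target year
E0, ET = Q0 * I0, QT * IT
L = logmean(ET, E0)
activity_effect  = L * math.log(QT / Q0)   # effect of output growth
intensity_effect = L * math.log(IT / I0)   # effect of efficiency improvement
total = activity_effect + intensity_effect # equals ET - E0 exactly
```

The exact (residual-free) additivity shown here is one of the properties on which IDA index choices are judged; SDA achieves the analogous split using input-output coefficients instead of aggregate ratios.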
Eck, Brendan L.; Fahmi, Rachid; Levi, Jacob; Fares, Anas; Wu, Hao; Li, Yuemeng; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.
2016-03-01
Myocardial perfusion imaging using CT (MPI-CT) has the potential to provide quantitative measures of myocardial blood flow (MBF) which can aid the diagnosis of coronary artery disease. We evaluated the quantitative accuracy of MPI-CT in a porcine model of balloon-induced LAD coronary artery ischemia guided by fractional flow reserve (FFR). We quantified MBF at baseline (FFR = 1.0) and under moderate ischemia (FFR = 0.7) using MPI-CT and compared to fluorescent microsphere-based MBF from high-resolution cryo-images. Dynamic, contrast-enhanced CT images were obtained using a spectral detector CT (Philips Healthcare). Projection-based mono-energetic images were reconstructed and processed to obtain MBF. Three MBF quantification approaches were evaluated: singular value decomposition (SVD) with fixed Tikhonov regularization (ThSVD), SVD with regularization determined by the L-Curve criterion (LSVD), and Johnson-Wilson parameter estimation (JW). The three approaches over-estimated MBF compared to cryo-images. JW produced the most accurate MBF, with average error 33.3 ± 19.2 mL/min/100 g, whereas LSVD and ThSVD had greater over-estimation, 59.5 ± 28.3 mL/min/100 g and 78.3 ± 25.6 mL/min/100 g, respectively. Relative blood flow as assessed by a flow ratio of LAD-to-remote myocardium was strongly correlated between JW and cryo-imaging, with R² = 0.97, compared to R² = 0.88 and 0.78 for LSVD and ThSVD, respectively. We assessed tissue impulse response functions (IRFs) from each approach for sources of error. While JW was constrained to physiologic solutions, both LSVD and ThSVD produced IRFs with non-physiologic properties due to noise. The L-curve provided noise-adaptive regularization but did not eliminate non-physiologic IRF properties or optimize for MBF accuracy. These findings suggest that model-based MPI-CT approaches may be more appropriate for quantitative MBF estimation and that cryo-imaging can support the development of MPI-CT by providing spatial distributions of MBF.
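The SVD-with-Tikhonov deconvolution evaluated in this study can be sketched on synthetic data: recover a tissue impulse response r from a tissue curve c = A r, where A is the convolution matrix of the arterial input function. The AIF shape, residue function, noise level, and the fixed regularization parameter below are all illustrative assumptions, not the study's settings.

```python
import numpy as np

# Tikhonov-regularized SVD deconvolution (ThSVD-style) on toy perfusion data.
rng = np.random.default_rng(2)
n, dt = 60, 1.0
t = np.arange(n) * dt
aif = np.exp(-((t - 10.0) ** 2) / 8.0)          # toy arterial input function
r_true = np.exp(-t / 8.0)                        # toy impulse residue function
A = dt * np.tril(np.array([[aif[i - j] if i >= j else 0.0
                            for j in range(n)] for i in range(n)]))
c = A @ r_true + 0.01 * rng.standard_normal(n)   # noisy tissue curve
U, s, Vt = np.linalg.svd(A)
lam = 0.1 * s[0]                                  # fixed Tikhonov parameter
filt = s / (s ** 2 + lam ** 2)                    # damped inverse singular values
r_est = Vt.T @ (filt * (U.T @ c))
```

The damped filter suppresses the tiny singular values that would otherwise amplify noise into the non-physiologic IRF oscillations the study reports; the price is a smoothing bias controlled by the (here fixed) parameter.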
Precoding method interference management for quasi-EVD channel.
Duan, Wei; Song, Wei; Song, Sang Seob; Lee, Moon Ho
2014-01-01
A Cholesky decomposition-block diagonalization (CD-BD) interference alignment (IA) scheme for a multiuser multiple-input multiple-output (MU-MIMO) relay system is proposed, which designs precoders for the multiple access channel (MAC) by employing the singular value decomposition (SVD) as well as the mean square error (MSE) detector for the broadcast Hermitian channel (BHC) exploited in our design. Also, in our proposed CD-BD IA algorithm, the relaying function is used to restructure the quasi-eigenvalue decomposition (quasi-EVD) equivalent channel. This approach to the design of the BD precoding matrix can significantly reduce the computational complexity, and the proposed algorithm can address several optimization criteria, which is achieved by designing the precoding matrices in two steps. In the first step, we use the Cholesky decomposition to maximize the sum rate (SR) with minimum mean square error (MMSE) detection. In the next step, we optimize the system BER performance with the overlap of the row spaces spanned by the effective channel matrices of different users. By iterating the closed form of the solution, we are able not only to maximize the achievable sum rate (ASR), but also to minimize the BER in the high signal-to-noise ratio (SNR) region.
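The SVD-based precoding that this abstract builds on can be shown in its simplest form: precoding with the right singular vectors and filtering with the left singular vectors turns a MIMO channel into parallel scalar subchannels. This is the textbook single-user case, not the paper's multiuser CD-BD construction.

```python
import numpy as np

# SVD precoding sketch: with precoder V and receive filter U^H, the channel
# H = U S V^H reduces to diag(s), i.e., independent scalar subchannels.
rng = np.random.default_rng(3)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
U, s, Vh = np.linalg.svd(H)
precoder = Vh.conj().T          # transmit along right singular vectors
rx_filter = U.conj().T          # receive along left singular vectors
H_eff = rx_filter @ H @ precoder  # effective channel: diag(s)
```

Multiuser schemes such as BD generalize this idea by choosing each user's precoder in the null space of the other users' channels, so that the effective channels are block diagonal rather than diagonal.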
Directory of Open Access Journals (Sweden)
Rong Duan
Full Text Available To address the heavy computational burden of multi-epoch ambiguity resolution and of high-order matrix inversion in GPS kinematic relative positioning, a modified algorithm for fast integer ambiguity resolution is proposed. Firstly, Singular Value Decomposition (SVD) is applied to construct the left null space matrix in order to eliminate the baseline components, which separates the ambiguity parameters from the position parameters efficiently. A Kalman filter is applied only to estimate the ambiguity parameters, so that a real-time ambiguity float solution is obtained. Then, sorting and multiple (inverse) paired Cholesky decompositions are adopted for decorrelation of the ambiguities. With diagonal element preprocessing and diagonal element sorting according to the results of the Cholesky decomposition, the efficiency of decomposition and decorrelation is improved. Lastly, the integer search algorithm implemented in the LAMBDA method is used to search for the integer ambiguity. To verify the validity and efficacy of the proposed algorithm, static and kinematic tests are carried out. Experimental results show that the algorithm has good decorrelation performance and float-solution precision, with computation speed also increased effectively. The final positioning accuracy, with static baseline error less than 1 cm and kinematic error less than 2 cm, indicates that it can be used for fast kinematic positioning of high-precision carriers.
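The left-null-space elimination step described in this abstract can be sketched as follows: premultiplying observation equations y = B b + A a + e by a matrix U2 spanning the left null space of the baseline design block B removes the baseline parameters b, leaving an ambiguity-only model. The matrix names and dimensions here are illustrative assumptions.

```python
import numpy as np

# SVD-based left-null-space elimination of baseline parameters.
rng = np.random.default_rng(4)
m, p, q = 10, 3, 4
B = rng.standard_normal((m, p))      # baseline (position) design block
A = rng.standard_normal((m, q))      # ambiguity design block
U, s, Vt = np.linalg.svd(B)
U2 = U[:, p:]                        # m - p columns spanning null(B^T)
A_reduced = U2.T @ A                 # transformed model: (U2^T y) = A_reduced a + noise
```

Since U2ᵀB = 0, the baseline unknowns drop out exactly, and the Kalman filter then only has to carry the q ambiguity states instead of the full p + q parameter vector.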
Aiello, Katherine A.; Alter, Orly
2016-01-01
We use the generalized singular value decomposition (GSVD), formulated as a comparative spectral decomposition, to model patient-matched grades III and II, i.e., lower-grade astrocytoma (LGA) brain tumor and normal DNA copy-number profiles. A genome-wide tumor-exclusive pattern of DNA copy-number alterations (CNAs) is revealed, encompassed in that previously uncovered in glioblastoma (GBM), i.e., grade IV astrocytoma, where GBM-specific CNAs encode for enhanced opportunities for transformation and proliferation via growth and developmental signaling pathways in GBM relative to LGA. The GSVD separates the LGA pattern from other sources of biological and experimental variation, common to both, or exclusive to one of the tumor and normal datasets. We find, first, and computationally validate, that the LGA pattern is correlated with a patient’s survival and response to treatment. Second, the GBM pattern identifies among the LGA patients a subtype, statistically indistinguishable from that among the GBM patients, where the CNA genotype is correlated with an approximately one-year survival phenotype. Third, cross-platform classification of the Affymetrix-measured LGA and GBM profiles by using the Agilent-derived GBM pattern shows that the GBM pattern is a platform-independent predictor of astrocytoma outcome. Statistically, the pattern is a better predictor (corresponding to greater median survival time difference, proportional hazard ratio, and concordance index) than the patient’s age and the tumor’s grade, which are the best indicators of astrocytoma currently in clinical use, and laboratory tests. The pattern is also statistically independent of these indicators, and, combined with either one, is an even better predictor of astrocytoma outcome. Recurring DNA CNAs have been observed in astrocytoma tumors’ genomes for decades, however, copy-number subtypes that are predictive of patients’ outcomes were not identified before. This is despite the growing number
Fogel, Paul; Gaston-Mathé, Yann; Hawkins, Douglas; Fogel, Fajwel; Luta, George; Young, S Stanley
2016-05-18
Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data are non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data are normalized by subtracting the row/column means, they become mixed-sign and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability.
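The split-and-concatenate idea can be sketched directly: center the data, stack the positive part next to the absolute value of the negative part, and factorize the non-negative result. The sketch below uses a plain multiplicative-update NMF written in numpy (a standard Lee-Seung update, standing in for whatever NMF solver the authors used); the data and rank are illustrative.

```python
import numpy as np

def nmf(X, r, iters=500, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF (Frobenius loss), X >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], r)) + eps
    H = rng.random((r, X.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 6))
Xc = X - X.mean(axis=0)                    # column-centered: mixed signs

# PosNegNMF idea: concatenate positive and |negative| parts, then run NMF.
X_posneg = np.hstack([np.maximum(Xc, 0), np.maximum(-Xc, 0)])
W, H = nmf(X_posneg, r=3)
clusters = W.argmax(axis=1)                # cluster rows by dominant factor
assert X_posneg.min() >= 0
assert clusters.shape == (20,)
```

Because every entry of the concatenated matrix is non-negative, the usual NMF machinery applies unchanged, and the row factors W can be read as soft cluster memberships.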
Directory of Open Access Journals (Sweden)
Varsha Dhankani
2014-11-01
Full Text Available Herpes simplex virus-2 (HSV-2) is a chronic reactivating infection that leads to recurrent shedding episodes in the genital tract. A minority of episodes are prolonged and associated with the development of painful ulcers. However, currently available tools poorly predict viral trajectories and the timing of reactivations in infected individuals. We employed principal components analysis (PCA) and singular value decomposition (SVD) to interpret HSV-2 genital tract shedding time-series data, as well as simulation output from a stochastic spatial mathematical model. Empirical and model-derived time-series data gathered over >30 days consist of multiple complex episodes that could not be reduced to a manageable number of descriptive features with PCA and SVD. However, single HSV-2 shedding episodes, even those with prolonged duration and complex morphologies consisting of multiple erratic peaks, were consistently described using a maximum of four dominant features. Modeled and clinical episodes had equivalent distributions of dominant features, implying similar dynamics in real and simulated episodes. We applied linear discriminant analysis (LDA) to simulation output and identified that local immune cell density at the viral reactivation site had a predictive effect on episode duration, though longer-term shedding suggested chaotic dynamics and could not be predicted from spatial patterns of immune cell density. These findings suggest that HSV-2 shedding patterns within an individual are impossible to predict over weeks or months, and that even highly complex single HSV-2 episodes can only be partially predicted from the spatial distribution of immune cell density.
TTF HOM Data Analysis with Curve Fitting Method
Energy Technology Data Exchange (ETDEWEB)
Pei, S.; Adolphsen, C.; Li, Z.; Bane, K.; Smith, J.; /SLAC
2009-07-14
To investigate the possibility of using HOM signals induced in SC cavities as beam and cavity diagnostics, narrow-band (20 MHz) data were recorded around the strong TE111-6 (6π/9-like) dipole modes (1.7 GHz) in the 40 L-band (1.3 GHz) cavities at the DESY TTF facility. The analyses of these data have so far focused on using a Singular Value Decomposition (SVD) technique to correlate the signals with each other and with data from conventional BPMs, to show that the dipole signals provide an alternate means of measuring the beam trajectory. However, these analyses do not extract the modal information (i.e., the frequencies and Q's of the nearly degenerate horizontal and vertical modes). In this paper, we describe a method to fit the signal frequency spectrum to obtain this information, and then use the resulting mode amplitudes and phases together with conventional BPM data to determine the mode polarizations and relative centers and tilts. Compared with the SVD analysis, this method is more physical, and it can also be used to obtain the beam position and trajectory angle.
Directory of Open Access Journals (Sweden)
Yin Zhu
2016-05-01
Full Text Available Interference alignment (IA) is a new approach to addressing interference in modern multiple-input multiple-output (MIMO) cellular networks, in which interference is an important factor limiting system throughput. In most IA implementation schemes, system throughput is significantly improved only with perfect channel state information and in a high signal-to-noise ratio (SNR) region. Designing a simple IA scheme for systems with limited feedback, and investigating system performance in the low-to-medium SNR region, is therefore important and practical. This paper proposes a precoding and user-selection scheme based on partial interference alignment in two-cell downlink multi-user MIMO systems under limited feedback. The scheme aligns inter-cell interference to a predefined direction by designing the users' receive antenna combining vectors. A modified singular value decomposition (SVD)-based beamforming method and a corresponding user-selection algorithm are proposed for systems with low-rate limited feedback to improve sum-rate performance. Simulation results show that the proposed scheme achieves a higher sum rate than traditional schemes without IA. The modified SVD-based beamforming scheme is also superior to the traditional zero-forcing beamforming scheme in low-rate limited-feedback systems. The proposed partial IA scheme requires neither collaboration between transmitters nor joint design between the transmitter and the users, and can be implemented with low feedback overhead in current MIMO cellular networks.
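The core property behind SVD-based beamforming is that the dominant right singular vector of the channel matrix is the transmit direction that maximizes array gain. A minimal numpy sketch of that property (generic channel dimensions, not the paper's two-cell setup or its modification):

```python
import numpy as np

rng = np.random.default_rng(3)
# 2 receive antennas, 4 transmit antennas.
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))

U, s, Vh = np.linalg.svd(H)
v = Vh[0].conj()                        # dominant right singular vector
gain_svd = np.linalg.norm(H @ v)
assert np.isclose(gain_svd, s[0])       # achieves the largest singular value

# No other unit-norm beamformer gives a larger array gain.
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
u /= np.linalg.norm(u)
assert np.linalg.norm(H @ u) <= gain_svd + 1e-12
```

Modified SVD beamformers like the one proposed here start from this vector and then adjust it to respect quantized (limited) feedback and interference-alignment constraints.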
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion-weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time, and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second method is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
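The SVD-family deconvolution these methods compete with can be sketched in numpy: build the lower-triangular Toeplitz convolution matrix from the arterial input function, then invert it through a truncated SVD. The curves below are toy functions, not clinical data, and the run is noise-free so the recovery is exact; in practice the truncation threshold is raised (e.g. to 10-20% of the largest singular value) to regularize noise, which is the role of the oscillation limit in oSVD.

```python
import numpy as np

dt = 1.0
t = np.arange(20) * dt
aif = (t + dt) * np.exp(-t / 2.0)          # toy arterial input function
r = np.exp(-t / 4.0)                       # toy residue function (IRF)

# Discrete convolution matrix: lower-triangular Toeplitz, A[i, j] = dt*aif[i-j].
i, j = np.indices((len(t), len(t)))
A = dt * np.tril(aif[i - j])
c = A @ r                                  # tissue concentration curve

# Truncated-SVD deconvolution: drop singular values below a threshold.
U, s, Vh = np.linalg.svd(A)
keep = s > 1e-12 * s[0]                    # noise-free: keep everything
r_hat = Vh.T[:, keep] @ ((U.T[keep] @ c) / s[keep])
assert np.allclose(r_hat, r)
```

Cerebral blood flow is then read off the recovered residue function (proportional to its peak), which is why the stability of this inversion matters so much.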
Xu, Kailiang; Ta, Dean; Cassereau, Didier; Hu, Bo; Wang, Weiqi; Laugier, Pascal; Minonzio, Jean-Gabriel
2016-09-01
Some pioneering studies have shown the clinical feasibility of long bone evaluation using ultrasonic guided waves. Such a strategy is typically designed to determine the dispersion information of the guided modes in order to infer the elastic and structural characteristics of cortical bone. However, there are still some challenges in extracting multimode dispersion curves due to many practical limitations, e.g., the high spectral density of modes, limited spectral resolution, and poor signal-to-noise ratio. Recently, two representative signal processing methods have been proposed to improve dispersion curve extraction. The first method is based on singular value decomposition (SVD), with the advantage of a multi-emitter and multi-receiver configuration for enhanced mode extraction; the second uses the linear Radon transform (LRT) for high-resolution imaging of the dispersion curves. To clarify the pros and cons, a face-to-face comparison was performed between the two methods. The results suggest that the LRT method is suitable for separating the guided modes in the low frequency-thickness product (fh) range; for multimode signals in a broadband fh range, the SVD-based method shows more robust performance for weak mode enhancement and noise filtering. Both methods are valuable for covering the entire fh range when processing ultrasonic axial transmission signals measured in long cortical bones.
A quasi-decadal cycle in Caribbean climate
Jury, Mark R.
2009-07-01
Climatic variables in the period from 1951 to 2000 are analyzed across the tropics from the East Pacific to Africa. A quasi-decadal mode is isolated using singular value decomposition (SVD) applied to monthly smoothed and detrended rainfall, sea surface temperature (SST), sea level pressure (SLP), and 200 hPa zonal (U) wind anomalies. Seven- to 10-year cycles in Caribbean rainfall are revealed as the dominant mode (50% variance) and are found to be related to a mode of tropical variability that is distinct from previously known global signals. Sources of this signal include east Atlantic SST and the northern subtropical ridge, which modulate upwelling off Venezuela. SVD analysis of daily rainfall suggests interaction between annual and quasi-decadal signals, with northern summer convection as a driver. The second mode of Caribbean rainfall variability derives from the east Pacific El Niño Southern Oscillation (ENSO, 21% variance) and is expressed as an east-west dipole of convection across the Caribbean. Composite analysis of rainfall for high and low phases of the quasi-decadal cycle reveals a corresponding signal that extends from the eastern Pacific Ocean across the Caribbean and West Africa to India. The southern Hadley cell spins up during the wet phase, and the ITCZ migrates northward. This hemispheric-scale anomaly brings pulses of convection to the Caribbean. Impacts of the quasi-decadal cycle on socioeconomic resources are investigated.
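In climate work, "SVD analysis" of two fields usually means maximum covariance analysis: the SVD of the cross-covariance matrix between two anomaly fields yields paired spatial patterns ranked by squared covariance. A minimal synthetic sketch of that construction (toy fields sharing one planted cycle, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(4)
nt = 200
cycle = np.sin(2 * np.pi * np.arange(nt) / 10.0)   # shared decadal-like cycle

# Two anomaly fields (time x gridpoints) sharing one coupled mode plus noise.
p1, p2 = rng.standard_normal(5), rng.standard_normal(7)
X = np.outer(cycle, p1) + 0.1 * rng.standard_normal((nt, 5))
Y = np.outer(cycle, p2) + 0.1 * rng.standard_normal((nt, 7))

# SVD of the cross-covariance matrix gives paired coupled patterns.
C = X.T @ Y / nt
U, s, Vh = np.linalg.svd(C)
frac = s[0] ** 2 / np.sum(s ** 2)   # squared-covariance fraction of mode 1
assert frac > 0.9                   # the planted mode dominates
```

The leading columns of U and rows of Vh are the coupled patterns (e.g. a rainfall pattern paired with an SST pattern), and the squared-covariance fraction plays the role of the variance percentages quoted in the abstract.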
Tensor Factorization for Low-Rank Tensor Completion.
Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao
2018-03-01
Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, achieving state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data, which are naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over state-of-the-art approaches, including the TNN and matricization methods.
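The t-SVD the abstract refers to is computed by taking an FFT along the third mode and then an ordinary SVD of each frontal slice in the Fourier domain, which is exactly the per-iteration cost the proposed factorization avoids. A minimal numpy sketch of that construction (generic small tensor; the function name is ours):

```python
import numpy as np

def tsvd(A):
    """t-SVD of an n1 x n2 x n3 tensor: per-slice SVDs in the Fourier domain."""
    Af = np.fft.fft(A, axis=2)
    n1, n2, n3 = A.shape
    k = min(n1, n2)
    Uf = np.zeros((n1, k, n3), dtype=complex)
    Sf = np.zeros((k, k, n3), dtype=complex)
    Vf = np.zeros((n2, k, n3), dtype=complex)
    for i in range(n3):
        u, s, vh = np.linalg.svd(Af[:, :, i], full_matrices=False)
        Uf[:, :, i], Sf[:, :, i], Vf[:, :, i] = u, np.diag(s), vh.conj().T
    return Uf, Sf, Vf

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3, 5))
Uf, Sf, Vf = tsvd(A)

# Reconstruct each Fourier-domain slice as U S V^H, then invert the FFT.
Rf = np.einsum('ikn,kln,jln->ijn', Uf, Sf, Vf.conj())
R = np.real(np.fft.ifft(Rf, axis=2))
assert np.allclose(R, A)
```

Each of the n3 slice SVDs costs O(n1·n2·min(n1,n2)); replacing them with updates of two small factor tensors is the source of the speedup the paper claims.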
Rasouli, Zolaikha; Ghavami, Raouf
2016-12-01
The current study describes results of the application of radial basis function-partial least squares (RBF-PLS), partial robust M-regression (PRM), singular value decomposition (SVD), evolving factor analysis (EFA), multivariate curve resolution with alternating least squares (MCR-ALS), and rank annihilation factor analysis (RAFA) methods for the simultaneous determination of trace amounts of calcium (Ca2+) and magnesium (Mg2+), and for exploratory analysis based on the formation of their colored complexes with 1-(1-hydroxy-4-methyl-2-phenylazo)-2-naphthol-4-sulfonic acid (calmagite) as a chromogenic reagent. The complex formation of Ca2+ and Mg2+ with calmagite was investigated at pH 10.20. The performance of the RBF-PLS model in the detection of the minerals was compared with that of PRM as a linear model. The pure concentration and spectral profiles were obtained using MCR-ALS. EFA and SVD were used to determine the number of species. The stability constants of the complexes were derived using RAFA. Finally, RBF-PLS was utilized for the simultaneous determination of the minerals in a pharmaceutical formulation and various vegetable samples.
A noise reduction technique based on nonlinear kernel function for heart sound analysis.
Mondal, Ashok; Saxena, Ishan; Tang, Hong; Banerjee, Poulami
2017-02-13
The main difficulty encountered in the interpretation of cardiac sounds is interference from noise. The contaminating noise obscures relevant information that is useful for the recognition of heart diseases. The unwanted signals are produced mainly by the lungs and the surrounding environment. In this paper, a novel heart sound de-noising technique is introduced based on a combined framework of the wavelet packet transform (WPT) and singular value decomposition (SVD). The most informative node of the wavelet tree is selected on the criterion of a mutual information measurement. Next, the coefficients corresponding to the selected node are processed by the SVD technique to suppress the noisy component of the heart sound signal. To demonstrate the efficacy of the proposed technique, several experiments were conducted with a heart sound dataset, including normal and pathological cases at different signal-to-noise ratios. The significance of the method is validated by statistical analysis of the results. The biological information preserved in the de-noised heart sound (HS) signal is evaluated by a k-means clustering algorithm and a Fit Factor calculation. The overall results show that the proposed method is superior to the baseline methods.
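The SVD-based denoising step in pipelines like this can be illustrated on its own: embed the signal in a Hankel (trajectory) matrix, keep only the dominant singular components, and map back by anti-diagonal averaging. This is a generic singular-spectrum-style sketch on a toy tone, not the authors' WPT+SVD pipeline, and it omits the wavelet-node selection entirely.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 25.0)          # toy periodic component
noisy = clean + 0.5 * rng.standard_normal(n)

# Embed into a Hankel trajectory matrix: X[i, j] = noisy[i + j].
L = 50
i, j = np.indices((L, n - L + 1))
X = noisy[i + j]

# Keep only the top singular pairs (a single sinusoid has rank 2).
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2
Xr = (U[:, :r] * s[:r]) @ Vh[:r]

# Anti-diagonal averaging maps the low-rank matrix back to a signal.
den = np.zeros(n)
cnt = np.zeros(n)
for row in range(L):
    den[row:row + Xr.shape[1]] += Xr[row]
    cnt[row:row + Xr.shape[1]] += 1
den /= cnt
assert np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

The large singular values capture the structured (quasi-periodic) part of the signal while the broadband noise is spread over many small ones, which is why truncation suppresses it.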
Directory of Open Access Journals (Sweden)
Dan Yang
2017-04-01
Full Text Available To solve the problem of multi-fault blind source separation (BSS) in the case that the observed signals are under-determined, a novel approach for single-channel blind source separation (SCBSS) based on an improved tensor-based singular spectrum analysis (TSSA) is proposed. As the most natural representation of high-dimensional data, a tensor can preserve the intrinsic structure of the data to the maximum extent. Thus, the TSSA method can be employed to extract multi-fault features from the measured single-channel vibration signal. However, SCBSS based on TSSA still has some limitations, mainly that the convergence of TSSA is unsatisfactory in many cases and that the number of source signals is hard to estimate accurately. Therefore, an improved TSSA algorithm based on canonical decomposition and parallel factors (CANDECOMP/PARAFAC) weighted optimization, namely CP-WOPT, is proposed in this paper. The CP-WOPT algorithm processes the factor matrices using a first-order optimization approach instead of the original least squares method in TSSA, so as to improve the convergence of the algorithm. In order to accurately estimate the number of source signals in BSS, the EMD-SVD-BIC (empirical mode decomposition-singular value decomposition-Bayesian information criterion) method is introduced in place of the SVD in the conventional TSSA. To validate the proposed method, we applied it to the analysis of a numerically simulated signal and multi-fault rolling bearing signals.
Harikumar, Rajaguru; Vijayakumar, Thangavel
2014-12-01
The objective of this paper is to compare the performance of singular value decomposition (SVD), expectation maximization (EM), and modified expectation maximization (MEM) as postclassifiers for the classification of epilepsy risk levels obtained from features extracted through wavelet transforms and morphological filters from electroencephalogram (EEG) signals. The code converter acts as a level-one classifier. Seven features, namely energy, variance, positive and negative peaks, spike and sharp waves, events, average duration, and covariance, are extracted from the EEG signals. Of these, four parameters, namely positive and negative peaks, spike and sharp waves, events, and average duration, are extracted using Haar, dB2, dB4, and Sym8 wavelet transforms with hard and soft thresholding methods. The same four features are also extracted through morphological filters. Then, the performance of the code converter and the classifiers is compared based on parameters such as the performance index (PI) and quality value (QV). The performance index and quality value of the code converters are at low values of 33.26% and 12.74, respectively. The highest PI of 98.03% and QV of 23.82 are attained with the dB2 wavelet and hard thresholding for the SVD classifier. All the postclassifiers settle at a PI value of more than 90% at a QV of 20.
Abe, M.; Murata, Y.; Iinuma, H.; Ogitsu, T.; Saito, N.; Sasaki, K.; Mibe, T.; Nakayama, H.
2018-05-01
A magnetic field design method for the placement of magneto-motive forces (coil blocks (CBs) and an iron yoke) for g-2/EDM measurements has been developed, and candidate placements were designed under superconducting limitations of a current density of 125 A/mm2 and a maximum magnetic field on the CBs of less than 5.5 T. The placements of the CBs and of an iron yoke with poles were determined by tuning SVD (singular value decomposition) eigenmode strengths. The SVD was applied to a response matrix from the magneto-motive forces to the magnetic fields in the muon storage region, and two-dimensional (2D) placements of the magneto-motive forces were designed by tuning the magnetic field eigenmode strengths obtained from the magnetic field; the tuning was performed iteratively. Magnetic field ripples in the azimuthal direction were minimized in the design. The candidate magnetic design had five CBs and an iron yoke with center iron poles. The magnet satisfied the homogeneity specifications of 0.2 ppm peak-to-peak in the 2D placements (the cylindrical coordinates of radial position R and axial position Z) and less than 1.0 ppm ripple in the ring muon storage volume (R = 0.318 m, Z = 0.0 m) for spiral muon injection through the iron yoke at the top.
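The eigenmode-tuning idea rests on a standard construction: take the SVD of the response matrix from magneto-motive force strengths to field samples, and solve for the strengths using only the well-conditioned eigenmodes (a truncated pseudoinverse). The sketch below uses a random response matrix, not the magnet geometry of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_fields, n_coils = 12, 5
R = rng.standard_normal((n_fields, n_coils))   # field response matrix
target = rng.standard_normal(n_fields)         # desired field correction

# Truncated-SVD pseudoinverse: drive only the retained eigenmodes.
U, s, Vh = np.linalg.svd(R, full_matrices=False)
keep = s > 0.05 * s[0]
currents = Vh.T[:, keep] @ ((U.T[keep] @ target) / s[keep])

# The residual field error lies entirely outside the retained modes,
# so no further adjustment of those modes can reduce it.
residual = target - R @ currents
assert np.allclose(U.T[keep] @ residual, 0, atol=1e-10)
```

Iterating this solve as the CB and yoke placements are adjusted, while watching which eigenmodes need large strengths, mirrors the tuning loop the abstract describes.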
A fast rank-reduction algorithm for three-dimensional seismic data interpolation
Jia, Yongna; Yu, Siwei; Liu, Lina; Ma, Jianwei
2016-09-01
Rank-reduction methods have been successfully used for seismic data interpolation and noise attenuation. However, intensive computation is required for the singular value decomposition (SVD) in most rank-reduction methods. In this paper, we propose a simple yet efficient interpolation algorithm, based on the Hankel matrix, for randomly missing traces. Following the multichannel singular spectrum analysis (MSSA) technique, we first transform the seismic data into a low-rank block Hankel matrix for each frequency slice. Then, a fast orthogonal rank-one matrix pursuit (OR1MP) algorithm is employed to minimize the low-rank constraint of the block Hankel matrix. In the new algorithm, only the top left and right singular vectors need to be computed, thereby avoiding the complexity of computing the full SVD and improving calculation efficiency significantly. Finally, we anti-average the rank-reduced block Hankel matrix and obtain the reconstructed data in the frequency domain. Numerical experiments on 3D seismic data show that the proposed interpolation algorithm provides much better performance than the traditional MSSA algorithm in computational speed, especially for large-scale data processing.
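The key economy here, needing only the top singular pair rather than a full SVD, can be sketched with plain power iteration on AᵀA. This is a generic sketch of that building block, not the OR1MP algorithm itself (which additionally pursues rank-one updates and reweights them).

```python
import numpy as np

def top_singular_pair(A, iters=200):
    """Dominant singular triplet (u, sigma, v) by power iteration; no full SVD."""
    v = np.ones(A.shape[1]) / np.sqrt(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)          # one step of power iteration on A^T A
        v /= np.linalg.norm(v)
    sigma = np.linalg.norm(A @ v)
    u = A @ v / sigma
    return u, sigma, v

rng = np.random.default_rng(8)
A = rng.standard_normal((30, 20))
u, sigma, v = top_singular_pair(A)

# Agrees with the largest singular value from a full SVD.
s_full = np.linalg.svd(A, compute_uv=False)
assert np.isclose(sigma, s_full[0], rtol=1e-6)
```

Each iteration costs only two matrix-vector products, which is why pursuing one singular pair at a time scales to the large block Hankel matrices that arise per frequency slice.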
Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis.
Till, Kevin; Jones, Ben L; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B
2016-01-01
Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players was collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; ptalent identification.
Chen, Jean J; Smith, Michael R; Frayne, Richard
2005-03-01
In dynamic-susceptibility contrast magnetic resonance perfusion imaging, the cerebral blood flow (CBF) is estimated from the tissue residue function obtained through deconvolution of the contrast concentration functions. However, the reliability of CBF estimates obtained by deconvolution is sensitive to various distortions including high-frequency noise amplification. The frequency-domain Fourier transform-based and the time-domain singular-value decomposition-based (SVD) algorithms both have biases introduced into their CBF estimates when noise stability criteria are applied or when contrast recirculation is present. The recovery of the desired signal components from amid these distortions by modeling the residue function in the frequency domain is demonstrated. The basic advantages and applicability of the frequency-domain modeling concept are explored through a simple frequency-domain Lorentzian model (FDLM); with results compared to standard SVD-based approaches. The performance of the FDLM method is model dependent, well representing residue functions in the exponential family while less accurately representing other functions. (c) 2005 Wiley-Liss, Inc.
Cheung, Hoffman H. N.; Keenlyside, Noel; Omrani, Nour-Eddine; Zhou, Wen
2018-01-01
We identify that the projected uncertainty of the pan-Arctic sea-ice concentration (SIC) is strongly coupled with the Eurasian circulation in the boreal winter (December-March; DJFM), based on a singular value decomposition (SVD) analysis of the forced response of 11 CMIP5 models. In the models showing a stronger sea-ice decline, the Polar cell becomes weaker and there is an anomalous increase in sea level pressure (SLP) along 60°N, including the Urals-Siberia region and the Iceland low region. There is an accompanying weakening of both the midlatitude westerly winds and the Ferrel cell, where the SVD signals are also related to anomalous sea surface temperature warming in the midlatitude North Atlantic. In the Mediterranean region, the anomalous circulation response shows decreasing SLP and increasing precipitation. The anomalous SLP responses over the Euro-Atlantic region project onto the negative North Atlantic Oscillation-like pattern. Altogether, pan-Arctic SIC decline could strongly impact the winter Eurasian climate, but we should be cautious about the causality of their linkage.
Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.
Liu, Jing; Zhou, Weidong; Juwono, Filbert H
2017-05-08
Direction-of-arrival (DOA) estimation is usually confronted with the multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate white or colored Gaussian noise, the new method first obtains a low-complexity data matrix based on high-order cumulants. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, on which a joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, so that the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm that can solve the MMV problem and perform well for both white and colored Gaussian noise. The proposed joint algorithm is about two orders of magnitude faster than l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD, and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
Matched field localization based on CS-MUSIC algorithm
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by having too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on the sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, more concise sparse mathematical model is obtained, which reduces not only the scale of the localization problem but also the noise level; the new sparse mathematical model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as demonstrated in this paper.
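The SVD-subspace step underlying MUSIC-family methods can be sketched in numpy: the left singular vectors of the snapshot matrix beyond the number of sources span the noise subspace, and the pseudospectrum peaks where a steering vector is (nearly) orthogonal to it. The sketch uses a half-wavelength uniform linear array in a plane-wave (free-field) model, not the matched-field replica vectors of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
M, K, snaps = 8, 1, 50                 # sensors, sources, snapshots
true_theta = 20.0                      # source bearing in degrees

def steer(theta_deg, M):
    # Half-wavelength ULA steering vector for a plane wave.
    phase = np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(M)
    return np.exp(1j * phase)

S = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(steer(true_theta, M), S)
X += 0.05 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

# SVD of the snapshot matrix: left singular vectors beyond K span the
# noise subspace; MUSIC peaks where the steering vector is orthogonal to it.
U, s, Vh = np.linalg.svd(X)
En = U[:, K:]
grid = np.arange(-90, 90.5, 0.5)
spec = [1.0 / np.linalg.norm(En.conj().T @ steer(th, M)) ** 2 for th in grid]
est = grid[int(np.argmax(spec))]
assert abs(est - true_theta) <= 1.0
```

Replacing the raw observation matrix with the K dominant singular components, as CS-MUSIC does, keeps exactly this signal subspace while discarding most of the noise energy.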
Directory of Open Access Journals (Sweden)
Aimei Shao
2013-02-01
Full Text Available In the ensemble-based four-dimensional variational assimilation method (SVD-En4DVar), a singular value decomposition (SVD) technique is used to select the leading eigenvectors, and the analysis variables are expressed as an expansion in the orthogonal bases of the eigenvectors. Experiments with a two-dimensional shallow-water equation model and simulated observations show that the truncation error and the rejection of observed signals due to the reduced-dimensional reconstruction of the analysis variable are the major factors that damage the analysis when the ensemble size is not large enough. However, a larger ensemble imposes a daunting computational burden. Experiments with the shallow-water equation model also show that the forecast error covariances remain relatively constant over time. For that reason, we propose an approach that increases the number of members in the forecast ensemble while reducing the update frequency of the forecast error covariance, in order to increase analysis accuracy and reduce the computational cost. A series of experiments was conducted with the shallow-water equation model to test the efficiency of this approach. The experimental results indicate that the approach is promising. Further experiments with the WRF model show that this approach is also suitable for real atmospheric data assimilation problems, but the update frequency of the forecast error covariances should not be too low.
Fan, Jicong; Tian, Zhaoyang; Zhao, Mingbo; Chow, Tommy W S
2018-04-01
The scalability of low-rank representation (LRR) to large-scale data is still a major research issue, because it is extremely time-consuming to compute the singular value decomposition (SVD) in each optimization iteration, especially for large matrices. Several methods have been proposed to speed up LRR, but they are still computationally heavy, and the overall representation results are also degraded. In this paper, a novel method called accelerated LRR (ALRR) is proposed for large-scale data. The proposed accelerated method integrates matrix factorization with nuclear-norm minimization to find a low-rank representation. In our proposed method, the large square matrix of representation coefficients is transformed into a significantly smaller square matrix, on which the SVD can be efficiently implemented. The size of the transformed matrix is not related to the number of data points, and the optimization of ALRR is linear in the number of data points. The proposed ALRR is convex, accurate, robust, and efficient for large-scale data. In this paper, ALRR is compared with the state-of-the-art in subspace clustering and semi-supervised classification on real image datasets. The obtained results verify the effectiveness and superiority of the proposed ALRR method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Gurgiolo, Chris; Vinas, Adolfo F.
2009-01-01
This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis
Till, Kevin; Jones, Ben L.; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B.
2016-01-01
Prediction of adult performance from early-age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players were collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and a validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed that 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p [...]) [...] talent identification. PMID:27224653
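The general idea, orthogonalizing standardized multivariate data with SVD and then scoring group separation with ROC analysis, can be sketched on synthetic data. The player data and model details below are illustrative stand-ins, not the study's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic anthropometric/fitness matrix: 100 players x 6 measures, with the
# "professional" group shifted along a latent direction (illustrative only)
n_pro, n_am = 40, 60
X = rng.standard_normal((n_pro + n_am, 6))
X[:n_pro] += 1.5 * np.array([1.0, -0.8, 0.6, 0.0, 0.3, -0.5])
y = np.r_[np.ones(n_pro), np.zeros(n_am)]     # 1 = future professional

# Orthogonalize: standardize columns, then SVD
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
score = Z @ Vt[0]                              # first principal direction

# ROC AUC via the rank-sum (Mann-Whitney) statistic
def auc(score, y):
    ranks = score.argsort().argsort() + 1.0    # 1-based ranks, no ties expected
    n1, n0 = y.sum(), (1 - y).sum()
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

a = max(auc(score, y), 1.0 - auc(score, y))    # singular vector sign is arbitrary
```

An AUC well above 0.5 on held-out data is what would justify the sensitivity/specificity figures reported for the validated model.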
A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification.
Baali, Hamza; Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J E
2015-01-01
In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT)- and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT- and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's [Formula: see text] statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%.
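A minimal sketch of the LP-SVD construction, assuming a least-squares LP fit and a lower-triangular Toeplitz impulse response matrix; the signal and model order are illustrative, not the paper's EEG settings:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)

# Synthetic single-channel segment: damped oscillation plus noise
n, p = 256, 6                         # segment length, LP order
t = np.arange(n)
x = np.exp(-t / 120) * np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.standard_normal(n)

# Linear-prediction coefficients by least squares: x[k] ~ sum_i a[i] x[k-i]
X = toeplitz(x[p - 1:n - 1], x[p - 1::-1])     # rows: [x[k-1], ..., x[k-p]]
a = np.linalg.lstsq(X, x[p:], rcond=None)[0]

# Impulse response of the LP synthesis filter 1 / (1 - sum_i a[i] z^-i)
h = np.zeros(n)
h[0] = 1.0
for k in range(1, n):
    m = min(k, p)
    h[k] = a[:m] @ h[k - m:k][::-1]

# Lower-triangular Toeplitz impulse response matrix and its SVD
H = toeplitz(h, np.r_[h[0], np.zeros(n - 1)])
U, s, Vt = np.linalg.svd(H)

features = U.T @ x                    # signal projected onto left singular vectors
```

Because the basis is derived from the segment's own LP model, the transform adapts to the signal, which is the property the paper exploits for discrimination.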
Relating ocean-atmospheric climate indices with Australian river streamflow
Shams, Md Shamim; Faisal Anwar, A. H. M.; Lamb, Kenneth W.; Bari, Mohammed
2018-01-01
The relationship between climate indices and Australian river streamflow (ASF) may provide valuable information for long-lead streamflow forecasting for Australian rivers. The current study examines the correlations between three climate indices (SST, 500 mb meridional wind (U500), and 500 mb geopotential height (Z500)) and 135 unimpaired ASF gauges for 1971-2011 using the singular value decomposition (SVD) method. First, the SVD method was applied to identify the SST-ASF correlated regions of influence, and then the extended SST-ASF variabilities were used to determine the correlated regions within the Z500 and U500 fields. Based on the teleconnection, the most correlated region (150°E to 105°W and 35°S to 5°N) was identified and its persistence was checked by lag analysis up to 2 years on seasonal to yearly time-scales. The results displayed positive correlations for the southern and south-eastern parts of Australia, while negative correlations prevail in the north-eastern region (at the 95% significance level). The most correlated region was found to lie along the South Pacific Convergence Zone (SPCZ) axis, which may be considered a probable climate driver for ASF. The persistence of this region was checked with a separate climate indicator (500 mb mean vertical velocity) and found to be more prominent in the dry period than in the wet period. This persistent teleconnected region may be potentially useful for long-lead forecasting of ASF.
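The SVD step used in such teleconnection studies (often called maximum covariance analysis) amounts to an SVD of the cross-covariance between two anomaly fields; the leading singular vectors are the paired correlated patterns. Synthetic fields stand in for the SST and streamflow data here:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly anomalies: nt time steps, an "SST" field with ns grid
# points and a "streamflow" field with nf gauges, sharing one coupled mode
nt, ns, nf = 480, 50, 30
mode = rng.standard_normal(nt)
p_sst = rng.standard_normal(ns)
p_flow = rng.standard_normal(nf)
S = np.outer(mode, p_sst) + 0.5 * rng.standard_normal((nt, ns))
F = np.outer(mode, p_flow) + 0.5 * rng.standard_normal((nt, nf))

# Remove time means, form the cross-covariance, and take its SVD
S = S - S.mean(0)
F = F - F.mean(0)
C = S.T @ F / (nt - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Squared covariance fraction of the leading coupled mode
scf1 = s[0] ** 2 / (s ** 2).sum()
```

The columns of U (SST patterns) and rows of Vt (streamflow patterns) map which regions co-vary, which is how the correlated region of influence is delineated.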
Minonzio, Jean-Gabriel; Foiret, Josquin; Talmant, Maryline; Laugier, Pascal
2011-12-01
Robust signal processing methods adapted to clinical measurements of guided modes are required to assess bone properties such as cortical thickness and porosity. Recently, an approach based on the singular value decomposition (SVD) of multidimensional signals recorded with an axial transmission array of emitters and receivers has been proposed for materials with negligible absorption, see Minonzio et al. [J. Acoust. Soc. Am. 127, 2913-2919 (2010)]. In the presence of absorption, the ability to extract guided modes degrades. The objective of the present study is to extend the method to the case of absorbing media by considering attenuated plane waves (complex wavenumbers). The guided mode wavenumber extraction is enhanced, and the order of magnitude of the attenuation of the guided modes is estimated. Experiments were carried out on 2 mm thick plates in the 0.2-2 MHz bandwidth. Two materials were inspected: poly(methyl methacrylate) (PMMA), which is isotropic with absorption, and artificial composite bone (Sawbones, Pacific Research Laboratory Inc., Vashon, WA), which is a transversely isotropic absorbing medium. Bulk wave velocities and bulk attenuation were evaluated from transmission measurements. These values were used to compute theoretical Lamb mode wavenumbers, which are consistent with the experimental ones obtained with the SVD-based approach. © 2011 Acoustical Society of America
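The extension to complex wavenumbers can be sketched as follows: the SVD of the emitter-receiver response matrix yields a signal subspace, and each test attenuated plane wave is projected onto it; peaks of this projection ("Norm function") locate the modes. Everything below (array geometry, mode wavenumbers, noise level) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Receiver positions (m) and two guided-mode wavenumbers at one frequency;
# one mode is attenuated (complex wavenumber), as in absorbing plates
x = np.arange(16) * 0.8e-3
k_modes = np.array([2200.0 + 40.0j, 3100.0 + 0.0j])   # rad/m, illustrative

# Multi-emitter response: each emitter excites the modes with random weights
R = np.exp(1j * np.outer(x, k_modes)) @ rng.standard_normal((2, 8)) \
    + 0.01 * (rng.standard_normal((16, 8)) + 1j * rng.standard_normal((16, 8)))

# Signal subspace: left singular vectors of the largest singular values
U, s, Vh = np.linalg.svd(R)
Usig = U[:, :2]

def norm_fn(k):
    """Projection of a unit-norm attenuated plane wave onto the signal subspace."""
    e = np.exp(1j * k * x)
    e = e / np.linalg.norm(e)
    return np.linalg.norm(Usig.conj().T @ e)

# Scanning complex wavenumbers: the Norm function peaks near a true mode
k_re = np.linspace(1500, 3500, 401)
vals = [norm_fn(kr + 40.0j) for kr in k_re]
k_peak = k_re[int(np.argmax(vals))]
```

Scanning over the imaginary part as well gives the order of magnitude of each mode's attenuation, which is the quantity the study estimates.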
The coil array method for creating a dynamic imaging volume.
Smith, Elliot; Freschi, Fabio; Repetto, Maurizio; Crozier, Stuart
2017-08-01
Gradient strength and speed are limited by peripheral nerve stimulation (PNS) thresholds. The coil array method allows the gradient field to be moved across the imaging area. This can help reduce PNS and provide faster imaging for image-guided therapy systems such as the magnetic resonance imaging-guided linear accelerator (MRI-linac). The coil array is designed such that many coils produce magnetic fields, which combine to give the desired gradient profile. The design of the coil array uses two methods: either the singular value decomposition (SVD) of a set of field profiles or the electromagnetic modes of the coil surface. Two whole-body coils and one experimental coil were designed to investigate the method. The field produced by the experimental coil was compared to simulated results. The experimental coil region of uniformity (ROU) was moved along the z axis, as shown in simulation. The highest observed field deviation was 16.9% at the edge of the ROU with a shift of 35 mm. The whole-body coils showed a median field deviation across all offsets below 5% with an eight-coil basis when using the SVD design method. Experimental results show the feasibility of a moving imaging region within an MRI with a low number of coils in the array. Magn Reson Med 78:784-793, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
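The SVD design route can be sketched in one dimension: stack target field profiles for different ROU offsets as columns, and keep a few left singular vectors as the coil basis. The profile model and numbers below are assumptions for illustration only, not the paper's coil geometry:

```python
import numpy as np

# Target fields: a linear z-gradient whose region of uniformity (ROU) is
# shifted to different offsets, sampled on a 1-D grid (illustrative model)
z = np.linspace(-0.2, 0.2, 201)
offsets = np.linspace(-0.035, 0.035, 15)           # ROU shifts in metres
G = 10e-3                                           # gradient strength, T/m
targets = np.stack([G * (z - dz) * np.exp(-((z - dz) / 0.1) ** 4)
                    for dz in offsets], axis=1)

# SVD of the profile set: a few left singular vectors span all shifted targets
U, s, Vt = np.linalg.svd(targets, full_matrices=False)
n_basis = 8
B = U[:, :n_basis]

# Reproduce an intermediate shift by least squares on the orthonormal basis
dz = 0.02
goal = G * (z - dz) * np.exp(-((z - dz) / 0.1) ** 4)
w = B.T @ goal                       # per-"coil" drive weights
approx = B @ w
rel_err = np.linalg.norm(approx - goal) / np.linalg.norm(goal)
```

A small basis reproducing arbitrary offsets with low residual mirrors the paper's finding that eight coils suffice for sub-5% median deviation.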
A simulation study of the global orbit feedback system for Pohang light source
International Nuclear Information System (INIS)
Kim, Kukhee; Shim, Kyuyeol; Cho, Moohyun; Namkung, Won; Ko, In Soo; Choi, Jinhyuk
2000-01-01
This paper describes the simulation of a global orbit feedback system using the singular value decomposition (SVD) method, the error minimization method, and the neural network method. Because the SVD method occasionally produces unacceptable correction results, we choose the error minimization method for the global orbit feedback. This method provides minimum orbit errors while avoiding unacceptable corrections, and keeps the orbit within the dynamic aperture of the storage ring. We simulate the Pohang Light Source (PLS) storage ring using the Methodical Accelerator Design (MAD) code, which generates the orbit distortions for the error minimization method and the learning data set for the neural network method. In order to compare the effectiveness of the neural network method with the others, a neural network is trained by the learning algorithm using the learning data set. The global response matrix with minimum error and the trained neural network are used in the global orbit feedback system. The simulation shows that the selection of beam position monitors (BPMs) strongly affects the reduction of rms orbit distortions, and that random selection gives better results than the other cases considered. (author)
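The baseline SVD correction the paper compares against can be sketched with a truncated pseudoinverse of the orbit response matrix; dropping small singular values limits corrector strengths, at the cost of leaving some orbit modes uncorrected. The response matrix here is random for illustration; a real one would come from the lattice model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative orbit response matrix: orbit shift at 40 BPMs per unit kick
# of each of 24 correctors (stand-in for a lattice-derived matrix)
n_bpm, n_cor = 40, 24
Rm = rng.standard_normal((n_bpm, n_cor))

# Measured distorted orbit produced by unknown kicks plus BPM noise
theta_true = 0.1 * rng.standard_normal(n_cor)
orbit = Rm @ theta_true + 1e-3 * rng.standard_normal(n_bpm)

# Truncated-SVD correction: drop small singular values to cap kick strengths
U, s, Vt = np.linalg.svd(Rm, full_matrices=False)
keep = s > 0.05 * s[0]
theta = -(Vt[keep].T @ ((U[:, keep].T @ orbit) / s[keep]))

residual = orbit + Rm @ theta        # corrected orbit
improvement = np.linalg.norm(residual) / np.linalg.norm(orbit)
```

When the response matrix is poorly conditioned, the truncation threshold is exactly where the "unacceptable corrections" trade-off mentioned in the abstract appears.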
Revisit of combined parallel-beam/cone-beam or fan-beam/cone-beam imaging.
Zeng, Gengsheng L
2013-10-01
The aim of this paper is to revisit the parallel-beam/cone-beam or fan-beam/cone-beam imaging configuration, and to investigate whether this configuration has any advantages. Twenty years ago, it was suggested to simultaneously use a parallel-beam (or a fan-beam) collimator and a cone-beam collimator to acquire single photon emission computed tomography data. The motivation was that the parallel-beam (or the fan-beam) collimator can provide sufficient sampling, while the cone-beam collimator is able to provide higher photon counts. Even with higher total counts, this hybrid system does not give significant improvement (if any) in terms of image noise and artifact reduction. If a conventional iterative maximum-likelihood expectation-maximization algorithm is used to reconstruct the image, the resultant reconstruction may be worse than that of the parallel-beam-only (or fan-beam-only) system. This paper uses singular value decomposition (SVD) analysis to explain this phenomenon. The SVD results indicate that the parallel-beam-only and fan-beam-only systems outperform the combined systems. The optimal imaging system is not necessarily the one that generates the projections with the highest signal-to-noise ratio and the best resolution.
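The kind of comparison this SVD analysis supports can be illustrated with toy system matrices: a "combined" system may have a larger leading singular value (more counts) yet a faster-decaying singular spectrum (worse conditioning for inversion). The matrices below are synthetic stand-ins, not SPECT system models:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 64

# Toy "parallel-beam" system matrix: near-orthogonal, well conditioned
A_par, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Toy "combined" system: higher overall sensitivity (larger scale, i.e. more
# counts) but increasingly redundant measurements (columns scaled down)
A_comb = 3.0 * A_par * np.linspace(1.0, 0.01, n)

s_par = np.linalg.svd(A_par, compute_uv=False)
s_comb = np.linalg.svd(A_comb, compute_uv=False)

cond_par = s_par[0] / s_par[-1]
cond_comb = s_comb[0] / s_comb[-1]
# s_comb[0] > s_par[0] (more signal), yet cond_comb >> cond_par: inversion
# amplifies noise in the weakly measured modes despite the higher counts
```

This is the spirit of the paper's conclusion: raw signal-to-noise ratio of the projections does not determine reconstruction quality; the singular spectrum does.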
Radiation decomposition of alcohols and chlorophenols in micellar systems
International Nuclear Information System (INIS)
Moreno A, J.
1998-01-01
The effect of surfactants on the radiation decomposition yield of alcohols and chlorophenols has been studied with gamma doses of 2, 3, and 5 kGy. These compounds were used as typical pollutants in waste water, and the effects of water solubility, chemical structure, and the nature of the surfactant, anionic or cationic, were studied. The results show that an anionic surfactant such as sodium dodecyl sulfate (SDS) improves the radiation decomposition yield of ortho-chlorophenol, while a cationic surfactant such as cetyl trimethylammonium chloride (CTAC) improves the radiation decomposition yield of butyl alcohol. A similar behavior is expected for alcohols with water solubility close to that of the studied ones. Surfactant concentrations below the critical micellar concentration (CMC) inhibited radiation decomposition for both types of alcohols, whereas the radiation decomposition yield increased when surfactant concentrations exceeded the CMC. Decomposition was more marked for aromatic alcohols than for linear alcohols. In a mixture of alcohols and chlorophenols in aqueous solution, the radiation decomposition yield decreased with increasing surfactant concentration. Nevertheless, there were competitive reactions between the alcohols, surfactant dimers, hydroxyl radicals, and other reactive species formed during water radiolysis, producing a positive catalytic effect on the decomposition of the alcohols. Chemical structure and the number of carbon atoms were not important factors in the radiation decomposition. When a compound such as ortho-chlorophenol contained an additional chlorine atom, its decomposition remained almost constant. In conclusion, the micellar effect depends on both the nature of the surfactant (anionic or cationic) and the chemical structure of the alcohols. The results of this study are useful for wastewater treatment plants based on the oxidant effect of the hydroxyl radical, as in advanced oxidation processes, or in combined treatment such as
Thermodynamic anomaly in magnesium hydroxide decomposition
Energy Technology Data Exchange (ETDEWEB)
Reis, T.A.
1983-08-01
The origin of the discrepancy in the equilibrium water vapor pressure measurements for the reaction Mg(OH)₂(s) = MgO(s) + H₂O(g) when determined by Knudsen effusion and static manometry at the same temperature was investigated. For this reaction undergoing continuous thermal decomposition in Knudsen cells, Kay and Gregory observed that, by extrapolating the steady-state apparent equilibrium vapor pressure measurements to zero orifice, the vapor pressure was approximately 10⁻⁴ of that previously established by Giauque and Archibald as the true thermodynamic equilibrium vapor pressure using statistical mechanical calculations for the entropy of water vapor. This large difference in vapor pressures suggests the possibility of the formation in a Knudsen cell of a higher-energy MgO that is thermodynamically metastable by about 48 kJ/mol. It has been shown here that the experimental results are qualitatively independent of the type of Mg(OH)₂ used as a starting material, which confirms the inferences of Kay and Gregory. Thus, most forms of Mg(OH)₂ are considered to be the stable thermodynamic equilibrium form. X-ray diffraction results show that during the course of the reaction only the equilibrium NaCl-type MgO is formed, and no different phases result from samples prepared in Knudsen cells. Surface area data indicate that the MgO molar surface area remains constant throughout the course of the reaction at low decomposition temperatures, and no significant annealing occurs below 400 °C. Scanning electron microscope photographs show no change in particle size or particle surface morphology. Solution calorimetric measurements indicate no inherent higher energy content in the MgO from the solid produced in Knudsen cells. The Knudsen cell vapor pressure discrepancy may reflect the formation of a transient metastable MgO or Mg(OH)₂-MgO solid solution during continuous thermal decomposition in Knudsen cells.
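The quoted numbers are mutually consistent: a vapor pressure ratio of about 10⁻⁴ maps to the ~48 kJ/mol metastability through ΔG = −RT ln(p_app/p_eq). The temperature below is an assumed decomposition temperature, chosen only for illustration:

```python
import math

R = 8.314        # gas constant, J/(mol K)
T = 630.0        # K: an assumed decomposition temperature, for illustration only
ratio = 1e-4     # apparent / true equilibrium vapor pressure (from the abstract)

# Free-energy offset implied by the suppressed steady-state vapor pressure
dG = -R * T * math.log(ratio)    # J/mol; comes out near the quoted ~48 kJ/mol
```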
Rate of Decomposition of Leaflitter in an Age Series Gmelina ...
African Journals Online (AJOL)
The study was carried out to investigate the rate of decomposition of Gmelina arborea Roxb. leaf litter in an age series in a Gmelina plantation in Shasha Forest Reserve, a Nigerian lowland forest. The rate of decomposition of Gmelina leaf litter was determined using the litter bag technique and mass balance analysis to quantify the ...
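Litterbag mass-balance data of this kind are commonly fit with the single-exponential (Olson) decay model; a sketch with illustrative numbers, not the study's data:

```python
import numpy as np

# Mass remaining in litterbags (g) at retrieval times (months) -- illustrative
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 9.0, 12.0])
m = 20.0 * np.exp(-0.15 * t)      # generated from a known k for the demo

# Olson model m(t) = m0 * exp(-k t): fit k by linear regression on log mass
slope, intercept = np.polyfit(t, np.log(m), 1)
k_fit = -slope                    # decay constant, per month
half_life = np.log(2.0) / k_fit   # months for half the mass to decompose
```

Comparing fitted k values across plantation ages is the natural way to express an age-series decomposition comparison.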
Kinetics of the thermal decomposition of tetramethylsilane behind ...
Indian Academy of Sciences (India)
The decomposition of TMS seems to be initiated via Si–C bond scission, forming methyl radicals (CH3) and trimethylsilyl radicals ((CH3)3Si). The total rate coefficients obtained for the decomposition of TMS were fit to the Arrhenius equation in two different temperature regions, 1058–1130 K and 1130–1194 K. The temperature ...
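Fitting rate coefficients to the Arrhenius equation separately in two temperature windows reduces to a linear fit of ln k against 1/T in each window. The A and Ea values below are synthetic placeholders, not the reported TMS parameters:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Synthetic rate coefficients generated from two Arrhenius branches,
# mimicking the two temperature regions reported for TMS (values illustrative)
def k_arr(T, A, Ea):
    return A * np.exp(-Ea / (R * T))

T_low = np.linspace(1058.0, 1130.0, 8)
T_high = np.linspace(1130.0, 1194.0, 8)
k_low = k_arr(T_low, 1.0e16, 340e3)
k_high = k_arr(T_high, 5.0e14, 310e3)

def fit_arrhenius(T, k):
    """Linear fit of ln k vs 1/T: slope = -Ea/R, intercept = ln A."""
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    return np.exp(intercept), -slope * R      # A, Ea

A1, Ea1 = fit_arrhenius(T_low, k_low)
A2, Ea2 = fit_arrhenius(T_high, k_high)
```

A change in the fitted slope between the two windows signals the change in effective activation energy across the temperature regions.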
Multi hollow needle to plate plasmachemical reactor for pollutant decomposition
International Nuclear Information System (INIS)
Pekarek, S.; Kriha, V.; Viden, I.; Pospisil, M.
2001-01-01
A modification of the classical multipin-to-plate plasmachemical reactor for pollutant decomposition is proposed in this paper. In this modified reactor, a mixture of air and pollutant flows through the needles, in contrast to the classical reactor, where the mixture flows around the pins or through the channel as well as through the hollow needles. We compare the toluene decomposition efficiency of (a) a reactor with the main stream of the mixture through the channel around the needles and a small flow rate through the needles, and (b) the modified reactor. It was found that for similar flow rates and similar energy deposition, the decomposition efficiency for toluene was more than six times higher in the modified reactor. This new modified reactor was also experimentally tested for the decomposition of volatile hydrocarbons from the gasoline distillation range. An average VOC decomposition efficiency of about 25% was reached. However, significant differences in the decomposition of various hydrocarbon types were observed. The best results were obtained for the decomposition of olefins (reaching 90%) and methyl tert-butyl ether (about 50%). Moreover, the number of carbon atoms in the molecule affects the quality of VOC decomposition. (author)
Thermal decomposition of 2-methylbenzoates of rare earth elements
International Nuclear Information System (INIS)
Brzyska, W.; Szubartowski, L.
1980-01-01
The conditions of thermal decomposition of La, Ce(III), Pr, Nd, Sm, and Y 2-methylbenzoates were examined. On the basis of the results obtained, it was established that the hydrated 2-methylbenzoates undergo dehydration to anhydrous salts, which then decompose into oxides. The activation energies of the dehydration and decomposition reactions of the lanthanide, La, and Y 2-methylbenzoates were determined. (author)
The kinetics and mechanism of induced thermal decomposition of ...
Indian Academy of Sciences (India)
Unknown
electrochemistry, photochemistry, and polymer chemistry. The spontaneous decomposition of potassium peroxomonosulphate (PMS) in aqueous solution suggests that free radicals are not formed. The kinetics and mechanism of the aqueous decomposition of Caro's acid was investigated by Ball and ...