WorldWideScience

Sample records for spectral Lanczos decomposition

  1. Application of spectral Lanczos decomposition method to large scale problems arising in geophysics

    Energy Technology Data Exchange (ETDEWEB)

    Tamarchenko, T. [Western Atlas Logging Services, Houston, TX (United States)]

    1996-12-31

    This paper presents an application of the Spectral Lanczos Decomposition Method (SLDM) to numerical modeling of electromagnetic diffusion and elastic wave propagation in inhomogeneous media. SLDM approximates the action of a matrix function as a linear combination of basis vectors in a Krylov subspace. I applied the method to model electromagnetic fields in three dimensions and elastic waves in two dimensions. The finite-difference approximation of the spatial part of the differential operator reduces the initial boundary-value problem to a system of ordinary differential equations with respect to time. The solution to this system requires calculating exponential and sine/cosine functions of the stiffness matrices. Large scale numerical examples are in good agreement with the theoretical error bounds and stability estimates given by Druskin and Knizhnerman (1987).
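
    The core operation SLDM relies on, approximating f(A)v from a Krylov subspace, can be illustrated with a short Lanczos recurrence. The sketch below is a minimal NumPy/SciPy illustration of that idea under the assumption of a real symmetric matrix and a toy 1-D Laplacian; it is not the SLDM implementation described in this record.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_fA_v(A, v, f, k=30):
    """Approximate f(A) @ v for a real symmetric A from a k-step Lanczos
    (Krylov) subspace, the core idea behind SLDM-type methods (k >= 2)."""
    n = len(v)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = v / np.linalg.norm(v)
    q_prev, b_prev = np.zeros(n), 0.0
    for j in range(k):
        w = A @ Q[:, j] - b_prev * q_prev
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j == k - 1:
            break
        b_prev = np.linalg.norm(w)
        if b_prev < 1e-12:                      # invariant subspace reached
            k, alpha, beta, Q = j + 1, alpha[:j + 1], beta[:j], Q[:, :j + 1]
            break
        beta[j] = b_prev
        q_prev = Q[:, j]
        Q[:, j + 1] = w / b_prev
    # Spectral decomposition of the small tridiagonal T = S diag(theta) S^T
    theta, S = eigh_tridiagonal(alpha, beta)
    e1 = np.zeros_like(theta)
    e1[0] = 1.0
    return np.linalg.norm(v) * (Q @ (S @ (f(theta) * (S.T @ e1))))

# Example: one diffusion-like step, u = exp(-t*A) u0, on a 1-D Laplacian stencil
n = 400
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u0 = np.random.rand(n)
u = lanczos_fA_v(A, u0, lambda lam: np.exp(-0.1 * lam), k=40)
```

    In the diffusion example the exponential is evaluated on the Ritz values of the small tridiagonal matrix rather than on A itself, which is what makes this kind of method attractive for large problems.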

  2. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  3. Spectral Tensor-Train Decomposition

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.

    2016-01-01

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT … (i.e., the “cores”) comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT …

  4. TP89 - SIRZ Decomposition Spectral Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-08

    The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.

  5. Spectral decomposition of nonlinear systems with memory

    Science.gov (United States)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
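
    For reference, the one-parameter Mittag-Leffler function invoked in this abstract is defined by the series below; it reduces to the ordinary exponential for α = 1, which is why it is the natural generalization for anomalous, scale-free relaxation.

```latex
E_{\alpha}(z) \;=\; \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},
\qquad \alpha > 0, \qquad E_{1}(z) = e^{z}.
```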

  6. Spectral decomposition of nonlinear systems with memory.

    Science.gov (United States)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.

  7. Algorithms for Spectral Decomposition with Applications

    Data.gov (United States)

    National Aeronautics and Space Administration — The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main...

  8. Spectral Decomposition of Missing Transverse Energy at Hadron Colliders

    Science.gov (United States)

    Bae, Kyu Jung; Jung, Tae Hyun; Park, Myeonghun

    2017-12-01

    We propose a spectral decomposition to systematically extract information about dark matter at hadron colliders. The differential cross section of events with missing transverse energy (ET) can be expressed as a linear combination of basis functions. In the case of s-channel mediator models for dark matter particle production, the basis functions are identified with the differential cross sections of subprocesses of virtual mediator and visible particle production, while the coefficients of the basis functions correspond to the dark matter invariant mass distribution, in the manner of the Källén-Lehmann spectral decomposition. For a given ET data set and mediator model, we show that one can differentiate a certain dark matter-mediator interaction from another through spectral decomposition.
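
    Schematically, and in notation of our own rather than the paper's, the decomposition described above writes the missing-energy spectrum as an integral over the invariant mass of the invisible system:

```latex
\frac{d\sigma}{dE_T} \;=\; \int dm_{\mathrm{inv}}^{2}\;
    \rho\!\left(m_{\mathrm{inv}}^{2}\right)\,
    \left.\frac{d\hat{\sigma}}{dE_T}\right|_{m_{\mathrm{inv}}^{2}}
```

    where the fixed-invariant-mass subprocess spectra play the role of basis functions and ρ carries the dark-matter information, in analogy with a Källén-Lehmann spectral density.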

  9. Squeezed state description of spectral decompositions of a biophoton signal

    Energy Technology Data Exchange (ETDEWEB)

    Bajpai, R.P. [Sophisticated Analytical Instruments Facility (Biophysics), North Eastern Hill University, Shillong 793022 (India) and International Institute of Biophysics, IIB e.V. ehem. Raketenstation, Kapellener Strasse, D-41472 Neuss (Germany)]. E-mail: rpbajpai@nehu.ac.in

    2005-04-11

    The shape of the decaying part and the photocount distribution of the non-decaying part are determined in 21 spectral decompositions of a biophoton signal obtained with interference and long-pass filters. A new framework that considers the biophoton signal as an evolving quantum state of a frequency-stable damped harmonic oscillator is used for the description of the shape and the photocount distribution. The shape is specified by four decay parameters and the photocount distribution by four squeezed-state parameters. These parameters are determined in the spectral decompositions. Three parameters are situation-specific and five parameters appear to be system-specific.

  10. Spectral decomposition of a turbulence-excited vibroacoustic system

    Science.gov (United States)

    Kook, H.-S.; Park, S.-H.; Ih, K.-D.

    2013-03-01

    The applicability of the spectral decomposition method to noise spectra generated inside a cavity enclosed by turbulence-excited elastic structures is investigated. Based on previous theoretical and experimental findings, we show that the noise spectra may be spectrally decomposable if the convection speed of the boundary layer is relatively slow, and thus all resonant structural modes are hydrodynamically fast. As an application, spectral decomposition is attempted for data obtained from two ground vehicles tested in an aeroacoustically treated wind tunnel. We show that the synthesized noise spectra generated from the decomposed source and filter functions are in fairly good agreement with the measured spectra. This study is also concerned with the spectral decomposition algorithm. Similar to an algorithm recently proposed by other researchers, the proposed algorithm formulates the spectral decomposition as simultaneous equations. Input data generated from numerical simulations are used to compare the various algorithms in terms of computational accuracy and cost. Unlike conventional methods that calculate the decomposed function incrementally, we show that algorithms based on simultaneous equations yield far better results in terms of accuracy at the expense of increased computational memory. However, the proposed algorithm requires significantly fewer equations and unknowns compared to the recently proposed algorithm that is based on a formulation of simultaneous equations.

  11. Empirical Mode Decomposition and Hilbert Spectral Analysis

    Science.gov (United States)

    Huang, Norden E.

    1998-01-01

    The difficulty facing data analysis is the lack of methods to handle nonlinear and nonstationary time series. Traditional Fourier-based analyses simply could not be applied here. A new method for analyzing nonlinear and nonstationary data has been developed. The key part is the Empirical Mode Decomposition (EMD) method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF) that serve as the basis of the representation of the data. This decomposition method is adaptive, and, therefore, highly efficient. The IMFs admit well-behaved Hilbert transforms, and yield instantaneous energy and frequency as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. Among the main conceptual innovations is the introduction of instantaneous frequencies for complicated data sets, which eliminates the need for spurious harmonics to represent nonlinear and nonstationary signals. Examples from the numerical results of the classical nonlinear equation systems and data representing natural phenomena are given to demonstrate the power of this new method. The classical nonlinear system data are especially interesting, for they serve to illustrate the roles played by the nonlinear and nonstationary effects in the energy-frequency-time distribution.
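
    The Hilbert step of the method, turning each intrinsic mode function into instantaneous amplitude and frequency via the analytic signal, is easy to sketch with SciPy. The EMD sifting itself is not reproduced here, and the chirp below is only a stand-in for a real IMF.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum_of_imf(imf, dt):
    """Instantaneous amplitude and frequency of a single intrinsic mode
    function (IMF) via the analytic signal, i.e. the Hilbert step of HHT."""
    analytic = hilbert(imf)                             # imf + i * H[imf]
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.gradient(phase, dt) / (2 * np.pi)    # in Hz
    return amplitude, inst_freq

# Toy stand-in for an IMF: a chirp whose frequency drifts from 5 Hz to 15 Hz
dt = 1e-3
t = np.arange(0, 2, dt)
imf = np.cos(2 * np.pi * (5 * t + 2.5 * t**2))
amp, freq = hilbert_spectrum_of_imf(imf, dt)            # freq ~ 5 + 5 t
```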

  12. Experimental comparison of empirical material decomposition methods for spectral CT.

    Science.gov (United States)

    Zimmerman, Kevin C; Schmidt, Taly Gilat

    2015-04-21

    Material composition can be estimated from spectral information acquired using photon counting x-ray detectors with pulse height analysis. Non-ideal effects in photon counting x-ray detectors such as charge-sharing, k-escape, and pulse-pileup distort the detected spectrum, which can cause material decomposition errors. This work compared the performance of two empirical decomposition methods: a neural network estimator and a linearized maximum likelihood estimator with correction (A-table method). The two investigated methods differ in how they model the nonlinear relationship between the spectral measurements and material decomposition estimates. The bias and standard deviation of material decomposition estimates were compared for the two methods, using both simulations and experiments with a photon-counting x-ray detector. Both the neural network and A-table methods demonstrated a similar performance for the simulated data. The neural network had lower standard deviation for nearly all thicknesses of the test materials in the collimated (low scatter) and uncollimated (higher scatter) experimental data. In the experimental study of Teflon thicknesses, non-ideal detector effects demonstrated a potential bias of 11-28%, which was reduced to 0.1-11% using the proposed empirical methods. Overall, the results demonstrated preliminary experimental feasibility of empirical material decomposition for spectral CT using photon-counting detectors.
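
    A generic, hedged illustration of the empirical-calibration idea (not the paper's neural network or A-table estimators): fit a low-order polynomial map from two energy-bin measurements to basis-material thicknesses using known calibration samples, then apply it to new data. The attenuation values and noise level below are invented for the toy example.

```python
import numpy as np

def design(L1, L2, order=2):
    """Polynomial features in the two log-transmission measurements."""
    cols = [np.ones_like(L1)]
    for i in range(1, order + 1):
        for j in range(i + 1):
            cols.append(L1 ** (i - j) * L2 ** j)
    return np.stack(cols, axis=-1)

rng = np.random.default_rng(0)
# Calibration grid of known water / bone-like thicknesses (cm)
t_w, t_b = np.meshgrid(np.linspace(0, 20, 9), np.linspace(0, 3, 7))
t = np.stack([t_w.ravel(), t_b.ravel()], axis=1)

# Toy effective attenuation coefficients (1/cm) for two energy bins
mu = np.array([[0.22, 0.55],     # low-energy bin:  [water, bone]
               [0.18, 0.35]])    # high-energy bin: [water, bone]
L = t @ mu.T + 0.01 * rng.standard_normal((t.shape[0], 2))   # noisy -log(I/I0)

# Least-squares fit of the calibration map: thickness ~ design(L) @ C
C, *_ = np.linalg.lstsq(design(L[:, 0], L[:, 1]), t, rcond=None)

# Decompose a new two-bin measurement into water/bone path lengths
L_new = mu @ np.array([10.0, 1.5])                           # noiseless test ray
t_est = design(L_new[0:1], L_new[1:2]) @ C
```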

  13. A TV-constrained decomposition method for spectral CT

    Science.gov (United States)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of the energy channels, as well as the correlation of the basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. Starting from the general optimization problem, total variation minimization is imposed on the coefficient images in the overall objective function with adjustable weights. We solve this constrained optimization problem within the ADMM framework. Validation on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging is performed. Both numerical and physical experiments give visibly better reconstructions than a general direct-inverse method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as into cases with more than two energy channels.

  14. Reservoir hydrocarbon delineation using spectral decomposition: The application of S-Transform and empirical mode decomposition (EMD) method

    Science.gov (United States)

    Haris, A.; Morena, V.; Riyanto, A.; Zulivandama, S. R.

    2017-07-01

    Non-stationary signals from seismic surveys are difficult to interpret directly in the time domain. Spectral decomposition is one of the spectral analysis methods that can analyze non-stationary signals in the frequency domain. The Fast Fourier Transform was commonly used for spectral decomposition analysis; however, this method has limitations in window scaling and produces poor quality for low-frequency shadows. The S-Transform and the Empirical Mode Decomposition (EMD) are alternative spectral decomposition methods that can be used to enhance low-frequency shadows. In this research, a comparison of the S-Transform and EMD methods, showing the differences in imaging of the low-frequency shadow zone, is applied to the Eldo Field, Jambi Province. The spectral decomposition result based on the EMD method produced better imaging of the low-frequency shadow zone at tuning thickness compared to the S-Transform method.

  15. Exact complexity: The spectral decomposition of intrinsic computation

    Energy Technology Data Exchange (ETDEWEB)

    Crutchfield, James P., E-mail: chaos@ucdavis.edu [Complexity Sciences Center and Department of Physics, University of California at Davis, One Shields Avenue, Davis, CA 95616 (United States); Ellison, Christopher J., E-mail: cellison@wisc.edu [Center for Complexity and Collective Computation, University of Wisconsin-Madison, Madison, WI 53706 (United States); Riechers, Paul M., E-mail: pmriechers@ucdavis.edu [Complexity Sciences Center and Department of Physics, University of California at Davis, One Shields Avenue, Davis, CA 95616 (United States)

    2016-03-06

    We give exact formulae for a wide family of complexity measures that capture the organization of hidden nonlinear processes. The spectral decomposition of operator-valued functions leads to closed-form expressions involving the full eigenvalue spectrum of the mixed-state presentation of a process's ϵ-machine causal-state dynamic. Measures include correlation functions, power spectra, past-future mutual information, transient and synchronization informations, and many others. As a result, a direct and complete analysis of intrinsic computation is now available for the temporal organization of finitary hidden Markov models and nonlinear dynamical systems with generating partitions and for the spatial organization in one-dimensional systems, including spin systems, cellular automata, and complex materials via chaotic crystallography. - Highlights: • We provide exact, closed-form expressions for a hidden stationary process' intrinsic computation. • These include information measures such as the excess entropy, transient information, and synchronization information and the entropy-rate finite-length approximations. • The method uses an epsilon-machine's mixed-state presentation. • The spectral decomposition of the mixed-state presentation relies on the recent development of meromorphic functional calculus for nondiagonalizable operators.

  16. Exact complexity: The spectral decomposition of intrinsic computation

    Science.gov (United States)

    Crutchfield, James P.; Ellison, Christopher J.; Riechers, Paul M.

    2016-03-01

    We give exact formulae for a wide family of complexity measures that capture the organization of hidden nonlinear processes. The spectral decomposition of operator-valued functions leads to closed-form expressions involving the full eigenvalue spectrum of the mixed-state presentation of a process's ɛ-machine causal-state dynamic. Measures include correlation functions, power spectra, past-future mutual information, transient and synchronization informations, and many others. As a result, a direct and complete analysis of intrinsic computation is now available for the temporal organization of finitary hidden Markov models and nonlinear dynamical systems with generating partitions and for the spatial organization in one-dimensional systems, including spin systems, cellular automata, and complex materials via chaotic crystallography.

  17. Cucheb: A GPU implementation of the filtered Lanczos procedure

    Science.gov (United States)

    Aurentz, Jared L.; Kalantzis, Vassilis; Saad, Yousef

    2017-11-01

    This paper describes the software package Cucheb, a GPU implementation of the filtered Lanczos procedure for the solution of large sparse symmetric eigenvalue problems. The filtered Lanczos procedure uses a carefully chosen polynomial spectral transformation to accelerate convergence of the Lanczos method when computing eigenvalues within a desired interval. This method has proven particularly effective for eigenvalue problems that arise in electronic structure calculations and density functional theory. We compare our implementation against an equivalent CPU implementation and show that using the GPU can reduce the computation time by more than a factor of 10. Program Summary Program title: Cucheb Program Files doi:http://dx.doi.org/10.17632/rjr9tzchmh.1 Licensing provisions: MIT Programming language: CUDA C/C++ Nature of problem: Electronic structure calculations require the computation of all eigenvalue-eigenvector pairs of a symmetric matrix that lie inside a user-defined real interval. Solution method: To compute all the eigenvalues within a given interval a polynomial spectral transformation is constructed that maps the desired eigenvalues of the original matrix to the exterior of the spectrum of the transformed matrix. The Lanczos method is then used to compute the desired eigenvectors of the transformed matrix, which are then used to recover the desired eigenvalues of the original matrix. The bulk of the operations are executed in parallel using a graphics processing unit (GPU). Runtime: Variable, depending on the number of eigenvalues sought and the size and sparsity of the matrix. Additional comments: Cucheb is compatible with CUDA Toolkit v7.0 or greater.
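
    The filtering idea can be sketched with SciPy: expand an indicator of the target interval in Chebyshev polynomials, apply the filter to vectors with the three-term recurrence, and hand the filtered operator to a Lanczos-based eigensolver (ARPACK's eigsh here). This is a simplified stand-in for Cucheb, with a toy matrix and without the GPU execution, damping, or locking of converged pairs.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigsh

def cheb_indicator_coeffs(a, b, deg):
    """Chebyshev coefficients of the indicator of [a, b] inside [-1, 1]."""
    ta, tb = np.arccos(b), np.arccos(a)          # arccos is decreasing
    c = np.zeros(deg + 1)
    c[0] = (tb - ta) / np.pi
    k = np.arange(1, deg + 1)
    c[1:] = 2.0 * (np.sin(k * tb) - np.sin(k * ta)) / (k * np.pi)
    return c

def filtered_matvec(A_scaled, coeffs, v):
    """y = p(A_scaled) v via the Chebyshev three-term recurrence."""
    w_prev, w = v, A_scaled @ v
    y = coeffs[0] * v + coeffs[1] * w
    for ck in coeffs[2:]:
        w_prev, w = w, 2.0 * (A_scaled @ w) - w_prev
        y = y + ck * w
    return y

# Toy problem: 1-D Laplacian stencil; seek eigenvalues inside [0.5, 1.0]
n = 1000
A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
             [0, 1, -1], format='csr')
lam_min, lam_max = 0.0, 4.0                      # crude spectral bounds
a, b = 0.5, 1.0                                  # target interval

# Affine map sending [lam_min, lam_max] (and the target) onto [-1, 1]
scale = 2.0 / (lam_max - lam_min)
shift = -(lam_max + lam_min) / (lam_max - lam_min)
A_s = scale * A + shift * sp.identity(n, format='csr')
coeffs = cheb_indicator_coeffs(scale * a + shift, scale * b + shift, deg=80)

P = LinearOperator((n, n), matvec=lambda v: filtered_matvec(A_s, coeffs, v),
                   dtype=np.float64)

# Lanczos (ARPACK) on the filtered operator: wanted eigenpairs are mapped to
# the top of p(A_s)'s spectrum; Rayleigh quotients recover them in A.
_, V = eigsh(P, k=20, which='LA')
rayleigh = np.array([v @ (A @ v) for v in V.T])
found = np.sort(rayleigh[(rayleigh > a) & (rayleigh < b)])
```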

  18. Fourier coefficients of Eisenstein series formed with modular symbols and their spectral decomposition

    NARCIS (Netherlands)

    Bruggeman, R.W.; Diamantis, N.

    2016-01-01

    The Fourier coefficient of a second order Eisenstein series is described as a shifted convolution sum. This description is used to obtain the spectral decomposition of and estimates for the shifted convolution sum.

  19. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    2016-03-01

    Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity pattern and MC-GDL can provide discriminative basis for attack classification.

  20. The 2D Spectral Intrinsic Decomposition Method Applied to Image Analysis

    Directory of Open Access Journals (Sweden)

    Samba Sidibe

    2017-01-01

    We propose a new method for auto-adaptive image decomposition and recomposition based on the two-dimensional version of the Spectral Intrinsic Decomposition (SID). We introduce a faster diffusivity function for the computation of the mean-envelope operator, which provides the components of the SID algorithm for any signal. The 2D version of the SID algorithm is implemented and applied to several well-known test images. We extracted relevant components and obtained promising results in image analysis applications.

  1. Spectral decomposition of asteroid Itokawa based on principal component analysis

    Science.gov (United States)

    Koga, Sumire C.; Sugita, Seiji; Kamata, Shunichi; Ishiguro, Masateru; Hiroi, Takahiro; Tatsumi, Eri; Sasaki, Sho

    2018-01-01

    The heliocentric stratification of asteroid spectral types may hold important information on the early evolution of the Solar System. Asteroid spectral taxonomy is based largely on principal component analysis. However, how the surface properties of asteroids, such as the composition and age, are projected in the principal-component (PC) space is not understood well. We decompose multi-band disk-resolved visible spectra of the Itokawa surface with principal component analysis (PCA) in comparison with main-belt asteroids. The obtained distribution of Itokawa spectra projected in the PC space of main-belt asteroids follows a linear trend linking the Q-type and S-type regions and is consistent with the results of space-weathering experiments on ordinary chondrites and olivine, suggesting that this trend may be a space-weathering-induced spectral evolution track for S-type asteroids. Comparison with space-weathering experiments also yields a short average surface age along this spectral evolution track, strongly suggesting that space weathering has already saturated on this young asteroid. The freshest spectrum found on Itokawa exhibits a clear sign of space weathering, indicating again that space weathering occurs very rapidly on this body. We also conducted PCA on Itokawa spectra alone and compared the results with space-weathering experiments. The obtained results indicate that the first principal component of Itokawa surface spectra is consistent with spectral change due to space weathering and that the spatial variation in the degree of space weathering is very large (a factor of three in surface age), which would strongly suggest the presence of strong regional/local resurfacing process(es) on this small asteroid.
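
    The PCA machinery used in this kind of study reduces, for a matrix of band spectra, to mean-centering followed by a singular value decomposition. The sketch below uses synthetic two-end-member spectra purely for illustration; it is not the Itokawa pipeline.

```python
import numpy as np

def spectral_pca(spectra, n_components=3):
    """PCA of multi-band spectra, shape (n_pixels, n_bands): returns the
    per-pixel scores, the spectral eigenvectors and the explained variance."""
    X = spectra - spectra.mean(axis=0)           # mean-center each band
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    components = Vt[:n_components]
    explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, components, explained

# Toy example: 500 seven-band spectra mixed from two end-members plus noise
rng = np.random.default_rng(1)
end_members = rng.uniform(0.1, 0.5, size=(2, 7))
weights = rng.dirichlet([1.0, 1.0], size=500)
spectra = weights @ end_members + 0.01 * rng.standard_normal((500, 7))
scores, pcs, evr = spectral_pca(spectra, n_components=2)
```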

  2. Lanczos and modified Lanczos procedures for the Jahn-Teller systems

    Energy Technology Data Exchange (ETDEWEB)

    Bevilacqua, G; Martinelli, L; Pastori P, G. [Istituto Nazionale di Fisica della Materia, Dipartimento di Fisica dell Universita Piazza Torricelli 2, 56126 Pisa, Italy (Italy)

    1998-12-01

    The analysis of the dynamical properties of Jahn-Teller systems requires the computation of eigenstates of large and sparse matrices, for which the use of traditional computational techniques is in general precluded. Among the workable methods developed to handle these very large matrices, the Lanczos method and the related recursion method have emerged as the simplest and most efficient computational tools for a large variety of applications. The merits of the Lanczos recursion method are particularly evident when a few extreme eigenvalues are desired or when the recursion coefficients can be put in analytic form, as is the case of the E⊗ε and T⊗ε Jahn-Teller systems. In more general situations the Lanczos method is still extremely useful, but at the same time it must be used with extreme caution and appropriate implementations. Its main difficulty is related to the finite-precision arithmetic of computers, which causes a loss of orthogonality among the states generated by the Lanczos procedure: instabilities in the recursion coefficients can occur, producing the so-called "Lanczos phenomena" (ghost states or spurious states). However, a precious tool to identify unambiguously the good eigenvalues from the fake ones is offered by the following implementation, developed by our group. The Lanczos scheme is applied not to H, but rather to the auxiliary operator A = (H - E_t)^2, whose ground state is determined through an iterative process which alternates the diagonalization of 2 x 2 Lanczos matrices with a two-pass Lanczos procedure of suitably small dimension. So the eigenstates of a system can be obtained, one at a time, within any desired energy range and with any desired precision. As an exemplification of this modified Lanczos procedure, the T⊗τ Jahn-Teller system and the absorption spectrum of ZnS:Fe^2+ are considered in detail. (Author)

  3. Impact of Compton scatter on material decomposition using a photon counting spectral detector

    Science.gov (United States)

    Lewis, Cale; Park, Chan-Soo; Fredette, Nathaniel R.; Das, Mini

    2017-03-01

    Photon counting spectral detectors are being investigated to allow better discrimination of multiple materials by collecting spectral data for every detector pixel. The process of material decomposition or discrimination starts with an accurate estimation of the energy-dependent attenuation of the composite object. The photoelectric effect and Compton scattering are two important constituents of the attenuation. Compton scattering, while resulting in a loss of primary photons, also results in an increase in photon counts in the lower energy bins via multiple orders of scatter. This contribution to each energy bin may change with material properties, thickness and x-ray energies. There has been little investigation into the effect of this increase in counts at lower energies due to the presence of these Compton scattered photons using photon counting detectors. Our investigations show that it is important to account for this effect in spectral decomposition problems.

  4. Regularization of nonlinear decomposition of spectral x-ray projection images.

    Science.gov (United States)

    Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise

    2017-09-01

    Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts, even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality and decomposition was 70 times faster. For the assessed experiments, material decomposition was possible

  5. A Lanczos eigenvalue method on a parallel computer. [for large complex space structure free vibration analysis

    Science.gov (United States)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    Eigenvalue analysis of complex structures is a computationally intensive task which can benefit significantly from new and impending parallel computers. This study reports on a parallel computer implementation of the Lanczos method for free vibration analysis. The approach used here subdivides the major Lanczos calculation tasks into subtasks and introduces parallelism down to the subtask levels such as matrix decomposition and forward/backward substitution. The method was implemented on a commercial parallel computer and results were obtained for a long flexible space structure. While parallel computing efficiency is problem and computer dependent, the efficiency for the Lanczos method was good for a moderate number of processors for the test problem. The greatest reduction in time was realized for the decomposition of the stiffness matrix, a calculation which took 70 percent of the time in the sequential program and which took 25 percent of the time on eight processors. For a sample calculation of the twenty lowest frequencies of a 486-degree-of-freedom problem, the total sequential computing time was reduced by almost a factor of ten using 16 processors.

  6. Non-canonical spectral decomposition of random functions of the traction voltage and current in electric transportation systems

    Directory of Open Access Journals (Sweden)

    N.A. Kostin

    2015-03-01

    The paper proposes a non-canonical spectral decomposition of random functions of the traction voltages and currents. This decomposition is adapted for electric transportation systems. The numerical representation is carried out for the random function of the voltage on the pantograph of the VL8 and DE1 electric locomotives.

  7. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    Science.gov (United States)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomenon is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA) and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine and can be used to validate the results that arise from various spectral decomposition algorithms and is very useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms to detect potential system-health issues on data from a spectral emulator with tunable health parameters.
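
    A compact way to reproduce the kind of comparison described above is to run PCA, ICA and NMF from scikit-learn on synthetic spectra built from known line shapes. The emulator here is only a toy Gaussian-line mixer, not the shuttle-plume emulator used in the record, and the SDA method itself is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA, NMF

rng = np.random.default_rng(0)
wavelengths = np.linspace(300, 800, 256)

def line(center, width):
    """A Gaussian emission-line profile on the wavelength grid."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Non-negative synthetic spectra: random mixtures of three known lines
sources = np.array([line(420, 10), line(550, 20), line(700, 15)])   # (3, 256)
abundances = rng.random((400, 3))
data = abundances @ sources + 0.01 * rng.random((400, 256))

pca_feats = PCA(n_components=3).fit_transform(data)
ica_feats = FastICA(n_components=3, random_state=0).fit_transform(data)
nmf = NMF(n_components=3, init='nndsvda', max_iter=500, random_state=0)
nmf_feats = nmf.fit_transform(data)     # non-negative abundance estimates
nmf_spectra = nmf.components_           # non-negative spectral factors
```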

  8. TU-F-18C-01: Breast Tissue Decomposition Using Spectral CT After Distortion Correction

    Energy Technology Data Exchange (ETDEWEB)

    Ding, H; Zhao, B; Klopfer, M; Masaki, F; Baturin, P; Molloi, S [University of California, Irvine, CA (United States)

    2014-06-15

    Purpose: To investigate the feasibility of accurate breast tissue compositional characterization by using spectral-distortion-corrected dual energy images from a photon-counting spectral CT. Methods: Thirty eight postmortem breasts were imaged with a Cadmium-Zinc-Telluride (CZT)-based photon-counting spectral CT system at beam energy of 100 kVp. The energy-resolved detector sorted photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose (MGD) for each breast was approximately 2.0 mGy. Dual energy technique was used to decompose breast tissue into water, lipid, and protein contents. Two image-based methods were investigated to improve the accuracy of tissue compositional characterization. The first method simply limited the recorded spectra up to 90 keV. This reduced the pulse pile-up artifacts but it has some dose penalty. The second method corrected the spectral information of all measured photons by using a spectral distortion correction technique. Breasts were then chemically decomposed into their respective water, lipid, and protein contents, which was used as the reference standard. The accuracy of the tissue compositional measurement with spectral CT was evaluated by the root-mean-square (RMS) errors in percentage composition. Results: The errors in quantitative material decomposition were significantly reduced after the appropriate image processing methods. As compared to the chemical analysis as the reference standard, the averages of the RMS errors were estimated to be 15.5%, 3.3%, and 2.8% for the raw, energy-limited, and spectral-corrected images, respectively. Conclusion: Spectral CT can be used to accurately quantify the water, lipid, and protein contents in breast tissues by implementing a spectral distortion correction algorithm. The tissue compositional information can potentially improve the sensitivity and specificity for breast cancer diagnosis.

  9. Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction

    Science.gov (United States)

    Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing

    2018-02-01

    Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.

  10. Regular Mittag-Leffler kernels and spectral decomposition of a class of non-selfadjoint operators

    Energy Technology Data Exchange (ETDEWEB)

    Gubreev, G M [South Ukrainian State K.D.Ushynsky Pedagogical University, Odessa (Ukraine)

    2005-02-28

    We define abstract Mittag-Leffler kernels with values in a separable Hilbert space. A Mittag-Leffler kernel is said to be c-regular (resp. d-regular) if it generates an integral transform of Fourier-Dzhrbashyan type (resp. if the space has an unconditional basis consisting of values of the kernel). We give a complete description of d-regular and c-regular kernels, which enables us to answer a question of M.G. Krein. We apply the notion of a regular Mittag-Leffler kernel to construct the spectral decomposition for one-dimensional perturbations of fractional powers of dissipative Volterra operators.

  11. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    Directory of Open Access Journals (Sweden)

    Chulhee Park

    2016-05-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.

  12. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    Science.gov (United States)

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.

  13. Compressive spectral image super-resolution by using singular value decomposition

    Science.gov (United States)

    Marquez, M.; Mejia, Y.; Arguello, Henry

    2017-12-01

    Compressive sensing (CS) has been recently applied to the acquisition and reconstruction of spectral images (SI). This field is known as compressive spectral imaging (CSI). The attainable resolution of SI depends on the sensor characteristics, whose cost increases in proportion to the resolution. Super-resolution (SR) approaches are usually applied to low-resolution (LR) CSI systems to improve the quality of the reconstructions by solving two consecutive optimization problems. In contrast, this work aims at reconstructing a high resolution (HR) SI from LR compressive measurements by solving a single convex optimization problem based on the fusion of CS and SR techniques. Furthermore, the truncated singular value decomposition is used to alleviate the computational complexity of the inverse reconstruction problem. The proposed method is tested by using the coded aperture snapshot spectral imager (CASSI), and the results are compared to HR-SI images directly reconstructed from LR-SI images by using an SR algorithm via sparse representation. In particular, a gain of up to 1.5 dB of PSNR is attained with the proposed method.
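
    The truncated SVD used above to tame the computational cost is itself a short NumPy operation. The sketch below shows only the truncation step (the Eckart-Young best rank-r approximation) on a toy, approximately low-rank band image, not the CASSI fusion problem.

```python
import numpy as np

def truncated_svd(M, rank):
    """Best rank-r approximation factors of M (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank]

# Toy example: a 256x256 "spectral band" that is approximately rank 8
rng = np.random.default_rng(2)
band = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 256))
band += 0.01 * rng.standard_normal((256, 256))

U, s, Vt = truncated_svd(band, rank=8)
band_r = (U * s) @ Vt                    # rank-8 reconstruction
rel_err = np.linalg.norm(band - band_r) / np.linalg.norm(band)
```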

  14. A polychromatic adaption of the Beer-Lambert model for spectral decomposition

    Science.gov (United States)

    Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.

    2017-03-01

    We present a semi-empirical forward-model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model relies on a minimum calibration effort to make the method applicable in routine clinical set-ups with the need for periodic re-calibration. In this work we present an experimental verification of our proposed method. The proposed method uses an adapted Beer-Lambert model, describing the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model demonstrates an accurate prediction of the registered counts for an attenuated polychromatic spectrum. Deviations between model and measurement data thereby lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward-model. The experimental data also shows that the model is capable of handling possible spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model provides a viable forward-model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.
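
    As a point of reference for the adapted model, the standard polychromatic Beer-Lambert forward model predicts bin counts by integrating the attenuated source spectrum over each energy bin. The sketch below implements that baseline with invented spectra and attenuation curves; the paper's semi-empirical adaptation adds calibrated exponential terms that are not reproduced here.

```python
import numpy as np

E = np.arange(20, 121)                            # keV grid
spectrum = np.exp(-0.5 * ((E - 60) / 25) ** 2)    # toy 120 kVp-like source shape

# Toy energy-dependent attenuation curves (1/cm) for two basis materials
mu_water = 0.4 * (E / 20.0) ** -1.5 + 0.17
mu_bone = 2.0 * (E / 20.0) ** -2.5 + 0.30

def expected_counts(t_water, t_bone, bin_edges=((20, 45), (45, 120))):
    """Expected counts per energy bin for path lengths t_water, t_bone (cm),
    following the polychromatic Beer-Lambert law."""
    transmitted = spectrum * np.exp(-(mu_water * t_water + mu_bone * t_bone))
    return np.array([transmitted[(E >= lo) & (E < hi)].sum()
                     for lo, hi in bin_edges])

counts = expected_counts(t_water=10.0, t_bone=1.0)
```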

  15. Analysis of daily river flow fluctuations using empirical mode decomposition and arbitrary order Hilbert spectral analysis

    Science.gov (United States)

    Huang, Yongxiang; Schmitt, François G.; Lu, Zhiming; Liu, Yulu

    2009-06-01

    In this paper we present the analysis of two long time series of daily river flow data: 32 years recorded in the Seine river (France) and 25 years recorded in the Wimereux river (Wimereux, France). We applied a scale-based decomposition method, namely Empirical Mode Decomposition (EMD), to these time series. The data were decomposed into several Intrinsic Mode Functions (IMF). The mean frequency of each IMF mode indicated that the EMD method acts as a filter bank. Furthermore, the cross-correlation between these IMF modes from the Seine river and Wimereux river demonstrated correlation among the large scale IMF modes, which indicates that both rivers are likely to be influenced by the same maritime climate event of Northern France. As a confirmation we found that the large scale parts have the same evolution trend. We finally applied arbitrary order Hilbert spectral analysis, a new technique coming from turbulence studies and time series analysis, on the flow discharge of the Seine river. This new method provides an amplitude-frequency representation of the original time series, giving a joint pdf p(ω,A). When marginal moments of the amplitude are computed, one obtains an intermittency study in the frequency space. Applied to river flow discharge data from the Seine river, this shows the scaling range and characterizes the intermittent fluctuations over the range of scales from 4.5 to 60 days, between synoptic and intraseasonal scales.

  16. Lanczos's equation to replace Dirac's equation?

    CERN Document Server

    Gsponer, Andre; Hurni, Jean-Pierre

    1994-01-01

    Lanczos's quaternionic interpretation of Dirac's equation provides a unified description for all elementary particles of spin 0, 1/2, 1, and 3/2. The Lagrangian formulation given by Einstein and Mayer in 1933 predicts two main classes of solutions. (1) Point-like partons, which come in two families, quarks and leptons. The correct fractional or integral electric and baryonic charges, and zero mass for the neutrino and the u-quark, are set by eigenvalue equations. The electro-weak interaction of the partons is the same as in the Standard Model, with the same two free parameters: e and sin^2 theta. There is no need for a Higgs symmetry breaking mechanism. (2) Extended hadrons, for which there is no simple eigenvalue equation for the mass. The strong interaction is essentially non-local. The pion mass and pion-nucleon coupling constant determine to first order the nucleon size, mass and anomalous magnetic moment.

  17. Projection preconditioning for Lanczos-type methods

    Energy Technology Data Exchange (ETDEWEB)

    Bielawski, S.S.; Mulyarchik, S.G.; Popov, A.V. [Belarusian State Univ., Minsk (Belarus)]

    1996-12-31

    We show how auxiliary subspaces and related projectors may be used for preconditioning a nonsymmetric system of linear equations. It is shown that a system preconditioned in such a way (or projected) is better conditioned than the original system (at least if the coefficient matrix of the system to be solved is symmetrizable). Two approaches for solving the projected system are outlined. The first one implies straightforward computation of the projected matrix and subsequent use of some direct or iterative method. The second approach is the projection preconditioning of a conjugate gradient-type solver. The latter approach is developed here in the context of the biconjugate gradient iteration and some related Lanczos-type algorithms. Some possible particular choices of auxiliary subspaces are discussed. It is shown that one of them is equivalent to using colorings. Some results of numerical experiments are reported.

  18. Lanczos-Lovelock gravity from a thermodynamic perspective

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Sumanta [Inter-University Centre for Astronomy and Astrophysics (IUCAA), Post Bag 4, Ganeshkhind, Pune University Campus, Pune 411 007 (India)]

    2015-08-07

    The deep connection between gravitational dynamics and horizon thermodynamics leads to several intriguing features both in general relativity and in Lanczos-Lovelock theories of gravity. Recently in http://arxiv.org/abs/1312.3253 several additional results strengthening the above connection have been established within the framework of general relativity. In this work we provide a generalization of the above setup to Lanczos-Lovelock gravity as well. In line with our expectations, it turns out that most of the results obtained in the context of general relativity generalize to Lanczos-Lovelock gravity in a straightforward but non-trivial manner. First, we provide an alternative and more general derivation of the connection between the Noether charge for a specific time evolution vector field and the gravitational heat density of the boundary surface. This will lead to holographic equipartition for static spacetimes in Lanczos-Lovelock gravity as well. Taking a cue from this, we have introduced a naturally defined four-momentum current associated with gravity and the matter energy-momentum tensor for both the Lanczos-Lovelock Lagrangian and its quadratic part. Then, we consider the concepts of Noether charge for null boundaries in Lanczos-Lovelock gravity by providing a direct generalization of previous results derived in the context of general relativity. Another very interesting feature of gravity is that the gravitational field equations for arbitrary static and spherically symmetric spacetimes with a horizon can be written as a thermodynamic identity in the near-horizon limit. This result holds in both general relativity and in Lanczos-Lovelock gravity as well. In a previous work [http://arxiv.org/abs/1505.05297] we have shown that, for an arbitrary spacetime, the gravitational field equations near any null surface generically lead to a thermodynamic identity. In this work, we have also generalized this result to Lanczos-Lovelock gravity by showing that gravitational field equations for Lanczos

  19. Singular solution of the Feller diffusion equation via a spectral decomposition

    Science.gov (United States)

    Gan, Xinjun; Waxman, David

    2015-01-01

    Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.

  20. Stochastic space transforms in subsurface hydrology — Part 2: Generalized spectral decompositions and Plancherel representations

    Science.gov (United States)

    Christakos, G.; Hristopulos, D. T.

    1994-06-01

    In earlier publications, certain applications of space transformation operators in subsurface hydrology were considered. These operators reduce the original multi-dimensional problem to the one-dimensional space, and can be used to study stochastic partial differential equations governing groundwater flow and solute transport processes. In the present work we discuss developments in the theoretical formulation of flow models with space-dependent coefficients in terms of space transformations. The formulation is based on stochastic Radon operator representations of generalized functions. A generalized spectral decomposition of the flow parameters is introduced, which leads to analytically tractable expressions of the space transformed flow equation. A Plancherel representation of the space transformation product of the head potential and the log-conductivity is also obtained. A test problem is first considered in detail and the solutions obtained by means of the proposed approach are compared with the exact solutions obtained by standard partial differential equation methods. Then, solutions of three-dimensional groundwater flow are derived starting from solutions of a one-dimensional model along various directions in space. A step-by-step numerical formulation of the approach to the flow problem is also discussed, which is useful for practical applications. Finally, the space transformation solutions are compared with local solutions obtained by means of series expansions of the log-conductivity gradient.

  1. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    Science.gov (United States)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often times, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
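
    The two-stage description above translates into a short NumPy sketch: standard DMD projects only the first snapshot matrix, while the de-biased (total) variant first projects both snapshot matrices onto the leading right-singular subspace of the stacked matrix [X; Y]. The toy data and the rank choice below are assumptions made for illustration, not the flow data of the paper.

```python
import numpy as np

def dmd(X, Y, r):
    """Standard (exact) DMD: eigendecomposition of the best-fit linear map
    with Y ~= A X, using a rank-r projection built from X alone."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vt[:r].conj().T
    A_tilde = U.conj().T @ Y @ V / s
    evals, W = np.linalg.eig(A_tilde)
    modes = Y @ V / s @ W
    return evals, modes

def total_dmd(X, Y, r):
    """De-biased ("total") DMD sketch: project X and Y onto the leading
    right-singular subspace of [X; Y] so noise is treated symmetrically."""
    _, _, Vt = np.linalg.svd(np.vstack([X, Y]), full_matrices=False)
    P = Vt[:r].conj().T @ Vt[:r]
    return dmd(X @ P, Y @ P, r)

# Toy data: two decaying oscillators observed through 20 noisy channels
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 201)
latent = np.array([np.exp(-0.05 * t) * np.cos(2.0 * t),
                   np.exp(-0.20 * t) * np.sin(4.5 * t)])
C = rng.standard_normal((20, 2))
snapshots = C @ latent + 0.05 * rng.standard_normal((20, t.size))
X, Y = snapshots[:, :-1], snapshots[:, 1:]

evals_dmd, _ = dmd(X, Y, r=4)
evals_tdmd, _ = total_dmd(X, Y, r=4)
# Continuous-time eigenvalues; TDMD should recover decay rates with less bias
cont_dmd = np.log(evals_dmd) / (t[1] - t[0])
cont_tdmd = np.log(evals_tdmd) / (t[1] - t[0])
```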

  2. Assessment of the cardiovascular regulation during robotic assisted locomotion in normal subjects: autoregressive spectral analysis vs empirical mode decomposition.

    Science.gov (United States)

    Magagnin, V; Caiani, E G; Fusini, L; Turiel, M; Licari, V; Bo, I; Cerutti, S; Porta, A

    2008-01-01

    Robotic assisted locomotion systems are recently gaining appreciation as methods to rehabilitate individuals with lost sensory motor function. In the present study we compare autoregressive power spectral analysis and empirical mode decomposition (EMD) applied to the analysis of short-term heart period variability regarding their ability to typify autonomic response during a robotic assisted locomotion session consisting of the following phases: 1) sitting position; 2) standing position; 3) suspension during subject instrumentation; 4) robotic assisted treadmill locomotion with partial body weight support; 5) standing recovery after exercise. Results showed a significant tachycardia during the suspension phase, but no significant changes of spectral indexes. On the contrary, when spectral indexes were derived according to EMD, changes were evidenced during the suspension and walking phases. The EMD method is more powerful than autoregressive spectral analysis in detecting variations of parasympathetic and sympathetic modulations elicited by a robotic-assisted locomotion protocol.

  3. RESOLVING THE ACTIVE GALACTIC NUCLEUS AND HOST EMISSION IN THE MID-INFRARED USING A MODEL-INDEPENDENT SPECTRAL DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Hernán-Caballero, Antonio; Alonso-Herrero, Almudena [Instituto de Física de Cantabria, CSIC-UC, Avenida de los Castros s/n, E-39005, Santander (Spain); Hatziminaoglou, Evanthia [European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei München (Germany); Spoon, Henrik W. W. [Cornell University, CRSR, Space Sciences Building, Ithaca, NY 14853 (United States); Almeida, Cristina Ramos [Instituto de Astrofísica de Canarias, Vía Láctea s/n, E-38205 La Laguna, Tenerife (Spain); Santos, Tanio Díaz [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército Libertador 441, Santiago (Chile); Hönig, Sebastian F. [School of Physics and Astronomy, University of Southampton, Southampton SO18 1BJ (United Kingdom); González-Martín, Omaira [Centro de Radioastronomía y Astrofísica (CRyA-UNAM), 3-72 (Xangari), 8701, Morelia (Mexico); Esquej, Pilar, E-mail: ahernan@ifca.unican.es [Departamento de Astrofísica, Facultad de CC. Físicas, Universidad Complutense de Madrid, E-28040 Madrid (Spain)

    2015-04-20

    We present results on the spectral decomposition of 118 Spitzer Infrared Spectrograph (IRS) spectra from local active galactic nuclei (AGNs) using a large set of Spitzer/IRS spectra as templates. The templates are themselves IRS spectra from extreme cases where a single physical component (stellar, interstellar, or AGN) completely dominates the integrated mid-infrared emission. We show that a linear combination of one template for each physical component reproduces the observed IRS spectra of AGN hosts with unprecedented fidelity for a template fitting method with no need to model extinction separately. We use full probability distribution functions to estimate expectation values and uncertainties for observables, and find that the decomposition results are robust against degeneracies. Furthermore, we compare the AGN spectra derived from the spectral decomposition with sub-arcsecond resolution nuclear photometry and spectroscopy from ground-based observations. We find that the AGN component derived from the decomposition closely matches the nuclear spectrum with a 1σ dispersion of 0.12 dex in luminosity and typical uncertainties of ∼0.19 in the spectral index and ∼0.1 in the silicate strength. We conclude that the emission from the host galaxy can be reliably removed from the IRS spectra of AGNs. This allows for unbiased studies of the AGN emission in intermediate- and high-redshift galaxies—currently inaccessible to ground-based observations—with archival Spitzer/IRS data and in the future with the Mid-InfraRed Instrument of the James Webb Space Telescope. The decomposition code and templates are available at http://denebola.org/ahc/deblendIRS.
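
    A linear combination of one template per physical component can be fitted, for instance, with a non-negative least-squares solver. The sketch below assumes hypothetical arrays holding the stellar, interstellar (PAH) and AGN templates interpolated onto the wavelength grid of the observed spectrum; it is an illustration of the fitting idea, not the deblendIRS code itself.

        import numpy as np
        from scipy.optimize import nnls

        def decompose_spectrum(observed, stellar, pah, agn):
            """Fit observed ~ a*stellar + b*pah + c*agn with non-negative weights."""
            templates = np.column_stack([stellar, pah, agn])
            coeffs, residual = nnls(templates, observed)
            model = templates @ coeffs
            agn_fraction = (agn * coeffs[2]).sum() / model.sum()   # AGN share of the model flux
            return coeffs, model, agn_fraction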

  4. A Lanczos algorithm for vibration, buckling and thermal analysis

    Science.gov (United States)

    Bostic, Susan W.

    1993-01-01

    This paper reviews an eigensolver algorithm based on the Lanczos Method for vibration, buckling and thermal analysis. The original code was written for inclusion in the Computational Mechanics Testbed (COMET), a general purpose finite element code. A portable version of the Lanczos code that is optimized for high-performance supercomputers has been developed. Special features of the algorithm include the capability to compute rigid body modes, thermal modes and Lanczos vectors that are derived from the applied load vector. The latter is necessary when using the Lanczos vectors as reduced-basis vectors in transient structural response and transient heat conduction calculations. The modularity of the code allows the user the option of including the most up-to-date utilities, such as the equation solver best suited for the application. The algorithm is discussed in detail and results of several applications are presented. Timing results for a vibration application indicate that the Lanczos algorithm is twenty times faster than the subspace iteration method which has been extensively used in the past.
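
    For a sense of how such an eigensolver is used in practice, the sketch below calls SciPy's eigsh, which wraps ARPACK's implicitly restarted Lanczos method, on a toy generalized eigenproblem K x = λ M x; the matrices stand in for assembled stiffness and mass matrices and are purely illustrative, not the COMET code described above.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        # toy 1-D "structure": tridiagonal stiffness matrix and identity mass matrix
        n = 200
        K = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
                     [-1, 0, 1], format="csc")
        M = sp.identity(n, format="csc")

        # six lowest eigenpairs via shift-invert Lanczos about sigma = 0
        vals, vecs = eigsh(K, k=6, M=M, sigma=0.0, which="LM")
        freqs = np.sqrt(vals) / (2.0 * np.pi)   # natural frequencies, if K and M are consistently scaled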

  5. Application of Spectral Decomposition Techniques in the Assessment and Intercomparison of Models and Observations

    Science.gov (United States)

    Carlson, B. E.; Li, J.; Lacis, A. A.

    2014-12-01

    In the assessment of models using observations, or the intercomparison between different observational datasets, it is necessary to examine the coherency in the spatial and temporal variability present in different datasets. Meanwhile, global datasets are always high dimensional; therefore, efficient comparison is not an easy task. In this study, we apply several spectral decomposition techniques, namely Combined Principal Component Analysis (CPCA) and Combined Maximum Covariance Analysis (CMCA), as effective means to reduce data dimension and extract the dominant variability. More importantly, these methods find the common modes of variability in different datasets, therefore allowing parallel comparison and evaluation. These methods were applied to the AOD fields from fifteen CMIP5 models and three observational datasets: MODIS, MISR and AERONET. We focus on large-scale features including the spatial distribution, seasonality and long-term trends. Results show that while models qualitatively agree with observations, significant regional differences still exist, especially in regions with mixed aerosol types such as the Sahel, North India and East Asia. Compared with observations, models in general lack interannual variability. Moreover, all models indicate consistent AOD trends with increases over East Asia and decreases over East US and Europe. However, the AOD trends over these regions are not very significant in the observations. Instead, a significant increase in dust concentrations over the Arabian Peninsula and a significant decrease over the biomass burning regions of South America are found in MODIS and MISR. The aerosol composition for the regions with largest disagreement is also examined. Figure caption: The dominant mode of the CMCA analysis using fifteen CMIP5 models and MODIS, MISR and AERONET. The color of the circles indicates the signal of AERONET. This mode is associated with a summer-winter seasonal cycle and models agree qualitatively with
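
    One common way to realize a combined PCA of several datasets is to standardize each field, concatenate them along the spatial dimension, and take an SVD. The sketch below follows that recipe under the assumption that all datasets share a common time axis; it is only a schematic of the CPCA idea, not the analysis code used in the study.

        import numpy as np

        def combined_pca(datasets, n_modes=3):
            """datasets: list of arrays, each (n_time, n_space_i), on a shared time axis."""
            anoms = [(d - d.mean(axis=0)) / d.std(axis=0) for d in datasets]   # standardized anomalies
            stacked = np.hstack(anoms)                                         # time x total space
            U, s, Vh = np.linalg.svd(stacked, full_matrices=False)
            pcs = U[:, :n_modes] * s[:n_modes]                                 # shared principal components
            splits = np.cumsum([a.shape[1] for a in anoms])[:-1]
            eofs = np.split(Vh[:n_modes], splits, axis=1)                      # one EOF block per dataset
            return pcs, eofs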

  6. Conditional-likelihood approach to material decomposition in spectral absorption-based or phase-contrast CT

    Science.gov (United States)

    Baturin, Pavlo

    2015-03-01

    Material decomposition in absorption-based X-ray CT imaging suffers certain inefficiencies when differentiating among soft tissue materials. To address this problem, decomposition techniques turn to spectral CT, which has gained popularity over the last few years. Although proven to be more effective, such techniques are primarily limited to the identification of contrast agents and soft and bone-like materials. In this work, we introduce a novel conditional likelihood, material-decomposition method capable of identifying any type of material object scanned by spectral CT. The method takes advantage of the statistical independence of spectral data to assign likelihood values to each of the materials on a pixel-by-pixel basis. It results in likelihood images for each material, which can be further processed by setting certain conditions or thresholds, to yield a final material-diagnostic image. The method can also utilize phase-contrast CT (PCI) data, where measured absorption and phase-shift information can be treated as statistically independent datasets. In this method, the following cases were simulated: (i) single-scan PCI CT, (ii) spectral PCI CT, (iii) absorption-based spectral CT, and (iv) single-scan PCI CT with an added tumor mass. All cases were analyzed using a digital breast phantom, although any other objects or materials could be used instead. As a result, all materials were identified, as expected, according to their assignment in the digital phantom. Materials with similar attenuation or phase-shift values (e.g., glandular tissue, skin, and tumor masses) were differentiated especially successfully by the likelihood approach.

  7. A BVMF-B algorithm for nonconvex nonlinear regularized decomposition of spectral x-ray projection images

    Science.gov (United States)

    Pham, Mai Quyen; Ducros, Nicolas; Nicolas, Barbara

    2017-03-01

    Spectral computed tomography (CT) exploits the measurements obtained by a photon counting detector to reconstruct the chemical composition of an object. In particular, spectral CT has shown a very good ability to image K-edge contrast agents. Spectral CT is an inverse problem that can be addressed by solving two subproblems, namely the basis material decomposition (BMD) problem and the tomographic reconstruction problem. In this work, we focus on the BMD problem, which is ill-posed and nonlinear. The BMD problem is classically either linearized, which enables reconstruction based on compressed sensing methods, or nonlinearly solved with no explicit regularization scheme. In a previous communication, we proposed a nonlinear regularized Gauss-Newton (GN) algorithm.1 However, this algorithm can only be applied to convex regularization functionals. In particular, the lp (p soft tissue, bone and gadolinium, which is scanned with a 90-kV x-ray tube and a 3-bin photon counting detector.

  8. A structure preserving Lanczos algorithm for computing the optical absorption spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Meiyue [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Div.; Jornada, Felipe H. da [Univ. of California, Berkeley, CA (United States). Dept. of Physics; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Materials Science Div.; Lin, Lin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Div.; Univ. of California, Berkeley, CA (United States). Dept. of Mathematics; Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Div.; Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Louie, Steven G. [Univ. of California, Berkeley, CA (United States). Dept. of Physics; Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Materials Science Div.

    2016-11-16

    We present a new structure preserving Lanczos algorithm for approximating the optical absorption spectrum in the context of solving the full Bethe-Salpeter equation without the Tamm-Dancoff approximation. The new algorithm is based on a structure preserving Lanczos procedure, which exploits the special block structure of Bethe-Salpeter Hamiltonian matrices. A recently developed technique of generalized averaged Gauss quadrature is incorporated to accelerate the convergence. We also establish the connection between our structure preserving Lanczos procedure and several existing Lanczos procedures developed in different contexts. Numerical examples are presented to demonstrate the effectiveness of our Lanczos algorithm.

  9. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator

    Science.gov (United States)

    Li, Qianxiao; Dietrich, Felix; Bollt, Erik M.; Kevrekidis, Ioannis G.

    2017-10-01

    Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) [51] and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.
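
    For reference, the fixed-dictionary EDMD that the paper builds on can be written in a few lines: evaluate a dictionary of observables on snapshot pairs, solve a least-squares problem for the finite-dimensional Koopman approximation, and read eigenvalues and eigenfunction samples off its eigendecomposition. The radial-basis dictionary below is one arbitrary choice, standing in for the trainable network of the paper; all names are illustrative.

        import numpy as np

        def dictionary(X, centers, sigma=1.0):
            """Observables: a constant, the state itself, and Gaussian RBFs (arbitrary choice)."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.hstack([np.ones((X.shape[0], 1)), X, np.exp(-d2 / (2 * sigma ** 2))])

        def edmd(X, Y, centers):
            """X, Y: (n_snapshots, n_states) with Y the one-step successors of X."""
            PsiX, PsiY = dictionary(X, centers), dictionary(Y, centers)
            K = np.linalg.lstsq(PsiX, PsiY, rcond=None)[0]   # PsiX @ K approximates PsiY
            evals, V = np.linalg.eig(K)
            eigfuns = PsiX @ V            # Koopman eigenfunctions sampled at the snapshots
            return evals, eigfuns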

  10. Analysis of daily river flow fluctuations using Empirical Mode Decomposition and arbitrary order Hilbert spectral analysis

    OpenAIRE

    Huang, Yongxiang; Schmitt, François G; Lu, Zhiming; Liu, Yulu

    2009-01-01

    International audience; In this paper we presented the analysis of two long time series of daily river flow data, 32 years recorded in the Seine river (France), and 25 years recorded in the Wimereux river (Wimereux, France). We applied a scale based decomposition method, namely Empirical Mode Decomposition (EMD), on these time series. The data were decomposed into several Intrinsic Mode Functions (IMF). The mean frequency of each IMF mode indicated that the EMD method acts as a filter bank. F...

  11. LBAS: Lanczos Bidiagonalization with Subspace Augmentation for Discrete Inverse Problems

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Abe, Kuniyoshi

    The regularizing properties of Lanczos bidiagonalization are powerful when the underlying Krylov subspace captures the dominating components of the solution. In some applications the regularized solution can be further improved by augmenting the Krylov subspace with a low-dimensional subspace tha...

  12. Non-normal Lanczos methods for quantum scattering.

    Science.gov (United States)

    Khorasani, Reza Rajaie; Dumont, Randall S

    2008-07-21

    This article presents a new complex absorbing potential (CAP) block Lanczos method for computing scattering eigenfunctions and reaction probabilities. The method reduces the problem of computing energy eigenfunctions to solving two energy-dependent systems of equations. An energy-independent block Lanczos factorization casts the system into a block tridiagonal form, which can be solved very efficiently for all energies. We show that CAP-Lanczos methods exhibit instability due to the non-normality of CAP Hamiltonians and may break down for some systems. The instability is not due to loss of orthogonality but to non-normality of the Hamiltonian matrix. While use of a Woods-Saxon exponential CAP, as opposed to a polynomial CAP, reduced non-normality, it did not always ensure convergence. Our results indicate that the Arnoldi algorithm is more robust for non-normal systems and less prone to break down. An Arnoldi version of our method is applied to a nonadiabatic tunneling Hamiltonian with excellent results, while the Lanczos algorithm breaks down for this system.

  13. Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model

    Science.gov (United States)

    Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.

    2017-01-01

    Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects which can be termed as a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
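
    The essence of such a decomposition is a polychromatic forward model for the expected counts in each energy bin and a maximum-likelihood fit of the basis-material line integrals. The sketch below assumes a calibrated effective spectrum per bin and tabulated attenuation coefficients, all hypothetical, and uses a generic Poisson likelihood rather than the exact estimator of the paper.

        import numpy as np
        from scipy.optimize import minimize

        def expected_counts(t, S, mu):
            """Polychromatic Beer-Lambert model. t: basis thicknesses (n_mat,),
            S: effective spectrum per bin (n_bins, n_energies), mu: (n_mat, n_energies)."""
            return S @ np.exp(-(mu.T @ t))

        def neg_log_likelihood(t, y, S, mu):
            lam = expected_counts(t, S, mu)
            return np.sum(lam - y * np.log(lam))          # Poisson NLL up to a constant

        def decompose_pixel(y, S, mu, t0):
            """Maximum-likelihood basis-material estimate for one detector pixel."""
            res = minimize(neg_log_likelihood, t0, args=(y, S, mu),
                           method="L-BFGS-B", bounds=[(0.0, None)] * len(t0))
            return res.x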

  14. Single-trial normalization for event-related spectral decomposition reduces sensitivity to noisy trials

    Directory of Open Access Journals (Sweden)

    Romain eGrandchamp

    2011-09-01

    In EEG research, the classical Event-Related Potential (ERP) model often proves to be a limited method when studying complex brain dynamics. For this reason, spectral techniques adapted from signal processing such as Event-Related Spectral Perturbation (ERSP) and its variants ERS (Event-Related Synchronization) and ERD (Event-Related Desynchronization) have been used over the past 20 years. They represent average spectral changes in response to a stimulus. These spectral methods do not have strong consensus for comparing pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimate of multiple trials. Correcting the baseline of each single trial prior to averaging spectral estimates is an alternative baseline correction method. However, we show that this method leads to positively skewed post-stimulus ERSP values. We eventually present new single-trial based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time-frequency responses and behavior when performing statistical inference testing compared to classical ERSP methods.
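
    The difference between the classical and the single-trial-normalized ERSP boils down to whether each trial's time-frequency power is divided by its own (full-epoch) mean before averaging. A minimal sketch of that comparison follows; the array shapes and the full-epoch normalization choice are assumptions for illustration, not the exact pipeline of the paper.

        import numpy as np

        def ersp(power, baseline_idx, single_trial=True):
            """power: (n_trials, n_freqs, n_times) time-frequency power estimates.
            Returns ERSP in dB relative to the pre-stimulus baseline samples."""
            if single_trial:
                # trial normalization: divide each trial by its own full-epoch mean power
                power = power / power.mean(axis=2, keepdims=True)
            avg = power.mean(axis=0)                                   # average over trials
            base = avg[:, baseline_idx].mean(axis=1, keepdims=True)    # classical baseline
            return 10.0 * np.log10(avg / base)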

  15. Entropy-Based Incomplete Cholesky Decomposition for a Scalable Spectral Clustering Algorithm: Computational Studies and Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Rocco Langone

    2016-05-01

    Spectral clustering methods allow datasets to be partitioned into clusters by mapping the input datapoints into the space spanned by the eigenvectors of the Laplacian matrix. In this article, we make use of the incomplete Cholesky decomposition (ICD) to construct an approximation of the graph Laplacian and reduce the size of the related eigenvalue problem from N to m, with m ≪ N. In particular, we introduce a new stopping criterion based on normalized mutual information between consecutive partitions, which terminates the ICD when the change in the cluster assignments is below a given threshold. Compared with existing ICD-based spectral clustering approaches, the proposed method allows the reduction of the number m of selected pivots (i.e., it obtains a sparser model) and, at the same time, maintains high clustering quality. The method scales linearly with respect to the number of input datapoints N and has low memory requirements, because only matrices of size N × m and m × m are calculated (in contrast to standard spectral clustering, where the construction of the full N × N similarity matrix is needed). Furthermore, we show that the number of clusters can be reliably selected based on the gap heuristics computed using just a small matrix R of size m × m instead of the entire graph Laplacian. The effectiveness of the proposed algorithm is tested on several datasets.
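
    The core of such an approach is a pivoted incomplete Cholesky factorization of the similarity matrix, K ≈ G Gᵀ, computed column by column so the full N × N matrix is never formed, followed by an eigenproblem of size m. The sketch below uses an RBF similarity, a fixed tolerance instead of the paper's mutual-information stopping rule, and k-means on the leading singular vectors; all parameters are illustrative.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def rbf_column(X, i, sigma):
            """One column of the RBF similarity matrix, computed on demand."""
            return np.exp(-((X - X[i]) ** 2).sum(axis=1) / (2 * sigma ** 2))

        def incomplete_cholesky(X, sigma, tol=1e-3, max_rank=100):
            """Pivoted incomplete Cholesky of the kernel matrix: K ~ G @ G.T."""
            n = X.shape[0]
            diag = np.ones(n)                     # RBF kernel has unit diagonal
            G = np.zeros((n, max_rank))
            for j in range(max_rank):
                i = int(np.argmax(diag))
                if diag[i] < tol:
                    return G[:, :j]
                G[:, j] = (rbf_column(X, i, sigma) - G[:, :j] @ G[i, :j]) / np.sqrt(diag[i])
                diag -= G[:, j] ** 2
                diag[i] = 0.0                     # guard against round-off
            return G

        def spectral_clustering_icd(X, k, sigma):
            G = incomplete_cholesky(X, sigma)                     # N x m with m << N
            d = G @ (G.T @ np.ones(G.shape[0]))                   # approximate degrees
            U, _, _ = np.linalg.svd(G / np.sqrt(d)[:, None], full_matrices=False)
            _, labels = kmeans2(U[:, :k], k, minit="++", seed=1)
            return labels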

  16. Joint Spectral Decomposition for the Parcellation of the Human Cerebral Cortex Using Resting-State fMRI.

    Science.gov (United States)

    Arslan, Salim; Parisot, Sarah; Rueckert, Daniel

    2015-01-01

    Identification of functional connections within the human brain has gained a lot of attention due to its potential to reveal neural mechanisms. In a whole-brain connectivity analysis, a critical stage is the computation of a set of network nodes that can effectively represent cortical regions. To address this problem, we present a robust cerebral cortex parcellation method based on spectral graph theory and resting-state fMRI correlations that generates reliable parcellations at the single-subject level and across multiple subjects. Our method models the cortical surface in each hemisphere as a mesh graph represented in the spectral domain with its eigenvectors. We connect cortices of different subjects with each other based on the similarity of their connectivity profiles and construct a multi-layer graph, which effectively captures the fundamental properties of the whole group as well as preserves individual subject characteristics. Spectral decomposition of this joint graph is used to cluster each cortical vertex into a subregion in order to obtain whole-brain parcellations. Using rs-fMRI data collected from 40 healthy subjects, we show that our proposed algorithm computes highly reproducible parcellations across different groups of subjects and at varying levels of detail with an average Dice score of 0.78, achieving up to 9% better reproducibility compared to existing approaches. We also report that our group-wise parcellations are functionally more consistent, thus, can be reliably used to represent the population in network analyses.

  17. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    Science.gov (United States)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection effect for fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important current source of environmental pollution, are highly oncogenic. PAH pollutants can be detected using fluorescence spectroscopy; however, the instrument introduces noise during the experiment, and weak fluorescent signals are affected by it, so we propose a way to denoise the spectra and improve the detection effect. First, a fluorescence spectrometer is used to detect PAHs and obtain fluorescence spectra. Subsequently, noise is reduced with the EEMD algorithm. Finally, the experimental results show the proposed method is feasible.
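
    A noise-reduction step of this kind can be prototyped with an off-the-shelf EEMD implementation, for example the PyEMD package (distributed on pip as EMD-signal); whether that library was used here is not stated, and the choice of which IMFs to discard is an assumption for illustration.

        import numpy as np
        from PyEMD import EEMD   # third-party package, installed as "EMD-signal"

        def eemd_denoise(spectrum, n_drop=1, trials=100):
            """Decompose a fluorescence spectrum with EEMD and rebuild it without the
            first n_drop IMFs, which typically carry most of the high-frequency noise."""
            imfs = EEMD(trials=trials).eemd(np.asarray(spectrum, dtype=float))
            return imfs[n_drop:].sum(axis=0)   # sum of the remaining, smoother components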

  18. Spectral decomposition of internal gravity wave sea surface height in global models

    Science.gov (United States)

    Savage, Anna C.; Arbic, Brian K.; Alford, Matthew H.; Ansong, Joseph K.; Farrar, J. Thomas; Menemenlis, Dimitris; O'Rourke, Amanda K.; Richman, James G.; Shriver, Jay F.; Voet, Gunnar; Wallcraft, Alan J.; Zamudio, Luis

    2017-10-01

    Two global ocean models ranging in horizontal resolution from 1/12° to 1/48° are used to study the space and time scales of sea surface height (SSH) signals associated with internal gravity waves (IGWs). Frequency-horizontal wavenumber SSH spectral densities are computed over seven regions of the world ocean from two simulations of the HYbrid Coordinate Ocean Model (HYCOM) and three simulations of the Massachusetts Institute of Technology general circulation model (MITgcm). High wavenumber, high-frequency SSH variance follows the predicted IGW linear dispersion curves. The realism of high-frequency motions (>0.87 cpd) in the models is tested through comparison of the frequency spectral density of dynamic height variance computed from the highest-resolution runs of each model (1/25° HYCOM and 1/48° MITgcm) with dynamic height variance frequency spectral density computed from nine in situ profiling instruments. These high-frequency motions are of particular interest because of their contributions to the small-scale SSH variability that will be observed on a global scale in the upcoming Surface Water and Ocean Topography (SWOT) satellite altimetry mission. The variance at supertidal frequencies can be comparable to the tidal and low-frequency variance for high wavenumbers (length scales smaller than ˜50 km), especially in the higher-resolution simulations. In the highest-resolution simulations, the high-frequency variance can be greater than the low-frequency variance at these scales.

  19. A Numerical Solution Using an Adaptively Preconditioned Lanczos Method for a Class of Linear Systems Related with the Fractional Poisson Equation

    Directory of Open Access Journals (Sweden)

    M. Ilić

    2008-01-01

    This study considers the solution of a class of linear systems related to the fractional Poisson equation (FPE) (−∇²)^(α/2) φ = g(x, y) with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix A raised to the fractional power α/2. The solution of the linear system then requires the action of the matrix function f(A) = A^(−α/2) on a vector b. For large, sparse, and symmetric positive definite matrices, the Lanczos approximation generates f(A)b ≈ β0 Vm f(Tm) e1. This method works well when both the analytic grade of A with respect to b and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the idea of thick-restart and adaptive preconditioning for solving linear systems to improve convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.
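
    The quantity f(A)b ≈ β0 Vm f(Tm) e1 mentioned above can be computed with a plain (unrestarted, unpreconditioned) Lanczos recurrence followed by a small tridiagonal eigendecomposition. The sketch below shows that baseline only, without the thick-restart or adaptive preconditioning contributed by the paper, and without reorthogonalization.

        import numpy as np
        from scipy.linalg import eigh_tridiagonal

        def lanczos_f_of_A_times_b(A_mv, b, m, f):
            """Approximate f(A) @ b for symmetric positive definite A given as a
            matrix-vector product A_mv; uses m Lanczos steps, no reorthogonalization."""
            n = b.shape[0]
            V = np.zeros((n, m))
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            beta0 = np.linalg.norm(b)
            V[:, 0] = b / beta0
            w = A_mv(V[:, 0])
            alpha[0] = V[:, 0] @ w
            w = w - alpha[0] * V[:, 0]
            for j in range(1, m):
                beta[j - 1] = np.linalg.norm(w)
                V[:, j] = w / beta[j - 1]
                w = A_mv(V[:, j]) - beta[j - 1] * V[:, j - 1]
                alpha[j] = V[:, j] @ w
                w = w - alpha[j] * V[:, j]
            theta, S = eigh_tridiagonal(alpha, beta)         # spectral decomposition of T_m
            return beta0 * (V @ (S @ (f(theta) * S[0, :])))  # beta0 * V_m f(T_m) e_1

        # e.g. the FPE action A^(-alpha/2) b with alpha = 1:  f = lambda t: t ** -0.5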

  20. Quantitative iodine-based material decomposition images with spectral CT imaging for differentiating prostatic carcinoma from benign prostatic hyperplasia.

    Science.gov (United States)

    Zhang, Xiao Fei; Lu, Qing; Wu, Lian Ming; Zou, Ai Hua; Hua, Xiao Lan; Xu, Jian Rong

    2013-08-01

    To investigate the value of iodine-based material decomposition images produced via spectral computed tomography (CT) in differentiating prostate cancer (PCa) from benign prostate hyperplasia (BPH). Fifty-six male patients underwent CT examination with spectral imaging during arterial phase (AP), venous phase (VP), and parenchymal phase (PP) of enhancement. Iodine concentrations of lesions were measured and normalized to that of the obturator internus muscle. Lesion CT values at 75 keV (corresponding to the energy of polychromatic images at 120 kVp) were measured and also normalized; their differences between AP and VP, VP and PP, and PP and AP were also obtained. The two-sample t-test was performed for comparisons. A receiver operating characteristic curve was generated to establish the threshold for normalized iodine concentration (NIC). Fifty-two peripheral lesions were found, which were confirmed by biopsy as 28 cases of PCa and 24 BPHs. The NICs of prostate cancers significantly differed from those of the BPHs: 2.38 ± 1.72 compared with 1.21 ± 0.72 in AP, respectively, and 2.67 ± 0.61 compared with 2.27 ± 0.77 in VP. Receiver operating characteristic analysis indicated that an NIC of 1.24 in the AP provided a sensitivity of 88% and a specificity of 71% for differentiating PCa from BPH. Spectral CT imaging enabled quantitative depiction of contrast medium uptake in prostatic lesions and improved sensitivity and specificity for differentiating PCa from BPH. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  1. An automatic detector of drowsiness based on spectral analysis and wavelet decomposition of EEG records.

    Science.gov (United States)

    Garces Correa, Agustina; Laciar Leber, Eric

    2010-01-01

    An algorithm to automatically detect drowsiness episodes has been developed. It uses only one EEG channel to differentiate the stages of alertness and drowsiness. In this work the feature vectors are built by combining Power Spectral Density (PSD) and Wavelet Transform (WT) descriptors. The features extracted from the PSD of the EEG signal are: the central frequency, the first quartile frequency, the maximum frequency, the total energy of the spectrum, and the power of the theta and alpha bands. In the wavelet domain, the number of zero crossings and the integrated magnitude of scales 3, 4 and 5 of the Daubechies-2 WT were computed. The classification of epochs is done with neural networks. The detection results obtained with this technique are 86.5% for drowsiness stages and 81.7% for alertness segments. These results show that the extracted features and the classifier are able to identify drowsiness EEG segments.
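
    A feature vector of that general shape can be assembled from Welch's PSD estimate and a Daubechies-2 wavelet decomposition. The sketch below is one plausible reading of the feature list (the sampling rate, scale selection and the exact "integrated" quantity are assumptions), and the neural-network classifier is omitted.

        import numpy as np
        from scipy.signal import welch
        import pywt

        def eeg_features(epoch, fs=128):
            """Spectral and wavelet features for one single-channel EEG epoch (sketch)."""
            f, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
            total = np.trapz(psd, f)
            cum = np.cumsum(psd) / np.sum(psd)
            feats = [
                np.trapz(f * psd, f) / total,                               # central frequency
                f[np.searchsorted(cum, 0.25)],                              # first-quartile frequency
                f[np.argmax(psd)],                                          # frequency of the spectral peak
                total,                                                      # total spectral energy
                np.trapz(psd[(f >= 4) & (f < 8)], f[(f >= 4) & (f < 8)]),   # theta-band power
                np.trapz(psd[(f >= 8) & (f < 13)], f[(f >= 8) & (f < 13)]), # alpha-band power
            ]
            coeffs = pywt.wavedec(epoch, "db2", level=5)                    # [cA5, cD5, cD4, cD3, cD2, cD1]
            for d in coeffs[1:4]:                                           # detail scales 5, 4, 3
                feats.append(float(np.sum(np.diff(np.sign(d)) != 0)))       # zero crossings
                feats.append(float(np.sum(np.abs(d))))                      # integrated magnitude
            return np.array(feats)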

  2. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in

  3. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Cai, C. [CEA, LIST, 91191 Gif-sur-Yvette, France and CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Rodet, T.; Mohammad-Djafari, A. [CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Legoupil, S. [CEA, LIST, 91191 Gif-sur-Yvette (France)

    2013-11-15

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  4. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    Science.gov (United States)

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also necessary to have the

  5. Error Analysis of the S-Step Lanczos Method in Finite Precision

    Science.gov (United States)

    2014-05-06

    ... using s > 5 due to the inherent instability of the monomial basis. This motivated research into the use of better-conditioned bases (e.g., Newton or Chebyshev polynomials) for computing the bases. [Figure caption: the plots show the number of eigenvalue estimates obtained with the monomial, Newton, and Chebyshev bases.] The results show that for s = 2, s-step Lanczos with the monomial, Newton, and Chebyshev bases all well-replicate the convergence behavior of classical Lanczos.

  6. Detection of cretaceous incised-valley shale for resource play, Miano gas field, SW Pakistan: Spectral decomposition using continuous wavelet transform

    Science.gov (United States)

    Naseer, Muhammad Tayyab; Asim, Shazia

    2017-10-01

    Unconventional resource shales can play a critical role in economic growth throughout the world. The hydrocarbon potential of faulted/fractured shales is the most significant challenge for unconventional prospect generation. The continuous wavelet transform (CWT) spectral decomposition (SD) technique is applied to shale gas prospects on high-resolution 3D seismic data from the Miano area in the Indus platform, SW Pakistan. Schmoker's technique reveals high-quality shales with total organic carbon (TOC) of 9.2% distributed in the western regions. The seismic amplitude, root-mean-square (RMS), and most positive curvature attributes show limited ability to resolve the prospective fractured shale components. The CWT is used to identify the hydrocarbon-bearing faulted/fractured compartments encased within the non-hydrocarbon bearing shale units. The hydrocarbon-bearing shales experience higher amplitudes (4694 dB and 3439 dB) than the non-reservoir shales (3290 dB). Cross plots between sweetness, 22 Hz spectral decomposition, and the seismic amplitudes are found to be more effective tools than conventional seismic attribute mapping for discriminating the seal and reservoir elements within the incised-valley petroleum system. Rock physics distinguishes the productive sediments from the non-productive sediments, suggesting the potential for future shale play exploration.
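
    A single-trace version of the CWT spectral decomposition used for the 22 Hz map can be prototyped with PyWavelets' Morlet CWT, converting the target frequencies to scales through the wavelet's centre frequency; the trace, sampling interval and frequency list below are placeholders, not the workflow of the study.

        import numpy as np
        import pywt

        def cwt_decomposition(trace, dt, freqs_hz):
            """Time-frequency amplitude of one seismic trace at the requested frequencies."""
            fc = pywt.central_frequency("morl")
            scales = fc / (np.asarray(freqs_hz, dtype=float) * dt)   # frequency -> scale
            coeffs, freqs_out = pywt.cwt(trace, scales, "morl", sampling_period=dt)
            return np.abs(coeffs), freqs_out    # e.g. the row for 22 Hz gives a single-frequency panel

        # amp, f = cwt_decomposition(trace, dt=0.002, freqs_hz=[10, 22, 40])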

  7. Realization of preconditioned Lanczos and conjugate gradient algorithms on optical linear algebra processors.

    Science.gov (United States)

    Ghosh, A

    1988-08-01

    Lanczos and conjugate gradient algorithms are important in computational linear algebra. In this paper, a parallel pipelined realization of these algorithms on a ring of optical linear algebra processors is described. The flow of data is designed to minimize the idle times of the optical multiprocessor and the redundancy of computations. The effects of optical round-off errors on the solutions obtained by the optical Lanczos and conjugate gradient algorithms are analyzed, and it is shown that optical preconditioning can improve the accuracy of these algorithms substantially. Algorithms for optical preconditioning and results of numerical experiments on solving linear systems of equations arising from partial differential equations are discussed. Since the Lanczos algorithm is used mostly with sparse matrices, a folded storage scheme to represent sparse matrices on spatial light modulators is also described.

  8. The intrinsic nature of things the life and science of Cornelius Lanczos

    CERN Document Server

    Gellai, Barbara

    2010-01-01

    This book recounts the extraordinary personal journey and scientific story of Hungarian-born mathematician and physicist Cornelius Lanczos. His life and his mathematical accomplishments are inextricably linked, reflecting the social upheavals and historical events that shaped his odyssey in 20th-century Hungary, Germany, the United States, and Ireland. In his life Lanczos demonstrated a remarkable ability to be at the right place, or work with the right person, at the right time. At the start of his scientific career in Germany he worked as Einstein's assistant for one year and stayed in touch

  9. Spectral-decomposition techniques for the identification of radon anomalies temporally associated with earthquakes occurring in the UK in 2002 and 2008.

    Science.gov (United States)

    Crockett, R. G. M.; Gillmore, G. K.

    2009-04-01

    During the second half of 2002, the University of Northampton Radon Research Group operated two continuous hourly-sampling radon detectors 2.25 km apart in Northampton, in the (English) East Midlands. This period included the Dudley earthquake (22/09/2002) which was widely noticed by members of the public in the Northampton area. Also, at various periods during 2008 the Group has operated another pair of continuous hourly-sampling radon detectors similar distances apart in Northampton. One such period included the Market Rasen earthquake (27/02/2008) which was also widely noticed by members of the public in the Northampton area. During each period of monitoring, two time-series of radon readings were obtained, one from each detector. These have been analysed for evidence of simultaneous similar anomalies: the premise being that big disturbances occurring at big distances (in relation to the detector separation) should produce simultaneous similar anomalies but that simultaneous anomalies occurring by chance will be dissimilar. As previously reported, cross-correlating the two 2002 time-series over periods of 1-30 days duration, rolled forwards through the time-series at one-hour intervals produced two periods of significant correlation, i.e. two periods of simultaneous similar behaviour in the radon concentrations. One of these periods corresponded in time to the Dudley earthquake, the other corresponded in time to a smaller earthquake which occurred in the English Channel (26/08/2002). We here report subsequent investigation of the 2002 time-series and the 2008 time-series using spectral-decomposition techniques. These techniques have revealed additional simultaneous similar behaviour in the two radon concentrations, not revealed by the rolling correlation on the raw data. These correspond in time to the Manchester earthquake swarm of October 2002 and the Market Rasen earthquake of February 2008. The spectral-decomposition techniques effectively ‘de-noise' the

  10. A novel Lanczos-type procedure for computing eigenelements of Maxwell and Helmholtz problems

    NARCIS (Netherlands)

    Carpentieri, B.; Jing, Y-F; Huang, T-Z

    2010-01-01

    We introduce a novel variant of the Lanczos method for computing a few eigenvalues of sparse and/or dense non-Hermitian systems arising from the discretization of Maxwell- or Helmholtz-type operators in electromagnetics. We develop a Krylov subspace projection technique built upon short-term vector

  11. Q-3D: Imaging Spectroscopy of Quasar Hosts with JWST Analyzed with a Powerful New PSF Decomposition and Spectral Analysis Package

    Science.gov (United States)

    Wylezalek, Dominika; Veilleux, Sylvain; Zakamska, Nadia; Barrera-Ballesteros, J.; Luetzgendorf, N.; Nesvadba, N.; Rupke, D.; Sun, A.

    2017-11-01

    In the last few years, optical and near-IR IFU observations from the ground have revolutionized extragalactic astronomy. The unprecedented infrared sensitivity, spatial resolution, and spectral coverage of the JWST IFUs will ensure high demand from the community. For a wide range of extragalactic phenomena (e.g. quasars, starbursts, supernovae, gamma ray bursts, tidal disruption events) and beyond (e.g. nebulae, debris disks around bright stars), PSF contamination will be an issue when studying the underlying extended emission. We propose to provide the community with a PSF decomposition and spectral analysis package for high dynamic range JWST IFU observations allowing the user to create science-ready maps of relevant spectral features. Luminous quasars, with their bright central source (quasar) and extended emission (host galaxy), are excellent test cases for this software. Quasars are also of high scientific interest in their own right as they are widely considered to be the main driver in regulating massive galaxy growth. JWST will revolutionize our understanding of black hole-galaxy co-evolution by allowing us to probe the stellar, gas, and dust components of nearby and distant galaxies, spatially and spectrally. We propose to use the IFU capabilities of NIRSpec and MIRI to study the impact of three carefully selected luminous quasars on their hosts. Our program will provide (1) a scientific dataset of broad interest that will serve as a pathfinder for JWST science investigations in IFU mode and (2) a powerful new data analysis tool that will enable frontier science for a wide swath of astrophysical research.

  12. [Value of quantitative iodine-based material decomposition images with gemstone spectral CT imaging in the follow-up of patients with hepatocellular carcinoma after TACE treatment].

    Science.gov (United States)

    Xing, Gusheng; Wang, Shuang; Li, Chenrui; Zhao, Xinming; Zhou, Chunwu

    2015-03-01

    To investigate the value of quantitative iodine-based material decomposition images with gemstone spectral CT imaging in the follow-up of patients with hepatocellular carcinoma (HCC) after transcatheter arterial chemoembolization (TACE). Thirty-two consecutive HCC patients with previous TACE treatment were included in this study. For the follow-up, arterial phase (AP) and venous phase (VP) dual-phase CT scans were performed with a single-source dual-energy CT scanner (Discovery CT 750HD, GE Healthcare). Iodine concentrations were derived from iodine-based material-decomposition images in the liver parenchyma, tumors and coagulation necrosis (CN) areas. The iodine concentration difference (ICD) between the arterial phase (AP) and venous phase (VP) was quantitatively evaluated in different tissues. The lesion-to-normal parenchyma iodine concentration ratio (LNR) was calculated. ROC analysis was performed for the qualitative evaluation, and the area under the ROC curve (Az) was calculated to represent the diagnostic ability of ICD and LNR. In all the 32 HCC patients, the regions of interest (ROIs) for iodine concentrations included liver parenchyma (n=42), tumors (n=28) and coagulation necrosis (n=24). During the AP the iodine concentration of CNs (median value 0.088 µg/mm³) appeared significantly higher than that of the tumors (0.064 µg/mm³, P=0.022) and liver parenchyma (0.048 µg/mm³, P=0.005), but it showed no significant difference between liver parenchyma and tumors (P=0.454). During the VP the iodine concentration in hepatic parenchyma (median value 0.181 µg/mm³) was significantly higher than that in CNs (0.140 µg/mm³, P=0.042). There was no significant difference between liver parenchyma and tumors, or between CNs and tumors (both P>0.05). The median value of ICD in CNs was 0.006 µg/mm³, significantly lower than that of the HCC (0.201 µg/mm³, P ...). Iodine-based material decomposition images with gemstone spectral CT imaging can improve the diagnostic efficacy of CT imaging

  13. Using spectral decomposition of the signals from laurdan-derived probes to evaluate the physical state of membranes in live cells [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Serge Mazeres

    2017-06-01

    Background: We wanted to investigate the physical state of biological membranes in live cells under the most physiological conditions possible. Methods: For this we have been using laurdan, C-laurdan or M-laurdan to label a variety of cells, and a biphoton microscope equipped with both a thermostatic chamber and a spectral analyser. We also used a flow cytometer to quantify the 450/530 nm ratio of fluorescence emissions by whole cells. Results: We find that using all the information provided by spectral analysis to perform spectral decomposition dramatically improves the imaging resolution compared to using just two channels, as commonly used to calculate generalized polarisation (GP). Coupled to a new plugin called Fraction Mapper, developed to represent the fraction of light intensity in the first component in a stack of two images, we obtain very clear pictures of both the intra-cellular distribution of the probes, and the polarity of the cellular environments where the lipid probes are localised. Our results lead us to conclude that, in live cells kept at 37°C, laurdan, and M-laurdan to a lesser extent, have a strong tendency to accumulate in the very apolar environment of intra-cytoplasmic lipid droplets, but label the plasma membrane (PM) of mammalian cells ineffectively. On the other hand, C-laurdan labels the PM very quickly and effectively, and does not detectably accumulate in lipid droplets. Conclusions: From using these probes on a variety of mammalian cell lines, as well as on cells from Drosophila and Dictyostelium discoideum, we conclude that, apart from the lipid droplets, which are very apolar, probes in intracellular membranes reveal a relatively polar and hydrated environment, suggesting a very marked dominance of liquid disordered states. PMs, on the other hand, are much more apolar, suggesting a strong dominance of liquid ordered state, which fits with their high sterol contents.

  14. Using spectral decomposition of the signals from laurdan-derived probes to evaluate the physical state of membranes in live cells [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Serge Mazeres

    2017-08-01

    Background: We wanted to investigate the physical state of biological membranes in live cells under the most physiological conditions possible. Methods: For this we have been using laurdan, C-laurdan or M-laurdan to label a variety of cells, and a biphoton microscope equipped with both a thermostatic chamber and a spectral analyser. We also used a flow cytometer to quantify the 450/530 nm ratio of fluorescence emissions by whole cells. Results: We find that using all the information provided by spectral analysis to perform spectral decomposition dramatically improves the imaging resolution compared to using just two channels, as commonly used to calculate generalized polarisation (GP). Coupled to a new plugin called Fraction Mapper, developed to represent the fraction of light intensity in the first component in a stack of two images, we obtain very clear pictures of both the intra-cellular distribution of the probes, and the polarity of the cellular environments where the lipid probes are localised. Our results lead us to conclude that, in live cells kept at 37°C, laurdan, and M-laurdan to a lesser extent, have a strong tendency to accumulate in the very apolar environment of intra-cytoplasmic lipid droplets, but label the plasma membrane (PM) of mammalian cells ineffectively. On the other hand, C-laurdan labels the PM very quickly and effectively, and does not detectably accumulate in lipid droplets. Conclusions: From using these probes on a variety of mammalian cell lines, as well as on cells from Drosophila and Dictyostelium discoideum, we conclude that, apart from the lipid droplets, which are very apolar, probes in intracellular membranes reveal a relatively polar and hydrated environment, suggesting a very marked dominance of liquid disordered states. PMs, on the other hand, are much more apolar, suggesting a strong dominance of liquid ordered state, which fits with their high sterol contents.

  15. High-Precision Spectral Decomposition Method Based on VMD/CWT/FWEO for Hydrocarbon Detection in Tight Sandstone Gas Reservoirs

    Directory of Open Access Journals (Sweden)

    Hui Chen

    2017-07-01

    Seismic time-frequency analysis methods can be used for hydrocarbon detection because of the abnormal attenuation of energy and frequency when seismic waves travel across reservoirs. A high-resolution method based on variational mode decomposition (VMD), continuous-wavelet transform (CWT), and the frequency-weighted energy operator (FWEO) is proposed for hydrocarbon detection in tight sandstone gas reservoirs. VMD can decompose seismic signals into a set of intrinsic mode functions (IMFs) in the frequency domain. In order to avoid meaningful frequency loss, the CWT method is used to obtain the time-frequency spectra of the selected IMFs. The energy separation algorithm based on FWEO can improve the resolution of time-frequency spectra and highlight abnormal energy, which is applied to track the instantaneous energy in the time-frequency spectra. The difference between the high-frequency section and low-frequency section acquired by applying the proposed method is utilized to detect hydrocarbons. Applications using the model and field data further demonstrate that the proposed method can effectively detect hydrocarbons in tight sandstone reservoirs, with good anti-noise performance. The newly-proposed method can be used as an analysis tool to detect hydrocarbons.

  16. Extending the eigCG algorithm to nonsymmetric Lanczos for linear systems with multiple right-hand sides

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas

    2014-08-01

    The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.

  17. Avoiding Communication in the Lanczos Bidiagonalization Routine and Associated Least Squares QR Solver

    Science.gov (United States)

    2015-04-12

    Erin Carson (Electrical Engineering and ...). Abstract: Communication – the movement of data between levels of memory hierarchy or between processors

  18. Some uses of the symmetric Lanczos algorithm - and why it works!

    Energy Technology Data Exchange (ETDEWEB)

    Druskin, V.L. [Schlumberger-Doll Research, Ridgefield, CT (United States); Greenbaum, A. [Courant Institute of Mathematical Sciences, New York, NY (United States); Knizhnerman, L.A. [Central Geophysical Expedition, Moscow (Russian Federation)

    1996-12-31

    The Lanczos algorithm uses a three-term recurrence to construct an orthonormal basis for the Krylov space corresponding to a symmetric matrix A and a starting vector q_1. The vectors and recurrence coefficients produced by this algorithm can be used for a number of purposes, including solving linear systems Au = φ and computing the matrix exponential e^{-tA}φ. Although the vectors produced in finite precision arithmetic are not orthogonal, we show why they can still be used effectively for these purposes. The reason is that the 2-norm of the residual is essentially determined by the tridiagonal matrix and the next recurrence coefficient produced by the finite precision Lanczos computation. It follows that if the same tridiagonal matrix and recurrence coefficient are produced by the exact Lanczos algorithm applied to some other problem, then exact arithmetic bounds on the residual for that problem will hold for the finite precision computation. In order to establish exact arithmetic bounds for the different problem, it is necessary to have some information about the eigenvalues of the new coefficient matrix. Here we make use of information already established in the literature, and we also prove a new result for indefinite matrices.
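
    A minimal sketch of the recurrence and of its use for the matrix exponential mentioned above is given below; the dense test matrix, subspace size and time parameter are illustrative, and no reorthogonalization is performed, so the finite-precision loss of orthogonality discussed in the abstract will eventually appear.

```python
import numpy as np
from scipy.linalg import expm

def lanczos(A, q1, m):
    """m steps of the symmetric Lanczos three-term recurrence."""
    n = len(q1)
    Q = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
    q, q_prev = q1 / np.linalg.norm(q1), np.zeros(n)
    for j in range(m):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

rng = np.random.default_rng(1)
n, m, t = 300, 30, 0.5
A = rng.standard_normal((n, n)); A = A @ A.T / n      # symmetric positive definite
phi = rng.standard_normal(n)

Q, T = lanczos(A, phi, m)
e1 = np.zeros(m); e1[0] = 1.0
approx = np.linalg.norm(phi) * Q @ (expm(-t * T) @ e1)  # ~ exp(-t A) phi
exact = expm(-t * A) @ phi
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```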

  19. Advanced discriminating criteria for natural organic substances of cultural heritage interest: spectral decomposition and multivariate analyses of FT-Raman and FT-IR signatures.

    Science.gov (United States)

    Daher, Céline; Bellot-Gurlet, Ludovic; Le Hô, Anne-Solenn; Paris, Céline; Regert, Martine

    2013-10-15

    Natural organic substances are involved in many aspects of the cultural heritage field. Their presence in different forms (raw, heated, mixed), with various conservation states, constitutes a real challenge regarding their recognition and discrimination. Their characterization usually involves the use of separative techniques which imply destructive sampling and specific analytical preparations. Here we propose a non-destructive approach using FT-Raman and infrared spectroscopies for the identification and differentiation of natural organic substances. Because of their related functional groups, they usually present similar vibrational signatures. Nevertheless, the use of appropriate signal treatment and statistical analysis was successfully carried out to overcome this limitation, thereby providing a new, objective discriminating methodology to identify these substances. Spectral decomposition calculations were performed on the CH stretching region of a large set of reference materials such as resins, oils, animal glues, and gums. Multivariate analyses (Principal Component Analysis) were then performed on the fitting parameters, and new discriminating criteria were established. A set of previously characterized archeological resins, with different surface aspects or alteration states, was analyzed using the same methodology. These test samples validate the efficiency of our discriminating criteria established on the reference corpus. Moreover, we prove that some alteration or ageing of the organic materials does not hinder their recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
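
    The workflow can be illustrated with the sketch below, which fits a few Gaussian bands to a C-H stretching region and then runs a principal component analysis on the fitted parameters; the band positions, widths and synthetic spectra are assumptions for illustration only, not the reference corpus of the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.decomposition import PCA

def three_gaussians(x, *p):
    # p = (a1, c1, w1, a2, c2, w2, a3, c3, w3)
    y = np.zeros_like(x)
    for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-((x - c) ** 2) / (2 * w ** 2))
    return y

wavenumber = np.linspace(2800, 3000, 400)            # cm^-1, C-H stretching region
rng = np.random.default_rng(2)

features = []
for _ in range(20):                                  # 20 synthetic reference spectra
    true = [1.0 + 0.2 * rng.random(), 2850, 12,
            0.8 + 0.2 * rng.random(), 2920, 14,
            0.5 + 0.2 * rng.random(), 2960, 10]
    spectrum = three_gaussians(wavenumber, *true) + 0.01 * rng.standard_normal(400)
    p0 = [1, 2850, 10, 1, 2920, 10, 1, 2960, 10]     # initial guesses for the bands
    popt, _ = curve_fit(three_gaussians, wavenumber, spectrum, p0=p0)
    features.append(popt)

scores = PCA(n_components=2).fit_transform(np.array(features))
print(scores[:5])                                    # coordinates used for discrimination
```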

  20. Development of a block Lanczos algorithm for free vibration analysis of spinning structures

    Science.gov (United States)

    Gupta, K. K.; Lawson, C. L.

    1988-01-01

    This paper is concerned with the development of an efficient eigenproblem solution algorithm and an associated computer program for the economical solution of the free vibration problem of complex practical spinning structural systems. A detailed description is given of a newly developed block Lanczos procedure that employs only real numbers in all relevant computations and also fully exploits the sparsity of the associated matrices. The procedure is capable of computing multiple roots and proves to be more efficient than other existing, similar techniques.

  1. Fast 3D Focusing Inversion of Gravity Data Using Reweighted Regularized Lanczos Bidiagonalization Method

    Science.gov (United States)

    Rezaie, Mohammad; Moradzadeh, Ali; Kalate, Ali Nejati; Aghajani, Hamid

    2017-01-01

    Inversion of gravity data is one of the important steps in the interpretation of practical data. One of the most interesting geological frameworks for gravity data inversion is the detection of sharp boundaries between an orebody and the host rocks. Focusing inversion is able to reconstruct a sharp image of the geological target, and the technique can be efficiently applied for the quantitative interpretation of gravity data. In this study, a new reweighted regularized method for 3D focusing inversion, based on the Lanczos bidiagonalization method, is developed. The inversion results for synthetic data show that the new method is faster than the common reweighted regularized conjugate gradient method at producing an acceptable solution to the focusing inverse problem. The newly developed inversion scheme is also applied to the inversion of gravity data collected over the San Nicolas Cu-Zn orebody in Zacatecas State, Mexico. The inversion results show a remarkable correlation with the true structure of the orebody as determined from drilling data.
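
    The building block of such schemes, Golub-Kahan (Lanczos) bidiagonalization followed by a regularized solve of the small projected problem, can be sketched as below; the reweighting loop that produces the focused, sharp-boundary image is omitted, and the kernel, model and regularization parameter are illustrative assumptions.

```python
import numpy as np

def golub_kahan_tikhonov(G, d, k, lam):
    """k steps of Golub-Kahan bidiagonalization of G started from d, followed by
    a Tikhonov-regularized solve of the small projected least-squares problem."""
    m, n = G.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta0 = np.linalg.norm(d)
    U[:, 0] = d / beta0
    for j in range(k):
        v = G.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0)
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha; B[j, j] = alpha
        u = G @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta; B[j + 1, j] = beta
    # Projected Tikhonov problem: min ||B y - beta0 e1||^2 + lam ||y||^2
    rhs = np.zeros(k + 1); rhs[0] = beta0
    y = np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ rhs)
    return V @ y                                     # model estimate in full space

rng = np.random.default_rng(3)
G = rng.standard_normal((120, 400))                  # stand-in for the gravity kernel
m_true = np.zeros(400); m_true[180:220] = 1.0        # compact "orebody"
d = G @ m_true + 0.01 * rng.standard_normal(120)
m_est = golub_kahan_tikhonov(G, d, k=25, lam=1.0)
```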

  2. Lanczos-driven coupled-cluster damped linear response theory for molecules in polarizable environments

    DEFF Research Database (Denmark)

    List, Nanna Holmgaard; Coriani, Sonia; Kongsted, Jacob

    2014-01-01

    We present an extension of a previously reported implementation of a Lanczos-driven coupled-cluster (CC) damped linear response approach to molecules in condensed phases, where the effects of a surrounding environment are incorporated by means of the polarizable embedding formalism. We... are specifically motivated by a twofold aim: (i) computation of core excitations in realistic surroundings and (ii) examination of the effect of the differential response of the environment upon excitation solely related to the CC multipliers (herein denoted the J matrix) in computations of excitation energies... and transition moments of polarizable-embedded molecules. Numerical calculations demonstrate that the differential polarization of the environment due to the first-order CC multipliers provides only minor contributions to the solvatochromic shift for all transitions considered. We thus complement previous works...

  3. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
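
    The two decompositions being compared can be pictured with the following sketch: an eigenvalue-eigenvector decomposition of the covariance matrix (PCA-like scores) versus a decomposition of the pair-wise dissimilarity matrix, handled here in the classical multidimensional-scaling fashion; the two synthetic groups merely stand in for the cumin and non-cumin preparations.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (30, 50)),        # "cumin" group
               rng.normal(1.5, 1.0, (30, 50))])       # "non-cumin" group

# 1) Covariance-based EED (conventional approach).
Xc = X - X.mean(axis=0)
evals_c, evecs_c = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores_cov = Xc @ evecs_c[:, ::-1][:, :2]              # top-2 principal components

# 2) Dissimilarity-based EED (classical MDS on squared Euclidean distances).
D2 = squareform(pdist(X, metric="sqeuclidean"))
n = len(D2)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                                   # double-centered Gram matrix
evals_d, evecs_d = np.linalg.eigh(B)
scores_dis = evecs_d[:, ::-1][:, :2] * np.sqrt(np.maximum(evals_d[::-1][:2], 0))

# scores_dis reflects between-sample dissimilarity directly and tends to separate
# the two groups more clearly, which is the point made in the abstract.
```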

  4. Material decomposition through weighted image subtraction in dual-energy spectral mammography with an energy-resolved photon-counting detector using Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Ji Soo; Kang, Soon Cheol; Lee, Seung Wan [Konyang University, Daejeon (Korea, Republic of)

    2017-09-15

    Mammography is commonly used for screening early breast cancer. However, mammographic images, which depend on the physical properties of breast components, provide only limited information about whether a lesion is malignant or benign. Although a dual-energy subtraction technique can decompose a certain material from a mixture, it increases the radiation dose and degrades the accuracy of material decomposition. In this study, we simulated a breast phantom using attenuation characteristics, and we proposed a technique to enable accurate material decomposition by applying weighting factors for dual-energy mammography based on a photon-counting detector, using a Monte Carlo simulation tool. We also evaluated the contrast and noise of the simulated breast images to validate the proposed technique. As a result, the contrast for a malignant tumor in the dual-energy weighted subtraction technique was 0.98 and 1.06 times that obtained with the general mammography and dual-energy subtraction techniques, respectively. However, the contrast between malignant and benign tumors increased dramatically, by a factor of 13.54, owing to the low contrast of the benign tumor. Therefore, the proposed technique can increase the material decomposition accuracy for malignant tumors and improve the diagnostic accuracy of mammography.
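
    The weighted subtraction itself can be sketched in a few lines: the weighting factor is chosen so that the background material cancels when the two log-projections are subtracted, leaving a signal proportional to the target material. The attenuation coefficients, thicknesses and energy bins below are illustrative assumptions, not values from the study.

```python
import numpy as np

mu = {  # linear attenuation coefficients (1/cm) at the low/high energy bins (assumed)
    "background": (0.80, 0.45),
    "target":     (1.20, 0.55),
}

# Simple 1D phantom: 5 cm of background with a 1 cm target insert in the middle.
nx = 256
thickness_bg = np.full(nx, 5.0)
thickness_tg = np.zeros(nx); thickness_tg[100:140] = 1.0

def log_projection(energy_index):
    return (mu["background"][energy_index] * thickness_bg
            + mu["target"][energy_index] * thickness_tg)

low, high = log_projection(0), log_projection(1)

# Weight chosen to cancel the background material in the subtraction.
w = mu["background"][1] / mu["background"][0]
decomposed = w * low - high      # background cancels; remaining signal is
                                 # proportional to the target thickness
```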

  5. Platform-Independent Genome-Wide Pattern of DNA Copy-Number Alterations Predicting Astrocytoma Survival and Response to Treatment Revealed by the GSVD Formulated as a Comparative Spectral Decomposition.

    Science.gov (United States)

    Aiello, Katherine A; Alter, Orly

    2016-01-01

    We use the generalized singular value decomposition (GSVD), formulated as a comparative spectral decomposition, to model patient-matched grades III and II, i.e., lower-grade astrocytoma (LGA) brain tumor and normal DNA copy-number profiles. A genome-wide tumor-exclusive pattern of DNA copy-number alterations (CNAs) is revealed, encompassed in that previously uncovered in glioblastoma (GBM), i.e., grade IV astrocytoma, where GBM-specific CNAs encode for enhanced opportunities for transformation and proliferation via growth and developmental signaling pathways in GBM relative to LGA. The GSVD separates the LGA pattern from other sources of biological and experimental variation, common to both, or exclusive to one of the tumor and normal datasets. We find, first, and computationally validate, that the LGA pattern is correlated with a patient's survival and response to treatment. Second, the GBM pattern identifies among the LGA patients a subtype, statistically indistinguishable from that among the GBM patients, where the CNA genotype is correlated with an approximately one-year survival phenotype. Third, cross-platform classification of the Affymetrix-measured LGA and GBM profiles by using the Agilent-derived GBM pattern shows that the GBM pattern is a platform-independent predictor of astrocytoma outcome. Statistically, the pattern is a better predictor (corresponding to greater median survival time difference, proportional hazard ratio, and concordance index) than the patient's age and the tumor's grade, which are the best indicators of astrocytoma currently in clinical use, and laboratory tests. The pattern is also statistically independent of these indicators, and, combined with either one, is an even better predictor of astrocytoma outcome. Recurring DNA CNAs have been observed in astrocytoma tumors' genomes for decades, however, copy-number subtypes that are predictive of patients' outcomes were not identified before. This is despite the growing number of

  6. Graph Decompositions

    DEFF Research Database (Denmark)

    Merker, Martin

    The topic of this PhD thesis is graph decompositions. While there exist various kinds of decompositions, this thesis focuses on three problems concerning edgedecompositions. Given a family of graphs H we ask the following question: When can the edge-set of a graph be partitioned so that each part...... induces a subgraph isomorphic to a member of H? Such a decomposition is called an H-decomposition. Apart from the existence of an H-decomposition, we are also interested in the number of parts needed in an H-decomposition. Firstly, we show that for every tree T there exists a constant k(T) such that every...... k(T)-edge-connected graph whose size is divisible by the size of T admits a T-decomposition. This proves a conjecture by Barát and Thomassen from 2006. Moreover, we introduce a new arboricity notion where we restrict the diameter of the trees in a decomposition into forests. We conjecture...

  7. A pseudo-spectral method for the simulation of poro-elastic seismic wave propagation in 2D polar coordinates using domain decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)

    2013-02-15

    We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction and a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method has been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.

  8. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor to sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents as they function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  9. Jahn-Teller Spectral Fingerprint in Molecular Photoemission: C60

    OpenAIRE

    Manini, Nicola; Gattari, Paolo; Tosatti, Erio

    2003-01-01

    The h_u hole spectral intensity for C60 -> C60+ molecular photoemission is calculated at finite temperature by a parameter-free Lanczos diagonalization of the electron-vibration Hamiltonian, including the full 8 H_g, 6 G_g, and 2 A_g mode couplings. The computed spectrum at 800 K is in striking agreement with gas-phase data. The energy separation of the first main shoulder from the main photoemission peak, 230 meV in C60, is shown to measure directly and rather generally the strength of the f...

  10. Jahn-Teller spectral fingerprint in molecular photoemission: c60.

    Science.gov (United States)

    Manini, Nicola; Gattari, Paolo; Tosatti, Erio

    2003-11-07

    The h(u) hole spectral intensity for C60-->C+60 molecular photoemission is calculated at finite temperature by a parameter-free Lanczos diagonalization of the electron-vibration Hamiltonian, including the full 8 H(g), 6 G(g), and 2 A(g) mode couplings. The computed spectrum at 800 K is in striking agreement with gas-phase data. The energy separation of the first main shoulder from the main photoemission peak, 230 meV in C60, is shown to measure directly and rather generally the strength of the final-state Jahn-Teller coupling.

  11. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....

  12. Composition decomposition

    DEFF Research Database (Denmark)

    Dyson, Mark

    2003-01-01

    ... Not only have design tools changed character, but also the processes associated with them. Today, the composition of problems and their decomposition into parcels of information calls for a new paradigm. This paradigm builds on the networking of agents and specialisations, and the paths of communication...

  13. Mapping litter decomposition by remote-detected indicators

    Directory of Open Access Journals (Sweden)

    L. Rossi

    2006-06-01

    Full Text Available Leaf litter decomposition is a key process for the functioning of natural ecosystems. An important limiting factor for this process is detritus availability, which we have estimated by remote-sensed indices of canopy green biomass (NDVI). Here, we describe the use of multivariate geostatistical analysis to couple in situ measures with hyper-spectral and multi-spectral remote-sensed data for producing maps of litter decomposition. A direct relationship between the decomposition rates in four different CORINE habitats and NDVI, calculated at different scales from Landsat ETM+ multi-spectral data and MIVIS hyper-spectral data, was found. Variogram analysis was used to evaluate the spatial properties of each single variable and their common interaction. Co-variogram and co-kriging analysis of the two variables turned out to be an effective approach for decomposition mapping from remote-sensed, spatially explicit data.
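
    As a hedged illustration of the remote-sensed side of this coupling, NDVI can be computed from red and near-infrared reflectance and calibrated against in situ decomposition rates with a simple linear fit (the study itself used variograms and co-kriging; all numbers below are synthetic).

```python
import numpy as np

rng = np.random.default_rng(5)
red = rng.uniform(0.03, 0.10, 100)                   # red-band reflectance
nir = rng.uniform(0.25, 0.55, 100)                   # near-infrared reflectance
ndvi = (nir - red) / (nir + red)

# Synthetic in situ litter decomposition rates k (1/day) at the same 100 plots.
k = 0.004 + 0.01 * ndvi + 0.001 * rng.standard_normal(100)

slope, intercept = np.polyfit(ndvi, k, 1)            # simple calibration
k_map = intercept + slope * ndvi                     # mapped decomposition values
```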

  14. Estimation and calibration of observation impact signals using the Lanczos method in NOAA/NCEP data assimilation system

    Directory of Open Access Journals (Sweden)

    M. Wei

    2012-09-01

    Full Text Available Despite the tremendous progress that has been made in data assimilation (DA) methodology, observing systems that reduce observation errors, and model improvements that reduce background errors, the analyses produced by the best available DA systems are still different from the truth. Analysis error and error covariance are important since they describe the accuracy of the analyses, and are directly related to the future forecast errors, i.e., the forecast quality. In addition, analysis error covariance is critically important in building an efficient ensemble forecast system (EFS).

    Estimating analysis error covariance in an ensemble-based Kalman filter DA is straightforward, but it is challenging in variational DA systems, which have been in operation at most NWP (Numerical Weather Prediction) centers. In this study, we use the Lanczos method in the NCEP (National Centers for Environmental Prediction) Gridpoint Statistical Interpolation (GSI) DA system to look into other important aspects and properties of this method that were not exploited before. We apply this method to estimate the observation impact signals (OIS), which are directly related to the analysis error variances. It is found that the smallest eigenvalue of the transformed Hessian matrix converges to one as the number of minimization iterations increases. When more observations are assimilated, the convergence becomes slower and more eigenvectors are needed to retrieve the observation impacts. It is also found that the OIS over data-rich regions can be represented by the eigenvectors with dominant eigenvalues.

    Since only a limited number of eigenvectors can be computed due to computational expense, the OIS is severely underestimated, and the analysis error variance is consequently overestimated. It is found that the mean OIS values for temperature and wind components at typical model levels are increased by about 1.5 times when the number of eigenvectors is doubled

  15. Comparison between the methods of Glauber states and Lanczos applied to the Jahn-Teller effect in ZnSe:Fe²⁺

    Energy Technology Data Exchange (ETDEWEB)

    Rivera-Iratchet, J.; Orue, M.A. de [Departamento de Fisica, Universidad de Concepcion, Concepcion, Chile (Chile)]; Vogel, E.E. [Departamento de Fisica, Universidad de la Frontera, Temuco, Chile (Chile)]; Bevilacqua, G.; Martinelli, L. [Istituto Nazionale di Fisica della Materia, Dipartimento di Fisica dell'Universita, Piazza Torricelli 2, 56126 Pisa, Italy (Italy)]

    1998-12-01

    Vibronic levels in the strong-coupling limit are not easy to obtain since the vibrational and electronic components are extremely mixed. Usual methods based on extrapolations from the adiabatic limit fail. In the present paper we compare the Lanczos method and the Glauber states method applied to a case where the strong-coupling limit is approached. The application considers parameters suited to the system ZnSe:Fe²⁺ for a more realistic interpretation. Stability of the solutions is found and discussed. Advantages and disadvantages of both methods are also discussed. (Author)

  16. Waring decompositions of monomials

    National Research Council Canada - National Science Library

    Buczyńska, Weronika; Buczyński, Jarosław; Teitler, Zach

    2013-01-01

    .... We prove that any Waring decomposition of a monomial is obtained from a complete intersection ideal, determine the dimension of the set of Waring decompositions, and give the conditions under which...

  17. AUTONOMOUS GAUSSIAN DECOMPOSITION

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.

  18. Fast Estimation of Approximate Matrix Ranks Using Spectral Densities.

    Science.gov (United States)

    Ubaru, Shashanka; Saad, Yousef; Seghouane, Abd-Krim

    2017-05-01

    Many machine learning and data-related applications require the knowledge of approximate ranks of large data matrices at hand. This letter presents two computationally inexpensive techniques to estimate the approximate ranks of such matrices. These techniques exploit approximate spectral densities, popular in physics, which are probability density distributions that measure the likelihood of finding eigenvalues of the matrix at a given point on the real line. Integrating the spectral density over an interval gives the eigenvalue count of the matrix in that interval. Therefore, the rank can be approximated by integrating the spectral density over a carefully selected interval. Two different approaches are discussed to estimate the approximate rank, one based on Chebyshev polynomials and the other based on the Lanczos algorithm. In order to obtain the appropriate interval, it is necessary to locate a gap between the eigenvalues that correspond to noise and the relevant eigenvalues that contribute to the matrix rank. A method for locating this gap and selecting the interval of integration is proposed based on the plot of the spectral density. Numerical experiments illustrate the performance of these techniques on matrices from typical applications.
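
    A sketch of the Chebyshev-polynomial variant follows: the eigenvalue count above a chosen threshold equals the trace of a step function of the matrix, which is expanded in Chebyshev polynomials and traced stochastically with Rademacher probe vectors. The test matrix, spectral bounds, threshold and polynomial degree are illustrative assumptions; in practice the threshold would be read off the estimated spectral density, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(6)
n, true_rank = 500, 40
Q = np.linalg.qr(rng.standard_normal((n, true_rank)))[0]
A = Q @ np.diag(rng.uniform(5.0, 10.0, true_rank)) @ Q.T      # rank-40 "signal"
N = rng.standard_normal((n, n))
A += 1e-3 * (N + N.T) / 2                                      # small "noise" eigenvalues

lam_min, lam_max = -0.1, 10.5   # rough spectral bounds (could come from a few Lanczos steps)
threshold = 1.0                 # gap read off the estimated spectral density
deg, n_vec = 80, 10

# Chebyshev coefficients of the step function 1_[threshold, lam_max] on the scaled axis.
center, half = (lam_max + lam_min) / 2, (lam_max - lam_min) / 2
t0 = (threshold - center) / half
k = np.arange(1, deg + 1)
coef = np.empty(deg + 1)
coef[0] = np.arccos(t0) / np.pi
coef[1:] = 2.0 * np.sin(k * np.arccos(t0)) / (np.pi * k)

count = 0.0
for _ in range(n_vec):
    v = rng.choice([-1.0, 1.0], size=n)              # Rademacher probe vector
    w_prev, w = v, (A @ v - center * v) / half       # T_0 v and T_1 v of the scaled matrix
    acc = coef[0] * (v @ v) + coef[1] * (v @ w)
    for j in range(2, deg + 1):
        w_next = 2.0 * (A @ w - center * w) / half - w_prev
        acc += coef[j] * (v @ w_next)
        w_prev, w = w, w_next
    count += acc / n_vec                             # Hutchinson trace estimate
print(count)                                         # approximately true_rank
```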

  19. Tensor Decompositions for Learning Latent Variable Models

    Science.gov (United States)

    2012-12-08

    Tensor Decompositions for Learning Latent Variable Models. Anima Anandkumar, Rong Ge, Daniel Hsu, Sham M. ... Only report front matter and reference-list fragments survive in this record (an acknowledgment of ARO Award W911NF-12-1-0404 and references to spectral algorithms for latent variable models and latent tree graphical models).

  20. Hybrid spectral CT reconstruction

    Science.gov (United States)

    Clark, Darin P.

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral

  1. Hybrid spectral CT reconstruction.

    Directory of Open Access Journals (Sweden)

    Darin P Clark

    Full Text Available Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with

  2. Hybrid spectral CT reconstruction.

    Science.gov (United States)

    Clark, Darin P; Badea, Cristian T

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral

  3. The bipyridine adducts of N-phenyldithiocarbamato complexes of Zn(II) and Cd(II); synthesis, spectral, thermal decomposition studies and use as precursors for ZnS and CdS nanoparticles.

    Science.gov (United States)

    Onwudiwe, Damian C; Strydom, Christien A

    2015-01-25

    Bipyridine adducts of N-phenyldithiocarbamato complexes, [ML(1)2L(2)] (M=Cd(II), Zn(II); L(1)=N-phenyldithiocarbamate, L(2)=2,2'-bipyridine), have been synthesized and characterised. The decomposition of these complexes to metal sulphides has been investigated by thermogravimetric analysis (TGA). The complexes were used as single-source precursors to synthesize MS (M=Zn, Cd) nanoparticles (NPs) passivated by hexadecylamine (HDA). The growth of the nanoparticles was carried out at two different temperatures, 180 and 220 °C, and the optical and structural properties of the nanoparticles were studied using UV-Vis spectroscopy, photoluminescence spectroscopy (PL), transmission electron microscopy (TEM) and powder X-ray diffraction (p-XRD). Nanoparticles with average diameters of 2.90 and 3.54 nm for ZnS, and 8.96 and 9.76 nm for CdS, grown at 180 and 220 °C respectively, were obtained. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Thermal decomposition of hemicelluloses

    OpenAIRE

    Werner, Kajsa; Pommer, Linda; Broström, Markus

    2014-01-01

    Decomposition modeling of biomass often uses commercially available xylan as a model compound representing hemicelluloses, not taking into account the heterogeneous nature of that group of carbohydrates. In this study, the thermal decomposition behavior of seven different hemicelluloses (beta-glucan, arabinogalactan, arabinoxylan, galactomannan, glucomannan, xyloglucan, and xylan) was investigated in an inert atmosphere using (i) thermogravimetric analysis coupled to Fourier transform infrared spec...

  5. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  6. Azimuthal decomposition of optical modes

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-07-01

    Full Text Available This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes requires two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...

  7. Graph Decompositions and Factorizing Permutations

    Directory of Open Access Journals (Sweden)

    Christian Capelle

    2002-12-01

    Full Text Available A factorizing permutation of a given graph is simply a permutation of the vertices in which all decomposition sets appear to be factors. Such a concept seems to play a central role in recent papers dealing with graph decomposition. It is applied here for modular decomposition and we propose a linear algorithm that computes the whole decomposition tree when a factorizing permutation is provided. This algorithm can be seen as a common generalization of Ma and Hsu for modular decomposition of chordal graphs and Habib, Huchard and Spinrad for inheritance graphs decomposition. It also suggests many new decomposition algorithms for various notions of graph decompositions.

  8. The spectral shift function and spectral flow

    OpenAIRE

    Azamov, N. A.; Carey, A.L.; Sukochev, F. A.

    2007-01-01

    This paper extends Krein's spectral shift function theory to the setting of semifinite spectral triples. We define the spectral shift function under these hypotheses via Birman-Solomyak spectral averaging formula and show that it computes spectral flow.

  9. Decompositions of manifolds

    CERN Document Server

    Daverman, Robert J

    2007-01-01

    Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to eve

  10. Polyethylene hydroperoxide decomposition products

    National Research Council Canada - National Science Library

    Lacoste, J; Carlsson, David James (Dave); Falicki, S; Wiles, D. M

    1991-01-01

    The decomposition products from pre-oxidized, linear low-density polyethylene have been identified and quantified for films exposed in the absence of oxygen to ultra-violet irradiation, heat or γ-irradiation...

  11. Litter Decomposition Rates, 2015

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data set contains decomposition rates for litter of Salicornia pacifica, Distichlis spicata, and Deschampsia cespitosa buried at 7 tidal marsh sites in 2015....

  12. Orthogonal tensor decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Tamara G. Kolda

    2000-03-01

    The authors explore the orthogonal decomposition of tensors (also known as multi-dimensional arrays or n-way arrays) using two different definitions of orthogonality. They present numerous examples to illustrate the difficulties in understanding such decompositions. They conclude with a counterexample to a tensor extension of the Eckart-Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl. 269(1998):307--329].

  13. Spectral stratigraphy

    Science.gov (United States)

    Lang, Harold R.

    1991-01-01

    A new approach to stratigraphic analysis is described which uses photogeologic and spectral interpretation of multispectral remote sensing data combined with topographic information to determine the attitude, thickness, and lithology of strata exposed at the surface. The new stratigraphic procedure is illustrated by examples in the literature. The published results demonstrate the potential of spectral stratigraphy for mapping strata, determining dip and strike, measuring and correlating stratigraphic sequences, defining lithofacies, mapping biofacies, and interpreting geological structures.

  14. Output-Only Modal Analysis by Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Brincker, Rune; Zhang, Lingmi; Andersen, Palle

    2000-01-01

    approach where the modal parameters are estimated by simple peak picking. However, by introducing a decomposition of the spectral density function matrix, the response spectra can be separated into a set of single degree of freedom systems, each corresponding to an individual mode. By using...

  15. Output-only Modal Analysis by Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Brincker, Rune; Zhang, L.; Andersen, P.

    2000-01-01

    approach where the modal parameters are estimated by simple peak picking. However, by introducing a decomposition of the spectral density function matrix, the response spectra can be separated into a set of single degree of freedom systems, each corresponding to an individual mode. By using...
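
    The core step of frequency domain decomposition can be sketched as follows: estimate the output spectral density matrix from Welch-type cross-spectral estimates, take a singular value decomposition at every frequency line, and pick peaks of the first singular value as candidate modes. The two-channel synthetic response and its 5 Hz and 12 Hz components are illustrative assumptions.

```python
import numpy as np
from scipy.signal import csd, find_peaks

fs = 256.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(7)

# Two-channel synthetic ambient response with modes near 5 Hz and 12 Hz.
y = np.vstack([np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t),
               0.7 * np.sin(2 * np.pi * 5 * t) - 0.6 * np.sin(2 * np.pi * 12 * t)])
y += 0.2 * rng.standard_normal(y.shape)

n_ch, nper = y.shape[0], 1024
f, _ = csd(y[0], y[0], fs=fs, nperseg=nper)
G = np.zeros((len(f), n_ch, n_ch), dtype=complex)    # spectral density matrix G(f)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=nper)

# First singular value at each frequency line; its peaks indicate the modes.
s1 = np.array([np.linalg.svd(G[kk], compute_uv=False)[0] for kk in range(len(f))])
peaks, _ = find_peaks(s1, prominence=0.1 * s1.max())
print(f[peaks])                                       # candidate modal frequencies
```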

  16. Decomposing Nekrasov decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)

    2016-02-16

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  17. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables....... Exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of Hankel (and quasi-Hankel) matrices, derived from multivariate polynomials and normal form computations. This leads to the resolution of systems...

  18. MADCam: The multispectral active decomposition camera

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Stegmann, Mikkel Bille

    2001-01-01

    A real-time spectral decomposition of streaming three-band image data is obtained by applying linear transformations. The Principal Components (PC), the Maximum Autocorrelation Factors (MAF), and the Maximum Noise Fraction (MNF) transforms are applied. In the presented case study the PC transform...... that utilised information drawn from the temporal dimension instead of the traditional spatial approach. Using the CIF format (352x288) frame rates up to 30 Hz are obtained and in VGA mode (640x480) up to 15 Hz....

  19. Kosambi and Proper Orthogonal Decomposition

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 16, Issue 6. Kosambi and the Proper Orthogonal Decomposition. Roddam Narasimha. Keywords: proper orthogonal decomposition; Karhunen–Loève expansion; statistics in function space; characteristic eddies; special calculating machines.

  20. Optimal Spectral Decomposition (OSD) for Ocean Data Assimilation

    Science.gov (United States)

    2015-01-01

    Only form boilerplate and text fragments survive in this record; the fragments mention a model spun up from rest with a climatological annual mean (temperature and salinity) and daily climatological surface forcing from CORE, and observations being extrapolated to data-poor areas under the control of the observational influence matrix F.

  1. EEG Signal Decomposition and Improved Spectral Analysis Using Wavelet Transform

    National Research Council Canada - National Science Library

    Bhatti, Muhammad

    2001-01-01

    EEG (electroencephalography), as a noninvasive testing method, plays a key role in diagnosing diseases, and is useful for both physiological research and medical applications. Wavelet transform (WT...

  2. Quantifying and Qualifying Trust: Spectral Decomposition of Trust Networks

    NARCIS (Netherlands)

    Pavlovic, Dusko

    In a previous FAST paper, I presented a quantitative model of the process of trust building, and showed that trust is accumulated like wealth: the rich get richer. This explained the pervasive phenomenon of adverse selection of trust certificates, as well as the fragility of trust networks in

  3. Application of Burst Processing to the Spectral Decomposition of Speech.

    Science.gov (United States)

    1977-07-01

    of voluntary, formalized motions of the respiratory and masticatory apparatus. It is a skill which must be learned and developed. Control is... tract is an acoustical tube which acts as a filter on the excitation functions of speech. It is terminated by the lips on one end and by the vocal... The volume flow of air through the glottis as a function of time is roughly triangular in shape and exhibits duty factors on the order of 0.3 to 0.7

  4. Micromechanical Sensor for the Spectral Decomposition of Acoustic Signals

    Science.gov (United States)

    2012-02-01

    Only fragments and table-of-contents entries survive in this record; the fragments mention applications in systems such as avalanche detectors, radiation detectors, and other dark discharge processes, the applicability of the Townsend regime, and sections on the frequency content of a Gaussian pulse.

  5. Spectral Predictors

    Energy Technology Data Exchange (ETDEWEB)

    Ibarria, L; Lindstrom, P; Rossignac, J

    2006-11-17

    Many scientific, imaging, and geospatial applications produce large high-precision scalar fields sampled on a regular grid. Lossless compression of such data is commonly done using predictive coding, in which weighted combinations of previously coded samples known to both encoder and decoder are used to predict subsequent nearby samples. In hierarchical, incremental, or selective transmission, the spatial pattern of the known neighbors is often irregular and varies from one sample to the next, which precludes prediction based on a single stencil and fixed set of weights. To handle such situations and make the best use of available neighboring samples, we propose a local spectral predictor that offers optimal prediction by tailoring the weights to each configuration of known nearby samples. These weights may be precomputed and stored in a small lookup table. We show that predictive coding using our spectral predictor improves compression for various sources of high-precision data.

  6. Magic Coset Decompositions

    CERN Document Server

    Cacciatori, Sergio L; Marrani, Alessio

    2013-01-01

    By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.

  7. Canonical decomposition of fluctuation interferences using the delta function formalism

    Directory of Open Access Journals (Sweden)

    Alexander V. Denisov

    2015-10-01

    Full Text Available The paper deals with the discrete spectral-orthogonal decompositions of centered Gaussian random processes for two cases. In the first case, the process realizations are a sequence of pulses that are short in comparison with the observation time. The process decomposition was obtained as a generalized Fourier series on the basis of the delta function formalism, and the variances of the coefficients (random values) of this series were found as well. The resulting expressions complement Kotel'nikov's formula because they cover both the high-frequency and the low-frequency regions of the canonical-decomposition spectrum. In the second case, a random process is a superposition of narrow-band Gaussian random processes, and its realizations are characterized by oscillations. For such a process the canonical decomposition in terms of the Walsh functions was obtained on the basis of the generalized function formalism. Then this decomposition was re-decomposed in terms of trigonometric functions; it follows from the resulting series that the canonical decomposition spectrum is not uniform since a pedestal is formed in the constant component region.

  8. Thermal decomposition of illite

    Directory of Open Access Journals (Sweden)

    Araújo José Humberto de

    2004-01-01

    Full Text Available The effect of heat treatment on illite in air at temperatures ranging from 750 to 1150 °C was studied using the Mössbauer effect in 57Fe. The dependence of the Mössbauer parameters and relative percentage of the radiation absorption area was measured as a function of the firing temperature. The onset of thermal structural decomposition occurred at 800 °C. With rising temperature, the formation of hematite (Fe2O3) increased at the expense of the silicate mineral.

  9. Mode decomposition evolution equations.

    Science.gov (United States)

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2012-03-01

    Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs in various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are limited to PDE based low-pass filters, and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be

  10. Vibration fatigue using modal decomposition

    Science.gov (United States)

    Mršnik, Matjaž; Slavič, Janko; Boltežar, Miha

    2018-01-01

    Vibration-fatigue analysis deals with the material fatigue of flexible structures operating close to natural frequencies. Based on the uniaxial stress response, calculated in the frequency domain, the high-cycle fatigue model using the S-N curve material data and the Palmgren-Miner hypothesis of damage accumulation is applied. The multiaxial criterion is used to obtain the equivalent uniaxial stress response followed by the spectral moment approach to the cycle-amplitude probability density estimation. The vibration-fatigue analysis relates the fatigue analysis in the frequency domain to the structural dynamics. However, once the stress response within a node is obtained, the physical model of the structure dictating that response is discarded and does not propagate through the fatigue-analysis procedure. The structural model can be used to evaluate how specific dynamic properties (e.g., damping, modal shapes) affect the damage intensity. A new approach based on modal decomposition is presented in this research that directly links the fatigue-damage intensity with the dynamic properties of the system. It thus offers a valuable insight into how different modes of vibration contribute to the total damage to the material. A numerical study was performed showing good agreement between results obtained using the newly presented approach with those obtained using the classical method, especially with regards to the distribution of damage intensity and critical point location. The presented approach also offers orders of magnitude faster calculation in comparison with the conventional procedure. Furthermore, it can be applied in a straightforward way to strain experimental modal analysis results, taking advantage of experimentally measured strains.

  11. Learning theory of distributed spectral algorithms

    Science.gov (United States)

    Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan

    2017-07-01

    Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms.

  12. Singular Value Decomposition and Ligand Binding Analysis

    Directory of Open Access Journals (Sweden)

    André Luiz Galo

    2013-01-01

    Full Text Available Singular value decomposition (SVD) is one of the most important computations in linear algebra because of its vast applications in data analysis. It is particularly useful for resolving problems involving least-squares minimization, the determination of matrix rank, and the solution of certain problems involving Euclidean norms. Such problems arise in the spectral analysis of ligand binding to macromolecules. Here, we present a spectral data analysis method using SVD (SVD analysis) and nonlinear fitting to determine the binding characteristics of intercalating drugs to DNA. This methodology reduces noise and identifies distinct spectral species, similar to traditional principal component analysis, as well as fitting nonlinear binding parameters. We applied SVD analysis to investigate the interaction of actinomycin D and daunomycin with native DNA. This methodology does not require prior knowledge of ligand molar extinction coefficients (free and bound), which potentially limits binding analysis. The analysis proceeds simply by reconstructing the experimental data and adjusting the product of the deconvoluted matrices and the matrix of model coefficients determined by the Scatchard and McGhee-von Hippel equations.
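
    The SVD step of such an analysis is straightforward to reproduce. The sketch below, with synthetic two-species spectra standing in for titration data, shows how the singular values indicate the number of distinct spectral species and how a truncated SVD denoises the data matrix; the nonlinear fit to the binding model is not reproduced here.

    import numpy as np

    def svd_denoise(A, n_components):
        """Truncated SVD of a spectra matrix A (rows: wavelengths, columns:
        titration points). Returns the low-rank reconstruction and the
        singular values, whose decay suggests the number of spectral species."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        A_lowrank = U[:, :n_components] @ np.diag(s[:n_components]) @ Vt[:n_components]
        return A_lowrank, s

    # synthetic two-species data: free and bound ligand spectra mixed with noise
    wl = np.linspace(400.0, 600.0, 201)
    spec_free  = np.exp(-((wl - 480.0) / 30.0)**2)
    spec_bound = np.exp(-((wl - 510.0) / 25.0)**2)
    frac_bound = np.linspace(0.0, 1.0, 15)            # across a titration
    A = np.outer(spec_free, 1.0 - frac_bound) + np.outer(spec_bound, frac_bound)
    A += 0.01 * np.random.default_rng(1).standard_normal(A.shape)
    A_hat, s = svd_denoise(A, n_components=2)
    print(s[:4])    # two dominant singular values -> two spectral species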

  13. Tensor decomposition of EEG signals: A brief review

    OpenAIRE

    Cong, Fengyu; Lin, Qiu-Hua; Kuang, Li-Dan; Gong, Xiao-Feng; Astikainen, Piia; Ristaniemi, Tapani

    2015-01-01

    Electroencephalography (EEG) is one fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. Indeed, EEG signals are often naturally born with more than two modes of time and space, and they can be denoted by a multi-way array called a tensor. This review summarizes the current pr...

  14. Two-time Green's functions and the spectral density method in nonextensive classical statistical mechanics.

    Science.gov (United States)

    Cavallo, A; Cosenza, F; De Cesare, L

    2001-12-10

    The two-time retarded and advanced Green's function technique is formulated in nonextensive classical statistical mechanics within the optimal Lagrange multiplier framework. The main spectral properties are presented and a spectral decomposition for the spectral density is obtained. Finally, the nonextensive version of the spectral density method is given and its effectiveness is tested by exploring the equilibrium properties of a classical ferromagnetic spin chain.

  15. Erbium hydride decomposition kinetics.

    Energy Technology Data Exchange (ETDEWEB)

    Ferrizz, Robert Matthew

    2006-11-01

    Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
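
    For reference, the first-order Redhead peak-maximum approximation used in such analyses can be evaluated directly, as in the sketch below; the attempt frequency of 1e13 s-1 and the peak temperature and heating rate are assumed illustrative values, not the report's data.

    import numpy as np

    R = 1.987e-3   # gas constant in kcal/(mol K)

    def redhead_activation_energy(T_peak, beta, nu=1e13):
        """First-order Redhead approximation: E = R*Tp*(ln(nu*Tp/beta) - 3.64),
        with Tp the desorption peak temperature [K], beta the heating rate [K/s]
        and nu the attempt frequency [1/s] (conventionally assumed 1e13)."""
        return R * T_peak * (np.log(nu * T_peak / beta) - 3.64)

    # illustrative numbers only (not the report's data)
    print(redhead_activation_energy(T_peak=900.0, beta=1.0), "kcal/mol")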

  16. Tensor decomposition of EEG signals: a brief review.

    Science.gov (United States)

    Cong, Fengyu; Lin, Qiu-Hua; Kuang, Li-Dan; Gong, Xiao-Feng; Astikainen, Piia; Ristaniemi, Tapani

    2015-06-15

    Electroencephalography (EEG) is one fundamental tool for functional brain imaging. EEG signals tend to be represented by a vector or a matrix to facilitate data processing and analysis with generally understood methodologies like time-series analysis, spectral analysis and matrix decomposition. Indeed, EEG signals are often naturally born with more than two modes of time and space, and they can be denoted by a multi-way array called a tensor. This review summarizes the current progress of tensor decomposition of EEG signals with three aspects. The first is about the existing modes and tensors of EEG signals. Second, two fundamental tensor decomposition models, canonical polyadic decomposition (CPD, also called parallel factor analysis, PARAFAC) and Tucker decomposition, are introduced and compared. Moreover, the applications of the two models for EEG signals are addressed. Particularly, the determination of the number of components for each mode is discussed. Finally, the N-way partial least square and higher-order partial least square are described for a potential trend to process and analyze brain signals of two modalities simultaneously. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
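
    As a concrete starting point, both models can be computed with the TensorLy package (an assumption here; the calls below follow its documented usage, and random numbers stand in for an EEG feature tensor of channels x frequencies x trials):

    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac, tucker

    rng = np.random.default_rng(0)
    X = tl.tensor(rng.standard_normal((32, 20, 50)))    # channels x frequencies x trials

    # canonical polyadic decomposition (PARAFAC) with 4 components:
    # one spatial, one spectral and one trial signature per component
    weights, factors = parafac(X, rank=4)
    print([f.shape for f in factors])                   # [(32, 4), (20, 4), (50, 4)]

    # Tucker decomposition with an (8, 5, 10) core allows a different rank per mode
    core, tucker_factors = tucker(X, rank=[8, 5, 10])
    print(core.shape)                                   # (8, 5, 10)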

  17. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

    This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering and in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography

  18. Block term decomposition for modelling epileptic seizures

    Science.gov (United States)

    Hunyadi, Borbála; Camps, Daan; Sorber, Laurent; Paesschen, Wim Van; Vos, Maarten De; Huffel, Sabine Van; Lathauwer, Lieven De

    2014-12-01

    Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts and are typically characterised by low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. incorporating spectral features by applying a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10-i18, 2007; NeuroImage 37:844-854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(Lr, Lr, 1) terms, allowing more variability in the data to be modelled than would be possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.

  19. Old and New Spectral Techniques for Economic Time Series

    OpenAIRE

    Sella Lisa

    2008-01-01

    This methodological paper reviews different spectral techniques well suited to the analysis of economic time series. While econometric time series analysis is generally carried out in the time domain, these techniques propose a complementary approach based on the frequency domain. Spectral decomposition and time series reconstruction provide a precise quantitative and formal description of the main oscillatory components of a series: thus, it is possible to formally identify trends, lowfrequenc...
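
    The most basic frequency-domain tool mentioned here, the periodogram of a detrended series, takes only a few lines; the quarterly series below is synthetic and merely illustrates how a dominant cycle length can be read off the spectrum.

    import numpy as np
    from scipy.signal import periodogram, detrend

    # synthetic quarterly series: trend + ~5-year business cycle + noise
    n = 200                              # 50 years of quarterly observations
    t = np.arange(n)
    series = 0.05 * t + np.sin(2 * np.pi * t / 20.0) \
        + 0.3 * np.random.default_rng(2).standard_normal(n)

    freq, power = periodogram(detrend(series), fs=4.0)   # fs = 4 observations/year
    dominant = freq[np.argmax(power[1:]) + 1]            # skip the zero-frequency bin
    print("dominant cycle length: %.1f years" % (1.0 / dominant))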

  20. Spectral Imaging by Upconversion

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Pedersen, Christian; Tidemand-Lichtenberg, Peter

    2011-01-01

    We present a method to obtain spectrally resolved images using upconversion. By this method an image is spectrally shifted from one spectral region to another. Since the process is spectrally sensitive, it allows for a tailored spectral response. We believe this will allow standard...... silicon based cameras designed for visible/near infrared radiation to be used for spectral imaging in the mid infrared. This can lead to much lower costs for such imaging devices, and better performance....

  1. Spectral analysis of one-way and two-way downscaling applications for a tidally driven coastal ocean forecasting system

    Science.gov (United States)

    Solano, Miguel; Gonzalez, Juan; Canals, Miguel; Capella, Jorge; Morell, Julio; Leonardi, Stefano

    2017-04-01

    ways: 1) using Rich Pawlowicz's t_tide package (classic harmonic analysis), 2) with traditional band-pass filters (e.g. Lanczos) and 3) using Proper Orthogonal Decomposition. The tide filtering approach shows great improvement in the high frequency response of tidal motions at the open boundaries. Results are validated with NOAA tide gauges, Acoustic Doppler Current Profilers and High Frequency Radars (6 km and 2 km resolution). A floating drifter experiment was performed in which 12 drifters were deployed at different coastal zones and tracked for several days. The results show an improvement of the forecast skill with the proper implementation of the tide filtering approach, by adjusting the nudging time scales and adequately removing the tidal signals. Significant improvement is found in the tracking skill of the floating drifters for the one-way grid, and the two-way nested application also shows some improvement over the offline downscaling approach at higher resolutions.
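
    As a small illustration of option 2, a Lanczos low-pass filter for removing tidal frequencies from an hourly sea-level series can be built from the standard sinc-times-sinc weights, as sketched below; the roughly 40 h cutoff, window length and synthetic series are assumptions for demonstration, not the configuration used in this study.

    import numpy as np

    def lanczos_lowpass_weights(window, cutoff):
        """Lanczos low-pass filter weights: cutoff in cycles per sample,
        window = number of weights on each side of the centre."""
        k = np.arange(1, window)
        sigma = np.sin(np.pi * k / window) / (np.pi * k / window)   # Lanczos window
        w = (np.sin(2 * np.pi * cutoff * k) / (np.pi * k)) * sigma
        weights = np.r_[w[::-1], 2.0 * cutoff, w]
        return weights / weights.sum()                              # unit gain at zero frequency

    # synthetic hourly sea level: subtidal signal + M2 (12.42 h) + S2 (12.00 h) tides
    t = np.arange(30 * 24)                                          # 30 days, hourly
    eta = 0.3 * np.sin(2 * np.pi * t / 200.0) \
        + 0.8 * np.cos(2 * np.pi * t / 12.42) + 0.3 * np.cos(2 * np.pi * t / 12.0)

    w = lanczos_lowpass_weights(window=60, cutoff=1.0 / 40.0)       # ~40 h cutoff
    detided = np.convolve(eta, w, mode="same")                      # tidal signal removed
    print(detided.std(), eta.std())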

  2. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  3. NRSA enzyme decomposition model data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Microbial enzyme activities measured at more than 2000 US streams and rivers. These enzyme data were then used to predict organic matter decomposition and microbial...

  4. Decomposition of Network Communication Games

    NARCIS (Netherlands)

    Dietzenbacher, Bas; Borm, Peter; Hendrickx, Ruud

    2015-01-01

    Using network control structures this paper introduces network communication games as a generalization of vertex games and edge games corresponding to communication situations and studies their decomposition into unanimity games. We obtain a relation between the dividends of the network

  5. Decomposition Bounds for Marginal MAP

    OpenAIRE

    PING, WEI; Liu,Qiang; Ihler, Alexander

    2015-01-01

    Marginal MAP inference involves making MAP predictions in systems defined with latent variables or missing information. It is significantly more difficult than pure marginalization and MAP tasks, for which a large class of efficient and convergent variational algorithms, such as dual decomposition, exist. In this work, we generalize dual decomposition to a generic power sum inference task, which includes marginal MAP, along with pure marginalization and MAP, as special cases. Our method is ba...

  6. Facility Location Using Cross Decomposition

    OpenAIRE

    Jackson, Leroy A.

    1995-01-01

    The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Determining the best base stationing for military units can be modeled as a capacitated facility location problem with sole sourcing and multiple resource categories. Computational experience suggests that cross decomposition, a unification of Benders Decomposition and Lagrangean relaxation, is superior to other contempo...

  7. On the Spectral Singularities and Spectrality of the Hill Operator

    OpenAIRE

    Veliev, O. A.

    2014-01-01

    First we study the spectral singularity at infinity and investigate the connections between the spectral singularities and the spectrality of the Hill operator. Then we consider the spectral expansion when there is no spectral singularity at infinity.

  8. Tensor decomposition-based sparsity divergence index for hyperspectral anomaly detection.

    Science.gov (United States)

    Zhang, Lili; Zhao, Chunhui

    2017-09-01

    Recently, methods exploiting both spatial and spectral features have drawn increasing attention in hyperspectral anomaly detection (AD), and they perform well. In addition, a tensor decomposition-based (TenB) algorithm treating the hyperspectral dataset as a third-order tensor (two modes for space and one mode for spectra) has been proposed to further improve the performance of AD. In this paper, a method using the sparsity divergence index (SDI) based on tensor decomposition (SDI-TD) is proposed. First, three modes of the hyperspectral dataset are obtained by tensor decomposition. Then, low-rank and sparse matrix decomposition is employed separately along the three modes and three sparse matrices are acquired. Finally, SDIs based on the three sparse matrices along the three modes are obtained, and the final result is generated by using the joint SDI. Experiments on real and synthetic hyperspectral datasets reveal that the proposed SDI-TD performs better than the comparison algorithms.

  9. A complete ensemble empirical mode decomposition for GPR signal time-frequency analysis

    Science.gov (United States)

    Li, Jing; Chen, Lingna; Xia, Shugao; Xu, Penglong; Liu, Fengshan

    2014-05-01

    In this paper, we apply a time-frequency analysis method based on the complete ensemble empirical mode decomposition (CEEMD) to GPR signal processing. It decomposes the GPR signal into a sum of oscillatory components, with guaranteed positive and smoothly varying instantaneous frequencies. The key idea of this method relies on averaging the modes obtained by EMD applied to several realizations of Gaussian white noise added to the original signal. It can solve the mode mixing problem of the empirical mode decomposition (EMD) method and improve the resolution of ensemble empirical mode decomposition (EEMD) when the signal has a low signal-to-noise ratio (SNR). First, we analyze the differences between the basic theory of EMD, EEMD and CEEMD. Then, we compare the time-frequency analysis results of the different methods. The synthetic and real GPR data demonstrate that CEEMD promises higher spectral-spatial resolution than the other two EMD methods. Its decomposition is complete, with a numerically negligible error.
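
    In practice a CEEMDAN decomposition of a trace can be obtained with the PyEMD package (an assumption here, distributed as 'EMD-signal'; the call below follows its documented callable interface), followed by a Hilbert transform for instantaneous frequencies; the two-tone signal stands in for a GPR A-scan.

    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import CEEMDAN     # assumption: the 'EMD-signal' (PyEMD) package is installed

    # toy trace standing in for a GPR A-scan: two tones + noise
    fs = 500.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    s = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 60 * t) \
        + 0.1 * np.random.default_rng(3).standard_normal(t.size)

    imfs = CEEMDAN()(s)           # complete ensemble EMD with adaptive noise

    # instantaneous frequency of each intrinsic mode function via the Hilbert transform
    for i, imf in enumerate(imfs):
        phase = np.unwrap(np.angle(hilbert(imf)))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)
        print("IMF %d: median instantaneous frequency %.1f Hz" % (i, np.median(inst_freq)))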

  10. Abstract decomposition theorem and applications

    CERN Document Server

    Grossberg, R; Grossberg, Rami; Lessmann, Olivier

    2005-01-01

    Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types and existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), excellent classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987) and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).

  11. Identification of liquid-phase decomposition species and reactions for guanidinium azotetrazolate

    Energy Technology Data Exchange (ETDEWEB)

    Kumbhakarna, Neeraj R.; Shah, Kaushal J.; Chowdhury, Arindrajit; Thynell, Stefan T., E-mail: thynell@psu.edu

    2014-08-20

    Highlights: • Guanidinium azotetrazolate (GzT) is a high-nitrogen energetic material. • FTIR spectroscopy and ToFMS spectrometry were used for species identification. • Quantum mechanics was used to identify transition states and decomposition pathways. • Important reactions in the GzT liquid-phase decomposition process were identified. • Initiation of decomposition occurs via ring opening, releasing N2. - Abstract: The objective of this work is to analyze the decomposition of guanidinium azotetrazolate (GzT) in the liquid phase by using a combined experimental and computational approach. The experimental part involves the use of Fourier transform infrared (FTIR) spectroscopy to acquire the spectral transmittance of the evolved gas-phase species from rapid thermolysis, as well as to acquire spectral transmittance of the condensate and residue formed from the decomposition. Time-of-flight mass spectrometry (ToFMS) is also used to acquire mass spectra of the evolved gas-phase species. Sub-milligram samples of GzT were heated at rates of about 2000 K/s to a set temperature (553–573 K) where decomposition occurred under isothermal conditions. N2, NH3, HCN, guanidine and melamine were identified as products of decomposition. The computational approach is based on using quantum mechanics for confirming the identity of the species observed in experiments and for identifying elementary chemical reactions that formed these species. In these ab initio techniques, various levels of theory and basis sets were used. Based on the calculated enthalpy and free energy values of various molecular structures, important reaction pathways were identified. Initiation of decomposition of GzT occurs via ring opening to release N2.

  12. Spectral response model for a multibin photon-counting spectral computed tomography detector and its applications.

    Science.gov (United States)

    Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben

    2015-07-01

    Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. The contrast agent concentrations are reconstructed with more

  13. Thermal decomposition of natural dolomite

    Indian Academy of Sciences (India)

    Keywords. TGA–DTA; FTIR; X-ray diffraction; dolomite. Abstract. Thermal decomposition behaviour of dolomite sample has been studied by thermogravimetric (TG) measurements. Differential thermal analysis (DTA) curve of dolomite shows two peaks at 777.8°C and 834°C. The two endothermic peaks observed in dolomite ...

  14. Probability inequalities for decomposition integrals

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2017-01-01

    Roč. 315, č. 1 (2017), s. 240-248 ISSN 0377-0427 Institutional support: RVO:67985556 Keywords: Decomposition integral * Superdecomposition integral * Probability inequalities Subject RIV: BA - General Mathematics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf

  15. Thermal decomposition of ammonium hexachloroosmate

    DEFF Research Database (Denmark)

    Asanova, T I; Kantor, Innokenty; Asanov, I. P.

    2016-01-01

    polymeric structure. Having been revealed for the first time, the intermediate was examined to determine the local atomic structure around osmium. The thermal decomposition of hexachloroosmate is much more complex and occurs in at least a two-step process, which has never been observed before....

  16. Wavefront reconstruction by modal decomposition

    CSIR Research Space (South Africa)

    Schulze, C

    2012-08-01

    Full Text Available We propose a new method to determine the wavefront of a laser beam based on modal decomposition by computer-generated holograms. The hologram is encoded with a transmission function suitable for measuring the amplitudes and phases of the modes...

  17. Torsion and Open Book Decompositions

    OpenAIRE

    Etnyre, John B.; Vela-Vick, David Shea

    2009-01-01

    We show that if (B,\\pi) is an open book decomposition of a contact 3-manifold (Y,\\xi), then the complement of the binding B has no Giroux torsion. We also prove the sutured Heegaard-Floer c-bar invariant of the binding of an open book is non-zero.

  18. Modular Decomposition of Boolean Functions

    NARCIS (Netherlands)

    J.C. Bioch (Cor)

    2002-01-01

    Modular decomposition is a thoroughly investigated topic in many areas such as switching theory, reliability theory, game theory and graph theory. Most applications can be formulated in the framework of Boolean functions. In this paper we give a unified treatment of modular

  19. Thermal decomposition of natural dolomite

    Indian Academy of Sciences (India)

    TECS

    the effects of experimental variables i.e. sample weight, particle size, purge gas velocity and crystalline structure, ... effect of chlorine ions on the decomposition kinetics of dolomite at various temperatures studied by ... to 1000°C at a heating rate of 10 K/min, (ii) N2-gas dynamic atmosphere (90 cm3 min-1), (iii) alumina ...

  20. Decomposition of network communication games

    NARCIS (Netherlands)

    Dietzenbacher, Bas; Borm, Peter; Hendrickx, Ruud

    Using network control structures, this paper introduces a general class of network communication games and studies their decomposition into unanimity games. We obtain a relation between the dividends in any network communication game and its underlying transferable utility game, which depends on the

  1. Locally linear constraint based optimization model for material decomposition

    Science.gov (United States)

    Wang, Qian; Zhu, Yining; Yu, Hengyong

    2017-11-01

    Dual spectral computed tomography (DSCT) offers superior material distinguishability compared with conventional single spectral computed tomography (SSCT). However, the decomposition process is an ill-posed problem, which is sensitive to noise. Thus, the decomposed image quality is degraded, and the corresponding signal-to-noise ratio (SNR) is much lower than that of the directly reconstructed SSCT image. In this work, we establish a locally linear relationship between the decomposed results of DSCT and SSCT. Based on this constraint, we propose an optimization model for DSCT and develop an iterative method with image guided filtering. To further improve the image quality, we employ a preprocessing method based on relative total variation regularization. Both numerical simulations and real experiments are performed, and the results confirm the effectiveness of our proposed approach.

  2. Influence of Cu(NO3)2 initiation additive in two-stage mode conditions of coal pyrolytic decomposition

    Directory of Open Access Journals (Sweden)

    Larionov Kirill

    2017-01-01

    Full Text Available The two-stage (pyrolysis and oxidation) pyrolytic decomposition of a brown coal sample with a Cu(NO3)2 additive was studied. The additive was introduced using a capillary wetness impregnation method at 5% mass concentration. Sample reactivity was studied by thermogravimetric analysis with a staged gaseous medium supply (argon and air), a heating rate of 10 °C/min and intermediate isothermal soaking. The introduction of the initiating additive was found to significantly reduce the volatile release temperature and accelerate the thermal decomposition of the sample. Mass-spectral analysis results reveal that the significant difference in process characteristics is connected to the volatile matter release stage, which is initiated by nitrous oxide produced during copper nitrate decomposition.

  3. Decomposition Mechanism and Decomposition Promoting Factors of Waste Hard Metal for Zinc Decomposition Process (ZDP)

    Energy Technology Data Exchange (ETDEWEB)

    Pee, J H; Kim, Y J; Kim, J Y; Cho, W S; Kim, K J [Whiteware Ceramic Center, KICET (Korea, Republic of); Seong, N E, E-mail: pee@kicet.re.kr [Recytech Korea Co., Ltd. (Korea, Republic of)

    2011-10-29

    Decomposition promoting factors and the decomposition mechanism in the zinc decomposition process of waste hard metals, which are composed mostly of tungsten carbide and cobalt, were evaluated. Zinc volatilization was suppressed and a zinc vapour pressure was produced in the reaction graphite crucible inside an electric furnace for ZDP. The reaction was run for 2 h at 650 °C, which completely (100%) decomposed waste hard metals that were over 30 mm thick. As for the separation-decomposition of waste hard metals, the molten zinc alloy formed a liquid composed of a mixture of γ-β1 phases from the cobalt binder layer (reaction interface). The volume of the reacted zone expanded, and the waste hard metal layer was decomposed and separated horizontally from the hard metal. Zinc used in the ZDP process was almost completely removed and collected by decantation and a volatilization-collection process at 1000 °C. The small amount of zinc remaining in the fully decomposed tungsten carbide-cobalt powder was removed by using a phosphate solution, which has a slow cobalt dissolution rate.

  4. Thermal decomposition and non-isothermal decomposition kinetics of carbamazepine

    Science.gov (United States)

    Qi, Zhen-li; Zhang, Duan-feng; Chen, Fei-xiong; Miao, Jun-yan; Ren, Bao-zeng

    2014-12-01

    The thermal stability and non-isothermal decomposition kinetics of carbamazepine were studied by thermogravimetry (TGA) and differential scanning calorimetry (DSC) at three heating rates. In particular, a transformation of crystal forms occurs at 153.75°C. The activation energy of the thermal decomposition process was calculated from the analysis of the TG curves by the Flynn-Wall-Ozawa, Doyle, distributed activation energy model, Šatava-Šesták and Kissinger methods. There were two distinct stages of the thermal decomposition process. For the first stage, E and log A [s-1] were determined to be 42.51 kJ mol-1 and 3.45, respectively. In the second stage, E and log A [s-1] were 47.75 kJ mol-1 and 3.80. The mechanism of thermal decomposition was Avrami-Erofeev (reaction order n = 1/3), with integral form G(α) = [-ln(1 - α)]^(1/3) (α ≈ 0.1-0.8), in the first stage and Avrami-Erofeev (reaction order n = 1), with integral form G(α) = -ln(1 - α) (α ≈ 0.9-0.99), in the second stage. Moreover, the ΔH‡, ΔS‡ and ΔG‡ values were 37.84 kJ mol-1, -192.41 J mol-1 K-1 and 146.32 kJ mol-1 for the first stage and 42.68 kJ mol-1, -186.41 J mol-1 K-1 and 156.26 kJ mol-1 for the second stage, respectively.
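
    For reference, the Flynn-Wall-Ozawa isoconversional method used above reduces to a linear fit of log10(heating rate) against 1/T at a fixed conversion, with slope -0.4567 E/R under Doyle's approximation; the sketch below uses made-up temperatures, not the paper's measurements.

    import numpy as np

    R = 8.314   # gas constant in J/(mol K)

    def fwo_activation_energy(betas, T_alpha):
        """Flynn-Wall-Ozawa: at a fixed conversion, log10(beta) vs 1/T is linear
        with slope -0.4567*E/R (Doyle's approximation). betas are heating rates
        [K/min]; T_alpha are the temperatures [K] reaching that conversion."""
        slope, _ = np.polyfit(1.0 / np.asarray(T_alpha), np.log10(betas), 1)
        return -slope * R / 0.4567          # activation energy in J/mol

    # illustrative temperatures at a fixed conversion for three heating rates (not real data)
    betas = [5.0, 10.0, 20.0]
    T_alpha = [450.0, 476.0, 505.0]
    print("E = %.1f kJ/mol" % (fwo_activation_energy(betas, T_alpha) / 1000.0))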

  5. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    Science.gov (United States)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.

  6. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
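
    As a generic illustration of variance-based sensitivity indices (here for a simple deterministic function with independent inputs, not the Poisson-process reformulation developed in this work), first-order Sobol' indices can be estimated with a pick-freeze scheme:

    import numpy as np

    def first_order_sobol(f, d, n=100_000, rng=None):
        """Pick-freeze estimation of first-order Sobol' indices for f: R^d -> R
        with independent U(0,1) inputs (Saltelli-style estimator)."""
        rng = rng or np.random.default_rng(0)
        A, B = rng.random((n, d)), rng.random((n, d))
        fA, fB = f(A), f(B)
        var = np.var(np.r_[fA, fB])
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]             # A with column i taken from B
            S[i] = np.mean(fB * (f(ABi) - fA)) / var
        return S

    # additive test function: analytic first-order indices are 1/14, 4/14, 9/14
    f = lambda X: 1.0 * X[:, 0] + 2.0 * X[:, 1] + 3.0 * X[:, 2]
    print(first_order_sobol(f, d=3))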

  7. First-principles investigation of organic photovoltaic materials C60, C70, [C60]PCBM, and bis-[C60]PCBM using a many-body G0W0-Lanczos approach

    Science.gov (United States)

    Qian, Xiaofeng; Umari, Paolo; Marzari, Nicola

    2015-06-01

    We present a first-principles investigation of the excited-state properties of electron acceptors in organic photovoltaics including C60, C70, [6,6]-phenyl-C61-butyric-acid-methyl-ester ([C60]PCBM ), and bis-[C60]PCBM using many-body perturbation theory within the Hedin's G0W0 approximation and an efficient Lanczos approach. Calculated vertical ionization potentials (VIP) and vertical electron affinities (VEA) of C60 and C70 agree very well with experimental values measured in the gas phase. The density of states of all three molecules is also compared to photoemission and inverse photoemission spectra measured on thin films, and they exhibit a close agreement—a rigid energy-gap renormalization owing to intermolecular interactions in the thin films. In addition, it is shown that the low-lying unoccupied states of [C60]PCBM are all derived from the highest-occupied molecular orbitals and the lowest-unoccupied molecular orbitals of fullerene C60. The functional side group in [C60]PCBM introduces a slight electron transfer to the fullerene cage, resulting in small decreases of both VIP and VEA. This small change of VEA provides a solid justification for the increase of open-circuit voltage when replacing fullerene C60 with [C60]PCBM as the electron acceptor in bulk heterojunction polymer solar cells.

  8. Compressive Spectral Renormalization Method

    CERN Document Server

    Bayindir, Cihan

    2016-01-01

    In this paper a novel numerical scheme for finding the sparse self-localized states of a nonlinear system of equations with missing spectral data is introduced. As in Petviashvili's method and the spectral renormalization method, the governing equation is transformed into the Fourier domain, but the iterations are performed for a far smaller number of spectral components (M) than in classical versions of these methods, which use a higher number of spectral components (N). After the convergence criterion is achieved for the M components, the N-component signal is reconstructed from the M components by using the l1 minimization technique of compressive sampling. This method can be named the compressive spectral renormalization (CSRM) method. The main advantage of the CSRM is that it is capable of finding the sparse self-localized states of the evolution equation(s) with much of the spectral data missing.

  9. Water Continuum Absorption in the Infrared and Millimeter Spectral Regions.

    Science.gov (United States)

    Ma, Qiancheng

    1990-01-01

    The absorption coefficient due to the water continuum is calculated both in the high-frequency (infrared) wing and in the low-frequency (millimeter) wing of the pure rotational band. The statistical theory proposed by Rosenkranz to calculate the continuum absorption in the high-frequency wing is reviewed and extended. In this review, we discuss specifically the validity and the limitation of the approximations made by Rosenkranz. We then discuss several extensions to his theory, including increasing the number of rotational states used to calculate the band-average relaxation parameter, correcting the normalization factor, and eliminating the "boxcar approximation." These improvements allow us to eliminate some inconsistencies in the original formulation of Rosenkranz while obtaining substantially the same final results. As a consequence, we confirm his conclusions about the origin, magnitude, and temperature-dependence of the water continuum absorption in the high-frequency wing of the pure rotational band. A new theory is developed to calculate the continuum in the low-frequency wing, i.e., in the millimeter spectral region. This theory is based on a generalization of Fano's theory in which the spectral density is calculated for a system consisting of a pair of water molecules. The internal states are written in terms of the line space of the system, and the resolvent operator is obtained using the Lanczos algorithm. For the interaction between two water molecules, we include only the leading dipole-dipole anisotropic potential and model the isotropic interaction by a Lennard-Jones potential. Using reasonable values for the two Lennard-Jones potential parameters, and the known rotational constants and permanent dipole moment of a water molecule, we calculate the absorption coefficient for frequencies up to 450 GHz for temperatures between 282 and 315 K. Without any free parameters, the present results are in good agreement with an empirical model for the water

  10. Hyperspectral BSS using GMCA with spatio-spectral sparsity constraints.

    Science.gov (United States)

    Moudden, Yassir; Bobin, Jerome

    2011-03-01

    Generalized morphological component analysis (GMCA) is a recent algorithm for multichannel data analysis which was used successfully in a variety of applications including multichannel sparse decomposition, blind source separation (BSS), color image restoration and inpainting. Building on GMCA, the purpose of this contribution is to describe a new algorithm for BSS applications in hyperspectral data processing. It assumes the collected data is a mixture of components exhibiting sparse spectral signatures as well as sparse spatial morphologies, each in specified dictionaries of spectral and spatial waveforms. We report on numerical experiments with synthetic data and application to real observations which demonstrate the validity of the proposed method.

  11. Thermic decomposition of biphenyl; Decomposition thermique du biphenyle

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, M. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1966-03-01

    Liquid and vapour phase pyrolysis of very pure biphenyl, obtained by methods described in the text, was carried out at 400 °C in sealed ampoules, the fraction transformed being always less than 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls. Small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with a) the pyrolysis time, b) the state (gas or liquid) of the biphenyl, and c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that, in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than do those of glass (pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption seems to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)

  12. Research on intelligent fault diagnosis of gears using EMD, spectral features and data mining techniques

    Science.gov (United States)

    Sagar, M.; Vivekkumar, G.; Reddy, Mallikarjuna; Devendiran, S.; Amarnath, M.

    2017-11-01

    The present work aims to formulate an automated prediction model from vibration signals of various gear operating conditions by using EMD (empirical mode decomposition), spectral features and different classification algorithms. Empirical mode decomposition (EMD) is used here as a signal processing technique to extract more useful fault information from the vibration signals. The proposed method is described in the following parts: gear test rig, data acquisition system, signal processing, feature extraction, classification algorithms and finally identification. Meanwhile, in order to remove redundant and irrelevant spectral features and classification algorithms, data mining is implemented, and it showed promising prediction results.

  13. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence...

  14. Hydrocarbon Spectral Database

    Science.gov (United States)

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  15. On Longitudinal Spectral Coherence

    DEFF Research Database (Denmark)

    Kristensen, Leif

    1979-01-01

    It is demonstrated that the longitudinal spectral coherence differs significantly from the transversal spectral coherence in its dependence on displacement and frequency. An expression for the longitudinal coherence is derived and it is shown how the scale of turbulence, the displacement between ...... observation sites and the turbulence intensity influence the results. The limitations of the theory are discussed....

  16. Understanding Soliton Spectral Tunneling as a Spectral Coupling Effect

    DEFF Research Database (Denmark)

    Guo, Hairun; Wang, Shaofei; Zeng, Xianglong

    2013-01-01

    between channels, here we suggest that the soliton spectral tunneling effect can be understood as supported by a spectral phase coupler. The dispersive wave number in the spectral domain must have a coupler-like symmetric profile for soliton spectral tunneling to occur. We show that such a spectral coupler...

  17. Decomposition of Diethylstilboestrol in Soil

    DEFF Research Database (Denmark)

    Gregers-Hansen, Birte

    1964-01-01

    The rate of decomposition of DES-monoethyl-1-C14 in soil was followed by measurement of C14O2 released. From 1.6 to 16% of the added C14 was recovered as C14O2 during 3 months. After six months as much as 12 to 28 per cent was released as C14O2. Determination of C14 in the soil samples after...... not inhibit the CO2 production from the soil. Experiments with γ-sterilized soil indicated that enzymes present in the soil are able to attack DES....

  18. Azimuthal decomposition with digital holograms

    CSIR Research Space (South Africa)

    Litvin, IA

    2012-05-01

    Full Text Available Azimuthal decomposition... outside the annular ring and 1 inside the ring was programmed using complex amplitude modulation for amplitude only effects on a phase-only device. The hologram takes the form of a high frequency grating that oscillates between phase values of 0...

  19. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Science.gov (United States)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  20. Parallel QR Decomposition for Electromagnetic Scattering Problems

    National Research Council Canada - National Science Library

    Boleng, Jeff

    1997-01-01

    This report introduces a new parallel QR decomposition algorithm. Test results are presented for several problem sizes, numbers of processors, and data from the electromagnetic scattering problem domain...

  1. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    Data.gov (United States)

    National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...

  2. Thermal-decomposition studies of HMX

    Energy Technology Data Exchange (ETDEWEB)

    Kolb, J.R.; Garza, R.G.

    1981-10-20

    We have investigated the rates of decomposition as functions of time and temperature on a combined thermogravimetric analyzer-residual gas analyzer (TGA-RGA). This technique also allows us to identify decomposition products generated as the original HMX begins to decompose. The temperature range studied was 50 to 200/sup 0/C. The decomposition process and the nature of decomposition products as functions of HMX polymorphs and conformations of the organic ring systems and possible reactive intermediates are discussed. 7 figures, 3 tables.

  3. Symmetric Decomposition of Asymmetric Games.

    Science.gov (United States)

    Tuyls, Karl; Pérolat, Julien; Lanctot, Marc; Ostrovski, Georg; Savani, Rahul; Leibo, Joel Z; Ord, Toby; Graepel, Thore; Legg, Shane

    2018-01-17

    We introduce new theoretical insights into two-population asymmetric games allowing for an elegant symmetric decomposition into two single population symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B) can be decomposed into its symmetric counterparts by envisioning and investigating the payoff tables (A and B) that constitute the asymmetric game, as two independent, single population, symmetric games. We reveal several surprising formal relationships between an asymmetric two-population game and its symmetric single population counterparts, which facilitate a convenient analysis of the original asymmetric game due to the dimensionality reduction of the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the symmetric counterpart game determined by payoff table A, and x is a Nash equilibrium of the symmetric counterpart game determined by payoff table B. Also the reverse holds and combinations of Nash equilibria of the counterpart games form Nash equilibria of the asymmetric game. We illustrate how these formal relationships aid in identifying and analysing the Nash structure of asymmetric games, by examining the evolutionary dynamics of the simpler counterpart games in several canonical examples.

  4. Decomposition methods in turbulence research

    Science.gov (United States)

    Uruba, Václav

    2012-04-01

    Nowadays we have the dynamical velocity vector field of turbulent flow at our disposal, available thanks to advances in either mathematical simulation (DNS) or experiment (time-resolved PIV). Unfortunately, there is no standard method for the analysis of such data describing complicated extended dynamical systems, which are characterized by an excessive number of degrees of freedom. An overview of candidate methods convenient for spatiotemporal analysis of such systems is to be presented. Special attention will be paid to energetic methods, including Proper Orthogonal Decomposition (POD) in regular and snapshot variants, as well as the Bi-Orthogonal Decomposition (BOD) for joint space-time analysis. Then, stability analysis using Principal Oscillation Patterns (POPs) will be introduced. Finally, the Independent Component Analysis (ICA) method will be proposed for the detection of coherent structures in turbulent flow fields defined by a time-dependent velocity vector field. The principle and some practical aspects of the methods are to be shown. Special attention is to be paid to the physical interpretation of the outputs of the methods listed above.

  5. Decomposition methods in turbulence research

    Directory of Open Access Journals (Sweden)

    Uruba Václav

    2012-04-01

    Full Text Available Nowadays we have the dynamical velocity vector field of turbulent flow at our disposal, available thanks to advances in either mathematical simulation (DNS) or experiment (time-resolved PIV). Unfortunately, there is no standard method for the analysis of such data describing complicated extended dynamical systems, which are characterized by an excessive number of degrees of freedom. An overview of candidate methods convenient for spatiotemporal analysis of such systems is to be presented. Special attention will be paid to energetic methods, including Proper Orthogonal Decomposition (POD) in regular and snapshot variants, as well as the Bi-Orthogonal Decomposition (BOD) for joint space-time analysis. Then, stability analysis using Principal Oscillation Patterns (POPs) will be introduced. Finally, the Independent Component Analysis (ICA) method will be proposed for the detection of coherent structures in turbulent flow fields defined by a time-dependent velocity vector field. The principle and some practical aspects of the methods are to be shown. Special attention is to be paid to the physical interpretation of the outputs of the methods listed above.
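
    Snapshot POD itself reduces to an SVD of the mean-subtracted snapshot matrix, as in the sketch below; the travelling-wave data are synthetic and stand in for measured or simulated velocity fields.

    import numpy as np

    def pod(snapshots, n_modes):
        """Snapshot POD via SVD. 'snapshots' holds one flattened velocity field per
        column; returns spatial modes, temporal coefficients and modal energies."""
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        modes = U[:, :n_modes]                        # spatial structures
        coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]  # temporal evolution
        energy = s**2 / np.sum(s**2)                  # relative energy content
        return modes, coeffs, energy

    # stand-in data: 500 grid points, 200 snapshots of a travelling wave + noise
    x = np.linspace(0.0, 2.0 * np.pi, 500)[:, None]
    t = np.linspace(0.0, 10.0, 200)[None, :]
    field = np.sin(x - 2.0 * t) + 0.05 * np.random.default_rng(4).standard_normal((500, 200))
    modes, coeffs, energy = pod(field, n_modes=4)
    print(energy[:4])    # a travelling wave is captured by two dominant POD modes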

  6. Assessing plant residue decomposition in soil using DRIFT spectroscopy

    Science.gov (United States)

    Ouellette, Lance; Van Eerd, Laura; Voroney, Paul

    2016-04-01

    Assessment of the decomposition of plant residues typically involves the use of tracer techniques combined with measurements of soil respiration. This laboratory study evaluated the use of Diffuse Reflectance Fourier Transform (DRIFT) spectroscopy for its potential to assess plant residue decomposition in soil. A sandy loam soil (Orthic Humic Gleysol) obtained from a field research plot was passed moist (~70% of field capacity) through a 4.75 mm sieve to remove larger crop residues. The experimental design consisted of a randomized complete block with four replicates of ten above-ground cover crop residue-corn stover combinations, where sampling time was blocked. Two incubations were set up: 1) DRIFT analysis: field-moist soil (250 g ODW) was placed in 500 mL glass jars, and 2) CO2 evolution: 100 g (ODW) was placed in 2 L jars. Soils were amended with the plant residues (oven-dried at 60°C and ground to <2 mm) at rates equivalent to field mean above-ground biomass yields, then moistened to 60% water holding capacity and incubated in the dark at 22±3°C. Measurements for DRIFT and CO2-C evolved were taken after 0.5, 2, 4, 7, 10, 15, 22, 29, 36, 43, 50, 64 and 72 d. DRIFT spectral data (100 co-added scans per sample) were recorded with a Varian Cary 660 FT-IR Spectrometer equipped with an EasiDiff Diffuse Reflectance accessory operated at a resolution of 4 cm-1 over the mid-infrared spectrum from 4000 to 400 cm-1. DRIFT spectra of amended soils indicated peak areas of aliphatics at 2930 cm-1, of aromatics at 1620 and 1530 cm-1, and of polysaccharides at 1106 and 1036 cm-1. Evolved CO2 was measured by the alkali trap method (1 M NaOH); the amount of plant residue-C remaining in soil was calculated from the difference in the quantity of plant residue C added and the additional CO2-C evolved from the amended soil. First-order model parameters of the change in polysaccharide peak area over the incubation were related to those generated from the plant residue C decay

  7. Power spectral density of 3D noise

    Science.gov (United States)

    Haefner, David P.

    2017-05-01

    When evaluated with a spatially uniform irradiance, an imaging sensor exhibits both spatial and temporal variations, which can be described as a three-dimensional (3D) random process considered as noise. In the 1990s, NVESD engineers developed an approximation to the 3D power spectral density (PSD) for noise in imaging systems known as 3D noise. This correspondence describes the decomposition of the full 3D PSD into the familiar components from the 3D noise model. The standard 3D noise method assumes spectrally (spatio-temporally) white random processes, which is demonstrated to be atypical for complex modern imaging sensors. Using the spectral shape allows for a more appropriate analysis of the impact of the sensor's noise. The processing routines developed for this work consider finite memory constraints and utilize Welch's method for unbiased PSD estimation. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the MathWorks file exchange [1].
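
    The Welch-based PSD estimation step can be reproduced in a few lines (here in Python rather than the Matlab routines referenced above): estimate the temporal PSD of every pixel with Welch's method and average over the focal plane array. The image sequence, frame rate and noise structure below are assumptions for illustration only.

    import numpy as np
    from scipy.signal import welch

    # synthetic image sequence: frames x rows x cols with white + row-correlated noise
    rng = np.random.default_rng(5)
    frames = rng.standard_normal((512, 64, 64))
    frames += 0.5 * rng.standard_normal((512, 64, 1))    # temporal row-noise component

    # temporal PSD per pixel via Welch's method, then averaged over the focal plane
    f, pxx = welch(frames, fs=60.0, nperseg=128, axis=0)  # 60 Hz frame rate assumed
    mean_temporal_psd = pxx.mean(axis=(1, 2))
    print(f.shape, mean_temporal_psd.shape)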

  8. Segmentation of cDNA Microarray Images using Parallel Spectral Clustering

    Directory of Open Access Journals (Sweden)

    Daniel RUIZ

    2013-05-01

    Full Text Available Microarray technology generates large amounts of gene expression levels to be analyzed simultaneously. This analysis requires microarray image segmentation to extract the quantitative information from spots. Spectral clustering is one of the most relevant unsupervised methods, able to gather data without a priori information on shapes or locality. We propose and test on microarray images a parallel strategy for the spectral clustering method based on domain decomposition, with a criterion to determine the number of clusters.
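
    A serial version of the underlying segmentation step can be sketched with scikit-learn: cluster per-pixel features (intensity plus scaled coordinates) of a spot image with spectral clustering. The parallel domain-decomposition strategy and the cluster-count criterion proposed in the paper are not reproduced; the image and parameters below are synthetic.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    # synthetic microarray patch: one bright spot on a noisy background
    yy, xx = np.mgrid[0:32, 0:32]
    spot = np.exp(-((xx - 16)**2 + (yy - 16)**2) / 40.0)
    image = spot + 0.05 * np.random.default_rng(6).standard_normal(spot.shape)

    # features per pixel: intensity plus (scaled) coordinates
    features = np.c_[image.ravel(), 0.02 * xx.ravel(), 0.02 * yy.ravel()]
    labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                n_neighbors=10, random_state=0).fit_predict(features)
    segmentation = labels.reshape(image.shape)     # spot vs background mask
    print(np.bincount(labels))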

  9. Tensor-based Dictionary Learning for Spectral CT Reconstruction

    Science.gov (United States)

    Zhang, Yanbo; Wang, Ge

    2016-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the images are highly correlated among energy channels. Based on these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained, in which each atom is a rank-one tensor. Then, the trained dictionary is used to sparsely represent image tensor patches during an iterative reconstruction process, and an alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than the currently popular methods. PMID:27541628

  10. Vowel Inherent Spectral Change

    CERN Document Server

    Assmann, Peter

    2013-01-01

    It has been traditional in phonetic research to characterize monophthongs using a set of static formant frequencies, i.e., formant frequencies taken from a single time-point in the vowel or averaged over the time-course of the vowel. However, over the last twenty years a growing body of research has demonstrated that, at least for a number of dialects of North American English, vowels which are traditionally described as monophthongs often have substantial spectral change. Vowel Inherent Spectral Change has been observed in speakers’ productions, and has also been found to have a substantial effect on listeners’ perception. In terms of acoustics, the traditional categorical distinction between monophthongs and diphthongs can be replaced by a gradient description of dynamic spectral patterns. This book includes chapters addressing various aspects of vowel inherent spectral change (VISC), including theoretical and experimental studies of the perceptually relevant aspects of VISC, the relationship between ar...

  11. Spectral transmittance reference standards

    Energy Technology Data Exchange (ETDEWEB)

    Kruglyakova, M.A.; Belyaeva, O.N.; Nikitin, M.V.

    1995-06-01

    This paper presents spectral transmittance reference standards for UV and IR spectrophotometers, developed, studied, and certified by a precision spectrophotometry laboratory (the RSP Complex). 8 refs., 3 figs., 3 tabs.

  12. Decomposition kinetics of plutonium hydride

    Energy Technology Data Exchange (ETDEWEB)

    Haschke, J.M.; Stakebake, J.L.

    1979-01-01

    Kinetic data for decomposition of PuH_1.95 provides insight into a possible mechanism for the hydriding and dehydriding reactions of plutonium. The fact that the rate of the hydriding reaction, K_H, is proportional to P^1/2 and the rate of the dehydriding process, K_D, is inversely proportional to P^1/2 suggests that the forward and reverse reactions proceed by opposite paths of the same mechanism. The P^1/2 dependence of hydrogen solubility in metals is characteristic of the dissociative absorption of hydrogen; i.e., the reactive species is atomic hydrogen. It is reasonable to assume that the rates of the forward and reverse reactions are controlled by the surface concentration of atomic hydrogen, (H_s), that K_H = c'(H_s), and that K_D = c/(H_s), where c' and c are proportionality constants. For this surface model, the pressure dependence of K_D is related to (H_s) by the reaction (H_s) ⇌ 1/2 H_2(g) and by its equilibrium constant K_e = (H_2)^1/2/(H_s). In the pressure range of ideal gas behavior, (H_s) = K_e^-1(RT)^-1/2 and the decomposition rate is given by K_D = cK_e(RT)^-1/2 P^1/2. For an analogous treatment of the hydriding process with this model, it can be readily shown that K_H = c'K_e^-1(RT)^-1/2 P^1/2. The inverse pressure dependence and direct temperature dependence of the decomposition rate are correctly predicted by this mechanism which is most consistent with the observed behavior of the Pu-H system.

  13. SDSS-IV MaNGA: bulge-disc decomposition of IFU data cubes (BUDDI)

    Science.gov (United States)

    Johnston, Evelyn J.; Häußler, Boris; Aragón-Salamanca, Alfonso; Merrifield, Michael R.; Bamford, Steven; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Fu, Hai; Law, David; Nitschelm, Christian; Thomas, Daniel; Roman Lopes, Alexandre; Wake, David; Yan, Renbin

    2017-02-01

    With the availability of large integral field unit (IFU) spectral surveys of nearby galaxies, there is now the potential to extract spectral information from across the bulges and discs of galaxies in a systematic way. This information can address questions such as how these components built up with time, how galaxies evolve and whether their evolution depends on other properties of the galaxy such as its mass or environment. We present bulge-disc decomposition of IFU data cubes (BUDDI), a new approach to fit the two-dimensional light profiles of galaxies as a function of wavelength to extract the spectral properties of these galaxies' discs and bulges. The fitting is carried out using GALFITM, a modified form of GALFIT which can fit multiwaveband images simultaneously. The benefit of this technique over traditional multiwaveband fits is that the stellar populations of each component can be constrained using knowledge over the whole image and spectrum available. The decomposition has been developed using commissioning data from the Sloan Digital Sky Survey-IV Mapping Nearby Galaxies at APO (MaNGA) survey with redshifts z 22 arcsec, but can be applied to any IFU data of a nearby galaxy with similar or better spatial resolution and coverage. We present an overview of the fitting process, the results from our tests, and we finish with example stellar population analyses of early-type galaxies from the MaNGA survey to give an indication of the scientific potential of applying bulge-disc decomposition to IFU data.

  14. Thermophotovoltaic Spectral Control

    Energy Technology Data Exchange (ETDEWEB)

    DM DePoy; PM Fourspring; PF Baldasaro; JF Beausang; EJ Brown; MW Dashiel; KD Rahner; TD Rahmlow; JE Lazo-Wasem; EJ Gratrix; B Wemsman

    2004-06-09

    Spectral control is a key technology for thermophotovoltaic (TPV) direct energy conversion systems because only a fraction (typically less than 25%) of the incident thermal radiation has energy exceeding the diode bandgap energy, E_g, and can thus be converted to electricity. The goal for TPV spectral control in most applications is twofold: (1) Maximize TPV efficiency by minimizing transfer of low energy, below bandgap photons from the radiator to the TPV diode. (2) Maximize TPV surface power density by maximizing transfer of high energy, above bandgap photons from the radiator to the TPV diode. TPV spectral control options include: front surface filters (e.g. interference filters, plasma filters, interference/plasma tandem filters, and frequency selective surfaces), back surface reflectors, and wavelength selective radiators. System analysis shows that spectral performance dominates diode performance in any practical TPV system, and that low bandgap diodes enable both higher efficiency and power density when spectral control limitations are considered. Lockheed Martin has focused its efforts on front surface tandem filters which have achieved spectral efficiencies of ~83% for E_g = 0.52 eV and ~76% for E_g = 0.60 eV for a 950 °C radiator temperature.
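
    The "less than 25%" figure can be sanity-checked with a short calculation: for an ideal blackbody radiator at 950 °C, the fraction of radiated power carried by photons above E_g = 0.52 eV follows from integrating the Planck distribution. The sketch below is a gray-body idealization that ignores filters, emissivity, and view factors.

```python
# Back-of-the-envelope check of the "less than ~25% above bandgap" statement: fraction of
# blackbody radiant power carried by photons with energy above E_g for an ideal radiator
# at 950 C. Real TPV radiators and filters are more complicated; this is illustrative.
import numpy as np
from scipy.integrate import quad

k_B = 8.617e-5                      # eV/K
T = 950.0 + 273.15                  # radiator temperature (K)
E_g = 0.52                          # diode bandgap (eV)

planck = lambda x: x**3 / np.expm1(x)          # dimensionless Planck integrand, x = E/(kT)
x_g = E_g / (k_B * T)
above, _ = quad(planck, x_g, 50.0)             # tail of the distribution above the bandgap
total = np.pi**4 / 15.0                        # full integral from 0 to infinity
print(f"fraction of power above E_g: {above / total:.2f}")   # roughly 0.25
```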

  15. Spectrally selective glazings

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-08-01

    Spectrally selective glazing is window glass that permits some portions of the solar spectrum to enter a building while blocking others. This high-performance glazing admits as much daylight as possible while preventing transmission of as much solar heat as possible. By controlling solar heat gains in summer, preventing loss of interior heat in winter, and allowing occupants to reduce electric lighting use by making maximum use of daylight, spectrally selective glazing significantly reduces building energy consumption and peak demand. Because new spectrally selective glazings can have a virtually clear appearance, they admit more daylight and permit much brighter, more open views to the outside while still providing the solar control of the dark, reflective energy-efficient glass of the past. This Federal Technology Alert provides detailed information and procedures for Federal energy managers to consider spectrally selective glazings. The principle of spectrally selective glazings is explained. Benefits related to energy efficiency and other architectural criteria are delineated. Guidelines are provided for appropriate application of spectrally selective glazing, and step-by-step instructions are given for estimating energy savings. Case studies are also presented to illustrate actual costs and energy savings. Current manufacturers, technology users, and references for further reading are included for users who have questions not fully addressed here.

  16. Application Of Adomian's Decomposition Method In Solving ...

    African Journals Online (AJOL)

    It is shown in the literature that Adomian's decomposition method gives better results than any other computational technique. We use this method to tackle the simple heat equation and compare the result with the closed-form solution of the given problem. Keywords: Adomian decomposition method; accuracy; nonlinear equation ...
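
    A minimal symbolic sketch of the Adomian recursion for the 1-D heat equation u_t = u_xx with u(x,0) = sin(x) is given below (SymPy); this specific example problem is assumed for illustration, and the partial sums reproduce the closed-form solution exp(-t)·sin(x) term by term.

```python
# Minimal sketch of Adomian decomposition for the heat equation u_t = u_xx with
# u(x,0) = sin(x): u_0 is the initial condition and u_{n+1} = integral_0^t (u_n)_xx dt.
# The partial sums converge to the closed-form solution exp(-t)*sin(x).
import sympy as sp

x, t = sp.symbols("x t")
u = sp.sin(x)          # u_0
total = u
for _ in range(6):     # a few terms of the decomposition series
    u = sp.integrate(sp.diff(u, x, 2), (t, 0, t))
    total += u

print(sp.simplify(total))                           # truncated series in t times sin(x)
print(sp.series(sp.exp(-t) * sp.sin(x), t, 0, 7))   # matches term by term
```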

  17. Modular polynomial arithmetic in partial fraction decomposition

    Science.gov (United States)

    Abdali, S. K.; Caviness, B. F.; Pridor, A.

    1977-01-01

    Algorithms for general partial fraction decomposition are obtained by using modular polynomial arithmetic. An algorithm is presented to compute inverses modulo a power of a polynomial in terms of inverses modulo that polynomial. This algorithm is used to make an improvement in the Kung-Tong partial fraction decomposition algorithm.
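
    For readers unfamiliar with what a partial fraction decomposition computes, the short SymPy example below shows the output for a simple rational function; it uses SymPy's general-purpose apart() rather than the modular-arithmetic algorithms of the paper.

```python
# Minimal example of partial fraction decomposition using SymPy's apart().
# This illustrates the output of such algorithms; it does not use the modular
# polynomial arithmetic or the Kung-Tong improvement discussed above.
import sympy as sp

x = sp.symbols("x")
expr = (3 * x + 5) / ((x - 1) ** 2 * (x + 2))
print(sp.apart(expr, x))
# Prints a sum of simple fractions with denominators (x - 1), (x - 1)**2 and (x + 2).
```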

  18. Spinodal decomposition in fine grained materials

    Indian Academy of Sciences (India)

    Unknown

    A-rich grain boundary layer followed by a B-rich layer; the grain interior exhibits a spinodally decomposed microstructure, evolving slowly. Further, grain growth is suppressed completely during the decomposition process. Keywords. Spinodal decomposition; grain boundary effects; phase field models.

  19. An Introduction to Clique Minimal Separator Decomposition

    Directory of Open Access Journals (Sweden)

    Anne Berry

    2010-05-01

    Full Text Available This paper is a review which presents and explains the decomposition of graphs by clique minimal separators. The pace is leisurely; we give many examples and figures. Easy algorithms are provided to implement this decomposition. The historical and theoretical background is given, as well as sketches of proofs of the structural results involved.

  20. Some Aspects of Thermochemical Decomposition of Peat

    Directory of Open Access Journals (Sweden)

    Y. A. Losiuk

    2008-01-01

    Full Text Available The paper considers peculiar features of thermochemical decomposition of peat as a result of quick pyrolysis. Evaluation of energy and economic expediency of the preliminary peat decomposition process for obtaining liquid and gaseous products has been made in the paper. The paper reveals prospects pertaining to application of the given technology while generating electric power and heat.

  1. Moisture controls decomposition rate in thawing tundra

    Science.gov (United States)

    C.E. Hicks-Pries; E.A.G. Schuur; S.M. Natali; J.G. Vogel

    2013-01-01

    Permafrost thaw can affect decomposition rates by changing environmental conditions and litter quality. As permafrost thaws, soils warm and thermokarst (ground subsidence) features form, causing some areas to become wetter while other areas become drier. We used a common substrate to measure how permafrost thaw affects decomposition rates in the surface soil in a...

  2. Spinodal decomposition in fine grained materials

    Indian Academy of Sciences (India)

    We have used a phase field model to study spinodal decomposition in polycrystalline materials in which the grain size is of the same order of magnitude as the characteristic decomposition wavelength (λ_SD). In the spirit of phase field models, each grain (i) in our model has an order parameter (η_i) associated with it; ...

  3. Climate history shapes contemporary leaf litter decomposition

    Science.gov (United States)

    Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford

    2015-01-01

    Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like other organisms, traits of decomposers and their communities are shaped not just by the contemporary climate but also their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...

  4. Light-induced decomposition of indocyanine green.

    Science.gov (United States)

    Engel, Eva; Schraml, Rüdiger; Maisch, Tim; Kobuch, Karin; König, Burkhard; Szeimies, Rolf-Markus; Hillenkamp, Jost; Bäumler, Wolfgang; Vasold, Rudolf

    2008-05-01

    To investigate the light-induced decomposition of indocyanine green (ICG) and to test the cytotoxicity of light-induced ICG decomposition products. ICG in solution was irradiated with laser light, solar light, or surgical endolight. The light-induced decomposition of ICG was analyzed by high-performance liquid chromatography (HPLC) and mass spectrometry. Porcine retinal pigment epithelial (RPE) cells were incubated with the light-induced decomposition products of ICG, and cell viability was measured by trypan blue exclusion assay. Independent of the light source used, singlet oxygen (photodynamic type 2 reaction) is generated by ICG leading to dioxetanes by [2+2]-cycloaddition of singlet oxygen. These dioxetanes thermally decompose into several carbonyl compounds. The decomposition products were identified by mass spectrometry. The decomposition of ICG was inhibited by adding sodium azide, a quencher of singlet oxygen. Incubation with ICG decomposition products significantly reduced the viability of RPE cells in contrast to control cells. ICG is decomposed by light within a self-sensitized photo oxidation. The decomposition products reduce the viability of RPE cells in vitro. The toxic effects of decomposed ICG should be further investigated under in vivo conditions.

  5. SpecViz: Interactive Spectral Data Analysis

    Science.gov (United States)

    Earl, Nicholas Michael; STScI

    2016-06-01

    The astronomical community is about to enter a new generation of scientific enterprise. With next-generation instrumentation and advanced capabilities, the need has arisen to equip astronomers with the necessary tools to deal with large, multi-faceted data. The Space Telescope Science Institute has initiated a data analysis forum for the creation, development, and maintenance of software tools for the interpretation of these new data sets. SpecViz is a spectral 1-D interactive visualization and analysis application built with Python in an open source development environment. A user-friendly GUI allows for a fast, interactive approach to spectral analysis. SpecViz supports handling of unique and instrument-specific data, incorporation of advanced spectral unit handling and conversions in a flexible, high-performance interactive plotting environment. Active spectral feature analysis is possible through interactive measurement and statistical tools. It can be used to build wide-band SEDs, with the capability of combining or overplotting data products from various instruments. SpecViz sports advanced toolsets for filtering and detrending spectral lines; identifying, isolating, and manipulating spectral features; as well as utilizing spectral templates for renormalizing data in an interactive way. SpecViz also includes a flexible model fitting toolset that allows for multi-component models, as well as custom models, to be used with various fitting and decomposition routines. SpecViz also features robust extension via custom data loaders and connection to the central communication system underneath the interface for more advanced control. Incorporation with Jupyter notebooks via connection with the active iPython kernel allows for SpecViz to be used in addition to a user’s normal workflow without demanding the user drastically alter their method of data analysis. In addition, SpecViz allows the interactive analysis of multi-object spectroscopy in the same straight

  6. Multilinear operators for higher-order decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
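
    The two operators can be sketched in a few lines of NumPy (illustrative notation only, not Kolda's code): the Tucker operator applies an n-mode matrix product along every mode of a core tensor, and the Kruskal operator sums outer products of corresponding matrix columns.

```python
# Minimal NumPy sketch of the two operators described above (three-way case for brevity).
import numpy as np

def tucker_operator(core, matrices):
    """Multiply `core` by matrices[n] along mode n, for every mode n."""
    out = core
    for n, M in enumerate(matrices):
        # Contract M's second axis with mode n of `out`, then move the new axis back to n.
        out = np.moveaxis(np.tensordot(M, out, axes=(1, n)), 0, n)
    return out

def kruskal_operator(matrices):
    """Sum of outer products of corresponding columns of the given matrices."""
    A, B, C = matrices
    return np.einsum("ir,jr,kr->ijk", A, B, C)

core = np.random.default_rng(3).standard_normal((2, 3, 4))
U = [np.random.default_rng(i).standard_normal((5, d)) for i, d in enumerate(core.shape)]
print(tucker_operator(core, U).shape)                                               # (5, 5, 5)
print(kruskal_operator([np.ones((4, 2)), np.ones((3, 2)), np.ones((2, 2))]).shape)  # (4, 3, 2)
```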

  7. Nonconforming mortar element methods: Application to spectral discretizations

    Science.gov (United States)

    Maday, Yvon; Mavriplis, Cathy; Patera, Anthony

    1988-01-01

    Spectral element methods are p-type weighted residual techniques for partial differential equations that combine the generality of finite element methods with the accuracy of spectral methods. Presented here is a new nonconforming discretization which greatly improves the flexibility of the spectral element approach as regards automatic mesh generation and non-propagating local mesh refinement. The method is based on the introduction of an auxiliary mortar trace space, and constitutes a new approach to discretization-driven domain decomposition characterized by a clean decoupling of the local, structure-preserving residual evaluations and the transmission of boundary and continuity conditions. The flexibility of the mortar method is illustrated by several nonconforming adaptive Navier-Stokes calculations in complex geometry.

  8. Subspace dynamic mode decomposition for stochastic Koopman analysis

    Science.gov (United States)

    Takeishi, Naoya; Kawahara, Yoshinobu; Yairi, Takehisa

    2017-09-01

    The analysis of nonlinear dynamical systems based on the Koopman operator is attracting attention in various applications. Dynamic mode decomposition (DMD) is a data-driven algorithm for Koopman spectral analysis, and several variants with a wide range of applications have been proposed. However, popular implementations of DMD suffer from observation noise on random dynamical systems and generate inaccurate estimation of the spectra of the stochastic Koopman operator. In this paper, we propose subspace DMD as an algorithm for the Koopman analysis of random dynamical systems with observation noise. Subspace DMD first computes the orthogonal projection of future snapshots to the space of past snapshots and then estimates the spectra of a linear model, and its output converges to the spectra of the stochastic Koopman operator under standard assumptions. We investigate the empirical performance of subspace DMD with several dynamical systems and show its utility for the Koopman analysis of random dynamical systems.
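
    For context, the sketch below implements standard exact DMD on noisy snapshots of a small linear system (NumPy); the subspace DMD proposed in the paper additionally projects future snapshots onto the span of past snapshots to remove the bias caused by observation noise, a step omitted here.

```python
# Minimal sketch of *standard* exact DMD on snapshot data. The subspace DMD of the paper
# adds an orthogonal projection of future snapshots onto the space of past snapshots to
# de-bias the spectral estimate under observation noise; that step is omitted here.
import numpy as np

rng = np.random.default_rng(4)
# Snapshots of a linear system x_{k+1} = A x_k, observed with a little noise.
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
x = np.empty((2, 200))
x[:, 0] = [1.0, 0.0]
for k in range(199):
    x[:, k + 1] = A_true @ x[:, k]
x_noisy = x + 0.01 * rng.standard_normal(x.shape)

X, Y = x_noisy[:, :-1], x_noisy[:, 1:]           # past / future snapshot matrices
U, s, Vt = np.linalg.svd(X, full_matrices=False)
A_tilde = U.T @ Y @ Vt.T @ np.diag(1.0 / s)      # reduced linear operator
eigvals, _ = np.linalg.eig(A_tilde)
print(np.sort_complex(eigvals))                  # close to eig(A_true) = 0.9 +/- 0.2i
```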

  9. 2D Prony-Huang Transform: A New Tool for 2D Spectral Analysis

    Science.gov (United States)

    Schmitt, Jeremy; Pustelnik, Nelly; Borgnat, Pierre; Flandrin, Patrick; Condat, Laurent

    2014-12-01

    This work proposes an extension of the 1-D Hilbert Huang transform for the analysis of images. The proposed method consists in (i) adaptively decomposing an image into oscillating parts called intrinsic mode functions (IMFs) using a mode decomposition procedure, and (ii) providing a local spectral analysis of the obtained IMFs in order to get the local amplitudes, frequencies, and orientations. For the decomposition step, we propose two robust 2-D mode decompositions based on non-smooth convex optimization: a "Genuine 2-D" approach, that constrains the local extrema of the IMFs, and a "Pseudo 2-D" approach, which constrains separately the extrema of lines, columns, and diagonals. The spectral analysis step is based on Prony annihilation property that is applied on small square patches of the IMFs. The resulting 2-D Prony-Huang transform is validated on simulated and real data.

  10. Biologically-inspired data decorrelation for hyper-spectral imaging

    Directory of Open Access Journals (Sweden)

    Ghita Ovidiu

    2011-01-01

    Full Text Available Abstract Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods require complex and subjective training procedures and in addition the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates the human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
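
    As a point of reference for the decorrelation step being replaced, the sketch below applies the PCA baseline mentioned above to a synthetic hyperspectral data matrix (pixels × bands); the vision-inspired method itself is not reproduced.

```python
# Minimal sketch of the PCA baseline mentioned above: decorrelating highly correlated
# hyperspectral bands into a few compact components (not the vision-inspired method itself).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_pixels, n_bands = 1000, 64
latent = rng.standard_normal((n_pixels, 3))          # 3 underlying material signatures
mixing = rng.standard_normal((3, n_bands))
cube = latent @ mixing + 0.05 * rng.standard_normal((n_pixels, n_bands))

pca = PCA(n_components=3).fit(cube)
descriptors = pca.transform(cube)                    # compact, decorrelated descriptors
print(pca.explained_variance_ratio_.sum())           # ~0.99: three components suffice
```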

  11. Parametric Explosion Spectral Model

    Energy Technology Data Exchange (ETDEWEB)

    Ford, S R; Walter, W R

    2012-01-19

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before occurred. We develop a parametric model of the nuclear explosion seismic source spectrum derived from regional phases that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, which is a three-parameter model that describes the long-period level, corner-frequency, and spectral slope at high-frequencies. Explosion spectra can be fit with similar spectral models whose parameters are then correlated with near-source geology and containment conditions. We observe a correlation of high gas-porosity (low-strength) with increased spectral slope. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
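
    A worked form of the three-parameter spectral model named above (long-period level, corner frequency, high-frequency slope) might look like the following; the exact generalized Brune parameterization fitted in the study may differ.

```python
# Minimal sketch of a Brune-type source spectrum: long-period level omega0, corner
# frequency fc, and high-frequency spectral slope n (the three parameters named above).
# The generalized form fitted in the paper may differ; this is illustrative.
import numpy as np

def brune_spectrum(f, omega0, fc, n=2.0):
    """Displacement source spectrum: flat below fc, falling as f**-n above it."""
    return omega0 / (1.0 + (f / fc) ** n)

f = np.logspace(-1, 2, 7)                  # 0.1 - 100 Hz
print(brune_spectrum(f, omega0=1.0, fc=2.0))
```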

  12. Photovoltaic spectral responsivity measurements

    Energy Technology Data Exchange (ETDEWEB)

    Emery, K.; Dunlavy, D.; Field, H.; Moriarty, T. [National Renewable Energy Lab., Golden, CO (United States)

    1998-09-01

    This paper discusses the various elemental random and nonrandom error sources in typical spectral responsivity measurement systems. The authors focus specifically on the filter and grating monochromator-based spectral responsivity measurement systems used by the Photovoltaic (PV) performance characterization team at NREL. A variety of subtle measurement errors can occur that arise from a finite photo-current response time, bandwidth of the monochromatic light, waveform of the monochromatic light, and spatial uniformity of the monochromatic and bias lights; the errors depend on the light source, PV technology, and measurement system. The quantum efficiency can be a function of the voltage bias, light bias level, and, for some structures, the spectral content of the bias light or location on the PV device. This paper compares the advantages and problems associated with semiconductor-detector-based calibrations and pyroelectric-detector-based calibrations. Different current-to-voltage conversion and ac photo-current detection strategies employed at NREL are compared and contrasted.

  13. ADE spectral networks

    Science.gov (United States)

    Longhi, Pietro; Park, Chan Y.

    2016-08-01

    We introduce a new perspective and a generalization of spectral networks for 4d N = 2 theories of class S associated to Lie algebras g = A_n, D_n, E_6, and E_7. Spectral networks directly compute the BPS spectra of 2d theories on surface defects coupled to the 4d theories. A Lie algebraic interpretation of these spectra emerges naturally from our construction, leading to a new description of 2d-4d wall-crossing phenomena. Our construction also provides an efficient framework for the study of BPS spectra of the 4d theories. In addition, we consider novel types of surface defects associated with minuscule representations of g.

  14. Surface-directed spinodal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Puri, Sanjay [School of Physical Sciences, Jawaharlal Nehru University, New Delhi-110067 (India)

    2005-01-26

    We review analytical and numerical results for surface-directed spinodal decomposition (SDSD), namely, the interplay of wetting kinetics and phase separation in a binary (AB) mixture in contact with a surface S which prefers one of the components (say, A). Depending on the relative strengths of the A-B, A-S and B-S interactions, the surface is either partially wetted or completely wetted by A in equilibrium. We discuss the theoretical framework for modelling SDSD, and review results obtained from both microscopic and coarse-grained models. We clarify the differences between diffusion-driven SDSD in solids, and SDSD in fluids, where velocity fields play an important role. Furthermore, we discuss the dependence of wetting-layer kinetics on the composition of the mixture. Some results are also presented for phase separation in a confined geometry, e.g., thin films. Finally, we discuss the problem of surface-enrichment kinetics, namely, the kinetics of enrichment of an attracting surface when the bulk mixture is stable. These nonequilibrium processes have important applications in the preparation of nanomaterials and multi-layered structures. (topical review)

  15. Geometric decompositions of collective motion

    Science.gov (United States)

    Mischiati, Matteo; Krishnaprasad, P. S.

    2017-04-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes-including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.

  16. Detector-based spectral CT with a novel dual-layer technology: principles and applications.

    Science.gov (United States)

    Rassouli, Negin; Etesami, Maryam; Dhanantwari, Amar; Rajiah, Prabhakar

    2017-10-06

    Detector-based spectral computed tomography is a novel dual-energy CT technology that employs two layers of detectors to simultaneously collect low- and high-energy data in all patients using standard CT protocols. In addition to the conventional polyenergetic images created for each patient, projection-space decomposition is used to generate spectral basis images (photoelectric and Compton scatter) for creating multiple spectral images, including material decomposition (iodine-only, virtual non-contrast, effective atomic number) and virtual monoenergetic images, on-demand according to clinical need. These images are useful in multiple clinical applications, including improving vascular contrast, improving lesion conspicuity, decreasing artefacts, characterising materials and reducing radiation dose. In this article, we discuss the principles of this novel technology and also illustrate the common clinical applications. Teaching points • The top and bottom layers of dual-layer CT absorb low- and high-energy photons, respectively. • Multiple spectral images are generated by projection-space decomposition. • Spectral images can be generated in all patients scanned in this scanner.
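
    The material decomposition step can be illustrated with a toy image-domain example: measured attenuation at two energies is inverted through a 2×2 matrix of basis-material coefficients. The coefficients below are invented placeholders, and the scanner described above actually performs the decomposition in projection space.

```python
# Toy two-material decomposition from low/high-energy measurements: solve a 2x2 linear
# system per pixel. Coefficients are illustrative placeholders, not calibrated values,
# and this image-domain example differs from the scanner's projection-space algorithm.
import numpy as np

# Rows: energy bin (low, high); columns: basis material (water, iodine).
M = np.array([[0.25, 6.0],
              [0.20, 2.5]])

# Measured linear attenuation (low, high) for two example pixels.
mu = np.array([[0.25, 0.20],    # pure water
               [0.85, 0.45]])   # water plus some iodine

densities = np.linalg.solve(M, mu.T).T
print(densities)                # ~[[1.0, 0.0], [1.0, 0.1]] (water, iodine) per pixel
```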

  17. Riesz spectral systems

    NARCIS (Netherlands)

    Guo, B.Z.; Zwart, Heiko J.

    2001-01-01

    In this paper we study systems in which the system operator, $A$, has a Riesz basis of (generalized) eigenvectors. We show that this class is a subset of the class of spectral operators as studied by Dunford and Schwartz. For these systems we investigate several system theoretic properties, like

  18. SYNTHESIS, SPECTRAL CHARACTERIZATIONS AND ...

    African Journals Online (AJOL)

    Preferred Customer

    The carbon atom C5, bonded to the chlorine atom, appears at ca. 124 ppm in all of the compounds [63, 70]. Table 6. 13C-NMR spectral (APT) data of the compounds (δC, as ppm, in DMSO-d6).

  19. Further remarks on convergence of decomposition method.

    Science.gov (United States)

    Cherruault, Y; Adomian, G; Abbaoui, K; Rach, R

    1995-01-01

    The decomposition method solves a wide class of nonlinear functional equations. This method uses a series solution with rapid convergence. This paper is intended as a useful review and clarification of related issues.

  20. A Decomposition Theorem for Finite Automata.

    Science.gov (United States)

    Santa Coloma, Teresa L.; Tucci, Ralph P.

    1990-01-01

    Described is automata theory which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)

  1. Decomposition Analysis of Forest Ecosystem Services Values

    National Research Council Canada - National Science Library

    Hidemichi Fujii; Masayuki Sato; Shunsuke Managi

    2017-01-01

    .... We applied two approaches: a contingent valuation method for estimating the forest ecosystem service value per area and a decomposition analysis for identifying the main driving factors of changes in the value of forest ecosystem services...

  2. The decomposition of estuarine macrophytes under different ...

    African Journals Online (AJOL)

    2013-04-29

    Apr 29, 2013 ... knowledge of the decomposition rates of algal species in order to validate their role in the ... sure in the Great Brak Estuary, numerous filamentous green algae ... structure and functioning of the estuary and as such need to.

  3. Classification of breast microcalcifications using spectral mammography

    Science.gov (United States)

    Ghammraoui, B.; Glick, S. J.

    2017-03-01

    Purpose: To investigate the potential of spectral mammography to distinguish between type I calcifications, consisting of calcium oxalate dihydrate or weddellite compounds that are more often associated with benign lesions, and type II calcifications containing hydroxyapatite which are predominantly associated with malignant tumors. Methods: Using a ray tracing algorithm, we simulated the total number of x-ray photons recorded by the detector at one pixel from a single pencil-beam projection through a breast of 50/50 (adipose/glandular) tissues with inserted microcalcifications of different types and sizes. Material decomposition using two energy bins was then applied to characterize the simulated calcifications as hydroxyapatite or weddellite using maximum-likelihood estimation, taking into account the polychromatic source, the detector response function and the energy-dependent attenuation. Results: Simulation tests were carried out for different doses and calcification sizes for multiple realizations. The results were summarized using receiver operating characteristic (ROC) analysis with the area under the curve (AUC) taken as an overall indicator of discrimination performance and showing high AUC values up to 0.99. Conclusion: Our simulation results obtained for a uniform breast imaging phantom indicate that spectral mammography using two energy bins has the potential to be used as a non-invasive method for discrimination between type I and type II microcalcifications to improve early breast cancer diagnosis and reduce the number of unnecessary breast biopsies.
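
    The maximum-likelihood discrimination step can be illustrated with a toy example: given expected two-bin counts under each calcification hypothesis, the observed counts are assigned to the hypothesis with the larger Poisson log-likelihood. The expected counts below are invented, not simulated physics.

```python
# Toy illustration of the maximum-likelihood step: compare Poisson log-likelihoods of the
# observed two-bin counts under the two calcification hypotheses and pick the larger.
# The expected counts are invented for illustration only.
import numpy as np
from scipy.stats import poisson

expected = {
    "hydroxyapatite": np.array([820.0, 450.0]),   # expected (low-bin, high-bin) counts
    "weddellite":     np.array([860.0, 430.0]),
}
observed = np.array([855, 436])

loglik = {name: poisson.logpmf(observed, mu).sum() for name, mu in expected.items()}
print(max(loglik, key=loglik.get), loglik)
```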

  4. Multipartite graph decomposition: cycles and closed trails

    Directory of Open Access Journals (Sweden)

    Elizabeth J. Billington

    2004-11-01

    Full Text Available This paper surveys results on cycle decompositions of complete multipartite graphs (where the parts are not all of size 1, so the graph is not K_n), in the case that the cycle lengths are “small”. Cycles up to length n are considered, when the complete multipartite graph has n parts, but not Hamilton cycles. Properties which the decompositions may have, such as being gregarious, are also mentioned.

  5. Microbiological decomposition of bagasse after radiation pasteurization

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Hitoshi; Ishigaki, Isao

    1987-11-01

    Microbiological decomposition of bagasse was studied for upgrading to animal feeds after radiation pasteurization. Solid-state culture media of bagasse were prepared with addition of some amount of inorganic salts as a nitrogen source, and after irradiation, fungi were inoculated for cultivation. In this study, many kinds of cellulolytic fungi such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride were used for comparison of decomposition of crude fibers. In alkali-nontreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose 25 to 34 % of crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. In contrast, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29-47 %, comparable to the Pleurotus species or C. cinereus. Other mushroom species such as L. edodes showed little decomposition ability even after alkali treatment. Radiation treatment with 10 kGy did not enhance the decomposition of bagasse compared with steam treatment, whereas higher radiation doses slightly enhanced the decomposition of crude fibers by microorganisms.

  6. Wavelength conversion based spectral imaging

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin

    There has been a strong, application-driven development of Si-based cameras and spectrometers for imaging and spectral analysis of light in the visible and near-infrared spectral range. This has resulted in very efficient devices, with high quantum efficiency, good signal-to-noise ratio and high resolution for this spectral region. Today, an increasing number of applications exist outside the spectral region covered by Si-based devices, e.g. within cleantech, medical or food imaging. We present a technology based on wavelength conversion which will extend the spectral coverage of state-of-the-art visible or near-infrared cameras and spectrometers to include other spectral regions of interest.

  7. Spectral-spatial classification combined with diffusion theory based inverse modeling of hyperspectral images

    Science.gov (United States)

    Paluchowski, Lukasz A.; Bjorgan, Asgeir; Nordgaard, Hâvard B.; Randeberg, Lise L.

    2016-02-01

    Hyperspectral imagery opens a new perspective for biomedical diagnostics and tissue characterization. High spectral resolution can give insight into optical properties of the skin tissue. However, at the same time the amount of collected data represents a challenge when it comes to decomposition into clusters and extraction of useful diagnostic information. In this study spectral-spatial classification and inverse diffusion modeling were applied to hyperspectral images obtained from a porcine burn model using a hyperspectral push-broom camera. The implemented method takes advantage of spatial and spectral information simultaneously, and provides information about the average optical properties within each cluster. The implemented algorithm allows mapping spectral and spatial heterogeneity of the burn injury as well as dynamic changes of spectral properties within the burn area. The combination of statistical and physics informed tools allowed for initial separation of different burn wounds and further detailed characterization of the injuries in short post-injury time.

  8. Aridity and decomposition processes in complex landscapes

    Science.gov (United States)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22) where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above-ground (0.2 mm mesh) and microbes below-ground (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. Then the dryness index was related to decomposition rates to evaluate if small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
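
    The single-pool negative exponential model referred to above, m(t) = exp(-k·t) for the fraction of litter mass remaining, can be fitted in a few lines; the litter-bag data below are synthetic.

```python
# Minimal sketch of fitting the single-pool negative exponential litter-decay model
# m(t) = exp(-k*t) (fraction of initial mass remaining) to synthetic litter-bag data.
import numpy as np
from scipy.optimize import curve_fit

def mass_remaining(t, k):
    return np.exp(-k * t)

t_months = np.array([1, 2, 4, 7, 12], dtype=float)     # collection times (months)
fraction = np.array([0.93, 0.88, 0.77, 0.63, 0.45])    # synthetic mass-remaining data

(k_fit,), _ = curve_fit(mass_remaining, t_months, fraction, p0=[0.05])
print(f"decomposition rate k = {k_fit:.3f} per month")
```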

  9. Characterization of volatile organic compounds from human analogue decomposition using thermal desorption coupled to comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry.

    Science.gov (United States)

    Stadler, Sonja; Stefanuto, Pierre-Hugues; Brokl, Michał; Forbes, Shari L; Focant, Jean-François

    2013-01-15

    Complex processes of decomposition produce a variety of chemicals as soft tissues and their component parts are broken down. Among others, these decomposition byproducts include volatile organic compounds (VOCs) responsible for the odor of decomposition. Human remains detection (HRD) canines utilize this odor signature to locate human remains during police investigations and recovery missions in the event of a mass disaster. Currently, it is unknown what compounds or combinations of compounds are recognized by the HRD canines. Furthermore, a comprehensive decomposition VOC profile remains elusive. This is likely due to difficulties associated with the nontarget analysis of complex samples. In this study, cadaveric VOCs were collected from the decomposition headspace of pig carcasses and were further analyzed using thermal desorption coupled to comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (TD-GC × GC-TOFMS). Along with an advanced data handling methodology, this approach allowed for enhanced characterization of these complex samples. The additional peak capacity of GC × GC, the spectral deconvolution algorithms applied to unskewed mass spectral data, and the use of a robust data mining strategy generated a characteristic profile of decomposition VOCs across the various stages of soft-tissue decomposition. The profile was comprised of numerous chemical families, particularly alcohols, carboxylic acids, aromatics, and sulfides. Characteristic compounds identified in this study, e.g., 1-butanol, 1-octen-3-ol, 2- and 3-methylbutanoic acid, hexanoic acid, octanal, indole, phenol, benzaldehyde, dimethyl disulfide, and trisulfide, are potential target compounds of decomposition odor. This approach will facilitate the comparison of complex odor profiles and produce a comprehensive VOC profile for decomposition.

  10. Decomposition of hydroxylamine by hemoglobin.

    Science.gov (United States)

    Bazylinski, D A; Arkowitz, R A; Hollocher, T C

    1987-12-01

    The reaction between hydroxylamine (NH2OH) and human hemoglobin (Hb) at pH 6-8 and the reaction between NH2OH and methemoglobin (Hb+) chiefly at pH 7 were studied under anaerobic conditions at 25 degrees C. In the presence of cyanide, which was used to trap Hb+, Hb was oxidized by NH2OH to methemoglobin cyanide with production of about 0.5 mol NH4+/mol of heme oxidized at pH 7. The conversion of Hb to Hb+ was first order in [Hb] (or nearly so) but the pseudo-first-order rate constant was not strictly proportional to [NH2OH]. Thus, the apparent second-order rate constant at pH 7 decreased from about 30 M-1 s-1 to a limiting value of 11.3 M-1 s-1 with increasing [NH2OH]. The rate of Hb oxidation was not much affected by cyanide, whereas there was no reaction between NH2OH and carbonmonoxyhemoglobin (HbCO). The pseudo-first-order rate constant for Hb oxidation at 500 microM NH2OH increased from about 0.008 s-1 at pH 6 to 0.02 s-1 at pH 8. The oxidation of Hb by NH2OH terminated prematurely at 75-90% completion at pH 7 and at 30-35% completion at pH 8. Data on the premature termination of reaction fit the titration curve for a group with pK = 7.5-7.7. NH2OH was decomposed by Hb+ to N2, NH4+, and a small amount of N2O in what appears to be a dismutation reaction. Nitrite and hydrazine were not detected, and N2 and NH4+ were produced in nearly equimolar amounts. The dismutation reaction was first order in [Hb+] and [NH2OH] only at low concentrations of reactants and was cleanly inhibited by cyanide. The spectrum of Hb+ remained unchanged during the reaction, except for the gradual formation of some choleglobin-like (green) pigment, whereas in the presence of CO, HbCO was formed. Kinetics are consistent with the view advanced previously by J. S. Colter and J. H. Quastel [(1950) Arch. Biochem. 27, 368-389] that the decomposition of NH2OH proceeds by a mechanism involving a Hb/Hb+ cycle (reactions [1] and [2]) in which Hb is oxidized to Hb+ by NH2OH.

  11. Spectral Anonymization of Data.

    Science.gov (United States)

    Lasko, Thomas A; Vinterbo, Staal A

    2010-03-01

    The goal of data anonymization is to allow the release of scientifically useful data in a form that protects the privacy of its subjects. This requires more than simply removing personal identifiers from the data, because an attacker can still use auxiliary information to infer sensitive individual information. Additional perturbation is necessary to prevent these inferences, and the challenge is to perturb the data in a way that preserves its analytic utility. No existing anonymization algorithm provides both perfect privacy protection and perfect analytic utility. We make the new observation that anonymization algorithms are not required to operate in the original vector-space basis of the data, and many algorithms can be improved by operating in a judiciously chosen alternate basis. A spectral basis derived from the data's eigenvectors is one that can provide substantial improvement. We introduce the term spectral anonymization to refer to an algorithm that uses a spectral basis for anonymization, and we give two illustrative examples. We also propose new measures of privacy protection that are more general and more informative than existing measures, and a principled reference standard with which to define adequate privacy protection.
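
    A toy sketch of the core idea, perturbing data in a basis derived from its eigenvectors rather than in the original coordinates, is shown below; it is not either of the paper's two example algorithms.

```python
# Toy sketch: perturb data in a spectral (eigenvector-derived) basis rather than the
# original coordinate basis, then map back. Illustrative only, not the paper's algorithms.
import numpy as np

rng = np.random.default_rng(6)
X = rng.multivariate_normal([0, 0], [[3.0, 1.2], [1.2, 1.0]], size=500)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt: spectral basis vectors
scores = Xc @ Vt.T                                  # data expressed in the spectral basis
scores_noisy = scores + 0.2 * scores.std(axis=0) * rng.standard_normal(scores.shape)
X_anon = scores_noisy @ Vt + X.mean(axis=0)         # back to the original coordinates

print(np.cov(X, rowvar=False).round(2))
print(np.cov(X_anon, rowvar=False).round(2))        # covariance structure largely preserved
```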

  12. Local Fractional Adomian Decomposition and Function Decomposition Methods for Laplace Equation within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Sheng-Ping Yan

    2014-01-01

    Full Text Available We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods which are both very effective and straightforward for solving the differential equations with local fractional derivative.

  13. On the Equivalence of Nonnegative Matrix Factorization and K-means- Spectral Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Chris; He, Xiaofeng; Simon, Horst D.; Jin, Rong

    2005-12-04

    We provide a systematic analysis of nonnegative matrix factorization (NMF) relating to data clustering. We generalize the usual X = FG^T decomposition to the symmetric W = HH^T and W = HSH^T decompositions. We show that (1) W = HH^T is equivalent to Kernel K-means clustering and the Laplacian-based spectral clustering. (2) X = FG^T is equivalent to simultaneous clustering of rows and columns of a bipartite graph. We emphasize the importance of orthogonality in NMF and the soft clustering nature of NMF. These results are verified with experiments on face images and newsgroups.
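
    The NMF-clustering connection can be demonstrated on toy data: reading each sample's cluster as the argmax of its NMF coefficients in X ≈ FG^T typically agrees with K-means when the clusters are well separated. The sketch uses scikit-learn's plain NMF, not the symmetric W = HH^T or W = HSH^T factorizations analyzed in the paper.

```python
# Small demonstration of the NMF / clustering connection on well-separated toy data:
# cluster labels from the argmax of NMF coefficients largely agree with K-means labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
X = np.vstack([rng.random((50, 6)) + [4, 4, 4, 0, 0, 0],
               rng.random((50, 6)) + [0, 0, 0, 4, 4, 4]])

nmf_labels = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X).argmax(axis=1)
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
agreement = max(np.mean(nmf_labels == km_labels), np.mean(nmf_labels != km_labels))
print(f"NMF / K-means agreement: {agreement:.2f}")
```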

  14. Spectral CT of the extremities with a silicon strip photon counting detector

    Science.gov (United States)

    Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report development of a Si-strip PCXD system originally developed for mammography with potential application to spectral CT of musculoskeletal extremities, including challenges associated with sparse sampling, spectral calibration, and optimization for higher energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, fixed anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft-tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15 and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the sampling distribution from 56% minimum sampling of voxels to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was .... Material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.

  15. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika

    2013-02-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  16. Spectral signatures of chirality

    DEFF Research Database (Denmark)

    Pedersen, Jesper Goor; Mortensen, Asger

    2009-01-01

    We present a new way of measuring chirality, via the spectral shift of photonic band gaps in one-dimensional structures. We derive an explicit mapping of the problem of oblique incidence of circularly polarized light on a chiral one-dimensional photonic crystal with negligible index contrast...... to the formally equivalent problem of linearly polarized light incident on-axis on a non-chiral structure with index contrast. We derive analytical expressions for the first-order shifts of the band gaps for negligible index contrast. These are modified to give good approximations to the band gap shifts also...

  17. Spectral tripartitioning of networks

    OpenAIRE

    Richardson, Thomas; Mucha, Peter J; Porter, Mason A.

    2008-01-01

    We formulate a spectral graph-partitioning algorithm that uses the two leading eigenvectors of the matrix corresponding to a selected quality function to split a network into three communities in a single step. In so doing, we extend the recursive bipartitioning methods developed by Newman [Proc. Nat. Acad. Sci. 103, 8577 (2006); Phys. Rev. E 74, 036104 (2006)] to allow one to consider the best available two-way and three-way divisions at each recursive step. We illustrate the method using si...
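
    A rough sketch of the one-step idea follows: embed each node with the two leading eigenvectors of the modularity matrix and split the embedding into three groups. The authors' method works with a selected quality function and compares the best available two-way and three-way divisions; this simplification does not.

```python
# Rough sketch: embed nodes with the two leading eigenvectors of the modularity matrix
# of a planted three-community graph and split into three groups with k-means.
# This is a simplification of the tripartitioning method described above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
sizes, p_in, p_out = [15, 15, 15], 0.6, 0.05
n = sum(sizes)
blocks = np.repeat([0, 1, 2], sizes)
prob = np.where(blocks[:, None] == blocks[None, :], p_in, p_out)
A = np.triu(rng.random((n, n)) < prob, k=1).astype(float)
A = A + A.T                                      # symmetric adjacency matrix

k = A.sum(axis=1)
m = A.sum() / 2.0
B = A - np.outer(k, k) / (2.0 * m)               # modularity matrix
eigvals, eigvecs = np.linalg.eigh(B)
embedding = eigvecs[:, -2:]                      # two leading eigenvectors

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
print(labels)
```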

  18. Inverse boundary spectral problems

    CERN Document Server

    Kachalov, Alexander; Lassas, Matti

    2001-01-01

    Inverse boundary problems are a rapidly developing area of applied mathematics with applications throughout physics and the engineering sciences. However, the mathematical theory of inverse problems remains incomplete and needs further development to aid in the solution of many important practical problems. Inverse Boundary Spectral Problems develops a rigorous theory for solving several types of inverse problems exactly. In it, the authors consider the following: "Can the unknown coefficients of an elliptic partial differential equation be determined from the eigenvalues and the boundary value

  19. QCD spectral sum rules

    CERN Document Server

    Narison, Stéphan

    The aim of the book is to give an introduction to the method of QCD Spectral Sum Rules and to review its developments. After some general introductory remarks, Chiral Symmetry, the Historical Developments of the Sum Rules and the necessary materials for perturbative QCD including the MS regularization and renormalization schemes are discussed. The book also gives a critical review and some improvements of the wide uses of the QSSR in Hadron Physics and QSSR beyond the Standard Hadron Phenomenology. The author has participated actively in this field since 1978 just before the expanding success

  20. Two Notes on Discrimination and Decomposition

    DEFF Research Database (Denmark)

    Nielsen, Helena Skyt

    1998-01-01

    1. It turns out that the Oaxaca-Blinder wage decomposition is inadequate when it comes to calculation of separate contributions for indicator variables. The contributions are not robust against a change of reference group. I extend the Oaxaca-Blinder decomposition to handle this problem. 2. The paper suggests how to use the logit model to decompose the gender difference in the probability of an occurrence. The technique is illustrated by an analysis of discrimination in child labor in rural Zambia.
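
    For reference, a minimal two-fold Oaxaca-Blinder decomposition of a mean outcome gap into "explained" (endowment) and "unexplained" (coefficient) parts, on synthetic data, is sketched below; the paper's extension for indicator variables and the logit-based decomposition are not implemented here.

```python
# Minimal two-fold Oaxaca-Blinder decomposition of a mean outcome gap, with synthetic data.
# The paper's extension for indicator variables and the logit-based variant are omitted.
import numpy as np

rng = np.random.default_rng(9)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def with_const(x):
    return np.column_stack([np.ones(len(x)), x])

x_a, x_b = rng.normal(12, 2, 800), rng.normal(10, 2, 800)      # e.g. years of schooling
y_a = 1.0 + 0.08 * x_a + rng.normal(0, 0.1, 800)               # group A log wages
y_b = 0.8 + 0.07 * x_b + rng.normal(0, 0.1, 800)               # group B log wages

beta_a, beta_b = ols(with_const(x_a), y_a), ols(with_const(x_b), y_b)
xbar_a, xbar_b = with_const(x_a).mean(axis=0), with_const(x_b).mean(axis=0)

gap = y_a.mean() - y_b.mean()
explained = (xbar_a - xbar_b) @ beta_a        # endowment differences valued at A's coefficients
unexplained = xbar_b @ (beta_a - beta_b)      # coefficient differences ("unexplained" part)
print(round(gap, 3), round(explained + unexplained, 3))   # the two parts sum to the gap
```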

  1. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  2. Claw-decompositions and Tutte-orientations

    DEFF Research Database (Denmark)

    Barat, Janos; Thomassen, Carsten

    2006-01-01

    We conjecture that, for each tree T, there exists a natural number k(T) such that the following holds: If G is a k(T)-edge-connected graph such that |E(T)| divides |E(G)|, then the edges of G can be divided into parts, each of which is isomorphic to T. We prove that for T = K_{1,3} (the claw), this holds: every [...]-edge-connected graph with n vertices has an edge-decomposition into claws provided its number of edges is divisible by 3. We also prove that every triangulation of a surface has an edge-decomposition into claws. (C) 2006 Wiley Periodicals, Inc.

  3. Surface Modes of Coherent Spinodal Decomposition

    Science.gov (United States)

    Tang, Ming; Karma, Alain

    2012-06-01

    We use linear stability theory and numerical simulations to show that spontaneous phase separation in elastically coherent solids is fundamentally altered by the presence of free surfaces. Because of misfit stress relaxation near surfaces, phase separation is mediated by unique surface modes of spinodal decomposition that have faster kinetics than bulk modes and are unstable even when spinodal decomposition is suppressed in the bulk. Consequently, in the presence of free surfaces, the limit of metastability of supersaturated solid solutions of crystalline materials is shifted from the coherent to chemical spinodal.

  4. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N; Haddad, Paul R

    2001-01-01

    The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming, which is, of course

  5. Decomposition of aquatic plants in lakes

    Energy Technology Data Exchange (ETDEWEB)

    Godshalk, G.L.

    1977-01-01

This study was carried out to systematically determine the effects of temperature and oxygen concentration, two environmental parameters crucial to lake metabolism in general, on decomposition of five species of aquatic vascular plants of three growth forms in a Michigan lake. Samples of dried plant material were decomposed in flasks in the laboratory under three different oxygen regimes, aerobic-to-anaerobic, strict anaerobic, and aerated, each at 10°C and 25°C. In addition, in situ decomposition of the same species was monitored using the litter bag technique under four conditions.

  6. THE STUDY OF SPECTRUM RECONSTRUCTION BASED ON FUZZY SET FULL CONSTRAINT AND MULTIENDMEMBER DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2017-09-01

A hyperspectral imaging system can obtain spectral and spatial information simultaneously, with bandwidths down to the level of 10 nm or even less. Hyperspectral remote sensing can therefore detect objects that cannot be detected by wide-band remote sensing, making it one of the most active areas of remote sensing. In this study, under fully constrained fuzzy-set conditions, a Normalized Multi-Endmember Decomposition Method (NMEDM) for vegetation, water, and soil was proposed to reconstruct hyperspectral data using a large number of high-quality multispectral data and auxiliary spectral library data. The study considered spatial and temporal variation and decreased the calculation time required to reconstruct the hyperspectral data. The results of spectral reconstruction based on NMEDM show that the reconstructed data are of good quality and have certain applications, which makes spectral feature identification possible. The method also extends the depth and breadth of remote sensing data applications, helping to explore the relationship between multispectral and hyperspectral data.

  7. KOALA: A program for the processing and decomposition of transient spectra

    Science.gov (United States)

    Grubb, Michael P.; Orr-Ewing, Andrew J.; Ashfold, Michael N. R.

    2014-06-01

    Extracting meaningful kinetic traces from time-resolved absorption spectra is a non-trivial task, particularly for solution phase spectra where solvent interactions can substantially broaden and shift the transition frequencies. Typically, each spectrum is composed of signal from a number of molecular species (e.g., excited states, intermediate complexes, product species) with overlapping spectral features. Additionally, the profiles of these spectral features may evolve in time (i.e., signal nonlinearity), further complicating the decomposition process. Here, we present a new program for decomposing mixed transient spectra into their individual component spectra and extracting the corresponding kinetic traces: KOALA (Kinetics Observed After Light Absorption). The software combines spectral target analysis with brute-force linear least squares fitting, which is computationally efficient because of the small nonlinear parameter space of most spectral features. Within, we demonstrate the application of KOALA to two sets of experimental transient absorption spectra with multiple mixed spectral components. Although designed for decomposing solution-phase transient absorption data, KOALA may in principle be applied to any time-evolving spectra with multiple components.
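
    To make the fitting step concrete, here is a minimal sketch of the brute-force linear least-squares idea mentioned above: with the component spectra held fixed, the kinetic trace of each component is recovered by least squares at every time delay. The Gaussian component shapes, the wavelength grid and all parameters are illustrative assumptions rather than KOALA's actual model.

      # Illustrative sketch; synthetic spectra and delays, not KOALA's model.
      import numpy as np

      wavelength = np.linspace(400.0, 700.0, 301)            # nm, hypothetical grid

      def gaussian(center, width):
          return np.exp(-0.5 * ((wavelength - center) / width) ** 2)

      # Basis matrix: one column per species (excited state, intermediate, product)
      B = np.column_stack([gaussian(450, 20), gaussian(520, 30), gaussian(610, 25)])

      # Synthetic "measured" transient spectra: decaying/growing amplitudes plus noise
      t = np.linspace(0.0, 10.0, 50)                         # ps, hypothetical delays
      true_traces = np.column_stack([np.exp(-t / 2),
                                     np.exp(-t / 2) - np.exp(-t),
                                     1 - np.exp(-t / 3)])
      data = true_traces @ B.T + 0.01 * np.random.default_rng(1).normal(size=(t.size, wavelength.size))

      # Kinetic traces: solve data.T ~= B @ traces for all delays at once
      traces, *_ = np.linalg.lstsq(B, data.T, rcond=None)
      print(traces.T.shape)   # (n_delays, n_components): one kinetic trace per component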

  8. The Study of Spectrum Reconstruction Based on Fuzzy Set Full Constraint and Multiendmember Decomposition

    Science.gov (United States)

    Sun, Y.; Lin, Y.; Hu, X.; Zhao, S.; Liu, S.; Tong, Q.; Helder, D.; Yan, L.

    2017-09-01

A hyperspectral imaging system can obtain spectral and spatial information simultaneously, with bandwidths down to the level of 10 nm or even less. Hyperspectral remote sensing can therefore detect objects that cannot be detected by wide-band remote sensing, making it one of the most active areas of remote sensing. In this study, under fully constrained fuzzy-set conditions, a Normalized Multi-Endmember Decomposition Method (NMEDM) for vegetation, water, and soil was proposed to reconstruct hyperspectral data using a large number of high-quality multispectral data and auxiliary spectral library data. The study considered spatial and temporal variation and decreased the calculation time required to reconstruct the hyperspectral data. The results of spectral reconstruction based on NMEDM show that the reconstructed data are of good quality and have certain applications, which makes spectral feature identification possible. The method also extends the depth and breadth of remote sensing data applications, helping to explore the relationship between multispectral and hyperspectral data.

  9. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    OpenAIRE

    Chulhee Park; Moon Gi Kang

    2016-01-01

A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantages that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component of each RGB co...

  10. Reduction of Non-stationary Noise using a Non-negative Latent Variable Decomposition

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Larsen, Jan

    2008-01-01

We present a method for suppression of non-stationary noise in single-channel recordings of speech. The method is based on a non-negative latent variable decomposition model for the speech and noise signals, learned directly from a noisy mixture. In non-speech regions an overcomplete basis...... is learned for the noise that is then used to jointly estimate the speech and the noise from the mixture. We compare the method to the classical spectral subtraction approach, where the noise spectrum is estimated as the average over non-speech frames. The proposed method significantly outperforms....
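
    For comparison, the classical spectral-subtraction baseline mentioned above can be sketched as follows; the voice-activity labels, frame lengths and input signal are illustrative assumptions, and this is the baseline, not the non-negative latent variable model itself.

      # Spectral-subtraction baseline sketch; hypothetical labels and parameters.
      import numpy as np
      from scipy.signal import stft, istft

      fs = 16000
      rng = np.random.default_rng(0)
      noisy = rng.normal(size=fs * 2)                   # stand-in for a noisy speech recording
      speech_active = np.zeros(noisy.size, dtype=bool)  # hypothetical voice-activity labels
      speech_active[fs:] = True                         # pretend speech starts after 1 s

      f, t, Z = stft(noisy, fs=fs, nperseg=512)
      frame_centres = (t * fs).astype(int).clip(0, noisy.size - 1)
      noise_frames = ~speech_active[frame_centres]

      noise_mag = np.abs(Z[:, noise_frames]).mean(axis=1, keepdims=True)
      clean_mag = np.maximum(np.abs(Z) - noise_mag, 0.0)   # subtract and floor at zero
      Z_clean = clean_mag * np.exp(1j * np.angle(Z))       # keep the noisy phase

      _, enhanced = istft(Z_clean, fs=fs, nperseg=512)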

  11. Introducing sensor spectral response into the classification process

    Science.gov (United States)

    Mesas-Carrascosa, Francisco Javier; Castillejo-González, Isabel Luisa; de la Orden, Manuel Sánchez; Porras, Alfonso García-Ferrer

    2013-04-01

Many sensors have overlapping spectral bands and therefore do not define an orthogonal space. If a spectral distance is measured in such a space, as in first-order statistical classifiers, the result will be less accurate. Image classification processes are independent of the spectral response function of the sensor, so this overlap is usually ignored during image processing. This paper presents a methodology that introduces the spectral response function of sensors into the classification process to increase its accuracy. This process takes place in two steps: first, incident energy values of the sensors are reconstructed; second, the energy of the bands is set in an orthonormal space using a matrix singular value decomposition. Sensors with and without overlapping spectral bands were simulated to evaluate the reconstruction of energy values. The whole process was implemented on three types of images with medium, high and very high spatial resolution obtained with the sensors ASTER, IKONOS and DMC camera, respectively. These images were classified by ISODATA and minimum distance algorithms. The ISODATA classifier showed well-defined features in the processed images, while the results were less clear in the original images. At the same time, the minimum distance classifier showed that overall accuracy of the processed images increased as the maximum tolerance distance decreased compared to the original images.
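
    A minimal sketch of the orthonormalisation idea in the second step, assuming synthetic overlapping Gaussian band responses rather than the actual ASTER, IKONOS or DMC response functions: the singular value decomposition of the response matrix yields a transform under which the effective band responses become orthonormal.

      # Illustrative sketch; synthetic band responses, not the real sensors.
      import numpy as np

      wl = np.linspace(400.0, 900.0, 501)                          # nm, hypothetical sampling grid
      centres, widths = [480, 560, 660, 830], [60, 60, 70, 90]      # deliberately overlapping bands
      R = np.stack([np.exp(-0.5 * ((wl - c) / w) ** 2)
                    for c, w in zip(centres, widths)])              # (bands, wavelengths) responses

      # R = U diag(s) Vt; the rows of Vt are orthonormal "effective" responses, and
      # T maps band measurements m = R @ e into coordinates with respect to them.
      U, s, Vt = np.linalg.svd(R, full_matrices=False)
      T = np.diag(1.0 / s) @ U.T

      bands = np.random.default_rng(0).uniform(0, 1, size=(1000, 4))   # band values for 1000 pixels
      bands_orth = bands @ T.T                                         # values in the orthonormal band space
      # Spectral distances computed on bands_orth no longer double-count the band overlap.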

  12. Spectral estimation—What is new? What is next?

    Science.gov (United States)

    Tary, Jean Baptiste; Herrera, Roberto Henry; Han, Jiajun; van der Baan, Mirko

    2014-12-01

    Spectral estimation, and corresponding time-frequency representation for nonstationary signals, is a cornerstone in geophysical signal processing and interpretation. The last 10-15 years have seen the development of many new high-resolution decompositions that are often fundamentally different from Fourier and wavelet transforms. These conventional techniques, like the short-time Fourier transform and the continuous wavelet transform, show some limitations in terms of resolution (localization) due to the trade-off between time and frequency localizations and smearing due to the finite size of the time series of their template. Well-known techniques, like autoregressive methods and basis pursuit, and recently developed techniques, such as empirical mode decomposition and the synchrosqueezing transform, can achieve higher time-frequency localization due to reduced spectral smearing and leakage. We first review the theory of various established and novel techniques, pointing out their assumptions, adaptability, and expected time-frequency localization. We illustrate their performances on a provided collection of benchmark signals, including a laughing voice, a volcano tremor, a microseismic event, and a global earthquake, with the intention to provide a fair comparison of the pros and cons of each method. Finally, their outcomes are discussed and possible avenues for improvements are proposed.
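
    As a small illustration of the resolution trade-off discussed above, the following sketch applies the conventional short-time Fourier transform to a synthetic two-component signal with two window lengths; the signal and window sizes are arbitrary assumptions, and the higher-resolution methods reviewed here (e.g. synchrosqueezing) are not reproduced.

      # STFT trade-off sketch on a synthetic chirp plus tone; illustrative only.
      import numpy as np
      from scipy.signal import stft

      fs = 1000.0
      t = np.arange(0, 4.0, 1 / fs)
      signal = np.sin(2 * np.pi * (50 + 25 * t) * t) + np.sin(2 * np.pi * 300 * t)  # chirp + tone

      for nperseg in (64, 512):   # short window: fine in time, coarse in frequency; long window: the opposite
          f, tt, Z = stft(signal, fs=fs, nperseg=nperseg)
          df, dt = f[1] - f[0], tt[1] - tt[0]
          print(f"window={nperseg:4d}  frequency bin={df:6.2f} Hz  time step={dt:6.3f} s")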

  13. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
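
    The price-coordination idea can be sketched on a toy problem: two subsystems each minimise a local quadratic cost, a shared resource constraint couples them, and a coordinator updates the price by subgradient ascent. The costs, constraint and step size below are illustrative assumptions, not the chapter's model predictive control setup.

      # Toy dual decomposition sketch; all numbers are assumptions.
      import numpy as np

      r = np.array([8.0, 5.0])   # each subsystem's preferred (unconstrained) input
      C = 10.0                   # shared capacity that couples the subsystems
      lam, step = 0.0, 0.4       # price on the coupling constraint and ascent step

      for k in range(100):
          # Local problems: min_u (u - r_i)^2 + lam * u has the closed form below
          u = r - lam / 2.0
          # Coordinator: move the price in the direction of the constraint violation
          lam += step * (u.sum() - C)

      print(u, lam)   # converges towards u = [6.5, 3.5], lam = 3.0

    Each subsystem only needs its own cost and the current price, which is what allows the computation to be distributed.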

  14. Domain decomposition methods for hyperbolic problems

    Indian Academy of Sciences (India)

    problems using domain decomposition but this technique faces difficulties if the system becomes characteristic at the inter-element boundaries. By making the inter-element boundaries move faster than the fastest wave speed associated with the hyperbolic system we are able to overcome this problem. Keywords. Domain ...

  15. Lignin Derivatives Formation In Catalysed Thermal Decomposition ...

    African Journals Online (AJOL)

    denise

in the heat of gasification and mass fraction of non-combustible volatiles in solid. NaOH-catalysed thermal decomposition of pure and fire-retardant cellulose. Kuroda and co-workers [14] studied the Curie-point pyrolysis of Japanese softwood species of the red pine, cedar and cypress in the presence of inorganic substances ...

  16. Domain decomposition methods for hyperbolic problems

    Indian Academy of Sciences (India)

    In this paper a method is developed for solving hyperbolic initial boundary value problems in one space dimension using domain decomposition, which can be extended to problems in several space dimensions. We minimize a functional which is the sum of squares of the 2 norms of the residuals and a term which is the ...

  17. KINETICS OF HYDROXIDE PROMOTED DECOMPOSITION OF ...

    African Journals Online (AJOL)

    1991-04-26

    (Received July 2?. 1990; revised April 26, 1991). ABSTRACT. The effects of varying concentrations of dimethyl sulphoxide in mixture with water on rates and activation parameters for the hydroxide promoted decomposition of tetraphenylphosphonium chloride have been studied. Increasing the DMSO content of the reaction ...

  18. The decomposition of estuarine macrophytes under different ...

    African Journals Online (AJOL)

    The estuary is subject to a variety of anthropogenic impacts (e.g. freshwater abstraction and sewage discharge) that increases its susceptibility to prolonged periods of mouth closure, eutrophication, and ultimately the formation of macroalgal blooms. The aim of this study was to determine the decomposition characteristics of ...

  19. Direct observation of nanowire growth and decomposition

    DEFF Research Database (Denmark)

    Rackauskas, Simas; Shandakov, Sergey D; Jiang, Hua

    2017-01-01

    knowledge, so far this has been only postulated, but never observed at the atomic level. By means of in situ environmental transmission electron microscopy we monitored and examined the atomic layer transformation at the conditions of the crystal growth and its decomposition using CuO nanowires selected...

  20. Preparation, Structure Characterization and Thermal Decomposition ...

    African Journals Online (AJOL)

    NJD

    thermal decomposition process of [Dy(m-MBA)3phen]2·H2O has been followed by thermal analysis. KEYWORDS ... X-ray diffraction, elemental analysis, UV and IR spectroscopy, .... diffractometer with graphite-monochromated Mo Kα radiation.

  1. Organic matter decomposition in simulated aquaculture ponds

    NARCIS (Netherlands)

    Torres Beristain, B.

    2005-01-01

    Different kinds of organic and inorganic compounds (e.g. formulated food, manures, fertilizers) are added to aquaculture ponds to increase fish production. However, a large part of these inputs are not utilized by the fish and are decomposed inside the pond. The microbiological decomposition of the

  2. Decomposition and nutrient release patterns of Pueraria ...

    African Journals Online (AJOL)

    Decomposition and nutrient release patterns of Pueraria phaseoloides, Flemingia macrophylla and Chromolaena odorata leaf residues in tropical land use ... The slowest releases, irrespective of type of leaf residue, were in Ca and Mg. The study concluded that among the planted fallows, Pueraria phaseoloides had the ...

  3. Methodologies in forensic and decomposition microbiology

    Science.gov (United States)

    Culturable microorganisms represent only 0.1-1% of the total microbial diversity of the biosphere. This has severely restricted the ability of scientists to study the microbial biodiversity associated with the decomposition of ephemeral resources in the past. Innovations in technology are bringing...

  4. Thermal decomposition of barium valerate in argon

    DEFF Research Database (Denmark)

    Torres, P.; Norby, Poul; Grivel, Jean-Claude

    2015-01-01

    The thermal decomposition of barium valerate (Ba(C4H9CO2)(2)/Ba-pentanoate) was studied in argon by means of thermogravimetry, differential thermal analysis, IR-spectroscopy, X-ray diffraction and hot-stage optical microscopy. Melting takes place in two different steps, at 200 degrees C and 280...

  5. Compactly supported frames for decomposition spaces

    DEFF Research Database (Denmark)

    Nielsen, Morten; Rasmussen, Kenneth Niemann

    2012-01-01

    In this article we study a construction of compactly supported frame expansions for decomposition spaces of Triebel-Lizorkin type and for the associated modulation spaces. This is done by showing that finite linear combinations of shifts and dilates of a single function with sufficient decay in b...

  6. The Algorithmic Complexity of Modular Decomposition

    NARCIS (Netherlands)

    J.C. Bioch (Cor)

    2001-01-01

    textabstractModular decomposition is a thoroughly investigated topic in many areas such as switching theory, reliability theory, game theory and graph theory. We propose an O(mn)-algorithm for the recognition of a modular set of a monotone Boolean function f with m prime implicants and n variables.

  7. Snapshot wavefield decomposition for heterogeneous velocity media

    NARCIS (Netherlands)

    Holicki, M.E.; Wapenaar, C.P.A.

    2017-01-01

We propose a novel directional decomposition operator for wavefield snapshots in heterogeneous-velocity media. The proposed operator demonstrates the link between the amplitude of pressure and particle-velocity plane waves in the wavenumber domain. The proposed operator requires two spatial Fourier

  8. Thermal decomposition of lead titanyl oxalate tetrahydrate

    NARCIS (Netherlands)

    van de Velde, G.M.H.; Oranje, P.J.D.

    1976-01-01

    The thermal behaviour of PbTiO(C2O4)2·4H2O (PTO) has been investigated, employing TG, quantitative DTA, infrared spectroscopy and (high temperature) X-ray powder diffraction. The decomposition involves four main steps. The first is the dehydration of the tetrahydrate (30–180°C), followed by a small

  9. Decomposition of variance for spatial Cox processes

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2013-01-01

Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with addit...

  10. Reference model decomposition in direct adaptive control

    NARCIS (Netherlands)

    Butler, H.; Honderd, G.; van Amerongen, J.

    1991-01-01

    This paper introduces the method of reference model decomposition as a way to improve the robustness of model reference adaptive control systems (MRACs) with respect to unmodelled dynamics with a known structure. Such unmodelled dynamics occur when some of the nominal plant dynamics are purposely

  11. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained...

  12. Factors affecting decomposition and Diptera colonization.

    Science.gov (United States)

    Campobasso, C P; Di Vella, G; Introna, F

    2001-08-15

Understanding the process of corpse decomposition is basic to establishing the postmortem interval (PMI) in any death investigation, even when using insect evidence. The sequence of postmortem changes in soft tissues usually gives an idea of how long an individual has been dead. However, modification of the decomposition process can considerably alter the estimate of the time of death. A body after death is sometimes subject to depredation by various types of animals, among which insects can have a predominant role in the breakdown of the corpse, thus accelerating the decomposition rate. The interference of the insect community in the decomposition process has been investigated by several experimental studies using animal models and very few contributions directly on cadavers. Several of the most frequent factors affecting PMI estimates, such as temperature, burial depth and access of the body to insects, are fully reviewed. On account of their activity and worldwide distribution, Diptera are the insects of greatest forensic interest. The knowledge of factors inhibiting or favouring colonization and Diptera development is a necessary prerequisite for estimating the PMI using entomological data.

  13. Spectral Inverse Quantum (Spectral-IQ) Method for Modeling Mesoporous Systems: Application on Silica Films by FTIR

    Directory of Open Access Journals (Sweden)

    Mihai V. Putz

    2012-11-01

The present work advances the inverse quantum (IQ) structural criterion for ordering and characterizing the porosity of mesosystems, based on the recently advanced ratio of the particle-to-wave nature of quantum objects within the extended Heisenberg uncertainty relationship, through employing the quantum fluctuation, both for free and observed quantum scattering information, as computed upon spectral identification of the wave-numbers specific to the maximum of the absorption intensity record and to the left-, right- and full-width at half maximum (FWHM) of the concerned bands of a given compound. It furnishes the hierarchy for classifying mesoporous systems from more particle-related behavior (porous, tight or ionic bindings) to more wave-like behavior (free or covalent bindings). This so-called spectral inverse quantum (Spectral-IQ) particle-to-wave assignment was illustrated on spectral measurements of FT-IR (bonding) bands' assignment for samples synthesized in different basic environments and with different thermal treatments on mesoporous materials obtained by the sol-gel technique with n-dodecyl trimethyl ammonium bromide (DTAB) and cetyltrimethylammonium bromide (CTAB) and their combination as cosolvents. The results were analyzed in the light of the so-called residual inverse quantum information, accounting for the free binding potency of the analyzed samples at the drying temperature, and were checked by cross-validation with thermal decomposition techniques by endo-exo thermal correlations at a higher temperature.

  14. The Slice Algorithm For Irreducible Decomposition of Monomial Ideals

    DEFF Research Database (Denmark)

    Roune, Bjarke Hammersholt

    2009-01-01

    Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance...

  15. Adomian decomposition method used to solve the gravity wave equations

    Science.gov (United States)

    Mungkasi, Sudi; Dheno, Maria Febronia Sedho

    2017-01-01

The gravity wave equations are considered. We solve these equations using the Adomian decomposition method. We find that the approximate Adomian decomposition solution to the gravity wave equations is accurate (physically correct) for the early stages of fluid flow.
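
    Because the gravity wave equations themselves are not reproduced in this record, the sketch below illustrates the Adomian decomposition idea on a simpler test problem, u'(t) = u(t) with u(0) = 1, where the successive terms are generated by repeated integration and the partial sums approach exp(t).

      # Adomian decomposition on a simpler linear test ODE (illustration only).
      import sympy as sp

      t = sp.symbols('t')
      terms = [sp.Integer(1)]        # u_0 comes from the initial condition u(0) = 1
      for n in range(6):             # u_{n+1}(t) = integral_0^t u_n(s) ds (linear case, no Adomian polynomials needed)
          terms.append(sp.integrate(terms[-1], (t, 0, t)))

      series = sp.simplify(sum(terms))
      print(series)                               # 1 + t + t**2/2 + ... + t**6/720
      print(sp.N(series.subs(t, 1)), sp.N(sp.E))  # partial sum at t = 1 vs exp(1)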

  16. Litter decomposition and nutrient dynamics of ten selected tree ...

    African Journals Online (AJOL)

    Litter decomposition processes in tropical rainforests are still poorly understood. Leaf litter decomposition and nutrient dynamics of ten contrasting tree species, Entandraphragma utile, Guibourtia tessmannii, Klainedoxa gabonensis, Musanga cecropioides, Panda oleosa, Plagiostyles africana, Pterocarpus soyauxii, ...

  17. TRIANGLE-SHAPED DC CORONA DISCHARGE DEVICE FOR MOLECULAR DECOMPOSITION

    Science.gov (United States)

    The paper discusses the evaluation of electrostatic DC corona discharge devices for the application of molecular decomposition. A point-to-plane geometry corona device with a rectangular cross section demonstrated low decomposition efficiencies in earlier experimental work. The n...

  18. Sliding Window Empirical Mode Decomposition -its performance and quality

    Directory of Open Access Journals (Sweden)

    Stepien Pawel

    2014-12-01

The proposed algorithm speeds up the computation (by about 10 times) with acceptable decomposition quality. Conclusions: The Sliding Window EMD algorithm is suitable for decomposition of long signals with high sampling frequency.

  19. Spectral analysis of growing graphs a quantum probability point of view

    CERN Document Server

    Obata, Nobuaki

    2017-01-01

    This book is designed as a concise introduction to the recent achievements on spectral analysis of graphs or networks from the point of view of quantum (or non-commutative) probability theory. The main topics are spectral distributions of the adjacency matrices of finite or infinite graphs and their limit distributions for growing graphs. The main vehicle is quantum probability, an algebraic extension of the traditional probability theory, which provides a new framework for the analysis of adjacency matrices revealing their non-commutative nature. For example, the method of quantum decomposition makes it possible to study spectral distributions by means of interacting Fock spaces or equivalently by orthogonal polynomials. Various concepts of independence in quantum probability and corresponding central limit theorems are used for the asymptotic study of spectral distributions for product graphs. This book is written for researchers, teachers, and students interested in graph spectra, their (asymptotic) spectr...

  20. Rectangular spectral collocation

    KAUST Repository

    Driscoll, Tobin A.

    2015-02-06

    Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon resampling differentiated polynomials into a lower-degree subspace makes differentiation matrices, and operators built from them, rectangular without any row deletions. Then, boundary and interface conditions can be adjoined to yield a square system. The resulting method is both flexible and robust, and avoids ambiguities that arise when applying the classical row deletion method outside of two-point scalar boundary-value problems. The new method is the basis for ordinary differential equation solutions in Chebfun software, and is demonstrated for a variety of boundary-value, eigenvalue and time-dependent problems.
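
    A toy sketch in the spirit of this approach (not Chebfun's implementation): differentiation of a degree-n Chebyshev expansion is naturally rectangular because it lowers the polynomial degree, so a two-point boundary-value problem u'' = f with u(-1) = u(1) = 0 is closed by adjoining two boundary rows to obtain a square system.

      # Rectangular-then-squared collocation sketch; degree and test problem are assumptions.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      n = 24                                                 # polynomial degree; n + 1 unknown coefficients
      k = np.arange(n - 1)
      x_int = np.cos((2 * k + 1) * np.pi / (2 * (n - 1)))    # n - 1 interior collocation points

      # Column j holds T_j''(x) at the interior points: a rectangular map before
      # the boundary rows are appended.
      A_int = np.zeros((n - 1, n + 1))
      for j in range(n + 1):
          e = np.zeros(n + 1)
          e[j] = 1.0
          A_int[:, j] = C.chebval(x_int, C.chebder(e, 2))

      A_bc = C.chebvander(np.array([-1.0, 1.0]), n)          # boundary rows u(-1) = 0, u(1) = 0
      A = np.vstack([A_int, A_bc])                           # now square: (n + 1) x (n + 1)

      f = -np.pi ** 2 * np.sin(np.pi * x_int)                # right-hand side of u'' = f
      rhs = np.concatenate([f, [0.0, 0.0]])
      coef = np.linalg.solve(A, rhs)

      x_test = np.linspace(-1, 1, 201)
      print(np.max(np.abs(C.chebval(x_test, coef) - np.sin(np.pi * x_test))))   # tiny error, near machine precision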

  1. Decomposition of Amino Diazeniumdiolates (NONOates): Molecular Mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V.

    2014-08-23

Although diazeniumdiolates (X[N(O)NO]⁻) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to slowly release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a qualitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R2N[N(O)NO]⁻, where R = -N(C2H5)2 (1), -N(C3H4NH2)2 (2), or -N(C2H4NH2)2 (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]⁻ group with apparent pKa and decomposition rate constants of 4.6 and 1 s⁻¹ for 1-H, 3.5 and 83 × 10⁻³ s⁻¹ for 2-H, and 3.8 and 3.3 × 10⁻³ s⁻¹ for 3-H. Although protonation occurs mainly on the O atoms of the functional group, only the minor R2N(H)N(O)NO tautomer (population ~0.01%, for 1) undergoes the N-N heterolytic bond cleavage (k ~10² s⁻¹ for 1) leading to amine and NO. Decompositions of protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and bond cleavage. The bond cleavage rates exhibit exceptional sensitivity to the nature of the R substituents, which strongly modulate the activation entropy. At pH < 2, decompositions of all these NONOates are subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]⁻ group.
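
    A small sketch of the pH dependence implied by the quoted apparent pKa and limiting rate constant for compound 1, assuming the simple single-protonation rate law k_obs = k·[H+]/([H+] + Ka); the tautomerization step and the additional acid catalysis below pH 2 are deliberately ignored here.

      # Single-protonation rate-law illustration using the values quoted for compound 1.
      pKa, k_max = 4.6, 1.0          # apparent pKa and limiting rate constant (assumed model)
      Ka = 10.0 ** (-pKa)

      def k_obs(pH):
          h = 10.0 ** (-pH)
          return k_max * h / (h + Ka)    # protonated fraction times the limiting rate

      for pH in (3.0, 4.6, 7.4):
          print(f"pH {pH:.1f}: k_obs ~ {k_obs(pH):.2e} s^-1")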

  2. Wood decomposition as influenced by invertebrates.

    Science.gov (United States)

    Ulyshen, Michael D

    2016-02-01

    The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  3. Spectral unmixing: estimating partial abundances

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-01-01

Presentation outline (Debba, CSIR, LQM 2009): background and research question, an overview of spectral unmixing, end-member spectra and synthetic mixtures, results, and conclusions. The research question is motivated by analogy: estimating partial abundances from a mixed spectrum is likened to guessing the ingredients, and their quantities, of a finished chocolate cake.

  4. Domain decomposition method for nonconforming finite element approximations of anisotropic elliptic problems on nonmatching grids

    Energy Technology Data Exchange (ETDEWEB)

    Maliassov, S.Y. [Texas A& M Univ., College Station, TX (United States)

    1996-12-31

An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that constants of spectral equivalence are independent of values of coefficients and mesh step size.

  5. Photodegradation at day, microbial decomposition at night - decomposition in arid lands

    Science.gov (United States)

    Gliksman, Daniel; Gruenzweig, Jose

    2014-05-01

    Our current knowledge of decomposition in dry seasons and its role in carbon turnover is fragmentary. So far, decomposition during dry seasons was mostly attributed to abiotic mechanisms, mainly photochemical and thermal degradation, while the contribution of microorganisms to the decay process was excluded. We asked whether microbial decomposition occurs during the dry season and explored its interaction with photochemical degradation under Mediterranean climate. We conducted a litter bag experiment with local plant litter and manipulated litter exposure to radiation using radiation filters. We found notable rates of CO2 fluxes from litter which were related to microbial activity mainly during night-time throughout the dry season. This activity was correlated with litter moisture content and high levels of air humidity and dew. Day-time CO2 fluxes were related to solar radiation, and radiation manipulation suggested photodegradation as the underlying mechanism. In addition, a decline in microbial activity was followed by a reduction in photodegradation-related CO2 fluxes. The levels of microbial decomposition and photodegradation in the dry season were likely the factors influencing carbon mineralization during the subsequent wet season. This study showed that microbial decomposition can be a dominant contributor to CO2 emissions and mass loss in the dry season and it suggests a regulating effect of microbial activity on photodegradation. Microbial decomposition is an important contributor to the dry season decomposition and impacts the annual litter turn-over rates in dry regions. Global warming may lead to reduced moisture availability and dew deposition, which may greatly influence not only microbial decomposition of plant litter, but also photodegradation.

  6. Spectral studies related to dissociation of HBr, HCl and BrO

    Science.gov (United States)

    Ginter, M. L.

    1986-01-01

    Concern over halogen catalyzed decomposition of O3 in the upper atmosphere has generated need for data on the atomic and molecular species X, HX and XO (where X is Cl and Br). Of special importance are Cl produced from freon decomposition and Cl and Br produced from natural processes and from other industrial and agricultural chemicals. Basic spectral data is provided on HCl, HBr, and BrO necessary to detect specific states and energy levels, to enable detailed modeling of the processes involving molecular dissociation, ionization, etc., and to help evaluate field experiments to check the validity of model calculations for these species in the upper atmosphere. Results contained in four published papers and two major spectral compilations are summarized together with other results obtained.

  7. [Review of digital ground object spectral library].

    Science.gov (United States)

    Zhou, Xiao-Hu; Zhou, Ding-Wu

    2009-06-01

Higher spectral resolution is the main direction in the development of remote sensing technology, and building digital ground object reflectance spectral database libraries is one of the fundamental research fields in remote sensing applications. Remote sensing applications have been increasingly relying on ground object spectral characteristics, and quantitative analysis has developed to a new stage. The present article summarizes and systematically introduces the research status and development trends of digital ground object reflectance spectral libraries at home and abroad in recent years. The spectral libraries that have been established are introduced, including desertification, plant, geological, soil, mineral, cloud, snow, atmosphere, rock, water, meteorite, moon rock, man-made material, mixture, volatile compound, and liquid spectral database libraries. In the process of establishing spectral database libraries, some problems remain, such as the lack of a uniform national spectral database standard, the lack of uniform standards for ground object features, and limited comparability between different databases; in addition, data sharing mechanisms have not been implemented, etc. This article also puts forward some suggestions regarding those problems.

  8. Climate fails to predict wood decomposition at regional scales

    Science.gov (United States)

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  9. Decomposition of cattle dung on grazed signalgrass ( Brachiaria ...

    African Journals Online (AJOL)

    Livestock excreta is one of the major nutrient sources in natural grasslands. Understanding how livestock diet and season affects the decomposition dynamics is critical to nutrient cycling models. We hypothesised that livestock diet and season of the year affect dung decomposition. This study evaluated the decomposition ...

  10. Specific leaf area predicts dryland litter decomposition via two mechanisms

    NARCIS (Netherlands)

    Liu, Guofang; Wang, Lei; Jiang, Li; Pan, Xu; Huang, Zhenying; Dong, Ming; Cornelissen, Johannes H.C.

    2018-01-01

    Litter decomposition plays important roles in carbon and nutrient cycling. In dryland, both microbial decomposition and abiotic degradation (by UV light or other forces) drive variation in decomposition rates, but whether and how litter traits and position determine the balance between these

  11. Through-wall image enhancement using fuzzy and QR decomposition.

    Science.gov (United States)

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex than singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.

  12. Coupling of temperature with pressure induced initial decomposition ...

    Indian Academy of Sciences (India)

The pressure effects on the initial decomposition steps and initially generated products of PETN and NTO were very different. PETN decomposition was triggered by C-H···O intermolecular hydrogen transfer. The initial decomposition mechanism was independent of the pressure. For NTO, two different initial decomposition mechanisms ...

  13. Plant litter decomposition in wetlands receiving acid mine drainage

    Energy Technology Data Exchange (ETDEWEB)

    Kittle, D.L.; McGraw, J.B.; Garbutt, K. [West Virginia University, Morgantown, WV (United States). Dept. of Biology

    1995-03-01

The impact of acid mine drainage on the decomposition of wetland plant species of northern West Virginia was studied to determine if the potential exists for nutrient cycling to be altered in systems used to treat this drainage. There were two objectives of this study. First, decomposition of aboveground plant material was measured to determine species decomposition patterns as a function of pH. Second, decomposition of litter from various pH environments was compared to assess whether litter origin affects decomposition rates. Species differences were detected throughout the study. Decomposition rates of woolgrass (Scirpus cyperinus (L.) Kunth) and common rush (Juncus effusus L.) were significantly lower than those of calamus (Acorus calamus L.) and rice cutgrass (Leersia oryzoides L.). Differences among species explained a large proportion of the variation in the percentage of biomass remaining. Thus, differences in litter quality among species were important in determining the rate of decomposition. In general, significantly more decomposition occurred for all species in high pH environments, indicating impeded decomposition at low pH. While decomposition of some species' litter differed depending on its origin, other species showed no effect. Cattail (Typha latifolia L.), in particular, was found to have lower decomposition rates for material grown at low pH. Lower decomposition rates could result in lower nutrient availability, leading to further reduction of productivity under low pH conditions. 34 refs., 4 figs., 4 tabs.

  14. FastMotif: spectral sequence motif discovery.

    Science.gov (United States)

    Colombo, Nicoló; Vlassis, Nikos

    2015-08-15

    Sequence discovery tools play a central role in several fields of computational biology. In the framework of Transcription Factor binding studies, most of the existing motif finding algorithms are computationally demanding, and they may not be able to support the increasingly large datasets produced by modern high-throughput sequencing technologies. We present FastMotif, a new motif discovery algorithm that is built on a recent machine learning technique referred to as Method of Moments. Based on spectral decompositions, our method is robust to model misspecifications and is not prone to locally optimal solutions. We obtain an algorithm that is extremely fast and designed for the analysis of big sequencing data. On HT-Selex data, FastMotif extracts motif profiles that match those computed by various state-of-the-art algorithms, but one order of magnitude faster. We provide a theoretical and numerical analysis of the algorithm's robustness and discuss its sensitivity with respect to the free parameters. The Matlab code of FastMotif is available from http://lcsb-portal.uni.lu/bioinformatics. vlassis@adobe.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. CLUSTERING OF MULTISPECTRAL AIRBORNE LASER SCANNING DATA USING GAUSSIAN DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    S. Morsy

    2017-09-01

With the evolution of LiDAR technology, multispectral airborne laser scanning systems are currently available. The first operational multispectral airborne LiDAR sensor, the Optech Titan, acquires LiDAR point clouds at three different wavelengths (1.550, 1.064, 0.532 μm), allowing the acquisition of different spectral information of the land surface. Consequently, recent studies are devoted to using the radiometric information (i.e., intensity) of the LiDAR data along with the geometric information (e.g., height) for classification purposes. In this study, a data clustering method, based on Gaussian decomposition, is presented. First, a ground filtering mechanism is applied to separate non-ground from ground points. Then, three normalized difference vegetation indices (NDVIs) are computed for both non-ground and ground points, followed by histogram construction from each NDVI. The Gaussian function model is used to decompose the histograms into a number of Gaussian components. The maximum likelihood estimate of the Gaussian components is then optimized using the Expectation-Maximization algorithm. The intersection points of the adjacent Gaussian components are subsequently used as threshold values, whereby different classes can be clustered. This method is used to classify the terrain of an urban area in Oshawa, Ontario, Canada, into four main classes, namely roofs, trees, asphalt and grass. It is shown that the proposed method has achieved an overall accuracy of up to 95.1 % using different NDVIs.

  16. Satellite time series analysis using Empirical Mode Decomposition

    Science.gov (United States)

    Pannimpullath, R. Renosh; Doolaeghe, Diane; Loisel, Hubert; Vantrepotte, Vincent; Schmitt, Francois G.

    2016-04-01

Geophysical fields possess large fluctuations over many spatial and temporal scales. Successive satellite images provide an interesting sampling of this spatio-temporal multiscale variability. Here we propose to characterize such variability by performing satellite time series analysis, pixel by pixel, using Empirical Mode Decomposition (EMD). EMD is a time series analysis technique able to decompose an original time series into a sum of modes, each one having a different mean frequency. It can be used to smooth signals and to extract trends. It is built in a data-adaptive way and is able to extract information from nonlinear signals. Here we use MERIS Suspended Particulate Matter (SPM) data, on a weekly basis, over 10 years. There are 458 successive time steps. We have selected 5 different regions of coastal waters for the present study: Vietnam coastal waters, the Brahmaputra region, the St. Lawrence, the English Channel and the McKenzie. These regions have high SPM concentrations due to large-scale river run-off. Trends and Hurst exponents are derived for each pixel in each region. The energy of each mode is also extracted using Hilbert Spectral Analysis (HSA) together with the EMD method, and is normalised by the total energy over all modes for each region.
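
    A minimal sketch of the per-pixel decomposition step, assuming the third-party PyEMD package (installed as EMD-signal), which is not necessarily the implementation used by the authors; the synthetic series below merely stands in for a 458-step weekly SPM time series at a single pixel.

      # Per-pixel EMD sketch; assumes the PyEMD package and a synthetic series.
      import numpy as np
      from PyEMD import EMD

      t = np.arange(458, dtype=float)                     # weekly time steps over ~10 years
      spm = (2.0 + 0.002 * t                              # slow trend
             + 0.5 * np.sin(2 * np.pi * t / 52.0)         # annual cycle
             + 0.2 * np.random.default_rng(0).normal(size=t.size))

      imfs = EMD().emd(spm)                               # modes, highest mean frequency first
      residue = spm - imfs.sum(axis=0)                    # whatever the sifting left over
      print(imfs.shape)                                   # (n_modes, 458)

      # Normalised energy per mode, as in the study: each mode's energy over the total
      energy = (imfs ** 2).sum(axis=1)
      print(energy / energy.sum())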

  17. Clustering of Multispectral Airborne Laser Scanning Data Using Gaussian Decomposition

    Science.gov (United States)

    Morsy, S.; Shaker, A.; El-Rabbany, A.

    2017-09-01

With the evolution of LiDAR technology, multispectral airborne laser scanning systems are currently available. The first operational multispectral airborne LiDAR sensor, the Optech Titan, acquires LiDAR point clouds at three different wavelengths (1.550, 1.064, 0.532 μm), allowing the acquisition of different spectral information of the land surface. Consequently, recent studies are devoted to using the radiometric information (i.e., intensity) of the LiDAR data along with the geometric information (e.g., height) for classification purposes. In this study, a data clustering method, based on Gaussian decomposition, is presented. First, a ground filtering mechanism is applied to separate non-ground from ground points. Then, three normalized difference vegetation indices (NDVIs) are computed for both non-ground and ground points, followed by histogram construction from each NDVI. The Gaussian function model is used to decompose the histograms into a number of Gaussian components. The maximum likelihood estimate of the Gaussian components is then optimized using the Expectation-Maximization algorithm. The intersection points of the adjacent Gaussian components are subsequently used as threshold values, whereby different classes can be clustered. This method is used to classify the terrain of an urban area in Oshawa, Ontario, Canada, into four main classes, namely roofs, trees, asphalt and grass. It is shown that the proposed method has achieved an overall accuracy of up to 95.1 % using different NDVIs.
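
    A sketch of the Gaussian-decomposition idea using scikit-learn's GaussianMixture as a stand-in for the histogram decomposition and Expectation-Maximization step described above; the synthetic NDVI values and the two-component assumption are illustrative and unrelated to the Titan data.

      # GMM-based thresholding sketch; synthetic NDVI values, assumed two components.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      ndvi = np.concatenate([rng.normal(-0.2, 0.1, 4000),     # e.g. asphalt/roof-like returns
                             rng.normal(0.5, 0.15, 6000)])    # e.g. vegetation-like returns

      gmm = GaussianMixture(n_components=2, random_state=0).fit(ndvi.reshape(-1, 1))

      # Threshold: where the most probable component switches (close to the
      # intersection of the adjacent Gaussian components, up to the grid resolution)
      grid = np.linspace(ndvi.min(), ndvi.max(), 2000).reshape(-1, 1)
      labels = gmm.predict(grid)
      threshold = grid[np.argmax(labels != labels[0])][0]
      print(f"NDVI threshold ~ {threshold:.3f}")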

  18. Stochastic processes and their spectral representations over non-archimedean fields

    OpenAIRE

    Ludkovsky, S. V.

    2008-01-01

    The article is devoted to stochastic processes with values in finite- and infinite-dimensional vector spaces over infinite fields $\\bf K$ of zero characteristics with non-trivial non-archimedean norms. For different types of stochastic processes controlled by measures with values in $\\bf K$ and in complete topological vector spaces over $\\bf K$ stochastic integrals are investigated. Vector valued measures and integrals in spaces over $\\bf K$ are studied. Theorems about spectral decompositions...

  19. Biogeochemistry of Decomposition and Detrital Processing

    Science.gov (United States)

    Sanderman, J.; Amundson, R.

    2003-12-01

Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, what Eijsackers and Zehnder (1990) compare to a Russian matriochka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community will gradually shift as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification acts to transform a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized into humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that, while the atmosphere supplied 4% and mineral weathering supplied no nitrogen, nutrient recycling accounted for 95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling. Although there is significant external input (1) and output (2) from neighboring ecosystems

  20. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    Energy Technology Data Exchange (ETDEWEB)

    Ketusky, E.; Subramanian, K.

    2012-02-29

At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) Degraded evaporator operation; (2) Resultant oxalate precipitates taking away critically needed operating volume; and (3) Eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered to be well demonstrated. In addition, as AOPs are considered to be 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and H-area simulant (i.e., H area modified Purex = high Al/Fe concentration

  1. Nucleon spin decomposition and orbital angular momentum in the nucleon

    Science.gov (United States)

    Wakamatsu, Masashi

    2014-09-01

Obtaining a complete decomposition of the nucleon spin is a fundamentally important task for QCD. In fact, if our research ends without accomplishing this task, the tremendous efforts made since the first discovery of the nucleon spin crisis would have been in vain. We now have a general agreement that there are at least two physically inequivalent gauge-invariant decompositions of the nucleon spin. In these two decompositions, the intrinsic spin parts of quarks and gluons are common. What discriminates the two decompositions are the orbital angular momentum (OAM) parts. The OAMs of quarks and gluons appearing in the first decomposition are the so-called "mechanical" OAMs, while those appearing in the second decomposition are the generalized (gauge-invariant) "canonical" ones. For this reason, these decompositions are broadly called the "mechanical" and "canonical" decompositions of the nucleon spin. Still, several issues remain that have not reached a complete consensus among the experts (see recent reviews). In the present talk, I will mainly concentrate on the practically most important issue, i.e., which decomposition is more favorable from the observational viewpoint. There are two often-claimed advantages of the canonical decomposition. First, each piece of this decomposition satisfies the SU(2) commutation relation, or angular momentum algebra. Second, the canonical OAM rather than the mechanical OAM is compatible with the free partonic picture of constituent orbital motion. In the present talk, I will show that both of these claims are not necessarily true, and push forward the viewpoint that the "mechanical" decomposition is more physical in that it has a more direct connection with observables. I also emphasize that the nucleon spin decomposition accessed by lattice QCD analyses is the "mechanical" decomposition, not the "canonical" one. The recent lattice QCD studies of the nucleon spin decomposition are also briefly overviewed.

  2. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Information is carried in changes of a signal. The paper starts with revisiting Dudley's concept of the carrier nature of speech. It points to its close connection to modulation spectra of speech and argues against short-term spectral envelopes as dominant carriers of the linguistic information in speech. The history of spectral ...

  3. SPECTRAL ANALYSIS OF EXCHANGE RATES

    Directory of Open Access Journals (Sweden)

    ALEŠA LOTRIČ DOLINAR

    2013-06-01

Using spectral analysis is very common in technical areas but rather unusual in economics and finance, where ARIMA and GARCH modeling are much more in use. To show that spectral analysis can be useful in determining hidden periodic components for high-frequency finance data as well, we use the example of foreign exchange rates.

  4. Algorithms for Sparse Non-negative Tucker Decompositions

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai

    2008-01-01

There is an increasing interest in the analysis of large-scale multi-way data. The concept of multi-way data refers to arrays of data with more than two dimensions, i.e., taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions...... decompositions). To reduce ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities, hence the proposed algorithms for sparse non-negative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms....

  5. Proper orthogonal decomposition analysis of vortex shedding behind a rotating circular cylinder

    Directory of Open Access Journals (Sweden)

    Dol Sharul Sham

    2016-01-01

Turbulence studies were made in the wake of a rotating circular cylinder in a uniform free stream with the objective of describing the patterns of the vortex shedding up to suppression of the periodic vortex street at high velocity ratios, λ. The results obtained in the present study establish that shedding of Kármán vortices in a rotating circular cylinder-generated wake is modified by rotation of the cylinder. Alternate vortex shedding is highly visible when λ < 2.0, although the strength of the separated shear layers differs due to the rotation of the cylinder. The spectral density in the wakes indicates significant changes at λ = 2.0. The results indicate that the rotation of the cylinder causes significant disruption in the structure of the flow. Alternate vortex shedding is weak, distorted and close to being suppressed at λ = 2.0. It is clear that flow asymmetries will weaken vortex shedding, and when the asymmetries are significant enough, total suppression of a periodic street occurs. Particular attention was paid to the decomposition of the flow using Proper Orthogonal Decomposition (POD). By analyzing this decomposition with the help of Particle Image Velocimetry (PIV) data, it was found that large scales contribute to the coherent motion. Vorticity structures in the modes become increasingly irregular with downstream distance, suggesting turbulent interactions are occurring at the more downstream locations, especially when the cylinder rotates.
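
    For reference, a minimal sketch of snapshot POD via the singular value decomposition of a mean-subtracted snapshot matrix, as commonly applied to PIV velocity fields; the random matrix below merely stands in for the cylinder-wake measurements.

      # Snapshot POD sketch; placeholder data in place of real PIV snapshots.
      import numpy as np

      n_points, n_snapshots = 5000, 200                     # grid points x velocity components, PIV frames
      X = np.random.default_rng(0).normal(size=(n_points, n_snapshots))   # placeholder snapshot matrix

      X = X - X.mean(axis=1, keepdims=True)                 # remove the mean flow
      U, s, Vt = np.linalg.svd(X, full_matrices=False)

      energy = s ** 2 / np.sum(s ** 2)                      # fraction of fluctuating energy per mode
      modes = U[:, :4]                                      # leading spatial POD modes
      coeffs = np.diag(s[:4]) @ Vt[:4]                      # their time coefficients
      print(energy[:4], modes.shape, coeffs.shape)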

  6. GoDec+: Fast and Robust Low-Rank Matrix Decomposition Based on Maximum Correntropy.

    Science.gov (United States)

    Guo, Kailing; Liu, Liu; Xu, Xiangmin; Xu, Dong; Tao, Dacheng

    2017-04-24

    GoDec is an efficient low-rank matrix decomposition algorithm. However, it attains optimal performance only when the corruption consists of sparse errors and Gaussian noise. This paper addresses the problem of a matrix composed of a low-rank component and unknown corruptions. We introduce a robust local similarity measure called correntropy to describe the corruptions and, in doing so, obtain a more robust and faster low-rank decomposition algorithm: GoDec+. Based on half-quadratic optimization and the greedy bilateral paradigm, we deliver a solution to the maximum correntropy criterion (MCC)-based low-rank decomposition problem. Experimental results show that GoDec+ is efficient and robust to different corruptions including Gaussian noise, Laplacian noise, salt & pepper noise, and occlusion on both synthetic and real vision data. We further apply GoDec+ to more general applications including classification and subspace clustering. For classification, we construct an ensemble subspace from the low-rank GoDec+ matrix and introduce an MCC-based classifier. For subspace clustering, we utilize the GoDec+ low-rank matrix for MCC-based self-expression and combine it with spectral clustering. Face recognition, motion segmentation, and face clustering experiments show that the proposed methods are effective and robust. In particular, we achieve state-of-the-art performance on the Hopkins 155 data set and the first 10 subjects of extended Yale B for subspace clustering.
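
    Since the work builds on GoDec, a bare-bones sketch of the original GoDec alternation (low-rank step via truncated SVD, sparse step via entrywise hard thresholding) is given below for orientation; the correntropy reweighting that defines GoDec+ is deliberately omitted, so this is not the authors' algorithm.

```python
# Sketch of the basic GoDec-style alternation: X ~ L (rank-r) + S (card-k sparse).
# GoDec+ replaces the squared-error fit with a correntropy-weighted one; that
# reweighting is omitted here, so this is only the plain GoDec skeleton.
import numpy as np

def godec_sketch(X, rank, card, n_iter=50):
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank step: truncated SVD of the residual X - S
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep the 'card' largest-magnitude entries of X - L
        R = X - L
        thresh = np.sort(np.abs(R), axis=None)[-card]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

# Toy data: a rank-1 matrix plus a few large sparse corruptions
X = np.outer(np.arange(50.0), np.ones(40)) + 5.0 * (np.random.rand(50, 40) > 0.95)
L, S = godec_sketch(X, rank=1, card=100)
```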

  7. Full-waveform LiDAR echo decomposition based on wavelet decomposition and particle swarm optimization

    Science.gov (United States)

    Li, Duan; Xu, Lijun; Li, Xiaolu

    2017-04-01

    To measure the distances and properties of the objects within a laser footprint, a decomposition method for full-waveform light detection and ranging (LiDAR) echoes is proposed. In this method, firstly, wavelet decomposition is used to filter the noise and estimate the noise level in a full-waveform echo. Secondly, peak and inflection points of the filtered full-waveform echo are used to detect the echo components. Lastly, particle swarm optimization (PSO) is used to remove the noise-caused echo components and optimize the parameters of the most probable echo components. Simulation results show that the wavelet-decomposition-based filter achieves better SNR improvement and higher decomposition success rates than Wiener and Gaussian smoothing filters. In addition, the noise level estimated using the wavelet-decomposition-based filter is more accurate than those estimated using the other two commonly used methods. Experiments were carried out to evaluate the proposed method, which was compared with our previous method (called GS-LM for short). In the experiments, a lab-built full-waveform LiDAR system was utilized to provide eight types of full-waveform echoes scattered from three objects at different distances. Experimental results show that the proposed method has higher success rates for the decomposition of full-waveform echoes and more accurate parameter estimation for echo components than GS-LM. The proposed method based on wavelet decomposition and PSO can decompose more complicated full-waveform echoes to estimate the multi-level distances of objects and measure their properties within a laser footprint.
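
    A hedged sketch of the wavelet filtering and noise-level estimation stage is shown below using the PyWavelets package; the wavelet choice, threshold rule and synthetic echo are illustrative assumptions, and the PSO stage of the paper is not reproduced.

```python
# Wavelet soft-threshold denoising of a synthetic full-waveform echo, with a
# MAD-based noise-level estimate from the finest detail scale. Parameters are
# illustrative, not those of the paper.
import numpy as np
import pywt

def denoise_waveform(echo, wavelet="sym8", level=4):
    coeffs = pywt.wavedec(echo, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise level estimate
    thr = sigma * np.sqrt(2.0 * np.log(echo.size))       # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: echo.size], sigma

t = np.linspace(0, 1, 1024)
clean = np.exp(-0.5 * ((t - 0.4) / 0.02) ** 2) + 0.6 * np.exp(-0.5 * ((t - 0.6) / 0.03) ** 2)
noisy = clean + 0.05 * np.random.randn(t.size)
filtered, sigma = denoise_waveform(noisy)
print("estimated noise level:", sigma)
```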

  8. Infinite order decompositions of C*-algebras.

    Science.gov (United States)

    Nematjonovich, Arzikulov Farhodjon

    2016-01-01

    The present paper is devoted to infinite order decompositions of C*-algebras. It is proved that an infinite order decomposition (IOD) of a C*-algebra forms the complexification of an order unit space, and, if the C*-algebra is monotone complete (not necessarily weakly closed), then its IOD is also a monotone complete ordered vector space. It is also established that an IOD of a C*-algebra is a C*-algebra if and only if this C*-algebra is a von Neumann algebra. In summary, we obtain that the norm of an infinite dimensional matrix is equal to the supremum of the norms of all finite dimensional main diagonal submatrices of this matrix, and that an infinite dimensional matrix is positive if and only if all finite dimensional main diagonal submatrices of this matrix are positive.

  9. Spinodal Decomposition in Critical and Tricritical Systems.

    Science.gov (United States)

    Dee, Gregory Thomas

    In this thesis we study the dynamical process of phase separation known as spinodal decomposition. We use the best available theoretical techniques (linear stability analysis, and the Langer, Bar-on, and Miller theory) to study the phenomenon in both critical and tricritical systems. We deal with the problems of early stage evolution and late stage coarsening in these systems. We use Monte Carlo computer simulation techniques to study the process of spinodal decomposition in two systems, one of which is a model for a two dimensional system with an order-disorder transition and the other a two dimensional model of a binary alloy. We also present a renormalization group calculation of mean field character for the coarse grained free energy.

  10. Decentralized Model Predictive Control via Dual Decomposition

    Science.gov (United States)

    Wakasa, Yuji; Arakawa, Mizue; Tanaka, Kanya; Akashi, Takuya

    This paper proposes a decentralized model predictive control method based on a dual decomposition technique. A model predictive control problem for a system with multiple subsystems is formulated as a convex optimization problem. In particular, we deal with the case where the control outputs of the subsystems have coupling constraints represented by linear equalities. A dual decomposition technique is applied to this problem in order to derive the dual problem with decoupled equality constraints. A projected subgradient method is used to solve the dual problem, which leads to a decentralized algorithm. In the algorithm, a small-scale problem is solved at each subsystem, and information exchange is performed in each group consisting of some subsystems. Also, it is shown that the computational complexity in the decentralized algorithm is reduced if the dynamics of the subsystems are all the same.
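
    The coordination mechanism can be illustrated on a deliberately tiny toy problem (not the paper's MPC formulation): two subsystems pick scalar controls subject to a shared linear equality constraint, each solves its own subproblem given a price, and a subgradient step on the price enforces the coupling.

```python
# Toy dual-decomposition sketch: controls u1, u2 must satisfy the coupling
# constraint u1 + u2 = c. Each subsystem solves its own small problem given the
# price lambda; a (sub)gradient step on lambda coordinates them.
import numpy as np

r1, r2, c = 3.0, 5.0, 6.0            # local targets and shared budget
lam, step = 0.0, 0.2

def local_solve(r, lam):
    # argmin_u 0.5*(u - r)**2 + lam*u has the closed form u = r - lam
    return r - lam

for k in range(200):
    u1 = local_solve(r1, lam)        # solved independently at subsystem 1
    u2 = local_solve(r2, lam)        # solved independently at subsystem 2
    violation = u1 + u2 - c          # only this scalar needs to be exchanged
    lam += step * violation          # dual ascent; a projection would be added
                                     # if the coupling were an inequality

print(u1, u2, u1 + u2)               # converges so that u1 + u2 = c
```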

  11. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  12. Thermal decompositions of light lanthanide aconitates

    Energy Technology Data Exchange (ETDEWEB)

    Brzyska, W.; Ozga, W. (Uniwersytet Marii Curie-Sklodowskiej, Lublin (Poland))

    The conditions of thermal decomposition of Y, La, Ce(III), Pr, Nd, Sm, and Gd aconitates have been studied. On heating, the aconitate of Ce(III) loses crystallization water to yield the anhydrous salt, which is then transformed to the oxide CeO2. The aconitates of Y, Pr, Nd, Sm, Eu and Gd decompose in three stages. First, the aconitates undergo dehydration to form the anhydrous salts, which next decompose to Ln2O2CO3. In the last stage the thermal decomposition of Ln2O2CO3 is accompanied by an endothermic effect. Dehydration of the La aconitate proceeds in two stages. The anhydrous complex decomposes to La2O2CO3, which subsequently decomposes to La2O3.

  13. Formal Language Decomposition into Semantic Primes

    Directory of Open Access Journals (Sweden)

    Johannes FÄHNDRICH

    2014-10-01

    Full Text Available This paper describes an algorithm for semantic decomposition. To that end, we survey languages used to enrich contextual information with semantic descriptions. Such descriptions can, e.g., be applied to enable reasoning when collecting vast amounts of information. In particular, we focus on the elements of the languages that make up their semantics. To do so, we compare the expressiveness of the well-known languages OWL, PDDL and MOF with a theory from linguistics called the Natural Semantic Metalanguage. We then analyze how the semantics of a language is built up and describe how semantic decomposition based on the semantic primes can be used for a so-called mental lexicon. This mental lexicon can be used to reason upon semantic service descriptions in the research domain of service matchmaking.

  14. Decomposition of water Raman stretching band with a combination of optimization methods

    Science.gov (United States)

    Burikov, Sergey; Dolenko, Sergey; Dolenko, Tatiana; Patsaeva, Svetlana; Yuzhakov, Viktor

    2010-03-01

    In this study, an investigation of the behaviour of the stretching bands of the CH and OH groups of water-ethanol solutions at alcohol concentrations ranging from 0 to 96% by volume has been performed. A new approach to the decomposition of the wide structureless water Raman band into spectral components, based on modern mathematical methods for the solution of inverse multi-parameter problems (a combination of a Genetic Algorithm with the Generalized Reduced Gradient method), has been demonstrated. Application of this approach to the decomposition of the Raman stretching bands of water-ethanol solutions allowed us to obtain interesting new results with practically no a priori information. The behaviour of the resolved spectral components of the Raman OH stretching band in the binary mixture with rising ethanol concentration is in good agreement with the concept of a clathrate-like structure of water-ethanol solutions. The results presented in this paper confirm the existence of essential structural rearrangement in water-ethanol solutions at ethanol concentrations of 20-30% by volume.
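
    To make the "global search plus gradient refinement" idea concrete, the sketch below fits two Gaussian components to a synthetic band using SciPy's differential_evolution (standing in for the Genetic Algorithm) followed by L-BFGS-B (standing in for the Generalized Reduced Gradient step); band positions, widths and noise level are invented.

```python
# Decompose a broad synthetic band into two Gaussians: global evolutionary
# search followed by local gradient refinement of the same sum-of-squares cost.
import numpy as np
from scipy.optimize import differential_evolution, minimize

x = np.linspace(2800, 3800, 600)                        # wavenumber axis, cm^-1

def model(p, x):
    a1, c1, w1, a2, c2, w2 = p
    return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

data = model([1.0, 3250, 90, 0.7, 3450, 70], x) + 0.01 * np.random.randn(x.size)
sse = lambda p: np.sum((model(p, x) - data) ** 2)

bounds = [(0, 2), (3100, 3350), (30, 150), (0, 2), (3350, 3600), (30, 150)]
rough = differential_evolution(sse, bounds, seed=1, maxiter=200)    # global stage
fine = minimize(sse, rough.x, method="L-BFGS-B", bounds=bounds)     # local stage
print(np.round(fine.x, 1))
```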

  15. Domain decomposition methods for mortar finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Widlund, O.

    1996-12-31

    In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.

  16. Heuristic decomposition for non-hierarchic systems

    Science.gov (United States)

    Bloebaum, Christina L.; Hajela, P.

    1991-01-01

    Design and optimization is substantially more complex in multidisciplinary and large-scale engineering applications due to the existing inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable for nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.

  17. Influence of Family Structure on Variance Decomposition

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter

    Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...

  18. Grandchild of the frequency: Decomposition multigrid method

    Energy Technology Data Exchange (ETDEWEB)

    Dendy, J.E. Jr. [Los Alamos National Lab., NM (United States); Tazartes, C.C. [Univ. of California, Los Angeles, CA (United States)

    1994-12-31

    Previously the authors considered the frequency decomposition multigrid method and rejected it because it was not robust for problems with discontinuous coefficients. In this paper they show how to modify the method so as to obtain such robustness while retaining robustness for problems with anisotropic coefficients. They also discuss application of this method to a problem arising in global ocean modeling on the CM-5.

  19. Numerical CP Decomposition of Some Difficult Tensors

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Phan, A. H.; Cichocki, A.

    2017-01-01

    Roč. 317, č. 1 (2017), s. 362-370 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA14-13713S Institutional support: RVO:67985556 Keywords: Small matrix multiplication * Canonical polyadic tensor decomposition * Levenberg-Marquardt method Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/tichavsky-0468385.pdf

  20. Snapshot wavefield decomposition for heterogeneous velocity media

    OpenAIRE

    Holicki, M.E.; Wapenaar, C.P.A.

    2017-01-01

    We propose a novel directional decomposition operator for wavefield snapshots in heterogeneous-velocity media. The proposed operator demonstrates the link between the amplitudes of pressure and particle-velocity plane waves in the wavenumber domain. The proposed operator requires two spatial Fourier transforms (one forward and one backward) per spatial dimension and time slice. To illustrate the operator we demonstrate its applicability to heterogeneous velocity models using a simple velocity-b...

  1. Perspectives on Pentaerythritol Tetranitrate (PETN) Decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D; Brackett, C; Sparkman, D O

    2002-07-01

    This report evaluates the large body of work involving the decomposition of PETN and identifies the major decomposition routes and byproducts. From these studies it becomes apparent that the PETN decomposition mechanisms and the resulting byproducts are primarily determined by the chemical environment. In the absence of water, PETN can decompose through scission of the O-NO2 bond, resulting in the formation of an alkoxy radical and NO2. Because of the relatively high reactivity of both these initial byproducts, they are believed to drive a number of autocatalytic reactions, eventually forming (NO2OCH2)3CCHO, (NO2OCH2)2C=CHONO2, NO2OCH=C=CHONO2, (NO2OCH2)3C-NO2, (NO2OCH2)2C(NO2)2, NO2OCH2C(NO2)3, and C(NO2)4, as well as polymer-like species such as di-PEHN and tri-PEON. Surprisingly, the products of many of these proposed autocatalytic reactions have never been analytically validated. Conversely, in the presence of water, PETN has been shown to decompose primarily to the mono-, di-, and trinitrates of pentaerythritol.

  2. Hydroxyl radical formation during peroxynitrous acid decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Coddington, J.W.; Hurst, J.K.; Lymar, S.V.

    1999-03-24

    Yields of O2 formed during decomposition of peroxynitrous acid (ONOOH) under widely varying medium conditions are compared to predictions based upon the assumption that the reaction involves formation of discrete •OH and •NO2 radicals as oxidizing intermediates. The kinetic model used includes all reactions of •OH, •O2-, and reactive nitrogen species known to be important under the prevailing conditions; because the rate constants for all of these reactions have been independently measured, the calculations contain no adjustable fitting parameters. The model quantitatively accounts for (1) the complex pH dependence of the O2 yields and (2) the unusual effects of NO2-, which inhibits O2 formation in neutral, but not alkaline, solutions and also reverses inhibition by organic •OH scavengers in alkaline media. Other observations, including quenching of O2 yields by ferrocyanide and bicarbonate, the pressure dependence of the decomposition rate, and the reported dynamic behavior of O2 generation in the presence of H2O2, also appear to be in accord with the suggested mechanism. Overall, the close correspondence between observed and calculated O2 yields provides strong support for decomposition via homolysis of the ONOOH peroxo bond.

  3. Bregmanized Domain Decomposition for Image Restoration

    KAUST Repository

    Langer, Andreas

    2012-05-22

    Computational problems of large-scale data are gaining attention recently due to better hardware and, hence, the higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems such as total variation minimization have become increasingly important for the solution of these tasks. While being favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.

  4. CCN Spectral Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, James G.

    2009-02-27

    Detailed aircraft measurements were made of cloud condensation nuclei (CCN) spectra associated with extensive cloud systems off the central California coast in the July 2005 MASE project. These measurements include the wide supersaturation (S) range (2-0.01%) that is important for these polluted stratus clouds. Concentrations were usually characteristic of continental/anthropogenic air masses. The most notable feature was the consistently higher concentrations above the clouds than below. CCN measurements are so important because they provide a link between atmospheric chemistry and cloud-climate effects, which are the largest climate uncertainty. Extensive comparisons throughout the eleven flights between two CCN spectrometers operated at different but overlapping S ranges displayed the precision and accuracy of these difficult spectral determinations. There are enough channels of resolution in these instruments to provide differential spectra, which produce more rigorous and precise comparisons than traditional cumulative presentations of CCN concentrations. Differential spectra are also more revealing than cumulative spectra. Only one of the eleven flights exhibited typical maritime concentrations. Average below cloud concentrations over the two hours furthest from the coast for the 8 flights with low polluted stratus was 614±233 at 1% S, 149±60 at 0.1% S and 57±33 at 0.04% S cm-3. Immediately above cloud average concentrations were respectively 74%, 55%, and 18% higher. Concentration variability among those 8 flights was a factor of two. Variability within each flight excluding distances close to the coast ranged from 15-56% at 1% S. However, CN and probably CCN concentrations sometimes varied by less than 1% over distances of more than a km. Volatility and size-critical S measurements indicated that the air masses were very polluted throughout MASE. The aerosol above the clouds was more polluted than the below cloud aerosol. These high CCN concentrations from

  5. Effects of stoichiometry and temperature perturbations on beech litter decomposition, enzyme activities and protein expression

    Science.gov (United States)

    Keiblinger, K. M.; Schneider, T.; Roschitzki, B.; Schmid, E.; Eberl, L.; Hämmerle, I.; Leitner, S.; Richter, A.; Wanek, W.; Riedel, K.; Zechmeister-Boltenstern, S.

    2011-12-01

    Microbes are major players in leaf litter decomposition and therefore advances in the understanding of their control on element cycling are of paramount importance. Our aim was to investigate the influence of leaf litter stoichiometry in terms of carbon (C) : nitrogen (N) : phosphorus (P) on the decomposition process, and to follow changes in microbial community structure and function in response to temperature-stress treatments. To elucidate how the stoichiometry of beech litter (Fagus sylvatica L.) and stress treatments interactively affect the decomposition processes, a terrestrial microcosm experiment was conducted. Beech litter from different Austrian sites covering C:N ratios from 39 to 61 and C:P ratios from 666 to 1729 were incubated at 15 °C and 60% moisture for six months. Part of the microcosms were then subjected to severe changes in temperature (+30 °C and -15 °C) to monitor the influence of temperature stress. Extracellular enzyme activities were assayed and respiratory activities measured. A semi-quantitative metaproteomics approach (1D-SDS PAGE combined with liquid chromatography and tandem mass-spectrometry; unique spectral counting) was employed to investigate the impact of the applied stress treatments in dependency of litter stoichiometry on structure and function of the decomposing community. In litter with narrow C:nutrient ratios microbial decomposers were most abundant. Cellulase, chitinase, phosphatase and protease activity decreased after heat and frost treatments. Decomposer communities and specific functions varied with site i.e. stoichiometry. The applied stress evoked strong changes of enzyme activities, dissolved organic nitrogen and litter pH. Freeze treatments resulted in a decline in residual plant litter material, and increased fungal abundance indicating slightly accelerated decomposition. Overall, we could detect a strong effect of litter stoichiometry on microbial community structure as well as function. Temperature

  6. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)6) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)6 complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)6 and W(CO)6 are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory are used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)6 is suggested that is sensitive enough to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.

  7. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Science.gov (United States)

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
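
    A compact sketch of the underlying exact-DMD computation with a POD (SVD) truncation used as the preconditioning step is given below; the incremental POD of the paper is replaced by a plain batch SVD, and the snapshots are random placeholders for flow-field data.

```python
# Exact DMD with a POD truncation step. Columns of `snapshots` are time-ordered
# flow-field snapshots (random placeholders here).
import numpy as np

snapshots = np.random.randn(4000, 151)
X, Y = snapshots[:, :-1], snapshots[:, 1:]     # pairs (x_k, x_{k+1})

r = 20                                         # POD truncation rank
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T

# Reduced operator and its eigen-decomposition (DMD eigenvalues/modes)
Atilde = Ur.T @ Y @ Vr / sr                    # divide each column by s_r
eigvals, W = np.linalg.eig(Atilde)
modes = (Y @ Vr / sr) @ W                      # projected DMD modes

amplitudes = np.linalg.lstsq(modes, X[:, 0], rcond=None)[0]
print("dominant |eigenvalue|:", np.abs(eigvals).max())
```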

  8. EMD-Based Temporal and Spectral Features for the Classification of EEG Signals Using Supervised Learning.

    Science.gov (United States)

    Riaz, Farhan; Hassan, Ali; Rehman, Saad; Niazi, Imran Khan; Dremstrup, Kim

    2016-01-01

    This paper presents a novel method for feature extraction from electroencephalogram (EEG) signals using empirical mode decomposition (EMD). Its use is motivated by the fact that the EMD gives an effective time-frequency analysis of nonstationary signals. The intrinsic mode functions (IMF) obtained as a result of EMD give the decomposition of a signal according to its frequency components. We present the usage of up to third-order temporal moments, and spectral features including the spectral centroid, coefficient of variation and spectral skew of the IMFs, for feature extraction from EEG signals. These features are physiologically relevant given that normal EEG signals have different temporal and spectral centroids, dispersions and symmetries when compared with pathological EEG signals. The calculated features are fed into the standard support vector machine (SVM) for classification purposes. The performance of the proposed method is studied on a publicly available dataset which is designed to handle various classification problems including the identification of epilepsy patients and detection of seizures. Experiments show that good classification results are obtained using the proposed methodology for the classification of EEG signals. Our proposed method also compares favorably to other state-of-the-art feature extraction methods.
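
    An illustrative, hedged pipeline in this spirit is sketched below; it assumes the third-party PyEMD package for the decomposition, uses a reduced feature set (mean, variance, skewness, spectral centroid per IMF), and trains scikit-learn's SVC on toy signals rather than the EEG dataset of the paper.

```python
# EMD -> per-IMF temporal/spectral features -> SVM, on toy two-class signals.
import numpy as np
from scipy.stats import skew
from PyEMD import EMD              # third-party package, assumed installed
from sklearn.svm import SVC

def imf_features(signal, fs=173.61, n_imfs=4):
    imfs = EMD().emd(signal)
    feats = []
    for imf in imfs[:n_imfs]:
        freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
        spec = np.abs(np.fft.rfft(imf))
        centroid = np.sum(freqs * spec) / np.sum(spec)
        feats += [imf.mean(), imf.var(), skew(imf), centroid]
    while len(feats) < 4 * n_imfs:  # pad if EMD returned fewer IMFs
        feats.append(0.0)
    return np.array(feats)

rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):               # noise vs. noise + 10 Hz rhythm as stand-ins
    for _ in range(20):
        t = np.arange(1024) / 173.61
        sig = rng.normal(size=1024) + label * np.sin(2 * np.pi * 10 * t)
        X.append(imf_features(sig))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```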

  9. Nonlinear spectral imaging of fungi

    NARCIS (Netherlands)

    Knaus, H.

    2014-01-01

    Nonlinear microscopy combined with fluorescence spectroscopy is known as nonlinear spectral imaging microscopy (NLSM). This method provides simultaneously specimen morphology – distinguishing different parts in a tissue – and (auto)fluorescence spectra, thus their biochemical composition. A novel

  10. Matched Spectral Filter Imager Project

    Data.gov (United States)

    National Aeronautics and Space Administration — OPTRA proposes the development of an imaging spectrometer for greenhouse gas and volcanic gas imaging based on matched spectral filtering and compressive imaging....

  11. Multi-spectral camera development

    CSIR Research Space (South Africa)

    Holloway, M

    2012-10-01

    Full Text Available: 6 spectral bands plus laser range finder; High Definition (HD) video format; synchronised image capture; configurable mounts (positioner and laboratory); radiometric and geometric calibration; fiber optic data transmission. Proposed system...

  12. Broadband Advanced Spectral System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NovaSol proposes to develop an advanced hyperspectral imaging system for earth science missions named BRASS (Broadband Advanced Spectral System). BRASS combines...

  13. Root Asymptotics of Spectral Polynomials

    Directory of Open Access Journals (Sweden)

    B. Shapiro

    2007-01-01

    Full Text Available We have been studying the asymptotic energy distribution of the algebraic part of the spectrum of the one-dimensional sextic anharmonic oscillator. We review some (both old and recent) results on the multiparameter spectral problem and show that our problem ranks among the degenerate cases of the Heine-Stieltjes spectral problem, and we derive the density of the corresponding probability measure.

  14. Oxidative synthesis of a novel polyphenol having pendant Schiff base group: Synthesis, characterization, non-isothermal decomposition kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Dilek, Deniz [Faculty of Education, Secondary Science and Mathematics Education, Canakkale Onsekiz Mart University, 17100 Canakkale (Turkey); Dogan, Fatih, E-mail: fatihdogan@comu.edu.tr [Faculty of Education, Secondary Science and Mathematics Education, Canakkale Onsekiz Mart University, 17100 Canakkale (Turkey); Bilici, Ali, E-mail: alibilici66@hotmail.com [Control Laboratory of Agricultural and Forestry Ministry, 34153 Istanbul (Turkey); Kaya, Ismet [Department of Chemistry, Faculty of Science and Arts, Canakkale Onsekiz Mart University, Canakkale (Turkey)

    2011-05-10

    Research highlights: → In this study, the synthesis and thermal characterization of a new functional polyphenol are reported. → Non-isothermal methods were used to evaluate the thermal decomposition kinetics of the resulting polymer. → Thermal decomposition of the polymer follows a diffusion type kinetic model. → It is noted that this kinetic model is quite rare in polymer degradation studies. - Abstract: Here, the facile synthesis and thermal characterization of a novel polyphenol containing a pendant Schiff base group, poly(4-{[(4-hydroxyphenyl)imino]methyl}benzene-1,2,3-triol) [PHPIMB], are reported. UV-vis, FT-IR, 1H NMR, 13C NMR, GPC, TG/DTG-DTA, CV (cyclic voltammetry) and solid state conductivity measurements were utilized to characterize the obtained monomer and polymer. The spectral analyses showed that PHPIMB was composed of polyphenol main chains bearing pendant Schiff base side groups. Thermal properties of the polymer were investigated by thermogravimetric analysis under a nitrogen atmosphere. Five methods were used to study the thermal decomposition of PHPIMB at different heating rates, and the results obtained using all the kinetic methods were compared with each other. The thermal decomposition of PHPIMB was found to be a simple process composed of three stages. The investigated methods were those of Flynn-Wall-Ozawa (FWO), Tang, Kissinger-Akahira-Sunose (KAS), Friedman, and Kissinger.
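
    As a concrete example of one of the listed methods, the Kissinger analysis reduces to a straight-line fit of ln(beta/Tp^2) against 1/Tp over the heating rates; the sketch below uses invented peak temperatures, not the PHPIMB data.

```python
# Kissinger method: slope of ln(beta / Tp**2) vs 1/Tp gives -Ea/R, the
# intercept gives ln(A*R/Ea). Peak temperatures here are illustrative only.
import numpy as np

R = 8.314                                    # J mol^-1 K^-1
beta = np.array([5.0, 10.0, 15.0, 20.0])     # heating rates, K/min
Tp = np.array([618.0, 630.0, 638.0, 644.0])  # DTG peak temperatures, K (made up)

y = np.log(beta / Tp**2)
x = 1.0 / Tp
slope, intercept = np.polyfit(x, y, 1)

Ea = -slope * R / 1000.0                     # activation energy, kJ/mol
A = np.exp(intercept) * Ea * 1000.0 / R      # pre-exponential factor, 1/min
print(f"Ea ~ {Ea:.0f} kJ/mol, A ~ {A:.2e} min^-1")
```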

  15. Pressure-induced decomposition of indium hydroxide.

    Science.gov (United States)

    Gurlo, Aleksander; Dzivenko, Dmytro; Andrade, Miria; Riedel, Ralf; Lauterbach, Stefan; Kleebe, Hans-Joachim

    2010-09-15

    A static pressure-induced decomposition of indium hydroxide into metallic indium that takes place at ambient temperature is reported. The lattice parameter of c-In(OH)3 decreased upon compression from 7.977(2) to approximately 7.45 Å at 34 GPa, corresponding to a decrease in specific volume of approximately 18%. Fitting the second-order Birch-Murnaghan equation of state to the obtained compression data gave a bulk modulus of 99 ± 3 GPa for c-In(OH)3. The c-In(OH)3 crystals with a size of approximately 100 nm are comminuted upon compression, as indicated by the grain-size reduction reflected in broadening of the diffraction reflections and the appearance of smaller (approximately 5 nm) incoherently oriented domains in TEM. The rapid decompression of compressed c-In(OH)3 leads to partial decomposition of indium hydroxide into metallic indium, mainly as a result of localized stress gradients caused by relaxation of the highly disordered indium sublattice in indium hydroxide. This partial decomposition of indium hydroxide into metallic indium is irreversible, as confirmed by angle-dispersive X-ray diffraction, transmission electron microscopy imaging, Raman scattering, and FTIR spectroscopy. Recovered c-In(OH)3 samples become completely black and nontransparent and show typical features of metals, i.e., a falling absorption in the 100-250 cm-1 region accompanied by a featureless spectrum in the 250-2500 cm-1 region in the Raman spectrum and Drude-like absorption of free electrons in the region of 4000-8000 cm-1 in the FTIR spectrum. These features were not observed in the initial c-In(OH)3, which is a typical white wide-band-gap semiconductor.
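
    The equation-of-state fit mentioned above amounts to a short nonlinear regression; the sketch below fits the second-order Birch-Murnaghan form P(V) = (3/2)·B0·[(V0/V)^(7/3) − (V0/V)^(5/3)] to synthetic compression points generated to mimic the reported ~18% volume reduction at 34 GPa (the data points themselves are not from the paper).

```python
# Fit the second-order Birch-Murnaghan EOS (B0' fixed at 4) to synthetic P-V data.
import numpy as np
from scipy.optimize import curve_fit, brentq

def bm2(V, V0, B0):
    x = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (x**7 - x**5)          # P in GPa if B0 is in GPa

# Synthetic "measured" (P, V) points generated from assumed parameters
V0_true, B0_true = 507.6, 99.0               # ~7.977^3 A^3 and GPa
P = np.linspace(0.5, 34.0, 15)
V = np.array([brentq(lambda v: bm2(v, V0_true, B0_true) - p,
                     0.5 * V0_true, V0_true) for p in P])
V *= 1.0 + 0.002 * np.random.randn(P.size)   # add a little scatter

popt, _ = curve_fit(bm2, V, P, p0=[500.0, 80.0])
print("fitted V0 = %.1f A^3, B0 = %.1f GPa" % tuple(popt))
```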

  16. A wavelet "time-shift-detail" decomposition

    OpenAIRE

    Levan, N.; Kubrusly, Carlos S.

    2003-01-01

    We show that, with respect to an orthonormal wavelet $\psi(\cdot)\in L^2(\mathbb{R})$, any $f(\cdot)\in L^2(\mathbb{R})$ is, on the one hand, the sum of its ``layers of details'' over all time-shifts, and on the other hand, the sum of its layers of details over all scales. The latter is well known and is a consequence of a wandering subspace decomposition of $L^2(\mathbb{R})$ which, in turn, resulted from a wavelet Multiresolution Analysis (MRA). The former has not been discussed before. We show ...

  17. Thermal decomposition as route for silver nanoparticles

    Directory of Open Access Journals (Sweden)

    Navaladian S

    2006-01-01

    Full Text Available Single-crystalline silver nanoparticles have been synthesized by thermal decomposition of silver oxalate in water and in ethylene glycol. Polyvinyl alcohol (PVA) was employed as a capping agent. The particles were spherical in shape with sizes below 10 nm. The chemical reduction of silver oxalate by PVA was also observed. Increasing the polymer concentration led to a decrease in the size of the Ag particles. Ag nanoparticles were not formed in the absence of PVA. Antibacterial activity of the Ag colloid was studied by the disc diffusion method.

  18. Diffuse Optical Imaging Using Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Binlin Wu

    2012-01-01

    Full Text Available Diffuse optical imaging (DOI) for detecting and locating targets in a highly scattering turbid medium is treated as a blind source separation (BSS) problem. Three matrix decomposition methods, independent component analysis (ICA), principal component analysis (PCA), and nonnegative matrix factorization (NMF), were used to study the DOI problem. The efficacy of the resulting approaches was evaluated and compared using simulated and experimental data. Samples used in the experiments included Intralipid-10% or Intralipid-20% suspensions in water as the medium, with absorptive or scattering targets embedded.
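
    For orientation, the three factorizations can all be exercised on a toy mixing problem with scikit-learn, as sketched below; the synthetic sources and mixing matrix stand in for the Intralipid measurements and are not the paper's data.

```python
# ICA / PCA / NMF applied to the same toy blind-source-separation problem.
import numpy as np
from sklearn.decomposition import FastICA, PCA, NMF

rng = np.random.default_rng(1)
n_detectors, n_samples = 32, 500
sources = np.abs(np.vstack([np.sin(np.linspace(0, 20, n_samples)),
                            rng.random(n_samples)]))       # two non-negative sources
mixing = np.abs(rng.normal(size=(n_detectors, 2)))
measurements = mixing @ sources + 0.01 * rng.random((n_detectors, n_samples))

ica = FastICA(n_components=2, random_state=0).fit(measurements.T)
pca = PCA(n_components=2).fit(measurements.T)
nmf = NMF(n_components=2, init="nndsvda", max_iter=500).fit(measurements.T)

print("ICA sources shape:", ica.transform(measurements.T).shape)
print("PCA explained variance:", pca.explained_variance_ratio_)
print("NMF reconstruction error:", nmf.reconstruction_err_)
```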

  19. Decomposition of nitrous oxide at medium temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Loeffler, G.; Wargadalam, V.J.; Winter, F.; Hofbauer, H.

    2000-03-01

    Flow reactor experiments were done to study the decomposition of N2O at atmospheric pressure and in a temperature range of 600-1,000 °C. Dilute mixtures of N2O with H2, CH4, and CO, with and without oxygen and with N2 as carrier gas, were studied. To see directly the relative importance of thermal decomposition versus destruction by free radicals (i.e., H, O, OH), iodine was added to the reactant mixture, suppressing the radicals' concentrations towards their equilibrium values. The experimental results were discussed using a detailed chemistry model. This work shows that there are still some uncertainties regarding the kinetics of the thermal decomposition and the reaction between N2O and the O radical. Using the recommendations applied in this work for the reaction N2O + M <-> N2 + O + M and for N2O + O <-> products, good agreement with the experimental data can be obtained over a wide range of experimental conditions. The reaction between N2O and OH is of minor importance under the present conditions, as stated in the latest literature. The results show that N2O + H <-> N2 + OH is the most important reaction in the destruction of N2O. In the presence of oxygen it competes with H + O2 + M <-> HO2 + M and H + O2 <-> O + OH, respectively. The importance of the thermal decomposition (N2O + M <-> N2 + O + M) increases with residence time. Reducing conditions and a long residence time lead to a high potential for N2O reduction. In particular, mixtures of H2/N2O and CO/H2O/N2O in nitrogen lead to a chain reaction mechanism causing a strong N2O reduction.

  20. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N

    1992-01-01

    This book provides an in-depth, integrated, and up-to-date exposition of the topic of signal decomposition techniques. Application areas of these techniques include speech and image processing, machine vision, information engineering, High-Definition Television, and telecommunications. The book will serve as the major reference for those entering the field, instructors teaching some or all of the topics in an advanced graduate course and researchers needing to consult an authoritative source. The first book to give a unified and coherent exposition of multiresolutional signal decompos

  1. Thermal decomposition of meat and bone meal

    Energy Technology Data Exchange (ETDEWEB)

    Conesa, J.A.; Fullana, A.; Font, R. [Department of Chemical Engineering, University of Alicante, P.O. Box 99, E-03080 Alicante (Spain)

    2003-12-01

    A series of runs has been performed to study the thermal behavior of meat and bone meal (MBM) both in inert and reactive atmosphere. Although they are actually burned, the thermal decomposition of such MBM wastes has not been studied from a scientific point of view until now. The aim of this work is to present and discuss the thermogravimetric behavior of MBM both in nitrogen and air atmospheres. A thermobalance has been used to carry out the study at three different heating rates. A kinetic scheme able to correlate simultaneously (with no variation of the kinetic constants) the runs performed at different heating rates and different atmospheres of reaction is presented.

  2. Decomposition Polypropylene Plastic Waste with Pyrolysis Methode

    OpenAIRE

    Naimah, Siti; Nuraeni, Chicha; Rumondang, Irma; Jati, Bumiarto Nugroho; Ermawati, Rahyani

    2012-01-01

    Various attempts have been made to reduce plastic waste. One of them is to convert plastic waste into energy sources. The process of converting waste plastics involves several stages, one of which is pyrolysis (thermal cracking). Pyrolysis is the decomposition of plastic waste at high temperatures (500-1000 °C) without O2, followed by distillation. The products of the pyrolysis process are solid and liquid fractions. With the reactor temperature at 500 °C, pyrolysis equi...

  3. Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films

    Energy Technology Data Exchange (ETDEWEB)

    Eloussifi, H. [GRMT, GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Laboratoire de Chimie Inorganique, Faculté des Sciences de Sfax, Université de Sfax, BP 1171, 3000 Sfax (Tunisia); Farjas, J., E-mail: jordi.farjas@udg.cat [GRMT, GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Roura, P. [GRMT, GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Ricart, S.; Puig, T.; Obradors, X. [Institut de Ciència de Materials de Barcelona (CSIC), Campus UAB, 08193 Bellaterra, Catalonia (Spain); Dammak, M. [Laboratoire de Chimie Inorganique, Faculté des Sciences de Sfax, Université de Sfax, BP 1171, 3000 Sfax (Tunisia)

    2013-10-31

    We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as the powder decomposition; however, yttria and all intermediates but YF3 appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films.

  4. Review of Matrix Decomposition Techniques for Signal Processing Applications

    Directory of Open Access Journals (Sweden)

    Monika Agarwal,

    2014-01-01

    Full Text Available Decomposition of a matrix is a vital part of many scientific and engineering applications. It is a technique that breaks down a square numeric matrix into two different square matrices and is a basis for efficiently solving a system of equations, which in turn is the basis for inverting a matrix; matrix inversion is a part of many important algorithms. Matrix factorizations have wide applications in numerical linear algebra, including solving linear systems, computing inertia, and estimating rank. This paper presents a review of the matrix decomposition techniques used in signal processing applications on the basis of their computational complexity, advantages and disadvantages. Various decomposition techniques such as LU decomposition, QR decomposition, and Cholesky decomposition are discussed here.
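
    The three factorizations named above can be compared directly on a small symmetric positive definite system, as in the sketch below (SciPy/NumPy, illustrative only).

```python
# Solve the same system Ax = b via LU, QR and Cholesky factorizations.
import numpy as np
from scipy.linalg import lu_factor, lu_solve, qr, cholesky, solve_triangular

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
A = M @ M.T + 6 * np.eye(6)            # symmetric positive definite
b = rng.normal(size=6)

# LU: PA = LU, then two triangular solves
x_lu = lu_solve(lu_factor(A), b)

# QR: A = QR, solve R x = Q^T b
Q, R = qr(A)
x_qr = solve_triangular(R, Q.T @ b)

# Cholesky: A = L L^T (SPD matrices only)
L = cholesky(A, lower=True)
x_chol = solve_triangular(L.T, solve_triangular(L, b, lower=True))

print(np.allclose(x_lu, x_qr), np.allclose(x_lu, x_chol))
```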

  5. Freeman-Durden Decomposition with Oriented Dihedral Scattering

    Directory of Open Access Journals (Sweden)

    Yan Jian

    2014-10-01

    Full Text Available In this paper, when the azimuth direction of a polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data for agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.

  6. Relative calibration of energy thresholds on multi-bin spectral x-ray detectors

    Energy Technology Data Exchange (ETDEWEB)

    Sjölin, M., E-mail: martin.sjolin@mi.physics.kth.se; Danielsson, M.

    2016-12-21

    Accurate and reliable energy calibration of spectral x-ray detectors used in medical imaging is essential for avoiding ring artifacts in the reconstructed images (computed tomography) and for performing accurate material basis decomposition. A simple and accurate method for relative calibration of the energy thresholds on a multi-bin spectral x-ray detector is presented. The method obtains the linear relations between all energy thresholds in a channel by scanning the thresholds with respect to each other during x-ray illumination. The method does not rely on a model of the detector's response function and does not require any identifiable features in the x-ray spectrum. Applying the same method, the offset between the thresholds can be determined also without external stimuli by utilizing the electronic noise as a source. The simplicity and accuracy of the method makes it suitable for implementation in clinical multi-bin spectral x-ray imaging systems.

  7. Parallel decomposition methods for the solution of electromagnetic scattering problems

    Science.gov (United States)

    Cwik, Tom

    1992-01-01

    This paper contains an overview of the methods used in decomposing solutions to scattering problems onto coarse-grained parallel processors. Initially, a short summary of relevant computer architecture is presented as background to the subsequent discussion. After the introduction of a programming model for problem decomposition, specific decompositions of finite difference time domain, finite element, and integral equation solutions to Maxwell's equations are presented. The paper concludes with an outline of possible software-assisted decomposition methods and a summary.

  8. Offset-sparsity decomposition for enhancement of color microscopic image of stained specimen in histopathology: further results

    Science.gov (United States)

    Kopriva, Ivica; Popović Hadžija, Marijana; Hadžija, Mirko; Aralica, Gorana

    2016-03-01

    Recently, novel data-driven offset-sparsity decomposition (OSD) method was proposed by us to increase colorimetric difference between tissue-structures present in the color microscopic image of stained specimen in histopathology. The OSD method performs additive decomposition of vectorized spectral images into image-adapted offset term and sparse term. Thereby, the sparse term represents an enhanced image. The method was tested on images of the histological slides of human liver stained with hematoxylin and eosin, anti-CD34 monoclonal antibody and Sudan III. Herein, we present further results related to increase of colorimetric difference between tissue structures present in the images of human liver specimens with pancreatic carcinoma metastasis stained with Gomori, CK7, CDX2 and LCA, and with colon carcinoma metastasis stained with Gomori, CK20 and PAN CK. Obtained relative increase of colorimetric difference is in the range [19.36%, 103.94%].

  9. Plant identity influences decomposition through more than one mechanism.

    Directory of Open Access Journals (Sweden)

    Jennie R McLaren

    Full Text Available Plant litter decomposition is a critical ecosystem process representing a major pathway for carbon flux, but little is known about how it is affected by changes in plant composition and diversity. Single plant functional groups (graminoids, legumes, non-leguminous forbs were removed from a grassland in northern Canada to examine the impacts of functional group identity on decomposition. Removals were conducted within two different environmental contexts (fertilization and fungicide application to examine the context-dependency of these identity effects. We examined two different mechanisms by which the loss of plant functional groups may impact decomposition: effects of the living plant community on the decomposition microenvironment, and changes in the species composition of the decomposing litter, as well as the interaction between these mechanisms. We show that the identity of the plant functional group removed affects decomposition through both mechanisms. Removal of both graminoids and forbs slowed decomposition through changes in the decomposition microenvironment. We found non-additive effects of litter mixing, with both the direction and identity of the functional group responsible depending on year; in 2004 graminoids positively influenced decomposition whereas in 2006 forbs negatively influenced decomposition rate. Although these two mechanisms act independently, their effects may be additive if both mechanisms are considered simultaneously. It is essential to understand the variety of mechanisms through which even a single ecosystem property is affected if we are to predict the future consequences of biodiversity loss.

  10. Microbial community functional change during vertebrate carrion decomposition

    National Research Council Canada - National Science Library

    Pechal, Jennifer L; Crippen, Tawni L; Tarone, Aaron M; Lewis, Andrew J; Tomberlin, Jeffery K; Benbow, M Eric

    2013-01-01

    .... The objective of this study was to provide a description of the carrion associated microbial community functional activity using differential carbon source use throughout decomposition over seasons...

  11. Microbial Community Functional Change during Vertebrate Carrion Decomposition: e79035

    National Research Council Canada - National Science Library

    Jennifer L Pechal; Tawni L Crippen; Aaron M Tarone; Andrew J Lewis; Jeffery K Tomberlin; M Eric Benbow

    2013-01-01

    .... The objective of this study was to provide a description of the carrion associated microbial community functional activity using differential carbon source use throughout decomposition over seasons...

  12. Primary decomposition of torsion R[X]-modules

    Directory of Open Access Journals (Sweden)

    William A. Adkins

    1994-01-01

    Full Text Available This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation of any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor product.

  13. An investigation of the decomposition mechanism of calcium carbonate

    Directory of Open Access Journals (Sweden)

    D. Wang

    2017-01-01

    Full Text Available This paper focuses on investigating the decomposition mechanism of calcium carbonate. The non-isothermal thermal decompositions of calcium carbonate under vacuum and flowing nitrogen atmosphere have been studied by thermogravimetric analysis. With the application of the advanced nonlinear isoconversional method, the determined activation energy for each condition is dependent on the extent of reaction. Based on these dependences, a process involving two consecutive decomposition steps has been simulated. The simulation results match the experimental results for the flowing nitrogen atmosphere. Results indicate that the decomposition of calcium carbonate proceeds through the formation of an intermediate, metastable product.

  14. Modeling yields insight into thermal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Case, J.L.; Carr, R.V.; Simpson, M.S. [Air Products and Chemicals, Inc., Allentown, PA (United States)

    1995-12-01

    A fundamental understanding of the thermal decomposition of nitrotoluenes is critical in evaluating the hazards associated with transporting and storing commercial volumes of these chemicals. Detailed modeling of an adiabatic, low PHI and semi-open (vented to a larger pressure vessel) calorimeter provides insight into a multiple reaction mechanism. The reaction rates developed, along with the significant effect of reactant or intermediates vaporization were confirmed with additional experimental results. Such an interpretation of nitrotoluene decomposition is consistent with recent isothermal experiments as well as with the body of data reported in the open literature. The low temperature or induction reactions are accurately represented with a first order Arrhenius model having typical values for kinetic and thermodynamic parameters. These reactions generate minimal amounts of non condensable gas. If the material is maintained at an elevated temperature, but prevented from self-heating (by external cooling), the intermediate products form thermally unstable and nonvolatile oligomers. At higher temperatures the remaining materials undergo explosive reactions characterized by high heats of reaction, large activation energies and massive releases of non condensable gas. Quantifying the rates of nitrotoluene and/or intermediate vaporization versus oligomerization is essential in evaluating the hazard of a thermal explosion involving a commercial quantity of nitrotoluene.

  15. Interactions between Fine Wood Decomposition and Flammability

    Directory of Open Access Journals (Sweden)

    Weiwei Zhao

    2014-04-01

    Full Text Available Fire is nearly ubiquitous in the terrestrial biosphere, with profound effects on earth surface carbon storage, climate, and forest functions. Fuel quality is an important parameter determining forest fire behavior, which differs among both tree species and organs. Fuel quality is not static: when dead plant material decomposes, its structural, chemical, and water dynamic properties change, with implications for fuel flammability. However, the interactions between decomposition and flammability are poorly understood. This study aimed to determine decomposition’s effects on fuel quality and how this directly and indirectly affects wood flammability. We did controlled experiments on water dynamics and fire using twigs of four temperate tree species. We found considerable direct and indirect effects of decomposition on twig flammability, particularly on ignitability and burning time, which are important variables for fire spread. More decomposed twigs ignite and burn faster at given water content. Moreover, decomposed twigs dry out faster than fresh twigs, which make them flammable sooner when drying out after rain. Decomposed fine woody litters may promote horizontal fire spread as ground fuels and act as a fuel ladder when staying attached to trees. Our results add an important, previously poorly studied dynamic to our understanding of forest fire spread.

  16. Kinetics of bromochloramine formation and decomposition.

    Science.gov (United States)

    Luh, Jeanne; Mariñas, Benito J

    2014-01-01

    Batch experiments were performed to study the kinetics of bromochloramine formation and decomposition from the reaction of monochloramine and bromide ion. The effects of pH, initial monochloramine and bromide ion concentrations, phosphate buffer concentration, and excess ammonia were evaluated. Results showed that the monochloramine decay rate increased with decreasing pH and increasing bromide ion concentration, and the concentration of bromochloramine increased to a maximum before decreasing gradually. The maximum bromochloramine concentration reached was found to decrease with increasing phosphate and ammonia concentrations. Previous models in the literature were not able to capture the decay of bromochloramine, and therefore we proposed an extended model consisting of reactions for monochloramine autodecomposition, the decay of bromamines in the presence of bromide, bromochloramine formation, and bromochloramine decomposition. Reaction rate constants were obtained through least-squares fitting to 11 data sets representing the effect of pH, bromide, monochloramine, phosphate, and excess ammonia. The reaction rate constants were then used to predict monochloramine and bromochloramine concentration profiles for all experimental conditions tested. In general, the modeled lines were found to provide good agreement with the experimental data under most conditions tested, with deviations occurring at low pH and high bromide concentrations.

  17. DECOMPOSITION OF MANUFACTURING PROCESSES: A REVIEW

    Directory of Open Access Journals (Sweden)

    N.M.Z.N. Mohamed

    2012-06-01

    Full Text Available Manufacturing is a global activity that started during the industrial revolution in the late 19th century to cater for the large-scale production of products. Since then, manufacturing has changed tremendously through the innovations of technology, processes, materials, communication and transportation. The major challenge facing manufacturing is to produce more products using less material, less energy and less involvement of labour. To face these challenges, manufacturing companies must have a strategy and competitive priority in order for them to compete in a dynamic market. A review of the literature on the decomposition of manufacturing processes outlines three main processes, namely: high volume, medium volume and low volume. The decomposition shows that each sub-process has its own characteristics and depends on the nature of the firm’s business. Two extreme processes are continuous line production (the fast extreme) and project shop (the slow extreme). Other processes lie between these two extremes of the manufacturing spectrum. Process flow patterns become less complex with cellular, line and continuous flow compared with jobbing and project. The review also indicates that when the product is of high variety and low volume, project or functional production is applied.

  18. Experimental study of trimethyl aluminum decomposition

    Science.gov (United States)

    Zhang, Zhi; Pan, Yang; Yang, Jiuzhong; Jiang, Zhiming; Fang, Haisheng

    2017-09-01

    Trimethyl aluminum (TMA) is an important precursor used for metal-organic chemical vapor deposition (MOCVD) of most Al-containing structures, in particular of nitride structures. The reaction mechanism of TMA with ammonia is neither clear nor certain due to its complexity. Pyrolysis of the trimethyl metal is the start of a series of reactions and thus significantly affects the growth. An experimental study of TMA pyrolysis, however, has not yet been conducted in detail. In this paper, a reflectron time-of-flight mass spectrometer is adopted to measure TMA decomposition from room temperature to 800 °C in a special pyrolysis furnace, with activation by soft X-rays from synchrotron radiation. The results show that the generation of methyl, ethane and monomethyl aluminum (MMA) indicates the start of the pyrolysis process. In the low temperature range from 25 °C to 700 °C, the main product is dimethyl aluminum (DMA) from the decomposition of TMA. Above 700 °C, the main products are MMA, DMA, methyl and ethane.

  19. Gaussian Decomposition of Laser Altimeter Waveforms

    Science.gov (United States)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
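
    A minimal sketch of the fitting step on synthetic data is given below; it seeds Gaussian centers and half-widths from the inflection points of a smoothed waveform and refines them by nonlinear least squares (SciPy's Levenberg-Marquardt implementation), mirroring the procedure outlined above without the LVIS-specific noise ranking or non-negative least-squares amplitude initialization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def multi_gauss(t, *p):
    """Sum of Gaussians; p = [A1, mu1, sigma1, A2, mu2, sigma2, ...]."""
    y = np.zeros_like(t)
    for A, mu, sigma in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

def decompose_waveform(t, w, smooth_sigma=3.0):
    ws = gaussian_filter1d(w, smooth_sigma)
    # Inflection points: sign changes of the second difference.
    d2 = np.diff(ws, 2)
    infl = np.where(np.diff(np.sign(d2)))[0] + 1
    p0 = []
    # Consecutive inflection-point pairs give initial centers and half-widths.
    for i0, i1 in zip(infl[:-1:2], infl[1::2]):
        mu = 0.5 * (t[i0] + t[i1])
        sigma = max(0.5 * (t[i1] - t[i0]), 1e-3)
        p0 += [ws[(i0 + i1) // 2], mu, sigma]
    popt, _ = curve_fit(multi_gauss, t, w, p0=p0, maxfev=20000)
    return np.asarray(popt).reshape(-1, 3)   # rows: amplitude, center, width

# Synthetic two-return waveform as a quick check.
t = np.linspace(0, 100, 500)
w = 1.0 * np.exp(-0.5 * ((t - 30) / 4) ** 2) + 0.6 * np.exp(-0.5 * ((t - 60) / 6) ** 2)
print(decompose_waveform(t, w))
```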

  20. Overlapping Community Detection based on Network Decomposition

    Science.gov (United States)

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-04-01

    Community detection in complex networks has become a vital step in understanding the structure and dynamics of networks in various fields. However, traditional node clustering methods and the relatively new link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to its high computational cost and an ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified using a node clustering technique. The network decomposition helps reduce the computation time, and the elimination of noise links improves the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and is less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.

  1. Minimax eigenvector decomposition for data hiding

    Science.gov (United States)

    Davidson, Jennifer

    2005-09-01

    Steganography is the study of hiding information within a covert channel in order to transmit a secret message. Any public media such as image data, audio data, or even file packets, can be used as a covert channel. This paper presents an embedding algorithm that hides a message in an image using a technique based on a nonlinear matrix transform called the minimax eigenvector decomposition (MED). The MED is a minimax algebra version of the well-known singular value decomposition (SVD). Minimax algebra is a matrix algebra based on the algebraic operations of maximum and addition, developed initially for use in operations research and extended later to represent a class of nonlinear image processing operations. The discrete mathematical morphology operations of dilation and erosion, for example, are contained within minimax algebra. The MED is much quicker to compute than the SVD and avoids the numerical issues of the SVD because the operations involve only integer addition, subtraction, and comparison. We present the algorithm to embed data using the MED, show examples applied to image data, and discuss limitations and advantages as compared with another similar algorithm.
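
    The record does not give the MED algorithm itself, but the minimax-algebra primitive it builds on is easy to state: matrix addition is replaced by the maximum and multiplication by addition, which is the grey-scale dilation of mathematical morphology. The sketch below shows only that primitive, as an illustration, not the embedding scheme.

```python
import numpy as np

def maxplus_product(A, B):
    """Minimax-algebra (max-plus) matrix 'product': C[i, j] = max_k (A[i, k] + B[k, j]).
    This is the grey-scale dilation underlying transforms such as the MED."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.empty((m, n))
    for i in range(m):
        # Broadcasting: row i of A against every column of B.
        C[i] = np.max(A[i][:, None] + B, axis=0)
    return C

A = np.array([[0, 2], [1, 3]], dtype=float)
B = np.array([[1, 0], [2, 4]], dtype=float)
print(maxplus_product(A, B))
```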

  2. Spectral filtering for plant production

    Energy Technology Data Exchange (ETDEWEB)

    Young, R.E.; McMahon, M.J.; Rajapakse, N.C.; Becoteau, D.R.

    1994-12-31

    Research to date suggests that spectral filtering can be an effective alternative to chemical growth regulators for altering plant development. If properly implemented, it can be nonchemical and environmentally friendly. Aqueous CuSO{sub 4} and CuCl{sub 2} solutions in channelled plastic panels have been shown to be effective filters, but they can be highly toxic if the solutions contact plants. Some studies suggest that spectral filtration limited to short end-of-day (EOD) intervals can also alter plant development. Future research should be directed toward confirmation of the influence of spectral filters and exposure times on a broader range of plant species and cultivars. Efforts should also be made to identify non-noxious alternatives to aqueous copper solutions and/or to incorporate these chemicals permanently into plastic films and panels that can be used in greenhouse construction. It would also be informative to study the impacts of spectral filters on insect and microbial populations in plant growth facilities. The economic impacts of spectral filtering techniques should be assessed for each delivery methodology.

  3. Solar Spectral Irradiance and Climate

    Science.gov (United States)

    Pilewskie, P.; Woods, T.; Cahalan, R.

    2012-01-01

    Spectrally resolved solar irradiance is recognized as being increasingly important to improving our understanding of the manner in which the Sun influences climate. There is strong empirical evidence linking total solar irradiance to surface temperature trends - even though the Sun has likely made only a small contribution to the last half-century's global temperature anomaly - but the amplitudes cannot be explained by direct solar heating alone. The wavelength and height dependence of solar radiation deposition, for example, ozone absorption in the stratosphere, absorption in the ocean mixed layer, and water vapor absorption in the lower troposphere, contribute to the "top-down" and "bottom-up" mechanisms that have been proposed as possible amplifiers of the solar signal. New observations and models of solar spectral irradiance are needed to study these processes and to quantify their impacts on climate. Some of the most recent observations of solar spectral variability from the mid-ultraviolet to the near-infrared have revealed some unexpected behavior that was not anticipated prior to their measurement, based on an understanding from model reconstructions. The atmospheric response to the observed spectral variability, as quantified in climate model simulations, has revealed similarly surprising and, in some cases, conflicting results. This talk will provide an overview of the state of our understanding of the spectrally resolved solar irradiance, its variability over many time scales, potential climate impacts, and finally, a discussion on what is required for improving our understanding of Sun-climate connections, including a look forward to future observations.

  4. New approach to spectral features modeling

    NARCIS (Netherlands)

    Brug, H. van; Scalia, P.S.

    2012-01-01

    The origin of spectral features, speckle effects, is explained, followed by a discussion on many aspects of spectral features generation. The next part gives an overview of means to limit the amplitude of the spectral features. This paper gives a discussion of all means to reduce the spectral

  5. Effects of stoichiometry and temperature perturbations on beech leaf litter decomposition, enzyme activities and protein expression

    Directory of Open Access Journals (Sweden)

    K. M. Keiblinger

    2012-11-01

    Full Text Available Microbes are major players in leaf litter decomposition and therefore advances in the understanding of their control on element cycling are of paramount importance. Our aim was to investigate the influence of leaf litter stoichiometry in terms of carbon (C) : nitrogen (N) : phosphorus (P) ratios on the decomposition processes and to track changes in microbial community structures and functions in response to temperature stress treatments. To elucidate how the stoichiometry of beech leaf litter (Fagus sylvatica L.) and stress treatments interactively affect the microbial decomposition processes, a terrestrial microcosm experiment was conducted. Beech litter from different Austrian sites covering C:N ratios from 39 to 61 and C:P ratios from 666 to 1729 were incubated at 15 °C and 60% moisture for six months. Part of the microcosms were then subjected to severe changes in temperature (+30 °C and −15 °C) to monitor the influence of temperature stress. Extracellular enzyme activities were assayed and respiratory activities measured. A semi-quantitative metaproteomics approach (1D-SDS PAGE combined with liquid chromatography and tandem mass spectrometry; unique spectral counting) was employed to investigate the impact of the applied stress treatments in dependency of litter stoichiometry on structure and function of the decomposing community. In litter with narrow C:nutrient (C:N, C:P) ratios, microbial decomposers were most abundant. Cellulase, chitinase, phosphatase and protease activity decreased after heat and freezing treatments. Decomposer communities and specific functions varied with site, i.e. stoichiometry. The applied stress combined with the respective time of sampling evoked changes of enzyme activities and litter pH. Freezing treatments resulted in a decline in residual plant litter material and increased fungal abundance, indicating slightly accelerated decomposition. Overall, a strong effect of litter stoichiometry on microbial

  6. Effects of stoichiometry and temperature perturbations on beech leaf litter decomposition, enzyme activities and protein expression

    Science.gov (United States)

    Keiblinger, K. M.; Schneider, T.; Roschitzki, B.; Schmid, E.; Eberl, L.; Hämmerle, I.; Leitner, S.; Richter, A.; Wanek, W.; Riedel, K.; Zechmeister-Boltenstern, S.

    2012-11-01

    Microbes are major players in leaf litter decomposition and therefore advances in the understanding of their control on element cycling are of paramount importance. Our aim was to investigate the influence of leaf litter stoichiometry in terms of carbon (C) : nitrogen (N) : phosphorus (P) ratios on the decomposition processes and to track changes in microbial community structures and functions in response to temperature stress treatments. To elucidate how the stoichiometry of beech leaf litter (Fagus sylvatica L.) and stress treatments interactively affect the microbial decomposition processes, a terrestrial microcosm experiment was conducted. Beech litter from different Austrian sites covering C:N ratios from 39 to 61 and C:P ratios from 666 to 1729 were incubated at 15 °C and 60% moisture for six months. Part of the microcosms were then subjected to severe changes in temperature (+30 °C and -15 °C) to monitor the influence of temperature stress. Extracellular enzyme activities were assayed and respiratory activities measured. A semi-quantitative metaproteomics approach (1D-SDS PAGE combined with liquid chromatography and tandem mass spectrometry; unique spectral counting) was employed to investigate the impact of the applied stress treatments in dependency of litter stoichiometry on structure and function of the decomposing community. In litter with narrow C:nutrient (C:N, C:P) ratios, microbial decomposers were most abundant. Cellulase, chitinase, phosphatase and protease activity decreased after heat and freezing treatments. Decomposer communities and specific functions varied with site, i.e. stoichiometry. The applied stress combined with the respective time of sampling evoked changes of enzyme activities and litter pH. Freezing treatments resulted in a decline in residual plant litter material and increased fungal abundance, indicating slightly accelerated decomposition. Overall, a strong effect of litter stoichiometry on microbial community structures and

  7. Thermodynamic anomaly in magnesium hydroxide decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Reis, T.A.

    1983-08-01

    The origin of the discrepancy in the equilibrium water vapor pressure measurements for the reaction Mg(OH){sub 2}(s) = MgO(s) + H{sub 2}O(g) when determined by Knudsen effusion and static manometry at the same temperature was investigated. For this reaction undergoing continuous thermal decomposition in Knudsen cells, Kay and Gregory observed that, by extrapolating the steady-state apparent equilibrium vapor pressure measurements to zero orifice, the vapor pressure was approximately 10{sup -4} of that previously established by Giauque and Archibald as the true thermodynamic equilibrium vapor pressure using statistical mechanical calculations for the entropy of water vapor. This large difference in vapor pressures suggests the possibility of the formation in a Knudsen cell of a higher energy MgO that is thermodynamically metastable by about 48 kJ/mol. It has been shown here that the experimental results are qualitatively independent of the type of Mg(OH){sub 2} used as a starting material, which confirms the inferences of Kay and Gregory. Thus, most forms of Mg(OH){sub 2} are considered to be the stable thermodynamic equilibrium form. X-ray diffraction results show that during the course of the reaction only the equilibrium NaCl-type MgO is formed, and no different phases result from samples prepared in Knudsen cells. Surface area data indicate that the MgO molar surface area remains constant throughout the course of the reaction at low decomposition temperatures, and no significant annealing occurs below 400°C. Scanning electron microscope photographs show no change in particle size or particle surface morphology. Solution calorimetric measurements indicate no inherent higher energy content in the MgO from the solid produced in Knudsen cells. The Knudsen cell vapor pressure discrepancy may reflect the formation of a transient metastable MgO or Mg(OH){sub 2}-MgO solid solution during continuous thermal decomposition in Knudsen cells.

  8. Berlin Reflectance Spectral Library (BRSL)

    Science.gov (United States)

    Henckel, D.; Arnold, G.; Kappel, D.; Moroz, L. V.; Markus, K.

    2017-09-01

    The Berlin Reflectance Spectral Library (BRSL) provides a collection of reflectance spectra between 0.3 and 17 µm. It was originally dedicated to supporting space missions to small solar system bodies. Meanwhile, the library also includes selections of biconical reflectance spectra for spectral data analysis of other planetary bodies. The library provides reference spectra of well-characterized terrestrial analogue materials and meteorites for the interpretation of remote sensing reflectance spectra of planetary surfaces. We introduce the BRSL, summarize the available data, and describe how to access and use them for further relevant applications.

  9. Generation of metallic plasmon nanostructures in a thin transparent photosensitive copper oxide film by femtosecond thermochemical decomposition

    Science.gov (United States)

    Danilov, P. A.; Zayarny, D. A.; Ionin, A. A.; Kudryashov, S. I.; Litovko, E. P.; Mel'nik, N. N.; Rudenko, A. A.; Saraeva, I. N.; Umanskaya, S. P.; Khmelnitskii, R. A.

    2017-09-01

    Irradiation of an optically transparent copper (I) oxide film covering a glass substrate with tightly focused femtosecond laser pulses in the pre-ablation regime leads to reduction of the film to a metallic colloidal state via single-photon absorption and its subsequent thermochemical decomposition. This effect was demonstrated by the corresponding measurement of the extinction spectrum in the visible spectral range. The laser-induced formation of metallic copper nanoparticles in the focal region inside the bulk oxide film allows direct recording of individual thin-film plasmon nanostructures and optical-range metasurfaces.

  10. Comparative study of laser and lamp fluorescence of cancer and normal tissue through wavelet transform and singular value decomposition

    Science.gov (United States)

    Gharekhan, Anita H.; Rath, Dhaitri; Oza, Ashok N.; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.

    2009-02-01

    A systematic investigation of the fluorescence characteristics of normal and cancerous human breast tissues is carried out, using a laser and a lamp as excitation sources. It is found that previously observed subtle differences between these two tissue types in the wavelet domain are absent when a lamp is used as the excitation source. However, singular value decomposition of the average spectral profile in the wavelet domain yields strong correlations for the cancer tissues in the 580-750 nm regime, indicating weak fluorophore activity in this wavelength range.

  11. Litter evenness influences short-term peatland decomposition processes.

    Science.gov (United States)

    Ward, Susan E; Ostle, Nick J; McNamara, Niall P; Bardgett, Richard D

    2010-10-01

    There is concern that changes in climate and land use could increase rates of decomposition in peatlands, leading to release of stored C to the atmosphere. Rates of decomposition are driven by abiotic factors such as temperature and moisture, but also by biotic factors such as changes in litter quality resulting from vegetation change. While effects of litter species identity and diversity on decomposition processes are well studied, the impact of changes in the relative abundance (evenness) of species has received less attention. In this study we investigated the effects of changes in short-term peatland plant species evenness on decomposition in mixed litter assemblages, measured as litter weight loss, respired CO2 and leachate C and N. We found that over the 307-day incubation period, higher levels of species evenness increased rates of decomposition in mixed litters, measured as weight loss and leachate dissolved organic N. We also found that the identity of the dominant species influenced rates of decomposition, measured as weight loss, CO2 flux and leachate N. The greatest rates of decomposition occurred when the dwarf shrub Calluna vulgaris dominated litter mixtures, and the lowest when the bryophyte Pleurozium schreberi dominated. Interactions between evenness and dominant species identity were also detected for litter weight loss and leachate N. In addition, positive non-additive effects of mixing litter were observed for litter weight loss. Our findings highlight the importance of changes in the evenness of plant community composition for short-term decomposition processes in UK peatlands.

  12. An Approach to Operational Analysis: Doctrinal Task Decomposition

    Science.gov (United States)

    2016-08-04

    Only title-page and header fragments of this paper were captured: it was presented by Major Matthew A. Horning, U.S. Army, at the Ground Vehicle Systems Engineering and Technology Symposium (GVSETS), Novi, Michigan, August 2-4, 2016 (UNCLASSIFIED). The surviving text outlines a doctrinal task analysis framework for operational analysis and notes that an NCO from any branch, such as logistics, can describe Armor doctrine to the TRADOC standards.

  13. Assessment of three major decomposition techniques for sample ...

    African Journals Online (AJOL)

    Three main rock-decomposition techniques (microwave oven, open beaker acid and basic fusion) were examined in an attempt to establish the most appropriate method for the decomposition of granite rocks for elemental analysis. Standard reference rock material NIM-SARM-I was dissolved using each of the digestion ...

  14. Pitfalls in VAR based return decompositions: A clarification

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component...

  15. Kinetics of the thermal decomposition of tetramethylsilane behind ...

    Indian Academy of Sciences (India)

    Thermal decomposition of tetramethylsilane (TMS) diluted in argon was studied behind the reflected shock waves in a single pulse shock tube (SPST) in the temperature range of 1058–1194 K. The major products formed in the decomposition are methane (CH4) and ethylene (C2H4); whereas ethane and propylene were ...

  16. Rate of Decomposition of Leaflitter in an Age Series Gmelina ...

    African Journals Online (AJOL)

    The study was carried out to investigate the rate of decomposition of Gmelina arborea Roxb. leaf litter in an age-series Gmelina plantation in Shasa Forest Reserve, a Nigerian lowland forest. The rate of decomposition of Gmelina leaf litter was determined using the litter bag technique and mass balance analysis to quantify the ...

  17. Litter fall and decomposition of mangrove species Avicennia marina ...

    African Journals Online (AJOL)

    Litter fall and decomposition of mangrove leaves were compared for different seasons, species (Avicennia marina and Rhizophora mucronata) and sites in southern Mozambique. Mangrove leaf litter fall and decomposition were estimated using small mesh collecting-baskets and litter bags, respectively, in 2006 and ...

  18. Effect of hydrofluoric acid on acid decomposition mixtures for ...

    African Journals Online (AJOL)

    Effect of hydrofluoric acid on acid decomposition mixtures for determining iron and other metallic elements in green vegetables. ... Therefore, the inclusion of HF in the acid decomposition mixtures would ensure total and precise estimation of Fe in plant materials, but it is not critical for the analysis of Mn, Mg, Cu, Zn and Ca.

  19. Organic fertilizer decomposition and nutrient loads in water reservoir ...

    African Journals Online (AJOL)

    Decomposition in aquatic ecosystems is controlled by various factors. The study investigated the trend of decomposition and the potential nutrient loads in reservoir water. Analyses of water samples and of organic fertilizer composition followed APHA (1995) and Klute (1986), respectively. Reservoir water ...

  20. Improved beamforming performance using pulsed plane wave decomposition

    DEFF Research Database (Denmark)

    Munk, Peter; Jensen, Jørgen Arendt

    2000-01-01

    A tool for calculating the beamformer setup associated with a specified pulsed acoustic field is presented. The method is named Pulsed Plane Wave Decomposition (PPWD) and is based on the decomposition of a pulsed acoustic field into a set of PPWs at a given depth. Each PPW can be propagated to th...

  1. Decomposition characteristics of maize ( Zea mays . L.) straw with ...

    African Journals Online (AJOL)

    Decomposition of maize straw incorporated into soil with various nitrogen-amended carbon to nitrogen (C/N) ratios under a range of moisture conditions was studied through a laboratory incubation trial. The experiment was set up to simulate the most suitable C/N ratio for straw carbon (C) decomposition and sequestration in the soil.

  2. Consequences of biodiversity loss for litter decomposition across biomes

    NARCIS (Netherlands)

    Handa, I.T.; Aerts, R.; Berendse, F.; Berg, M.P.; Butenschoen, O.; Bruder, A.; Chauvet, E.; Gessner, M.O.; Jabiol, J.; Makkonen, M.; McKie, B.G.; Malmqvist, B.; Peeters, E.T.H.M.; Scheu, S.; Schmid, B.; Ruijven, van J.; Vos, V.C.A.; Hattenschwiler, S.

    2014-01-01

    The decomposition of dead organic matter is a major determinant of carbon and nutrient cycling in ecosystems, and of carbon fluxes between the biosphere and the atmosphere [1-3]. Decomposition is driven by a vast diversity of organisms that are structured in complex food webs [2,4]. Identifying the

  3. Generalized Benders’ Decomposition for topology optimization problems

    DEFF Research Database (Denmark)

    Munoz Queupumil, Eduardo Javier; Stolpe, Mathias

    2011-01-01

    This article considers the non-linear mixed 0–1 optimization problems that appear in topology optimization of load carrying structures. The main objective is to present a Generalized Benders’ Decomposition (GBD) method for solving single and multiple load minimum compliance (maximum stiffness) problems with discrete design variables to global optimality. We present the theoretical aspects of the method, including a proof of finite convergence and conditions for obtaining global optimal solutions. The method is also linked to, and compared with, an Outer-Approximation approach and a mixed 0–1 semidefinite programming formulation of the considered problem. Several ways to accelerate the method are suggested and an implementation is described. Finally, a set of truss topology optimization problems are numerically solved to global optimality.

  4. Hydrogen peroxide decomposition kinetics in aquaculture water

    DEFF Research Database (Denmark)

    Arvin, Erik; Pedersen, Lars-Flemming

    2015-01-01

    Hydrogen peroxide (HP) is used in aquaculture systems where preventive or curative water treatments occasionally are required. Use of chemical agents can be challenging in recirculating aquaculture systems (RAS) due to extended water retention time and because the agents must not damage the fish reared or the nitrifying bacteria in the biofilters at concentrations required to eliminate pathogens. This calls for quantitative insight into the fate of the disinfectant residuals during water treatment. This paper presents a kinetic model that describes the HP decomposition in aquaculture water ... in RAS by addressing disinfection demand and identifying efficient and safe water treatment routines.

  5. On Double-Star Decomposition of Graphs

    Directory of Open Access Journals (Sweden)

    Akbari Saieed

    2017-08-01

    Full Text Available A tree containing exactly two non-pendant vertices is called a double-star. A double-star with degree sequence (k1 + 1, k2 + 1, 1, . . . , 1) is denoted by S_{k1,k2}. We study the edge-decomposition of graphs into double-stars. It was proved that every double-star of size k decomposes every 2k-regular graph. In this paper, we extend this result by showing that every graph in which every vertex has degree 2k + 1 or 2k + 2 and which contains a 2-factor is decomposed into S_{k1,k2} and S_{k1-1,k2}, for all positive integers k1 and k2 such that k1 + k2 = k.

  6. Generalized decomposition methods for singular oscillators

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, J.I. [Room I-320-D, E. T. S. Ingenieros Industriales, Universidad de Malaga, Plaza El Ejido, s/n 29013 Malaga (Spain)], E-mail: jirs@lcc.uma.es

    2009-10-30

    Generalized decomposition methods based on a Volterra integral equation, the introduction of an ordering parameter and a power series expansion of the solution in terms of the ordering parameter are developed and used to determine the solution and the frequency of oscillation of a singular, nonlinear oscillator with an odd nonlinearity. It is shown that these techniques provide solutions which are free from secularities if the unknown frequency of oscillation is also expanded in a power series of the ordering parameter, and that they require the nonlinearities to be analytic functions of their arguments. At leading order they provide the same frequency of oscillation as two-level iterative techniques, as the homotopy perturbation method if the constants that appear in the governing equation are expanded in power series of the ordering parameter, and as modified artificial parameter Lindstedt-Poincaré procedures.

  7. Damping Estimation by Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Brincker, Rune; Ventura, C. E.; Andersen, P.

    2001-01-01

    In this paper it is explained how the damping can be estimated using the Frequency Domain Decomposition technique for output-only modal identification, i.e. in the case where the modal parameters are to be estimated without knowing the forces exciting the system. Also it is explained how the natural... back to time domain to identify damping and frequency. The technique is illustrated on a simple simulation case with 2 closely spaced modes. On this example it is illustrated how the identification is influenced by very close modal spacing, by non-orthogonal modes, and by correlated input. The technique... is further illustrated on the output-only identification of the Great Belt Bridge. On this example it is shown how the damping is identified on a weakly excited mode and a closely spaced mode....
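
    A bare-bones sketch of the core FDD step is shown below, assuming multi-channel response data y of shape (channels, samples): the cross-spectral density matrix is estimated at each frequency and decomposed by SVD, so that peaks of the first singular value indicate modes and the corresponding singular vectors approximate mode shapes. The enhanced-FDD damping step (taking the identified SDOF spectral bell back to the time domain) is not reproduced here.

```python
import numpy as np
from scipy.signal import csd

def fdd(y, fs, nperseg=1024):
    """Frequency Domain Decomposition of multi-channel output-only data y (n_ch, n_samples)."""
    n_ch = y.shape[0]
    f, _ = csd(y[0], y[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=nperseg)
    s1 = np.zeros(len(f))
    shapes = np.zeros((len(f), n_ch), dtype=complex)
    for k in range(len(f)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k] = S[0]              # first singular value spectrum
        shapes[k] = U[:, 0]       # candidate mode shape at this frequency
    return f, s1, shapes

# Example: two channels sharing a 5 Hz narrowband component plus measurement noise.
fs = 256.0
t = np.arange(0, 60, 1 / fs)
s = np.sin(2 * np.pi * 5 * t)
rng = np.random.default_rng(7)
y = np.vstack([s + 0.1 * rng.standard_normal(t.size),
               0.7 * s + 0.1 * rng.standard_normal(t.size)])
f, s1, _ = fdd(y, fs)
print("peak of the first singular value spectrum near", f[np.argmax(s1)], "Hz")
```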

  8. Decomposition of time-resolved tomographic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Schmid, Peter J. [Ecole Polytechnique, Laboratoire d' Hydrodynamique (LadHyX), Palaiseau (France); Violato, Daniele; Scarano, Fulvio [Delft University of Technology, Department of Aerospace Engineering, Delft (Netherlands)

    2012-06-15

    An experimental study has been conducted on a transitional water jet at a Reynolds number of Re = 5,000. Flow fields have been obtained by means of time-resolved tomographic particle image velocimetry capturing all relevant spatial and temporal scales. The measured three-dimensional flow fields have then been postprocessed by the dynamic mode decomposition which identifies coherent structures that contribute significantly to the dynamics of the jet. Both temporal and spatial analyses have been performed. Where the jet exhibits a primary axisymmetric instability followed by a pairing of the vortex rings, dominant dynamic modes have been extracted together with their amplitude distribution. These modes represent a basis for the low-dimensional description of the dominant flow features. (orig.)
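
    For readers unfamiliar with the method, a minimal exact-DMD sketch on synthetic snapshot data is given below; it follows the standard SVD-based formulation rather than the specific implementation used for the tomographic PIV data.

```python
import numpy as np

def dmd(X, r=4):
    """Exact dynamic mode decomposition of a snapshot matrix X (n_space, n_time)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Low-rank approximation of the linear propagator A with X2 ~ A X1.
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    return eigvals, modes

# Synthetic data: two travelling waves sampled on 200 points over 100 snapshots.
x = np.linspace(0, 2 * np.pi, 200)[:, None]
t = np.linspace(0, 4 * np.pi, 100)[None, :]
X = np.sin(x - 0.5 * t) + 0.5 * np.cos(2 * x + t)
eigvals, modes = dmd(X, r=4)
print("DMD eigenvalue magnitudes:", np.round(np.abs(eigvals), 3))
```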

  9. Biclustering via Sparse Singular Value Decomposition

    KAUST Repository

    Lee, Mihee

    2010-02-16

    Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed on the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets. © 2010, The International Biometric Society.

  10. Image compression using singular value decomposition

    Science.gov (United States)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage. So we often need to apply data compression techniques to reduce the storage space consumed by the image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, the digital image is given to SVD. SVD refactors the given digital image into three matrices. The singular values are used to refactor the image, and at the end of this process the image is represented with a smaller set of values, reducing the storage space required by the image. The goal here is to achieve image compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
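
    A compact sketch of the rank-k compression described above is given below on a synthetic test image, reporting a naive compression ratio and the mean square error; the test image and the chosen ranks are arbitrary illustrative values.

```python
import numpy as np

def svd_compress(img, k):
    """Keep the k largest singular values of a 2-D image array."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
    m, n = img.shape
    stored = k * (m + n + 1)            # values needed for the rank-k factors
    ratio = (m * n) / stored            # naive compression ratio
    mse = np.mean((img - approx) ** 2)
    return approx, ratio, mse

# Smooth synthetic "image" (a low-rank pattern plus a little noise).
img = np.outer(np.sin(np.linspace(0, 3, 256)), np.cos(np.linspace(0, 5, 256)))
img += 0.01 * np.random.default_rng(0).standard_normal((256, 256))
for k in (5, 20, 50):
    _, ratio, mse = svd_compress(img, k)
    print(f"k={k:3d}  compression ratio={ratio:5.1f}  MSE={mse:.2e}")
```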

  11. Decomposition of spectra using maximum autocorrelation factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2001-01-01

    This paper addresses the problem of generating a low dimensional representation of the variation present in a set of spectra, e.g. reflection spectra recorded from a series of objects. The resulting low dimensional description may subsequently be input through variable selection schemes into cla... ... Fourier decomposition these new variables are located in frequency as well as wavelength. The proposed algorithm is tested on 100 samples of NIR spectra of wheat.
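
    The record does not spell out the construction, but maximum autocorrelation factors are conventionally obtained from a generalized eigenvalue problem between the covariance of the data and the covariance of its differences; the sketch below follows that textbook formulation on synthetic data and is not the wheat-NIR analysis reported above.

```python
import numpy as np
from scipy.linalg import eigh

def maf(X):
    """Maximum autocorrelation factors of X (n_samples, n_variables),
    using the sample ordering for the difference covariance."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)      # covariance of the data
    D = np.diff(Xc, axis=0)
    Sd = np.cov(D, rowvar=False)      # covariance of the differences
    # Small generalized eigenvalues of (Sd, S) give the smoothest, most autocorrelated factors.
    w, V = eigh(Sd, S)                # symmetric generalized eigenproblem, ascending eigenvalues
    return Xc @ V, w                  # factor scores and eigenvalues

rng = np.random.default_rng(1)
smooth = np.cumsum(rng.standard_normal((300, 1)), axis=0)      # slowly varying signal
X = smooth @ rng.standard_normal((1, 8)) + 0.5 * rng.standard_normal((300, 8))
scores, w = maf(X)
print("smallest generalized eigenvalues (most autocorrelated factors):", np.round(w[:3], 3))
```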

  12. Heterogeneous Thermochemical Decomposition Under Direct Irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Lipinski, W.; Steinfeld, A. [PSI and ETH Zuerich(Switzerland)

    2005-03-01

    Radiative heat transfer in a chemically reacting system directly exposed to an external source of high-flux radiation is considered. The endothermic decomposition of CaCO{sub 3}(s) into CaO(s) and CO{sub 2}(g) is selected as the model heterogeneous reaction. Experimentation using an Ar arc as the radiation source was carried out in which powder samples were subjected to radiative power fluxes in the range 400-930 kW/m{sup 2}. A 3D transient heat transfer model that links conduction-convection-radiation heat transfer to the chemical kinetics is formulated using wavelength and chemical composition dependent material properties. Monte-Carlo ray tracing and the Rosseland diffusion approximation are employed to obtain the radiative transport. The unsteady energy equation is solved by a finite volume technique. The model is validated by comparing the computed reaction extent variation with time to the experimentally measured values. (author)

  13. Sparsity-promoting dynamic mode decomposition

    Science.gov (United States)

    Jovanović, Mihailo R.; Schmid, Peter J.; Nichols, Joseph W.

    2014-02-01

    Dynamic mode decomposition (DMD) represents an effective means for capturing the essential features of numerically or experimentally generated flow fields. In order to achieve a desirable tradeoff between the quality of approximation and the number of modes that are used to approximate the given fields, we develop a sparsity-promoting variant of the standard DMD algorithm. Sparsity is induced by regularizing the least-squares deviation between the matrix of snapshots and the linear combination of DMD modes with an additional term that penalizes the ℓ1-norm of the vector of DMD amplitudes. The globally optimal solution of the resulting regularized convex optimization problem is computed using the alternating direction method of multipliers, an algorithm well-suited for large problems. Several examples of flow fields resulting from numerical simulations and physical experiments are used to illustrate the effectiveness of the developed method.

  14. Thermal decompositions of heavy lanthanide aconitates

    Energy Technology Data Exchange (ETDEWEB)

    Brzyska, W.; Ozga, W. (Uniwersytet Marii Curie-Sklodowskiej, Lublin (Poland))

    The conditions of thermal decomposition of Tb(III), Dy, Ho, Er, Tm, Yb and Lu aconitates have been studied. On heating, the aconitates of heavy lanthanides lose crystallization water to yield anhydrous salts, which are then transformed into oxides. The aconitate of Tb(III) decomposes in two stages. First, the complex undergoes dehydration to form the anhydrous salt, which next decomposes directly to Tb{sub 4}O{sub 7}. The aconitates of Dy, Ho, Er, Tm, Yb and Lu decompose in three stages. On heating, the hydrated complexes lose crystallization water, yielding the anhydrous complexes; these subsequently decompose to Ln{sub 2}O{sub 3} with intermediate formation of Ln{sub 2}O{sub 2}CO{sub 3}.

  15. Decomposition Analysis of Forest Ecosystem Services Values

    Directory of Open Access Journals (Sweden)

    Hidemichi Fujii

    2017-04-01

    Full Text Available Forest ecosystem services are fundamental for human life. To protect and increase forest ecosystem services, the driving factors underlying changes in forest ecosystem service values must be determined to properly implement forest resource management planning. This study examines the driving factors that affect changes in forest ecosystem service values by focusing on regional forest characteristics using a dataset of 47 prefectures in Japan for 2000, 2007, and 2012. We applied two approaches: a contingent valuation method for estimating the forest ecosystem service value per area and a decomposition analysis for identifying the main driving factors of changes in the value of forest ecosystem services. The results indicate that the value of forest ecosystem services has increased due to the expansion of forest area from 2000 to 2007. However, factors related to forest management and ecosystem service value per area have contributed to a decrease in the value of ecosystem services from 2000 to 2007 and from 2007 to 2012, respectively.

  16. Decomposition of childhood malnutrition in Cambodia.

    Science.gov (United States)

    Sunil, Thankam S; Sagna, Marguerite

    2015-10-01

    Childhood malnutrition is a major problem in developing countries, and in Cambodia it is estimated that approximately 42% of children are stunted, which is considered to be very high. In the present study, we examined the effects of proximate and socio-economic determinants on childhood malnutrition in Cambodia. In addition, we examined the effects of the changes in these proximate determinants on childhood malnutrition between 2000 and 2005. Our analytical approach included descriptive, logistic regression and decomposition analyses. Separate analyses were estimated for the 2000 and 2005 surveys. The primary component of the difference in stunting is attributable to the rates component, indicating that the decrease in stunting is due mainly to the decrease in stunting rates between 2000 and 2005. While the majority of the differences in childhood malnutrition between 2000 and 2005 can be attributed to differences in the distribution of malnutrition determinants between the two surveys, differences in their effects also showed some significance. © 2013 John Wiley & Sons Ltd.

  17. Thermal Decomposition Chemistry of Amine Borane (U)

    Energy Technology Data Exchange (ETDEWEB)

    Stowe, A. C.; Feigerle, J.; Smyrl, N. R.; Morrell, J. S.

    2010-01-29

    The conclusions of this presentation are: (1) Amine boranes potentially can be used as a vehicular hydrogen storage material. (2) Purity of the hydrogen stream is critical for use with a fuel cell. Pure H{sub 2} can be provided by carefully conditioning the fuel (encapsulation, drying, heating rate, impurities). (3) Thermodynamics and kinetics can be controlled by conditioning as well. (4) Regeneration of the spent amine borane fuel is still the greatest challenge to its potential use. (5) Addition of hydrocarbon-substituted amine boranes alters the chemistry dramatically. (6) Decomposition of the substituted amine borane mixed system favors reaction products from which the hydrogenated fuel is potentially easier to regenerate. (7) t-butylamine borane is not the best substituted amine borane to use since it releases isobutane; however, formation of CNBH{sub x} products does occur.

  18. Spectral theorem and partial symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Gozdz, A. [University of Maria Curie-Sklodowska, Department of Mathematical Physics, Institute of Physics (Poland); Gozdz, M. [University of Maria Curie-Sklodowska, Department of Complex Systems and Neurodynamics, Institute of Informatics (Poland)

    2012-10-15

    A novel method for the decomposition of a quantum system's Hamiltonian is presented. In this approach the criterion of the decomposition is determined by the symmetries possessed by the sub-Hamiltonians. This procedure is rather generic and independent of the actual global symmetry, or the lack of it, of the full Hamilton operator. A detailed investigation of the time evolution of the various sub-Hamiltonians, and therefore of the change in time of the symmetry of the physical object, is presented for the case of a vibrator-plus-rotor model. Analytical results are illustrated by direct numerical calculations.

  19. Thermal Decomposition of Radiation-Damaged Polystyrene

    Energy Technology Data Exchange (ETDEWEB)

    J Abrefah GS Klinger

    2000-09-26

    The radiation-damaged polystyrene material (''polycube'') used in this study was synthesized by mixing a high-density polystyrene (''Dylene Fines No. 100'') with plutonium and uranium oxides. The polycubes were used on the Hanford Site in the 1960s for criticality studies to determine the hydrogen-to-fissile atom ratios for neutron moderation during processing of spent nuclear fuel. Upon completion of the studies, two methods were developed to reclaim the transuranic (TRU) oxides from the polymer matrix: (1) burning the polycubes in air at 873 K; and (2) heating the polycubes in the absence of oxygen and scrubbing the released monomer and other volatile organics using carbon tetrachloride. Neither of these methods was satisfactory in separating the TRU oxides from the polystyrene. Consequently, the remaining polycubes were sent to the Hanford Plutonium Finishing Plant (PFP) for storage. Over time, the high dose of alpha and gamma radiation has resulted in a polystyrene matrix that is highly cross-linked and hydrogen deficient and a stabilization process is being developed in support of Defense Nuclear Facility Safety Board Recommendation 94-1. Baseline processes involve thermal treatment to pyrolyze the polycubes in a furnace to decompose the polystyrene and separate out the TRU oxides. Thermal decomposition products from this degraded polystyrene matrix were characterized by Pacific Northwest National Laboratory to provide information for determining the environmental impact of the process and for optimizing the process parameters. A gas chromatography/mass spectrometry (GC/MS) system coupled to a horizontal tube furnace was used for the characterization studies. The decomposition studies were performed both in air and helium atmospheres at 773 K, the planned processing temperature. The volatile and semi-volatile organic products identified for the radiation-damaged polystyrene were different from those observed for virgin

  20. Thermal Decomposition of Radiation-Damaged Polystyrene

    Energy Technology Data Exchange (ETDEWEB)

    Abrefah, John; Klinger, George S.

    2000-09-26

    The radiation-damaged polystyrene (given the identification name of 'polycube') was fabricated by mixing high-density polystyrene material ("Dylene Fines # 100") with plutonium and uranium oxides. The polycubes were used in the 1960s for criticality studies during processing of spent nuclear fuel. The polycubes have since been stored for almost 40 years at the Hanford Plutonium Finishing Plant (PFP) after failure of two processes to reclaim the plutonium and uranium oxides from the polystyrene matrix. Thermal decomposition products from this highly cross-linked polystyrene matrix were characterized using a Gas Chromatography/Mass Spectrometry (GC/MS) system coupled to a horizontal furnace. The decomposition studies were performed in air and helium atmospheres at about 773 K. The volatile and semi-volatile organic products from the radiation-damaged polystyrene were different from those of virgin polystyrene. The differences were in the number of organic species generated and their concentrations. In the inert (i.e., helium) atmosphere, the major volatile organic products identified (in order of decreasing concentrations) were styrene, benzene, toluene, ethylbenzene, xylene, naphthalene, propane, α-methylbenzene, indene and 1,2,3-trimethylbenzene. But in air, the major volatile organic species identified changed slightly. Concentrations of the organic species in the inert atmosphere were significantly higher than those for the air atmosphere processing. Overall, 38 volatile organic species were identified in the inert atmosphere compared to 49 species in air. Twenty of the 38 species found under the inert conditions were also products in the air atmosphere. Twenty-two oxidized organic products were identified during thermal processing in air.

  1. Spectral CT imaging of vulnerable plaque with two independent biomarkers

    Science.gov (United States)

    Baturin, Pavlo; Alivov, Yahya; Molloi, Sabee

    2012-07-01

    The purpose of this paper is to investigate the feasibility of a novel four-material decomposition technique for assessing the vulnerability of plaque with two contrast materials using spectral computed tomography (CT) and two independent markers: the plaque's inflammation and spotty calcification. A simulation study was conducted using an energy-sensitive photon-counting detector for k-edge imaging of the coronary arteries. In addition to detecting the inflammation status, which is known as a biological marker of a plaque's vulnerability, we use spotty calcium concentration as an independent marker to test a plaque's vulnerability. We have introduced a new method for detecting and quantifying calcium concentrations in the presence of two contrast materials (iodine and gold), calcium and soft tissue background. In this method, four-material decomposition was performed on a pixel-by-pixel basis, assuming there was an arbitrary mixture of materials in the voxel. The concentrations of iodine and gold were determined by the k-edge material decomposition based on the maximum likelihood method. The calibration curves of the attenuation coefficients, with respect to the concentrations of different materials, were used to separate the calcium signal from both contrast materials and different soft tissues in the mixtures. Three different materials (muscle, blood and lipid) were independently used as soft tissue. The simulations included both ideal and more realistic energy resolving detectors to measure the polychromatic photon spectrum in single slice parallel beam geometry. The ideal detector was used together with a 3 cm diameter digital phantom to demonstrate the decomposition method while a more realistic detector and a 33 × 24 cm2 digital chest phantom were simulated to validate the vulnerability assessment technique. A 120 kVp spectrum was generated to produce photon flux sufficient for detecting contrast materials above the k-edges of iodine (33.2 keV) and gold (80.7 keV).
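
    As a heavily simplified, hedged sketch of the basis-material decomposition idea (not the maximum-likelihood k-edge method used in the paper), the code below solves a small per-pixel linear system: given hypothetical calibrated attenuation coefficients of iodine, gold, calcium and soft tissue in a few energy bins, it estimates the four material fractions from a measured multi-energy attenuation vector by non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical calibrated linear attenuation coefficients (1/cm) for 5 energy bins.
# Rows: energy bins; columns: iodine, gold, calcium, soft tissue (placeholder numbers).
M = np.array([
    [8.0, 12.0, 2.5, 0.30],
    [5.5,  9.0, 1.8, 0.25],
    [9.5,  6.5, 1.3, 0.22],   # bin just above the iodine k-edge
    [4.0,  5.0, 1.0, 0.20],
    [3.0,  8.5, 0.8, 0.19],   # bin just above the gold k-edge
])

def decompose_pixel(mu_measured):
    """Non-negative least-squares estimate of the four material fractions in one voxel."""
    fractions, _residual = nnls(M, mu_measured)
    return fractions

true_fractions = np.array([0.02, 0.01, 0.10, 0.87])   # mostly soft tissue, some calcium
mu = M @ true_fractions + 0.01 * np.random.default_rng(2).standard_normal(5)
print(np.round(decompose_pixel(mu), 3))
```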

  2. Spectral element simulation of ultrafiltration

    DEFF Research Database (Denmark)

    Hansen, M.; Barker, Vincent A.; Hassager, Ole

    1998-01-01

    A spectral element method for simulating stationary 2-D ultrafiltration is presented. The mathematical model is comprised of the Navier-Stokes equations for the velocity field of the fluid and a transport equation for the concentration of the solute. In addition to the presence of the velocity vector in the transport equation, the system is coupled by the dependency of the fluid viscosity on the solute concentration and by a concentration-dependent boundary condition for the Navier-Stokes equations at the membrane surface. The spectral element discretization yields a nonlinear algebraic system... The performance of the spectral element code when applied to several ultrafiltration problems is reported. (C) 1998 Elsevier Science Ltd. All rights reserved.

  3. Spectral representation of Gaussian semimartingales

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas

    2009-01-01

    The aim of the present paper is to characterize the spectral representation of Gaussian semimartingales. That is, we provide necessary and sufficient conditions on the kernel K for X_t = ∫ K_t(s) dN_s to be a semimartingale. Here, N denotes an independently scattered Gaussian random measure...

  4. SPECTRAL DEPENDENT ELECTRICAL CHARACTERISTICS OF ...

    African Journals Online (AJOL)

    The effect of irradiance and spectral illumination on the performance of hydrogenated amorphous silicon (a-Si:H) thin-film solar cells was investigated. (The remainder of the abstract is not legible in the scanned source; it references simulations and the 13th E.C. Photovoltaic Solar Energy Conference.)

  5. Spectral clustering with epidemic diffusion

    Science.gov (United States)

    Smith, Laura M.; Lerman, Kristina; Garcia-Cardona, Cristina; Percus, Allon G.; Ghosh, Rumi

    2013-10-01

    Spectral clustering is widely used to partition graphs into distinct modules or communities. Existing methods for spectral clustering use the eigenvalues and eigenvectors of the graph Laplacian, an operator that is closely associated with random walks on graphs. We propose a spectral partitioning method that exploits the properties of epidemic diffusion. An epidemic is a dynamic process that, unlike the random walk, simultaneously transitions to all the neighbors of a given node. We show that the replicator, an operator describing epidemic diffusion, is equivalent to the symmetric normalized Laplacian of a reweighted graph with edges reweighted by the eigenvector centralities of their incident nodes. Thus, more weight is given to edges connecting more central nodes. We describe a method that partitions the nodes based on the componentwise ratio of the replicator's second eigenvector to the first and compare its performance to traditional spectral clustering techniques on synthetic graphs with known community structure. We demonstrate that the replicator gives preference to dense, clique-like structures, enabling it to more effectively discover communities that may be obscured by dense intercommunity linking.
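
    A rough sketch of the reweighting construction described above is given below, under simplifying assumptions: edges are reweighted by the eigenvector centralities of their endpoints, the symmetric normalized Laplacian of the reweighted graph is formed, and nodes are split on the componentwise ratio of its second eigenvector to the first. This is an illustration of the stated idea, not the authors' code.

```python
import numpy as np

def replicator_partition(A):
    """Two-way partition using an eigenvector-centrality-reweighted normalized Laplacian."""
    # Eigenvector centrality: leading eigenvector of the (symmetric) adjacency matrix.
    _, V = np.linalg.eigh(A)
    c = np.abs(V[:, -1])
    W = A * np.outer(c, c)                       # reweight edges by endpoint centralities
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ W @ Dinv         # symmetric normalized Laplacian
    _, lV = np.linalg.eigh(L)
    ratio = lV[:, 1] / np.where(np.abs(lV[:, 0]) > 1e-12, lV[:, 0], 1e-12)
    return ratio > np.median(ratio)              # simple two-way split on the ratio

# Two dense blocks weakly connected: a simple planted two-community test graph.
rng = np.random.default_rng(3)
B = (rng.random((20, 20)) < 0.6).astype(float)
C = (rng.random((20, 20)) < 0.05).astype(float)
A = np.block([[B, C], [C.T, B]])
A = np.triu(A, 1)
A = A + A.T
print(replicator_partition(A).astype(int))
```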

  6. Decomposition of multi-scale coherent structures in a turbulent boundary layer by variational mode decomposition

    Science.gov (United States)

    Wang, Wenkang; Pan, Chong; Wang, Jinjun

    2016-11-01

    Turbulent boundary layer (TBL) flow is believed to contain a wide spectrum of coherent structures, from near-wall low-speed streaks characterized by the inner scale to log-layer large-scale and very-large-scale coherent motions (LSM and VLSM) characterized by the outer scale. Recent studies have evidenced the interaction between these multi-scale structures via either bottom-up or top-down mechanisms, which implies the possibility of identifying the coexistence of their footprints at intermediate flow layers. Here, we propose a Quasi-Bivariate Variational Mode Decomposition method (QB-VMD), which is an update of the traditional Empirical Mode Decomposition (EMD) with bandwidth limitation, for the decomposition of PIV-measured 2D flow fields with a large ROI (Δx × Δz of 4δ × 1.5δ) at specified wall-normal heights (y/δ = 0.05-0.2) of a turbulent boundary layer with Reτ = 3460. The empirical modes identified by QB-VMD capture well the characteristics of log-layer LSMs as well as those of near-wall streak-like structures. The lateral scales of these structures are analyzed and their respective energy contributions are evaluated. Supported by both the National Natural Science Foundation of China (Grant Nos. 11372001 and 11490552) and the Fundamental Research Funds for the Central Universities of China (No. YWF-16-JCTD-A-05).

  7. Quantitative imaging of excised osteoarthritic cartilage using spectral CT

    Energy Technology Data Exchange (ETDEWEB)

    Rajendran, Kishore; Bateman, Christopher J.; Younis, Raja Aamir; De Ruiter, Niels J.A.; Ramyar, Mohsen; Anderson, Nigel G. [University of Otago - Christchurch, Department of Radiology, Christchurch (New Zealand); Loebker, Caroline [University of Otago, Christchurch Regenerative Medicine and Tissue Engineering Group, Department of Orthopaedic Surgery and Musculoskeletal Medicine, Christchurch (New Zealand); University of Twente, Department of Developmental BioEngineering, Enschede (Netherlands); Schon, Benjamin S.; Hooper, Gary J.; Woodfield, Tim B.F. [University of Otago, Christchurch Regenerative Medicine and Tissue Engineering Group, Department of Orthopaedic Surgery and Musculoskeletal Medicine, Christchurch (New Zealand); Chernoglazov, Alex I. [University of Canterbury, Human Interface Technology Laboratory New Zealand, Christchurch (New Zealand); Butler, Anthony P.H. [University of Otago - Christchurch, Department of Radiology, Christchurch (New Zealand); European Organisation for Nuclear Research (CERN), Geneva (Switzerland); MARS Bioimaging, Christchurch (New Zealand)

    2017-01-15

    To quantify iodine uptake in articular cartilage as a marker of glycosaminoglycan (GAG) content using multi-energy spectral CT. We incubated a 25-mm strip of excised osteoarthritic human tibial plateau in 50 % ionic iodine contrast and imaged it using a small-animal spectral scanner with a cadmium telluride photon-processing detector to quantify the iodine through the thickness of the articular cartilage. We imaged both spectroscopic phantoms and osteoarthritic tibial plateau samples. The iodine distribution as an inverse marker of GAG content was presented in the form of 2D and 3D images after applying a basis material decomposition technique to separate iodine in cartilage from bone. We compared this result with a histological section stained for GAG. The iodine in cartilage could be distinguished from subchondral bone and quantified using multi-energy CT. The articular cartilage showed variation in iodine concentration throughout its thickness which appeared to be inversely related to GAG distribution observed in histological sections. Multi-energy CT can quantify ionic iodine contrast (as a marker of GAG content) within articular cartilage and distinguish it from bone by exploiting the energy-specific attenuation profiles of the associated materials. (orig.)

  8. Spectral analysis and slow spreading dynamics on complex networks

    Science.gov (United States)

    Ódor, Géza

    2013-09-01

    The susceptible-infected-susceptible (SIS) model is one of the simplest memoryless systems for describing information or epidemic spreading phenomena with competing creation and spontaneous annihilation reactions. The effect of quenched disorder on the dynamical behavior has recently been compared to quenched mean-field (QMF) approximations in scale-free networks. QMF can take into account topological heterogeneity and clustering effects of the activity in the steady state by spectral decomposition analysis of the adjacency matrix. Therefore, it can provide predictions on possible rare-region effects, thus on the occurrence of slow dynamics. I compare QMF results of SIS with simulations on various large dimensional graphs. In particular, I show that for Erdős-Rényi graphs this method predicts correctly the occurrence of rare-region effects. It also provides a good estimate for the epidemic threshold in case of percolating graphs. Griffiths Phases emerge if the graph is fragmented or if we apply a strong, exponentially suppressing weighting scheme on the edges. The latter model describes the connection time distributions in the face-to-face experiments. In case of a generalized Barabási-Albert type of network with aging connections, strong rare-region effects and numerical evidence for Griffiths Phase dynamics are shown. The dynamical simulation results agree well with the predictions of the spectral analysis applied for the weighted adjacency matrices.
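
    In the quenched mean-field picture referenced above, the epidemic threshold follows from the spectral decomposition of the adjacency matrix, with λ_c ≈ 1/Λ_max. The sketch below computes this estimate for an Erdős–Rényi graph; the graph size and mean degree are illustrative choices, not those of the paper.

```python
# QMF estimate of the SIS epidemic threshold from the leading adjacency eigenvalue.
import networkx as nx
from scipy.sparse.linalg import eigsh

G = nx.erdos_renyi_graph(n=5000, p=4.0 / 5000, seed=1)   # mean degree ~ 4
A = nx.adjacency_matrix(G).astype(float)

# Largest (algebraic) eigenvalue of the symmetric adjacency matrix
lambda_max = eigsh(A, k=1, which='LA', return_eigenvectors=False)[0]
print("QMF epidemic threshold estimate:", 1.0 / lambda_max)
```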

  9. A spectral approach for damage quantification in stochastic dynamic systems

    Science.gov (United States)

    Machado, M. R.; Adhikari, S.; Santos, J. M. C. Dos

    2017-05-01

    Intrinsic to all real structures, parameter uncertainty can be found in material properties and geometries. Many structural parameters, such as elastic modulus, Poisson's ratio, thickness, and density, are spatially distributed by nature. The Karhunen-Loève expansion is a method for modelling such a random field through a spectral decomposition. Since many structural parameters cannot be modelled by a Gaussian distribution, a memoryless nonlinear transformation is used to translate a Gaussian random field into a non-Gaussian one. Thus, stochastic methods have been used to include these uncertainties in the structural model. The Spectral Element Method (SEM) is a wave-based numerical approach used to model structures; its formulation is extended here to express parameters as spatially correlated random fields. In this paper, the problem of structural damage detection in the presence of spatially distributed random parameters is addressed. Explicit equations to localize and assess damage are proposed based on the SEM formulation. Numerical examples of an axially vibrating undamaged and damaged structure with distributed parameters are analysed.
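
    A discrete Karhunen-Loève expansion of the kind mentioned above can be sketched by eigendecomposing a covariance matrix and truncating the series. The exponential covariance kernel, grid, and truncation order below are illustrative assumptions, not the paper's choices.

```python
# Sample a 1D Gaussian random field from a truncated Karhunen-Loeve expansion.
import numpy as np

x = np.linspace(0.0, 1.0, 200)                  # spatial grid
ell, sigma2 = 0.2, 1.0                          # correlation length, variance
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

vals, vecs = np.linalg.eigh(C)                  # spectral decomposition of covariance
order = np.argsort(vals)[::-1]                  # sort modes by decreasing energy
vals, vecs = vals[order], vecs[:, order]

m = 20                                          # truncation order
xi = np.random.default_rng(0).standard_normal(m)
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)  # one realisation of the random field
```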

  10. Bearing fault detection using motor current signal analysis based on wavelet packet decomposition and Hilbert envelope

    Directory of Open Access Journals (Sweden)

    Imaouchen Yacine

    2015-01-01

    Full Text Available To detect rolling element bearing defects, much research has focused on Motor Current Signal Analysis (MCSA) using spectral analysis and wavelet transforms. This paper presents a new approach for rolling element bearing diagnosis without slip estimation, based on the wavelet packet decomposition (WPD) and the Hilbert transform. Specifically, the Hilbert transform first extracts the envelope of the motor current signal, which contains bearing fault-related frequency information. Subsequently, the envelope signal is adaptively decomposed into a number of frequency bands by the WPD algorithm. Two criteria based on energy and correlation analyses have been investigated to automate the frequency band selection. Experimental studies have confirmed that the proposed approach is effective in diagnosing rolling element bearing faults for improved induction motor condition monitoring and damage assessment.
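
    The two stages described above (Hilbert envelope extraction followed by wavelet packet band selection) can be sketched as follows; the synthetic current signal, wavelet, decomposition depth, and energy criterion are illustrative assumptions rather than the paper's exact settings.

```python
# Hilbert envelope of a motor current signal, then wavelet-packet band energies.
import numpy as np
import pywt
from scipy.signal import hilbert

fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic 50 Hz supply current with a weak fault-related modulation at 97 Hz
current = np.sin(2 * np.pi * 50 * t) * (1 + 0.05 * np.sin(2 * np.pi * 97 * t))

envelope = np.abs(hilbert(current))            # amplitude envelope of the current

wp = pywt.WaveletPacket(data=envelope, wavelet='db4', mode='symmetric', maxlevel=5)
nodes = wp.get_level(5, order='freq')          # leaf nodes ordered by frequency
energies = np.array([np.sum(np.square(n.data)) for n in nodes])
best_band = int(np.argmax(energies[1:]) + 1)   # skip the DC-dominated first band
print("most energetic sub-band index:", best_band)
```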

  11. A blind algorithm for reverberation-time estimation using subband decomposition of speech signals.

    Science.gov (United States)

    Prego, Thiago de M; de Lima, Amaro A; Netto, Sergio L; Lee, Bowon; Said, Amir; Schafer, Ronald W; Kalker, Ton

    2012-04-01

    An algorithm for blind estimation of reverberation time (RT) in speech signals is proposed. Analysis is restricted to the free-decaying regions of the signal, where the reverberation effect dominates, yielding a more accurate RT estimate at a reduced computational cost. A spectral decomposition is performed on the reverberant signal and partial RT estimates are determined in all signal subbands, providing more data to the statistical-analysis stage of the algorithm, which yields the final RT estimate. Algorithm performance is assessed using two distinct speech databases, achieving 91% and 97% correlation with the RTs measured by a standard nonblind method, indicating that the proposed method blindly estimates the RT in a reliable and consistent manner.
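
    The core of the decay-based estimate can be illustrated in a single band: fit a line to the log-energy envelope of a free-decay segment and extrapolate to a 60 dB drop. The synthetic decaying tail below stands in for a detected free-decay region of real reverberant speech; the frame length and decay constant are arbitrary.

```python
# Single-band RT60 estimate from the slope of a log-energy decay.
import numpy as np

fs = 16_000
t = np.arange(0, 0.5, 1.0 / fs)
rt_true = 0.4                                      # seconds, synthetic ground truth
decay = np.random.default_rng(0).standard_normal(t.size) * 10 ** (-3 * t / rt_true)

frame = 256
n_frames = decay.size // frame
energy_db = 10 * np.log10(
    np.mean(decay[:n_frames * frame].reshape(n_frames, frame) ** 2, axis=1))
t_frames = (np.arange(n_frames) + 0.5) * frame / fs

slope, _ = np.polyfit(t_frames, energy_db, 1)      # decay rate in dB per second
print(f"estimated RT60 = {-60.0 / slope:.2f} s")
```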

  12. A Pattern Recognition Technique Based on Wavelet Decomposition for Identification of Patients With Congestive Heart Failure

    Directory of Open Access Journals (Sweden)

    Abdulnasir Hossen

    2009-12-01

    Full Text Available A pattern recognition technique based on approximate estimation of the power spectral densities (PSD) of sub-bands resulting from wavelet decomposition of R-R interval (RRI) data is investigated for the identification of patients with Congestive Heart Failure (CHF). Both trial and test data used in this work are drawn from MIT databases. Two standard patterns of the base-2 logarithmic values of the reciprocal of the probability measure of the approximated PSD of CHF patients and normal subjects are derived by averaging the corresponding values over all sub-bands of 12 CHF and 12 normal records in the trial set. The computed pattern of each record under test is then compared band-by-band with both standard patterns to find the closest one. The new technique achieved an identification accuracy of about 90% on the test data.
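
    A loose sketch of the band-pattern idea is given below: wavelet-decompose an RRI series and form a per-band log-power pattern that could then be compared band-by-band against averaged CHF and normal templates. The synthetic RRI series, wavelet, level, and the simple log-of-reciprocal-power feature are assumptions standing in for the paper's approximate PSD probability measure.

```python
# Per-band log-power "pattern" from a wavelet decomposition of an R-R interval series.
import numpy as np
import pywt

rng = np.random.default_rng(0)
rri = 0.8 + 0.05 * rng.standard_normal(1024)       # synthetic R-R intervals [s]

coeffs = pywt.wavedec(rri - rri.mean(), wavelet='db4', level=5)
pattern = np.array([np.log2(1.0 / np.mean(np.square(c))) for c in coeffs])

# Classification would pick whichever averaged template (CHF or normal) lies
# closer to this pattern, compared band-by-band.
print(np.round(pattern, 2))
```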

  13. TICMR: Total Image Constrained Material Reconstruction via Nonlocal Total Variation Regularization for Spectral CT.

    Science.gov (United States)

    Liu, Jiulong; Ding, Huanjun; Molloi, Sabee; Zhang, Xiaoqun; Gao, Hao

    2016-12-01

    This work develops a material reconstruction method for spectral CT, namely Total Image Constrained Material Reconstruction (TICMR), to maximize the utility of projection data in terms of both spectral information and high signal-to-noise ratio (SNR). This is motivated by the following fact: when viewed as a spectrally-integrated measurement, the projection data can be used to reconstruct a total image without spectral information, which however has a relatively high SNR; when viewed as a spectrally-resolved measurement, the projection data can be utilized to reconstruct the material composition, which however has a relatively low SNR. The material reconstruction synergizes material decomposition and image reconstruction, i.e., the direct reconstruction of material compositions instead of a two-step procedure that first reconstructs images and then decomposes images. For material reconstruction with high SNR, we propose TICMR with nonlocal total variation (NLTV) regularization. That is, first we reconstruct a total image using spectrally-integrated measurement without spectral binning, and build the NLTV weights from this image that characterize nonlocal image features; then the NLTV weights are incorporated into a NLTV-based iterative material reconstruction scheme using spectrally-binned projection data, so that these weights serve as a high-SNR reference to regularize material reconstruction. Note that the nonlocal property of NLTV is essential for material reconstruction, since material compositions may have significant local intensity variations although their structural information is often similar. In terms of solution algorithm, TICMR is formulated as an iterative reconstruction method with the NLTV regularization, in which the nonlocal divergence is utilized based on the adjoint relationship. The alternating direction method of multipliers is developed to solve this sparsity optimization problem. The proposed TICMR method was validated using both simulated

  14. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches.

    Science.gov (United States)

    Lowet, Eric; Roberts, Mark J; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter

    2016-01-01

    Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information
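
    A minimal PLV computation of the kind discussed above is sketched below: instantaneous phases via the Hilbert transform, then the magnitude of the mean phase-difference vector. The SSD preprocessing step used in the paper is omitted, and the two noisy oscillators are synthetic examples.

```python
# Phase-locking value (PLV) between two signals via Hilbert-transform phases.
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 40 * t + 0.3) + 0.5 * rng.standard_normal(t.size)

phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"PLV = {plv:.3f}")   # 1 = perfect locking, values near 0 = no locking
```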

  15. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches.

    Directory of Open Access Journals (Sweden)

    Eric Lowet

    Full Text Available Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization

  16. Extending Dynamic Mode Decomposition: A Data-Driven Approximation of the Koopman Operator

    Science.gov (United States)

    Williams, Matthew; Kevrekidis, Ioannis; Rowley, Clarence

    2014-11-01

    In recent years, Koopman spectral analysis has become a popular tool for the decomposition and study of fluid flows. One benefit of the Koopman approach is that it generates a set of spatial modes, called Koopman modes, whose evolution is determined by the corresponding set of Koopman eigenvalues. Furthermore, these modes are valid globally, and not only in some small neighborhood of a fixed point. A popular method for approximating the Koopman modes and eigenvalues is Dynamic Mode Decomposition (DMD). In this talk, we show that DMD approximates the Koopman eigenfunctions, but uses linear monomials to do so; this may be limiting in certain applications. We then introduce an extension of DMD, which we refer to as Extended DMD (EDMD), that uses a richer set of user determined basis functions to approximate the Koopman eigenfunctions. We demonstrate the impact this difference has on the eigenvalues and modes by applying DMD and EDMD to some simple example problems. Although the algorithms for DMD and EDMD appear to be similar, modifications like the ones we will present can be important if the resulting eigenvalues, eigenfunctions, and modes are to accurately approximate those of the Koopman operator. Work supported by the NSF (DMS-1204783).
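
    Standard (exact) DMD, which EDMD generalizes, can be written in a few lines of linear algebra. The snapshot data below are random placeholders; in practice the columns would be flow-field snapshots, and the truncation rank is a user choice.

```python
# Exact dynamic mode decomposition from a matrix of snapshots.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((400, 101))        # 400 spatial points, 101 snapshots
X, Y = data[:, :-1], data[:, 1:]              # snapshot pairs x_k -> x_{k+1}

r = 10                                        # truncation rank
U, s, Vh = np.linalg.svd(X, full_matrices=False)
U_r, s_r, V_r = U[:, :r], s[:r], Vh[:r].conj().T

A_tilde = U_r.conj().T @ Y @ V_r / s_r        # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)           # DMD eigenvalues
modes = (Y @ V_r / s_r) @ W                   # exact DMD modes
```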

  17. TU-CD-207-01: Characterization of Breast Tissue Composition Using Spectral Mammography

    Energy Technology Data Exchange (ETDEWEB)

    Ding, H; Cho, H; Kumar, N; Sennung, D; Ng, A Lam; Molloi, S [Department of Radiological Sciences, University of California, Irvine, CA (United States)

    2015-06-15

    Purpose: To investigate the feasibility of characterizing the chemical composition of breast tissue, in terms of water and lipid, by using spectral mammography in simulation and postmortem studies. Methods: Analytical simulations were performed to obtain low- and high-energy signals of breast tissue based on previously reported water, lipid, and protein contents. Dual-energy decomposition was used to characterize the simulated breast tissue into water and lipid basis materials, and the measured water density was compared to the known value. In experimental studies, postmortem breasts were imaged with a spectral mammography system based on a scanning multi-slit Si strip photon-counting detector. Low- and high-energy images were acquired simultaneously from a single exposure by sorting the recorded photons into the corresponding energy bins. Dual-energy material decomposition of the low- and high-energy images yielded individual pixel measurements of breast tissue composition in terms of water and lipid thicknesses. After imaging, each postmortem breast was chemically decomposed into water, lipid, and protein. The water density calculated from chemical analysis was used as the reference gold standard. Correlation of the water density measurements between spectral mammography and chemical analysis was analyzed using linear regression. Results: Both simulation and postmortem studies showed good linear correlation between the decomposed water thickness from spectral mammography and chemical analysis. The slopes of the linear fitting functions in the simulation and postmortem studies were 1.15 and 1.21, respectively. Conclusion: The results indicate that breast tissue composition, in terms of water and lipid, can be accurately measured using spectral mammography. Quantitative breast tissue composition can potentially be used to stratify patients according to their breast cancer risk.

  18. A neural network-based method for spectral distortion correction in photon counting x-ray CT

    Science.gov (United States)

    Touch, Mengheng; Clark, Darin P.; Barber, William; Badea, Cristian T.

    2016-08-01

    Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables both 4 energy bins acquisition, as well as full-spectrum mode in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and can be very noisy due to photon starvation in narrow energy bins. To address spectral distortions, we propose and demonstrate a novel artificial neural network (ANN)-based spectral distortion correction mechanism, which learns to undo the distortion in spectral CT, resulting in improved material decomposition accuracy. To address noise, post-reconstruction denoising based on bilateral filtration, which jointly enforces intensity gradient sparsity between spectral samples, is used to further improve the robustness of ANN training and material decomposition accuracy. Our ANN-based distortion correction method is calibrated using 3D-printed phantoms and a model of our spectral CT system. To enable realistic simulations and validation of our method, we first modeled the spectral distortions using experimental data acquired from 109Cd and 133Ba radioactive sources measured with our PCXD. Next, we trained an ANN to learn the relationship between the distorted spectral CT projections and the ideal, distortion-free projections in a calibration step. This required knowledge of the ground truth, distortion-free spectral CT projections, which were obtained by simulating a spectral CT scan of the digital version of a 3D-printed phantom. Once the training was completed, the trained ANN was used to perform
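
    The calibration idea (learn a mapping from distorted to distortion-free multi-bin projections) can be sketched with a small regression network. Everything below is invented for illustration: the cross-talk distortion model, network size, and data have no connection to the 3D-printed phantom calibration or PCXD model used in the study.

```python
# Toy regression network mapping distorted 4-bin projections to ideal ones.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
ideal = rng.uniform(0.0, 4.0, size=(5000, 4))           # ideal energy-bin projections
mix = np.array([[0.9, 0.1, 0.0, 0.0],                   # invented bin cross-talk
                [0.1, 0.8, 0.1, 0.0],
                [0.0, 0.1, 0.8, 0.1],
                [0.0, 0.0, 0.1, 0.9]])
distorted = np.tanh(ideal @ mix) + 0.01 * rng.standard_normal(ideal.shape)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(distorted[:4000], ideal[:4000])                  # calibration step
print("held-out R^2:", net.score(distorted[4000:], ideal[4000:]))
```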

  19. An EMG Decomposition System Aimed at Detailed Analysis of Motor Unit Activity

    DEFF Research Database (Denmark)

    Nikolic, Mile; Krarup, Christian; Dahl, Kristian

    1997-01-01

    Decomposition of EMG signals by segmentation of time signals, clustering, and resolving of compound segments.

  20. Decomposition of silicon carbide at high pressures and temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Daviau, Kierstin; Lee, Kanani K. M.

    2017-11-01

    We measure the onset of decomposition of silicon carbide, SiC, to silicon and carbon (e.g., diamond) at high pressures and high temperatures in a laser-heated diamond-anvil cell. We identify decomposition through x-ray diffraction and multiwavelength imaging radiometry coupled with electron microscopy analyses on quenched samples. We find that B3 SiC (also known as 3C or zinc blende SiC) decomposes at high pressures and high temperatures, following a phase boundary with a negative slope. The high-pressure decomposition temperatures measured are considerably lower than those at ambient, with our measurements indicating that SiC begins to decompose at ~ 2000 K at 60 GPa as compared to ~ 2800 K at ambient pressure. Once B3 SiC transitions to the high-pressure B1 (rocksalt) structure, we no longer observe decomposition, despite heating to temperatures in excess of ~ 3200 K. The temperature of decomposition and the nature of the decomposition phase boundary appear to be strongly influenced by the pressure-induced phase transitions to higher-density structures in SiC, silicon, and carbon. The decomposition of SiC at high pressure and temperature has implications for the stability of naturally forming moissanite on Earth and in carbon-rich exoplanets.

  1. Decomposition of dioxin analogues and ablation study for carbon nanotube

    Energy Technology Data Exchange (ETDEWEB)

    Yamauchi, Toshihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2002-08-01

    Two application studies associated with the free-electron laser are presented separately, titled 'Decomposition of Dioxin Analogues' and 'Ablation Study for Carbon Nanotube'. The decomposition of dioxin analogues by infrared (IR) laser irradiation includes thermal destruction and multiple-photon dissociation. It is important to choose a strongly absorbed laser wavelength for the decomposition. Thermal decomposition takes place under irradiation at low IR laser power. Based on a model of the thermal decomposition, it is proposed that adjacent water molecules assist the decomposition of dioxin analogues, in addition to the thermal decomposition driven by direct laser absorption. The laser ablation study is performed with the aim of carbon nanotube synthesis. The ablation vapor is weakly ionized at powers of several hundred megawatts. The plasma internal energy is retained more than 8.5 times longer than in vacuum. Clusters were produced from the weakly ionized gas under the enclosed-gas condition; at low laser power they consist mainly of coarse particles, whereas at high power they consist of fine particles. (J.P.N.)

  2. Analysis of complex metabolic behavior through pathway decomposition

    Directory of Open Access Journals (Sweden)

    Ip Kuhn

    2011-06-01

    Full Text Available Abstract Background Understanding complex systems through decomposition into simple interacting components is a pervasive paradigm throughout modern science and engineering. For cellular metabolism, complexity can be reduced by decomposition into pathways with particular biochemical functions, and the concept of elementary flux modes provides a systematic way for organizing metabolic networks into such pathways. While decomposition using elementary flux modes has proven to be a powerful tool for understanding and manipulating cellular metabolism, its utility, however, is severely limited since the number of modes in a network increases exponentially with its size. Results Here, we present a new method for decomposition of metabolic flux distributions into elementary flux modes. Our method can easily operate on large, genome-scale networks since it does not require all relevant modes of the metabolic network to be generated. We illustrate the utility of our method for metabolic engineering of Escherichia coli and for understanding the survival of Mycobacterium tuberculosis (MTB during infection. Conclusions Our method can achieve computational time improvements exceeding 2000-fold and requires only several seconds to generate elementary mode decompositions on genome-scale networks. These improvements arise from not having to generate all relevant elementary modes prior to initiating the decomposition. The decompositions from our method are useful for understanding complex flux distributions and debugging genome-scale models.

  3. Domain Decomposition: A Bridge between Nature and Parallel Computers

    Science.gov (United States)

    1992-09-01

    NASA Contractor Report 189709; ICASE Report No. 92-44. Domain Decomposition: A Bridge between Nature and Parallel Computers. David E. Keyes, Department ... space decompositions, but of a special form that can be motivated by, among other things, the memory hierarchies of distributed-memory parallel computers. Each...

  4. Dynamics in the Decompositions Approach to Quantum Mechanics

    Science.gov (United States)

    Harding, John

    2017-12-01

    In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.

  5. Integral decomposition and polarization properties of depolarizing Mueller matrices.

    Science.gov (United States)

    Ossikovski, Razvigor; Arteaga, Oriol

    2015-03-15

    We show that, by suitably defining the integral decomposition of a depolarizing Mueller matrix, it becomes possible to fully interpret the polarization response of the medium or structure under study in terms of mean values and variances-covariances of a set of six integral polarization properties. The latter appear as natural counterparts of the elementary (differential) polarization properties stemming from the differential decomposition of the Mueller matrix. However, unlike the differential decomposition, the integral one is always mathematically and physically realizable and is furthermore unambiguously defined inasmuch as a nondepolarizing estimate of the initial Mueller matrix is secured. The theoretical results are illustrated on an experimental example.

  6. Energy decompositions according to physical space partitioning schemes

    Science.gov (United States)

    Alcoba, Diego R.; Torre, Alicia; Lain, Luis; Bochicchio, Roberto C.

    2005-02-01

    This work describes simple decompositions of the energy of molecular systems according to schemes that partition the three-dimensional space. The components of those decompositions depend on one and two atomic domains thus providing a meaningful chemical information about the nature of different bondings among the atoms which compose the system. Our algorithms can be applied at any level of theory (correlated or uncorrelated wave functions). The results reported here, obtained at the Hartree-Fock level in selected molecules, show a good agreement with the chemical picture of molecules and require a low computational cost in comparison with other previously reported decompositions.

  7. Seizure detection from EEG signals using Multivariate Empirical Mode Decomposition.

    Science.gov (United States)

    Zahra, Asmat; Kanwal, Nadia; Ur Rehman, Naveed; Ehsan, Shoaib; McDonald-Maier, Klaus D

    2017-09-01

    We present a data driven approach to classify ictal (epileptic seizure) and non-ictal EEG signals using the multivariate empirical mode decomposition (MEMD) algorithm. MEMD is a multivariate extension of empirical mode decomposition (EMD), which is an established method to perform the decomposition and time-frequency (T-F) analysis of non-stationary data sets. We select suitable feature sets based on the multiscale T-F representation of the EEG data via MEMD for the classification purposes. The classification is achieved using the artificial neural networks. The efficacy of the proposed method is verified on extensive publicly available EEG datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Spectral computations for bounded operators

    CERN Document Server

    Ahues, Mario; Limaye, Balmohan

    2001-01-01

    Exact eigenvalues, eigenvectors, and principal vectors of operators with infinite dimensional ranges can rarely be found. Therefore, one must approximate such operators by finite rank operators, then solve the original eigenvalue problem approximately. Serving as both an outstanding text for graduate students and as a source of current results for research scientists, Spectral Computations for Bounded Operators addresses the issue of solving eigenvalue problems for operators on infinite dimensional spaces. From a review of classical spectral theory through concrete approximation techniques to finite dimensional situations that can be implemented on a computer, this volume illustrates the marriage of pure and applied mathematics. It contains a variety of recent developments, including a new type of approximation that encompasses a variety of approximation methods but is simple to verify in practice. It also suggests a new stopping criterion for the QR Method and outlines advances in both the iterative refineme...

  9. Chebyshev and Fourier spectral methods

    CERN Document Server

    Boyd, John P

    2001-01-01

    Completely revised text focuses on use of spectral methods to solve boundary value, eigenvalue, and time-dependent problems, but also covers Hermite, Laguerre, rational Chebyshev, sinc, and spherical harmonic functions, as well as cardinal functions, linear eigenvalue problems, matrix-solving methods, coordinate transformations, methods for unbounded intervals, spherical and cylindrical geometry, and much more. 7 Appendices. Glossary. Bibliography. Index. Over 160 text figures.

  10. Understanding Big Data Spectral Clustering

    OpenAIRE

    Couillet, Romain; Benaych-Georges, Florent

    2015-01-01

    This article introduces an original approach to understand the behavior of standard kernel spectral clustering algorithms (such as the Ng–Jordan–Weiss method) for large dimensional datasets. Precisely, using advanced methods from the field of random matrix theory and assuming Gaussian data vectors, we show that the Laplacian of the kernel matrix can asymptotically be well approximated by an analytically tractable equivalent random matrix. The analysis of the former all...

  11. Remote application for spectral collection

    Science.gov (United States)

    Cone, Shelli R.; Steele, R. J.; Tzeng, Nigel H.; Firpi, Alexander H.; Rodriguez, Benjamin M.

    2016-05-01

    When collecting field spectral data with a spectrometer, the instrument is commonly positioned directly over the material of interest. In certain instances it is beneficial to control the spectrometer remotely. While several systems can use some form of connectivity to capture a measurement, it is also essential to be able to control the instrument settings. Additionally, capturing reference information (metadata) about the setup, system configuration, collection, location, atmospheric conditions, and sample is necessary for future analysis aimed at material discrimination and identification. Without such support, field collection can become cumbersome and lack the information needed for post-processing and analysis. The method presented in this paper merges all parts of spectral collection, from logging reference information to initial analysis, as well as importing the information into a web-hosted spectral database. This simplifies collecting, processing, analyzing, and storing field spectra for future analysis and comparison. The concept is developed for field collection of thermal data using the Designs and Prototypes (D&P) Hand Portable FT-IR Spectrometer (Model 102). Remote control of the spectrometer is done with a customized Android application that captures reference information, processes the collected data from radiance to emissivity using a temperature-emissivity separation algorithm, and stores the data in a custom web-based service. The presented system of systems allows field-collected spectra to be used for various applications by spectral analysts in the future.

  12. Abundance estimation of spectrally similar minerals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available This paper evaluates a spectral unmixing method for estimating the partial abundance of spectrally similar minerals in complex mixtures. The method requires formulation of a linear function of individual spectra of individual minerals. The first...
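
    The linear-mixture formulation described above can be sketched as a non-negative least-squares problem: express the measured spectrum as a non-negative combination of endmember spectra and read off the partial abundances. The random endmember matrix below is a placeholder for real library spectra of the minerals of interest.

```python
# Linear spectral unmixing via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_minerals = 200, 3
E = np.abs(rng.standard_normal((n_bands, n_minerals)))   # endmember spectra (columns)
true_abund = np.array([0.6, 0.3, 0.1])
measured = E @ true_abund + 0.01 * rng.standard_normal(n_bands)

abund, _ = nnls(E, measured)
abund /= abund.sum()                    # normalise to sum-to-one for interpretation
print("estimated abundances:", np.round(abund, 3))
```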

  13. EEG-Based Prediction of Epileptic Seizures Using Phase Synchronization Elicited from Noise-Assisted Multivariate Empirical Mode Decomposition.

    Science.gov (United States)

    Cho, Dongrae; Min, Beomjun; Kim, Jongin; Lee, Boreom

    2017-08-01

    In this study, we examined the phase locking value (PLV) for seizure prediction, particularly, in the gamma frequency band. We prepared simulation data and 65 clinical cases of seizure. In addition, various filtering algorithms including bandpass filtering, empirical mode decomposition, multivariate empirical mode decomposition and noise-assisted multivariate empirical mode decomposition (NA-MEMD) were used to decompose spectral components from the data. Moreover, in the case of clinical data, the PLVs were used to classify between interictal and preictal stages using a support vector machine. The highest PLV was achieved with NA-MEMD with 0-dB white noise algorithm (0.9988), which exhibited statistically significant differences compared to other filtering algorithms. Moreover, the classification rate was the highest for the NA-MEMD with 0-dB algorithm (83.17%). In terms of frequency components, examining the gamma band resulted in the highest classification rates for all algorithms, compared to other frequency bands such as theta, alpha, and beta bands. We found that PLVs calculated with the NA-MEMD algorithm could be used as a potential biological marker for seizure prediction. Moreover, the gamma frequency band was useful for discriminating between interictal and preictal stages.

  14. Seismic entangled patterns analyzed via multiresolution decomposition

    Directory of Open Access Journals (Sweden)

    F. E. A. Leite

    2009-03-01

    Full Text Available This article explores a method for distinguishing entangled coherent structures embedded in geophysical images. The original image is decomposed into a series of j-scale images using multiresolution decomposition. To improve the image processing analysis, each j-image is divided into l spatial regions, generating a set of (j, l)-regions. At each (j, l)-region we apply a continuous wavelet transform to evaluate Eν, the spectrum of energy. Eν has two maxima in the original data, whereas at each scale Eν typically has one peak. The localization of the peaks changes according to the (j, l)-region. The intensity of the peaks is linked with the presence of coherent structures, or patterns, in the respective (j, l)-region. The method is successfully applied to distinguish, in scale and region, the ground-roll noise from the relevant geologic information in the signal.

  15. Dissociative Ionization and Thermal Decomposition of Cyclopentanone.

    Science.gov (United States)

    Pastoors, Johan I M; Bodi, Andras; Hemberger, Patrick; Bouwman, Jordy

    2017-09-21

    Despite the growing use of renewable and sustainable biofuels in transportation, their combustion chemistry is poorly understood, limiting our efforts to reduce harmful emissions. Here we report on the (dissociative) ionization and the thermal decomposition mechanism of cyclopentanone, studied using imaging photoelectron photoion coincidence spectroscopy. The fragmentation of the ions is dominated by loss of CO, C2H4, and C2H5, leading to daughter ions at m/z 56 and 55. Exploring the C5H8O•+ potential energy surface reveals hydrogen tunneling to play an important role in low-energy decarbonylation and probably also in the ethene-loss processes, yielding 1-butene and methylketene cations, respectively. At higher energies, pathways without a reverse barrier open up to oxopropenyl and cyclopropanone cations by ethyl-radical loss and a second ethene-loss channel, respectively. A statistical Rice-Ramsperger-Kassel-Marcus model is employed to test the viability of this mechanism. The pyrolysis of cyclopentanone is studied at temperatures ranging from about 800 to 1100 K. Closed-shell pyrolysis products, namely 1,3-butadiene, ketene, propyne, allene, and ethene, are identified based on their photoion mass-selected threshold photoelectron spectrum. Furthermore, reactive radical species such as allyl, propargyl, and methyl are found. A reaction mechanism is derived incorporating both stable and reactive species, which were not predicted in prior computational studies. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  16. Compressed sensing MRI exploiting complementary dual decomposition.

    Science.gov (United States)

    Park, Suhyung; Park, Jaeseok

    2014-04-01

    Compressed sensing (CS) MRI exploits the sparsity of an image in a transform domain to reconstruct the image from incoherently under-sampled k-space data. However, it has been shown that CS suffers particularly from loss of low-contrast image features with increasing reduction factors. To retain image details in such degraded experimental conditions, in this work we introduce a novel CS reconstruction method exploiting feature-based complementary dual decomposition with joint estimation of local scale mixture (LSM) model and images. Images are decomposed into dual block sparse components: total variation for piecewise smooth parts and wavelets for residuals. The LSM model parameters of residuals in the wavelet domain are estimated and then employed as a regional constraint in spatially adaptive reconstruction of high frequency subbands to restore image details missing in piecewise smooth parts. Alternating minimization of the dual image components subject to data consistency is performed to extract image details from residuals and add them back to their complementary counterparts while the LSM model parameters and images are jointly estimated in a sequential fashion. Simulations and experiments demonstrate the superior performance of the proposed method in preserving low-contrast image features even at high reduction factors. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Plasma-catalytic decomposition of TCE

    Energy Technology Data Exchange (ETDEWEB)

    Vandenbroucke, A.; Morent, R.; De Geyter, N.; Leys, C. [Ghent Univ., Ghent (Belgium). Dept. of Applied Physics; Tuan, N.D.M.; Giraudon, J.M.; Lamonier, J.F. [Univ. des Sciences et Technologies de Lille, Villeneuve (France). Dept. de Catalyse et Chimie du Solide

    2010-07-01

    Volatile organic compounds (VOCs) are gaseous pollutants that pose an environmental hazard due to their high volatility and their possible toxicity. Conventional technologies to reduce the emission of VOCs have their advantages, but they become cost-inefficient when low concentrations have to be treated. In the past 2 decades, non-thermal plasma technology has received growing attention as an alternative and promising remediation method. Non-thermal plasmas are effective because they produce a series of strong oxidizers such as ozone, oxygen radicals and hydroxyl radicals that provide a reactive chemical environment in which VOCs are completely oxidized. This study investigated whether the combination of NTP and catalysis could improve the energy efficiency and the selectivity towards carbon dioxide (CO2). Trichloroethylene (TCE) was decomposed by non-thermal plasma generated in a DC-excited atmospheric pressure glow discharge. The production of by-products was qualitatively investigated through FT-IR spectrometry. The results were compared with those from a catalytic reactor. The removal rate of TCE reached a maximum of 78 percent at the highest input energy. The by-products of TCE decomposition were CO2, carbon monoxide (CO), hydrochloric acid (HCl) and dichloroacetylchloride. Combining the plasma system with a catalyst located in an oven downstream resulted in a maximum removal of 80 percent, at an energy density of 300 J/L, a catalyst temperature of 373 K and a total air flow rate of 2 slm. 14 refs., 6 figs.

  18. Differential Decomposition of Bacterial and Viral Fecal ...

    Science.gov (United States)

    Understanding the decomposition of microorganisms associated with different human fecal pollution types is necessary for proper implementation of many water quality management practices, as well as predicting associated public health risks. Here, the decomposition of select cultivated and molecular indicators of fecal pollution originating from fresh human feces, septage, and primary effluent sewage in a subtropical marine environment was assessed over a six-day period with an emphasis on the influence of ambient sunlight and indigenous microbiota. Ambient water mixed with each fecal pollution type was placed in dialysis bags and incubated in situ in a submersible aquatic mesocosm. Genetic and cultivated fecal indicators including fecal indicator bacteria (enterococci, E. coli, and Bacteroidales), coliphage (somatic and F+), Bacteroides fragilis phage (GB-124), and human-associated genetic indicators (HF183/BacR287 and HumM2) were measured in each sample. Simple linear regression assessing treatment trends in each pollution type over time showed significant decay (p ≤ 0.05) in most treatments for feces and sewage (27/28 and 32/40, respectively), compared to septage (6/26). A two-way analysis of variance of log10 reduction values for sewage and feces experiments indicated that treatments differentially impact survival of cultivated bacteria, cultivated phage, and genetic indicators. Findings suggest that sunlight is critical for phage decay, and indigenous microbio

  19. Interior tomography with continuous singular value decomposition.

    Science.gov (United States)

    Jin, Xin; Katsevich, Alexander; Yu, Hengyong; Wang, Ge; Li, Liang; Chen, Zhiqiang

    2012-11-01

    The long-standing interior problem has important mathematical and practical implications. The recently developed interior tomography methods have produced encouraging results. A particular scenario for theoretically exact interior reconstruction from truncated projections is that there is a known sub-region in the ROI. In this paper, we improve a novel continuous singular value decomposition (SVD) method for interior reconstruction assuming a known sub-region. First, two sets of orthogonal eigen-functions are calculated for the Hilbert and image spaces respectively. Then, after the interior Hilbert data are calculated from projection data through the ROI, they are projected onto the eigen-functions in the Hilbert space, and an interior image is recovered by a linear combination of the eigen-functions with the resulting coefficients. Finally, the interior image is compensated for the ambiguity due to the null space utilizing the prior sub-region knowledge. Experiments with simulated and real data demonstrate the advantages of our approach relative to the POCS type interior reconstructions.
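
    The paper's continuous SVD pairs eigenfunctions of the Hilbert and image spaces; as a loose, discrete stand-in, the sketch below inverts a toy linear system by keeping only the well-conditioned singular components. The forward operator, noise level, and truncation index are arbitrary illustrative choices.

```python
# Truncated-SVD reconstruction of x from noisy measurements y = A x + n.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((120, 100))          # toy forward operator
x_true = rng.standard_normal(100)
y = A @ x_true + 0.01 * rng.standard_normal(120)

U, s, Vh = np.linalg.svd(A, full_matrices=False)
k = 60                                       # keep only well-conditioned components
x_rec = Vh[:k].T @ ((U[:, :k].T @ y) / s[:k])
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```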

  20. Algebraic Davis Decomposition and Asymmetric Doob Inequalities

    Science.gov (United States)

    Hong, Guixiang; Junge, Marius; Parcet, Javier

    2016-09-01

    In this paper we investigate asymmetric forms of Doob's maximal inequality. The asymmetry is imposed by noncommutativity. Let (M, τ) be a noncommutative probability space equipped with a filtration of von Neumann subalgebras (M_n)_{n ≥ 1}, whose union ∪_{n ≥ 1} M_n is weak-* dense in M. Let E_n denote the corresponding family of conditional expectations. As an illustration of an asymmetric result, we prove that for 1 … Hardy spaces H_p^r(M) and H_p^c(M), respectively. In particular, this solves a problem posed by Defant and Junge in 2004. In the case p = 1, our results establish a noncommutative form of Davis' celebrated theorem on the relation between martingale maximal and square functions in L_1, whose noncommutative form has remained open for quite some time. Given 1 ≤ p ≤ 2, we also provide new weak-type maximal estimates, which imply in turn left/right almost uniform convergence of E_n(x) in row/column Hardy spaces. This improves the bilateral convergence known so far. Our approach is based on new forms of the Davis martingale decomposition, which are of independent interest, and an algebraic atomic description for the involved Hardy spaces. The latter results are new even for commutative von Neumann algebras.

  1. Experimental Shock Decomposition of Siderite to Magnetite

    Science.gov (United States)

    Bell, M. S.; Golden, D. C.; Zolensky, M. E.

    2005-01-01

    The debate about fossil life on Mars includes the origin of magnetites of specific sizes and habits in the siderite-rich portions of the carbonate spheres in ALH 84001 [1,2]. Specifically [2] were able to demonstrate that inorganic synthesis of these compositionally zoned spheres from aqueous solutions of variable ion-concentrations is possible. They further demonstrated the formation of magnetite from siderite upon heating at 550 C under a Mars-like CO2-rich atmosphere according to 3FeCO3 = Fe3O4 + 2CO2 + CO [3] and they postulated that the carbonates in ALH 84001 were heated to these temperatures by some shock event. The average shock pressure for ALH 84001, substantially based on the refractive index of diaplectic feldspar glasses [3,4,5] is some 35-40 GPa and associated temperatures are some 300-400 C [4]. However, some of the feldspar is melted [5], requiring local deviations from this average as high as 45-50 GPa. Indeed, [5] observes the carbonates in ALH 84001 to be melted locally, requiring pressures in excess of 60 GPa and temperatures > 600 C. Combining these shock studies with the above inorganic synthesis of zoned carbonates it seems possible to produce the ALH 84001 magnetites by the shock-induced decomposition of siderite.

  2. Evolution-Based Functional Decomposition of Proteins.

    Directory of Open Access Journals (Sweden)

    Olivier Rivoire

    2016-06-01

    Full Text Available The essential biological properties of proteins (folding, biochemical activities, and the capacity to adapt) arise from the global pattern of interactions between amino acid residues. The statistical coupling analysis (SCA) is an approach to defining this pattern that involves the study of amino acid coevolution in an ensemble of sequences comprising a protein family. This approach indicates a functional architecture within proteins in which the basic units are coupled networks of amino acids termed sectors. This evolution-based decomposition has potential for new understandings of the structural basis for protein function. To facilitate its usage, we present here the principles and practice of the SCA and introduce new methods for sector analysis in a Python-based software package (pySCA). We show that the pattern of amino acid interactions within sectors is linked to the divergence of functional lineages in a multiple sequence alignment, a model for how sector properties might be differentially tuned in members of a protein family. This work provides new tools for studying proteins and for generally testing the concept of sectors as the principal units of function and adaptive variation.

  3. Dynamic mode decomposition for compressive system identification

    Science.gov (United States)

    Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.

  4. Organocatalytic decomposition of polyethylene terephthalate using triazabicyclodecene

    Science.gov (United States)

    Lecuyer, Julien Matsumoto

    This study focuses on the organocatalytic decomposition of polyethylene terephthalate (PET) using 1,5,7-triazabicyclo[4.4.0]dec-5-ene (TBD) to form a diverse library of aromatic amides. The reaction scheme was specifically designed to use low reaction temperatures (<150 °C) and avoid using solvents during the reaction to provide a more environmentally friendly process. Of all the amines tested, PET aminolysis with aliphatic and aromatic amines demonstrated the best performance, with yields higher than 72%. PET aminolysis with click-functionalized and non-symmetric reagents facilitated attack on certain sites on the basis of reactivity. Finally, the PET degradation reactions with secondary-amine- and tertiary-amine-functionalized reagents yielded mixed results due to complications with isolating the product from the crude solution. Four of the PET-based monomers were also selected as modifiers for epoxy hardening to demonstrate the ability to convert waste into monomers for high-value applications. The glass transition temperatures, obtained using differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA), of the epoxy composite samples treated with the PET-based monomers were generally higher in comparison to the samples cured with the basic diamines, due to the hydrogen bonding and added rigidity from the aromatic amide group. Developing these monomers provides a green and commercially viable alternative for eliminating a waste product that is becoming an environmental concern.

  5. Calibration with near-continuous spectral measurements

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Rasmussen, Michael; Madsen, Henrik

    2001-01-01

    In chemometrics, traditional calibration with spectral measurements expresses a quantity of interest (e.g. a concentration) as a linear combination of the spectral measurements at a number of wavelengths. Often the spectral measurements are performed at a large number of wavelengths and in thi... ...by an example in which the octane number of gasoline is related to near-infrared spectral measurements. The performance is found to be much better than for the traditional calibration methods.

  6. USGS Spectral Library Version 7

    Science.gov (United States)

    Kokaly, Raymond F.; Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Hoefen, Todd M.; Pearson, Neil C.; Wise, Richard A.; Benzel, William M.; Lowers, Heather A.; Driscoll, Rhonda L.; Klein, Anna J.

    2017-04-10

    We have assembled a library of spectra measured with laboratory, field, and airborne spectrometers. The instruments used cover wavelengths from the ultraviolet to the far infrared (0.2 to 200 microns [μm]). Laboratory samples of specific minerals, plants, chemical compounds, and manmade materials were measured. In many cases, samples were purified, so that unique spectral features of a material can be related to its chemical structure. These spectro-chemical links are important for interpreting remotely sensed data collected in the field or from an aircraft or spacecraft. This library also contains physically constructed as well as mathematically computed mixtures. Four different spectrometer types were used to measure spectra in the library: (1) Beckman™ 5270 covering the spectral range 0.2 to 3 µm, (2) standard, high resolution (hi-res), and high-resolution Next Generation (hi-resNG) models of Analytical Spectral Devices (ASD) field portable spectrometers covering the range from 0.35 to 2.5 µm, (3) Nicolet™ Fourier Transform Infra-Red (FTIR) interferometer spectrometers covering the range from about 1.12 to 216 µm, and (4) the NASA Airborne Visible/Infra-Red Imaging Spectrometer AVIRIS, covering the range 0.37 to 2.5 µm. Measurements of rocks, soils, and natural mixtures of minerals were made in laboratory and field settings. Spectra of plant components and vegetation plots, comprising many plant types and species with varying backgrounds, are also in this library. Measurements by airborne spectrometers are included for forested vegetation plots, in which the trees are too tall for measurement by a field spectrometer. This report describes the instruments used, the organization of materials into chapters, metadata descriptions of spectra and samples, and possible artifacts in the spectral measurements. To facilitate greater application of the spectra, the library has also been convolved to selected spectrometer and imaging spectrometers sampling and

  7. Spectral properties of generalized eigenparameter dependent ...

    African Journals Online (AJOL)

    Jost function, spectrum, the spectral singularities, and the properties of the principal vectors corresponding to the spectral singularities of L, if Σ_{n=1}^{∞} n(|1 − a_n| + |b_n|) < ∞. Mathematics Subject Classification (2010): 34L05, 34L40, 39A70, 47A10, 47A75. Key words: Discrete equations, eigenparameter, spectral analysis, ...

  8. Spectral Lag Evolution among γ-Ray Burst Pulses

    Indian Academy of Sciences (India)

    2016-01-27

    Jan 27, 2016 ... We analyse the spectral lag evolution of γ-ray burst (GRB) pulses with observations by CGRO/BATSE. No universal spectral lag evolution feature or pulse luminosity-lag relation within a GRB is observed. Our results suggest that the spectral lag would be due to radiation physics and dynamics of a given ...

  9. Calibrating spectral images using penalized likelihood

    NARCIS (Netherlands)

    Heijden, van der G.W.A.M.; Glasbey, C.

    2003-01-01

    A new method is presented for automatic correction of distortions and for spectral calibration (which band corresponds to which wavelength) of spectral images recorded by means of a spectrograph. The method consists of recording a bar-like pattern with an illumination source with spectral bands

  10. Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques

    Science.gov (United States)

    1995-01-01

    Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...

  11. C7-Decompositions of the Tensor Product of Complete Graphs

    Directory of Open Access Journals (Sweden)

    Manikandan R.S.

    2017-08-01

    Full Text Available In this paper we consider a decomposition of Km × Kn, where × denotes the tensor product of graphs, into cycles of length seven. We prove that for m, n ≥ 3, cycles of length seven decompose the graph Km × Kn if and only if (1) either m or n is odd and (2) 14 | m(m − 1)n(n − 1). The results of this paper together with the results of [Cp-Decompositions of some regular graphs, Discrete Math. 306 (2006) 429–451] and [C5-Decompositions of the tensor product of complete graphs, Australasian J. Combinatorics 37 (2007) 285–293], give necessary and sufficient conditions for the existence of a p-cycle decomposition, where p ≥ 5 is a prime number, of the graph Km × Kn.

  12. Thermal Decomposition of RDX from Reactive Molecular Dynamics

    National Research Council Canada - National Science Library

    Strachan, Alejandro; Kober, Edward M; van Duin, Adri C; Oxgaard, Jonas; Goddard, III, William A

    2005-01-01

    ...] at various temperatures and densities. We find that the time evolution of the potential energy can be described reasonably well with a single exponential function from which we obtain an overall characteristic time of decomposition...

  13. Modal Decomposition of Synthetic Jet Flow Based on CFD Computation

    Directory of Open Access Journals (Sweden)

    Hyhlík Tomáš

    2015-01-01

    Full Text Available The article analyzes results of numerical simulation of synthetic jet flow using modal decomposition. The analyses are based on the numerical simulation of axisymmetric unsteady laminar flow obtained using the ANSYS Fluent CFD code. Three typical laminar regimes are compared from the point of view of modal decomposition. The first regime, without synthetic jet creation, has Reynolds number Re = 76 and Stokes number S = 19.7. The second studied regime is defined by Re = 145 and S = 19.7. The third regime, with synthetic jet formation, has Re = 329 and S = 19.7. Modal decomposition of the obtained flow fields is done using proper orthogonal decomposition (POD), where the energetically most important modes are identified. The structure of the POD modes is discussed together with the classical approach based on phase-averaged velocities.
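
    The snapshot form of proper orthogonal decomposition used above can be computed directly from a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below only illustrates that step on synthetic data; the array names, sizes, and values are invented and not taken from the paper.

```python
import numpy as np

# Minimal snapshot-POD sketch (illustrative only, not the paper's code).
# `snapshots` is a hypothetical (n_points x n_snapshots) array of velocity
# fields sampled from a CFD run, one column per time instant.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((500, 64))          # stand-in data

mean_field = snapshots.mean(axis=1, keepdims=True)  # remove the mean flow
fluctuations = snapshots - mean_field

# Thin SVD: columns of U are the spatial POD modes, s**2 ranks their energy.
U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
energy = s**2 / np.sum(s**2)

print("energy captured by the first 3 modes:", energy[:3].sum())
# Temporal coefficient of mode k: a_k(t) = s[k] * Vt[k, :]
a0 = s[0] * Vt[0, :]
```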

  14. Bark traits, decomposition and flammability of Australian forest trees

    NARCIS (Netherlands)

    Grootemaat, Saskia; Wright, Ian J.; Van Bodegom, Peter M.; Cornelissen, Johannes H.C.; Shaw, Veronica

    2017-01-01

    Bark shedding is a remarkable feature of Australian trees, yet relatively little is known about interspecific differences in bark decomposability and flammability, or what chemical or physical traits drive variation in these properties. We measured the decomposition rate and flammability

  15. Thermal decomposition of potassium bis-oxalatodiaqua-indate (III ...

    Indian Academy of Sciences (India)

    2] 3H2O. Thermal decomposition studies show that the compound decomposes first to the anhydrous potassium indium oxalate ... Bio-inorganic Chemistry Laboratories, School of Chemistry, Andhra University, Visakhapatnam 530 003, India ...

  16. Ozone Decomposition on the Surface of Metal Oxide Catalyst

    Directory of Open Access Journals (Sweden)

    Batakliev Todor Todorov

    2014-12-01

    Full Text Available The catalytic decomposition of ozone to molecular oxygen over a catalytic mixture containing manganese, copper and nickel oxides was investigated in the present work. The catalytic activity was evaluated on the basis of the decomposition coefficient, which is proportional to the ozone decomposition rate and has already been used in other studies for estimating catalytic activity. The reaction was studied in the presence of thermally modified catalytic samples operating at different temperatures and ozone flow rates. The catalyst changes were followed by kinetic methods, surface measurements, temperature-programmed reduction and IR spectroscopy. The phase composition of the metal oxide catalyst was determined by X-ray diffraction. The catalyst mixture showed high activity in ozone decomposition in both wet and dry O3/O2 gas mixtures. A mechanism of catalytic ozone degradation was suggested.

  17. DECOMPOSITION OF TARS IN MICROWAVE PLASMA – PRELIMINARY RESULTS

    Directory of Open Access Journals (Sweden)

    Mateusz Wnukowski

    2014-07-01

    Full Text Available The paper addresses the main problem connected with biomass gasification - the presence of tar in the product gas. It presents preliminary results of tar decomposition in a microwave plasma reactor and gives a basic insight into the construction and operation of the plasma reactor. During the experiment, tests were carried out on toluene as a tar surrogate. Nitrogen was used as the carrier gas for toluene and as the plasma agent. Flow rates of the gases and the microwave generator's power were kept constant during the whole experiment. The results showed that the decomposition of toluene was effective, with a decomposition efficiency above 95%. The main products of tar decomposition were light hydrocarbons and soot. The article also outlines plans for further research on tar removal from the product gas.

  18. Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry

    Science.gov (United States)

    Griff Freeman, R.; McCurdy, David L.

    1998-08-01

    A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed-vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination, is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.

  19. A test of the hierarchical model of litter decomposition

    DEFF Research Database (Denmark)

    Bradford, Mark A.; Veen, G. F.; Bonis, Anne

    2017-01-01

    Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls...

  20. Fluorescence Intrinsic Characterization of Excitation-Emission Matrix Using Multi-Dimensional Ensemble Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Tzu-Chien Hsiao

    2013-11-01

    Full Text Available Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes.
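
    For readers unfamiliar with empirical mode decomposition, the following is a heavily simplified one-dimensional sifting sketch of the kind of decomposition that MEEMD generalizes. It omits the ensemble averaging, the multi-dimensional extension, boundary handling, and the proper IMF stopping criteria used in practice; all names and data are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_one_imf(x, t, max_sift=30):
    """Crude sifting loop: repeatedly subtract the mean of the extrema envelopes."""
    h = x.copy()
    for _ in range(max_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break
        upper = CubicSpline(t[maxima], h[maxima])(t)
        lower = CubicSpline(t[minima], h[minima])(t)
        mean_env = 0.5 * (upper + lower)
        h = h - mean_env
        if np.max(np.abs(mean_env)) < 1e-3 * np.max(np.abs(h)):
            break
    return h

def emd(x, t, n_imfs=3):
    """Return a few intrinsic mode functions and the residual trend."""
    imfs, residual = [], x.copy()
    for _ in range(n_imfs):
        imf = sift_one_imf(residual, t)
        imfs.append(imf)
        residual = residual - imf
    return np.array(imfs), residual

t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, trend = emd(signal, t)
print(imfs.shape)  # (3, 2000): fastest oscillation first, slower ones after
```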

  1. Plant diversity effects on root decomposition in grasslands

    Science.gov (United States)

    Chen, Hongmei; Mommer, Liesje; van Ruijven, Jasper; de Kroon, Hans; Gessler, Arthur; Scherer-Lorenzen, Michael; Wirth, Christian; Weigelt, Alexandra

    2016-04-01

    Loss of plant diversity impairs ecosystem functioning. Compared to other well-studied processes, we know little about whether and how plant diversity affects root decomposition, which is limiting our knowledge on biodiversity-carbon cycling relationships in the soil. Plant diversity potentially affects root decomposition via two non-exclusive mechanisms: by providing roots of different substrate quality and/or by altering the soil decomposition environment. To disentangle these two mechanisms, three decomposition experiments using a litter-bag approach were conducted on experimental grassland plots differing in plant species richness, functional group richness and functional group composition (e.g. presence/absence of grasses, legumes, small herbs and tall herbs, the Jena Experiment). We studied: 1) root substrate quality effects by decomposing roots collected from the different experimental plant communities in one common plot; 2) soil decomposition environment effects by decomposing standard roots in all experimental plots; and 3) the overall plant diversity effects by decomposing community roots in their 'home' plots. Litter bags were installed in April 2014 and retrieved after 1, 2 and 4 months to determine the mass loss. We found that mass loss decreased with increasing plant species richness, but not with functional group richness in the three experiments. However, functional group presence significantly affected mass loss with primarily negative effects of the presence of grasses and positive effects of the presence of legumes and small herbs. Our results thus provide clear evidence that species richness has a strong negative effect on root decomposition via effects on both root substrate quality and soil decomposition environment. This negative plant diversity-root decomposition relationship may partly account for the positive effect of plant diversity on soil C stocks by reducing C loss in addition to increasing primary root productivity. However, to fully

  2. Decomposition in lake sediments: bacterial action and interaction

    OpenAIRE

    Jones, J.G.

    1985-01-01

    This review discusses the processes involved in the decomposition of organic carbon derived initially from structural components of algae and other primary producers. It describes how groups of bacteria interact in time and space in a eutrophic lake. The relative importance of anaerobic and aerobic processes are discussed. The bulk of decomposition occurs within the sediment. The role of bacteria in the nitrogen cycle and the iron cycle, and in sulphate reduction and methanogenesis as the ter...

  3. Thermal decomposition of lanthanum(III) butyrate in argon atmosphere

    DEFF Research Database (Denmark)

    Grivel, Jean-Claude; Yue, Zhao; Xiao, Tang

    2013-01-01

    The thermal decomposition of La(C3H7CO2)3·xH2O (x≈0.82) was studied in argon during heating at 5K/min. After the loss of bound H2O, the anhydrous butyrate presents at 135°C a phase transition to a mesophase, which turns to an isotropic liquid at 180°C. The decomposition of the anhydrous butyrate ...

  4. ROLE OF MICROORGANISM AND MICROFAUNA IN PLANT LITTER DECOMPOSITION

    OpenAIRE

    Raj Singh*, Anju Rani, Permod Kumar, Gyanika Shukla, Amit Kumar

    2016-01-01

    Though the fungi play a very significant role in the plant litter decomposition, studies revealed that the bacteria colonize the litters in the initial stages of decomposition. It has been observed that leaf species with low C:N ratio harbored higher number of bacteria than the more resistant species. The results of various workers outlined the development of the bacterial flora after litter fall due to improved moisture conditions but there is no change in the species composition. The plant ...

  5. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.

  6. Modified decomposition method for nonlinear Volterra-Fredholm integral equations

    Energy Technology Data Exchange (ETDEWEB)

    Bildik, Necdet [Department of Mathematics, Celal Bayar University, 45030 Manisa (Turkey)]. E-mail: necdet.bildik@bayar.edu.tr; Inc, Mustafa [Department of Mathematics, Firat University, 23119 Elazig (Turkey)]. E-mail: minc@firat.edu.tr

    2007-07-15

    In this paper, the nonlinear Volterra-Fredholm integral equations are solved by using the modified decomposition method (MDM). The approximate solution of this equation is calculated in the form of a series with easily computable components. The accuracy of the proposed numerical scheme is examined by comparison with other analytical and numerical results. Two test problems are presented to illustrate the reliability and the performance of the modified decomposition method.
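
    As a point of reference for the decomposition idea, the sketch below builds the series solution u = u0 + u1 + ... for a simple linear Volterra equation with a known exact solution. It is not the modified decomposition method of the paper (which treats nonlinear Volterra-Fredholm equations via Adomian polynomials); the test equation and discretization are chosen here purely for illustration.

```python
import numpy as np

# Decomposition-series sketch for a *linear* Volterra equation
#     u(x) = f(x) + integral_0^x K(x, t) u(t) dt,
# with f(x) = 1 and K = 1, whose exact solution is u(x) = exp(x).
# The solution is built as u = u_0 + u_1 + ..., where u_0 = f and
# u_{n+1}(x) = integral_0^x u_n(t) dt (evaluated with the trapezoid rule).
x = np.linspace(0.0, 1.0, 401)
f = np.ones_like(x)

def next_term(prev):
    increments = 0.5 * (prev[1:] + prev[:-1]) * np.diff(x)
    return np.concatenate(([0.0], np.cumsum(increments)))

u, term = f.copy(), f.copy()
for _ in range(15):            # accumulate the series components
    term = next_term(term)
    u = u + term

print(np.max(np.abs(u - np.exp(x))))   # residual vs. the exact solution
```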

  7. Decomposition in conic optimization with partially separable structure

    DEFF Research Database (Denmark)

    Sun, Yifan; Andersen, Martin Skovgaard; Vandenberghe, Lieven

    2014-01-01

    Decomposition techniques for linear programming are difficult to extend to conic optimization problems with general nonpolyhedral convex cones because the conic inequalities introduce an additional nonlinear coupling between the variables. However in many applications the convex cones have...... semidefinite and positive-semidefinite-completable matrices with chordal sparsity patterns. The paper describes a decomposition method that exploits partial separability in conic linear optimization. The method is based on Spingarn's method for equality constrained convex optimization, combined with a fast...

  8. Decomposition of Mueller matrices of scattering media: Theory and experiment

    Directory of Open Access Journals (Sweden)

    R. Ossikovski

    2011-09-01

    Full Text Available Algebraic decomposition of Mueller matrices is a particularly promising approach to the retrieval of the optical properties of the medium investigated in a polarized light scattering experiment. Various decompositions of generally depolarizing Mueller matrices are revisited and discussed. Both classic as well as recently proposed approaches are reviewed. Physical and mathematical aspects such as depolarization and limits of applicability are comparatively addressed. Experimental matrices of scattering media are decomposed by different methodologies and physically interpreted.

  9. Conservation Rules of Direct Sum Decomposition of Groups

    Directory of Open Access Journals (Sweden)

    Nakasho Kazuhisa

    2016-03-01

    Full Text Available In this article, conservation rules of the direct sum decomposition of groups are mainly discussed. In the first section, we prepare miscellaneous definitions and theorems for further formalization in Mizar [5]. In the next three sections, we formalized the fact that the property of direct sum decomposition is preserved against the substitutions of the subscript set, flattening of direct sum, and layering of direct sum, respectively. We referred to [14], [13] [6] and [11] in the formalization.

  10. Sector decomposition and Hironaka's polyhedra game

    Energy Technology Data Exchange (ETDEWEB)

    Bogner, Christian [Institut fuer Physik, Universitaet Mainz (Germany)

    2008-07-01

    Sector decomposition is a method to compute numerically the Laurent expansion of divergent multi-loop Feynman integrals. In this talk we point out, that winning strategies for Hironaka's polyhedra game, encoding the combinatorics of resolutions of singularities by a blow-up sequence, can be applied to this method. We indicate how these strategies are used to guarantee for the termination of the sector decomposition algorithm by Binoth and Heinrich.

  11. A Decomposition Algorithm for Learning Bayesian Network Structures from Data

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Cordero Hernandez, Jorge

    2008-01-01

    Learning a large Bayesian network from a small data set is a challenging task. Most conventional structural learning approaches run into computational as well as statistical problems. We propose a decomposition algorithm for the structure construction without having to learn...... the complete network. The new learning algorithm first finds local components from the data, and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm in several benchmark networks....

  12. Resolving Nonstationary Spectral Information in Wind Speed Time Series Using the Hilbert-Huang Transform

    DEFF Research Database (Denmark)

    Vincent, Claire Louise; Giebel, Gregor; Pinson, Pierre

    2010-01-01

    such as the Fourier transform. The Hilbert–Huang transform is a local method based on a nonparametric and empirical decomposition of the data followed by calculation of instantaneous amplitudes and frequencies using the Hilbert transform. The Hilbert–Huang transformed 4-yr time series is averaged and summarized...... a 4-yr time series of 10-min wind speed observations. An adaptive spectral analysis method called the Hilbert–Huang transform is chosen for the analysis, because the nonstationarity of time series of wind speed observations means that they are not well described by a global spectral analysis method...... to show climatological patterns in the relationship between wind variability and time of day. First, by integrating the Hilbert spectrum along the frequency axis, a scalar time series representing the total variability within a given frequency range is calculated. Second, by calculating average spectra...
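
    The Hilbert step of the Hilbert-Huang procedure, computing the instantaneous amplitude and frequency of one already-decomposed mode from its analytic signal, can be sketched as follows. The sampling interval and the test "mode" below are invented stand-ins for a 10-min wind speed component, not the study's data.

```python
import numpy as np
from scipy.signal import hilbert

# Instantaneous amplitude and frequency of one (already decomposed) mode via
# the analytic signal; the mode below is a synthetic diurnal oscillation with
# a slowly varying amplitude, sampled every 10 minutes for four days.
t = np.arange(0, 4 * 24 * 3600, 600.0)            # seconds
mode = (1 + 0.3 * np.sin(2 * np.pi * t / (4 * 24 * 3600))) * np.sin(2 * np.pi * t / (24 * 3600))

analytic = hilbert(mode)
amplitude = np.abs(analytic)                        # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
frequency = np.gradient(phase, t) / (2 * np.pi)     # instantaneous frequency, Hz

print(amplitude.mean(), frequency.mean() * 86400)   # mean amplitude, cycles/day
```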

  13. Passive microrheology of soft materials with atomic force microscopy: A wavelet-based spectral analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Torres, C.; Streppa, L. [CNRS, UMR5672, Laboratoire de Physique, Ecole Normale Supérieure de Lyon, 46 Allée d' Italie, Université de Lyon, 69007 Lyon (France); Arneodo, A.; Argoul, F. [CNRS, UMR5672, Laboratoire de Physique, Ecole Normale Supérieure de Lyon, 46 Allée d' Italie, Université de Lyon, 69007 Lyon (France); CNRS, UMR5798, Laboratoire Ondes et Matière d' Aquitaine, Université de Bordeaux, 351 Cours de la Libération, 33405 Talence (France); Argoul, P. [Université Paris-Est, Ecole des Ponts ParisTech, SDOA, MAST, IFSTTAR, 14-20 Bd Newton, Cité Descartes, 77420 Champs sur Marne (France)

    2016-01-18

    Compared to active microrheology where a known force or modulation is periodically imposed to a soft material, passive microrheology relies on the spectral analysis of the spontaneous motion of tracers inherent or external to the material. Passive microrheology studies of soft or living materials with atomic force microscopy (AFM) cantilever tips are rather rare because, in the spectral densities, the rheological response of the materials is hardly distinguishable from other sources of random or periodic perturbations. To circumvent this difficulty, we propose here a wavelet-based decomposition of AFM cantilever tip fluctuations and we show that when applying this multi-scale method to soft polymer layers and to living myoblasts, the structural damping exponents of these soft materials can be retrieved.

  14. Spectral multitude and spectral dynamics reflect changing conjugation length in single molecules of oligophenylenevinylenes

    KAUST Repository

    Kobayashi, Hiroyuki

    2012-01-01

    Single-molecule study of phenylenevinylene oligomers revealed distinct spectral forms due to different conjugation lengths which are determined by torsional defects. Large spectral jumps between different spectral forms were ascribed to torsional flips of a single phenylene ring. These spectral changes reflect the dynamic nature of electron delocalization in oligophenylenevinylenes and enable estimation of the phenylene torsional barriers. © 2012 The Owner Societies.

  15. Termites promote resistance of decomposition to spatiotemporal variability in rainfall.

    Science.gov (United States)

    Veldhuis, Michiel P; Laso, Francisco J; Olff, Han; Berg, Matty P

    2017-02-01

    The ecological impact of rapid environmental change will depend on the resistance of key ecosystems processes, which may be promoted by species that exert strong control over local environmental conditions. Recent theoretical work suggests that macrodetritivores increase the resistance of African savanna ecosystems to changing climatic conditions, but experimental evidence is lacking. We examined the effect of large fungus-growing termites and other non-fungus-growing macrodetritivores on decomposition rates empirically with strong spatiotemporal variability in rainfall and temperature. Non-fungus-growing larger macrodetritivores (earthworms, woodlice, millipedes) promoted decomposition rates relative to microbes and small soil fauna (+34%) but both groups reduced their activities with decreasing rainfall. However, fungus-growing termites increased decomposition rates strongest (+123%) under the most water-limited conditions, making overall decomposition rates mostly independent from rainfall. We conclude that fungus-growing termites are of special importance in decoupling decomposition rates from spatiotemporal variability in rainfall due to the buffered environment they create within their extended phenotype (mounds), that allows decomposition to continue when abiotic conditions outside are less favorable. This points at a wider class of possibly important ecological processes, where soil-plant-animal interactions decouple ecosystem processes from large-scale climatic gradients. This may strongly alter predictions from current climate change models. © 2016 by the Ecological Society of America.

  16. Nonequilibrium adiabatic molecular dynamics simulations of methane clathrate hydrate decomposition.

    Science.gov (United States)

    Alavi, Saman; Ripmeester, J A

    2010-04-14

    Nonequilibrium, constant energy, constant volume (NVE) molecular dynamics simulations are used to study the decomposition of methane clathrate hydrate in contact with water. Under adiabatic conditions, the rate of methane clathrate decomposition is affected by heat and mass transfer arising from the breakup of the clathrate hydrate framework and release of the methane gas at the solid-liquid interface and diffusion of methane through water. We observe that temperature gradients are established between the clathrate and solution phases as a result of the endothermic clathrate decomposition process and this factor must be considered when modeling the decomposition process. Additionally we observe that clathrate decomposition does not occur gradually with breakup of individual cages, but rather in a concerted fashion with rows of structure I cages parallel to the interface decomposing simultaneously. Due to the concerted breakup of layers of the hydrate, large amounts of methane gas are released near the surface which can form bubbles that will greatly affect the rate of mass transfer near the surface of the clathrate phase. The effects of these phenomena on the rate of methane hydrate decomposition are determined and implications on hydrate dissociation in natural methane hydrate reservoirs are discussed.

  17. Consequences of biodiversity loss for litter decomposition across biomes.

    Science.gov (United States)

    Handa, I Tanya; Aerts, Rien; Berendse, Frank; Berg, Matty P; Bruder, Andreas; Butenschoen, Olaf; Chauvet, Eric; Gessner, Mark O; Jabiol, Jérémy; Makkonen, Marika; McKie, Brendan G; Malmqvist, Björn; Peeters, Edwin T H M; Scheu, Stefan; Schmid, Bernhard; van Ruijven, Jasper; Vos, Veronique C A; Hättenschwiler, Stephan

    2014-05-08

    The decomposition of dead organic matter is a major determinant of carbon and nutrient cycling in ecosystems, and of carbon fluxes between the biosphere and the atmosphere. Decomposition is driven by a vast diversity of organisms that are structured in complex food webs. Identifying the mechanisms underlying the effects of biodiversity on decomposition is critical given the rapid loss of species worldwide and the effects of this loss on human well-being. Yet despite comprehensive syntheses of studies on how biodiversity affects litter decomposition, key questions remain, including when, where and how biodiversity has a role and whether general patterns and mechanisms occur across ecosystems and different functional types of organism. Here, in field experiments across five terrestrial and aquatic locations, ranging from the subarctic to the tropics, we show that reducing the functional diversity of decomposer organisms and plant litter types slowed the cycling of litter carbon and nitrogen. Moreover, we found evidence of nitrogen transfer from the litter of nitrogen-fixing plants to that of rapidly decomposing plants, but not between other plant functional types, highlighting that specific interactions in litter mixtures control carbon and nitrogen cycling during decomposition. The emergence of this general mechanism and the coherence of patterns across contrasting terrestrial and aquatic ecosystems suggest that biodiversity loss has consistent consequences for litter decomposition and the cycling of major elements on broad spatial scales.

  18. Long-term litter decomposition controlled by manganese redox cycling.

    Science.gov (United States)

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-09-22

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.

  19. The Products of the Thermal Decomposition of CH3CHO

    Energy Technology Data Exchange (ETDEWEB)

    Vasiliou, AnGayle; Piech, Krzysztof M.; Zhang, Xu; Nimlos, Mark R.; Ahmed, Musahid; Golan, Amir; Kostko, Oleg; Osborn, David L.; Daily, John W.; Stanton, John F.; Ellison, G. Barney

    2011-04-06

    We have used a heated 2 cm x 1 mm SiC microtubular (µtubular) reactor to decompose acetaldehyde: CH3CHO + Δ → products. Thermal decomposition is followed at pressures of 75 - 150 Torr and at temperatures up to 1700 K, conditions that correspond to residence times of roughly 50 - 100 µs in the µtubular reactor. The acetaldehyde decomposition products are identified by two independent techniques: VUV photoionization mass spectroscopy (PIMS) and infrared (IR) absorption spectroscopy after isolation in a cryogenic matrix. Besides CH3CHO, we have studied three isotopologues, CH3CDO, CD3CHO, and CD3CDO. We have identified the thermal decomposition products CH3 (PIMS), CO (IR, PIMS), H (PIMS), H2 (PIMS), CH2CO (IR, PIMS), CH2=CHOH (IR, PIMS), H2O (IR, PIMS), and HC≡CH (IR, PIMS). Plausible evidence has been found to support the idea that there are at least three different thermal decomposition pathways for CH3CHO. Radical decomposition: CH3CHO + Δ → CH3 + [HCO] → CH3 + H + CO. Elimination: CH3CHO + Δ → H2 + CH2=C=O. Isomerization/elimination: CH3CHO + Δ → [CH2=CH-OH] → HC≡CH + H2O. Both PIMS and IR spectroscopy show compelling evidence for the participation of vinylidene, CH2=C:, as an intermediate in the decomposition of vinyl alcohol: CH2=CH-OH + Δ → [CH2=C:] + H2O → HC≡CH + H2O.

  20. Gastric cancer staging with dual energy spectral CT imaging.

    Directory of Open Access Journals (Sweden)

    Zilai Pan

    Full Text Available PURPOSE: To evaluate the clinical utility of dual energy spectral CT (DEsCT) in staging and characterizing gastric cancers. MATERIALS AND METHODS: 96 patients suspected of gastric cancers underwent dual-phasic scans (arterial phase (AP) and portal venous phase (PP)) with the DEsCT mode. Three types of images were reconstructed for analysis: conventional polychromatic images, material-decomposition images, and monochromatic image sets with photon energies from 40 to 140 keV. The polychromatic and monochromatic images were compared in TNM staging. The iodine concentrations in the lesions and lymph nodes were measured on the iodine-based material-decomposition images. These values were further normalized against that in the aorta, and the normalized iodine concentration (nIC) values were statistically compared. Results were correlated with pathological findings. RESULTS: The overall accuracies for T, N and M staging were (81.2%, 80.0%, and 98.9%) and (73.9%, 75.0%, and 98.9%) as determined with the monochromatic images and the conventional kVp images, respectively. The improvement of the accuracy in N-staging using the keV images was statistically significant (p < 0.05). The nIC values between the differentiated and undifferentiated carcinoma and between metastatic and non-metastatic lymph nodes were significantly different in both AP (p = 0.02) and PP (p = 0.01). Among metastatic lymph nodes, the nIC of the signet-ring cell carcinoma was significantly different from the adenocarcinoma (p = 0.02) and mucinous adenocarcinoma (p = 0.01) in PP. CONCLUSION: The monochromatic images obtained with DEsCT may be used to improve N-staging accuracy. Quantitative iodine concentration measurements may be helpful for differentiating between differentiated and undifferentiated gastric carcinoma, and between metastatic and non-metastatic lymph nodes.

  1. [A New HAC Unsupervised Classifier Based on Spectral Harmonic Analysis].

    Science.gov (United States)

    Yang, Ke-ming; Wei, Hua-feng; Shi, Gang-qiang; Sun, Yang-yang; Liu, Fei

    2015-07-01

    Hyperspectral image classification is one of the important methods for identifying image information; it has great significance for feature identification, dynamic monitoring and thematic information extraction, etc. Unsupervised classification without prior knowledge is widely used in hyperspectral image classification. This article proposes a new hyperspectral image unsupervised classification algorithm based on harmonic analysis (HA), called the harmonic analysis classifier (HAC). First, the HAC algorithm computes the first harmonic component and draws its histogram, so it can determine the initial feature categories and the cluster-center pixels according to the number and location of the peaks. Then, the algorithm maps the spectral waveform of each pixel to be classified into a feature space made up of harmonic decomposition times, amplitude and phase; similar features group together in this space, and the pixels are classified according to the minimum-distance principle. Finally, the algorithm computes the Euclidean distance between these pixels and the cluster centers and merges the initial classes by setting a distance threshold, so that the HAC achieves hyperspectral image classification. The paper collects spectral curves of two feature categories and obtains harmonic decomposition times, amplitude and phase after harmonic analysis; the distribution of HA components in the feature space verifies the correctness of the HAC. The HAC algorithm was also applied to an EO-1 Hyperion hyperspectral image to obtain classification results. Comparing the hyperspectral image classification results of the K-MEANS, ISODATA and HAC classifiers, the HAC, as an unsupervised classification method, is confirmed to be better suited to hyperspectral image classification.
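
    The feature mapping and minimum-distance assignment described above can be illustrated with a small sketch: each (synthetic) pixel spectrum is reduced to the amplitude and phase of its first Fourier harmonic and assigned to the nearest cluster centre. The data, the choice of centres and the two-class setup are invented for illustration and do not reproduce the authors' histogram-based initialization.

```python
import numpy as np

# Sketch of the HAC idea: describe each pixel spectrum by its first-harmonic
# amplitude and phase (via the discrete Fourier transform), then assign pixels
# to the nearest cluster centre in that feature space. Data are synthetic.
rng = np.random.default_rng(1)
n_pixels, n_bands = 1000, 128
spectra = rng.standard_normal((n_pixels, n_bands)) * 0.05
spectra[:500] += np.sin(np.linspace(0, 2 * np.pi, n_bands))        # class A shape
spectra[500:] += np.cos(np.linspace(0, 2 * np.pi, n_bands)) * 0.5  # class B shape

coeffs = np.fft.rfft(spectra, axis=1)
first = coeffs[:, 1]                       # first harmonic of each spectrum
features = np.column_stack([np.abs(first), np.angle(first)])  # amplitude, phase

centres = np.array([features[:500].mean(axis=0),   # stand-ins for the histogram-
                    features[500:].mean(axis=0)])  # derived initial cluster centres
dists = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
labels = dists.argmin(axis=1)              # minimum-distance classification
print(np.bincount(labels))
```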

  2. Cross-spectral purity of electromagnetic fields.

    Science.gov (United States)

    Hassinen, Timo; Tervo, Jani; Friberg, Ari T

    2009-12-15

    We extend Mandel's scalar-wave concept of cross-spectral purity to electromagnetic fields. We show that in the electromagnetic case, assumptions similar to the scalar cross-spectral purity lead to a reduction formula, analogous with the one introduced by Mandel. We also derive a condition that shows that the absolute value of the normalized zeroth two-point Stokes parameter of two cross-spectrally pure electromagnetic fields is the same for every frequency component of the field. In analogy with the scalar theory we further introduce a measure of the cross-spectral purity of two electromagnetic fields, namely, the degree of electromagnetic cross-spectral purity.

  3. Mode Decomposition Methods for Soil Moisture Prediction

    Science.gov (United States)

    Jana, R. B.; Efendiev, Y. R.; Mohanty, B.

    2014-12-01

    Lack of reliable, well-distributed, long-term datasets for model validation is a bottle-neck for most exercises in soil moisture analysis and prediction. Understanding what factors drive soil hydrological processes at different scales and their variability is very critical to further our ability to model the various components of the hydrologic cycle more accurately. For this, a comprehensive dataset with measurements across scales is very necessary. Intensive fine-resolution sampling of soil moisture over extended periods of time is financially and logistically prohibitive. Installation of a few long term monitoring stations is also expensive, and needs to be situated at critical locations. The concept of Time Stable Locations has been in use for some time now to find locations that reflect the mean values for the soil moisture across the watershed under all wetness conditions. However, the soil moisture variability across the watershed is lost when measuring at only time stable locations. We present here a study using techniques such as Dynamic Mode Decomposition (DMD) and Discrete Empirical Interpolation Method (DEIM) that extends the concept of time stable locations to arrive at locations that provide not simply the average soil moisture values for the watershed, but also those that can help re-capture the dynamics across all locations in the watershed. As with the time stability, the initial analysis is dependent on an intensive sampling history. The DMD/DEIM method is an application of model reduction techniques for non-linearly related measurements. Using this technique, we are able to determine the number of sampling points that would be required for a given accuracy of prediction across the watershed, and the location of those points. Locations with higher energetics in the basis domain are chosen first. We present case studies across watersheds in the US and India. The technique can be applied to other hydro-climates easily.
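
    A minimal sketch of the standard (exact) dynamic mode decomposition step mentioned above is given below, assuming a hypothetical snapshot matrix of soil-moisture observations (locations by times); the DEIM point-selection stage is not shown, and all names and data are invented.

```python
import numpy as np

# Exact DMD sketch on a hypothetical (locations x times) soil-moisture
# snapshot matrix; the data below are synthetic stand-ins.
rng = np.random.default_rng(2)
n_loc, n_t = 200, 80
time = np.arange(n_t)
data = (np.outer(rng.random(n_loc), np.cos(0.2 * time))
        + 0.5 * np.outer(rng.random(n_loc), np.exp(-0.02 * time))
        + 0.01 * rng.standard_normal((n_loc, n_t)))

X, Y = data[:, :-1], data[:, 1:]                  # time-shifted snapshot pairs
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 5                                             # truncation rank
Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :].conj().T

A_tilde = Ur.conj().T @ Y @ Vr / sr               # reduced linear operator
eigvals, W = np.linalg.eig(A_tilde)
modes = (Y @ Vr / sr) @ W                         # exact DMD modes, one per column

growth_and_freq = np.log(eigvals.astype(complex)) # per-time-step rates/frequencies
print(modes.shape, np.round(growth_and_freq, 3))
```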

  4. Hydrogen production by the decomposition of water

    Science.gov (United States)

    Hollabaugh, C.M.; Bowman, M.G.

    A process is described for the production of hydrogen from water by a sulfuric acid process employing electrolysis and thermo-chemical decomposition. The water containing SO2 is electrolyzed to produce H2 at the cathode and to oxidize the SO2 to form H2SO4 at the anode. After the H2 has been separated, a compound of the type MrXs is added to produce a water-insoluble sulfate of M and a water-insoluble oxide of the metal in the radical X. In the compound MrXs, M is at least one metal selected from the group consisting of Ba(2+), Ca(2+), Sr(2+), La(2+), and Pb(2+); X is at least one radical selected from the group consisting of molybdate (MoO4(2-)), tungstate (WO4(2-)), and metaborate (BO2(1-)); and r and s are either 1, 2, or 3 depending upon the valence of M and X. The precipitated mixture is filtered and heated to a temperature sufficiently high to form SO3 gas and to reform MrXs. The SO3 is dissolved in a small amount of H2O to produce concentrated H2SO4, and the MrXs is recycled to the process. Alternatively, the SO3 gas can be recycled to the beginning of the process to provide a continuous process for the production of H2 in which only water need be added in a substantial amount. (BLM)

  5. Flat norm decomposition of integral currents

    Directory of Open Access Journals (Sweden)

    Sharif Ibrahim

    2016-05-01

    Full Text Available Currents represent generalized surfaces studied in geometric measure theory. They range from relatively tame integral currents representing oriented compact manifolds with boundary and integer multiplicities, to arbitrary elements of the dual space of differential forms. The flat norm provides a natural distance in the space of currents, and works by decomposing a $d$-dimensional current into $d$- and (the boundary of) $(d+1)$-dimensional pieces in an optimal way. Given an integral current, can we expect its flat norm decomposition to be integral as well? This is not known in general, except in the case of $d$-currents that are boundaries of $(d+1)$-currents in $\mathbb{R}^{d+1}$ (following results from a corresponding problem on the $L^1$ total variation ($L^1$TV) of functionals). On the other hand, for a discretized flat norm on a finite simplicial complex, the analogous statement holds even when the inputs are not boundaries. This simplicial version relies on the total unimodularity of the boundary matrix of the simplicial complex; a result distinct from the $L^1$TV approach. We develop an analysis framework that extends the result in the simplicial setting to one for $d$-currents in $\mathbb{R}^{d+1}$, provided a suitable triangulation result holds. In $\mathbb{R}^2$, we use a triangulation result of Shewchuk (bounding both the size and location of small angles), and apply the framework to show that the discrete result implies the continuous result for $1$-currents in $\mathbb{R}^2$.

  6. Ramanujan subspace pursuit for signal periodic decomposition

    Science.gov (United States)

    Deng, Shi-Wen; Han, Ji-Qing

    2017-06-01

    The period estimation and periodic decomposition of a signal represent long-standing problems in the field of signal processing and biomolecular sequence analysis. To address such problems, we introduce the Ramanujan subspace pursuit (RSP) based on the Ramanujan subspace. As a greedy iterative algorithm, the RSP can uniquely decompose any signal into a sum of exactly periodic components by selecting and removing the most dominant periodic component from the residual signal in each iteration. In the RSP, a novel periodicity metric is derived based on the energy of the exactly periodic component obtained by orthogonally projecting the residual signal into the Ramanujan subspace. The metric is then used to select the most dominant periodic component in each iteration. To reduce the computational cost of the RSP, we also propose the fast RSP (FRSP) based on the relationship between the periodic subspace and the Ramanujan subspace and based on the maximum likelihood estimation of the energy of the periodic component in the periodic subspace. The fast RSP has a lower computational cost and can decompose a signal of length N into a sum of K exactly periodic components in O(KN log N). In short, the main contributions of this paper are threefold: First, we present the RSP algorithm for decomposing a signal into its periodic components and theoretically prove the convergence of the algorithm based on the Ramanujan subspaces. Second, we present the FRSP algorithm, which is used to reduce the computational cost. Finally, we derive a periodicity metric to measure the periodicity of the hidden periodic components of a signal. In addition, our results show that the RSP outperforms current algorithms for period estimation.

  7. Ultrasound elastography using empirical mode decomposition analysis.

    Science.gov (United States)

    Sadeghi, Sajjad; Behnam, Hamid; Tavakkoli, Jahan

    2014-01-01

    Ultrasound elastography is a non-invasive method that images the elasticity of soft tissues. To make an image, ultrasound radio frequency (RF) signals are acquired before and after a small compression, and the time delays between them are estimated. The first derivative of the displacement estimates is called the elastogram. In this study, we construct an elastogram using a processing method named empirical mode decomposition (EMD). EMD is an analytic technique that decomposes a complicated signal into a collection of simple signals called intrinsic mode functions (IMFs). The idea of the paper is to use these IMFs instead of the primary RF signals. To implement the algorithms, two different datasets were selected. The first one was data from a sandwich structure of normal and cooked tissue. The second dataset consisted of around 180 frames acquired from a malignant breast tumor. For displacement estimation, two different methods, cross-correlation and wavelet transform, were used, and to evaluate image quality, two conventional parameters, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), were calculated for each image. Results show that in both methods the quality improves after using EMD. In the first dataset, with the cross-correlation technique, CNR and SNR improve by about 16 dB and 9 dB, respectively. In the same dataset, using the wavelet technique, the parameters show improvements of 14 dB and 10 dB, respectively. In the second dataset (breast tumor data), CNR and SNR improve by 18 dB and 7 dB with the cross-correlation method and by 17 dB and 6 dB with the wavelet technique, respectively.
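
    The displacement-estimation step referred to above (here the cross-correlation variant, applied to raw rather than EMD-processed signals) can be sketched as a windowed cross-correlation between pre- and post-compression RF lines. The signals below are synthetic and the window settings are illustrative, not the study's.

```python
import numpy as np

# Windowed cross-correlation sketch for estimating local time delays between
# pre- and post-compression RF lines (synthetic data; the paper additionally
# applies EMD to the RF signals before this step).
rng = np.random.default_rng(3)
n = 4000
pre = rng.standard_normal(n)
true_shift = 6                                  # samples of displacement
post = np.roll(pre, true_shift) + 0.05 * rng.standard_normal(n)

win, hop, max_lag = 128, 64, 20
delays = []
for start in range(0, n - win, hop):
    a = pre[start:start + win]
    b = post[start:start + win]
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.dot(a, np.roll(b, -lag)) for lag in lags]   # circular, for brevity
    delays.append(lags[int(np.argmax(cc))])

print("median estimated delay (samples):", np.median(delays))
```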

  8. Normalized spectral damage of a linear system over different spectral loading patterns

    Science.gov (United States)

    Kim, Chan-Jung

    2017-08-01

    Spectral fatigue damage is affected by different loading patterns; the damage may accumulate in a different manner because the spectral pattern has an influence on stresses or strains. The normalization of spectral damage with respect to the spectral loading acceleration is a novel way to compare the accumulated fatigue damage over different spectral loading patterns. To evaluate the sensitivity of fatigue damage over different spectral loading cases, a simple notched specimen is used to conduct a uniaxial vibration test for two representative spectral patterns, random and harmonic, between 30 and 3000 Hz. The fatigue damage to the simple specimen is analyzed for different spectral loading cases using the normalized spectral damage obtained from the measured response data for both acceleration and strain. The influence of spectral loading patterns is discussed based on these analyses.

  9. Bayesian approach to magnetotelluric tensor decomposition

    Directory of Open Access Journals (Sweden)

    Michel Menvielle

    2010-05-01

    Magnetotelluric directional analysis and impedance tensor decomposition are basic tools to validate a local/regional composite electrical model of the underlying structure. Bayesian stochastic methods approach the problem of parameter estimation and uncertainty characterization in a fully probabilistic fashion, through the use of posterior model probabilities. We use the standard Groom-Bailey 3-D local/2-D regional composite model in our Bayesian approach. We assume that the experimental impedance estimates are contaminated with Gaussian noise and define the likelihood of a particular composite model with respect to the observed data. We use non-informative, flat priors over physically reasonable intervals for the standard Groom-Bailey decomposition parameters. We apply two numerical methods, the Markov chain Monte Carlo procedure based on the Gibbs sampler and a single-component adaptive Metropolis algorithm. From the posterior samples, we characterize the estimates and uncertainties of the individual decomposition parameters by using the respective marginal posterior probabilities. We conclude that the stochastic scheme performs reliably for a variety of models, including the multisite and multifrequency case with up to
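
    The adaptive Metropolis scheme itself is not reproduced here, but the underlying random-walk Metropolis idea with a Gaussian likelihood and flat priors can be sketched on a toy forward model. The model, prior bounds, step size and data below are invented for illustration and are unrelated to the actual Groom-Bailey decomposition parameters.

```python
import numpy as np

# Random-walk Metropolis sketch: Gaussian likelihood, flat prior on an interval.
# The forward model here is a toy stand-in, not the Groom-Bailey decomposition.
rng = np.random.default_rng(4)

def forward(theta, freqs):
    # toy "response" model: amplitude decaying with frequency
    return theta[0] * np.exp(-theta[1] * freqs)

freqs = np.linspace(0.1, 2.0, 30)
true_theta = np.array([2.0, 0.8])
sigma = 0.05
data = forward(true_theta, freqs) + sigma * rng.standard_normal(freqs.size)

def log_post(theta):
    if not (0.0 < theta[0] < 10.0 and 0.0 < theta[1] < 5.0):  # flat prior bounds
        return -np.inf
    resid = data - forward(theta, freqs)
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    proposal = theta + 0.02 * rng.standard_normal(2)    # random-walk step
    lp_new = log_post(proposal)
    if np.log(rng.random()) < lp_new - lp:               # Metropolis acceptance
        theta, lp = proposal, lp_new
    samples.append(theta.copy())

samples = np.array(samples[5000:])                        # drop burn-in
print("posterior means:", samples.mean(axis=0))
```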

  10. Recovering Interstellar Gas Properties with Hi Spectral Lines: A Comparison between Synthetic Spectra and 21-SPONGE

    Science.gov (United States)

    Murray, Claire E.; Stanimirović, Snežana; Kim, Chang-Goo; Ostriker, Eve C.; Lindner, Robert R.; Heiles, Carl; Dickey, John M.; Babler, Brian

    2017-03-01

    We analyze synthetic neutral hydrogen (H I) absorption and emission spectral lines from a high-resolution, three-dimensional hydrodynamical simulation to quantify how well observational methods recover the physical properties of interstellar gas. We present a new method for uniformly decomposing H I spectral lines and estimating the properties of associated gas using the Autonomous Gaussian Decomposition (AGD) algorithm. We find that H I spectral lines recover physical structures in the simulation with excellent completeness at high Galactic latitude, and this completeness declines with decreasing latitude due to strong velocity-blending of spectral lines. The temperature and column density inferred from our decomposition and radiative transfer method agree with the simulated values within a factor of < 2 for the majority of gas structures. We next compare synthetic spectra with observations from the 21-SPONGE survey at the Karl G. Jansky Very Large Array using AGD. We find more components per line of sight in 21-SPONGE than in synthetic spectra, which reflects insufficient simulated gas scale heights and the limitations of local box simulations. In addition, we find a significant population of low-optical depth, broad absorption components in the synthetic data which are not seen in 21-SPONGE. This population is not obvious in integrated or per-channel diagnostics, and reflects the benefit of studying velocity-resolved components. The discrepant components correspond to the highest spin temperatures (1000 K < T_s < 4000 K), which are not seen in 21-SPONGE despite sufficient observational sensitivity. We demonstrate that our analysis method is a powerful tool for diagnosing neutral interstellar medium conditions, and future work is needed to improve observational statistics and implementation of simulated physics.
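
    Setting aside AGD's automated initialization, the core Gaussian-decomposition step, fitting a sum of Gaussian components to a spectrum by least squares, can be sketched as follows on a synthetic spectrum with hand-supplied initial guesses; the component parameters are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a synthetic spectrum with two Gaussian components. AGD's contribution is
# choosing the number of components and their initial guesses automatically;
# here the guesses are supplied by hand.
def gaussians(v, *p):
    model = np.zeros_like(v)
    for amp, centre, width in zip(p[0::3], p[1::3], p[2::3]):
        model += amp * np.exp(-0.5 * ((v - centre) / width) ** 2)
    return model

v = np.linspace(-50, 50, 500)                      # velocity axis, km/s
rng = np.random.default_rng(5)
truth = [0.3, -8.0, 3.0, 0.1, 10.0, 12.0]          # amp, centre, width (x2)
spectrum = gaussians(v, *truth) + 0.01 * rng.standard_normal(v.size)

guess = [0.25, -5.0, 4.0, 0.08, 8.0, 10.0]
popt, pcov = curve_fit(gaussians, v, spectrum, p0=guess)
print(np.round(popt, 2))                           # recovered component parameters
```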

  11. Spectral clustering for TRUS images

    Directory of Open Access Journals (Sweden)

    Salama Magdy MA

    2007-03-01

    Full Text Available Background: Identifying the location and the volume of the prostate is important for ultrasound-guided prostate brachytherapy. Prostate volume is also important for prostate cancer diagnosis. Manual outlining of the prostate border is able to determine the prostate volume accurately, however, it is time consuming and tedious. Therefore, a number of investigations have been devoted to designing algorithms that are suitable for segmenting the prostate boundary in ultrasound images. The most popular method is the deformable model (snakes), a method that involves designing an energy function and then optimizing this function. The snakes algorithm usually requires either an initial contour or some points on the prostate boundary to be estimated close enough to the original boundary, which is considered a drawback to this powerful method. Methods: The proposed spectral clustering segmentation algorithm is built on a totally different foundation that doesn't involve any function design or optimization. It also doesn't need any contour or any points on the boundary to be estimated. The proposed algorithm depends mainly on graph theory techniques. Results: Spectral clustering is used in this paper for both prostate gland segmentation from the background and internal gland segmentation. The obtained segmented images were compared to the expert radiologist segmented images. The proposed algorithm obtained excellent gland segmentation results with 93% average overlap areas. It is also able to internally segment the gland where the segmentation showed consistency with the cancerous regions identified by the expert radiologist. Conclusion: The proposed spectral clustering segmentation algorithm obtained fast excellent estimates that can give rough prostate volume and location as well as internal gland segmentation without any user interaction.

  12. Spectral clustering for TRUS images.

    Science.gov (United States)

    Mohamed, Samar S; Salama, Magdy M A

    2007-03-15

    Identifying the location and the volume of the prostate is important for ultrasound-guided prostate brachytherapy. Prostate volume is also important for prostate cancer diagnosis. Manual outlining of the prostate border is able to determine the prostate volume accurately, however, it is time consuming and tedious. Therefore, a number of investigations have been devoted to designing algorithms that are suitable for segmenting the prostate boundary in ultrasound images. The most popular method is the deformable model (snakes), a method that involves designing an energy function and then optimizing this function. The snakes algorithm usually requires either an initial contour or some points on the prostate boundary to be estimated close enough to the original boundary which is considered a drawback to this powerful method. The proposed spectral clustering segmentation algorithm is built on a totally different foundation that doesn't involve any function design or optimization. It also doesn't need any contour or any points on the boundary to be estimated. The proposed algorithm depends mainly on graph theory techniques. Spectral clustering is used in this paper for both prostate gland segmentation from the background and internal gland segmentation. The obtained segmented images were compared to the expert radiologist segmented images. The proposed algorithm obtained excellent gland segmentation results with 93% average overlap areas. It is also able to internally segment the gland where the segmentation showed consistency with the cancerous regions identified by the expert radiologist. The proposed spectral clustering segmentation algorithm obtained fast excellent estimates that can give rough prostate volume and location as well as internal gland segmentation without any user interaction.
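
    A minimal sketch of the graph-based spectral clustering idea (Gaussian affinity, normalized Laplacian, eigenvector embedding, k-means) is shown below on synthetic feature vectors; an actual TRUS application would build the affinity matrix from pixel or patch similarities, which is not attempted here, and the parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal normalized spectral clustering sketch on a small set of synthetic
# feature vectors (two well-separated point clouds).
rng = np.random.default_rng(6)
pts = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])

# Gaussian affinity matrix and symmetrically normalized Laplacian
d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=2)
W = np.exp(-d2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
deg = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_sym = np.eye(len(pts)) - D_inv_sqrt @ W @ D_inv_sqrt

# Embed points with the eigenvectors of the smallest eigenvalues, then k-means
eigvals, eigvecs = np.linalg.eigh(L_sym)
k = 2
embedding = eigvecs[:, :k]
embedding /= np.linalg.norm(embedding, axis=1, keepdims=True)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(labels))
```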

  13. Semiclassical Theory of Spectral Rigidity

    Science.gov (United States)

    Berry, M. V.

    1985-08-01

    The spectral rigidity Δ(L) of a set of quantal energy levels is the mean square deviation of the spectral staircase from the straight line that best fits it over a range of L mean level spacings. In the semiclassical limit (ℏ → 0), formulae are obtained giving Δ(L) as a sum over classical periodic orbits. When L ≲ Lmax, where Lmax ~ ℏ^{-(N-1)} for a system of N freedoms, Δ(L) is shown to display the following universal behaviour as a result of properties of very long classical orbits: if the system is classically integrable (all periodic orbits filling tori), Δ(L) = L/15 (as in an uncorrelated (Poisson) eigenvalue sequence); if the system is classically chaotic (all periodic orbits isolated and unstable) and has no symmetry, Δ(L) = ln L/(2π²) + D if 1 ≲ L ≲ Lmax (as in the Gaussian unitary ensemble of random-matrix theory); if the system is chaotic and has time-reversal symmetry, Δ(L) = ln L/π² + E if 1 ≲ L ≲ Lmax (as in the Gaussian orthogonal ensemble). When L ≫ Lmax, Δ(L) saturates non-universally at a value, determined by short classical orbits, of order ℏ^{-(N-1)} for integrable systems and ln(ℏ^{-1}) for chaotic systems. These results are obtained by using the periodic-orbit expansion for the spectral density, together with classical sum rules for the intensities of long orbits and a semiclassical sum rule restricting the manner in which their contributions interfere. For two examples Δ(L) is studied in detail: the rectangular billiard (integrable), and the Riemann zeta function (assuming its zeros to be the eigenvalues of an unknown quantum system whose unknown classical limit is chaotic).

  14. Language identification using spectral and prosodic features

    CERN Document Server

    Rao, K Sreenivasa; Maity, Sudhamay

    2015-01-01

    This book discusses the impact of spectral features extracted from frame level, glottal closure regions, and pitch-synchronous analysis on the performance of language identification systems. In addition to spectral features, the authors explore prosodic features such as intonation, rhythm, and stress features for discriminating the languages. They present how the proposed spectral and prosodic features capture language-specific information from two complementary aspects, showing how the development of a language identification (LID) system using the combination of spectral and prosodic features will enhance the accuracy of identification as well as improve the robustness of the system. This book provides the methods to extract the spectral and prosodic features at various levels, and also suggests the appropriate models for developing robust LID systems according to specific spectral and prosodic features. Finally, the book discusses various combinations of spectral and prosodic features, and the desire...

  15. Planck 2013 results. IX. HFI spectral response

    CERN Document Server

    Ade, P A R; Armitage-Caplan, C; Arnaud, M; Ashdown, M; Atrio-Barandela, F; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bielewicz, P; Bobin, J; Bock, J J; Bond, J R; Borrill, J; Bouchet, F R; Boulanger, F; Bridges, M; Bucher, M; Burigana, C; Cardoso, J -F; Catalano, A; Challinor, A; Chamballu, A; Chary, R -R; Chen, X; Chiang, L -Y; Chiang, H C; Christensen, P R; Church, S; Clements, D L; Colombi, S; Colombo, L P L; Combet, C; Comis, B; Couchot, F; Coulais, A; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davies, R D; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Delouis, J -M; Désert, F -X; Dickinson, C; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dupac, X; Efstathiou, G; Enßlin, T A; Eriksen, H K; Falgarone, E; Finelli, F; Forni, O; Frailis, M; Franceschi, E; Galeotta, S; Ganga, K; Giard, M; Giraud-Héraud, Y; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Hansen, F K; Hanson, D; Harrison, D; Henrot-Versillé, S; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hivon, E; Hobson, M; Holmes, W A; Hornstrup, A; Hovest, W; Huffenberger, K M; Hurier, G; Jaffe, T R; Jaffe, A H; Jones, W C; Juvela, M; Keihänen, E; Keskitalo, R; Kisner, T S; Kneissl, R; Knoche, J; Knox, L; Kunz, M; Kurki-Suonio, H; Lagache, G; Lamarre, J -M; Lasenby, A; Laureijs, R J; Lawrence, C R; Leahy, J P; Leonardi, R; Leroy, C; Lesgourgues, J; Liguori, M; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; Maffei, B; Mandolesi, N; Maris, M; Marshall, D J; Martin, P G; Martínez-González, E; Masi, S; Matarrese, S; Matthai, F; Mazzotta, P; McGehee, P; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschênes, M -A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Netterfield, C B; Nørgaard-Nielsen, H U; North, C; Noviello, F; Novikov, D; Novikov, I; Osborne, S; Oxborrow, C A; Paci, F; Pagano, L; Pajot, F; Paoletti, D; Pasian, F; Patanchon, G; Perdereau, O; Perotto, L; Perrotta, F; Piacentini, F; Piat, M; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Poutanen, T; Pratt, G W; Prézeau, G; Prunet, S; Puget, J -L; Rachen, J P; Reinecke, M; Remazeilles, M; Renault, C; Ricciardi, S; Riller, T; Ristorcelli, I; Rocha, G; Rosset, C; Roudier, G; Rusholme, B; Santos, D; Savini, G; Shellard, E P S; Spencer, L D; Starck, J -L; Stolyarov, V; Stompor, R; Sudiwala, R; Sureau, F; Sutton, D; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Tavagnacco, D; Terenzi, L; Tomasi, M; Tristram, M; Tucci, M; Umana, G; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Vittorio, N; Wade, L A; Wandelt, B D; Yvon, D; Zacchei, A; Zonca, A

    2014-01-01

    The Planck High Frequency Instrument (HFI) spectral response was determined through a series of ground-based tests conducted with the HFI focal plane in a cryogenic environment prior to launch. The main goal of the spectral transmission tests was to measure the relative spectral response (including out-of-band signal rejection) of all HFI detectors. This was determined by measuring the output of a continuously scanned Fourier transform spectrometer coupled with all HFI detectors. As there is no on-board spectrometer within HFI, the ground-based spectral response experiments provide the definitive data set for the relative spectral calibration of the HFI. The spectral response of the HFI is used in Planck data analysis and component separation; this includes extraction of CO emission observed within Planck bands, dust emission, Sunyaev-Zeldovich sources, and intensity-to-polarization leakage. The HFI spectral response data have also been used to provide unit conversion and colour correction analysis tools. Ver...
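
    For readers unfamiliar with how a relative spectral response is turned into unit-conversion or colour-correction factors, the sketch below shows the generic idea: a band-averaged ratio of the response folded with the actual source spectrum versus a reference spectrum. This is a simplified illustration, not the HFI pipeline's exact conventions; the function name, grids, and toy band shape are assumptions.

    # A generic sketch of a band-averaged colour-correction factor built from a
    # relative spectral response tau(nu). Not the Planck HFI pipeline's exact
    # conventions or normalisations.
    import numpy as np

    def colour_correction(nu, tau, source_spectrum, reference_spectrum):
        """Ratio of band-averaged intensities for two assumed spectral shapes.

        nu                 : frequency grid [Hz]
        tau                : relative spectral response on that grid
        source_spectrum    : actual source spectrum evaluated on nu
        reference_spectrum : spectrum assumed by the calibration convention
        """
        num = np.trapz(tau * source_spectrum, nu)
        den = np.trapz(tau * reference_spectrum, nu)
        return num / den

    # Example with made-up numbers: a power-law source against a flat reference,
    # using a toy Gaussian band centred near 143 GHz.
    nu = np.linspace(100e9, 200e9, 500)
    tau = np.exp(-0.5 * ((nu - 143e9) / 20e9) ** 2)
    cc = colour_correction(nu, tau, (nu / 143e9) ** -1.0, np.ones_like(nu))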

  16. Spectral Methods in Spatial Statistics

    Directory of Open Access Journals (Sweden)

    Kun Chen

    2014-01-01

    Full Text Available When the spatial domain becomes extremely large, it is very difficult, if not impossible, to evaluate the covariance matrix determined by the set of location distances, even for a gridded stationary Gaussian process. To alleviate these numerical challenges, we construct a nonparametric estimator, a spatial version of the periodogram, to represent the sample properties in the frequency domain; the periodogram requires fewer computational operations because it can be evaluated with the fast Fourier transform algorithm. Under some regularity conditions on the process, we investigate the asymptotic unbiasedness of the periodogram as an estimator of the spectral density function and establish its convergence rate.
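
    The sketch below shows a minimal two-dimensional periodogram for data on a regular grid, computed with the FFT. The normalisation follows the common |FFT|^2 / n convention and is not necessarily the paper's exact estimator; the function name and the demeaning step are assumptions for the example.

    # A minimal sketch of a 2D periodogram for a gridded, zero-mean stationary field,
    # computed in O(n log n) with the FFT. Not the paper's exact estimator.
    import numpy as np

    def periodogram_2d(field):
        """Nonparametric spectral density estimate for a gridded stationary field."""
        field = np.asarray(field, dtype=float)
        field = field - field.mean()           # remove the sample mean
        n = field.size
        fhat = np.fft.fft2(field)              # fast Fourier transform of the grid
        return np.abs(fhat) ** 2 / n           # periodogram ordinates

    # Example: white noise on a 64 x 64 grid has a roughly flat periodogram.
    rng = np.random.default_rng(0)
    I = periodogram_2d(rng.standard_normal((64, 64)))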

  17. Spectral Synthesis with Empirical Priors

    Science.gov (United States)

    Sodre, L., Jr.

    2017-07-01

    We have been developing a Bayesian parameter estimator which is very competitive compared with other machine learning methods, as evidenced by several experiments performed by our group (e.g., on photometric redshifts and galaxy spectral synthesis). Our approach relies on a training set, i.e., an (empirical, theoretical, or mixed) data set with known parameters, and outputs the probability distribution function of a given parameter, as well as other statistical summaries of this distribution, for all galaxies in the survey. We propose to build a large training set using theoretical libraries and to use it to derive galaxy parameters from S-PLUS, J-PLUS, and J-PAS observations.
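
    As a schematic illustration only, the sketch below shows one generic way a training set with known parameters can yield a probability distribution for a parameter of an observed object: weight each training object by the likelihood of the observed photometry and smooth the weighted parameter values into a PDF. It is not the authors' actual estimator; the function name, the Gaussian likelihood, and the kernel width are assumptions.

    # A schematic, training-set-based parameter PDF (not the authors' estimator):
    # likelihood-weight the training objects, then kernel-smooth their known
    # parameter values onto a grid.
    import numpy as np

    def parameter_pdf(obs_fluxes, obs_errors, train_fluxes, train_params, grid):
        """Return an (unnormalised-inputs, normalised-output) PDF on `grid`.

        obs_fluxes, obs_errors : arrays of shape (n_bands,)
        train_fluxes           : array of shape (n_train, n_bands), same bands
        train_params           : array of shape (n_train,), known parameter values
        grid                   : parameter values at which to evaluate the PDF
        """
        grid = np.asarray(grid, dtype=float)
        train_params = np.asarray(train_params, dtype=float)
        chi2 = np.sum(((train_fluxes - obs_fluxes) / obs_errors) ** 2, axis=1)
        weights = np.exp(-0.5 * (chi2 - chi2.min()))     # Gaussian likelihood weights
        # Smooth the weighted parameter values onto the grid with a small kernel.
        sigma = 0.05 * (grid.max() - grid.min())
        kernel = np.exp(-0.5 * ((grid[:, None] - train_params[None, :]) / sigma) ** 2)
        pdf = kernel @ weights
        return pdf / np.trapz(pdf, grid)                 # normalise to unit area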

  18. Binary Population and Spectral Synthesis

    Science.gov (United States)

    Eldridge, J. J.; Stanway, E. R.; Xiao, L.; McClelland, L. A. S.; Bray, J. C.; Taylor, G.; Ng, M.

    2017-11-01

    We have recently released version 2.0 of the Binary Population and Spectral Synthesis (BPASS) population synthesis code. This is designed to construct the spectra and related properties of stellar populations built from ~200,000 detailed, individual stellar models of known age and metallicity. The output products enable a broad range of theoretical predictions for individual stars, binaries, resolved and unresolved stellar populations, supernovae and their progenitors, and compact remnant mergers. Here we summarise key applications that demonstrate that binary populations typically reproduce observations better than single star models.

  19. PASCAL - Planetary Atmospheres Spectral Catalog

    Science.gov (United States)

    Rothman, Laurence; Gordon, Iouli

    2010-05-01

    Spectroscopic observation of planetary atmospheres, stellar atmospheres, comets, and the interstellar medium is the most powerful tool for extracting detailed information concerning the properties of these objects. The HITRAN molecular spectroscopic database [1] has traditionally served researchers involved with terrestrial atmospheric problems, such as remote sensing of constituents in the atmosphere, pollution monitoring at the surface, identification of sources seen through the atmosphere, and numerous environmental issues. A new thrust of the HITRAN program is to extend this longstanding database to support studies of the above-mentioned planetary and astronomical systems. The new extension is called PASCAL (Planetary Atmospheres Spectral Catalog). The methodology and structure are basically identical to the construction of the HITRAN and HITEMP databases. We will acquire and assemble spectroscopic parameters for gases and spectral bands of molecules that are germane to the study of planetary atmospheres. These parameters include the types of data that have already been considered for transmission and radiance algorithms, such as line position, intensity, broadening coefficients, lower-state energies, and temperature-dependence values. Additional parameters beyond what is currently considered for the terrestrial atmosphere will be archived. Examples are collision-broadened half-widths due to various foreign partners, collision-induced absorption, and temperature-dependence factors. New molecules (and their isotopic variants), not currently included in the HITRAN database, will be incorporated. These include hydrocarbons found on Titan but not archived in HITRAN (such as C3H4, C4H2, C3H8). Other examples include sulfur-bearing molecules such as SO and CS. A further consideration will be spectral bands that offer opportunities to study extrasolar planets. The task involves acquiring the best high-resolution data, both experimental and theoretical
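
    To make the list of per-transition parameters concrete, the sketch below gathers them into a simple record type: line position, intensity, lower-state energy, broadening, temperature dependence, and a slot for the additional foreign-partner half-widths mentioned in the abstract. The field names and units are assumptions made for illustration; they are not the actual HITRAN or PASCAL record format.

    # An illustrative container for the per-transition parameters named in the
    # abstract. Field names and units are assumptions, not the real record format.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class SpectralLine:
        molecule: str                  # e.g. "SO" or "C3H8"
        isotopologue: int              # isotopic variant identifier
        position_cm1: float            # line position [cm^-1]
        intensity: float               # line intensity at the reference temperature
        lower_state_energy_cm1: float  # lower-state energy [cm^-1]
        self_broadening: float         # self-broadened half-width [cm^-1 atm^-1]
        temperature_exponent: float    # temperature-dependence factor of the half-width
        # Foreign-partner broadening beyond air, e.g. {"H2": ..., "He": ..., "CO2": ...}
        foreign_broadening: Dict[str, float] = field(default_factory=dict)

    # Example record with made-up numbers.
    line = SpectralLine("SO", 1, 1234.5678, 1.2e-21, 150.0, 0.08, 0.75,
                        foreign_broadening={"CO2": 0.11})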

  20. The effect of repeated physical disturbance on soft tissue decomposition--are taphonomic studies an accurate reflection of decomposition?

    Science.gov (United States)

    Adlam, Rachel E; Simmons, Tal

    2007-09-01

    Although the relationship between decomposition and postmortem interval has been well studied, almost no studies have examined the potential effects of the physical disturbance that occurs as a result of data collection procedures. This study compares physically disturbed rabbit carcasses with a series of undisturbed carcasses to assess the presence and magnitude of any effects resulting from repeated disturbance. Decomposition was scored by visual assessment of soft tissue changes, and numerical data such as weight loss and carcass temperature were recorded. The effects of disturbance over time on weight loss, carcass temperature, soil pH, and decomposition stage were studied. In addition, this study aimed to validate some of the anecdotal evidence regarding decomposition. Results indicate that disturbance has a significant inverse effect on both weight loss and carcass temperature. No differences were apparent between groups for soil pH change or overall decomposition stage. An insect-mediated mechanism for the disturbance effect is suggested, along with indications as to why this effect may be cancelled out when scoring overall decomposition.