Energy Technology Data Exchange (ETDEWEB)
Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)
2013-02-15
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, combined with a Runge–Kutta integration scheme for the time evolution. A domain decomposition method based on the method of characteristics is used to match the fluid–solid boundary conditions. This multi-domain approach allows for a significant reduction of the number of grid points in the azimuthal direction for the inner grid domain, and thus for a corresponding increase of the time step and enhanced computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with results obtained with a corresponding, previously published, and independently benchmarked solution in 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
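The two spatial ingredients named above can be sketched in a few lines. The following illustrative Python fragment (our own sketch, not the authors' code) builds the standard Chebyshev collocation differentiation matrix, as one would use in the radial direction, and an FFT-based derivative for the periodic azimuthal direction:

```python
import numpy as np

def cheb_diff_matrix(n):
    """Differentiation matrix on the Chebyshev-Gauss-Lobatto points
    x_j = cos(pi*j/n), via the standard collocation construction."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))   # "negative sum" trick for the diagonal
    return D, x

def fourier_deriv(f):
    """Spectral derivative of a 2*pi-periodic sample (azimuthal direction)."""
    m = len(f)
    k = 1j * np.fft.fftfreq(m, d=1.0 / m)   # integer wavenumbers
    return np.real(np.fft.ifft(k * np.fft.fft(f)))
```

Both operators are exact on the resolved basis: `D @ x**2` returns `2*x`, and `fourier_deriv` differentiates trigonometric polynomials to machine precision.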
DEFF Research Database (Denmark)
Tong, M.S.; Lu, Y.; Chen, Y.
2005-01-01
Wave propagation in a planar stratified dielectric slab medium, a problem of interest in optics and geophysics, is studied using a pseudo-spectral time-domain (PSTD) algorithm. Time-domain electric fields and frequency-domain propagation characteristics of both single and periodic dielectric slab...
Seismic waves modeling with the Fourier pseudo-spectral method on massively parallel machines.
Klin, Peter
2015-04-01
The Fourier pseudo-spectral method (FPSM) is an approach to the 3D numerical modeling of wave propagation which is based on the discretization of the spatial domain in a structured grid and relies on global spatial differential operators for the solution of the wave equation. This peculiarity is advantageous from the accuracy point of view but poses difficulties for an efficient implementation of the method on parallel computers with distributed memory architecture. The 1D spatial domain decomposition approach has so far been commonly adopted in parallel implementations of the FPSM, but it implies an intensive data exchange among all the processors involved in the computation, which can degrade the performance because of communication latencies. Moreover, the scalability of the 1D domain decomposition is limited, since the number of processors cannot exceed the number of grid points along the direction in which the domain is partitioned. This limitation inhibits an efficient exploitation of computational environments with a very large number of processors. In order to overcome the limitations of the 1D domain decomposition, we implemented a parallel version of the FPSM based on a 2D domain decomposition, which achieves a higher degree of parallelism and scalability on massively parallel machines with several thousand processing elements. The parallel programming is essentially achieved using the MPI protocol, but OpenMP parts are also included in order to exploit single-processor multi-threading capabilities, when available. The developed tool is aimed at the numerical simulation of seismic wave propagation and in particular is intended for earthquake ground motion research.
We show the scalability tests performed up to 16k processing elements on the IBM Blue Gene/Q computer at CINECA (Italy), as well as the application to the simulation of the earthquake ground motion in the alluvial plain of the Po river (Italy).
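The communication pattern behind such a decomposition can be illustrated without MPI by emulating the slab/transpose strategy with array splits; the global re-slab below plays the role of the MPI all-to-all step. This is a generic sketch under our own naming, not the tool described above:

```python
import numpy as np

def fft2_via_slabs(a, nproc=4):
    """2D FFT assembled from per-'process' 1D FFTs: transform along axis 0
    on column slabs, globally re-slab the data (the all-to-all step in a
    distributed-memory implementation), then transform along axis 1."""
    slabs = np.split(a, nproc, axis=1)
    stage1 = np.concatenate([np.fft.fft(s, axis=0) for s in slabs], axis=1)
    slabs = np.split(stage1, nproc, axis=0)   # re-slab: the "transpose"
    return np.concatenate([np.fft.fft(s, axis=1) for s in slabs], axis=0)
```

Because the 2D transform is separable, the slab-wise result coincides with `np.fft.fft2(a)` exactly; in a real 2D (pencil) decomposition each axis is further partitioned, but the transpose-then-transform pattern is the same.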
Zou, Peng
2017-05-10
Grid staggering is a very effective way to reduce Nyquist errors and to suppress the non-causal ringing artefacts in the pseudo-spectral solution of first-order elastic wave equations. However, the straightforward use of a staggered-grid pseudo-spectral method is problematic for simulating wave propagation when the anisotropy level is greater than orthorhombic or when the anisotropic symmetries are not aligned with the computational grids. Inspired by the idea of the rotated staggered-grid finite-difference method, we propose a modified pseudo-spectral method for wave propagation in arbitrary anisotropic media. Compared with an existing remedy of the staggered-grid pseudo-spectral method based on stiffness matrix decomposition and a possible alternative using the Lebedev grids, the rotated staggered-grid-based pseudo-spectral method strikes the best balance between the mitigation of artefacts and efficiency. A 2D example on a transversely isotropic model with a tilted symmetry axis verifies its effectiveness in suppressing the ringing artefacts. Two 3D examples of increasing anisotropy levels demonstrate that the rotated staggered-grid-based pseudo-spectral method can successfully simulate complex wavefields in such anisotropic formations.
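The core of any staggered pseudo-spectral operator is a spectral derivative combined with a half-cell phase shift in the wavenumber domain, so that the derivative is returned on a grid offset by half a spacing. A minimal 1D sketch (our own illustration, not the proposed rotated-staggered-grid method) is:

```python
import numpy as np

def staggered_deriv(f, L):
    """Pseudo-spectral first derivative of a periodic sample on [0, L),
    evaluated on the dual grid shifted forward by half a cell."""
    n = len(f)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    shift = np.exp(1j * k * (L / n) / 2.0)   # half-cell forward shift
    return np.real(np.fft.ifft(1j * k * shift * np.fft.fft(f)))
```

For a resolved harmonic the result matches the analytical derivative sampled at the shifted points to machine precision; the rotated variant applies the same idea along diagonal directions of the grid.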
Domain decomposition methods for hyperbolic problems
Indian Academy of Sciences (India)
... problems using domain decomposition, but this technique faces difficulties if the system becomes characteristic at the inter-element boundaries. By making the inter-element boundaries move faster than the fastest wave speed associated with the hyperbolic system, we are able to overcome this problem. Keywords. Domain ...
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in the individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
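The partition-of-unity idea mentioned above is simply a set of weight functions, one per subdomain, that sum to one everywhere and are supported on their own subdomain (plus the overlap). A minimal two-subdomain sketch, under our own naming and not the paper's construction, is:

```python
import numpy as np

def partition_of_unity(x, a, b, overlap):
    """Two piecewise-linear weights on [a, b] that sum to one everywhere
    and differ from 0/1 only in a strip of the given width around the
    midpoint (the overlap region of the two subdomains)."""
    mid = 0.5 * (a + b)
    w1 = np.clip((mid + overlap / 2 - x) / overlap, 0.0, 1.0)
    return w1, 1.0 - w1
```

A decomposition scheme then evolves weighted copies of the unknown, one per subdomain, and reassembles the global solution as the weighted sum.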
Domain decomposition methods for hyperbolic problems
Indian Academy of Sciences (India)
In this paper a method is developed for solving hyperbolic initial boundary value problems in one space dimension using domain decomposition, which can be extended to problems in several space dimensions. We minimize a functional which is the sum of squares of the L2 norms of the residuals and a term which is the ...
Multiple Shooting and Time Domain Decomposition Methods
Geiger, Michael; Körkel, Stefan; Rannacher, Rolf
2015-01-01
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
Domain decomposition methods for mortar finite elements
Energy Technology Data Exchange (ETDEWEB)
Widlund, O.
1996-12-31
In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas
2012-05-22
Computational problems of large-scale data have recently been gaining attention due to better hardware and, hence, the higher dimensionality of images and data sets acquired in applications. In the last couple of years, non-smooth minimization problems such as total variation minimization have become increasingly important for the solution of these tasks. While favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.
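To give a flavor of the Split Bregman building block used inside such schemes, the sketch below applies the standard Split Bregman iteration to 1D total variation denoising; the splitting variable d carries the gradient, and b is the Bregman variable. Parameters `lam` and `gam` are illustrative, and this is a generic textbook iteration, not the paper's BOS-SB implementation:

```python
import numpy as np

def tv_denoise_split_bregman(f, lam=10.0, gam=5.0, iters=100):
    """1D TV denoising, min_u lam/2*||u-f||^2 + ||Du||_1, via Split Bregman
    with the splitting d = Du and Bregman variable b."""
    n = len(f)
    D = np.eye(n, k=1) - np.eye(n)     # forward differences
    D[-1, :] = 0.0                      # no difference across the boundary
    A = lam * np.eye(n) + gam * (D.T @ D)
    u, d, b = f.copy(), np.zeros(n), np.zeros(n)
    shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        u = np.linalg.solve(A, lam * f + gam * (D.T @ (d - b)))  # quadratic step
        d = shrink(D @ u + b, 1.0 / gam)                          # L1 step
        b = b + D @ u - d                                         # Bregman update
    return u
```

On a noisy piecewise-constant signal the iteration removes most of the noise while keeping the jump, which is the behavior that makes TV attractive over smooth regularizers.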
Wang, Zhiheng
2015-01-01
A simple multidomain Chebyshev pseudo-spectral method is developed for two-dimensional fluid flow and heat transfer over square cylinders. The incompressible Navier-Stokes equations with primitive variables are discretized in several subdomains of the computational domain. The velocities and pressure are discretized with the same order of Chebyshev polynomials, i.e., the PN-PN method. A projection method is applied to couple the pressure with the velocity. The present method is first validated on benchmark problems of natural convection in a square cavity. The multidomain method is then applied to simulate fluid flow and heat transfer from square cylinders. The numerical results agree well with existing results. © Taylor & Francis Group, LLC.
Damping Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Ventura, C. E.; Andersen, P.
2001-01-01
In this paper it is explained how the damping can be estimated using the Frequency Domain Decomposition technique for output-only modal identification, i.e. in the case where the modal parameters are to be estimated without knowing the forces exciting the system. Also it is explained how the natural ... back to the time domain to identify damping and frequency. The technique is illustrated on a simple simulation case with 2 closely spaced modes. On this example it is illustrated how the identification is influenced by very close spacing, by non-orthogonal modes, and by correlated input. The technique ... is further illustrated on the output-only identification of the Great Belt Bridge. On this example it is shown how the damping is identified on a weakly excited mode and a closely spaced mode. ...
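The first step of Frequency Domain Decomposition is to estimate the output cross-spectral density matrix and take its singular value decomposition at each frequency line; peaks of the first singular value indicate modes. A compact sketch of that step (a Welch-style spectral estimate under our own naming; not the authors' implementation) is:

```python
import numpy as np

def fdd_first_singular_value(Y, fs, nperseg=256):
    """FDD sketch: Welch-averaged cross-spectral density matrix of the
    multi-channel output Y (channels x samples), and the first singular
    value at each frequency line as the modal indicator."""
    nch = Y.shape[0]
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    G = np.zeros((len(freqs), nch, nch), dtype=complex)
    win = np.hanning(nperseg)
    nseg = 0
    for start in range(0, Y.shape[1] - nperseg + 1, nperseg // 2):
        seg = np.fft.rfft(win * Y[:, start:start + nperseg], axis=1)
        G += np.einsum('if,jf->fij', seg, seg.conj())   # outer products per line
        nseg += 1
    G /= nseg
    s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
    return freqs, s1
```

For a two-channel record dominated by a 10 Hz component, the indicator peaks at the corresponding frequency line; the full technique then inspects the singular vectors and transforms the identified SDOF spectral bells back to the time domain for damping estimation.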
Development and Validation of A 3d Staggered Fourier Pseudo-spectral Method For Wave Modeling
Seriani, G.; Vuan, A.; Priolo, E.; Carcione, J.
We have developed and implemented an algorithm for the 3D forward modeling of seismic wavefields in complex geological structures. The algorithm is based on the solution of the elastic full wave equation in heterogeneous media using the Fourier pseudo-spectral method. Numerical accuracy and computational efficiency have been improved using a staggered scheme and parallel implementation, respectively. The parallel code can handle both SHMEM and MPI libraries, which allows the computations to be performed either on a single massively parallel computer or on distributed PC clusters. The wave equation is written in a displacement-velocity-stress formulation using the equation of conservation of momentum and the stress-strain relations for an isotropic elastic medium undergoing infinitesimal deformation. In the time domain, the system of partial differential equations is integrated through a leap-frog finite difference discrete scheme. Spatial derivatives are computed globally by using the FFT. Attenuation is accommodated at each time step by multiplying the stress and velocity fields by the attenuation factor. The presence of a free surface is obtained by FFT zero-padding. Wave reflections from the model edges are removed by applying absorbing strips. A generalized moment-tensor source formulation is used to represent source mechanisms. The 3D staggered Fourier pseudo-spectral method is validated against both analytical and numerical solutions. In particular, we have used as reference solutions the full wavefield in homogeneous and layered media computed by the Cagniard-de Hoop technique and the wavenumber integration method, respectively. The comparison showed that the time histories are in good agreement with the reference solutions. Moreover, because of the staggered approach, the three components of the wavefield are defined on dual meshes displaced by a half grid-step. Therefore the three-component records are recovered by a fast interpolation.
Traffic Simulations on Parallel Computers Using Domain Decomposition Techniques
1995-01-01
Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic...
Domain Decomposition: A Bridge between Nature and Parallel Computers
1992-09-01
NASA Contractor Report 189709, ICASE Report No. 92-44. Domain Decomposition: A Bridge between Nature and Parallel Computers. David E. Keyes. ... space decompositions, but of special form that can be motivated by, among other things, the memory hierarchies of distributed-memory parallel computers. Each
Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification
Winokur, J.
2015-12-19
We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a finer control of the resolution along two distinct subsets of model parameters. The control of the error along different subsets of parameters may be needed for instance in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid PSP is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of a PSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve similar projection error. In addition, the global approach is better suited for generalization to more than two subsets of directions.
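In its simplest one-dimensional form, a pseudo-spectral projection computes the coefficients of an orthogonal-polynomial expansion of the model output by quadrature. The sketch below (a generic illustration with Legendre polynomials, not the adaptive sparse-grid machinery of the paper) recovers the expansion of a polynomial model exactly:

```python
import numpy as np

def psp_coeffs(f, order):
    """Legendre coefficients of f on [-1, 1] by pseudo-spectral projection:
    Gauss-Legendre quadrature of f against each basis polynomial, divided
    by the basis norm 2/(2k+1)."""
    x, w = np.polynomial.legendre.leggauss(order + 1)
    fx = f(x)
    coeffs = []
    for k in range(order + 1):
        Pk = np.polynomial.legendre.Legendre.basis(k)(x)
        coeffs.append(np.sum(w * fx * Pk) / (2.0 / (2 * k + 1)))
    return np.array(coeffs)
```

For example, 3x^2 = P0 + 2*P2, and the projection returns exactly [1, 0, 2]; sparse-grid PSP applies the same projection direction by direction with adaptively chosen quadrature levels.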
A Load Balanced Domain Decomposition Method Using Wavelet Analysis
Energy Technology Data Exchange (ETDEWEB)
Jameson, L; Johnson, J; Hesthaven, J
2001-05-31
Wavelet analysis provides an orthogonal basis set which is localized in both the physical space and the Fourier transform space. We present here a domain decomposition method that uses wavelet analysis to maintain a roughly uniform error throughout the computational domain while keeping the computational work balanced in a parallel computing environment.
22nd International Conference on Domain Decomposition Methods
Gander, Martin; Halpern, Laurence; Krause, Rolf; Pavarino, Luca
2016-01-01
These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures, and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune
2007-01-01
When performing operational modal analysis the dynamic loading is unknown; however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in the frequency domain are estimated ... and the errors on the estimated loads are determined. ...
Overlapping domain decomposition methods for elliptic quasi ...
Indian Academy of Sciences (India)
In this paper we provide a maximum norm analysis of an overlapping Schwarz method on non-matching grids for quasi-variational inequalities related to an impulse control problem with mixed boundary conditions. We prove that the discretization on every sub-domain converges in the uniform norm. Furthermore, a result of ...
Decomposition and removability properties of John domains
Indian Academy of Sciences (India)
A quasidisk is the image of a disk or a half-plane under a quasiconformal mapping of the ... also a John domain, where P = {p1, p2, ..., pm} and pi ∈ D (i = 1, 2, ..., m). ... Proof. Let S = ∂B^n(w1, r1) ∩ ∂B^n(w2, r2). If S contains at most one point, then the proof is obvious. On the other hand, if S contains at least two points, ...
Automated Frequency Domain Decomposition for Operational Modal Analysis
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, Palle; Jacobsen, Niels-Jørgen
2007-01-01
The Frequency Domain Decomposition (FDD) technique is known as one of the most user friendly and powerful techniques for operational modal analysis of structures. However, the classical implementation of the technique requires some user interaction. The present paper describes an algorithm for au...
Coordination of distributed/parallel multiple-grid domain decomposition
C.T.H. Everaars (Kees); F. Arbab (Farhad)
1996-01-01
A workable approach for the solution of many (numerical and non-numerical) problems is domain decomposition. If a problem can be divided into a number of sub-problems that can be solved in a distributed/parallel fashion, the overall performance can significantly improve. In this paper,
Domain Decomposition Solvers for Frequency-Domain Finite Element Equations
Copeland, Dylan
2010-10-05
The paper is devoted to fast iterative solvers for frequency-domain finite element equations approximating linear and nonlinear parabolic initial boundary value problems with time-harmonic excitations. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple linear elliptic system for the amplitudes belonging to the sine- and to the cosine-excitation or a large nonlinear elliptic system for the Fourier coefficients in the linear and nonlinear case, respectively. The fast solution of the corresponding linear and nonlinear system of finite element equations is crucial for the competitiveness of this method. © 2011 Springer-Verlag Berlin Heidelberg.
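The switch from time stepping to a frequency-domain solve can be seen on a scalar model: for u' + k*u = c*cos(omega*t), seeking the time-harmonic solution u = A*cos + B*sin reduces the problem to a 2x2 linear system for the amplitudes, the scalar analogue of the elliptic amplitude system described above. A sketch under our own naming:

```python
import numpy as np

def harmonic_amplitudes(k, omega, c):
    """Frequency-domain solve for u' + k*u = c*cos(omega*t): substitute
    u = A*cos(omega*t) + B*sin(omega*t) and match cosine and sine terms,
    giving the 2x2 system [[k, omega], [-omega, k]] @ [A, B] = [c, 0]."""
    M = np.array([[k, omega], [-omega, k]])
    return np.linalg.solve(M, np.array([c, 0.0]))
```

The solve reproduces the closed-form amplitudes A = c*k/(k^2+omega^2) and B = c*omega/(k^2+omega^2) without ever integrating to the periodic steady state; for a PDE, k becomes an elliptic operator and the same structure yields the coupled sine/cosine amplitude system.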
Domain decomposition methods in FVM approach to gravity field modelling.
Macák, Marek
2017-04-01
The finite volume method (FVM) can be straightforwardly implemented as a numerical method for global or local gravity field modelling. This discretization method solves the geodetic boundary value problems in a space domain. In order to obtain precise numerical solutions, it usually requires a very refined discretization leading to large-scale parallel computations. To optimize such computations, we present a special class of numerical techniques that are based on a physical decomposition of the global solution domain. Domain decomposition (DD) methods like the Multiplicative Schwarz Method and the Additive Schwarz Method are very efficient methods for solving partial differential equations. We briefly present their mathematical formulations and test their efficiency. The presented numerical experiments deal with gravity field modelling. Since there is no need to solve special interface problems between neighbouring subdomains, in our applications we use the overlapping DD methods.
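The multiplicative (alternating) Schwarz iteration mentioned above can be demonstrated on the smallest meaningful example, a 1D Poisson problem split into overlapping blocks; each subdomain solve uses the current residual as data. This is a generic sketch, not the gravity-field code:

```python
import numpy as np

def schwarz_poisson(f, h, n_sub=4, overlap=4, sweeps=200):
    """Multiplicative overlapping Schwarz for -u'' = f on (0,1) with
    homogeneous Dirichlet data, discretized by the three-point stencil on
    n interior points; subdomains are overlapping index blocks."""
    n = len(f)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    size = n // n_sub
    blocks = [np.arange(max(0, i * size - overlap),
                        min(n, (i + 1) * size + overlap)) for i in range(n_sub)]
    u = np.zeros(n)
    for _ in range(sweeps):
        for idx in blocks:                      # sweep subdomains in turn
            r = f - A @ u                       # current global residual
            u[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return u
```

With generous overlap the iteration converges geometrically to the discrete solution; the additive variant solves all blocks on the same residual and needs damping, which is what makes it attractive in parallel.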
Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems
Directory of Open Access Journals (Sweden)
Pierre Jolivet
2014-01-01
Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation-based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
An algorithm for domain decomposition in finite element analysis
Al-Nasra, M.; Nguyen, D. T.
1991-01-01
A simple and efficient algorithm is described for automatic decomposition of an arbitrary finite element domain into a specified number of subdomains for finite element and substructuring analysis in a multiprocessor computer environment. The algorithm is designed to balance the work loads, to minimize the communication among processors and to minimize the bandwidths of the resulting system of equations. Small- to large-scale finite element models, which have two-node elements (truss, beam element), three-node elements (triangular element) and four-node elements (quadrilateral element), are solved on the Convex computer to illustrate the effectiveness of the proposed algorithm. A FORTRAN computer program is also included.
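The goals listed above, balanced loads and limited inter-processor coupling, can be illustrated by the simplest geometric partitioning heuristic, recursive coordinate bisection: repeatedly split the node set along its widest coordinate direction into equal halves. This is a generic sketch, not the algorithm of the paper:

```python
import numpy as np

def coordinate_bisection(points, n_parts):
    """Recursive coordinate bisection of a node set (n_parts a power of
    two): split each current part into equal halves along its widest
    coordinate direction, which balances sizes and keeps parts compact."""
    parts = [np.arange(len(points))]
    while len(parts) < n_parts:
        new = []
        for idx in parts:
            p = points[idx]
            axis = np.argmax(p.max(axis=0) - p.min(axis=0))  # widest extent
            order = idx[np.argsort(p[:, axis])]
            half = len(order) // 2
            new += [order[:half], order[half:]]
        parts = new
    return parts
```

Equal part sizes balance the work, while splitting along the widest direction tends to shorten the cut and hence the interface (communication) between subdomains; real mesh partitioners refine this with connectivity and bandwidth criteria.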
Rapid expansion and pseudo spectral implementation for reverse time migration in VTI media
Pestana, Reynam C
2012-04-24
In isotropic media, we use the scalar acoustic wave equation to perform reverse time migration (RTM) of the recorded pressure wavefield data. In anisotropic media, P- and SV-waves are coupled, and the elastic wave equation should be used for RTM. For computational efficiency, a pseudo-acoustic wave equation is often used. This may be solved using a coupled system of second-order partial differential equations. We solve these using a pseudo spectral method and the rapid expansion method (REM) for the explicit time marching. This method generates a degenerate SV-wave in addition to the P-wave arrivals of interest. To avoid this problem, the elastic wave equation for vertical transversely isotropic (VTI) media can be split into separate wave equations for P- and SV-waves. These separate wave equations are stable, and they can be effectively used to model and migrate seismic data in VTI media where |ε- δ| is small. The artifact for the SV-wave has also been removed. The independent pseudo-differential wave equations can be solved one for each mode using the pseudo spectral method for the spatial derivatives and the REM for the explicit time advance of the wavefield. We show numerically stable and high-resolution modeling and RTM results for the pure P-wave mode in VTI media. © 2012 Sinopec Geophysical Research Institute.
Simplified approaches to some nonoverlapping domain decomposition methods
Energy Technology Data Exchange (ETDEWEB)
Xu, Jinchao
1996-12-31
An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method, and other simple technical tools include "local-global" and "global-local" techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method"; the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
Energy Technology Data Exchange (ETDEWEB)
Feng, Xiaobing [Univ. of Tennessee, Knoxville, TN (United States)
1996-12-31
A non-overlapping domain decomposition iterative method is proposed and analyzed for mixed finite element methods for a sequence of noncoercive elliptic systems with radiation boundary conditions. These differential systems describe the motion of a nearly elastic solid in the frequency domain. The convergence of the iterative procedure is demonstrated and the rate of convergence is derived for the case when the domain is decomposed into subdomains in which each subdomain consists of an individual element associated with the mixed finite elements. The hybridization of mixed finite element methods plays an important role in the construction of the discrete procedure.
23rd International Conference on Domain Decomposition Methods in Science and Engineering
Cai, Xiao-Chuan; Keyes, David; Kim, Hyea; Klawonn, Axel; Park, Eun-Jae; Widlund, Olof
2017-01-01
This book is a collection of papers presented at the 23rd International Conference on Domain Decomposition Methods in Science and Engineering, held on Jeju Island, Korea on July 6-10, 2015. Domain decomposition methods solve boundary value problems by splitting them into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. Domain decomposition methods have considerable potential for a parallelization of the finite element methods, and serve a basis for distributed, parallel computations.
Deriving a new domain decomposition method for the Stokes equations using the Smith factorization
Dolean, Victorita; Nataf, Frédéric; Rapin, Gerd
2009-01-01
In this paper the Smith factorization is used systematically to derive a new domain decomposition method for the Stokes problem. In two dimensions the key idea is the transformation of the Stokes problem into a scalar bi-harmonic problem. We show how a proposed domain decomposition method for the bi-harmonic problem leads to a domain decomposition method for the Stokes equations which inherits the convergence behavior of the scalar problem. Thus, it is sufficient to s...
A domain decomposition preconditioner of Neumann-Neumann type for the Stokes equations
Dolean, Victorita; Nataf, Frédéric; Rapin, Gerd
2009-01-01
In this paper we recall a new domain decomposition method for the Stokes problem obtained via the Smith factorization. From the theoretical point of view, this domain decomposition method is optimal in the sense that it converges in two iterations for a decomposition into two equal domains. Previous results illustrated the fast convergence of the proposed algorithm in some cases. Our algorithm has shown more robust behavior than Neumann-Neumann or FETI type methods for particular decomposi...
Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion
Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.
2018-02-01
We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker–Planck collisions with a Maxwell–Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.
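A pseudo-spectral collocation scheme on a non-standard node family ultimately reduces to a differentiation matrix on those nodes. The generic construction below differentiates the Lagrange interpolant via a Vandermonde system; the test uses Chebyshev nodes since computing actual Maxwell-polynomial zeros is beyond a few lines, but the same matrix construction applies to any node set (this is our own sketch, not the authors' scheme):

```python
import numpy as np

def collocation_diff_matrix(x):
    """Differentiation matrix on arbitrary collocation nodes x, obtained by
    differentiating the degree-(n-1) interpolant: with V[i,j] = x_i**j,
    D = V' @ inv(V). Vandermonde-based, so suitable only for modest n."""
    n = len(x)
    V = np.vander(x, increasing=True)
    dV = np.zeros_like(V)
    dV[:, 1:] = V[:, :-1] * np.arange(1, n)   # d/dx of the monomial basis
    return dV @ np.linalg.inv(V)
```

Applied to samples of a polynomial of degree below n, the matrix returns the exact derivative at the nodes; the stability issues discussed in the abstract concern the spectra of such discrete operators, not their construction.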
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
Traffic simulations on parallel computers using domain decomposition techniques
Energy Technology Data Exchange (ETDEWEB)
Hanebutte, U.R.; Tentner, A.M.
1995-12-31
Large scale simulations of Intelligent Transportation Systems (ITS) can only be achieved by using the computing resources offered by parallel computing architectures. Domain decomposition techniques are proposed which allow the performance of traffic simulations with the standard simulation package TRAF-NETSIM on a 128-node IBM SPx parallel supercomputer as well as on a cluster of SUN workstations. Whilst this particular parallel implementation is based on NETSIM, a microscopic traffic simulation model, the presented strategy is applicable to a broad class of traffic simulations. An outer iteration loop must be introduced in order to converge to a global solution. A performance study that utilizes a scalable test network that consists of square grids is presented, which addresses the performance penalty introduced by the additional iteration loop.
Analysis of generalized Schwarz alternating procedure for domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)
1996-12-31
The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the overlap size between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM. (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
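The classical alternating procedure discussed above can be illustrated on the 1D Poisson equation. The following is a minimal, hypothetical sketch (not from the paper): two overlapping subdomains alternately exchange Dirichlet data at the artificial boundaries, whereas the generalized method would exchange Robin data ∂u/∂n + Λu instead.

```python
import numpy as np

def solve_dirichlet(f, a, b, ua, ub, n):
    """Solve -u'' = f on [a, b] with u(a)=ua, u(b)=ub by 2nd-order finite differences."""
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = f(x[1:-1]).copy()
    rhs[0] += ua / h**2          # boundary values enter the right-hand side
    rhs[-1] += ub / h**2
    u = np.empty(n + 2)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution of -u'' = f is sin(pi*x)
g1 = g2 = 0.0                                # initial guesses at the artificial boundaries
for _ in range(20):                          # classical alternating Schwarz sweeps
    x1, u1 = solve_dirichlet(f, 0.0, 0.6, 0.0, g1, 59)   # subdomain [0, 0.6]
    g2 = np.interp(0.4, x1, u1)              # pass Dirichlet trace to the other subdomain
    x2, u2 = solve_dirichlet(f, 0.4, 1.0, g2, 0.0, 59)   # subdomain [0.4, 1]
    g1 = np.interp(0.6, x2, u2)
print(abs(g1 - np.sin(np.pi * 0.6)))         # interface value approaches the exact trace
```

Shrinking the overlap (here 0.2) visibly slows the iteration, which is precisely the sensitivity the generalized Robin-type transmission conditions aim to remove.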
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Energy Technology Data Exchange (ETDEWEB)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-04-01
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
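The Lloyd iteration at the core of the CVT partitioning described above is easy to sketch. The fragment below is an illustration, not the authors' code (the Voronoi Particle dynamics and load-balancing steps are omitted): the domain is approximated by a dense point sample, and nearest-generator assignment alternates with centroid updates, monotonically decreasing the CVT energy.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((20000, 2))        # dense point sample standing in for mesh elements
sites = rng.random((8, 2))          # one Voronoi generator per target subdomain

def cvt_energy(sites, pts):
    """Mean squared distance of each sample point to its nearest generator."""
    d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean()

e0 = cvt_energy(sites, pts)
for _ in range(30):                 # Lloyd iteration: assign, then move to centroids
    d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(axis=-1)
    owner = d2.argmin(axis=1)       # nearest-generator assignment = Voronoi cell
    for k in range(len(sites)):
        cell = pts[owner == k]
        if len(cell):
            sites[k] = cell.mean(axis=0)
e1 = cvt_energy(sites, pts)
print(f"CVT energy: {e0:.4f} -> {e1:.4f}")   # monotonically non-increasing
```

The resulting cells are compact and convex, which is what keeps the inter-subdomain communication surface small in the parallel setting the abstract describes.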
Parallel Finite Element Domain Decomposition for Structural/Acoustic Analysis
Nguyen, Duc T.; Tungkahotara, Siroj; Watson, Willie R.; Rajan, Subramaniam D.
2005-01-01
A domain decomposition (DD) formulation for solving sparse linear systems of equations resulting from finite element analysis is presented. The formulation incorporates mixed direct and iterative equation solving strategies and other novel algorithmic ideas that are optimized to take advantage of sparsity and exploit modern computer architecture, such as memory and parallel computing. The most time consuming part of the formulation is identified and the critical roles of direct sparse and iterative solvers within the framework of the formulation are discussed. Experiments on several computer platforms using several complex test matrices are conducted using software based on the formulation. Small-scale structural examples are used to validate the steps in the formulation and large-scale (1,000,000+ unknowns) duct acoustic examples are used to evaluate the performance on ORIGIN 2000 processors and a cluster of 6 PCs (running under the Windows environment). Statistics show that the formulation is efficient in both sequential and parallel computing environments and that the formulation is significantly faster and consumes less memory than that based on one of the best available commercial parallel sparse solvers.
Directory of Open Access Journals (Sweden)
P. Di Maida
2016-01-01
Full Text Available A pseudo-spectral approximation is presented to solve the problem of pull-in instability in a cantilever micro-switch. As is well known, pull-in instability arises when the acting force reaches a critical threshold beyond which equilibrium is no longer possible. In particular, the Coulomb electrostatic force is considered, although the method can be easily generalized to account for fringe as well as Casimir effects. A numerical comparison is presented between a pseudo-spectral and a Finite Element (FE) approximation of the problem, both methods employing the same number of degrees of freedom. It is shown that the pseudo-spectral method appears more effective in accurately approximating the behavior of the cantilever near its tip. This fact is crucial to capturing the threshold voltage on the verge of pull-in. Conversely, the FE approximation presents rapid successions of attracting/repulsing regions along the cantilever, which are not restricted to the near pull-in regime.
Output-Only Modal Analysis by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, Lingmi; Andersen, Palle
2000-01-01
approach where the modal parameters are estimated by simple peak picking. However, by introducing a decomposition of the spectral density function matrix, the response spectra can be separated into a set of single degree of freedom systems, each corresponding to an individual mode. By using...
Output-only Modal Analysis by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
approach where the modal parameters are estimated by simple peak picking. However, by introducing a decomposition of the spectral density function matrix, the response spectra can be separated into a set of single degree of freedom systems, each corresponding to an individual mode. By using...
Domain decomposition techniques for boundary elements application to fluid flow
Brebbia, C A; Skerget, L
2007-01-01
The sub-domain techniques in the BEM are nowadays finding their place in the toolbox of numerical modellers, especially when dealing with complex 3D problems. We see their main application in conjunction with the classical BEM approach, which is based on a single domain: part of the domain is solved using the classical single-domain BEM, and part is solved using a domain approach, the BEM sub-domain technique. This has usually been done in the past by coupling the BEM with the FEM; however, it is much more efficient to use a combination of the BEM and a BEM sub-domain technique. The advantage arises from the simplicity of coupling the single-domain and multi-domain solutions, and from the fact that only one formulation needs to be developed, rather than two separate formulations based on different techniques. There are still possibilities for improving the BEM sub-domain techniques. However, considering the increased interest and research in this approach we believe that BEM sub-do...
Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors
Energy Technology Data Exchange (ETDEWEB)
Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep
2014-12-01
The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
A domain decomposition method for time fractional reaction-diffusion equation.
Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie
2014-01-01
The computational complexity of one-dimensional time fractional reaction-diffusion equation is O(N²M) compared with O(NM) for classical integer reaction-diffusion equation. Parallel computing is used to overcome this challenge. Domain decomposition method (DDM) embodies large potential for parallelization of the numerical solution for fractional equations and serves as a basis for distributed, parallel computations. A domain decomposition algorithm for time fractional reaction-diffusion equation with implicit finite difference method is proposed. The domain decomposition algorithm keeps the same parallelism but needs much fewer iterations, compared with Jacobi iteration in each time step. Numerical experiments are used to verify the efficiency of the obtained algorithm.
Fast structural design and analysis via hybrid domain decomposition on massively parallel processors
Farhat, Charbel
1993-01-01
A hybrid domain decomposition framework for static, transient and eigen finite element analyses of structural mechanics problems is presented. Its basic ingredients include physical substructuring and /or automatic mesh partitioning, mapping algorithms, 'gluing' approximations for fast design modifications and evaluations, and fast direct and preconditioned iterative solvers for local and interface subproblems. The overall methodology is illustrated with the structural design of a solar viewing payload that is scheduled to fly in March 1993. This payload has been entirely designed and validated by a group of undergraduate students at the University of Colorado using the proposed hybrid domain decomposition approach on a massively parallel processor. Performance results are reported on the CRAY Y-MP/8 and the iPSC-860/64 Touchstone systems, which represent both extreme parallel architectures. The hybrid domain decomposition methodology is shown to outperform leading solution algorithms and to exhibit an excellent parallel scalability.
Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors
Heybrock, Simon; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep
2014-01-01
The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
Hershkovitz, Yaron; Anker, Yaakov; Ben-Dor, Eyal; Schwartz, Guy; Gasith, Avital
2010-05-01
In-stream vegetation is a key ecosystem component in many fluvial ecosystems, having cascading effects on stream conditions and biotic structure. Traditionally, ground-level surveys (e.g. grid and transect analyses) are commonly used for estimating cover of aquatic macrophytes. Nonetheless, this methodological approach is highly time consuming and usually yields information which is practically limited to habitat and sub-reach scales. In contrast, remote-sensing techniques (e.g. satellite imagery and airborne photography), enable collection of large datasets over section, stream and basin scales, in relatively short time and reasonable cost. However, the commonly used spatial high resolution (1m) is often inadequate for examining aquatic vegetation on habitat or sub-reach scales. We examined the utility of a pseudo-spectral methodology, using RGB digital photography for estimating the cover of in-stream vegetation in a small Mediterranean-climate stream. We compared this methodology with that obtained by traditional ground-level grid methodology and with an airborne hyper-spectral remote sensing survey (AISA-ES). The study was conducted along a 2 km section of an intermittent stream (Taninim stream, Israel). When studied, the stream was dominated by patches of watercress (Nasturtium officinale) and mats of filamentous algae (Cladophora glomerata). The extent of vegetation cover at the habitat and section scales (10^{0} and 10^{4} m, respectively) were estimated by the pseudo-spectral methodology, using an airborne Roli camera with a Phase-One P45 (39 MP) CCD image acquisition unit. The swaths were taken in elevation of about 460 m having a spatial resolution of about 4 cm (NADIR). For measuring vegetation cover at the section scale (10^{4} m) we also used a 'push-broom' AISA-ES hyper-spectral swath having a sensor configuration of 182 bands (350-2500 nm) at elevation of ca. 1,200 m (i.e. spatial resolution of ca. 1 m). Simultaneously, with every swath we used an Analytical
Output-only Modal Analysis by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification of output-only systems, i.e. for the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical...
Modal Identification from Ambient Responses using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, ie. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical...
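The frequency domain decomposition idea behind the two records above can be sketched in a few lines: estimate the output cross-spectral density matrix and take its singular value decomposition frequency by frequency; peaks of the first singular value locate the modes, and the corresponding singular vectors approximate the mode shapes. The fragment below is a hypothetical illustration on synthetic two-channel data (the 20 Hz and 45 Hz modes and their shapes are invented for the example, and no harmonic-elimination step is included).

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 512.0, 2**15
dt = 1.0 / fs
# Two-DOF synthetic system: assumed modes near 20 Hz and 45 Hz with assumed shapes.
modes = [(2 * np.pi * 20, 0.02, np.array([1.0, 0.8])),
         (2 * np.pi * 45, 0.02, np.array([0.7, -1.0]))]
y = np.zeros((2, n))
for wn, zeta, phi in modes:
    q, v = np.zeros(n), 0.0
    for k in range(1, n):            # modal coordinate driven by white noise
        a = rng.standard_normal() - 2 * zeta * wn * v - wn**2 * q[k - 1]
        v += a * dt
        q[k] = q[k - 1] + v * dt     # semi-implicit Euler, stable for wn*dt < 2
    y += np.outer(phi, q)

# Output cross-spectral density matrix G(f) via Welch-style segment averaging.
seg, hop = 2048, 1024
win = np.hanning(seg)
freqs = np.fft.rfftfreq(seg, dt)
G = np.zeros((len(freqs), 2, 2), dtype=complex)
for s in range(0, n - seg + 1, hop):
    Y = np.fft.rfft(win * y[:, s:s + seg], axis=1)
    G += np.einsum('if,jf->fij', Y, Y.conj())

# FDD step: singular values of G(f) at each frequency line.
s1 = np.linalg.svd(G, compute_uv=False)[:, 0]
band = (freqs > 10) & (freqs < 30)
f_peak = freqs[band][np.argmax(s1[band])]
print(f"first identified mode near {f_peak:.2f} Hz")
```

The SVD separates the response spectra into single-degree-of-freedom contributions, which is exactly the decomposition of the spectral density function matrix the abstracts refer to.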
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Energy Technology Data Exchange (ETDEWEB)
Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^{6} particles on 65,536 MPI tasks.
Domain decomposition solvers for nonlinear multiharmonic finite element equations
Copeland, D. M.
2010-01-01
In many practical applications, for instance, in computational electromagnetics, the excitation is time-harmonic. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear problems. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series and approximating the Fourier coefficients by finite elements, we arrive at a large-scale coupled nonlinear system for determining the finite element approximation to the Fourier coefficients. The construction of fast solvers for such systems is very crucial for the efficiency of this multiharmonic approach. In this paper we look at nonlinear, time-harmonic potential problems as simple model problems. We construct and analyze almost optimal solvers for the Jacobi systems arising from the Newton linearization of the large-scale coupled nonlinear system that one has to solve instead of performing the expensive time-integration procedure. © 2010 de Gruyter.
Energy Technology Data Exchange (ETDEWEB)
Gaiffe, St.
2000-03-23
In this thesis, we are interested in the modeling of fluid flow through porous media with 2-D and 3-D unstructured meshes, and in the use of domain decomposition methods. The behavior of flow through porous media is strongly influenced by heterogeneities: either large-scale lithological discontinuities or quite localized phenomena such as fluid flow in the neighbourhood of wells. In these two typical cases, an accurate consideration of the singularities requires the use of adapted meshes. After having shown the limits of classic meshes we present the future prospects offered by hybrid and flexible meshes. Next, we consider the generalization possibilities of the numerical schemes traditionally used in reservoir simulation and we identify two available approaches: mixed finite elements and U-finite volumes. The investigated phenomena being also characterized by different time-scales, special treatments in terms of time discretization on various parts of the domain are required. We think that the combination of domain decomposition methods with operator splitting techniques may provide a promising approach to obtain high flexibility for local time-step management. Consequently, we develop a new numerical scheme for linear parabolic equations which allows for greater flexibility in the management of local space and time steps. To conclude, a priori estimates and error estimates on the two variables of interest, namely the pressure and the velocity, are proposed. (author)
Energy Technology Data Exchange (ETDEWEB)
Flauraud, E.
2004-05-01
In this thesis, we are interested in using domain decomposition methods for solving fluid flows in faulted porous media. This study comes within the framework of sedimentary basin modeling, the aim of which is to predict the presence of possible oil fields in the subsoil. A sedimentary basin is regarded as a heterogeneous porous medium in which fluid flows (water, oil, gas) occur. It is often subdivided into several blocks separated by faults. These faults create discontinuities that have a tremendous effect on the fluid flow in the basin. In this work, we present two approaches to model faults from the mathematical point of view. The first approach consists in considering faults as sub-domains, in the same way as blocks but with their own geological properties. However, because of the very small width of the faults in comparison with the size of the basin, the second and new approach consists in considering faults no longer as sub-domains, but as interfaces between the blocks. A mathematical study of the two models is carried out in order to investigate the existence and the uniqueness of solutions. Then, we are interested in using domain decomposition methods for solving the previous models. The main part of this study is devoted to the design of Robin interface conditions and to the formulation of the interface problem. The Schwarz algorithm can be seen as a Jacobi method for solving the interface problem. In order to speed up the convergence, this problem can be solved by a Krylov type algorithm (BICGSTAB). We discretize the equations with a finite volume scheme, and perform extensive numerical tests to compare the different methods. (author)
Cafiero, M.; Lloberas-Valls, O.; Cante, J.; Oliver, J.
2016-04-01
A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
Sapphire decomposition and inversion domains in N-polar aluminum nitride
Energy Technology Data Exchange (ETDEWEB)
Hussey, Lindsay, E-mail: lkhussey@ncsu.edu; White, Ryan M.; Kirste, Ronny; Bryan, Isaac; Guo, Wei; Osterman, Katherine; Haidet, Brian; Bryan, Zachary; Bobea, Milena; Collazo, Ramón; Sitar, Zlatko [Department of Materials Science and Engineering, North Carolina State University, Raleigh, North Carolina 27695-7919 (United States); Mita, Seiji [HexaTech, Inc., 991 Aviation Pkwy, Suite 800, Morrisville, North Carolina 27560 (United States)
2014-01-20
Transmission electron microscopy (TEM) techniques and potassium hydroxide (KOH) etching confirmed that inversion domains in the N-polar AlN grown on c-plane sapphire were due to the decomposition of sapphire in the presence of hydrogen. The inversion domains were found to correspond to voids at the AlN and sapphire interface, and transmission electron microscopy results showed a V-shaped, columnar inversion domain with staggered domain boundary sidewalls. Voids were also observed in the simultaneously grown Al-polar AlN; however, no inversion domains were present. The polarity of AlN grown above the decomposed regions of the sapphire substrate was confirmed to be Al-polar by KOH etching and TEM.
Dijkstra, D.
2002-01-01
In the conventional pseudo-spectral collocation method to solve an ordinary first order differential equation, the derivative is obtained from Lagrange interpolation and has degree of precision N for a grid of (N+1) points. In the present, novel method Hermite interpolation is used as point of
Moussawi, Ali
2015-02-24
Summary: The post-treatment of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
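The FFT-based fast Poisson solver mentioned in the record above rests on the fact that the Laplacian is diagonal in Fourier space. The following is a minimal periodic 1D sketch of that principle (an illustration only, not the multitasked 2D domain decomposition solver of the paper).

```python
import numpy as np

# Solve u'' = f with periodic boundary conditions on [0, 2*pi) by
# diagonalizing the Laplacian in Fourier space: -k^2 * u_hat = f_hat.
n = 256
x = 2 * np.pi * np.arange(n) / n
f = -9.0 * np.sin(3 * x)            # manufactured rhs; exact solution is u = sin(3x)

f_hat = np.fft.fft(f)
k = np.fft.fftfreq(n, d=1.0 / n)    # integer wavenumbers
k2 = k**2
k2[0] = 1.0                         # dummy value; the k = 0 mode is fixed below
u_hat = -f_hat / k2
u_hat[0] = 0.0                      # pin the mean (k = 0 mode) to zero
u = np.fft.ifft(u_hat).real

err = np.max(np.abs(u - np.sin(3 * x)))
print(f"max error: {err:.2e}")
```

Because the solve is a pointwise division per wavenumber, the whole solver reduces to two FFTs and a scaling, which is what makes it attractive for multitasking across processors.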
Directory of Open Access Journals (Sweden)
Dolean Victorita
2014-07-01
Full Text Available Multiphase, compositional porous media flow models lead to the solution of highly heterogeneous systems of Partial Differential Equations (PDE). We focus on overlapping Schwarz type methods on parallel computers and on multiscale methods. We present a coarse space [Nataf F., Xiang H., Dolean V., Spillane N. (2011) SIAM J. Sci. Comput. 33(4), 1623-1642] that is robust even when there are such heterogeneities. The two-level domain decomposition approach is compared to multiscale methods.
Crack identification for transient heat operator by using domain decomposition method
Directory of Open Access Journals (Sweden)
Anis Bel Hadj Hassin
2017-08-01
Full Text Available This work deals with crack identification from over-determined boundary data. The physical phenomenon considered corresponds to the transient heat equation. We give a theoretical result of identifiability for the inverse problem under consideration. Then, we consider a recovery process based on coupling a domain decomposition method with the minimization of an energy-type error functional. The efficiency of the proposed approach is illustrated by several numerical results.
A new discretization for the polarizable continuum model within the domain decomposition paradigm.
Stamm, Benjamin; Cancès, Eric; Lipparini, Filippo; Maday, Yvon
2016-02-07
We present a new algorithm to solve the polarizable continuum model equation in a framework compatible with the strategy previously developed by us for the conductor-like screening model based on Schwarz's domain decomposition method (ddCOSMO). The new discretization is systematically improvable and is fully consistent with ddCOSMO so that it reproduces ddCOSMO results for large dielectric constants.
A new discretization for the polarizable continuum model within the domain decomposition paradigm
Stamm, Benjamin; Cancès, Eric; Lipparini, Filippo; Maday, Yvon
2016-02-01
We present a new algorithm to solve the polarizable continuum model equation in a framework compatible with the strategy previously developed by us for the conductor-like screening model based on Schwarz's domain decomposition method (ddCOSMO). The new discretization is systematically improvable and is fully consistent with ddCOSMO so that it reproduces ddCOSMO results for large dielectric constants.
Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition
Directory of Open Access Journals (Sweden)
Cécile Germain‐Renaud
1999-01-01
Full Text Available Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data-parallelism is possible, but requires most of the tools from the Java palette: Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2006-01-01
The presence of harmonic components in the measured responses is unavoidable in many applications of Operational Modal Analysis. This is especially true when measuring on mechanical structures containing rotating or reciprocating parts. This paper describes a new method, based on the popular Enhanced Frequency Domain Decomposition technique, for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces.
Energy Technology Data Exchange (ETDEWEB)
Griebel, M. [Technische Universitaet Muenchen (Germany)
1994-12-31
In recent years, it has turned out that many modern iterative algorithms (multigrid schemes, multilevel preconditioners, domain decomposition methods, etc.) for solving problems resulting from the discretization of PDEs can be interpreted as additive (Jacobi-like) or multiplicative (Gauss-Seidel-like) subspace correction methods. The key to their analysis is the study of certain metric properties of the underlying splitting of the discretization space V into a sum of subspaces V_j, j = 1, ..., J, resp. of the variational problem on V into auxiliary problems on these subspaces. Here, the author proposes a modified approach to the abstract convergence theory of these additive and multiplicative Schwarz iterative methods that makes the relation to traditional iteration methods more explicit. To this end he introduces the enlarged Hilbert space V = V_0 x ... x V_J, which is nothing else but the usual Cartesian product of the Hilbert spaces V_j, and uses it in the discretization process. This results in an enlarged, semidefinite linear system to be solved instead of the usual definite system. Then, modern multilevel methods as well as domain decomposition methods simplify to just traditional (block-) iteration methods. Now, the convergence analysis can be carried out directly for these traditional iterations on the enlarged system, making convergence proofs of multilevel and domain decomposition methods clearer or, at least, more classical. The terms that enter the convergence proofs are exactly those of the classical iterative methods; it remains to estimate them properly. The convergence proof itself follows basically line by line the old proofs of the respective traditional iterative methods. Additionally, new multilevel/domain decomposition methods are constructed straightforwardly by applying other well-known traditional iterative methods to the enlarged system.
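The view of additive Schwarz as a Jacobi-like block iteration can be illustrated on a toy problem. The following minimal sketch runs a damped additive Schwarz iteration for a 1D Poisson discretization with two overlapping subdomains; the grid size, overlap, and damping factor are illustrative choices, not taken from the paper:

```python
import numpy as np

# 1D Poisson model problem: -u'' = f on (0,1), u(0) = u(1) = 0,
# discretized with n interior points.
n = 31
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)
u_ref = np.linalg.solve(A, f)

# Two overlapping index sets: the subspaces V_1, V_2 of the splitting.
dom1 = np.arange(0, 20)
dom2 = np.arange(12, n)

# Damped additive Schwarz = a Jacobi-like block iteration: all local
# corrections are computed from the same residual and then summed.
u = np.zeros(n)
for _ in range(100):
    r = f - A @ u
    du = np.zeros(n)
    for dom in (dom1, dom2):
        du[dom] += np.linalg.solve(A[np.ix_(dom, dom)], r[dom])
    u += 0.5 * du  # damping keeps the additive variant convergent

err = np.linalg.norm(u - u_ref) / np.linalg.norm(u_ref)
```

Each sweep solves both local problems from the same residual and sums the corrections, which is exactly a damped block-Jacobi iteration on the subspace splitting; solving the subdomains in sequence from updated residuals would give the multiplicative (Gauss-Seidel-like) variant instead.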
Energy Technology Data Exchange (ETDEWEB)
Jemcov, A.; Matovic, M.D. [Queen's Univ., Kingston, Ontario (Canada)
1996-12-31
This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second order self-adjoint elliptic operator (Poisson equation), with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is to order the matrix of the discrete differential operator by subdomain node groups and then block-eliminate the interior nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem on the subdomain interface.
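The block-elimination construction of the Schur complement described in this abstract can be sketched in a few lines. The example below uses a small chain graph with one interface node (the sizes and the zeroth-order regularization are illustrative choices, not from the paper); the wavelet transform, thresholding, and rescaling steps are not shown:

```python
import numpy as np

# Small model: a 1D chain split into two subdomains that share one
# interface node; ordering is [subdomain-1 nodes, subdomain-2 nodes,
# interface node], as in the block-elimination construction.
n1, n2 = 4, 4
N = n1 + n2 + 1
K = np.zeros((N, N))
chain = list(range(n1)) + [N - 1] + list(range(n1, n1 + n2))
for a, b in zip(chain[:-1], chain[1:]):
    K[a, a] += 1; K[b, b] += 1
    K[a, b] -= 1; K[b, a] -= 1
K += np.eye(N)  # zeroth-order term, keeps the operator nonsingular

I = np.arange(n1 + n2)   # interior (subdomain) unknowns
G = np.array([N - 1])    # interface unknown(s)
K_II, K_IG = K[np.ix_(I, I)], K[np.ix_(I, G)]
K_GI, K_GG = K[np.ix_(G, I)], K[np.ix_(G, G)]

# Dense Schur complement: the discrete Steklov-Poincare operator.
S = K_GG - K_GI @ np.linalg.solve(K_II, K_IG)

# Solving the condensed interface problem reproduces the interface part
# of the global solution exactly.
f = np.ones(N)
x = np.linalg.solve(K, f)
g = f[G] - K_GI @ np.linalg.solve(K_II, f[I])
x_G = np.linalg.solve(S, g)
```

The interface values recovered from the condensed system coincide with those of the global solve; in a realistic setting S is dense, which is what motivates the sparse wavelet representation studied in the paper.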
Mechanical and assembly units of viral capsids identified via quasi-rigid domain decomposition.
Directory of Open Access Journals (Sweden)
Guido Polles
Full Text Available Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies, we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV) for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available.
Lezina, Natalya; Agoshkov, Valery
2017-04-01
Domain decomposition method (DDM) allows one to present a domain with complex geometry as a set of essentially simpler subdomains. This method is particularly suited to the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations is solved in the Boussinesq and hydrostatic approximations. The difficulty in obtaining the solution in the whole domain is that the solutions in the subdomains must be combined. For this purpose an iterative algorithm is constructed, and numerical experiments are conducted to investigate the effectiveness of the developed DDM algorithm. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but for hydrodynamics problems they are not suitable. In this case the adjoint equation method [2] and inverse problem theory are used. In addition, DDM makes it possible to create algorithms for parallel calculations on multiprocessor computer systems. DDM for a model of the Baltic Sea dynamics is studied numerically. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609: the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decomposition Methods in Mathematical Physics Problems // Numerical processes and systems, No. 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in Mathematical Physics Problems, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
Directory of Open Access Journals (Sweden)
Yu-Ju Wang
2016-02-01
Full Text Available Ground motions from normal-faulting earthquakes are generally considered to be smaller than those of strike-slip and thrust events. On 11 April 2011 a crustal normal-faulting earthquake [the Fukushima earthquake (Mw 6.6)] occurred in Eastern Japan. The peak ground acceleration (PGA) observed was considerably higher than the predictions of several ground-motion prediction equations (GMPEs), which were derived mainly from thrust or strike-slip earthquakes. In northeast Taiwan, the tectonic structure of the Ryukyu Arc and the Okinawa Trough typically entails normal-faulting earthquakes. Because of the relevance of normal-faulting earthquakes to ground motions at nuclear power plant sites in northeast Taiwan, we evaluated the impact of the ground motion of normal-faulting earthquakes offshore northeast Taiwan using a newly constructed attenuation relationship for PGA and pseudo-spectral acceleration (Sa). We collected 832 records from 13 normal-faulting earthquakes with focal depths of less than 50 km. The moment magnitude (Mw) of the 13 events was between 4 and 6. The Sa and PGA of normal-faulting earthquakes offshore northeast Taiwan determined with the newly constructed attenuation relationship were higher and lower, respectively, than those obtained using attenuation equations commonly used in the Taiwan subduction zone.
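As a rough illustration of how such an attenuation relationship is evaluated, the sketch below uses a generic functional form ln(PGA) = a + b*Mw - c*ln(R + d); the coefficients are hypothetical placeholders, not the values fitted in this study:

```python
import numpy as np

def pga_gmpe(mw, r_km, a=-1.2, b=0.9, c=1.6, d=10.0):
    """Generic attenuation relationship of the common form
    ln(PGA) = a + b*Mw - c*ln(R + d). The coefficients are
    hypothetical placeholders, not this study's fitted values."""
    return np.exp(a + b * mw - c * np.log(r_km + d))

# Larger magnitude and shorter distance both raise the predicted PGA.
near = pga_gmpe(6.0, 20.0)
far = pga_gmpe(6.0, 100.0)
small = pga_gmpe(4.5, 20.0)
```

A regression of this form against the 832 collected records would yield region-specific coefficients, which is what distinguishes the new relationship from GMPEs derived for other faulting styles.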
Energy Technology Data Exchange (ETDEWEB)
Li,Jing; Tu, Xuemin
2008-12-10
A variant of the balancing domain decomposition method by constraints (BDDC) is proposed for solving a class of indefinite systems of linear equations, which arise from the finite element discretization of the Helmholtz equation of time-harmonic wave propagation in a bounded interior domain. The proposed BDDC algorithm is closely related to the dual-primal finite element tearing and interconnecting algorithm for solving Helmholtz equations (FETI-DPH). Under the condition that the diameters of the subdomains are small enough, a convergence rate is established which depends polylogarithmically on the dimension of the individual subdomain problems and which improves as the subdomain diameters decrease. These results are supported by numerical experiments solving a Helmholtz equation on a two-dimensional square domain.
Directory of Open Access Journals (Sweden)
Theo van der Weide
2016-07-01
Full Text Available Modeling is one of the most important parts of requirements engineering. Most modeling techniques focus primarily on their pragmatics and pay less attention to their syntax and semantics. Different modeling techniques describe different aspects, for example, Object-Role Modeling (ORM) describes underlying concepts and their relations while System Dynamics (SD) focuses on the dynamic behavior of relevant objects in the underlying application domain. In this paper we provide an inductive definition for a generic modeling technique. Not only do we describe the underlying data structure, we also show how states can be introduced for relevant concepts and how their life cycles can be modeled in terms of System Dynamics. We also show how decomposition can be applied to master complex application domains. As a result we get an integrated modeling technique covering both static and dynamic aspects of application domains. The inductive definition of this integrated modeling technique will facilitate the implementation of supporting tools for practitioners.
Gatto, Paolo; Lipparini, Filippo; Stamm, Benjamin
2017-12-14
The domain-decomposition (dd) paradigm, originally introduced for the conductor-like screening model, has been recently extended to the dielectric Polarizable Continuum Model (PCM), resulting in the ddPCM method. We present here a complete derivation of the analytical derivatives of the ddPCM energy with respect to the positions of the solute's atoms and discuss their efficient implementation. As it is the case for the energy, we observe a quadratic scaling, which is discussed and demonstrated with numerical tests.
Fast Domain Decomposition Algorithm for Continuum Solvation Models: Energy and First Derivatives.
Lipparini, Filippo; Stamm, Benjamin; Cancès, Eric; Maday, Yvon; Mennucci, Benedetta
2013-08-13
In this contribution, an efficient, parallel, linear scaling implementation of the conductor-like screening model (COSMO) is presented, following the domain decomposition (dd) algorithm recently proposed by three of us. The implementation is detailed and its linear scaling properties, both in computational cost and memory requirements, are demonstrated. Such behavior is also confirmed by several numerical examples on linear and globular large-sized systems, for which the calculation of the energy and of the forces is achieved with timings compatible with the use of polarizable continuum solvation for molecular dynamics simulations.
A domain decomposition method for analyzing a coupling between multiple acoustical spaces (L).
Chen, Yuehua; Jin, Guoyong; Liu, Zhigang
2017-05-01
This letter presents a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces. While the Lagrange multiplier technique usually has good performance for conditional extremum problems, the present method avoids involving extra coupling parameters and theoretically ensures the continuity conditions of both sound pressure and particle velocity at the coupling interface. Comparisons with the finite element results illustrate the accuracy and efficiency of the present predictions and the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure is revealed.
Energy Technology Data Exchange (ETDEWEB)
Fischer, J.W.; Azmy, Y.Y. [Pennsylvania State Univ. (United States)
2003-07-01
A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes: native MPI, the bucket algorithm, and the distributed bucket algorithm, are included in the validation exercise that is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component that typically increases with P in a manner highly dependent on the global reduction algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large numbers of processors.
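The three-component performance model described above can be sketched as a simple function of the processor count P; all timing constants below are illustrative placeholders, not measured AHOT values:

```python
import math

def runtime(P, t_serial=2.0, t_parallel=96.0, t_msg=0.05):
    """Three-component model: a serial part independent of P, a
    parallel part scaling like 1/P, and a communication part growing
    with P (log2(P) models a tree-based global reduction). All timing
    constants are illustrative placeholders, not measured values."""
    comm = t_msg * math.log2(P) if P > 1 else 0.0
    return t_serial + t_parallel / P + comm

# Speedup relative to one processor for a few processor counts.
speedup = {P: runtime(1) / runtime(P) for P in (1, 4, 16, 64)}
```

With these constants the parallel term dominates the communication term over the whole range, so speedup keeps growing with P but stays sub-linear because of the serial component, matching the qualitative trends reported in the abstract.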
Rafiq Abuturab, Muhammad
2016-06-01
A new multiple color-image authentication system based on HSI (Hue-Saturation-Intensity) color space and QR decomposition in gyrator domains is proposed. In this scheme, the original color images are converted from the RGB (Red-Green-Blue) color space to the HSI color space, divided into their H, S, and I components, and the corresponding phase-encoded components are obtained. All the phase-encoded H, S, and I components are individually multiplied, and then modulated by random phase functions. The modulated H, S, and I components are convolved into a single gray image with an asymmetric cryptosystem. The resulting image is segregated into Q and R parts by QR decomposition. Finally, they are independently gyrator transformed to obtain their encoded parts. Both the encoded Q and R parts must be gathered for decryption. The angles of the gyrator transform afford sensitive keys. The protocol, based on QR decomposition of the encoded matrix and recovery of the decoded matrix by multiplying the matrices Q and R, enhances the security level. The random phase keys, individual phase keys, and asymmetric phase keys provide high robustness to the cryptosystem. Numerical simulation results demonstrate that this scheme is superior to existing techniques.
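The QR step at the heart of the decryption protocol can be sketched directly with a linear algebra library; the gyrator transforms and phase masks are omitted, and the array below is a stand-in rather than an actual encoded image:

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.random((8, 8))  # stand-in for one encoded intensity component

# Segregate the image into Q and R parts by QR decomposition; in the
# scheme above each part would then be gyrator transformed separately
# (the gyrator step is omitted in this sketch).
Q, R = np.linalg.qr(img)

# Decryption multiplies the two parts back together; both must be
# gathered, since neither factor alone determines the image.
decoded = Q @ R
```

The reconstruction is exact up to floating-point error, which is why losing either the Q or the R part makes decryption impossible in the full scheme.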
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
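The buffered decomposition can be sketched along a single axis: each subdomain also receives the neighbouring points that fall within one kernel bandwidth of the cut, so density sums near the boundary see all contributing neighbours. The cut position and bandwidth below are illustrative, and only the temporal axis of a synthetic point set is split (the actual method decomposes adaptively in space and time with an octree):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic space-time events (x, y, t) in the unit cube.
pts = rng.random((1000, 3))
t_band = 0.05  # temporal kernel bandwidth (illustrative)

# Decompose along time at t = 0.5; each side keeps a buffer of points
# within one bandwidth of the cut to avoid edge effects.
left = pts[pts[:, 2] < 0.5 + t_band]
right = pts[pts[:, 2] >= 0.5 - t_band]

# Points inside the buffer zone are deliberately duplicated on both sides.
both = pts[(pts[:, 2] >= 0.5 - t_band) & (pts[:, 2] < 0.5 + t_band)]
```

Duplicating the buffer points costs a little extra memory but lets each processor evaluate its kernel densities independently, with no communication during the compute phase.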
Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media
Galvis, Juan
2010-01-01
In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.
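The spectral mechanism described above, counting connected high-conductivity regions through near-zero eigenvalues, has a simple graph analogue: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components. The sketch below is illustrative only, not the paper's coefficient-weighted eigenvalue problem:

```python
import numpy as np

# A 6-node graph with two connected components, standing in for two
# disconnected high-conductivity regions.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5)]
n = 6
L = np.zeros((n, n))
for a, b in edges:
    L[a, a] += 1; L[b, b] += 1
    L[a, b] -= 1; L[b, a] -= 1

w = np.sort(np.linalg.eigvalsh(L))
# The count of (near-)zero eigenvalues below the spectral gap equals
# the number of connected components.
n_small = int(np.sum(w < 1e-8))
```

The gap between the small eigenvalues and the rest of the spectrum is what makes the detection automatic: the coarse space simply spans the eigenvectors below the gap.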
Two-phase flow steam generator simulations on parallel computers using domain decomposition method
Energy Technology Data Exchange (ETDEWEB)
Belliard, M. [CEA Cadarache (DEN/DTP/STH), 13 - Saint-Paul-lez-Durance (France)
2003-07-01
Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank–Nicolson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)
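The iteration-by-sub-domain idea can be illustrated with a plain (non-adaptive) Dirichlet/Neumann method on a 1D model problem: one subdomain is solved with a Dirichlet interface value, the other with the resulting Neumann flux on the interface row, and the interface value is then relaxed. All sizes and the relaxation factor are illustrative choices, not taken from the paper:

```python
import numpy as np

# 1D Poisson -u'' = f on (0,1), zero Dirichlet BCs, n interior nodes,
# nonoverlapping split at the middle node m (the interface Gamma).
n = 31
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)
m = n // 2
I1, I2 = np.arange(0, m), np.arange(m + 1, n)

# Each subdomain contributes 1/h^2 to the interface diagonal entry.
A_gg1 = A_gg2 = 1.0 / h**2

lam, theta = 0.0, 0.5  # interface value and relaxation factor
for _ in range(3):
    # Dirichlet step on subdomain 1 with interface value lam.
    u1 = np.linalg.solve(A[np.ix_(I1, I1)], f[I1] - A[I1, m] * lam)
    # Neumann step on subdomain 2: the interface row carries the
    # residual flux coming from subdomain 1.
    rhs_g = f[m] - A[m, I1] @ u1 - A_gg1 * lam
    B = np.block([[np.array([[A_gg2]]), A[m, I2][None, :]],
                  [A[I2, m][:, None], A[np.ix_(I2, I2)]]])
    sol = np.linalg.solve(B, np.concatenate(([rhs_g], f[I2])))
    mu, u2 = sol[0], sol[1:]
    lam = theta * mu + (1 - theta) * lam  # relaxed interface update

u = np.concatenate((u1, [lam], u2))
u_ref = np.linalg.solve(A, f)
```

For this perfectly symmetric split, theta = 1/2 makes the interface iteration converge essentially immediately; the adaptive (ADN) variant of the paper instead tunes the relaxation during the iteration.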
Energy Technology Data Exchange (ETDEWEB)
Guerin, P
2007-12-15
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver, based on a mixed dual finite element method, has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem, solved by MINOS on each sub-domain, are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
Preconditioned domain decomposition scheme for three-dimensional aerodynamic sensitivity analysis
Eleshaky, Mohamed E.; Baysal, Oktay
1994-01-01
A discrete sensitivity analysis algorithm had previously been developed and applied to two-dimensional aerodynamic optimization problems, where the computational domains were discretized by using single grids. The sparse, unsymmetric systems of linear equations resulting from this algorithm were solved by a direct matrix inversion method. However, for large two-dimensional problems and, practically, all three-dimensional problems, direct inversion methods become inapplicable, primarily due to the prohibitive computer storage needed. In an attempt to alleviate such hindrances, the sensitivity analysis with domain decomposition (SADD) scheme was developed. This scheme divides the computational domain into smaller and nonoverlapping subdomains (multiblock grids) that are solved separately. Then, the final solution is constructed from the subdomain solutions. As the number of grid points in the interface boundaries of the subdomains becomes large, the computer memory required to store the effective coefficient matrix of these interface points starts to increase. Presented in this Technical Note is a preconditioned iterative procedure to overcome this particular problem.
Time-domain noise reduction based on an orthogonal decomposition for desired signal extraction.
Benesty, Jacob; Chen, Jingdong; Arden Huang, Yiteng; Gaensler, Tomas
2012-07-01
This paper addresses the problem of noise reduction in the time domain where the clean speech sample at every time instant is estimated by filtering a vector of the noisy speech signal. Such a clean speech estimate consists of both the filtered speech and residual noise (filtered noise) as the noisy vector is the sum of the clean speech and noise vectors. Traditionally, the filtered speech is treated as the desired signal after noise reduction. This paper proposes to decompose the clean speech vector into two orthogonal components: one is correlated and the other is uncorrelated with the current clean speech sample. While the correlated component helps estimate the clean speech, it is shown that the uncorrelated component interferes with the estimation, just as the additive noise. Based on this orthogonal decomposition, the paper presents a way to define the error signal and cost functions and addresses the issue of how to design different optimal noise reduction filters by optimizing these cost functions. Specifically, it discusses how to design the maximum SNR filter, the Wiener filter, the minimum variance distortionless response (MVDR) filter, the tradeoff filter, and the linearly constrained minimum variance (LCMV) filter. It demonstrates that the maximum SNR, Wiener, MVDR, and tradeoff filters are identical up to a scaling factor. It also shows from the orthogonal decomposition that many performance measures can be defined, which seem to be more appropriate than the traditional ones for the evaluation of the noise reduction filters.
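The orthogonal decomposition at the core of the paper can be sketched numerically: projecting each clean-speech vector onto the current sample leaves a residual that is uncorrelated with that sample. The synthetic signal below is a stand-in for speech (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "clean speech" frames: L-dimensional vectors whose samples
# are correlated through a short moving-average filter.
L, N = 8, 20000
base = rng.standard_normal((N, L + 2))
X = base[:, :L] + 0.8 * base[:, 1:L + 1] + 0.4 * base[:, 2:L + 2]
x0 = X[:, 0]  # the current clean sample

# Orthogonal decomposition: x = rho * x0 + x_u, where x_u is
# uncorrelated with x0 (rho is the per-coordinate projection
# coefficient onto the current sample).
rho = (X * x0[:, None]).mean(axis=0) / (x0 ** 2).mean()
x_corr = rho[None, :] * x0[:, None]  # component correlated with x0
x_u = X - x_corr                      # interfering, uncorrelated part

# The residual's sample cross-correlation with x0 vanishes by
# construction, up to floating-point error.
cross = np.abs((x_u * x0[:, None]).mean(axis=0))
```

Only the correlated component carries information about the current sample; the uncorrelated remainder acts like additive noise in the estimation, which is the observation the paper builds its error signal and cost functions on.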
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-12-01
We present a novel approach for solving steady-state stochastic partial differential equations in high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that accurate global solutions can be obtained with significantly reduced computational costs.
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
Energy Technology Data Exchange (ETDEWEB)
Widlund, Olof B. [New York Univ. (NYU), NY (United States). Courant Inst.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
Energy Technology Data Exchange (ETDEWEB)
Maliassov, S.Y. [Texas A&M Univ., College Station, TX (United States)
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that the constants of spectral equivalence are independent of the values of the coefficients and the mesh step size.
A balancing domain decomposition method by constraints for advection-diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Tu, Xuemin; Li, Jing
2008-12-10
The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A preconditioned GMRES iteration is used to solve a Schur complement system of equations for the subdomain interface variables. In the preconditioning step of each iteration, a partially sub-assembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established, under the condition that the diameters of subdomains are small enough. It is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.
An iterative finite-element collocation method for parabolic problems using domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Curran, M.C.
1992-11-01
Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two-dimensional time-dependent advection-diffusion problems.
Lubineau, Gilles
2015-03-01
We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters from full-field measurements. The technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly attempting to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then enforced between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems and efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.
Reduced reference image quality assessment based on wavelet domain singular value decomposition
Zhang, Fei-Yan; Sun, Tao; Tu, Ya-Fei; Qin, Qian-Qing
2009-10-01
Traditional pixel-based image quality assessment methods have many limitations, such as the lack of consideration of image structure or the need for a complete reference image. To avoid these problems, this paper presents a new image quality assessment method based on weighted singular value decomposition in the wavelet domain (WWSVD). In this algorithm, the singular value vector difference and the mean bias between the original image and the distorted image are considered to evaluate the degree of distortion. Many tests were conducted to evaluate the performance; the 227 JPEG2000-compressed test images came from the LIVE Image Quality Assessment Database, Release 2005. The results showed a great improvement in both the consistency with the DMOS (Differential Mean Opinion Score) and the stability when applied to a large range of compression rates.
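The core ingredient of such SVD-based metrics — the per-block distance between singular-value vectors of the reference and distorted images — can be sketched as follows. This is not the WWSVD algorithm itself (which adds wavelet-domain weighting and a mean-bias term); the function name, block size, and pooling by the mean are our simplifying assumptions:

```python
import numpy as np

def svd_block_distortion(ref, dist, block=8):
    """Block-wise singular-value distortion measure: for each block, the
    Euclidean distance between the singular-value vectors of the reference
    and distorted blocks; the global score is the mean over blocks."""
    h, w = ref.shape
    scores = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            s_r = np.linalg.svd(ref[i:i+block, j:j+block], compute_uv=False)
            s_d = np.linalg.svd(dist[i:i+block, j:j+block], compute_uv=False)
            scores.append(np.linalg.norm(s_r - s_d))
    return float(np.mean(scores))
```

Because singular values summarize block structure rather than individual pixels, the score is sensitive to structural distortion while ignoring, for example, a pure permutation of identical blocks.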
DOMAIN DECOMPOSITION FOR POROELASTICITY AND ELASTICITY WITH DG JUMPS AND MORTARS
GIRAULT, V.
2011-01-01
We couple a time-dependent poroelastic model in a region with an elastic model in adjacent regions. We discretize each model independently on non-matching grids and we realize a domain decomposition on the interface between the regions by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where the computations of the pressure and displacement are decoupled. We show that the matrix of the interface problem is positive definite and establish error estimates for this scheme. © 2011 World Scientific Publishing Company.
Domain decomposition approach to extract pore-network models from large 3D porous media images
Timofey, Sizonenko; Marina, Karsanina; Irina, Bayuk; Kirill, Gerke
2017-04-01
Pore-network models are a very useful and effective way to model porous media structure and properties such as permeability and multi-phase flow. Several methods for pore-network extraction have been proposed to date, including medial axis, maximal inscribed ball, and watershed techniques and their modifications. The input data for pore-network extraction algorithms usually consist of a 3D binary image. Modern X-ray tomography devices can easily provide scans with dimensions of 4k x 4k x 10k voxels. For such large images, extraction algorithms may run into problems of memory (RAM) consumption or become too time-consuming. To overcome such problems and obtain a parallelizable algorithm, we propose to divide the whole volume into smaller sub-volumes and extract pore-networks from them separately, sequentially or in parallel. However, correct pore-network extraction at the sub-volume connection areas is challenging. In this contribution we address this issue in detail and propose a method to merge such sub-volumes. Our method examines the slices of the porous medium under study at the sub-volume intersections. Each slice has its own geometric features and is associated with a number of pores or throats. Characteristics of the pore associated with a slice, such as its diameter and the distance from its center to the sub-domain boundary, are also taken into account. Based on the pore element properties and the properties of the aforementioned slices, the algorithm decides how pores from opposite sides of the sub-volumes should be connected. There are three cases of merging: 1) building a throat between pores, 2) absorption of one pore by the other, and 3) breaking the connection (no pore or throat is built). We have tested our approach on several different binary 3D images, including soil, sandstones, and carbonates. We also compared this new approach against a conventional one in which the extraction is performed on the whole domain without its decomposition into sub-domains. We show that our approach
A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique
Directory of Open Access Journals (Sweden)
Bo-Ao Xu
2016-01-01
This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite-difference time-domain (WLP-FDTD) method based on a domain decomposition scheme. By using the domain decomposition finite-difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are used, respectively, to eliminate the splitting error and to speed up the calculation in different subdomains. A joint calculation scheme is presented to reduce the amount of computation. With our approach, iteration is not essential to obtain accurate results. A numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.
Qiao, Xiaoli; Zhang, Xinming; Ren, Jiaojiao; Zhang, Dandan; Cao, Guohua; Li, Lijuan
2017-09-01
The wavelet-domain de-noising technique has many applications in terahertz time-domain spectroscopy (THz-TDS). However, it requires a complex procedure for the selection of the optimal wavelet basis and threshold, which varies for different materials. Inappropriate selections can lead to de-noising failure. Here, we propose the Mean Estimation Empirical Mode Decomposition (ME-EMD) de-noising method for THz-TDS. First, the THz-TDS signal and the collected reference noise are decomposed into intrinsic mode functions (IMFs); second, the maximum and mean absolute values of the noise IMF amplitudes are calculated and defined as the adaptive threshold and the adaptive estimated noise value, respectively; finally, these thresholds and estimated noise values are used to filter the noise from the signal IMFs and reconstruct the THz-TDS signal. We also calculate the signal-to-noise ratio (SNR) and mean square error (MSE) for the ME-EMD method, the "db7" wavelet basis, and the "sym8" wavelet basis after de-noising in both the simulation and the real sample experiments. Both theoretical analysis and experimental results demonstrate that the new ME-EMD method is a simple, effective, and highly stable de-noising tool for THz-TDS pulses. The measured refractive-index curves are compared before and after de-noising, demonstrating that the de-noising process is necessary and useful for measuring the optical constants of a sample.
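The thresholding step described above can be sketched once the IMFs are in hand (the decomposition itself would come from an EMD package and is assumed done here). The exact filtering rule below — hard-keep samples above the threshold, then shrink them by the estimated noise value — is our reading of the abstract, not the authors' verified formula, and it assumes equal numbers of signal and noise IMFs:

```python
import numpy as np

def me_emd_filter(signal_imfs, noise_imfs):
    """Filter each signal IMF using thresholds derived from the matching
    reference-noise IMF, then sum the filtered IMFs to reconstruct.
    Assumes len(signal_imfs) == len(noise_imfs), all same sample length."""
    clean = np.zeros_like(signal_imfs[0])
    for s_imf, n_imf in zip(signal_imfs, noise_imfs):
        thr = np.max(np.abs(n_imf))    # adaptive threshold
        est = np.mean(np.abs(n_imf))   # adaptive estimated noise value
        keep = np.abs(s_imf) > thr     # samples judged to carry signal
        clean += np.where(keep, s_imf - np.sign(s_imf) * est, 0.0)
    return clean
```

The appeal of this construction is that both parameters are read off the measured reference noise, so no per-material wavelet basis or threshold tuning is needed.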
Cheng, Jiubing
2014-08-05
In elastic imaging, the extrapolated vector fields are decomposed into pure wave modes, such that the imaging condition produces interpretable images, which characterize the reflectivity of different reflection types. Conventionally, wavefield decomposition in anisotropic media is costly because the operators involved depend on the velocity and are thus not stationary. In this abstract, we propose an efficient approach to directly extrapolate the decomposed elastic waves using low-rank approximate mixed space/wavenumber-domain integral operators for heterogeneous transversely isotropic (TI) media. The low-rank approximation is thus applied to the pseudo-spectral extrapolation and decomposition at the same time. The pseudo-spectral implementation also allows for relatively large time steps in which the low-rank approximation is applied. Synthetic examples show that it can yield dispersion-free extrapolation of the decomposed quasi-P (qP) and quasi-SV (qSV) modes, which can be used for imaging, as well as the total elastic wavefields.
Energy Technology Data Exchange (ETDEWEB)
Saas, L.
2004-05-01
This thesis deals with sedimentary basin modeling, whose goal is to predict, through geological time, the locations and quantities of hydrocarbons present in the ground. Due to the natural and evolutionary decomposition of a sedimentary basin into blocks and stratigraphic layers, domain decomposition methods are required to simulate the flows of water and hydrocarbons in the ground. Conservation laws are used to model the flows and form coupled partial differential equations which are discretized by a finite volume method. In this report we carry out a study of finite volume methods on non-matching grids solved by domain decomposition methods. We describe a family of finite volume schemes on non-matching grids and prove that the associated global discretized problem is well posed. Then we give an error estimate. We give two examples of finite volume schemes on non-matching grids and the corresponding theoretical results (a constant scheme and a linear scheme). We then present the resolution of the global discretized problem by a domain decomposition method using arbitrary interface conditions (for example, Robin conditions). Finally, we give numerical results which validate the theoretical results and study the use of finite volume methods on non-matching grids for basin modeling. (author)
Statistical image-domain multimaterial decomposition for dual-energy CT.
Xue, Yi; Ruan, Ruoshui; Hu, Xiuhua; Kuang, Yu; Wang, Jing; Long, Yong; Niu, Tianye
2017-03-01
Dual-energy CT (DECT) enhances tissue characterization because of its basis material decomposition capability. In addition to conventional two-material decomposition from DECT measurements, multimaterial decomposition (MMD) is required in many clinical applications. To solve the ill-posed problem of reconstructing multi-material images from dual-energy measurements, additional constraints are incorporated into the formulation, including volume and mass conservation and the assumptions that there are at most three materials in each pixel and various material types among pixels. The recently proposed flexible image-domain MMD method decomposes pixels sequentially into multiple basis materials using a direct inversion scheme, which leads to magnified noise in the material images. In this paper, we propose a statistical image-domain MMD method for DECT to suppress the noise. The proposed method applies penalized weighted least-squares (PWLS) reconstruction with a negative log-likelihood term and edge-preserving regularization for each material. The statistical weight is determined by a data-based method accounting for the noise variance of the high- and low-energy CT images. We apply optimization transfer principles to design a series of pixel-wise separable quadratic surrogate (PWSQS) functions which monotonically decrease the cost function. The separability in each pixel enables the simultaneous update of all pixels. The proposed method is evaluated on a digital phantom, the Catphan©600 phantom, and three patients (pelvis, head, and thigh). We also implement the direct inversion and low-pass filtration methods for comparison. Compared with the direct inversion method, the proposed method reduces the noise standard deviation (STD) in soft tissue by 95.35% in the digital phantom study, by 88.01% in the Catphan©600 phantom study, by 92.45% in the pelvis patient study, by 60.21% in the head patient study, and by 81.22% in the thigh patient study, respectively.
Pioldi, Fabio; Rizzi, Egidio
2017-07-01
Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification task) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes, and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context, use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of the current modal parameters under strong ground motion. At this stage, such an analysis tool may be employed in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
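The classical FDD core that the refined method builds on fits in a short NumPy sketch: take the singular value decomposition of the cross-spectral matrix at each frequency line; the peak of the first singular-value spectrum estimates a natural frequency and the corresponding singular vector the mode shape. This is a bare-bones illustration (single-segment cross-spectra, no Welch averaging, no bandpass filtering or damping estimation); the function name is ours:

```python
import numpy as np

def fdd_peak(acc, fs):
    """Minimal Frequency Domain Decomposition: acc is (n_channels, n_samples)
    output-only response data sampled at fs. Returns the dominant natural
    frequency estimate and the associated mode-shape vector."""
    n_ch, n = acc.shape
    X = np.fft.rfft(acc, axis=1)              # one-sided spectra per channel
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    s1 = np.empty(len(freqs))
    vecs = np.empty((len(freqs), n_ch), dtype=complex)
    for k in range(len(freqs)):
        G = np.outer(X[:, k], X[:, k].conj())  # cross-spectral matrix (1 avg)
        U, S, _ = np.linalg.svd(G)
        s1[k], vecs[k] = S[0], U[:, 0]
    kpk = np.argmax(s1[1:]) + 1                # skip the DC line
    return freqs[kpk], vecs[kpk]
```

In practice the first singular-value spectrum is inspected over all frequencies and several peaks are picked, one per mode; averaging over segments is what makes the rank structure of the cross-spectral matrix meaningful for real data.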
Roy, Soumitra; Pal, Arup Kumar
2017-06-01
Digital image watermarking has emerged as a promising solution for copyright protection. In this paper, a discrete cosine transform (DCT) and singular value decomposition (SVD) based hybrid robust image watermarking method using Arnold scrambling is proposed and simulated to protect the copyright of natural images. In the proposed scheme, the watermark is scrambled with Arnold scrambling before embedding. Then, the greyscale cover image and the encrypted watermark logo are decomposed into non-overlapping blocks, and selected image blocks are transformed into the DCT domain for permanently inserting the watermark blocks. For better imperceptibility and effectiveness, watermark image blocks are embedded into the singular values of selected blocks by multiplying with a feasible scaling factor. Simulation results demonstrate that robustness is achieved by recovering satisfactory watermark data from the reconstructed cover image after applying common geometric transformation attacks (such as rotation, flipping, cropping, scaling, shearing, and deletion of lines or columns), common enhancement attacks (such as low-pass filtering, histogram equalization, sharpening, gamma correction, and noise addition), and JPEG compression attacks.
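The Arnold scrambling step used above is a standard cat-map permutation and is easy to show in isolation (the DCT/SVD embedding itself is not reproduced here). A minimal sketch:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Arnold cat map on a square N x N image:
    (x, y) -> (x + y, x + 2y) mod N, applied `iterations` times.
    The map is a permutation of pixel positions, hence lossless."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out
```

The map is periodic, so descrambling amounts to iterating until the period completes; the period depends on N (for example it is 3 for N = 4 and 192 for N = 256), which is why the iteration count can serve as a key.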
Directory of Open Access Journals (Sweden)
Carlo Ruzzo
2016-10-01
System identification of offshore floating platforms is usually performed by testing small-scale models in wave tanks, where controlled conditions, such as still water for free-decay tests and regular and irregular wave loading, can be represented. However, this approach may impose constraints on model dimensions, testing time, and the cost of the experimental activity. For such reasons, intermediate-scale field modelling of offshore floating structures may become an interesting as well as cost-effective alternative in the near future. Clearly, since the open sea is not a controlled environment, traditional system identification may become challenging and less precise. In this paper, a new approach based on the Frequency Domain Decomposition (FDD) method for Operational Modal Analysis is proposed and validated against numerical simulations in ANSYS AQWA v.16.0 on a simple spar-type structure. The results obtained match well with the numerical predictions, showing that this new approach, suitably coupled with more traditional wave-tank techniques, is very promising for field-site identification of model structures.
Directory of Open Access Journals (Sweden)
Ran Zhao
2015-01-01
Hybrid solvers based on the integral equation domain decomposition method (HS-DDM) are developed for modeling electromagnetic radiation. Based on the philosophy of "divide and conquer," the IE-DDM divides the original multiscale problem into many closed non-overlapping subdomains. For adjacent subdomains, Robin transmission conditions ensure the continuity of currents, so the meshes of different subdomains are allowed to be non-conformal. The framework also allows different fast solvers to be used in different subdomains, chosen according to the properties of each subdomain, to reduce time and memory consumption. Here, the multilevel fast multipole algorithm (MLFMA) and the hierarchical (H-) matrix method are combined in the framework of the IE-DDM to enhance its capability and realize efficient solution of multiscale electromagnetic radiation problems. The MLFMA is used to capture propagating-wave physics in large, smooth regions, while H-matrices are used to capture evanescent-wave physics in small regions discretized with dense meshes. Numerical results demonstrate the validity of the HS-DDM.
Directory of Open Access Journals (Sweden)
Jafar Biazar
2015-01-01
We combine the Adomian decomposition method (ADM) and Adomian's asymptotic decomposition method (AADM) for solving Riccati equations. We investigate the approximate global solution by matching the near-field approximation derived from the Adomian decomposition method with the far-field approximation derived from Adomian's asymptotic decomposition method. In cases where we do not find any region of overlap between the approximate solutions obtained by the two proposed methods, we connect the two approximations by the Padé approximant of the near-field approximation. We illustrate the efficiency of the technique for several specific examples of the Riccati equation for which the exact solution is known in advance.
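For a concrete Riccati example, the near-field ADM recursion can be carried out symbolically with Taylor-coefficient arrays. The sketch below (our own illustration, not the paper's code) treats y' = 1 - y², y(0) = 0, whose exact solution is tanh t; for the quadratic nonlinearity N(y) = y², the Adomian polynomial A_k reduces to the Cauchy product Σ y_i y_{k-i}:

```python
import numpy as np

def adm_riccati(n_terms, t):
    """Partial-sum ADM solution of y' = 1 - y^2, y(0) = 0 (exact: tanh t).
    Each component y_k is stored as ascending Taylor coefficients;
    y_0 = ∫1 dt = t and y_{k+1} = -∫ A_k dt with A_k = sum_i y_i * y_{k-i}."""
    def integrate(c):                     # term-wise integration from 0
        return np.concatenate(([0.0], c / np.arange(1, len(c) + 1)))
    terms = [integrate(np.array([1.0]))]  # y_0 = t
    for k in range(n_terms - 1):
        # Adomian polynomial for y^2: a sum of polynomial (Cauchy) products
        A_k = sum(np.convolve(terms[i], terms[k - i]) for i in range(k + 1))
        terms.append(-integrate(A_k))
    total = np.zeros(len(terms[-1]))
    for c in terms:
        total[:len(c)] += c
    return float(np.polynomial.polynomial.polyval(t, total))
```

The recursion reproduces the Maclaurin series of tanh t term by term (t - t³/3 + 2t⁵/15 - …), which is exactly the "near-field" behavior: accurate for small t and divergent for large t, motivating the matching with the asymptotic far-field expansion.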
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2013-01-01
We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then, the singular values are embedded in a detailed subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the MSFs is a difficult problem, so a multiobjective ant colony optimization is used to determine them. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from the problem of a high probability of false-positive detection of the watermarks.
DEFF Research Database (Denmark)
Madsen, Kristoffer Hougaard; Hansen, Lars Kai; Mørup, Morten
2009-01-01
We propose the Time Frequency Gradient Method (TFGM), which forms a framework for optimization of models that are constrained in the time domain while having efficient representations in the frequency domain. Since the constraints in the time domain in general are not transparent in a frequency......-negative Matrix Factorization, Convolutive Sparse Coding as well as Smooth and Sparse Matrix Factorization. Matlab implementations of the proposed algorithms are available for download at www.erpwavelab.org....
Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms
Efendiev, Yalchin
2012-02-22
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows one to construct additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A detailed analysis of the abstract setting is provided. The proposed decomposition builds on a method of Galvis and Efendiev [Multiscale Model. Simul. 8 (2010) 1461-1483] developed for second order scalar elliptic problems with high contrast. Applications to the finite element discretizations of the second order elliptic problem in Galerkin and mixed formulation, the Stokes equations, and Brinkman's problem are presented. A number of numerical experiments for these problems in two spatial dimensions are provided. © EDP Sciences, SMAI, 2012.
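The paper's contribution is the spectral coarse space; the baseline it enriches — the classical one-level additive overlapping Schwarz iteration — can be shown in a few lines. The sketch below applies it as a damped stationary iteration to a 1D Poisson toy problem (the problem setup, subdomain split, and damping factor are our illustrative choices; the coarse space is deliberately omitted):

```python
import numpy as np

def additive_schwarz_solve(A, b, subdomains, n_iter=300, omega=0.5):
    """One-level additive overlapping Schwarz for A x = b: each sweep sums
    the overlapping local corrections R_i^T A_i^{-1} R_i r, damped by omega.
    Without a coarse space, convergence degrades with many subdomains or
    high contrast -- the motivation for the spectral coarse space."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        r = b - A @ x
        dx = np.zeros_like(b)
        for idx in subdomains:            # idx: DOF indices of one subdomain
            Ai = A[np.ix_(idx, idx)]      # local Dirichlet problem
            dx[idx] += np.linalg.solve(Ai, r[idx])
        x = x + omega * dx                # damped additive update
    return x

# 1D Poisson: -u'' = 1 on (0,1), u(0) = u(1) = 0, two overlapping subdomains
n = 39
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)
subs = [np.arange(0, 24), np.arange(16, n)]
x = additive_schwarz_solve(A, b, subs)
```

In production codes the same operator is used as a preconditioner inside conjugate gradients rather than as a stationary iteration; the exact finite-difference solution here is u(x) = x(1 - x)/2.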
Subber, Waad; Sarkar, Abhijit
2014-01-01
Recent advances in high performance computing systems and sensing technologies motivate computational simulations with extremely high resolution models with capabilities to quantify uncertainties for credible numerical predictions. A two-level domain decomposition method is reported in this investigation to devise a linear solver for the large-scale system in the Galerkin spectral stochastic finite element method (SSFEM). In particular, a two-level scalable preconditioner is introduced in order to iteratively solve the large-scale linear system in the intrusive SSFEM using an iterative substructuring based domain decomposition solver. The implementation of the algorithm involves solving a local problem on each subdomain that constructs the local part of the preconditioner and a coarse problem that propagates information globally among the subdomains. The numerical and parallel scalabilities of the two-level preconditioner are contrasted with the previously developed one-level preconditioner for two-dimensional flow through porous media and elasticity problems with spatially varying non-Gaussian material properties. A distributed implementation of the parallel algorithm is carried out using MPI and PETSc parallel libraries. The scalabilities of the algorithm are investigated in a Linux cluster.
Vera, N. C.; GMMC
2013-05-01
In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered, discretized using tetrahedra. The discrete domain is decomposed into subdomains and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem with this methodology can, in principle, fully exploit the computing hardware and also provides results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Society for Industrial and Applied Mathematics, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Eric J. Nava
2012-03-01
This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into epochs. Vehicle assignment is performed sequentially in each epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal epoch setting during iterations regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.
An efficient domain decomposition strategy for wave loads on surface piercing circular cylinders
DEFF Research Database (Denmark)
Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.
2014-01-01
A fully nonlinear domain decomposed solver is proposed for efficient computations of wave loads on surface piercing structures in the time domain. A fully nonlinear potential flow solver was combined with a fully nonlinear Navier–Stokes/VOF solver via generalized coupling zones of arbitrary shape....... Sensitivity tests of the extent of the inner Navier–Stokes/VOF domain were carried out. Numerical computations of wave loads on surface piercing circular cylinders at intermediate water depths are presented. Four different test cases of increasing complexity were considered; 1) weakly nonlinear regular waves...
Directory of Open Access Journals (Sweden)
L. Wang
2015-01-01
Economical space transportation systems to launch small satellites into Earth orbit are being researched in many countries. Using aerospace systems that include an aircraft and an air-launched launch vehicle is one low-cost technical solution. Using an air-launched vehicle to place a small remote-sensing satellite requires high-precision insertion into a specified sun-synchronous orbit, so the problem is to construct an optimal ascent trajectory and optimal control. In this paper, a mathematical motion model of the air-launched launch vehicle is put forward, with external disturbances caused by the Earth's non-sphericity, drag, and wind, based on a three-stage flight program with a passive intermediate section. A discretization based on the pseudo-spectral method is used to solve the problem, which converts the initial problem into a nonlinear programming problem with dynamic constraints, with the criterion of maximizing the final mass released onto the target orbit. Application of the proposed solution procedure is illustrated by calculating the optimal control and the corresponding trajectory for a two-stage liquid launch vehicle, which places a small spacecraft into a sun-synchronous orbit at an altitude of 512 km. The numerical simulation results demonstrate the effectiveness of the proposed algorithm and allow us to analyze the parameters of the three-stage trajectory with an intermediate passive flight phase. It can be noted that in the resulting ascent trajectory, the intermediate passive flight part is a suborbital trajectory with a low energy integral, whose perigee is below the surface of the Earth.
Directory of Open Access Journals (Sweden)
Yasong Qiu
2015-02-01
In this paper a new flow field prediction method, independent of the governing equations, is developed to predict stationary flow fields over variable physical domains. Predicted flow fields come from a linear superposition of selected basis modes generated by proper orthogonal decomposition (POD). Instead of traditional projection methods, a kriging surrogate model is used to calculate the superposition coefficients by building approximate functional relationships between the geometric profile parameters of the physical domain and these coefficients. In this way, the problems that trouble the traditional POD-projection method due to viscosity and compressibility are avoided in the whole process. Moreover, there are no constraints on the form of the inner product, so two simple forms are applied to improve computational efficiency and cope with the variable-physical-domain problem. An iterative algorithm is developed to determine how many of the leading basis modes should be used in the prediction. Testing results prove the feasibility of this new method for subsonic flow fields, but also show that it is not suitable for transonic flow fields because of the poorly predicted shock waves.
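The POD-plus-surrogate pipeline described above can be sketched compactly. The snippet below is a simplified stand-in for the paper's approach: the POD basis comes from an SVD of the snapshot matrix, but the kriging surrogate is replaced by a plain Gaussian radial-basis-function interpolator over a single geometry parameter (function name, one-parameter design space, and kernel width are our assumptions):

```python
import numpy as np

def pod_predict(params, snapshots, new_param, n_modes=3, eps=1.0):
    """Predict a flow field at new_param by interpolating POD mode
    coefficients over the design parameter. snapshots: (n_points, n_samples),
    one column per training simulation at the matching entry of params."""
    params = np.asarray(params, dtype=float)
    U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
    modes = U[:, :n_modes]                   # POD basis (left singular vectors)
    coeffs = modes.T @ snapshots             # (n_modes, n_samples) coefficients
    phi = lambda d: np.exp(-(eps * d) ** 2)  # Gaussian RBF kernel
    G = phi(np.abs(params[:, None] - params[None, :]))
    W = np.linalg.solve(G, coeffs.T)         # RBF weights, one column per mode
    g = phi(np.abs(new_param - params))
    return modes @ (W.T @ g)                 # predicted field
```

Because the prediction never touches the governing equations, the same routine works for any field quantity stored in the snapshots, which is the "equation-free" property the abstract emphasizes.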
Yücel, Abdulkadir C.
2013-07-01
Reliable and effective wireless communication and tracking systems in mine environments are key to ensuring miners' productivity and safety during routine operations and catastrophic events. The design of such systems greatly benefits from simulation tools capable of analyzing electromagnetic (EM) wave propagation in long mine tunnels and large mine galleries. Existing simulation tools for analyzing EM wave propagation in such environments employ modal decompositions (Emslie et al., IEEE Trans. Antennas Propag., 23, 192-205, 1975), ray-tracing techniques (Zhang, IEEE Trans. Vehic. Tech., 5, 1308-1314, 2003), and full-wave methods. Modal approaches and ray-tracing techniques cannot accurately account for the presence of miners and their equipment, or for wall roughness (especially when the latter is comparable to the wavelength). Full-wave methods do not suffer from such restrictions but require prohibitively large computational resources. To partially alleviate this computational burden, a 2D integral-equation-based domain decomposition technique has recently been proposed (Bakir et al., in Proc. IEEE Int. Symp. APS, 1-2, 8-14 July 2012). © 2013 IEEE.
Dolean, Victorita; Gander, Martin J.; Lanteri, Stephane; Lee, Jin-Fa; Peng, Zhen
2015-01-01
The time-harmonic Maxwell equations describe the propagation of electromagnetic waves and are therefore fundamental for the simulation of many modern devices we have become used to in everyday life. The numerical solution of these equations is hampered by two fundamental problems: first, in the high-frequency regime, very fine meshes need to be used in order to avoid the pollution effect well known for the Helmholtz equation, and second, the large-scale systems obtained from the vector-valued equations in three spatial dimensions need to be solved by iterative methods, since direct factorizations are no longer feasible at that scale. As for the Helmholtz equation, classical iterative methods applied to discretized Maxwell equations have severe convergence problems. We explain in this paper a family of domain decomposition methods based on well-chosen transmission conditions. We show that all transmission conditions proposed so far in the literature, both for the first and second order formulation of Maxwell's equations, can be written and optimized in the common framework of optimized Schwarz methods, independently of the first or second order formulation one uses, and the performance of the corresponding algorithms is identical. We use a decomposition into transverse electric and transverse magnetic fields to describe these algorithms, which greatly simplifies the convergence analysis of the methods. We illustrate the performance of our algorithms with large-scale numerical simulations.
The mixed problem in Lipschitz domains with general decompositions of the boundary
Taylor, Justin L.; Ott, Katharine A.; Brown, Russell M.
2011-01-01
This paper continues the study of the mixed problem for the Laplacian. We consider a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$, $n\geq 2$, with boundary that is decomposed as $\partial\Omega = D \cup N$, $D$ and $N$ disjoint. We let $\Lambda$ denote the boundary of $D$ (relative to $\partial\Omega$) and impose conditions on the dimension and shape of $\Lambda$ and the sets $N$ and $D$. Under these geometric criteria, we show that there exists $p_0>1$ depending on the domain $\Omega$ such ...
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square-root Fourier multiplier approximations of Dirichlet-to-Neumann operators. While the multitrace/singletrace formulations, as well as the DDM that use classical Robin transmission conditions, are not particularly well suited for Krylov subspace iterative solution of high-contrast, high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for this type of application. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complement elimination.
Kourakos, G.; Harter, T.
2012-12-01
Groundwater contamination in semi-arid agricultural regions is increasing around the globe. Communities in such areas typically rely on groundwater resources for domestic and irrigation uses. Intensive farming practices are a significant source of groundwater contamination, which affects communities via well pumping and ecosystems via groundwater return flow to streams. Agricultural, or diffuse, pollution is generally difficult to simulate due to the large number of sources and the large number of distributed wells, requiring high-resolution flow and transport simulations. Individual contributing sources are on the order of a few hectares to a few tens of hectares, while many of the larger agricultural groundwater basins encompass hundreds to thousands of square kilometers. Classical 3D transport modeling approaches are intractable across such scales with current computing power. In this study we develop an efficient, highly parallelizable transport method known as streamline transport simulation. The approach decomposes a multi-dimensional problem into multiple one-dimensional subproblems which are trivial to solve. Streamline modeling requires a highly detailed 3D velocity field. The simulation of highly detailed groundwater flow in large agricultural basins is achieved by developing a substructuring iterative domain decomposition (Schur complement) method for obtaining the velocity field. For unconfined aquifers, we illustrate that it is critical to use a moving mesh such that the finite element mesh adapts to the head field. We therefore combined an iterative moving-mesh approach with the Schur complement domain decomposition method. The importance of the moving-mesh approach is illustrated with a hypothetical example and with an application to a real case study in the southern Central Valley, California.
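The core idea of streamline transport, decomposing a multi-dimensional problem into independent one-dimensional subproblems along flow paths, can be sketched as follows. The divergence-free velocity field and the seed points are hypothetical stand-ins for the detailed groundwater velocity field described above; each traced line is one trivially parallel 1-D subproblem.

```python
import numpy as np

def velocity(p):
    """Hypothetical divergence-free 2-D velocity field (uniform flow plus shear)."""
    x, y = p
    return np.array([1.0 + 0.2 * y, 0.2 * x])

def trace_streamline(p0, h=0.01, steps=500):
    """RK4 particle tracking: each streamline becomes an independent 1-D subproblem."""
    path = [np.array(p0, float)]
    for _ in range(steps):
        p = path[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        path.append(p + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(path)

# streamlines from distinct source cells are traced independently (trivially parallel);
# 1-D transport along each line would then give the full solution
lines = [trace_streamline((0.0, y0)) for y0 in (0.0, 0.5, 1.0)]
arc = [np.linalg.norm(np.diff(l, axis=0), axis=1).sum() for l in lines]
print(arc)   # arc lengths grow with the local flow speed
```

In a real application the velocity field would come from the (domain-decomposed) flow solve, and an advection-dispersion equation would be solved along each traced line.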
Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos
Energy Technology Data Exchange (ETDEWEB)
Ragusa, J.C. [CEA Saclay, Direction de l' Energie Nucleaire, Service d' Etudes des Reacteurs et de Modelisations Avancees (DEN/SERMA), 91 - Gif sur Yvette (France)
2003-07-01
The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms are discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and is explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization, and finally conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared-memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the potential of SMP clusters with mixed-mode parallelism, which can be achieved by combining message passing between clusters with OpenMP implicit parallelism within a cluster.
Directory of Open Access Journals (Sweden)
Eugenio Aulisa
2009-04-01
Full Text Available Solving complex coupled processes involving fluid-structure-thermal interactions is a challenging problem in computational sciences and engineering. Currently there exist numerous public-domain and commercial codes available in the areas of Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD) and Computational Thermodynamics (CTD). Different groups specializing in modelling individual processes such as CSD, CFD, CTD often come together to solve a complex coupled application. Direct numerical simulation of the non-linear equations for even the most simplified fluid-structure-thermal interaction (FSTI) model depends on the convergence of iterative solvers, which in turn rely heavily on the properties of the coupled system. The purpose of this paper is to introduce a flexible multilevel algorithm with finite elements that can be used to study coupled FSTI problems. The method relies on decomposing the complex global domain into several local sub-domains, solving smaller problems over these sub-domains, and then gluing back the local solutions in an efficient and accurate fashion to yield the global solution. Our numerical results suggest that the proposed solution methodology is robust and reliable.
Modal Identification of Output-only Systems Using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.M.; Andersen, Palle
2001-01-01
In this paper a new frequency domain technique is introduced for the modal identification of output-only systems, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. In terms of user-friendliness, the technique is closely related to the classica...
Modal Identification of Output-Only Systems using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. In terms of user-friendliness, the technique is closely related to the classical ...
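The Frequency Domain Decomposition idea described in these two records, taking the SVD of the output cross-spectral density matrix at each frequency line and reading modal frequencies off the peaks of the first singular value, can be sketched on synthetic data. The two "mode shapes" and modal frequencies below are fabricated stand-ins for measured ambient responses.

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(0)
fs, T = 256.0, 60.0
t = np.arange(0, T, 1 / fs)

# synthetic two-channel response: columns of phi are assumed mode shapes,
# q1/q2 are narrow-band stand-ins for the modal responses
phi = np.array([[1.0, 0.6],
                [0.5, -0.8]])
q1 = np.sin(2 * np.pi * 8.0 * t + rng.uniform(0, 2 * np.pi))
q2 = 0.7 * np.sin(2 * np.pi * 21.0 * t + rng.uniform(0, 2 * np.pi))
y = phi @ np.vstack([q1, q2]) + 0.05 * rng.standard_normal((2, t.size))

# cross-spectral density matrix G(f) via Welch's method, then SVD per frequency
nseg = 1024
f, _ = csd(y[0], y[0], fs=fs, nperseg=nseg)
G = np.empty((f.size, 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=nseg)[1]

s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
peak = f[np.argmax(s1)]   # first singular value peaks at the dominant modal frequency
print(peak)
```

At a peak, the first singular vector of G(f) estimates the corresponding mode shape, which is the second half of the FDD procedure.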
Directory of Open Access Journals (Sweden)
Jingjing He
2016-08-01
Full Text Available Structural health monitoring has been studied by a number of researchers as well as various industries to keep up with the increasing demand for preventive maintenance routines. This work presents a novel method for reconstructing prompt, informed strain/stress responses at the hot spots of structures based on strain measurements at remote locations. The structural responses measured by the usage monitoring system at available locations are decomposed into modal responses using empirical mode decomposition. Transformation equations based on finite element modeling are derived to extrapolate the modal responses from the measured locations to critical locations where direct sensor measurements are not available. Then, two numerical examples (a two-span beam and a 19956-degree-of-freedom simplified airfoil) are used to demonstrate the overall reconstruction method. Finally, the present work investigates the effectiveness and accuracy of the method through a set of experiments conducted on an aluminium alloy cantilever beam commonly used in air vehicles and spacecraft. The experiments collect the vibration strain signals of the beam via optical fiber sensors. Reconstruction results are compared with theoretical solutions and a detailed error analysis is also provided.
Hammerschmidt, Martin; Lockau, Daniel; Zschiedrich, Lin; Schmidt, Frank
2014-03-01
In many experimentally realized applications, e.g. photonic crystals, solar cells and light-emitting diodes, nanophotonic systems are coupled to a thick substrate layer, which in certain cases has to be included as a part of the optical system. The finite element method (FEM) yields rigorous, high-accuracy solutions of the full 3D vectorial Maxwell's equations [1] and allows for great flexibility and accuracy in the geometrical modelling. Time-harmonic FEM solvers have been combined with Fourier methods in domain decomposition algorithms to compute coherent solutions of these coupled systems [2, 3]. The basic idea of a domain decomposition approach lies in a decomposition of the domain into smaller subdomains, separate calculation of the solutions, and coupling of these solutions on adjacent subdomains. In experiments, light sources are often not perfectly monochromatic, and hence a comparison to simulation results may only be justified if the simulation results, which include interference patterns in the substrate, are spectrally averaged. In this contribution we present a scattering-matrix domain decomposition algorithm for Maxwell's equations based on FEM. We study its convergence and advantages in the context of optical simulations of silicon thin-film multi-junction solar cells. This allows substrate light trapping to be included in optical simulations and leads to a more realistic estimation of light-path enhancement factors in thin-film devices near the band edge.
Directory of Open Access Journals (Sweden)
Tromeur-Dervout Damien
2013-12-01
Full Text Available This paper deals with the representation of the trace of iterative Schwarz solutions at the interfaces of a domain decomposition in order to adaptively approximate the interface error operator. This allows one to build a cost-effective acceleration of the convergence of the iterative method by extending Aitken's convergence acceleration technique to the vector case. The first representation is based on building a nonuniform discrete Fourier transform defined on a non-regular grid. We show how to construct a Fourier basis of dimension N+1 on this grid by numerically building a sesquilinear form, its exact accuracy in representing trigonometric polynomials of degree N/2, and its spectral approximation property, which depends on the continuity of the function to approximate. The decay of the Fourier-like modes of the approximation of the trace of the iterative solution at the interfaces provides an estimate to adaptively select the modes involved in the acceleration. The drawback of this approach is that it depends on the continuity of the trace of the iterated solution at the interfaces. The second representation, purely algebraic, uses a singular value decomposition of the trace of the iterative solution at the interfaces to provide a set of orthogonal singular vectors whose associated singular values provide an estimate to adapt the acceleration. The resulting Aitken-Schwarz methodology is then applied to large-scale computing on a 3D linear Darcy flow where the permeability follows a log-normal random distribution.
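The vector Aitken extrapolation underlying the Aitken-Schwarz methodology can be illustrated on a synthetic linear interface iteration u_{k+1} = P u_k + b, standing in for the trace of the Schwarz iterates at the interfaces. This is a simplified dense variant (the error operator is recovered by least squares rather than through the Fourier or SVD bases of the paper); the operator, dimensions, and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# synthetic linear fixed-point iteration with spectral radius 0.9 (slow convergence)
P = rng.standard_normal((n, n))
P *= 0.9 / np.abs(np.linalg.eigvals(P)).max()
b = rng.standard_normal(n)
u_star = np.linalg.solve(np.eye(n) - P, b)      # exact fixed point

U = [rng.standard_normal(n)]
for _ in range(n + 1):                          # n+2 iterates in total
    U.append(P @ U[-1] + b)
U = np.array(U).T                               # columns are the iterates

# recover the error operator from successive differences: E_{k+1} = P E_k
E = np.diff(U, axis=1)
P_est = E[:, 1:] @ np.linalg.pinv(E[:, :-1])

# Aitken extrapolation to the limit: u* = (I - P)^{-1} (u_{k+1} - P u_k)
u_acc = np.linalg.solve(np.eye(n) - P_est, U[:, -1] - P_est @ U[:, -2])
print(np.linalg.norm(u_acc - u_star))   # far below the plain-iteration error
```

With a purely linear iteration the extrapolated limit is recovered to near machine precision from n+2 iterates, whereas the plain iteration would still carry an O(0.9^k) error.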
Directory of Open Access Journals (Sweden)
Jingang Liang
2016-06-01
Full Text Available Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. To keep particle tracking valid during transport simulations, particles need to be communicated between domains. For efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
Energy Technology Data Exchange (ETDEWEB)
Clerc, S
1998-07-01
In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with improving computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. In addition, finite volume schemes based on Godunov's approach are ill-suited to computing low-Mach-number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly, or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady-state problem. (author)
Jones, Adam; Utyuzhnikov, Sergey
2017-08-01
Turbulent flow in a ribbed channel is studied using an efficient near-wall domain decomposition (NDD) method. The NDD approach is formulated by splitting the computational domain into an inner and outer region, with an interface boundary between the two. The computational mesh covers the outer region, and the flow in this region is solved using the open-source CFD code Code_Saturne with special boundary conditions on the interface boundary, called interface boundary conditions (IBCs). The IBCs are of Robin type and incorporate the effect of the inner region on the flow in the outer region. IBCs are formulated in terms of the distance from the interface boundary to the wall in the inner region. It is demonstrated that up to 90% of the region between the ribs in the ribbed passage can be removed from the computational mesh with an error on the friction factor within 2.5%. In addition, computations with NDD are faster than computations based on low Reynolds number (LRN) models by a factor of five. Different rib heights can be studied with the same mesh in the outer region without affecting the accuracy of the friction factor. This is tested with six different rib heights in an example of a design optimisation study. It is found that the friction factors computed with NDD are almost identical to the fully-resolved results. When used for inverse problems, NDD is considerably more efficient than LRN computations because only one computation needs to be performed and only one mesh needs to be generated.
Zampini, Stefano
2017-08-03
Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to ensure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to ensure that the rest of the solution can be obtained using conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman–Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher-order elements and high-contrast problems from real-world applications.
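The Sherman–Morrison step mentioned above, applying the inverse of a rank-one modification while reusing only solves with the original matrix (so an existing subdomain factorization stays untouched), is, in a minimal dense sketch with arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = np.eye(n) * 4 + 0.3 * rng.standard_normal((n, n))   # well-conditioned base matrix
u = rng.standard_normal(n)
v = rng.standard_normal(n)
b = rng.standard_normal(n)

# Sherman-Morrison: (A + u v^T)^{-1} b expressed through two solves with A alone,
# so the factorization of A (the local subdomain solver) can be kept unchanged
Ainv_b = np.linalg.solve(A, b)
Ainv_u = np.linalg.solve(A, u)
x = Ainv_b - Ainv_u * (v @ Ainv_b) / (1.0 + v @ Ainv_u)

print(np.linalg.norm((A + np.outer(u, v)) @ x - b))   # residual near machine epsilon
```

In practice the two `solve` calls would be back-substitutions against a precomputed factorization, which is exactly why the rank-one update costs almost nothing extra.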
Energy Technology Data Exchange (ETDEWEB)
Girardi, E
2004-12-15
A new methodology for the solution of the neutron transport equation, based on domain decomposition, has been developed. This approach allows us to employ different numerical methods together for a whole-core calculation: a variational nodal method, a discrete-ordinate nodal method and a method of characteristics. These new developments permit the use of independent spatial and angular expansions, and of non-conformal Cartesian and unstructured meshes for each sub-domain, introducing a modeling flexibility not available in today's codes. The effectiveness of our multi-domain/multi-method approach has been tested on several configurations. Among them, one particular application, the benchmark model of the Phebus experimental facility at CEA Cadarache, shows why this new methodology is relevant to problems with strong local heterogeneities. This comparison showed that the decomposition method improves accuracy along with an important reduction of the computing time.
Zhao, Wei; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K
2016-01-01
Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection Reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and the spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing HYPR-NLM to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods using two sets of cardiac perfusion imaging data. The reference drawn from the comparison study includes: (1) ...
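For reference, the direct matrix-inversion baseline that HYPR-NLM is compared against works per pixel on the two energy images. A minimal sketch with assumed effective attenuation coefficients and a synthetic phantom (all numbers illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
# assumed effective attenuation coefficients (rows: low/high energy, cols: water/iodine)
M = np.array([[0.25, 4.9],
              [0.20, 2.6]])

# ground-truth basis-material images (water fraction, iodine density), 32x32
water = np.ones((32, 32))
iodine = np.zeros((32, 32))
iodine[12:20, 12:20] = 0.5

# simulated noisy low/high-energy CT images
low = M[0, 0] * water + M[0, 1] * iodine + 0.01 * rng.standard_normal((32, 32))
high = M[1, 0] * water + M[1, 1] * iodine + 0.01 * rng.standard_normal((32, 32))

# direct (per-pixel) matrix inversion: unbiased, but Minv has large entries,
# which is exactly the noise amplification the paper sets out to suppress
Minv = np.linalg.inv(M)
water_hat = Minv[0, 0] * low + Minv[0, 1] * high
iodine_hat = Minv[1, 0] * low + Minv[1, 1] * high

print(iodine_hat[12:20, 12:20].mean())   # close to the true 0.5, but noisy per pixel
```

Iter-DECT and HYPR-style methods keep this decomposition model but add regularization or filtering to tame the amplified noise.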
Le, Thien-Phu
2017-10-01
The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between the modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposed method is finally verified using numerical examples and a laboratory test.
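The modulus-maximum idea, that modes appear as local maxima of the continuous Morlet wavelet transform over frequency, can be sketched on a synthetic single-mode free decay. The FFT-based CWT below uses a common analytic Morlet convention (center frequency w0 = 6), not necessarily the regressive form used in the paper, and the modal parameters are fabricated.

```python
import numpy as np

fs = 200.0
t = np.arange(0, 20, 1 / fs)
# synthetic free-decay response of one mode (assumed f0 = 5 Hz, damping 1%)
f0, zeta = 5.0, 0.01
y = np.exp(-zeta * 2 * np.pi * f0 * t) * np.cos(
    2 * np.pi * f0 * np.sqrt(1 - zeta**2) * t)

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous analytic Morlet wavelet transform, one row per analysis frequency."""
    n = x.size
    xf = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, 1 / fs)
    out = np.empty((freqs.size, n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                 # scale matching frequency f
        psi_hat = np.pi**-0.25 * np.exp(-0.5 * (s * omega - w0)**2) * (omega > 0)
        out[i] = np.fft.ifft(xf * np.conj(psi_hat) * np.sqrt(s))
    return out

freqs = np.linspace(2, 10, 81)
W = morlet_cwt(y, fs, freqs)
# modulus maximum over a mid-signal window (avoiding the cone-of-influence edges)
ridge = freqs[np.argmax(np.abs(W[:, 400:600]).max(axis=1))]
print(ridge)   # close to the 5 Hz modal frequency
```

In the paper's method the transform is applied to the power spectral density of ambient responses and combined with an SVD step; here a single raw decay suffices to show the ridge.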
Energy Technology Data Exchange (ETDEWEB)
Mehboob, Shoaib, E-mail: smehboob@pieas.edu.pk [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Mehmood, Mazhar [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmed, Mushtaq [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Ahmad, Jamil; Tanvir, Muhammad Tauseef [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmad, Izhar [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Hassan, Syed Mujtaba ul [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan)
2017-04-15
The objective of this work is to study the changes in optical and dielectric properties with the transformation of aluminum ammonium carbonate hydroxide (AACH) to α-alumina, using terahertz time-domain spectroscopy (THz-TDS). The nanostructured AACH was synthesized by hydrothermal treatment of the raw chemicals at 140 °C for 12 h. This AACH was then calcined at different temperatures. The AACH decomposed to an amorphous phase at 400 °C and transformed to δ* + α-alumina at 1000 °C. Finally, crystalline α-alumina was achieved at 1200 °C. X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy were employed to identify the phases formed after calcination. The morphology of the samples was studied using scanning electron microscopy (SEM), which revealed that the AACH sample had a rod-like morphology that was retained in the calcined samples. THz-TDS measurements showed that AACH had the lowest refractive index in the frequency range of measurement. The refractive index at 0.1 THz increased from 2.41 for AACH to 2.58 for the amorphous phase and to 2.87 for crystalline α-alumina. The real part of the complex permittivity increased with the calcination temperature. Further, the absorption coefficient was highest for AACH and decreased with calcination temperature. The amorphous phase had a higher absorption coefficient than the crystalline alumina. - Highlights: • Aluminum oxide nanostructures were obtained by thermal decomposition of AACH. • Crystalline phases of aluminum oxide have a higher refractive index than the amorphous phase. • The removal of heavier ionic species led to lower absorption of THz radiation.
Zheng, Xiang
2015-03-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.
Directory of Open Access Journals (Sweden)
Daniel Marcsa
2015-01-01
Full Text Available The analysis and design of electromechanical devices involve the solution of large sparse linear systems, and therefore require high-performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with a parallel forward-backward solver and with a parallel Preconditioned Conjugate Gradient (PCG) solver is introduced into a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine, considering the electromagnetic field, external circuit and rotor movement. The proposed parallel direct solver and the iterative solver with two preconditioners are analyzed with respect to computational efficiency and the number of solver iterations under the different preconditioners. Simulation results for a rotating machine are also presented.
Antoine, Xavier; Lorin, Emmanuel; Bandrauk, André D.
2015-01-01
This paper is devoted to the efficient computation of the Time Dependent Schrödinger Equation (TDSE) for quantum particles subject to intense electromagnetic fields, including ionization and recombination of electrons with their parent ion. The proposed approach is based on a domain decomposition technique, allowing a fine computation of the wavefunction in the vicinity of the nuclei located in a domain Ω1 and a fast computation in a roughly meshed domain Ω2 far from ...
Blaclard, G; Lehe, R; Vay, J L
2016-01-01
With the advent of PW-class lasers, the very large laser intensities attainable on target should enable the production of intense high-order Doppler harmonics from relativistic laser-plasma mirror interactions. At present, the modeling of these harmonics with Particle-In-Cell (PIC) codes is extremely challenging, as it implies an accurate description of tens of harmonic orders on a broad range of angles. In particular, we show here that standard Finite Difference Time Domain (FDTD) Maxwell solvers used in most PIC codes partly fail to model Doppler harmonic generation because they induce numerical dispersion of electromagnetic waves in vacuum, which is responsible for a spurious angular deviation of harmonic beams. This effect was extensively studied, and a simple toy model based on the Snell-Descartes law was developed that allows us to finely predict the angular deviation of harmonics depending on the spatio-temporal resolution and the Maxwell solver used in the simulations. Our model demonstrates that the miti...
Yi, Xi; Wang, Xin; Chen, Weiting; Wan, Wenbo; Zhao, Huijuan; Gao, Feng
2014-05-01
The common approach to diffuse optical tomography is to solve a nonlinear and ill-posed inverse problem using a linearized iteration process that involves repeated use of the forward and inverse solvers on an appropriately discretized domain of interest. This scheme normally imposes severe computation and storage burdens for applications to large-sized tissues, such as breast tumor diagnosis and brain functional imaging, and prevents the use of matrix-fashioned linear inversions for improved image quality. To cope with these difficulties, we propose in this paper a parallelized full domain-decomposition scheme, which divides the whole domain into several overlapping subdomains and solves the corresponding subinversions independently within the framework of Schwarz-type iterations, with the support of a combined multicore CPU and multithread graphics processing unit (GPU) parallelization strategy. The numerical and phantom experiments both demonstrate that the proposed method can effectively reduce the computation time and memory occupation for large-sized problems and improve the quantitative performance of the reconstruction.
Bertrand, G.; Comperat, M.; Lallemant, M.; Watelle, G.
1980-03-01
Copper sulfate pentahydrate dehydration into trihydrate was investigated using monocrystalline platelets with varying crystallographic orientations. The morphological and kinetic features of the trihydrate domains were examined. Different shapes were observed: polygons (parallelograms, hexagons) and ellipses; their conditions of occurrence are reported in the (P, T) diagram. At first (for about 2 min), the ratio of the long to the short axes of elliptical domains changes with time; these subsequently develop homothetically and the rate ratio is then only pressure dependent. Temperature influence is inferred from that of pressure. Polygonal shapes are time dependent and result in ellipses. So far, no model can be put forward. Yet, qualitatively, the polygonal shape of a domain may be explained by the prevalence of the crystal arrangement and the elliptical shape by that of the solid tensorial properties. The influence of those factors might be modulated versus pressure, temperature, interface extent, and, thus, time.
Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
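The Lagrange-multiplier treatment of fault slip can be illustrated in miniature as a saddle-point (KKT) system that enforces a prescribed displacement jump across an interface. The 1-D bar, unit stiffnesses, and slip value below are illustrative assumptions, not PyLith's actual finite-element formulation.

```python
import numpy as np

# toy fault model: a 1-D bar 0-1 | 2-3 with unit springs and fixed ends,
# where a displacement jump u2 - u1 = d is prescribed across the "fault"
d = 0.01                                    # prescribed slip (assumed value)
K = np.array([[1.0, 0.0],                   # reduced stiffness for unknowns u1, u2
              [0.0, 1.0]])
C = np.array([[-1.0, 1.0]])                 # constraint row enforcing u2 - u1 = d

# saddle-point (KKT) system: [K C^T; C 0] [u; lam] = [0; d]
S = np.block([[K, C.T],
              [C, np.zeros((1, 1))]])
u1, u2, lam = np.linalg.solve(S, np.array([0.0, 0.0, d]))
print(u1, u2, lam)   # symmetric slip: u1 = -d/2, u2 = +d/2
```

The multiplier `lam` plays the role of the interface traction; in PyLith this block of the system is what the custom preconditioner mentioned above targets.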
Energy Technology Data Exchange (ETDEWEB)
Masiello, Emiliano; Martin, Brunella; Do, Jean-Michel, E-mail: emiliano.masiello@cea.fr, E-mail: brunella.martin@gmail.com, E-mail: jean-michel.do@cea.fr [Commissariat a l' Energie Atomique et aux Energies Alternatives, Direction de l' Energie Nucleaire, Service d' Etudes de Reacteurs et de Mathematiques Appliquees, Gif sur Yvette, (France)
2011-07-01
A new development of the IDT solver is presented for large reactor core applications in XYZ geometries. The multigroup discrete-ordinates neutron transport equation is solved using a Domain-Decomposition (DD) method coupled with Coarse-Mesh Finite Differences (CMFD). The latter is used to accelerate the DD convergence rate. In particular, the external power iterations are preconditioned to stabilize the oscillatory behavior of the DD iterative process. A set of critical 2-D and 3-D numerical tests on a single processor is presented to analyze the performance of the method. The results show that the application of the CMFD to the DD can be a good candidate for large 3-D full-core parallel applications. (author)
Parallel QR Decomposition for Electromagnetic Scattering Problems
National Research Council Canada - National Science Library
Boleng, Jeff
1997-01-01
This report introduces a new parallel QR decomposition algorithm. Test results are presented for several problem sizes, numbers of processors, and data from the electromagnetic scattering problem domain...
DEFF Research Database (Denmark)
Merker, Martin
The topic of this PhD thesis is graph decompositions. While there exist various kinds of decompositions, this thesis focuses on three problems concerning edge-decompositions. Given a family of graphs H we ask the following question: When can the edge-set of a graph be partitioned so that each part induces a subgraph isomorphic to a member of H? Such a decomposition is called an H-decomposition. Apart from the existence of an H-decomposition, we are also interested in the number of parts needed in an H-decomposition. Firstly, we show that for every tree T there exists a constant k(T) such that every k(T)-edge-connected graph whose size is divisible by the size of T admits a T-decomposition. This proves a conjecture by Barát and Thomassen from 2006. Moreover, we introduce a new arboricity notion where we restrict the diameter of the trees in a decomposition into forests. We conjecture...
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems, and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
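The Schwarz methods that these algorithms reduce to in the linear case can be illustrated on a model problem. The sketch below (our own minimal example, not from the paper) applies multiplicative Schwarz — alternating solves on two overlapping subdomains — to the finite-difference discretization of -u'' = f on (0, 1) with homogeneous Dirichlet boundary conditions; the grid size, subdomain split, and sweep count are arbitrary choices.

```python
import numpy as np

# Model problem: -u'' = 1 on (0,1), u(0) = u(1) = 0, central differences.
n = 99                          # interior grid points
h = 1.0 / (n + 1)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = h**2 * np.ones(n)           # right-hand side for f = 1

s1 = np.arange(0, 60)           # left subdomain (x < 0.6)
s2 = np.arange(40, n)           # right subdomain (x > 0.4), overlap of 20

u = np.zeros(n)
for _ in range(50):             # one sweep = one local solve per subdomain
    for s in (s1, s2):
        r = b - A @ u                           # current global residual
        u[s] += np.linalg.solve(A[np.ix_(s, s)], r[s])

u_direct = np.linalg.solve(A, b)
print(np.max(np.abs(u - u_direct)))    # converges to the global solution
```

With this generous overlap the error contracts by a constant factor per sweep, so a few dozen sweeps reach machine precision; the additive variant would instead apply both local corrections to the same residual (with damping), trading convergence speed for parallelism, as the abstract notes.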
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals or decomposition techniques for X-ray fluorescence and nuclear methods. © 1992.
DEFF Research Database (Denmark)
Dyson, Mark
2003-01-01
Not only have design tools changed character, but also the processes associated with them. Today, the composition of problems and their decomposition into parcels of information call for a new paradigm. This paradigm builds on the networking of agents and specialisations, and the paths of communication...
Mode decomposition evolution equations.
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2012-03-01
Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring the fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs in various types of signal and image processing. Despite great progress in PDE based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis are limited to PDE based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high order PDE based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be
Waring decompositions of monomials
National Research Council Canada - National Science Library
Buczyńska, Weronika; Buczyński, Jarosław; Teitler, Zach
2013-01-01
.... We prove that any Waring decomposition of a monomial is obtained from a complete intersection ideal, determine the dimension of the set of Waring decompositions, and give the conditions under which...
Parallel decomposition methods for the solution of electromagnetic scattering problems
Cwik, Tom
1992-01-01
This paper contains an overview of the methods used in decomposing solutions to scattering problems onto coarse-grained parallel processors. Initially, a short summary of relevant computer architecture is presented as background to the subsequent discussion. After the introduction of a programming model for problem decomposition, specific decompositions of finite-difference time-domain, finite-element, and integral-equation solutions to Maxwell's equations are presented. The paper concludes with an outline of possible software-assisted decomposition methods and a summary.
Martin, R.; Gonzalez Ortiz, A.
momentum exchange forces and the interphase heat exchanges are treated implicitly to ensure stability. To further reduce the computational cost, a decomposition of the global domain into N subdomains is introduced, and all the previous algorithms applied to one block are performed in each block. At the interface between subdomains, an overlapping procedure is used. Another advantage is that different sets of equations can be solved in each block, such as fluid/structure interactions. We show here the hydrodynamics of a two-phase flow in a vertical conduit, as in industrial fluid catalytic cracking plants with complex geometry. With an initial Richardson number of 0.16, slightly higher than the critical Richardson number of 0.1, particles and water vapor are injected at the bottom of the riser. Countercurrents appear near the walls and gravity effects begin to dominate, inducing an increase of particulate volume fractions near the walls. We show here the hydrodynamics for 13 s.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Self-consistent field theory simulations of polymers on arbitrary domains
Energy Technology Data Exchange (ETDEWEB)
Ouaknin, Gaddiel, E-mail: gaddielouaknin@umail.ucsb.edu [Department of Mechanical Engineering, University of California, Santa Barbara, CA 93106-5070 (United States); Laachi, Nabil; Delaney, Kris [Materials Research Laboratory, University of California, Santa Barbara, CA 93106-5080 (United States); Fredrickson, Glenn H. [Materials Research Laboratory, University of California, Santa Barbara, CA 93106-5080 (United States); Department of Chemical Engineering, University of California, Santa Barbara, CA 93106-5080 (United States); Department of Materials, University of California, Santa Barbara, CA 93106-5050 (United States); Gibou, Frederic [Department of Mechanical Engineering, University of California, Santa Barbara, CA 93106-5070 (United States); Department of Computer Science, University of California, Santa Barbara, CA 93106-5110 (United States)
2016-12-15
We introduce a framework for simulating the mesoscale self-assembly of block copolymers in arbitrary confined geometries subject to Neumann boundary conditions. We employ a hybrid finite difference/volume approach to discretize the mean-field equations on an irregular domain represented implicitly by a level-set function. The numerical treatment of the Neumann boundary conditions is sharp, i.e. it avoids an artificial smearing in the irregular domain boundary. This strategy enables the study of self-assembly in confined domains and enables the computation of physically meaningful quantities at the domain interface. In addition, we employ adaptive grids encoded with Quad-/Oc-trees in parallel to automatically refine the grid where the statistical fields vary rapidly as well as at the boundary of the confined domain. This approach results in a significant reduction in the number of degrees of freedom and makes the simulations in arbitrary domains using effective boundary conditions computationally efficient in terms of both speed and memory requirement. Finally, in the case of regular periodic domains, where pseudo-spectral approaches are superior to finite differences in terms of CPU time and accuracy, we use the adaptive strategy to store chain propagators, reducing the memory footprint without loss of accuracy in computed physical observables.
Haris, A.; Morena, V.; Riyanto, A.; Zulivandama, S. R.
2017-07-01
Non-stationary signals from seismic surveys are difficult to interpret directly with time-domain analysis. Spectral decomposition is a spectral analysis method that can analyze non-stationary signals in the frequency domain. The Fast Fourier Transform is commonly used for spectral decomposition; however, it is limited by its fixed analysis window and produces poor-quality images of low-frequency shadows. The S-Transform and the Empirical Mode Decomposition (EMD) are other spectral decomposition methods that can be used to enhance low-frequency shadows. In this research, a comparison of the S-Transform and EMD methods, showing their different imaging of the low-frequency shadow zone, is applied to the Eldo Field, Jambi Province. The spectral decomposition result based on the EMD method produced better imaging of the low-frequency shadow zone at tuning thickness compared to the S-Transform method.
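The core idea — that a non-stationary signal only reveals its structure when analyzed in local frequency windows — can be sketched with a windowed Fourier transform. This is our own illustration of time-frequency localization, not the S-transform or EMD implementations compared in the abstract; the signal and window parameters are invented.

```python
import numpy as np

# A non-stationary test signal: a 10 Hz tone followed by a 40 Hz tone.
fs = 1000                              # sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
sig = np.where(t < 1.0, np.sin(2 * np.pi * 10 * t),
               np.sin(2 * np.pi * 40 * t))

def dominant_freq(x, fs):
    """Frequency of the largest spectral peak of a windowed segment."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spec)]

early = dominant_freq(sig[200:700], fs)   # window inside the first second
late = dominant_freq(sig[1200:1700], fs)  # window inside the second second
print(early, late)                        # dominant frequency changes in time
```

A single FFT over the whole record would show both peaks with no indication of when each frequency is present; localizing the analysis window recovers that information, which is what scalable-window methods such as the S-transform refine further.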
Thermal decomposition of hemicelluloses
Werner, Kajsa; Pommer, Linda; Broström, Markus
2014-01-01
Decomposition modeling of biomass often uses commercially available xylan as a model compound representing hemicelluloses, not taking into account the heterogeneous nature of that group of carbohydrates. In this study, the thermal decomposition behavior of seven different hemicelluloses (beta-glucan, arabinogalactan, arabinoxylan, galactomannan, glucomannan, xyloglucan, and xylan) was investigated in an inert atmosphere using (i) thermogravimetric analysis coupled to Fourier transform infrared spec...
Multiresolution signal decomposition schemes
J. Goutsias (John); H.J.A.M. Heijmans (Henk)
1998-01-01
[PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis
Azimuthal decomposition of optical modes
CSIR Research Space (South Africa)
Dudley, Angela L
2012-07-01
Full Text Available This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes requires two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...
Graph Decompositions and Factorizing Permutations
Directory of Open Access Journals (Sweden)
Christian Capelle
2002-12-01
Full Text Available A factorizing permutation of a given graph is simply a permutation of the vertices in which all decomposition sets appear to be factors. Such a concept seems to play a central role in recent papers dealing with graph decomposition. It is applied here to modular decomposition, and we propose a linear algorithm that computes the whole decomposition tree when a factorizing permutation is provided. This algorithm can be seen as a common generalization of those of Ma and Hsu for modular decomposition of chordal graphs and of Habib, Huchard and Spinrad for inheritance graph decomposition. It also suggests many new decomposition algorithms for various notions of graph decomposition.
Daverman, Robert J
2007-01-01
Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to eve
Polyethylene hydroperoxide decomposition products
National Research Council Canada - National Science Library
Lacoste, J; Carlsson, David James (Dave); Falicki, S; Wiles, D. M
1991-01-01
The decomposition products from pre-oxidized, linear low-density polyethylene have been identified and quantified for films exposed in the absence of oxygen to ultra-violet irradiation, heat or γ-irradiation...
Litter Decomposition Rates, 2015
U.S. Geological Survey, Department of the Interior — This data set contains decomposition rates for litter of Salicornia pacifica, Distichlis spicata, and Deschampsia cespitosa buried at 7 tidal marsh sites in 2015....
Orthogonal tensor decompositions
Energy Technology Data Exchange (ETDEWEB)
Tamara G. Kolda
2000-03-01
The authors explore the orthogonal decomposition of tensors (also known as multi-dimensional arrays or n-way arrays) using two different definitions of orthogonality. They present numerous examples to illustrate the difficulties in understanding such decompositions. They conclude with a counterexample to a tensor extension of the Eckart-Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl. 269(1998):307--329].
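The matrix (two-way) case behind the counterexample can be sketched directly: for matrices, the Eckart-Young theorem guarantees that the truncated SVD is the best rank-k approximation in the Frobenius norm — the property whose tensor analogue the report shows fails. The example below is our own illustration with arbitrary random data.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-2 approximation

# Eckart-Young: the optimal Frobenius-norm error equals the norm of the
# discarded singular values.
err = np.linalg.norm(A - A_k)
print(err, np.sqrt(np.sum(s[k:] ** 2)))
```

For tensors of order three and higher, no analogous guarantee holds: truncating an orthogonal decomposition need not give the best low-rank approximation, which is the content of the counterexample to the Leibovici-Sabatier extension.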
Snapshot wavefield decomposition for heterogeneous velocity media
Holicki, M.E.; Wapenaar, C.P.A.
2017-01-01
We propose a novel directional decomposition operator for wavefield snapshots in heterogeneous-velocity media. The proposed operator demonstrates the link between the amplitude of pressure and particle-velocity plane waves in the wavenumber domain. The proposed operator requires two spatial Fourier transforms (one forward and one backward) per spatial dimension and time slice. To illustrate the operator we demonstrate its applicability to heterogeneous velocity models using a simple velocity-b...
Energy decompositions according to physical space partitioning schemes
Alcoba, Diego R.; Torre, Alicia; Lain, Luis; Bochicchio, Roberto C.
2005-02-01
This work describes simple decompositions of the energy of molecular systems according to schemes that partition the three-dimensional space. The components of those decompositions depend on one and two atomic domains, thus providing meaningful chemical information about the nature of the different bondings among the atoms which compose the system. Our algorithms can be applied at any level of theory (correlated or uncorrelated wave functions). The results reported here, obtained at the Hartree-Fock level in selected molecules, show good agreement with the chemical picture of molecules and require a low computational cost in comparison with other previously reported decompositions.
Decomposing Nekrasov decomposition
Energy Technology Data Exchange (ETDEWEB)
Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)
2016-02-16
AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.
DEFF Research Database (Denmark)
Haberland, Hartmut
2005-01-01
The domain concept, originally suggested by Schmidt-Rohr in the 1930s (as credited in Fishman’s writings in the 1970s), was an attempt to sort out different areas of language use in multilingual societies, which are relevant for language choice. In Fishman’s version, domains were considered as theoretical constructs that can explain language choice, which were supposed to be a more powerful explanatory tool than more obvious (and observable) parameters like topic, place (setting) and interlocutor. In the meantime, at least in Scandinavia, the term ‘domain’ has been taken up in the debate among politicians and in the media, especially in the discussion whether some languages undergo ‘domain loss’ vis-à-vis powerful international languages like English. An objection that has been raised here is that domains, as originally conceived, are parameters of language choice and not properties of languages...
Symmetric Tensor Decomposition
DEFF Research Database (Denmark)
Brachat, Jerome; Comon, Pierre; Mourrain, Bernard
2010-01-01
We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables... Exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of Hankel (and quasi-Hankel) matrices, derived from multivariate polynomials and normal form computations. This leads to the resolution of systems...
Kosambi and Proper Orthogonal Decomposition
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 16, Issue 6. Kosambi and the Proper Orthogonal Decomposition. Roddam Narasimha. Keywords: proper orthogonal decomposition; Karhunen–Loève expansion; statistics in function space; characteristic eddies; special calculating machines.
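The proper orthogonal decomposition named in the keywords is, in its discrete form, an SVD of a snapshot matrix: the leading left singular vectors are the energetically dominant "characteristic eddies". The sketch below is our own minimal illustration with a synthetic two-mode ensemble; the mode shapes, amplitudes, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 64)
modes_true = np.vstack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])

# 50 snapshots: random mixtures of two coherent structures plus weak noise
coeffs = rng.standard_normal((50, 2)) * np.array([5.0, 2.0])
snapshots = coeffs @ modes_true + 0.01 * rng.standard_normal((50, 64))

# POD: SVD of the (mean-subtracted) snapshot matrix; squared singular
# values give the energy captured by each mode.
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(0), full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:2].sum())      # two POD modes capture nearly all the energy
```

Truncating to the modes that carry most of the energy is exactly the data-reduction step that makes POD (equivalently, the Karhunen-Loève expansion) useful for turbulence statistics and reduced-order modeling.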
Formal Language Decomposition into Semantic Primes
Directory of Open Access Journals (Sweden)
Johannes FÄHNDRICH
2014-10-01
Full Text Available This paper describes an algorithm for semantic decomposition. To that end, we survey languages used to enrich contextual information with semantic descriptions. Such descriptions can, for example, be applied to enable reasoning when collecting vast amounts of information. In particular, we focus on the elements of the languages that make up their semantics. To do so, we compare the expressiveness of the well-known languages OWL, PDDL and MOF with a theory from linguistics called the Natural Semantic Metalanguage. We then analyze how the semantics of the language is built up and describe how semantic decomposition based on the semantic primes can be used for a so-called mental lexicon. This mental lexicon can be used to reason upon semantic service descriptions in the research domain of service matchmaking.
Wood decomposition as influenced by invertebrates.
Ulyshen, Michael D
2016-02-01
The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Cacciatori, Sergio L; Marrani, Alessio
2013-01-01
By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.
Vibration fatigue using modal decomposition
Mršnik, Matjaž; Slavič, Janko; Boltežar, Miha
2018-01-01
Vibration-fatigue analysis deals with the material fatigue of flexible structures operating close to natural frequencies. Based on the uniaxial stress response, calculated in the frequency domain, the high-cycle fatigue model using the S-N curve material data and the Palmgren-Miner hypothesis of damage accumulation is applied. The multiaxial criterion is used to obtain the equivalent uniaxial stress response, followed by the spectral moment approach to the cycle-amplitude probability density estimation. The vibration-fatigue analysis relates the fatigue analysis in the frequency domain to the structural dynamics. However, once the stress response within a node is obtained, the physical model of the structure dictating that response is discarded and does not propagate through the fatigue-analysis procedure. The structural model can be used to evaluate how specific dynamic properties (e.g., damping, modal shapes) affect the damage intensity. A new approach based on modal decomposition is presented in this research that directly links the fatigue-damage intensity with the dynamic properties of the system. It thus offers a valuable insight into how different modes of vibration contribute to the total damage to the material. A numerical study was performed showing good agreement between results obtained using the newly presented approach and those obtained using the classical method, especially with regard to the distribution of damage intensity and critical point location. The presented approach also offers orders of magnitude faster calculation in comparison with the conventional procedure. Furthermore, it can be applied in a straightforward way to strain experimental modal analysis results, taking advantage of experimentally measured strains.
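The Palmgren-Miner damage-accumulation step named in the abstract can be sketched in a few lines: damage is summed as D = Σ n_i / N_i, where n_i is the counted number of cycles at stress amplitude s_i and N_i comes from an S-N curve, with failure predicted when D reaches 1. The S-N constants and the stress spectrum below are invented for illustration; they are not from the paper.

```python
C, b = 1e12, 3.0                    # hypothetical S-N curve: N = C * s**(-b)

def cycles_to_failure(s):
    """Cycles to failure at stress amplitude s, from the assumed S-N curve."""
    return C * s ** (-b)

# Counted cycles n_i at stress amplitudes s_i (e.g. from rainflow counting
# or, in the frequency domain, from a cycle-amplitude probability density)
spectrum = [(100.0, 2e5), (150.0, 5e4), (200.0, 1e4)]

# Palmgren-Miner linear damage accumulation
D = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(D)                            # failure is predicted when D reaches 1
```

In the frequency-domain setting of the abstract, the discrete spectrum above is replaced by an integral of the cycle-amplitude probability density against 1/N(s), but the accumulation rule is the same.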
Thermal decomposition of illite
Directory of Open Access Journals (Sweden)
Araújo José Humberto de
2004-01-01
Full Text Available The effect of heat treatment on illite in air at temperatures ranging from 750 to 1150 °C was studied using the Mössbauer effect in 57Fe. The dependence of the Mössbauer parameters and the relative percentage of the radiation absorption area was measured as a function of the firing temperature. The onset of thermal structural decomposition occurred at 800 °C. With rising temperature, the formation of hematite (Fe2O3) increased at the expense of the silicate mineral.
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
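The eigendecomposition step that such spectral methods start from can be illustrated on synthetic data: build a Gaussian affinity matrix, form the normalized graph Laplacian, and read cluster membership off the sign pattern of the second eigenvector. This is a generic spectral-clustering sketch with an ad-hoc kernel width, not the kernel-decomposition or NMF procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 0.1, (20, 2)),      # cluster A near (0, 0)
                 rng.normal(3, 0.1, (20, 2))])     # cluster B near (3, 3)

# Gaussian affinity matrix (kernel width chosen ad hoc)
d2 = np.sum((pts[:, None] - pts[None, :]) ** 2, axis=-1)
W = np.exp(-d2 / 2)

# Symmetrically normalized graph Laplacian
deg = W.sum(1)
L_sym = np.eye(40) - W / np.sqrt(np.outer(deg, deg))

# The sign of the second-smallest eigenvector (Fiedler vector) splits
# the two clusters.
vals, vecs = np.linalg.eigh(L_sym)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```

The paper's contribution replaces the ad-hoc affinity with one built from a non-parametric density estimator and replaces the sign rule with a nonnegative matrix factorization yielding posterior class probabilities.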
DEFF Research Database (Denmark)
Hjørland, Birger
2017-01-01
The domain-analytic approach to knowledge organization (KO) (and to the broader field of library and information science, LIS) is outlined. The article reviews the discussions and proposals on the definition of domains, and provides an example of a domain-analytic study in the field of art studies. Varieties of domain analysis as well as criticism and controversies are presented and discussed.
Evaluation of Damping Using Frequency Domain Operational Modal Analysis Techniques
DEFF Research Database (Denmark)
Bajric, Anela; Georgakis, Christos T.; Brincker, Rune
2015-01-01
domain techniques, the Frequency Domain Decomposition (FDD) and the Frequency Domain Polyreference (FDPR). The response of a two degree-of-freedom (2DOF) system is numerically established with specified modal parameters subjected to white noise loading. The system identification is evaluated with well...
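The Frequency Domain Decomposition evaluated here rests on a simple mechanism: average a cross-spectral density matrix of the measured responses over segments, then take an SVD at every frequency line; peaks of the first singular value indicate modes. The sketch below is our own minimal two-channel illustration with an invented 10 Hz "modal" response, not the benchmark of the paper.

```python
import numpy as np

fs, nseg, nfft = 256, 16, 256
t = np.arange(nseg * nfft) / fs
rng = np.random.default_rng(3)

mode = np.sin(2 * np.pi * 10 * t)                 # "modal" response at 10 Hz
y = np.vstack([mode, 0.6 * mode]) + 0.1 * rng.standard_normal((2, t.size))

freqs = np.fft.rfftfreq(nfft, 1 / fs)
G = np.zeros((freqs.size, 2, 2), dtype=complex)   # averaged CSD matrices
for k in range(nseg):
    Y = np.fft.rfft(y[:, k * nfft:(k + 1) * nfft], axis=1)
    G += np.einsum('if,jf->fij', Y, Y.conj()) / nseg

# First singular value of G(f) at each frequency line peaks at the mode
s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
print(freqs[np.argmax(s1)])
```

At the peak frequency, the first left singular vector approximates the mode shape; damping estimation (the topic of the abstract) then requires further steps such as transforming the identified single-mode spectral bell back to the time domain.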
Directory of Open Access Journals (Sweden)
Liao Li
2010-10-01
Full Text Available Abstract Background Protein-protein interaction (PPI plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. Results In this paper, we propose a computational method to predict DDI using support vector machines (SVMs, based on domains represented as interaction profile hidden Markov models (ipHMM where interacting residues in domains are explicitly modeled according to the three dimensional structural information available at the Protein Data Bank (PDB. Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD. Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure, an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. Conclusions We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines. Datasets and source code are freely available on
Erbium hydride decomposition kinetics.
Energy Technology Data Exchange (ETDEWEB)
Ferrizz, Robert Matthew
2006-11-01
Thermal desorption spectroscopy (TDS) is used to study the decomposition kinetics of erbium hydride thin films. The TDS results presented in this report are analyzed quantitatively using Redhead's method to yield kinetic parameters (E_A ≈ 54.2 kcal/mol), which are then utilized to predict hydrogen outgassing in vacuum for a variety of thermal treatments. Interestingly, it was found that the activation energy for desorption can vary by more than 7 kcal/mol (0.30 eV) for seemingly similar samples. In addition, small amounts of less-stable hydrogen were observed for all erbium dihydride films. A detailed explanation of several approaches for analyzing thermal desorption spectra to obtain kinetic information is included as an appendix.
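Redhead's peak-maximum method, cited in this abstract, has a closed form for first-order desorption. The sketch below uses illustrative values of peak temperature, heating rate, and attempt frequency (not the paper's data) to show how an activation energy of this order of magnitude follows.

```python
# Sketch of Redhead's analysis for first-order desorption:
#   E = R * Tp * (ln(nu * Tp / beta) - 3.64),
# valid for nu/beta roughly in the 1e8..1e13 K^-1 range.
# Tp, beta, and nu below are assumed illustrative values.
import math

R = 1.987e-3          # gas constant, kcal/(mol*K)
nu = 1.0e13           # assumed attempt frequency, s^-1
beta = 1.0            # heating rate, K/s
Tp = 900.0            # desorption peak temperature, K

E = R * Tp * (math.log(nu * Tp / beta) - 3.64)
print(round(E, 1))    # activation energy in kcal/mol, same order as reported
```

Because E depends only logarithmically on the assumed attempt frequency, modest errors in nu shift E by only a few kcal/mol, which is consistent with the sample-to-sample spread the report mentions.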
Decomposition methods for unsupervised learning
DEFF Research Database (Denmark)
Mørup, Morten
2008-01-01
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-12-01
PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
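The idea of decomposing a time-dependent system into time blocks and preconditioning with independent block solves can be illustrated on a toy all-at-once system. This is a minimal sketch, assuming backward Euler for u' = -u and the zero-overlap variant of additive Schwarz (i.e., block Jacobi in time); it is not the paper's solver. Because inter-block coupling only runs from past to future, the error propagator is nilpotent and Richardson iteration converges in at most nblocks sweeps.

```python
# Toy sketch (illustrative assumptions, not the paper's method): decompose the
# all-at-once backward-Euler system for u' = -u, u(0) = 1 into time blocks and
# precondition with independent per-block solves.

n, h = 12, 0.1                     # number of time steps and step size
nblocks, bs = 3, 4                 # 3 time blocks of 4 steps each

# Global system L u = b:  (1+h)*u_k - u_{k-1} = 0, with u_{-1} = 1 moved to b.
b = [1.0] + [0.0] * (n - 1)

def apply_L(u):
    return [(1 + h) * u[k] - (u[k - 1] if k > 0 else 0.0) for k in range(n)]

def precond(r):
    """Solve each time block independently (forward substitution per block)."""
    z = [0.0] * n
    for blk in range(nblocks):
        for k in range(blk * bs, (blk + 1) * bs):
            inc = z[k - 1] if k > blk * bs else 0.0  # no coupling across blocks
            z[k] = (r[k] + inc) / (1 + h)
    return z

u = [0.0] * n
for sweep in range(nblocks):       # exact after nblocks sweeps (nilpotency)
    Lu = apply_L(u)
    r = [b[k] - Lu[k] for k in range(n)]
    z = precond(r)
    u = [u[k] + z[k] for k in range(n)]

Lu = apply_L(u)
residual = max(abs(b[k] - Lu[k]) for k in range(n))
print(residual)
```

With overlap (as in the paper) the preconditioner sums corrections from overlapping windows instead; the zero-overlap case is kept here because its convergence in exactly nblocks sweeps is easy to verify.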
Domain decomposition methods for space fractional partial differential equations
Jiang, Yingjun; Xu, Xuejun
2017-12-01
A two-level additive Schwarz preconditioner is proposed for solving the algebraic systems resulting from the finite element approximations of space fractional partial differential equations (SFPDEs). It is shown that the condition number of the preconditioned system is bounded by C (1 + H / δ), where H is the maximum diameter of subdomains and δ is the overlap size among the subdomains. Numerical results are given to support our theoretical findings.
Decomposition of the Security Requirements for Connected Information Domains
Schotanus, H.A.; Boonstra, D.; Broenink, E.G.
2011-01-01
The introduction of network enabled capabilities (NEC) changed the way defence organisations look at their IT infrastructure. Finding the right balance between security and duty-to-share has proven to be a difficult challenge. The situations are complex and may lead to high security requirements
Spectral Tensor-Train Decomposition
DEFF Research Database (Denmark)
Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.
2016-01-01
The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT...... (i.e., the “cores”) comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT......
Comparison of Accuracy and Efficiency of Time-domain Schemes for Calculating Synthetic Seismograms
Mizutani, Hiromitsu; Geller, Robert J.; Takeuchi, Nozomu
2000-04-01
We conduct numerical experiments for several simple models to illustrate the advantages and disadvantages of various schemes for computing synthetic seismograms in the time domain. We consider both schemes that use the pseudo-spectral method (PSM) to compute spatial derivatives and schemes that use the finite difference method (FDM) to compute spatial derivatives. We show that schemes satisfying the criterion for optimal accuracy of Geller and Takeuchi (1995) are significantly more cost-effective than non-optimally accurate schemes of the same type. We then compare optimally accurate PSM schemes to optimally accurate FDM schemes. For homogeneous or smoothly varying heterogeneous media, PSM schemes require significantly fewer grid points per wavelength than FDM schemes, and are thus more cost-effective. In contrast, we show that FDM schemes are more cost-effective for media with sharp boundaries or steep velocity gradients. Thus FDM schemes appear preferable to PSM schemes for practical seismological applications. We analyze the solution error of various schemes and show that widely cited Lax-Wendroff PSM or FDM schemes that are frequently referred to as higher order schemes are in fact equivalent to second-order optimally accurate PSM or FDM schemes implemented as two-step (predictor-corrector) schemes. The error of solutions obtained using such schemes is thus second-order, rather than fourth-order.
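The PSM-versus-FDM accuracy gap discussed in this abstract is easy to reproduce on a coarse periodic grid: a spectral (DFT-based) derivative of a smooth function is exact to roundoff, while a second-order central difference carries a percent-level error at the same resolution. This is an illustrative sketch with a naive O(N²) DFT to stay dependency-free; the grid size and test function are assumptions, not the authors' experiments.

```python
# Compare derivative accuracy of a pseudo-spectral (DFT) scheme and a
# 2nd-order central finite difference for f = sin(x) on a coarse periodic grid.
import cmath, math

N = 16
x = [2 * math.pi * j / N for j in range(N)]
f = [math.sin(v) for v in x]

# Pseudo-spectral derivative: transform, multiply by i*k, transform back.
F = [sum(f[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
     for k in range(N)]
def wavenum(k):                     # signed wavenumber; zero out the Nyquist bin
    if k == N // 2: return 0
    return k if k < N // 2 else k - N
dF = [1j * wavenum(k) * F[k] for k in range(N)]
df_spec = [sum(dF[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real
           for n in range(N)]

# Central finite difference on the same grid.
hgrid = 2 * math.pi / N
df_fd = [(f[(j + 1) % N] - f[(j - 1) % N]) / (2 * hgrid) for j in range(N)]

exact = [math.cos(v) for v in x]
err_spec = max(abs(a - b) for a, b in zip(df_spec, exact))
err_fd = max(abs(a - b) for a, b in zip(df_fd, exact))
print(err_spec, err_fd)   # spectral is exact here; FD error is a few percent
```

This mirrors the abstract's point for smooth media: the PSM needs far fewer grid points per wavelength. The FDM's advantage at sharp interfaces is a separate matter that this smooth test does not capture.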
AUTONOMOUS GAUSSIAN DECOMPOSITION
Energy Technology Data Exchange (ETDEWEB)
Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.; Stanimirović, Snežana; Babler, Brian [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States); Heiles, Carl [Radio Astronomy Lab, UC Berkeley, 601 Campbell Hall, Berkeley, CA 94720 (United States); Hennebelle, Patrick [Laboratoire AIM, Paris-Saclay, CEA/IRFU/SAp-CNRS-Université Paris Diderot, F-91191 Gif-sur Yvette Cedex (France); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, 1003 Lopezville, Socorro, NM 87801 (United States); Dickey, John, E-mail: rlindner@astro.wisc.edu [University of Tasmania, School of Maths and Physics, Private Bag 37, Hobart, TAS 7001 (Australia)
2015-04-15
We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
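The "derivative spectroscopy" idea behind AGD, using derivatives of the spectrum to guess how many Gaussian components are present and where, can be sketched as follows. The synthetic two-component spectrum, grid, and threshold are illustrative assumptions; AGD itself additionally uses machine learning to tune such thresholds.

```python
# Sketch: local minima of the second derivative of a blended spectrum flag
# candidate Gaussian components (their locations seed a subsequent fit).
import math

x = [i * 1.0 for i in range(101)]
def gauss(v, mu, sig):
    return math.exp(-0.5 * ((v - mu) / sig) ** 2)

# Noise-free toy spectrum with two components centered at 30 and 70.
spec = [gauss(v, 30.0, 5.0) + 0.7 * gauss(v, 70.0, 6.0) for v in x]

# Second derivative by central differences (unit grid spacing).
d2 = [spec[i - 1] - 2 * spec[i] + spec[i + 1] for i in range(1, 100)]

# Candidate components: strict local minima of d2 that are sufficiently negative
# (a Gaussian's second derivative is most negative at its center).
guesses = [i + 1 for i in range(1, len(d2) - 1)
           if d2[i] < d2[i - 1] and d2[i] < d2[i + 1] and d2[i] < -0.005]
print(guesses)            # indices near the true centers
```

On noisy data the derivatives would first be regularized (smoothed), which is where the machine-learned smoothing parameter in AGD comes in.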
Kharchenko, Dmitrii O.; Kharchenko, Vasyl O.; Lysenko, Irina O.; Shuda, Irina A.
2017-11-01
We present a comprehensive study of phase decomposition of binary alloys, taking into account lattice mismatch, coupling of both solute and vacancy concentrations with elastic deformation, and multiplicative noise satisfying the fluctuation-dissipation relation. We discuss scaling dynamics and the universality of domain size growth, and numerically verify the delayed growth of the mean domain size caused by field-dependent mobilities. It is shown that vacancy-deformation coupling leads to vacancy agglomeration in the soft phase, which suppresses phase decomposition at early stages and promotes an increase in the domain size at late stages.
NRSA enzyme decomposition model data
U.S. Environmental Protection Agency — Microbial enzyme activities measured at more than 2000 US streams and rivers. These enzyme data were then used to predict organic matter decomposition and microbial...
Decomposition of Network Communication Games
Dietzenbacher, Bas; Borm, Peter; Hendrickx, Ruud
2015-01-01
Using network control structures this paper introduces network communication games as a generalization of vertex games and edge games corresponding to communication situations and studies their decomposition into unanimity games. We obtain a relation between the dividends of the network
Decomposition Bounds for Marginal MAP
PING, WEI; Liu,Qiang; Ihler, Alexander
2015-01-01
Marginal MAP inference involves making MAP predictions in systems defined with latent variables or missing information. It is significantly more difficult than pure marginalization and MAP tasks, for which a large class of efficient and convergent variational algorithms, such as dual decomposition, exist. In this work, we generalize dual decomposition to a generic power sum inference task, which includes marginal MAP, along with pure marginalization and MAP, as special cases. Our method is ba...
Facility Location Using Cross Decomposition
Jackson, Leroy A.
1995-01-01
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Determining the best base stationing for military units can be modeled as a capacitated facility location problem with sole sourcing and multiple resource categories. Computational experience suggests that cross decomposition, a unification of Benders Decomposition and Lagrangean relaxation, is superior to other contempo...
Pressure-induced decomposition of indium hydroxide.
Gurlo, Aleksander; Dzivenko, Dmytro; Andrade, Miria; Riedel, Ralf; Lauterbach, Stefan; Kleebe, Hans-Joachim
2010-09-15
A static pressure-induced decomposition of indium hydroxide into metallic indium that takes place at ambient temperature is reported. The lattice parameter of c-In(OH)3 decreased upon compression from 7.977(2) to approximately 7.45 Å at 34 GPa, corresponding to a decrease in specific volume of approximately 18%. Fitting the second-order Birch-Murnaghan equation of state to the obtained compression data gave a bulk modulus of 99 ± 3 GPa for c-In(OH)3. The c-In(OH)3 crystals with a size of approximately 100 nm are comminuted upon compression, as indicated by the grain-size reduction reflected in broadening of the diffraction reflections and the appearance of smaller (approximately 5 nm) incoherently oriented domains in TEM. The rapid decompression of compressed c-In(OH)3 leads to partial decomposition of indium hydroxide into metallic indium, mainly as a result of localized stress gradients caused by relaxation of the highly disordered indium sublattice in indium hydroxide. This partial decomposition of indium hydroxide into metallic indium is irreversible, as confirmed by angle-dispersive X-ray diffraction, transmission electron microscopy imaging, Raman scattering, and FTIR spectroscopy. Recovered c-In(OH)3 samples become completely black and nontransparent and show typical features of metals, i.e., a falling absorption in the 100-250 cm⁻¹ region accompanied by a featureless spectrum in the 250-2500 cm⁻¹ region in the Raman spectrum and Drude-like absorption of free electrons in the region of 4000-8000 cm⁻¹ in the FTIR spectrum. These features were not observed in the initial c-In(OH)3, which is a typical white wide-band-gap semiconductor.
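The second-order Birch-Murnaghan fit mentioned in this abstract is linear in the bulk modulus B0, so the least-squares estimate reduces to a simple ratio. The sketch below uses synthetic (V, P) points generated from an assumed B0 of 99 GPa (the order of the reported value), not the paper's measured data.

```python
# Sketch of fitting the second-order Birch-Murnaghan equation of state:
#   P(V) = (3/2) * B0 * [(V0/V)^(7/3) - (V0/V)^(5/3)]
# With V0 fixed, P is linear in B0, so least squares is a closed-form ratio.

V0 = 1.0           # reference volume (normalized)
B0_true = 99.0     # GPa, assumed for the synthetic data

def bm_shape(V):
    r = V0 / V
    return 1.5 * (r ** (7.0 / 3.0) - r ** (5.0 / 3.0))

volumes = [0.95, 0.90, 0.85, 0.82]                    # illustrative V/V0 values
pressures = [B0_true * bm_shape(V) for V in volumes]  # synthetic "data"

# Linear least squares for B0: minimize sum_i (P_i - B0 * f_i)^2.
f = [bm_shape(V) for V in volumes]
B0_fit = sum(p * fi for p, fi in zip(pressures, f)) / sum(fi * fi for fi in f)
print(round(B0_fit, 3))
```

Real compression data would also carry measurement noise, in which case the same ratio gives the maximum-likelihood B0 under i.i.d. Gaussian errors.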
Colloid-in-Liquid Crystal Gels Formed via Spinodal Decomposition
Pal, Santanu Kumar; de Pablo, Juan J.
2014-01-01
We report that colloid-in-liquid crystal (CLC) gels can be formed via a two-step process that involves spinodal decomposition of a dispersion of colloidal particles in an isotropic phase of mesogens followed by nucleation of nematic domains within the colloidal network defined by the spinodal process. This pathway contrasts with previously reported routes leading to the formation of CLC gels, which have involved entanglement of defects or exclusion of particles from growing nematic domains. The new route provides the basis for simple design rules that enable control of the microstructure and dynamic mechanical properties of the gels. PMID:24651134
Directory of Open Access Journals (Sweden)
M.M. Khader
2014-10-01
Full Text Available In this article, we present a new numerical method to solve integro-differential equations (IDEs). The proposed method uses the Legendre cardinal functions to express the approximate solution as a finite series. In our method the operational matrix of derivatives is used to reduce IDEs to a system of algebraic equations. To demonstrate the validity and applicability of the proposed method, we present some numerical examples and compare the obtained numerical results with those of other methods. The results show that the proposed algorithm is highly accurate, simple, and effective.
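The operational-matrix idea in this abstract, representing differentiation as a matrix acting on the coefficient vector of a polynomial basis, can be sketched with the standard Legendre relation. The basis size and test polynomial below are illustrative; the paper uses Legendre cardinal functions rather than the plain Legendre basis shown here.

```python
# Sketch of an operational matrix of differentiation in a Legendre basis: for
# coefficients c of sum_k c_k P_k(x), the derivative's coefficients are D @ c,
#   D[k][j] = 2k+1  if j > k and (j + k) is odd, else 0.

M = 5                                     # basis size: P_0 .. P_4
D = [[(2 * k + 1) if (j > k and (j + k) % 2 == 1) else 0
      for j in range(M)] for k in range(M)]

def apply(Dmat, c):
    return [sum(Dmat[k][j] * c[j] for j in range(M)) for k in range(M)]

# P_2(x) = (3x^2 - 1)/2 has derivative 3x = 3 * P_1(x).
c = [0, 0, 1, 0, 0]
dc = apply(D, c)
print(dc)                                 # expect [0, 3, 0, 0, 0]
```

Substituting such a matrix for every derivative in an IDE turns the equation into an algebraic system in the unknown coefficients, which is the reduction the abstract describes.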
A data parallel pseudo-spectral semi-implicit magnetohydrodynamics code
Keppens, R.; Poedts, S.; Meijer, P. M.; Goedbloed, J. P.; Hertzberger, B.; Sloot, P.
1997-01-01
The set of eight nonlinear partial differential equations of magnetohydrodynamics (MHD) is used for time dependent simulations of three-dimensional (3D) fluid flow in a magnetic field. A data parallel code is presented, which integrates the MHD equations in cylindrical geometry, combining a
Directory of Open Access Journals (Sweden)
A. H. Bhrawy
2013-01-01
Full Text Available We extend the application of the Galerkin method for treating the multiterm fractional differential equations (FDEs) subject to initial conditions. A new shifted Legendre-Galerkin basis is constructed which satisfies exactly the homogeneous initial conditions by expanding the unknown variable using a new polynomial basis of functions built upon the shifted Legendre polynomials. A new spectral collocation approximation based on the Gauss-Lobatto quadrature nodes of shifted Legendre polynomials is investigated for solving the nonlinear multiterm FDEs. The main advantage of this approximation is that the solution is expanded in a truncated series of Legendre-Galerkin basis functions. Illustrative examples are presented to demonstrate the high accuracy and effectiveness of the proposed algorithms.
Abstract decomposition theorem and applications
Grossberg, R; Grossberg, Rami; Lessmann, Olivier
2005-01-01
Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types, and the existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), Excellent Classes of atomic models of a first-order theory (proved by Grossberg and Hart 1987) and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).
Compressed sensing MRI exploiting complementary dual decomposition.
Park, Suhyung; Park, Jaeseok
2014-04-01
Compressed sensing (CS) MRI exploits the sparsity of an image in a transform domain to reconstruct the image from incoherently under-sampled k-space data. However, it has been shown that CS suffers particularly from loss of low-contrast image features with increasing reduction factors. To retain image details in such degraded experimental conditions, in this work we introduce a novel CS reconstruction method exploiting feature-based complementary dual decomposition with joint estimation of local scale mixture (LSM) model and images. Images are decomposed into dual block sparse components: total variation for piecewise smooth parts and wavelets for residuals. The LSM model parameters of residuals in the wavelet domain are estimated and then employed as a regional constraint in spatially adaptive reconstruction of high frequency subbands to restore image details missing in piecewise smooth parts. Alternating minimization of the dual image components subject to data consistency is performed to extract image details from residuals and add them back to their complementary counterparts while the LSM model parameters and images are jointly estimated in a sequential fashion. Simulations and experiments demonstrate the superior performance of the proposed method in preserving low-contrast image features even at high reduction factors. Copyright © 2014 Elsevier B.V. All rights reserved.
Thermal decomposition of natural dolomite
Indian Academy of Sciences (India)
Keywords. TGA–DTA; FTIR; X-ray diffraction; dolomite. Abstract. The thermal decomposition behaviour of a dolomite sample has been studied by thermogravimetric (TG) measurements. The differential thermal analysis (DTA) curve of dolomite shows two peaks, at 777.8°C and 834°C. The two endothermic peaks observed in dolomite ...
Probability inequalities for decomposition integrals
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mesiar, Radko
2017-01-01
Roč. 315, č. 1 (2017), s. 240-248 ISSN 0377-0427 Institutional support: RVO:67985556 Keywords: Decomposition integral * Superdecomposition integral * Probability inequalities Subject RIV: BA - General Mathematics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf
Thermal decomposition of ammonium hexachloroosmate
DEFF Research Database (Denmark)
Asanova, T I; Kantor, Innokenty; Asanov, I. P.
2016-01-01
polymeric structure. The intermediate, revealed here for the first time, was analyzed to determine the local atomic structure around osmium. The thermal decomposition of hexachloroosmate is much more complex and occurs via at least a two-step process, which has never been observed before.
Wavefront reconstruction by modal decomposition
CSIR Research Space (South Africa)
Schulze, C
2012-08-01
Full Text Available We propose a new method to determine the wavefront of a laser beam based on modal decomposition by computer-generated holograms. The hologram is encoded with a transmission function suitable for measuring the amplitudes and phases of the modes...
Torsion and Open Book Decompositions
Etnyre, John B.; Vela-Vick, David Shea
2009-01-01
We show that if (B,\\pi) is an open book decomposition of a contact 3-manifold (Y,\\xi), then the complement of the binding B has no Giroux torsion. We also prove the sutured Heegaard-Floer c-bar invariant of the binding of an open book is non-zero.
Modular Decomposition of Boolean Functions
J.C. Bioch (Cor)
2002-01-01
Modular decomposition is a thoroughly investigated topic in many areas such as switching theory, reliability theory, game theory and graph theory. Most applications can be formulated in the framework of Boolean functions. In this paper we give a unified treatment of modular
Thermal decomposition of natural dolomite
Indian Academy of Sciences (India)
TECS
the effects of experimental variables i.e. sample weight, particle size, purge gas velocity and crystalline structure, ... effect of chlorine ions on the decomposition kinetics of dolomite at various temperatures studied by ... to 1000°C at a heating rate of 10 K/min, (ii) N2-gas dynamic atmosphere (90 cm³ min⁻¹), (iii) alumina ...
Decomposition of network communication games
Dietzenbacher, Bas; Borm, Peter; Hendrickx, Ruud
Using network control structures, this paper introduces a general class of network communication games and studies their decomposition into unanimity games. We obtain a relation between the dividends in any network communication game and its underlying transferable utility game, which depends on the
DEFF Research Database (Denmark)
Schraefel, M. C.; Rouncefield, Mark; Kellogg, Wendy
2012-01-01
In CSCW, how much do we need to know about another domain/culture before we observe, intersect and intervene with designs. What optimally would that other culture need to know about us? Is this a “how long is a piece of string” question, or an inquiry where we can consider a variety of contexts...
Energy Technology Data Exchange (ETDEWEB)
Pee, J H; Kim, Y J; Kim, J Y; Cho, W S; Kim, K J [Whiteware Ceramic Center, KICET (Korea, Republic of); Seong, N E, E-mail: pee@kicet.re.kr [Recytech Korea Co., Ltd. (Korea, Republic of)
2011-10-29
Decomposition-promoting factors and the decomposition mechanism in the zinc decomposition process (ZDP) of waste hard metals, which are composed mostly of tungsten carbide and cobalt, were evaluated. Zinc volatilization was suppressed and zinc vapor pressure was maintained in the reaction graphite crucible inside an electric furnace for ZDP. Reaction for 2 h at 650 °C completely decomposed waste hard metals that were over 30 mm thick. During the separation-decomposition of waste hard metals, molten zinc alloy formed a liquid composed of a mixture of γ-β1 phases at the cobalt binder layer (reaction interface). The reacted zone expanded in volume and the waste hard metal layer was decomposed and separated horizontally from the hard metal. Zinc used in the ZDP process was almost completely removed and collected by decantation and a volatilization-collection process at 1000 °C. The small amount of zinc remaining in the fully decomposed tungsten carbide-cobalt powder was completely removed using a phosphate solution with a slow cobalt dissolution rate.
Thermal decomposition and non-isothermal decomposition kinetics of carbamazepine
Qi, Zhen-li; Zhang, Duan-feng; Chen, Fei-xiong; Miao, Jun-yan; Ren, Bao-zeng
2014-12-01
The thermal stability and kinetics of decomposition of carbamazepine were studied under non-isothermal conditions by thermogravimetry (TGA) and differential scanning calorimetry (DSC) at three heating rates. Notably, a transformation of crystal forms occurs at 153.75°C. The activation energy of this thermal decomposition process was calculated from the analysis of TG curves by the Flynn-Wall-Ozawa, Doyle, distributed activation energy model, Šatava-Šesták and Kissinger methods. There were two distinct stages of the thermal decomposition process. For the first stage, E and log A [s⁻¹] were determined to be 42.51 kJ mol⁻¹ and 3.45, respectively; in the second stage, they were 47.75 kJ mol⁻¹ and 3.80. The mechanism of thermal decomposition was Avrami-Erofeev (reaction order n = 1/3) with integral form G(α) = [−ln(1 − α)]^(1/3) (α ≈ 0.1-0.8) in the first stage and Avrami-Erofeev (reaction order n = 1) with integral form G(α) = −ln(1 − α) (α ≈ 0.9-0.99) in the second stage. Moreover, the ΔH‡, ΔS‡, and ΔG‡ values were 37.84 kJ mol⁻¹, −192.41 J mol⁻¹ K⁻¹, and 146.32 kJ mol⁻¹ for the first stage, and 42.68 kJ mol⁻¹, −186.41 J mol⁻¹ K⁻¹, and 156.26 kJ mol⁻¹ for the second stage, respectively.
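Among the kinetic methods listed in this abstract, Kissinger's is the simplest to sketch: the slope of ln(β/Tp²) against 1/Tp equals −E/R across heating rates. The peak temperatures and intercept below are synthetic, generated from an assumed activation energy of the same order as the second-stage value, not the paper's measurements.

```python
# Sketch of the Kissinger method: fit a line to ln(beta/Tp^2) vs 1/Tp; the
# slope is -E/R. Synthetic data are generated from an assumed E, then recovered.
import math

R = 8.314            # gas constant, J/(mol*K)
E_true = 47750.0     # J/mol, assumed for the synthetic data
C = 5.0              # arbitrary intercept, plays the role of ln(A*R/E)

Tp = [540.0, 550.0, 560.0]                   # peak temperatures, K
y = [C - E_true / (R * T) for T in Tp]       # synthetic ln(beta/Tp^2) values
xs = [1.0 / T for T in Tp]

# Ordinary least-squares slope.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, y)) \
        / sum((xi - xbar) ** 2 for xi in xs)
E_fit = -slope * R
print(round(E_fit))
```

With real TG data, y would come from the measured peak temperatures at each heating rate β, and scatter about the fitted line indicates how well the single-activation-energy model holds.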
Fractional Fourier Transform for Ultrasonic Chirplet Signal Decomposition
Directory of Open Access Journals (Sweden)
Yufeng Lu
2012-01-01
Full Text Available A fractional Fourier transform (FrFT) based chirplet signal decomposition (FrFT-CSD) algorithm is proposed to analyze ultrasonic signals for NDE applications. In particular, this method is utilized to isolate dominant chirplet echoes for successive steps in signal decomposition and parameter estimation. FrFT rotates the signal with an optimal transform order. The search for the optimal transform order is conducted by determining the highest kurtosis value of the signal in the transformed domain. A simulation study reveals the relationship among the kurtosis, the transform order of FrFT, and the chirp rate parameter in the simulated ultrasonic echoes. Benchmark and ultrasonic experimental data are used to evaluate the FrFT-CSD algorithm. Signal processing results show that FrFT-CSD not only reconstructs the signal successfully, but also characterizes echoes and estimates echo parameters accurately. This study has a broad range of applications of importance in signal detection, estimation, and pattern recognition.
A Generalized Demodulation and Hilbert Transform Based Signal Decomposition Method
Directory of Open Access Journals (Sweden)
Zhi-Xiang Hu
2017-01-01
Full Text Available This paper proposes a new signal decomposition method that aims to decompose a multicomponent signal into monocomponent signals. The main procedure is to extract the components with frequencies higher than a given bisecting frequency in three steps: (1) generalized demodulation is used to project the components with lower frequencies onto the negative frequency domain, (2) the Hilbert transform is performed to eliminate the negative frequency components, and (3) the inverse generalized demodulation is used to obtain the signal which contains only components with higher frequencies. By running the procedure recursively, all monocomponent signals can be extracted efficiently. A comprehensive derivation of the decomposition method is provided. The validity of the proposed method has been demonstrated by extensive numerical analysis. The proposed method is also applied to decompose the dynamic strain signal of a cable-stayed bridge and the echolocation signal of a bat.
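The three-step split described in this abstract can be demonstrated on a two-tone signal with a bisecting frequency between the tones. This is an illustrative sketch under simplifying assumptions (integer frequencies on a periodic record, constant-frequency demodulation, naive O(N²) DFTs to avoid dependencies); the paper's generalized demodulation handles time-varying phase functions.

```python
# Sketch of the decomposition: (1) demodulate so components below the bisecting
# frequency move to negative frequencies, (2) zero the negative-frequency half
# (the analytic-signal / Hilbert step), (3) demodulate back.
import cmath, math

N, f_lo, f_hi, f_b = 128, 5, 20, 12          # cycles per record (illustrative)
t = [n / N for n in range(N)]
s = [math.cos(2 * math.pi * f_lo * tv) + math.cos(2 * math.pi * f_hi * tv)
     for tv in t]

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N))
            for n in range(N)]

g = [sv * cmath.exp(-2j * math.pi * f_b * tv) for sv, tv in zip(s, t)]  # step 1
G = dft(g)
G = [G[k] if k < N // 2 else 0 for k in range(N)]                       # step 2
h = [gv * cmath.exp(2j * math.pi * f_b * tv)
     for gv, tv in zip(idft(G), t)]                                     # step 3
high = [2 * hv.real for hv in h]        # recovered high-frequency component

err = max(abs(hv - math.cos(2 * math.pi * f_hi * tv))
          for hv, tv in zip(high, t))
print(err)
```

Subtracting the recovered component from the original signal and repeating with a new bisecting frequency is the recursion the abstract mentions.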
DEFF Research Database (Denmark)
Hjorth, Theis Solberg; Torbensen, Rune
2012-01-01
In the digital age of home automation and with the proliferation of mobile Internet access, the intelligent home and its devices should be accessible at any time from anywhere. There are many challenges such as security, privacy, ease of configuration, incompatible legacy devices, a wealth...... of wireless standards, limited resources of embedded systems, etc. Taking these challenges into account, we present a Trusted Domain home automation platform, which dynamically and securely connects heterogeneous networks of Short-Range Wireless devices via simple non-expert user interactions, and allows...... remote access via IP-based devices such as smartphones. The Trusted Domain platform fits existing legacy technologies by managing their interoperability and access controls, and it seeks to avoid the security issues of relying on third-party servers outside the home. It is a distributed system...
Variance decomposition in stochastic simulators
Le Maître, O. P.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
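The Sobol-Hoeffding (ANOVA) decomposition at the heart of this abstract can be computed exactly by enumeration for a toy function, which shows the orthogonal split of variance into per-variable and interaction terms. The function and distribution below are illustrative assumptions; the paper applies the same decomposition to realizations driven by per-channel Poisson processes.

```python
# Exact Sobol-Hoeffding decomposition of f(x1, x2) = x1 + x1*x2 with x1, x2
# independent and uniform on {-1, +1}: f = f0 + f1(x1) + f2(x2) + f12(x1, x2).
from itertools import product

pts = list(product([-1.0, 1.0], repeat=2))
f = {p: p[0] + p[0] * p[1] for p in pts}

f0 = sum(f.values()) / 4.0                                   # mean term
f1 = {a: sum(f[(a, b)] for b in (-1.0, 1.0)) / 2.0 - f0 for a in (-1.0, 1.0)}
f2 = {b: sum(f[(a, b)] for a in (-1.0, 1.0)) / 2.0 - f0 for b in (-1.0, 1.0)}
f12 = {p: f[p] - f0 - f1[p[0]] - f2[p[1]] for p in pts}

V = sum((v - f0) ** 2 for v in f.values()) / 4.0             # total variance
V1 = sum(v ** 2 for v in f1.values()) / 2.0                  # main effect of x1
V2 = sum(v ** 2 for v in f2.values()) / 2.0                  # main effect of x2
V12 = sum(v ** 2 for v in f12.values()) / 4.0                # interaction
print(V, V1, V2, V12)     # the variance splits exactly: V = V1 + V2 + V12
```

Here x2 has zero main effect yet a large interaction with x1, the kind of channel-interaction importance the abstract's sensitivities are designed to expose.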
Li, Lei; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Zheng, Zhizhong; Zhang, Wenkun; Lu, Wanli; Hu, Guoen
2016-01-01
Dual-energy computed tomography (DECT) has shown great potential and promising applications in advanced imaging for its material decomposition capabilities. However, image reconstruction and decomposition from sparse-view datasets suffer severely from multiple factors, such as insufficient data, noise, and inconsistent observations. Under sparse views, conventional filtered back-projection reconstruction methods fail to provide CT images of satisfying quality. Moreover, direct image decomposition is unstable and suffers from noise amplification even with full-view datasets. This paper proposes an iterative image reconstruction algorithm and a practical image-domain decomposition method for DECT. On one hand, the reconstruction algorithm is formulated as an optimization problem containing a total variation regularization term and a data fidelity term. The alternating direction method is utilized to design the corresponding algorithm, which shows faster convergence speed com...
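The optimization form named in this abstract, data fidelity plus a total variation penalty, can be sketched in one dimension. This is an illustrative sketch only: the tiny "projection" matrix, signal, smoothing parameter, and plain gradient descent (rather than the paper's alternating direction method) are all assumptions.

```python
# Minimize 0.5*||A x - b||^2 + lam * TV(x) on a tiny 1-D problem, with the TV
# term smoothed as sqrt(d^2 + eps) so plain gradient descent applies.
import math

A = [[1.0, 1.0, 0.0, 0.0],
     [0.0, 1.0, 1.0, 0.0],
     [0.0, 0.0, 1.0, 1.0]]
b = [2.0, 2.0, 1.0]
lam, eps, step = 0.1, 1e-2, 0.1

def objective(x):
    fid = sum((sum(A[i][j] * x[j] for j in range(4)) - b[i]) ** 2
              for i in range(3)) / 2.0
    tv = sum(math.sqrt((x[j + 1] - x[j]) ** 2 + eps) for j in range(3))
    return fid + lam * tv

def grad(x):
    r = [sum(A[i][j] * x[j] for j in range(4)) - b[i] for i in range(3)]
    g = [sum(A[i][j] * r[i] for i in range(3)) for j in range(4)]  # A^T r
    for j in range(3):                                  # smoothed-TV gradient
        d = (x[j + 1] - x[j]) / math.sqrt((x[j + 1] - x[j]) ** 2 + eps)
        g[j] -= lam * d
        g[j + 1] += lam * d
    return g

x = [0.0] * 4
obj0 = objective(x)
for _ in range(200):
    g = grad(x)
    x = [x[j] - step * g[j] for j in range(4)]
obj_end = objective(x)
print(obj0, obj_end)
```

The TV term rewards piecewise-constant solutions, which is why this regularizer suppresses the noise amplification the abstract describes while preserving edges.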
Thermic decomposition of biphenyl; Decomposition thermique du biphenyle
Energy Technology Data Exchange (ETDEWEB)
Lutz, M. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1966-03-01
Liquid and vapour phase pyrolysis of very pure biphenyl obtained by methods described in the text was carried out at 400 C in sealed ampoules, the fraction transformed being always less than 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls. Small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with a) the pyrolysis time, b) the state (gas or liquid) of the biphenyl, and c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than do those of glass (pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption seems to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)
Decomposition of Diethylstilboestrol in Soil
DEFF Research Database (Denmark)
Gregers-Hansen, Birte
1964-01-01
The rate of decomposition of DES-monoethyl-1-C14 in soil was followed by measurement of C14O2 released. From 1.6 to 16% of the added C14 was recovered as C14O2 during 3 months. After six months, as much as 12 to 28% was released as C14O2. Determination of C14 in the soil samples after...... not inhibit the CO2 production from the soil. Experiments with γ-sterilized soil indicated that enzymes present in the soil are able to attack DES....
Azimuthal decomposition with digital holograms
CSIR Research Space (South Africa)
Litvin, IA
2012-05-01
Full Text Available Azimuthal decomposition... outside the annular ring and 1 inside the ring was programmed using complex amplitude modulation for amplitude only effects on a phase-only device. The hologram takes the form of a high frequency grating that oscillates between phase values of 0...
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...
Thermal-decomposition studies of HMX
Energy Technology Data Exchange (ETDEWEB)
Kolb, J.R.; Garza, R.G.
1981-10-20
We have investigated the rates of decomposition as functions of time and temperature on a combined thermogravimetric analyzer-residual gas analyzer (TGA-RGA). This technique also allows us to identify decomposition products generated as the original HMX begins to decompose. The temperature range studied was 50 to 200°C. The decomposition process and the nature of decomposition products as functions of HMX polymorphs and conformations of the organic ring systems and possible reactive intermediates are discussed. 7 figures, 3 tables.
Symmetric Decomposition of Asymmetric Games.
Tuyls, Karl; Pérolat, Julien; Lanctot, Marc; Ostrovski, Georg; Savani, Rahul; Leibo, Joel Z; Ord, Toby; Graepel, Thore; Legg, Shane
2018-01-17
We introduce new theoretical insights into two-population asymmetric games allowing for an elegant symmetric decomposition into two single population symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B) can be decomposed into its symmetric counterparts by envisioning and investigating the payoff tables (A and B) that constitute the asymmetric game, as two independent, single population, symmetric games. We reveal several surprising formal relationships between an asymmetric two-population game and its symmetric single population counterparts, which facilitate a convenient analysis of the original asymmetric game due to the dimensionality reduction of the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the symmetric counterpart game determined by payoff table A, and x is a Nash equilibrium of the symmetric counterpart game determined by payoff table B. Also the reverse holds and combinations of Nash equilibria of the counterpart games form Nash equilibria of the asymmetric game. We illustrate how these formal relationships aid in identifying and analysing the Nash structure of asymmetric games, by examining the evolutionary dynamics of the simpler counterpart games in several canonical examples.
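The stated relationship between the Nash equilibria of an asymmetric bimatrix game and those of its symmetric counterpart games can be checked directly on a small example. The payoff tables below are an illustrative Battle-of-the-Sexes game, not taken from the paper:

```python
from fractions import Fraction

def is_symmetric_ne(M, s):
    """s is a symmetric Nash equilibrium of the single-population game with
    payoff table M iff every pure strategy in the support of s achieves the
    maximal payoff against s."""
    payoffs = [sum(M[i][j] * s[j] for j in range(len(s))) for i in range(len(M))]
    best = max(payoffs)
    return all(payoffs[i] == best for i in range(len(s)) if s[i] > 0)

# An asymmetric bimatrix game (A, B) of the Battle-of-the-Sexes type.
A = [[2, 0], [0, 1]]  # row player's payoff table
B = [[1, 0], [0, 2]]  # column player's payoff table

# (x, y) is the mixed Nash equilibrium of (A, B).
x = [Fraction(2, 3), Fraction(1, 3)]
y = [Fraction(1, 3), Fraction(2, 3)]

# The decomposition result: y is a symmetric NE of the counterpart game with
# table A, and x is a symmetric NE of the counterpart game with table B.
print(is_symmetric_ne(A, y), is_symmetric_ne(B, x))  # True True
```

Note the swap: the row player's equilibrium strategy x pairs with the column player's table B, and vice versa, exactly as the main finding states.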
Decomposition methods in turbulence research
Uruba, Václav
2012-04-01
Nowadays we have the dynamical velocity vector field of turbulent flow at our disposal, thanks to advances in either mathematical simulation (DNS) or experiment (time-resolved PIV). Unfortunately, there is no standard method for the analysis of such data, which describe complicated extended dynamical systems characterized by a very large number of degrees of freedom. An overview of candidate methods suitable for the spatiotemporal analysis of such systems is presented. Special attention is paid to energetic methods, including Proper Orthogonal Decomposition (POD) in its regular and snapshot variants, as well as Bi-Orthogonal Decomposition (BOD) for joint space-time analysis. Then, stability analysis using Principal Oscillation Patterns (POPs) is introduced. Finally, the Independent Component Analysis (ICA) method is proposed for the detection of coherent structures in a turbulent flow field defined by a time-dependent velocity vector field. The principles and some practical aspects of the methods are shown, with special attention paid to the physical interpretation of their outputs.
Advances in audio watermarking based on singular value decomposition
Dhar, Pranab Kumar
2015-01-01
This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptible watermarked sounds. Then, an audio watermarking method in fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...
Efficient Delaunay Tessellation through K-D Tree Decomposition
Energy Technology Data Exchange (ETDEWEB)
Morozov, Dmitriy; Peterka, Tom
2017-08-21
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree decomposition compared with a regular grid. Moreover, for the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
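The load-balancing idea, distributing points evenly by recursive median splits, can be sketched in a few lines. This toy version uses plain median splitting (one of several possible split-point strategies) and is not the paper's parallel algorithm:

```python
import random

def kdtree_partition(points, depth, axis=0):
    """Recursively split points at the median along alternating axes,
    producing 2**depth regions with near-equal point counts
    (assumes len(points) >= 2**depth)."""
    if depth == 0:
        return [points]
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    nxt = (axis + 1) % len(pts[0])
    return (kdtree_partition(pts[:mid], depth - 1, nxt) +
            kdtree_partition(pts[mid:], depth - 1, nxt))

# A deliberately unbalanced 2-D point set: a dense cluster plus far outliers.
random.seed(0)
cluster = [(random.random(), random.random()) for _ in range(60)]
outliers = [(10 + random.random(), 10 + random.random()) for _ in range(4)]
leaves = kdtree_partition(cluster + outliers, depth=2)
print([len(leaf) for leaf in leaves])  # [16, 16, 16, 16]
```

A regular grid over the bounding box of the same data would place nearly all 60 clustered points in a single cell, which is the imbalance the paper addresses.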
Non-equilibrium theory of arrested spinodal decomposition
Energy Technology Data Exchange (ETDEWEB)
Olais-Govea, José Manuel; López-Flores, Leticia; Medina-Noyola, Magdaleno [Instituto de Física “Manuel Sandoval Vallarta,” Universidad Autónoma de San Luis Potosí, Álvaro Obregón 64, 78000 San Luis Potosí, SLP (Mexico)
2015-11-07
The non-equilibrium self-consistent generalized Langevin equation theory of irreversible relaxation [P. E. Ramírez-González and M. Medina-Noyola, Phys. Rev. E 82, 061503 (2010); 82, 061504 (2010)] is applied to the description of the non-equilibrium processes involved in the spinodal decomposition of suddenly and deeply quenched simple liquids. For model liquids with hard-sphere plus attractive (Yukawa or square well) pair potential, the theory predicts that the spinodal curve, besides being the threshold of the thermodynamic stability of homogeneous states, is also the borderline between the regions of ergodic and non-ergodic homogeneous states. It also predicts that the high-density liquid-glass transition line, whose high-temperature limit corresponds to the well-known hard-sphere glass transition, at lower temperature intersects the spinodal curve and continues inside the spinodal region as a glass-glass transition line. Within the region bounded from below by this low-temperature glass-glass transition and from above by the spinodal dynamic arrest line, we can recognize two distinct domains with qualitatively different temperature dependence of various physical properties. We interpret these two domains as corresponding to full gas-liquid phase separation conditions and to the formation of physical gels by arrested spinodal decomposition. The resulting theoretical scenario is consistent with the corresponding experimental observations in a specific colloidal model system.
Decomposition kinetics of plutonium hydride
Energy Technology Data Exchange (ETDEWEB)
Haschke, J.M.; Stakebake, J.L.
1979-01-01
Kinetic data for the decomposition of PuH1.95 provide insight into a possible mechanism for the hydriding and dehydriding reactions of plutonium. The fact that the rate of the hydriding reaction, K_H, is proportional to P^(1/2) and the rate of the dehydriding process, K_D, is inversely proportional to P^(1/2) suggests that the forward and reverse reactions proceed by opposite paths of the same mechanism. The P^(1/2) dependence of hydrogen solubility in metals is characteristic of the dissociative absorption of hydrogen; i.e., the reactive species is atomic hydrogen. It is reasonable to assume that the rates of the forward and reverse reactions are controlled by the surface concentration of atomic hydrogen, (H_s), that K_H = c'(H_s), and that K_D = c/(H_s), where c' and c are proportionality constants. For this surface model, the pressure dependence of K_D is related to (H_s) by the reaction (H_s) ⇌ 1/2 H2(g) and by its equilibrium constant K_e = (H2)^(1/2)/(H_s). In the pressure range of ideal gas behavior, (H_s) = K_e^(-1)(RT)^(-1/2)P^(1/2) and the decomposition rate is given by K_D = cK_e(RT)^(1/2)P^(-1/2). For an analogous treatment of the hydriding process with this model, it can be readily shown that K_H = c'K_e^(-1)(RT)^(-1/2)P^(1/2). The inverse pressure dependence and direct temperature dependence of the decomposition rate are correctly predicted by this mechanism, which is most consistent with the observed behavior of the Pu-H system.
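A quick numerical check of the pressure dependences implied by this surface model; the constants c, c' and K_e below are arbitrary illustrative values:

```python
import math

def surface_hydrogen(P, Ke, R=8.314, T=300.0):
    """(H_s) from the equilibrium (H_s) <-> 1/2 H2(g):
    (H_s) = Ke**-1 * (R*T)**-0.5 * P**0.5 in the ideal-gas range."""
    return (1.0 / Ke) * (R * T) ** -0.5 * math.sqrt(P)

def rates(P, c=2.0, cp=3.0, Ke=0.5):
    """Hydriding rate K_H = c'*(H_s) and dehydriding rate K_D = c/(H_s)."""
    Hs = surface_hydrogen(P, Ke)
    return cp * Hs, c / Hs

KH1, KD1 = rates(1.0)
KH4, KD4 = rates(4.0)
# Quadrupling the pressure doubles K_H and halves K_D.
print(round(KH4 / KH1, 6), round(KD4 / KD1, 6))  # 2.0 0.5
```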
General Services Administration — This dataset offers the list of all .gov domains, including state, local, and tribal .gov domains. It does not include .mil domains, or other federal domains outside...
Application Of Adomian's Decomposition Method In Solving ...
African Journals Online (AJOL)
It is shown in the literature that Adomian's decomposition method gives better results than other computational techniques. We use this method to tackle the simple heat equation and compare the result with the closed-form solution of the given problem. Keywords: Adomian decomposition method; accuracy; nonlinear equation ...
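For the linear test problem y' = y, y(0) = 1, Adomian's decomposition reduces to repeated integration of the previous series term, and the partial sums reproduce the Taylor series of e^t. A minimal sketch of this simplest case (not the paper's heat-equation computation), with polynomials stored as coefficient lists:

```python
import math

def integrate_poly(coeffs):
    """Indefinite integral of a0 + a1*t + ... with zero constant term."""
    return [0.0] + [a / (k + 1) for k, a in enumerate(coeffs)]

def adomian_series(n_terms):
    """Adomian decomposition for y' = y, y(0) = 1: y0 = 1 and
    y_{n+1} = integral of y_n from 0 to t, giving terms t**n / n!."""
    terms = [[1.0]]
    for _ in range(n_terms - 1):
        terms.append(integrate_poly(terms[-1]))
    return terms

def eval_partial_sum(terms, t):
    return sum(a * t**k for term in terms for k, a in enumerate(term))

approx = eval_partial_sum(adomian_series(12), 1.0)
print(round(approx, 6), round(math.e, 6))  # 2.718282 2.718282
```

Twelve terms already match e to better than 1e-8, illustrating the rapid convergence claimed for the method.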
Modular polynomial arithmetic in partial fraction decomposition
Abdali, S. K.; Caviness, B. F.; Pridor, A.
1977-01-01
Algorithms for general partial fraction decomposition are obtained by using modular polynomial arithmetic. An algorithm is presented to compute inverses modulo a power of a polynomial in terms of inverses modulo that polynomial. This algorithm is used to make an improvement in the Kung-Tong partial fraction decomposition algorithm.
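The key ingredient, inverting a polynomial modulo another polynomial with the extended Euclidean algorithm, can be sketched in exact rational arithmetic. The example decomposes 1/((x-1)(x-2)); this illustrates the idea only, not the Kung-Tong algorithm or the paper's improvement of it:

```python
from fractions import Fraction

def deg(p):
    """Degree of a coefficient list [a0, a1, ...] (low to high)."""
    d = len(p) - 1
    while d > 0 and p[d] == 0:
        d -= 1
    return d

def pdivmod(a, b):
    """Polynomial division: return (quotient, remainder)."""
    a = [Fraction(x) for x in a]
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while any(a) and deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        c = a[deg(a)] / b[deg(b)]
        q[shift] = c
        for i, bc in enumerate(b):
            a[shift + i] -= c * bc
    return q, a

def pmul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def psub(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
            for i in range(n)]

def inverse_mod(a, m):
    """Extended Euclid: u with a*u = 1 (mod m); assumes gcd(a, m) = 1."""
    r0, r1 = [Fraction(x) for x in m], [Fraction(x) for x in a]
    u0, u1 = [Fraction(0)], [Fraction(1)]
    while deg(r1) > 0:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, psub(u0, pmul(q, u1))
    return [c / r1[0] for c in u1]

# 1/((x-1)(x-2)) = A/(x-1) + B/(x-2): A is the inverse of (x-2) mod (x-1),
# and B is the inverse of (x-1) mod (x-2). Coefficients are listed low-to-high.
A = inverse_mod([-2, 1], [-1, 1])
B = inverse_mod([-1, 1], [-2, 1])
print(A, B)  # [Fraction(-1, 1)] [Fraction(1, 1)]
```

So 1/((x-1)(x-2)) = -1/(x-1) + 1/(x-2); computing inverses modulo a power of a factor extends this to repeated roots, which is the case the abstract's algorithm targets.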
Spinodal decomposition in fine grained materials
Indian Academy of Sciences (India)
Unknown
A-rich grain boundary layer followed by a B-rich layer; the grain interior exhibits a spinodally decomposed microstructure, evolving slowly. Further, grain growth is suppressed completely during the decomposition process. Keywords. Spinodal decomposition; grain boundary effects; phase field models. 1. Introduction.
An Introduction to Clique Minimal Separator Decomposition
Directory of Open Access Journals (Sweden)
Anne Berry
2010-05-01
Full Text Available This paper is a review which presents and explains the decomposition of graphs by clique minimal separators. The pace is leisurely, we give many examples and figures. Easy algorithms are provided to implement this decomposition. The historical and theoretical background is given, as well as sketches of proofs of the structural results involved.
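For very small graphs the notion can be illustrated by brute force: a clique separator is a set of pairwise adjacent vertices whose removal disconnects the graph. The sketch below checks all subsets and keeps the inclusion-minimal ones, a simplification of the clique minimal separators treated in the review (whose algorithms are far more efficient):

```python
from itertools import combinations

def is_clique(G, S):
    return all(v in G[u] for u, v in combinations(S, 2))

def disconnects(G, S):
    """True if removing S leaves the remaining vertices disconnected."""
    rest = [v for v in G if v not in S]
    if len(rest) <= 1:
        return False
    seen, stack = {rest[0]}, [rest[0]]
    while stack:
        u = stack.pop()
        for v in G[u]:
            if v not in S and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) < len(rest)

def clique_separators(G):
    """All inclusion-minimal clique separators of a small graph,
    where G maps each vertex to its neighbour set."""
    seps = [set(S) for k in range(1, len(G))
            for S in combinations(G, k)
            if is_clique(G, S) and disconnects(G, set(S))]
    return [S for S in seps if not any(T < S for T in seps)]

# A "bowtie": two triangles sharing vertex c; {c} is its clique separator.
G = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd', 'e'},
     'd': {'c', 'e'}, 'e': {'c', 'd'}}
print(clique_separators(G))  # [{'c'}]
```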
Some Aspects of Thermochemical Decomposition of Peat
Directory of Open Access Journals (Sweden)
Y. A. Losiuk
2008-01-01
Full Text Available The paper considers peculiar features of the thermochemical decomposition of peat as a result of fast pyrolysis. An evaluation of the energy and economic expediency of preliminary peat decomposition for obtaining liquid and gaseous products is made in the paper. The paper also discusses the prospects of applying this technology to the generation of electric power and heat.
Moisture controls decomposition rate in thawing tundra
C.E. Hicks-Pries; E.A.G. Schuur; S.M. Natali; J.G. Vogel
2013-01-01
Permafrost thaw can affect decomposition rates by changing environmental conditions and litter quality. As permafrost thaws, soils warm and thermokarst (ground subsidence) features form, causing some areas to become wetter while other areas become drier. We used a common substrate to measure how permafrost thaw affects decomposition rates in the surface soil in a...
Spinodal decomposition in fine grained materials
Indian Academy of Sciences (India)
We have used a phase field model to study spinodal decomposition in polycrystalline materials in which the grain size is of the same order of magnitude as the characteristic decomposition wavelength (λ_SD). In the spirit of phase field models, each grain (i) in our model has an order parameter (η_i) associated with it; ...
Climate history shapes contemporary leaf litter decomposition
Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford
2015-01-01
Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like other organisms, traits of decomposers and their communities are shaped not just by the contemporary climate but also their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...
Light-induced decomposition of indocyanine green.
Engel, Eva; Schraml, Rüdiger; Maisch, Tim; Kobuch, Karin; König, Burkhard; Szeimies, Rolf-Markus; Hillenkamp, Jost; Bäumler, Wolfgang; Vasold, Rudolf
2008-05-01
To investigate the light-induced decomposition of indocyanine green (ICG) and to test the cytotoxicity of light-induced ICG decomposition products. ICG in solution was irradiated with laser light, solar light, or surgical endolight. The light-induced decomposition of ICG was analyzed by high-performance liquid chromatography (HPLC) and mass spectrometry. Porcine retinal pigment epithelial (RPE) cells were incubated with the light-induced decomposition products of ICG, and cell viability was measured by trypan blue exclusion assay. Independent of the light source used, singlet oxygen (photodynamic type 2 reaction) is generated by ICG leading to dioxetanes by [2+2]-cycloaddition of singlet oxygen. These dioxetanes thermally decompose into several carbonyl compounds. The decomposition products were identified by mass spectrometry. The decomposition of ICG was inhibited by adding sodium azide, a quencher of singlet oxygen. Incubation with ICG decomposition products significantly reduced the viability of RPE cells in contrast to control cells. ICG is decomposed by light within a self-sensitized photo oxidation. The decomposition products reduce the viability of RPE cells in vitro. The toxic effects of decomposed ICG should be further investigated under in vivo conditions.
Multilinear operators for higher-order decompositions.
Energy Technology Data Exchange (ETDEWEB)
Kolda, Tamara Gibson
2006-04-01
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
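The Kruskal operator, the sum of the outer products of corresponding columns, can be written out directly for a third-order tensor. A pure-Python sketch of the operator itself (the dimensions and values are illustrative):

```python
def kruskal(U, V, W):
    """Kruskal operator for three factor matrices with R columns each:
    T[i][j][k] = sum_r U[i][r] * V[j][r] * W[k][r], i.e. the sum of the
    outer products of corresponding columns (the PARAFAC/CP reconstruction)."""
    I, J, K = len(U), len(V), len(W)
    R = len(U[0])
    return [[[sum(U[i][r] * V[j][r] * W[k][r] for r in range(R))
              for k in range(K)] for j in range(J)] for i in range(I)]

# Rank-1 example: columns (1,2), (3,4), (5,6) give T[i][j][k] = u_i * v_j * w_k.
U = [[1], [2]]
V = [[3], [4]]
W = [[5], [6]]
T = kruskal(U, V, W)
print(T[1][0][1])  # 2 * 3 * 6 = 36
```

Note that no matricized (unfolded) representation of T appears anywhere, which is exactly the notational divorce the abstract describes.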
Mode Decomposition Methods for Soil Moisture Prediction
Jana, R. B.; Efendiev, Y. R.; Mohanty, B.
2014-12-01
Lack of reliable, well-distributed, long-term datasets for model validation is a bottleneck for most exercises in soil moisture analysis and prediction. Understanding what factors drive soil hydrological processes at different scales, and their variability, is critical to furthering our ability to model the various components of the hydrologic cycle accurately. For this, a comprehensive dataset with measurements across scales is necessary. Intensive fine-resolution sampling of soil moisture over extended periods of time is financially and logistically prohibitive. Installation of a few long-term monitoring stations is also expensive, and the stations need to be situated at critical locations. The concept of time-stable locations has been in use for some time to find locations that reflect the mean soil moisture values across the watershed under all wetness conditions. However, the soil moisture variability across the watershed is lost when measuring only at time-stable locations. We present here a study using techniques such as Dynamic Mode Decomposition (DMD) and the Discrete Empirical Interpolation Method (DEIM) that extends the concept of time-stable locations to arrive at locations that provide not simply the average soil moisture values for the watershed, but also those that can help recapture the dynamics across all locations in the watershed. As with time stability, the initial analysis depends on an intensive sampling history. The DMD/DEIM method is an application of model-reduction techniques to non-linearly related measurements. Using this technique, we are able to determine the number of sampling points that would be required for a given accuracy of prediction across the watershed, and the location of those points. Locations with higher energetics in the basis domain are chosen first. We present case studies across watersheds in the US and India. The technique can be easily applied to other hydro-climates.
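The core of DMD is recovering a best-fit linear operator A with x_{t+1} ≈ A x_t from pairs of consecutive snapshots. A minimal exact-DMD illustration on a 2x2 system (invertible snapshot matrix assumed, no model reduction; this is the underlying idea, not the soil-moisture workflow of the study):

```python
def dmd_operator(X0, X1):
    """Exact DMD for a tiny full-rank case: recover A with X1 = A @ X0
    as A = X1 @ inv(X0). Real DMD uses an SVD pseudoinverse and rank
    truncation instead of a direct 2x2 inverse."""
    (a, b), (c, d) = X0
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [[sum(X1[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Generate three snapshots x0, x1, x2 from a known linear system.
A = [[0.9, 0.1], [0.0, 0.8]]
snaps = [[1.0, 1.0]]
for _ in range(2):
    prev = snaps[-1]
    snaps.append([sum(A[i][j] * prev[j] for j in range(2)) for i in range(2)])

X0 = [[snaps[t][i] for t in (0, 1)] for i in range(2)]  # columns x0, x1
X1 = [[snaps[t][i] for t in (1, 2)] for i in range(2)]  # columns x1, x2
A_rec = dmd_operator(X0, X1)
print(all(abs(A_rec[i][j] - A[i][j]) < 1e-9
          for i in range(2) for j in range(2)))  # True
```

The eigenvectors of the recovered operator are the dynamic modes; ranking them by energy mirrors the "higher energetics in the basis domain" criterion mentioned above.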
Symmetry Decomposition of Chaotic Dynamics
Cvitanović, Predrag; Eckhardt, Bruno
1993-01-01
Discrete symmetries of dynamical flows give rise to relations between periodic orbits, reduce the dynamics to a fundamental domain, and lead to factorizations of zeta functions. These factorizations in turn reduce the labor and improve the convergence of cycle expansions for classical and quantum spectra associated with the flow. In this paper the general formalism is developed, with the $N$-disk pinball model used as a concrete example and a series of physically interesting cases worked out in detail.
Surface-directed spinodal decomposition
Energy Technology Data Exchange (ETDEWEB)
Puri, Sanjay [School of Physical Sciences, Jawaharlal Nehru University, New Delhi-110067 (India)
2005-01-26
We review analytical and numerical results for surface-directed spinodal decomposition (SDSD), namely, the interplay of wetting kinetics and phase separation in a binary (AB) mixture in contact with a surface S which prefers one of the components (say, A). Depending on the relative strengths of the A-B, A-S and B-S interactions, the surface is either partially wetted or completely wetted by A in equilibrium. We discuss the theoretical framework for modelling SDSD, and review results obtained from both microscopic and coarse-grained models. We clarify the differences between diffusion-driven SDSD in solids, and SDSD in fluids, where velocity fields play an important role. Furthermore, we discuss the dependence of wetting-layer kinetics on the composition of the mixture. Some results are also presented for phase separation in a confined geometry, e.g., thin films. Finally, we discuss the problem of surface-enrichment kinetics, namely, the kinetics of enrichment of an attracting surface when the bulk mixture is stable. These nonequilibrium processes have important applications in the preparation of nanomaterials and multi-layered structures. (topical review)
Geometric decompositions of collective motion
Mischiati, Matteo; Krishnaprasad, P. S.
2017-04-01
Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes-including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.
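The simplest of these kinematic modes, rigid translation, can be split off by subtracting the centre-of-mass velocity; because the residual carries zero total momentum, the kinetic energies of the two components add up to the total, as required of an orthogonal decomposition. A small sketch with made-up 2-D velocities:

```python
def translation_split(velocities, masses):
    """Split a snapshot of collective motion into its rigid-translation
    component (every agent moving with the centre-of-mass velocity) and
    the residual, returning each component's share of the total kinetic
    energy. The shares sum to 1 because the residual has zero momentum."""
    M = sum(masses)
    vcm = [sum(m * v[d] for m, v in zip(masses, velocities)) / M for d in (0, 1)]
    resid = [[v[0] - vcm[0], v[1] - vcm[1]] for v in velocities]
    ke = lambda vs: 0.5 * sum(m * (v[0]**2 + v[1]**2) for m, v in zip(masses, vs))
    total = ke(velocities)
    return ke([vcm] * len(velocities)) / total, ke(resid) / total

f_trans, f_resid = translation_split([[1.0, 0.0], [3.0, 0.0], [2.0, 2.0]],
                                     [1.0, 1.0, 1.0])
print(round(f_trans, 4), round(f_resid, 4))  # 0.7407 0.2593
```

Such energy fractions, extended to rotation, volume change and shape modes, are the quantitative measures the paper applies to pigeon-flock data.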
Space Deformation for Character Deformation using Multi-Domain Smooth Embedding
Luo, Zhiping; Veltkamp, Remco; Egges, Arjan
2015-01-01
We propose a novel space deformation method based on domain-decomposition to animate character skin. The method supports smoothness and local controllability of deformations, and can achieve interactive interpolating rates. Given a character, we partition it into multiple domains according to
Further remarks on convergence of decomposition method.
Cherruault, Y; Adomian, G; Abbaoui, K; Rach, R
1995-01-01
The decomposition method solves a wide class of nonlinear functional equations. This method uses a series solution with rapid convergence. This paper is intended as a useful review and clarification of related issues.
A Decomposition Theorem for Finite Automata.
Santa Coloma, Teresa L.; Tucci, Ralph P.
1990-01-01
Described is automata theory which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)
Decomposition Analysis of Forest Ecosystem Services Values
National Research Council Canada - National Science Library
Hidemichi Fujii; Masayuki Sato; Shunsuke Managi
2017-01-01
.... We applied two approaches: a contingent valuation method for estimating the forest ecosystem service value per area and a decomposition analysis for identifying the main driving factors of changes in the value of forest ecosystem services...
The decomposition of estuarine macrophytes under different ...
African Journals Online (AJOL)
2013-04-29
Apr 29, 2013 ... knowledge of the decomposition rates of algal species in order to validate their role in the ... sure in the Great Brak Estuary, numerous filamentous green algae ... structure and functioning of the estuary and as such need to.
Multipartite graph decomposition: cycles and closed trails
Directory of Open Access Journals (Sweden)
Elizabeth J. Billington
2004-11-01
Full Text Available This paper surveys results on cycle decompositions of complete multipartite graphs (where the parts are not all of size 1, so the graph is not K_n), in the case that the cycle lengths are “small”. Cycles up to length n are considered, when the complete multipartite graph has n parts, but not hamilton cycles. Properties which the decompositions may have, such as being gregarious, are also mentioned.
Interdiffusion and Spinodal Decomposition in Electrically Conducting Polymer Blends
Directory of Open Access Journals (Sweden)
Antti Takala
2015-08-01
Full Text Available The impact of phase morphology in electrically conducting polymer composites has become essential for the efficiency of various functional applications, in which the continuity of the electroactive paths in multicomponent systems is essential. For instance, in bulk heterojunction organic solar cells, where light-induced electron transfer through photon absorption creates excitons (electron-hole pairs), the control of the diffusion of the spatially localized excitons, their dissociation at the interface, and the effective collection of holes and electrons all depend on the surface area, domain sizes, and connectivity in these organic semiconductor blends. We have used a model semiconductor polymer blend with defined miscibility to investigate the phase separation kinetics and the formation of connected pathways. Temperature-jump experiments were applied from a miscible region of semiconducting poly(alkylthiophene) (PAT) blends with ethylene-vinyl acetate elastomers (EVA), and the kinetics at the early stages of phase separation were evaluated in order to establish bicontinuous phase morphology via spinodal decomposition. The diffusion in the blend was followed by two methods: first, from the measurement of the spinodal decomposition as the miscible phase separates into two phases; secondly, by monitoring the interdiffusion of a PAT film into an EVA film at selected temperatures, eventually comparing the temperature-dependent diffusion characteristics. With this first quantitative evaluation of the spinodal decomposition as well as the interdiffusion in conducting polymer blends, we show that systematic control of the phase separation kinetics in a polymer blend in which one of the components is an electrically conducting polymer can be used to optimize the morphology.
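Early-stage spinodal decomposition kinetics of the kind measured here are commonly analysed with the linearized Cahn-Hilliard theory, in which a concentration fluctuation of wavenumber q grows at rate R(q) = -M q²(f'' + 2κq²). The sketch below uses arbitrary illustrative parameters (not the PAT/EVA data) to locate the fastest-growing wavenumber, which sets the initial bicontinuous domain spacing:

```python
import math

def growth_rate(q, M=1.0, f2=-1.0, kappa=0.5):
    """Linearized Cahn-Hilliard amplification factor
    R(q) = -M * q**2 * (f'' + 2*kappa*q**2); inside the spinodal f'' < 0,
    so a band of long-wavelength fluctuations grows exponentially."""
    return -M * q**2 * (f2 + 2.0 * kappa * q**2)

# Analytical fastest-growing wavenumber: q* = sqrt(-f'' / (4*kappa)).
f2, kappa = -1.0, 0.5
q_star = math.sqrt(-f2 / (4.0 * kappa))
q_num = max((0.01 * k for k in range(1, 200)), key=growth_rate)
print(round(q_star, 3), round(q_num, 2))  # 0.707 0.71
```

In a temperature-jump experiment the quench depth enters through f'' and the mobility M, which is why the early-stage structure factor peak yields the diffusion information discussed in the abstract.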
Microbiological decomposition of bagasse after radiation pasteurization
Energy Technology Data Exchange (ETDEWEB)
Ito, Hitoshi; Ishigaki, Isao
1987-11-01
Microbiological decomposition of bagasse was studied for upgrading to animal feed after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of some inorganic salts as a nitrogen source and, after irradiation, were inoculated with fungi for cultivation. In this study, many kinds of cellulolytic fungi, such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride, were used to compare the decomposition of crude fibers. In alkali-untreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. decomposed 25 to 34% of the crude fibers after one month of cultivation, whereas fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10%. In contrast, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47%, comparable to the Pleurotus species or C. cinereus. Other mushroom species, such as L. edodes, showed little decomposition even after alkali treatment. Radiation treatment with 10 kGy did not enhance the decomposition of bagasse compared with steam treatment, whereas higher radiation doses slightly enhanced the decomposition of crude fibers by microorganisms.
On watermarking in frequency domain
Dasre, Narendrakumar Ramchandra; Patil, Hemraj Ramdas
2010-02-01
A wavelet-based image watermarking scheme is proposed, based on the insertion of a 'logo' image as a watermark in the mid-frequency domain. This new approach provides flexibility in determining the pixels to be watermarked and increases the data-hiding capacity. Both the watermark embedding algorithm and the corresponding detection algorithm are easy to implement. The watermarking algorithm was tested under different attacks, such as median filtering, image cropping and image compression, and proved robust. The experimental results show that, for any type of image, the method is more tamper-proof and less perceptible than well-known existing frequency-domain methods. In the proposed approach, an original image is decomposed into wavelet coefficients and the watermark is then embedded through the algorithm. The wavelet transform filters can be used as a security key for the extraction of the inserted watermark. The proposed watermark extraction technique is independent of the original image. The watermarked image is produced by taking the inverse 2-D discrete wavelet transform of the altered wavelet decomposition. We also give the relation between the area of the channel in which we insert the watermark and the area affected in the original image.
Hua-Qing Wang; Wei Hou; Gang Tang; Hong-Fang Yuan; Qing-Liang Zhao; Xi Cao
2014-01-01
Vibration signals of rolling element bearing faults are usually immersed in background noise, which makes it difficult to detect the faults. Commonly used wavelet-based methods can reduce some types of noise, but there is still plenty of room for improvement due to the insufficient sparseness of vibration signals in the wavelet domain. In this work, in order to eliminate noise and enhance weak fault detection, a new kind of peak-based approach combined with multiscale decomposition and...
Decomposition and Simplification of Multivariate Data using Pareto Sets.
Huettenberger, Lars; Heine, Christian; Garth, Christoph
2014-12-01
Topological and structural analysis of multivariate data is aimed at improving the understanding and usage of such data through identification of intrinsic features and structural relationships among multiple variables. We present two novel methods for simplifying so-called Pareto sets that describe such structural relationships. Such simplification is a precondition for meaningful visualization of structurally rich or noisy data. As a framework for simplification operations, we introduce a decomposition of the data domain into regions of equivalent structural behavior and the reachability graph that describes global connectivity of Pareto extrema. Simplification is then performed as a sequence of edge collapses in this graph; to determine a suitable sequence of such operations, we describe and utilize a comparison measure that reflects the changes to the data that each operation represents. We demonstrate and evaluate our methods on synthetic and real-world examples.
Automatic classification of visual evoked potentials based on wavelet decomposition
Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz
2017-04-01
Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the extremes of the time-domain characteristic, called waves. The decision process is complex, so the diagnosis depends significantly on the experience of the doctor. The authors developed a procedure, based on wavelet decomposition and linear discriminant analysis, that ensures automatic classification of visual evoked potentials. The algorithm assigns an individual case to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
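A wavelet-plus-discriminant pipeline of this kind typically feeds per-level detail energies into the classifier. The sketch below builds such a feature vector with a simple Haar transform using pairwise averages and differences (a common unnormalized variant; orthonormal Haar scales by 1/sqrt(2)). It is an illustration of the feature-extraction stage, not the authors' procedure:

```python
def haar_level(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_features(signal, levels):
    """Multi-level decomposition: return the detail energy per level plus
    the final approximation, a compact feature vector that could be fed
    to a classifier such as linear discriminant analysis."""
    feats = []
    for _ in range(levels):
        signal, detail = haar_level(signal)
        feats.append(sum(d * d for d in detail))
    return feats + signal

x = [1.0, 1.0, 2.0, 2.0, 4.0, 0.0, 0.0, 0.0]
print(haar_features(x, 2))  # [4.0, 1.25, 1.5, 1.0]
```

The isolated jump in x shows up as the large level-1 detail energy; in evoked-potential data such energies localize the diagnostic waves in both time and scale.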
Aridity and decomposition processes in complex landscapes
Ossola, Alessandro; Nyman, Petter
2015-04-01
Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, the fine-scale variation in microclimate (and hence water availability) as a result of slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22), where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), above-ground microbes (0.2 mm mesh) and below-ground microbes (2 cm depth, 0.2 mm mesh). Four replicates for each set of bags were installed at each site, and bags were collected 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. The dryness index was then related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
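The single-pool negative exponential model used above, M(t)/M_0 = exp(-k t), can be fitted with a one-line log-linear regression. A minimal sketch, not the authors' code; the through-the-origin fit assumes no mass loss at t = 0:

```python
import numpy as np

def fit_decomposition_rate(t, mass_frac):
    """Fit k in M(t)/M0 = exp(-k t) by least squares on log-transformed
    mass fractions, constraining the line through the origin."""
    t = np.asarray(t, dtype=float)
    y = np.log(np.asarray(mass_frac, dtype=float))
    # Slope of y = -k * t (intercept fixed at 0 since M(0)/M0 = 1).
    return -(t @ y) / (t @ t)
```

With the collection schedule of this study (1, 2, 4, 7 and 12 months), `fit_decomposition_rate([1, 2, 4, 7, 12], fractions)` returns k in month^-1.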
Domains in multiband superconductors
Energy Technology Data Exchange (ETDEWEB)
Tanaka, Y., E-mail: y.tanaka@aist.go.jp [National Institute of Advanced Industrial Science and Technology, 1-1-1 Umezono, Tsukuba-shi, Ibaraki-ken 305-8568 (Japan); Yanagisawa, T. [National Institute of Advanced Industrial Science and Technology, 1-1-1 Umezono, Tsukuba-shi, Ibaraki-ken 305-8568 (Japan); Crisan, A. [University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)] [National Institute of Materials Physics, P.O. Box MG-7, Bucharest 077125 (Romania); Shirage, P.M.; Iyo, A. [National Institute of Advanced Industrial Science and Technology, 1-1-1 Umezono, Tsukuba-shi, Ibaraki-ken 305-8568 (Japan); Tokiwa, K. [Tokyo University of Science, 2641 Yamazaki, Noda-shi, Chiba-ken 278-8510 (Japan); Nishio, T. [Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601 (Japan); Sundaresan, A. [Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur, Bangalore 560 064 (India); Terada, N. [Kagoshima University, Korimoto 1-21-24, Kagoshima-shi, Kagoshima-ken 890-8580 (Japan)
2011-11-15
Positive interband Josephson interactions disperse the order parameters, creating configuration domains in multiband superconductors. Such domains pose a problem for the stability of superconductivity, but they also offer new potential for novel electronics. Multiband superconductors can host several types of domains that are prohibited in conventional single-band superconductors: phase domains and chiral domains, whose domain walls are interband phase-difference solitons. In a superconductor with an odd number of electronic bands (five or more) and with positive interband Josephson interactions, we find further types of domains with different interband phase differences. We call these configuration domains because the pseudo-order parameters of the individual bands are dispersed in the complex plane into several configurations, each corresponding to a local minimum. Fractional vortices serve as hubs for phase-difference solitons (configuration domain walls). The divergence of the number of configurations with local minima would pose a serious problem for the stability of superconductivity.
Regularization of nonlinear decomposition of spectral x-ray projection images.
Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise
2017-09-01
Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster. For the assessed experiments, material decomposition was possible
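The core iteration of a regularized Gauss-Newton solver of the kind described above can be sketched generically. This is a toy illustration, not the authors' RWLS-GN: there is no statistical weighting, the regularizer is a plain Tikhonov term, and the demo model (a two-parameter exponential, not a spectral forward model) plus the `residual`/`jacobian` callables are assumptions.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, reg=0.0, iters=20):
    """Regularized Gauss-Newton for nonlinear least squares:
    at each step solve (J^T J + reg*I) dx = -J^T r and update x."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J + reg * np.eye(len(x)), -J.T @ r)
        x += dx
        if np.linalg.norm(dx) < 1e-12:  # converged
            break
    return x
```

For a well-conditioned residual the iteration converges in a handful of steps, which mirrors the abstract's observation that the decomposition "converges in a few iterations"; the normal equations are small linear solves, which is where fast linear solvers pay off.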
Energy Technology Data Exchange (ETDEWEB)
Mburu, Sarah; Kolli, R. Prakash; Perea, Daniel E.; Schwarm, Samuel C.; Eaton, Arielle; Liu, Jia; Patel, Shiv; Bartrand, Jonah; Ankem, Sreeramamurthy
2017-04-01
The microstructure and mechanical properties of unaged and thermally aged (at 280 °C, 320 °C, 360 °C, and 400 °C for up to 4300 h) CF-3 and CF-8 cast duplex stainless steels (CDSS) are investigated. The unaged CF-8 steel has Cr-rich M23C6 carbides located at the δ-ferrite/γ-austenite heterophase interfaces that were not observed in the CF-3 steel, and this corresponds to a difference in mechanical properties. Both unaged steels exhibit incipient spinodal decomposition into Fe-rich α-domains and Cr-rich α'-domains. During aging, spinodal decomposition progresses, and the mean wavelength (MW) and mean amplitude (MA) of the compositional fluctuations increase as a function of aging temperature. Additionally, G-phase precipitates form between the spinodal decomposition domains in CF-3 at 360 °C and 400 °C and in CF-8 at 400 °C. The microstructural evolution is correlated to changes in mechanical properties.
Decomposition of hydroxylamine by hemoglobin.
Bazylinski, D A; Arkowitz, R A; Hollocher, T C
1987-12-01
The reaction between hydroxylamine (NH2OH) and human hemoglobin (Hb) at pH 6-8 and the reaction between NH2OH and methemoglobin (Hb+), chiefly at pH 7, were studied under anaerobic conditions at 25 °C. In the presence of cyanide, which was used to trap Hb+, Hb was oxidized by NH2OH to methemoglobin cyanide with production of about 0.5 mol NH4+ per mol of heme oxidized at pH 7. The conversion of Hb to Hb+ was first order in [Hb] (or nearly so), but the pseudo-first-order rate constant was not strictly proportional to [NH2OH]. Thus, the apparent second-order rate constant at pH 7 decreased from about 30 M^-1 s^-1 to a limiting value of 11.3 M^-1 s^-1 with increasing [NH2OH]. The rate of Hb oxidation was not much affected by cyanide, whereas there was no reaction between NH2OH and carbonmonoxyhemoglobin (HbCO). The pseudo-first-order rate constant for Hb oxidation at 500 microM NH2OH increased from about 0.008 s^-1 at pH 6 to 0.02 s^-1 at pH 8. The oxidation of Hb by NH2OH terminated prematurely at 75-90% completion at pH 7 and at 30-35% completion at pH 8. Data on the premature termination of the reaction fit the titration curve for a group with pK = 7.5-7.7. NH2OH was decomposed by Hb+ to N2, NH4+, and a small amount of N2O in what appears to be a dismutation reaction. Nitrite and hydrazine were not detected, and N2 and NH4+ were produced in nearly equimolar amounts. The dismutation reaction was first order in [Hb+] and [NH2OH] only at low concentrations of reactants and was cleanly inhibited by cyanide. The spectrum of Hb+ remained unchanged during the reaction, except for the gradual formation of some choleglobin-like (green) pigment, whereas in the presence of CO, HbCO was formed. The kinetics are consistent with the view advanced previously by J. S. Colter and J. H. Quastel ((1950) Arch. Biochem. 27, 368-389) that the decomposition of NH2OH proceeds by a mechanism involving a Hb/Hb+ cycle (reactions [1] and [2]) in which Hb is oxidized to Hb+ by NH2OH.
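The pseudo-first-order analysis used in this abstract is straightforward to reproduce numerically. A minimal sketch with synthetic data (not the paper's measurements): estimate k_obs from an exponential decay of [Hb], then divide by the excess NH2OH concentration to obtain the apparent second-order constant.

```python
import numpy as np

def pseudo_first_order_k(t, conc):
    """Estimate k_obs from [Hb](t) = [Hb]_0 exp(-k_obs t)
    via a through-the-origin log-linear fit."""
    t = np.asarray(t, dtype=float)
    y = np.log(np.asarray(conc, dtype=float) / conc[0])
    return -(t @ y) / (t @ t)

def apparent_second_order(k_obs, c_excess):
    """Apparent second-order constant k2 = k_obs / [NH2OH],
    valid in the pseudo-first-order regime ([NH2OH] >> [Hb])."""
    return k_obs / c_excess
```

For example, a k_obs of 0.02 s^-1 at 500 microM NH2OH (the pH 8 figures above) corresponds to an apparent second-order constant of 40 M^-1 s^-1.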
Bertrand, G.; Comperat, M.; Lallemant, M.
1980-09-01
Copper sulfate pentahydrate dehydration into the trihydrate was investigated using monocrystalline platelets with (110) crystallographic orientation. Temperature and pressure conditions were selected so as to obtain elliptical trihydrate domains. The study deals with the evolution vs. time of the elliptical domain dimensions and with the evolution vs. water vapor pressure of, on the one hand, the D/d ratio of the ellipse axes and, on the other, the interface displacement rate along a given direction. The phenomena observed are not basically different from those yielded by the overall kinetic study of the solid sample. Their magnitude, however, is modulated depending on displacement direction. The results are analyzed within the scope of our study of the endothermic decomposition of solids.
Problem decomposition by mutual information and force-based clustering
Otero, Richard Edward
alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. An advancement over current practice demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem interdependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees indicating which method to apply are also created, quantifying decomposition performance over a large region of the design space.
Directory of Open Access Journals (Sweden)
Sheng-Ping Yan
2014-01-01
We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods, which are both very effective and straightforward for solving the differential equations with local fractional derivative.
Scaling of a Fast Fourier Transform and a pseudo-spectral fluid solver up to 196608 cores
Chatterjee, Anando G.
2017-11-04
In this paper we present scaling results of an FFT library, FFTK, and a pseudospectral code, Tarang, on grid resolutions up to 8192^3 using 65536 cores of Blue Gene/P and 196608 cores of Cray XC40 supercomputers. We observe that communication dominates computation, more so on the Cray XC40. The computation time scales as T_comp ~ p^-1, and the communication time as T_comm ~ n^-γ2, with γ2 ranging from 0.7 to 0.9 for Blue Gene/P, and from 0.43 to 0.73 for Cray XC40. FFTK, and the fluid and convection solvers of Tarang, exhibit weak as well as strong scaling nearly up to 196608 cores of Cray XC40. We perform a comparative study of the performance on the Blue Gene/P and Cray XC40 clusters.
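Power-law scaling exponents like the γ2 quoted above are typically obtained by a least-squares fit on log-log axes. A minimal sketch, assuming timing data collected at several core counts (the data here are synthetic, not the paper's benchmarks):

```python
import numpy as np

def scaling_exponent(cores, times):
    """Fit T ~ c * p^(-gamma) on log-log axes and return gamma.
    A straight-line fit of log(T) against log(p)."""
    log_p = np.log(np.asarray(cores, dtype=float))
    log_t = np.log(np.asarray(times, dtype=float))
    slope, _intercept = np.polyfit(log_p, log_t, 1)
    return -slope

# Hypothetical timings at four core counts:
cores = [1024, 4096, 16384, 65536]
times = [0.031, 0.012, 0.0047, 0.0018]
gamma = scaling_exponent(cores, times)
```

An exponent near 1 indicates ideal strong scaling (as for T_comp above); the smaller communication exponents reported for the Cray XC40 quantify how the all-to-all communication falls short of ideal.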
Fast approximate convex decomposition using relative concavity
Ghosh, Mukulika
2013-02-01
Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD, and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31].
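A concavity measure of the sort ACD relies on can be sketched in 2D as the maximum distance from an input point to the convex hull boundary. This is a simplified stand-in for the paper's relative-concavity measure, not the FACD implementation:

```python
import numpy as np

def _cross(o, a, b):
    """z-component of (a - o) x (b - o); positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return np.array(pts, dtype=float)
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return np.array(half(pts)[:-1] + half(pts[::-1])[:-1], dtype=float)

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def concavity(points):
    """Max distance from any input point to the convex hull boundary;
    zero for a convex point set."""
    hull = convex_hull(points)
    pts = np.asarray(points, dtype=float)
    d = 0.0
    for p in pts:
        dmin = min(point_segment_dist(p, hull[i], hull[(i + 1) % len(hull)])
                   for i in range(len(hull)))
        d = max(d, dmin)
    return d
```

FACD's relative concavity instead compares a feature's concavity to that of its surrounding component, which is what lets it cut off toes and fingers while leaving a textured torso intact.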
Two Notes on Discrimination and Decomposition
DEFF Research Database (Denmark)
Nielsen, Helena Skyt
1998-01-01
1. It turns out that the Oaxaca-Blinder wage decomposition is inadequate when it comes to calculating separate contributions for indicator variables, since the contributions are not robust against a change of reference group. I extend the Oaxaca-Blinder decomposition to handle this problem. 2. The paper suggests how to use the logit model to decompose the gender difference in the probability of an occurrence. The technique is illustrated by an analysis of discrimination in child labor in rural Zambia.
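The two-fold Oaxaca-Blinder decomposition discussed in note 1 splits a mean outcome gap into an explained (endowments) part and an unexplained (coefficients) part. A minimal numpy sketch, using group A's coefficients as the reference; as the note points out, such reference choices are not innocuous for indicator variables:

```python
import numpy as np

def ols(X, y):
    """OLS with intercept; returns [b0, b1, ...]."""
    Z = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def oaxaca_blinder(XA, yA, XB, yB):
    """Two-fold decomposition of mean(yA) - mean(yB) with group A's
    coefficients as reference: gap = explained + unexplained."""
    bA, bB = ols(XA, yA), ols(XB, yB)
    mA = np.concatenate([[1.0], XA.mean(axis=0)])
    mB = np.concatenate([[1.0], XB.mean(axis=0)])
    explained = (mA - mB) @ bA     # endowment differences, priced at bA
    unexplained = mB @ (bA - bB)   # coefficient differences, at B's means
    return explained, unexplained
```

Because OLS with an intercept reproduces the group means exactly, the two parts sum to the raw gap by construction; swapping the reference coefficients (bB instead of bA) changes the split but not the total.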
Claw-decompositions and Tutte-orientations
DEFF Research Database (Denmark)
Barat, Janos; Thomassen, Carsten
2006-01-01
We conjecture that, for each tree T there exists a natural number k(T) such that the following holds: If G is a k(T)-edge-connected graph such that |E(T)| divides |E(G)|, then the edges of G can be divided into parts, each of which is isomorphic to T. We prove that for T = K_{1,3} (the claw), this h[…]-edge-connected graph with n vertices has an edge-decomposition into claws provided its number of edges is divisible by 3. We also prove that every triangulation of a surface has an edge-decomposition into claws.
Surface Modes of Coherent Spinodal Decomposition
Tang, Ming; Karma, Alain
2012-06-01
We use linear stability theory and numerical simulations to show that spontaneous phase separation in elastically coherent solids is fundamentally altered by the presence of free surfaces. Because of misfit stress relaxation near surfaces, phase separation is mediated by unique surface modes of spinodal decomposition that have faster kinetics than bulk modes and are unstable even when spinodal decomposition is suppressed in the bulk. Consequently, in the presence of free surfaces, the limit of metastability of supersaturated solid solutions of crystalline materials is shifted from the coherent to chemical spinodal.
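The bulk side of the linear stability analysis invoked above is captured by the Cahn-Hilliard dispersion relation. The sketch below computes the textbook bulk growth rate and fastest-growing mode, not the authors' surface-mode analysis; the symbols f'' (free-energy curvature) and κ (gradient-energy coefficient) and the mobility M are the standard assumed parameters.

```python
import numpy as np

def growth_rate(q, fpp, kappa, M=1.0):
    """Linearized Cahn-Hilliard growth rate for a perturbation of
    wavenumber q: omega(q) = -M q^2 (f'' + 2 kappa q^2).
    Positive omega means the perturbation grows (spinodal instability)."""
    q = np.asarray(q, dtype=float)
    return -M * q ** 2 * (fpp + 2.0 * kappa * q ** 2)

def fastest_mode(fpp, kappa):
    """Inside the spinodal (f'' < 0), the fastest-growing wavenumber,
    from d(omega)/dq = 0: q* = sqrt(-f'' / (4 kappa))."""
    return np.sqrt(-fpp / (4.0 * kappa))
```

In the bulk, instability requires f'' < 0 (inside the chemical spinodal shifted by coherency strain); the paper's point is that near a free surface, misfit relaxation lets surface modes grow even where this bulk criterion fails.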
Multiresolution signal decomposition transforms, subbands, and wavelets
Akansu, Ali N; Haddad, Paul R
2001-01-01
The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming, which is, of course
Decomposition of aquatic plants in lakes
Energy Technology Data Exchange (ETDEWEB)
Godshalk, G.L.
1977-01-01
This study was carried out to systematically determine the effects of temperature and oxygen concentration, two environmental parameters crucial to lake metabolism in general, on decomposition of five species of aquatic vascular plants of three growth forms in a Michigan lake. Samples of dried plant material were decomposed in flasks in the laboratory under three different oxygen regimes, aerobic-to-anaerobic, strictly anaerobic, and aerated, each at 10 °C and 25 °C. In addition, in situ decomposition of the same species was monitored using the litter bag technique under four conditions.
DEFF Research Database (Denmark)
Engvall, E; Wewer, U M
1996-01-01
Extracellular matrix molecules are often very large and made up of several independent domains, frequently with autonomous activities. Laminin is no exception. A number of globular and rod-like domains can be identified in laminin and its isoforms by sequence analysis as well as by electron microscopy. Here we present the structure-function relations in laminins by examination of their individual domains. This approach to viewing laminin is based on recent results from several laboratories. First, some mutations in laminin genes that cause disease have affected single laminin domains, and some laminin isoforms lack particular domains. These mutants and isoforms are informative with regard to the activities of the mutated and missing domains. Second, laminin-like domains have now been…
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
Lignin Derivatives Formation In Catalysed Thermal Decomposition ...
African Journals Online (AJOL)
denise
in the heat of gasification and mass fraction of non-combustible volatiles in solid, NaOH-catalysed thermal decomposition of pure and fire-retardant cellulose. Kuroda and co-workers [14] studied the Curie-point pyrolysis of Japanese softwood species of the red pine, cedar and cypress in the presence of inorganic substances ...
KINETICS OF HYDROXIDE PROMOTED DECOMPOSITION OF ...
African Journals Online (AJOL)
1991-04-26
(Received July 2?, 1990; revised April 26, 1991). ABSTRACT. The effects of varying concentrations of dimethyl sulphoxide in mixture with water on the rates and activation parameters for the hydroxide-promoted decomposition of tetraphenylphosphonium chloride have been studied. Increasing the DMSO content of the reaction ...
The decomposition of estuarine macrophytes under different ...
African Journals Online (AJOL)
The estuary is subject to a variety of anthropogenic impacts (e.g. freshwater abstraction and sewage discharge) that increases its susceptibility to prolonged periods of mouth closure, eutrophication, and ultimately the formation of macroalgal blooms. The aim of this study was to determine the decomposition characteristics of ...
Direct observation of nanowire growth and decomposition
DEFF Research Database (Denmark)
Rackauskas, Simas; Shandakov, Sergey D; Jiang, Hua
2017-01-01
knowledge, so far this has been only postulated, but never observed at the atomic level. By means of in situ environmental transmission electron microscopy we monitored and examined the atomic layer transformation under the conditions of crystal growth and decomposition using selected CuO nanowires...
Preparation, Structure Characterization and Thermal Decomposition ...
African Journals Online (AJOL)
NJD
thermal decomposition process of [Dy(m-MBA)3phen]2·H2O has been followed by thermal analysis. KEYWORDS ... X-ray diffraction, elemental analysis, UV and IR spectroscopy, .... diffractometer with graphite-monochromated Mo Kα radiation.
Organic matter decomposition in simulated aquaculture ponds
Torres Beristain, B.
2005-01-01
Different kinds of organic and inorganic compounds (e.g. formulated food, manures, fertilizers) are added to aquaculture ponds to increase fish production. However, a large part of these inputs is not utilized by the fish and is decomposed inside the pond. The microbiological decomposition of the
Decomposition and nutrient release patterns of Pueraria ...
African Journals Online (AJOL)
Decomposition and nutrient release patterns of Pueraria phaseoloides, Flemingia macrophylla and Chromolaena odorata leaf residues in tropical land use ... The slowest releases, irrespective of type of leaf residue, were in Ca and Mg. The study concluded that among the planted fallows, Pueraria phaseoloides had the ...
Methodologies in forensic and decomposition microbiology
Culturable microorganisms represent only 0.1-1% of the total microbial diversity of the biosphere. This has severely restricted the ability of scientists to study the microbial biodiversity associated with the decomposition of ephemeral resources in the past. Innovations in technology are bringing...
Thermal decomposition of barium valerate in argon
DEFF Research Database (Denmark)
Torres, P.; Norby, Poul; Grivel, Jean-Claude
2015-01-01
The thermal decomposition of barium valerate (Ba(C4H9CO2)2/Ba-pentanoate) was studied in argon by means of thermogravimetry, differential thermal analysis, IR spectroscopy, X-ray diffraction and hot-stage optical microscopy. Melting takes place in two different steps, at 200 °C and 280...
Compactly supported frames for decomposition spaces
DEFF Research Database (Denmark)
Nielsen, Morten; Rasmussen, Kenneth Niemann
2012-01-01
In this article we study a construction of compactly supported frame expansions for decomposition spaces of Triebel-Lizorkin type and for the associated modulation spaces. This is done by showing that finite linear combinations of shifts and dilates of a single function with sufficient decay in b...
The Algorithmic Complexity of Modular Decomposition
J.C. Bioch (Cor)
2001-01-01
Modular decomposition is a thoroughly investigated topic in many areas such as switching theory, reliability theory, game theory and graph theory. We propose an O(mn)-algorithm for the recognition of a modular set of a monotone Boolean function f with m prime implicants and n variables.
Thermal decomposition of lead titanyl oxalate tetrahydrate
van de Velde, G.M.H.; Oranje, P.J.D.
1976-01-01
The thermal behaviour of PbTiO(C2O4)2·4H2O (PTO) has been investigated, employing TG, quantitative DTA, infrared spectroscopy and (high temperature) X-ray powder diffraction. The decomposition involves four main steps. The first is the dehydration of the tetrahydrate (30–180°C), followed by a small
Decomposition of variance for spatial Cox processes
DEFF Research Database (Denmark)
Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus
2013-01-01
Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with addit...
TP89 - SIRZ Decomposition Spectral Estimation
Energy Technology Data Exchange (ETDEWEB)
Seetho, Isacc M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Azevedo, Steve [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, Jerel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martz, Jr., Harry E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.
Reference model decomposition in direct adaptive control
Butler, H.; Honderd, G.; van Amerongen, J.
1991-01-01
This paper introduces the method of reference model decomposition as a way to improve the robustness of model reference adaptive control systems (MRACs) with respect to unmodelled dynamics with a known structure. Such unmodelled dynamics occur when some of the nominal plant dynamics are purposely
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained...
Factors affecting decomposition and Diptera colonization.
Campobasso, C P; Di Vella, G; Introna, F
2001-08-15
Understanding the process of corpse decomposition is basic to establishing the postmortem interval (PMI) in any death investigation, even when using insect evidence. The sequence of postmortem changes in soft tissues usually gives an idea of how long an individual has been dead. However, modification of the decomposition process can considerably alter the estimate of the time of death. A body after death is sometimes subject to depredation by various types of animals, among which insects can have a predominant role in the breakdown of the corpse, thus accelerating the decomposition rate. The interference of the insect community in the decomposition process has been investigated by several experimental studies using animal models and very few contributions directly on cadavers. Several of the most frequent factors affecting PMI estimates, such as temperature, burial depth and access of the body to insects, are fully reviewed. On account of their activity and worldwide distribution, Diptera are the insects of greatest forensic interest. Knowledge of the factors inhibiting or favouring colonization and Diptera development is a necessary prerequisite for estimating the PMI using entomological data.
Riesz Riemann-Liouville difference on discrete domains
Wu, Guo-Cheng; Baleanu, Dumitru; Xie, He-Ping
2016-08-01
A Riesz difference is defined by the use of the Riemann-Liouville differences on time scales. The definition is then considered for discrete fractional modelling. A lattice fractional equation method is proposed, in which the space variable is defined on discrete domains. Finite memory effects are introduced into the lattice system and the numerical formulae are given. The Adomian decomposition method is adopted to solve the fractional partial difference equations numerically.
The Slice Algorithm For Irreducible Decomposition of Monomial Ideals
DEFF Research Database (Denmark)
Roune, Bjarke Hammersholt
2009-01-01
Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance...
Adomian decomposition method used to solve the gravity wave equations
Mungkasi, Sudi; Dheno, Maria Febronia Sedho
2017-01-01
The gravity wave equations are considered. We solve these equations using the Adomian decomposition method. We obtain that the approximate Adomian decomposition solution to the gravity wave equations is accurate (physically correct) for early stages of fluid flows.
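The Adomian decomposition method builds the solution as a series u = Σ u_k in which each component follows by integrating the previous one. As a hedged illustration of the recursion (applied to a toy linear ODE u' = u, u(0) = 1, not to the gravity wave equations of this paper), u_{k+1}(t) = ∫₀ᵗ u_k ds reproduces the Taylor series of e^t:

```python
import numpy as np

def adomian_series(n_terms=10):
    """Adomian decomposition for u' = u, u(0) = 1.
    Components are polynomials stored as coefficient arrays
    (index i holds the coefficient of t^i); returns the partial sum."""
    comp = np.array([1.0])   # u_0 = initial condition = 1
    total = np.array([1.0])
    for _ in range(n_terms - 1):
        # Integrate: sum_i c_i t^i  ->  sum_i c_i t^(i+1) / (i+1)
        comp = np.concatenate([[0.0], comp / np.arange(1, len(comp) + 1)])
        padded = np.zeros(len(comp))
        padded[:len(total)] = total
        total = padded + comp
    return total

def poly_eval(coeffs, t):
    """Evaluate a coefficient array at t."""
    return sum(c * t ** i for i, c in enumerate(coeffs))
```

For this linear case the components are exactly t^k/k!, so the partial sums converge rapidly for early times, mirroring the abstract's observation that the approximate solution is accurate for early stages of the flow.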
Litter decomposition and nutrient dynamics of ten selected tree ...
African Journals Online (AJOL)
Litter decomposition processes in tropical rainforests are still poorly understood. Leaf litter decomposition and nutrient dynamics of ten contrasting tree species, Entandraphragma utile, Guibourtia tessmannii, Klainedoxa gabonensis, Musanga cecropioides, Panda oleosa, Plagiostyles africana, Pterocarpus soyauxii, ...
TRIANGLE-SHAPED DC CORONA DISCHARGE DEVICE FOR MOLECULAR DECOMPOSITION
The paper discusses the evaluation of electrostatic DC corona discharge devices for the application of molecular decomposition. A point-to-plane geometry corona device with a rectangular cross section demonstrated low decomposition efficiencies in earlier experimental work. The n...
Sliding Window Empirical Mode Decomposition -its performance and quality
Directory of Open Access Journals (Sweden)
Stepien Pawel
2014-12-01
The proposed algorithm speeds up the computation about 10 times, with acceptable quality of decomposition. Conclusions: the Sliding Window EMD algorithm is suitable for decomposition of long signals with high sampling frequency.
Domain Specific Problem Solving.
Eade, Frank
1989-01-01
Outlines a possible framework for allowing teachers to explore how children learn mathematics. A mathematical modelling process and three domains, including content, process and pragmatic domain, are described. Twelve strategies for encouraging children to translate between the domains are suggested. (YP)
Decomposition of Amino Diazeniumdiolates (NONOates): Molecular Mechanisms
Energy Technology Data Exchange (ETDEWEB)
Shaikh, Nizamuddin; Valiev, Marat; Lymar, Sergei V.
2014-08-23
Although diazeniumdiolates (X[N(O)NO]-) are extensively used in biochemical, physiological, and pharmacological studies due to their ability to slowly release NO and/or its congeneric nitroxyl, the mechanisms of these processes remain obscure. In this work, we used a combination of spectroscopic, kinetic, and computational techniques to arrive at a qualitatively consistent molecular mechanism for decomposition of amino diazeniumdiolates (amino NONOates: R2N[N(O)NO]-, where R2N = -N(C2H5)2 (1), -N(C3H4NH2)2 (2), or -N(C2H4NH2)2 (3)). Decomposition of these NONOates is triggered by protonation of their [NN(O)NO]- group, with apparent pKa and decomposition rate constants of 4.6 and 1 s^-1 for 1-H, 3.5 and 83 × 10^-3 s^-1 for 2-H, and 3.8 and 3.3 × 10^-3 s^-1 for 3-H. Although protonation occurs mainly on the O atoms of the functional group, only the minor R2N(H)N(O)NO tautomer (population ~0.01% for 1) undergoes N-N heterolytic bond cleavage (k ≈ 10^2 s^-1 for 1) leading to amine and NO. Decompositions of the protonated amino NONOates are strongly temperature-dependent; activation enthalpies are 20.4 and 19.4 kcal/mol for 1 and 2, respectively, which includes contributions from both the tautomerization and the bond cleavage. The bond cleavage rates exhibit exceptional sensitivity to the nature of the R substituents, which strongly modulate the activation entropy. At pH < 2, decomposition of all these NONOates is subject to additional acid catalysis that occurs through di-protonation of the [NN(O)NO]- group.
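Protonation-triggered decomposition of this kind implies a simple pH dependence of the observed rate: if only the protonated species decomposes, k_obs is the limiting rate scaled by the Henderson-Hasselbalch protonated fraction. A minimal sketch using the abstract's figures for compound 1 (pKa 4.6, k = 1 s^-1) as example inputs; the single-protonation model ignores the acid catalysis reported below pH 2:

```python
import numpy as np

def fraction_protonated(pH, pKa):
    """Henderson-Hasselbalch: fraction of the NONOate in the
    protonated (reactive) form at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (np.asarray(pH, dtype=float) - pKa))

def observed_rate(pH, pKa, k_decomp):
    """Apparent decomposition rate assuming only the protonated
    species decomposes, with limiting rate constant k_decomp."""
    return k_decomp * fraction_protonated(pH, pKa)
```

At pH = pKa half the compound is protonated, so k_obs is half the limiting rate; at physiological pH 7.4, nearly three units above the pKa of 1, the protonated fraction is below 0.1%, which is what makes these compounds slow NO releasers.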
Photodegradation at day, microbial decomposition at night - decomposition in arid lands
Gliksman, Daniel; Gruenzweig, Jose
2014-05-01
Our current knowledge of decomposition in dry seasons and its role in carbon turnover is fragmentary. So far, decomposition during dry seasons has mostly been attributed to abiotic mechanisms, mainly photochemical and thermal degradation, while the contribution of microorganisms to the decay process was excluded. We asked whether microbial decomposition occurs during the dry season and explored its interaction with photochemical degradation under a Mediterranean climate. We conducted a litter bag experiment with local plant litter and manipulated litter exposure to radiation using radiation filters. We found notable rates of CO2 flux from litter that were related to microbial activity, mainly during night-time, throughout the dry season. This activity was correlated with litter moisture content and high levels of air humidity and dew. Day-time CO2 fluxes were related to solar radiation, and the radiation manipulation pointed to photodegradation as the underlying mechanism. In addition, a decline in microbial activity was followed by a reduction in photodegradation-related CO2 fluxes. The levels of microbial decomposition and photodegradation in the dry season were likely the factors influencing carbon mineralization during the subsequent wet season. This study showed that microbial decomposition can be a dominant contributor to CO2 emissions and mass loss in the dry season, and it suggests a regulating effect of microbial activity on photodegradation. Microbial decomposition is thus an important contributor to dry-season decomposition and affects annual litter turnover rates in dry regions. Global warming may lead to reduced moisture availability and dew deposition, which may greatly influence not only microbial decomposition of plant litter but also photodegradation.
Alexandrov, Nickolai; Shindyalov, Ilya
2003-02-12
We have developed a program for automatic identification of domains in protein three-dimensional structures. Performance of the program was assessed by three different benchmarks: (i) by comparison with the expert-curated SCOP database of structural domains; (ii) by comparison with a collection of manual domain assignments; and (iii) by comparison with a set of 55 proteins, frequently used as a benchmark for automatic domain assignment. In all these benchmarks PDP identified domains correctly in more than 80% of proteins. http://123d.ncifcrf.gov/.
Climate fails to predict wood decomposition at regional scales
Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King
2014-01-01
Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...
Decomposition of cattle dung on grazed signalgrass ( Brachiaria ...
African Journals Online (AJOL)
Livestock excreta are one of the major nutrient sources in natural grasslands. Understanding how livestock diet and season affect decomposition dynamics is critical for nutrient cycling models. We hypothesised that livestock diet and season of the year affect dung decomposition. This study evaluated the decomposition ...
Specific leaf area predicts dryland litter decomposition via two mechanisms
Liu, Guofang; Wang, Lei; Jiang, Li; Pan, Xu; Huang, Zhenying; Dong, Ming; Cornelissen, Johannes H.C.
2018-01-01
Litter decomposition plays important roles in carbon and nutrient cycling. In dryland, both microbial decomposition and abiotic degradation (by UV light or other forces) drive variation in decomposition rates, but whether and how litter traits and position determine the balance between these
Through-wall image enhancement using fuzzy and QR decomposition.
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
A scheme based on QR decomposition and fuzzy logic is proposed for through-wall image enhancement. QR decomposition is less computationally complex than singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to compare the existing and proposed techniques.
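The computational point made here (QR is cheaper than a full SVD while still yielding an orthonormal basis for the column space) can be sketched with generic NumPy. This is not the authors' enhancement scheme, only the factorization it builds on:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))  # stand-in for an image subspace matrix

# Reduced QR: Q has orthonormal columns, R is upper triangular.
# Unlike the SVD, no iterative singular-value computation is needed.
Q, R = np.linalg.qr(A)

assert np.allclose(Q @ R, A)             # exact reconstruction of A
assert np.allclose(Q.T @ Q, np.eye(4))   # orthonormal columns
assert np.allclose(np.triu(R), R)        # R is upper triangular
```

In the paper's setting the weights from the fuzzy inference engine would then be applied to such subspaces; that stage is omitted here.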
Coupling of temperature with pressure induced initial decomposition ...
Indian Academy of Sciences (India)
The pressure effects on the initial decomposition steps and initially generated products of PETN and NTO were very different. PETN decomposition was triggered by C-H···O intermolecular hydrogen transfer. The initial decomposition mechanism was independent of the pressure. For NTO, two different initial decomposition mechanisms ...
Plant litter decomposition in wetlands receiving acid mine drainage
Energy Technology Data Exchange (ETDEWEB)
Kittle, D.L.; McGraw, J.B.; Garbutt, K. [West Virginia University, Morgantown, WV (United States). Dept. of Biology
1995-03-01
The impact of acid mine drainage on the decomposition of wetland plant species of northern West Virginia was studied to determine if the potential exists for nutrient cycling to be altered in systems used to treat this drainage. There were two objectives of this study. First, decomposition of aboveground plant material was measured to determine species decomposition patterns as a function of pH. Second, decomposition of litter from various pH environments was compared to assess whether litter origin affects decomposition rates. Species differences were detected throughout the study. Decomposition rates of woolgrass (Scirpus cyperinus (L.) Kunth) and common rush (Juncus effusus L.) were significantly lower than those of calamus (Acorus calamus L.) and rice cutgrass (Leersia oryzoides L.). Differences among species explained a large proportion of the variation in percentage of biomass remaining. Thus, differences in litter quality among species were important in determining the rate of decomposition. In general, significantly more decomposition occurred for all species in high pH environments, indicating impeded decomposition at low pH. While decomposition of some species' litter differed depending on its origin, other species showed no effect. Cattail (Typha latifolia L.), in particular, was found to have lower decomposition rates for material grown at low pH. Lower decomposition rates could result in lower nutrient availability, leading to further reduction of productivity under low pH conditions. 34 refs., 4 figs., 4 tabs.
Biogeochemistry of Decomposition and Detrital Processing
Sanderman, J.; Amundson, R.
2003-12-01
Decomposition is a key ecological process that roughly balances net primary production in terrestrial ecosystems and is an essential process in resupplying nutrients to the plant community. Decomposition consists of three concurrent processes: comminution or fragmentation, leaching of water-soluble compounds, and microbial catabolism. Decomposition can also be viewed as a sequential process, which Eijsackers and Zehnder (1990) compare to a Russian matriochka doll. Soil macrofauna fragment and partially solubilize plant residues, facilitating establishment of a community of decomposer microorganisms. This decomposer community gradually shifts as the most easily degraded plant compounds are utilized and the more recalcitrant materials begin to accumulate. Given enough time and the proper environmental conditions, most naturally occurring compounds can be completely mineralized to inorganic forms. Simultaneously with mineralization, the process of humification transforms a fraction of the plant residues into stable soil organic matter (SOM) or humus. For reference, Schlesinger (1990) estimated that only ~0.7% of detritus eventually becomes stabilized as humus. Decomposition plays a key role in the cycling of most plant macro- and micronutrients and in the formation of humus. Figure 1 places the roles of detrital processing and mineralization within the context of the biogeochemical cycling of essential plant nutrients. Chapin (1991) found that, while the atmosphere supplied 4% and mineral weathering supplied no nitrogen, nutrient recycling accounted for 95% of all the nitrogen and phosphorus uptake by tundra species in Barrow, Alaska. In a cool temperate forest, nutrient recycling accounted for 93%, 89%, 88%, and 65% of total sources for nitrogen, phosphorus, potassium, and calcium, respectively (Chapin, 1991). Figure 1. A decomposition-centric biogeochemical model of nutrient cycling. Although there is significant external input (1) and output (2) from neighboring ecosystems
ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE
Energy Technology Data Exchange (ETDEWEB)
Ketusky, E.; Subramanian, K.
2012-02-29
At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of ozone-based Advanced Oxidation Processes (AOPs) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered well demonstrated. In addition, as AOPs are considered 'green', their use allows any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration
Duan, Jun-Sheng; Rach, Randolph; Wazwaz, Abdul-Majid
2014-11-01
In this paper, we present a reliable algorithm to calculate positive solutions of homogeneous nonlinear boundary value problems (BVPs). The algorithm converts the nonlinear BVP to an equivalent nonlinear Fredholm-Volterra integral equation. We employ the multistage Adomian decomposition method for BVPs on two or more subintervals of the domain of validity, and then solve the matching equation for the flux at the interior point, or interior points, to determine the solution. Several numerical examples are used to highlight the effectiveness of the proposed scheme in interpolating the interior values of the solution between boundary points. Furthermore, we demonstrate two novel techniques to accelerate the rate of convergence of our decomposition series solutions by increasing the number of subintervals and adjusting their lengths in the multistage Adomian decomposition method for BVPs.
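The series recursion underlying the method can be illustrated on a toy problem. The sketch below applies the standard single-stage Adomian decomposition to the initial value problem u' = u², u(0) = 1 (exact solution 1/(1 - t)), representing each term as a polynomial coefficient list; it does not implement the paper's multistage BVP algorithm or its flux-matching step:

```python
def poly_mul(p, q):
    """Multiply two polynomials stored as coefficient lists (index = power)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_int(p):
    """Integrate a polynomial from 0 to t, in coefficient form."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def poly_eval(p, t):
    return sum(c * t**k for k, c in enumerate(p))

# Adomian recursion: u_0 = u(0), u_{n+1} = integral of A_n.
# For the quadratic nonlinearity N(u) = u^2, the Adomian polynomials
# are the Cauchy products A_n = sum_{i+j=n} u_i u_j.
terms = [[1.0]]            # u_0(t) = 1
for n in range(4):
    A = [0.0]
    for i in range(n + 1):
        prod = poly_mul(terms[i], terms[n - i])
        if len(prod) > len(A):
            A += [0.0] * (len(prod) - len(A))
        for k, c in enumerate(prod):
            A[k] += c
    terms.append(poly_int(A))

# Partial sum 1 + t + t^2 + t^3 + t^4 approximates 1/(1 - t) for small t.
approx = sum(poly_eval(u, 0.1) for u in terms)
exact = 1.0 / (1.0 - 0.1)
assert abs(approx - exact) < 1e-4
```

The multistage variant of the paper would restart this recursion on each subinterval and match fluxes at the interior points.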
Nucleon spin decomposition and orbital angular momentum in the nucleon
Wakamatsu, Masashi
2014-09-01
Obtaining a complete decomposition of the nucleon spin is a fundamentally important task for QCD. In fact, if our research ends without accomplishing this task, the tremendous effort invested since the first discovery of the nucleon spin crisis would come to nothing. We now have a general agreement that there are at least two physically inequivalent gauge-invariant decompositions of the nucleon spin. In these two decompositions, the intrinsic spin parts of quarks and gluons are common. What discriminates the two decompositions are the orbital angular momentum (OAM) parts. The OAMs of quarks and gluons appearing in the first decomposition are the so-called "mechanical" OAMs, while those appearing in the second decomposition are the generalized (gauge-invariant) "canonical" ones. For this reason, these decompositions are broadly called the "mechanical" and "canonical" decompositions of the nucleon spin. Still, there remain several issues which have not reached a complete consensus among the experts (see the latest reviews). In the present talk, I will mainly concentrate on the practically most important issue, i.e. which decomposition is more favorable from the observational viewpoint. There are two often-claimed advantages of the canonical decomposition. First, each piece of this decomposition satisfies the SU(2) commutation relations of the angular momentum algebra. Second, the canonical OAM, rather than the mechanical OAM, is compatible with the free partonic picture of constituent orbital motion. I will show that both these claims are not necessarily true, and push forward the viewpoint that the "mechanical" decomposition is more physical in that it has a more direct connection with observables. I also emphasize that the nucleon spin decomposition accessed by lattice QCD analyses is the "mechanical" decomposition, not the "canonical" one. Recent lattice QCD studies of the nucleon spin decomposition are also briefly overviewed.
Evolutionary cores of domain co-occurrence networks
Directory of Open Access Journals (Sweden)
Almaas Eivind
2005-03-01
Abstract Background The modeling of complex systems, as disparate as the World Wide Web and the cellular metabolism, as networks has recently uncovered a set of generic organizing principles: most of these systems are scale-free while at the same time modular, resulting in a hierarchical architecture. The structure of the protein domain network, where individual domains correspond to nodes and their co-occurrences in a protein are interpreted as links, also falls into this category, suggesting that domains involved in the maintenance of increasingly developed, multicellular organisms accumulate links. Here, we take the next step by studying link-based properties of the protein domain co-occurrence networks of the eukaryotes S. cerevisiae, C. elegans, D. melanogaster, M. musculus and H. sapiens. Results We construct the protein domain co-occurrence networks from the PFAM database and analyze them by applying a k-core decomposition method that isolates the globally central (highly connected, in the central cores) from the locally central (highly connected, in the peripheral cores) protein domains through an iterative peeling process. Furthermore, we compare the subnetworks thus obtained to the physical domain interaction network of S. cerevisiae. We find that the innermost cores of the domain co-occurrence networks gradually grow with increasing degree of evolutionary development in going from single cellular to multicellular eukaryotes. The comparison of the cores across all the organisms under consideration uncovers patterns of domain combinations that are predominately involved in protein functions such as cell-cell contacts and signal transduction. Analyzing a weighted interaction network of PFAM domains of yeast, we find that domains having only a few partners frequently interact with these, while the converse is true for domains with a multitude of partners. Combining domain co-occurrence and interaction information, we observe
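The iterative peeling that defines a k-core can be sketched in a few lines. The toy graph below is illustrative (its node names are not actual PFAM domains): nodes of degree below k are repeatedly deleted until none remain, and the survivors form the k-core.

```python
def k_core(adj, k):
    """Return the node set of the k-core of an undirected graph.

    adj maps each node to the set of its neighbours. Nodes of degree
    below k are repeatedly peeled off until the graph stabilises.
    """
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u in adj and len(adj[u]) < k:
                for v in adj.pop(u):
                    if v in adj:
                        adj[v].discard(u)
                changed = True
    return set(adj)

graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C", "E"},
    "E": {"D"},
}
print(k_core(graph, 3))  # the densely connected core {'A', 'B', 'C', 'D'}
```

Peeling with increasing k reproduces the progression from peripheral to innermost cores described above.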
Algorithms for Sparse Non-negative Tucker Decompositions
DEFF Research Database (Denmark)
Mørup, Morten; Hansen, Lars Kai
2008-01-01
There is an increasing interest in the analysis of large-scale multi-way data. The concept of multi-way data refers to arrays of data with more than two dimensions, i.e., taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions...... decompositions). To reduce ambiguities of this type of decomposition we develop updates that can impose sparseness in any combination of modalities; hence, we propose algorithms for sparse non-negative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms...
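The flavour of sparsity-promoting multiplicative updates can be shown on the two-way special case, i.e. sparse non-negative matrix factorization. This is only an analogue of the SN-TUCKER updates, not the algorithm of the paper; the rank and sparsity weight below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((20, 15))          # non-negative data matrix

r, lam = 4, 0.01                  # rank and L1 sparsity weight (illustrative)
W = rng.random((20, r)) + 0.1     # strictly positive initial factors
H = rng.random((r, 15)) + 0.1

err0 = np.linalg.norm(X - W @ H)
for _ in range(200):
    # Multiplicative updates preserve non-negativity; the lam term in the
    # denominator penalises the L1 norm of H, encouraging sparse codes.
    H *= (W.T @ X) / (W.T @ W @ H + lam + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

assert np.linalg.norm(X - W @ H) < err0   # fit improved
assert (W >= 0).all() and (H >= 0).all()  # non-negativity preserved
```

In the full Tucker setting the same style of update is applied mode-wise to the factor matrices and the core, with sparseness imposed on any chosen combination of them.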
Li, Duan; Xu, Lijun; Li, Xiaolu
2017-04-01
To measure the distances and properties of the objects within a laser footprint, a decomposition method for full-waveform light detection and ranging (LiDAR) echoes is proposed. In this method, firstly, wavelet decomposition is used to filter the noise and estimate the noise level in a full-waveform echo. Secondly, peak and inflection points of the filtered full-waveform echo are used to detect the echo components it contains. Lastly, particle swarm optimization (PSO) is used to remove the noise-caused echo components and optimize the parameters of the most probable echo components. Simulation results show that the wavelet-decomposition-based filter yields a greater improvement in SNR and higher decomposition success rates than Wiener and Gaussian smoothing filters. In addition, the noise level estimated using the wavelet-decomposition-based filter is more accurate than those estimated using the two other commonly used methods. Experiments were carried out to evaluate the proposed method, which was compared with our previous method (called GS-LM for short). In the experiments, a lab-built full-waveform LiDAR system was utilized to provide eight types of full-waveform echoes scattered from three objects at different distances. Experimental results show that the proposed method achieves higher success rates in decomposing full-waveform echoes and more accurate parameter estimates for the echo components than GS-LM. The proposed method based on wavelet decomposition and PSO is valid for decomposing complicated full-waveform echoes to estimate the multi-level distances of the objects and measure their properties within a laser footprint.
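The core fitting task (estimating the parameters of overlapping Gaussian echo components) can be sketched on synthetic data. Here SciPy's least-squares `curve_fit` stands in for the PSO parameter search of the paper, and the waveform is simulated, not the authors' data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian echo components."""
    return (a1 * np.exp(-(t - m1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(t - m2) ** 2 / (2 * s2 ** 2)))

# Synthetic full waveform: two returns at t = 30 and t = 60 plus noise.
t = np.linspace(0, 100, 400)
rng = np.random.default_rng(0)
y = two_gauss(t, 1.0, 30.0, 5.0, 0.6, 60.0, 6.0) + 0.01 * rng.standard_normal(t.size)

# Fit the component parameters from a rough initial guess; in the paper
# this search is performed by PSO instead of a gradient-based solver.
p, _ = curve_fit(two_gauss, t, y, p0=[0.8, 25.0, 4.0, 0.5, 65.0, 5.0])

assert abs(p[1] - 30.0) < 0.5   # first return position recovered
assert abs(p[4] - 60.0) < 0.5   # second return position recovered
```

The recovered component centers correspond to the multi-level distances of the objects within the footprint; component widths and amplitudes carry the property information.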
Infinite order decompositions of C*-algebras.
Nematjonovich, Arzikulov Farhodjon
2016-01-01
The present paper is devoted to infinite order decompositions of C*-algebras. It is proved that an infinite order decomposition (IOD) of a C*-algebra forms the complexification of an order unit space, and, if the C*-algebra is monotone complete (not necessarily weakly closed), then its IOD is also a monotone complete ordered vector space. It is also established that an IOD of a C*-algebra is a C*-algebra if and only if this C*-algebra is a von Neumann algebra. In summary, we obtain that the norm of an infinite dimensional matrix is equal to the supremum of the norms of all finite dimensional main diagonal submatrices of this matrix, and that an infinite dimensional matrix is positive if and only if all finite dimensional main diagonal submatrices of this matrix are positive.
Spinodal Decomposition in Critical and Tricritical Systems.
Dee, Gregory Thomas
In this thesis we study the dynamical process of phase separation known as spinodal decomposition. We use the best available theoretical techniques (linear stability analysis, and the Langer, Bar-on, and Miller theory) to study the phenomena in both critical and tricritical systems. We deal with the problems of the early stage evolution and the late stage coarsening in these systems. We use Monte Carlo computer simulation techniques to study the process of spinodal decomposition in two systems, one of which is a model for a two-dimensional system with an order-disorder transition and the other a two-dimensional model of a binary alloy system. We also present a renormalization group calculation of a mean field character for the coarse-grained free energy.
Decentralized Model Predictive Control via Dual Decomposition
Wakasa, Yuji; Arakawa, Mizue; Tanaka, Kanya; Akashi, Takuya
This paper proposes a decentralized model predictive control method based on a dual decomposition technique. A model predictive control problem for a system with multiple subsystems is formulated as a convex optimization problem. In particular, we deal with the case where the control outputs of the subsystems have coupling constraints represented by linear equalities. A dual decomposition technique is applied to this problem in order to derive the dual problem with decoupled equality constraints. A projected subgradient method is used to solve the dual problem, which leads to a decentralized algorithm. In the algorithm, a small-scale problem is solved at each subsystem, and information exchange is performed in each group consisting of some subsystems. Also, it is shown that the computational complexity in the decentralized algorithm is reduced if the dynamics of the subsystems are all the same.
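The decentralized mechanism described above can be sketched on a toy problem with two subsystems and one linear coupling constraint. The costs and coupling below are illustrative, not from the paper; each local problem has a closed-form minimiser, and the dual variable is updated by a subgradient step:

```python
# Toy dual decomposition: choose x1, x2 to minimise
#   (x1 - 3)^2 + (x2 - 1)^2   subject to   x1 + x2 = 2.
# Each subsystem solves its own small problem given the dual variable lam;
# lam is then updated from the coupling-constraint residual.

c = 2.0          # right-hand side of the coupling constraint
a = [3.0, 1.0]   # local targets of the two subsystems
lam = 0.0        # dual variable (price on the coupling constraint)

for _ in range(500):
    # Local solves: min_x (x - a_i)^2 + lam * x  =>  x = a_i - lam / 2.
    x = [ai - lam / 2.0 for ai in a]
    # Subgradient ascent on the dual: step along the constraint residual.
    lam += 0.1 * (sum(x) - c)

assert abs(sum(x) - c) < 1e-6   # coupling constraint satisfied at convergence
```

As in the paper, only the scalar residual (here, `sum(x) - c`) needs to be exchanged between subsystems; each local solve stays small, which is the source of the decentralization.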
Thermal decompositions of light lanthanide aconitates
Energy Technology Data Exchange (ETDEWEB)
Brzyska, W.; Ozga, W. (Uniwersytet Marii Curie-Sklodowskiej, Lublin (Poland))
The conditions of thermal decomposition of Y, La, Ce(III), Pr, Nd, Sm, and Gd aconitates have been studied. On heating, the aconitate of Ce(III) loses crystallization water to yield the anhydrous salt, which is then transformed to the oxide CeO2. The aconitates of Y, Pr, Nd, Sm, Eu and Gd decompose in three stages. First, the aconitates undergo dehydration to form the anhydrous salts, which next decompose to Ln2O2CO3. In the last stage, the thermal decomposition of Ln2O2CO3 is accompanied by an endothermic effect. Dehydration of the La aconitate proceeds in two stages. The anhydrous complex decomposes to La2O2CO3, which subsequently decomposes to La2O3.
Heuristic decomposition for non-hierarchic systems
Bloebaum, Christina L.; Hajela, P.
1991-01-01
Design and optimization are substantially more complex in multidisciplinary and large-scale engineering applications due to the inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable to nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.
Influence of Family Structure on Variance Decomposition
DEFF Research Database (Denmark)
Edwards, Stefan McKinnon; Sarup, Pernille Merete; Sørensen, Peter
Partitioning genetic variance by sets of randomly sampled genes for complex traits in D. melanogaster and B. taurus, has revealed that population structure can affect variance decomposition. In fruit flies, we found that a high likelihood ratio is correlated with a high proportion of explained ge...... capturing pure noise. Therefore it is necessary to use both criteria, high likelihood ratio in favor of a more complex genetic model and proportion of genetic variance explained, to identify biologically important gene groups...
Grandchild of the frequency: Decomposition multigrid method
Energy Technology Data Exchange (ETDEWEB)
Dendy, J.E. Jr. [Los Alamos National Lab., NM (United States); Tazartes, C.C. [Univ. of California, Los Angeles, CA (United States)
1994-12-31
Previously the authors considered the frequency decomposition multigrid method and rejected it because it was not robust for problems with discontinuous coefficients. In this paper they show how to modify the method so as to obtain such robustness while retaining robustness for problems with anisotropic coefficients. They also discuss application of this method to a problem arising in global ocean modeling on the CM-5.
Numerical CP Decomposition of Some Difficult Tensors
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Phan, A. H.; Cichocki, A.
2017-01-01
Vol. 317, No. 1 (2017), pp. 362-370. ISSN 0377-0427. R&D Projects: GA ČR (CZ) GA14-13713S. Institutional support: RVO:67985556. Keywords: small matrix multiplication; canonical polyadic tensor decomposition; Levenberg-Marquardt method. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 1.357, year: 2016. http://library.utia.cas.cz/separaty/2017/SI/tichavsky-0468385.pdf
Gelman, Susan A; Noles, Nicholaus S
2011-09-01
Human cognition entails domain-specific cognitive processes that influence memory, attention, categorization, problem-solving, reasoning, and knowledge organization. This review examines domain-specific causal theories, which are of particular interest for permitting an examination of how knowledge structures change over time. We first describe the properties of commonsense theories, and how commonsense theories differ from scientific theories, illustrating with children's classification of biological and non-biological kinds. We next consider the implications of domain-specificity for broader issues regarding cognitive development and conceptual change. We then examine the extent to which domain-specific theories interact, and how people reconcile competing causal frameworks. Future directions for research include examining how different content domains interact, the nature of theory change, the role of context (including culture, language, and social interaction) in inducing different frameworks, and the neural bases for domain-specific reasoning.
Perspectives on Pentaerythritol Tetranitrate (PETN) Decomposition
Energy Technology Data Exchange (ETDEWEB)
Chambers, D; Brackett, C; Sparkman, D O
2002-07-01
This report evaluates the large body of work involving the decomposition of PETN and identifies the major decomposition routes and byproducts. From these studies it becomes apparent that the PETN decomposition mechanisms and the resulting byproducts are primarily determined by the chemical environment. In the absence of water, PETN can decompose through scission of the O-NO2 bond, resulting in the formation of an alkoxy radical and NO2. Because of the relatively high reactivity of both these initial byproducts, they are believed to drive a number of autocatalytic reactions, eventually forming (NO2OCH2)3CCHO, (NO2OCH2)2C=CHONO2, NO2OCH=C=CHONO2, (NO2OCH2)3C-NO2, (NO2OCH2)2C(NO2)2, NO2OCH2C(NO2)3, and C(NO2)4, as well as polymer-like species such as di-PEHN and tri-PEON. Surprisingly, the products of many of these proposed autocatalytic reactions have never been analytically validated. Conversely, in the presence of water, PETN has been shown to decompose primarily to the mono-, di-, and trinitrates of pentaerythritol.
Hydroxyl radical formation during peroxynitrous acid decomposition
Energy Technology Data Exchange (ETDEWEB)
Coddington, J.W.; Hurst, J.K.; Lymar, S.V.
1999-03-24
Yields of O2 formed during decomposition of peroxynitrous acid (ONOOH) under widely varying medium conditions are compared to predictions based upon the assumption that the reaction involves formation of discrete •OH and •NO2 radicals as oxidizing intermediates. The kinetic model used includes all reactions of •OH, •O2−, and reactive nitrogen species known to be important under the prevailing conditions; because the rate constants for all of these reactions have been independently measured, the calculations contain no adjustable fitting parameters. The model quantitatively accounts for (1) the complex pH dependence of the O2 yields and (2) the unusual effects of NO2−, which inhibits O2 formation in neutral, but not alkaline, solutions and also reverses inhibition by organic •OH scavengers in alkaline media. Other observations, including quenching of O2 yields by ferrocyanide and bicarbonate, the pressure dependence of the decomposition rate, and the reported dynamic behavior of O2 generation in the presence of H2O2, also appear to be in accord with the suggested mechanism. Overall, the close correspondence between observed and calculated O2 yields provides strong support for decomposition via homolysis of the ONOOH peroxo bond.
SEIM, PATRIK
2013-01-01
Molecular dynamics simulates the behaviour of interacting particles by calculating their trajectories using computationally demanding numerical methods. A molecular dynamics application often takes advantage of modern computer architectures with large multi-core systems by subdividing the simulated scenario into smaller domains. The subdivided domains are distributed over a number of instances running in parallel to gain computational performance. A problem with domain decomposition is...
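The first step of such a scheme, assigning particles to spatial subdomains before distributing them to parallel workers, can be sketched with NumPy. This is an illustrative binning sketch, not the code of any specific MD package; the box size and grid are arbitrary:

```python
import numpy as np

box, nx, ny = 10.0, 2, 2                 # box edge length and subdomain grid
rng = np.random.default_rng(42)
pos = rng.random((1000, 2)) * box        # random particle positions in the box

# Map each particle to a subdomain cell by integer division of its
# coordinates; the clamp handles particles exactly on the upper edge.
ix = np.minimum((pos[:, 0] / (box / nx)).astype(int), nx - 1)
iy = np.minimum((pos[:, 1] / (box / ny)).astype(int), ny - 1)
domain = ix * ny + iy                    # flat subdomain index per particle

counts = np.bincount(domain, minlength=nx * ny)
assert counts.sum() == 1000              # every particle assigned exactly once
```

In a real MD code each worker would then own one such cell (plus halo regions for particles near subdomain boundaries, which is where the load-balancing and communication problems alluded to above arise).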
Energy Technology Data Exchange (ETDEWEB)
Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)
2016-11-01
The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO){sub 6}) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO){sub 6} complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO){sub 6} and W(CO){sub 6} are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory is used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO){sub 6} is suggested, which is sensitive to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.
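The idea of superimposing gas-adsorption transport on a first-order decomposition model can be sketched with a tiny Monte Carlo simulation. All parameters below (rate constant, number of adsorption events, mean residence time) are illustrative, not the Mo(CO)6/W(CO)6/Sg(CO)6 values of the paper:

```python
import math
import random

random.seed(7)
k = 1.0            # first-order decomposition rate constant (illustrative)
hops = 50          # adsorption-desorption events along the column
mean_stay = 0.02   # mean residence time per adsorption event
n = 100_000        # number of simulated molecules

survived = 0
for _ in range(n):
    # Total time on the column: sum of exponential adsorption stays.
    t = sum(random.expovariate(1.0 / mean_stay) for _ in range(hops))
    # First-order heterogeneous decomposition during that time.
    if random.random() < math.exp(-k * t):
        survived += 1

frac = survived / n
print(round(frac, 3))   # roughly 0.37 for these illustrative parameters
```

Sweeping the rate constant (or the temperature dependence behind it) against an observed surviving fraction is, in spirit, how such a model can be used to deduce the first bond dissociation enthalpy.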
Shape decomposition technique in electrical impedance tomography.
Han, David K.; Prosperetti, Andrea
1999-01-01
Consider a two-dimensional domain containing a medium with unit electrical conductivity and one or more non-conducting objects. The problem considered here is that of identifying shape and position of the objects on the sole basis of measurements on the external boundary of the domain. An iterative
Bergshoeff, Eric A.; Kleinschmidt, Axel; Riccioni, Fabio
2012-01-01
We classify the half-supersymmetric "domain walls," i.e., branes of codimension one, in toroidally compactified IIA/IIB string theory and show to which gauged supergravity theory each of these domain walls belong. We use as input the requirement of supersymmetric Wess-Zumino terms, the properties of
DEFF Research Database (Denmark)
Joshi, Hiren J; Jørgensen, Anja; Schjoldager, Katrine T
2018-01-01
The GlycoDomainViewer is a bioinformatic tool to aid in the mining of glycoproteomic data sets from different sources and facilitate incorporation of glycosylation into studies of protein structure and function. We present a version 2.0 of GlycoDomainViewer incorporating a number of advanced feat...
Multifragmentation of a very heavy nuclear system (II): bulk properties and spinodal decomposition
Frankland, J D; Bacri, C O; Bellaize, N; Bocage, F; Borderie, B; Bougault, R; Brou, R; Buchet, P; Chbihi, A; Chomaz, P; Colin, J; Colonna, M; Cussol, D; Dayras, R; Demeyer, A N; Doré, D; Durand, D; Galíchet, E; Genouin-Duhamel, E; Gerlic, E; Guarnera, A; Guinet, D; Lautesse, P; Laville, J L; Le Neindre, N; Lecolley, J F; Legrain, R; Louvel, M; Maskay, A M; Nalpas, L; Nguyen, A D; Plagnol, E; Pârlog, M; Rivet, M F; Rosato, E; Saint-Laurent, F; Salou, S; Squalli, M; Steckmeyer, J C; Tabacaru, G; Tamain, B; Tassan-Got, L; Tirel, O; Vient, E; Volant, C; Wieleczko, J P
2001-01-01
The properties of fragments and light charged particles emitted in multifragmentation of single sources formed in central 36 A MeV Gd+U collisions are reviewed. Most of the products are isotropically distributed in the reaction c.m. Fragment kinetic energies reveal the onset of radial collective energy. A bulk effect is experimentally evidenced from the similarity of the charge distribution with that from the lighter 32 A MeV Xe+Sn system. Spinodal decomposition of finite nuclear matter exhibits the same property in simulated central collisions for the two systems, and appears therefore as a possible mechanism at the origin of multifragmentation in this incident energy domain.
Aichmayer, Barbara; Fratzl, Peter; Puri, Sanjay; Saller, Gabriele
2003-07-01
Interactions with the macroscopic specimen surface can profoundly modify phase-separation processes. This has previously been observed in liquids and polymer films and is theoretically described by the theory of surface-directed spinodal decomposition (SDSD). Here we report first observations of SDSD in a metallic alloy on a macroscopic scale. The influence of the surface leads to the development of concentric domains extending over the whole 10mm thick cylindrical steel specimen, due to long-range interactions via elastic stresses and long-range diffusion of the interstitial elements nitrogen and carbon.
Robust regularized singular value decomposition with application to mortality data
Zhang, Lingsong
2013-09-01
We develop a robust regularized singular value decomposition (RobRSVD) method for analyzing two-way functional data. The research is motivated by the application of modeling human mortality as a smooth two-way function of age group and year. The RobRSVD is formulated as a penalized loss minimization problem where a robust loss function is used to measure the reconstruction error of a low-rank matrix approximation of the data, and an appropriately defined two-way roughness penalty function is used to ensure smoothness along each of the two functional domains. By viewing the minimization problem as two conditional regularized robust regressions, we develop a fast iterative reweighted least squares algorithm to implement the method. Our implementation naturally incorporates missing values. Furthermore, our formulation allows rigorous derivation of leave-one-row/column-out cross-validation and generalized cross-validation criteria, which enable computationally efficient data-driven penalty parameter selection. The advantages of the new robust method over nonrobust ones are shown via extensive simulation studies and the mortality rate application. © Institute of Mathematical Statistics, 2013.
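The robust low-rank idea at the heart of this approach can be sketched with a minimal iteratively reweighted least squares (IRLS) rank-1 fit using Huber weights. This sketch omits the paper's two-way roughness penalties and cross-validated tuning; the Huber cutoff `c` and the synthetic data are assumptions for illustration:

```python
import numpy as np

def huber_weights(r, c):
    a = np.abs(r)
    return np.where(a > c, c / np.maximum(a, 1e-12), 1.0)

def robust_rank1(X, c=1.0, n_iter=100):
    """Rank-1 fit X ~ u v^T by iteratively reweighted least squares."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # ordinary SVD init
    u, v = U[:, 0] * s[0], Vt[0].copy()
    for _ in range(n_iter):
        # downweight large residuals, then solve row-wise weighted LS for u
        W = huber_weights(X - np.outer(u, v), c)
        u = (W * X * v).sum(axis=1) / np.maximum((W * v**2).sum(axis=1), 1e-12)
        # re-weight and solve column-wise weighted LS for v
        W = huber_weights(X - np.outer(u, v), c)
        v = (W * X * u[:, None]).sum(axis=0) / np.maximum((W * u[:, None]**2).sum(axis=0), 1e-12)
    return u, v

rng = np.random.default_rng(0)
a, b = rng.normal(size=30), rng.normal(size=20)
clean = np.outer(a, b)
X = clean.copy()
X[rng.integers(0, 30, 5), rng.integers(0, 20, 5)] += 10.0   # gross outliers
u, v = robust_rank1(X)
rel_err = np.linalg.norm(np.outer(u, v) - clean) / np.linalg.norm(clean)
print(f"relative error vs. clean rank-1 signal: {rel_err:.3f}")
```

The Huber weights shrink the influence of the contaminated cells, so the recovered rank-1 component stays close to the clean signal where an ordinary SVD would be pulled toward the outliers.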
Ohmichi, Yuya
2017-07-01
In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
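The standard exact-DMD step that this framework preconditions can be sketched in a few lines: truncate an SVD (POD) basis of the first snapshot block, project the one-step map onto it, and take eigenvalues. The *incremental* POD and greedy mode selection that are the paper's contribution are not reproduced here; the toy dynamics below are an assumption for illustration:

```python
import numpy as np

def dmd_eigs(X, r):
    """Eigenvalues of the rank-r DMD operator fitted to snapshot matrix X."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]                      # POD truncation
    A_tilde = U.conj().T @ X2 @ Vt.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Linear toy dynamics x_{k+1} = A x_k with known eigenvalues 0.9 and 0.5.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(10):
    x = A @ x
    snaps.append(x)

eigs = np.sort(dmd_eigs(np.array(snaps).T, r=2).real)
print(eigs)   # ~ [0.5, 0.9]
```

For exactly linear dynamics and full rank `r`, DMD recovers the eigenvalues of the underlying operator, which makes a convenient correctness check before applying the method to large flow datasets.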
Cholesterol Domains Enhance Transfection
Betker, Jamie L.; Kullberg, Max; Gomez, Joe; Anchordoquy, Thomas J.
2014-01-01
The formation of cholesterol domains in lipoplexes has been associated with enhanced serum stability and transfection rates both in cell culture and in vivo. This study utilizes the ability of saturated phosphatidylcholines to promote the formation of cholesterol domains at much lower cholesterol contents than have been utilized in previous work. The results show that lipoplexes with identical cholesterol and cationic lipid contents exhibit significantly improved transfection efficiencies when a domain is present, consistent with previous work. In addition, studies assessing transfection rates in the absence of serum demonstrate that the ability of domains to enhance transfection is not dependent on interactions with serum proteins. Consistent with this hypothesis, characterization of the adsorbed proteins composing the corona of these lipoplex formulations did not reveal a correlation between transfection and the adsorption of a specific protein. Finally, we show that the interaction with serum proteins can promote domain formation in some formulations, and thereby result in enhanced transfection only after serum exposure. PMID:23557286
Energy Technology Data Exchange (ETDEWEB)
Dahlgren, Kathryn Marie [California State Univ., Turlock, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Morris, Karla Vanessa [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2014-08-01
The future of extreme-scale computing is expected to magnify the influence of soft faults as a source of inaccuracy or failure in solutions obtained from distributed parallel computations. The development of resilient computational tools represents an essential recourse for understanding the best methods for absorbing the impacts of soft faults without sacrificing solution accuracy. The Rexsss (Resilient Extreme Scale Scientific Simulations) project pursues the development of fault resilient algorithms for solving partial differential equations (PDEs) on distributed systems. Performance analyses of current algorithm implementations assist in the identification of runtime inefficiencies.
Datta, Anubhav; Johnson, Wayne R.
2009-01-01
This paper has two objectives. The first is to formulate a three-dimensional finite element model for the dynamic analysis of helicopter rotor blades. The second is to implement and analyze a parallel, scalable, dual-primal iterative substructuring based Krylov solver for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.
Antonietti, P. F.
2014-05-13
We propose and study an iterative substructuring method for an h-p Nitsche-type discretization, following the original approach introduced in Bramble et al. Math. Comp. 47(175):103–134, (1986) for conforming methods. We prove quasi-optimality with respect to the mesh size and the polynomial degree for the proposed preconditioner. Numerical experiments assess the performance of the preconditioner and verify the theory. © 2014, Springer-Verlag Italia.
Identification of the Swiss Z24 Highway Bridge by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, P.
2002-01-01
This paper presents the result of the modal identification of the Swiss highway bridge Z24. A series of 15 progressive damage tests were performed on the bridge before it was demolished in autumn 1998, and the ambient response of the bridge was recorded for each damage case. In this paper the modal...
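The Frequency Domain Decomposition (FDD) technique named in the title can be sketched on a toy two-channel example: build the output cross-spectral density matrix by Welch-style segment averaging, SVD it at each frequency, and pick peaks of the first singular value. The two "modal" frequencies below (3 Hz and 8 Hz) and the mode shapes are illustrative assumptions, not the Z24 values:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 100.0, 2**14
t = np.arange(n) / fs
m1 = np.sin(2 * np.pi * 3.0 * t)        # "mode" 1, shape [1,  1]
m2 = np.sin(2 * np.pi * 8.0 * t)        # "mode" 2, shape [1, -1]
X = np.vstack([m1 + m2, m1 - m2]) + 0.3 * rng.normal(size=(2, n))

def csd_matrix(X, fs, nseg=16):
    """Cross-spectral density matrix G(f) by averaging windowed segments."""
    n_ch, n = X.shape
    L = n // nseg
    win = np.hanning(L)
    freqs = np.fft.rfftfreq(L, 1.0 / fs)
    G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for k in range(nseg):
        F = np.fft.rfft(X[:, k * L:(k + 1) * L] * win, axis=1)
        G += np.einsum('if,jf->fij', F, F.conj())
    return freqs, G / nseg

freqs, G = csd_matrix(X, fs)
s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
f_a = freqs[np.argmax(np.where(freqs < 5.5, s1, 0))]   # peak in low band
f_b = freqs[np.argmax(np.where(freqs >= 5.5, s1, 0))]  # peak in high band
print(f"picked modal frequencies: {f_a:.2f} Hz, {f_b:.2f} Hz")
```

At each modal frequency the spectral matrix is close to rank one, so the first singular value peaks there and the corresponding singular vector approximates the mode shape — the property FDD exploits for ambient-response identification.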
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, Palle; Zhang, Lingmi
2007-01-01
As a part of a research project co-funded by the European Community, a series of 15 damage tests were performed on a prestressed concrete highway bridge in Switzerland. The ambient response of the bridge was recorded for each damage case. A dense array of instruments allowed the identification...... is to show the application of the FDD method as an efficient way to perform health monitoring of civil engineering structures. The modal properties, frequencies, damping ratios and mode shapes for the different damage cases were compared with those for the undamaged bridge....
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, P.; Zhang, L.
2002-01-01
As a part of a research project co-funded by the European Community, a series of 15 damage tests were performed on a prestressed concrete highway bridge in Switzerland. The ambient response of the bridge was recorded for each damage case. A dense array of instruments allowed the identification...... is to show the application of the FDD method as an efficient way to perform health monitoring of civil engineering structures. The modal properties, frequencies, damping ratios and mode shapes for the different damage cases were compared with those for the undamaged bridge....
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, P.; Cantieni, R.
2001-01-01
A series of 15 progressive damage tests were performed on a prestressed concrete highway bridge in Switzerland. The ambient response of the bridge was recorded for each damage case with a relatively large number of sensors. Changes in frequencies, damping ratios and MAC values were determined...
Applications of domain decomposition techniques for the multiscale modelling of softening materials
Lloberas Valls, O.; Rixen, D.J.; Simone, A.; Sluys, L.J.
2009-01-01
In this contribution we describe a methodology for the study of softening brittle materials at different scales of observation. The goal is to account for a higher resolution at those areas that undergo the non-linear processes. We apply the FETI (Finite Element Tearing and Interconnecting)
Gharekhan, Anita H.; Rath, Dhaitri; Oza, Ashok N.; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.
2009-02-01
A systematic investigation of the fluorescence characteristics of normal and cancerous human breast tissues is carried out, using a laser and a lamp as excitation sources. It is found that previously observed subtle differences between these two tissue types in the wavelet domain are absent when a lamp is used as the excitation source. However, singular value decomposition of the average spectral profile in the wavelet domain yields strong correlation for the cancer tissues in the 580-750 nm regime, indicating weak fluorophore activity in this wavelength range.
A wavelet "time-shift-detail" decomposition
Levan, N.; Kubrusly, Carlos S.
2003-01-01
We show that, with respect to an orthonormal wavelet $\psi(\cdot) \in L^2(\mathbb{R})$, any $f(\cdot) \in L^2(\mathbb{R})$ is, on the one hand, the sum of its "layers of details" over all time-shifts, and on the other hand, the sum of its layers of details over all scales. The latter is well known and is a consequence of a wandering subspace decomposition of $L^2(\mathbb{R})$ which, in turn, resulted from a wavelet Multiresolution Analysis (MRA). The former has not been discussed before. We show ...
Tensor Decompositions for Learning Latent Variable Models
2012-12-08
Tensor decomposition methods are developed for learning several popular latent variable models (Anandkumar, Ge, Hsu, Kakade, et al.; supported in part by ARO Award W911NF-12-1-0404).
Thermal decomposition as route for silver nanoparticles
Directory of Open Access Journals (Sweden)
Navaladian S
2006-01-01
Full Text Available Single-crystalline silver nanoparticles have been synthesized by thermal decomposition of silver oxalate in water and in ethylene glycol. Polyvinyl alcohol (PVA) was employed as a capping agent. The particles were spherical in shape, with sizes below 10 nm. The chemical reduction of silver oxalate by PVA was also observed. Increasing the polymer concentration led to a decrease in the size of the Ag particles. Ag nanoparticles were not formed in the absence of PVA. The antibacterial activity of the Ag colloid was studied by the disc diffusion method.
Diffuse Optical Imaging Using Decomposition Methods
Directory of Open Access Journals (Sweden)
Binlin Wu
2012-01-01
Full Text Available Diffuse optical imaging (DOI) for detecting and locating targets in a highly scattering turbid medium is treated as a blind source separation (BSS) problem. Three matrix decomposition methods, independent component analysis (ICA), principal component analysis (PCA), and nonnegative matrix factorization (NMF), were used to study the DOI problem. The efficacy of the resulting approaches was evaluated and compared using simulated and experimental data. Samples used in the experiments included Intralipid-10% or Intralipid-20% suspensions in water as the medium, with absorptive or scattering targets embedded.
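Of the three decompositions compared, NMF is the easiest to sketch from scratch with Lee-Seung multiplicative updates, which preserve nonnegativity of both factors. The synthetic nonnegative rank-2 "measurement" matrix below is an assumption standing in for the Intralipid phantom data:

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Factor nonnegative V ~ W @ H by multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update keeps W >= 0
    return W, H

rng = np.random.default_rng(3)
V = rng.random((40, 2)) @ rng.random((2, 60))    # exact nonnegative rank 2
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"NMF relative reconstruction error: {rel_err:.4f}")
```

In a BSS reading of DOI, the columns of `W` play the role of spatial source patterns and the rows of `H` the mixing weights; PCA and ICA would instead be applied with `numpy.linalg.svd` or an ICA package.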
Decomposition of nitrous oxide at medium temperatures
Energy Technology Data Exchange (ETDEWEB)
Loeffler, G.; Wargadalam, V.J.; Winter, F.; Hofbauer, H.
2000-03-01
Flow reactor experiments were done to study the decomposition of N{sub 2}O at atmospheric pressure and in a temperature range of 600--1,000 C. Dilute mixtures of N{sub 2}O with H{sub 2}, CH{sub 4}, CO with and without oxygen with N{sub 2} as carrier gas were studied. To see directly the relative importance of the thermal decomposition versus the destruction by free radicals (i.e.: H, O, OH) iodine was added to the reactant mixture suppressing the radicals' concentrations towards their equilibrium concentrations. The experimental results were discussed using a detailed chemistry model. This work shows that there are still some uncertainties regarding the kinetics of the thermal decomposition and the reaction between N{sub 2}O and the O radical. Using the recommendations applied in this work for the reaction N{sub 2}O + M {leftrightarrow} N{sub 2} + O + M and for N{sub 2}O + O {leftrightarrow} products, a good agreement with the experimental data can be obtained over a wide range of experimental conditions. The reaction between N{sub 2}O and OH is of minor importance under present conditions as stated in latest literature. The results show that N{sub 2}O + H {leftrightarrow} N{sub 2} + OH is the most important reaction in the destruction of N{sub 2}O. In the presence of oxygen it competes with H + O{sub 2} + M {leftrightarrow} HO{sub 2} + M and H + O{sub 2} {leftrightarrow} O + OH, respectively. The importance of the thermal decomposition (N{sub 2}O + M {leftrightarrow} N{sub 2} + O + M) increases with residence time. Reducing conditions and a long residence time lead to a high potential in N{sub 2}O reduction. Especially mixtures of H{sub 2}/N{sub 2}O and CO/H{sub 2}O/N{sub 2}O in nitrogen lead to a chain reaction mechanism causing a strong N{sub 2}O reduction.
Multiresolution signal decomposition transforms, subbands, and wavelets
Akansu, Ali N
1992-01-01
This book provides an in-depth, integrated, and up-to-date exposition of the topic of signal decomposition techniques. Application areas of these techniques include speech and image processing, machine vision, information engineering, High-Definition Television, and telecommunications. The book will serve as the major reference for those entering the field, instructors teaching some or all of the topics in an advanced graduate course and researchers needing to consult an authoritative source.n The first book to give a unified and coherent exposition of multiresolutional signal decompos
MADCam: The multispectral active decomposition camera
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen; Stegmann, Mikkel Bille
2001-01-01
A real-time spectral decomposition of streaming three-band image data is obtained by applying linear transformations. The Principal Components (PC), the Maximum Autocorrelation Factors (MAF), and the Maximum Noise Fraction (MNF) transforms are applied. In the presented case study the PC transform...... that utilised information drawn from the temporal dimension instead of the traditional spatial approach. Using the CIF format (352x288) frame rates up to 30 Hz are obtained and in VGA mode (640x480) up to 15 Hz....
Thermal decomposition of meat and bone meal
Energy Technology Data Exchange (ETDEWEB)
Conesa, J.A.; Fullana, A.; Font, R. [Department of Chemical Engineering, University of Alicante, P.O. Box 99, E-03080 Alicante (Spain)
2003-12-01
A series of runs has been performed to study the thermal behavior of meat and bone meal (MBM) both in inert and reactive atmosphere. Although they are actually burned, the thermal decomposition of such MBM wastes has not been studied from a scientific point of view until now. The aim of this work is to present and discuss the thermogravimetric behavior of MBM both in nitrogen and air atmospheres. A thermobalance has been used to carry out the study at three different heating rates. A kinetic scheme able to correlate simultaneously (with no variation of the kinetic constants) the runs performed at different heating rates and different atmospheres of reaction is presented.
Decomposition Polypropylene Plastic Waste with Pyrolysis Methode
Naimah, Siti; Nuraeni, Chicha; Rumondang, Irma; Jati, Bumiarto Nugroho; Ermawati, Rahyani
2012-01-01
Various attempts have been made to reduce plastic waste. One such attempt is to convert plastic waste into energy sources. The process of converting waste plastics involves several stages, one of which is pyrolysis (thermal cracking). Pyrolysis is the decomposition and distillation of plastic waste without O2 at high temperatures (500-1000 °C). The pyrolysis process yields products in solid and liquid forms. With the reactor temperature at 500 °C, pyrolysis equi...
Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films
Energy Technology Data Exchange (ETDEWEB)
Eloussifi, H. [GRMT, GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Laboratoire de Chimie Inorganique, Faculté des Sciences de Sfax, Université de Sfax, BP 1171, 3000 Sfax (Tunisia); Farjas, J., E-mail: jordi.farjas@udg.cat [GRMT, GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Roura, P. [GRMT, GRMT, Department of Physics, University of Girona, Campus Montilivi, E17071 Girona, Catalonia (Spain); Ricart, S.; Puig, T.; Obradors, X. [Institut de Ciència de Materials de Barcelona (CSIC), Campus UAB, 08193 Bellaterra, Catalonia (Spain); Dammak, M. [Laboratoire de Chimie Inorganique, Faculté des Sciences de Sfax, Université de Sfax, BP 1171, 3000 Sfax (Tunisia)
2013-10-31
We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as the powder decomposition; however, yttria and all intermediates but YF{sub 3} appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films.
Review of Matrix Decomposition Techniques for Signal Processing Applications
Directory of Open Access Journals (Sweden)
Monika Agarwal,
2014-01-01
Full Text Available Decomposition of a matrix is a vital part of many scientific and engineering applications. It is a technique that breaks down a square numeric matrix into a product of two simpler square matrices and is a basis for efficiently solving a system of equations, which in turn is the basis for inverting a matrix; matrix inversion is part of many important algorithms. Matrix factorizations have wide applications in numerical linear algebra, in solving linear systems, and in computing inertia and rank estimates. This paper presents a review of the matrix decomposition techniques used in signal processing applications on the basis of their computational complexity, advantages, and disadvantages. Various decomposition techniques such as LU decomposition, QR decomposition, and Cholesky decomposition are discussed here.
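The three factorizations named above, and the solve-a-linear-system use case they support, can be demonstrated directly with NumPy/SciPy on a small symmetric positive-definite matrix (chosen so that Cholesky applies):

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4 * np.eye(4)      # symmetric positive definite by construction
b = rng.normal(size=4)

P, L, U = lu(A)                  # A = P L U   (row-pivoted LU)
Q, R = np.linalg.qr(A)           # A = Q R     (Q orthogonal, R upper triangular)
C = np.linalg.cholesky(A)        # A = C C^T   (requires SPD input)

# Solving A x = b via Cholesky: two triangular solves.
y = np.linalg.solve(C, b)        # forward substitution: C y = b
x = np.linalg.solve(C.T, y)      # back substitution:    C^T x = y
print(np.allclose(x, np.linalg.solve(A, b)))   # True
```

Once a factorization is in hand, each additional right-hand side costs only the two triangular solves, which is the efficiency argument the abstract makes for factorization-based inversion.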
Freeman-Durden Decomposition with Oriented Dihedral Scattering
Directory of Open Access Journals (Sweden)
Yan Jian
2014-10-01
Full Text Available In this paper, when the azimuth direction of polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data for agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.
Prediction of the thermal decomposition of organic peroxides by validated QSPR models.
Prana, Vinca; Rotureau, Patricia; Fayet, Guillaume; André, David; Hub, Serge; Vicot, Patricia; Rao, Li; Adamo, Carlo
2014-07-15
Organic peroxides are unstable chemicals which can easily decompose and may lead to explosion. Such a process can be characterized by physico-chemical parameters such as heat and temperature of decomposition, whose determination is crucial to manage related hazards. These thermal stability properties are also required within many regulatory frameworks related to chemicals in order to assess their hazardous properties. In this work, new quantitative structure-property relationships (QSPR) models were developed to predict accurately the thermal stability of organic peroxides from their molecular structure respecting the OECD guidelines for regulatory acceptability of QSPRs. Based on the acquisition of 38 reference experimental data using DSC (differential scanning calorimetry) apparatus in homogeneous experimental conditions, multi-linear models were derived for the prediction of the decomposition heat and the onset temperature using different types of molecular descriptors. Models were tested by internal and external validation tests and their applicability domains were defined and analyzed. Being rigorously validated, they presented the best performances in terms of fitting, robustness and predictive power and the descriptors used in these models were linked to the peroxide bond whose breaking represents the main decomposition mechanism of organic peroxides. Copyright © 2014 Elsevier B.V. All rights reserved.
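The statistical core of this QSPR workflow — a multi-linear model fitted by least squares and validated internally by leave-one-out cross-validation (Q²) — can be sketched as follows. The 38 "samples" and 3 "descriptors" below are synthetic stand-ins, not the paper's DSC data or quantum-chemical descriptors:

```python
import numpy as np

def r2(y, yhat):
    """Coefficient of determination (also used for LOO Q^2)."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def fit(X, y):
    Xa = np.column_stack([np.ones(len(X)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

rng = np.random.default_rng(7)
X = rng.normal(size=(38, 3))                          # 38 samples, 3 descriptors
y = 120 + X @ np.array([15.0, -8.0, 3.0]) + rng.normal(0, 2.0, 38)

r2_fit = r2(y, predict(fit(X, y), X))
# Leave-one-out: refit without sample i, predict sample i.
loo = np.array([predict(fit(np.delete(X, i, 0), np.delete(y, i)),
                        X[i:i + 1])[0] for i in range(len(X))])
q2_loo = r2(y, loo)
print(f"R^2 = {r2_fit:.3f}, LOO Q^2 = {q2_loo:.3f}")
```

A large gap between R² and Q² signals overfitting; the OECD validation principles the paper follows additionally require an external test set and an explicit applicability domain.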
Prediction of the thermal decomposition of organic peroxides by validated QSPR models
Energy Technology Data Exchange (ETDEWEB)
Prana, Vinca [Institut de Recherche de Chimie Paris, Chimie ParisTech CNRS, 11 rue P. et M. Curie, Paris 75005 (France); Institut National de l’Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP2, Verneuil-en-Halatte 60550 (France); Rotureau, Patricia, E-mail: patricia.rotureau@ineris.fr [Institut National de l’Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP2, Verneuil-en-Halatte 60550 (France); Fayet, Guillaume [Institut National de l’Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP2, Verneuil-en-Halatte 60550 (France); André, David; Hub, Serge [ARKEMA, rue Henri Moissan, BP63, Pierre Benite 69493 (France); Vicot, Patricia [Institut National de l’Environnement Industriel et des Risques (INERIS), Parc Technologique Alata, BP2, Verneuil-en-Halatte 60550 (France); Rao, Li [Institut de Recherche de Chimie Paris, Chimie ParisTech CNRS, 11 rue P. et M. Curie, Paris 75005 (France); Adamo, Carlo [Institut de Recherche de Chimie Paris, Chimie ParisTech CNRS, 11 rue P. et M. Curie, Paris 75005 (France); Institut Universitaire de France, 103 Boulevard Saint Michel, Paris F-75005 (France)
2014-07-15
Highlights: • QSPR models were developed for thermal stability of organic peroxides. • Two accurate MLR models were exhibited based on quantum chemical descriptors. • Performances were evaluated by a series of internal and external validations. • The new QSPR models satisfied all OCDE principles of validation for regulatory use. - Abstract: Organic peroxides are unstable chemicals which can easily decompose and may lead to explosion. Such a process can be characterized by physico-chemical parameters such as heat and temperature of decomposition, whose determination is crucial to manage related hazards. These thermal stability properties are also required within many regulatory frameworks related to chemicals in order to assess their hazardous properties. In this work, new quantitative structure–property relationships (QSPR) models were developed to predict accurately the thermal stability of organic peroxides from their molecular structure respecting the OECD guidelines for regulatory acceptability of QSPRs. Based on the acquisition of 38 reference experimental data using DSC (differential scanning calorimetry) apparatus in homogeneous experimental conditions, multi-linear models were derived for the prediction of the decomposition heat and the onset temperature using different types of molecular descriptors. Models were tested by internal and external validation tests and their applicability domains were defined and analyzed. Being rigorously validated, they presented the best performances in terms of fitting, robustness and predictive power and the descriptors used in these models were linked to the peroxide bond whose breaking represents the main decomposition mechanism of organic peroxides.
Plant identity influences decomposition through more than one mechanism.
Directory of Open Access Journals (Sweden)
Jennie R McLaren
Full Text Available Plant litter decomposition is a critical ecosystem process representing a major pathway for carbon flux, but little is known about how it is affected by changes in plant composition and diversity. Single plant functional groups (graminoids, legumes, non-leguminous forbs) were removed from a grassland in northern Canada to examine the impacts of functional group identity on decomposition. Removals were conducted within two different environmental contexts (fertilization and fungicide application) to examine the context-dependency of these identity effects. We examined two different mechanisms by which the loss of plant functional groups may impact decomposition: effects of the living plant community on the decomposition microenvironment, and changes in the species composition of the decomposing litter, as well as the interaction between these mechanisms. We show that the identity of the plant functional group removed affects decomposition through both mechanisms. Removal of both graminoids and forbs slowed decomposition through changes in the decomposition microenvironment. We found non-additive effects of litter mixing, with both the direction and identity of the functional group responsible depending on year; in 2004 graminoids positively influenced decomposition whereas in 2006 forbs negatively influenced decomposition rate. Although these two mechanisms act independently, their effects may be additive if both mechanisms are considered simultaneously. It is essential to understand the variety of mechanisms through which even a single ecosystem property is affected if we are to predict the future consequences of biodiversity loss.
Microbial community functional change during vertebrate carrion decomposition
National Research Council Canada - National Science Library
Pechal, Jennifer L; Crippen, Tawni L; Tarone, Aaron M; Lewis, Andrew J; Tomberlin, Jeffery K; Benbow, M Eric
2013-01-01
.... The objective of this study was to provide a description of the carrion associated microbial community functional activity using differential carbon source use throughout decomposition over seasons...
Primary decomposition of torsion R[X]-modules
Directory of Open Access Journals (Sweden)
William A. Adkins
1994-01-01
Full Text Available This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation of any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor products.
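The classical linear-algebra special case cited in the abstract can be written out compactly; a sketch in LaTeX, for a linear operator with minimal polynomial factored into distinct prime powers:

```latex
% Primary decomposition: for a linear operator T on a finite-dimensional
% vector space V with minimal polynomial
%   p(X) = p_1(X)^{e_1} \cdots p_k(X)^{e_k}  (p_i distinct primes),
\[
  V \;=\; \ker p_1(T)^{e_1} \,\oplus\, \cdots \,\oplus\, \ker p_k(T)^{e_k},
\]
% and for any T-invariant subspace W \subseteq V the decomposition
% restricts componentwise:
\[
  W \;=\; \bigoplus_{i=1}^{k} \bigl( W \cap \ker p_i(T)^{e_i} \bigr).
\]
```

The paper's result generalizes the second statement to pure R-submodules of torsion R[X]-modules that are torsion free over R.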
An investigation of the decomposition mechanism of calcium carbonate
Directory of Open Access Journals (Sweden)
D. Wang
2017-01-01
Full Text Available This paper focuses on investigating the decomposition mechanism of calcium carbonate. The non-isothermal thermal decomposition of calcium carbonate under vacuum and under a flowing nitrogen atmosphere has been studied by thermogravimetric analysis. With the application of an advanced nonlinear isoconversional method, the determined activation energy for each condition is found to depend on the extent of reaction. Based on these dependences, a process involving two consecutive decomposition steps has been simulated. The simulation results match the experimental results for the flowing nitrogen atmosphere. The results indicate that the decomposition of calcium carbonate proceeds through the formation of an intermediate, metastable product.
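The isoconversional idea underlying such an analysis is that, at a fixed extent of reaction, ln(dα/dt) versus 1/T is linear with slope -Eₐ/R (the Friedman method). A minimal sketch, with an assumed single-step Arrhenius model and illustrative values rather than the CaCO₃ results:

```python
import numpy as np

R = 8.314          # gas constant, J/(mol K)
Ea_true = 190e3    # assumed activation energy, J/mol
A = 5.0e7          # assumed pre-exponential factor, 1/s

T = np.linspace(900.0, 1100.0, 6)            # temperatures at fixed alpha, K
rate = A * np.exp(-Ea_true / (R * T))        # modeled d(alpha)/dt at fixed alpha

# Friedman plot: ln(rate) vs 1/T is linear with slope -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(rate), 1)
Ea_fit = -slope * R
print(f"recovered E_a = {Ea_fit / 1e3:.1f} kJ/mol")
```

In a real isoconversional analysis this fit is repeated at many values of α; an Eₐ that varies with α (as the paper reports) is the signature of a multi-step mechanism.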
Conserved Domain Database (CDD)
U.S. Department of Health & Human Services — CDD is a protein annotation resource that consists of a collection of well-annotated multiple sequence alignment models for ancient domains and full-length proteins.
Modeling yields insight into thermal decomposition
Energy Technology Data Exchange (ETDEWEB)
Case, J.L.; Carr, R.V.; Simpson, M.S. [Air Products and Chemicals, Inc., Allentown, PA (United States)
1995-12-01
A fundamental understanding of the thermal decomposition of nitrotoluenes is critical in evaluating the hazards associated with transporting and storing commercial volumes of these chemicals. Detailed modeling of an adiabatic, low-phi, semi-open (vented to a larger pressure vessel) calorimeter provides insight into a multiple-reaction mechanism. The reaction rates developed, along with the significant effect of reactant and intermediate vaporization, were confirmed with additional experimental results. Such an interpretation of nitrotoluene decomposition is consistent with recent isothermal experiments as well as with the body of data reported in the open literature. The low-temperature, or induction, reactions are accurately represented with a first-order Arrhenius model having typical values for kinetic and thermodynamic parameters. These reactions generate minimal amounts of non-condensable gas. If the material is maintained at an elevated temperature but prevented from self-heating (by external cooling), the intermediate products form thermally unstable, nonvolatile oligomers. At higher temperatures the remaining materials undergo explosive reactions characterized by high heats of reaction, large activation energies, and massive releases of non-condensable gas. Quantifying the rates of nitrotoluene and/or intermediate vaporization versus oligomerization is essential in evaluating the hazard of a thermal explosion involving a commercial quantity of nitrotoluene.
Interactions between Fine Wood Decomposition and Flammability
Directory of Open Access Journals (Sweden)
Weiwei Zhao
2014-04-01
Full Text Available Fire is nearly ubiquitous in the terrestrial biosphere, with profound effects on earth surface carbon storage, climate, and forest functions. Fuel quality is an important parameter determining forest fire behavior, and it differs among both tree species and organs. Fuel quality is not static: when dead plant material decomposes, its structural, chemical, and water dynamic properties change, with implications for fuel flammability. However, the interactions between decomposition and flammability are poorly understood. This study aimed to determine the effects of decomposition on fuel quality and how these directly and indirectly affect wood flammability. We conducted controlled experiments on water dynamics and fire using twigs of four temperate tree species. We found considerable direct and indirect effects of decomposition on twig flammability, particularly on ignitability and burning time, which are important variables for fire spread. More decomposed twigs ignite and burn faster at a given water content. Moreover, decomposed twigs dry out faster than fresh twigs, which makes them flammable sooner when drying out after rain. Decomposed fine woody litter may promote horizontal fire spread as ground fuel and act as a fuel ladder when it remains attached to trees. Our results add an important, previously little-studied dynamic to our understanding of forest fire spread.
Spectral decomposition of nonlinear systems with memory
Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.
2016-02-01
We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
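Since the modes of this decomposition follow a Mittag-Leffler relaxation on average, a direct series evaluation of the one-parameter Mittag-Leffler function is a natural companion. This is a stdlib-only sketch; the truncation level is a pragmatic choice, and for alpha = 1 the function reduces to the ordinary exponential.

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """One-parameter Mittag-Leffler function:
    E_alpha(z) = sum_{k>=0} z**k / Gamma(alpha*k + 1),
    evaluated by truncating the series at `terms` terms."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))
```

Known special cases make convenient checks: E_1(z) = exp(z) and E_2(z) = cosh(sqrt(z)).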
Kinetics of bromochloramine formation and decomposition.
Luh, Jeanne; Mariñas, Benito J
2014-01-01
Batch experiments were performed to study the kinetics of bromochloramine formation and decomposition from the reaction of monochloramine and bromide ion. The effects of pH, initial monochloramine and bromide ion concentrations, phosphate buffer concentration, and excess ammonia were evaluated. Results showed that the monochloramine decay rate increased with decreasing pH and increasing bromide ion concentration, and the concentration of bromochloramine increased to a maximum before decreasing gradually. The maximum bromochloramine concentration reached was found to decrease with increasing phosphate and ammonia concentrations. Previous models in the literature were not able to capture the decay of bromochloramine, and therefore we proposed an extended model consisting of reactions for monochloramine autodecomposition, the decay of bromamines in the presence of bromide, bromochloramine formation, and bromochloramine decomposition. Reaction rate constants were obtained through least-squares fitting to 11 data sets representing the effect of pH, bromide, monochloramine, phosphate, and excess ammonia. The reaction rate constants were then used to predict monochloramine and bromochloramine concentration profiles for all experimental conditions tested. In general, the modeled lines were found to provide good agreement with the experimental data under most conditions tested, with deviations occurring at low pH and high bromide concentrations.
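The least-squares fitting of rate constants described above can be illustrated in miniature: for a single first-order decay C(t) = C0 * exp(-k t), a simplified stand-in for the paper's multi-reaction model, k falls out of a linear regression of ln C on t.

```python
import math

def fit_first_order_rate(times, concentrations):
    """Least-squares slope of ln C versus t; for C(t) = C0 * exp(-k t)
    the fitted slope is -k. (A one-reaction stand-in for the paper's
    multi-reaction scheme.)"""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) \
            / sum((t - tbar) ** 2 for t in times)
    return -slope

# Synthetic decay with k = 0.2 recovers the rate constant.
ts = list(range(10))
cs = [3.0 * math.exp(-0.2 * t) for t in ts]
k_hat = fit_first_order_rate(ts, cs)
```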
DECOMPOSITION OF MANUFACTURING PROCESSES: A REVIEW
Directory of Open Access Journals (Sweden)
N.M.Z.N. Mohamed
2012-06-01
Full Text Available Manufacturing is a global activity that started during the industrial revolution in the late 19th century to cater for the large-scale production of products. Since then, manufacturing has changed tremendously through innovations in technology, processes, materials, communication and transportation. The major challenge facing manufacturing is to produce more products using less material, less energy and less involvement of labour. To face these challenges, manufacturing companies must have a strategy and competitive priorities in order to compete in a dynamic market. A review of the literature on the decomposition of manufacturing processes outlines three main processes, namely: high volume, medium volume and low volume. The decomposition shows that each sub-process has its own characteristics and depends on the nature of the firm’s business. The two extreme processes are continuous line production (the fast extreme) and the project shop (the slow extreme). Other processes lie between these two extremes of the manufacturing spectrum. Process flow patterns become less complex with cellular, line and continuous flow compared with jobbing and project production. The review also indicates that when product variety is high and volume is low, project or functional production is applied.
Experimental study of trimethyl aluminum decomposition
Zhang, Zhi; Pan, Yang; Yang, Jiuzhong; Jiang, Zhiming; Fang, Haisheng
2017-09-01
Trimethyl aluminum (TMA) is an important precursor used for metal-organic chemical vapor deposition (MOCVD) of most Al-containing structures, in particular nitride structures. The reaction mechanism of TMA with ammonia is neither clear nor certain due to its complexity. Pyrolysis of the trimethyl metal is the start of a series of reactions and thus significantly affects the growth. Experimental study of TMA pyrolysis, however, has not yet been conducted in detail. In this paper, a reflectron time-of-flight mass spectrometer is adopted to measure TMA decomposition from room temperature to 800 °C in a special pyrolysis furnace, ionized by soft X-rays from synchrotron radiation. The results show that the generation of methyl, ethane and monomethyl aluminum (MMA) marks the start of the pyrolysis process. In the low temperature range from 25 °C to 700 °C, the main product is dimethyl aluminum (DMA) from the decomposition of TMA. Above 700 °C, the main products are MMA, DMA, methyl and ethane.
Gaussian Decomposition of Laser Altimeter Waveforms
Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan
1999-01-01
We develop a method to decompose a laser altimeter return waveform into its Gaussian components, assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
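The inflection-point bookkeeping in the initialization step can be sketched as follows. This is an illustrative reconstruction, not the LVIS processing code, and the smoothing step is omitted because the synthetic waveform is noise-free.

```python
import math

def gaussian(x, amp, mu, sigma):
    return amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def inflection_points(y):
    """Sample indices where the discrete second difference changes sign."""
    d2 = [y[i - 1] - 2 * y[i] + y[i + 1] for i in range(1, len(y) - 1)]
    # d2[j] is the curvature at sample j + 1
    return [j + 1 for j in range(1, len(d2)) if d2[j - 1] * d2[j] < 0]

def initial_estimates(y):
    """Each Gaussian contributes two inflection points, at mu -/+ sigma,
    so pair them up: position ~ midpoint, half-width ~ half the gap."""
    pts = inflection_points(y)
    return [((a + b) / 2.0, (b - a) / 2.0) for a, b in zip(pts[::2], pts[1::2])]

# Two well-separated components: the estimates recover both.
waveform = [gaussian(x, 1.0, 30.0, 4.0) + gaussian(x, 0.6, 70.0, 6.0)
            for x in range(100)]
components = initial_estimates(waveform)
```

These rough (position, half-width) pairs are exactly the kind of starting values a Levenberg-Marquardt refinement would then polish.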
Overlapping Community Detection based on Network Decomposition
Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin
2016-04-01
Community detection in complex networks has become a vital step in understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the relatively new link clustering methods have inherent drawbacks for discovering overlapping communities. Node clustering is inadequate to capture pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified using a node clustering technique. The network decomposition reduces computation time, and the elimination of noise links improves the quality of the obtained communities. Moreover, because we employ a node clustering technique rather than a link similarity measure to discover link communities, NDOCD avoids an ambiguous definition of community and is less time-consuming. We test our approach on both synthetic and real-world networks. The results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms.
Empirical Mode Decomposition and Hilbert Spectral Analysis
Huang, Norden E.
1998-01-01
The difficulty facing data analysis is the lack of methods to handle nonlinear and nonstationary time series. Traditional Fourier-based analyses simply cannot be applied here. A new method for analyzing nonlinear and nonstationary data has been developed. The key part is the Empirical Mode Decomposition (EMD) method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs) that serve as the basis of the representation of the data. This decomposition method is adaptive and, therefore, highly efficient. The IMFs admit well-behaved Hilbert transforms and yield instantaneous energy and frequency as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated the Hilbert Spectrum. Among the main conceptual innovations is the introduction of instantaneous frequencies for complicated data sets, which eliminates the need for spurious harmonics to represent nonlinear and nonstationary signals. Examples from the numerical results of classical nonlinear equation systems and from data representing natural phenomena are given to demonstrate the power of this new method. The classical nonlinear system data are especially interesting, for they serve to illustrate the roles played by nonlinear and nonstationary effects in the energy-frequency-time distribution.
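The sifting idea behind EMD can be caricatured in a few lines. The sketch below performs a single sifting pass with piecewise-linear envelopes (real EMD uses cubic-spline envelopes and iterates to a stopping criterion), which is enough to peel a fast oscillation off a slow trend.

```python
import math

def local_extrema(y):
    """Indices of interior local maxima and minima."""
    maxi = [i for i in range(1, len(y) - 1) if y[i - 1] < y[i] >= y[i + 1]]
    mini = [i for i in range(1, len(y) - 1) if y[i - 1] > y[i] <= y[i + 1]]
    return maxi, mini

def linear_envelope(y, knots):
    """Piecewise-linear curve through the given extrema (endpoints included)."""
    pts = [0] + knots + [len(y) - 1]
    env = [0.0] * len(y)
    for a, b in zip(pts, pts[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            env[i] = y[a] * (1 - t) + y[b] * t
    return env

def sift_once(y):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    maxi, mini = local_extrema(y)
    upper = linear_envelope(y, maxi)
    lower = linear_envelope(y, mini)
    return [yi - (u + l) / 2.0 for yi, u, l in zip(y, upper, lower)]

# A fast sine riding on a slow linear trend: one pass recovers the sine
# away from the endpoints, where the crude envelopes are unreliable.
signal = [math.sin(0.5 * i) + 0.05 * i for i in range(100)]
imf1 = sift_once(signal)
```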
Minimax eigenvector decomposition for data hiding
Davidson, Jennifer
2005-09-01
Steganography is the study of hiding information within a covert channel in order to transmit a secret message. Any public media such as image data, audio data, or even file packets, can be used as a covert channel. This paper presents an embedding algorithm that hides a message in an image using a technique based on a nonlinear matrix transform called the minimax eigenvector decomposition (MED). The MED is a minimax algebra version of the well-known singular value decomposition (SVD). Minimax algebra is a matrix algebra based on the algebraic operations of maximum and addition, developed initially for use in operations research and extended later to represent a class of nonlinear image processing operations. The discrete mathematical morphology operations of dilation and erosion, for example, are contained within minimax algebra. The MED is much quicker to compute than the SVD and avoids the numerical issues of the SVD because the operations involve only integer addition, subtraction, and comparison. We present the algorithm to embed data using the MED, show examples applied to image data, and discuss limitations and advantages as compared with another similar algorithm.
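The algebra underlying the MED can be made concrete. In minimax (max-plus) algebra the usual multiply-accumulate becomes add-and-maximize; the sketch below implements the max-plus matrix product, of which grayscale morphological dilation is a special case. This is a generic illustration of minimax algebra, not the MED algorithm itself.

```python
def maxplus_matmul(A, B):
    """Max-plus product: C[i][j] = max_k (A[i][k] + B[k][j]).
    Only addition and comparison are used, which is why minimax-algebra
    transforms avoid the floating-point issues of the SVD."""
    n, p = len(A), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(len(B))) for j in range(p)]
            for i in range(n)]

C = maxplus_matmul([[0, 1], [2, 3]], [[0, 4], [1, 0]])
```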
Elastic and acoustic wavefield decompositions and application to reverse time migrations
Wang, Wenlong
P- and S-waves coexist in elastic wavefields, and separation between them is an essential step in elastic reverse-time migrations (RTMs). Unlike the traditional separation methods that use curl and divergence operators, which do not preserve the wavefield vector component information, we propose and compare two vector decomposition methods, which preserve the same vector components that exist in the input elastic wavefield. The amplitude and phase information is automatically preserved, so no amplitude or phase corrections are required. The decoupled propagation method is extended from elastic to viscoelastic wavefields. To use the decomposed P and S vector wavefields and generate PP and PS images, we create a new 2D migration context for isotropic, elastic RTM which includes PS vector decomposition; the propagation directions of both incident and reflected P- and S-waves are calculated directly from the stress and particle velocity definitions of the decomposed P- and S-wave Poynting vectors. Then an excitation-amplitude image condition that scales the receiver wavelet by the source vector magnitude produces angle-dependent images of PP and PS reflection coefficients with the correct polarities, polarization, and amplitudes. It thus simplifies the process of obtaining PP and PS angle-domain common-image gathers (ADCIGs); it is less effort to generate ADCIGs from vector data than from scalar data. Besides P- and S-waves decomposition, separations of up- and down-going waves are also a part of processing of multi-component recorded data and propagating wavefields. A complex trace based up/down separation approach is extended from acoustic to elastic, and combined with P- and S-wave decomposition by decoupled propagation. This eliminates the need for a Fourier transform over time, thereby significantly reducing the storage cost and improving computational efficiency. Wavefield decomposition is applied to both synthetic elastic VSP data and propagating wavefield
CPUF - a chemical-structure-based polyurethane foam decomposition and foam response model.
Energy Technology Data Exchange (ETDEWEB)
Fletcher, Thomas H. (Brigham Young University, Provo, UT); Thompson, Kyle Richard; Erickson, Kenneth L.; Dowding, Kevin J.; Clayton, Daniel (Brigham Young University, Provo, UT); Chu, Tze Yao; Hobbs, Michael L.; Borek, Theodore Thaddeus III
2003-07-01
A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e. vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fractions within the individual finite element decrease below a set criterion. Element removal, referred to as "element death," creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the
Pulliam, G. R.; Ross, W. E.; MacNeal, B.; Bailey, R. F.
1982-03-01
Large, thin-film single domain areas have been observed, in the absence of a bias field, in garnets with magnetization perpendicular to the film plane [1,2]. The domain stability in the work by Krumme [1] was attributed to a combination of low saturation magnetization and a low Curie temperature. Uchishiba [2] relates the stability in his double layer system to appropriate anisotropy fields in one layer compared to the magnetization in the other layer. A more complete model for large domain stability in a bias field free environment is given in this work. Three distinct stability regimes are predicted by the model and all have been observed experimentally. Areas 3.5 cm in diameter have been made into stable single domains. This was achieved in a material showing a zero bias strip width of 4.5 μm. The single domain diameter was, therefore, 7500 times the equilibrium energy domain width. The technique developed and the model have led to a new means for observing magnetic defects. More importantly, it also offers a means for measuring the strength of the defects. Possible applications of the model are also discussed.
Directory of Open Access Journals (Sweden)
Hua-Qing Wang
2014-01-01
Full Text Available Vibration signals of rolling element bearing faults are usually immersed in background noise, which makes the faults difficult to detect. Commonly used wavelet-based methods can reduce some types of noise, but there is still plenty of room for improvement due to the insufficient sparseness of vibration signals in the wavelet domain. In this work, in order to eliminate noise and enhance weak fault detection, a new peak-based approach combining multiscale decomposition and envelope demodulation is developed. First, to preserve effective middle-low frequency signals while making high frequency noise more significant, a peak-based piecewise recombination is used to convert middle frequency components into low frequency ones. The newly generated signal is smoother and therefore has a sparser representation in the wavelet domain. Then a noise threshold is applied after wavelet multiscale decomposition, followed by the inverse wavelet transform and the backward peak-based piecewise transform. Finally, the amplitude of the fault characteristic frequency is enhanced by means of envelope demodulation. The effectiveness of the proposed method is validated by rolling bearing fault experiments. Compared with traditional wavelet-based analysis, experimental results show that fault features can be enhanced significantly and detected easily by the proposed method.
Domains in Ferroelectric Nanostructures
Gregg, Marty
2010-03-01
Ferroelectric materials have great potential in influencing the future of small scale electronics. At a basic level, this is because ferroelectric surfaces are charged, and so interact strongly with charge-carrying metals and semiconductors - the building blocks for all electronic systems. Since the electrical polarity of the ferroelectric can be reversed, surfaces can both attract and repel charges in nearby materials, and can thereby exert complete control over both charge distribution and movement. It should be no surprise, therefore, that microelectronics industries have already looked very seriously at harnessing ferroelectric materials in a variety of applications, from solid state memory chips (FeRAMs) to field effect transistors (FeFETs). In all such applications, switching the direction of the polarity of the ferroelectric is a key aspect of functional behavior. The mechanism for switching involves the field-induced nucleation and growth of domains. Domain coarsening, through domain wall propagation, eventually causes the entire ferroelectric to switch its polar direction. It is thus the existence and behavior of domains that determine the switching response, and ultimately the performance of the ferroelectric device. A major issue, associated with the integration of ferroelectrics into microelectronic devices, has been that the fundamental properties associated with ferroelectrics, when in bulk form, appear to change quite dramatically and unpredictably when at the nanoscale: new modes of behaviour, and different functional characteristics from those seen in bulk appear. For domains, in particular, the proximity of surfaces and boundaries have a dramatic effect: surface tension and depolarizing fields both serve to increase the equilibrium density of domains, such that minor changes in scale or morphology can have major ramifications for domain redistribution. Given the importance of domains in dictating the overall switching characteristics of a device
Thermodynamic anomaly in magnesium hydroxide decomposition
Energy Technology Data Exchange (ETDEWEB)
Reis, T.A.
1983-08-01
The origin of the discrepancy in the equilibrium water vapor pressure measurements for the reaction Mg(OH)2(s) = MgO(s) + H2O(g), when determined by Knudsen effusion and by static manometry at the same temperature, was investigated. For this reaction undergoing continuous thermal decomposition in Knudsen cells, Kay and Gregory observed that, by extrapolating the steady-state apparent equilibrium vapor pressure measurements to zero orifice, the vapor pressure was approximately 10^-4 of that previously established by Giauque and Archibald as the true thermodynamic equilibrium vapor pressure using statistical mechanical calculations for the entropy of water vapor. This large difference in vapor pressures suggests the possibility of the formation in a Knudsen cell of a higher-energy MgO that is thermodynamically metastable by about 48 kJ/mol. It has been shown here that the experimental results are qualitatively independent of the type of Mg(OH)2 used as a starting material, which confirms the inferences of Kay and Gregory. Thus, most forms of Mg(OH)2 are considered to be the stable thermodynamic equilibrium form. X-ray diffraction results show that during the course of the reaction only the equilibrium NaCl-type MgO is formed, and no different phases result from samples prepared in Knudsen cells. Surface area data indicate that the MgO molar surface area remains constant throughout the course of the reaction at low decomposition temperatures, and no significant annealing occurs below 400 °C. Scanning electron microscope photographs show no change in particle size or particle surface morphology. Solution calorimetric measurements indicate no inherent higher energy content in the MgO from the solid produced in Knudsen cells. The Knudsen cell vapor pressure discrepancy may reflect the formation of a transient metastable MgO or Mg(OH)2-MgO solid solution during continuous thermal decomposition in Knudsen cells.
Litter evenness influences short-term peatland decomposition processes.
Ward, Susan E; Ostle, Nick J; McNamara, Niall P; Bardgett, Richard D
2010-10-01
There is concern that changes in climate and land use could increase rates of decomposition in peatlands, leading to release of stored C to the atmosphere. Rates of decomposition are driven by abiotic factors such as temperature and moisture, but also by biotic factors such as changes in litter quality resulting from vegetation change. While effects of litter species identity and diversity on decomposition processes are well studied, the impact of changes in relative abundance (evenness) of species has received less attention. In this study we investigated effects of changes in short-term peatland plant species evenness on decomposition in mixed litter assemblages, measured as litter weight loss, respired CO2 and leachate C and N. We found that over the 307-day incubation period, higher levels of species evenness increased rates of decomposition in mixed litters, measured as weight loss and leachate dissolved organic N. We also found that the identity of the dominant species influenced rates of decomposition, measured as weight loss, CO2 flux and leachate N. Greatest rates of decomposition were when the dwarf shrub Calluna vulgaris dominated litter mixtures, and lowest rates when the bryophyte Pleurozium schreberi dominated. Interactions between evenness and dominant species identity were also detected for litter weight loss and leachate N. In addition, positive non-additive effects of mixing litter were observed for litter weight loss. Our findings highlight the importance of changes in the evenness of plant community composition for short-term decomposition processes in UK peatlands.
An Approach to Operational Analysis: Doctrinal Task Decomposition
2016-08-04
SESSION, AUGUST 2-4, 2016, NOVI, MICHIGAN. AN APPROACH TO OPERATIONAL ANALYSIS: DOCTRINAL TASK DECOMPOSITION. Major Matthew A. Horning, U.S. ... Engineering and Technology Symposium (GVSETS). An Approach To Operational Analysis: Doctrinal Task Decomposition. UNCLASSIFIED: Distribution ... Any NCO from any branch, such as logistics, can describe Armor doctrine to the TRADOC standards. DOCTRINAL TASK ANALYSIS FRAMEWORK: This approach to ...
Assessment of three major decomposition techniques for sample ...
African Journals Online (AJOL)
Three main rock-decomposition techniques, microwave oven, open-beaker acid and basic fusion, were examined in an attempt to establish the most appropriate method for the decomposition of granite rocks for elemental analysis. Standard reference rock material NIM-SARM-I was dissolved using each of the digestion ...
Pitfalls in VAR based return decompositions: A clarification
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component...
Kinetics of the thermal decomposition of tetramethylsilane behind ...
Indian Academy of Sciences (India)
Thermal decomposition of tetramethylsilane (TMS) diluted in argon was studied behind the reflected shock waves in a single pulse shock tube (SPST) in the temperature range of 1058–1194 K. The major products formed in the decomposition are methane (CH4) and ethylene (C2H4); whereas ethane and propylene were ...
Rate of Decomposition of Leaflitter in an Age Series Gmelina ...
African Journals Online (AJOL)
The study was carried out to investigate the rate of decomposition of Gmelina arborea Roxb. leaf litter in an age series of Gmelina plantations in Shasa Forest Reserve, a Nigerian lowland forest. The rate of decomposition of Gmelina leaf litter was determined using the litter bag technique and mass balance analysis to quantify the ...
Litter fall and decomposition of mangrove species Avicennia marina ...
African Journals Online (AJOL)
Abstract—Litter fall and decomposition of mangrove leaves were compared across seasons, species (Avicennia marina and Rhizophora mucronata) and sites in southern Mozambique. Mangrove leaf litter fall and decomposition were estimated using small-mesh collecting baskets and litter bags, respectively, in 2006 and ...
Effect of hydrofluoric acid on acid decomposition mixtures for ...
African Journals Online (AJOL)
Effect of hydrofluoric acid on acid decomposition mixtures for determining iron and other metallic elements in green vegetables. ... Therefore, the inclusion of HF in the acid decomposition mixtures would ensure total and precise estimation of Fe in plant materials, but is not critical for the analysis of Mn, Mg, Cu, Zn and Ca.
Organic fertilizer decomposition and nutrient loads in water reservoir ...
African Journals Online (AJOL)
Decomposition in aquatic ecosystems is controlled by various factors. The study investigated the trend of decomposition and the potential nutrients loaded into reservoir water. Analyses of water samples and organic fertilizer composition followed APHA (1995) and Klute (1986), respectively. Reservoir water ...
Improved beamforming performance using pulsed plane wave decomposition
DEFF Research Database (Denmark)
Munk, Peter; Jensen, Jørgen Arendt
2000-01-01
A tool for calculating the beamformer setup associated with a specified pulsed acoustic field is presented. The method is named Pulsed Plane Wave Decomposition (PPWD) and is based on the decomposition of a pulsed acoustic field into a set of PPWs at a given depth. Each PPW can be propagated to th...
Decomposition characteristics of maize ( Zea mays . L.) straw with ...
African Journals Online (AJOL)
Decomposition of maize straw incorporated into soil with various nitrogen amended carbon to nitrogen (C/N) ratios under a range of moisture was studied through a laboratory incubation trial. The experiment was set up to simulate the most suitable C/N ratio for straw carbon (C) decomposition and sequestering in the soil.
Consequences of biodiversity loss for litter decomposition across biomes
Handa, I.T.; Aerts, R.; Berendse, F.; Berg, M.P.; Butenschoen, O.; Bruder, A.; Chauvet, E.; Gessner, M.O.; Jabiol, J.; Makkonen, M.; McKie, B.G.; Malmqvist, B.; Peeters, E.T.H.M.; Scheu, S.; Schmid, B.; Ruijven, van J.; Vos, V.C.A.; Hattenschwiler, S.
2014-01-01
The decomposition of dead organic matter is a major determinant of carbon and nutrient cycling in ecosystems, and of carbon fluxes between the biosphere and the atmosphere [1–3]. Decomposition is driven by a vast diversity of organisms that are structured in complex food webs [2, 4]. Identifying the
DEFF Research Database (Denmark)
Nygaard, Mikkel
Concurrent computation can be given an abstract mathematical treatment very similar to that provided for sequential computation by the domain theory and denotational semantics of Scott and Strachey. A simple domain theory for concurrency is presented, based on a categorical model of linear logic. One language, called HOPLA (Higher-Order Process LAnguage), derives from an exponential of linear logic; it can be viewed as an extension of the simply-typed lambda calculus with CCS-like nondeterministic sum and prefix operations, in which types express the form of computation paths. The thesis points towards more expressive languages than HOPLA and Affine HOPLA, in particular concerning extensions to cover independence models, and concludes with a discussion of related work towards a fully fledged domain theory for concurrency.
Generalized Benders’ Decomposition for topology optimization problems
DEFF Research Database (Denmark)
Munoz Queupumil, Eduardo Javier; Stolpe, Mathias
2011-01-01
This article considers the non-linear mixed 0–1 optimization problems that appear in topology optimization of load-carrying structures. The main objective is to present a Generalized Benders’ Decomposition (GBD) method for solving single and multiple load minimum compliance (maximum stiffness) problems with discrete design variables to global optimality. We present the theoretical aspects of the method, including a proof of finite convergence and conditions for obtaining global optimal solutions. The method is also linked to, and compared with, an Outer-Approximation approach and a mixed 0–1 semidefinite programming formulation of the considered problem. Several ways to accelerate the method are suggested and an implementation is described. Finally, a set of truss topology optimization problems are numerically solved to global optimality.
Hydrogen peroxide decomposition kinetics in aquaculture water
DEFF Research Database (Denmark)
Arvin, Erik; Pedersen, Lars-Flemming
2015-01-01
Hydrogen peroxide (HP) is used in aquaculture systems where preventive or curative water treatments occasionally are required. Use of chemical agents can be challenging in recirculating aquaculture systems (RAS) due to extended water retention time and because the agents must not damage the fish reared or the nitrifying bacteria in the biofilters at the concentrations required to eliminate pathogens. This calls for quantitative insight into the fate of the disinfectant residuals during water treatment. This paper presents a kinetic model that describes the HP decomposition in aquaculture water in RAS by addressing disinfection demand, and identifies efficient and safe water treatment routines.
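Kinetic models of this kind are often built around first-order decay; the sketch below assumes that simplification (the paper's fitted rate law, which accounts for disinfection demand, may be more elaborate) and uses a purely hypothetical rate constant:

```python
import math

def hp_concentration(c0_mg_l, k_per_h, t_h):
    """First-order decay C(t) = C0 * exp(-k t).
    A simplifying assumption, not necessarily the rate law fitted in the paper."""
    return c0_mg_l * math.exp(-k_per_h * t_h)

# Half-life for a hypothetical rate constant of 0.1 per hour:
k = 0.1
t_half = math.log(2) / k
print(round(t_half, 2))  # 6.93 hours
```

Under first-order kinetics the half-life is independent of the initial dose, which is why a single rate constant characterizes the decay.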
On Double-Star Decomposition of Graphs
Directory of Open Access Journals (Sweden)
Akbari Saieed
2017-08-01
A tree containing exactly two non-pendant vertices is called a double-star. A double-star with degree sequence (k1 + 1, k2 + 1, 1, . . . , 1) is denoted by Sk1,k2. We study the edge-decomposition of graphs into double-stars. It was proved that every double-star of size k decomposes every 2k-regular graph. In this paper, we extend this result by showing that every graph in which every vertex has degree 2k + 1 or 2k + 2 and which contains a 2-factor can be decomposed into Sk1,k2 and Sk1−1,k2, for all positive integers k1 and k2 such that k1 + k2 = k.
Generalized decomposition methods for singular oscillators
Energy Technology Data Exchange (ETDEWEB)
Ramos, J.I. [Room I-320-D, E. T. S. Ingenieros Industriales, Universidad de Malaga, Plaza El Ejido, s/n 29013 Malaga (Spain)], E-mail: jirs@lcc.uma.es
2009-10-30
Generalized decomposition methods based on a Volterra integral equation, the introduction of an ordering parameter and a power series expansion of the solution in terms of the ordering parameter are developed and used to determine the solution and the frequency of oscillation of a singular, nonlinear oscillator with an odd nonlinearity. It is shown that these techniques provide solutions which are free from secularities if the unknown frequency of oscillation is also expanded in a power series of the ordering parameter, require that the nonlinearities be analytic functions of their arguments, and, at leading order, provide the same frequency of oscillation as two-level iterative techniques, the homotopy perturbation method (if the constants that appear in the governing equation are expanded in power series of the ordering parameter), and modified artificial parameter Lindstedt–Poincaré procedures.
Decomposition of time-resolved tomographic PIV
Energy Technology Data Exchange (ETDEWEB)
Schmid, Peter J. [Ecole Polytechnique, Laboratoire d' Hydrodynamique (LadHyX), Palaiseau (France); Violato, Daniele; Scarano, Fulvio [Delft University of Technology, Department of Aerospace Engineering, Delft (Netherlands)
2012-06-15
An experimental study has been conducted on a transitional water jet at a Reynolds number of Re = 5,000. Flow fields have been obtained by means of time-resolved tomographic particle image velocimetry capturing all relevant spatial and temporal scales. The measured three-dimensional flow fields have then been postprocessed by the dynamic mode decomposition which identifies coherent structures that contribute significantly to the dynamics of the jet. Both temporal and spatial analyses have been performed. Where the jet exhibits a primary axisymmetric instability followed by a pairing of the vortex rings, dominant dynamic modes have been extracted together with their amplitude distribution. These modes represent a basis for the low-dimensional description of the dominant flow features. (orig.)
Biclustering via Sparse Singular Value Decomposition
Lee, Mihee
2010-02-16
Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard-structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed on the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets. © 2010, The International Biometric Society.
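The alternating, penalized regressions described above can be caricatured as soft-thresholded power iterations. The following is a rank-1 sketch of that idea with fixed penalties; the paper's penalty-parameter selection is omitted, and `rank1_ssvd` and the toy matrix are illustrative, not the authors' code:

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def rank1_ssvd(X, lam_u=0.1, lam_v=0.1, iters=50):
    """Rank-1 sparse SVD via alternating soft-thresholded power iterations."""
    u = np.linalg.svd(X, full_matrices=False)[0][:, 0]  # warm start from SVD
    v = np.zeros(X.shape[1])
    for _ in range(iters):
        v = soft(X.T @ u, lam_v)
        v /= np.linalg.norm(v) or 1.0
        u = soft(X @ v, lam_u)
        u /= np.linalg.norm(u) or 1.0
    return u, u @ X @ v, v

# A matrix with one 3x3 block of 5s yields sparse singular vectors
# supported on exactly that block (a "checkerboard" cell).
X = np.zeros((6, 6))
X[:3, :3] = 5.0
u, s, v = rank1_ssvd(X)
```

On this toy input, entries of `u` and `v` outside the block are driven exactly to zero by the thresholding step.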
Image compression using singular value decomposition
Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.
2017-11-01
We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so data compression techniques are often applied to reduce the storage space consumed by the image. One approach is to apply singular value decomposition (SVD) to the image matrix. SVD factors the digital image into three matrices; the largest singular values are retained to reconstruct the image, so that at the end of the process the image is represented with a smaller set of values, reducing the required storage space. The goal is to achieve compression while preserving the important features that describe the original image. SVD can be applied to any m × n matrix, square or rectangular, invertible or not. Compression ratio and mean square error are used as performance metrics.
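A minimal numpy sketch of the truncated-SVD compression described above (function and variable names are illustrative):

```python
import numpy as np

def compress_svd(image, k):
    """Rank-k approximation of a 2D image matrix via truncated SVD.
    Stores k*(m + n + 1) numbers instead of m*n."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    approx = U[:, :k] * s[:k] @ Vt[:k, :]   # keep the k largest singular triplets
    m, n = image.shape
    ratio = (m * n) / (k * (m + n + 1))     # nominal compression ratio
    return approx, ratio

# A rank-1 test "image" is reproduced exactly by keeping one singular triplet.
img = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 7.0))
approx, ratio = compress_svd(img, 1)
```

For natural images, increasing k trades storage for reconstruction error; the mean square error is governed by the discarded singular values.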
Decomposition of spectra using maximum autocorrelation factors
DEFF Research Database (Denmark)
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low-dimensional representation of the variation present in a set of spectra, e.g. reflection spectra recorded from a series of objects. The resulting low-dimensional description may subsequently be input through variable selection schemes into classification ... In contrast to Fourier decomposition, these new variables are localised in frequency as well as in wavelength. The proposed algorithm is tested on 100 samples of NIR spectra of wheat.
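At the core of maximum autocorrelation factors is a generalized eigenproblem that trades total variance against the variance of shifted differences. The sketch below, using a circular shift along the channel (wavelength) axis, illustrates that idea only; it is not the paper's algorithm, and `maf` is an illustrative name:

```python
import numpy as np

def maf(X, shift=1):
    """Maximum-autocorrelation-factor style transform (illustrative sketch).
    X: (n_samples, n_channels). Solves Sd w = lambda S w, where S is the
    total covariance and Sd the covariance of circularly shifted differences;
    factors with small lambda vary smoothly along the channel axis."""
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / (len(Xc) - 1)             # total covariance
    D = Xc - np.roll(Xc, shift, axis=1)       # circular-shift differences
    Sd = D.T @ D / (len(D) - 1)               # difference covariance
    vals, vecs = np.linalg.eig(np.linalg.pinv(S) @ Sd)
    order = np.argsort(vals.real)             # most autocorrelated factor first
    return Xc @ vecs.real[:, order]

rng = np.random.default_rng(0)
scores = maf(rng.normal(size=(40, 8)))
```

The transform, like PCA, returns one score per sample per factor, but ordered by smoothness along wavelength rather than by variance.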
Heterogeneous Thermochemical Decomposition Under Direct Irradiation
Energy Technology Data Exchange (ETDEWEB)
Lipinski, W.; Steinfeld, A. [PSI and ETH Zuerich(Switzerland)
2005-03-01
Radiative heat transfer in a chemically reacting system directly exposed to an external source of high-flux radiation is considered. The endothermic decomposition of CaCO{sub 3}(s) into CaO(s) and CO{sub 2}(g) is selected as the model heterogeneous reaction. Experimentation using an Ar arc as the radiation source was carried out in which powder samples were subjected to radiative power fluxes in the range 400-930 kW/m{sup 2}. A 3D transient heat transfer model that links conduction-convection-radiation heat transfer to the chemical kinetics is formulated using wavelength- and chemical-composition-dependent material properties. Monte Carlo ray tracing and the Rosseland diffusion approximation are employed to obtain the radiative transport. The unsteady energy equation is solved by a finite volume technique. The model is validated by comparing the computed reaction extent variation with time to the experimentally measured values. (author)
Sparsity-promoting dynamic mode decomposition
Jovanović, Mihailo R.; Schmid, Peter J.; Nichols, Joseph W.
2014-02-01
Dynamic mode decomposition (DMD) represents an effective means for capturing the essential features of numerically or experimentally generated flow fields. In order to achieve a desirable tradeoff between the quality of approximation and the number of modes that are used to approximate the given fields, we develop a sparsity-promoting variant of the standard DMD algorithm. Sparsity is induced by regularizing the least-squares deviation between the matrix of snapshots and the linear combination of DMD modes with an additional term that penalizes the ℓ1-norm of the vector of DMD amplitudes. The globally optimal solution of the resulting regularized convex optimization problem is computed using the alternating direction method of multipliers, an algorithm well-suited for large problems. Several examples of flow fields resulting from numerical simulations and physical experiments are used to illustrate the effectiveness of the developed method.
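The sparsity-promoting ADMM step operates on top of a standard (exact) DMD. The underlying decomposition can be sketched as follows; the ℓ1-regularized amplitude selection itself is omitted, and `dmd` is an illustrative name:

```python
import numpy as np

def dmd(X, Y, r):
    """Rank-r DMD: approximate Y ≈ A X for snapshot matrices whose columns
    are states, with Y one time step ahead of X; returns the eigenvalues of
    the reduced operator and the corresponding modes (up to scaling)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    A_tilde = U.conj().T @ Y @ Vt.conj().T / s   # operator projected on POD modes
    evals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vt.conj().T / s @ W
    return evals, modes

# Recover the dynamics of a known linear system x_{k+1} = A x_k:
A = np.diag([0.9, 0.5])
snaps = [np.array([1.0, 1.0])]
for _ in range(10):
    snaps.append(A @ snaps[-1])
X = np.column_stack(snaps[:-1])
Y = np.column_stack(snaps[1:])
evals, modes = dmd(X, Y, 2)
```

For this synthetic system the DMD eigenvalues coincide with the eigenvalues 0.9 and 0.5 of the true operator; the sparsity-promoting variant would then retain only the modes whose amplitudes survive the ℓ1 penalty.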
Thermal decompositions of heavy lanthanide aconitates
Energy Technology Data Exchange (ETDEWEB)
Brzyska, W.; Ozga, W. (Uniwersytet Marii Curie-Sklodowskiej, Lublin (Poland))
The conditions of thermal decomposition of Tb(III), Dy, Ho, Er, Tm, Yb and Lu aconitates have been studied. On heating, the aconitates of heavy lanthanides lose crystallization water to yield anhydrous salts, which are then transformed into oxides. The aconitate of Tb(III) decomposes in two stages. First, the complex undergoes dehydration to form the anhydrous salt, which next decomposes directly to Tb{sub 4}O{sub 7}. The aconitates of Dy, Ho, Er, Tm, Yb and Lu decompose in three stages. On heating, the hydrated complexes lose crystallization water, yielding the anhydrous complexes; these subsequently decompose to Ln{sub 2}O{sub 3} with intermediate formation of Ln{sub 2}O{sub 2}CO{sub 3}.
Decomposition Analysis of Forest Ecosystem Services Values
Directory of Open Access Journals (Sweden)
Hidemichi Fujii
2017-04-01
Forest ecosystem services are fundamental for human life. To protect and increase forest ecosystem services, the driving factors underlying changes in forest ecosystem service values must be determined to properly implement forest resource management planning. This study examines the driving factors that affect changes in forest ecosystem service values by focusing on regional forest characteristics using a dataset of 47 prefectures in Japan for 2000, 2007, and 2012. We applied two approaches: a contingent valuation method for estimating the forest ecosystem service value per area and a decomposition analysis for identifying the main driving factors of changes in the value of forest ecosystem services. The results indicate that the value of forest ecosystem services has increased due to the expansion of forest area from 2000 to 2007. However, factors related to forest management and ecosystem service value per area have contributed to a decrease in the value of ecosystem services from 2000 to 2007 and from 2007 to 2012, respectively.
Decomposition of childhood malnutrition in Cambodia.
Sunil, Thankam S; Sagna, Marguerite
2015-10-01
Childhood malnutrition is a major problem in developing countries, and in Cambodia it is estimated that approximately 42% of children are stunted, which is considered very high. In the present study, we examined the effects of proximate and socio-economic determinants on childhood malnutrition in Cambodia. In addition, we examined the effects of changes in these proximate determinants on childhood malnutrition between 2000 and 2005. Our analytical approach included descriptive, logistic regression and decomposition analyses. Separate analyses are estimated for the 2000 and 2005 surveys. The primary component of the difference in stunting is attributable to the rates component, indicating that the decrease in stunting is due mainly to the decrease in stunting rates between 2000 and 2005. While the majority of the differences in childhood malnutrition between 2000 and 2005 can be attributed to differences in the distribution of malnutrition determinants, differences in their effects also showed some significance. © 2013 John Wiley & Sons Ltd.
Thermal Decomposition Chemistry of Amine Borane (U)
Energy Technology Data Exchange (ETDEWEB)
Stowe, A. C.; Feigerle, J.; Smyrl, N. R.; Morrell, J. S.
2010-01-29
The conclusions of this presentation are: (1) Amine boranes can potentially be used as a vehicular hydrogen storage material. (2) Purity of the hydrogen stream is critical for use with a fuel cell. Pure H{sub 2} can be provided by carefully conditioning the fuel (encapsulation, drying, heating rate, impurities). (3) Thermodynamics and kinetics can be controlled by conditioning as well. (4) Regeneration of the spent amine borane fuel is still the greatest challenge to its potential use. (5) Addition of hydrocarbon-substituted amine boranes alters the chemistry dramatically. (6) Decomposition of the substituted amine borane mixed system favors reaction products from which the hydrogenated fuel is potentially easier to regenerate. (7) t-Butylamine borane is not the best substituted amine borane to use since it releases isobutane; however, formation of CNBH{sub x} products does occur.
Block term decomposition for modelling epileptic seizures
Hunyadi, Borbála; Camps, Daan; Sorber, Laurent; Paesschen, Wim Van; Vos, Maarten De; Huffel, Sabine Van; Lathauwer, Lieven De
2014-12-01
Recordings of neural activity, such as EEG, are an inherent mixture of different ongoing brain processes as well as artefacts, and are typically characterised by a low signal-to-noise ratio. Moreover, EEG datasets are often inherently multidimensional, comprising information in time, along different channels, subjects, trials, etc. Additional information may be conveyed by expanding the signal into even more dimensions, e.g. incorporating spectral features by applying a wavelet transform. The underlying sources might show differences in each of these modes. Therefore, tensor-based blind source separation techniques, which can extract the sources of interest from such multiway arrays while simultaneously exploiting the signal characteristics in all dimensions, have gained increasing interest. Canonical polyadic decomposition (CPD) has been successfully used to extract epileptic seizure activity from wavelet-transformed EEG data (Bioinformatics 23(13):i10-i18, 2007; NeuroImage 37:844-854, 2007), where each source is described by a rank-1 tensor, i.e. by the combination of one particular temporal, spectral and spatial signature. However, in certain scenarios, where the seizure pattern is nonstationary, such a trilinear signal model is insufficient. Here, we present the application of a recently introduced technique, called block term decomposition (BTD), to separate EEG tensors into rank-(Lr, Lr, 1) terms, allowing more variability in the data to be modelled than would be possible with CPD. In a simulation study, we investigate the robustness of BTD against noise and different choices of model parameters. Furthermore, we show various real EEG recordings where BTD outperforms CPD in capturing complex seizure characteristics.
Thermal Decomposition of Radiation-Damaged Polystyrene
Energy Technology Data Exchange (ETDEWEB)
Abrefah, John; Klinger, George S.
2000-09-26
The radiation-damaged polystyrene material ('polycube') used in this study was synthesized by mixing a high-density polystyrene ('Dylene Fines No. 100') with plutonium and uranium oxides. The polycubes were used on the Hanford Site in the 1960s for criticality studies to determine the hydrogen-to-fissile atom ratios for neutron moderation during processing of spent nuclear fuel. Upon completion of the studies, two methods were developed to reclaim the transuranic (TRU) oxides from the polymer matrix: (1) burning the polycubes in air at 873 K; and (2) heating the polycubes in the absence of oxygen and scrubbing the released monomer and other volatile organics using carbon tetrachloride. Neither of these methods was satisfactory in separating the TRU oxides from the polystyrene. Consequently, the remaining polycubes were sent to the Hanford Plutonium Finishing Plant (PFP) for storage. Over time, the high dose of alpha and gamma radiation has resulted in a polystyrene matrix that is highly cross-linked and hydrogen deficient, and a stabilization process is being developed in support of Defense Nuclear Facility Safety Board Recommendation 94-1. Baseline processes involve thermal treatment to pyrolyze the polycubes in a furnace to decompose the polystyrene and separate out the TRU oxides. Thermal decomposition products from this degraded polystyrene matrix were characterized by Pacific Northwest National Laboratory to provide information for determining the environmental impact of the process and for optimizing the process parameters. A gas chromatography/mass spectrometry (GC/MS) system coupled to a horizontal tube furnace was used for the characterization studies. The decomposition studies were performed both in air and helium atmospheres at 773 K, the planned processing temperature. The volatile and semi-volatile organic products identified for the radiation-damaged polystyrene were different from those observed for virgin polystyrene.
Thermal Decomposition of Radiation-Damaged Polystyrene
Energy Technology Data Exchange (ETDEWEB)
Abrefah, John; Klinger, George S.
2000-09-26
The radiation-damaged polystyrene (given the identification name 'polycube') was fabricated by mixing high-density polystyrene material ('Dylene Fines # 100') with plutonium and uranium oxides. The polycubes were used in the 1960s for criticality studies during processing of spent nuclear fuel. The polycubes have since been stored for almost 40 years at the Hanford Plutonium Finishing Plant (PFP) after the failure of two processes to reclaim the plutonium and uranium oxides from the polystyrene matrix. Thermal decomposition products from this highly cross-linked polystyrene matrix were characterized using a gas chromatography/mass spectrometry (GC/MS) system coupled to a horizontal furnace. The decomposition studies were performed in air and helium atmospheres at about 773 K. The volatile and semi-volatile organic products for the radiation-damaged polystyrene were different from those of virgin polystyrene. The differences were in the number of organic species generated and their concentrations. In the inert (i.e., helium) atmosphere, the major volatile organic products identified (in order of decreasing concentration) were styrene, benzene, toluene, ethylbenzene, xylene, naphthalene, propane, α-methylbenzene, indene and 1,2,3-trimethylbenzene. In air, the major volatile organic species identified changed slightly. Concentrations of the organic species in the inert atmosphere were significantly higher than those for processing in air. Overall, 38 volatile organic species were identified in the inert atmosphere compared to 49 species in air. Twenty of the 38 species found under inert conditions were also products in the air atmosphere. Twenty-two oxidized organic products were identified during thermal processing in air.
Reasoning in incomplete domains
Energy Technology Data Exchange (ETDEWEB)
Rosenberg, S.
1979-01-01
Most real-world domains differ from the micro-worlds traditionally used in A.I. in that they have an incomplete factual data base which changes over time. Understanding in these domains can be thought of as the generation of plausible inferences which are able to use the facts available and respond to changes in them. A traditional rule interpreter such as Planner can be extended to construct plausible inferences in these domains by allowing assumptions to be made in applying rules, resulting in simplifications of rules which can be used in an incomplete data base, and by monitoring the antecedents and consequents of a rule so that inferences can be maintained over a changing data base.
Wang, Wenkang; Pan, Chong; Wang, Jinjun
2016-11-01
Turbulent boundary layers (TBL) are believed to contain a wide spectrum of coherent structures, from near-wall low-speed streaks characterized by the inner scale to log-layer large-scale and very-large-scale coherent motions (LSM and VLSM) characterized by the outer scale. Recent studies have evidenced the interaction between these multi-scale structures via either bottom-up or top-down mechanisms, which implies the possibility of identifying the coexistence of their footprints in the intermediate flow layers. Here, we propose a Quasi-Bivariate Variational Mode Decomposition method (QB-VMD), an update of the traditional Empirical Mode Decomposition (EMD) with bandwidth limitation, for the decomposition of PIV-measured 2D flow fields with a large ROI (Δx × Δz ≈ 4δ × 1.5δ) at specified wall-normal heights (y/δ = 0.05–0.2) of a turbulent boundary layer with Reτ = 3460. The empirical modes identified by QB-VMD capture well the characteristics of log-layer LSMs as well as those of near-wall streak-like structures. The lateral scales of these structures are analyzed and their respective energy contributions are evaluated. Supported by both the National Natural Science Foundation of China (Grant Nos. 11372001 and 11490552) and the Fundamental Research Funds for the Central Universities of China (No. YWF-16-JCTD-A-05).
Nejadgholi, Isar; Caytak, Herschel; Bolic, Miodrag; Batkin, Izmail; Shirmohammadi, Shervin
2015-05-01
In several applications of bioimpedance spectroscopy, the measured spectrum is parameterized by being fitted to the Cole equation. However, the extracted Cole parameters seem to be inconsistent from one measurement session to another, which leads to a high standard deviation of the extracted parameters. This inconsistency is modeled with a source of random variations added to the voltage measurement carried out in the time domain. These random variations may originate from biological variations that are irrelevant to the evidence that we are investigating. Yet, they affect the voltage measured by a bioimpedance device, based on which the magnitude and phase of impedance are calculated. By means of simulated data, we showed that Cole parameters are highly affected by this type of variation. We further showed that singular value decomposition (SVD) is an effective tool for parameterizing bioimpedance measurements, which results in more consistent parameters than Cole parameters. We propose to apply SVD as a preprocessing method to reconstruct denoised bioimpedance measurements. In order to evaluate the method, we calculated the relative difference between parameters extracted from noisy and clean simulated bioimpedance spectra. Both the mean and standard deviation of this relative difference are shown to decrease effectively when Cole parameters are extracted from preprocessed data rather than from raw measurements. We evaluated the performance of the proposed method in distinguishing three arm positions, for a set of experiments including eight subjects. It is shown that Cole parameters of different positions are not distinguishable when extracted from raw measurements. However, one arm position can be distinguished based on SVD scores. Moreover, all three positions are shown to be distinguished by two parameters, R0/R∞ and Fc, when Cole parameters are extracted from preprocessed measurements. These results suggest that SVD could be considered as an
Data Structure and Parallel Decomposition Considerations on a Fibonacci Grid
Michalakes, John; Purser,James; Swinbank, Richard
1999-01-01
The Fibonacci grid, proposed by Swinbank and Purser (see companion abstract), provides attractive properties for global numerical atmospheric prediction by offering an optimally homogeneous, geometrically regular, and approximately isotropic discretization, with only the polar regions requiring special numerical treatment. It is a mathematical idealization, applied to the sphere, of the multi-spiral patterns often found in botanical structures, such as pine cones and sunflower heads. Computationally, it is natural to organize the domain into zones, in each of which the same pair, or triple, of "Fibonacci spirals" dominate. But the further subdivision of such zones into "tiles" of a shape and size suitable for distribution to the processors of a massively parallel computer requires very careful consideration if the subsequent spatial computations along the respective spirals, especially those (such as compact differencing schemes) that involve recursion, are to be implemented in an efficient, load-balanced manner without requiring excessive amounts of inter-processor communication. In this paper we show how certain number-theoretic properties of the Fibonacci sequence (whose numbers prescribe the multiplicity of successive spirals) may be exploited in the decomposition of grid zones into tidy arrangements of triangular grid tiles, each tile possessing one side approximately parallel to the constant-latitude zone boundary. We also describe how the spatially recursive processes may be decomposed across such a tiling, and the directionality of the recursions reversed on alternate grid lines, to ensure a very high degree of load balancing throughout the execution of the computations required for one time step of a global model.
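The botanical spiral pattern alluded to above can be generated with a golden-angle lattice. This sketch produces such a point set on the unit sphere; it illustrates the lattice idea only, not the operational grid or its parallel decomposition:

```python
import math

def fibonacci_sphere(n):
    """n roughly equal-area points on the unit sphere along a golden-angle
    (Fibonacci) spiral. Successive points advance by 2*pi/phi in longitude
    while stepping through equal-area latitude bands."""
    golden = (1 + math.sqrt(5)) / 2
    pts = []
    for i in range(n):
        z = 1 - (2 * i + 1) / n            # equal-area latitude placement
        lon = 2 * math.pi * i / golden     # golden-angle longitude increment
        r = math.sqrt(1 - z * z)
        pts.append((r * math.cos(lon), r * math.sin(lon), z))
    return pts

pts = fibonacci_sphere(1000)
```

Because the golden ratio is the "most irrational" number, consecutive points never align into narrow longitude bands, which is what gives the lattice its near-uniform coverage.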
DEFF Research Database (Denmark)
Hessellund, Anders
the overall level of abstraction. It does, however, also introduce a new problem of coordinating multiple different languages in a single system. We call this problem the coordination problem. In this thesis, we present the coordination method for domain-specific multimodeling that explicitly targets...
Campanelli, L.; Cea, P.; Fogli, G. L.; Tedesco, L.
2003-01-01
In this paper we investigate Charged Domain Walls (CDWs), topological defects that acquire a surface charge density $Q$ induced by fermion states localized on the walls. The presence of electric and magnetic fields on the walls is also discussed. We find a relation in which the value of the surface charge density $Q$ is connected with the existence of such topological defects.
Oude Mulders, J.; Wadensjö, E.; Hasselhorn, H.M.; Apt, W.
This domain chapter is dedicated to summarizing research on the effects of labour market contextual factors on the labour market participation of older workers (aged 50+) and to identifying research gaps. While employment participation and the timing of (early) retirement is often modelled as an individual
An EMG Decomposition System Aimed at Detailed Analysis of Motor Unit Activity
DEFF Research Database (Denmark)
Nikolic, Mile; Krarup, Christian; Dahl, Kristian
1997-01-01
Decomposition of EMG signals by segmentation of time signals, clustering, and resolving of compound segments.
Decomposition of silicon carbide at high pressures and temperatures
Energy Technology Data Exchange (ETDEWEB)
Daviau, Kierstin; Lee, Kanani K. M.
2017-11-01
We measure the onset of decomposition of silicon carbide, SiC, to silicon and carbon (e.g., diamond) at high pressures and high temperatures in a laser-heated diamond-anvil cell. We identify decomposition through x-ray diffraction and multiwavelength imaging radiometry coupled with electron microscopy analyses on quenched samples. We find that B3 SiC (also known as 3C or zinc blende SiC) decomposes at high pressures and high temperatures, following a phase boundary with a negative slope. The high-pressure decomposition temperatures measured are considerably lower than those at ambient pressure, with our measurements indicating that SiC begins to decompose at ~2000 K at 60 GPa as compared to ~2800 K at ambient pressure. Once B3 SiC transitions to the high-pressure B1 (rocksalt) structure, we no longer observe decomposition, despite heating to temperatures in excess of ~3200 K. The temperature of decomposition and the nature of the decomposition phase boundary appear to be strongly influenced by the pressure-induced phase transitions to higher-density structures in SiC, silicon, and carbon. The decomposition of SiC at high pressure and temperature has implications for the stability of naturally forming moissanite on Earth and in carbon-rich exoplanets.
Decomposition of dioxin analogues and ablation study for carbon nanotube
Energy Technology Data Exchange (ETDEWEB)
Yamauchi, Toshihiko [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2002-08-01
Two application studies associated with the free electron laser are presented separately, under the titles 'Decomposition of Dioxin Analogues' and 'Ablation Study for Carbon Nanotube'. The decomposition of dioxin analogues by infrared (IR) laser irradiation includes thermal destruction and multiple-photon dissociation. It is important to choose a laser wavelength that is strongly absorbed in order to achieve decomposition. Thermal decomposition takes place under irradiation at low IR laser power. On the basis of a model of thermal decomposition, it is proposed that adjacent water molecules assist the decomposition of dioxin analogues in addition to the thermal decomposition caused by direct laser absorption. The laser ablation study is performed with the aim of carbon nanotube synthesis. The vapor produced by ablation is weakly ionized at powers of several hundred megawatts. The plasma internal energy is retained over 8.5 times longer in the enclosed gas than in vacuum. Clusters were produced from the weakly ionized gas in the enclosed gas: at low laser power they consist of coarse particles, whereas at high power they consist of fine particles. (J.P.N.)
Analysis of complex metabolic behavior through pathway decomposition
Directory of Open Access Journals (Sweden)
Ip Kuhn
2011-06-01
Full Text Available Abstract Background: Understanding complex systems through decomposition into simple interacting components is a pervasive paradigm throughout modern science and engineering. For cellular metabolism, complexity can be reduced by decomposition into pathways with particular biochemical functions, and the concept of elementary flux modes provides a systematic way of organizing metabolic networks into such pathways. While decomposition using elementary flux modes has proven to be a powerful tool for understanding and manipulating cellular metabolism, its utility is severely limited since the number of modes in a network increases exponentially with its size. Results: Here, we present a new method for decomposition of metabolic flux distributions into elementary flux modes. Our method can easily operate on large, genome-scale networks since it does not require all relevant modes of the metabolic network to be generated. We illustrate the utility of our method for metabolic engineering of Escherichia coli and for understanding the survival of Mycobacterium tuberculosis (MTB) during infection. Conclusions: Our method can achieve computational time improvements exceeding 2000-fold and requires only several seconds to generate elementary mode decompositions on genome-scale networks. These improvements arise from not having to generate all relevant elementary modes prior to initiating the decomposition. The decompositions from our method are useful for understanding complex flux distributions and debugging genome-scale models.
The framing of scientific domains
DEFF Research Database (Denmark)
Dam Christensen, Hans
2014-01-01
Purpose: By using the UNISIST models this article argues for the necessity of domain analysis in order to qualify scientific information seeking. The models support a better understanding of communication processes in a scientific domain and embrace the point that domains are always both unstable over time...... domains, and UNISIST helps understanding this navigation. Design/methodology/approach: The UNISIST models are tentatively applied to the domain of art history at three stages, respectively two modern, partially overlapping domains, as well as an outline of an art historical domain anno c. 1820...... as according to the agents that are charting them. As such, power in a Foucauldian sense is unavoidable in outlining a domain. Originality/value: 1. The UNISIST models are applied to the domain of art history; and 2. the article discusses the instability of a scientific domain as well as, at the same time...
Dynamics in the Decompositions Approach to Quantum Mechanics
Harding, John
2017-12-01
In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.
Integral decomposition and polarization properties of depolarizing Mueller matrices.
Ossikovski, Razvigor; Arteaga, Oriol
2015-03-15
We show that, by suitably defining the integral decomposition of a depolarizing Mueller matrix, it becomes possible to fully interpret the polarization response of the medium or structure under study in terms of mean values and variances-covariances of a set of six integral polarization properties. The latter appear as natural counterparts of the elementary (differential) polarization properties stemming from the differential decomposition of the Mueller matrix. However, unlike the differential decomposition, the integral one is always mathematically and physically realizable and is furthermore unambiguously defined inasmuch as a nondepolarizing estimate of the initial Mueller matrix is secured. The theoretical results are illustrated on an experimental example.
Seizure detection from EEG signals using Multivariate Empirical Mode Decomposition.
Zahra, Asmat; Kanwal, Nadia; Ur Rehman, Naveed; Ehsan, Shoaib; McDonald-Maier, Klaus D
2017-09-01
We present a data driven approach to classify ictal (epileptic seizure) and non-ictal EEG signals using the multivariate empirical mode decomposition (MEMD) algorithm. MEMD is a multivariate extension of empirical mode decomposition (EMD), which is an established method to perform the decomposition and time-frequency (T-F) analysis of non-stationary data sets. We select suitable feature sets based on the multiscale T-F representation of the EEG data via MEMD for the classification purposes. The classification is achieved using the artificial neural networks. The efficacy of the proposed method is verified on extensive publicly available EEG datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Seismic entangled patterns analyzed via multiresolution decomposition
Directory of Open Access Journals (Sweden)
F. E. A. Leite
2009-03-01
Full Text Available This article explores a method for distinguishing entangled coherent structures embedded in geophysical images. The original image is decomposed into a series of j-scale images using multiresolution decomposition. To improve the image-processing analysis, each j-image is divided into l spatial regions, generating a set of (j, l)-regions. At each (j, l)-region we apply a continuous wavelet transform to evaluate E_ν, the spectrum of energy. E_ν has two maxima in the original data, whereas at each scale E_ν typically has one peak. The localization of the peaks changes according to the (j, l)-region. The intensity of the peaks is linked with the presence of coherent structures, or patterns, at the respective (j, l)-region. The method is successfully applied to distinguish, in scale and region, the ground-roll noise from the relevant geologic information in the signal.
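The scale-by-scale energy analysis described in this abstract can be sketched with a simple orthonormal Haar multiresolution decomposition. This is a minimal stand-in for the authors' wavelet machinery, not their method; the test signal and number of levels are illustrative assumptions:

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar transform:
    # pairwise averages (approximation) and differences (detail).
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_multires(x, levels):
    # Decompose a signal into detail coefficients per scale
    # plus a coarse approximation (the analogue of the j-scale images).
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    return a, details

# Two superposed oscillations at well-separated scales (illustrative data).
t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.sin(2 * np.pi * 4 * t) + 0.3 * np.sin(2 * np.pi * 64 * t)
approx, details = haar_multires(signal, levels=4)

# Energy per scale: a peak indicates where a coherent structure lives.
energies = [float(np.sum(d ** 2)) for d in details]
```

Because the Haar transform is orthonormal, the total signal energy is exactly partitioned across the detail scales and the coarse approximation, which is what makes per-scale energy a meaningful indicator of structure.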
Dissociative Ionization and Thermal Decomposition of Cyclopentanone.
Pastoors, Johan I M; Bodi, Andras; Hemberger, Patrick; Bouwman, Jordy
2017-09-21
Despite the growing use of renewable and sustainable biofuels in transportation, their combustion chemistry is poorly understood, limiting our efforts to reduce harmful emissions. Here we report on the (dissociative) ionization and the thermal decomposition mechanism of cyclopentanone, studied using imaging photoelectron photoion coincidence spectroscopy. The fragmentation of the ions is dominated by loss of CO, C2H4, and C2H5, leading to daughter ions at m/z 56 and 55. Exploring the C5H8O•+ potential energy surface reveals hydrogen tunneling to play an important role in low-energy decarbonylation and probably also in the ethene-loss processes, yielding 1-butene and methylketene cations, respectively. At higher energies, pathways without a reverse barrier open up to oxopropenyl and cyclopropanone cations by ethyl-radical loss and a second ethene-loss channel, respectively. A statistical Rice-Ramsperger-Kassel-Marcus model is employed to test the viability of this mechanism. The pyrolysis of cyclopentanone is studied at temperatures ranging from about 800 to 1100 K. Closed-shell pyrolysis products, namely 1,3-butadiene, ketene, propyne, allene, and ethene, are identified based on their photoion mass-selected threshold photoelectron spectra. Furthermore, reactive radical species such as allyl, propargyl, and methyl are found. A reaction mechanism is derived incorporating both stable and reactive species, which were not predicted in prior computational studies. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Plasma-catalytic decomposition of TCE
Energy Technology Data Exchange (ETDEWEB)
Vandenbroucke, A.; Morent, R.; De Geyter, N.; Leys, C. [Ghent Univ., Ghent (Belgium). Dept. of Applied Physics; Tuan, N.D.M.; Giraudon, J.M.; Lamonier, J.F. [Univ. des Sciences et Technologies de Lille, Villeneuve (France). Dept. de Catalyse et Chimie du Solide
2010-07-01
Volatile organic compounds (VOCs) are gaseous pollutants that pose an environmental hazard due to their high volatility and their possible toxicity. Conventional technologies to reduce the emission of VOCs have their advantages, but they become cost-inefficient when low concentrations have to be treated. In the past 2 decades, non-thermal plasma technology has received growing attention as an alternative and promising remediation method. Non-thermal plasmas are effective because they produce a series of strong oxidizers such as ozone, oxygen radicals and hydroxyl radicals that provide a reactive chemical environment in which VOCs are completely oxidized. This study investigated whether the combination of NTP and catalysis could improve the energy efficiency and the selectivity towards carbon dioxide (CO{sub 2}). Trichloroethylene (TCE) was decomposed by non-thermal plasma generated in a DC-excited atmospheric pressure glow discharge. The production of by-products was qualitatively investigated through FT-IR spectrometry. The results were compared with those from a catalytic reactor. The removal rate of TCE reached a maximum of 78 percent at the highest input energy. The by-products of TCE decomposition were CO{sub 2}, carbon monoxide (CO), hydrochloric acid (HCl) and dichloroacetylchloride. Combining the plasma system with a catalyst located in an oven downstream resulted in a maximum removal of 80 percent, at an energy density of 300 J/L, a catalyst temperature of 373 K and a total air flow rate of 2 slm. 14 refs., 6 figs.
Differential Decomposition of Bacterial and Viral Fecal ...
Understanding the decomposition of microorganisms associated with different human fecal pollution types is necessary for proper implementation of many water quality management practices, as well as predicting associated public health risks. Here, the decomposition of select cultivated and molecular indicators of fecal pollution originating from fresh human feces, septage, and primary effluent sewage in a subtropical marine environment was assessed over a six-day period with an emphasis on the influence of ambient sunlight and indigenous microbiota. Ambient water mixed with each fecal pollution type was placed in dialysis bags and incubated in situ in a submersible aquatic mesocosm. Genetic and cultivated fecal indicators including fecal indicator bacteria (enterococci, E. coli, and Bacteroidales), coliphage (somatic and F+), Bacteroides fragilis phage (GB-124), and human-associated genetic indicators (HF183/BacR287 and HumM2) were measured in each sample. Simple linear regression assessing treatment trends in each pollution type over time showed significant decay (p ≤ 0.05) in most treatments for feces and sewage (27/28 and 32/40, respectively), compared to septage (6/26). A two-way analysis of variance of log10 reduction values for sewage and feces experiments indicated that treatments differentially impact survival of cultivated bacteria, cultivated phage, and genetic indicators. Findings suggest that sunlight is critical for phage decay, and indigenous microbiota
Interior tomography with continuous singular value decomposition.
Jin, Xin; Katsevich, Alexander; Yu, Hengyong; Wang, Ge; Li, Liang; Chen, Zhiqiang
2012-11-01
The long-standing interior problem has important mathematical and practical implications. The recently developed interior tomography methods have produced encouraging results. A particular scenario for theoretically exact interior reconstruction from truncated projections is that there is a known sub-region in the ROI. In this paper, we improve a novel continuous singular value decomposition (SVD) method for interior reconstruction assuming a known sub-region. First, two sets of orthogonal eigen-functions are calculated for the Hilbert and image spaces respectively. Then, after the interior Hilbert data are calculated from projection data through the ROI, they are projected onto the eigen-functions in the Hilbert space, and an interior image is recovered by a linear combination of the eigen-functions with the resulting coefficients. Finally, the interior image is compensated for the ambiguity due to the null space utilizing the prior sub-region knowledge. Experiments with simulated and real data demonstrate the advantages of our approach relative to the POCS type interior reconstructions.
Algebraic Davis Decomposition and Asymmetric Doob Inequalities
Hong, Guixiang; Junge, Marius; Parcet, Javier
2016-09-01
In this paper we investigate asymmetric forms of the Doob maximal inequality. The asymmetry is imposed by noncommutativity. Let (M, τ) be a noncommutative probability space equipped with a filtration of von Neumann subalgebras (M_n)_{n ≥ 1}, whose union ⋃_{n≥1} M_n is weak-* dense in M. Let E_n denote the corresponding family of conditional expectations. As an illustration for an asymmetric result, we prove that for 1 Hardy spaces H_p^r(M) and H_p^c(M) respectively. In particular, this solves a problem posed by Defant and Junge in 2004. In the case p = 1, our results establish a noncommutative form of Davis' celebrated theorem on the relation between martingale maximal and square functions in L_1, whose noncommutative form has remained open for quite some time. Given 1 ≤ p ≤ 2, we also provide new weak-type maximal estimates, which imply in turn left/right almost uniform convergence of E_n(x) in row/column Hardy spaces. This improves the bilateral convergence known so far. Our approach is based on new forms of the Davis martingale decomposition, which are of independent interest, and an algebraic atomic description for the involved Hardy spaces. The latter results are new even for commutative von Neumann algebras.
Singular Value Decomposition and Ligand Binding Analysis
Directory of Open Access Journals (Sweden)
André Luiz Galo
2013-01-01
Full Text Available Singular value decomposition (SVD) is one of the most important computations in linear algebra because of its vast applications in data analysis. It is particularly useful for resolving problems involving least-squares minimization, the determination of matrix rank, and the solution of certain problems involving Euclidean norms. Such problems arise in the spectral analysis of ligand binding to macromolecules. Here, we present a spectral data analysis method using SVD (SVD analysis) and nonlinear fitting to determine the binding characteristics of intercalating drugs to DNA. This methodology reduces noise and identifies distinct spectral species, similar to traditional principal component analysis, as well as fitting nonlinear binding parameters. We applied SVD analysis to investigate the interaction of actinomycin D and daunomycin with native DNA. This methodology does not require prior knowledge of ligand molar extinction coefficients (free and bound), which potentially limits binding analysis. Data are acquired simply by reconstructing the experimental data and by adjusting the product of deconvoluted matrices and the matrix of model coefficients determined by the Scatchard and McGhee-von Hippel equations.
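The noise-reduction and species-counting role of SVD described in this abstract can be sketched on synthetic titration spectra. This is an illustrative example, not the authors' data or code: the two Gaussian spectra, the noise level, and the rank choice are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 600, 201)            # wavelengths (nm), illustrative
free  = np.exp(-((wl - 480) / 25) ** 2)    # synthetic spectrum of free drug
bound = np.exp(-((wl - 520) / 25) ** 2)    # synthetic spectrum of bound drug
frac = np.linspace(0, 1, 15)               # bound fraction across a titration

# Each column is one titration point: a mixture of the two species.
A = np.outer(free, 1 - frac) + np.outer(bound, frac)
A_noisy = A + 0.01 * rng.standard_normal(A.shape)

U, s, Vt = np.linalg.svd(A_noisy, full_matrices=False)
# Two dominant singular values reveal two spectral species (free and bound);
# truncating to rank 2 discards most of the noise.
rank = 2
A_denoised = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
```

The gap between the second and third singular values is the usual diagnostic for the number of distinct spectral species, and the rank-2 reconstruction is the denoised data one would feed into the nonlinear binding fit.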
Experimental Shock Decomposition of Siderite to Magnetite
Bell, M. S.; Golden, D. C.; Zolensky, M. E.
2005-01-01
The debate about fossil life on Mars includes the origin of magnetites of specific sizes and habits in the siderite-rich portions of the carbonate spheres in ALH 84001 [1,2]. Specifically [2] were able to demonstrate that inorganic synthesis of these compositionally zoned spheres from aqueous solutions of variable ion-concentrations is possible. They further demonstrated the formation of magnetite from siderite upon heating at 550 C under a Mars-like CO2-rich atmosphere according to 3FeCO3 = Fe3O4 + 2CO2 + CO [3] and they postulated that the carbonates in ALH 84001 were heated to these temperatures by some shock event. The average shock pressure for ALH 84001, substantially based on the refractive index of diaplectic feldspar glasses [3,4,5] is some 35-40 GPa and associated temperatures are some 300-400 C [4]. However, some of the feldspar is melted [5], requiring local deviations from this average as high as 45-50 GPa. Indeed, [5] observes the carbonates in ALH 84001 to be melted locally, requiring pressures in excess of 60 GPa and temperatures > 600 C. Combining these shock studies with the above inorganic synthesis of zoned carbonates it seems possible to produce the ALH 84001 magnetites by the shock-induced decomposition of siderite.
Evolution-Based Functional Decomposition of Proteins.
Directory of Open Access Journals (Sweden)
Olivier Rivoire
2016-06-01
Full Text Available The essential biological properties of proteins (folding, biochemical activities, and the capacity to adapt) arise from the global pattern of interactions between amino acid residues. The statistical coupling analysis (SCA) is an approach to defining this pattern that involves the study of amino acid coevolution in an ensemble of sequences comprising a protein family. This approach indicates a functional architecture within proteins in which the basic units are coupled networks of amino acids termed sectors. This evolution-based decomposition has potential for new understandings of the structural basis for protein function. To facilitate its usage, we present here the principles and practice of the SCA and introduce new methods for sector analysis in a Python-based software package (pySCA). We show that the pattern of amino acid interactions within sectors is linked to the divergence of functional lineages in a multiple sequence alignment, a model for how sector properties might be differentially tuned in members of a protein family. This work provides new tools for studying proteins and for generally testing the concept of sectors as the principal units of function and adaptive variation.
Dynamic mode decomposition for compressive system identification
Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.
2017-11-01
Dynamic mode decomposition (DMD) has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.
Organocatalytic decomposition of polyethylene terephthalate using triazabicyclodecene
Lecuyer, Julien Matsumoto
This study focuses on the organocatalytic decomposition of polyethylene terephthalate (PET) using 1,5,7-triazabicyclo[4.4.0]dec-5-ene (TBD) to form a diverse library of aromatic amides. The reaction scheme was specifically designed to use low reaction temperatures (<150°C) and to avoid solvents during the reaction, providing a more environmentally friendly process. Of all the amines tested, PET aminolysis with aliphatic and aromatic amines demonstrated the best performance, with yields higher than 72%. PET aminolysis with click-functionalized and non-symmetric reagents facilitated attack on certain sites on the basis of reactivity. Finally, the PET degradation reactions with secondary-amine and tertiary-amine functionalized reagents yielded mixed results due to complications with isolating the product from the crude solution. Four of the PET-based monomers were also selected as modifiers for epoxy hardening to demonstrate the ability to convert waste into monomers for high-value applications. The glass transition temperatures of the epoxy composite samples treated with the PET-based monomers, obtained using differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA), were generally higher than those of the samples cured with the basic diamines due to the hydrogen bonding and added rigidity from the aromatic amide group. Developing these monomers provides a green and commercially viable alternative for eradicating a waste product that is becoming an environmental concern.
Lauber, Arnd
2007-07-01
We consider the family of transcendental entire functions given by \{f_c: \mathbb{C} \rightarrow \mathbb{C},\ z \mapsto z - c + e^z,\ c \in \mathbb{C}\}. If Re c > 0, then f_c features a Baker domain as the only component of the Fatou set, while the functions f_c show a different dynamical behaviour if c \in i\mathbb{R}. We describe the dynamical planes of these functions and show that the Julia sets converge in the limit process f_{c_1 + i c_2} \rightarrow f_{i c_2} with respect to the Hausdorff metric, where c_1 \in \mathbb{R}^+ and c_2 \in \mathbb{R}. We use this to show that Baker domains of any type (concerning a classification of König) are not necessarily stable under perturbation.
Time Domain Induced Polarization
DEFF Research Database (Denmark)
Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest
2012-01-01
Time-domain induced polarization has significantly broadened its field of application during the last decade, from mineral exploration to environmental geophysics, e.g., for clay and peat identification and landfill characterization. However, insufficient modeling tools have hitherto limited the use......%. Furthermore, the presence of low-pass filters in time-domain induced polarization instruments affects the early times of the acquired decays (typically up to 100 ms) and has to be modeled in the forward response to avoid significant loss of resolution. The developed forward code has been implemented in a 1D...... polarization response when compared to traditional integral chargeability inversion. The quality of the inversion results has been assessed by a complete uncertainty analysis of the model parameters; furthermore, borehole information confirms the outcomes of the field interpretations. With this new accurate...
C7-Decompositions of the Tensor Product of Complete Graphs
Directory of Open Access Journals (Sweden)
Manikandan R.S.
2017-08-01
Full Text Available In this paper we consider a decomposition of Km × Kn, where × denotes the tensor product of graphs, into cycles of length seven. We prove that for m, n ≥ 3, cycles of length seven decompose the graph Km × Kn if and only if (1) either m or n is odd and (2) 14 | m(m − 1)n(n − 1). The results of this paper, together with the results of [Cp-Decompositions of some regular graphs, Discrete Math. 306 (2006) 429–451] and [C5-Decompositions of the tensor product of complete graphs, Australasian J. Combinatorics 37 (2007) 285–293], give necessary and sufficient conditions for the existence of a p-cycle decomposition, where p ≥ 5 is a prime number, of the graph Km × Kn.
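The stated necessary-and-sufficient conditions are purely arithmetic, so they can be checked numerically for any given m and n. A small illustrative sketch (not from the paper; the function name is invented):

```python
def c7_decomposable(m: int, n: int) -> bool:
    """Check the paper's stated conditions for decomposing the tensor
    product Km x Kn (m, n >= 3) into cycles of length seven:
    (1) m or n is odd, and (2) 14 divides m(m-1)n(n-1)."""
    if m < 3 or n < 3:
        raise ValueError("the characterization is stated for m, n >= 3")
    parity_ok = (m % 2 == 1) or (n % 2 == 1)
    divisibility_ok = (m * (m - 1) * n * (n - 1)) % 14 == 0
    return parity_ok and divisibility_ok
```

For example, K7 × K3 satisfies both conditions (7·6·3·2 = 252 = 14·18), while K4 × K4 fails the parity condition.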
Mapping litter decomposition by remote-detected indicators
Directory of Open Access Journals (Sweden)
L. Rossi
2006-06-01
Full Text Available Leaf litter decomposition is a key process for the functioning of natural ecosystems. An important limiting factor for this process is detritus availability, which we have estimated by remote-sensed indices of canopy green biomass (NDVI). Here, we describe the use of multivariate geostatistical analysis to couple in situ measures with hyper-spectral and multi-spectral remote-sensed data for producing maps of litter decomposition. A direct relationship was found between the decomposition rates in four different CORINE habitats and NDVI, calculated at different scales from Landsat ETM+ multi-spectral data and MIVIS hyper-spectral data. Variogram analysis was used to evaluate the spatial properties of each single variable and their common interaction. Co-variogram and co-kriging analysis of the two variables turned out to be an effective approach for decomposition mapping from remote-sensed, spatially explicit data.
Thermal Decomposition of RDX from Reactive Molecular Dynamics
National Research Council Canada - National Science Library
Strachan, Alejandro; Kober, Edward M; van Duin, Adri C; Oxgaard, Jonas; Goddard, III, William A
2005-01-01
...] at various temperatures and densities. We find that the time evolution of the potential energy can be described reasonably well with a single exponential function from which we obtain an overall characteristic time of decomposition...
Modal Decomposition of Synthetic Jet Flow Based on CFD Computation
Directory of Open Access Journals (Sweden)
Hyhlík Tomáš
2015-01-01
Full Text Available The article analyzes results of numerical simulation of synthetic jet flow using modal decomposition. The analyses are based on the numerical simulation of axisymmetric unsteady laminar flow obtained using the ANSYS Fluent CFD code. Three typical laminar regimes are compared from the point of view of modal decomposition. The first regime, without synthetic jet creation, has Reynolds number Re = 76 and Stokes number S = 19.7. The second studied regime is defined by Re = 145 and S = 19.7. The third regime is defined by Re = 329 and S = 19.7. Modal decomposition of the obtained flow fields is done using proper orthogonal decomposition (POD), where the energetically most important modes are identified. The structure of the POD modes is discussed together with the classical approach based on phase-averaged velocities.
Bark traits, decomposition and flammability of Australian forest trees
Grootemaat, Saskia; Wright, Ian J.; Van Bodegom, Peter M.; Cornelissen, Johannes H.C.; Shaw, Veronica
2017-01-01
Bark shedding is a remarkable feature of Australian trees, yet relatively little is known about interspecific differences in bark decomposability and flammability, or what chemical or physical traits drive variation in these properties. We measured the decomposition rate and flammability
Thermal decomposition of potassium bis-oxalatodiaqua-indate (III ...
Indian Academy of Sciences (India)
2] 3H2O. Thermal decomposition studies show that the compound decomposes first to the anhydrous potassium indium oxalate ... Bio-inorganic Chemistry Laboratories, School of Chemistry, Andhra University, Visakhapatnam 530 003, India ...
Ozone Decomposition on the Surface of Metal Oxide Catalyst
Directory of Open Access Journals (Sweden)
Batakliev Todor Todorov
2014-12-01
Full Text Available The catalytic decomposition of ozone to molecular oxygen over a catalytic mixture containing manganese, copper and nickel oxides was investigated in the present work. The catalytic activity was evaluated on the basis of the decomposition coefficient, which is proportional to the ozone decomposition rate and has already been used in other studies for estimating catalytic activity. The reaction was studied in the presence of thermally modified catalytic samples operating at different temperatures and ozone flow rates. The catalyst changes were followed by kinetic methods, surface measurements, temperature-programmed reduction and IR spectroscopy. The phase composition of the metal oxide catalyst was determined by X-ray diffraction. The catalyst mixture has shown high activity in ozone decomposition in both wet and dry O3/O2 gas mixtures. A mechanism of catalytic ozone degradation was suggested.
DECOMPOSITION OF TARS IN MICROWAVE PLASMA – PRELIMINARY RESULTS
Directory of Open Access Journals (Sweden)
Mateusz Wnukowski
2014-07-01
Full Text Available The paper addresses the main problem connected with biomass gasification: the presence of tar in the product gas. It presents preliminary results of tar decomposition in a microwave plasma reactor and gives a basic insight into the construction and operation of the plasma reactor. During the experiment, tests were carried out on toluene as a tar surrogate. Nitrogen was used both as the carrier gas for toluene and as the plasma agent. The flow rates of the gases and the microwave generator's power were kept constant throughout the experiment. The results showed that the decomposition of toluene was effective, with a decomposition efficiency above 95%. The main products of tar decomposition were light hydrocarbons and soot. The article also outlines plans for further research on tar removal from the product gas.
Using Microwave Sample Decomposition in Undergraduate Analytical Chemistry
Griff Freeman, R.; McCurdy, David L.
1998-08-01
A shortcoming of many undergraduate classes in analytical chemistry is that students receive little exposure to sample preparation in chemical analysis. This paper reports the progress made in introducing microwave sample decomposition into several quantitative analysis experiments at Truman State University. Two experiments being performed in our current laboratory rotation include closed-vessel microwave decomposition applied to the classical gravimetric determination of nickel and the determination of sodium in snack foods by flame atomic emission spectrometry. A third lab, using open-vessel microwave decomposition for the Kjeldahl nitrogen determination, is now ready for student trial. Microwave decomposition reduces the time needed to complete these experiments and significantly increases student awareness of the importance of sample preparation in quantitative chemical analyses, providing greater breadth and realism in the experiments.
A test of the hierarchical model of litter decomposition
DEFF Research Database (Denmark)
Bradford, Mark A.; Veen, G. F.; Bonis, Anne
2017-01-01
Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls...
Plant diversity effects on root decomposition in grasslands
Chen, Hongmei; Mommer, Liesje; van Ruijven, Jasper; de Kroon, Hans; Gessler, Arthur; Scherer-Lorenzen, Michael; Wirth, Christian; Weigelt, Alexandra
2016-04-01
Loss of plant diversity impairs ecosystem functioning. Compared to other well-studied processes, we know little about whether and how plant diversity affects root decomposition, which limits our understanding of biodiversity-carbon cycling relationships in the soil. Plant diversity potentially affects root decomposition via two non-exclusive mechanisms: by providing roots of different substrate quality and/or by altering the soil decomposition environment. To disentangle these two mechanisms, three decomposition experiments using a litter-bag approach were conducted on experimental grassland plots differing in plant species richness, functional group richness and functional group composition (e.g. presence/absence of grasses, legumes, small herbs and tall herbs; the Jena Experiment). We studied: 1) root substrate quality effects, by decomposing roots collected from the different experimental plant communities in one common plot; 2) soil decomposition environment effects, by decomposing standard roots in all experimental plots; and 3) the overall plant diversity effects, by decomposing community roots in their 'home' plots. Litter bags were installed in April 2014 and retrieved after 1, 2 and 4 months to determine mass loss. We found that mass loss decreased with increasing plant species richness, but not with functional group richness, in the three experiments. However, functional group presence significantly affected mass loss, with primarily negative effects of the presence of grasses and positive effects of the presence of legumes and small herbs. Our results thus provide clear evidence that species richness has a strong negative effect on root decomposition via effects on both root substrate quality and the soil decomposition environment. This negative plant diversity-root decomposition relationship may partly account for the positive effect of plant diversity on soil C stocks by reducing C loss, in addition to increasing primary root productivity. However, to fully
Curtis, Tyler E; Roeder, Ryan K
2017-10-01
Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in
Decomposition in lake sediments: bacterial action and interaction
Jones, J.G.
1985-01-01
This review discusses the processes involved in the decomposition of organic carbon derived initially from structural components of algae and other primary producers. It describes how groups of bacteria interact in time and space in a eutrophic lake. The relative importance of anaerobic and aerobic processes is discussed. The bulk of decomposition occurs within the sediment. The role of bacteria in the nitrogen cycle and the iron cycle, and in sulphate reduction and methanogenesis as the ter...
Thermal decomposition of lanthanum(III) butyrate in argon atmosphere
DEFF Research Database (Denmark)
Grivel, Jean-Claude; Yue, Zhao; Xiao, Tang
2013-01-01
The thermal decomposition of La(C3H7CO2)3·xH2O (x≈0.82) was studied in argon during heating at 5 K/min. After the loss of bound H2O, the anhydrous butyrate undergoes a phase transition to a mesophase at 135°C, which turns into an isotropic liquid at 180°C. The decomposition of the anhydrous butyrate ...
ROLE OF MICROORGANISM AND MICROFAUNA IN PLANT LITTER DECOMPOSITION
Raj Singh*, Anju Rani, Permod Kumar, Gyanika Shukla, Amit Kumar
2016-01-01
Though fungi play a very significant role in plant litter decomposition, studies have revealed that bacteria colonize the litter in the initial stages of decomposition. It has been observed that leaf species with a low C:N ratio harbored a higher number of bacteria than the more resistant species. The results of various workers outline the development of the bacterial flora after litter fall due to improved moisture conditions, but with no change in species composition. The plant ...
Mitigation of artifacts in RTM with migration kernel decomposition
Zhan, Ge
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a distinct physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.
Modified decomposition method for nonlinear Volterra-Fredholm integral equations
Energy Technology Data Exchange (ETDEWEB)
Bildik, Necdet [Department of Mathematics, Celal Bayar University, 45030 Manisa (Turkey)]. E-mail: necdet.bildik@bayar.edu.tr; Inc, Mustafa [Department of Mathematics, Firat University, 23119 Elazig (Turkey)]. E-mail: minc@firat.edu.tr
2007-07-15
In this paper, the nonlinear Volterra-Fredholm integral equations are solved by using the modified decomposition method (MDM). The approximate solution of this equation is calculated in the form of a series with easily computable components. The accuracy of the proposed numerical scheme is examined by comparison with other analytical and numerical results. Two test problems are presented to illustrate the reliability and the performance of the modified decomposition method.
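The abstract does not spell out the recursion, so the following is only a sketch of the general series idea behind decomposition methods, applied to a linear Volterra test problem with trapezoidal quadrature; it is not the authors' MDM, and all function names are illustrative:

```python
# Series (Adomian-style) solution of a linear Volterra equation
#   u(x) = f(x) + integral_0^x K(x, t) u(t) dt
# built as u = u0 + u1 + ..., with u0 = f and
# u_{k+1}(x) = integral_0^x K(x, t) u_k(t) dt (trapezoidal quadrature).
import math

def volterra_series(f, K, xs, terms=25):
    h = xs[1] - xs[0]
    u_k = [f(x) for x in xs]           # u0 on the grid
    u = list(u_k)                      # running sum of the series
    for _ in range(terms):
        nxt = []
        for i, x in enumerate(xs):
            # trapezoidal rule over [0, x]
            vals = [K(x, xs[j]) * u_k[j] for j in range(i + 1)]
            integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i > 0 else 0.0
            nxt.append(integral)
        u = [a + b for a, b in zip(u, nxt)]
        u_k = nxt
    return u

# Test problem: u(x) = 1 + integral_0^x u(t) dt has exact solution e^x.
xs = [i * 0.01 for i in range(101)]
u = volterra_series(lambda x: 1.0, lambda x, t: 1.0, xs)
```

Each series term here corresponds to one application of the integral operator, mirroring the "easily computable components" the abstract refers to.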
Decomposition in conic optimization with partially separable structure
DEFF Research Database (Denmark)
Sun, Yifan; Andersen, Martin Skovgaard; Vandenberghe, Lieven
2014-01-01
Decomposition techniques for linear programming are difficult to extend to conic optimization problems with general nonpolyhedral convex cones because the conic inequalities introduce an additional nonlinear coupling between the variables. However in many applications the convex cones have...... semidefinite and positive-semidefinite-completable matrices with chordal sparsity patterns. The paper describes a decomposition method that exploits partial separability in conic linear optimization. The method is based on Spingarn's method for equality constrained convex optimization, combined with a fast...
Decomposition of Mueller matrices of scattering media: Theory and experiment
Directory of Open Access Journals (Sweden)
R. Ossikovski
2011-09-01
Algebraic decomposition of Mueller matrices is a particularly promising approach to the retrieval of the optical properties of the medium investigated in a polarized light scattering experiment. Various decompositions of generally depolarizing Mueller matrices are revisited and discussed. Both classic and recently proposed approaches are reviewed. Physical and mathematical aspects such as depolarization and limits of applicability are comparatively addressed. Experimental matrices of scattering media are decomposed by different methodologies and physically interpreted.
Conservation Rules of Direct Sum Decomposition of Groups
Directory of Open Access Journals (Sweden)
Nakasho Kazuhisa
2016-03-01
In this article, conservation rules of the direct sum decomposition of groups are mainly discussed. In the first section, we prepare miscellaneous definitions and theorems for further formalization in Mizar [5]. In the next three sections, we formalize the fact that the property of direct sum decomposition is preserved under substitutions of the subscript set, flattening of the direct sum, and layering of the direct sum, respectively. We referred to [14], [13], [6] and [11] in the formalization.
Sector decomposition and Hironaka's polyhedra game
Energy Technology Data Exchange (ETDEWEB)
Bogner, Christian [Institut fuer Physik, Universitaet Mainz (Germany)
2008-07-01
Sector decomposition is a method to compute numerically the Laurent expansion of divergent multi-loop Feynman integrals. In this talk we point out that winning strategies for Hironaka's polyhedra game, which encode the combinatorics of resolutions of singularities by a blow-up sequence, can be applied to this method. We indicate how these strategies are used to guarantee the termination of the sector decomposition algorithm of Binoth and Heinrich.
A Decomposition Algorithm for Learning Bayesian Network Structures from Data
DEFF Research Database (Denmark)
Zeng, Yifeng; Cordero Hernandez, Jorge
2008-01-01
Learning a large Bayesian network from a small data set is a challenging task. Most conventional structural learning approaches run into computational as well as statistical problems. We propose a decomposition algorithm for the structure construction without having to learn...... the complete network. The new learning algorithm first finds local components from the data, and then recovers the complete network by joining the learned components. We show the empirical performance of the decomposition algorithm on several benchmark networks....
Wavelet-bounded empirical mode decomposition for measured time series analysis
Moore, Keegan J.; Kurt, Mehmet; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2018-01-01
Empirical mode decomposition (EMD) is a powerful technique for separating the transient responses of nonlinear and nonstationary systems into finite sets of nearly orthogonal components, called intrinsic mode functions (IMFs), which represent the dynamics on different characteristic time scales. However, a deficiency of EMD is the mixing of two or more components in a single IMF, which can drastically affect the physical meaning of the empirical decomposition results. In this paper, we present a new approach based on EMD, designated wavelet-bounded empirical mode decomposition (WBEMD), which is a closed-loop, optimization-based solution to the problem of mode mixing. The optimization routine relies on maximizing the isolation of an IMF around a characteristic frequency. This isolation is measured by fitting a bounding function around the IMF in the frequency domain and computing the area under this function. It follows that a large (small) area corresponds to a poorly (well) separated IMF. An optimization routine is developed based on this result, with the objective of minimizing the bounding-function area and with the masking-signal parameters serving as free parameters, such that a well-separated IMF is extracted. As examples of the application of WBEMD, we apply the proposed method first to a stationary, two-component signal, and then to the numerically simulated response of a cantilever beam with an essentially nonlinear end attachment. We find that WBEMD vastly improves upon EMD and that the extracted sets of IMFs provide insight into the underlying physics of the response of each system.
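The masking-signal optimization at the heart of WBEMD is not reproduced in the abstract; the sketch below (an illustrative simplification, not the authors' method) shows only the underlying EMD sifting step, with piecewise-linear envelopes standing in for the cubic splines of production EMD implementations:

```python
# Minimal EMD-style sifting sketch (linear envelopes; production EMD
# uses cubic-spline envelopes and more careful stopping criteria).
import math

def _envelope(sig, idx):
    # piecewise-linear interpolation through the extrema at positions idx
    env = [0.0] * len(sig)
    pts = [0] + idx + [len(sig) - 1]
    for a, b in zip(pts, pts[1:]):
        for n in range(a, b + 1):
            t = (n - a) / (b - a) if b > a else 0.0
            env[n] = sig[a] * (1 - t) + sig[b] * t
    return env

def sift(sig, n_iter=10):
    """Extract one IMF by repeatedly removing the mean envelope."""
    h = list(sig)
    for _ in range(n_iter):
        maxima = [i for i in range(1, len(h) - 1) if h[i] > h[i-1] and h[i] > h[i+1]]
        minima = [i for i in range(1, len(h) - 1) if h[i] < h[i-1] and h[i] < h[i+1]]
        if len(maxima) < 2 or len(minima) < 2:
            break
        upper = _envelope(h, maxima)
        lower = _envelope(h, minima)
        h = [v - 0.5 * (u + l) for v, u, l in zip(h, upper, lower)]
    return h

# Two-tone test signal: a fast component riding on a slow one
sig = [math.sin(2 * math.pi * 0.2 * n) + math.sin(2 * math.pi * 0.02 * n)
       for n in range(500)]
imf1 = sift(sig)
residual = [s - i for s, i in zip(sig, imf1)]
```

By construction the signal equals the extracted IMF plus the residual; WBEMD's contribution is to tune masking signals so that each IMF is well isolated in frequency, which this bare sifting loop does not attempt.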
Optimizing the decomposition of soil moisture time-series data using genetic algorithms
Kulkarni, C.; Mengshoel, O. J.; Basak, A.; Schmidt, K. M.
2015-12-01
The task of determining near-surface volumetric water content (VWC), using commonly available dielectric sensors (based upon capacitance or frequency domain technology), is made challenging due to the presence of "noise" such as temperature-driven diurnal variations in the recorded data. We analyzed a post-wildfire rainfall and runoff monitoring dataset for hazard studies in Southern California. VWC was measured with EC-5 sensors manufactured by Decagon Devices. Many traditional signal smoothing techniques such as moving averages, splines, and Loess smoothing exist. Unfortunately, when applied to our post-wildfire dataset, these techniques diminish maxima, introduce time shifts, and diminish signal details. A promising seasonal trend-decomposition procedure based on Loess (STL) decomposes VWC time series into trend, seasonality, and remainder components. Unfortunately, STL with its default parameters produces similar results as previously mentioned smoothing methods. We propose a novel method to optimize seasonal decomposition using STL with genetic algorithms. This method successfully reduces "noise" including diurnal variations while preserving maxima, minima, and signal detail. Better decomposition results for the post-wildfire VWC dataset were achieved by optimizing STL's control parameters using genetic algorithms. The genetic algorithms minimize an additive objective function with three weighted terms: (i) root mean squared error (RMSE) of straight line relative to STL trend line; (ii) range of STL remainder; and (iii) variance of STL remainder. Our optimized STL method, combining trend and remainder, provides an improved representation of signal details by preserving maxima and minima as compared to the traditional smoothing techniques for the post-wildfire rainfall and runoff monitoring data. This method identifies short- and long-term VWC seasonality and provides trend and remainder data suitable for forecasting VWC in response to precipitation.
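The abstract's three-term objective can be illustrated without STL itself; the sketch below substitutes a classical moving-average decomposition for STL (an assumption made for self-containment) and evaluates the weighted objective of trend-line RMSE, remainder range, and remainder variance, with illustrative weights:

```python
# Classical additive decomposition (moving-average trend + seasonal means)
# and the three-term objective described in the abstract:
#   w1 * RMSE(trend vs. straight line) + w2 * range(remainder) + w3 * var(remainder)
import math

def decompose(x, period):
    n = len(x)
    half = period // 2
    # centered moving-average trend (shorter windows at the edges)
    trend = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        trend.append(sum(x[lo:hi]) / (hi - lo))
    detrended = [a - b for a, b in zip(x, trend)]
    seas_mean = [0.0] * period
    for p in range(period):
        vals = detrended[p::period]
        seas_mean[p] = sum(vals) / len(vals)
    seasonal = [seas_mean[i % period] for i in range(n)]
    remainder = [a - t - s for a, t, s in zip(x, trend, seasonal)]
    return trend, seasonal, remainder

def objective(trend, remainder, w=(1.0, 1.0, 1.0)):
    n = len(trend)
    # straight line through the first and last trend values (simple baseline)
    line = [trend[0] + (trend[-1] - trend[0]) * i / (n - 1) for i in range(n)]
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(trend, line)) / n)
    rng = max(remainder) - min(remainder)
    mean_r = sum(remainder) / n
    var = sum((r - mean_r) ** 2 for r in remainder) / n
    return w[0] * rmse + w[1] * rng + w[2] * var

# Synthetic VWC-like series: a slow drying trend plus a diurnal cycle
period = 24
x = [0.3 - 0.0005 * i + 0.02 * math.sin(2 * math.pi * i / period)
     for i in range(10 * period)]
trend, seasonal, remainder = decompose(x, period)
score = objective(trend, remainder)
```

In the paper's setting, a genetic algorithm would search over STL's control parameters to minimize such a score; here the objective is merely evaluated once.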
Termites promote resistance of decomposition to spatiotemporal variability in rainfall.
Veldhuis, Michiel P; Laso, Francisco J; Olff, Han; Berg, Matty P
2017-02-01
The ecological impact of rapid environmental change will depend on the resistance of key ecosystem processes, which may be promoted by species that exert strong control over local environmental conditions. Recent theoretical work suggests that macrodetritivores increase the resistance of African savanna ecosystems to changing climatic conditions, but experimental evidence is lacking. We examined the effect of large fungus-growing termites and other non-fungus-growing macrodetritivores on decomposition rates empirically under strong spatiotemporal variability in rainfall and temperature. Non-fungus-growing larger macrodetritivores (earthworms, woodlice, millipedes) promoted decomposition rates relative to microbes and small soil fauna (+34%), but both groups reduced their activities with decreasing rainfall. However, fungus-growing termites increased decomposition rates the most (+123%) under the most water-limited conditions, making overall decomposition rates largely independent of rainfall. We conclude that fungus-growing termites are of special importance in decoupling decomposition rates from spatiotemporal variability in rainfall due to the buffered environment they create within their extended phenotype (mounds), which allows decomposition to continue when abiotic conditions outside are less favorable. This points to a wider class of possibly important ecological processes, in which soil-plant-animal interactions decouple ecosystem processes from large-scale climatic gradients. This may strongly alter predictions from current climate change models. © 2016 by the Ecological Society of America.
Nonequilibrium adiabatic molecular dynamics simulations of methane clathrate hydrate decomposition.
Alavi, Saman; Ripmeester, J A
2010-04-14
Nonequilibrium, constant energy, constant volume (NVE) molecular dynamics simulations are used to study the decomposition of methane clathrate hydrate in contact with water. Under adiabatic conditions, the rate of methane clathrate decomposition is affected by heat and mass transfer arising from the breakup of the clathrate hydrate framework, the release of methane gas at the solid-liquid interface, and the diffusion of methane through water. We observe that temperature gradients are established between the clathrate and solution phases as a result of the endothermic clathrate decomposition process, and this factor must be considered when modeling the decomposition process. Additionally, we observe that clathrate decomposition does not occur gradually with the breakup of individual cages, but rather in a concerted fashion, with rows of structure I cages parallel to the interface decomposing simultaneously. Due to the concerted breakup of layers of the hydrate, large amounts of methane gas are released near the surface, which can form bubbles that greatly affect the rate of mass transfer near the surface of the clathrate phase. The effects of these phenomena on the rate of methane hydrate decomposition are determined, and implications for hydrate dissociation in natural methane hydrate reservoirs are discussed.
Consequences of biodiversity loss for litter decomposition across biomes.
Handa, I Tanya; Aerts, Rien; Berendse, Frank; Berg, Matty P; Bruder, Andreas; Butenschoen, Olaf; Chauvet, Eric; Gessner, Mark O; Jabiol, Jérémy; Makkonen, Marika; McKie, Brendan G; Malmqvist, Björn; Peeters, Edwin T H M; Scheu, Stefan; Schmid, Bernhard; van Ruijven, Jasper; Vos, Veronique C A; Hättenschwiler, Stephan
2014-05-08
The decomposition of dead organic matter is a major determinant of carbon and nutrient cycling in ecosystems, and of carbon fluxes between the biosphere and the atmosphere. Decomposition is driven by a vast diversity of organisms that are structured in complex food webs. Identifying the mechanisms underlying the effects of biodiversity on decomposition is critical given the rapid loss of species worldwide and the effects of this loss on human well-being. Yet despite comprehensive syntheses of studies on how biodiversity affects litter decomposition, key questions remain, including when, where and how biodiversity has a role and whether general patterns and mechanisms occur across ecosystems and different functional types of organism. Here, in field experiments across five terrestrial and aquatic locations, ranging from the subarctic to the tropics, we show that reducing the functional diversity of decomposer organisms and plant litter types slowed the cycling of litter carbon and nitrogen. Moreover, we found evidence of nitrogen transfer from the litter of nitrogen-fixing plants to that of rapidly decomposing plants, but not between other plant functional types, highlighting that specific interactions in litter mixtures control carbon and nitrogen cycling during decomposition. The emergence of this general mechanism and the coherence of patterns across contrasting terrestrial and aquatic ecosystems suggest that biodiversity loss has consistent consequences for litter decomposition and the cycling of major elements on broad spatial scales.
Long-term litter decomposition controlled by manganese redox cycling.
Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus
2015-09-22
Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.
The Products of the Thermal Decomposition of CH3CHO
Energy Technology Data Exchange (ETDEWEB)
Vasiliou, AnGayle; Piech, Krzysztof M.; Zhang, Xu; Nimlos, Mark R.; Ahmed, Musahid; Golan, Amir; Kostko, Oleg; Osborn, David L.; Daily, John W.; Stanton, John F.; Ellison, G. Barney
2011-04-06
We have used a heated 2 cm × 1 mm SiC microtubular (μtubular) reactor to decompose acetaldehyde: CH3CHO + Δ → products. Thermal decomposition is followed at pressures of 75-150 Torr and at temperatures up to 1700 K, conditions that correspond to residence times of roughly 50-100 μs in the μtubular reactor. The acetaldehyde decomposition products are identified by two independent techniques: VUV photoionization mass spectrometry (PIMS) and infrared (IR) absorption spectroscopy after isolation in a cryogenic matrix. Besides CH3CHO, we have studied three isotopologues, CH3CDO, CD3CHO, and CD3CDO. We have identified the thermal decomposition products CH3 (PIMS), CO (IR, PIMS), H (PIMS), H2 (PIMS), CH2CO (IR, PIMS), CH2=CHOH (IR, PIMS), H2O (IR, PIMS), and HC≡CH (IR, PIMS). Plausible evidence has been found to support the idea that there are at least three different thermal decomposition pathways for CH3CHO. Radical decomposition: CH3CHO + Δ → CH3 + [HCO] → CH3 + H + CO. Elimination: CH3CHO + Δ → H2 + CH2=C=O. Isomerization/elimination: CH3CHO + Δ → [CH2=CH-OH] → HC≡CH + H2O. Both PIMS and IR spectroscopy show compelling evidence for the participation of vinylidene, CH2=C:, as an intermediate in the decomposition of vinyl alcohol: CH2=CH-OH + Δ → [CH2=C:] + H2O → HC≡CH + H2O.
Canonical decomposition of fluctuation interferences using the delta function formalism
Directory of Open Access Journals (Sweden)
Alexander V. Denisov
2015-10-01
The paper deals with the discrete spectral-orthogonal decompositions of centered Gaussian random processes for two cases. In the first case, the process implementations are a sequence of pulses that are short in comparison with the observation time. The process decomposition was obtained as a generalized Fourier series on the basis of the delta function formalism, and the variances of the coefficients (random variables) of this series were found as well. The resulting expressions complement Kotel'nikov's formula because they cover both the high-frequency and the low-frequency regions of the canonical-decomposition spectrum. In the second case, a random process is a superposition of narrow-band Gaussian random processes, and its implementations are characterized by oscillations. For such a process the canonical decomposition in terms of the Walsh functions was obtained on the basis of the generalized function formalism. This decomposition was then re-decomposed in terms of trigonometric functions; it follows from the resulting series that the canonical-decomposition spectrum is not uniform, since a pedestal is formed in the constant-component region.
Hydrogen production by the decomposition of water
Hollabaugh, C.M.; Bowman, M.G.
A process is described for the production of hydrogen from water by a sulfuric acid process employing electrolysis and thermochemical decomposition. Water containing SO₂ is electrolyzed to produce H₂ at the cathode and to oxidize the SO₂ to form H₂SO₄ at the anode. After the H₂ has been separated, a compound of the type MᵣXₛ is added to produce a water-insoluble sulfate of M and a water-insoluble oxide of the metal in the radical X. In the compound MᵣXₛ, M is at least one metal selected from the group consisting of Ba²⁺, Ca²⁺, Sr²⁺, La²⁺, and Pb²⁺; X is at least one radical selected from the group consisting of molybdate (MoO₄²⁻), tungstate (WO₄²⁻), and metaborate (BO₂⁻); and r and s are either 1, 2, or 3, depending upon the valences of M and X. The precipitated mixture is filtered and heated to a temperature sufficiently high to form SO₃ gas and to re-form MᵣXₛ. The SO₃ is dissolved in a small amount of H₂O to produce concentrated H₂SO₄, and the MᵣXₛ is recycled to the process. Alternatively, the SO₃ gas can be recycled to the beginning of the process to provide a continuous process for the production of H₂ in which only water need be added in a substantial amount. (BLM)
Flat norm decomposition of integral currents
Directory of Open Access Journals (Sweden)
Sharif Ibrahim
2016-05-01
Currents represent generalized surfaces studied in geometric measure theory. They range from relatively tame integral currents, representing oriented compact manifolds with boundary and integer multiplicities, to arbitrary elements of the dual space of differential forms. The flat norm provides a natural distance on the space of currents, and works by decomposing a $d$-dimensional current into $d$-dimensional and (the boundary of) $(d+1)$-dimensional pieces in an optimal way. Given an integral current, can we expect its flat norm decomposition to be integral as well? This is not known in general, except in the case of $d$-currents that are boundaries of $(d+1)$-currents in $\mathbb{R}^{d+1}$ (following results from a corresponding problem on the $L^1$ total variation ($L^1$TV) of functionals). On the other hand, for a discretized flat norm on a finite simplicial complex, the analogous statement holds even when the inputs are not boundaries. This simplicial version relies on the total unimodularity of the boundary matrix of the simplicial complex, a result distinct from the $L^1$TV approach. We develop an analysis framework that extends the result in the simplicial setting to one for $d$-currents in $\mathbb{R}^{d+1}$, provided a suitable triangulation result holds. In $\mathbb{R}^2$, we use a triangulation result of Shewchuk (bounding both the size and location of small angles) and apply the framework to show that the discrete result implies the continuous result for $1$-currents in $\mathbb{R}^2$.
Ramanujan subspace pursuit for signal periodic decomposition
Deng, Shi-Wen; Han, Ji-Qing
2017-06-01
The period estimation and periodic decomposition of a signal are long-standing problems in the fields of signal processing and biomolecular sequence analysis. To address such problems, we introduce the Ramanujan subspace pursuit (RSP), based on the Ramanujan subspace. As a greedy iterative algorithm, the RSP can uniquely decompose any signal into a sum of exactly periodic components by selecting and removing the most dominant periodic component from the residual signal in each iteration. In the RSP, a novel periodicity metric is derived from the energy of the exactly periodic component obtained by orthogonally projecting the residual signal onto the Ramanujan subspace. The metric is then used to select the most dominant periodic component in each iteration. To reduce the computational cost of the RSP, we also propose the fast RSP (FRSP), based on the relationship between the periodic subspace and the Ramanujan subspace, and on the maximum likelihood estimate of the energy of the periodic component in the periodic subspace. The fast RSP has a lower computational cost and can decompose a signal of length N into a sum of K exactly periodic components in O(KN log N) operations. In short, the main contributions of this paper are threefold. First, we present the RSP algorithm for decomposing a signal into its periodic components and theoretically prove the convergence of the algorithm based on the Ramanujan subspaces. Second, we present the FRSP algorithm, which reduces the computational cost. Finally, we derive a periodicity metric to measure the periodicity of the hidden periodic components of a signal. In addition, our results show that the RSP outperforms current algorithms for period estimation.
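As a pure-Python illustration of the kind of periodicity metric the abstract builds on (energy associated with Ramanujan subspaces), the sketch below scores candidate periods by correlating the signal with circular shifts of the Ramanujan sums c_q(n); this is a simplified metric, not the full RSP or FRSP algorithms:

```python
# Period detection via Ramanujan sums: score each candidate period q by
# the energy of the signal's correlation with shifts of c_q(n).
import math

def ramanujan_sum(q, n):
    # c_q(n) = sum over k in [1, q] with gcd(k, q) = 1 of cos(2*pi*k*n/q)
    return sum(math.cos(2 * math.pi * k * n / q)
               for k in range(1, q + 1) if math.gcd(k, q) == 1)

def period_scores(x, q_max):
    N = len(x)
    scores = {}
    for q in range(2, q_max + 1):
        cq = [ramanujan_sum(q, n) for n in range(q)]   # c_q is q-periodic
        energy = 0.0
        for s in range(q):
            corr = sum(x[n] * cq[(n - s) % q] for n in range(N))
            energy += corr * corr
        scores[q] = energy / (q * N)
    return scores

# Zero-mean signal with exact period 7
pattern = [3.0, -1.0, 0.0, 2.0, -4.0, 1.0, -1.0]
x = [pattern[n % 7] for n in range(280)]
scores = period_scores(x, 10)
best = max(scores, key=scores.get)
```

Because the Ramanujan subspaces for different q are spectrally disjoint, the score concentrates at the true period; the greedy RSP loop would subtract this component and repeat on the residual.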
Ultrasound elastography using empirical mode decomposition analysis.
Sadeghi, Sajjad; Behnam, Hamid; Tavakkoli, Jahan
2014-01-01
Ultrasound elastography is a non-invasive method that images the elasticity of soft tissues. To form an image, ultrasound radio frequency (RF) signals are acquired before and after a small compression, and the time delays between them are estimated. The first derivative of the displacement estimates is called the elastogram. In this study, we construct an elastogram using empirical mode decomposition (EMD), an analytic technique that decomposes a complicated signal into a collection of simple signals called intrinsic mode functions (IMFs). The idea of this paper is to use these IMFs instead of the raw RF signals. Two different datasets were selected to implement the algorithms. The first was data from a sandwich structure of normal and cooked tissue. The second consisted of around 180 frames acquired from a malignant breast tumor. Two different methods, cross-correlation and the wavelet transform, were used for displacement estimation, and two conventional parameters, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), were calculated for each image to evaluate quality. The results show that both methods improve in quality after using EMD. In the first dataset, with the cross-correlation technique, CNR and SNR improve by about 16 dB and 9 dB, respectively. In the same dataset, with the wavelet technique, the parameters show improvements of 14 dB and 10 dB, respectively. In the second dataset (breast tumor data), CNR and SNR improve by 18 dB and 7 dB with the cross-correlation method and by 17 dB and 6 dB with the wavelet technique, respectively.
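The displacement-estimation step mentioned in the abstract, taking the lag of maximum cross-correlation between pre- and post-compression RF windows, can be sketched as follows; the synthetic signals and window parameters are illustrative, not taken from the paper's datasets:

```python
# Time-delay estimation by cross-correlation: the displacement step of an
# elastography pipeline, applied here to synthetic signals rather than
# real RF data.
import math

def xcorr_delay(pre, post, max_lag):
    """Return the integer lag that maximizes the cross-correlation."""
    best_lag, best_val = 0, -float("inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(pre[n] * post[n + lag]
                  for n in range(len(pre))
                  if 0 <= n + lag < len(post))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Synthetic "RF" pulse and a copy delayed by 5 samples
pre = [math.sin(0.3 * n) * math.exp(-((n - 100) / 40.0) ** 2) for n in range(200)]
shift = 5
post = [0.0] * shift + pre[:-shift]
lag = xcorr_delay(pre, post, max_lag=20)
```

In the paper's variant, the same estimator would be fed IMFs extracted by EMD instead of the raw RF lines, and the spatial derivative of the resulting lag map would form the elastogram.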
Bayesian approach to magnetotelluric tensor decomposition
Directory of Open Access Journals (Sweden)
Michel Menvielle
2010-05-01
Magnetotelluric directional analysis and impedance tensor decomposition are basic tools for validating a local/regional composite electrical model of the underlying structure. Bayesian stochastic methods approach the problem of parameter estimation and uncertainty characterization in a fully probabilistic fashion, through the use of posterior model probabilities. We use the standard Groom–Bailey 3-D local/2-D regional composite model in our Bayesian approach. We assume that the experimental impedance estimates are contaminated with Gaussian noise and define the likelihood of a particular composite model with respect to the observed data. We use noninformative, flat priors over physically reasonable intervals for the standard Groom–Bailey decomposition parameters. We apply two numerical methods, a Markov chain Monte Carlo procedure based on the Gibbs sampler and a single-component adaptive Metropolis algorithm. From the posterior samples, we characterize the estimates and uncertainties of the individual decomposition parameters by using the respective marginal posterior probabilities. We conclude that the stochastic scheme performs reliably for a variety of models, including the multi-site and multi-frequency case with up to
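The Metropolis step used above can be illustrated with a generic random-walk sampler on a toy one-dimensional posterior. This is a sketch only: the paper's likelihood is over Groom–Bailey decomposition parameters with flat priors, and its single-component variant adapts the proposal scale, neither of which is reproduced here.

```python
import math
import random

def metropolis(logpost, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step), accept with
    probability min(1, posterior ratio), expressed in log space."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:  # accept/reject
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy target: Gaussian posterior with mean 2 and unit variance.
samples = metropolis(lambda x: -0.5 * (x - 2.0) ** 2, 0.0, 20000, step=1.0)
burned = samples[5000:]                 # discard burn-in
mean = sum(burned) / len(burned)        # posterior-mean estimate
```

Marginal uncertainties for each decomposition parameter would be read off the corresponding one-dimensional histogram of such samples.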
Iris identification system based on Fourier coefficients and singular value decomposition
Somnugpong, Sawet; Phimoltares, Suphakant; Maneeroj, Saranya
2011-12-01
Nowadays, both personal identification and classification are very important. In order to identify a person for security applications, physical or behavioral characteristics of individuals with high uniqueness can be analyzed. Biometrics has become the most widely used approach to personal identification, and many types of biometric information are currently in use. In this work, iris data are considered because of their uniqueness and collectability. A recurring problem in iris recognition systems is the limited space available to store data across a variety of environments. This work proposes an iris recognition system with a small feature vector, reducing the space complexity. In this approach, each iris is represented in the frequency domain and classified with a neural network model. First, the Fast Fourier Transform (FFT) is used to compute the discrete Fourier coefficients of the iris data. Once the iris data have been transformed into a frequency-domain matrix, Singular Value Decomposition (SVD) is used to reduce this complex matrix to a single vector. These vectors are then the inputs to a neural network for the classification step. The merit of our technique is that the feature vector is smaller than those of other techniques, with an acceptable level of accuracy compared with existing techniques.
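The FFT-then-SVD feature extraction can be sketched as follows. This is an assumed minimal pipeline (normalization, iris segmentation, and the neural-network classifier are omitted); the singular values of the frequency-domain matrix serve as the short, ordered feature vector.

```python
import numpy as np

def iris_feature(iris_block):
    """Compact feature vector: 2-D FFT magnitude followed by SVD.
    Returns the singular values, normalised by the largest, so
    features from different images are comparable."""
    spectrum = np.abs(np.fft.fft2(iris_block))     # frequency-domain matrix
    s = np.linalg.svd(spectrum, compute_uv=False)  # singular values only
    return s / s[0]

# Toy stand-in for a segmented, unwrapped iris region.
rng = np.random.default_rng(0)
block = rng.random((32, 64))
f = iris_feature(block)  # 32-element feature vector for a 32x64 input
```

The vector length is min(rows, cols) of the input block, which is why the stored representation is much smaller than the raw frequency-domain matrix.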
Wavelet decomposition based principal component analysis for face recognition using MATLAB
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems, in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks, and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms for its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial features and resembles factor analysis in the sense that it extracts the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet-transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing facial features in both the spatial and wavelet (frequency) domains. From the experimental results, this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
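A minimal sketch of the wavelet-then-PCA pipeline, assuming a single-level Haar decomposition (the simplest wavelet) and eigendecomposition-based PCA; the specific wavelet, decomposition depth, and classifier used in the paper are not reproduced.

```python
import numpy as np

def haar_level1(img):
    """One level of 2-D Haar decomposition; returns the approximation
    subband (averages), which quarters the data fed to PCA."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # average adjacent rows
    return (a[:, 0::2] + a[:, 1::2]) / 2.0   # average adjacent columns

def pca_fit(X, k):
    """PCA via eigendecomposition of the covariance of row vectors;
    returns the mean and the top-k principal axes."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)
    w, V = np.linalg.eigh(cov)       # eigenvalues ascending
    return mu, V[:, ::-1][:, :k]     # top-k axes, largest first

rng = np.random.default_rng(1)
faces = rng.random((20, 16, 16))     # toy stand-ins for face images
feats = np.array([haar_level1(f).ravel() for f in faces])  # 20 x 64
mu, comps = pca_fit(feats, 5)
proj = (feats - mu) @ comps          # 5-D feature per face
```

The Haar step cuts the eigenproblem from 256 to 64 dimensions here, which is the computational-load reduction the abstract refers to.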
A new analytical solution of the hyperbolic Kepler equation using the Adomian decomposition method
Ebaid, Abdelhalim; Rach, Randolph; El-Zahar, Essam
2017-09-01
In this paper, the Adomian decomposition method (ADM) is applied to solve the hyperbolic Kepler equation, which is often used to describe the eccentric anomaly of a comet of extrasolar origin on its hyperbolic trajectory past the Sun. A convenient method is therefore needed to solve this equation to accurately determine the radial distance and/or the Cartesian coordinates of the comet. It is shown that an Adomian series with a few terms is sufficient to achieve extremely accurate numerical results, even for much higher eccentricities than those in the literature. An exceptionally rapid rate of convergence of the sequence of approximate solutions is also demonstrated. These approximate solutions are odd functions of the mean anomaly, as illustrated through several plots. Moreover, the absolute remainder error, using only three components of the Adomian solution, decreases across the specified domain and approaches zero as the eccentric anomaly tends to infinity. The absolute remainder error also decreases as the number of components of the Adomian decomposition series increases. In view of these results, the present method may be an effective approach to the hyperbolic Kepler equation.
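The equation being approximated is M = e sinh(H) − H for the hyperbolic anomaly H. The ADM series itself is not reproduced here; the sketch below solves the same equation by Newton's method, the standard numerical reference against which an analytical approximation like the ADM series would be checked.

```python
from math import sinh, cosh, asinh

def hyperbolic_kepler(M, e, tol=1e-12):
    """Solve M = e*sinh(H) - H for H by Newton's method (e > 1).
    Used as a numerical reference; the paper's ADM series provides an
    analytical approximation to the same root."""
    H = asinh(M / e)                       # reasonable starting guess
    for _ in range(50):
        f = e * sinh(H) - H - M            # residual
        H -= f / (e * cosh(H) - 1.0)       # Newton update
        if abs(f) < tol:
            break
    return H

H = hyperbolic_kepler(5.0, 1.5)
```

Because the equation is odd in M, the root satisfies H(−M) = −H(M), which is the odd symmetry of the approximate solutions noted above.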
Directory of Open Access Journals (Sweden)
Elias D. Nino-Ruiz
2017-07-01
Full Text Available In this paper, a matrix-free posterior ensemble Kalman filter implementation based on a modified Cholesky decomposition is proposed. The method works as follows: the precision matrix of the background error distribution is estimated via a modified Cholesky decomposition. The resulting estimator can be expressed in terms of Cholesky factors, which can be updated by a series of rank-one matrices in order to approximate the precision matrix of the analysis distribution. Using this matrix, the posterior ensemble can be built either by sampling from the posterior distribution or by using synthetic observations. Furthermore, the computational effort of the proposed method is linear in the model dimension and the number of observed components of the model domain. Experimental tests are performed using the Lorenz-96 model. The results reveal that the accuracy of the proposed implementation, in terms of root-mean-square error, is similar, and in some cases superior, to that of a well-known ensemble Kalman filter (EnKF) implementation, the local ensemble transform Kalman filter. In addition, the results are comparable to those obtained by the EnKF with large ensemble sizes.
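For context, the textbook stochastic EnKF analysis step that the paper improves on is sketched below. This is not the paper's method: it forms the sample background covariance explicitly, which is exactly what the modified-Cholesky precision estimate avoids; the toy dimensions and noise levels are assumptions.

```python
import numpy as np

def enkf_analysis(Xb, y, H, R, rng):
    """Stochastic EnKF analysis: shift each background member toward a
    perturbed copy of the observations via the Kalman gain."""
    n, m = Xb.shape                                   # state dim, ensemble size
    Pb = np.cov(Xb)                                   # sample background covariance
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return Xb + K @ (Y - H @ Xb)                      # analysis ensemble

rng = np.random.default_rng(0)
n, m = 4, 30
truth = np.ones(n)
Xb = truth[:, None] + rng.normal(0.0, 1.0, (n, m))    # background ensemble
H = np.eye(n)                                         # observe the full state
R = 0.0025 * np.eye(n)                                # accurate observations
y = truth + rng.normal(0.0, 0.05, n)
Xa = enkf_analysis(Xb, y, H, R, rng)
```

Forming Pb costs O(n^2) storage; the paper's Cholesky-factor updates keep the cost linear in the model dimension.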
Partial domain wall partition functions
National Research Council Canada - National Science Library
Foda, O; Wheeler, M
2012-01-01
We consider six-vertex model configurations on an (n × N) lattice, n ≤ N, that satisfy a variation on domain wall boundary conditions that we define and call partial domain wall boundary conditions...
Ghehsareh, Hadi Roohani; Abbasbandy, Saeid; Soltanalizadeh, Babak
2012-05-01
In this research, the Laplace-Adomian decomposition method (LADM) is applied for the analytical and numerical treatment of the nonlinear differential equation that describes a magnetohydrodynamic (MHD) flow under slip conditions over a permeable stretching surface. The technique is applied to approximate the similarity solutions of the problem for some typical values of the model parameters. The series solutions obtained by the LADM are combined with Padé approximation to improve the accuracy and enlarge the convergence domain of the results. Through tables and figures, the efficiency of the presented method is illustrated.
Motif decomposition of the phosphotyrosine proteome reveals a new N-terminal binding motif for SHIP2
DEFF Research Database (Denmark)
Miller, Martin Lee; Hanke, S.; Hinsby, A. M.
2008-01-01
(P)-specific binding partners for peptides corresponding to the extracted motifs. We confirmed numerous previously known interaction motifs and found 15 new interactions mediated by phosphosites not previously known to bind SH2 or PTB. Remarkably, a novel hydrophobic N-terminal motif ((L/V/I)(L/V/I)pY) was identified … and validated as a binding motif for the SH2 domain-containing inositol phosphatase SHIP2. Our decomposition of the in vivo Tyr(P) proteome furthermore suggests that two-thirds of the Tyr(P) sites mediate interaction, whereas the remaining third govern processes such as enzyme activation and nucleic acid …
Directory of Open Access Journals (Sweden)
T. Lukas
2014-12-01
Full Text Available The combined finite–discrete element method (FDEM) belongs to a family of methods of computational mechanics of discontinua. The method is suitable for problems of discontinua in which particles are deformable and can fracture or fragment. The applications of FDEM span a number of disciplines including rock mechanics, where problems such as mining, mineral processing, and rock blasting can be solved with FDEM. In this work, a novel approach to the parallelization of two-dimensional (2D) FDEM, aimed at clusters and desktop computers, is developed. Parallel solvers based on dynamic domain decomposition, covering all aspects of FDEM, have been developed, implemented in the open-source Y2D software package, and tested on a PC cluster. The overall performance and scalability of the parallel code have been studied using numerical examples. The results confirm the suitability of the parallel implementation for solving large-scale problems.
Multifragmentation of a very heavy nuclear system (II): bulk properties and spinodal decomposition
Energy Technology Data Exchange (ETDEWEB)
Frankland, J.D.; Rivet, M.F.; Borderie, B. [Paris-11 Univ., Inst. de Physique Nucleaire, 91 - Orsay (France)] [and others
2000-07-01
The properties of fragments and light charged particles emitted in multifragmentation of single sources formed in central 36 A.MeV Gd+U collisions are reviewed. Most of the products are isotropically distributed in the reaction c.m. Fragment kinetic energies reveal the onset of radial collective energy. A bulk effect is experimentally evidenced from the similarity of the charge distribution with that from the lighter 32 A.MeV Xe+Sn system. Spinodal decomposition of finite nuclear matter exhibits the same property in simulated central collisions for the two systems, and appears therefore as a possible mechanism at the origin of multifragmentation in this incident energy domain. (authors)
Optical image encryption using equal modulus decomposition and multiple diffractive imaging
Fatima, Areeba; Mehra, Isha; Nishchal, Naveen K.
2016-08-01
The equal modulus decomposition (EMD) is a novel asymmetric cryptosystem based on coherent superposition which was proposed to resist the specific attack. In a subsequent work, the scheme was shown to be vulnerable to specific attack. In this paper, we counter the vulnerability through an encoding technique which uses multiple diffraction intensity pattern recordings as the input to the EMD setup in the gyrator domain. This allows suppression of the random phase mask in the EMD path. As a result, the proposed scheme achieves resistance to specific attack. The simulation results and the security analysis demonstrate that EMD based on multiple intensity pattern recording is an effective optical asymmetric cryptosystem suitable for securing data and images.
Dela Cruz, Juan J; Karpiak, Stephen E; Brennan-Ing, Mark
2014-08-22
Although HIV and aging are two well-established medical and economic domains, their intersection represents an emerging area of study. Older adults with HIV, who will comprise 50% of the US HIV-infected population by 2015, are disadvantaged, as evidenced by disproportionately poorer health outcomes. The Oaxaca Decomposition Approach (ODA) was used to analyze data from the Research on Older Adults with HIV (ROAH) study of 1,000 older adults with HIV in New York City (NYC). This paper establishes the sources of health disparities for Hispanics with HIV compared with a matched group of non-Hispanics with HIV. The ODA analyses show that Hispanics on average have higher levels of declining health and increased depression attributable to the discrimination factor.
Adlam, Rachel E; Simmons, Tal
2007-09-01
Although the relationship between decomposition and postmortem interval has been well studied, almost no studies have examined the potential effects of the physical disturbance that occurs as a result of data collection procedures. This study compares physically disturbed rabbit carcasses with a series of undisturbed carcasses to assess the presence and magnitude of any effects resulting from repetitive disturbance. Decomposition was scored using visual assessment of soft-tissue changes, and numerical data such as weight loss and carcass temperature were recorded. The effects of disturbance over time on weight loss, carcass temperature, soil pH, and decomposition stage were studied. In addition, this study aimed to validate some of the anecdotal evidence regarding decomposition. Results indicate that disturbance significantly reduces both weight loss and carcass temperature. No differences were apparent between groups for soil pH change or overall decomposition stage. An insect-mediated mechanism for the disturbance effect is suggested, along with indications as to why this effect may be cancelled out when scoring overall decomposition.
Bergshoeff, E; van der Schaar, JP; Papadopoulos, G
1998-01-01
We show that all branes admit worldvolume domain wall solutions. We find one class of solutions for which the tension of the brane changes discontinuously along the domain wall. These solutions are not supersymmetric. We argue that there is another class of domain wall solutions which is
Feature-level domain adaptation
DEFF Research Database (Denmark)
Kouw, Wouter M.; Van Der Maaten, Laurens J P; Krijthe, Jesse H.
2016-01-01
Domain adaptation is the supervised learning setting in which the training and test data are sampled from different distributions: training data is sampled from a source domain, whilst test data is sampled from a target domain. This paper proposes and studies an approach, called feature...
Metaphors, domains and embodiment
Directory of Open Access Journals (Sweden)
M.E. Botha
2005-07-01
Full Text Available Investigations of metaphorical meaning constitution and meaning (invariance have revealed the significance of semantic and semiotic domains and the contexts within which they function as a basis for the grounding of metaphorical meaning. In this article some of the current views concerning the grounding of metaphorical meaning in experience and embodiment are explored. My provisional agreement with Lakoff, Johnson and others about the “conceptual” nature of metaphor rests on an important caveat, viz. that this bodily based conceptual structure which lies at the basis of linguistic articulations of metaphor is grounded in a deeper ontic structure of the world and of human experience. It is the “metaphorical” (actually “analogical”) ontological structure of this grounding that is of interest for the line of argumentation followed in this article. Because Johnson, Lakoff and others’ proposal to ground metaphorical meaning in embodiment and neural processes is open to being construed as subjectivist and materialist, I shall attempt to articulate the contours of an alternative theory of conceptual metaphor, meaning and embodiment which counteracts these possibilities. This theory grounds metaphorical meaning and meaning change in an ontological and anthropological framework which recognises the presence and conditioning function of radially ordered structures for reality. These categorisations, in which humankind, human knowledge and reality participate, condition and constrain (ground) analogical and metaphorical meaning transfer, cross-domain mappings, and blends in cognition and in language, and provide the basis for the analogical concepts found in these disciplines.
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one —a quantum phenomenon and, dually, an observable can map a crisp random event to a genuine fuzzy random event —a fuzzy phenomenon. The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
Expansion of protein domain repeats.
Directory of Open Access Journals (Sweden)
Asa K Björklund
2006-08-01
Full Text Available Many proteins, especially in eukaryotes, contain tandem repeats of several domains from the same family. These repeats have a variety of binding properties and are involved in protein-protein interactions as well as binding to other ligands such as DNA and RNA. The rapid expansion of protein domain repeats is assumed to have occurred through internal tandem duplications, but the exact mechanisms behind these tandem duplications are not well understood. Here, we have studied the evolution, function, protein structure, gene structure, and phylogenetic distribution of domain repeats. For this purpose we have assigned Pfam-A domain families to 24 proteomes, with more sensitive domain assignments in the repeat regions. These assignments confirmed previous findings that eukaryotes, and in particular vertebrates, contain a much higher fraction of proteins with repeats compared with prokaryotes. The internal sequence similarity in each protein revealed that domain repeats are often expanded through duplications of several domains at a time, while duplication of a single domain is less common. Many of the repeats appear to have been duplicated in the middle of the repeat region. This is in strong contrast to the evolution of other proteins, which proceeds mainly through additions of single domains at either terminus. Further, we found that some domain families show distinct duplication patterns; e.g., nebulin domains have mainly been expanded in units of seven domains at a time, while duplications in other domain families involve varying numbers of domains. Finally, no common mechanism for the expansion of all repeats could be detected. We found that the duplication patterns show no dependence on the size of the domains. Repeat expansion in some families can possibly be explained by exon shuffling; however, exon shuffling could not have created all repeats.
Energy Technology Data Exchange (ETDEWEB)
Adamopoulou, Theodora [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Papadaki, Maria I., E-mail: mpapadak@cc.uoi.gr [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Kounalakis, Manolis [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Vazquez-Carreto, Victor; Pineda-Solano, Alba [Mary Kay O’Connor Process Safety Center, Artie McFerrin Department of Chemical Engineering, Texas A and M University, College Station, TX 77843 (United States); Wang, Qingsheng [Department of Fire Protection and Safety and Department of Chemical Engineering, Oklahoma State University, 494 Cordell South, Stillwater, OK 74078 (United States); Mannan, M.Sam [Mary Kay O’Connor Process Safety Center, Artie McFerrin Department of Chemical Engineering, Texas A and M University, College Station, TX 77843 (United States)
2013-06-15
Highlights: • Hydroxylamine thermal decomposition enthalpy was measured using larger quantities. • The rate at which heat is evolved depends on hydroxylamine concentration. • Decomposition heat is strongly affected by the conditions and the selected baseline. • The need for enthalpy measurements using a larger reactant mass is pinpointed. • Hydroxylamine decomposition in the presence of argon is much faster than in air. -- Abstract: Thermal decomposition of hydroxylamine, NH{sub 2}OH, was responsible for two serious accidents. However, its reactive behavior and the synergy of factors affecting its decomposition are not well understood. In this work, the global enthalpy of hydroxylamine decomposition has been measured in the temperature range of 130–150 °C employing isoperibolic calorimetry. Measurements were performed in a metal reactor, employing 30–80 ml solutions containing 1.4–20 g of pure hydroxylamine (2.8–40 g of the supplied reagent). The measurements showed that increased concentration or temperature results in higher global enthalpies of reaction per unit mass of reactant. At 150 °C, specific enthalpies as high as 8 kJ per gram of hydroxylamine were measured, although in general they were in the range of 3–5 kJ g{sup −1}. The accurate measurement of the generated heat proved to be a cumbersome task, as (a) it is difficult to identify the end of the decomposition, which, after a fast initial stage, proceeds very slowly, especially at lower temperatures, and (b) the gaseous environment affects the reaction rate.
DEFF Research Database (Denmark)
Chi, Celestine N.; Bach, Anders; Strømgaard, Kristian
2012-01-01
The postsynaptic density protein-95/disks large/zonula occludens-1 (PDZ) protein domain family is one of the most common protein-protein interaction modules in mammalian cells, with paralogs present in several hundred human proteins. PDZ domains are found in most cell types, but neuronal proteins …, for example, are particularly rich in these domains. The general function of PDZ domains is to bring proteins together within the appropriate cellular compartment, thereby facilitating scaffolding, signaling, and trafficking events. The many functions of PDZ domains under normal physiological as well …
López-Mondéjar, Rubén; Zühlke, Daniela; Becher, Dörte; Riedel, Katharina; Baldrian, Petr
2016-04-01
Evidence shows that bacteria contribute actively to the decomposition of cellulose and hemicellulose in forest soil; however, their role in this process is still unclear. Here we performed the screening and identification of bacteria showing potential cellulolytic activity from litter and organic soil of a temperate oak forest. The genomes of three cellulolytic isolates previously described as abundant in this ecosystem were sequenced, and their proteomes were characterized during growth on plant biomass and on microcrystalline cellulose. Pedobacter and Mucilaginibacter showed complex enzymatic systems containing highly diverse carbohydrate-active enzymes for the degradation of cellulose and hemicellulose, which were functionally redundant for endoglucanases, β-glucosidases, endoxylanases, β-xylosidases, mannosidases and carbohydrate-binding modules. Luteibacter did not express any glycosyl hydrolases traditionally recognized as cellulases; instead, cellulose decomposition was likely performed by an expressed GH23-family protein containing a cellulose-binding domain. Interestingly, plant lignocellulose and crystalline cellulose both triggered the production of a wide set of hydrolytic proteins including cellulases, hemicellulases and other glycosyl hydrolases. Our findings highlight the extensive and unexplored structural diversity of enzymatic systems in cellulolytic soil bacteria and indicate the roles of multiple abundant bacterial taxa in the decomposition of cellulose and other plant polysaccharides.
Domain architecture conservation in orthologs.
Forslund, Kristoffer; Pekkari, Isabella; Sonnhammer, Erik L L
2011-08-05
As orthologous proteins are expected to retain function more often than other homologs, they are often used for functional annotation transfer between species. However, ortholog identification methods do not take into account changes in domain architecture, which are likely to modify a protein's function. By domain architecture we refer to the sequential arrangement of domains along a protein sequence. To assess the level of domain architecture conservation among orthologs, we carried out a large-scale study of such events between human and 40 other species spanning the entire evolutionary range. We designed a score to measure domain architecture similarity and used it to analyze differences in domain architecture conservation between orthologs and paralogs relative to the conservation of primary sequence. We also statistically characterized the extents of different types of domain swapping events across pairs of orthologs and paralogs. The analysis shows that orthologs exhibit greater domain architecture conservation than paralogous homologs, even when differences in average sequence divergence are compensated for, for homologs that have diverged beyond a certain threshold. We interpret this as an indication of a stronger selective pressure on orthologs than on paralogs to retain the domain architecture required for the proteins to perform a specific function. In general, orthologs as well as the closest paralogous homologs have very similar domain architectures, even at large evolutionary separation. The most common domain architecture changes observed in both ortholog and paralog pairs involved insertion/deletion of new domains, while domain shuffling and segment duplication/deletion were very infrequent. On the whole, our results support the hypothesis that function conservation between orthologs demands higher domain architecture conservation than other types of homologs, relative to primary sequence conservation. This supports the notion that orthologs are
Domain architecture conservation in orthologs
2011-01-01
Background As orthologous proteins are expected to retain function more often than other homologs, they are often used for functional annotation transfer between species. However, ortholog identification methods do not take into account changes in domain architecture, which are likely to modify a protein's function. By domain architecture we refer to the sequential arrangement of domains along a protein sequence. To assess the level of domain architecture conservation among orthologs, we carried out a large-scale study of such events between human and 40 other species spanning the entire evolutionary range. We designed a score to measure domain architecture similarity and used it to analyze differences in domain architecture conservation between orthologs and paralogs relative to the conservation of primary sequence. We also statistically characterized the extents of different types of domain swapping events across pairs of orthologs and paralogs. Results The analysis shows that orthologs exhibit greater domain architecture conservation than paralogous homologs, even when differences in average sequence divergence are compensated for, for homologs that have diverged beyond a certain threshold. We interpret this as an indication of a stronger selective pressure on orthologs than paralogs to retain the domain architecture required for the proteins to perform a specific function. In general, orthologs as well as the closest paralogous homologs have very similar domain architectures, even at large evolutionary separation. The most common domain architecture changes observed in both ortholog and paralog pairs involved insertion/deletion of new domains, while domain shuffling and segment duplication/deletion were very infrequent. Conclusions On the whole, our results support the hypothesis that function conservation between orthologs demands higher domain architecture conservation than other types of homologs, relative to primary sequence conservation. This supports the
Reactive Goal Decomposition Hierarchies for On-Board Autonomy
Hartmann, L.
2002-01-01
As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here, goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher-level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order, and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher-level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest-level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react
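The goal/decomposition scheme described above can be sketched as a small data structure. This is an illustrative reduction: parallelism, parameter passing, and graded termination status are omitted, and the charging scenario is a hypothetical example, not one from the original system.

```python
class Goal:
    """Node in a reactive goal decomposition hierarchy.

    `activation` is the global condition that must hold for the goal;
    `decompositions` is an ordered list of (gating_condition, subgoals)
    pairs, and the first decomposition whose gate is true is executed.
    Leaf goals carry an `action` executed directly."""

    def __init__(self, name, activation=lambda s: True,
                 decompositions=(), action=None):
        self.name = name
        self.activation = activation
        self.decompositions = list(decompositions)
        self.action = action

    def execute(self, state):
        if not self.activation(state):
            return False                   # activation condition fails
        if self.action is not None:
            self.action(state)             # leaf goal: act directly
            return True
        for gate, subgoals in self.decompositions:
            if gate(state):                # first true gate wins
                return all(g.execute(state) for g in subgoals)
        return False                       # no decomposition applicable

# Hypothetical example: "be charged" decomposes differently in sunlight.
state = {"battery": 20, "sun": True}
charge_solar = Goal("charge_solar",
                    action=lambda s: s.__setitem__("battery", 100))
charge_rtg = Goal("charge_rtg",
                  action=lambda s: s.__setitem__("battery", 60))
be_charged = Goal("be_charged", decompositions=[
    (lambda s: s["sun"], [charge_solar]),  # preferred when sunlit
    (lambda s: True, [charge_rtg]),        # fallback decomposition
])
ok = be_charged.execute(state)
```

Because gates are re-evaluated against the live state on every execution, the generated behavior reacts to the environment, which is the "reactive" property the abstract emphasizes.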
Decomposition of organic waste products under aerobic and anaerobic conditions
Energy Technology Data Exchange (ETDEWEB)
Gale, P.M.
1988-01-01
The objectives of this research were to determine the kinetics of C and N mineralization under aerobic and anaerobic conditions. These parameters were then used to verify the simulation model, DECOMPOSITION, for the anaerobic system. Incubation experiments were conducted to compare the aerobic and anaerobic decomposition of alfalfa (Medicago sativa L.), a substrate with a low C:N ratio. Under anaerobic conditions the net mineralization of N occurred more rapidly than that under aerobic conditions. However, the rate of C mineralization as measured by CO{sub 2} evolution was much lower. For the anaerobic decomposition of alfalfa, C mineralization was best described as the sum of the CO{sub 2} and CH{sub 4} evolved plus the water soluble organic C formed. The kinetics of C mineralization, as determined by this approach, were used to successfully predict the rate and amount of N mineralization from alfalfa undergoing anaerobic decomposition. The decomposition of paper mill sludge, a high C:N ratio substrate, was also evaluated.
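Mineralization kinetics of the kind estimated above are commonly described with a first-order model, C_min(t) = C0(1 − e^(−kt)). The sketch below illustrates that model only; the pool size and rate constant are invented for illustration and are not values measured in the study.

```python
# Hedged illustration of first-order decomposition kinetics; the parameter
# values below are invented, not results from the incubation experiments.
import math

def c_mineralized(t_days, c0, k):
    """Cumulative C mineralized after t_days for a potentially
    mineralizable pool c0 (e.g., mg C per kg) and rate constant k (per day).
    For the anaerobic case this total would correspond to CO2-C + CH4-C
    plus water-soluble organic C."""
    return c0 * (1.0 - math.exp(-k * t_days))

# Example: a 1000 mg C/kg pool with an assumed k = 0.05 per day.
for t in (0, 14, 28, 56):
    print(t, round(c_mineralized(t, 1000.0, 0.05), 1))
```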
Application of variational mode decomposition to seismic random noise reduction
Liu, Wei; Cao, Siyuan; Wang, Zhiming
2017-08-01
We propose a new denoising method for simultaneous noise reduction and signal preservation in seismic data based on variational mode decomposition (VMD). VMD is a recently developed adaptive signal decomposition method and an advance in non-stationary signal analysis. It solves the long-standing mode-mixing and non-optimal reconstruction problems of empirical mode decomposition. Using VMD, a multi-component signal can be non-recursively decomposed into a series of quasi-orthogonal intrinsic mode functions (IMFs), each of which occupies a relatively local frequency range. At the same time, the signal energy concentrates in a small number of the resulting IMFs, so a denoised result can be obtained by reconstructing only the signal-dominant IMFs. Synthetic examples demonstrate the effectiveness of the proposed approach, and a comparison with complete ensemble empirical mode decomposition shows that the VMD algorithm has lower computational cost and better random noise elimination performance. Application to field seismic data further illustrates the superior performance of our method in both random noise attenuation and the recovery of seismic events.
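The reconstruction step can be sketched as follows. This is only the selection-and-summation stage: the decomposition itself is assumed to come from a VMD routine elsewhere, and the correlation-based mode selection with its threshold is an illustrative assumption, not the authors' criterion.

```python
# Sketch of IMF selection and reconstruction; the VMD decomposition itself
# is not reimplemented here, and the correlation threshold is an assumption.
import numpy as np

def denoise_from_imfs(imfs, noisy, corr_threshold=0.5):
    """Sum the IMFs whose normalized correlation with the noisy record
    exceeds the threshold (the 'signal-dominant' modes)."""
    keep = []
    for imf in imfs:
        c = np.corrcoef(imf, noisy)[0, 1]
        if abs(c) > corr_threshold:
            keep.append(imf)
    return np.sum(keep, axis=0) if keep else np.zeros_like(noisy)

# Synthetic example: a low-frequency "event" plus random noise, with the
# two components standing in for two VMD modes.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
signal_mode = np.sin(2 * np.pi * 5 * t)
noise_mode = 0.3 * rng.standard_normal(t.size)
noisy = signal_mode + noise_mode
denoised = denoise_from_imfs([signal_mode, noise_mode], noisy)
```

With these amplitudes the signal mode correlates strongly with the noisy record while the noise mode falls below the threshold, so only the signal mode survives the reconstruction.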
Experimental comparison of empirical material decomposition methods for spectral CT.
Zimmerman, Kevin C; Schmidt, Taly Gilat
2015-04-21
Material composition can be estimated from spectral information acquired using photon-counting x-ray detectors with pulse-height analysis. Non-ideal effects in photon-counting x-ray detectors such as charge sharing, k-escape, and pulse pileup distort the detected spectrum, which can cause material decomposition errors. This work compared the performance of two empirical decomposition methods: a neural network estimator and a linearized maximum likelihood estimator with correction (A-table method). The two investigated methods differ in how they model the nonlinear relationship between the spectral measurements and the material decomposition estimates. The bias and standard deviation of the material decomposition estimates were compared for the two methods, using both simulations and experiments with a photon-counting x-ray detector. The neural network and A-table methods performed similarly on the simulated data. The neural network had a lower standard deviation for nearly all thicknesses of the test materials in both the collimated (low-scatter) and uncollimated (higher-scatter) experimental data. In the experimental study of Teflon thicknesses, non-ideal detector effects introduced a potential bias of 11–28%, which was reduced to 0.1–11% using the proposed empirical methods. Overall, the results demonstrate the preliminary experimental feasibility of empirical material decomposition for spectral CT using photon-counting detectors.
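For contrast with the empirical estimators studied above, the idealized linearized model can be inverted directly: with an ideal detector, the negative log of each energy bin's transmission is approximately linear in the material thicknesses, −ln(N_b/N0_b) ≈ Σ_m μ[b,m]·t_m. The two-bin, two-material setup and the attenuation coefficients below are invented for illustration and are not from the study.

```python
# Idealized two-bin, two-material decomposition sketch; the attenuation
# coefficients are invented for illustration, and none of the non-ideal
# detector effects discussed above are modeled.
import numpy as np

mu = np.array([[0.25, 0.45],    # bin 1: (material A, material B), cm^-1
               [0.15, 0.20]])   # bin 2

def decompose(log_measurements):
    """Solve the 2x2 linear system mu @ t = log_measurements for the
    two material thicknesses t (cm)."""
    return np.linalg.solve(mu, log_measurements)

# Forward-simulate a 3 cm / 1 cm phantom, then invert.
t_true = np.array([3.0, 1.0])
logs = mu @ t_true
t_est = decompose(logs)
```

In the noiseless linear case the inversion is exact; the empirical methods compared in the study exist precisely because real detector spectra deviate from this linear model.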