Jordi, Antoni; Georgas, Nickitas; Blumberg, Alan
2017-05-01
This paper presents a new parallel domain decomposition algorithm based on integer linear programming (ILP), a mathematical optimization method. To minimize the computation time of coastal ocean circulation models, the ILP decomposition algorithm divides the global domain into local domains with a balanced workload according to the number of processors, and avoids computations over as many land grid cells as possible. In addition, it maintains the use of logically rectangular local domains and achieves exactly the same results as traditional domain decomposition algorithms (such as Cartesian decomposition). However, the ILP decomposition algorithm may not converge to an exact solution for relatively large domains. To overcome this problem, we developed two ILP decomposition formulations. The first one (complete formulation) has no additional restriction, although it is impractical for large global domains. The second one (feasible formulation) imposes local domains with the same dimensions and looks for the feasibility of such a decomposition, which allows much larger global domains. The parallel performance of both ILP formulations is compared to a base Cartesian decomposition by simulating two cases with the newly created parallel version of the Stevens Institute of Technology's Estuarine and Coastal Ocean Model (sECOM). Simulations with the ILP formulations always run faster than those with the base decomposition, and the complete formulation outperforms the feasible one when it is applicable. In addition, parallel efficiency with the ILP decomposition may be greater than one.
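The ILP formulations themselves are not given in the abstract. As an illustration of the objective they optimize, the sketch below brute-forces a one-dimensional strip decomposition of a wet/land mask, minimizing the maximum number of wet (active) grid cells assigned to any processor. The exhaustive search and all names are illustrative assumptions, not the paper's algorithm.

```python
import itertools

def best_strip_decomposition(mask, nproc):
    """Split the columns of a wet/land mask into nproc contiguous strips,
    minimizing the maximum number of wet (active) cells per strip.
    Exhaustive search over cut positions stands in for the ILP solve."""
    ncols = len(mask[0])
    col_load = [sum(row[j] for row in mask) for j in range(ncols)]
    best = None
    for cuts in itertools.combinations(range(1, ncols), nproc - 1):
        bounds = [0, *cuts, ncols]
        loads = [sum(col_load[bounds[i]:bounds[i + 1]]) for i in range(nproc)]
        if best is None or max(loads) < best[0]:
            best = (max(loads), bounds)
    return best

# A 4 x 6 domain whose right third is land (0 = land, 1 = wet).
mask = [[1, 1, 1, 1, 0, 0]] * 4
worst, bounds = best_strip_decomposition(mask, 2)
# A naive equal split (3 + 3 columns) puts 12 wet cells on one processor;
# the land-aware split balances the load at 8 wet cells each.
```

A real implementation would pose the same min-max objective over a two-dimensional grid of logically rectangular blocks and hand it to an ILP solver.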
Adaptive aggregation-based domain decomposition multigrid for twisted mass fermions
Alexandrou, Constantia; Bacchio, Simone; Finkenrath, Jacob; Frommer, Andreas; Kahl, Karsten; Rottmann, Matthias
2016-12-01
The adaptive aggregation-based domain decomposition multigrid method [A. Frommer et al., SIAM J. Sci. Comput. 36, A1581 (2014)] is extended to two degenerate flavors of twisted mass fermions. By fine-tuning the parameters we achieve a speed-up of the order of a hundred times compared to the conjugate gradient algorithm at the physical value of the pion mass. A thorough analysis of the aggregation parameters is presented, which provides novel insight into multigrid methods for lattice quantum chromodynamics independently of the fermion discretization.
Stoica, Petre; Sandgren, Niclas; Selén, Yngve; Vanhamme, Leentje; Van Huffel, Sabine
2003-11-01
In several applications of NMR spectroscopy the user is interested only in the components lying in a small frequency band of the spectrum. A frequency selective analysis deals precisely with this kind of NMR spectroscopy: parameter estimation of only those spectroscopic components that lie in a preselected frequency band of the NMR data spectrum, with as little interference as possible from the out-of-band components and in a computationally efficient way. In this paper we introduce a frequency-domain singular value decomposition (SVD)-based method for frequency selective spectroscopy that is computationally simple, statistically accurate, and which has a firm theoretical basis. To illustrate the good performance of the proposed method we present a number of numerical examples for both simulated and in vitro NMR data.
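As background to SVD-based parameter estimation of spectral components, here is a minimal subspace sketch in the HSVD/ESPRIT family: the frequencies of complex exponentials are recovered from the SVD of a Hankel data matrix via shift invariance of the signal subspace. This is a generic illustration of the SVD machinery on a made-up noise-free signal, not the frequency-selective method proposed in the paper.

```python
import numpy as np

# Noise-free signal: two complex exponentials (amplitudes 1 and 0.5).
N = 128
n = np.arange(N)
f_true = (0.12, 0.2)
y = np.exp(2j * np.pi * f_true[0] * n) + 0.5 * np.exp(2j * np.pi * f_true[1] * n)

L = 40
H = np.array([y[i:i + N - L + 1] for i in range(L)])  # L x (N-L+1) Hankel matrix
U, s, Vh = np.linalg.svd(H, full_matrices=False)
Us = U[:, :2]                                  # rank-2 signal subspace
# Shift invariance: Us[:-1] F = Us[1:]; the eigenvalues of F are
# exp(2*pi*i*f_k), so their angles give the frequencies.
F, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
z = np.linalg.eigvals(F)
f_est = sorted(np.angle(z) / (2 * np.pi))
```

With noisy data the same recipe applies after truncating the SVD to the assumed number of components; frequency selectivity would additionally require restricting attention to a band, as the paper does.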
Lubineau, Gilles
2015-03-01
We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Energy Technology Data Exchange (ETDEWEB)
Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^{6} particles on 65,536 MPI tasks.
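The paper's analytic steepest-descent directions are not reproduced here. The toy sketch below illustrates the cost-function view in 2D: particles are owned by the nearest Voronoi site, the cost penalizes imbalance in per-site particle counts, and sites are nudged along a finite-difference descent direction with a tiny line search (step 0 is always a candidate, so the cost never increases). Everything here, from the cost form to the step sizes, is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
particles = rng.uniform(0.0, 1.0, size=(400, 2))
sites = np.array([[0.2, 0.5], [0.4, 0.5], [0.8, 0.5]])  # one site per processor

def cost(sites):
    """Imbalance cost: squared deviation of per-site particle counts
    (nearest-site ownership) from the ideal even share."""
    d2 = ((particles[:, None, :] - sites[None, :, :]) ** 2).sum(axis=-1)
    counts = np.bincount(np.argmin(d2, axis=1), minlength=len(sites))
    return float(((counts - len(particles) / len(sites)) ** 2).sum())

def descent_step(sites, eps=0.02, steps=(0.0, 0.05, 0.1)):
    """One crude descent move: finite-difference gradient plus a small line
    search; step 0 is a candidate, so the cost never increases."""
    grad = np.zeros(sites.size)
    for k in range(sites.size):
        d = np.zeros(sites.size)
        d[k] = eps
        grad[k] = (cost(sites + d.reshape(sites.shape)) - cost(sites)) / eps
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return sites
    direction = -(grad / norm).reshape(sites.shape)
    return min((sites + a * direction for a in steps), key=cost)

c0 = cost(sites)
for _ in range(20):
    sites = descent_step(sites)
c1 = cost(sites)   # never larger than c0 by construction of the line search
```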
Lezina, Natalya; Agoshkov, Valery
2017-04-01
The domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains. This method is particularly relevant for the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations in the Boussinesq and hydrostatic approximations is solved. Obtaining the solution in the whole domain then requires combining the solutions in the subdomains. For this purpose an iterative algorithm is created, and numerical experiments are conducted to investigate the effectiveness of the developed algorithm using DDM. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but for problems of hydrodynamics they are not suitable. In this case the adjoint equation method [2] and inverse problem theory are used. In addition, DDM makes it possible to create algorithms for parallel calculations on multiprocessor computer systems. DDM for a model of the Baltic Sea dynamics is numerically studied. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decomposition Methods in the Mathematical Physics Problem // Numerical processes and systems, No 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in the Mathematical Physics Problem, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
Directory of Open Access Journals (Sweden)
P. K. Dhar
2017-06-01
Digital watermarking has drawn extensive attention for copyright protection of multimedia data. This paper introduces a blind audio watermarking scheme in the discrete cosine transform (DCT) domain based on singular value decomposition (SVD), exponential operation (EO), and logarithm operation (LO). In the proposed scheme, the original audio is first segmented into non-overlapping frames and the DCT is applied to each frame. The low-frequency DCT coefficients are divided into sub-bands and the power of each sub-band is calculated. EO is performed on the sub-band of DCT coefficients with the highest power in each frame. SVD is applied to the exponential coefficients of the highest-power sub-band, represented in matrix form. Watermark information is embedded into the largest singular value by using a quantization function. Simulation results indicate that the proposed watermarking scheme is highly robust against different attacks. In addition, it has a high data payload and shows low error probability rates. Moreover, it provides good performance in terms of imperceptibility, robustness, and data payload compared with some recent state-of-the-art watermarking methods.
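The paper's quantization function is not spelled out in the abstract. The following sketch shows one standard way such an embedding can work: quantization-index modulation of the largest singular value of a coefficient block. The DCT, sub-band selection, and EO/LO steps are omitted, the block is synthetic, and the quantization step `delta` is an assumed parameter.

```python
import numpy as np

def embed_bit(block, bit, delta=8.0):
    """Embed one watermark bit into the largest singular value of a
    coefficient block via quantization-index modulation (illustrative)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    q = np.floor(s[0] / delta)
    s[0] = (q + (0.25 if bit == 0 else 0.75)) * delta
    return U @ np.diag(s) @ Vt

def extract_bit(block, delta=8.0):
    """Blind extraction: read the bit back from the quantization residue."""
    s = np.linalg.svd(block, compute_uv=False)
    return 1 if (s[0] % delta) >= delta / 2 else 0

# Synthetic coefficient block with a dominant largest singular value,
# standing in for the highest-power DCT sub-band of one audio frame.
rng = np.random.default_rng(1)
Q, _, Rt = np.linalg.svd(rng.standard_normal((4, 4)))
block = Q @ np.diag([50.0, 5.0, 2.0, 1.0]) @ Rt

marked0 = embed_bit(block, 0)
marked1 = embed_bit(block, 1)
```

The distortion is bounded by `delta`, which is the usual transparency/robustness trade-off such schemes tune.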
Men, Kuo; Quan, Hong; Yang, Peipei; Cao, Ting; Li, Weihao
2010-04-01
Frequency-domain magnetic resonance spectroscopy (MRS) is obtained by the fast Fourier transform (FFT) of the time-domain signals. Usually we are only interested in the portion lying in a frequency band of the whole spectrum. A method based on the singular value decomposition (SVD) and frequency selection is presented in this article. The method quantifies the spectrum lying in the frequency band of interest and reduces the interference of the parts lying outside the band in a computationally efficient way. Comparative experiments with the standard time-domain SVD method indicate that the method introduced in this article is accurate and time-saving in practical situations.
Kim, Jae In; Na, Sungsoo; Eom, Kilho
2011-01-15
Normal mode analysis (NMA) with coarse-grained models, such as the elastic network model (ENM), has allowed the quantitative understanding of protein dynamics. As the protein size increases, finding the dynamically important low-frequency normal modes becomes computationally expensive due to the diagonalization of a massive Hessian matrix. In this study, we provide a domain decomposition-based structural condensation method that enables efficient computation of low-frequency motions. Specifically, our coarse-graining method is established by coupling model condensation (MC; Eom et al., J Comput Chem 2007, 28, 1400) with component mode synthesis (Kim et al., J Chem Theor Comput 2009, 5, 1931). A protein structure is first decomposed into substructural units, and then each substructural unit is coarse-grained by MC. Once NMA is applied to the coarse-grained substructural units, the normal modes and natural frequencies of each unit are assembled using geometric constraints to provide the normal modes and natural frequencies of the whole protein structure. It is shown that our coarse-graining method enhances the computational efficiency of analyzing large protein complexes: it reproduces the B-factors of 100 large proteins, quantitatively comparable with those obtained from the original NMA, at much lower computational cost. Moreover, the collective behaviors and/or correlated motions of model proteins are well delineated by the suggested coarse-grained models, quantitatively comparable with those computed from the original NMA. This implies that our coarse-grained method enables computationally efficient studies of the conformational dynamics of large protein complexes.
Stability estimates for hybrid coupled domain decomposition methods
Steinbach, Olaf
2003-01-01
Domain decomposition methods are a well established tool for an efficient numerical solution of partial differential equations, in particular for the coupling of different model equations and of different discretization methods. Based on the approximate solution of local boundary value problems either by finite or boundary element methods, the global problem is reduced to an operator equation on the skeleton of the domain decomposition. Different variational formulations then lead to hybrid domain decomposition methods.
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2013-01-01
We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then, the singular values are embedded in a detail subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the MSFs is a difficult problem; hence, multiobjective ant colony optimization is used to determine these values. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from the problem of a high probability of false positive detection of the watermarks.
Domain decomposition multigrid for unstructured grids
Energy Technology Data Exchange (ETDEWEB)
Shapira, Yair
1997-01-01
A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.
Lakshminarasimhulu, Pasupulati; Madura, Jeffry D.
2002-04-01
A domain decomposition algorithm for molecular dynamics simulation of atomic and molecular systems with arbitrary shape and non-periodic boundary conditions is described. The molecular dynamics program uses the cell multipole method for efficient calculation of long-range electrostatic interactions and a multiple time step method to facilitate larger time steps. The system is enclosed in a cube, and the cube is divided into a hierarchy of cells. The deepest-level cells are assigned to processors such that each processor has contiguous cells, and static load balancing is achieved by redistributing the cells so that each processor has approximately the same number of atoms. The resulting domains have irregular shapes and may have more than 26 neighbors. Atoms constituting bond angles and torsion angles may straddle more than two processors. An efficient strategy is devised for the initial assignment and subsequent reassignment of such multiple-atom potentials to processors. At each step, computation is overlapped with communication, greatly reducing the effect of communication overhead on parallel performance. The algorithm is tested on a spherical cluster of water molecules, a hexasaccharide, and an enzyme, the latter two solvated by a spherical cluster of water molecules. In each case a spherical boundary containing oxygen atoms with only repulsive interactions is used to prevent evaporation of water molecules. The algorithm shows excellent parallel efficiency even for a small number of cells/atoms per processor.
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent, while this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of velocity estimation for models with large perturbations, while guaranteeing the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM, and the FTIM. The estimated velocity distributions, the relative errors, and the elapsed times all demonstrate the validity of the proposed DDM.
Directory of Open Access Journals (Sweden)
Guochao Lao
2018-02-01
The maneuvering target echo of high-resolution radar can be expressed as a multicomponent polynomial phase signal (mc-PPS). However, with improvements in radar resolution and increases in the synthetic period, classical time-frequency analysis methods cannot satisfy the requirements of maneuvering target radar echo processing. In this paper, a novel frequency domain extraction-based adaptive joint time frequency (FDE-AJTF) decomposition method is proposed with three improvements. First, the maximum frequency spectrum of the phase compensation signal is taken as the fitness function, while the fitness comparison, component extraction, and residual updating are operated in the frequency domain; second, a time window is applied to the basis function to fit the uncertain time support of each signal component; and third, constant false alarm rate (CFAR) detection is applied in the component extraction to reduce ineffective components. Through these means, the stability and speed of phase parameter estimation are increased, with one search dimension eliminated from the estimation, and the signal component extraction becomes more accurate and effective, with less influence from estimation errors, clutter, and noise. Finally, these advantages of the FDE-AJTF decomposition method are verified through a comparison with classical methods in simulation and experimental tests.
Qin, Zengguang; Chen, Hui; Chang, Jun
2017-08-14
We propose a novel denoising method based on empirical mode decomposition (EMD) to improve the signal-to-noise ratio (SNR) of vibration sensing in phase-sensitive optical time domain reflectometry (φ-OTDR) systems. Raw Rayleigh backscattering traces are decomposed into a series of intrinsic mode functions (IMFs) and a residual component using an EMD algorithm. High-frequency noise is eliminated by removing several IMFs, selected by the Pearson correlation coefficient (PCC) at a position without vibration. When the pulse width is 50 ns, the SNR of the location information for vibration events of 100 Hz and 1.2 kHz is increased to as high as 42.52 dB and 39.58 dB, respectively, with a 2 km sensing fiber, which demonstrates the excellent performance of this new method.
Parallel pseudospectral domain decomposition techniques
Gottlieb, David; Hirsch, Richard S.
1989-01-01
The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.
Convergence Analysis of a Domain Decomposition Paradigm
Energy Technology Data Exchange (ETDEWEB)
Bank, R E; Vassilevski, P S
2006-06-12
We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure.
Domain decomposition based iterative methods for nonlinear elliptic finite element problems
Energy Technology Data Exchange (ETDEWEB)
Cai, X.C. [Univ. of Colorado, Boulder, CO (United States)
1994-12-31
The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
Overlapping domain decomposition methods for elliptic quasi ...
Indian Academy of Sciences (India)
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune
2007-01-01
When performing operational modal analysis the dynamic loading is unknown; however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in the frequency domain are estimated by analysis of simulated responses of a 4 DOF system, for which the exact modal parameters are known. This estimation approach entails modal identification of the natural eigenfrequencies, mode shapes and damping ratios by the frequency domain decomposition technique. Scaled mode shapes are determined by use of the mass change method. The problem of inverting the often singular or nearly singular transfer function matrix is solved by the singular value decomposition technique using a limited number of singular values. The dependence of the eigenfrequencies on the accuracy of the scaling factors is investigated.
Multiple Shooting and Time Domain Decomposition Methods
Geiger, Michael; Körkel, Stefan; Rannacher, Rolf
2015-01-01
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
Domain decomposition methods for mortar finite elements
Energy Technology Data Exchange (ETDEWEB)
Widlund, O.
1996-12-31
In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas
2012-05-22
Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems such as total variation minimization became increasingly important for the solution of these tasks. While being favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization.
Directory of Open Access Journals (Sweden)
Yasong Qiu
2015-02-01
In this paper a new flow field prediction method, independent of the governing equations, is developed to predict stationary flow fields in variable physical domains. Predicted flow fields come from a linear superposition of selected basis modes generated by proper orthogonal decomposition (POD). Instead of traditional projection methods, a kriging surrogate model is used to calculate the superposition coefficients by building approximate functional relationships between the profile geometry parameters of the physical domain and these coefficients. In this way, the problems that trouble the traditional POD-projection method due to viscosity and compressibility are avoided in the whole process. Moreover, there are no constraints on the inner product form, so two simple forms are applied to improve computational efficiency and cope with the variable physical domain problem. An iterative algorithm is developed to determine how many leading basis modes should be used in the prediction. Testing results demonstrate the feasibility of this new method for subsonic flow fields, but also show that it is not suitable for transonic flow fields because of poorly predicted shock waves.
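The POD step described above can be sketched with a plain SVD of a snapshot matrix. The least-squares projection below stands in for the kriging surrogate the paper actually uses for the coefficients, and the two-mode snapshot family is a made-up example; geometry parameters are omitted entirely.

```python
import numpy as np

# Snapshot matrix: columns are flow fields drawn from a two-mode family.
n_pts, n_snap = 200, 20
x = np.linspace(0.0, 1.0, n_pts)
rng = np.random.default_rng(0)
snapshots = np.column_stack([
    rng.uniform(1.0, 2.0) * np.sin(np.pi * x)
    + rng.uniform(-1.0, 1.0) * np.sin(2.0 * np.pi * x)
    for _ in range(n_snap)
])

# POD basis modes are the left singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
k = 2                              # retained basis modes
basis = U[:, :k]

# Superposition coefficients for a new field; the paper predicts these with
# a kriging surrogate over geometry parameters, plain projection is used here.
new_field = 1.5 * np.sin(np.pi * x) + 0.3 * np.sin(2.0 * np.pi * x)
coeffs = basis.T @ new_field
reconstruction = basis @ coeffs
err = np.linalg.norm(reconstruction - new_field) / np.linalg.norm(new_field)
```

Because the new field lies in the span of the snapshot family, two modes reconstruct it essentially exactly; the decay of the singular values `s` is what an iterative mode-selection rule would monitor.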
Damping Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Ventura, C. E.; Andersen, P.
2001-01-01
In this paper it is explained how the damping can be estimated using the Frequency Domain Decomposition technique for output-only modal identification, i.e. in the case where the modal parameters are to be estimated without knowing the forces exciting the system. Also it is explained how the natural frequencies can be accurately estimated without being limited by the frequency resolution of the discrete Fourier transform. It is explained how the spectral density matrix is decomposed into a set of single-degree-of-freedom systems, and how the individual SDOF auto spectral density functions are transformed back to the time domain to identify damping and frequency. The technique is illustrated on a simple simulation case with 2 closely spaced modes. On this example it is illustrated how the identification is influenced by very close spacing, by non-orthogonal modes, and by correlated input.
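The decomposition step described above can be made concrete: at each frequency line, the output spectral density matrix is decomposed by SVD; peaks of the first singular value indicate natural frequencies, and the corresponding singular vector approximates the mode shape. The two-mode synthetic spectral matrix below is an idealized stand-in (no noise, assumed mode shapes), not data from the paper.

```python
import numpy as np

f1, f2, zeta = 2.0, 2.3, 0.02                  # two closely spaced modes
phi1 = np.array([1.0, 1.0]) / np.sqrt(2.0)     # assumed mode shapes
phi2 = np.array([1.0, -1.0]) / np.sqrt(2.0)

def sdof_mag(f, fn, z):
    """Magnitude-squared SDOF frequency response (idealized)."""
    r = f / fn
    return 1.0 / ((1.0 - r * r) ** 2 + (2.0 * z * r) ** 2)

def spectral_matrix(f):
    """Synthetic 2x2 output spectral density matrix at frequency f."""
    return (sdof_mag(f, f1, zeta) * np.outer(phi1, phi1)
            + sdof_mag(f, f2, zeta) * np.outer(phi2, phi2))

freqs = np.linspace(0.1, 10.0, 500)
s1 = np.array([np.linalg.svd(spectral_matrix(f), compute_uv=False)[0]
               for f in freqs])                # first singular value per line
f_peak = freqs[np.argmax(s1)]                  # peak-picking: natural frequency
# The first singular vector near a peak approximates that mode's shape.
```

The damping identification the paper describes would then fit the SDOF bell around each peak and transform it back to the time domain; that step is omitted here.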
Domain decomposition methods in FVM approach to gravity field modelling.
Macák, Marek
2017-04-01
The finite volume method (FVM) can be straightforwardly implemented for global or local gravity field modelling. This discretization method solves the geodetic boundary value problems in the space domain. In order to obtain precise numerical solutions, it usually requires a very refined discretization, leading to large-scale parallel computations. To optimize such computations, we present a special class of numerical techniques that are based on a physical decomposition of the global solution domain. Domain decomposition (DD) methods like the Multiplicative Schwarz Method and the Additive Schwarz Method are very efficient methods for solving partial differential equations. We briefly present their mathematical formulations and test their efficiency. The presented numerical experiments deal with gravity field modelling. Since there is no need to solve special interface problems between neighbouring subdomains, in our applications we use the overlapping DD methods.
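As a minimal concrete example of the overlapping Schwarz idea (on a linear 1D model problem, not the geodetic boundary value problem of the abstract), the sketch below runs multiplicative (alternating) Schwarz sweeps for -u'' = 1 on [0, 1] with two overlapping subdomains and converges to the discrete solution:

```python
import numpy as np

n = 101                        # grid points on [0, 1], including boundaries
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)                 # -u'' = 1, u(0) = u(1) = 0  =>  u = x(1-x)/2

def solve_subdomain(u, lo, hi):
    """Exact solve of the discrete problem on the interior of [lo, hi],
    taking the current values u[lo], u[hi] as Dirichlet data.
    Dense solve for brevity; a real code would use a banded solver."""
    m = hi - lo - 1
    A = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    b = h * h * f[lo + 1:hi].copy()
    b[0] += u[lo]
    b[-1] += u[hi]
    u[lo + 1:hi] = np.linalg.solve(A, b)

u = np.zeros(n)
for _ in range(30):            # multiplicative Schwarz sweeps
    solve_subdomain(u, 0, 60)      # subdomain 1: nodes 0..60
    solve_subdomain(u, 40, 100)    # subdomain 2: nodes 40..100 (overlap 40..60)

exact = x * (1.0 - x) / 2.0    # FD scheme is exact for this quadratic solution
err = float(np.max(np.abs(u - exact)))
```

Larger overlap speeds convergence: in 1D the error at the interfaces contracts by a fixed factor per sweep determined by the overlap geometry, which is why no special interface problem needs to be solved.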
Zheng, Xiang
2015-03-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
Domain decomposition methods for the neutron diffusion problem
International Nuclear Information System (INIS)
Guerin, P.; Baudron, A. M.; Lautard, J. J.
2010-01-01
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, simplified transport (SPN) or diffusion approximations are often used. The MINOS solver developed at CEA Saclay uses a mixed dual finite element method for the resolution of these problems, and has shown its efficiency. In order to take into account the heterogeneities of the geometry, a very fine mesh is generally required, which leads to expensive calculations for industrial applications. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose here two domain decomposition methods based on the MINOS solver. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. Numerical results on parallel computers are presented for the diffusion model on realistic 2D and 3D cores.
Domain decomposition algorithms and computational fluid dynamics
International Nuclear Information System (INIS)
Chan, T.F.
1988-01-01
In the past several years, domain decomposition has been a very popular topic, partly because of its potential for parallelization. Although numerous theories and algorithms have been developed for model elliptic problems, they are only recently starting to be tested on realistic applications. This paper investigates the application of some of these methods to two model problems in computational fluid dynamics: two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The authors' approach is the construction and analysis of efficient preconditioners for the interface operator to be used in the iterative solution of the interface problem. For the convection-diffusion problems, they discuss the effect of the convection term and its discretization on the performance of some of the preconditioners. For the driven cavity problem, they discuss the effectiveness of a class of boundary probe preconditioners.
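The "interface operator" such preconditioners target is the Schur complement obtained after eliminating the subdomain interiors. A minimal dense sketch (a hypothetical 1D Poisson matrix with a single interface unknown, not the authors' CFD setting):

```python
import numpy as np

n = 11                                    # interior unknowns of 1D Poisson
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)

# split: the interface unknown is the middle point, interiors I are the rest
B = [n // 2]                              # interface index
I = [i for i in range(n) if i not in B]
A_II, A_IB = A[np.ix_(I, I)], A[np.ix_(I, B)]
A_BI, A_BB = A[np.ix_(B, I)], A[np.ix_(B, B)]

# Schur complement system on the interface: S u_B = g
S = A_BB - A_BI @ np.linalg.solve(A_II, A_IB)
g = f[B] - A_BI @ np.linalg.solve(A_II, f[I])
u = np.empty(n)
u[B] = np.linalg.solve(S, g)              # interface solve (the part to precondition)
u[I] = np.linalg.solve(A_II, f[I] - A_IB @ u[B])  # independent interior back-solves

assert np.allclose(A @ u, f)              # same solution as the global solve
```

In practice `S` is never formed explicitly; it is applied matrix-free inside a Krylov iteration, which is why a good preconditioner for it matters.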
Domain decomposition methods for solving an image problem
Energy Technology Data Exchange (ETDEWEB)
Tsui, W.K.; Tong, C.S. [Hong Kong Baptist College (Hong Kong)
1994-12-31
The domain decomposition method is a technique to break up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been derived from elliptic partial differential equations and are therefore particularly useful for solving such equations. In this paper, the authors apply the so-called covering preconditioner, which is based on the information of the operator under investigation and is consequently suited to a wider range of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem. The image restoration problem is to extract an original image which has been degraded by a known convolution process and additive Gaussian noise.
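A generic preconditioned conjugate gradient loop of the kind referred to above can be sketched as follows. A plain Jacobi (diagonal) preconditioner stands in for the covering preconditioner, which is not reproduced here, and the test matrix is arbitrary:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    # preconditioned conjugate gradients for SPD A; M_inv applies the preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(1)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.diag(1.0 + rng.random(50))   # SPD test matrix
b = rng.standard_normal(50)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)                     # Jacobi preconditioner
```

Swapping `M_inv` for a better operator-aware preconditioner is exactly the degree of freedom the covering preconditioner exploits.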
Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems
Directory of Open Access Journals (Sweden)
Pierre Jolivet
2014-01-01
Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation based preconditioners. It is also presented how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
Composite structured mesh generation with automatic domain decomposition in complex geometries
This paper presents a novel automatic domain decomposition method to generate quality composite structured meshes in complex domains with arbitrary shapes, in which quality structured mesh generation still remains a challenge. The proposed decomposition algorithm is based on the analysis of an initi...
Directory of Open Access Journals (Sweden)
Yong-Woon Kim
2017-01-01
Pyrotechnic devices are used to separate substructures from main structures. Pyroshock can cause failure in electronic components that are sensitive to high-frequency shock. Most of the existing methods to analyze pyroshock have limitations for high-frequency simulations and are only applicable to point-explosive-induced pyroshock. To overcome these limitations, we developed a laser-shock-based pyroshock reconstruction algorithm covering the high-frequency range that can predict linear-explosive-induced pyroshock as well as point-explosive-induced pyroshock. The developed algorithm reconstructs pyroshock from a laser shock test in both the temporal and spectral domains using an iterative signal decomposition and synthesis method. In the signal decomposition and synthesis process, unremoved signals occurred in the stopbands and were compensated by iteration to improve the results. At the end of this paper, various types of pyroshock are processed through the proposed method; pyroshock wave propagation images and shock response spectrum images are presented as a result. To verify the algorithm, we compared the obtained result with a real pyroshock. The time-domain signal was reconstructed with an averaged peak-to-peak acceleration difference of 20.21%, and the shock response spectrum was reconstructed with an average mean acceleration difference of 25.86%.
Higher order statistical frequency domain decomposition for operational modal analysis
Nita, G. M.; Mahgoub, M. A.; Sharyatpanahi, S. G.; Cretu, N. C.; El-Fouly, T. M.
2017-02-01
Experimental methods based on modal analysis under ambient vibrational excitation are often employed to detect structural damage in mechanical systems. Many such frequency domain methods, such as the Basic Frequency Domain (BFD) method, Frequency Domain Decomposition (FDD), or Enhanced Frequency Domain Decomposition (EFDD), use as a first step a Fast Fourier Transform (FFT) estimate of the power spectral density (PSD) associated with the response of the system. In this study it is shown that higher order statistical estimators such as Spectral Kurtosis (SK) and Sample to Model Ratio (SMR) may be successfully employed not only to more reliably discriminate the response of the system against the ambient noise fluctuations, but also to better identify and separate contributions from closely spaced individual modes. It is shown that an SMR-based Maximum Likelihood curve fitting algorithm may improve the accuracy of the spectral shape and location of the individual modes and, when combined with the SK analysis, provides efficient means to categorize such individual spectral components according to their temporal dynamics as coherent or incoherent system responses to unknown ambient excitations.
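The FDD first step mentioned above (estimate the spectral density matrix, then inspect its singular values frequency by frequency) can be sketched with synthetic two-channel data; the signal, sampling rate, and segment length below are all illustrative choices:

```python
import numpy as np

def fdd_first_singular_values(X, fs, nseg=256):
    # Welch-style averaged cross-spectral matrix G(f), then an SVD per frequency.
    # X: (channels, samples). Returns frequencies and first singular value s1(f).
    ch, n = X.shape
    nblocks = n // nseg
    win = np.hanning(nseg)
    G = np.zeros((nseg // 2 + 1, ch, ch), dtype=complex)
    for b in range(nblocks):
        seg = X[:, b * nseg:(b + 1) * nseg] * win
        F = np.fft.rfft(seg, axis=1)                 # (ch, nfreq)
        G += np.einsum('if,jf->fij', F, F.conj())    # accumulate X(f) X(f)^H
    G /= nblocks
    s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
    return np.fft.rfftfreq(nseg, 1.0 / fs), s1

fs = 256.0
t = np.arange(8192) / fs
rng = np.random.default_rng(2)
mode = np.sin(2 * np.pi * 8.0 * t)                   # one synthetic "mode" at 8 Hz
X = np.vstack([1.0 * mode, 0.5 * mode]) + 0.1 * rng.standard_normal((2, t.size))
freqs, s1 = fdd_first_singular_values(X, fs)
peak = freqs[np.argmax(s1)]                          # peak-picking on s1(f)
```

The singular vector at the peak approximates the mode shape; the SK/SMR estimators of the paper would replace the plain FFT-based PSD estimate in this pipeline.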
Iterative image-domain decomposition for dual-energy CT.
Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei
2014-04-01
Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but
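Direct image-domain decomposition, the baseline the authors improve on, is a per-pixel inversion of a material mixing matrix, and its noise amplification scales roughly with that matrix's condition number. A sketch with hypothetical attenuation coefficients (illustrative numbers, not clinically meaningful values):

```python
import numpy as np

# hypothetical mixing matrix: rows = (low, high) kVp, cols = (bone, soft tissue)
A = np.array([[0.9, 0.5],
              [0.6, 0.45]])

rng = np.random.default_rng(3)
bone = rng.random((32, 32))
soft = rng.random((32, 32))
truth = np.stack([bone, soft])                       # (2, H, W) material images

meas = np.einsum('mk,khw->mhw', A, truth)            # dual-energy measurements
meas_noisy = meas + 0.01 * rng.standard_normal(meas.shape)

Ainv = np.linalg.inv(A)
decomp = np.einsum('km,mhw->khw', Ainv, meas_noisy)  # direct per-pixel inversion

# measurement noise is amplified by roughly cond(A) in the decomposed images
amplification = np.linalg.cond(A)
```

The paper's iterative method replaces this inversion with a regularized least-squares estimate weighted by the variance-covariance matrix, which is what suppresses the amplified noise.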
Simplified approaches to some nonoverlapping domain decomposition methods
Energy Technology Data Exchange (ETDEWEB)
Xu, Jinchao
1996-12-31
An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method; other simple technical tools include "local-global" and "global-local" techniques, where the former constructs a subspace preconditioner based on a preconditioner on the whole space, whereas the latter constructs a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method", and the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods are presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
A New Domain Decomposition Approach for the Gust Response Problem
Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.
2002-01-01
A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. It is seen that the domain decomposition approach succeeds in providing robust and grid-independent solutions.
Implementation of domain decomposition and data decomposition algorithms in RMC code
International Nuclear Information System (INIS)
Liang, J.G.; Cai, Y.; Wang, K.; She, D.
2013-01-01
The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demand of large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominates the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a 'divide and rule' strategy: problems are divided into different sub-domains to be dealt with separately, and rules are established to make sure the overall results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced
22nd International Conference on Domain Decomposition Methods
Gander, Martin; Halpern, Laurence; Krause, Rolf; Pavarino, Luca
2016-01-01
These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures, and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.
Domain decomposition methods for hyperbolic problems
Indian Academy of Sciences (India)
Here K denotes a generic constant. Combining (2.6) and (2.7) we obtain the result. □ Let O_i be the domain shown in figure 1, and let w_i be a continuously differentiable function defined on the closure of O_i. Let w be the function defined on the whole domain × (0, τ) such that its restriction to O_i is w_i. Then w will, in general, be discontinuous across the lines l_i ...
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method named the eigenvalue decomposition-based modified Newton algorithm is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and the conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
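The modification described can be sketched directly. This is a minimal version; the `eps` guard against near-zero eigenvalues is an added safeguard not mentioned in the abstract:

```python
import numpy as np

def modified_newton_direction(H, g, eps=1e-8):
    # Eigenvalue decomposition of the Hessian; negative eigenvalues are
    # replaced by their absolute values so the rebuilt Hessian is positive
    # definite and the resulting direction is guaranteed to be a descent one.
    w, V = np.linalg.eigh(H)
    w = np.maximum(np.abs(w), eps)          # flip negatives, guard near-zero
    return -V @ ((V.T @ g) / w)             # -H_mod^{-1} g via the eigenbasis

# indefinite Hessian: the plain Newton direction here is NOT a descent direction
H = np.array([[2.0, 0.0], [0.0, -1.0]])
g = np.array([1.0, 1.0])
d = modified_newton_direction(H, g)
assert g @ d < 0                            # descent: directional derivative < 0
```

For this `H`, plain Newton gives `-H^{-1} g = (-0.5, 1)` with `g·d = 0.5 > 0`, i.e. an ascent direction, which the eigenvalue flip repairs.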
Directory of Open Access Journals (Sweden)
MOHAMMED FAIZ ABOALMAALY
2014-10-01
With the continuous revolution of multicore architectures, several parallel programming platforms have been introduced to pave the way for fast and efficient development of parallel algorithms. Broadly, parallel computing can take two forms: Data-Level Parallelism (DLP) or Task-Level Parallelism (TLP). The former distributes data among the available processing elements, while the latter executes independent tasks concurrently. Most parallel programming platforms have built-in techniques to distribute the data among processors; these techniques are technically known as automatic distribution (scheduling). However, due to their wide range of purposes, the variation of data types, the amount of distributed data, the possibility of extra computational overhead, and other hardware-dependent factors, manual distribution can achieve better performance than automatic distribution. In this paper, this assumption is investigated by conducting a comparison between automatic and our newly proposed manual distribution of data among threads. Empirical results for matrix addition and matrix multiplication show a considerable performance gain when manual distribution is applied instead of automatic distribution.
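Manual block distribution of data among threads can be sketched as below. This only illustrates the chunking pattern; in CPython the global interpreter lock prevents an actual speedup for pure-Python arithmetic, and the paper's experiments presumably used a different platform:

```python
import threading

def manual_parallel_sum(data, nthreads=4):
    # manual (block) distribution: each thread is assigned one contiguous chunk,
    # instead of letting a runtime scheduler pick the split automatically
    chunk = (len(data) + nthreads - 1) // nthreads
    partial = [0] * nthreads                    # one slot per thread, no sharing

    def work(tid):
        lo, hi = tid * chunk, min((tid + 1) * chunk, len(data))
        partial[tid] = sum(data[lo:hi])

    threads = [threading.Thread(target=work, args=(t,)) for t in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partial)                         # reduce the per-thread results

data = list(range(1_000))
assert manual_parallel_sum(data) == sum(data)   # same result as a serial sum
```

The design point is that chunk boundaries (and hence cache and load behavior) are under the programmer's control, which is what the paper's manual distribution exploits.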
Domain Decomposition Solvers for Frequency-Domain Finite Element Equations
Copeland, Dylan
2010-10-05
The paper is devoted to fast iterative solvers for frequency-domain finite element equations approximating linear and nonlinear parabolic initial boundary value problems with time-harmonic excitations. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple linear elliptic system for the amplitudes belonging to the sine- and to the cosine-excitation or a large nonlinear elliptic system for the Fourier coefficients in the linear and nonlinear case, respectively. The fast solution of the corresponding linear and nonlinear system of finite element equations is crucial for the competitiveness of this method. © 2011 Springer-Verlag Berlin Heidelberg.
Modal Identification from Ambient Responses Using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, Lingmi; Andersen, Palle
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical approach where the modal parameters are estimated by simple peak picking. However, by introducing a decomposition of the spectral density function matrix, the response can be separated into a set of single degree of freedom systems, each corresponding to an individual mode. By using this decomposition...
Output-Only Modal Analysis by Frequency Domain Decomposition
Brincker, Rune; Zhang, Lingmi; Andersen, Palle
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification of output-only systems, i.e. for the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical approach where the modal parameters are estimated by simple peak picking. However, by introducing a decomposition of the spectral density function matrix, the response spectra can be separated into ...
Modal Identification from Ambient Responses using Frequency Domain Decomposition
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical approach where the modal parameters are estimated by simple peak picking. However, by introducing a decomposition of the spectral density function matrix, the response can be separated into a set of s...
Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors
Energy Technology Data Exchange (ETDEWEB)
Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep
2014-12-01
The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Energy Technology Data Exchange (ETDEWEB)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
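The Lloyd iteration at the heart of the CVT part can be sketched on a sampled point cloud (this is equivalent to k-means on the sample; the Voronoi Particle dynamics with its tailored equation of state is not reproduced here):

```python
import numpy as np

def lloyd_cvt(points, k, iters=50, seed=0):
    # Lloyd's algorithm on a point sample: assign each point to its nearest
    # generator, then move each generator to the centroid of its Voronoi cell.
    # The CVT energy (sum of squared distances) decreases monotonically.
    rng = np.random.default_rng(seed)
    gen = points[rng.choice(len(points), k, replace=False)]
    energies = []
    for _ in range(iters):
        d2 = ((points[:, None, :] - gen[None, :, :]) ** 2).sum(-1)
        label = d2.argmin(1)                            # nearest-generator assignment
        energies.append(d2[np.arange(len(points)), label].sum())
        for j in range(k):
            mask = label == j
            if mask.any():
                gen[j] = points[mask].mean(0)           # centroid update
    return gen, energies

rng = np.random.default_rng(4)
pts = rng.random((2000, 2))                             # samples in the unit square
gen, E = lloyd_cvt(pts, k=8)
```

The monotone decrease of `E` is the discrete counterpart of the paper's statement that the Lloyd algorithm monotonically decreases the CVT energy, yielding compact, convex-leaning partitions.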
Europlexus: a domain decomposition method in explicit dynamics
International Nuclear Information System (INIS)
Faucher, V.; Hariddh, Bung; Combescure, A.
2003-01-01
Explicit time integration methods are used in structural dynamics to simulate fast transient phenomena, such as impacts or explosions. A very fine analysis is required in the vicinity of the loading areas but extending the same method, and especially the same small time-step, to the whole structure frequently yields excessive calculation times. We thus perform a dual Schur domain decomposition, to divide the global problem into several independent ones, to which is added a reduced size interface problem, to ensure connections between sub-domains. Each sub-domain is given its own time-step and its own mesh fineness. Non-matching meshes at the interfaces are handled. An industrial example demonstrates the interest of our approach. (authors)
Output-only Modal Analysis by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification of output-only systems, i.e. for the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical approach where the modal parameters are estimated by simple peak picking. However, by introducing a decomposition of the spectral density function matrix, the response spectra can be separated into a set of single degree of freedom systems, each corresponding to an individual mode. By using this decomposition technique close modes can be identified with high accuracy even in the case of strong noise contamination of the signals. Also, the technique clearly indicates harmonic components in the response signals.
A TFETI domain decomposition solver for elastoplastic problems
Czech Academy of Sciences Publication Activity Database
Čermák, M.; Kozubek, T.; Sysala, Stanislav; Valdman, J.
2014-01-01
Vol. 231, no. 1 (2014), pp. 634-653. ISSN 0096-3003. Institutional support: RVO:68145535. Keywords: elastoplasticity; Total FETI domain decomposition method; finite element method; semismooth Newton method. Subject RIV: BA - General Mathematics. Impact factor: 1.551 (2014).
Automated Frequency Domain Decomposition for Operational Modal Analysis
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, Palle; Jacobsen, Niels-Jørgen
2007-01-01
The Frequency Domain Decomposition (FDD) technique is known as one of the most user friendly and powerful techniques for operational modal analysis of structures. However, the classical implementation of the technique requires some user interaction. The present paper describes an algorithm for automated FDD, i.e. a version of FDD where no user interaction is required. Such an algorithm can be used for obtaining a default estimate of modal parameters in commercial software for operational modal analysis - or even more important - it can be used as the modal information engine in a system...
Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows
Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.
2001-01-01
A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.
Domain decomposition multigrid methods for nonlinear reaction-diffusion problems
Arrarás, A.; Gaspar, F. J.; Portero, L.; Rodrigo, C.
2015-03-01
In this work, we propose efficient discretizations for nonlinear evolutionary reaction-diffusion problems on general two-dimensional domains. The spatial domain is discretized through an unstructured coarse triangulation, which is subsequently refined via regular triangular grids. Following the method of lines approach, we first consider a finite element spatial discretization, and then use a linearly implicit splitting time integrator related to a suitable decomposition of the triangulation nodes. Such a procedure provides a linear system per internal stage. The equations corresponding to those nodes lying strictly inside the elements of the coarse triangulation can be decoupled and solved in parallel using geometric multigrid techniques. The method is unconditionally stable and computationally efficient, since it avoids the need for Schwarz-type iteration procedures. In addition, it is formulated for triangular elements, thus yielding much flexibility in the discretization of complex geometries. To illustrate its practical utility, the algorithm is shown to reproduce the pattern-forming dynamics of the Schnakenberg model.
Energy Technology Data Exchange (ETDEWEB)
Feng, Xiaobing [Univ. of Tennessee, Knoxville, TN (United States)
1996-12-31
A non-overlapping domain decomposition iterative method is proposed and analyzed for mixed finite element methods for a sequence of noncoercive elliptic systems with radiation boundary conditions. These differential systems describe the motion of a nearly elastic solid in the frequency domain. The convergence of the iterative procedure is demonstrated, and the rate of convergence is derived for the case when the domain is decomposed into subdomains in which each subdomain consists of an individual element associated with the mixed finite elements. The hybridization of mixed finite element methods plays an important role in the construction of the discrete procedure.
Botts, Jonathan; Savioja, Lauri
2015-04-01
For time-domain modeling based on the acoustic wave equation, spectral methods have recently demonstrated promise. This letter presents an extension of a spectral domain decomposition approach, previously used to solve the lossless linear wave equation, which accommodates frequency-dependent atmospheric attenuation and assignment of arbitrary dispersion relations. Frequency-dependence is straightforward to assign when time-stepping is done in the spectral domain, so combined losses from molecular relaxation, thermal conductivity, and viscosity can be approximated with little extra computation or storage. A mode update free from numerical dispersion is derived, and the model is confirmed with a numerical experiment.
Dictionary-Based Tensor Canonical Polyadic Decomposition
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
Deriving a new domain decomposition method for the Stokes equations using the Smith factorization
Dolean, Victorita; Nataf, Frédéric; Rapin, Gerd
2009-01-01
In this paper the Smith factorization is used systematically to derive a new domain decomposition method for the Stokes problem. In two dimensions the key idea is the transformation of the Stokes problem into a scalar bi-harmonic problem. We show how a proposed domain decomposition method for the bi-harmonic problem leads to a domain decomposition method for the Stokes equations which inherits the convergence behavior of the scalar problem. Thus, it is sufficient to s...
A domain decomposition preconditioner of Neumann-Neumann type for the Stokes equations
Dolean, Victorita; Nataf, Frédéric; Rapin, Gerd
2009-01-01
In this paper we recall a new domain decomposition method for the Stokes problem obtained via the Smith factorization. From the theoretical point of view, this domain decomposition method is optimal in the sense that it converges in two iterations for a decomposition into two equal domains. Previous results illustrated the fast convergence of the proposed algorithm in some cases. Our algorithm has shown more robust behavior than Neumann-Neumann or FETI type methods for particular decomposi...
Domain decomposition techniques for boundary elements application to fluid flow
Brebbia, C A; Skerget, L
2007-01-01
The sub-domain techniques in the BEM are nowadays finding their place in the toolbox of numerical modellers, especially when dealing with complex 3D problems. Their main application is in conjunction with the classical BEM approach, which is based on a single domain: part of the domain is solved using the classical single-domain BEM, and the rest using a BEM sub-domain technique. In the past this has usually been done by coupling the BEM with the FEM; however, it is much more efficient to use a combination of the BEM and a BEM sub-domain technique. The advantage arises from the simplicity of coupling the single-domain and multi-domain solutions, and from the fact that only one formulation needs to be developed, rather than two separate formulations based on different techniques. There are still possibilities for improving the BEM sub-domain techniques. However, considering the increased interest and research in this approach we believe that BEM sub-do...
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2006-01-01
The presence of harmonic components in the measured responses is unavoidable in many applications of Operational Modal Analysis. This is especially true when measuring on mechanical structures containing rotating or reciprocating parts. This paper describes a new method based on the popular ... Enhanced Frequency Domain Decomposition technique for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces. Good...
Analysis of generalized Schwarz alternating procedure for domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)]
1996-12-31
The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the overlap size between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM: (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
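The classical alternating SAM that these questions take as a baseline can be sketched for a 1D model problem. The following is an illustrative reconstruction, not code from the abstract above: the grid, subdomain split and Laplace test problem are choices made here, and Dirichlet transmission data is used rather than the generalized ∂u/∂n + Λu condition.

```python
# Illustrative classical alternating Schwarz for -u'' = 0 on [0, 1] with
# u(0) = 0, u(1) = 1 (exact solution u(x) = x), two overlapping subdomains.

def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def laplace_dirichlet(n, left, right):
    """Solve the FD Laplace equation on n interior nodes, given boundary values."""
    rhs = [0.0] * n
    rhs[0] += left
    rhs[-1] += right
    return solve_tridiag([-1.0] * n, [2.0] * n, [-1.0] * n, rhs)

N = 21                        # global nodes 0..20
u = [0.0] * N
u[-1] = 1.0                   # initial guess honouring the outer BCs
for _ in range(60):           # alternating Schwarz sweeps
    u[1:12] = laplace_dirichlet(11, u[0], u[12])   # subdomain 1: nodes 0..12
    u[9:20] = laplace_dirichlet(11, u[8], u[20])   # subdomain 2: nodes 8..20
err = max(abs(u[i] - i / (N - 1)) for i in range(N))
```

With the overlap of four cells the iteration contracts geometrically, so sixty sweeps reduce the interface error to machine precision; shrinking the overlap slows this down, which is exactly the sensitivity the abstract mentions.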
Simulation of two-phase flows by domain decomposition
International Nuclear Information System (INIS)
Dao, T.H.
2013-01-01
This thesis deals with numerical simulations of compressible fluid flows by implicit finite volume methods. Firstly, we studied and implemented an implicit version of the Roe scheme for compressible single-phase and two-phase flows. Thanks to Newton's method for solving nonlinear systems, our schemes are conservative. Unfortunately, the resolution of nonlinear systems is very expensive, so it is essential to use an efficient algorithm to solve them. For large matrices, we often use iterative methods whose convergence depends on the spectrum. We studied the spectrum of the linear system and proposed a strategy, called Scaling, to improve the condition number of the matrix. Combined with the classical ILU preconditioner, our strategy significantly reduced the number of GMRES iterations for local systems and the computation time. We also show some satisfactory results for low Mach-number flows using the implicit centered scheme. We then studied and implemented a domain decomposition method for compressible fluid flows. We proposed a new interface variable which makes the Schur complement method easy to build and allows us to treat diffusion terms. Using the GMRES iterative solver rather than Richardson iteration for the interface system also provides better performance compared to other methods. We can decompose the computational domain into any number of sub-domains. Moreover, the Scaling strategy for the interface system improved the condition number of the matrix and reduced the number of GMRES iterations. In comparison with classical distributed computing, we have shown that our method is more robust and efficient. (author)
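The Scaling idea, equilibrating a badly scaled matrix to improve its condition number before handing it to an iterative solver, can be illustrated on a deliberately tiny example. The 2x2 matrix and the symmetric diagonal scaling below are invented for the demonstration and are not the thesis' actual systems.

```python
# A tiny illustration of diagonal equilibration ("Scaling"): for an SPD 2x2
# matrix, the condition number lambda_max / lambda_min collapses once rows and
# columns are scaled by 1/sqrt(diagonal).
import math

def cond_spd_2x2(M):
    """Condition number of a symmetric 2x2 matrix from its two eigenvalues."""
    (a, c), (_, d) = M
    mean = (a + d) / 2.0
    delta = math.hypot((a - d) / 2.0, c)
    return (mean + delta) / (mean - delta)

A = [[1.0, 0.0], [0.0, 4096.0]]                    # badly scaled SPD matrix
D = [1.0 / math.sqrt(A[i][i]) for i in range(2)]   # diagonal scaling factors
As = [[D[i] * A[i][j] * D[j] for j in range(2)] for i in range(2)]
before, after = cond_spd_2x2(A), cond_spd_2x2(As)  # 4096 vs. 1
```

Since GMRES and other Krylov methods converge faster on well-conditioned systems, this kind of preprocessing directly cuts iteration counts, which is the effect the thesis reports.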
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
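The core of an FFT-based fast Poisson solver is diagonalization of the discrete Laplacian by sine transforms. A minimal single-process sketch follows, with naive O(n²) transforms in place of a real FFT and a 1D model problem in place of the paper's solver; it is illustrative only.

```python
# Solve -u'' = f on (0,1) with homogeneous Dirichlet conditions by
# diagonalizing the finite-difference Laplacian tridiag(-1, 2, -1)/h^2
# in the discrete sine basis.
import math

def dst(x):
    """Discrete sine transform: X_k = sum_j x_j sin(pi*j*k/n), j,k = 1..n-1."""
    n = len(x) + 1
    return [sum(x[j - 1] * math.sin(math.pi * j * k / n)
                for j in range(1, n)) for k in range(1, n)]

def poisson_solve(f, h):
    """Divide each sine coefficient by the Laplacian eigenvalue, transform back."""
    n = len(f) + 1
    fh = dst(f)
    lam = [(2.0 - 2.0 * math.cos(math.pi * k / n)) / h**2 for k in range(1, n)]
    uh = [fk / lk for fk, lk in zip(fh, lam)]
    return [2.0 / n * v for v in dst(uh)]   # inverse DST = (2/n) * DST

n = 64
h = 1.0 / n
xs = [i * h for i in range(1, n)]
f = [math.pi**2 * math.sin(math.pi * xi) for xi in xs]  # exact u = sin(pi x)
u = poisson_solve(f, h)
err = max(abs(ui - math.sin(math.pi * xi)) for ui, xi in zip(u, xs))
```

Replacing the quadratic-cost `dst` with an FFT-based transform gives the O(n log n) solver that the paper multitasks across processors.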
International Nuclear Information System (INIS)
Azmy, Y.Y.
1997-01-01
The effect of three communication schemes for solving Arbitrarily High Order Transport (AHOT) methods of the Nodal type on parallel performance is examined via direct measurements and performance models. The target architecture in this study is Oak Ridge National Laboratory's 128-node Paragon XP/S 5 computer, and the parallelization is based on the Parallel Virtual Machine (PVM) library. However, the conclusions reached can be easily generalized to a large class of message passing platforms and communication software. The three schemes considered here are: (1) PVM's global operations (broadcast and reduce), which utilize the Paragon's native corresponding operations based on spanning-tree routing; (2) the Bucket algorithm, wherein the angular domain decomposition of the mesh sweep is complemented with a spatial domain decomposition of the accumulation process of the scalar flux from the angular flux and of the convergence test; (3) a distributed-memory version of the Bucket algorithm that pushes the spatial domain decomposition one step farther by actually distributing the fixed source and flux iterates over the memories of the participating processes. The conclusion is that the Bucket algorithm is the most efficient of the three if all participating processes have sufficient memory to hold the entire problem arrays. Otherwise, the third scheme becomes necessary, at an additional cost to speedup and parallel efficiency that is quantifiable via the parallel performance model.
An Improved Traffic Matrix Decomposition Method with Frequency-Domain Regularization
Wang, Zhe; Hu, Kai; Yin, Baolin
2012-01-01
We propose a novel network traffic matrix decomposition method named Stable Principal Component Pursuit with Frequency-Domain Regularization (SPCP-FDR), which improves the Stable Principal Component Pursuit (SPCP) method by using a frequency-domain noise regularization function. An experiment demonstrates the feasibility of this new decomposition method.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai
1998-01-01
This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
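The 2 x 2 block elimination described above can be demonstrated on a deliberately small dense system (the matrix values are invented for illustration): ordering unknowns as interior (I) then interface (G), the Schur complement S = A_GG - A_GI A_II^{-1} A_IG couples the interface unknowns alone, and back-substitution recovers the interior ones. Plain Gaussian elimination stands in for the subdomain solves.

```python
# Schur-complement reduction of a block system [[AII, AIG], [AGI, AGG]],
# checked against a direct solve of the assembled matrix.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            t = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= t * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

AII = [[4.0, 1.0], [1.0, 3.0]]   # interior-interior block
AIG = [[1.0], [2.0]]             # interior-interface coupling
AGI = [[1.0, 2.0]]
AGG = [[5.0]]
fI, fG = [1.0, 2.0], [3.0]

w = solve(AII, [row[0] for row in AIG])   # A_II^{-1} A_IG (single column)
g = solve(AII, fI)                        # A_II^{-1} f_I
S = AGG[0][0] - sum(AGI[0][i] * w[i] for i in range(2))     # Schur complement
uG = (fG[0] - sum(AGI[0][i] * g[i] for i in range(2))) / S  # interface solve
uI = solve(AII, [fI[i] - AIG[i][0] * uG for i in range(2)]) # back-substitution

# Reference: direct solve of the assembled 3x3 system
x = solve([[4.0, 1.0, 1.0], [1.0, 3.0, 2.0], [1.0, 2.0, 5.0]], [1.0, 2.0, 3.0])
err = max(abs(uI[0] - x[0]), abs(uI[1] - x[1]), abs(uG - x[2]))
```

In the paper's setting the interior solves happen independently per subdomain, and S is only applied approximately as a preconditioner rather than formed exactly as here.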
Mechanical and assembly units of viral capsids identified via quasi-rigid domain decomposition.
Directory of Open Access Journals (Sweden)
Guido Polles
Key steps in a viral life-cycle, such as self-assembly of a protective protein container or, in some cases, subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV) for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available.
Energy Technology Data Exchange (ETDEWEB)
Guerin, P
2007-12-15
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem, solved by MINOS on each sub-domain, are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
International Nuclear Information System (INIS)
Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit
2017-01-01
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Rafiq Abuturab, Muhammad
2016-06-01
A new multiple color-image authentication system based on the HSI (Hue-Saturation-Intensity) color space and QR decomposition in gyrator domains is proposed. In this scheme, the original color images are converted from the RGB (Red-Green-Blue) color space to the HSI color space and divided into their H, S, and I components, from which the corresponding phase-encoded components are obtained. All the phase-encoded H, S, and I components are individually multiplied, and then modulated by random phase functions. The modulated H, S, and I components are convoluted into a single gray image with an asymmetric cryptosystem. The resulting image is segregated into Q and R parts by QR decomposition. Finally, they are independently gyrator transformed to get their encoded parts. Both encoded parts, Q and R, must be gathered for decryption. The angles of the gyrator transform afford sensitive keys. The protocol, based on QR decomposition of the encoded matrix and recovery of the decoded matrix by multiplying the matrices Q and R, enhances the security level. The random phase keys, individual phase keys, and asymmetric phase keys provide high robustness to the cryptosystem. Numerical simulation results demonstrate that this scheme is superior to existing techniques.
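The QR step the scheme relies on can be sketched with classical Gram-Schmidt on a small real matrix (the paper applies it to encoded image data; this toy matrix is invented): A is split into an orthogonal Q and an upper-triangular R, and multiplying Q and R recovers A, which is what the decoding stage depends on.

```python
# Classical Gram-Schmidt QR factorization and the A = Q*R reconstruction
# that the decryption protocol exploits.
import math

def qr_gram_schmidt(A):
    m, n = len(A), len(A[0])
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(m)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(m))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(m)]
        R[j][j] = math.sqrt(sum(vi * vi for vi in v))
        for i in range(m):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

A = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
Q, R = qr_gram_schmidt(A)
# Reconstruction Q*R == A (the "decode" step)
QR = [[sum(Q[i][k] * R[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
err = max(abs(QR[i][j] - A[i][j]) for i in range(3) for j in range(3))
```

In the cryptosystem the Q and R factors are transmitted separately after gyrator transforms, so an attacker holding only one part cannot form the product and recover the image.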
Adaptive dynamic load-balancing with irregular domain decomposition for particle simulations
Begau, Christoph; Sutmann, Godehard
2015-05-01
We present a flexible and fully adaptive dynamic load-balancing scheme, which is designed for particle simulations of three-dimensional systems with short-ranged interactions. The method is based on domain decomposition with non-orthogonal non-convex domains, which are constructed based on a local repartitioning of computational work between neighbouring processors. Domains are dynamically adjusted in a flexible way under the condition that the original topology is not changed, i.e. neighbour relations between domains are retained, which guarantees a fixed communication pattern for each domain during a simulation. Extensions of this scheme are discussed and illustrated with examples, which generalise the communication patterns and do not fully restrict data exchange to direct neighbours. The proposed method relies on a linked cell algorithm, which makes it compatible with existing implementations in particle codes and does not modify the underlying algorithm for calculating the forces between particles. The method has been implemented into the molecular dynamics community code IMD and performance has been measured for various molecular dynamics simulations of systems representing realistic problems from materials science. It is found that the method balances the work between processors in simulations with strongly inhomogeneous and dynamically changing particle distributions, which results in a significant increase in the efficiency of the parallel code compared both to unbalanced simulations and to conventional load-balancing strategies.
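The linked-cell algorithm the method builds on can be sketched independently of any load balancing (the 2D setting, cutoff and point count below are invented for illustration): binning particles into cells of edge length at least the interaction cutoff restricts the neighbour search to the adjacent cells, turning an O(N²) pair search into O(N) for homogeneous densities.

```python
# Minimal linked-cell neighbour search in a unit box, verified against a
# brute-force pair search.
import random

def pairs_brute(pts, rc):
    return {(i, j) for i in range(len(pts)) for j in range(i + 1, len(pts))
            if (pts[i][0] - pts[j][0]) ** 2 +
               (pts[i][1] - pts[j][1]) ** 2 <= rc * rc}

def pairs_linked_cell(pts, rc, box):
    ncell = max(1, int(box / rc))          # cells per side, edge >= rc
    cells = {}
    for idx, (x, y) in enumerate(pts):     # bin particles into cells
        key = (min(int(x / box * ncell), ncell - 1),
               min(int(y / box * ncell), ncell - 1))
        cells.setdefault(key, []).append(idx)
    found = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):              # search only the 3x3 cell block
            for dy in (-1, 0, 1):
                for i in members:
                    for j in cells.get((cx + dx, cy + dy), []):
                        if i < j and (pts[i][0] - pts[j][0]) ** 2 + \
                                     (pts[i][1] - pts[j][1]) ** 2 <= rc * rc:
                            found.add((i, j))
    return found

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
same = pairs_linked_cell(pts, 0.1, 1.0) == pairs_brute(pts, 0.1)
```

Because the cell structure is purely local, reassigning cells to processors, which is what the load balancer above does, leaves the force computation itself untouched.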
González, Alvaro J; Liao, Li
2010-10-29
Protein-protein interaction (PPI) plays essential roles in cellular functions. The cost, time and other limitations associated with the current experimental methods have motivated the development of computational methods for predicting PPIs. As protein interactions generally occur via domains instead of the whole molecules, predicting domain-domain interaction (DDI) is an important step toward PPI prediction. Computational methods developed so far have utilized information from various sources at different levels, from primary sequences, to molecular structures, to evolutionary profiles. In this paper, we propose a computational method to predict DDI using support vector machines (SVMs), based on domains represented as interaction profile hidden Markov models (ipHMM) where interacting residues in domains are explicitly modeled according to the three dimensional structural information available at the Protein Data Bank (PDB). Features about the domains are extracted first as the Fisher scores derived from the ipHMM and then selected using singular value decomposition (SVD). Domain pairs are represented by concatenating their selected feature vectors, and classified by a support vector machine trained on these feature vectors. The method is tested by leave-one-out cross validation experiments with a set of interacting protein pairs adopted from the 3DID database. The prediction accuracy has shown significant improvement as compared to InterPreTS (Interaction Prediction through Tertiary Structure), an existing method for PPI prediction that also uses the sequences and complexes of known 3D structure. We show that domain-domain interaction prediction can be significantly enhanced by exploiting information inherent in the domain profiles via feature selection based on Fisher scores, singular value decomposition and supervised learning based on support vector machines. Datasets and source code are freely available on the web at http
Polyphase decompositions and shift-invariant discrete wavelet transforms in the frequency domain
Wink, Alle Meije; Roerdink, Jos B.T.M.
Given a signal and its Fourier transform, we derive formulas for its polyphase decomposition in the frequency domain and for the reconstruction from the polyphase representation back to the Fourier representation. We present two frequency-domain implementations of the shift-invariant periodic
Multiscale analysis of damage using dual and primal domain decomposition techniques
Lloberas-Valls, O.; Everdij, F.P.X.; Rixen, D.J.; Simone, A.; Sluys, L.J.
2014-01-01
In this contribution, dual and primal domain decomposition techniques are studied for the multiscale analysis of failure in quasi-brittle materials. The multiscale strategy essentially consists in decomposing the structure into a number of nonoverlapping domains and considering a refined spatial
SPATIOTEMPORAL DOMAIN DECOMPOSITION FOR MASSIVE PARALLEL COMPUTATION OF SPACE-TIME KERNEL DENSITY
Directory of Open Access Journals (Sweden)
A. Hohl
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
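The adaptive splitting can be illustrated with a 2D quadtree in place of the paper's spatiotemporal octree (the capacity threshold and the skewed point cloud are invented for the demonstration): a box is split recursively until no leaf holds more than a fixed number of points, so the leaves track the data density, with each leaf's point count standing in for its computational intensity.

```python
# Adaptive quadtree decomposition of a unit box driven by local point density.
import random

def decompose(points, x, y, size, capacity):
    """Return leaf boxes (x, y, size, points) of an adaptive quadtree."""
    if len(points) <= capacity:
        return [(x, y, size, points)]
    half = size / 2.0
    leaves = []
    for qx in (x, x + half):
        for qy in (y, y + half):
            sub = [p for p in points
                   if qx <= p[0] < qx + half and qy <= p[1] < qy + half]
            leaves += decompose(sub, qx, qy, half, capacity)
    return leaves

random.seed(1)
pts = [(random.random() ** 2, random.random()) for _ in range(500)]  # skewed
leaves = decompose(pts, 0.0, 0.0, 1.0, 25)
balanced = all(len(leaf[3]) <= 25 for leaf in leaves)   # no leaf overloaded
total = sum(len(leaf[3]) for leaf in leaves)            # no point lost
```

Dense regions end up covered by many small leaves and sparse regions by a few large ones, which is what allows leaves to be distributed across processors with comparable workloads; the paper additionally pads each leaf with a bandwidth-sized buffer to avoid edge effects.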
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
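A toy, synchronous sketch of inter-domain particle communication (a 1D random walk over two domains, invented here; the RMC implementation is asynchronous and far more involved): each domain advances only its own particles and hands border-crossers to its neighbour through a buffer, so no particle is ever tracked by a processor that does not own its region.

```python
# Two spatial domains split at CUT; particles crossing the cut are buffered
# and delivered after each transport step. Particle count must be conserved.
import random

random.seed(42)
CUT = 0.5                                # domain 0: [0, 0.5), domain 1: [0.5, 1]
particles = [random.random() for _ in range(100)]
domains = {0: [p for p in particles if p < CUT],
           1: [p for p in particles if p >= CUT]}
start_total = sum(len(v) for v in domains.values())

for _ in range(50):                      # transport steps (reflecting walls)
    outbox = {0: [], 1: []}
    for d in (0, 1):
        moved = []
        for p in domains[d]:
            p = min(max(p + random.uniform(-0.05, 0.05), 0.0), 1.0)
            home = 0 if p < CUT else 1
            (moved if home == d else outbox[home]).append(p)
        domains[d] = moved
    for d in (0, 1):                     # deliver buffered particles
        domains[d].extend(outbox[d])

conserved = len(domains[0]) + len(domains[1]) == start_total
```

The efficiency question the abstract addresses is precisely when to flush these buffers: batching and overlapping the exchange with ongoing tracking (asynchronous communication) hides the message latency.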
Emerson, Arnold I; Andrews, Simeon; Ahmed, Ikhlak; Azis, Thasni Ka; Malek, Joel A
2015-01-01
Network biology currently focuses primarily on metabolic pathways, gene regulatory networks, and protein-protein interaction networks. While these approaches have yielded critical information, alternative methods of network analysis offer new perspectives on biological information. A little-explored area is the interactions between domains, which can be captured using domain co-occurrence networks (DCNs). A DCN can be used to study the function and interaction of proteins by representing protein domains and their co-existence in genes, and by mapping cancer mutations to the individual protein domains to identify signals. The domain co-occurrence network was constructed for the human proteome based on PFAM domains in proteins. Highly connected domains in the central cores were identified using the k-core decomposition technique. Here we show that these domains are more evolutionarily conserved than the peripheral domains. The somatic mutations for ovarian, breast and prostate cancers were obtained from the TCGA database. We mapped the somatic mutations to the individual protein domains, and the local false discovery rate was used to identify significantly mutated domains in each cancer type. Significantly mutated domains were found to be enriched in cancer disease pathways. However, we found that the inner cores of the DCN did not contain any of the significantly mutated domains. We observed that the inner-core protein domains are highly conserved and co-exist in large numbers with other protein domains. Mutations and domain co-occurrence networks provide a framework for understanding hierarchical designs in protein function from a network perspective. This study provides evidence that a majority of protein domains in the inner core of the DCN have a lower mutation frequency and that protein domains present in the peripheral regions of the k-core contribute more heavily to disease. These findings may contribute further to drug development.
Wang, Zhaohui; Li, Zhilin; Lubkin, Sharon
2014-01-01
A new numerical method based on locally modified Cartesian meshes is proposed for solving a coupled system of a fluid flow and a porous media flow. The fluid flow is modeled by the Stokes equations while the porous media flow is modeled by Darcy's law. The method is based on a Robin-Robin domain decomposition method with a Cartesian mesh with local modifications near the interface. Some computational examples are presented and discussed.
Solving high Reynolds-number viscous flows by the general BEM and domain decomposition method
Wu, Yongyan; Liao, Shijun
2005-01-01
In this paper, the domain decomposition method (DDM) and the general boundary element method (GBEM) are applied to solve the laminar viscous flow in a driven square cavity, governed by the exact Navier-Stokes equations. The convergent numerical results at high Reynolds number Re = 7500 are obtained. We find that the DDM can considerably improve the efficiency of the GBEM, and that the combination of the domain decomposition techniques and the parallel computation can further greatly improve the efficiency of the GBEM. This verifies the great potential of the GBEM for strongly non-linear problems in science and engineering.
Modal Identification from Ambient Responses using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. Owing to its user-friendliness, the technique is closely related to the classical ...
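The core of the FDD technique can be sketched in a few lines: estimate the cross-spectral density (CSD) matrix of the measured responses and take its singular value decomposition at every frequency line; peaks of the first singular value mark the natural frequencies. The two-channel signal, modal frequencies, and Welch parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0.0, 200.0, 1 / fs)
f1, f2 = 3.0, 8.0                                  # assumed modal frequencies
mode1 = np.sin(2 * np.pi * f1 * t + rng.uniform(0, 2 * np.pi))
mode2 = np.sin(2 * np.pi * f2 * t + rng.uniform(0, 2 * np.pi))
y = np.vstack([1.0 * mode1 + 0.5 * mode2,          # two "sensor" channels
               0.6 * mode1 - 0.8 * mode2])
y += 0.1 * rng.standard_normal(y.shape)            # measurement noise

# Welch-style averaged cross-spectral density matrix G(f)
nseg, nover = 1024, 512
win = np.hanning(nseg)
freqs = np.fft.rfftfreq(nseg, 1 / fs)
G = np.zeros((len(freqs), 2, 2), dtype=complex)
for s in range(0, y.shape[1] - nseg + 1, nseg - nover):
    Y = np.fft.rfft(win * y[:, s:s + nseg], axis=1)
    G += np.einsum("cf,df->fcd", Y, Y.conj())      # outer product per line

# First singular value of G at each frequency; its peaks mark the modes
s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
print(freqs[np.argmax(s1)])                        # near 3.0 (dominant mode)
```

In the full method the singular vector at a peak also gives the mode shape; this sketch only locates the dominant frequency.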
Energy Technology Data Exchange (ETDEWEB)
Maliassov, S.Y. [Texas A&M Univ., College Station, TX (United States)]
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that the constants of spectral equivalence are independent of the values of the coefficients and the mesh step size.
Domain decomposition solvers for nonlinear multiharmonic finite element equations
Copeland, D. M.
2010-01-01
In many practical applications, for instance, in computational electromagnetics, the excitation is time-harmonic. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear problems. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series and approximating the Fourier coefficients by finite elements, we arrive at a large-scale coupled nonlinear system for determining the finite element approximation to the Fourier coefficients. The construction of fast solvers for such systems is crucial for the efficiency of this multiharmonic approach. In this paper we look at nonlinear, time-harmonic potential problems as simple model problems. We construct and analyze almost optimal solvers for the Jacobi systems arising from the Newton linearization of the large-scale coupled nonlinear system that one has to solve instead of performing the expensive time-integration procedure. © 2010 de Gruyter.
International Nuclear Information System (INIS)
Haeberlein, F.
2011-01-01
Reactive transport modelling is a basic tool to model chemical reactions and flow processes in porous media. A totally reduced multi-species reactive transport model including kinetic and equilibrium reactions is presented. A structured numerical formulation is developed and different numerical approaches are proposed. Domain decomposition methods offer the possibility to split large problems into smaller subproblems that can be treated in parallel. The class of Schwarz-type domain decomposition methods that have proved to be high-performing algorithms in many fields of applications is presented with a special emphasis on the geometrical viewpoint. Numerical issues for the realisation of geometrical domain decomposition methods and transmission conditions in the context of finite volumes are discussed. We propose and validate numerically a hybrid finite volume scheme for advection-diffusion processes that is particularly well-suited for the use in a domain decomposition context. Optimised Schwarz waveform relaxation methods are studied in detail on a theoretical and numerical level for a two species coupled reactive transport system with linear and nonlinear coupling terms. Well-posedness and convergence results are developed and the influence of the coupling term on the convergence behaviour of the Schwarz algorithm is studied. Finally, we apply a Schwarz waveform relaxation method on the presented multi-species reactive transport system. (author)
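Although the thesis concerns Schwarz waveform relaxation for time-dependent reactive transport, the basic Schwarz exchange mechanism is easiest to see on a stationary toy problem. The sketch below runs the classical alternating Schwarz method for -u'' = 1 on [0, 1] with two overlapping subdomains; the grid sizes and overlap are arbitrary choices:

```python
def solve_poisson(n, a, b, ua, ub, f=1.0):
    """Solve -u'' = f on [a, b], u(a)=ua, u(b)=ub, with n interior
    finite-difference points, via the Thomas algorithm."""
    h = (b - a) / (n + 1)
    rhs = [f * h * h] * n
    rhs[0] += ua
    rhs[-1] += ub
    c = [0.0] * n                      # forward sweep for the (-1, 2, -1) system
    d = [0.0] * n
    c[0], d[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + c[i - 1]
        c[i] = -1.0 / m
        d[i] = (rhs[i] + d[i - 1]) / m
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def trace(u, a, b, x):
    """Grid-solution value at x (x falls exactly on a grid point here)."""
    h = (b - a) / (len(u) + 1)
    return u[round((x - a) / h) - 1]

# Alternating Schwarz on [0, 0.6] and [0.4, 1]: exchange interface traces.
left_bc = 0.0                          # current guess for u(0.6)
for _ in range(30):
    uL = solve_poisson(59, 0.0, 0.6, 0.0, left_bc)
    right_bc = trace(uL, 0.0, 0.6, 0.4)
    uR = solve_poisson(59, 0.4, 1.0, right_bc, 0.0)
    left_bc = trace(uR, 0.4, 1.0, 0.6)

print(abs(left_bc - 0.6 * 0.4 / 2))    # exact solution is u(x) = x(1-x)/2
```

The interface traces converge geometrically, at a rate set by the overlap; here the converged value matches the exact solution because central differences are nodally exact for this right-hand side.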
Czech Academy of Sciences Publication Activity Database
Daněk, Josef; Hlaváček, Ivan; Nedoma, Jiří
2005-01-01
Roč. 68, č. 3 (2005), s. 271-300 ISSN 0378-4754 R&D Projects: GA MPO FT-TA/087 Keywords : domain decomposition * unilateral contact * Tresca's friction model * formulation in displacements * linear finite elements Subject RIV: BA - General Mathematics Impact factor: 0.554, year: 2005
Algebraic Nonoverlapping Domain Decomposition Methods for Stabilized FEM and FV Discretizations
Barth, Timothy J.; Bailey, David (Technical Monitor)
1998-01-01
We consider preconditioning methods for convection dominated fluid flow problems based on a nonoverlapping Schur complement domain decomposition procedure for arbitrary triangulated domains. The triangulation is first partitioned into a number of subdomains and interfaces which induce a natural 2 x 2 partitioning of the p.d.e. discretization matrix. We view the Schur complement induced by this partitioning as an algebraically derived coarse space approximation. This avoids the known difficulties associated with the direct formation of an effective coarse discretization for advection dominated equations. By considering various approximations of the block factorization of the 2 x 2 system, we have developed a family of robust preconditioning techniques. A computer code based on these ideas has been developed and tested on the IBM SP2 using MPI message passing protocol. A number of 2-D CFD calculations will be presented for both scalar advection-diffusion equations and the Euler equations discretized using stabilized finite element and finite volume methods. These results show very good scalability of the preconditioner for various discretizations as the number of processors is increased while the number of degrees of freedom per processor is fixed.
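The 2 x 2 Schur complement construction the paper builds on can be written down directly in a few lines of linear algebra; the small SPD matrix below is a random stand-in for a partitioned discretization matrix, with the last two unknowns playing the role of the interface:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6.0 * np.eye(6)        # SPD stand-in, well conditioned
ni = 4                               # unknowns 0..3 "interior", 4..5 "interface"
A11, A12 = A[:ni, :ni], A[:ni, ni:]
A21, A22 = A[ni:, :ni], A[ni:, ni:]

# Schur complement acting on the interface unknowns
S = A22 - A21 @ np.linalg.solve(A11, A12)

# Eliminating the interior block reproduces the interface part of the solve
b = rng.standard_normal(6)
g = b[ni:] - A21 @ np.linalg.solve(A11, b[:ni])
x_gamma = np.linalg.solve(S, g)
x = np.linalg.solve(A, b)            # direct global solve, for reference
print(np.allclose(x_gamma, x[ni:]))  # True
```

In the paper S is never formed exactly; approximations of this block factorization yield the family of preconditioners discussed.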
Energy Technology Data Exchange (ETDEWEB)
Gaiffe, St.
2000-03-23
In this thesis, we are interested in the modeling of fluid flow through porous media with 2-D and 3-D unstructured meshes, and in the use of domain decomposition methods. The behavior of flow through porous media is strongly influenced by heterogeneities: either large-scale lithological discontinuities or quite localized phenomena such as fluid flow in the neighbourhood of wells. In these two typical cases, an accurate consideration of the singularities requires the use of adapted meshes. After having shown the limits of classical meshes, we present the prospects offered by hybrid and flexible meshes. Next, we consider the possibilities for generalizing the numerical schemes traditionally used in reservoir simulation and identify two suitable approaches: mixed finite elements and U-finite volumes. Since the investigated phenomena are also characterized by different time scales, special treatment of the time discretization on various parts of the domain is required. We think that the combination of domain decomposition methods with operator splitting techniques may provide a promising approach to obtain high flexibility for local time-step management. Consequently, we develop a new numerical scheme for linear parabolic equations which allows greater flexibility in managing local space and time steps. To conclude, a priori estimates and error estimates for the two variables of interest, namely the pressure and the velocity, are proposed. (author)
Parallel finite elements with domain decomposition and its pre-processing
International Nuclear Information System (INIS)
Yoshida, A.; Yagawa, G.; Hamada, S.
1993-01-01
This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has changed drastically in recent years. In particular, parallel processing on massively parallel computers or computer networks is considered a promising technique. In order to achieve high efficiency in such a parallel computation environment, large task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system for it. (author)
Lahmiri, Salim
2016-03-01
Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications.
International Nuclear Information System (INIS)
Zerr, R.J.; Azmy, Y.Y.
2010-01-01
A spatial domain decomposition with a parallel block Jacobi solution algorithm has been developed based on the integral transport matrix formulation of the discrete ordinates approximation for solving the within-group transport equation. The new methodology abandons the typical source iteration scheme and solves directly for the fully converged scalar flux. Four matrix operators are constructed based upon the integral form of the discrete ordinates equations. A single differential mesh sweep is performed to construct these operators. The method is parallelized by decomposing the problem domain into several smaller sub-domains, each treated as an independent problem. The scalar flux of each sub-domain is solved exactly given incoming angular flux boundary conditions. Sub-domain boundary conditions are updated iteratively, and convergence is achieved when the scalar flux error in all cells meets a pre-specified convergence criterion. The method has been implemented in a computer code that was then employed for strong scaling studies of the algorithm's parallel performance via a fixed-size problem in tests ranging from one domain up to one cell per sub-domain. Results indicate that the best parallel performance compared to source iterations occurs for optically thick, highly scattering problems, the variety that is most difficult for the traditional SI scheme to solve. Moreover, the minimum execution time occurs when each sub-domain contains a total of four cells. (authors)
International Nuclear Information System (INIS)
Sarma, Manoj; Hu, Peng; Rapacchi, Stanislas; Ennis, Daniel; Thomas, Albert; Lee, Percy; Kupelian, Patrick; Sheng, Ke
2014-01-01
Purpose: To evaluate a low-rank decomposition method to reconstruct down-sampled k-space data for the purpose of tumor tracking. Methods and Materials: Seven retrospective lung cancer patients were included in the simulation study. The fully sampled k-space data were first generated from existing 2-dimensional dynamic MR images and then down-sampled by 5× to 20× before reconstruction using a Cartesian undersampling mask. Two methods, a low-rank decomposition method using combined dynamic MR images (k-t SLR, based on sparsity and low-rank penalties) and a total variation (TV) method using individual dynamic MR frames, were used to reconstruct images. The tumor trajectories were derived on the basis of autosegmentation of the resultant images. To further test its feasibility, k-t SLR was used to reconstruct prospective data of a healthy subject. An undersampled balanced steady-state free precession sequence with the same undersampling mask was used to acquire the imaging data. Results: In the simulation study, higher imaging fidelity and lower noise levels were achieved with k-t SLR compared with TV. At 10× undersampling, the k-t SLR method resulted in an average normalized mean square error <0.05, as opposed to 0.23 with TV reconstruction on individual frames. Less than 6% showed tracking errors >1 mm at 10× down-sampling using k-t SLR, as opposed to 17% using TV. In the prospective study, k-t SLR substantially reduced reconstruction artifacts and retained anatomic details. Conclusions: Magnetic resonance reconstruction using k-t SLR on highly undersampled dynamic MR imaging data results in high image quality useful for tumor tracking. k-t SLR was superior to TV by better exploiting the intrinsic anatomic coherence of the same patient. The feasibility of k-t SLR was demonstrated by prospective imaging acquisition and reconstruction.
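The low-rank ingredient of k-t SLR can be illustrated with a plain truncated SVD of the Casorati matrix (pixels by frames): periodic motion makes the dynamic series approximately low rank. The sizes, dynamics, and noise level below are invented, and the sketch omits the sparsity penalty and the k-space undersampling operator of the actual method:

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nframes = 400, 60
t = np.arange(nframes)
# Hypothetical two-component dynamics: static background + periodic motion
u1 = rng.standard_normal(npix)
u2 = rng.standard_normal(npix)
X = np.outer(u1, np.ones(nframes)) + np.outer(u2, np.sin(2 * np.pi * t / 12))
noisy = X + 0.5 * rng.standard_normal(X.shape)

# Rank-2 SVD truncation: the best rank-2 approximation (Eckart-Young)
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
Xhat = (U[:, :2] * s[:2]) @ Vt[:2]

err_noisy = np.linalg.norm(noisy - X) / np.linalg.norm(X)
err_lowrank = np.linalg.norm(Xhat - X) / np.linalg.norm(X)
print(err_lowrank < err_noisy)       # True: projection suppresses the noise
```

Because the true dynamics lie in a two-dimensional subspace, projecting onto the leading singular vectors discards most of the noise while retaining the anatomy-like structure.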
Energy Technology Data Exchange (ETDEWEB)
Flauraud, E.
2004-05-01
In this thesis, we are interested in using domain decomposition methods for solving fluid flows in faulted porous media. This study comes within the framework of sedimentary basin modeling, whose aim is to predict the presence of possible oil fields in the subsoil. A sedimentary basin is regarded as a heterogeneous porous medium in which fluid flows (water, oil, gas) occur. It is often subdivided into several blocks separated by faults. These faults create discontinuities that have a tremendous effect on the fluid flow in the basin. In this work, we present two approaches to model faults from the mathematical point of view. The first approach consists in considering faults as sub-domains, in the same way as blocks but with their own geological properties. However, because of the very small width of the faults in comparison with the size of the basin, the second and new approach consists in considering faults no longer as sub-domains, but as interfaces between the blocks. A mathematical study of the two models is carried out in order to investigate the existence and the uniqueness of solutions. Then, we are interested in using domain decomposition methods for solving the previous models. The main part of this study is devoted to the design of Robin interface conditions and to the formulation of the interface problem. The Schwarz algorithm can be seen as a Jacobi method for solving the interface problem. In order to speed up the convergence, this problem can be solved by a Krylov type algorithm (BICGSTAB). We discretize the equations with a finite volume scheme, and perform extensive numerical tests to compare the different methods. (author)
Frequency response of multipoint chemical shift-based spectral decomposition.
Brodsky, Ethan K; Chebrolu, Venkata V; Block, Walter F; Reeder, Scott B
2010-10-01
To provide a framework for characterizing the frequency response of multipoint chemical shift-based species separation techniques. Multipoint chemical shift-based species separation techniques acquire complex images at multiple echo times and perform maximum likelihood estimation to decompose signal from different species into separate images. In general, after a nonlinear process of estimating and demodulating the field map, these decomposition methods are linear transforms from the echo-time domain to the chemical-shift-frequency domain, analogous to the discrete Fourier transform (DFT). In this work we describe a technique for finding the magnitude and phase of chemical shift decomposition for input signals over a range of frequencies using numerical and experimental modeling and examine several important cases of species separation. Simple expressions can be derived to describe the response to a wide variety of input signals. Agreement between numerical modeling and experimental results is very good. Chemical shift-based species separation is linear, and therefore can be fully described by the magnitude and phase curves of the frequency response. The periodic nature of the frequency response has important implications for the robustness of various techniques for resolving ambiguities in field inhomogeneity.
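After the field map has been estimated and demodulated, the per-voxel decomposition described above reduces to a small linear least-squares problem in the echo-time domain. The sketch below separates two species; the echo times and the -440 Hz chemical shift are assumed illustrative values (roughly the water-fat shift at 3 T), not parameters from the paper:

```python
import numpy as np

df = -440.0                                    # assumed chemical shift (Hz)
te = np.array([1.0, 2.1, 3.2]) * 1e-3          # illustrative echo times (s)
# Signal model per voxel: s_n = w + f * exp(2*pi*i*df*te_n)
A = np.stack([np.ones(len(te), dtype=complex),
              np.exp(2j * np.pi * df * te)], axis=1)

w_true, f_true = 0.7, 0.3                      # made-up species fractions
s = w_true + f_true * np.exp(2j * np.pi * df * te)
est, *_ = np.linalg.lstsq(A, s, rcond=None)    # linear species decomposition
print(np.round(est.real, 3))                   # close to [0.7, 0.3]
```

Sweeping the frequency of a unit input signal through this same linear solve traces out exactly the magnitude and phase response curves the paper characterizes.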
Cafiero, M; Lloberas-Valls, O; Cante, J; Oliver, J
A domain decomposition technique is proposed which is capable of properly connecting arbitrary non-conforming interfaces. The strategy essentially consists in considering a fictitious zero-width interface between the non-matching meshes which is discretized using a Delaunay triangulation. Continuity is satisfied across domains through normal and tangential stresses provided by the discretized interface and inserted in the formulation in the form of Lagrange multipliers. The final structure of the global system of equations resembles the dual assembly of substructures where the Lagrange multipliers are employed to nullify the gap between domains. A new approach to handle floating subdomains is outlined which can be implemented without significantly altering the structure of standard industrial finite element codes. The effectiveness of the developed algorithm is demonstrated through a patch test example and a number of tests that highlight the accuracy of the methodology and independence of the results with respect to the framework parameters. Considering its high degree of flexibility and non-intrusive character, the proposed domain decomposition framework is regarded as an attractive alternative to other established techniques such as the mortar approach.
Sapphire decomposition and inversion domains in N-polar aluminum nitride
Energy Technology Data Exchange (ETDEWEB)
Hussey, Lindsay, E-mail: lkhussey@ncsu.edu; White, Ryan M.; Kirste, Ronny; Bryan, Isaac; Guo, Wei; Osterman, Katherine; Haidet, Brian; Bryan, Zachary; Bobea, Milena; Collazo, Ramón; Sitar, Zlatko [Department of Materials Science and Engineering, North Carolina State University, Raleigh, North Carolina 27695-7919 (United States); Mita, Seiji [HexaTech, Inc., 991 Aviation Pkwy, Suite 800, Morrisville, North Carolina 27560 (United States)
2014-01-20
Transmission electron microscopy (TEM) techniques and potassium hydroxide (KOH) etching confirmed that inversion domains in the N-polar AlN grown on c-plane sapphire were due to the decomposition of sapphire in the presence of hydrogen. The inversion domains were found to correspond to voids at the AlN and sapphire interface, and transmission electron microscopy results showed a V-shaped, columnar inversion domain with staggered domain boundary sidewalls. Voids were also observed in the simultaneously grown Al-polar AlN, however no inversion domains were present. The polarity of AlN grown above the decomposed regions of the sapphire substrate was confirmed to be Al-polar by KOH etching and TEM.
Directory of Open Access Journals (Sweden)
Ran Zhao
2015-01-01
The hybrid solvers based on the integral equation domain decomposition method (HS-DDM) are developed for modeling of electromagnetic radiation. Based on the philosophy of "divide and conquer," the IE-DDM divides the original multiscale problem into many closed nonoverlapping subdomains. For adjacent subdomains, Robin transmission conditions ensure the continuity of currents, so the meshes of different subdomains are allowed to be nonconformal. It also allows different fast solvers to be used in different subdomains, based on their properties, to reduce time and memory consumption. Here, the multilevel fast multipole algorithm (MLFMA) and the hierarchical matrix (H-matrix) method are combined in the framework of the IE-DDM to enhance its capability and realize efficient solution of multiscale electromagnetic radiation problems. The MLFMA is used to capture propagating wave physics in large, smooth regions, while H-matrices are used to capture evanescent wave physics in small regions which are discretized with dense meshes. Numerical results demonstrate the validity of the HS-DDM.
International Nuclear Information System (INIS)
Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.
2010-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method.
Moussawi, Ali
2015-02-24
Summary: The post-treatment of 3D displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.
A Gyro Signal Characteristics Analysis Method Based on Empirical Mode Decomposition
Zeng, Qinghua; Gu, Shanshan; Liu, Jianye; Liu, Sheng; Chen, Weina
2016-01-01
It is difficult for the Allan variance (AV) analysis method alone to analyze a nonstationary gyro signal in detail. A novel time-frequency approach to gyro signal characteristics analysis is proposed, based on empirical mode decomposition and the Allan variance (EMDAV). The output signal of the gyro is first decomposed by empirical mode decomposition (EMD), and the decomposed signal is then analyzed by the AV algorithm. Consequently, the gyro noise characteristics are demonstrated in the time-frequency domain.
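The AV half of EMDAV is straightforward to sketch: the overlapping Allan variance of a rate signal, checked against the known tau^(-1/2) Allan-deviation slope of white angle-rate noise. The data below are synthetic, not real gyro output, and the EMD stage is omitted:

```python
import numpy as np

def allan_variance(omega, fs, m_list):
    """Overlapping Allan variance of a rate signal for cluster sizes m
    (averaging time tau = m / fs)."""
    theta = np.cumsum(omega) / fs              # integrate rate to angle
    taus, avars = [], []
    for m in m_list:
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        taus.append(m / fs)
        avars.append(np.mean(d ** 2) / (2 * (m / fs) ** 2))
    return np.array(taus), np.array(avars)

# White angle-rate noise: Allan deviation should fall roughly as tau^(-1/2)
rng = np.random.default_rng(3)
fs = 100.0
omega = rng.standard_normal(200_000)           # synthetic "gyro" rate output
taus, avars = allan_variance(omega, fs, [10, 100, 1000])
slopes = np.diff(np.log10(np.sqrt(avars))) / np.diff(np.log10(taus))
print(np.round(slopes, 2))                     # both close to -0.5
```

In EMDAV the same computation is applied to each intrinsic mode function from the EMD stage, so that different noise terms can be attributed to different frequency bands.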
Directory of Open Access Journals (Sweden)
Sanping Rao
2013-01-01
This paper is an attempt to develop quantitative domain theory over frames. Firstly, we propose the notion of a fuzzy basis, and several equivalent characterizations of fuzzy bases are obtained. Furthermore, the concept of a fuzzy algebraic domain is introduced, and a relationship between fuzzy algebraic domains and fuzzy domains is discussed from the viewpoint of fuzzy basis. We finally give an application of fuzzy bases, where the image of a fuzzy domain can be preserved under some special kinds of fuzzy Galois connections.
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2010-01-01
Roč. 5910, - (2010), s. 76-83 ISSN 0302-9743. [International Conference on Large-Scale Scientific Computations, LSSC 2009 /7./. Sozopol, 04.06.2009-08.06.2009] R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z30860518 Keywords : additive matrix * condition number * domain decomposition Subject RIV: BA - General Mathematics www.springerlink.com
Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver
Czech Academy of Sciences Publication Activity Database
Kůs, Pavel; Šístek, Jakub
2017-01-01
Roč. 110, August (2017), s. 34-54 ISSN 0965-9978 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : adaptive mesh refinement * parallel algorithms * domain decomposition Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016 http://www.sciencedirect.com/science/article/pii/S0965997816305737
Energy Technology Data Exchange (ETDEWEB)
Jemcov, A.; Matovic, M.D. [Queen's Univ., Kingston, Ontario (Canada)]
1996-12-31
This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second order self-adjoint elliptic operator (Poisson equation), with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is by ordering the matrix of the discrete differential operator in subdomain node groups and then block-eliminating the interface nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem for the subdomain interface.
Singh, Phool; Yadav, A. K.; Singh, Kehar; Saini, Indu
2017-01-01
A new scheme for image encryption is proposed, using fractional Hartley transform followed by Arnold transform and singular value decomposition in the frequency domain. As the plaintext is an amplitude image, the mask used in the spatial domain is a random phase mask (RPM). The proposed scheme has been validated for grayscale images and is sensitive to the encryption parameters such as the order of the Arnold transform and the fractional orders of the Hartley transform. We have also evaluated the scheme's resistance to the well-known noise and occlusion attacks.
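Of the building blocks listed, the Arnold transform is the simplest to show concretely: an area-preserving, exactly invertible scrambling of a square N x N image under (x, y) -> (x + y, x + 2y) mod N. The image and iteration count below are arbitrary:

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold (cat map) scrambling: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nxt = np.empty_like(out)
        nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_inverse(img, iterations=1):
    """Inverse map (x, y) -> (2x - y, y - x) mod N, from the unimodular
    inverse of the matrix [[1, 1], [1, 2]]."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nxt = np.empty_like(out)
        nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out

img = np.arange(64 * 64).reshape(64, 64)
scrambled = arnold(img, 5)
print(np.array_equal(arnold_inverse(scrambled, 5), img))  # True
```

In the encryption scheme the number of iterations acts as a key, since only the correct count (or the map's period for that N) restores the plaintext arrangement.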
Singh, Phool; Yadav, A. K.; Singh, Kehar
2017-04-01
A novel scheme for image encryption of phase images is proposed, using fractional Hartley transform followed by Arnold transform and singular value decomposition in the frequency domain. Since the plaintext is a phase image, the mask used in the spatial domain is a random amplitude mask. The proposed scheme has been validated for grayscale images and is sensitive to the encryption parameters such as the order of the Arnold transform and the fractional orders of the Hartley transform. We have also evaluated the scheme's resistance to the well-known noise and occlusion attacks.
Pioldi, Fabio; Rizzi, Egidio
2017-07-01
Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification scenario) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context, use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong ground motion modal parameters. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
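The bandpass stage can be sketched with SciPy's Chebyshev Type II design, which has a flat passband and equiripple stopband attenuation. The order, attenuation, bandwidth, and the 2 Hz "mode" below are illustrative choices, not the filter bank of the rFDD procedure:

```python
import numpy as np
from scipy import signal

fs = 100.0
f_mode = 2.0                                   # hypothetical natural frequency
sos = signal.cheby2(8, 40, [f_mode - 0.5, f_mode + 0.5],
                    btype="bandpass", fs=fs, output="sos")

t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 2.0 * t) + np.sin(2 * np.pi * 7.0 * t)  # two "modes"
y = signal.sosfiltfilt(sos, x)                 # zero-phase bandpass filtering

spec = np.abs(np.fft.rfft(y))                  # the 7 Hz component is suppressed
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(freqs[np.argmax(spec)])                  # close to 2.0
```

Zero-phase filtering (forward-backward) matters here because phase distortion would bias the damping estimates extracted from the isolated modal response.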
Directory of Open Access Journals (Sweden)
Carlo Ruzzo
2016-10-01
System identification of offshore floating platforms is usually performed by testing small-scale models in wave tanks, where controlled conditions, such as still water for free decay tests and regular and irregular wave loading, can be represented. However, this approach may result in constraints on model dimensions, testing time, and costs of the experimental activity. For such reasons, intermediate-scale field modelling of offshore floating structures may become an interesting as well as cost-effective alternative in the near future. Clearly, since the open sea is not a controlled environment, traditional system identification may become challenging and less precise. In this paper, a new approach based on the Frequency Domain Decomposition (FDD) method for Operational Modal Analysis is proposed and validated against numerical simulations in ANSYS AQWA v.16.0 on a simple spar-type structure. The results obtained match well with numerical predictions, showing that this new approach, opportunely coupled with more traditional wave tank techniques, proves to be very promising for performing field-site identification of model structures.
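As a rough sketch of the basic FDD idea used here (not the paper's implementation), one can estimate the cross-spectral density matrix of the measured channels and read natural frequencies off the peaks of its first singular value. The two-mode synthetic system, mode shapes and resonator filters below are illustrative assumptions:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, n = 100.0, 2**14
modes = [2.0, 5.0]                       # "true" natural frequencies (Hz)
phi = np.array([[1.0, 1.0],
                [0.6, -0.8],
                [0.3, 0.9]])             # assumed mode shapes (3 sensors)

# Modal coordinates: white noise filtered through narrow resonators.
q = np.empty((2, n))
for i, f0 in enumerate(modes):
    b, a = signal.iirpeak(f0, Q=30.0, fs=fs)
    q[i] = signal.lfilter(b, a, rng.standard_normal(n))
y = phi @ q + 0.01 * rng.standard_normal((3, n))   # measured responses

# Cross-spectral density matrix G(f) and its SVD at every frequency line.
nperseg = 2048
f, _ = signal.csd(y[0], y[0], fs=fs, nperseg=nperseg)
G = np.empty((len(f), 3, 3), dtype=complex)
for i in range(3):
    for j in range(3):
        _, G[:, i, j] = signal.csd(y[i], y[j], fs=fs, nperseg=nperseg)
s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])

# Peaks of the first singular value estimate the natural frequencies.
peaks, _ = signal.find_peaks(s1, distance=20)
best = peaks[np.argsort(s1[peaks])[-2:]]
est = np.sort(f[best])
```

The left singular vectors at the peak frequencies would serve as the mode-shape estimates.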
Middleton, Beth A.
2014-01-01
A cornerstone of ecosystem ecology, decomposition was recognized as a fundamental process driving the exchange of energy in ecosystems by early ecologists such as Lindeman (1942) and Odum (1960). In the history of ecology, studies of decomposition were incorporated into the International Biological Program in the 1960s to compare the nature of organic matter breakdown in various ecosystem types. Such studies still have an important role in ecological studies today. More recent refinements have brought debates on the relative roles of microbes, invertebrates and environment in the breakdown and release of carbon into the atmosphere, as well as on how nutrient cycling, production and other ecosystem processes regulated by decomposition may shift with climate change. Therefore, this bibliography examines the primary literature related to organic matter breakdown, but it also explores topics in which decomposition plays a key supporting role, including vegetation composition, latitudinal gradients, altered ecosystems, anthropogenic impacts, carbon storage, and climate change models. Knowledge of these topics is relevant both to the study of ecosystem ecology and to projections of future conditions for human societies.
Le, Thien-Phu; Argoul, Pierre
2015-01-01
The time-frequency domain decomposition technique has been proposed for modal identification in ambient vibration testing. In the presence of harmonic excitations, the modal identification process can provide not only structural modes but also non-structural ones relative to harmonic components. It is thus important to distinguish between them. In this study, by using the time-frequency domain decomposition technique, it is demonstrated that the distinction between non-structural harmonic com...
Directory of Open Access Journals (Sweden)
Lin Chen
2011-09-01
The main purpose of this paper is to establish a signal decomposition system aimed at mixed over-voltages in power systems. In an electric power system, over-voltage presents a great threat to system safety. Analysis and identification of over-voltages is helpful in improving the stability and safety of power systems. Through statistical analysis of a collection of field over-voltage records, it was found that a class of complicated signals, created by the mixing of multiple different over-voltages, is difficult to identify correctly with current classification algorithms. In order to improve the classification and identification accuracy for over-voltages, a mixed over-voltage decomposition system based on atomic decomposition and a damped sinusoid atom dictionary has been established. This decomposition system is optimized using particle swarm optimization and the fast Fourier transform. To handle possible faulty decomposition results during decomposition of the over-voltage signal, a double-atom decomposition algorithm is proposed in this paper. By taking three typical mixed over-voltages as examples, the validity of the algorithm is demonstrated.
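A minimal sketch of atomic decomposition over a damped sinusoid dictionary is shown below; greedy matching pursuit stands in for the paper's PSO/FFT-optimized search, and all frequencies, damping factors and the two-component test signal are made-up values:

```python
import numpy as np

fs, n = 10_000.0, 1000
t = np.arange(n) / fs

# Dictionary of damped sinusoid atoms exp(-d*t) * cos(2*pi*f*t), unit norm.
freqs = np.arange(50.0, 1050.0, 50.0)
damps = np.array([0.0, 50.0, 100.0, 200.0])
atoms, params = [], []
for f in freqs:
    for d in damps:
        a = np.exp(-d * t) * np.cos(2 * np.pi * f * t)
        atoms.append(a / np.linalg.norm(a))
        params.append((f, d))
D = np.array(atoms)                       # (n_atoms, n)

# Mixed "over-voltage" test signal: two damped components plus noise.
rng = np.random.default_rng(2)
x = 1.0 * np.exp(-100.0 * t) * np.cos(2 * np.pi * 300.0 * t) \
  + 0.7 * np.exp(-50.0 * t) * np.cos(2 * np.pi * 700.0 * t) \
  + 0.01 * rng.standard_normal(n)

# Greedy matching pursuit: repeatedly project out the best-matching atom.
residual, picked = x.copy(), []
for _ in range(2):
    c = D @ residual                      # correlations with every atom
    k = np.argmax(np.abs(c))
    picked.append(params[k])
    residual -= c[k] * D[k]
```

Each picked (frequency, damping) pair identifies one component of the mixture; the residual retains only the noise floor.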
Energy Technology Data Exchange (ETDEWEB)
Li, Jing; Tu, Xuemin
2008-12-10
A variant of the balancing domain decomposition method by constraints (BDDC) is proposed for solving a class of indefinite systems of linear equations, which arise from the finite element discretization of the Helmholtz equation of time-harmonic wave propagation in a bounded interior domain. The proposed BDDC algorithm is closely related to the dual-primal finite element tearing and interconnecting algorithm for solving Helmholtz equations (FETI-DPH). Under the condition that the diameters of the subdomains are small enough, a convergence rate estimate is established which depends polylogarithmically on the dimension of the individual subdomain problems and which improves as the subdomain diameters decrease. These results are supported by numerical experiments solving a Helmholtz equation on a two-dimensional square domain.
M. Genseberger (Menno)
2008-01-01
Most computational work in Jacobi-Davidson [9], an iterative method for large-scale eigenvalue problems, is due to a so-called correction equation. In [5] a strategy for the approximate solution of the correction equation was proposed. This strategy is based on a domain decomposition...
Investigation of Ag2O Thermal Decomposition by Terahertz Time-Domain Spectroscopy
International Nuclear Information System (INIS)
Hua, Chen; Li, Wang
2009-01-01
Application of terahertz time-domain spectroscopy is demonstrated to study the process of Ag2O thermal decomposition. In the process of decomposition, the time-resolved signals are characterized by broad oscillations and decreased intensity, and the THz pulse essentially contains two broad spectral components: one centered at around 0.35 THz, and a band with a maximum at around 0.81 THz shifting to 0.71 THz. Optical absorption spectra of different specimens are studied in the frequency range 0.3–1.4 THz and the data are analyzed by the relevant theory of the effective medium approach combined with the Drude–Lorentz model. The analysis suggests that the optical properties stem from the Drude term for the metallic phase and the Lorentz term for the insulator phase in the complex system.
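The Drude-Lorentz permittivity model used in such an analysis can be sketched as follows; all parameter values are illustrative placeholders, not fitted Ag/Ag2O values:

```python
import numpy as np

# Angular frequency grid over the measured 0.3-1.4 THz window.
f = np.linspace(0.3e12, 1.4e12, 1101)
w = 2 * np.pi * f

# Illustrative parameters: plasma/damping rates for the metallic phase
# (Drude) and one oscillator for the insulator phase (Lorentz).
eps_inf, wp, gamma = 1.0, 2 * np.pi * 5e12, 2 * np.pi * 1e12
d_eps, w0, Gam = 2.0, 2 * np.pi * 0.81e12, 2 * np.pi * 0.1e12

eps_drude = eps_inf - wp**2 / (w**2 + 1j * gamma * w)
eps_lorentz = d_eps * w0**2 / (w0**2 - w**2 - 1j * Gam * w)
eps = eps_drude + eps_lorentz            # combined complex permittivity

# Absorption (Im eps) from the Lorentz term peaks near the oscillator line.
f_peak = f[np.argmax(eps_lorentz.imag)]
```

With these placeholder values the Lorentz absorption maximum falls at the assumed 0.81 THz line, mirroring the spectral band reported in the abstract.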
Identification of the Swiss Z24 Highway Bridge by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, P.
2002-01-01
This paper presents the results of the modal identification of the Swiss highway bridge Z24. A series of 15 progressive damage tests were performed on the bridge before it was demolished in autumn 1998, and the ambient response of the bridge was recorded for each damage case. In this paper the modal properties are identified from the ambient responses by frequency domain decomposition. 6 modes were identified for all 15 damage cases. The identification was carried out for the full 3D data case, i.e. including all measurements (a total of 291 channels), and for a reduced data case in 2D including 153 channels...
International Nuclear Information System (INIS)
Fischer, J.W.; Azmy, Y.Y.
2003-01-01
A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes (native MPI, the bucket algorithm, and the distributed bucket algorithm) are included in the validation exercise, which is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component, which typically increases with P in a manner highly dependent on the global reduce algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors, of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
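The three-component model can be sketched numerically. In this toy version the communication component is taken to grow like log2(P), as for a tree-based global reduce, and the coefficients are invented, not measured AHOT values:

```python
import numpy as np

def run_time(p, t_serial=2.0, t_parallel=960.0, t_comm=4.0):
    """Three-component model: a serial part independent of P, a parallel
    part scaling like 1/P, and a communication part growing with P
    (log2(P) here, as for a tree-based reduce). Coefficients are invented."""
    return t_serial + t_parallel / p + t_comm * np.log2(p)

procs = 2 ** np.arange(11)               # P = 1, 2, ..., 1024
times = np.array([run_time(p) for p in procs])
speedup = times[0] / times
p_best = int(procs[np.argmin(times)])    # past this P, communication dominates
```

The run time first falls like 1/P and then turns upward once the growing communication term overtakes the shrinking parallel term, which is exactly the trade-off the two models (ADD vs. SDD) quantify.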
A domain decomposition method for analyzing a coupling between multiple acoustical spaces (L).
Chen, Yuehua; Jin, Guoyong; Liu, Zhigang
2017-05-01
This letter presents a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces. While the Lagrange multiplier technique usually has good performance for conditional extremum problems, the present method avoids involving extra coupling parameters and theoretically ensures the continuity conditions of both sound pressure and particle velocity at the coupling interface. Comparisons with the finite element results illustrate the accuracy and efficiency of the present predictions and the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure is revealed.
Pseudospectral reverse time migration based on wavefield decomposition
Du, Zengli; Liu, Jianjun; Xu, Feng; Li, Yongzhang
2017-05-01
The accuracy of seismic numerical simulations and the effectiveness of imaging conditions are important in reverse time migration studies. Using the pseudospectral method, the precision of the calculated spatial derivatives of the seismic wavefield can be improved, increasing the vertical resolution of images. Low-frequency background noise, generated by the zero-lag cross-correlation of mismatched forward-propagated and backward-propagated wavefields at the impedance interfaces, can be eliminated effectively by using the imaging condition based on the wavefield decomposition technique. The computational complexity can be reduced when imaging is performed in the frequency domain. Since the Fourier transformation along the z-axis may be derived directly as one of the intermediate results of the spatial derivative calculation, the computational load of the wavefield decomposition can be reduced, improving the efficiency of imaging. Comparison of the results for a pulse response in a constant-velocity medium indicates that, compared with the finite difference method, the peak frequency of the Ricker wavelet can be increased by 10-15 Hz while avoiding spatial numerical dispersion, when the second-order spatial derivative of the seismic wavefield is obtained using the pseudospectral method. The results for the SEG/EAGE and Sigsbee2b models show that the signal-to-noise ratio of the profile and the imaging quality of the boundaries of the salt dome migrated using the pseudospectral method are better than those obtained using the finite difference method.
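The accuracy gap between pseudospectral and finite difference derivatives that underlies this comparison is easy to demonstrate on a periodic test function (a toy illustration, not the migration code itself):

```python
import numpy as np

n, L = 256, 2 * np.pi
h = L / n
x = np.arange(n) * h
k = 2 * np.pi * np.fft.fftfreq(n, d=h)   # integer wavenumbers for L = 2*pi

u = np.sin(3 * x) * np.cos(5 * x)        # = 0.5*sin(8x) - 0.5*sin(2x)
exact = -32 * np.sin(8 * x) + 2 * np.sin(2 * x)   # analytic u''

# Pseudospectral second derivative: multiply by (ik)^2 in the Fourier domain.
d2_ps = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))
# Second-order central finite difference on the same (periodic) grid.
d2_fd = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h ** 2

err_ps = np.abs(d2_ps - exact).max()
err_fd = np.abs(d2_fd - exact).max()
```

For a band-limited field the spectral derivative is exact to machine precision, while the finite difference error grows with wavenumber; this is why the pseudospectral scheme tolerates a higher Ricker peak frequency before dispersion sets in.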
Terahertz Spectrum Analysis Based on Empirical Mode Decomposition
Su, Yunpeng; Zheng, Xiaoping; Deng, Xiaojiao
2017-08-01
Precise identification of terahertz absorption peaks for materials with low concentration and high attenuation still remains a challenge. Empirical mode decomposition was applied to terahertz spectrum analysis in order to improve the performance of spectral fingerprint identification. We conducted experiments on water vapor and carbon monoxide with terahertz time-domain spectroscopy. By comparing their absorption spectra before and after empirical mode decomposition, we demonstrated that the first-order intrinsic mode function shows absorption peaks clearly in the high-frequency range. By comparing the frequency spectra of the sample signals and their intrinsic mode functions, we showed that the first-order function contains most of the original signal's energy and frequency information, so that it cannot be left out or replaced by higher-order functions in spectral fingerprint detection. Empirical mode decomposition not only acts as an effective supplementary means to terahertz time-domain spectroscopy but also shows great potential for the discrimination of materials and the prediction of their concentrations.
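A bare-bones sifting loop illustrates how the first intrinsic mode function captures the fast oscillation while the slow trend is left in the residue; the two-tone test signal and sifting count are arbitrary, and production EMD implementations add boundary handling and stopping criteria:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def first_imf(x, t, n_sift=10):
    """Extract the first IMF by sifting: repeatedly subtract the mean of
    cubic-spline envelopes through the local maxima and minima."""
    h = x.copy()
    for _ in range(n_sift):
        imax = argrelextrema(h, np.greater)[0]
        imin = argrelextrema(h, np.less)[0]
        if len(imax) < 4 or len(imin) < 4:   # not enough extrema to sift
            break
        upper = CubicSpline(t[imax], h[imax])(t)
        lower = CubicSpline(t[imin], h[imin])(t)
        h = h - 0.5 * (upper + lower)        # remove the envelope mean
    return h

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
fast = np.cos(2 * np.pi * 40 * t)        # high-frequency line (first IMF)
slow = 0.8 * np.cos(2 * np.pi * 3 * t)   # slow trend (left in the residue)
x = fast + slow

imf1 = first_imf(x, t)
residue = x - imf1
```

The first IMF isolates the fast component, consistent with the abstract's finding that IMF1 carries the high-frequency fingerprint content.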
Li, Duan; Xu, Lijun; Li, Xiaolu
2017-04-01
To measure the distances and properties of the objects within a laser footprint, a decomposition method for full-waveform light detection and ranging (LiDAR) echoes is proposed. In this method, wavelet decomposition is first used to filter the noise and estimate the noise level in a full-waveform echo. Second, peak and inflection points of the filtered full-waveform echo are used to detect the echo components. Lastly, particle swarm optimization (PSO) is used to remove the noise-caused echo components and optimize the parameters of the most probable echo components. Simulation results show that the wavelet-decomposition-based filter yields better SNR improvement and decomposition success rates than Wiener and Gaussian smoothing filters. In addition, the noise level estimated using the wavelet-decomposition-based filter is more accurate than those estimated using two other commonly used methods. Experiments were carried out to evaluate the proposed method, which was compared with our previous method (called GS-LM for short). In the experiments, a lab-built full-waveform LiDAR system was utilized to provide eight types of full-waveform echoes scattered from three objects at different distances. Experimental results show that the proposed method achieves higher success rates for the decomposition of full-waveform echoes and more accurate parameter estimation for echo components than GS-LM. The proposed method, based on wavelet decomposition and PSO, is valid for decomposing complicated full-waveform echoes to estimate the multi-level distances and measure the properties of the objects in a laser footprint.
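The final parameter-optimization step can be sketched as fitting a sum of Gaussian echo components to a noisy waveform; here scipy's least_squares stands in for the PSO of the paper, and the echo amplitudes, positions and widths are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, t):
    """Sum of Gaussian echo components; p = [A, mu, sigma] per component."""
    y = np.zeros_like(t)
    for a, mu, s in p.reshape(-1, 3):
        y += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return y

rng = np.random.default_rng(3)
t = np.linspace(0.0, 100.0, 1000)                  # time axis (arbitrary ns scale)
p_true = np.array([1.0, 30.0, 3.0, 0.6, 55.0, 4.0])
wave = model(p_true, t) + 0.01 * rng.standard_normal(t.size)

# Initial guesses, e.g. from detected peak/inflection points of the echo.
p0 = np.array([0.8, 28.0, 2.0, 0.5, 58.0, 3.0])
res = least_squares(lambda p: model(p, t) - wave, p0)
p_fit = res.x
```

The fitted component centers give the per-target ranges within the footprint; a global optimizer such as PSO is preferred in the paper when the initial guesses are poor or components overlap heavily.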
Reprint of Domain decomposition multigrid methods for nonlinear reaction-diffusion problems
Arrarás, A.; Gaspar, F. J.; Portero, L.; Rodrigo, C.
2015-04-01
In this work, we propose efficient discretizations for nonlinear evolutionary reaction-diffusion problems on general two-dimensional domains. The spatial domain is discretized through an unstructured coarse triangulation, which is subsequently refined via regular triangular grids. Following the method of lines approach, we first consider a finite element spatial discretization, and then use a linearly implicit splitting time integrator related to a suitable decomposition of the triangulation nodes. Such a procedure provides a linear system per internal stage. The equations corresponding to those nodes lying strictly inside the elements of the coarse triangulation can be decoupled and solved in parallel using geometric multigrid techniques. The method is unconditionally stable and computationally efficient, since it avoids the need for Schwarz-type iteration procedures. In addition, it is formulated for triangular elements, thus yielding much flexibility in the discretization of complex geometries. To illustrate its practical utility, the algorithm is shown to reproduce the pattern-forming dynamics of the Schnakenberg model.
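The split treatment (implicit linear diffusion, explicit nonlinear reaction, one linear solve per step) can be illustrated in 1D on the Fisher equation; the periodic grid, time step and equation are stand-ins for the paper's 2D triangulated setting:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

n, L, D = 200, 20.0, 1.0                  # grid points, domain length, diffusivity
dt, steps = 0.01, 1200
h = L / n
x = np.arange(n) * h

# Periodic Laplacian; the implicit diffusion operator is factorized once.
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tolil()
lap[0, n - 1] = 1.0
lap[n - 1, 0] = 1.0
solver = splu((identity(n) - (dt * D / h**2) * lap).tocsc())

# Fisher reaction-diffusion u_t = D*u_xx + u*(1 - u), initial bump.
u = np.exp(-(x - 5.0) ** 2)
for _ in range(steps):
    u = solver.solve(u + dt * u * (1.0 - u))   # explicit reaction, implicit diffusion
```

Each step costs one pre-factorized sparse solve, avoiding both the stability limit of explicit diffusion and the Newton/Schwarz iterations a fully implicit treatment would require; the solution spreads as travelling fronts that fill the domain with the stable state u = 1.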
Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines
International Nuclear Information System (INIS)
Hunter, M.A.; Haghighat, A.
1993-01-01
Several parallel processing algorithms based on spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight-processor CRAY Y-MP. Parallel performance of the algorithms is measured using a series of fixed-source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)
Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media
Galvis, Juan
2010-01-01
In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Jingang Liang
2016-06-01
Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. To preserve the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
Problem decomposition by mutual information and force-based clustering
Otero, Richard Edward
The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight into the fundamental physics driving problem solution. This work advances the state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. It describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence that works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables, without the limitations of linear dependence measured through covariance, and it can handle data that lacks derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem, using a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an
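The core advantage of mutual information over covariance is easy to demonstrate with a binned estimator; the quadratic test relationship and bin count below are arbitrary choices:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Binned MI estimate (nats): I(X;Y) = sum p(x,y) log(p(x,y)/(p(x)p(y)))."""
    pxy = np.histogram2d(x, y, bins=bins)[0]
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
n = 20_000
x = rng.standard_normal(n)
y = x ** 2 + 0.1 * rng.standard_normal(n)    # strong but purely nonlinear coupling

corr = np.corrcoef(x, y)[0, 1]               # near zero: covariance misses it
mi = mutual_information(x, y)                # clearly positive
mi_indep = mutual_information(x, rng.standard_normal(n))   # near-zero baseline
```

A linear measure ranks the (x, y) link as negligible, while mutual information places it far above the independent baseline, which is exactly the property the link-ranking step relies on.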
Two-phase flow steam generator simulations on parallel computers using domain decomposition method
International Nuclear Information System (INIS)
Belliard, M.
2003-01-01
Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicolson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)
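An iteration-by-subdomain solve of the kind described can be sketched in its simplest form: alternating Schwarz with two overlapping subdomains for a 1D Poisson problem (a toy stand-in for the two-phase flow setting; the subdomain boundaries and sweep count are arbitrary):

```python
import numpy as np

n = 101                                   # grid points on [0, 1], h = 1/(n-1)
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)          # exact solution of -u'' = f is sin(pi*x)

def solve_dirichlet(f_loc, left, right):
    """Direct solve of the discrete -u'' = f on a subgrid with Dirichlet data."""
    m = len(f_loc)
    A = (np.diag(np.full(m, 2.0))
         + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1))
    b = h**2 * f_loc
    b[0] += left                          # boundary data from the neighbor
    b[-1] += right
    return np.linalg.solve(A, b)

# Two overlapping subdomains (interior nodes 1..59 and 40..99; overlap 40..59).
s1, s2 = slice(1, 60), slice(40, n - 1)
u = np.zeros(n)                           # initial guess; boundary values stay 0
for _ in range(30):                       # alternating (multiplicative) Schwarz
    u[s1] = solve_dirichlet(f[s1], u[0], u[60])
    u[s2] = solve_dirichlet(f[s2], u[39], u[n - 1])

err = np.abs(u - np.sin(np.pi * x)).max()
```

Each sweep passes updated interface values between the subdomains, and with a generous overlap the iteration converges geometrically to the discretization-error level.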
Arslan, Mehmet Ali
This thesis examines decomposition based procedures in the optimal design of large-scale multidisciplinary systems. The use of formal optimization methods in such systems is complicated by the presence of a large number of design variables and constraints. Decomposition reduces a large-scale system into a sequence of smaller, more tractable subsystems, each with a smaller set of design variables and constraints. The decomposed subsystems, however, are not totally decoupled, and design changes in one subsystem may have a profound influence on changes in other subsystems. The present work examines the effectiveness of counterpropagation (CP) neural networks as a tool to account for this coupling, a capability that derives from the pattern completion ability of such networks. The proposed approach is implemented for a class of structural design problems where the decomposed subsystems exhibit hierarchy, i.e., there is a distinct chain of command in the nature of the couplings between subsystems. The role of artificial neural networks is also explored in the context of concurrent subspace optimization (CSSO), a decomposition based approach applicable to problems where no distinct hierarchy of influences can be identified. Essential components of decomposition based design methods are strategies to identify a topology for problem decomposition and coordination strategies that account for couplings among the decomposed problems; the thesis examines artificial neural networks as a tool both to account for this coupling and to coordinate the solutions of the different subproblems. The solution process for decomposition based design is further enhanced by a novel approach using Intelligent Agents (IAs). This agent-based paradigm provides the necessary support structure for representing salient characteristics of the design, and for coordinating the solutions in different subproblems. The CSSO method
Barka, André; Picard, Clément
2008-03-01
In this paper, we discuss several improvements of a substructuring Domain Decomposition Method (DDM) devoted to electromagnetic computations, based on the Boundary Element Method (BEM) and the Finite Element Method (FEM). This computation procedure is applied to the analysis of antenna performance on board vehicles as well as Radar Cross Section (RCS). The benefits of the subdomain Computational Electromagnetic Method are mainly the ability to deal with collaborative studies involving several companies, and the reduction of computation costs by one or more orders of magnitude, especially in the context of parametric studies. Furthermore, this paper proposes a Spectral Basis Function (SBF) defined on fictitious surfaces surrounding equipment, to deal with both the computation of antenna far field patterns and RCS in a multi-domain mode. By masking the complexity of the equipment (wires, thin surfaces, materials, supply network, weapons), the external domain of the vehicle can be closed so that the Combined Field Integral Equation (CFIE) can be used, which is better conditioned than the Electric Field Integral Equation (EFIE). This calculation procedure leads to faster convergence when using the iterative Multilevel Fast Multipole Algorithm (MLFMA). The accuracy and efficiency of this technique are assessed by performing the computation of the diffraction and radiation of several test objects in a multi-domain way, cross-compared with reference integral-equation results.
Pitfalls in VAR based return decompositions: A clarification
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component [...] the asset price needs to be included as a state variable. In parts of Chen and Zhao's analysis the price does not appear as a state variable, thus rendering those parts of their analysis invalid. Finally, we clarify the intriguing issue of the role of the residual component in equity return decompositions...
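The workhorse identity behind VAR based return decompositions, the expected discounted sum of future VAR states, can be checked numerically; the VAR matrix, state vector and discount coefficient below are invented for illustration:

```python
import numpy as np

rho = 0.96                                # Campbell-Shiller discount coefficient
A = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.5, 0.2],
              [0.1, 0.0, 0.3]])           # illustrative stable VAR(1) matrix
z = np.array([1.0, -0.5, 0.2])            # current (demeaned) state vector

# Expected discounted sum of future states,
#   E_t sum_{j>=1} rho^j z_{t+j} = rho * A (I - rho * A)^{-1} z_t,
# the building block from which return news components are read off.
lam = rho * A @ np.linalg.inv(np.eye(3) - rho * A)
closed = lam @ z

# Brute-force check: truncate the infinite sum at a long horizon.
brute = np.zeros(3)
term = np.eye(3)
for _ in range(500):
    term = term @ (rho * A)
    brute += term @ z
```

The decomposition's validity hinges on which variables enter the state vector z, which is precisely the point at issue: omitting the asset price changes what the residual component captures.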
Domain decomposition method for dynamic faulting under slip-dependent friction
International Nuclear Information System (INIS)
Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie
2004-01-01
The anti-plane shearing problem on a system of finite faults under a slip-dependent friction in a linear elastic domain is considered. Using a Newmark method for the time discretization of the problem, we have obtained an elliptic variational inequality at each time step. An upper bound for the time step size, which is not a CFL condition, is deduced from the solution uniqueness criterion using the first eigenvalue of the tangent problem. Finite element form of the variational inequality is solved by a Schwarz method assuming that the inner nodes of the domain lie in one subdomain and the nodes on the fault lie in other subdomains. Two decompositions of the domain are analyzed, one made up of two subdomains and another one with three subdomains. Numerical experiments are performed to illustrate convergence for a single time step (convergence of the Schwarz algorithm, influence of the mesh size, influence of the time step), convergence in time (instability capturing, energy dissipation, optimal time step) and an application to a relevant physical problem (interacting parallel fault segments)
Domain decomposition for poroelasticity and elasticity with DG jumps and mortars
Girault, V.
2011-01-01
We couple a time-dependent poroelastic model in a region with an elastic model in adjacent regions. We discretize each model independently on non-matching grids and we realize a domain decomposition on the interface between the regions by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where the computations of the pressure and displacement are decoupled. We show that the matrix of the interface problem is positive definite and establish error estimates for this scheme. © 2011 World Scientific Publishing Company.
A balancing domain decomposition method by constraints for advection-diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Tu, Xuemin; Li, Jing
2008-12-10
The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A preconditioned GMRES iteration is used to solve a Schur complement system of equations for the subdomain interface variables. In the preconditioning step of each iteration, a partially subassembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established, under the condition that the diameters of the subdomains are small enough. It is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.
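The interface Schur complement system at the heart of such methods can be illustrated on a 1D advection-diffusion discretization with two subdomains and one interface unknown; a direct Schur solve stands in for the preconditioned GMRES iteration, and all problem parameters are arbitrary:

```python
import numpy as np

n, nu, b = 101, 0.05, 1.0                # unknowns, diffusion, advection speed
h = 1.0 / (n + 1)
# Upwind FD discretization of -nu*u'' + b*u' = 1 with homogeneous Dirichlet BCs.
A = (np.diag(np.full(n, 2 * nu / h**2 + b / h))
     + np.diag(np.full(n - 1, -nu / h**2 - b / h), -1)
     + np.diag(np.full(n - 1, -nu / h**2), 1))
f = np.ones(n)

# Two subdomains separated by a single interface unknown at index m.
m = n // 2
i1, i2, g = np.arange(m), np.arange(m + 1, n), np.array([m])
A11, A22, Agg = A[np.ix_(i1, i1)], A[np.ix_(i2, i2)], A[np.ix_(g, g)]
A1g, Ag1 = A[np.ix_(i1, g)], A[np.ix_(g, i1)]
A2g, Ag2 = A[np.ix_(i2, g)], A[np.ix_(g, i2)]

# Schur complement system for the interface variable:
#   S u_g = f_g - Ag1 A11^{-1} f_1 - Ag2 A22^{-1} f_2
S = Agg - Ag1 @ np.linalg.solve(A11, A1g) - Ag2 @ np.linalg.solve(A22, A2g)
rhs = f[g] - Ag1 @ np.linalg.solve(A11, f[i1]) - Ag2 @ np.linalg.solve(A22, f[i2])

u = np.empty(n)
u[g] = np.linalg.solve(S, rhs)
u[i1] = np.linalg.solve(A11, f[i1] - A1g.ravel() * u[g][0])  # interior solves can
u[i2] = np.linalg.solve(A22, f[i2] - A2g.ravel() * u[g][0])  # run in parallel
```

In a real BDDC solver the (much larger) interface system S is never formed explicitly; it is solved iteratively with GMRES, and the coarse constraints supply the preconditioner.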
An iterative finite-element collocation method for parabolic problems using domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Curran, M.C.
1992-01-01
Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two-dimensional, time-dependent advection-diffusion problems.
Energy Technology Data Exchange (ETDEWEB)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-12-01
We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
Energy Technology Data Exchange (ETDEWEB)
Widlund, Olof B. [New York Univ. (NYU), NY (United States). Courant Inst.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large-scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver for a coarse model in order to achieve performance that is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
Energy Technology Data Exchange (ETDEWEB)
Girardi, E
2004-12-15
A new methodology for the solution of the neutron transport equation, based on domain decomposition, has been developed. This approach allows different numerical methods to be employed together in a whole-core calculation: a variational nodal method, a discrete ordinate nodal method, and a method of characteristics. These new developments permit the use of independent spatial and angular expansions and of non-conformal Cartesian and unstructured meshes for each sub-domain, introducing a flexibility of modeling that is not available in existing codes. The effectiveness of our multi-domain/multi-method approach has been tested on several configurations. Among them, one particular application, the benchmark model of the Phebus experimental facility at Cea-Cadarache, shows why this new methodology is relevant to problems with strong local heterogeneities. This comparison showed that the decomposition method improves accuracy while substantially reducing computation time.
Modal Decomposition of Synthetic Jet Flow Based on CFD Computation
Directory of Open Access Journals (Sweden)
Hyhlík Tomáš
2015-01-01
The article analyzes results of numerical simulation of synthetic jet flow using modal decomposition. The analyses are based on numerical simulation of axisymmetric unsteady laminar flow obtained using the ANSYS Fluent CFD code. Three typical laminar regimes are compared from the point of view of modal decomposition. The first regime, without synthetic jet creation, has Reynolds number Re = 76 and Stokes number S = 19.7. The second studied regime is defined by Re = 145 and S = 19.7. The third regime is defined by Re = 329 and S = 19.7. Modal decomposition of the obtained flow fields is performed using proper orthogonal decomposition (POD), whereby the energetically most important modes are identified. The structure of the POD modes is discussed together with the classical approach based on phase-averaged velocities.
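The POD step mentioned above amounts to a singular value decomposition of a snapshot matrix whose columns are instantaneous fields. A minimal sketch on a synthetic field (not the Fluent data of the article; the grid, structures, and amplitudes are made up for illustration):

```python
import numpy as np

# Snapshot matrix: each column is the field at one time instant.
rng = np.random.default_rng(4)
nx, nt = 64, 40
x = np.linspace(0, 2*np.pi, nx)
t = np.linspace(0, 1, nt)
# Synthetic "flow": two coherent structures plus weak noise.
field = (np.outer(np.sin(x), np.cos(2*np.pi*t))
         + 0.3 * np.outer(np.sin(2*x), np.sin(4*np.pi*t))
         + 0.01 * rng.standard_normal((nx, nt)))

# POD modes are the left singular vectors; squared singular values
# rank the modes by their energy content.
modes, s, _ = np.linalg.svd(field, full_matrices=False)
energy = s**2 / np.sum(s**2)
```

The normalized squared singular values are exactly the per-mode energy fractions, which is how the "energetically most important modes" in the abstract are identified.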
Le, Thien-Phu
2017-10-01
The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.
Yücel, Abdulkadir C.
2013-07-01
Reliable and effective wireless communication and tracking systems in mine environments are key to ensure miners' productivity and safety during routine operations and catastrophic events. The design of such systems greatly benefits from simulation tools capable of analyzing electromagnetic (EM) wave propagation in long mine tunnels and large mine galleries. Existing simulation tools for analyzing EM wave propagation in such environments employ modal decompositions (Emslie et. al., IEEE Trans. Antennas Propag., 23, 192-205, 1975), ray-tracing techniques (Zhang, IEEE Trans. Vehic. Tech., 5, 1308-1314, 2003), and full wave methods. Modal approaches and ray-tracing techniques cannot accurately account for the presence of miners and their equipment, as well as wall roughness (especially when the latter is comparable to the wavelength). Full-wave methods do not suffer from such restrictions but require prohibitively large computational resources. To partially alleviate this computational burden, a 2D integral equation-based domain decomposition technique has recently been proposed (Bakir et. al., in Proc. IEEE Int. Symp. APS, 1-2, 8-14 July 2012). © 2013 IEEE.
Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A
2018-03-23
To assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models including BVM were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms each containing 17 soft and bony tissues (for a total of 34) of known composition were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS error under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by six-fold compared to other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
Taverniers, Søren; Tartakovsky, Daniel M.
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton-Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
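The tight Jacobian-free Newton-Krylov coupling described above can be sketched with SciPy's `newton_krylov` on a deterministic, single-domain nonlinear diffusion toy (no stochastic forcing or subdomain coupling here; the problem and all names are illustrative). Only the residual function is supplied; Jacobian-vector products are approximated internally by finite differences, which is what makes the method "Jacobian-free".

```python
import numpy as np
from scipy.optimize import newton_krylov

# Nonlinear diffusion d/dx[(1 + u^2) du/dx] = -1 on (0,1), u(0) = u(1) = 0,
# discretized by finite volumes on n interior cells.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    up = np.concatenate(([0.0], u, [0.0]))   # append Dirichlet boundary values
    k = 1.0 + up**2                          # solution-dependent conductivity
    kface = 0.5 * (k[1:] + k[:-1])           # conductivity at the n+1 cell faces
    flux = kface * np.diff(up) / h           # diffusive fluxes at the faces
    return np.diff(flux) / h + 1.0           # vanishes at the discrete solution

# Matrix-free Newton-Krylov solve from a zero initial guess.
u = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
```

In the paper's setting the residual would additionally enforce path-wise continuity of the state and flux at subdomain interfaces; here it only encodes the interior PDE balance.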
Energy Technology Data Exchange (ETDEWEB)
Saas, L.
2004-05-01
This thesis deals with sedimentary basin modeling, whose goal is the prediction through geological times of the localization and appraisal of hydrocarbon quantities present in the ground. Due to the natural and evolutionary decomposition of the sedimentary basin into blocks and stratigraphic layers, domain decomposition methods are required to simulate flows of water and of hydrocarbons in the ground. Conservation laws are used to model the flows in the ground and form coupled partial differential equations which must be discretized by the finite volume method. In this report we carry out a study of finite volume methods on non-matching grids solved by domain decomposition methods. We describe a family of finite volume schemes on non-matching grids and we prove that the associated global discretized problem is well posed. Then we give an error estimate. We give two examples of finite volume schemes on non-matching grids and the corresponding theoretical results (constant scheme and linear scheme). Then we present the resolution of the global discretized problem by a domain decomposition method using arbitrary interface conditions (for example, Robin conditions). Finally we give numerical results which validate the theoretical results and study the use of finite volume methods on non-matching grids for basin modeling. (author)
A Generalized Demodulation and Hilbert Transform Based Signal Decomposition Method
Directory of Open Access Journals (Sweden)
Zhi-Xiang Hu
2017-01-01
This paper proposes a new signal decomposition method that aims to decompose a multicomponent signal into monocomponent signals. The main procedure is to extract the components with frequencies higher than a given bisecting frequency in three steps: (1) generalized demodulation is used to project the components with lower frequencies onto the negative frequency domain, (2) the Hilbert transform is performed to eliminate the negative frequency components, and (3) the inverse generalized demodulation is used to obtain the signal which contains only the components with higher frequencies. By running the procedure recursively, all monocomponent signals can be extracted efficiently. A comprehensive derivation of the decomposition method is provided. The validity of the proposed method has been demonstrated by extensive numerical analysis. The proposed method is also applied to decompose the dynamic strain signal of a cable-stayed bridge and the echolocation signal of a bat.
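The three steps can be sketched in a few lines for a constant bisecting frequency, with the negative-frequency suppression done directly in the FFT domain (the role the Hilbert transform plays in the paper). The two-tone signal and the bisecting frequency below are illustrative placeholders.

```python
import numpy as np

# Two-tone test signal; goal: extract the component above f_b = 120 Hz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*200*t)
f_b = 120.0  # bisecting frequency between the 50 Hz and 200 Hz tones

def suppress_negative(z, fs):
    """Zero out the negative-frequency half of the spectrum."""
    Z = np.fft.fft(z)
    Z[np.fft.fftfreq(z.size, d=1.0/fs) < 0] = 0.0
    return np.fft.ifft(Z)

y = x * np.exp(-2j*np.pi*f_b*t)        # (1) demodulate: f < f_b goes negative
y = suppress_negative(y, fs)           # (2) discard negative frequencies
recovered = 2.0 * np.real(y * np.exp(2j*np.pi*f_b*t))   # (3) demodulate back

err = np.max(np.abs(recovered - 0.5*np.sin(2*np.pi*200*t)))
```

Here the 200 Hz component is recovered essentially exactly because both tones are periodic on the analysis window; for real signals, windowing would be needed to control edge effects.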
Pierson, Kendall Hugh
The Finite Element Tearing and Interconnecting (FETI) algorithms are numerically scalable iterative domain decomposition methods for solving systems of equations generated from the finite element discretization of second- or fourth-order elasticity problems. These methods have been substantially improved over the last ten years and recently shown parallel scalability up to one thousand processors. The purpose of this thesis is to present and investigate a dual-primal FETI method, which addresses some of the critical issues related to the original FETI methods. These critical issues involve the accurate computation of the local rigid body modes, the cost and size of the FETI coarse problems with respect to fourth-order elasticity problems, and the overall robustness and versatility of the equation solver. These improvements due to the dual-primal FETI formulation are especially beneficial when implemented on massively parallel distributed memory computers such as the Accelerated Strategic Computing Initiative (ASCI) Red Option supercomputer. Numerical results will be shown detailing scalability with respect to the mesh size, subdomain size, and the number of elements per subdomain for both second- and fourth-order elasticity problems. Parallel scalability will be reported for various large scale realistic problems on a SGI Origin 2000 and the ASCI Red Option massively parallel supercomputer. Lastly, results from linear dynamics, eigenvalue analysis and geometrically non-linear static problems will be shown highlighting the benefits of FETI methods for solving large-scale problems with multiple right-hand sides.
Le Tallec, Patrick; Tidriri, Moulay D.
1994-01-01
The aim of this paper is to study the convergence properties of a time-marching algorithm solving advection-diffusion problems on two domains using incompatible discretizations. The basic algorithm is first presented, and theoretical and numerical results illustrate its convergence properties. This study is based on spectral theory, a priori estimates, and a De Giorgi-Nash maximum principle.
Asynchronous Task-Based Polar Decomposition on Manycore Architectures
Sukkari, Dalal
2016-10-25
This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
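For orientation, the unitary polar factor that QDWH computes can also be obtained with the classical inverse-based Newton iteration X <- (X + X^{-T})/2. The sketch below uses this simpler iteration on a small random matrix; it is not the task-based QDWH implementation of the paper, only a numerical illustration of what a polar decomposition A = U H produces.

```python
import numpy as np

# Newton iteration for the orthogonal polar factor of a real matrix A:
# X <- (X + X^{-T}) / 2 converges to U, where A = U H with H symmetric
# positive definite. (QDWH computes the same U, only faster and more stably.)
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

X = A.copy()
for _ in range(30):
    X = 0.5 * (X + np.linalg.inv(X).T)

U = X
H = U.T @ A  # the symmetric positive-definite factor

orth_err = np.linalg.norm(U.T @ U - np.eye(5))   # U should be orthogonal
sym_err = np.linalg.norm(H - H.T)                # H should be symmetric
recon_err = np.linalg.norm(U @ H - A)            # A = U H should hold
```

The iteration converges quadratically for any nonsingular A; the appeal of QDWH over this inversion-based scheme is precisely that it avoids explicit inverses in favor of QR factorizations.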
Distributed Prognostics Based on Structural Model Decomposition
National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...
Efficient block-based frequency domain wavelet transform implementations.
Lin, Jianyu; Smith, Mark J T
2009-08-01
Subband decompositions for image coding have been explored extensively over the last few decades. The condensed wavelet packet (CWP) transform is one such decomposition that was recently shown to have coding performance advantages over conventional decompositions. A special feature of the CWP is that its design and implementation are performed in the cyclic frequency domain. While performance gains have been reported, efficient implementations of the CWP (or more generally, efficient implementations of cyclic filter banks) have not yet been fully explored. In this paper, we present efficient block-based implementations of cyclic filter banks along with an analysis of the arithmetic complexity. Block-based cyclic filter bank implementations of the CWP coder are compared with conventional subband/wavelet image coders whose filter banks are implemented in the time domain. It is shown that block-based cyclic filter bank implementations can result in CWP coding systems that outperform the popular image coding systems both in terms of arithmetic complexity and coding performance.
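The core primitive behind such cyclic filter banks is circular convolution, which the frequency domain implements as pointwise multiplication of DFTs in O(N log N). A minimal sketch of that primitive (a single filter applied cyclically, not a full CWP analysis/synthesis bank; sizes are illustrative):

```python
import numpy as np

# Circular (cyclic) filtering via the DFT: multiplying the transforms of
# the signal and the zero-padded filter implements circular convolution.
rng = np.random.default_rng(6)
N = 64
x = rng.standard_normal(N)          # one period of a cyclic signal
h = rng.standard_normal(8)          # short filter
h_padded = np.concatenate([h, np.zeros(N - h.size)])

y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_padded)).real

# Reference: direct circular convolution y[n] = sum_k h[k] x[(n-k) mod N].
y_ref = np.array([np.dot(h, x[(n - np.arange(h.size)) % N])
                  for n in range(N)])
```

Block-based implementations of the kind analyzed in the paper amortize exactly this FFT cost across the subband channels.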
Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.
2013-01-01
The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
Advances in audio watermarking based on singular value decomposition
Dhar, Pranab Kumar
2015-01-01
This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptible watermarked sounds. Then, an audio watermarking method in fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...
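The singular-value embedding idea common to both methods can be sketched numerically. The block below merely stands in for a patch of transform-domain (DWT/DCT or FFT) coefficients, and the non-blind extraction assumes the original singular values and vectors are stored; the watermark, block size, and strength are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
block = rng.standard_normal((8, 8))  # stand-in for transform coefficients
watermark = rng.integers(0, 2, size=8).astype(float)  # binary watermark bits
alpha = 0.05  # embedding strength, kept small for imperceptibility

# Embed: perturb the singular values, then rebuild the block.
U, s, Vt = np.linalg.svd(block)
marked = U @ np.diag(s + alpha * watermark) @ Vt

# Non-blind extraction: project the marked block back onto the stored
# singular vectors and compare with the original singular values.
s_rec = np.diag(U.T @ marked @ Vt.T)
recovered = (s_rec - s) / alpha

distortion = np.linalg.norm(marked - block)  # equals alpha * ||watermark||
```

Robustness in the book's schemes comes from the stability of singular values under common signal-processing attacks; this sketch only shows the embed/extract algebra.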
Structural system identification based on variational mode decomposition
Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.
2018-03-01
In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on the modal response data. Finally, after extracting modal responses from the available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
Energy Technology Data Exchange (ETDEWEB)
Clerc, S
1998-07-01
In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with the improvement of computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite volume schemes based on Godunov's approach are unsuited to compute low Mach number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly, or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady-state problem. (author)
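A Schwarz-type domain decomposition iteration of the kind whose convergence is proved here can be sketched on the 1-D Poisson problem -u'' = 1 with two overlapping subdomains. This is a deliberately simple stand-in for the compressible-flow setting of the thesis; the grid, overlap, and iteration count are illustrative.

```python
import numpy as np

# Alternating Schwarz for -u'' = 1 on (0,1), u(0) = u(1) = 0.
# Each sweep solves one overlapping subdomain with Dirichlet data
# taken from the current global iterate.
n = 99
h = 1.0 / (n + 1)
xs = np.linspace(h, 1 - h, n)
u = np.zeros(n)
dom1 = slice(0, 60)    # overlapping index ranges:
dom2 = slice(40, n)    # nodes 40..59 belong to both subdomains

def solve_sub(u, sl):
    idx = np.arange(n)[sl]
    m = idx.size
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    rhs = np.ones(m)
    # Dirichlet data from the neighbouring subdomain (or the outer boundary)
    left = u[idx[0] - 1] if idx[0] > 0 else 0.0
    right = u[idx[-1] + 1] if idx[-1] < n - 1 else 0.0
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    u[sl] = np.linalg.solve(A, rhs)

for _ in range(50):
    solve_sub(u, dom1)
    solve_sub(u, dom2)

exact = 0.5 * xs * (1 - xs)   # FD is exact for this quadratic solution
err = np.max(np.abs(u - exact))
```

The geometric error contraction per sweep depends on the overlap width, which is why the iterate matches the exact nodal solution to machine precision after a modest number of sweeps.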
Zampini, Stefano
2017-08-03
Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.
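The Sherman-Morrison idea invoked above lets one solve a rank-one update of a linear system while reusing the factorization of the original matrix. A generic numerical sketch of that formula (not the BDDC deluxe solver itself; the matrix and vectors are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n)) + 10.0 * np.eye(n)  # well-conditioned base
u = rng.standard_normal(n)
v = rng.standard_normal(n)
b = rng.standard_normal(n)

# Two solves with A (in practice, two back-substitutions with a stored factor)
Ainv_b = np.linalg.solve(A, b)
Ainv_u = np.linalg.solve(A, u)

# Sherman-Morrison: x = (A + u v^T)^{-1} b, without forming or
# refactoring the updated matrix A + u v^T.
x = Ainv_b - Ainv_u * (v @ Ainv_b) / (1.0 + v @ Ainv_u)

err = np.linalg.norm((A + np.outer(u, v)) @ x - b)
```

This is the sense in which the paper "preserves the complexity of the subdomain local solvers": low-rank corrections are absorbed by extra solves with the already-factored local matrices.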
A novel method for EMG decomposition based on matched filters
Directory of Open Access Journals (Sweden)
Ailton Luiz Dias Siqueira Júnior
Introduction: Decomposition of electromyography (EMG) signals into the constituent motor unit action potentials (MUAPs) can allow for deeper insights into the underlying processes associated with the neuromuscular system. The vast majority of the methods for EMG decomposition found in the literature depend on complex algorithms and specific instrumentation. As an attempt to contribute to solving these issues, we propose a method based on a bank of matched filters for the decomposition of EMG signals. Methods: Four main units comprise our method: a bank of matched filters, a peak detector, a motor unit classifier and an overlapping resolution module. The system's performance was evaluated with simulated and real EMG data. Classification accuracy was measured by comparing the responses of the system with known data from the simulator and with the annotations of a human expert. Results: The results show that decomposition of non-overlapping MUAPs can be achieved with up to 99% accuracy for signals with up to 10 active motor units and a signal-to-noise ratio (SNR) of 10 dB. For overlapping MUAPs with up to 10 motor units per signal and an SNR of 20 dB, the technique allows for correct classification of approximately 71% of the MUAPs. The method is capable of processing, decomposing and classifying a 50 ms window of data in less than 5 ms using a standard desktop computer. Conclusion: This article contributes to the ongoing research on EMG decomposition by describing a novel technique capable of delivering high rates of success by means of a fast algorithm, suggesting its possible use in future real-time embedded applications, such as myoelectric prosthesis control and biofeedback systems.
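A toy version of the matched-filter and peak-detector units: correlate the signal against a known template and keep thresholded local maxima of the score. The template shape, onset times, and noise level below are synthetic placeholders, not the article's simulator data.

```python
import numpy as np

# Known MUAP-like template and a noisy signal containing three occurrences.
rng = np.random.default_rng(2)
template = np.array([0.0, 0.5, 1.0, -1.0, -0.5, 0.0])  # toy MUAP shape
signal = np.zeros(300)
true_onsets = [50, 140, 230]
for k in true_onsets:
    signal[k:k + template.size] += template
signal += 0.05 * rng.standard_normal(signal.size)

# Matched filtering = correlation of the signal with the template;
# the score peaks where the template is aligned with an occurrence.
score = np.correlate(signal, template, mode='valid')
threshold = 0.7 * np.dot(template, template)  # fraction of the ideal peak

# Peak detector: thresholded local maxima of the correlation score.
detected = [i for i in range(score.size)
            if score[i] > threshold
            and score[i] == score[max(0, i - 3):i + 4].max()]
```

A full decomposition system would add the classifier and overlap-resolution stages on top of these detections; superimposed MUAPs are exactly where a single matched filter is no longer sufficient.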
Directory of Open Access Journals (Sweden)
Cancan Yi
2016-01-01
Variational mode decomposition (VMD) is a new method of signal adaptive decomposition. In the VMD framework, the vibration signal is decomposed into multiple mode components by Wiener filtering in the Fourier domain, and the center frequency of each mode component is updated as the center of gravity of the mode's power spectrum. Therefore, each decomposed mode is compact around a center pulsation and has a limited bandwidth. Because the penalty parameter and the number of components affect the decomposition quality of the VMD algorithm, a novel method of fault feature extraction based on the combination of VMD and the particle swarm optimization (PSO) algorithm is proposed. In this paper, the numerical simulation and the measured fault signals of a rolling bearing experiment system are analyzed by the proposed method. The results indicate that the proposed method is much more robust to sampling and noise. Additionally, the proposed method has an advantage over EMD in complicated signal decomposition and can be utilized as a potential method for extracting the faint fault information of rolling bearings, compared with the common method of envelope spectrum analysis.
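The PSO half of the scheme, tuning parameters by minimizing a fitness function, can be sketched generically. Below, a plain swarm minimizes a 2-D quadratic; in the paper, the fitness would instead score a VMD decomposition as a function of the penalty parameter and mode count. The swarm size, coefficients, and objective are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):  # toy objective with minimum at (1, -2)
    return (x[..., 0] - 1.0)**2 + (x[..., 1] + 2.0)**2

n_particles, iters = 30, 200
pos = rng.uniform(-5, 5, (n_particles, 2))
vel = np.zeros((n_particles, 2))
pbest = pos.copy()                      # per-particle best positions
pbest_val = fitness(pos)
gbest = pbest[pbest_val.argmin()].copy()  # swarm-wide best position

w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients
for _ in range(iters):
    r1 = rng.random((n_particles, 2))
    r2 = rng.random((n_particles, 2))
    vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
    pos = pos + vel
    val = fitness(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

Swapping `fitness` for a measure of VMD decomposition quality (for instance, an envelope-entropy criterion over the extracted modes) turns this generic loop into the parameter search described in the paper.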
Topology Based Domain Search (TBDS)
National Research Council Canada - National Science Library
Manning, William
2002-01-01
This effort will explore radical changes in the way Domain Name System (DNS) is used by endpoints in a network to improve the resilience of the endpoint and its applications in the face of dynamically changing infrastructure topology...
Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology
Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul
2004-01-01
Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is able to overcome most of the difficulties associated with the solution of mixture design problems. The new methodology has been illustrated with the help of a case study involving the design of solve...
Gaussian beam shooting algorithm based on iterative frame decomposition
Ghannoum, Ihssan; Letrou, Christine; Beauquet, Gilles
2010-01-01
International audience; Adaptive beam re-shooting is proposed as a solution to overcome essential limitations of the Gaussian Beam Shooting technique. The proposed algorithm is based on iterative frame decompositions of beam fields in situations where usual paraxial formulas fail to give accurate enough results, such as interactions with finite obstacle edges. Collimated beam fields are successively re-expanded on narrow and wide window frames, allowing for re-shooting and further propagation...
Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods
Directory of Open Access Journals (Sweden)
Feng Ma
2014-01-01
Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are well suited to parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while convergence can still be established. Preliminary numerical tests on the stable principal component pursuit problem confirm the advantages of the enlarged parameter range.
Image Fakery Detection Based on Singular Value Decomposition
Directory of Open Access Journals (Sweden)
T. Basaruddin
2009-11-01
Full Text Available The growth of image processing technology now makes it easier for users to modify and fake images. Image fakery is the manipulation of part or all of an image, in either its content or its context, with the help of digital image processing techniques. Faked images are barely recognizable because they look so natural. Nevertheless, numerical computation techniques are able to detect the evidence of faking. This research successfully applies the singular value decomposition (SVD) method to detect image fakery. An image preprocessing algorithm applied prior to the detection process yields two vectors orthogonal to the singular value vector, which are important for detecting fake images. Experiments on images under several conditions successfully detect the fake images with a threshold value of 0.2. Singular value decomposition-based detection of image fakery can thus be used to accurately investigate fake images modified from original images.
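The detection idea can be caricatured with a plain SVD comparison. The sketch below flags an image whose normalized singular-value spectrum deviates from a reference by more than the reported threshold of 0.2; this signature choice is an assumption for illustration, not the paper's exact preprocessing:

```python
import numpy as np

def svd_signature(img):
    """Normalized singular-value spectrum of a grayscale image (or block)."""
    s = np.linalg.svd(np.asarray(img, dtype=float), compute_uv=False)
    return s / (np.linalg.norm(s) + 1e-12)

def is_tampered(original, suspect, threshold=0.2):
    """Flag the suspect image when its SVD signature deviates from the
    original's by more than the threshold (Euclidean distance)."""
    a, b = svd_signature(original), svd_signature(suspect)
    n = min(len(a), len(b))
    return bool(np.linalg.norm(a[:n] - b[:n]) > threshold)
```

An unmodified copy gives a distance of zero; structural edits perturb the spectrum and push the distance past the threshold.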
A Compressed Sensing Based Decomposition of Electrodermal Activity Signals.
Jain, Swayambhoo; Oswal, Urvashi; Xu, Kevin Shuai; Eriksson, Brian; Haupt, Jarvis
2017-09-01
The measurement and analysis of electrodermal activity (EDA) offers applications in diverse areas ranging from market research to seizure detection and to human stress analysis. Unfortunately, the analysis of EDA signals is made difficult by the superposition of numerous components that can obscure the signal information related to a user's response to a stimulus. We show how simple preprocessing followed by a novel compressed sensing based decomposition can mitigate the effects of the undesired noise components and help reveal the underlying physiological signal. The proposed framework allows for decomposition of EDA signals with provable bounds on the recovery of user responses. We test our procedure on both synthetic and real-world EDA signals from wearable sensors and demonstrate that our approach allows for more accurate recovery of user responses as compared with the existing techniques.
Directory of Open Access Journals (Sweden)
Orović Irena
2010-01-01
Full Text Available The eigenvalue decomposition based on the S-method is employed to extract the specific time-frequency characteristics of speech signals. This approach is used to create a flexible speech watermark, shaped according to the time-frequency characteristics of the host signal. Also, the Hermite projection method is applied for the characterization of speech regions. Namely, time-frequency regions that contain voiced components are selected for watermarking. The watermark detection is performed in the time-frequency domain as well. The theory is tested on several examples.
Satellite Image Time Series Decomposition Based on EEMD
Directory of Open Access Journals (Sweden)
Yun-long Kong
2015-11-01
Full Text Available Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework of SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose relevant IMFs for the separation of seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results of 16-day compositions of Moderate Resolution Imaging Spectroradiometer (MODIS), Normalized Difference Vegetation Index (NDVI), and Global Environment Monitoring Index (GEMI) time series with disturbance illustrated the effectiveness and stability of the proposed approach for monitoring tasks, such as the detection of abrupt changes.
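Assuming the IMFs have already been extracted by an EEMD routine, the grouping step could look like the following sketch, where each IMF is assigned to the seasonal or trend component by its mean period, estimated from zero crossings. The period cutoff is an illustrative parameter, not a value from the paper:

```python
import numpy as np

def split_trend_seasonal(imfs, residue, period_cut):
    """Group IMFs (e.g. from EEMD) into seasonal and trend parts.
    An IMF's mean period is estimated as 2N / (number of zero crossings);
    IMFs with periods shorter than period_cut are treated as seasonal."""
    seasonal = np.zeros_like(residue, dtype=float)
    trend = residue.astype(float).copy()  # the residue carries the slow trend
    n = len(residue)
    for imf in imfs:
        crossings = np.count_nonzero(np.diff(np.signbit(imf)))
        mean_period = 2.0 * n / max(crossings, 1)
        if mean_period < period_cut:
            seasonal += imf
        else:
            trend += imf
    return trend, seasonal
```

For a 16-day composite series, a cutoff around the annual cycle would separate intra-annual seasonality from multi-year trend.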
Directory of Open Access Journals (Sweden)
Hua-Qing Wang
2014-01-01
Full Text Available Vibration signals of rolling element bearing faults are usually immersed in background noise, which makes the faults difficult to detect. Commonly used wavelet-based methods can reduce some types of noise, but there is still plenty of room for improvement due to the insufficient sparseness of vibration signals in the wavelet domain. In this work, in order to eliminate noise and enhance weak fault detection, a new kind of peak-based approach combined with multiscale decomposition and envelope demodulation is developed. First, to preserve effective middle-low frequency signals while making high frequency noise more significant, a peak-based piecewise recombination is utilized to convert middle frequency components into low frequency ones. The newly generated signal becomes smoother and thus has a sparser representation in the wavelet domain. Then a noise threshold is applied after wavelet multiscale decomposition, followed by the inverse wavelet transform and a backward peak-based piecewise transform. Finally, the amplitude of the fault characteristic frequency is enhanced by means of envelope demodulation. The effectiveness of the proposed method is validated by rolling bearing fault experiments. Compared with traditional wavelet-based analysis, experimental results show that fault features can be enhanced significantly and detected easily by the proposed method.
Benders’ Decomposition for Curriculum-Based Course Timetabling
DEFF Research Database (Denmark)
Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.
2018-01-01
In this paper we applied Benders’ decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems. The Benders’ algorithm was then employed to generate cuts that connected the time schedule and room allocation. We generated only feasibility cuts, meaning that most of the solutions we obtained from a mixed integer programming solver were infeasible; therefore, we also provided a heuristic in order to regain...
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
Domain Adaption Based on ELM Autoencoder
Directory of Open Access Journals (Sweden)
Wan-Yu Deng
2017-01-01
Full Text Available We propose a new ELM Autoencoder (ELM-AE) based domain adaption algorithm which describes the subspaces of the source and target domains by ELM-AE and then carries out subspace alignment to project the different domains into a common new space. By leveraging the nonlinear approximation ability and efficient one-pass learning ability of ELM-AE, the proposed domain adaption algorithm can efficiently seek a better cross-domain feature representation than linear feature representation approaches such as PCA, improving domain adaption performance. Extensive experimental results on the Office/Caltech-256 datasets show that the proposed algorithm can achieve better classification accuracy than the PCA subspace alignment algorithm and other state-of-the-art domain adaption algorithms in most cases.
Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology
DEFF Research Database (Denmark)
Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul
2004-01-01
Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is able to overcome most of the difficulties associated with the solution of mixture design problems. The new methodology has been illustrated with the help of a case study involving the design of solvent-antisolvent binary mixtures for crystallization of Ibuprofen.
Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms
Efendiev, Yalchin
2012-02-22
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows one to construct additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A detailed analysis of the abstract setting is provided. The proposed decomposition builds on a method of Galvis and Efendiev [Multiscale Model. Simul. 8 (2010) 1461-1483] developed for second order scalar elliptic problems with high contrast. Applications to the finite element discretizations of the second order elliptic problem in Galerkin and mixed formulation, the Stokes equations, and Brinkman's problem are presented. A number of numerical experiments for these problems in two spatial dimensions are provided. © EDP Sciences, SMAI, 2012.
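A minimal numerical illustration of the additive overlapping Schwarz idea, here as a damped iteration on a 1D model problem rather than the paper's abstract framework (no coarse space, two subdomains, exact local solves; the damping factor is an illustrative choice):

```python
import numpy as np

def poisson_matrix(n):
    """Standard 1D finite-difference Laplacian with Dirichlet conditions."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def additive_schwarz_solve(A, b, subdomains, iters=500):
    """Solve A x = b with a damped one-level additive Schwarz iteration:
    every sweep sums local subdomain corrections computed from the
    residual at the start of the sweep."""
    x = np.zeros_like(b, dtype=float)
    inv_blocks = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in subdomains]
    for _ in range(iters):
        r = b - A @ x  # residual is frozen for the whole (additive) sweep
        for idx, Ainv in zip(subdomains, inv_blocks):
            x[idx] += 0.5 * (Ainv @ r[idx])  # damping 0.5 for the overlap
    return x
```

With two subdomains the corrections overlap, so the 0.5 damping keeps the iteration contractive; the paper's coarse space becomes essential only as the number of subdomains grows.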
Chen, Yen-Yu
2007-10-01
This work proposes a novel bit-rate-reduced approach for reducing the memory required to store a remote diagnosis and rapidly transmitting it. In the work, an 8x8 Discrete Cosine Transform (DCT) approach is adopted to perform subband decomposition. Modified set partitioning in hierarchical trees (SPIHT) is then employed for data organization and entropy coding. The translation function can store the detailed characteristics of an image. A simple transformation to obtain DCT spectrum data in a single frequency domain decomposes the original signal into various frequency domains that can be further compressed by a wavelet-based algorithm. In this scheme, insignificant DCT coefficients that correspond to a particular spatial location in the high-frequency subbands can be employed to reduce redundancy by applying a proposed combined function in association with the modified SPIHT. Simulation results showed that the embedded DCT-CSPIHT image compression reduced the computational complexity to only a quarter of that of the wavelet-based subband decomposition, and improved the quality of the reconstructed medical image, as given by both the peak signal-to-noise ratio (PSNR) and the perceptual results, over JPEG2000 and the original SPIHT at the same bit rate. Additionally, since 8x8 fast DCT hardware implementations are commercially available, the proposed DCT-CSPIHT can perform well in high speed image coding and transmission.
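The 8x8 block-DCT subband step can be sketched as below. This is only the block transform with a crude coefficient truncation, not the paper's modified SPIHT coding; the `keep` parameter and the assumption of image dimensions divisible by 8 are illustrative:

```python
import numpy as np
from scipy.fft import dctn, idctn

def blockwise_dct(img, keep=4):
    """8x8 block DCT: zero out high-frequency coefficients outside the
    top-left keep x keep corner of each block, then invert.
    Assumes image dimensions are multiples of 8."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = dctn(img[i:i + 8, j:j + 8], norm="ortho")
            mask = np.zeros_like(c)
            mask[:keep, :keep] = 1  # keep only low-frequency subband
            out[i:i + 8, j:j + 8] = idctn(c * mask, norm="ortho")
    return out
```

Discarding the high-frequency corner of each block is the redundancy-reduction idea the abstract describes; the real scheme additionally entropy-codes the retained coefficients.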
Variance decomposition-based sensitivity analysis via neural networks
International Nuclear Information System (INIS)
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
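The classical variance decomposition the abstract refers to can be estimated by Monte Carlo with a pick-and-freeze scheme. The sketch below computes first-order Sobol' indices for a generic model function on U(0,1) inputs; in the paper's setting the expensive model evaluations would be replaced by the trained neural network surrogate:

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices (variance
    decomposition) using the A/B pick-and-freeze matrix scheme."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # freeze input i from B, vary the rest
        # estimator of V_i = Var(E[Y | X_i])
        Vi = np.mean(yB * (f(ABi) - yA))
        indices.append(Vi / var)
    return np.array(indices)
```

For the additive test model Y = X1 + 2 X2, the exact indices are 0.2 and 0.8, which the estimator recovers to Monte Carlo accuracy.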
CPUF - a chemical-structure-based polyurethane foam decomposition and foam response model.
Energy Technology Data Exchange (ETDEWEB)
Fletcher, Thomas H. (Brigham Young University, Provo, UT); Thompson, Kyle Richard; Erickson, Kenneth L.; Dowding, Kevin J.; Clayton, Daniel (Brigham Young University, Provo, UT); Chu, Tze Yao; Hobbs, Michael L.; Borek, Theodore Thaddeus III
2003-07-01
A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e. vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fractions within the individual finite element decreased below a set criterion. Element removal, referred to as "element death," creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the...
An efficient domain decomposition strategy for wave loads on surface piercing circular cylinders
DEFF Research Database (Denmark)
Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.
2014-01-01
A fully nonlinear domain decomposed solver is proposed for efficient computations of wave loads on surface piercing structures in the time domain. A fully nonlinear potential flow solver was combined with a fully nonlinear Navier–Stokes/VOF solver via generalized coupling zones of arbitrary shape. Sensitivity tests of the extent of the inner Navier–Stokes/VOF domain were carried out. Numerical computations of wave loads on surface piercing circular cylinders at intermediate water depths are presented. Four different test cases of increasing complexity were considered; 1) weakly nonlinear regular waves...
A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Eric J. Nava
2012-03-01
This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into Epochs. Vehicle assignment is performed sequentially in each Epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal Epoch setting during iterations regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.
DEFF Research Database (Denmark)
Madsen, Kristoffer Hougaard; Hansen, Lars Kai; Mørup, Morten
2009-01-01
representation we demonstrate how the class of objective functions that are separable in either time or frequency instances allow the gradient in the time or frequency domain to be converted to the opposing domain. We further demonstrate the usefulness of this framework for three different models: Shifted Non-negative Matrix Factorization, Convolutive Sparse Coding, and Smooth and Sparse Matrix Factorization. Matlab implementations of the proposed algorithms are available for download at www.erpwavelab.org.
Comparative Analysis of Wavelet-based Feature Extraction for Intramuscular EMG Signal Decomposition.
Ghofrani Jahromi, M; Parsaei, H; Zamani, A; Dehbozorgi, M
2017-12-01
Electromyographic (EMG) signal decomposition is the process by which an EMG signal is decomposed into its constituent motor unit potential trains (MUPTs). A major step in EMG decomposition is feature extraction, in which each detected motor unit potential (MUP) is represented by a feature vector. As with any other pattern recognition system, feature extraction has a significant impact on the performance of a decomposition system. EMG decomposition has been studied well and several systems were proposed, but the feature extraction step has not been investigated in detail. Several EMG signals were generated using a physiologically-based EMG signal simulation algorithm. For each signal, the firing patterns of motor units (MUs) provided by the simulator were used to extract the MUPs of each MU. For feature extraction, different wavelet families including Daubechies (db), Symlets, Coiflets, bi-orthogonal, reverse bi-orthogonal and discrete Meyer were investigated. Moreover, the possibility of reducing the dimensionality of the MUP feature vector is explored in this work. The MUPs represented using wavelet-domain features are transformed into a new coordinate system using Principal Component Analysis (PCA). The features were evaluated regarding their capability in discriminating MUPs of individual MUs. Extensive studies on different mother wavelet functions revealed that db2, coif1, sym5, bior2.2, bior4.4, and rbior2.2 are the best ones in differentiating MUPs of different MUs. The best results were achieved at the 4th detail coefficient. Overall, rbior2.2 outperformed all wavelet functions studied; nevertheless, for EMG signals composed of more than 12 MUPTs, the sym5 wavelet function is the best choice. Applying PCA slightly enhanced the results.
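The PCA step on wavelet-domain feature vectors can be sketched with a plain SVD. The function below is a generic projection; the wavelet features themselves are assumed to be precomputed (e.g. with a wavelet library), so the input is just a features-by-samples matrix:

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors (rows, e.g. wavelet coefficients of MUPs)
    onto the top-k principal components. Returns the reduced coordinates
    and the component directions."""
    X = features - features.mean(axis=0)  # center each feature
    # right singular vectors of the centered data are the principal axes
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return X @ Vh[:k].T, Vh[:k]
```

When the data truly lie in a k-dimensional subspace, the projection is lossless: projecting back with the returned components reproduces the centered data exactly.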
A Gyro Signal Characteristics Analysis Method Based on Empirical Mode Decomposition
Directory of Open Access Journals (Sweden)
Qinghua Zeng
2016-01-01
Full Text Available It is difficult to analyze a nonstationary gyro signal in detail with the Allan variance (AV) analysis method. A novel approach in the time-frequency domain for gyro signal characteristics analysis is proposed based on empirical mode decomposition and the Allan variance (EMDAV). The output signal of the gyro is decomposed by empirical mode decomposition (EMD) first, and then the decomposed signal is analyzed by the AV algorithm. Consequently, the gyro noise characteristics are demonstrated in the time-frequency domain in a three-dimensional (3D) manner. Practical data of a fiber optic gyro (FOG) and a MEMS gyro are processed by the AV method and the EMDAV algorithm separately. The results indicate that the details of gyro signal characteristics in different frequency bands can be described with the help of EMDAV, and the analysis dimensions are extended compared with the common AV. The proposed EMDAV, as a complementary tool to the AV which provides a theoretical reference for gyro signal preprocessing, is a general approach for the analysis and evaluation of gyro performance.
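A minimal Allan variance routine of the kind EMDAV would apply to each decomposed mode; the overlapping-estimator form used here is a common choice, assumed for illustration rather than taken from the paper:

```python
import numpy as np

def allan_variance(rate, fs, taus):
    """Overlapping Allan variance of a rate signal (e.g. gyro output).
    fs is the sample rate in Hz; taus are averaging times in seconds."""
    theta = np.cumsum(rate) / fs  # integrated signal (e.g. angle)
    out = []
    for tau in taus:
        m = int(tau * fs)  # samples per averaging cluster
        # second difference of the integrated signal at lag m
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        out.append(np.mean(d ** 2) / (2 * tau ** 2))
    return np.array(out)
```

For white noise of variance sigma^2 sampled at fs, the Allan variance at tau = m/fs is sigma^2/m, which gives a quick sanity check.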
Automated Decomposition of Model-based Learning Problems
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
Directory of Open Access Journals (Sweden)
Mingwei Zhang
2017-01-01
Full Text Available To reduce noise components from original microseismic waves, a comprehensive fine signal processing approach using the integrated decomposition analysis of the wave duration, frequency spectrum, and wavelet coefficient domain was developed and implemented. Distribution regularities of the wave component and redundant noise on the frequency spectrum and the wavelet coefficient domain were first expounded. The frequency threshold and wavelet coefficient threshold were determined for the identification and extraction of the effective wave component. The frequency components between the reconstructed microseismic wave and the original measuring signal were compared. The noise elimination effect via the scale-changed domain decomposition was evaluated. Interaction between the frequency threshold and the wavelet coefficient threshold in the time domain was discussed. The findings reveal that tri-domain decomposition analysis achieves the precise identification and extraction of the effective microseismic wave component and improves the reliability of waves by eliminating the redundant noise. The frequency threshold and the wavelet coefficient threshold on a specific time window are two critical parameters that determine the degree of precision for the identification of the extracted wave component. This research involves development of the proposed integrated domain decomposition method and provides a diverse view on the fine processing of the microseismic signal.
Texture analysis by fractal descriptors over the wavelet domain using a best basis decomposition
Florindo, J. B.; Bruno, O. M.
2016-02-01
This work proposes the development and study of a novel set of fractal descriptors for texture analysis. These descriptors are obtained by exploring the fractal-like relation among the coefficients and magnitudes of a particular type of wavelet decomposition, namely, the best basis selection. The proposed method is tested in the classification of three sets of textures from the literature: Brodatz, Vistex and USPTex. The method is also applied to a challenging real-world problem, which is the identification of species of plants from the Brazilian flora. The results are compared with other classical and state-of-the-art texture descriptors and demonstrate the efficiency of the proposed technique in this task.
XML Based Markup Languages for Specific Domains
Varde, Aparna; Rundensteiner, Elke; Fahrenholz, Sally
A challenging area in web based support systems is the study of human activities in connection with the web, especially with reference to certain domains. This includes capturing human reasoning in information retrieval, facilitating the exchange of domain-specific knowledge through a common platform and developing tools for the analysis of data on the web from a domain expert's angle. Among the techniques and standards related to such work, we have XML, the eXtensible Markup Language. This serves as a medium of communication for storing and publishing textual, numeric and other forms of data seamlessly. XML tag sets are such that they preserve semantics and simplify the understanding of stored information by users. Often domain-specific markup languages are designed using XML, with a user-centric perspective. Standardization bodies and research communities may extend these to include additional semantics of areas within and related to the domain. This chapter outlines the issues to be considered in developing domain-specific markup languages: the motivation for development, the semantic considerations, the syntactic constraints and other relevant aspects, especially taking into account human factors. Illustrating examples are provided from domains such as Medicine, Finance and Materials Science. Particular emphasis in these examples is on the Materials Markup Language MatML and the semantics of one of its areas, namely, the Heat Treating of Materials. The focus of this chapter, however, is not the design of one particular language but rather the generic issues concerning the development of domain-specific markup languages.
Energy Technology Data Exchange (ETDEWEB)
Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)
2013-02-15
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction and a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method has been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these types of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complement elimination.
Directory of Open Access Journals (Sweden)
Jesús García
2012-01-01
The application of a 3D domain decomposition finite-element method and spherical mode expansion to the design of a planar ESPAR (electronically steerable passive array radiator) made with probe-fed circular microstrip patches is presented in this work. A global generalized scattering matrix (GSM) in terms of spherical modes is obtained analytically from the GSMs of the isolated patches by using rotation and translation properties of spherical waves. The whole behaviour of the array is characterized, including all the mutual coupling effects between its elements. This procedure was first validated by analyzing an array of monopoles on a ground plane, and then applied to synthesize a prescribed radiation pattern by optimizing, with a genetic algorithm, the reactive loads connected to the feeding ports of the array of circular patches.
Quantum game theory based on the Schmidt decomposition
International Nuclear Information System (INIS)
Ichikawa, Tsubasa; Tsutsui, Izumi; Cheon, Taksu
2008-01-01
We present a novel formulation of quantum game theory based on the Schmidt decomposition, which has the merit that the entanglement of quantum strategies is manifestly quantified. We apply this formulation to 2-player, 2-strategy symmetric games and obtain a complete set of quantum Nash equilibria. Apart from those available with the maximal entanglement, these quantum Nash equilibria are extensions of the Nash equilibria in classical game theory. The phase structure of the equilibria is determined for all values of entanglement, and thereby the possibility of resolving the dilemmas by entanglement in the game of Chicken, the Battle of the Sexes, the Prisoners' Dilemma, and the Stag Hunt, is examined. We find that entanglement transforms these dilemmas into one another but cannot resolve them, except in the Stag Hunt game, where the dilemma can be alleviated to a certain degree.
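The Schmidt coefficients that quantify the entanglement of a two-qubit strategy can be computed numerically from an SVD of the reshaped state vector. A minimal sketch (not from the paper; the `schmidt_coefficients` helper is illustrative):

```python
import numpy as np

def schmidt_coefficients(state, dim_a, dim_b):
    """Schmidt coefficients of a bipartite pure state via SVD."""
    # Reshape the state vector into a dim_a x dim_b coefficient matrix;
    # its singular values are exactly the Schmidt coefficients.
    return np.linalg.svd(state.reshape(dim_a, dim_b), compute_uv=False)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, two equal coefficients
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(schmidt_coefficients(bell, 2, 2))  # [0.7071..., 0.7071...]
```

For the maximally entangled Bell state both coefficients equal 1/√2, while a product state has a single nonzero coefficient, which is how the decomposition makes the degree of entanglement explicit.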
Railway Wheel Flat Detection Based on Improved Empirical Mode Decomposition
Directory of Open Access Journals (Sweden)
Yifan Li
2016-01-01
This study explores the capacity of the improved empirical mode decomposition (EMD) in railway wheel flat detection. Aiming at the mode mixing problem of EMD, an EMD energy conservation theory and an intrinsic mode function (IMF) superposition theory are presented and derived, respectively. Based on these two theories, an improved EMD method is proposed. The advantage of the improved EMD is evaluated on a simulated vibration signal. The method is then applied to study the axle box vibration response caused by wheel flats, considering the influence of both track irregularity and vehicle running speed on the diagnosis results. Finally, the effectiveness of the proposed method is verified by a test rig experiment. The results demonstrate that the improved EMD can inhibit the mode mixing phenomenon and extract wheel fault characteristics effectively.
Dynamic Mode Decomposition based on Kalman Filter for Parameter Estimation
Shibata, Hisaichi; Nonomura, Taku; Takaki, Ryoji
2017-11-01
With the development of computational fluid dynamics, large-scale data can now be obtained. To model physical phenomena from such data, features of the flow field must be extracted. Dynamic mode decomposition (DMD) is a method that meets this requirement: it computes the dominant eigenmodes of a flow field by approximating the system matrix. From this point of view, DMD can be considered parameter estimation of the system matrix. To estimate these parameters, we propose a novel method based on the Kalman filter. Our numerical experiments indicate that the proposed method estimates the parameters more accurately than standard DMD methods. With this method, it is also possible to improve the estimation accuracy further if the characteristics of the noise acting on the system are given.
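Viewing DMD as parameter estimation of the system matrix can be made concrete with the standard SVD-based (exact) DMD, sketched below on snapshots generated from a known linear system; the Kalman-filter variant proposed in the paper replaces this batch estimate (the code here is an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def exact_dmd_eigs(X, Y, r):
    """Eigenvalues of the best-fit linear operator A with Y ≈ A X (exact DMD)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Project A onto the leading POD modes: Atilde = U* Y V S^{-1}
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(Atilde)

# Noise-free snapshots x_{k+1} = A x_k from a known 2x2 system
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
cols = [x]
for _ in range(20):
    x = A @ x
    cols.append(x)
M = np.column_stack(cols)
X, Y = M[:, :-1], M[:, 1:]    # time-shifted snapshot pairs
print(np.sort(exact_dmd_eigs(X, Y, r=2).real))  # recovers eigenvalues 0.7 and 0.9
```

On noise-free linear data the recovery is exact; the point of a Kalman-filter formulation is to keep the estimate accurate when the snapshots are noisy.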
Quantum Image Encryption Algorithm Based on Image Correlation Decomposition
Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun
2015-02-01
A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states, and a whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed previously, and one of the operations of the quantum random-phase gate, quantum rotation gate, and Hadamard transform is then randomly applied to each. The encrypted image is obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random-phase gate, the rotation angle, the binary sequence, and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute-force attack due to its very large key space and has lower computational complexity than its classical counterparts.
Quantum game theory based on the Schmidt decomposition
Energy Technology Data Exchange (ETDEWEB)
Ichikawa, Tsubasa; Tsutsui, Izumi [Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), Tsukuba 305-0801 (Japan); Cheon, Taksu [Laboratory of Physics, Kochi University of Technology, Tosa Yamada, Kochi 782-8502 (Japan)], E-mail: tsubasa@post.kek.jp, E-mail: izumi.tsutsui@kek.jp, E-mail: taksu.cheon@kochi-tech.ac.jp
2008-04-04
We present a novel formulation of quantum game theory based on the Schmidt decomposition, which has the merit that the entanglement of quantum strategies is manifestly quantified. We apply this formulation to 2-player, 2-strategy symmetric games and obtain a complete set of quantum Nash equilibria. Apart from those available with the maximal entanglement, these quantum Nash equilibria are extensions of the Nash equilibria in classical game theory. The phase structure of the equilibria is determined for all values of entanglement, and thereby the possibility of resolving the dilemmas by entanglement in the game of Chicken, the Battle of the Sexes, the Prisoners' Dilemma, and the Stag Hunt, is examined. We find that entanglement transforms these dilemmas into one another but cannot resolve them, except in the Stag Hunt game, where the dilemma can be alleviated to a certain degree.
Directory of Open Access Journals (Sweden)
Eugenio Aulisa
2009-04-01
Solving complex coupled processes involving fluid-structure-thermal interactions is a challenging problem in computational sciences and engineering. Numerous public-domain and commercial codes are currently available in the areas of Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and Computational Thermodynamics (CTD). Different groups specializing in modelling individual processes such as CSD, CFD, and CTD often come together to solve a complex coupled application. Direct numerical simulation of the non-linear equations for even the most simplified fluid-structure-thermal interaction (FSTI) model depends on the convergence of iterative solvers, which in turn rely heavily on the properties of the coupled system. The purpose of this paper is to introduce a flexible multilevel algorithm with finite elements that can be used to study coupled FSTI. The method relies on decomposing the complex global domain into several local sub-domains, solving smaller problems over these sub-domains, and then gluing back the local solutions in an efficient and accurate fashion to yield the global solution. Our numerical results suggest that the proposed solution methodology is robust and reliable.
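The decompose-solve-glue idea can be illustrated with the classical alternating Schwarz iteration on a 1D Poisson problem — a toy sketch of overlapping domain decomposition, not the paper's multilevel FSTI algorithm:

```python
import numpy as np

n, h = 49, 1.0 / 50            # interior nodes of (0, 1); -u'' = 1, u(0) = u(1) = 0
f = np.zeros(n + 2); f[1:-1] = 1.0
u = np.zeros(n + 2)            # global iterate, including boundary values

def solve_sub(a, b):
    """Solve the subproblem on nodes a..b with Dirichlet data from the current u."""
    m = b - a + 1
    A = (2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = f[a:b+1].copy()
    rhs[0]  += u[a-1] / h**2   # "gluing": current interface values act as BCs
    rhs[-1] += u[b+1] / h**2
    u[a:b+1] = np.linalg.solve(A, rhs)

for _ in range(50):            # alternate over two overlapping subdomains
    solve_sub(1, 30)           # left subdomain, nodes 1..30
    solve_sub(20, n)           # right subdomain, nodes 20..49 (overlap 20..30)

x = np.arange(n + 2) * h
err = np.max(np.abs(u - 0.5*x*(1 - x)))   # exact solution is u = x(1-x)/2
print(err)                                 # tiny: the glued iterate matches the global solve
```

The overlap is what drives convergence: each subdomain solve improves the interface data seen by its neighbor, and the iteration contracts geometrically toward the single-domain solution.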
Modal Identification of Output-only Systems Using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.M.; Andersen, Palle
2001-01-01
In this paper a new frequency domain technique is introduced for the modal identification of output-only systems, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
Modal Identification of Output-Only Systems using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
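The core of frequency domain decomposition can be sketched in a few lines: estimate the cross-spectral density matrix of the measured responses, then take an SVD at the peak frequency; the first singular vector estimates the mode shape. A toy two-channel illustration on synthetic data (not from the papers above):

```python
import numpy as np
from scipy.signal import csd

fs = 256.0
t = np.arange(0, 60, 1/fs)
rng = np.random.default_rng(1)
phi = np.array([1.0, 2.0]) / np.sqrt(5.0)          # assumed "true" mode shape
X = np.outer(phi, np.sin(2*np.pi*10*t))            # single mode responding at 10 Hz
X += 0.05 * rng.standard_normal(X.shape)           # output-only: unknown input + noise

# Cross-spectral density matrix G(f) between the two channels
f, _ = csd(X[0], X[0], fs=fs, nperseg=1024)
G = np.empty((2, 2, f.size), complex)
for i in range(2):
    for j in range(2):
        _, G[i, j] = csd(X[i], X[j], fs=fs, nperseg=1024)

k = np.argmax(np.abs(G[0, 0] + G[1, 1]))           # resonance peak bin
U, s, _ = np.linalg.svd(G[:, :, k])
print(f[k], np.abs(np.vdot(U[:, 0], phi)))         # peak near 10 Hz; alignment near 1
```

At the peak, the spectral density matrix is close to a rank-one outer product of the mode shape, which is why the leading singular vector recovers it without any knowledge of the excitation.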
Directory of Open Access Journals (Sweden)
Elias D. Nino-Ruiz
2017-07-01
In this paper, a matrix-free posterior ensemble Kalman filter implementation based on a modified Cholesky decomposition is proposed. The method works as follows: the precision matrix of the background error distribution is estimated based on a modified Cholesky decomposition. The resulting estimator can be expressed in terms of Cholesky factors, which can be updated based on a series of rank-one matrices in order to approximate the precision matrix of the analysis distribution. By using this matrix, the posterior ensemble can be built by either sampling from the posterior distribution or using synthetic observations. Furthermore, the computational effort of the proposed method is linear with regard to the model dimension and the number of observed components from the model domain. Experimental tests are performed making use of the Lorenz-96 model. The results reveal that the accuracy of the proposed implementation in terms of root-mean-square error is similar to, and in some cases better than, that of a well-known ensemble Kalman filter (EnKF) implementation: the local ensemble transform Kalman filter. In addition, the results are comparable to those obtained by the EnKF with large ensemble sizes.
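The precision estimate via a modified Cholesky decomposition can be sketched as follows: each variable is regressed on its (localized) predecessors across the ensemble, and the regression coefficients and residual variances form sparse factors of the precision matrix. A simplified stand-alone sketch (function name and the diagonal-covariance test case are illustrative, not the paper's code):

```python
import numpy as np

def modified_cholesky_precision(E, radius):
    """Precision estimate B^{-1} = (I - T)^T D^{-1} (I - T) from an
    n_vars x n_members ensemble E, with predecessors limited to a radius."""
    n = E.shape[0]
    A = E - E.mean(axis=1, keepdims=True)          # ensemble deviations
    T = np.zeros((n, n))                            # strictly lower triangular factor
    d = np.empty(n)                                 # residual variances
    d[0] = A[0].var(ddof=1)
    for i in range(1, n):
        p = list(range(max(0, i - radius), i))      # localized predecessors of x_i
        Z = A[p].T                                  # members x predecessors
        coef, *_ = np.linalg.lstsq(Z, A[i], rcond=None)
        T[i, p] = coef
        d[i] = (A[i] - Z @ coef).var(ddof=1)        # residual variance of the fit
    L = np.eye(n) - T
    return L.T @ np.diag(1.0 / d) @ L

rng = np.random.default_rng(2)
E = 2.0 * rng.standard_normal((10, 500))            # samples with true covariance 4*I
P = modified_cholesky_precision(E, radius=2)
print(np.diag(P).round(2))                          # each entry near 1/4
```

The localization radius is what makes the cost linear in the model dimension: each regression involves only a constant number of predecessors, and the resulting factors are banded.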
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple-image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are processed by a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. The scrambled images are encrypted into phase information, and phase keys are obtained from the difference between the scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as the input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
Robust image watermarking based on multiband wavelets and empirical mode decomposition.
Bi, Ning; Sun, Qiyu; Huang, Daren; Yang, Zhihua; Huang, Jiwu
2007-08-01
In this paper, we propose a blind image watermarking algorithm based on the multiband wavelet transformation and the empirical mode decomposition. Unlike watermarking algorithms based on the traditional two-band wavelet transform, where the watermark bits are embedded directly on the wavelet coefficients, in the proposed scheme we embed the watermark bits in the mean trend of some middle-frequency subimages in the wavelet domain. We further select an appropriate dilation factor and filters in the multiband wavelet transform to achieve better performance in terms of perceptual invisibility and robustness of the watermark. The experimental results show that the proposed blind watermarking scheme is robust against JPEG compression, Gaussian noise, salt and pepper noise, median filtering, and ConvFilter attacks. The comparison analysis demonstrates that our scheme performs better than recently reported watermarking schemes.
Directory of Open Access Journals (Sweden)
Qiuming Cheng
2007-06-01
The patterns shown on two-dimensional images (fields) used in geosciences reflect the end products of geo-processes that occurred on the surface and in the subsurface of the Earth. Anisotropy of these types of patterns can provide information useful for interpreting geo-processes and identifying features in the mapped area. Quantification of the anisotropy property is therefore essential for image processing and interpretation. This paper introduces several techniques newly developed on the basis of multifractal modeling in the space, Fourier frequency, and eigen domains, respectively. A singularity analysis method implemented in the space domain can be used to quantify the intensity and anisotropy of local singularities. The second method, called S-A, characterizes the generalized scale invariance property of a field in the Fourier frequency domain. The third method characterizes the field using a power-law model on the basis of eigenvalues and eigenvectors of the field. The applications of these methods are demonstrated with a case study of environmental scanning electron microscope (ESEM) microimages for identification of sphalerite (ZnS) ore minerals from the Jinding Pb/Zn/Ag mineral deposit in Shangjiang District, Yunnan Province, China.
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and, by approximating the sources as monopole and radial dipole transducers, avoids modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effect of the control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array, while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
Identifying key nodes in multilayer networks based on tensor decomposition
Wang, Dingjie; Wang, Haitao; Zou, Xiufen
2017-06-01
The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify real important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.
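A rank-1 CANDECOMP/PARAFAC decomposition — the building block behind centralities such as EDCPTD — can be computed by alternating least squares. A minimal sketch for a 3-way tensor (the full EDCPTD centrality additionally aggregates the factor vectors across modes; that step is omitted here):

```python
import numpy as np

def rank1_cp(T, iters=100):
    """Rank-1 CANDECOMP/PARAFAC approximation of a 3-way tensor via
    alternating least squares (higher-order power iteration)."""
    a = np.ones(T.shape[0]); b = np.ones(T.shape[1]); c = np.ones(T.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
    lam = np.linalg.norm(c); c /= lam                 # weight and unit factors
    return lam, a, b, c

# Recover the factors of an exactly rank-1 tensor u ∘ v ∘ w
u, v, w = np.array([1., 2.]), np.array([3., 4.]), np.array([5., 6.])
T = np.einsum('i,j,k->ijk', u, v, w)
lam, a, b, c = rank1_cp(T)
print(np.allclose(lam * np.einsum('i,j,k->ijk', a, b, c), T))  # True
```

For a multilayer network stored as a 4th-order tensor the same alternating scheme applies mode by mode, and node importance can then be read off from the magnitudes of the node-mode factor entries.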
Xiang, Deliang; Tang, Tao; Ban, Yifang; Su, Yi; Kuang, Gangyao
2016-06-01
Since it has been validated that cross-polarized scattering (HV) is caused not only by vegetation but also by rotated dihedrals, in this study we use rotated dihedral corner reflectors to form a cross scattering matrix and propose an extended four-component model-based decomposition method for PolSAR data over urban areas. Unlike other urban area decomposition techniques, which need to discriminate urban from natural areas before decomposition, the proposed method is applied directly to the PolSAR image. The building orientation angle is considered in this scattering matrix, making it flexible and adaptive in the decomposition. Therefore, we can separate the cross scattering of urban areas from the overall HV component. Further, the cross and helix scattering components are also compared. Then, using these decomposed scattering powers, buildings and natural areas can be easily discriminated from each other using a simple unsupervised K-means classifier. Moreover, buildings aligned and not aligned along the radar flight direction can also be distinguished clearly. Spaceborne RADARSAT-2 and airborne AIRSAR full polarimetric SAR data are used to validate the performance of the proposed method. The cross scattering power of oriented buildings is generated, leading to a better decomposition result for urban areas with respect to other state-of-the-art urban decomposition techniques. The decomposed scattering powers significantly improve the classification accuracy for urban areas.
Iris identification system based on Fourier coefficients and singular value decomposition
Somnugpong, Sawet; Phimoltares, Suphakant; Maneeroj, Saranya
2011-12-01
Nowadays, both personal identification and classification are very important. In order to identify a person for security applications, physical or behavioral characteristics of individuals with high uniqueness may be analyzed. Biometrics has become the most widely used approach for personal identification, and many types of biometric information are currently in use. In this work, the iris is considered because of its uniqueness and collectability. A common problem of iris recognition systems is the limited space available to store data in a variety of environments. This work proposes an iris recognition system with a small feature vector, reducing space complexity. In our approach, each iris is represented in the frequency domain and classified with a neural network model. First, the Fast Fourier Transform (FFT) is used to compute the discrete Fourier coefficients of the iris data. Once the iris data has been transformed into a frequency-domain matrix, Singular Value Decomposition (SVD) is used to reduce the complex matrix to a single vector. These vectors are then input to a neural network for the classification step. The merit of our technique is that the feature vector is smaller than those of other techniques, with an acceptable level of accuracy compared with other existing techniques.
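The FFT-then-SVD compression stage can be sketched directly: the singular values of the magnitude spectrum form a feature vector whose length is only min(H, W). A sketch with a random stand-in image (not real iris data; the function name is illustrative):

```python
import numpy as np

def compact_feature(img):
    """FFT magnitude spectrum -> SVD -> singular-value vector as a compact feature."""
    spectrum = np.abs(np.fft.fft2(img))
    # Singular values summarize the 2D spectrum in a vector of length min(H, W)
    return np.linalg.svd(spectrum, compute_uv=False)

rng = np.random.default_rng(3)
iris = rng.random((64, 256))          # stand-in for an unwrapped iris strip
feat = compact_feature(iris)
print(feat.shape)                      # (64,) -- far smaller than the 64*256 image
```

The resulting 64-element vector, rather than the full spectrum, is what would be fed to the neural network classifier, which is the source of the space savings the abstract describes.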
Zhang, Mingwei; Meng, Qingbin; Liu, Shengdong; Shimada, Hideki
2017-01-01
To reduce noise components in original microseismic waves, a comprehensive fine signal processing approach using the integrated decomposition analysis of the wave duration, frequency spectrum, and wavelet coefficient domain was developed and implemented. Distribution regularities of the wave component and redundant noise in the frequency spectrum and the wavelet coefficient domain were first expounded. The frequency threshold and wavelet coefficient threshold were determined for the identification ...
A domain decomposition method for pseudo-spectral electromagnetic simulations of plasmas
International Nuclear Information System (INIS)
Vay, Jean-Luc; Haber, Irving; Godfrey, Brendan B.
2013-01-01
Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates the solution analytically over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization owing to the global communications associated with global FFTs on the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses, and on Particle-In-Cell simulations of wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell's equations, and the finite speed of light to limit the communication of data within guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases that have been presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Carstensen, Jens Michael
We want to test the applicability of kernel based eigen-decomposition methods, compared to the traditional eigen-decomposition methods. We have implemented and tested three kernel based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral...
Multi-Domain Modeling Based on Modelica
Directory of Open Access Journals (Sweden)
Liu Jun
2016-01-01
With the application of simulation technology to large-scale and multi-field problems, multi-domain unified modeling has become an effective way to solve these problems. This paper introduces several basic methods and advantages of multidisciplinary modeling, and focuses on simulation based on the Modelica language. Modelica/MWorks is newly developed simulation software featuring an object-oriented, acausal language for modeling large, multi-domain systems, which makes models easier to grasp, develop, and maintain. This article demonstrates a single-degree-of-freedom mechanical vibration system built in MWorks using Modelica's connection mechanism. This multi-domain modeling approach is simple and feasible, offers high reusability, is closer to the physical system, and has many other advantages.
NEW METHOD FOR FAST IMAGE EDGE DETECTION BASED ON SUBBAND DECOMPOSITION
Directory of Open Access Journals (Sweden)
Chong-Yang Hao
2011-05-01
A new method for detecting the edges of an image is presented in this article. The method uses a kind of two-dimensional subband spectrum analysis (2D-SSA) filter based on subband decomposition, which makes it very convenient to obtain the edge frequency spectrum of an image after certain preprocessing. Compared with spatial methods, the method is less sensitive to noise. It is also superior to conventional frequency methods, in which the bandwidth and central frequency of the filter are fixed and the whole image must be transformed into the frequency domain. In this method, the bandwidth and central frequency can be adjusted flexibly, and only a few pixels are needed to implement the FFT, so the method is a fast way to extract the edges of an image. The simulation results show its efficiency.
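The frequency-domain idea can be sketched with a crude high-pass variant: zero out the low spatial frequencies and invert the FFT, so the remaining energy concentrates at edges. This is only a simplified stand-in for the 2D-SSA filter, with the cutoff playing the role of the adjustable central frequency/bandwidth:

```python
import numpy as np

def fft_edges(img, cutoff):
    """Crude frequency-domain edge detector: remove low frequencies, invert."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Zero a disc of low spatial frequencies; the cutoff is freely adjustable
    F[np.hypot(yy - h//2, xx - w//2) < cutoff] = 0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F)))

img = np.zeros((32, 32)); img[:, 16:] = 1.0          # vertical step edge
resp = fft_edges(img, cutoff=4).mean(axis=0)          # per-column edge response
print(int(np.argmax(resp)))                           # a column at the step (or its periodic wrap)
```

Because the DFT treats the image as periodic, the step also produces a wrap-around edge at the image border; both show up as response peaks, while flat regions stay quiet.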
QR-decomposition based SENSE reconstruction using parallel architecture.
Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad
2018-02-02
Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies the inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU-based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU-based SENSE reconstruction is evaluated against single-core and multi-core CPU implementations using OpenMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction as compared to the multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images.
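The core linear-algebra step — inverting the rectangular encoding matrix via QR — reduces to a triangular solve. A minimal CPU sketch with a random complex matrix standing in for the coil-sensitivity-derived SENSE encoding matrix (not the paper's GPU kernel):

```python
import numpy as np

rng = np.random.default_rng(4)
E = rng.standard_normal((12, 4)) + 1j * rng.standard_normal((12, 4))  # tall encoding matrix
y = rng.standard_normal(12) + 1j * rng.standard_normal(12)            # folded pixel samples

# Least-squares unfolding via QR: E = Q R, then solve R x = Q^H y
Q, R = np.linalg.qr(E)                 # reduced QR: Q is 12x4, R is 4x4 upper triangular
x_qr = np.linalg.solve(R, Q.conj().T @ y)

x_ls, *_ = np.linalg.lstsq(E, y, rcond=None)
print(np.allclose(x_qr, x_ls))         # True: matches the least-squares solution
```

QR avoids forming the normal equations E^H E explicitly, which is numerically better conditioned; on a GPU the factorization and the per-pixel triangular solves are the parallelizable pieces.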
Accurate tempo estimation based on harmonic + noise decomposition
Alonso, Miguel; Richard, Gael; David, Bertrand
2006-12-01
We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming stage searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as the tempo. Our proposal is validated using a manually annotated test base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.
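The periodicity-salience stage can be illustrated with the simplest possible version: the autocorrelation of an accent envelope, whose strongest non-zero lag gives the beat period. A toy sketch (the real system scores many candidate periods and tracks them with dynamic programming):

```python
import numpy as np

def estimate_period(onset_env, min_lag, max_lag):
    """Most salient periodicity of an accent/onset envelope via autocorrelation
    (a simplified stand-in for the periodicity-salience stage)."""
    ac = np.correlate(onset_env, onset_env, mode='full')[len(onset_env) - 1:]
    return min_lag + int(np.argmax(ac[min_lag:max_lag]))

fs = 100                      # accent-envelope sample rate (Hz)
env = np.zeros(1000)
env[::50] = 1.0               # an accent every 0.5 s
lag = estimate_period(env, 10, 200)
print(lag, 60.0 * fs / lag)   # 50 samples -> 120 BPM
```

Restricting the lag search range encodes the plausible tempo range (here 30–600 BPM), which is also how real systems avoid picking trivial sub-multiples of the period.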
Energy Technology Data Exchange (ETDEWEB)
Behrens, R.; Minier, L.
1998-03-24
The thermal decomposition of ammonium perchlorate (AP) and ammonium-perchlorate-based composite propellants is studied using the simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) technique. The main objective of the present work is to evaluate whether the STMBMS can provide new data on these materials that will have sufficient detail on the reaction mechanisms and associated reaction kinetics to permit creation of a detailed model of the thermal decomposition process. Such a model is a necessary ingredient to engineering models of ignition and slow-cookoff for these AP-based composite propellants. Results show that the decomposition of pure AP is controlled by two processes. One occurs at lower temperatures (240 to 270 C), produces mainly H₂O, O₂, Cl₂, N₂O and HCl, and is shown to occur in the solid phase within the AP particles. 200 μm diameter AP particles undergo 25% decomposition in the solid phase, whereas 20 μm diameter AP particles undergo only 13% decomposition. The second process is dissociative sublimation of AP to NH₃ + HClO₄ followed by the decomposition of, and reaction between, these two products in the gas phase. The dissociative sublimation process occurs over the entire temperature range of AP decomposition, but only becomes dominant at temperatures above those for the solid-phase decomposition. AP-based composite propellants are used extensively in both small tactical rocket motors and large strategic rocket systems.
Energy Technology Data Exchange (ETDEWEB)
Burns, S.P.; Christon, M.A.
1996-11-01
Parallelism for gray participating media radiation heat transfer may be placed in two primary categories: spatial and angular domain-based parallelism. Angular (e.g., ray-based) decomposition has received the greatest attention in the open literature for moderate-sized applications where the entire geometry may be placed on each processor. Angular-based decomposition is limited, however, for large-scale applications (O(10^6) to O(10^8) computational cells) given the memory required to store computational grids of this size on each processor. Therefore, the objective of this work is to examine the application of spatial domain-based parallelism to large-scale, three-dimensional, participating-media radiation transport calculations using a massively parallel supercomputer architecture. Both scaled and fixed problem size efficiencies are presented for an application of the Discrete Ordinate method to a three-dimensional, non-scattering radiative transport application with nonuniform absorptivity. The data presented show that the spatial domain-based decomposition paradigm results in some degradation in the parallel efficiency but provides useful speedup for large computational grids.
Decomposition-Based Decision Making for Aerospace Vehicle Design
Borer, Nicholas K.; Mavris, DImitri N.
2005-01-01
reader to observe how this technique can be applied to aerospace systems design and compare the results of this so-called Decomposition-Based Decision Making to more traditional design approaches.
Li, Xin; Wang, Huihui; Wang, Yueru; Zhao, Fangfang
2011-12-01
Because the frequency content of intrinsic mode functions (IMFs) obtained from the temporal and spatial filtering of empirical mode decomposition (EMD) can overlap, useful signals and noise may be filtered out together. We therefore propose a method in which the number of IMFs is determined by an energy estimate, combining temporal and spatial filtering with wavelet thresholding and EMD, and integrating the local time- and scale-domain characteristics of the wavelet. This method not only exploits the multi-resolution features of the wavelet transform, but also combines EMD and Hilbert decomposition for adaptive spectral analysis of the instantaneous frequency and the relationship between significance and energy, so as to solve the problem of useful signals being weakened. With standard records from the MIT/BIH ECG database as test subjects, experimental results showed that this is an effective data processing method for this type of physiological signal under strong noise.
Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K
2016-02-07
Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise, and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from to , there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study include: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
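The direct-matrix-inversion baseline that HYPR-NLM is compared against can be sketched as a per-pixel 2x2 solve. The basis-material attenuation coefficients below are made-up numbers for illustration only:

```python
import numpy as np

# Sketch of the "direct matrix inversion" DECT baseline: per pixel,
# the low/high-kVp attenuations are a linear mix of two basis materials.
# The 2x2 mixing matrix here is assumed, not from the paper.
M = np.array([[0.38, 0.26],    # (iodine, water) attenuation at low kVp
              [0.20, 0.22]])   # (iodine, water) attenuation at high kVp

def decompose(mu_low, mu_high):
    """Return (iodine, water) basis images from the two energy images."""
    mu = np.stack([mu_low.ravel(), mu_high.ravel()])      # 2 x npix
    basis = np.linalg.solve(M, mu)                        # invert per pixel
    return basis[0].reshape(mu_low.shape), basis[1].reshape(mu_low.shape)

# Synthesize energy images from known basis maps, then recover them.
iodine = np.array([[1.0, 0.0], [0.5, 0.0]])
water  = np.array([[0.0, 1.0], [1.0, 1.0]])
mu_l = M[0, 0] * iodine + M[0, 1] * water
mu_h = M[1, 0] * iodine + M[1, 1] * water
i_hat, w_hat = decompose(mu_l, mu_h)
```

Because the mixing matrix is nearly singular at clinical energies, this inversion amplifies noise, which is the motivation for the filtered and iterative methods the paper compares.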
A Hybrid, Parallel Krylov Solver for MODFLOW using Schwarz Domain Decomposition
Sutanudjaja, E.; Verkaik, J.; Hughes, J. D.
2015-12-01
In order to support decision makers in solving hydrological problems, detailed high-resolution models are often needed. These models typically consist of a large number of computational cells and have large memory requirements and long run times. An efficient technique for obtaining realistic run times and memory requirements is parallel computing, where the problem is divided over multiple processor cores. The new Parallel Krylov Solver (PKS) for MODFLOW-USG is presented. It combines both distributed memory parallelization by the Message Passing Interface (MPI) and shared memory parallelization by Open Multi-Processing (OpenMP). PKS includes conjugate gradient and biconjugate gradient stabilized linear accelerators that are both preconditioned by an overlapping additive Schwarz preconditioner in such a way that: a) subdomains are partitioned using the METIS library; b) each subdomain uses local memory only and communicates with other subdomains by MPI within the linear accelerator; and c) it is fully integrated in the MODFLOW-USG code. PKS is based on the unstructured PCGU-solver, and supports OpenMP. Depending on the available hardware, PKS can run exclusively with MPI, exclusively with OpenMP, or with a hybrid MPI/OpenMP approach. Benchmarks were performed on the Cartesius Dutch supercomputer (https://userinfo.surfsara.nl/systems/cartesius) using up to 144 cores, for a synthetic test (~112 million cells) and the Indonesia groundwater model (~4 million 1km cells). The latter, which includes all islands in the Indonesian archipelago, was built using publicly available global datasets, and is an ideal test bed for evaluating the applicability of PKS parallelization techniques to a global groundwater model consisting of multiple continents and islands. Results show that run time reductions can be greatest with the hybrid parallelization approach for the problems tested.
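A toy version of the overlapping Schwarz idea behind the PKS preconditioner can be shown on a 1D Poisson system with two overlapping subdomains. This is a sketch under assumed sizes (and a multiplicative rather than additive sweep), not the MODFLOW-USG implementation:

```python
import numpy as np

# Overlapping Schwarz iteration for the 1D Poisson system A u = f:
# each subdomain is solved exactly with Dirichlet data taken from the
# current global iterate, and the sweeps converge to the global solution.
n = 21
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.full(n, 1.0 / (n + 1) ** 2)

dom1, dom2 = np.arange(0, 13), np.arange(8, 21)   # 5-cell overlap (assumed)
u = np.zeros(n)
for _ in range(200):
    for idx in (dom1, dom2):                       # alternating sweep
        r = f - A @ u                              # global residual
        u[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])  # local solve
u_direct = np.linalg.solve(A, f)
```

In PKS the same local-solve structure becomes a preconditioner inside a Krylov accelerator, with METIS choosing the subdomains and MPI carrying the overlap exchange.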
Energy Technology Data Exchange (ETDEWEB)
Salloum, Maher N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Gharagozloo, Patricia E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2013-10-01
Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Secondly, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during the decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed occurring during the decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede the hydrogen extraction.
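The kinetics-fitting step can be illustrated with a standard Arrhenius fit in log coordinates. The rate constants below are synthetic stand-ins, not the UH3 data:

```python
import numpy as np

# Hypothetical illustration of extracting Arrhenius parameters
# (k = A * exp(-Ea / (R*T))) from rate-constant data, as one does when
# comparing empirical and physics-based kinetics models.
R = 8.314                         # gas constant, J/(mol K)
A_true, Ea_true = 1.0e6, 8.0e4    # assumed pre-exponential (1/s) and Ea (J/mol)

T = np.linspace(300.0, 1000.0, 15)           # temperature range from the abstract
k = A_true * np.exp(-Ea_true / (R * T))      # synthetic "measured" rates

# Linear fit in Arrhenius coordinates: ln k = ln A - (Ea/R) * (1/T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit, A_fit = -slope * R, np.exp(intercept)
```

With scattered literature data, the same fit would be repeated over resampled datasets to quantify the parameter uncertainty the abstract describes.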
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions
DEFF Research Database (Denmark)
Hansen, Per Christian; Jensen, Søren Holdt
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
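The signal-subspace idea surveyed here can be sketched with a plain truncated SVD (not the rank-revealing ULV/URV variants the paper covers): embed the noisy signal in a Hankel matrix, keep the dominant singular directions, and average anti-diagonals back into a signal.

```python
import numpy as np

# Minimal signal-subspace denoising sketch: rank-r truncation of a
# Hankel embedding. Window length m and rank r are assumptions.
def subspace_denoise(x, m, r):
    n = len(x) - m + 1
    H = np.array([x[i:i + m] for i in range(n)])      # n x m Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :r] * s[:r]) @ Vt[:r]                  # keep the signal subspace
    y = np.zeros(len(x)); cnt = np.zeros(len(x))
    for i in range(n):                                # average anti-diagonals
        y[i:i + m] += Hr[i]; cnt[i:i + m] += 1
    return y / cnt

rng = np.random.default_rng(1)
t = np.arange(400)
clean = np.sin(2 * np.pi * 0.03 * t)                  # rank-2 in Hankel form
noisy = clean + 0.4 * rng.standard_normal(400)
den = subspace_denoise(noisy, m=40, r=2)
```

A single sinusoid spans a two-dimensional subspace, so rank-2 truncation discards most of the noise energy while keeping the tone.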
Energy Technology Data Exchange (ETDEWEB)
Mehboob, Shoaib, E-mail: smehboob@pieas.edu.pk [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Mehmood, Mazhar [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmed, Mushtaq [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Ahmad, Jamil; Tanvir, Muhammad Tauseef [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmad, Izhar [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Hassan, Syed Mujtaba ul [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan)
2017-04-15
The objective of this work is to study the changes in optical and dielectric properties with the transformation of aluminum ammonium carbonate hydroxide (AACH) to α-alumina, using terahertz time domain spectroscopy (THz-TDS). The nanostructured AACH was synthesized by hydrothermal treatment of the raw chemicals at 140 °C for 12 h. This AACH was then calcined at different temperatures. The AACH was decomposed to an amorphous phase at 400 °C and transformed to δ* + α-alumina at 1000 °C. Finally, the crystalline α-alumina was achieved at 1200 °C. X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy were employed to identify the phases formed after calcination. The morphology of the samples was studied using scanning electron microscopy (SEM), which revealed that the AACH sample had a rod-like morphology that was retained in the calcined samples. THz-TDS measurements showed that AACH had the lowest refractive index in the frequency range of measurements. The refractive index at 0.1 THz increased from 2.41 for AACH to 2.58 for the amorphous phase and to 2.87 for the crystalline α-alumina. The real part of the complex permittivity increased with the calcination temperature. Further, the absorption coefficient was highest for AACH and decreased with calcination temperature. The amorphous phase had a higher absorption coefficient than the crystalline alumina. - Highlights: • Aluminum oxide nanostructures were obtained by thermal decomposition of AACH. • Crystalline phases of aluminum oxide have a higher refractive index than the amorphous phase. • The removal of heavier ionic species led to lower absorption of THz radiation.
Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.
2018-01-01
In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, the EMD is performed on the output of SPT, which results in two complex images, Z1 and Z2. Of these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e., one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attack, specific attack, and brute-force attack. Simulation results are presented in support of the proposed idea.
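The SVD / inverse-SVD step of the pipeline can be sketched as follows. The random complex field and the random phase masks are stand-ins for the Fresnel-propagated output and the SPMs, not the paper's actual functions:

```python
import numpy as np

# Sketch of the SVD and inverse-SVD steps: decompose a complex field,
# then recombine. Modulating the unitary factors with phase masks
# (here random stand-ins for the SPMs) scrambles the reconstruction.
rng = np.random.default_rng(2)
field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

U, s, Vh = np.linalg.svd(field)
recon = U @ np.diag(s) @ Vh                  # inverse SVD recovers the field

phase1 = np.exp(1j * rng.uniform(0, 2 * np.pi, U.shape))   # assumed SPM stand-in
phase2 = np.exp(1j * rng.uniform(0, 2 * np.pi, Vh.shape))  # assumed SPM stand-in
masked = (U * phase1) @ np.diag(s) @ (Vh * phase2)         # "encrypted" field
```

Without the masks the inverse SVD is exact; with them the recombined field no longer matches, which is the property the encryption scheme exploits.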
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look on the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
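For contrast with the heap-transform approach, the textbook real Givens-rotation QR that the paper takes as its starting point looks like this (this is the classical method, not Grigoryan's):

```python
import numpy as np

# Classical QR via Givens rotations: zero the subdiagonal entries one
# at a time with 2x2 plane rotations, accumulating Q along the way.
def givens_qr(A):
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):        # zero R[i, j] against R[i-1, j]
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r              # rotation sending (a, b) -> (r, 0)
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T   # accumulate Q = G1^T...Gk^T
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
Q, R = givens_qr(A)
```

Each rotation touches only two rows, which is what makes Givens-type methods attractive for sparse and structured matrices.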
International Nuclear Information System (INIS)
Odry, Nans
2016-01-01
Deterministic calculation schemes are devised to numerically solve the neutron transport equation in nuclear reactors. Dealing with core-sized problems is very challenging for computers, so much so that dedicated core calculations have no choice but to rely on simplifying assumptions (assembly-scale then core-scale steps...). The PhD work aims at overcoming some of these approximations: thanks to important changes in computer architecture and capacities (HPC), one can nowadays solve 3D core-sized problems using both high mesh refinement and the transport operator. It is an essential step forward in order to perform, in the future, reference calculations using deterministic schemes. This work focuses on a spatial domain decomposition method (DDM). Using massive parallelism, DDM allows much more ambitious computations in terms of both memory requirements and calculation time. Developments were performed inside the Sn core solver Minaret, from the new CEA neutronics platform APOLLO3. Only fast reactors (hexagonal periodicity) are considered, even if all kinds of geometries can be dealt with using Minaret. The work has been divided in four steps: 1) The spatial domain decomposition with no overlap is inserted into the standard algorithmic structure of Minaret. The fundamental idea involves splitting a core-sized problem into smaller, independent, spatial sub-problems. Angular flux is exchanged between adjacent sub-domains. In doing so, all combined sub-problems converge to the global solution at the outcome of an iterative process. Various strategies were explored regarding both data management and algorithm design. Results (k-eff and flux) are systematically compared to the reference in a numerical verification step. 2) Introducing more parallelism is an unprecedented opportunity to heighten performances of deterministic schemes. Domain decomposition is particularly suited to this. A two-layer hybrid parallelism strategy, suited to HPC, is chosen. It benefits from the
Enhancement of lung sounds based on empirical mode decomposition and Fourier transform algorithm.
Mondal, Ashok; Banerjee, Poulami; Somkuwar, Ajay
2017-02-01
There is always heart sound (HS) signal interference during the recording of lung sound (LS) signals. This obscures the features of LS signals and creates confusion about any pathological states of the lungs. In this work, a new method is proposed for reduction of heart sound interference, based on the empirical mode decomposition (EMD) technique and a prediction algorithm. In this approach, first the mixed signal is split into several components in terms of intrinsic mode functions (IMFs). Thereafter, HS-included segments are localized and removed from them. The missing values of the gap thus produced are predicted by a new Fast Fourier Transform (FFT) based prediction algorithm, and the time domain LS signal is reconstructed by taking an inverse FFT of the estimated missing values. The experiments have been conducted on simulated and recorded HS-corrupted LS signals at three different flow rates and various SNR levels. The performance of the proposed method is evaluated by qualitative and quantitative analysis of the results. It is found that the proposed method is superior to the baseline method in both quantitative and qualitative terms, giving better results across different SNR levels. Our method gives a cross correlation index (CCI) of 0.9488, a signal to deviation ratio (SDR) of 9.8262, and a normalized maximum amplitude error (NMAE) of 26.94 at a 0 dB SNR value. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Wen, Qiaonong; Wan, Suiren
2013-01-01
Ultrasound image deconvolution involves both noise reduction and image feature enhancement: denoising is essentially low-pass filtering, while feature enhancement strengthens the high-frequency parts, so the two requirements conflict and must be reasonably balanced. Deconvolution based on a partial differential equation model is a method grounded in diffusion theory, whereas sparse-decomposition deconvolution is an image-representation-based method; the mechanisms of the two methods differ and each has its own characteristics. In the contourlet transform domain, we combine the strengths of the two deconvolution methods through image fusion, and introduce the entropy of the local orientation energy ratio into the fusion decision, treating the low-frequency and high-frequency coefficients differently according to the actual situation. As the deconvolution process inevitably blurs image edge information, we fuse edge gray-scale information into the deconvolution result to compensate for the missing edge information. Experiments show that our method performs better than either deconvolution method used separately, and restores part of the image edge information.
Region quad-tree decomposition based edge detection for medical images.
Dua, Sumeet; Kandiraju, Naveen; Chowriappa, Pradeep
2010-05-28
Edge detection in medical images has generated significant interest in the medical informatics community, especially in recent years. With the advent of imaging technology in biomedical and clinical domains, the growth in medical digital images has exceeded our capacity to analyze and store them for efficient representation and retrieval, especially for data mining applications. Medical decision support applications frequently demand the ability to identify and locate sharp discontinuities in an image for feature extraction and interpretation of image content, which can then be exploited for decision support analysis. However, due to the inherent high dimensional nature of the image content and the presence of ill-defined edges, edge detection using classical procedures is difficult, if not impossible, for sensitive and specific medical informatics-based discovery. In this paper, we propose a new edge detection technique based on the regional recursive hierarchical decomposition using quadtree and post-filtration of edges using a finite difference operator. We show that in medical images of common origin, focal and/or penumbral blurred edges can be characterized by an estimable intensity gradient. This gradient can further be used for dismissing false alarms. A detailed validation and comparison with related works on diabetic retinopathy images and CT scan images show that the proposed approach is efficient and accurate.
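The recursive quadtree splitting at the heart of the method can be sketched as follows. The homogeneity test used here (intensity range against a threshold) is an assumption for illustration, not necessarily the authors' criterion:

```python
import numpy as np

# Toy region quadtree: split a block into four quadrants until its
# intensity range falls below a threshold. Edge regions end up as
# many small leaves, which is what the post-filtering step exploits.
def quadtree(img, x0, y0, size, thresh, leaves):
    block = img[y0:y0 + size, x0:x0 + size]
    if size == 1 or block.max() - block.min() <= thresh:
        leaves.append((x0, y0, size))        # homogeneous leaf
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(img, x0 + dx, y0 + dy, h, thresh, leaves)

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0  # bright square: edges split finely
leaves = []
quadtree(img, 0, 0, 8, thresh=0.0, leaves=leaves)
```

The leaf sizes form a map of local homogeneity: small leaves cluster along discontinuities, giving candidate edge locations for the finite-difference post-filter.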
Video steganography based on bit-plane decomposition of wavelet-transformed video
Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji
2004-06-01
This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
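A minimal bit-plane embedding sketch in the spirit of BPCS (plain LSB substitution, without the complexity segmentation that BPCS adds on top, and operating on raw bytes rather than wavelet coefficients):

```python
import numpy as np

# Write secret bits into bit-plane `plane` of 8-bit values and read
# them back. In BPCS this substitution targets only noise-like regions
# of the plane; here every leading element is used, for simplicity.
def embed(carrier, bits, plane=0):
    out = carrier.copy()
    mask = np.uint8(0xFF ^ (1 << plane))                  # clear target plane
    out[:len(bits)] = (out[:len(bits)] & mask) | (np.asarray(bits, np.uint8) << plane)
    return out

def extract(stego, nbits, plane=0):
    return (stego[:nbits] >> plane) & 1

rng = np.random.default_rng(3)
carrier = rng.integers(0, 256, 64, dtype=np.uint8)
secret = rng.integers(0, 2, 32, dtype=np.uint8)
stego = embed(carrier, secret)
```

Embedding in plane 0 changes each touched value by at most 1, which is why low bit-planes of quantized wavelet coefficients can carry data without visible degradation.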
Guo, Qiang; Qi, Liangang
2017-04-10
In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on time and frequency domains is degraded seriously, and the technique using an antenna array requires a large enough size and huge hardware costs. To combat multi-type interferences better for GNSS receivers, this paper proposes a cascaded multi-type interferences mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle the multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residuary interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degree of freedom (DoF) of the array antenna, but also effectively deal with the interference arriving from the same direction as the GNSS signal, which can be sparsely represented in the over-complete dictionary. Moreover, it does not bring serious distortions into the navigation signal.
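The core matching pursuit loop that DCQGMP accelerates can be sketched over a small sinusoid dictionary. Here the quantum-genetic atom search is replaced by exhaustive correlation, so this is only the MP baseline, not the paper's algorithm:

```python
import numpy as np

# Bare-bones matching pursuit: greedily pick the dictionary atom most
# correlated with the residual and peel it off. DCQGMP replaces the
# argmax search below with a quantum-genetic search for speed.
def matching_pursuit(x, D, n_atoms):
    """D: columns are unit-norm atoms. Returns coefficients and residual."""
    r = x.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r
        k = np.argmax(np.abs(corr))       # best-matching atom
        coef[k] += corr[k]
        r -= corr[k] * D[:, k]            # remove it from the residual
    return coef, r

n = 128
t = np.arange(n)
D = np.cos(2 * np.pi * np.outer(t, np.arange(1, 33)) / n)  # cosine atoms
D /= np.linalg.norm(D, axis=0)                             # unit-norm columns
x = 3.0 * D[:, 4] - 2.0 * D[:, 19]                         # sparse in D
coef, resid = matching_pursuit(x, D, n_atoms=2)
```

Because narrowband interference is sparse in such a dictionary, subtracting the recovered atoms from the received signal excises the interference, as in the first stage of the cascade.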
Directory of Open Access Journals (Sweden)
Zhongliang Lv
2016-01-01
A novel fault diagnosis method based on variational mode decomposition (VMD) and multikernel support vector machine (MKSVM) optimized by an Immune Genetic Algorithm (IGA) is proposed to accurately and adaptively diagnose mechanical faults. First, mechanical fault vibration signals are decomposed into multiple Intrinsic Mode Functions (IMFs) by VMD. Then the features in the time-frequency domain are extracted from the IMFs to construct the feature sets of mixed domain. Next, Semisupervised Locally Linear Embedding (SS-LLE) is adopted for fusion and dimension reduction. The feature sets with reduced dimension are inputted to the IGA-optimized MKSVM for failure mode identification. Theoretical analysis demonstrates that MKSVM can approximate any multivariable function. The global optimal parameter vector of MKSVM can be rapidly identified by IGA parameter optimization. The experiments on mechanical faults show that, compared to traditional fault diagnosis models, the proposed method significantly increases the diagnosis accuracy of mechanical faults and enhances the generalization of its application.
Base catalyzed decomposition of toxic and hazardous chemicals
International Nuclear Information System (INIS)
Rogers, C.J.; Kornel, A.; Sparks, H.L.
1991-01-01
There are vast amounts of toxic and hazardous chemicals which have pervaded our environment during the past fifty years, leaving us with serious, crucial problems of remediation and disposal. The accumulation of polychlorinated biphenyls (PCBs), polychlorinated dibenzo-p-dioxins (PCDDs), ''dioxins'' and pesticides in soil sediments and living systems is a serious problem that is receiving considerable attention concerning the cancer-causing nature of these synthetic compounds. US EPA scientists developed in 1989 and 1990 two novel chemical processes to effect the dehalogenation of chlorinated solvents, PCBs, PCDDs, PCDFs, PCP and other pollutants in soil, sludge, sediment and liquids. This improved technology employs hydrogen as a nucleophile to replace halogens on halogenated compounds. Hydrogen as a nucleophile is not influenced by steric hindrance as other nucleophiles are, so complete dehalogenation of organohalogens can be achieved. This report discusses the base-catalyzed decomposition of toxic and hazardous chemicals.
International Nuclear Information System (INIS)
Mendez, M O; Cerutti, S; Bianchi, A M; Corthout, J; Van Huffel, S; Matteucci, M; Penzel, T
2010-01-01
This study analyses two different methods to detect obstructive sleep apnea (OSA) during sleep time based only on the ECG signal. OSA is a common sleep disorder caused by repetitive occlusions of the upper airways, which produces a characteristic pattern on the ECG. ECG features, such as the heart rate variability (HRV) and the QRS peak area, contain information suitable for making a fast, non-invasive and simple screening of sleep apnea. Fifty recordings freely available on Physionet have been included in this analysis, subdivided into a training and a testing set. We investigated the possibility of using the recently proposed method of empirical mode decomposition (EMD) for this application, comparing the results with the ones obtained through the well-established wavelet analysis (WA). By these decomposition techniques, several features have been extracted from the ECG signal and complemented with a series of standard HRV time domain measures. The best performing feature subset, selected through a sequential feature selection (SFS) method, was used as the input of linear and quadratic discriminant classifiers. In this way we were able to classify the signals on a minute-by-minute basis as apneic or nonapneic with different best-subset sizes, obtaining an accuracy up to 89% with WA and 85% with EMD. Furthermore, 100% correct discrimination of apneic patients from normal subjects was achieved independently of the feature extractor. Finally, the same procedure was repeated by pooling features from standard HRV time domain, EMD and WA together in order to investigate whether the two decomposition techniques could provide complementary features. The obtained accuracy was 89%, similar to that achieved using only wavelet analysis as the feature extractor; however, some complementary features in EMD and WA are evident.
Port-Based Modeling in Different Domains
Batlle, C.; Couenne, F.; Doria-Cerezo, A.; Fossas, E.; Jallut, C.; Lefevre, L.; Le Gorrec, Y.; Maschke, B.M.; Ortega, R.; Schlacher, K.; Tayakout, M.; Duindam, V.; Macchelli, Alessandro; Stramigioli, Stefano; Bruyninckx, Herman
2009-01-01
In this Chapter we present some detailed examples of modelling in several domains using port and port-Hamiltonian concepts, as have been presented in the previous chapters. We start with the electromechanical domain in Sect. 3.1, while in Sect. 3.2 it is shown how port-Hamiltonian systems can be
Sparse time-frequency decomposition based on dictionary adaptation.
Hou, Thomas Y; Shi, Zuoqiang
2016-04-13
In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than a training set in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and polluted by noise and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).
Purification of sodium phosphate yielded from Bangka's monazite base decomposition
International Nuclear Information System (INIS)
Walujo, Sugeng; Susilaningtyas; Mukhlis; Tukardi
2002-01-01
The aim of this experiment is to determine conditions for purifying sodium phosphate from the filtrate obtained by mixing the mother liquor with the washing filtrate of the residue from alkaline decomposition of Bangka monazite. The purification method used was to dissolve the sodium phosphate precipitate in water with a constant agitation time of 5 minutes, and then let the solution settle for 12 hours until sodium phosphate crystals appeared. The experimental variables included the dissolution time and the ratio of the amount of sodium phosphate precipitate dissolved to the volume of water used as solvent. The experimental data show that the best dissolution temperature is 70 °C, with 60 grams of sodium phosphate precipitate dissolved per 100 ml of water. The recovery of sodium phosphate crystallization is 65.18%, with a Na3PO4 purity of about 65.608%; the impurity content is 0.007% U, with NaOH and others making up 34.383%.
Directory of Open Access Journals (Sweden)
Daniel Marcsa
2015-01-01
Full Text Available The analysis and design of electromechanical devices involve the solution of large sparse linear systems and therefore require high-performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with parallel forward-backward and parallel Preconditioned Conjugate Gradient (PCG) solvers is introduced in a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine, considering the electromagnetic field, external circuit and rotor movement. The proposed parallel direct solver and the iterative solver with two preconditioners are analyzed with respect to computational efficiency and the number of solver iterations under the different preconditioners. Simulation results for a rotating machine are also presented.
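The abstract above leans on a preconditioned conjugate gradient solver. As an illustrative sketch only (not the paper's parallel finite-element implementation), a Jacobi-preconditioned CG for a symmetric positive-definite system looks like this in NumPy:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for SPD A with a Jacobi
    (diagonal) preconditioner supplied as the inverse diagonal."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply M^{-1}
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system: a discrete 1-D Laplacian (illustrative, not the FE matrix)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))  # residual is driven below the tolerance
```

The Jacobi preconditioner is the simplest choice; the paper compares two (unspecified here) preconditioners in a parallel setting.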
Radiation-induced decomposition of the purine bases within DNA and related model compounds
International Nuclear Information System (INIS)
Cadet, J.; Berger, M.
1985-01-01
This survey focuses on recent developments in the radiation chemistry of purine bases in nucleic acids and related model compounds. Both direct and indirect effects of ionizing radiation are investigated with special emphasis on the structural characterization of the final decomposition products of nucleic acid components. Available assays for monitoring radiation-induced base lesions are critically reviewed. (author)
A route-based decomposition for the Multi-Commodity k-splittable Maximum Flow Problem
DEFF Research Database (Denmark)
Gamst, Mette
2012-01-01
The Multi-Commodity k-splittable Maximum Flow Problem routes flow through a capacitated graph such that each commodity uses at most k paths and such that the total amount of routed flow is maximized. This paper proposes a branch-and-price algorithm based on a route-based Dantzig-Wolfe decomposition...
Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy
Directory of Open Access Journals (Sweden)
Duo Hao
2017-11-01
Full Text Available Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas subtracting this sum from the GMV yields the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
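The mode-classification step can be illustrated with a toy relative-entropy measure. The sketch below compares amplitude histograms of two motion modes with a discrete KL divergence; the histogram binning and the choice of signals are our assumptions, not details from the paper:

```python
import numpy as np

def relative_entropy(x, y, bins=32):
    """Discrete KL divergence between the amplitude distributions of two
    decomposed motion modes (a stand-in for the paper's RE measure)."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=bins, range=(lo, hi))
    p = p / p.sum() + 1e-12   # small epsilon avoids log(0)
    q = q / q.sum() + 1e-12
    return float(np.sum(p * np.log(p / q)))

t = np.linspace(0, 1, 500)
intentional = np.sin(2 * np.pi * 2 * t)                       # slow, deliberate pan
jitter = 0.3 * np.random.default_rng(0).normal(size=t.size)   # camera shake
print(relative_entropy(intentional, intentional))  # same distribution: RE is 0
print(relative_entropy(jitter, intentional))       # distinct mode: RE is positive
```

A large RE against a reference mode would flag a mode as jitter; the actual thresholding rule used in the paper is not given in the abstract.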
International Nuclear Information System (INIS)
Umegaki, Kikuo; Miki, Kazuyoshi
1990-01-01
A numerical method is developed to solve three-dimensional incompressible viscous flow in complicated geometry using curvilinear coordinate transformation and a domain decomposition technique. In this approach, a complicated flow domain is decomposed into several subdomains, each of which has an overlapping region with neighboring subdomains. Curvilinear coordinates are numerically generated in each subdomain using the boundary-fitted coordinate transformation technique. A modified SMAC scheme is developed to solve the Navier-Stokes equations, in which the convective terms are discretized by the QUICK method. A fully vectorized computer program is developed on the basis of the proposed method. The program is applied to flow analysis in semicircular curved, 90° elbow and T-shaped branched pipes. Computational time with the vector processor of the HITAC S-810/20 supercomputer system is reduced to 1/10∼1/20 of that with a scalar processor. (author)
International Nuclear Information System (INIS)
Wang, Yamin; Wu, Lei
2016-01-01
This paper presents a comprehensive analysis of practical challenges of empirical mode decomposition (EMD) based algorithms for wind speed and solar irradiation forecasts that have been largely neglected in the literature, and proposes an alternative approach to mitigate such challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limited number of new data samples, may significantly differ from those used in training forecasting models. In turn, forecasting models established from the original sub-series may not be suitable for newly decomposed sub-series and have to be retrained frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into forecasting models of individual decomposed sub-series, because the correlation between the original data and environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods can be worse than that of non-decomposition based forecasting models, and that they are not effective in practical cases. Finally, an approximated forecasting model based on EMD is proposed to mitigate the challenges and achieve better forecasting results than existing EMD-based forecasting algorithms and non-decomposition based forecasting models on practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, an alternative implementation to the algorithmic definition of the so-called "sifting process" used in Huang's original EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original algorithmic version of EMD, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time-consuming. In this paper, an extension of the PDE-based approach to 2-D space is therefore extensively described. The approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data; some results are provided for the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
ECG baseline wander correction based on mean-median filter and empirical mode decomposition.
Xin, Yi; Chen, Yu; Hao, Wei Tuo
2014-01-01
A novel approach to ECG baseline wander correction based on a mean-median filter and empirical mode decomposition is presented in this paper. The low-frequency parts of the original signals were removed by the mean-median filter in a nonlinear way to obtain the baseline wander estimate; the series of IMFs was then sifted by t-test after empirical mode decomposition. The proposed method, tested on the ECG signals in the MIT-BIH Arrhythmia database and the European ST-T database, is more effective than other baseline wander removal methods.
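A simplified stand-in for the baseline estimation step (a single moving-median filter rather than the paper's combined mean-median filter, and no EMD stage) can be sketched as:

```python
import numpy as np

def remove_baseline(ecg, win=201):
    """Estimate baseline wander with a moving median (odd window length)
    and subtract it. The median ignores sparse R-peak spikes, so only the
    slow drift is captured. Simplified stand-in for the mean-median filter."""
    half = win // 2
    padded = np.pad(ecg, half, mode='edge')
    baseline = np.array([np.median(padded[i:i + win])
                         for i in range(ecg.size)])
    return ecg - baseline, baseline

# Toy signal: slow sinusoidal drift plus crude 1 Hz "R peaks" (not real ECG)
fs = 250
t = np.arange(0, 8, 1 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)
beats = (np.arange(t.size) % fs == 125) * 1.0
corrected, est = remove_baseline(drift + beats)
print(np.max(np.abs(est[250:-250] - drift[250:-250])))  # small: drift recovered
```

The window (201 samples ≈ 0.8 s at 250 Hz) must be longer than the QRS complex but shorter than the wander period; this choice is ours, not from the paper.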
Ammonia synthesis and decomposition on a Ru-based catalyst modeled by first-principles
DEFF Research Database (Denmark)
Hellman, A.; Honkala, Johanna Karoliina; Remediakis, Ioannis
2009-01-01
A recently published first-principles model for the ammonia synthesis on an unpromoted Ru-based catalyst is extended to also describe ammonia decomposition. In addition, further analysis concerning trends in ammonia productivity, surface conditions during the reaction, and macro......-properties, such as apparent activation energies and reaction orders are provided. All observed trends in activity are captured by the model and the absolute value of ammonia synthesis/decomposition productivity is predicted to within a factor of 1-100 depending on the experimental conditions. Moreover it is shown: (i...
A Benders Decomposition-Based Matheuristic for the Cardinality Constrained Shift Design Problem
DEFF Research Database (Denmark)
Lusby, Richard Martin; Range, Troels Martin; Larsen, Jesper
2016-01-01
is bounded by an upper limit. We present an integer programming model for this problem and show that its structure lends itself very naturally to Benders decomposition. Due to convergence issues with a conventional implementation, we propose a matheuristic based on Benders decomposition for solving...... integer programming solver on instances with 1241 different shift types and remains competitive for larger cases with 2145 shift types. On all classes of problems the heuristic is able to quickly find good solutions. © 2016 Elsevier B.V. All rights reserved...
A novel ECG data compression method based on adaptive Fourier decomposition
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach that can decompose a signal with fast convergence and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not use any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times, and a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
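The two figures of merit quoted here, CR and PRD, are straightforward to compute. A minimal sketch (this uses the plain PRD definition without mean removal, and toy signals of our own, not MIT-BIH data):

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference between an original signal
    and its reconstruction (lower is better)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compression_ratio(n_original_bits, n_compressed_bits):
    """CR: size of the raw record over the size of the compressed one."""
    return n_original_bits / n_compressed_bits

# Toy "original" and "reconstruction" differing by a small residual
x = np.sin(np.linspace(0, 8 * np.pi, 1000))
x_rec = x + 0.01 * np.cos(np.linspace(0, 40 * np.pi, 1000))
print(prd(x, x_rec))  # around 1% for this 1%-amplitude residual
```

Some ECG papers report a normalized PRD with the signal mean removed; which variant the authors use is not stated in the abstract.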
Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures
Sukkari, Dalal E.
2017-09-29
This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.
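QDWH is a dynamically weighted, QR-based variant of Halley's iteration for the polar factor. The unweighted dense iteration it accelerates can be sketched in a few lines of NumPy (no task parallelism, no dynamic weights, no QR tricks; the scaling and tolerance choices are ours):

```python
import numpy as np

def polar_halley(A, tol=1e-12, max_iter=60):
    """Unitary polar factor of square A via the (unweighted) Halley
    iteration X <- X (3I + X^T X)(I + 3 X^T X)^{-1}. Each singular value
    flows to 1, so X converges to the polar factor; QDWH accelerates
    exactly this map with dynamic weights and QR-based updates."""
    X = A / np.linalg.norm(A, 2)   # scale so singular values lie in (0, 1]
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        G = X.T @ X
        X_new = X @ (3 * I + G) @ np.linalg.inv(I + 3 * G)
        done = np.linalg.norm(X_new - X) < tol
        X = X_new
        if done:
            break
    return X  # orthogonal polar factor U; H = U.T @ A is the SPD factor

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 30))
U = polar_halley(A)
Usvd, _, Vt = np.linalg.svd(A)
print(np.linalg.norm(U - Usvd @ Vt))  # matches the SVD polar factor U @ V^T
```

The explicit inverse is for clarity only; production codes (including QDWH) use QR or Cholesky-based updates for stability and performance.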
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
National Aeronautics and Space Administration — Model-based prognostics approaches capture system knowl- edge in the form of physics-based models of components that include how they fail. These methods consist of...
Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.
2017-12-01
This paper mainly focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing used in ceramic motorized spindles. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is used to check the precision of the method. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition method. The radiation noise spectra of the different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportions of friction noise and impact noise are evaluated as well. The results provide a theoretical basis for the calculation of bearing noise and a reference for the impact of the different components on the radiation noise of the bearing at different rotation speeds.
Michelson interferometer based interleaver design using classic IIR filter decomposition.
Cheng, Chi-Hao; Tang, Shasha
2013-12-16
An elegant method to design a Michelson interferometer based interleaver using a classic infinite impulse response (IIR) filter, such as a Butterworth, Chebyshev, or elliptic filter, as a starting point is presented. The proposed design method allows engineers to design a Michelson interferometer based interleaver from specifications seamlessly. Simulation results are presented to demonstrate the validity of the proposed design method.
Classification of Underwater Signals Using Wavelet-Based Decompositions
National Research Council Canada - National Science Library
Duzenli, Ozhan
1998-01-01
.... Two feature extraction tools are considered: Local Discriminant Bases scheme (LDB) and Power method. Several dimension reduction schemes including a newly proposed one called the Mean Separator neural network...
Low-rank approximation based non-negative multi-way array decomposition on event-related potentials.
Cong, Fengyu; Zhou, Guoxu; Astikainen, Piia; Zhao, Qibin; Wu, Qiang; Nandi, Asoke K; Hietanen, Jari K; Ristaniemi, Tapani; Cichocki, Andrzej
2014-12-01
Non-negative tensor factorization (NTF) has been successfully applied to analyze event-related potentials (ERPs) and has shown superiority in terms of capturing multi-domain features. However, the time-frequency representations of ERPs as higher-order tensors are usually large-scale, which prevents the use of most tensor factorization algorithms. To overcome this issue, we introduce a non-negative canonical polyadic decomposition (NCPD) based on low-rank approximation (LRA) and hierarchical alternating least squares (HALS) techniques. We applied NCPD (LRAHALS and benchmark HALS) and CPD to extract multi-domain features of a visual ERP. The features and components extracted by LRAHALS NCPD and HALS NCPD were very similar, but LRAHALS NCPD was 70 times faster than HALS NCPD. Moreover, the desired multi-domain feature of the ERP obtained by NCPD showed a significant group difference (control versus depressed participants) and a difference in emotion processing (fearful versus happy faces). This was more satisfactory than the result obtained by CPD, which revealed only a group difference.
Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer
2013-10-01
The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs, but recording and analysis of this signal present a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter were obtained using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-tests were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. ANOVA followed by a post hoc Bonferroni test was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted feature means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in the separability of linear features between 5- and 0.5-min HRV segments, but the separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV
Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems
Directory of Open Access Journals (Sweden)
Weeraddana Chathuranga
2010-01-01
Full Text Available We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for the joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, which is inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm to Lagrange relaxation based suboptimal methods as well as to an optimal exhaustive-search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.
International Nuclear Information System (INIS)
Amjady, Nima; Reza Ansari, Mohammad
2013-01-01
Highlights: ► A new HTUC model with AC constraints is presented. ► A new decomposition strategy for HTUC problem is suggested. ► A new Benders decomposition method is proposed. ► Effectiveness of the proposed solution strategy is extensively tested. - Abstract: This paper presents a new approach based on Benders decomposition (BD) to solve hydrothermal unit commitment problem with AC power flow and security constraints. The proposed method decomposes the problem into a master problem and two sets of sub-problems. The master problem applies integer programming method to solve unit commitment (UC) while the sub-problems apply nonlinear programming solution method to determine economic dispatch for each time period. If one sub-problem of the first set becomes infeasible, the corresponding sub-problem of the second set is called. Moreover, strong Benders cuts are proposed that reduce the number of iterations and CPU time of the Benders decomposition method. All constraints of the hydrothermal unit commitment problem can be completely satisfied with zero penalty terms by the proposed solution method. The methodology is tested on the 9-bus and IEEE 118-bus test systems. The obtained results confirm the validity of the developed approach.
Domain Adaptation for Pedestrian Detection Based on Prediction Consistency
Directory of Open Access Journals (Sweden)
Yu Li-ping
2014-01-01
Full Text Available Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, which is derived from the prediction consistency with the source classifiers, to selectively choose the samples in the source domains showing positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially with insufficient labeled data in the target scene.
Towards Domain-specific Flow-based Languages
DEFF Research Database (Denmark)
Zarrin, Bahram; Baumeister, Hubert; Sarjoughian, Hessam S.
2018-01-01
describe their problems and solutions, instead of using general purpose programming languages. The goal of these languages is to improve the productivity and efficiency of the development and simulation of concurrent scientific models and systems. Moreover, they help to expose parallelism and to specify...... the concurrency within a component or across different independent components. In this paper, we introduce the concept of domain-specific flowbased languages which allows domain experts to use flow-based languages adapted to a particular problem domain. Flow-based programming is used to support concurrency, while......Due to the significant growth of the demand for data-intensive computing, in addition to the emergence of new parallel and distributed computing technologies, scientists and domain experts are leveraging languages specialized for their problem domain, i.e., domain-specific languages, to help them...
Hua, Wei; Qi, Ji; Jia, Meng
2017-05-01
Switched reluctance machines (SRMs) have attracted extensive attention due to their inherent advantages, including a simple and robust structure, low cost, excellent fault tolerance, and wide speed range. However, one of the bottlenecks limiting further application of SRMs is their unfavorable torque ripple, and consequently noise and vibration, caused by the unique doubly-salient structure and the pulse-current-based power supply method. In this paper, an inductance Fourier decomposition-based current-hysteresis-control (IFD-CHC) strategy is proposed to reduce the torque ripple of SRMs. After obtaining a nonlinear inductance-current-position model based on Fourier decomposition, reference currents can be calculated from the reference torque and the derived inductance model. Both simulations and experimental results confirm the effectiveness of the proposed strategy.
Directory of Open Access Journals (Sweden)
Changyun Liu
2017-01-01
Full Text Available A multisensor scheduling algorithm based on hybrid task decomposition and modified binary particle swarm optimization (MBPSO) is proposed. Firstly, aiming at the complex relationship between sensor resources and tasks, a hybrid task decomposition method is presented, and the resource scheduling problem is decomposed into subtasks; the sensor resource scheduling problem is thereby changed into a matching problem between sensors and subtasks. Secondly, a resource match optimization model based on the sensor resources and tasks is established, which considers several factors, such as target priority, detection benefit, handover times, and resource load. Finally, the MBPSO algorithm is proposed to solve the match optimization model effectively, based on improved updating of particle velocity and position through a doubt factor and a modified sigmoid function. The experimental results show that the proposed algorithm is better in terms of convergence velocity, searching capability, solution accuracy, and efficiency.
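The abstract does not specify the doubt factor or the modified sigmoid, so the sketch below is a plain binary PSO on a toy maximize-the-ones objective, shown only to illustrate the sigmoid-based position update that MBPSO modifies:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(bits):
    """Toy objective standing in for the sensor-subtask match score:
    maximize the number of ones."""
    return bits.sum(axis=1)

def bpso(n_particles=20, dim=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Plain binary PSO: velocities are real-valued, and each bit is
    resampled through a sigmoid of its velocity at every iteration."""
    X = rng.integers(0, 2, size=(n_particles, dim))
    V = rng.normal(size=(n_particles, dim))
    pbest, pfit = X.copy(), fitness(X)
    gbest = pbest[pfit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(V.shape), rng.random(V.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = (rng.random(V.shape) < 1 / (1 + np.exp(-V))).astype(int)
        f = fitness(X)
        better = f > pfit
        pbest[better], pfit[better] = X[better], f[better]
        gbest = pbest[pfit.argmax()].copy()
    return gbest, int(pfit.max())

best, score = bpso()
print(score)  # near the optimum of 30
```

MBPSO replaces the standard sigmoid with a modified one and injects a doubt factor into the update; those details would come from the full paper, not this sketch.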
Linked-data based domain-specific sentiment lexicons
Vulcu, Gabriela; Lario Monje, Raúl; Muñoz, Mario; Buitelaar, Paul; Iglesias Fernandez, Carlos Angel
2014-01-01
In this paper we present a dataset composed of domain-specific sentiment lexicons in six languages for two domains. We used existing collections of reviews from Trip Advisor, Amazon, the Stanford Network Analysis Project and the OpinRank Review Dataset. We use an RDF model based on the lemon and Marl formats to represent the lexicons. We describe the methodology that we applied to generate the domain-specific lexicons and we provide access information to our datasets.
Aagaard, Brad T.; Knepley, M.G.; Williams, C.A.
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
International Nuclear Information System (INIS)
Masiello, Emiliano; Martin, Brunella; Do, Jean-Michel
2011-01-01
A new development for the IDT solver is presented for large reactor core applications in XYZ geometries. The multigroup discrete-ordinates neutron transport equation is solved using a Domain-Decomposition (DD) method coupled with Coarse-Mesh Finite Differences (CMFD). The latter is used to accelerate the DD convergence rate. In particular, the external power iterations are preconditioned to stabilize the oscillatory behavior of the DD iterative process. A set of critical 2-D and 3-D numerical tests on a single processor is presented for the analysis of the performance of the method. The results show that the application of CMFD to the DD method can be a good candidate for large 3-D full-core parallel applications. (author)
Spectral decomposition of asteroid Itokawa based on principal component analysis
Koga, Sumire C.; Sugita, Seiji; Kamata, Shunichi; Ishiguro, Masateru; Hiroi, Takahiro; Tatsumi, Eri; Sasaki, Sho
2018-01-01
The heliocentric stratification of asteroid spectral types may hold important information on the early evolution of the Solar System. Asteroid spectral taxonomy is based largely on principal component analysis. However, how the surface properties of asteroids, such as the composition and age, are projected in the principal-component (PC) space is not understood well. We decompose multi-band disk-resolved visible spectra of the Itokawa surface with principal component analysis (PCA) in comparison with main-belt asteroids. The obtained distribution of Itokawa spectra projected in the PC space of main-belt asteroids follows a linear trend linking the Q-type and S-type regions and is consistent with the results of space-weathering experiments on ordinary chondrites and olivine, suggesting that this trend may be a space-weathering-induced spectral evolution track for S-type asteroids. Comparison with space-weathering experiments also yields a short average surface age (component of Itokawa surface spectra is consistent with spectral change due to space weathering and that the spatial variation in the degree of space weathering is very large (a factor of three in surface age), which would strongly suggest the presence of strong regional/local resurfacing process(es) on this small asteroid.
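The PCA machinery behind such spectral taxonomy is compact. A sketch on synthetic spectra (the band model, wavelengths, and numbers below are illustrative assumptions of ours, not Itokawa data):

```python
import numpy as np

def pca(spectra, n_components=2):
    """Project mean-centered reflectance spectra onto their leading
    principal components via SVD (rows = observations, cols = bands)."""
    Xc = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # coordinates in PC space
    explained = s ** 2 / np.sum(s ** 2)        # variance fractions
    return scores, Vt[:n_components], explained

# Synthetic "spectra": a fixed red-slope continuum plus a 1-um absorption
# band whose depth varies from spectrum to spectrum, plus mild noise.
wl = np.linspace(0.4, 1.6, 60)
rng = np.random.default_rng(2)
depth = rng.uniform(0, 1, size=100)
spectra = (1 + 0.3 * wl[None, :]
           - depth[:, None] * np.exp(-((wl - 1.0) ** 2) / 0.01)
           + 0.005 * rng.normal(size=(100, 60)))
scores, comps, var = pca(spectra)
print(var[0])  # the first PC captures the band-depth variation
```

Here PC1 recovers the single physical degree of freedom (band depth); on real asteroid spectra the leading PCs mix composition and space-weathering effects, which is exactly the ambiguity the paper addresses.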
Malagón-Romero, A.; Luque, A.
2018-04-01
At high pressure electric discharges typically grow as thin, elongated filaments. In a numerical simulation this large aspect ratio should ideally translate into a narrow, cylindrical computational domain that envelops the discharge as closely as possible. However, the development of the discharge is driven by electrostatic interactions and, if the computational domain is not wide enough, the boundary conditions imposed to the electrostatic potential on the external boundary have a strong effect on the discharge. Most numerical codes circumvent this problem by either using a wide computational domain or by calculating the boundary conditions by integrating the Green's function of an infinite domain. Here we describe an accurate and efficient method to impose free boundary conditions in the radial direction for an elongated electric discharge. To facilitate the use of our method we provide a sample implementation. Finally, we apply the method to solve Poisson's equation in cylindrical coordinates with free boundary conditions in both radial and longitudinal directions. This case is of particular interest for the initial stages of discharges in long gaps or natural discharges in the atmosphere, where it is not practical to extend the simulation volume to be bounded by two electrodes.
GoDec+: Fast and Robust Low-Rank Matrix Decomposition Based on Maximum Correntropy.
Guo, Kailing; Liu, Liu; Xu, Xiangmin; Xu, Dong; Tao, Dacheng
2017-04-24
GoDec is an efficient low-rank matrix decomposition algorithm. However, its optimal performance depends on the assumptions of sparse errors and Gaussian noise. This paper aims to address the problem of a matrix composed of a low-rank component and unknown corruptions. We introduce a robust local similarity measure called correntropy to describe the corruptions and, in doing so, obtain a more robust and faster low-rank decomposition algorithm: GoDec+. Based on half-quadratic optimization and the greedy bilateral paradigm, we deliver a solution to the maximum correntropy criterion (MCC)-based low-rank decomposition problem. Experimental results show that GoDec+ is efficient and robust to different corruptions including Gaussian noise, Laplacian noise, salt & pepper noise, and occlusion on both synthetic and real vision data. We further apply GoDec+ to more general applications, including classification and subspace clustering. For classification, we construct an ensemble subspace from the GoDec+ low-rank matrix and introduce an MCC-based classifier. For subspace clustering, we utilize the GoDec+ low-rank matrix for MCC-based self-expression and combine it with spectral clustering. Face recognition, motion segmentation, and face clustering experiments show that the proposed methods are effective and robust. In particular, we achieve state-of-the-art performance on the Hopkins 155 data set and the first 10 subjects of extended Yale B for subspace clustering.
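The correntropy measure at the core of GoDec+ is easy to state: the mean of a Gaussian kernel applied to the error, which saturates for gross outliers instead of letting them dominate. A minimal sketch with an illustrative corrupted signal (the kernel width and data are our choices):

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy: mean Gaussian kernel of the error. Each term is
    bounded by 1, so a gross outlier saturates near 0 instead of
    dominating the measure the way a squared error does."""
    e = x - y
    return np.mean(np.exp(-e ** 2 / (2 * sigma ** 2)))

x = np.linspace(0, 1, 100)
y_clean = x + 0.01          # small uniform error
y_outlier = y_clean.copy()
y_outlier[50] = 100.0       # one gross corruption
print(np.mean((x - y_outlier) ** 2))  # MSE is blown up by the single outlier
print(correntropy(x, y_outlier))      # correntropy barely moves
```

Maximizing correntropy (the MCC) therefore behaves like a robust loss; GoDec+ couples it with half-quadratic optimization, which this sketch does not attempt.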
Li, Guohui; Zhang, Songling; Yang, Hong
2017-01-01
To address the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and clustering analysis is proposed. Firstly, the original data are decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Secondly, fuzzy c-means clustering is applied to the decomposed components, and a deep belief network (DBN) is then used to predict each cluster. Finally, the reconstructed ...
Mo, Yirong; Bao, Peng; Gao, Jiali
2011-01-01
An interaction energy decomposition analysis method based on the block-localized wavefunction (BLW-ED) approach is described. The first main feature of the BLW-ED method is that it combines concepts of valence bond and molecular orbital theories such that the intermediate and physically intuitive electron-localized states are variationally optimized by self-consistent field calculations. Furthermore, the block-localization scheme can be used both in wave function theory and in density functio...
MEG masked priming evidence for form-based decomposition of irregular verbs.
Fruchter, Joseph; Stockall, Linnaea; Marantz, Alec
2013-01-01
To what extent does morphological structure play a role in early processing of visually presented English past tense verbs? Previous masked priming studies have demonstrated effects of obligatory form-based decomposition for genuinely affixed words (teacher-TEACH) and pseudo-affixed words (corner-CORN), but not for orthographic controls (brothel-BROTH). Additionally, MEG single word reading studies have demonstrated that the transition probability from stem to affix (in genuinely affixed words) modulates an early evoked response known as the M170; parallel findings have been shown for the transition probability from stem to pseudo-affix (in pseudo-affixed words). Here, utilizing the M170 as a neural index of visual form-based morphological decomposition, we ask whether the M170 demonstrates masked morphological priming effects for irregular past tense verbs (following a previous study which obtained behavioral masked priming effects for irregulars). Dual mechanism theories of the English past tense predict a rule-based decomposition for regulars but not for irregulars, while certain single mechanism theories predict rule-based decomposition even for irregulars. MEG data was recorded for 16 subjects performing a visual masked priming lexical decision task. Using a functional region of interest (fROI) defined on the basis of repetition priming and regular morphological priming effects within the left fusiform and inferior temporal regions, we found that activity in this fROI was modulated by the masked priming manipulation for irregular verbs, during the time window of the M170. We also found effects of the scores generated by the learning model of Albright and Hayes (2003) on the degree of priming for irregular verbs. The results favor a single mechanism account of the English past tense, in which even irregulars are decomposed into stems and affixes prior to lexical access, as opposed to a dual mechanism model, in which irregulars are recognized as whole forms.
Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition
Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan
2015-01-01
This study presents a divide-and-conquer (DC) approach based on feature-space decomposition for classification. When large-scale datasets are present, typical approaches employ truncated kernel methods on the feature space or DC approaches on the sample space. However, these do not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...
Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition
Directory of Open Access Journals (Sweden)
Chunfu Wu
2015-01-01
For a monocular camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when the object's imaging depth and the camera's extrinsic position parameters are unknown. Firstly, the particular properties of the homography caused by the mobile robot's 2-DOF motion are exploited to derive a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, incorporated with a single feature point of the desired view, are utilized to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are exploited to construct an adaptive regulation control law, followed by experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient, but also highly precise. Meanwhile, the designed control law enables accurate position and orientation regulation of the mobile robot despite the lack of depth information and the camera's extrinsic position parameters.
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
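The share-generation step can be sketched as follows, under a deliberately simplified reading of the scheme: no wavelet sub-band split and no shuffling, and the particular pair of U/V elements compared per block is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

def ownership_bits(image, block=8):
    """One bit per block: SVD the block and compare the magnitude of an
    element of the first left singular vector with the corresponding
    element of the first right singular vector. Which elements are
    compared is an illustrative choice."""
    h, w = image.shape
    bits = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            U, s, Vt = np.linalg.svd(image[i:i + block, j:j + block])
            bits.append(1 if abs(U[1, 0]) >= abs(Vt[0, 1]) else 0)
    return np.array(bits)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32)).astype(float)
bits = ownership_bits(img)
print(bits)
```

In the actual scheme these bits would be combined with a secret share via visual cryptography; here they only illustrate how block-wise SVD yields a content-dependent bit pattern that is stable under mild image processing.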
Ammonia synthesis and decomposition on a Ru-based catalyst modeled by first-principles
Hellman, A.; Honkala, K.; Remediakis, I. N.; Logadóttir, Á.; Carlsson, A.; Dahl, S.; Christensen, C. H.; Nørskov, J. K.
2009-06-01
A recently published first-principles model for ammonia synthesis on an unpromoted Ru-based catalyst is extended to also describe ammonia decomposition. In addition, further analysis is provided concerning trends in ammonia productivity, surface conditions during the reaction, and macro-properties such as apparent activation energies and reaction orders. All observed trends in activity are captured by the model, and the absolute value of ammonia synthesis/decomposition productivity is predicted to within a factor of 1-100, depending on the experimental conditions. Moreover, it is shown: (i) that small changes in the relative adsorption potential energies are sufficient to obtain quantitative agreement between theory and experiment (Appendix A) and (ii) that it is possible to reproduce the results of the first-principles model with a simple micro-kinetic model (Appendix B).
Lu, Xingyu; Su, Weimin; Yang, Jianchao; Gu, Hong
2017-10-01
Narrowband interference (NBI) can severely degrade synthetic aperture radar (SAR) imaging quality. This paper proposes an NBI mitigation method using the variational mode decomposition (VMD). A coarse estimate of the NBI is obtained by independently decomposing the real and imaginary parts of the complex-valued raw echoes into a number of modes by VMD. Next, the modes that correspond to NBI are refined by a mask technique in the frequency domain. The interference is then mitigated by subtracting the refined NBI estimate from the echoes, and a well-focused SAR image is obtained by conventional imaging schemes. The proposed method outperforms other time-varying NBI mitigation methods, with smaller effective data loss and less impact on the focusing performance of the images. Results on simulated and measured data prove the validity of the proposed method.
Directory of Open Access Journals (Sweden)
Søren Holdt Jensen
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV, and ULLIV). In addition, we show how the subspace-based algorithms can be analyzed and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
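The signal-subspace idea this survey builds on can be illustrated with a plain truncated SVD of a Hankel (trajectory) matrix; the rank-revealing triangular decompositions it discusses are cheaper substitutes for the full SVD used in this compact Python analogue. Window length, rank, and the test signal are all illustrative choices.

```python
import numpy as np

def subspace_denoise(x, m=20, rank=4):
    """Rank-reduction noise reduction: embed the signal in a Hankel
    (trajectory) matrix, keep the dominant `rank` singular components,
    and average anti-diagonals back into a signal."""
    n = len(x) - m + 1
    H = np.lib.stride_tricks.sliding_window_view(x, m)          # n x m Hankel
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    y = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(n):                                          # Hankelization
        y[i:i + m] += Hr[i]
        cnt[i:i + m] += 1
    return y / cnt

t = np.linspace(0, 1, 400, endpoint=False)
clean = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
noisy = clean + 0.3 * np.random.default_rng(2).standard_normal(400)
den = subspace_denoise(noisy)
print(np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean))
```

A two-sinusoid signal occupies an (at most) four-dimensional subspace of the windowed data, so truncating to rank 4 removes most of the broadband noise while keeping the signal.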
On time-domain and frequency-domain MMSE-based TEQ design for DMT transmission
Vanbleu, K; Moonen, M; Ysebaert, G; 10.1109/TSP.2005.851161
2005-01-01
We reconsider the minimum mean square error (MMSE) time-domain equalizer (TEQ), bitrate maximizing TEQ (BM-TEQ), and per-tone equalizer design (PTEQ) for discrete multitone (DMT) transmission and cast them in a common least-squares (LS) based framework. The MMSE- TEQ design criterion can be formulated as a constrained linear least-squares (CLLS) criterion that minimizes a time-domain (TD) error energy. From this CLLS-based TD-MMSE-TEQ criterion, we derive two new least-squares (LS) based frequency-domain (FD) MMSE-TEQ design criteria: a CLLS-based FD-MMSE-TEQ criterion and a so-called separable nonlinear LS (SNLLS) based FD-MMSE-TEQ design. Finally, the original BM-TEQ design is shown to be equivalent to a so-called iteratively-reweighted (IR) version of the SNLLS-based FD-MMSE-TEQ design. This LS-based framework then results in the following contributions. The new, IR-SNLLS-based BM-TEQ design criterion gives rise to an elegant, iterative, fast converging, Gauss-Newton-based design algorithm that exploits th...
PREDICTD PaRallel Epigenomics Data Imputation with Cloud-based Tensor Decomposition.
Durham, Timothy J; Libbrecht, Maxwell W; Howbert, J Jeffry; Bilmes, Jeff; Noble, William Stafford
2018-04-11
The Encyclopedia of DNA Elements (ENCODE) and the Roadmap Epigenomics Project seek to characterize the epigenome in diverse cell types using assays that identify, for example, genomic regions with modified histones or accessible chromatin. These efforts have produced thousands of datasets but cannot possibly measure each epigenomic factor in all cell types. To address this, we present a method, PaRallel Epigenomics Data Imputation with Cloud-based Tensor Decomposition (PREDICTD), to computationally impute missing experiments. PREDICTD leverages an elegant model called "tensor decomposition" to impute many experiments simultaneously. Compared with the current state-of-the-art method, ChromImpute, PREDICTD produces lower overall mean squared error, and combining the two methods yields further improvement. We show that PREDICTD data captures enhancer activity at noncoding human accelerated regions. PREDICTD provides reference imputed data and open-source software for investigating new cell types, and demonstrates the utility of tensor decomposition and cloud computing, both promising technologies for bioinformatics.
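PREDICTD's underlying model can be illustrated with a plain CP decomposition fitted by alternating least squares; the actual method fits a parallel, regularized variant on cloud infrastructure, so this is only a conceptual numpy sketch on a small synthetic 3-way array.

```python
import numpy as np

def cp_als(T, rank, iters=100):
    """CP (canonical polyadic) decomposition of a 3-way array by plain
    alternating least squares: T ~ sum_r a_r (x) b_r (x) c_r."""
    rng = np.random.default_rng(0)
    A = [rng.standard_normal((T.shape[k], rank)) for k in range(3)]
    for _ in range(iters):
        for k in range(3):
            idx = [i for i in range(3) if i != k]
            # Khatri-Rao product of the two fixed factor matrices
            K = np.einsum('ir,jr->ijr', A[idx[0]], A[idx[1]]).reshape(-1, rank)
            Tk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)   # mode-k unfolding
            A[k] = Tk @ K @ np.linalg.pinv(K.T @ K)
    return A

rng = np.random.default_rng(3)
F = [rng.standard_normal((d, 3)) for d in (6, 7, 8)]
T = np.einsum('ir,jr,kr->ijk', *F)          # exact rank-3 tensor
A = cp_als(T, rank=3)
That = np.einsum('ir,jr,kr->ijk', *A)
print(np.linalg.norm(T - That) / np.linalg.norm(T))
```

Imputation then amounts to reading entries of the reconstructed tensor at positions that were never observed; PREDICTD does this at genome scale with missing-entry-aware training.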
SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation
Directory of Open Access Journals (Sweden)
Wu Yiquan
2017-08-01
To address the problems of the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images and their fusion image not being fit for human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images. Using a joint sparse representation model, we then generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images in the joint dictionary by the Orthogonal Matching Pursuit (OMP) method and select them using the selection maximization strategy. We then reconstruct these components to obtain the fused low-frequency components, and fuse the high-frequency components using two criteria: the coefficient of visual sensitivity and the degree of energy matching. Finally, we obtain the fusion image by the inverse complex contourlet transform. Compared with three classical fusion methods and recently presented fusion methods, e.g., one based on the Non-Subsampled Contourlet Transform (NSCT) and another based on sparse representation, the method we propose in this paper can effectively highlight the salient features of the two source images and inherit their information to the greatest extent.
Directory of Open Access Journals (Sweden)
Arjun Singh
2017-02-01
This work describes the thermal decomposition behaviour of plastic bonded explosives (PBXs) based on mixtures of 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX) and 2,4,6-triamino-1,3,5-trinitrobenzene (TATB) with Viton A as the polymer binder. Thermal decomposition of the PBXs was studied by simultaneous thermal analysis (STA) and differential scanning calorimetry (DSC) to investigate the influence of the HMX content on thermal behaviour and its kinetics. Thermogravimetric analysis (TGA) indicated that the thermal decomposition of PBXs based on mixtures of HMX and TATB occurred in three steps. The first step was mainly due to decomposition of HMX, the second step was ascribed to decomposition of TATB, and the third step was due to decomposition of the polymer matrix. The extent of thermal decomposition increased with increasing HMX content. The kinetics of thermal decomposition were investigated under non-isothermal conditions for a single heating-rate measurement. The activation energy of PBXs based on mixtures of HMX and TATB varied with the HMX content. The kinetics were also calculated from TGA data at various heating rates under non-isothermal conditions by the Flynn-Wall-Ozawa (FWO) and Kissinger-Akahira-Sunose (KAS) methods. The activation energies calculated by the FWO method were very close to those obtained by the KAS method, and the mean activation energy from both methods was in good agreement with the activation energy obtained from the single heating-rate measurement of the first decomposition step.
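The KAS fit is a straight line of ln(β/Tp²) against 1/Tp whose slope is -E/R. A self-consistent numeric sketch follows, with a hypothetical activation energy and model constant (not data from the paper): peak temperatures are synthesized from the KAS relation and the fit recovers the assumed E.

```python
import numpy as np

R = 8.314                      # gas constant, J/(mol K)
E_true = 150e3                 # assumed activation energy, J/mol (hypothetical)
C = 22.5                       # arbitrary model constant, not from the paper

# synthesize peak temperatures Tp(beta) consistent with the KAS relation
# ln(beta / Tp^2) = C - E / (R * Tp), by fixed-point iteration
beta = np.array([2.0, 5.0, 10.0, 20.0])   # heating rates, K/min
Tp = np.full_like(beta, 500.0)
for _ in range(100):
    Tp = E_true / (R * (C - np.log(beta) + 2.0 * np.log(Tp)))

# KAS fit: the slope of ln(beta / Tp^2) against 1/Tp is -E/R
slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
print(-slope * R / 1e3)        # recovered activation energy, kJ/mol
```

The FWO method differs only in the ordinate (ln β instead of ln(β/Tp²)) and in a Doyle-approximation correction factor on the slope, which is why the two methods typically give very similar activation energies.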
Dynamic Mode Decomposition based on Bootstrapping Extended Kalman Filter Application to Noisy data
Nonomura, Taku; Shibata, Hisaichi; Takaki, Ryoji
2017-11-01
In this study, dynamic mode decomposition (DMD) based on a bootstrapping extended Kalman filter is proposed for time-series data. In this framework, the state variables (x and y) are filtered in addition to the parameter estimation (aij) that is conducted in conventional DMD and the standard Kalman-filter-based DMD. The filtering of the state variables enables us to obtain highly accurate eigenvalues of the system even under strong noise. The formulation, advantages, and disadvantages are discussed. This research is partially supported by Presto, JST (JPMJPR1678).
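For reference, standard noise-free DMD, the baseline that the Kalman-filter variants extend, can be written compactly; the snapshot matrices and the decaying-oscillation test case below are illustrative.

```python
import numpy as np

def dmd(X, Y, rank):
    """Standard (noise-free) exact DMD: project the one-step map Y ~ A X
    onto the dominant POD subspace of X and return its eigenvalues."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = U.conj().T @ Y @ Vt.conj().T / s
    return np.linalg.eigvals(Atilde)

# snapshots of one decaying oscillation; DMD should recover the true
# multiplier lam (and its conjugate) as eigenvalues
lam = 0.95 * np.exp(1j * 0.3)
z = lam ** np.arange(50)
states = np.vstack([z.real, z.imag])
eigs = dmd(states[:, :-1], states[:, 1:], rank=2)
print(eigs)
```

With noisy snapshots this plain estimator becomes biased, which is exactly the problem the filtered variants address by estimating the states as well as the system parameters.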
Energy Technology Data Exchange (ETDEWEB)
Zhu, Yong; Jiang, Wan-lu; Kong, Xiang-dong [Yanshan University, Hebei (China)
2017-02-15
In mechanical fault diagnosis and condition monitoring, extracting and eliminating the trend term of a machinery signal is necessary. In this paper, an adaptive extraction method for the trend term of a machinery signal based on extreme-point symmetric mode decomposition (ESMD) is proposed. The method fully utilizes ESMD, including its self-adaptive decomposition feature and optimal fitting strategy. Its effectiveness and practicability are tested through simulation analysis and measured-data validation. Results indicate that the method can adaptively extract various trend terms hidden in machinery signals and has commendable self-adaptability. Moreover, the extraction results are better than those of empirical mode decomposition.
Rai, Akhand; Upadhyay, S. H.
2017-09-01
The bearing is the most critical component in rotating machinery since it is the most susceptible to failure. Monitoring the degradation of bearings is therefore of great concern for averting sudden machinery breakdown. In this study, a novel method for bearing performance degradation assessment (PDA) based on a combination of empirical mode decomposition (EMD) and k-medoids clustering is proposed. Fault features are extracted from the bearing signals using EMD. The extracted features are then subjected to k-medoids clustering to obtain the normal-state and failure-state cluster centres. A confidence value (CV) curve, based on the dissimilarity of each test data object to the normal state, is obtained and employed as the degradation indicator for assessing bearing health. The proposed approach is applied to vibration signals collected in run-to-failure tests of bearings to assess its effectiveness in bearing PDA. To validate its superiority, it is compared with the commonly used time-domain features RMS and kurtosis, the well-known fault diagnosis method of envelope analysis (EA), and existing PDA classifiers, i.e., self-organizing maps (SOM) and fuzzy c-means (FCM). The results demonstrate that the recommended method outperforms the time-domain features and SOM- and FCM-based PDA in detecting early-stage degradation more precisely. Moreover, EA can be used as an accompanying method to confirm early-stage defects detected by the proposed bearing PDA approach. The study shows the potential of k-medoids clustering as an effective tool for PDA of bearings.
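The clustering-based degradation indicator can be sketched with a minimal k-medoids and a toy RMS feature standing in for the EMD-derived features; the exp(-distance) confidence value is an illustrative choice for this sketch, not necessarily the paper's exact formula.

```python
import numpy as np

def k_medoids(X, k=2, iters=20):
    """Minimal k-medoids: alternate nearest-medoid assignment with
    choosing each cluster's member that minimizes summed distance."""
    rng = np.random.default_rng(0)
    med = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - med[None], axis=2)
        lab = d.argmin(axis=1)
        for j in range(k):
            pts = X[lab == j]
            if len(pts):
                cost = np.abs(pts[:, None] - pts[None]).sum(axis=(1, 2))
                med[j] = pts[cost.argmin()]
    return med, lab

# toy degradation run: the RMS of the vibration signal grows as the
# "bearing" degrades (a stand-in for the EMD-derived features)
rng = np.random.default_rng(4)
rms = np.concatenate([1.0 + 0.05 * rng.standard_normal(50),
                      3.0 + 0.05 * rng.standard_normal(50)])
X = rms[:, None]
med, lab = k_medoids(X)
normal = med[np.argmin(med[:, 0])]              # low-RMS medoid = normal state
cv = np.exp(-np.abs(X - normal).ravel())        # confidence value: 1 = healthy
print(cv[:3].round(2), cv[-3:].round(2))
```

The CV curve stays near 1 while the features resemble the normal-state medoid and drops as the test objects drift away from it, which is the degradation-tracking behaviour the paper exploits.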
Ku, Hwar-Ching; Ramaswamy, Bala
1993-01-01
The new multigrid (or adaptive) pseudospectral element method was carried out for the solution of incompressible flow in terms of the primitive variable formulation. The desired features of the proposed method include: (1) the ability to treat complex geometry; (2) high resolution adapted to the areas of interest; (3) minimal working-space requirements; and (4) effectiveness in a multiprocessing environment. The approach for flow problems, complex geometry or not, is to first divide the computational domain into a number of fine-grid and coarse-grid subdomains with inter-overlapping areas. Next, the Schwarz alternating procedure (SAP) is implemented to exchange data among subdomains, where a coarse-grid correction is used to remove the high-frequency error that occurs when data are interpolated from a fine-grid subdomain to a coarse-grid subdomain. The strategy behind the coarse-grid correction is to adopt the operator of the divergence of the velocity field, which intrinsically links the pressure equation, into this process. The solution on each subdomain can be efficiently obtained by the direct (or iterative) eigenfunction expansion technique with the least storage requirement, i.e., O(N(exp 3)) in 3-D and O(N(exp 2)) in 2-D. Numerical results for both driven cavity and jet flow are presented to demonstrate the versatility of the proposed method.
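The Schwarz alternating procedure that underlies the subdomain data exchange can be illustrated on a 1-D Poisson problem with two overlapping subdomains; the dense direct solver below is a toy stand-in for the eigenfunction-expansion subdomain solves, and the overlap indices are illustrative.

```python
import numpy as np

def solve_poisson(f, a, b, h):
    """Direct solve of -u'' = f on interior nodes with Dirichlet values a, b."""
    n = len(f)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = f.copy()
    rhs[0] += a / h**2
    rhs[-1] += b / h**2
    return np.linalg.solve(A, rhs)

# Schwarz alternating procedure on two overlapping subdomains of [0, 1]
N = 101
h = 1.0 / (N - 1)
x = np.linspace(0.0, 1.0, N)
f = np.ones(N)                 # -u'' = 1, u(0) = u(1) = 0
u = np.zeros(N)
lo, hi = 60, 40                # subdomain 1: [0, x[lo]]; subdomain 2: [x[hi], 1]
for _ in range(30):
    # each subdomain solve picks up its interface value from the other
    u[1:lo] = solve_poisson(f[1:lo], 0.0, u[lo], h)
    u[hi + 1:N - 1] = solve_poisson(f[hi + 1:N - 1], u[hi], 0.0, h)
exact = 0.5 * x * (1.0 - x)
print(np.abs(u - exact).max())
```

The iteration contracts the interface error geometrically at a rate set by the overlap width, which is why the overlap region between fine- and coarse-grid subdomains is essential to the method.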
Haris, A.; Morena, V.; Riyanto, A.; Zulivandama, S. R.
2017-07-01
Non-stationary signals from a seismic survey are difficult to interpret directly with time-domain analysis. Spectral decomposition is a spectral analysis method that can analyze non-stationary signals in the frequency domain. The Fast Fourier Transform was commonly used for spectral decomposition analysis; however, this method has a limitation in its window analysis and produces poor-quality imaging of low-frequency shadows. The S-Transform and Empirical Mode Decomposition (EMD) are other spectral decomposition methods that can be used to enhance low-frequency shadows. In this research, a comparison of the S-Transform and EMD methods, showing the difference in the imaging of low-frequency shadow zones, is applied to the Eldo Field, Jambi Province. The spectral decomposition results show that the EMD method produced better imaging of the low-frequency shadow zone at tuning thickness than the S-Transform method.
Directory of Open Access Journals (Sweden)
Baiyan Chen
2017-01-01
The vibration signal of a motor bearing has strong nonstationary and nonlinear characteristics, and it is difficult to accurately recognize the degradation state of the motor bearing with traditional single time-domain or frequency-domain indexes. A hybrid-domain feature extraction method based on the distance evaluation technique (DET) is proposed to solve this problem. Firstly, the vibration signal of the motor bearing is decomposed by ensemble empirical mode decomposition (EEMD). The intrinsic mode function (IMF) component that is most sensitive to the degradation of the motor bearing is selected according to a sensitive-IMF selection algorithm based on similarity evaluation. Then the distance evaluation factor of each characteristic parameter is calculated by the DET method. The differential method is used to extract sensitive characteristic parameters, which compose the characteristic matrix. The extracted degradation characteristic matrix is then used as the input of a support vector machine (SVM) to identify the degradation state. Finally, comparative analysis demonstrates that the proposed hybrid-domain feature extraction method has higher recognition accuracy and shorter recognition time. The positive performance of the method is verified.
Huang, Wentao; Sun, Hongjian; Wang, Weijie
2017-06-03
Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. In terms of the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Followed by a brief introduction of RSSD's theoretical foundation, based on different optimization directions, applications of RSSD in mechanical fault diagnosis are categorized into five aspects: original RSSD, parameter optimized RSSD, subband optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, as well as corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.
Decomposition of LiDAR waveforms by B-spline-based modeling
Shen, Xiang; Li, Qing-Quan; Wu, Guofeng; Zhu, Jiasong
2017-06-01
Waveform decomposition is a widely used technique for extracting echoes from full-waveform LiDAR data. Most previous studies recommended the Gaussian decomposition approach, which employs the Gaussian function in laser pulse modeling. As the Gaussian-shape assumption is not always satisfied for real LiDAR waveforms, some other probability distributions (e.g., the lognormal distribution, the generalized normal distribution, and the Burr distribution) have also been introduced by researchers to fit sharply-peaked and/or heavy-tailed pulses. However, these models cannot be universally used, because they are only suitable for processing the LiDAR waveforms in particular shapes. In this paper, we present a new waveform decomposition algorithm based on the B-spline modeling technique. LiDAR waveforms are not assumed to have a priori shapes but rather are modeled by B-splines, and the shape of a received waveform is treated as the mixture of finite transmitted pulses after translation and scaling transformation. The performance of the new model was tested using two full-waveform data sets acquired by a Riegl LMS-Q680i laser scanner and an Optech Aquarius laser bathymeter, comparing with three classical waveform decomposition approaches: the Gaussian, generalized normal, and lognormal distribution-based models. The experimental results show that the B-spline model performed the best in terms of waveform fitting accuracy, while the generalized normal model yielded the worst performance in the two test data sets. Riegl waveforms have nearly Gaussian pulse shapes and were well fitted by the Gaussian mixture model, while the B-spline-based modeling algorithm produced a slightly better result by further reducing 6.4% of fitting residuals, largely benefiting from alleviating the adverse impact of the ringing effect. The pulse shapes of Optech waveforms, on the other hand, are noticeably right-skewed. The Gaussian modeling results deviated significantly from original signals, and
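The classical Gaussian-decomposition baseline that the B-spline model is compared against can be sketched as a greedy peak-subtraction fit, a simplified stand-in for the usual nonlinear least-squares optimization; the two-echo waveform and all parameters below are synthetic.

```python
import numpy as np

def gauss(t, a, m, s):
    return a * np.exp(-(t - m)**2 / (2 * s**2))

# synthetic two-echo waveform (Gaussian-shaped pulses, the classical model)
t = np.arange(0, 60, 0.5)
wave = gauss(t, 1.0, 20.0, 2.0) + gauss(t, 0.6, 35.0, 3.0)

# greedy decomposition: repeatedly locate the strongest peak, estimate
# the echo by local moments, and subtract it from the residual
residual = wave.copy()
echoes = []
for _ in range(2):
    p = residual.argmax()
    win = slice(max(p - 12, 0), p + 12)          # local window around the peak
    w = np.clip(residual[win], 0, None)
    m = (t[win] * w).sum() / w.sum()             # centroid -> echo position
    s = np.sqrt(((t[win] - m)**2 * w).sum() / w.sum())
    a = residual[p]
    echoes.append((a, m, s))
    residual = residual - gauss(t, a, m, s)
print(sorted(round(m, 1) for _, m, _ in echoes))
```

Each recovered (a, m, s) triple is a range estimate for one echo; a B-spline model replaces the fixed Gaussian shape with a flexible pulse learned from the data, which is what helps with ringing and skewed pulses.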
Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots
Directory of Open Access Journals (Sweden)
Ching-Long Shih
2012-08-01
This paper aims to demonstrate a clear relationship between the Lagrange equations and the Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method suitable for either symbolic or on-line numerical computation. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numerical computation purposes, and that it can also be applied to biped systems as well as some simple closed-chain robot systems.
International Nuclear Information System (INIS)
Tsuji, Masashi; Chiba, Gou
2000-01-01
A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish a high parallel efficiency on distributed-memory message-passing parallel computers. Data exchanges between node processors that are repeated during the iteration processes of HDD-BEM are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, parallel processing can be executed with only cooperative operations of the node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply as the number of processors increased. With parallel data communication, the efficiency is affected only by the number of boundary elements assigned to the decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the problem of deteriorating parallel efficiency and opens a new path to parallel computations of NDEs on distributed-memory message-passing parallel computers. (author)
A comparative study on book shelf structure based on different domain modal analysis
Sabamehr, Ardalan; Roy, Timir Baran; Bagchi, Ashutosh
2017-04-01
Structural Health Monitoring (SHM) based on the vibration of structures has been a very attractive topic for researchers in different fields such as civil, aeronautical and mechanical engineering. The aim of this paper is to compare the three most common modal identification techniques, Frequency Domain Decomposition (FDD), Stochastic Subspace Identification (SSI) and Continuous Wavelet Transform (CWT), in finding the modal properties (natural frequency, mode shape and damping ratio) of a three-story bookshelf steel structure which was built in the Concordia University lab. A modified complex Morlet wavelet has been selected as the wavelet in order to use an asymptotic signal rather than a real one, with a variable bandwidth and wavelet central frequency. Thus, CWT is able to detect the instantaneous modulus and phase by using local maxima ridge detection.
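The core of FDD is a singular value decomposition of the cross-spectral density matrix at every frequency line, with modal frequencies appearing as peaks of the first singular value. The following is an illustrative sketch, not the authors' implementation: the two-channel synthetic signal, the 5 Hz mode, and all parameter values (`nperseg`, noise level) are assumptions for demonstration.

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_value(signals, fs, nperseg=256):
    """FDD sketch: build the cross-spectral density (CSD) matrix at each
    frequency line and take its SVD; peaks of the first singular value
    indicate candidate natural frequencies."""
    n_ch = signals.shape[0]
    freqs, _ = csd(signals[0], signals[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(signals[i], signals[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0]
                   for k in range(len(freqs))])
    return freqs, s1

# Synthetic two-channel response dominated by a single 5 Hz mode
fs = 100.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 5.0 * t)
x = np.vstack([1.0 * mode, 0.6 * mode]) + 0.1 * rng.standard_normal((2, t.size))

freqs, s1 = fdd_first_singular_value(x, fs)
f_est = freqs[np.argmax(s1)]   # estimated natural frequency
```

In practice the mode shape is read from the first left singular vector at the peak, and damping is estimated from the autocorrelation of the identified single-degree-of-freedom spectral bell.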
Directory of Open Access Journals (Sweden)
Hui Chen
2017-07-01
Full Text Available Seismic time-frequency analysis methods can be used for hydrocarbon detection because of the phenomena of energy and abnormal attenuation of frequency when seismic waves travel across reservoirs. A high-resolution method based on variational mode decomposition (VMD), continuous wavelet transform (CWT) and the frequency-weighted energy operator (FWEO) is proposed for hydrocarbon detection in tight sandstone gas reservoirs. VMD can decompose seismic signals into a set of intrinsic mode functions (IMFs) in the frequency domain. In order to avoid the loss of meaningful frequencies, the CWT method is used to obtain the time-frequency spectra of the selected IMFs. The energy separation algorithm based on the FWEO can improve the resolution of the time-frequency spectra and highlight abnormal energy, and it is applied to track the instantaneous energy in the time-frequency spectra. The difference between the high-frequency section and the low-frequency section acquired by applying the proposed method is utilized to detect hydrocarbons. Applications to model and field data further demonstrate that the proposed method can effectively detect hydrocarbons in tight sandstone reservoirs, with good anti-noise performance. The newly proposed method can be used as an analysis tool to detect hydrocarbons.
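Frequency-weighted energy operators build on the classical discrete Teager-Kaiser energy operator, which tracks a signal's amplitude and frequency jointly. As a hedged illustration of the energy-tracking idea (not the paper's exact FWEO formulation), here is the Teager-Kaiser operator, which is exactly constant for a pure sinusoid:

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
        psi[n] = x[n]^2 - x[n-1] * x[n+1].
    For a sinusoid A*sin(w*n) this equals A^2 * sin(w)^2 exactly,
    i.e. ~ A^2 * w^2 at low digital frequency, so the output grows
    with both amplitude and frequency ("frequency-weighted" energy)."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

n = np.arange(1000)
w = 0.1    # digital frequency in rad/sample
A = 2.0
psi = teager_kaiser(A * np.sin(w * n))
expected = A ** 2 * np.sin(w) ** 2   # constant value for a pure tone
```

Because the operator responds to the product of amplitude and frequency, attenuation-related energy anomalies stand out more sharply than in a plain amplitude envelope.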
Bayesian Nonnegative CP Decomposition-Based Feature Extraction Algorithm for Drowsiness Detection.
Qian, Dong; Wang, Bei; Qing, Xiangyun; Zhang, Tao; Zhang, Yu; Wang, Xingyu; Nakamura, Masatoshi
2017-08-01
A daytime short nap involves physiological processes such as alertness, drowsiness and sleep. Studying the relationship between drowsiness and napping based on physiological signals is a good way to better understand the periodic rhythms of physiological states. A model of Bayesian nonnegative CP decomposition (BNCPD) was proposed to extract common multiway features from group-level electroencephalogram (EEG) signals. As an extension of the nonnegative CP decomposition, the BNCPD model involves prior distributions of the factor matrices, while the underlying CP rank can be determined automatically based on a Bayesian nonparametric approach. To keep the computation tractable, variational inference was applied to approximate the posterior distributions of the unknowns. Extensive simulations on synthetic data illustrated the capability of our model to recover the true CP rank. As a real-world application, the performance of drowsiness detection during a daytime short nap using the BNCPD-based features was compared with that of other traditional feature extraction methods. Experimental results indicated that the BNCPD model outperformed the other methods in terms of two evaluation metrics, as well as across different parameter settings. Our approach is likely to be a useful tool for automatic CP rank determination and for offering plausible multiway physiological information about individual states.
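The Bayesian machinery of BNCPD is beyond a short sketch, but the underlying nonnegative CP model itself can be illustrated with a plain alternating-least-squares fit with nonnegativity clipping. This is a simplified, non-Bayesian stand-in: the rank is fixed here, whereas BNCPD infers it automatically, and all dimensions and data are synthetic assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` first, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def nn_cp_als(T, rank, n_iter=200, seed=0):
    """Nonnegative CP by alternating least squares with clipping."""
    rng = np.random.default_rng(seed)
    factors = [rng.random((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(T.ndim):
            others = [factors[m] for m in range(T.ndim) if m != mode]
            kr = others[0]
            for M in others[1:]:
                kr = khatri_rao(kr, M)
            # Solve kr @ F^T = unfold(T, mode)^T in least squares
            sol, *_ = np.linalg.lstsq(kr, unfold(T, mode).T, rcond=None)
            factors[mode] = np.clip(sol.T, 1e-12, None)  # enforce nonnegativity
    return factors

# Exact rank-2 nonnegative tensor (synthetic stand-in for EEG features)
rng = np.random.default_rng(1)
A, B, C = (rng.random((d, 2)) for d in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

F = nn_cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', *F)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

In the drowsiness-detection setting, the recovered factor matrices (channel, frequency, time/subject modes) would serve as the multiway features fed to a classifier.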
Inverse Method of Centrifugal Pump Impeller Based on Proper Orthogonal Decomposition (POD) Method
Zhang, Ren-Hui; Guo, Rong; Yang, Jun-Hu; Luo, Jia-Qi
2017-07-01
To improve the accuracy and reduce the calculation cost of the inverse problem of the centrifugal pump impeller, a new inverse method based on proper orthogonal decomposition (POD) is proposed. The pump blade shape is parameterized by a quartic Bezier curve, and the initial snapshots are generated by introducing perturbations of the blade shape control parameters. The internal flow field and the hydraulic performance are predicted by a CFD method. The snapshot vector includes the blade shape parameters and the distribution of blade load. The POD basis for the snapshot set is deduced by proper orthogonal decomposition. The sample vector set is expressed in terms of a linear combination of the orthogonal basis. The objective blade shape corresponding to the objective distribution of blade load is obtained by a least squares fit. An iterative correction algorithm for the POD-based centrifugal pump blade inverse method is proposed, in which the objective blade load distributions are corrected according to the difference between the CFD result and the POD result. Two-dimensional and three-dimensional blade calculation cases show that the proposed POD-based inverse method has good convergence and high accuracy, and the calculation cost is greatly reduced. After two iterations, the deviations of the blade load and the pump hydraulic performance are limited to within 4.0% and 6.0%, respectively, for most of the flow rate range. This paper provides a promising inverse method for the centrifugal pump impeller, which will benefit the hydraulic optimization of centrifugal pumps.
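The inverse step described above — stacking shape parameters and load distributions into snapshots, extracting a POD basis, and recovering the shape that matches a target load by least squares — can be sketched generically. All dimensions and the low-rank surrogate data below are assumptions; the real snapshots would come from CFD runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each snapshot column stacks blade-shape parameters on top of the
# resulting blade-load distribution. Here a low-rank random surrogate
# stands in for CFD-generated snapshots.
n_shape, n_load, n_snap = 8, 40, 25
S = rng.standard_normal((n_shape + n_load, 5)) @ rng.standard_normal((5, n_snap))

# POD basis from the SVD of the snapshot matrix
U, s, Vt = np.linalg.svd(S, full_matrices=False)
r = 10                      # truncation rank (>= true rank here)
Phi = U[:, :r]              # POD modes over the stacked (shape, load) vector

# Inverse step: given a target load distribution, fit the modal
# coefficients on the load block only ...
target_load = S[n_shape:, 3]          # a load known to be reachable
a, *_ = np.linalg.lstsq(Phi[n_shape:, :], target_load, rcond=None)

# ... then read off the implied shape parameters from the same coefficients.
recovered_shape = Phi[:n_shape, :] @ a
```

Because shape and load share one set of modal coefficients, fitting the load block pins down the coefficients, and the shape block of the basis then yields the corresponding geometry without a new CFD solve.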
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology for incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
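Globally optimal event placement by dynamic programming can be illustrated with a generic 1-D segmentation: choose K segments so that the total within-segment squared error is minimal. The squared-error criterion and the synthetic data are stand-ins; the paper's actual cost is a TD model-accuracy criterion.

```python
import numpy as np

def dp_segment(x, K):
    """Optimal placement of segment boundaries ("events") by dynamic
    programming: minimize total within-segment squared error over all
    ways of splitting x into K contiguous segments."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    csum = np.concatenate([[0.0], np.cumsum(x)])
    csum2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

    def sse(i, j):
        """Sum of squared errors of segment x[i:j] about its mean."""
        s = csum[j] - csum[i]
        return (csum2[j] - csum2[i]) - s * s / (j - i)

    INF = float('inf')
    D = np.full((K + 1, n + 1), INF)     # D[k, j]: best cost of x[:j] in k segments
    back = np.zeros((K + 1, n + 1), dtype=int)
    D[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = D[k - 1, i] + sse(i, j)
                if c < D[k, j]:
                    D[k, j], back[k, j] = c, i

    # Backtrack the boundary positions
    bounds, j = [], n
    for k in range(K, 0, -1):
        j = back[k, j]
        bounds.append(j)
    return sorted(bounds[:-1]), D[K, n]

x = [0.0] * 10 + [5.0] * 10 + [9.0] * 10   # three flat "events"
bounds, cost = dp_segment(x, 3)             # optimal boundaries at 10 and 20
```

Unlike greedy stability-based placement, the DP guarantees the chosen boundaries are jointly optimal for the stated cost.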
Directory of Open Access Journals (Sweden)
Yu-Fei Gao
2017-04-01
Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation of array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications to the EMVS array, such as the strict requirements of the uniqueness conditions of the decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. A 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of the decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.
Directory of Open Access Journals (Sweden)
Jingjing Ma
2014-01-01
Full Text Available Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms.
Directory of Open Access Journals (Sweden)
Hui Lu
2014-01-01
Full Text Available The test task scheduling problem (TTSP) is a complex optimization problem with many local optima. In this paper, a hybrid chaotic multiobjective evolutionary algorithm based on decomposition (CMOEA/D) is presented to avoid becoming trapped in local optima and to obtain high quality solutions. First, we propose an improved integrated encoding scheme (IES) to increase efficiency. Then, ten chaotic maps are applied to the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in three phases: the initial population and the crossover and mutation operators. To identify a good approach for hybridizing MOEA/D with chaos and to indicate the effectiveness of the improved IES, several experiments are performed. The Pareto fronts and the statistical results demonstrate that different chaotic maps in different phases have different effects for solving the TTSP, especially the circle map and the ICMIC map. The degree of similarity between the distribution of the chaotic map and that of the problem is an essential factor for the application of chaotic maps. In addition, experiments comparing CMOEA/D and variable neighborhood MOEA/D (VNM) indicate that our algorithm has the best performance in solving the TTSP.
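The "initial population" phase of such hybrids typically replaces uniform random draws with iterates of a chaotic map. As a hedged sketch (the logistic map is one of the ten maps commonly used; the parameter values here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def logistic_map_population(pop_size, dim, x0=0.7, mu=4.0):
    """Chaotic population initialization with the logistic map
        x_{k+1} = mu * x_k * (1 - x_k),  mu = 4 (fully chaotic regime).
    Successive iterates ergodically fill (0, 1) and stand in for
    uniform random draws; each entry is later scaled to the decision
    variable's own bounds."""
    x = x0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = mu * x * (1.0 - x)
            pop[i, j] = x
    return pop

pop = logistic_map_population(20, 5)
```

The same map can drive crossover and mutation by substituting chaotic iterates for the random numbers those operators consume, which is how the three hybridization phases differ.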
Directory of Open Access Journals (Sweden)
Chun Wang
2017-01-01
Full Text Available A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes makespan, total workload, and critical workload. First, a population is initialized by employing an integration of different machine assignment and operation sequencing strategies. Second, the multiobjective memetic algorithm based on decomposition is presented by introducing a local search into MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem into several single-objective optimization subproblems, and the weight vectors are grouped by K-means clustering. Some good individuals corresponding to different weight vectors are selected by the tournament mechanism of the local search. In the experiments, the influence of three different aggregation functions is first studied. Moreover, the effect of the proposed local search is investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances, and the experimental results show that the proposed algorithm outperforms or at least has comparable performance to the other algorithms.
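The Tchebycheff conversion mentioned above is a one-line scalarization: each weight vector defines one single-objective subproblem. A minimal sketch, with objective values and weights chosen purely for illustration:

```python
def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization used by MOEA/D: for a weight vector w
    and ideal point z*, the subproblem objective is
        g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
    Minimizing g over x pushes the solution toward z* along the
    direction encoded by w."""
    return max(w * abs(fi - zi) for fi, w, zi in zip(f, weights, z_star))

# Three objectives, e.g. makespan, total workload, critical workload
f = [120.0, 300.0, 95.0]       # objective values of one candidate schedule
z_star = [100.0, 280.0, 90.0]  # ideal point observed so far
g = tchebycheff(f, [0.5, 0.3, 0.2], z_star)
# g = max(0.5*20, 0.3*20, 0.2*5) = 10.0
```

A population of such weight vectors, clustered here by K-means, yields a family of subproblems whose minimizers approximate the whole Pareto front.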
Ma, Jingjing; Liu, Jie; Ma, Wenping; Gong, Maoguo; Jiao, Licheng
2014-01-01
Community structure is one of the most important properties in social networks. In dynamic networks, there are two conflicting criteria that need to be considered. One is the snapshot quality, which evaluates the quality of the community partitions at the current time step. The other is the temporal cost, which evaluates the difference between communities at different time steps. In this paper, we propose a decomposition-based multiobjective community detection algorithm to simultaneously optimize these two objectives to reveal community structure and its evolution in dynamic networks. It employs the framework of multiobjective evolutionary algorithm based on decomposition to simultaneously optimize the modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and temporal cost, respectively. A local search strategy dealing with the problem-specific knowledge is incorporated to improve the effectiveness of the new algorithm. Experiments on computer-generated and real-world networks demonstrate that the proposed algorithm can not only find community structure and capture community evolution more accurately, but also be steadier than the two compared algorithms. PMID:24723806
Decomposition of Polarimetric SAR Images Based on Second- and Third-order Statistics Analysis
Kojima, S.; Hensley, S.
2012-12-01
There are many papers concerning the decomposition of polarimetric SAR imagery. Most of them are based on the second-order statistics analysis that Freeman and Durden [1] suggested for the reflection symmetry condition, which implies that the co-polarization and cross-polarization correlations are close to zero. Since then, a number of improvements and enhancements have been proposed to better understand the underlying backscattering mechanisms present in polarimetric SAR images. For example, Yamaguchi et al. [2] added the helix component to Freeman's model and developed a 4-component scattering model for the non-reflection symmetry condition. In addition, Arii et al. [3] developed an adaptive model-based decomposition method that could estimate both the mean orientation angle and a degree of randomness of the canopy scattering for each pixel in a SAR image without the reflection symmetry condition. The purpose of this research is to develop a new decomposition method based on second- and third-order statistics analysis to estimate the surface, dihedral, volume and helix scattering components from polarimetric SAR images without specific assumptions concerning the model for the volume scattering. In addition, we evaluate this method by using both simulation and real UAVSAR data and compare it with other methods. We express the volume scattering component using the wire formula and formulate the relationship equations between the backscattering echo and each component, such as surface, dihedral, volume and helix, via linearization based on second- and third-order statistics. In the third-order statistics, we calculate the correlation of the correlation coefficients for each polarimetric channel and obtain one new relationship equation to estimate each polarization component, such as HH, VV and VH, for the volume. As a result, the equation for the helix component in this method is the same formula as in Yamaguchi's method. However, the equation for the volume
Mode decomposition evolution equations.
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2012-03-01
Partial differential equation (PDE) based methods have become some of the most powerful tools for exploring fundamental problems in signal processing, image processing, computer vision, machine vision and artificial intelligence in the past two decades. The advantages of PDE-based approaches are that they can be made fully automatic and robust for the analysis of images, videos and high-dimensional data. A fundamental question is whether one can use PDEs to perform all the basic tasks in image processing. If one can devise PDEs to perform full-scale mode decomposition for signals and images, the modes thus generated would be very useful for secondary processing to meet the needs of various types of signal and image processing. Despite great progress in PDE-based image analysis in the past two decades, the basic roles of PDEs in image/signal analysis have been limited to PDE-based low-pass filters and their applications to noise removal, edge detection, segmentation, etc. At present, it is not clear how to construct PDE-based methods for full-scale mode decomposition. The above-mentioned limitation of most current PDE-based image/signal processing methods is addressed in the proposed work, in which we introduce a family of mode decomposition evolution equations (MoDEEs) for a vast variety of applications. The MoDEEs are constructed as an extension of a PDE-based high-pass filter (Europhys. Lett., 59(6): 814, 2002) by using arbitrarily high-order PDE-based low-pass filters introduced by Wei (IEEE Signal Process. Lett., 6(7): 165, 1999). The use of arbitrarily high-order PDEs is essential to the frequency localization in the mode decomposition. Similar to the wavelet transform, the present MoDEEs have a controllable time-frequency localization and allow a perfect reconstruction of the original function. Therefore, the MoDEE operation is also called a PDE transform. However, modes generated from the present approach are in the spatial or time domain and can be
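The simplest instance of the PDE low-pass/high-pass idea behind the MoDEEs is second-order diffusion: evolving a signal under the heat equation damps high wavenumbers, and subtracting the smoothed signal from the original yields a high-frequency "mode". This is only a second-order sketch of the principle; the MoDEEs themselves use arbitrarily high-order PDEs for sharper frequency localization, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def diffuse(u, steps, dt=0.25):
    """Explicit heat-equation smoothing u_t = u_xx on a periodic grid.
    Each step damps wavenumber k by roughly (1 - 4*dt*sin^2(pi*k/N)),
    so many steps act as a PDE low-pass filter (stable for dt <= 0.5)."""
    for _ in range(steps):
        u = u + dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

N = 256
n = np.arange(N)
low = np.sin(2 * np.pi * 2 * n / N)           # slow component (2 cycles)
high = 0.5 * np.sin(2 * np.pi * 40 * n / N)   # fast component (40 cycles)
sig = low + high

smooth = diffuse(sig, steps=20)   # PDE low-pass output ~ low component
mode = sig - smooth               # high-pass "mode" ~ fast component
```

Since `sig = smooth + mode` by construction, the decomposition reconstructs the original perfectly; higher-order evolution operators sharpen the cutoff between the two bands.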
Alsharoa, Ahmad M.
2015-05-01
In this paper, the problem of radio and power resource management in long term evolution heterogeneous networks (LTE HetNets) is investigated. The goal is to minimize the total power consumption of the network while satisfying the user quality of service determined by each target data rate. We study the model where one macrocell base station is placed in the cell center, and multiple small cell base stations and femtocell access points are distributed around it. The dual decomposition technique is adopted to jointly optimize the power and carrier allocation in the downlink direction in addition to the selection of turned off small cell base stations. Our numerical results investigate the performance of the proposed scheme versus different system parameters and show an important saving in terms of total power consumption. © 2015 IEEE.
Directory of Open Access Journals (Sweden)
Lajmert Paweł
2018-01-01
Full Text Available In this paper, the cutting stability of the milling process of the nickel-based alloy Inconel 625 is analysed. This problem is often considered theoretically, but the theoretical findings do not always agree with experimental results. For this reason, the paper presents different methods for instability identification during the real machining process. A stability lobe diagram is created based on data obtained in an impact test of an end mill. Next, cutting tests were conducted in which the axial depth of cut was gradually increased in order to find the stability limit. Finally, based on the cutting force measurements, the stability estimation problem is investigated using the recurrence plot technique and the Hilbert vibration decomposition method.
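A recurrence plot, one of the two instability indicators named above, marks which pairs of delay-embedded states of a signal lie within a distance threshold; chatter onset changes the plot's texture. A minimal sketch, with the embedding dimension, delay, and threshold quantile all chosen as illustrative assumptions:

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=1, eps_quantile=0.1):
    """Recurrence plot of a scalar series: delay-embed into R^dim and
    mark pairs of states closer than a threshold (here the 10% quantile
    of all pairwise distances). Returns a binary (m x m) matrix."""
    x = np.asarray(x, dtype=float)
    m = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + m] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    eps = np.quantile(d, eps_quantile)
    return (d <= eps).astype(int)

t = np.linspace(0, 8 * np.pi, 300)
rp = recurrence_plot(np.sin(t))   # periodic signal -> diagonal line texture
```

For a stable (periodic) cut the plot shows long diagonal lines; during chatter the lines break up, which quantities such as determinism from recurrence quantification analysis can track over time.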
Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models
International Nuclear Information System (INIS)
Cai, Caifang
2013-01-01
Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water/bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative logarithm. Following Bayesian inference, the decomposition fractions and the observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
Inter-domain Identity-based Proxy Re-encryption
Tang, Qiang; Hartel, Pieter H.; Jonker, Willem
2008-01-01
Proxy re-encryption is a cryptographic primitive developed to delegate the decryption right from one party (the delegator) to another (the delegatee). So far, no particular research efforts have been devoted to this primitive in the inter-domain identity-based setting, where the delegator and the
Geographical based situational awareness in military mobile domain
Sierksma, T.; Hoekstra, J.; Jansen, B.; Boltjes, B.; Oever, J. van den
2007-01-01
In 2006 it became clear to the Royal Netherlands Army (RNLA) that the traditional concept for exchanging data in the mobile domain, based upon hierarchical radio nets, could not offer sufficient performance for the future 'Battlefield Management System' (BMS). The C2 Support Centre, who develops the
Cost-Based Domain Filtering for Stochastic Constraint Programming
Rossi, R.; Tarim, S.A.; Hnich, B.; Prestwich, S.
2008-01-01
Cost-based filtering is a novel approach that combines techniques from Operations Research and Constraint Programming to filter from decision variable domains values that do not lead to better solutions [7]. Stochastic Constraint Programming is a framework for modeling combinatorial optimization
Mo, Yirong; Bao, Peng; Gao, Jiali
2011-04-21
An interaction energy decomposition analysis method based on the block-localized wavefunction (BLW-ED) approach is described. The first main feature of the BLW-ED method is that it combines concepts of valence bond and molecular orbital theories such that the intermediate and physically intuitive electron-localized states are variationally optimized by self-consistent field calculations. Furthermore, the block-localization scheme can be used both in wave function theory and in density functional theory, providing a useful tool to gain insights on intermolecular interactions that would otherwise be difficult to obtain using the delocalized Kohn-Sham DFT. These features allow broad applications of the BLW method to energy decomposition (BLW-ED) analysis for intermolecular interactions. In this perspective, we outline theoretical aspects of the BLW-ED method, and illustrate its applications in hydrogen-bonding and π-cation intermolecular interactions as well as metal-carbonyl complexes. Future prospects on the development of a multistate density functional theory (MSDFT) are presented, making use of block-localized electronic states as the basis configurations.
International Nuclear Information System (INIS)
Chen, Baojia; He, Zhengjia; Chen, Xuefeng; Cao, Hongrui; Cai, Gaigai; Zi, Yanyang
2011-01-01
Since machinery fault vibration signals are usually multicomponent modulation signals, how to decompose complex signals into a set of mono-components whose instantaneous frequency (IF) has physical sense has become a key issue. Local mean decomposition (LMD) is a new kind of time-frequency analysis approach which can adaptively decompose a signal into a set of product function (PF) components. In this paper, a modulation feature extraction method based on LMD is proposed. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency-modulated (FM) signal is the IF. The computed IF and IA are displayed together in the form of a time-frequency representation (TFR). Modulation features can be extracted from the spectrum analysis of the IA and IF. In order to make the IF physically meaningful, the phase-unwrapping algorithm and an IF processing method based on extrema are presented in detail along with a simulated FM signal example. Besides, the dependence of the LMD method on the signal-to-noise ratio (SNR) is also investigated by analyzing synthetic signals to which Gaussian noise is added. As a result, the recommended critical SNRs for PF decomposition and IF extraction are given according to the practical application. Successful fault diagnosis on a rolling bearing and a gear of locomotive bogies shows that LMD has better identification capacity for modulation signal processing and is very suitable for failure detection in rotating machinery.
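The IA/IF demodulation idea at the heart of the method can be illustrated with the analytic signal (LMD itself computes the envelope and FM signal by local means and smoothing rather than by the Hilbert transform, so this is a stand-in demonstration; the amplitude-modulated test tone and its parameters are assumptions):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# Amplitude-modulated tone: 50 Hz carrier, 2 Hz modulation envelope,
# mimicking a modulated machinery vibration component.
envelope = 1 + 0.5 * np.cos(2 * np.pi * 2 * t)
x = envelope * np.sin(2 * np.pi * 50 * t)

z = hilbert(x)                   # analytic signal
ia = np.abs(z)                   # instantaneous amplitude (envelope)
phase = np.unwrap(np.angle(z))   # unwrapped instantaneous phase
inst_f = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
```

Spectra of `ia` and `inst_f` then reveal the modulation frequencies (here 2 Hz), which in a fault diagnosis context correspond to defect-related sidebands.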
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Directory of Open Access Journals (Sweden)
Naveed ur Rehman
2015-05-01
Full Text Available A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding the input data, whereas standard univariate empirical mode decomposition (EMD) based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or same-indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Ee, Tang Zo; Lim, Steven; Ling, Pang Yean; Huei, Wong Kam; Chyuan, Ong Hwai
2017-04-01
An experiment was carried out to study the feasibility of a biomass-derived solid acid catalyst for the production of biodiesel using Palm Fatty Acid Distillate (PFAD). An indigenous Malaysian seaweed was selected as the biomass to be carbonized as the catalyst support. Sulfonation of the seaweed-based carbon material was carried out by thermal decomposition of ammonium sulfate, (NH4)2SO4. The effects of carbonization temperature from 200 to 600°C on the catalyst's physical and chemical properties were studied. The effect of reaction parameters on the fatty acid methyl ester (FAME) yield was studied by varying the concentration of ammonium sulfate (5.0 to 40.0 w/v%) and the thermal decomposition time (15 to 90 min). Characterization of the catalyst was carried out to study the catalyst surface morphology with a Scanning Electron Microscope (SEM), the acid density by back titration and the attached functional groups with FT-IR. Results showed that when the catalyst was sulfonated with 10.0 w/v% ammonium sulfate solution and heated to 235°C for 30 min, the highest FAME yield achieved was 23.7% at the reaction conditions of 5.0 wt.% catalyst loading, esterification time of 4 h, and methanol to PFAD molar ratio of 20:1 at 100°C reaction temperature.
Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Chunhui Zhao
2015-09-01
Full Text Available Node localization is the core problem in wireless sensor networks. It can be solved with powerful beacons, which are equipped with global positioning system devices to know their location information. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to solve the problem that the measurement matrix does not meet the restricted isometry property. Later, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the 1-sparse vector is only approximately sparse, a weighted centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and a lower requirement for the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it has great localization performance under low SNR; thus, the proposed scheme is robust.
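Recovering a 1-sparse vector is the easiest case of compressive sensing: a matched filter over the measurement matrix's columns finds the support, and a scalar least-squares fit gives the value. This sketch omits the paper's LU pre-processing and weighted centroid refinement; the random Gaussian measurement matrix, its dimensions, and the noiseless setting are assumptions for illustration.

```python
import numpy as np

def recover_one_sparse(Phi, y):
    """Recover a 1-sparse theta from y = Phi @ theta by matched
    filtering: the column most correlated with y (after column
    normalization) gives the support index; a scalar least-squares
    fit on that column gives the nonzero value."""
    corr = np.abs(Phi.T @ y) / np.linalg.norm(Phi, axis=0)
    k = int(np.argmax(corr))
    a = (Phi[:, k] @ y) / (Phi[:, k] @ Phi[:, k])
    theta = np.zeros(Phi.shape[1])
    theta[k] = a
    return theta, k

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100))   # m = 30 measurements, n = 100 grid cells
true_k = 42                            # the grid cell containing the node
y = 3.5 * Phi[:, true_k]               # noiseless 1-sparse measurement
theta, k = recover_one_sparse(Phi, y)
```

In the localization model, the index `k` corresponds to the grid cell holding the unknown node; because the recovered vector is only approximately 1-sparse with noisy data, a weighted centroid over the few largest entries refines the position estimate.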
Bao, Peng
2013-01-01
An interaction energy decomposition analysis method based on the block-localized wavefunction (BLW-ED) approach is described. The first main feature of the BLW-ED method is that it combines concepts of valence bond and molecular orbital theories such that the intermediate and physically intuitive electron-localized states are variationally optimized by self-consistent field calculations. Furthermore, the block-localization scheme can be used both in wave function theory and in density functional theory, providing a useful tool to gain insights on intermolecular interactions that would otherwise be difficult to obtain using the delocalized Kohn–Sham DFT. These features allow broad applications of the BLW method to energy decomposition (BLW-ED) analysis for intermolecular interactions. In this perspective, we outline theoretical aspects of the BLW-ED method, and illustrate its applications in hydrogen-bonding and π–cation intermolecular interactions as well as metal–carbonyl complexes. Future prospects on the development of a multistate density functional theory (MSDFT) are presented, making use of block-localized electronic states as the basis configurations. PMID:21369567
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while keeping the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, after which we obtain the apodization weights and the beamformed output without computing the matrix inverse. To do that, the QR decomposition algorithm is used, which can also be executed at low cost; therefore, the computational complexity is reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
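The key point — obtaining the MV weights without forming a matrix inverse — can be shown generically: since w ∝ R⁻¹a and R = QR′, the weights follow from one triangular solve. This is only an illustration of the inverse-free idea, not the paper's σI transformation; the random sample covariance and broadside steering vector are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16                                 # subarray size
X = rng.standard_normal((L, 200))      # 200 snapshots of array data
R_cov = X @ X.T / 200                  # sample spatial covariance (L x L)
a = np.ones(L)                         # steering vector (broadside)

# MV weights via explicit inverse: w = R^{-1} a / (a^H R^{-1} a)
w_inv = np.linalg.inv(R_cov) @ a
w_inv /= a @ w_inv

# Same weights via QR decomposition: R_cov = Q R, so R_cov w = a
# becomes R w = Q^T a, solved by back-substitution without any inverse.
Q, R = np.linalg.qr(R_cov)
w_qr = np.linalg.solve(R, Q.T @ a)
w_qr /= a @ w_qr
```

Both routes give identical apodization weights; avoiding the explicit inverse is what lets the proposed beamformer trade the O(L^3) inversion for cheaper triangular operations.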
Directory of Open Access Journals (Sweden)
Jianchang Lu
2015-04-01
Full Text Available Based on the international community's analysis of the present CO2 emissions situation, a Log Mean Divisia Index (LMDI) decomposition model is proposed in this paper, aiming to reflect the decomposition of carbon productivity. The model is designed by analyzing the factors that affect carbon productivity. China's contribution to carbon productivity is analyzed along the dimensions of influencing factors, regional structure, and industrial structure. The conclusions are that: (a) economic output, provincial carbon productivity, and energy structure are the most influential factors, which is consistent with China's current actual policy; (b) the distribution patterns of economic output, carbon productivity, and energy structure in different regions have nothing to do with the traditional Chinese sense of regional economic development patterns; (c) given regional protectionism, the actual situation of each region needs to be considered at the same time; (d) in the study of industrial structure, the contribution of industry is the most prominent factor in China's carbon productivity, while industrial restructuring has not yet been done well enough.
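For readers unfamiliar with LMDI, the additive identity it relies on can be sketched with a two-factor toy decomposition (activity × intensity, with illustrative numbers, not the paper's actual factor set): the logarithmic-mean weights make the factor effects sum exactly to the total change.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

# toy data: aggregate V = activity * intensity over two sectors
act0, act1 = np.array([100.0, 50.0]), np.array([120.0, 60.0])
int0, int1 = np.array([0.8, 1.2]), np.array([0.7, 1.1])
V0, V1 = act0 * int0, act1 * int1

L = logmean(V1, V0)
eff_act = np.sum(L * np.log(act1 / act0))   # activity effect
eff_int = np.sum(L * np.log(int1 / int0))   # intensity effect
# additive LMDI-I property: effects sum to the total change in V
```

The identity holds because L(V1, V0) · ln(V1/V0) = V1 − V0 term by term, and the log of the product splits into the sum of factor logs.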
Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach
Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil
2016-01-01
Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it makes it possible to design residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
Huang, X. Y.; Zhou, J. Q.; Wang, Z.; Deng, L. C.; Hong, S.
2017-05-01
China is now at a stage of accelerated industrialization and urbanization, with energy-intensive industries contributing a large proportion of economic growth. In this study, we examined industrial energy consumption by decomposition analysis to describe the driving factors of energy consumption in China. Based on input-output (I-O) tables from the World Input-Output Database (WIOD) website and China's energy use data from 1995 to 2011, we studied the sectoral changes in energy efficiency during the examined period. The results showed that all industries increased their energy efficiency. Energy consumption was decomposed into three factors by the logarithmic mean Divisia index (LMDI) method. The increase in production output was the leading factor driving up China's energy consumption. World Trade Organization accession and financial crises had a great impact on energy consumption. Based on these results, a series of energy policy suggestions for decision-makers is proposed.
Directory of Open Access Journals (Sweden)
Imaouchen Yacine
2015-01-01
Full Text Available To detect rolling element bearing defects, much research has focused on Motor Current Signal Analysis (MCSA) using spectral analysis and the wavelet transform. This paper presents a new approach for rolling element bearing diagnosis without slip estimation, based on the wavelet packet decomposition (WPD) and the Hilbert transform. Specifically, the Hilbert transform first extracts the envelope of the motor current signal, which contains bearing fault-related frequency information. Subsequently, the envelope signal is adaptively decomposed into a number of frequency bands by the WPD algorithm. Two criteria based on energy and correlation analyses have been investigated to automate the frequency band selection. Experimental studies have confirmed that the proposed approach is effective in diagnosing rolling element bearing faults for improved induction motor condition monitoring and damage assessment.
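The Hilbert-envelope step of such an approach can be sketched with a synthetic amplitude-modulated signal, where a 30 Hz modulation stands in for a bearing fault frequency riding on a carrier; the WPD band splitting and selection criteria are omitted here, so this is only the envelope-extraction idea:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000                     # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
# 200 Hz carrier amplitude-modulated by a 30 Hz "fault" frequency
sig = (1 + 0.5 * np.cos(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 200 * t)

env = np.abs(hilbert(sig))    # envelope carries the modulation, not the carrier
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
peak = freqs[np.argmax(spec)]  # dominant envelope frequency, near 30 Hz
```

The fault frequency is invisible in the raw spectrum's low band (it appears only as sidebands around the carrier) but dominates the envelope spectrum, which is why envelope analysis precedes band decomposition.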
Multimode fiber modal decomposition based on hybrid genetic global optimization algorithm
Li, Lei; Leng, Jinyong; Zhou, Pu; Chen, Jinbao
2017-10-01
Numerical modal decomposition (MD) is an effective approach to reveal modal characteristics in high power fiber lasers. The main challenge is to find a suitable multi-dimensional optimization algorithm to reveal the exact superposition of eigenmodes, especially for multimode fiber. A novel hybrid genetic global optimization algorithm, named GA-SPGD, which combines the advantages of the genetic algorithm (GA) and the stochastic parallel gradient descent (SPGD) algorithm, is proposed to reduce the possibility of falling into local minima caused by sensitivity to initial values. First, GA is applied to search for a rough global optimum position based on the near-/far-field intensity distribution with high accuracy. Starting from those initial values, the SPGD algorithm is then used to find the exact optimum based on the near-field intensity distribution with fast convergence speed. Numerical simulations validate the feasibility and reliability of the method.
A Novel Ship Detection Method Using Model-Based Decomposition as a Polarimetric Band-Stop Filter
Sugimoto, Mitsunobu; Marino, Armando; Ouchi, Kazuo; Nakamura, Yasuhiro
2013-08-01
In this study, a novel ship detection method using model-based decomposition is suggested. Model-based decomposition is one of the popular analytical methods for POLSAR (polarimetric SAR) data. Since most of the scattering from the sea is surface scattering, the model-based decomposition can be used as a band-stop filter to block out the surface scattering component. As a result, ships, which generally involve a more complex scattering process, can be detected. Advanced Land Observing Satellite Phased Array L-band SAR (ALOS-PALSAR) polarimetric data and available reference data for validation are used in the study. The result was processed using an adaptive CFAR (constant false alarm rate) technique and compared with the reference data.
Collaboration Research: An Optimization Framework based on Domain Decomposition and Model Reduction
2009-02-01
geometry variabilities on the work per cycle (WPC), which is defined as the integral of blade motion times the lift force over one unsteady cycle. The ... CFD model and a reduced model of dimension 201. Figure 3 shows the resulting probability density functions of WPC for the first blade. Table 1 shows ... distribution of WPC accurately. To further verify the quality of the reduced model, the Kolmogorov-Smirnov method is applied to test whether the reduced
Directory of Open Access Journals (Sweden)
Jinlu Sheng
2016-07-01
Full Text Available To effectively extract the typical features of a bearing, a new method combining local mean decomposition, Shannon entropy, and an improved kernel principal component analysis model is proposed. First, features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the characteristics. The kernel principal component analysis is improved by a weight factor. The extracted features were input into a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, by which the bearing running state was identified. Both test and actual cases were analyzed.
Directory of Open Access Journals (Sweden)
Yizhou Yang
2017-01-01
Full Text Available To diagnose mechanical faults of rotor-bearing-casing system by analyzing its casing vibration signal, this paper proposes a training procedure of a fault classifier based on variational mode decomposition (VMD), local linear embedding (LLE), and support vector machine (SVM). VMD is used first to decompose the casing signal into several modes, which are subsignals usually modulated by fault frequencies. Vibrational features are extracted from both VMD subsignals and the original one. LLE is employed here to reduce the dimensionality of these extracted features and make the samples more separable. Then low-dimensional data sets are used to train the multiclass SVM whose accuracy is tested by classifying the test samples. When the parameters of LLE and SVM are well optimized, this proposed method performs well on experimental data, showing its capacity of diagnosing casing vibration faults.
Fringe-projection profilometry based on two-dimensional empirical mode decomposition.
Zheng, Suzhen; Cao, Yiping
2013-11-01
In 3D shape measurement, because deformed fringes often contain low-frequency information degraded by random noise and background intensity information, a new fringe-projection profilometry is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation and retrieve the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.
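The Hilbert-transform phase-retrieval step referred to above can be sketched in 1D with a synthetic fringe whose background has already been removed (the 2D-EMD background-removal stage is omitted; the carrier frequency and phase function below are illustrative assumptions):

```python
import numpy as np
from scipy.signal import hilbert

x = np.linspace(0, 1, 1024)
phi = 2.0 * np.sin(2 * np.pi * x)        # object phase to recover
fringe = np.cos(2 * np.pi * 40 * x + phi)  # background-free carrier fringe

analytic = hilbert(fringe)               # cos(theta) -> ~exp(i*theta)
total_phase = np.unwrap(np.angle(analytic))
recovered = total_phase - 2 * np.pi * 40 * x  # subtract the known carrier
```

Away from the signal edges (where the discrete Hilbert transform is least accurate), `recovered` tracks `phi` up to a constant offset.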
International Nuclear Information System (INIS)
Slanina, Z.
1987-01-01
Water vapor is treated as an equilibrium mixture of water clusters (H2O)i using quantum-chemical evaluation of the equilibrium constants of water association. The model is adapted to the conditions of atmospheric humidity, and a decomposition algorithm is suggested that uses the temperature and mass concentration of water as input information; it is used for a demonstration evaluation of the water oligomer populations in the Earth's atmosphere. An upper limit on the populations is set based on the water content of saturated aqueous vapor. It is shown that the cluster population in saturated water vapor, as well as in the Earth's atmosphere for a typical temperature/humidity profile, increases with increasing temperature.
Directory of Open Access Journals (Sweden)
Guohui Li
2017-01-01
Full Text Available Aiming at the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and clustering analysis is proposed. Firstly, the original data is decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and a residual. Secondly, fuzzy c-means is used to cluster the decomposed components, and then a deep belief network (DBN) is used to predict them. Finally, the reconstructed IMFs and residual give the final prediction result. Six prediction models are compared: the DBN prediction model, the EMD-DBN, EEMD-DBN, CEEMD-DBN, and ESMD-DBN prediction models, and the model proposed in this paper. The same sunspot time series is predicted with all six models. The experimental results show that the proposed model has better prediction accuracy and smaller error.
Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei
2018-03-01
A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. First, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal are obtained by the GWO algorithm, using the maximum weighted kurtosis index as the objective function. Finally, fault features are extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
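A weighted kurtosis index of the kind described above can be sketched as follows. The product form used here (correlation magnitude × kurtosis) is an illustrative assumption; the paper's exact weighting may differ, but the intent is the same: reward components that are both impulsive and strongly related to the raw signal.

```python
import numpy as np

def kurtosis(x):
    """Normalized fourth central moment; impulsive signals score high."""
    x = x - np.mean(x)
    return np.mean(x**4) / np.mean(x**2) ** 2

def weighted_kurtosis_index(component, raw):
    """Kurtosis of a decomposed component weighted by the magnitude of its
    correlation with the raw signal (illustrative form, not the paper's)."""
    rho = np.corrcoef(component, raw)[0, 1]
    return abs(rho) * kurtosis(component)

# toy comparison: an impulse train (fault-like) vs. a smooth sine component
imp = np.zeros(1000)
imp[::100] = 10.0
sine = np.sin(2 * np.pi * 5 * np.arange(1000) / 1000.0)
raw = imp + sine
wki_imp = weighted_kurtosis_index(imp, raw)    # high: impulsive and correlated
wki_sine = weighted_kurtosis_index(sine, raw)  # low: smooth component
```

An optimizer such as GWO would then search the (bandwidth threshold, B-spline order) space to maximize this index over the resulting sensitive IMF.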
Barma, Shovan; Chen, Bo-Wei; Ji, Wen; Rho, Seungmin; Chou, Chih-Hung; Wang, Jhing-Fa
2016-08-01
This study presents a precise way to detect the third (S3) heart sound, which is recognized as an important indication of heart failure, based on nonlinear signal decomposition and time-frequency localization. Detection of the S3 is obscured by its significantly low energy and frequency. Moreover, a detected S3 may be mistaken for an abnormal second heart sound with a fixed split, an issue not addressed in the literature. To detect such an S3, the Hilbert vibration decomposition method is applied to decompose the heart sound into a certain number of subcomponents while keeping the phase information intact. Thus, the time information of all of the decomposed components is unchanged, which further expedites the identification and localization of any section of the signal. Next, the proposed localization step is applied to the decomposed subcomponents by using the smoothed pseudo Wigner-Ville distribution followed by the reassignment method. Finally, based on the positional information, the S3 is distinguished and confirmed by measuring the time delay between the S2 and S3. In total, 82 sets of cardiac cycles collected from different databases, including the Texas Heart Institute database, are examined for evaluation of the proposed method. The result analysis shows that the proposed method can detect the S3 correctly when the normalized temporal energy of the S3 is larger than 0.16 and its frequency is larger than 34 Hz. In a performance analysis, the proposed method demonstrates an S3 detection accuracy rate as high as 93.9%, significantly higher than that of the other methods. Such findings prove the robustness of the proposed idea for detecting the substantially low-energy S3.
Guo, Wei; Tse, Peter W.
2013-01-01
Today, remote machine condition monitoring is popular due to the continuous advancement of wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmission to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, in particular the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and later reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
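A relative root-mean-square error index of the kind used above for parameter selection can be sketched generically (the paper's exact definition may differ in detail; the EEMD decomposition itself is omitted and replaced by noisy candidate reconstructions):

```python
import numpy as np

def relative_rmse(signal, reconstruction):
    """RMS of the residual divided by the RMS of the raw signal.
    Lower values indicate that the candidate better represents the signal."""
    signal = np.asarray(signal, dtype=float)
    resid = signal - np.asarray(reconstruction, dtype=float)
    return np.sqrt(np.mean(resid**2)) / np.sqrt(np.mean(signal**2))

# toy selection: score candidates contaminated by different noise levels
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 10 * t)
rng = np.random.default_rng(0)
scores = {amp: relative_rmse(x, x + amp * rng.standard_normal(500))
          for amp in (0.01, 0.1, 0.5)}
best = min(scores, key=scores.get)   # the least-degraded candidate wins
```

In the paper's setting, the candidates would be reconstructions obtained with different added-white-noise levels in EEMD, and the level minimizing this index is kept.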
Agent-based Simulation of the Maritime Domain
Directory of Open Access Journals (Sweden)
O. Vaněk
2010-01-01
Full Text Available In this paper, a multi-agent based simulation platform is introduced that focuses on legitimate and illegitimate aspects of maritime traffic, mainly on intercontinental transport through piracy afflicted areas. The extensible architecture presented here comprises several modules controlling the simulation and the life-cycle of the agents, analyzing the simulation output and visualizing the entire simulated domain. The simulation control module is initialized by various configuration scenarios to simulate various real-world situations, such as a pirate ambush, coordinated transit through a transport corridor, or coastal fishing and local traffic. The environmental model provides a rich set of inputs for agents that use the geo-spatial data and the vessel operational characteristics for their reasoning. The agent behavior model based on finite state machines together with planning algorithms allows complex expression of agent behavior, so the resulting simulation output can serve as a substitution for real world data from the maritime domain.
Novel Orthogonal Signal Based Decomposition of Digital Signals: Application to Sensor Fusion
Directory of Open Access Journals (Sweden)
Abdul Faheem Mohed
2010-03-01
Full Text Available In this research paper, a novel orthogonal decomposition of an arbitrary “digital” signal is proposed. An approach to attack the problem of wireless sensor fusion using digital signal processing techniques is discussed. The merits of the proposed orthogonal decomposition are briefly discussed. Simulation results are presented to illustrate the effectiveness of the proposed method.
Primal Recovery from Consensus-Based Dual Decomposition for Distributed Convex Optimization
Simonetto, A.; Jamali-Rad, H.
2015-01-01
Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel
Restoration in multi-domain GMPLS-based networks
DEFF Research Database (Denmark)
Manolova, Anna; Ruepp, Sarah Renée; Dittmann, Lars
2011-01-01
In this paper, we evaluate the efficiency of using restoration mechanisms in a dynamic multi-domain GMPLS network. Major challenges and solutions are introduced and two well-known restoration schemes (End-to-End and Local-to-End) are evaluated. Additionally, new restoration mechanisms are introduced: one based on the position of a failed link, called Location-Based, and another based on minimizing the additional resources consumed during restoration, called Shortest-New. A complete set of simulations in different network scenarios shows where each mechanism is more efficient in terms such as...
Specification-Based Testing Via Domain Specific Language
Sroka, Michal; Nagy, Roman; Fisch, Dominik
2014-12-01
The article presents tCF (testCaseFramework) - a domain specific language with corresponding toolchain for specification-based software testing of embedded software. tCF is designed for efficient preparation of maintainable and intelligible test cases and for testing process automation, as it allows to generate platform specific test cases for various testing levels. The article describes the essential parts of the tCF meta-model and the applied concept of platform specific test cases generators.
Domain-Based Storage Protection (DBSP) in Public Infrastructure Clouds
Paladi, Nicolae; Gehrmann, Christian; Morenius, Fredric
2013-01-01
Confidentiality and integrity of data in Infrastructure-as-a-Service (IaaS) environments increase in relevance as adoption of IaaS advances towards maturity. While current solutions assume a high degree of trust in IaaS provider staff and infrastructure management processes, earlier incidents have demonstrated that neither are impeccable. In this paper we introduce Domain-Based Storage Protection (DBSP) a data confidentiality and integrity protection mechanism for IaaS env...
International Nuclear Information System (INIS)
Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang
2015-01-01
As an advanced measurement technique that is non-radiative, non-intrusive, rapid in response, and low in cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm has low spatial resolution due to the inherent 'soft field' effect and 'ill-posed solution' problems; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed, in which every ET measurement is decomposed into two independent new data based on the positive and negative sensing areas of the measurement. Consequently, the total number of measurements is extended to twice the number of the original data, effectively reducing the 'ill-posed solution' problem. On the other hand, an index to measure the 'soft field' effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of various units (pixels) for any ET measurement, and can efficiently reduce the 'soft field' effect of the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validates the proposed algorithm in terms of real-time performance and the improvement in spatial resolution.
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2017-12-01
Full Text Available Among emergent applications of digital watermarking are copyright protection and proof of ownership. Recently, Makbol and Khoo (2013) proposed for these applications a new robust blind image watermarking scheme based on the redundant discrete wavelet transform (RDWT) and the singular value decomposition (SVD). In this paper, we present two ambiguity attacks on this algorithm, showing that it fails when used for robustness applications such as owner identification, proof of ownership, and transaction tracking. Keywords: Ambiguity attack, Image watermarking, Singular value decomposition, Redundant discrete wavelet transform
Directory of Open Access Journals (Sweden)
Qinghua Xie
2017-01-01
Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations of previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself, without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of the estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there
Incerti, Guido; Bonanomi, Giuliano; Sarker, Tushar Chandra; Giannino, Francesco; Cartenì, Fabrizio; Peressotti, Alessandro; Spaccini, Riccardo; Piccolo, Alessandro; Mazzoleni, Stefano
2017-04-01
Modelling organic matter decomposition is fundamental to predict biogeochemical cycling in terrestrial ecosystems. Current models use C/N or Lignin/N ratios to describe susceptibility to decomposition, or implement separate C pools decaying with different rates, disregarding biomolecular transformations and interactions and their effect on decomposition dynamics. We present a new process-based model of decomposition that includes a description of biomolecular dynamics obtained by 13C-CPMAS NMR spectroscopy. Baseline decay rates for relevant molecular classes and intermolecular protection were calibrated by best fitting of experimental data from leaves of 20 plant species decomposing for 180 days in controlled optimal conditions. The model was validated against field data from leaves of 32 plant species decomposing for 1-year at four sites in Mediterranean ecosystems. Our innovative approach accurately predicted decomposition of a wide range of litters across different climates. Simulations correctly reproduced mass loss data and variations of selected molecular classes both in controlled conditions and in the field, across different plant molecular compositions and environmental conditions. Prediction accuracy emerged from the species-specific partitioning of molecular types and from the representation of intermolecular interactions. The ongoing model implementation and calibration are oriented at representing organic matter dynamics in soil, including processes of interaction between mineral and organic soil fractions as a function of soil texture, physical aggregation of soil organic particles, and physical protection of soil organic matter as a function of aggregate size and abundance. Prospectively, our model shall satisfactorily reproduce C sequestration as resulting from experimental data of soil amended with a range of organic materials with different biomolecular quality, ranging from biochar to crop residues. Further application is also planned based on
Ho Huu, V.; Hartjes, S.; Visser, H.G.; Curran, R.; Gherman, B.; Porumbel, I.
2018-01-01
Recently, a multi-objective evolutionary algorithm based on decomposition (MOEA/D) has emerged as a potential method for solving multi-objective optimization problems (MOPs) and attracted much attention from researchers. In MOEA/D, the MOPs are decomposed into a number of scalar optimization
Czech Academy of Sciences Publication Activity Database
Asadi, M.; Asadi, Z.; Savaripoor, N.; Dušek, Michal; Eigner, Václav; Shorkaei, M.R.; Sedaghat, M.
2015-01-01
Roč. 136, Feb (2015), 625-634 ISSN 1386-1425 R&D Projects: GA ČR(CZ) GAP204/11/0809 Institutional support: RVO:68378271 Keywords : Oxovanadium(IV) complexes * Schiff base * Kinetics of thermal decomposition * Electrochemistry Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 2.653, year: 2015
Parallel QR Decomposition for Electromagnetic Scattering Problems
National Research Council Canada - National Science Library
Boleng, Jeff
1997-01-01
This report introduces a new parallel QR decomposition algorithm. Test results are presented for several problem sizes, numbers of processors, and data from the electromagnetic scattering problem domain...
Directory of Open Access Journals (Sweden)
Kaijian He
2016-11-01
Full Text Available The electricity market has experienced an increasing level of deregulation and reform over the years, and with it an increasing level of electricity price fluctuation, uncertainty, and risk exposure in the marketplace. Traditional risk measurement models based on the homogeneous and efficient market assumption no longer suffice in the face of increasing accuracy and reliability requirements. In this paper, we propose a new Empirical Mode Decomposition (EMD)-based Value at Risk (VaR) model to estimate the downside risk measure in the electricity market. The proposed model investigates and models the inherent multiscale market risk structure. The EMD model is introduced to decompose the electricity time series into several Intrinsic Mode Functions (IMFs) with distinct multiscale characteristics. The Exponential Weighted Moving Average (EWMA) model is used to model the individual risk factors across different scales. Experimental results using different models in the Australian electricity markets show that the EMD-EWMA model based on Student's t distribution achieves the best performance, and outperforms the benchmark EWMA model significantly in terms of model reliability and predictive accuracy.
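The EWMA building block of such a model can be sketched as follows. This is the standard RiskMetrics-style variance recursion applied to a single return series under a normal assumption; in the paper's scheme the same recursion would be applied per IMF scale, and a Student's t quantile would replace the normal one, so treat this as a minimal single-scale sketch:

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """RiskMetrics-style EWMA recursion:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2."""
    s2 = float(returns[0]) ** 2      # seed with the first squared return
    for r in returns[1:]:
        s2 = lam * s2 + (1.0 - lam) * float(r) ** 2
    return s2

def var_95(sigma):
    """One-sided 95% VaR under a zero-mean normal assumption."""
    return 1.645 * sigma

returns = np.full(50, 0.01)          # constant toy return series
risk = var_95(np.sqrt(ewma_variance(returns)))
```

With a constant return series the recursion is a fixed point, so the forecast volatility equals the return magnitude and the 95% VaR is 1.645 times it.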
Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.
Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko
2017-07-01
Emotions modulate ECG signals and might therefore affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier in 10-fold cross validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation with other classifiers and with variation in ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.
Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing
2012-01-01
The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing arterial stiffness and systemic resistance suffer from nonstationarity or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and BP on cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the arterial stiffness and systemic resistance of a pig's cardiovascular system via ECG and blood pressure (BP). The experiment simulated a sequence of continuous blood pressure changes, from a steady condition to high blood pressure by clamping the artery, and the inverse by relaxing the artery. As a hypothesis, the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, EEMD-based RI, and the phase shift between ECG and BP on cardiac oscillation. The two assessment results demonstrate the merits of EEMD for signal analysis.
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-01
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
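The degeneracy-based compression step described above can be sketched as follows: truncate the SVD of the influence matrix, solve in the compressed space, and back-project. This is a least-squares toy, not the paper's constrained LP; the rank-2 influence matrix and dimensions are invented for illustration.

```python
import numpy as np

def svd_compressed_solve(D, d, rank):
    """Exploit (near-)degeneracy of an influence matrix D: truncate its
    SVD, solve the least-squares problem in the compressed space, and
    back-project to the full beam-weight vector."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    Ur, sr, Vtr = U[:, :rank], s[:rank], Vt[:rank]
    # Compressed system: diag(sr) @ Vtr @ w ~ Ur.T @ d
    w, *_ = np.linalg.lstsq(np.diag(sr) @ Vtr, Ur.T @ d, rcond=None)
    return w

# Rank-2 toy influence matrix: 20 voxels, 8 beams
rng = np.random.default_rng(0)
D = rng.random((20, 2)) @ rng.random((2, 8))
w_true = rng.random(8)
d = D @ w_true                       # target dose
w_hat = svd_compressed_solve(D, d, rank=2)
```

Because the toy matrix is exactly rank 2, the compressed solve reproduces the target dose while working in a 2-dimensional space instead of 8.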
Ship Radiated Noise Recognition Using Resonance-Based Sparse Signal Decomposition
Directory of Open Access Journals (Sweden)
Jiaquan Yan
2017-01-01
Full Text Available Under the complex oceanic environment, robust and effective feature extraction is the key issue of ship radiated noise recognition. Since traditional feature extraction methods are susceptible to the inevitable environmental noise, the type of vessel, and the speed of the ship, the recognition accuracy degrades significantly. Hence, we propose a robust time-frequency analysis method which combines resonance-based sparse signal decomposition (RSSD) and Hilbert marginal spectrum (HMS) analysis. First, the observed signals are decomposed into a high resonance component, a low resonance component, and a residual component by RSSD, a nonlinear signal analysis method based not on frequency or scale but on resonance. The high resonance component consists of multiple simultaneous sustained oscillations, the low resonance component of nonoscillatory transients, and the residual component of white Gaussian noise. According to the low-frequency periodic oscillatory characteristic of ship radiated noise, the high resonance component is the purified ship radiated noise; RSSD is well suited to noise suppression for low-frequency oscillation signals. Second, the HMS of the high resonance component is extracted by the Hilbert-Huang transform (HHT) as the feature vector. Finally, a support vector machine (SVM) is adopted as the classifier. Real audio recordings are employed in the experiments under different signal-to-noise ratios (SNRs). The experimental results indicate that the proposed method has better recognition performance than the traditional method under different SNRs.
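The Hilbert marginal spectrum feature used above can be sketched directly: accumulate instantaneous amplitude against instantaneous frequency from the analytic signal. This is a bare-bones sketch applied to a raw tone rather than to RSSD output; the sampling rate and bin count are arbitrary choices.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_marginal_spectrum(x, fs, nbins=64):
    """Marginal of the Hilbert time-frequency distribution over time:
    histogram of instantaneous frequency weighted by instantaneous
    amplitude."""
    z = hilbert(x)
    amp = np.abs(z)[:-1]
    phase = np.unwrap(np.angle(z))
    inst_f = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency
    edges = np.linspace(0.0, fs / 2.0, nbins + 1)
    hms, _ = np.histogram(inst_f, bins=edges, weights=amp)
    return edges[:-1], hms

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
freqs, hms = hilbert_marginal_spectrum(np.sin(2 * np.pi * 50 * t), fs)
peak_freq = freqs[np.argmax(hms)]    # energy concentrates near 50 Hz
```

In the paper this spectrum, computed on the high resonance component, forms the feature vector fed to the SVM.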
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition
Directory of Open Access Journals (Sweden)
Chulhee Park
2016-05-01
Full Text Available A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component from each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
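The core decomposition idea, subtracting an estimated NIR contribution from each RGB channel using the N channel, can be sketched in a few lines. The per-channel leakage fractions here are invented numbers; the paper estimates the spectral mixing from the sensor's measured characteristics.

```python
import numpy as np

def remove_nir(rgbn, leak):
    """Subtract the NIR contribution from each RGB channel; leak[c] is the
    (assumed known) fraction of the N-channel signal leaking into channel c."""
    rgb, nir = rgbn[..., :3], rgbn[..., 3:]
    return np.clip(rgb - nir * np.asarray(leak), 0.0, None)

pixel = np.array([0.5, 0.5, 0.5, 0.2])            # R, G, B, N
restored = remove_nir(pixel, leak=[0.5, 0.25, 0.1])
```

The same call works unchanged on a whole H x W x 4 image thanks to broadcasting.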
Jiang, Shouyong; Yang, Shengxiang
2016-02-01
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak, or disconnected regions, which significantly degrade the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on existing benchmark MOPs and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
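The "decomposition" in MOEA/D refers to scalarizing a MOP into many single-objective subproblems, one per weight vector. A common choice, sketched below for two objectives, is the Tchebycheff aggregation; the uniform weight grid is the standard construction, not anything specific to this paper's variant.

```python
import numpy as np

def tchebycheff(f, w, z_star):
    """Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i - z*_i|,
    where z* is the ideal (reference) point."""
    return np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z_star)))

def uniform_weights(n):
    """n evenly spread weight vectors for a 2-objective problem."""
    w1 = np.linspace(0.0, 1.0, n)
    return np.column_stack([w1, 1.0 - w1])

W = uniform_weights(5)
g = tchebycheff([1.0, 2.0], [0.5, 0.5], [0.0, 0.0])   # worst weighted deviation
```

Each weight vector defines one subproblem; minimizing g for all of them pushes the population toward a spread of Pareto-optimal points.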
Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview
Han, G.; Lin, B.; Xu, Z.
2017-03-01
The electrocardiogram (ECG) is a weak, nonlinear and non-stationary signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising, though not perfect, method for processing nonlinear and non-stationary signals like the ECG. Combining EMD with other algorithms is a good way to improve noise cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.
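The EMD-based denoising schemes surveyed here share one reconstruction pattern: threshold the noise-dominated IMFs and sum everything back. Given IMFs from any EMD routine, that step can be sketched as follows; the toy IMFs and the threshold value are illustrative assumptions.

```python
import numpy as np

def soft_threshold(c, thr):
    """Shrink values toward zero by thr; small values are zeroed."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def denoise_from_imfs(imfs, noisy_idx, thr):
    """Partial reconstruction: soft-threshold the noise-dominated IMFs
    (typically the first, highest-frequency ones) and sum all modes."""
    out = np.zeros_like(imfs[0], dtype=float)
    for k, c in enumerate(imfs):
        out += soft_threshold(c, thr) if k in noisy_idx else c
    return out

imfs = [np.array([0.5, -0.5, 0.3]),   # high-frequency, noise-dominated
        np.array([3.0, 3.0, 3.0])]    # low-frequency, signal-dominated
clean = denoise_from_imfs(imfs, noisy_idx={0}, thr=1.0)
```

Baseline wander correction is the mirror image: drop or detrend the last (slowest) IMFs instead of thresholding the first ones.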
Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene
2013-06-01
Dermatoglyphic asymmetry and diversity traits from a large number of twins (MZ and DZ) were analyzed based on principal factors to evaluate genetic effects and common familial environmental influences on twin data by means of maximum likelihood-based variance decomposition analysis. The sample consists of monozygotic (MZ) twins of both sexes (102 male pairs and 138 female pairs) and 120 pairs of dizygotic (DZ) female twins. All asymmetry (DA and FA) and diversity dermatoglyphic traits were clearly separated into factors. These corroborate earlier studies in different ethnic populations, indicating that a common biological validity of the underlying component structure of dermatoglyphic characters perhaps exists. Our heritability results in twins clearly showed that DA_F2 is inherited mostly in a dominant mode (28.0%) while FA_F1 is additive (60.7%), but no significant sex difference was observed for these factors. Inheritance is also very prominent in diversity Factor 1, which exactly corroborates our previous findings. The present results are similar to earlier results on finger ridge count diversity in twin data, which suggested that finger ridge count diversity is under genetic control.
A new approach for crude oil price analysis based on empirical mode decomposition
International Nuclear Information System (INIS)
The importance of understanding the underlying characteristics of international crude oil price movements attracts much attention from academic researchers and business practitioners. Due to the intrinsic complexity of the oil market, however, most approaches fail to produce consistently good results. Empirical Mode Decomposition (EMD), recently proposed by Huang et al., appears to be a novel data analysis method for nonlinear and non-stationary time series. By decomposing a time series into a small number of independent, concretely interpretable intrinsic modes based on scale separation, EMD explains the generation of time series data from a novel perspective. Ensemble EMD (EEMD) is a substantial improvement of EMD which can better separate the scales naturally by adding white noise series to the original time series and then treating the ensemble averages as the true intrinsic modes. In this paper, we extend EEMD to crude oil price analysis. First, three crude oil price series with different time ranges and frequencies are decomposed into several independent intrinsic modes, from high to low frequency. Second, the intrinsic modes are composed into a fluctuating process, a slowly varying part and a trend based on fine-to-coarse reconstruction. The economic meanings of the three components are identified as short-term fluctuations caused by normal supply-demand disequilibrium or other market activities, the effect of the shock of a significant event, and a long-term trend. Finally, EEMD is shown to be a vital technique for crude oil price analysis. (author)
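The ensemble mechanism of EEMD, decompose many white-noise-perturbed copies and average so the added noise cancels, can be illustrated with a deliberately simplified sketch. A moving-average two-scale split stands in for EMD sifting (an assumption made for brevity); only the noise-ensemble averaging is the EEMD idea itself.

```python
import numpy as np

def two_scale_split(x, win=11):
    """Stand-in for EMD sifting: a moving average separates a slow part
    from a fast remainder (real EEMD extracts IMFs by sifting)."""
    pad = win // 2
    slow = np.convolve(np.pad(x, pad, mode="edge"),
                       np.ones(win) / win, mode="valid")
    return x - slow, slow

def ensemble_average(x, n_ens=50, noise_std=0.1, seed=0):
    """EEMD idea: decompose many noise-perturbed copies of the series
    and average the components, so the added white noise cancels."""
    rng = np.random.default_rng(seed)
    fast = np.zeros_like(x)
    slow = np.zeros_like(x)
    for _ in range(n_ens):
        f, s = two_scale_split(x + rng.normal(0.0, noise_std, x.size))
        fast += f
        slow += s
    return fast / n_ens, slow / n_ens

x = np.linspace(0.0, 1.0, 100)        # a pure slow "trend" component
fast, slow = ensemble_average(x)
```

Averaging over the ensemble recovers the original series almost exactly even though every individual decomposition saw a noisy copy.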
Energy Technology Data Exchange (ETDEWEB)
Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R. [Pacific Northwest National Lab., Richland, WA (United States); Kim, B.C.; Gavaskar, A.R. [Battelle Columbus Div., OH (United States)
1996-02-01
Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCBs), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.
Directory of Open Access Journals (Sweden)
Yuqi Dong
2016-12-01
Full Text Available Accurate short-term electrical load forecasting plays a pivotal role in the national economy and people’s livelihood through providing effective future plans and ensuring a reliable supply of sustainable electricity. Although considerable work has been done to select suitable models and optimize the model parameters to forecast the short-term electrical load, few models are built based on the characteristics of time series, which will have a great impact on the forecasting accuracy. For that reason, this paper proposes a hybrid model based on data decomposition considering periodicity, trend and randomness of the original electrical load time series data. Through preprocessing and analyzing the original time series, the generalized regression neural network optimized by genetic algorithm is used to forecast the short-term electrical load. The experimental results demonstrate that the proposed hybrid model can not only achieve a good fitting ability, but it can also approximate the actual values when dealing with non-linear time series data with periodicity, trend and randomness.
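The decomposition of a load series into periodicity, trend and randomness described above can be sketched with a naive additive split; this is a generic classical decomposition, not the paper's specific preprocessing, and the daily period of 24 and the synthetic series are assumptions.

```python
import numpy as np

def decompose_series(x, period):
    """Naive additive split into trend (moving average), a periodic part
    (mean cycle of the detrended series) and a random remainder."""
    n = x.size
    pad = period // 2
    trend = np.convolve(np.pad(x, pad, mode="edge"),
                        np.ones(period) / period, mode="valid")[:n]
    detrended = x - trend
    cycle = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(cycle, n // period + 1)[:n]
    return trend, seasonal, x - trend - seasonal

# Synthetic hourly "load": slow trend + daily cycle + noise
rng = np.random.default_rng(1)
t = np.arange(240)
x = 0.01 * t + np.sin(2 * np.pi * t / 24) + 0.05 * rng.normal(size=240)
trend, seasonal, resid = decompose_series(x, period=24)
```

In the paper's hybrid model each such component would be handled on its own terms before the GA-tuned GRNN produces the forecast.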
International Nuclear Information System (INIS)
Hu Xintao; Zhu Jianxin; Ding Qiong
2011-01-01
Highlights: → We study the environmental impacts of two remediation technologies, Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD). → Combined midpoint/damage approaches were calculated for the two technologies. → The results showed that the major environmental impacts arose from energy consumption. → BCD has a lower environmental impact than IHTI in terms of single score. - Abstract: Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. Dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB contaminated sites, where Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) were selected as representatives of incineration and non-incineration. A combined midpoint/damage approach was adopted by using SimaPro 7.2 and IMPACT2002+ to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption from the five subsystems of the IHTI and BCD technologies, respectively. It was found that the major environmental impacts through the whole lifecycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts concerning carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication and global warming. In the BCD process, the rotary kiln reactor subsystem presents the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity and respiratory inorganics. In the view of midpoint impacts, the characterization values for global warming from IHTI and
Domain-based small molecule binding site annotation
Directory of Open Access Journals (Sweden)
Dumontier Michel
2006-03-01
Full Text Available Abstract Background Accurate small molecule binding site information for a protein can facilitate studies in drug docking, drug discovery and function prediction, but small molecule binding site protein sequence annotation is sparse. The Small Molecule Interaction Database (SMID), a database of protein domain-small molecule interactions, was created using structural data from the Protein Data Bank (PDB). More importantly, it provides a means to predict small molecule binding sites on proteins with a known or unknown structure and, unlike prior approaches, removes large numbers of false positive hits arising from transitive alignment errors, non-biologically significant small molecules and crystallographic conditions that overpredict ion binding sites. Description Using a set of co-crystallized protein-small molecule structures as a starting point, SMID interactions were generated by identifying protein domains that bind to small molecules, using NCBI's Reverse Position Specific BLAST (RPS-BLAST) algorithm. SMID records are available for viewing at http://smid.blueprint.org. The SMID-BLAST tool provides accurate transitive annotation of small-molecule binding sites for proteins not found in the PDB. Given a protein sequence, SMID-BLAST identifies domains using RPS-BLAST and then lists potential small molecule ligands based on SMID records, as well as their aligned binding sites. A heuristic ligand score is calculated based on E-value, ligand residue identity and domain entropy to assign a level of confidence to the hits found. SMID-BLAST predictions were validated against a set of 793 experimental small molecule interactions from the PDB, of which 472 (60%) identically matched the experimental small molecule, and of these, 344 had greater than 80% of the binding site residues correctly identified. Further, we estimate that 45% of predictions which were not observed in the PDB validation set may be true positives. Conclusion By
Energy Technology Data Exchange (ETDEWEB)
Dilek, Deniz [Faculty of Education, Secondary Science and Mathematics Education, Canakkale Onsekiz Mart University, 17100 Canakkale (Turkey); Dogan, Fatih, E-mail: fatihdogan@comu.edu.tr [Faculty of Education, Secondary Science and Mathematics Education, Canakkale Onsekiz Mart University, 17100 Canakkale (Turkey); Bilici, Ali, E-mail: alibilici66@hotmail.com [Control Laboratory of Agricultural and Forestry Ministry, 34153 Istanbul (Turkey); Kaya, Ismet [Department of Chemistry, Faculty of Science and Arts, Canakkale Onsekiz Mart University, Canakkale (Turkey)
2011-05-10
Research highlights: → In this study, the synthesis and thermal characterization of a new functional polyphenol are reported. → Non-isothermal methods were used to evaluate the thermal decomposition kinetics of the resulting polymer. → Thermal decomposition of the polymer follows a diffusion-type kinetic model. → It is noted that this kinetic model is quite rare in polymer degradation studies. - Abstract: Here, the facile synthesis and thermal characterization of a novel polyphenol containing a Schiff base pendant group, poly(4-{[(4-hydroxyphenyl)imino]methyl}benzene-1,2,3-triol) [PHPIMB], are reported. UV-vis, FT-IR, ¹H NMR, ¹³C NMR, GPC, TG/DTG-DTA, CV (cyclic voltammetry) and solid state conductivity measurements were utilized to characterize the obtained monomer and polymer. The spectral analysis results showed that PHPIMB was composed of polyphenol main chains containing Schiff base pendant side groups. Thermal properties of the polymer were investigated by thermogravimetric analysis under a nitrogen atmosphere. Five methods were used to study the thermal decomposition of PHPIMB at different heating rates, and the results obtained using all the kinetic methods were compared with each other. The thermal decomposition of PHPIMB was found to be a simple process composed of three stages. The investigated methods were those of Flynn-Wall-Ozawa (FWO), Tang, Kissinger-Akahira-Sunose (KAS), Friedman and Kissinger.
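Of the five kinetic methods named, the Kissinger method is the simplest to state: the peak decomposition temperature Tp shifts with heating rate beta such that ln(beta/Tp^2) is linear in 1/Tp with slope -Ea/R. A sketch on synthetic data (the activation energy and temperatures below are invented, not the paper's measurements):

```python
import numpy as np

def kissinger_ea(beta, Tp):
    """Kissinger method: ln(beta / Tp**2) = const - Ea / (R * Tp), so a
    linear fit of ln(beta/Tp**2) against 1/Tp yields Ea from the slope."""
    R = 8.314  # gas constant, J/(mol*K)
    beta, Tp = np.asarray(beta, float), np.asarray(Tp, float)
    slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp ** 2), 1)
    return -slope * R  # activation energy in J/mol

# Peak temperatures synthesized from an assumed Ea of 150 kJ/mol
Ea_true = 150e3
Tp = np.array([500.0, 520.0, 540.0, 560.0])
beta = Tp ** 2 * np.exp(20.0 - Ea_true / (8.314 * Tp))
```

The isoconversional methods (FWO, KAS, Friedman) follow the same linear-fit pattern but at fixed conversion degrees rather than at the peak.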
Specification-Based Testing Via Domain Specific Language
Directory of Open Access Journals (Sweden)
Sroka Michal
2014-12-01
Full Text Available The article presents tCF (testCaseFramework), a domain specific language with a corresponding toolchain for specification-based software testing of embedded software. tCF is designed for efficient preparation of maintainable and intelligible test cases and for testing process automation, as it allows generation of platform specific test cases for various testing levels. The article describes the essential parts of the tCF meta-model and the applied concept of platform specific test case generators.
Directory of Open Access Journals (Sweden)
Dong Cui
2015-09-01
Full Text Available EEG characteristics that correlate with cognitive functions are important for detecting mild cognitive impairment (MCI) in T2DM. To investigate the complexity difference between an aMCI group and an age-matched non-aMCI control group in T2DM, six entropies combined with empirical mode decomposition (EMD), namely approximate entropy (ApEn), sample entropy (SaEn), fuzzy entropy (FEn), permutation entropy (PEn), power spectrum entropy (PsEn) and wavelet entropy (WEn), were used in the study. A feature extraction technique based on maximization of the area under the curve (AUC) and a support vector machine (SVM) were subsequently used for feature selection and classification. Finally, Pearson's linear correlation was employed to study associations between these entropies and cognitive functions. Compared to the other entropies, FEn had a higher classification accuracy, sensitivity and specificity of 68%, 67.1% and 71.9%, respectively. The top 43 salient features achieved classification accuracy, sensitivity and specificity of 73.8%, 72.3% and 77.9%, respectively. P4, T4 and C4 were the highest ranking salient electrodes. Correlation analysis showed that FEn based on EMD was positively correlated with memory at electrodes F7, F8 and P4, and PsEn based on EMD was positively correlated with Montreal Cognitive Assessment (MoCA) score and memory at electrode T4. In sum, FEn based on EMD in the right-temporal and occipital regions may be more suitable for early diagnosis of MCI with T2DM.
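As an illustration of the entropy family used above, sample entropy (SaEn) admits a compact definition: -log(A/B), where B counts matching template pairs of length m and A of length m+1. The sketch below is a simplified O(n^2) version (template counts differ slightly from the canonical definition), with m, r and the test signals chosen arbitrarily.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy sketch: -log(A/B), where B counts pairs of length-m
    templates within r*std (Chebyshev distance) and A does the same for
    length m+1. Lower values indicate a more regular signal."""
    x = np.asarray(x, float)
    tol = r * x.std()
    def pair_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return ((d < tol).sum() - len(templ)) / 2.0  # drop self-matches
    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0.0, 10.0 * np.pi, 200))  # predictable
noise = rng.normal(size=200)                           # irregular
se_regular = sample_entropy(regular)
se_noise = sample_entropy(noise)
```

A regular oscillation scores far lower than white noise, which is exactly the property such features exploit for discriminating EEG complexity.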
A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.
Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin
2017-01-01
Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most current work in FC has focused on the assumption of temporally stationary networks. However, recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC for understanding the formation and dissolution of networks of the brain. In this paper, we introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs) by first identifying change points at which the network connectivity across subjects shows significant changes and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects, yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change points are consistent with the a priori known ERN interval. The results show significant connectivities in medial-frontal regions, which are consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization. The proposed tensor-based method captures the topological structure of FCNs, which provides more accurate change-point detection and state summarization.
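The change-point criterion above rests on a subspace distance between low-rank approximations at consecutive time points. For matrices (one tensor mode at a time) that distance can be sketched via orthogonal projectors onto the dominant singular subspaces; the normalization to [0, 1] is a common convention, not necessarily the paper's exact measure.

```python
import numpy as np

def subspace_distance(A, B, rank):
    """Distance between the dominant rank-r column subspaces of two
    matrices via their orthogonal projectors; 0 means identical spans,
    1 means mutually orthogonal spans."""
    Ua = np.linalg.svd(A, full_matrices=False)[0][:, :rank]
    Ub = np.linalg.svd(B, full_matrices=False)[0][:, :rank]
    P, Q = Ua @ Ua.T, Ub @ Ub.T
    return np.linalg.norm(P - Q) / np.sqrt(2.0 * rank)

I4 = np.eye(4)
same = subspace_distance(I4[:, :2], I4[:, :2] * 3.0, rank=2)  # same span
orth = subspace_distance(I4[:, :1], I4[:, 1:2], rank=1)       # disjoint spans
```

A change point is flagged where this distance between consecutive network snapshots spikes above its typical level.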
Directory of Open Access Journals (Sweden)
Y. Sun
2017-09-01
Full Text Available A hyperspectral imaging system can obtain spectral and spatial information simultaneously, with bandwidths at the level of 10 nm or even less. Therefore, hyperspectral remote sensing has the ability to detect some kinds of objects which cannot be detected in wide-band remote sensing, making it one of the hottest topics in remote sensing. In this study, under conditions with a fuzzy set of full constraints, a Normalized Multi-Endmember Decomposition Method (NMEDM) for vegetation, water, and soil was proposed to reconstruct hyperspectral data using a large number of high-quality multispectral data and auxiliary spectral library data. This study considered spatial and temporal variation and decreased the calculation time required to reconstruct the hyperspectral data. The results of spectral reconstruction based on NMEDM show that the reconstructed data has good quality and certain applications, which makes it possible to carry out spectral feature identification. This method also extends the depth and breadth of remote sensing data applications, helping to explore the relationship between multispectral and hyperspectral data.
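The endmember-decomposition step underlying such methods can be sketched as linear unmixing with a sum-to-one abundance constraint, here enforced softly by a heavily weighted extra row. This is a generic unmixing sketch, not NMEDM itself, and the 3-band, 2-endmember spectra are invented.

```python
import numpy as np

def unmix_sum_to_one(E, y, w=1e3):
    """Linear unmixing y ~ E @ a with sum(a) = 1 enforced softly by
    appending a heavily weighted constraint row to the least squares."""
    Ea = np.vstack([E, w * np.ones((1, E.shape[1]))])
    ya = np.append(y, w)
    a, *_ = np.linalg.lstsq(Ea, ya, rcond=None)
    return a

# Endmember spectra as columns: 3 bands x 2 endmembers
E = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.4]])
a_true = np.array([0.3, 0.7])
y = E @ a_true                 # observed mixed pixel
a_hat = unmix_sum_to_one(E, y)
```

Recovered abundances per pixel are what lets a multispectral observation be re-expanded against a hyperspectral library.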
[Denoising of Fetal Heart Sound Based on Empirical Mode Decomposition Method].
Liu, Qiaoqiao; Tan, Zhixiang; Zhang, Yi; Wang, Hua
2015-08-01
Fetal heart sound is nonlinear and non-stationary and contains a lot of noise when collected, so the denoising method is important. We propose a new denoising method in this study. Firstly, we chose preprocessing with a low-pass filter (cutoff frequency 200 Hz) and resampling. Secondly, we decomposed the signal with the empirical mode decomposition (EMD) method of the Hilbert-Huang transform, then denoised selected target components with a wavelet soft threshold adaptive noise cancellation algorithm. Finally, we obtained the clean fetal heart sound by combining the target components. In the EMD, we used a mask signal to eliminate the mode mixing problem, used the mirror extension method to eliminate the end effect, and adopted the stopping criterion from the research of Rilling. This method eliminates the baseline drift and noise at once. Compared with the wavelet transform (WT), mathematical morphology (MM) and the Fourier transform (FT), the SNR was improved obviously and the RMSE was the minimum, which satisfies the needs of practical application.
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models fail to predict it accurately. On this basis, a hybrid method is proposed in this paper which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the influence factors of the crude oil price and predict its future values. The major steps are as follows: Firstly, applying the VMD model to the original signal (crude oil price), the mode functions are decomposed adaptively. Secondly, independent components are separated by ICA, and how the independent components affect the crude oil price is analyzed. Finally, forecasting the crude oil price with the ARIMA model, the forecast trend demonstrates that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA can forecast the crude oil price more accurately.
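The final forecasting stage of such pipelines can be sketched with the autoregressive core of ARIMA reduced, for illustration, to a plain least-squares AR(p) fit; the noiseless geometric test series is an assumption, and real use would fit each decomposed component separately.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p): x[t] ~ c + a1*x[t-1] + ... + ap*x[t-p]."""
    x = np.asarray(x, float)
    cols = [np.ones(len(x) - p)]
    cols += [x[p - k - 1:len(x) - k - 1] for k in range(p)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), x[p:], rcond=None)
    return coef  # [c, a1, ..., ap]

def forecast_ar(x, coef, steps):
    """Iterate the fitted recursion forward, feeding forecasts back in."""
    hist = list(np.asarray(x, float))
    p = len(coef) - 1
    for _ in range(steps):
        lags = hist[-1:-p - 1:-1]  # x[t-1], ..., x[t-p]
        hist.append(coef[0] + float(np.dot(coef[1:], lags)))
    return np.array(hist[-steps:])

x = 0.5 ** np.arange(10)        # exact AR(1) series: x[t] = 0.5 * x[t-1]
coef = fit_ar(x, p=1)
pred = forecast_ar(x, coef, steps=2)
```

On this noiseless series the fit recovers the generating coefficients exactly, so the two-step forecast continues the geometric decay.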
Sierra, M; Grasa, J; Muñoz, M J; Miana-Mena, F J; González, D
2017-04-01
A novel technique is proposed to predict force reduction in skeletal muscle due to fatigue under the influence of electrical stimulus parameters and muscle physiological characteristics. Twelve New Zealand white rabbits were divided into four groups ([Formula: see text]) to obtain the active force evolution of in vitro Extensor Digitorum Longus muscles over an hour of repeated contractions under different electrical stimulation patterns. Left and right muscles were tested, and a total of 24 samples were used to construct a response surface based on the proper generalized decomposition. After the response surface development, one additional rabbit was used to check the predictive potential of the technique. This multidimensional surface takes into account not only the decay of the maximum repeated peak force, but also the shape evolution of each contraction, muscle weight, electrical input signal and stimulation protocol. This new approach to the fatigue simulation challenge allows prediction, within the generated multidimensional surface, of the muscle response under other stimulation patterns, different tissue weights, etc.
Improving performance of channel equalization in RSOA-based WDM-PON by QR decomposition.
Li, Xiang; Zhong, Wen-De; Alphones, Arokiaswami; Yu, Changyuan; Xu, Zhaowen
2015-10-19
In reflective semiconductor optical amplifier (RSOA)-based wavelength division multiplexed passive optical networks (WDM-PONs), the bit rate is limited by the low modulation bandwidth of RSOAs. To overcome this limitation, we apply QR decomposition in the channel equalizer (QR-CE) to achieve successive interference cancellation (SIC) for discrete Fourier transform spreading orthogonal frequency division multiplexing (DFT-S OFDM) signals. Using an RSOA with a 3-dB modulation bandwidth of only ~800 MHz, we experimentally demonstrate a 15.5-Gb/s DFT-S OFDM transmission over 20 km of SSMF with QR-CE. The experimental results show that DFT-S OFDM with QR-CE attains much better BER performance than DFT-S OFDM and OFDM with conventional channel equalizers. The impacts of several parameters on QR-CE are investigated. It is found that 2 sub-bands in one OFDM symbol and 1 pilot in each sub-band are sufficient to achieve optimal performance and maintain high spectral efficiency.
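The QR-based SIC idea can be sketched for a generic linear channel y = H x + n: factor H = Q R with R upper triangular, then detect symbols from the bottom row up, slicing each estimate to the constellation and cancelling it from the rows above. The 2x2 channel matrix and BPSK constellation below are toy assumptions, not the paper's DFT-S OFDM setup.

```python
import numpy as np

def qr_sic_detect(H, y, constellation):
    """QR-based successive interference cancellation for y = H @ x + n:
    with H = Q R (R upper triangular), detect from the last symbol up,
    slicing each estimate and cancelling it from the remaining rows."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y            # rotate: z = R @ x + rotated noise
    n = H.shape[1]
    x_hat = np.zeros(n, dtype=complex)
    for i in range(n - 1, -1, -1):
        est = (z[i] - R[i, i + 1:] @ x_hat[i + 1:]) / R[i, i]
        x_hat[i] = min(constellation, key=lambda s: abs(s - est))
    return x_hat

H = np.array([[2.0, 0.3],
              [0.1, 1.5]])        # toy 2x2 interference channel
x = np.array([1.0, -1.0])         # BPSK symbols
x_hat = qr_sic_detect(H, H @ x, constellation=[-1.0, 1.0])
```

In the noiseless toy case the back-substitution with slicing recovers the transmitted symbols exactly; with noise, early correct decisions still remove their interference from later ones.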
Zhou, Nengjie; Lu, Zhenyu; Wu, Qin; Zhang, Yingkai
2014-06-07
We examine interatomic interactions for rare gas dimers using the density-based energy decomposition analysis (DEDA) in conjunction with computational results from CCSD(T) at the complete basis set (CBS) limit. The unique DEDA capability of separating frozen density interactions from density relaxation contributions is employed to yield clean interaction components, and the results are found to be consistent with the typical physical picture that density relaxation plays a very minimal role in rare gas interactions. Using each interaction component as a reference, we develop a new three-term molecular mechanical force field to describe rare gas dimers: a smeared charge multipole model for electrostatics with charge penetration effects, a B3LYP-D3 dispersion term for asymptotically correct long-range attractions that is screened at short range, and a Born-Mayer exponential function for the repulsion. The resulting force field not only reproduces rare gas interaction energies calculated at the CCSD(T)/CBS level, but also yields each interaction component (electrostatic or van der Waals) in very good agreement with its corresponding reference value.
Sun, Y.; Lin, Y.; Hu, X.; Zhao, S.; Liu, S.; Tong, Q.; Helder, D.; Yan, L.
2017-09-01
Hyperspectral imaging systems can obtain spectral and spatial information simultaneously, with bandwidths at the level of 10 nm or even less. Therefore, hyperspectral remote sensing can detect objects that cannot be detected by wide-band remote sensing, making it one of the most active topics in remote sensing. In this study, under a fuzzy set of full constraints, a Normalized Multi-Endmember Decomposition Method (NMEDM) for vegetation, water, and soil was proposed to reconstruct hyperspectral data using a large number of high-quality multispectral data and auxiliary spectral library data. This study considered spatial and temporal variation and decreased the calculation time required to reconstruct the hyperspectral data. The results of spectral reconstruction based on NMEDM show that the reconstructed data are of good quality and support useful applications, making spectral feature identification possible. This method also extends the depth and breadth of remote sensing data applications, helping to explore the relationship between multispectral and hyperspectral data.
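The fully constrained (non-negative, sum-to-one) linear unmixing underlying multi-endmember decomposition can be sketched by appending a heavily weighted sum-to-one row to a non-negative least-squares problem. The endmember spectra and abundances below are invented for illustration; this is not the paper's NMEDM algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical endmember reflectances: rows = bands, columns = vegetation,
# water, soil (values invented for illustration)
E = np.array([[0.10, 0.45, 0.30],
              [0.15, 0.50, 0.28],
              [0.60, 0.05, 0.32],
              [0.70, 0.04, 0.35]])
true_a = np.array([0.5, 0.2, 0.3])      # abundances: non-negative, sum to one
pixel = E @ true_a                      # noiseless mixed pixel

# enforce the sum-to-one constraint by appending a heavily weighted row of
# ones; non-negativity is handled by NNLS itself
delta = 100.0
E_aug = np.vstack([E, delta * np.ones(3)])
p_aug = np.append(pixel, delta)
a, _ = nnls(E_aug, p_aug)
print(np.round(a, 3))                   # recovered abundances
```

The weight `delta` trades off spectral fit against the abundance-sum constraint; with noiseless data the true abundances are recovered exactly.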
Low-rank and sparse matrix decomposition-based anomaly detection for hyperspectral imagery
Sun, Weiwei; Liu, Chun; Li, Jialin; Lai, Yenming Mark; Li, Weiyue
2014-01-01
A low-rank and sparse matrix decomposition (LRaSMD) detector has been proposed to detect anomalies in hyperspectral imagery (HSI). The detector assumes background images are low-rank while anomalies are gross errors that are sparsely distributed throughout the image scene. By solving a constrained convex optimization problem, the LRaSMD detector separates the anomalies from the background. This protects the background model from corruption. An anomaly value for each pixel is calculated using the Euclidean distance, and anomalies are determined by thresholding the anomaly value. Four groups of experiments on three widely used HSI datasets are designed to comprehensively analyze the performance of the new detector. Experimental results show that the LRaSMD detector outperforms the global Reed-Xiaoli (GRX), the orthogonal subspace projection-GRX, and the cluster-based detectors. Moreover, the results show that LRaSMD achieves equal or better detection performance than the local support vector data description detector within a shorter computational time.
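The low-rank plus sparse split itself can be illustrated with a simple GoDec-style alternation (truncated SVD for the low-rank part, hard thresholding for the sparse part). This is a sketch, not the constrained convex solver used by the LRaSMD detector, and the rank, cardinality, and synthetic scene below are arbitrary choices.

```python
import numpy as np

def lrasmd_sketch(X, rank=2, card=30, iters=25):
    """GoDec-style low-rank + sparse split (a sketch, not the paper's solver)."""
    S = np.zeros_like(X)
    for _ in range(iters):
        # low-rank update: truncated SVD of X - S
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # sparse update: keep the `card` largest-magnitude residual entries
        R = X - L
        S = np.zeros_like(X)
        idx = np.unravel_index(np.argsort(np.abs(R), axis=None)[-card:], X.shape)
        S[idx] = R[idx]
    return L, S

rng = np.random.default_rng(1)
# synthetic "background": a rank-2 scene of 50 pixels x 30 bands
background = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))
X = background.copy()
X[7] += 2.5                            # one anomalous pixel, offset in every band
L, S = lrasmd_sketch(X)
anomaly = np.linalg.norm(S, axis=1)    # Euclidean anomaly value per pixel
print(int(np.argmax(anomaly)))         # index of the anomalous pixel
```

Thresholding `anomaly` then flags the planted pixel, mirroring the detector's last step.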
Rafiq Abuturab, Muhammad
2018-01-01
A new asymmetric multiple-information cryptosystem based on chaotic spiral phase masks (CSPMs) and random spectrum decomposition is put forward. In the proposed system, each channel of a secret color image is first modulated with a CSPM and then gyrator transformed. The gyrator spectrum is randomly divided into two complex-valued masks. The same procedure is applied to multiple secret images to obtain their corresponding first and second complex-valued masks. Finally, the first and second masks of each channel are independently added to produce the first and second complex ciphertexts, respectively. The main feature of the proposed method is that the different secret images are encrypted by different CSPMs whose parameters serve as the sensitive decryption/private keys, which are completely unknown to unauthorized users. Consequently, the proposed system is resistant to potential attacks. Moreover, the CSPMs are easy to position in the decoding process owing to their centering mark on the axis focal ring. The retrieved secret images are free from cross-talk noise effects. The decryption process can be implemented by optical experiment. Numerical simulation results demonstrate the viability and security of the proposed method.
Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi
2018-03-01
This paper proposes a novel spatial frequency analysis method for the investigation of potassium dihydrogen phosphate (KDP) crystal surfaces based on an improved bidimensional empirical mode decomposition (BEMD) method. To eliminate end effects of the BEMD method and improve the intrinsic mode functions (IMFs) for efficient identification of texture features, a denoising process was embedded in the sifting iteration of the BEMD method. By removing redundant information from the decomposed sub-components of the KDP crystal surface, the middle spatial frequencies of the cutting and feeding processes were identified. Comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT), and the traditional BEMD method demonstrated that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique requiring no prior knowledge, which overcomes shortcomings of the 2D-WT model such as parameter selection. It is thus a promising tool for online monitoring and optimal control of precision machining processes.
Boniface Ngah Epo; Francis Menjo Baye; Nadine Teme Angele Manga
2011-01-01
This study applies the regression-based inequality decomposition technique to explain poverty and inequality trends in Cameroon. We also identify gender-related factors which explain income disparities and discrimination, based on the 2001 and 2007 Cameroon household consumption surveys. The results show that education, health, employment in the formal sector, age cohorts, household size, gender, ownership of farmland and urban versus rural residence explain household economic wellbeing; dispa...
International Nuclear Information System (INIS)
Hu, T. Y.; Connolly, S. M.; Lahoda, E. J.; Kriel, W.
2008-01-01
The key interface component between the reactor and chemical systems for the sulfuric acid based processes to make hydrogen is the sulfuric acid decomposition reactor. The materials issues for the decomposition reactor are severe since sulfuric acid must be heated, vaporized and decomposed. SiC has been identified and proven by others to be an acceptable material. However, SiC has a significant design issue when it must be interfaced with metals for connection to the remainder of the process. Westinghouse has developed a design utilizing SiC for the high temperature portions of the reactor that are in contact with the sulfuric acid and polymeric coated steel for low temperature portions. This design is expected to have a reasonable cost for an operating lifetime of 20 years. It can be readily maintained in the field, and is transportable by truck (maximum OD is 4.5 meters). This paper summarizes the detailed engineering design of the Westinghouse Decomposition Reactor and the decomposition reactor's capital cost. (authors)
DEFF Research Database (Denmark)
Merker, Martin
The topic of this PhD thesis is graph decompositions. While there exist various kinds of decompositions, this thesis focuses on three problems concerning edge-decompositions. Given a family of graphs H we ask the following question: When can the edge-set of a graph be partitioned so that each part...... k(T)-edge-connected graph whose size is divisible by the size of T admits a T-decomposition. This proves a conjecture by Barát and Thomassen from 2006. Moreover, we introduce a new arboricity notion where we restrict the diameter of the trees in a decomposition into forests. We conjecture......-connected planar graph contains two edge-disjoint 18/19 -thin spanning trees. Finally, we make progress on a conjecture by Baudon, Bensmail, Przybyło, and Wozniak stating that if a graph can be decomposed into locally irregular graphs, then there exists such a decomposition with at most 3 parts. We show
Entropy based classifier for cross-domain opinion mining
Directory of Open Access Journals (Sweden)
Jyoti S. Deshmukh
2018-01-01
In recent years, the growth of social networks has increased people's interest in analyzing reviews and opinions of products before buying them. Consequently, this has made domain adaptation a prominent area of research in sentiment analysis. A classifier trained on one domain often gives poor results on data from another domain, since sentiment is expressed differently in every domain and labeling each domain separately is costly and time consuming. Therefore, this study proposes an approach that extracts and classifies opinion words from one domain, called the source domain, and predicts opinion words of another domain, called the target domain, using a semi-supervised approach that combines modified maximum entropy and bipartite graph clustering. A comparison of opinion classification on reviews of four different product domains is presented. The results demonstrate that the proposed method performs relatively well in comparison to the other methods. Comparison against SentiWordNet of domain-specific and domain-independent words reveals that, on average, 72.6% and 88.4% of words, respectively, are correctly classified.
Optimization of dual-energy CT acquisitions for proton therapy using projection-based decomposition.
Vilches-Freixas, Gloria; Létang, Jean Michel; Ducros, Nicolas; Rit, Simon
2017-09-01
Dual-energy computed tomography (DECT) has been presented as a valid alternative to single-energy CT to reduce the uncertainty of the conversion of patient CT numbers to proton stopping power ratio (SPR) of tissues relative to water. The aim of this work was to optimize DECT acquisition protocols from simulations of X-ray images for the treatment planning of proton therapy using a projection-based dual-energy decomposition algorithm. We have investigated the effect of various voltages and tin filtration combinations on the SPR map accuracy and precision, and the influence of the dose allocation between the low-energy (LE) and the high-energy (HE) acquisitions. For all spectra combinations, virtual CT projections of the Gammex phantom were simulated with a realistic energy-integrating detector response model. Two situations were simulated: an ideal case without noise (infinite dose) and a realistic situation with Poisson noise corresponding to a 20 mGy total central dose. To determine the optimal dose balance, the proportion of LE-dose with respect to the total dose was varied from 10% to 90% while keeping the central dose constant, for four dual-energy spectra. SPR images were derived using a two-step projection-based decomposition approach. The ranges of 70 MeV, 90 MeV, and 100 MeV proton beams onto the adult female (AF) reference computational phantom of the ICRP were analytically determined from the reconstructed SPR maps. The energy separation between the incident spectra had a strong impact on the SPR precision. Maximizing the incident energy gap reduced image noise. However, the energy gap was not a good metric to evaluate the accuracy of the SPR. In terms of SPR accuracy, a large variability of the optimal spectra was observed when studying each phantom material separately. The SPR accuracy was almost flat in the 30-70% LE-dose range, while the precision showed a minimum slightly shifted in favor of lower LE-dose. Photon noise in the SPR images (20 mGy dose
Polarization-sensitive optical frequency domain imaging based on unpolarized light.
Kim, Ki Hean; Park, B Hyle; Tu, Yupeng; Hasan, Tayyaba; Lee, Byunghak; Li, Jianan; de Boer, Johannes F
2011-01-17
Polarization-sensitive optical coherence tomography (PS-OCT) is an augmented form of OCT, providing 3D images of both tissue structure and polarization properties. We developed a new method of polarization-sensitive optical frequency domain imaging (PS-OFDI) based on a wavelength-swept source. In this method the sample is illuminated with unpolarized light, composed of two orthogonal polarization states (i.e., separated by 180° on the Poincaré sphere) that are uncorrelated with each other. Reflections of these polarization states from within the sample were detected simultaneously and independently using a frequency-multiplexing scheme. This simultaneous probing with two polarization states enabled determination of the depth-resolved Jones matrices of the sample. Polarization properties of the sample were obtained by analyzing the sample Jones matrices through eigenvector decomposition. The new PS-OFDI system ran at 31K wavelength-scans/s with 3072 pixels per wavelength scan, and was tested by imaging a polarizer and several birefringent tissues such as chicken muscle and human skin. Lastly, the new PS-OFDI was applied to imaging two animal cancer models: a mouse model created by injecting cancer cells and a hamster cheek pouch model. These animal studies demonstrated significant differences in tissue polarization properties between cancer and normal tissues in vivo.
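How eigenvector decomposition of a Jones matrix yields a polarization property can be sketched with an idealized linear retarder: the retardance is the phase difference between the two eigenvalues, regardless of the (unknown) orientation of the eigen-axes. The retardance value and axis angle below are hypothetical, and real depth-resolved Jones matrices also carry diattenuation and noise that this sketch ignores.

```python
import numpy as np

delta = 0.6                     # retardance in radians (hypothetical value)
# Jones matrix of an ideal linear retarder, eigen-axes rotated by 30 degrees
J0 = np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)])
th = np.radians(30)
Rm = np.array([[np.cos(th), -np.sin(th)],
               [np.sin(th),  np.cos(th)]])
J = Rm @ J0 @ Rm.T

# eigenvector decomposition: the retardance is the phase difference between
# the two eigenvalues, independent of the axis orientation
w, v = np.linalg.eig(J)
retardance = abs(np.angle(w[0] / w[1]))
print(round(retardance, 3))     # → 0.6
```

The eigenvectors themselves give the retarder's optic axis, which is the other quantity typically mapped in PS-OCT/PS-OFDI.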
Directory of Open Access Journals (Sweden)
Mishra Vinod
2016-01-01
The numerical Laplace transform method is applied to approximate the solution of nonlinear (quadratic) Riccati differential equations, combined with the Adomian decomposition method. A new technique is proposed in this work by reintroducing the unknown function in the Adomian polynomial with the well-known Newton-Raphson formula. The solutions obtained by the iterative algorithm are exhibited as an infinite series. The simplicity and efficacy of the method are demonstrated with examples in which comparisons are made among the exact solutions, ADM (Adomian decomposition method), HPM (homotopy perturbation method), the Taylor series method and the proposed scheme.
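As a minimal illustration of the plain Adomian decomposition method (without the Laplace transform or Newton-Raphson modification proposed here), consider the classic quadratic Riccati problem y' = 1 - y², y(0) = 0, whose exact solution is tanh t. The Adomian polynomials for the nonlinearity y² are generated by the standard λ-parametrization:

```python
import sympy as sp

t = sp.symbols('t')
lam = sp.symbols('lambda')

# Riccati problem y' = 1 - y^2, y(0) = 0; exact solution is tanh(t).
# Decompose y = sum y_n, with y_0 = ∫1 dt and y_{n+1} = -∫ A_n dt,
# where A_n are the Adomian polynomials of f(y) = y^2.
N = 6
y = [sp.integrate(1, (t, 0, t))]          # y_0 = t
for n in range(1, N):
    # A_{n-1} = (1/(n-1)!) d^{n-1}/dλ^{n-1} [ (Σ λ^k y_k)^2 ] at λ = 0
    ypar = sum(lam**k * y[k] for k in range(n))
    A = sp.diff(ypar**2, lam, n - 1).subs(lam, 0) / sp.factorial(n - 1)
    y.append(sp.integrate(-A, (t, 0, t)))

approx = sp.expand(sum(y))
exact = sp.series(sp.tanh(t), t, 0, 2 * N).removeO()
print(sp.simplify(approx - exact))        # → 0
```

For this problem the first six Adomian terms reproduce the Taylor series of tanh t through order t¹¹, which is the kind of series-solution agreement the abstract's comparisons examine.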
Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition
Directory of Open Access Journals (Sweden)
yuan Shuai
2017-01-01
In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. The low-rank and sparse decomposition is then performed, guided by the speech rank value, to remove the noise. Extensive experiments were carried out under white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, yielding less residual noise and lower speech distortion.
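The subspace idea behind this kind of method can be sketched in SSA style: embed the noisy signal into a Toeplitz/Hankel-structured trajectory matrix, project onto the top singular directions (the assumed signal rank), and average back to 1-D. The single-tone "speech", window length, and rank of 2 below are illustrative assumptions, not the paper's rank-estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n, fs = 512, 8000.0
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 440 * t)        # toy "speech": a single tone
noisy = clean + 0.3 * rng.normal(size=n)   # white Gaussian noise

# trajectory matrix of lagged windows (Hankel/Toeplitz-structured);
# a real sinusoid spans a rank-2 signal subspace
L = 128
X = np.column_stack([noisy[i:i + L] for i in range(n - L + 1)])
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Xl = (U[:, :2] * s[:2]) @ Vt[:2]           # project onto the signal subspace

# anti-diagonal averaging back to a 1-D signal
den = np.zeros(n); cnt = np.zeros(n)
for j in range(Xl.shape[1]):
    den[j:j + L] += Xl[:, j]; cnt[j:j + L] += 1
den /= cnt

snr = lambda x: 10 * np.log10(np.sum(clean ** 2) / np.sum((x - clean) ** 2))
print(snr(den) > snr(noisy))               # denoised SNR improves → True
```

White noise spreads its energy over all singular directions while the signal concentrates in a few, which is why truncating to the estimated speech rank suppresses noise.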
Hu, Xintao; Zhu, Jianxin; Ding, Qiong
2011-07-15
Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. Dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB-contaminated sites, where Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) were selected as representatives of incineration and non-incineration. A combined midpoint/damage approach was adopted, using SimaPro 7.2 and IMPACT 2002+, to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption of the five subsystems of the IHTI and BCD technologies, respectively. It was found that the major environmental impacts through the whole life cycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts concerning carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication and global warming. In the BCD process, the rotary kiln reactor subsystem presents the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity and respiratory inorganics. In terms of midpoint impacts, the characterization values for global warming from IHTI and BCD were about 432.35 and 38.5 kg CO(2)-eq per ton of PCB-containing soil, respectively. LCA results showed that the single score of the BCD environmental impact was 1468.97 Pt while IHTI's score was 2785.15 Pt, which indicates that BCD potentially has a lower environmental impact than IHTI technology in the PCB-contaminated soil remediation process. Copyright © 2011 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Daryl L Moorhead
2013-08-01
We re-examined data from a recent litter decay study to determine whether additional insights could be gained to inform decomposition modeling. Rinkes et al. (2013) conducted 14-day laboratory incubations of sugar maple (Acer saccharum) or white oak (Quercus alba) leaves, mixed with sand (0.4% organic C content) or loam (4.1% organic C). They measured microbial biomass C, carbon dioxide efflux, soil ammonium, nitrate, and phosphate concentrations, and β-glucosidase (BG), β-N-acetyl-glucosaminidase (NAG), and acid phosphatase (AP) activities on days 1, 3, and 14. Analyses of relationships among variables yielded different insights than the original analyses of individual variables. For example, although respiration rates per g soil were higher for loam than sand, rates per g soil C were actually higher for sand than loam, and rates per g microbial C showed little difference between treatments. Microbial biomass C peaked on day 3, when biomass-specific enzyme activities were lowest, suggesting uptake of litter C without extracellular hydrolysis. This result refuted a common model assumption that all enzyme production is constitutive and thus proportional to biomass, and/or indicated that part of litter decay is independent of enzyme activity. The length and angle of vectors defined by ratios of enzyme activities (BG/NAG versus BG/AP) represent relative microbial investments in C-acquiring (length) versus N- and P-acquiring (angle) enzymes. Shorter lengths on day 3 suggested low C limitation, whereas greater lengths on day 14 suggested an increase in C limitation with decay. The soils and litter in this study generally showed stronger P limitation (angles > 45˚). Reductions in vector angles to < 45˚ for sand by day 14 suggested a shift to N limitation. These relational variables inform enzyme-based models, and are usually much less ambiguous when obtained from a single study in which measurements were made on the same samples than when extrapolated from separate studies.
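The vector length and angle described above can be computed directly from the three enzyme activities. The proportional axis form used here (x = BG/(BG+AP), y = BG/(BG+NAG)) and the activity values are assumptions of this sketch, not numbers from the study.

```python
import math

def enzyme_vector(bg, nag, ap):
    """Vector length (relative C investment) and angle (N vs. P investment)
    from ecoenzymatic activities; axes follow the proportional form
    x = BG/(BG+AP), y = BG/(BG+NAG) (an assumption of this sketch)."""
    x = bg / (bg + ap)
    y = bg / (bg + nag)
    length = math.hypot(x, y)                # longer -> stronger C limitation
    angle = math.degrees(math.atan2(x, y))   # > 45 deg -> P limitation, < 45 deg -> N
    return length, angle

# hypothetical activities (e.g., nmol g^-1 h^-1)
length, angle = enzyme_vector(bg=120.0, nag=60.0, ap=40.0)
print(round(length, 3), round(angle, 1))
```

With these hypothetical activities the angle exceeds 45˚, which under this scheme would be read as P limitation, matching the interpretation rule in the abstract.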
Empirical Mode Decomposition-Based Analysis of Heart Rate Signal Affected by Iranian Music
Directory of Open Access Journals (Sweden)
Soheila HAJIZADEH
2015-09-01
Purpose: Several studies have measured the effects of music on various vital signs, most frequently the electrocardiogram (ECG) and, consequently, the heart rate (HR). This study was conducted to address the effects of Iranian music on cardiac functioning by thoroughly examining the HR extracted from ECG signals. A strong mathematical method is needed to extract signal features. One adaptive mathematical analysis is empirical mode decomposition (EMD), which is implemented to analyze nonlinear and non-stationary data. This method can decompose any complicated signal into a group of intrinsic mode functions (IMFs) through a sifting process. Basic methods: In this paper, the EMD-based feature extraction algorithm for the HR signal, which does not require an a priori functional basis, is described. The fast Fourier transform (FFT) is used to identify the peaks in the signal. Then the maximum amplitude (MaxFFT) and maximum frequency (MaxFreq) obtained from the FFT, and the sample entropy (SampEn), are calculated for each extracted IMF and their combinations. The SampEn algorithm is applied to calculate the complexity of each IMF and their combinations. A paired-sample t-test was also conducted to assess whether there were any significant differences between the MaxFFT, SampEn and MaxFreq values of the IMFs. Main results: Considering the high-frequency IMFs, the results indicate that the MaxFFT values decreased, while the SampEn and MaxFreq values increased, during listening to Iranian music. Conclusion: Experimental results from 62 subjects showed that the proposed methodology can be useful in showing the differences between the pre-music and during-music stages.
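The MaxFFT/MaxFreq features can be sketched for a single synthetic "IMF" as follows; the sampling rate, oscillation frequency, and noise level are hypothetical, and the EMD sifting step that would produce the IMF is omitted.

```python
import numpy as np

fs = 4.0                      # sampling rate of an HR series (Hz), hypothetical
t = np.arange(0, 60, 1 / fs)  # one minute of data
# toy "IMF": a 0.25 Hz oscillation (respiratory sinus arrhythmia band) + noise
rng = np.random.default_rng(2)
imf = 1.5 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.normal(size=t.size)

spec = np.abs(np.fft.rfft(imf)) / t.size     # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
k = np.argmax(spec[1:]) + 1                  # skip the DC bin
max_fft, max_freq = spec[k], freqs[k]        # MaxFFT and MaxFreq of this IMF
print(round(max_freq, 2))                    # → 0.25
```

SampEn would be computed on the same IMF samples; only the spectral pair is shown here since it needs no extra parameters beyond the FFT.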
An Improved Algorithm to Delineate Urban Targets with Model-Based Decomposition of PolSAR Data
Directory of Open Access Journals (Sweden)
Dingfeng Duan
2017-10-01
In model-based decomposition algorithms using polarimetric synthetic aperture radar (PolSAR) data, urban targets are typically identified based on the existence of strong double-bounce scattering. However, urban targets with large azimuth orientation angles (AOAs) produce strong volumetric scattering that appears similar to the scattering characteristics of tree canopies. Due to this scattering ambiguity, urban targets can be misclassified into the vegetation category if the usual classification scheme of model-based PolSAR decomposition algorithms is followed. To resolve the ambiguity and ultimately reduce the misclassification, we introduced a correlation coefficient that characterizes the scattering mechanisms of urban targets with variable AOAs. An existing volumetric scattering model was then modified, and a PolSAR decomposition algorithm was developed. The validity and effectiveness of the algorithm were examined using four PolSAR datasets. The algorithm was valid and effective in delineating urban targets with a wide range of AOAs, and applicable to a broad range of ground targets from urban areas as well as from upland and flooded forest stands.
Energy Technology Data Exchange (ETDEWEB)
Plonka, Anna M.; Wang, Qi; Gordon, Wesley O.; Balboa, Alex; Troya, Diego; Guo, Weiwei; Sharp, Conor H.; Senanayake, Sanjaya D.; Morris, John R.; Hill, Craig L.; Frenkel, Anatoly I. (BNL); (Virginia Tech); (ECBC); (Emory); (SBU)
2017-01-18
Zr-based metal organic frameworks (MOFs) have been recently shown to be among the fastest catalysts of nerve-agent hydrolysis in solution. We report a detailed study of the adsorption and decomposition of a nerve-agent simulant, dimethyl methylphosphonate (DMMP), on UiO-66, UiO-67, MOF-808, and NU-1000 using synchrotron-based X-ray powder diffraction, X-ray absorption, and infrared spectroscopy, which reveals key aspects of the reaction mechanism. The diffraction measurements indicate that all four MOFs adsorb DMMP (introduced at atmospheric pressures through a flow of helium or air) within the pore space. In addition, the combination of X-ray absorption and infrared spectra suggests direct coordination of DMMP to the Zr6 cores of all MOFs, which ultimately leads to decomposition to phosphonate products. These experimental probes into the mechanism of adsorption and decomposition of chemical warfare agent simulants on Zr-based MOFs open new opportunities in rational design of new and superior decontamination materials.
Czech Academy of Sciences Publication Activity Database
Sedláček, R.; Suchý, Tomáš; Balík, Karel; Sochor, M.; Sucharda, Zbyněk
2011-01-01
Roč. 14, 109-111 (2011), s. 9-11 ISSN 1429-7248 R&D Projects: GA ČR(CZ) GAP108/10/1457 Institutional research plan: CEZ:AV0Z30460519 Keywords : composite material * sterilization decomposition * carbon fibers Subject RIV: BO - Biophysics http://www.biomat.krakow.pl/english/journal/editorial.html
Evolution based on domain combinations: the case of glutaredoxins
Directory of Open Access Journals (Sweden)
Herrero Enrique
2009-03-01
Abstract. Background: Protein domains represent the basic units in the evolution of proteins. Domain duplication and shuffling by recombination and fusion, followed by divergence, are the most common mechanisms in this process. Such domain fusion and recombination events are predicted to occur only once for a given multidomain architecture. However, other scenarios may be relevant in the evolution of specific proteins, such as convergent evolution of multidomain architectures. With this in mind, we study glutaredoxin (GRX) domains, because these domains of approximately one hundred amino acids are widespread in archaea, bacteria and eukaryotes and participate in fusion proteins. GRXs are responsible for the reduction of protein disulfides or glutathione-protein mixed disulfides and are involved in cellular redox regulation, although their specific roles and targets are often unclear. Results: In this work we analyze the distribution and evolution of GRX proteins in archaea, bacteria and eukaryotes. We study over one thousand GRX proteins, each containing at least one GRX domain, from hundreds of different organisms and trace the origin and evolution of the GRX domain within the tree of life. Conclusion: Our results suggest that single-domain GRX proteins of the CGFS and CPYC classes have each evolved through duplication and divergence from one initial gene that was present in the last common ancestor of all organisms. Remarkably, we identify a case of convergent evolution in domain architecture that involves the GRX domain. Two independent recombination events of a TRX domain to a GRX domain are likely to have occurred, which is an exception to the dominant mechanism of domain architecture evolution.
Hierarchical decomposition of burn body diagram based on cutaneous functional units and its utility.
Richard, Reg; Jones, John A; Parshley, Philip
2015-01-01
A burn body diagram (BBD) is a common tool used in the delivery of burn care for estimating the TBSA of a burn as well as calculating fluid resuscitation and nutritional requirements, wound healing, and rehabilitation interventions. However, little change has occurred in the configuration of the BBD for over seven decades. The purpose of this project was to develop a computerized model using hierarchical decomposition (HD) to more precisely determine the percentage burn within a BBD based on cutaneous functional units (CFUs). HD is a process by which a system is degraded into smaller parts that are more precise in their use. CFUs are previously identified fields of the skin involved in range of motion. A standard Lund/Browder (LB) BBD template was used as the starting point to apply the CFU segments. LB body divisions were parceled into smaller body-area divisions through an HD process based on the CFU concept. A numerical pattern schema was used to label the various segments in a cephalo/caudal, anterior/posterior, medial/lateral manner. The hand/fingers were divided based on anatomical landmarks and known cutaneokinematic function. The face was treated using aesthetic units. Computer code was written to apply the numeric hierarchical schema to CFUs within the context of the surface area graphic evaluation BBD program. Each segmented CFU was coded to express 100% of itself. The CFU/HD method refined the standard LB diagram from 13 body segments and 33 subdivisions into 182 isolated CFUs. Associated CFUs were reconstituted into 219 various surface-area combinations, totaling 401 possible surface segments. The CFU/HD schema of body surface mapping is applicable to measuring and calculating percent wound healing more precisely. It eliminates subjective assessment of the percentage wound healing and the need for additional devices such as planimetry. The development of the CFU/HD body mapping schema has rendered a technologically advanced
Domain Wall Mobility in Co-Based Amorphous Wire
Directory of Open Access Journals (Sweden)
Maria Kladivova
2007-01-01
The dynamics of the domain wall between opposite circularly magnetized domains in an amorphous cylindrical sample with a circular easy direction is theoretically studied. The wall is driven by a DC current. Various mechanisms which influence the wall velocity were taken into account: current magnitude, deformation of the moving wall, the Hall effect, and an axially magnetized domain in the middle of the wire. The theoretical results obtained are in good agreement with experiments on Co-based amorphous ferromagnetic wires.
Chen, Ya-Chen; Hsiao, Tzu-Chien
2016-10-06
Thoracoabdominal asynchrony is often used to discriminate respiratory diseases in clinics. Conventionally, Lissajous figure analysis is the most frequently used estimate of the phase difference in thoracoabdominal asynchrony. However, the temporal resolution of the produced results is low, and the estimation error increases when the signals are not sinusoidal. Other previous studies have reported time-domain procedures using band-pass filters for phase-angle estimation. Nevertheless, the band-pass filters need calibration to eliminate phase delay. To improve the estimation, we propose a novel method (named instantaneous phase difference) based on complementary ensemble empirical mode decomposition for estimating the instantaneous phase relation between measured thoracic wall movement and abdominal wall movement. To validate the proposed method, experiments on simulated time series and human-subject respiratory data with two breathing types (i.e., thoracic breathing and abdominal breathing) were conducted. The latest version of Lissajous figure analysis and an automatic phase estimation procedure were used for comparison. The simulation results show that the standard deviations of the proposed method were lower than those of the two conventional methods, and the proposed method performed more accurately. For the human-subject respiratory data, the results of the proposed method are in line with those in the literature, and correlation analysis reveals that they were positively correlated with the results generated by the two conventional methods. Furthermore, the standard deviation of the proposed method was also the smallest. To summarize, this study proposes a novel method for estimating instantaneous phase differences. According to the findings from both the simulation and human-subject data, our approach was demonstrated to be effective. The method offers the following advantages: (1) improves the temporal
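The core of an instantaneous-phase-difference estimate can be sketched with the analytic signal: each band's instantaneous phase comes from a Hilbert-style transform, and their difference is tracked sample by sample. The breathing rate, lag, and clean sinusoidal signals below are hypothetical, and the CEEMD sifting stage of the paper's method is omitted.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (one-sided spectrum), as in a Hilbert transform."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0; h[1:N // 2] = 2.0; h[N // 2] = 1.0   # N is even here
    return np.fft.ifft(X * h)

fs = 50.0
t = np.arange(0, 20, 1 / fs)
# toy thoracic and abdominal wall movements: same 0.3 Hz breathing rate,
# with the abdomen lagging the ribcage by 60 degrees
thorax = np.sin(2 * np.pi * 0.3 * t)
abdomen = np.sin(2 * np.pi * 0.3 * t - np.pi / 3)

ipd = np.degrees(np.unwrap(np.angle(analytic(thorax)))
                 - np.unwrap(np.angle(analytic(abdomen))))
mid = ipd[len(ipd) // 4: -len(ipd) // 4]   # discard transform edge effects
print(round(float(np.median(mid)), 1))     # → 60.0
```

Unlike a Lissajous figure, which yields one phase value per breath cycle, `ipd` is defined at every sample, which is the temporal-resolution gain the abstract refers to.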
Sim, Jaehyun; Sim, Jun; Park, Eunsung; Lee, Julian
2015-06-01
Many proteins undergo large-scale motions in which relatively rigid domains move against each other. The identification of rigid domains, as well as of the hinge residues important for their relative movements, is important for various applications including flexible docking simulations. In this work, we develop a method for protein rigid-domain identification based on an exhaustive enumeration of maximal rigid domains, i.e., the rigid domains not fully contained within other domains. The computation is performed by mapping the problem to that of finding maximal cliques in a graph. A minimal set of rigid domains is then selected which covers most of the protein with minimal overlap. In contrast to the results of existing methods, which partition a protein into non-overlapping domains using approximate algorithms, the rigid domains obtained from exact enumeration naturally contain overlapping regions, which correspond to the hinges of the inter-domain bending motion. The performance of the algorithm is demonstrated on several proteins. © 2015 Wiley Periodicals, Inc.
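The clique mapping can be sketched on a toy graph: if nodes are residues and an edge means a residue pair (hypothetically) keeps a fixed distance across conformations, maximal rigid domains correspond to maximal cliques, and a residue shared by two cliques plays the hinge role. The graph below is invented for illustration, and the classic Bron-Kerbosch enumeration stands in for whatever exact algorithm the paper uses.

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques (no pivoting, for clarity)."""
    cliques = []
    def bk(r, p, x):
        if not p and not x:
            cliques.append(sorted(r))   # r is maximal: nothing left to add
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    bk(set(), set(adj), set())
    return sorted(cliques)

# Toy "rigidity graph": two rigid domains sharing residue 3 as a hinge
edges = [(1, 2), (1, 3), (2, 3),
         (3, 4), (3, 5), (3, 6), (4, 5), (4, 6), (5, 6)]
adj = {v: set() for e in edges for v in e}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

print(maximal_cliques(adj))   # → [[1, 2, 3], [3, 4, 5, 6]]
```

The two maximal cliques overlap at residue 3, mirroring how exact enumeration naturally exposes hinge residues that non-overlapping partitions cannot represent.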
Torres, A. F.
2011-12-01
Agricultural lands are sources of food and energy for populations around the globe. These lands are vulnerable to the impacts of climate change, including variations in rainfall regimes and weather patterns and decreased availability of water for irrigation. In addition, it is not unusual for irrigated agriculture to be forced to divert less water in order to make it available for other uses, e.g. human consumption. As part of the implementation of better policies for water control and management, irrigation companies and water user associations have implemented water conveyance and distribution monitoring systems, along with soil moisture sensor networks, over the last decades. These systems allow them to manage and distribute water among users based on their requirements and on water availability, while collecting information about actual soil moisture conditions in representative crop fields. In spite of this, the water deliveries requested by farmers/water users are typically based on the total water share, tradition, and past irrigation experience, which in most cases do not correspond to the actual crop evapotranspiration, itself already affected by climate change. It is therefore necessary to provide actual information about crop water requirements to water users/managers, so that they can better quantify the required vs. available water for irrigation events along the irrigation season. For estimating actual evapotranspiration over a spatial extent, the Surface Energy Balance Algorithm for Land (SEBAL) has demonstrated its effectiveness using satellite or airborne data. Nonetheless, the estimation is restricted to the day when the geospatial information was obtained. Without information on precise future daily crop water demand, there is a continuous challenge for the implementation of better water distribution and management policies in the irrigation system. The purpose of this study is to investigate the plausibility of using
Susceptibility of Redundant Versus Singular Clock Domains Implemented in SRAM-Based FPGA TMR Designs
Berg, Melanie D.; LaBel, Kenneth A.; Pellish, Jonathan
2016-01-01
We present the challenges that arise when using redundant clock domains, due to the clock skew between them. Radiation data show that a singular clock domain (DTMR) provides an improved TMR methodology for SRAM-based FPGAs over redundant clocks.
Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.
2017-03-01
To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.
Spectral characteristics preserving image fusion based on Fourier domain filtering
Ehlers, Manfred
2004-10-01
Data fusion methods are usually classified into three levels: pixel level (ikonic), feature level (symbolic), and knowledge or decision level. Here, we focus on the development of ikonic techniques for image fusion. Image transforms such as the Intensity-Hue-Saturation (IHS) or Principal Component (PC) transform are widely used to fuse panchromatic images of high spatial resolution with multispectral images of lower resolution. These techniques create multispectral images of higher spatial resolution, but usually at the cost that the transforms do not preserve the original color or spectral characteristics of the input image data. In this study, a new method for image fusion is presented that is based on filtering in the Fourier domain. This method preserves the spectral characteristics of the lower-resolution multispectral images. Examples are presented for SPOT and Ikonos panchromatic images fused with Landsat TM and Ikonos multispectral data. Comparison with existing fusion techniques such as the IHS, PC, or Brovey transform proves the superiority of the new method. While in principle based on the IHS transform (which usually only works for three bands), the method is extended to an arbitrary number of spectral bands. Using this approach, the method can be applied to sharpen hyperspectral images without changing their spectral behavior.
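The core idea of Fourier-domain fusion can be sketched per band as follows (a simplified illustration under our own assumptions, not the paper's filter design): keep the low frequencies of the upsampled multispectral band, which carry its spectral character, and take the high frequencies, which carry spatial detail, from the panchromatic image. The cutoff value is an arbitrary illustrative choice.

```python
import numpy as np

def fourier_fuse(band_lowres, pan, cutoff=0.1):
    """Fuse one upsampled multispectral band with a co-registered
    panchromatic image by combining the band's low frequencies with the
    pan image's high frequencies. `cutoff` is a fraction of the sampling
    frequency and is an illustrative value, not one from the paper."""
    assert band_lowres.shape == pan.shape
    fy = np.fft.fftfreq(pan.shape[0])[:, None]
    fx = np.fft.fftfreq(pan.shape[1])[None, :]
    lowpass = (np.hypot(fy, fx) <= cutoff).astype(float)  # ideal low-pass mask
    F_band = np.fft.fft2(band_lowres)
    F_pan = np.fft.fft2(pan)
    fused = np.fft.ifft2(F_band * lowpass + F_pan * (1.0 - lowpass))
    return np.real(fused)

# If pan equals the band, the filters are complementary and the band is
# returned unchanged, which is the spectral-preservation property.
rng = np.random.default_rng(0)
band = rng.random((32, 32))
fused_identity = fourier_fuse(band, band)
```

Applying the same complementary filter pair to every band is what frees the approach from the three-band limit of the IHS transform.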
Facial Image Compression Based on Structured Codebooks in Overcomplete Domain
Directory of Open Access Journals (Sweden)
Vila-Forcén JE
2006-01-01
We advocate a facial image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the standpoint of source coding with side information, and, contrary to existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols, where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over the solutions in which side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems, and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
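For a linear elliptic problem, the multiplicative (alternating) Schwarz method the abstract refers to can be sketched in a few lines: the 1D Poisson problem, the grid size, and the subdomain split below are all illustrative choices of ours.

```python
import numpy as np

def laplacian(n, h):
    """Tridiagonal matrix of -u'' on n interior points with spacing h."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

# Model problem: -u'' = 1 on (0, 1), u(0) = u(1) = 0, n interior points.
n = 49
h = 1.0 / (n + 1)
f = np.ones(n)
u_direct = np.linalg.solve(laplacian(n, h), f)

# Two overlapping subdomains: interior indices 0..33 and 15..48.
i1, i2 = slice(0, 34), slice(15, 49)
u = np.zeros(n)
for _ in range(100):                       # multiplicative (alternating) sweeps
    # Subdomain 1: right boundary value taken from the current u[34].
    rhs1 = f[i1].copy()
    rhs1[-1] += u[34] / h**2
    u[i1] = np.linalg.solve(laplacian(34, h), rhs1)
    # Subdomain 2: left boundary value taken from the freshly updated u[14].
    rhs2 = f[i2].copy()
    rhs2[0] += u[14] / h**2
    u[i2] = np.linalg.solve(laplacian(34, h), rhs2)

err = np.max(np.abs(u - u_direct))         # converges to the global solution
```

The additive variant would solve both subdomains from the same iterate and sum the corrections, trading the faster convergence above for parallelism, which is exactly the trade-off the "hybrid" algorithms aim to balance.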
Ara, Sharmin R; Bashar, Syed Khairul; Alam, Farzana; Hasan, Md Kamrul
2017-09-01
Using a large set of ultrasound features does not necessarily ensure improved quantitative classification of breast tumors; rather, it often degrades the performance of a classifier. In this paper, we propose an effective feature-reduction approach in the transform domain for improved multi-class classification of breast tumors. Feature transformation methods, such as empirical mode decomposition (EMD) and discrete wavelet transform (DWT), followed by a filter- or wrapper-based subset selection scheme, are used to extract a set of non-redundant and more potent transform-domain features through decorrelation of an optimally ordered sequence of N ultrasonic bi-modal (i.e., quantitative ultrasound and elastography) features. The proposed transform-domain bi-modal reduced feature set, with different conventional classifiers, is used to classify 201 breast tumors into benign-malignant as well as BI-RADS ⩽3, 4, and 5 categories. For the latter case, an inadmissible error probability is defined for the subset selection using a wrapper/filter. The classifiers use train truth from histopathology/cytology for binary (i.e., benign-malignant) separation of tumors and then bi-modal BI-RADS scores from the radiologists for separating malignant tumors into BI-RADS categories 4 and 5. A comparative performance analysis of several widely used conventional classifiers is also presented to assess their efficacy with the proposed transform-domain reduced feature set for classification of breast tumors. The results show that our transform-domain bi-modal reduced feature set achieves improvements of 5.35%, 3.45%, and 3.98%, respectively, in sensitivity, specificity, and accuracy as compared to the original-domain optimal feature set for benign-malignant classification of breast tumors. In quantitative classification of breast tumors into BI-RADS categories ⩽3, 4, and 5, the proposed transform-domain reduced feature set attains improvement of 3.49%, 9.07%, and 3.06%, respectively, in
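A filter-based subset selection of the general kind described above can be sketched as follows; this is a simplified stand-in under our own assumptions (correlation as the relevance score and a pairwise-redundancy threshold), not the paper's wrapper/filter scheme.

```python
import numpy as np

def filter_select(X, y, max_corr=0.9):
    """Greedy filter-based feature selection: rank features by absolute
    correlation with the label, then keep a candidate feature only if its
    absolute correlation with every already-kept feature stays below
    `max_corr` (i.e., it is not redundant)."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    kept = []
    for j in np.argsort(-relevance):              # most relevant first
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_corr for k in kept):
            kept.append(j)
    return sorted(kept)

# Three features; column 1 is an almost exact copy of column 0 and should
# be dropped as redundant, while the independent column 2 is kept.
rng = np.random.default_rng(1)
f1 = rng.normal(size=200)
f3 = rng.normal(size=200)
y = (f1 + 0.5 * f3 > 0).astype(float)
X = np.column_stack([f1, 2.0 * f1 + 0.01 * rng.normal(size=200), f3])
kept = filter_select(X, y)
```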
Directory of Open Access Journals (Sweden)
William Bains
2015-03-01
The components of life must survive in a cell long enough to perform their function in that cell. Because the rate of attack by water increases with temperature, we can, in principle, predict a maximum temperature above which an active terrestrial metabolism cannot function by analyzing the decomposition rates of the components of life and comparing those rates with the metabolites' minimum metabolic half-lives. The present study is a first step in this direction, providing an analytical framework and method, and analyzing the stability of 63 small-molecule metabolites based on literature data. Assuming that attack by water follows a first-order rate equation, we extracted decomposition rate constants from literature data and estimated their statistical reliability. The resulting rate equations were then used to give a measure of confidence in the half-life of the metabolite concerned at different temperatures. There is little reliable data on metabolite decomposition or hydrolysis rates in the literature; the data are mostly confined to a small number of classes of chemicals, and the available data are sometimes mutually contradictory because of varying reaction conditions. However, a preliminary analysis suggests that terrestrial biochemistry is limited to environments below ~150-180 °C. We comment briefly on why pressure is likely to have a small effect on this limit.
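The arithmetic behind the first-order framework is compact enough to sketch: a first-order rate constant gives a half-life via t1/2 = ln 2 / k, and an Arrhenius relation extrapolates the constant across temperatures. The metabolite, its room-temperature half-life, and the activation energy below are hypothetical numbers of ours, not values from the study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def half_life(k):
    """Half-life of a first-order decay with rate constant k (s^-1)."""
    return np.log(2.0) / k

def k_at_temperature(k_ref, T_ref, T, Ea):
    """Arrhenius extrapolation of a first-order rate constant from the
    reference temperature T_ref (K) to T (K); Ea in J/mol."""
    return k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

# Hypothetical metabolite: t1/2 of 30 days at 25 C, Ea of 100 kJ/mol.
k25 = np.log(2.0) / (30 * 86400)
k150 = k_at_temperature(k25, 298.15, 423.15, 100e3)
t_half_150 = half_life(k150)   # seconds; many orders of magnitude shorter
```

Comparing such extrapolated half-lives with the minimum metabolic half-life a pathway can tolerate is what yields the ~150-180 °C ceiling the abstract reports.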
Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang
2013-01-01
China is considered to be the main carbon producer in the world, and per-capita carbon emissions are an important measure of the regional carbon emissions situation. This study used a two-step method, the LMDI factor decomposition model followed by a panel co-integration test, to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions, while by 2009 the ranking had changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, and energy efficiency had a much greater restraining effect on the growth of per-capita carbon emissions than energy structure. (3) The panel co-integration test of the decomposed factors showed that Central China had the highest energy structure elasticity and energy efficiency elasticity of regional per-capita carbon emissions, while Western China had the highest economic development elasticity.
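The additive LMDI-I decomposition used in such studies has a short closed form: for an indicator written as a product of factors, each factor's effect is the logarithmic mean of the endpoint values times the log change of that factor, and the effects sum exactly to the total change. The three-factor identity and the numbers below are illustrative, not the paper's data.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if np.isclose(a, b) else (a - b) / (np.log(a) - np.log(b))

def lmdi_effects(factors0, factors1):
    """Additive LMDI-I: decompose the change in V = prod(factors) into one
    effect per factor; the effects sum exactly to V1 - V0."""
    V0, V1 = np.prod(factors0), np.prod(factors1)
    L = logmean(V1, V0)
    return [L * np.log(x1 / x0) for x0, x1 in zip(factors0, factors1)]

# Per-capita CO2 = (GDP per capita) x (energy intensity) x (carbon factor).
f1997 = np.array([0.8, 1.2, 2.5])     # illustrative values
f2009 = np.array([2.0, 0.9, 2.3])
effects = lmdi_effects(f1997, f2009)
total_change = np.prod(f2009) - np.prod(f1997)
```

The exact additivity (no residual term) is the reason LMDI is preferred over Laspeyres-style index decompositions in this literature.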
Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition
Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K.; Liang, Hualou
2008-01-01
The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanism of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF)-based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantages of the sparse NTF-based feature extraction approach lie in its capability to yield components common across the space, time, and frequency domains yet discriminative across different conditions, without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ a support vector machine (SVM) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also have certain discriminability, the gamma band feature carries the most discriminative information for bistable perception, and that imposing the sparseness constraints on the nonnegative tensor factorization improves extraction of this feature. PMID:18528515
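The nonnegative CP (PARAFAC) factorization underlying such a method can be sketched with classical multiplicative updates on a channel x time x frequency tensor; this minimal version omits the sparseness penalty the paper adds, and the tensor sizes, rank, and iteration count are our own illustrative choices.

```python
import numpy as np

def khatri_rao(C, B):
    """Column-wise Khatri-Rao product; rows indexed by (k, j) -> k*J + j."""
    K, R = C.shape
    J = B.shape[0]
    return (C[:, None, :] * B[None, :, :]).reshape(K * J, R)

def ntf_cp(X, rank, n_iter=200, eps=1e-12):
    """Rank-`rank` nonnegative CP decomposition of a 3-way tensor X via
    multiplicative updates (no sparseness penalty, unlike the paper)."""
    rng = np.random.default_rng(0)
    I, J, K = X.shape
    A, B, C = (rng.random((d, rank)) for d in (I, J, K))
    X1 = X.transpose(0, 2, 1).reshape(I, K * J)   # mode-1 unfolding
    X2 = X.transpose(1, 2, 0).reshape(J, K * I)   # mode-2 unfolding
    X3 = X.transpose(2, 1, 0).reshape(K, J * I)   # mode-3 unfolding
    for _ in range(n_iter):
        A *= (X1 @ khatri_rao(C, B)) / (A @ ((C.T @ C) * (B.T @ B)) + eps)
        B *= (X2 @ khatri_rao(C, A)) / (B @ ((C.T @ C) * (A.T @ A)) + eps)
        C *= (X3 @ khatri_rao(B, A)) / (C @ ((B.T @ B) * (A.T @ A)) + eps)
    return A, B, C

# Recover a random rank-2 nonnegative tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.random((d, 2)) for d in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = ntf_cp(X, 2)
err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(X)
```

Each column triple (A[:, r], B[:, r], C[:, r]) is one component with a spatial, temporal, and spectral signature; the paper feeds such component loadings to an SVM for single-trial decoding.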
Synthesis, Optical Characterization, and Thermal Decomposition of Complexes Based on Biuret Ligand
Directory of Open Access Journals (Sweden)
Mei-Ling Wang
2016-01-01
Four complexes were synthesized in methanol solution using nickel acetate or nickel chloride, manganese acetate, manganese chloride, and biuret as raw materials. The complexes were characterized by elemental analyses, UV, FTIR, and Raman spectra, X-ray powder diffraction, and thermogravimetric analysis. The compositions of the complexes were [Ni(bi)2(H2O)2](Ac)2·H2O (1), [Ni(bi)2Cl2] (2), [Mn(bi)2(Ac)2]·1.5H2O (3), and [Mn(bi)2Cl2] (4) (bi = NH2CONHCONH2). In the complexes, every metal ion was coordinated by oxygen atoms, chloride ions, or both, and the nickel and manganese ions were all hexacoordinated. The thermal decomposition processes of the complexes under air included the loss of water molecules, the pyrolysis of the ligands, and the decomposition of the inorganic salts, and the final residues were nickel oxide and manganese oxide, respectively.
Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru; Hong, Fan; Peterka, Tom
2018-01-01
Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, within the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that particles are reassigned to processes as evenly as possible, while the particles newly assigned to a process always lie within its block. Results show the good load balance and high efficiency of our method.
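The k-d tree decomposition step can be sketched as recursive median cuts over the particle positions, with an optional clamp that mimics constraining each cutting plane to the duplicated (ghost/overlap) region; this is a simplified serial illustration of ours, not the paper's parallel implementation.

```python
import numpy as np

def kdtree_partition(points, depth, lo=None, hi=None):
    """Recursively split `points` (N x d array) into 2**depth groups by
    median cuts with alternating axes. If per-axis bounds (lo, hi) are
    given, each cutting plane is clamped into [lo[axis], hi[axis]],
    mimicking the constraint that cuts stay inside the overlap range of
    the duplicated data so reassigned particles remain in local blocks."""
    def split(pts, d, axis):
        if d == 0:
            return [pts]
        cut = np.median(pts[:, axis])
        if lo is not None:
            cut = min(max(cut, lo[axis]), hi[axis])
        left = pts[pts[:, axis] <= cut]
        right = pts[pts[:, axis] > cut]
        nxt = (axis + 1) % pts.shape[1]
        return split(left, d - 1, nxt) + split(right, d - 1, nxt)

    return split(points, depth, 0)

# 4096 random 2-D particles into 8 blocks: median cuts balance the load.
rng = np.random.default_rng(2)
pts = rng.random((4096, 2))
blocks = kdtree_partition(pts, 3)
sizes = [len(b) for b in blocks]
```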
Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide
Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun
2014-07-01
This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the generator, it takes 6 s to reach 3.5 bar with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.
Directory of Open Access Journals (Sweden)
Madej Tom
2009-05-01
Background: The identification of protein domains plays an important role in protein structure comparison. Domain query size and composition are critical to structure similarity search algorithms such as the Vector Alignment Search Tool (VAST), the method employed for computing related protein structures in the NCBI Entrez system. Currently, domains identified on the basis of structural compactness are used for VAST computations. In this study, we have investigated how alternative definitions of domains, derived from conserved sequence alignments in the Conserved Domain Database (CDD), would affect the domain comparisons and structure similarity search performance of VAST. Results: Alternative domains, which have significantly different secondary structure composition from those based on structurally compact units, were identified based on the alignment footprints of curated protein sequence domain families. Our analysis indicates that domain boundaries disagree on roughly 8% of protein chains in the medium-redundancy subset of the Molecular Modeling Database (MMDB). These conflicting sequence-based domain boundaries perform slightly better than structure domains in structure similarity searches, and there are interesting cases in which structure similarity search performance is markedly improved. Conclusion: Structure similarity searches using domain boundaries based on conserved sequence information can provide an additional method for investigators to identify interesting similarities between proteins with known structures. Because of this improvement in performance, we are in the process of including sequence domain boundaries in the VAST search and MMDB resources in the NCBI Entrez system.
Qin, Xiwen; Li, Qiaoling; Dong, Xiaogang; Lv, Siqi
2017-01-01
Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet meth...
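The energy-entropy feature in the pipeline above can be sketched as follows; the EEMD step that produces the IMFs is assumed to be done elsewhere (e.g. with a library such as PyEMD), and the synthetic IMFs below are just sinusoids of ours.

```python
import numpy as np

def energy_entropy(imfs):
    """EEMD energy entropy H = -sum(p_i * log p_i), where p_i is the i-th
    IMF's share of the total signal energy. `imfs` is an
    (n_imfs, n_samples) array; computing the IMFs themselves (the EEMD
    step) is assumed to happen upstream."""
    energies = np.sum(np.asarray(imfs) ** 2, axis=1)
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-15))

# Sanity check: n IMFs of equal energy give the maximum entropy log(n).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
imfs = np.stack([np.sin(2 * np.pi * f * t) for f in (5, 20, 80)])
H = energy_entropy(imfs)
```

One such entropy per selected IMF (or per segment) forms the feature vector that the random forest then classifies; a fault redistributes energy across frequency bands and so shifts these entropies.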
Czech Academy of Sciences Publication Activity Database
Geleyn, J.- F.; Mašek, Jan; Brožková, Radmila; Kuma, P.; Degrauwe, D.; Hello, G.; Pristov, N.
2017-01-01
Roč. 143, č. 704 (2017), s. 1313-1335 ISSN 0035-9009 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:86652079 Keywords : numerical weather prediction * climate models * clouds * parameterization * atmospheres * formulation * absorption * scattering * accurate * database * longwave radiative transfer * broadband approach * idealized optical paths * net exchanged rate decomposition * bracketing * selective intermittency Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 3.444, year: 2016
Probabilistic inference with noisy-threshold models based on a CP tensor decomposition
Czech Academy of Sciences Publication Activity Database
Vomlel, Jiří; Tichavský, Petr
2014-01-01
Roč. 55, č. 4 (2014), s. 1072-1092 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S; GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : Bayesian networks * Probabilistic inference * Candecomp-Parafac tensor decomposition * Symmetric tensor rank Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.451, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/vomlel-0427059.pdf
Dynamic formant extraction of wa language based on adaptive variational mode decomposition
Fu, Meijun; Dong, Huazhen; Pan, Wenlin
2017-08-01
Wa language is one of the Chinese minority languages, spoken by the Wa nationality of Yunnan Province, China. Until now, it had not been studied from the perspective of engineering phonetics. In this paper we therefore investigate the dynamic formant characteristics of Wa language using adaptive variational mode decomposition (AVMD). First, using the synthetic dimension, Wa-language isolated words are split into voiceless and voiced segments, initials and finals. Second, linear predictive coding is used to roughly estimate the first three formant frequencies and their bandwidths. Third, the equilibrium constraint parameter and the number of decomposition layers are selected so that AVMD can decompose the signal into intrinsic mode functions (IMFs) without mode aliasing. Fourth, the estimated formant frequencies and bandwidths are used to precisely determine the required IMFs. Fifth, the Hilbert transform is used to calculate the instantaneous frequency of these IMFs, and a weighted-average operation on the instantaneous frequencies yields the first three formant frequencies for each frame. Finally, comparing the first three formant frequencies obtained by AVMD with those from the Praat software, we find that the agreement rate of the former with the latter reaches 86% on average for the selected isolated words, which shows that our method is effective for Wa language.
Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains, so that the weight computation across different domains can be converted into a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting and overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm effectively addresses the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain and cross-domain CF methods.
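The LWLR building block can be sketched in standard form: at each query point, solve a weighted least-squares problem whose Gaussian kernel weights favor nearby training points. The data, bandwidth, and query below are illustrative; how FCLWLR constructs its cross-domain features is not reproduced here.

```python
import numpy as np

def lwlr_predict(x_query, X, y, tau=0.5):
    """Locally weighted linear regression prediction at x_query.
    Gaussian kernel weights w_i = exp(-|x_i - x_query|^2 / (2 tau^2));
    the weighted least-squares normal equations are re-solved per query,
    which is what makes the method nonparametric."""
    Xb = np.column_stack([np.ones(len(X)), X])            # add intercept
    q = np.concatenate([[1.0], np.atleast_1d(x_query)])
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)  # local fit
    return q @ theta

# On noiseless globally linear data the local fit recovers the line exactly.
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0
pred = lwlr_predict(np.array([0.5]), X, y)   # true value: 3*0.5 + 1 = 2.5
```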
Chiu, Chun-Huo; Chao, Anne
2014-01-01
Hill numbers (or the "effective number of species") are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify "the effective number of equally abundant and (functionally) equally distinct species" in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species to all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional generalizations of the
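The abundance-based core of the framework above, the ordinary Hill number, has a one-line definition that the functional version generalizes by weighting species pairs with trait distances; only that core is sketched here, with an invented abundance vector.

```python
import numpy as np

def hill_number(abundances, q):
    """Ordinary Hill number qD = (sum_i p_i^q)^(1/(1-q)); the q -> 1 limit
    is exp(Shannon entropy). The paper's functional Hill numbers
    additionally incorporate pairwise trait distances, which is omitted
    in this sketch."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p * np.log(p)))   # limiting case
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

# "Effective number of species": S equally abundant species give qD = S
# for every order q, the property the functional extension preserves for
# equally distinct species.
vals = [hill_number([10, 10, 10, 10], q) for q in (0, 0.5, 1, 2)]
```

The order q controls the weight given to common versus rare species (q = 0 is richness, q = 2 the inverse Simpson index), and the replication principle the abstract invokes is exactly the qD = S identity checked here.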
A time domain phase-gradient based ISAR autofocus algorithm
CSIR Research Space (South Africa)
Nel, W
2011-10-01
Autofocus is a well-known required step in ISAR (and SAR) processing to compensate for translational motion. This research proposes a time-domain autofocus algorithm and discusses its relation to the well-known phase gradient autofocus (PGA) technique...
Towards ontology-based search and knowledge sharing using domain ontologies
DEFF Research Database (Denmark)
Zambach, Sine
This paper reports on work in progress. We present work on domain-specific verbs and their role as relations in domain ontologies. The domain ontology in focus for our research is modeled in cooperation with the Danish biotech company Novo Nordic. Two of the main purposes of domain ontologies for enterprises are to serve as background for search and for knowledge sharing, used e.g. in multilingual product development. Our aim is to use linguistic methods and logic to construct consistent ontologies that can be used both in a search perspective and for knowledge sharing. This focuses on identifying verbs for relations in the ontology modeling. For this work we use frequency lists from a biomedical text corpus of different genres, as well as a study of the relations used in other biomedical text mining tools. In addition, we discuss how these relations can be used in a broader perspective.
Magee, Daniel J.; Niemeyer, Kyle E.
2018-03-01
The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9x over a range of problem sizes compared with simple GPU versions, and 7-300x compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9x worse than a standard implementation for all problem sizes.
Liu, Leili; Li, Jie; Zhang, Lingyao; Tian, Siyu
2018-01-15
MgH2, Mg2NiH4, and Mg2CuH3 were prepared, and their structure and hydrogen storage properties were determined through X-ray photoelectron spectroscopy and thermal analysis. The effects of MgH2, Mg2NiH4, and Mg2CuH3 on the thermal decomposition, burning rate, and explosive heat of an ammonium perchlorate-based composite solid propellant were subsequently studied. Results indicated that MgH2, Mg2NiH4, and Mg2CuH3 can decrease the thermal decomposition peak temperature and increase the total released heat of decomposition, improving the thermal decomposition of the propellant. The burning rates of the propellant increased when Mg-based hydrogen storage materials were used as promoters, and also when MgH2 was used instead of Al in the propellant; in the latter case the explosive heat was not enlarged, even though the combustion heat of MgH2 is higher than that of Al. A possible mechanism is proposed. Copyright © 2017. Published by Elsevier B.V.
Directory of Open Access Journals (Sweden)
Xiaoxing Zhang
2016-11-01
Detection of the decomposition products of sulfur hexafluoride (SF6) is one of the best ways to diagnose early latent insulation faults in gas-insulated equipment, and sudden accidents can be effectively avoided by finding such early latent faults. Recently, functionalized graphene, a kind of gas-sensing material, has been reported to show good application prospects in the gas sensor field. Therefore, calculations were performed to analyze the gas-sensing properties of intrinsic graphene (Int-graphene) and a functionalized graphene-based material, Ag-decorated graphene (Ag-graphene), for the decomposition products of SF6 (SO2F2, SOF2, and SO2), based on density functional theory (DFT). We thoroughly investigated a series of parameters characterizing the gas-sensing properties of the adsorption process for single gas molecules (SO2F2, SOF2, SO2) and double gas molecules (2SO2F2, 2SOF2, 2SO2) on Ag-graphene, including the adsorption energy, net charge transfer, electronic density of states, and the highest occupied and lowest unoccupied molecular orbitals. The results showed that the Ag atom significantly enhances the electrochemical reactivity of graphene, reflected in the change of conductivity during the adsorption process. SO2F2 and SO2 gas molecules on Ag-graphene presented chemisorption, with adsorption strength SO2F2 > SO2, while SOF2 adsorption on Ag-graphene was physical adsorption. Thus, we concluded that Ag-graphene shows good selectivity and high sensitivity to SO2F2. The results can provide a helpful guide for exploring Ag-graphene experimentally for monitoring the insulation status of SF6-insulated equipment based on detecting the decomposition products of SF6.
[Fusion of dual color MWIR images based on support value transform and top-hat decomposition].
Lin, Su-Zhen; Yang, Feng-Bao; Chen, Lei
2014-04-01
A fusion method for dual-color mid-wave infrared images is presented in this paper to address two frequently arising issues in fusing such images with multi-scale top-hat decomposition: limited improvement in contrast ratio and serious distortion in marginal areas. The procedure is as follows. A low-frequency component image and a sequence of support value images are obtained for each of the two mid-wave infrared sub-band images by support value transform. Multi-scale bright and dim information is first extracted from the last-layer low-frequency image of each sub-band using the multi-scale top-hat decomposition method, and then fused by selecting the maximum gray value of each pixel across the two sub-band images. The two resulting images are enhanced by gray-scale normalization and Gaussian filtering and fused with the two low-frequency images to obtain the low-frequency fusion image. This fusion image is then inversely transformed together with the support value images fused by maximum-gray selection, yielding the final fused image. The results show that, compared with simple support value transform fusion and multi-scale top-hat decomposition fusion, the proposed method increases the contrast ratio by 11.69%, decreases the distortion factor by 63.42%, and increases the local coarseness by 38.12%. This validates the proposed fusion method and indicates that extracting both bright and dim information from the low-frequency images, enhancing and fusing them, and then fusing the result with the two low-frequency images can effectively resolve the contradiction between improving a fused image's contrast ratio and reducing its distortion, providing a new and useful means of improving the quality of fused infrared images.
α-Decomposition for estimating parameters in common cause failure modeling based on causal inference
International Nuclear Information System (INIS)
Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi
2013-01-01
The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation and to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. First, a hybrid Bayesian network is adopted to reveal the relationship between potential causes and failures. Second, because potential causes differ in their occurrence frequencies and in their abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability: global α-factors are expressed in terms of explanatory variables (causes' occurrence frequencies) and parameters (decomposed α-factors). Finally, an example illustrates the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component, and system levels. It can parameterize the CCF risk significance of possible causes and update the probability distributions of global α-factors, and it provides a reliable way to evaluate uncertainty sources and reduce uncertainty in probabilistic risk assessment. It is recommended to build databases including CCF parameters and the corresponding occurrence frequencies of causes for each targeted system.
DEFF Research Database (Denmark)
Dyson, Mark
2003-01-01
. Not only have design tools changed character, but also the processes associated with them. Today, the composition of problems and their decomposition into parcels of information, calls for a new paradigm. This paradigm builds on the networking of agents and specialisations, and the paths of communication...
Duality-Free Decomposition Based Data-Driven Stochastic Security-Constrained Unit Commitment
DEFF Research Database (Denmark)
Ding, Tao; Yang, Qingrun; Huang, Can
2018-01-01
To incorporate the superiority of both stochastic and robust approaches, a data-driven stochastic optimization is employed to solve the security-constrained unit commitment model. This approach makes the most use of the historical data to generate a set of possible probability distributions...... for wind power outputs and then it optimizes the unit commitment under the worst-case probability distribution. However, this model suffers from huge computational burden, as a large number of scenarios are considered. To tackle this issue, a duality-free decomposition method is proposed in this paper...
Directory of Open Access Journals (Sweden)
Xiwen Qin
2017-01-01
Full Text Available Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. First, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Their energy entropy is then calculated as the feature, and classification is finally performed by RF. For comparison, a wavelet decomposition is also used in place of EEMD in the same procedure. The comparison shows that the EEMD-based method is more accurate than the wavelet-based one.
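The energy-entropy feature named above has a standard definition: with IMF energies E_i and ratios p_i = E_i / ΣE, the entropy is H = -Σ p_i log p_i. A minimal sketch (not the authors' code), assuming the IMFs have already been obtained from an EEMD implementation:

```python
import math

def energy_entropy(imfs):
    """Energy entropy of a set of IMFs: E_i = sum(x^2), p_i = E_i / E_total,
    H = -sum(p_i * log(p_i)).  Returns (per-IMF energy ratios, entropy)."""
    energies = [sum(x * x for x in imf) for imf in imfs]
    total = sum(energies)
    ratios = [e / total for e in energies]
    entropy = -sum(p * math.log(p) for p in ratios if p > 0)
    return ratios, entropy

# Two toy "IMFs" with equal energy: entropy is maximal, log(2)
imfs = [[1.0, -1.0, 1.0, -1.0], [1.0, 1.0, -1.0, -1.0]]
ratios, h = energy_entropy(imfs)
```

The resulting ratio vector (or the entropy itself) would then serve as the feature vector fed to the RF classifier.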
Directory of Open Access Journals (Sweden)
Hong-Juan Li
2013-04-01
Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and autoregression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K
2013-08-01
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. The approach is based on the least squares-QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown on a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
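For context, Tikhonov minimization solves min_x ||Ax - b||² + λ||x||², i.e. the regularized normal equations (AᵀA + λI)x = Aᵀb. A toy 2×2 illustration of that equation (the paper's LSQR-based parameter selection is not reproduced; the function name is ours):

```python
def tikhonov_solve_2x2(A, b, lam):
    """Solve (A^T A + lam*I) x = A^T b for a 2x2 system by Cramer's rule."""
    # Normal-equation matrix M = A^T A + lam*I and right-hand side r = A^T b
    m00 = A[0][0] ** 2 + A[1][0] ** 2 + lam
    m01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    m11 = A[0][1] ** 2 + A[1][1] ** 2 + lam
    r0 = A[0][0] * b[0] + A[1][0] * b[1]
    r1 = A[0][1] * b[0] + A[1][1] * b[1]
    det = m00 * m11 - m01 * m01
    return ((r0 * m11 - r1 * m01) / det, (m00 * r1 - m01 * r0) / det)

# With lam = 0 and A = I this recovers b exactly; lam > 0 shrinks the solution
x = tikhonov_solve_2x2([[1.0, 0.0], [0.0, 1.0]], [2.0, 3.0], 0.0)
```

Choosing λ well is exactly the problem the paper addresses; here it is simply an input.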
Research on Misalignment Fault Isolation of Wind Turbines Based on the Mixed-Domain Features
Directory of Open Access Journals (Sweden)
Yancai Xiao
2017-06-01
Full Text Available Misalignment in the drive system of a DFIG (Doubly Fed Induction Generator) wind turbine is one of the important factors causing damage to the gears and bearings of the high-speed gearbox and to the generator bearings. How to use limited information to accurately determine the type of failure has become a difficult problem for researchers. In this paper, time-domain and frequency-domain indexes are extracted from vibration signals of the wind turbine drive system under various simulated misalignment conditions, and time-frequency domain features (energy entropy) are also extracted by IEMD (Improved Empirical Mode Decomposition). A mixed-domain feature set is constructed from them. Then, an SVM (Support Vector Machine) is used as the classifier, with the mixed-domain features as its inputs, and PSO (Particle Swarm Optimization) is used to optimize the SVM parameters. The misalignment fault types are classified successfully, and compared with other methods the accuracy of the given fault isolation model is improved.
Identifying APT Malware Domain Based on Mobile DNS Logging
Directory of Open Access Journals (Sweden)
Weina Niu
2017-01-01
Full Text Available Advanced Persistent Threat (APT) is a serious threat to sensitive information. Current detection approaches are time-consuming, since they detect APT attacks by in-depth analysis of massive amounts of data after a data breach has occurred. Specifically, APT attackers make use of DNS to locate their command and control (C&C) servers and victims' machines. In this paper, we propose an efficient approach to detect APT malware C&C domains with high accuracy by analyzing DNS logs. We first extract 15 features from the DNS logs of mobile devices. According to the Alexa ranking and VirusTotal's judgement results, we give each domain a score, and then select the most normal domains by this score metric. Finally, we utilize our anomaly detection algorithm, called Global Abnormal Forest (GAF), to identify malware C&C domains. A performance analysis demonstrates that our approach is more efficient than other existing works in terms of computational efficiency and recognition accuracy: compared with Local Outlier Factor (LOF), k-Nearest Neighbor (KNN), and Isolation Forest (iForest), our approach obtains more than 99% F-M and R for the detection of C&C domains. Our approach not only reduces the data volume that needs to be recorded and analyzed but is also applicable in an unsupervised learning setting.
Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi
2018-03-01
Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in the geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influences of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influences of geomagnetic disturbance on ionospheric behavior, we select the geomagnetic quiet GIMs to construct the ionospheric tensor. The results indicate that the geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
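The rank-1 decomposition of a three-way tensor into mode vectors like U^{(1)}, U^{(2)}, U^{(3)} can be computed by an alternating higher-order power iteration. A hedged sketch on a toy exactly-rank-1 tensor, not the authors' processing pipeline (function name and data are illustrative):

```python
def rank1_decompose(T, iters=50):
    """Rank-1 approximation T ~ s * u (x) v (x) w by alternating (higher-order
    power) iteration, for a tensor stored as nested lists T[i][j][k]."""
    I, J, K = len(T), len(T[0]), len(T[0][0])

    def normalize(x):
        n = sum(v * v for v in x) ** 0.5
        return [v / n for v in x], n

    u, _ = normalize([1.0] * I)
    v, _ = normalize([1.0] * J)
    w, s = normalize([1.0] * K)
    for _ in range(iters):
        # Each mode vector is updated by contracting T against the other two
        u, _ = normalize([sum(T[i][j][k] * v[j] * w[k]
                              for j in range(J) for k in range(K)) for i in range(I)])
        v, _ = normalize([sum(T[i][j][k] * u[i] * w[k]
                              for i in range(I) for k in range(K)) for j in range(J)])
        w, s = normalize([sum(T[i][j][k] * u[i] * v[j]
                              for i in range(I) for j in range(J)) for k in range(K)])
    return s, u, v, w

# Exact rank-1 tensor T[i][j][k] = a[i] * b[j] * c[k]
a, b, c = [1.0, 2.0], [3.0, 4.0], [1.0, 1.0]
T = [[[ai * bj * ck for ck in c] for bj in b] for ai in a]
s, u, v, w = rank1_decompose(T)
```

Here u, v, w play the roles of the time, longitude, and latitude mode vectors, and s is the scale factor.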
Directory of Open Access Journals (Sweden)
Irene Lock Sow Mei
2016-08-01
Full Text Available Hydrogen production from the direct thermo-catalytic decomposition of methane is a promising alternative for clean fuel production. However, thermal decomposition of methane can hardly be of practical interest to industry unless highly efficient and effective catalysts, in terms of both catalytic activity and operational lifetime, are developed. In this study, the effect of palladium (Pd) as a promoter for a Ni catalyst supported on alumina, prepared by the co-precipitation technique, was investigated. The introduction of Pd promotes better catalytic activity, operational lifetime, and thermal stability of the catalyst. As expected, the highest methane conversion was achieved at a reaction temperature of 800 °C, where the bimetallic catalyst (1 wt.% Ni - 1 wt.% Pd/Al2O3) gave the highest methane conversion of 70% over 15 min of time-on-stream (TOS). Interestingly, the introduction of Pd as promoter onto the Ni-based catalyst also had a positive effect on the operational lifetime and thermal stability of the catalyst, as the methane conversion improved significantly over 240 min of TOS. Copyright © 2016 BCREC GROUP. All rights reserved. Received: 21st January 2016; Revised: 6th February 2016; Accepted: 6th March 2016. How to Cite: Mei, I.L.S., Lock, S.S.M., Vo, D.V.N., Abdullah, B. (2016). Thermo-Catalytic Methane Decomposition for Hydrogen Production: Effect of Palladium Promoter on Ni-based Catalysts. Bulletin of Chemical Reaction Engineering & Catalysis, 11(2): 191-199. doi:10.9767/bcrec.11.2.550.191-199. Permalink/DOI: http://dx.doi.org/10.9767/bcrec.11.2.550.191-199
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
Iswanto, Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
This paper presents a path-planning algorithm for multiple quadrotors so that they can move toward a goal quickly while avoiding obstacles. Path planning poses several problems, including how to reach the goal position quickly and how to avoid static and dynamic obstacles. To overcome these problems, the paper combines a fuzzy logic algorithm with a cell decomposition algorithm. Fuzzy logic is an artificial intelligence technique that can be applied to robot path planning and is able to detect static and dynamic obstacles; cell decomposition is a graph-theoretic algorithm used to build a map of robot paths. Using these two algorithms, a robot can reach the goal position and avoid obstacles, but it takes considerable time because they cannot find the shortest path. Therefore, this paper describes a modification of the algorithms that adds a potential field algorithm, used to assign weight values on the map for each quadrotor under decentralized control, so that each quadrotor can move to the goal position quickly along the shortest path. Simulations show that multiple quadrotors can avoid various obstacles and find the shortest path using the proposed algorithms.
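A potential-field weighting over a grid map, of the kind used to bias cell decomposition toward short obstacle-free paths, can be sketched as follows. The weighting rule (goal distance plus an obstacle-proximity penalty) is an illustrative assumption, not the paper's exact formulation:

```python
def potential_field(grid, goal, obstacle_weight=10.0):
    """Assign each free cell a weight = Manhattan distance to the goal plus a
    penalty for each adjacent obstacle; a quadrotor then greedily descends
    the weights.  grid[r][c] == 1 marks an obstacle cell."""
    rows, cols = len(grid), len(grid[0])
    field = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:  # obstacle cells carry no weight
                continue
            w = abs(r - goal[0]) + abs(c - goal[1])
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == 1:
                    w += obstacle_weight  # penalize cells next to obstacles
            field[r][c] = w
    return field

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
field = potential_field(grid, goal=(0, 2))
```

In a decentralized setting, each quadrotor would compute such a field for its own goal and follow decreasing weights.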
Directory of Open Access Journals (Sweden)
Wang Jiajun
2010-05-01
Full Text Available Abstract Background The inverse problem of fluorescent molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree-structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to alleviate the ill-posedness of the inverse problem. Methods The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function, to tackle the ill-posed nature of the inverse problem. Results Simulation results demonstrate that the tree-structured Schur complement decomposition strategy clearly outperforms previous methods, such as the conventional conjugate-gradient (CG) and Schur CG methods, in both reconstruction accuracy and speed. As compared with the Tikhonov regularization method, the adaptive regularization scheme significantly alleviates the ill-posedness of the inverse problem. Conclusions The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process.
Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.
Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian
2017-11-08
It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that, without a control input, compressed sensing cannot reconstruct complex networks in which the node states are generated by a linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition, and the measurement matrix is constructed with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments show that the proposed method is more accurate and more efficient than compressed sensing alone in reconstructing four model networks and six real networks. In addition, the proposed method can reconstruct not only sparse complex networks but also dense ones.
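The QR step can be illustrated with a classical Gram-Schmidt factorization; this is a generic textbook sketch, not the authors' reconstruction code:

```python
def qr_gram_schmidt(A):
    """Thin QR factorization A = Q R via classical Gram-Schmidt.
    A is a list of rows; the columns of Q are orthonormal."""
    n, m = len(A), len(A[0])
    cols = [[A[i][j] for i in range(n)] for j in range(m)]
    Q, R = [], [[0.0] * m for _ in range(m)]
    for j in range(m):
        v = cols[j][:]
        for i, q in enumerate(Q):
            # Project column j onto earlier orthonormal directions and subtract
            R[i][j] = sum(q[k] * cols[j][k] for k in range(n))
            v = [v[k] - R[i][j] * q[k] for k in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        Q.append([x / R[j][j] for x in v])
    # Return Q as a list of rows, matching the input convention
    return [[Q[j][i] for j in range(m)] for i in range(n)], R

Qm, Rm = qr_gram_schmidt([[1.0, 1.0], [0.0, 1.0]])
```

In the paper's setting, the decomposed state matrix (together with Gaussian noise) supplies the measurement matrix for the compressed-sensing recovery.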
Automated polyp measurement based on colon structure decomposition for CT colonography
Wang, Huafeng; Li, Lihong C.; Han, Hao; Peng, Hao; Song, Bowen; Wei, Xinzhou; Liang, Zhengrong
2014-03-01
Accurate assessment of colorectal polyp size is of great significance for early diagnosis and management of colorectal cancers. Due to the complexity of colon structure, polyps with diverse geometric characteristics grow from different landform surfaces. In this paper, we present a new colon decomposition approach for polyp measurement. We first apply an efficient maximum a posteriori expectation-maximization (MAP-EM) partial volume segmentation algorithm to achieve an effective electronic cleansing on colon. The global colon structure is then decomposed into different kinds of morphological shapes, e.g. haustral folds or haustral wall. Meanwhile, the polyp location is identified by an automatic computer aided detection algorithm. By integrating the colon structure decomposition with the computer aided detection system, a patch volume of colon polyps is extracted. Thus, polyp size assessment can be achieved by finding abnormal protrusion on a relative uniform morphological surface from the decomposed colon landform. We evaluated our method via physical phantom and clinical datasets. Experiment results demonstrate the feasibility of our method in consistently quantifying the size of polyp volume and, therefore, facilitating characterizing for clinical management.
Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin
2018-03-01
A time-frequency analysis method based on ensemble local mean decomposition (ELMD) and the fast kurtogram (FK) is proposed for rotating machinery fault diagnosis. Local mean decomposition (LMD), an adaptive non-stationary and nonlinear signal processing method, provides the capability to decompose a multicomponent modulated signal into a series of demodulated mono-components. However, the mode mixing that occurs is a serious drawback. To alleviate it, ELMD, a noise-assisted variant, was developed. Still, environmental noise present in the raw signal remains in the corresponding PF together with the component of interest. FK performs well for impulse detection in the presence of strong environmental noise, but it is susceptible to non-Gaussian noise. The proposed method combines the merits of ELMD and FK to detect faults in rotating machinery. First, the raw signal is decomposed by ELMD into a set of product functions (PFs). Then, the PF that best characterizes the fault information is selected according to a kurtosis index. Finally, the selected PF signal is further filtered by an optimal band-pass filter based on FK to extract the impulse signal. Faults can be identified by the appearance of fault characteristic frequencies in the squared envelope spectrum of the filtered signal. The advantages of ELMD over LMD and EEMD are illustrated in simulation analyses, and the efficiency of the proposed method in fault diagnosis for rotating machinery is demonstrated on gearbox and rolling bearing case analyses.
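The kurtosis-index selection of the most impulsive PF can be sketched as follows (a generic illustration; function and variable names are ours):

```python
def kurtosis(x):
    """Sample kurtosis E[(x-mu)^4] / (E[(x-mu)^2])^2 (no excess subtraction)."""
    n = len(x)
    mu = sum(x) / n
    m2 = sum((v - mu) ** 2 for v in x) / n
    m4 = sum((v - mu) ** 4 for v in x) / n
    return m4 / (m2 * m2)

def select_by_kurtosis(pfs):
    """Return the index of the product function with the largest kurtosis,
    i.e. the most impulsive component."""
    return max(range(len(pfs)), key=lambda i: kurtosis(pfs[i]))

smooth = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]  # oscillatory, low kurtosis
spiky = [0.0, 0.0, 10.0, 0.0, 0.0, 0.0]     # single impulse, high kurtosis
idx = select_by_kurtosis([smooth, spiky])
```

The selected PF would then be passed to the FK-derived band-pass filter for envelope analysis.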
Converting One Type-Based Abstract Domain to Another
DEFF Research Database (Denmark)
Gallagher, John Patrick; Puebla, German; Albert, Elvira
2006-01-01
The specific problem that motivates this paper is how to obtain abstract descriptions of the meanings of imported predicates (such as built-ins) that can be used when analysing a module of a logic program with respect to some abstract domain. We assume that abstract descriptions of the imported p...
Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi
2014-01-01
A compound fault signal usually contains multiple characteristic signals and strong confounding noise, which makes it difficult to separate weak fault signals from it by conventional means such as FFT-based envelope detection, wavelet transform, or empirical mode decomposition used individually. In order to improve the diagnosis of compound faults in rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by EEMD to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix of ICA. Finally, the compound faults can be separated effectively by ICA, which makes the fault features easier to extract and more clearly identified. Experimental results validate the effectiveness of the proposed method in separating compound faults, which works not only for an outer race defect, but also for roller defects and the unbalance fault of the experimental system.
Djamil, John; Segler, Stefan A W; Bensch, Wolfgang; Schürmann, Ulrich; Deng, Mao; Kienle, Lorenz; Hansen, Sven; Beweries, Torsten; von Wüllen, Leo; Rosenfeldt, Sabine; Förster, Stephan; Reinsch, Helge
2015-06-08
Nanocomposites based on molybdenum disulfide (MoS2) and different carbon modifications are intensively investigated for several areas of application due to their intriguing optical and electrical properties. Addition of a third element may enhance the functionality and application areas of such nanocomposites. Herein, we present a facile synthetic approach based on directed thermal decomposition of (Ph4P)2MoS4, generating MoS2 nanocomposites containing carbon and phosphorus. Decomposition at 250 °C yields a composite material with significantly enlarged MoS2 interlayer distances, caused by in situ formation of Ph3PS bonded to the MoS2 slabs through Mo-S bonds and (Ph4P)2S molecules in the van der Waals gap, as evidenced by 31P solid-state NMR spectroscopy. Visible-light-driven hydrogen generation demonstrates a high catalytic performance of the materials. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
International Nuclear Information System (INIS)
Heinrich, A.; Al-Kassab, T.
2004-01-01
Full text: The initial stages of nucleation in alloys have always been of great interest for materials development. Due to the limited resolution of many modern analysis techniques, including the tomographic atom probe (TAP), these decomposition stages have hardly been characterized. In this contribution a brief introduction to an evaluation algorithm for the treatment of TAP data based on the radial distribution function is presented, and results on the nucleation stages in different Cu-based binary alloys are discussed. Initial findings show that annealed stages, indistinguishable from homogeneous stages with other methods such as the previously introduced cluster-search algorithm, could be characterized with the new approach. A critical distance in the range of 10 nm between the minority-component atoms was observed for the first time. In the course of decomposition, nuclei are formed and characterized. The main advantage of the novel approach is the enhanced statistical accuracy owing to the large analyzed volume. Comparisons with data from scattering methods, which are normally used to obtain (partial) radial distribution functions, are in progress. (author)
DEFF Research Database (Denmark)
Baum, Andreas; Wang, Sheng; Garcia, Monica
that are mosaicked into larger images to produce ortho-photomaps. Frequently, especially in northern latitudes, the images to be mosaicked have been acquired under varying irradiance conditions due to moving clouds that create artifacts in the detected signal unrelated to physical changes in vegetation properties....... In order to exploit the full potential of UAS, correction methods should be developed to provide ortho-rectified images that can provide robust estimates of vegetation properties. We applied a Tucker tensor decomposition method to reconstruct images using a four-way factorization scheme. By doing so.......g. normalized difference vegetation index derived from the corrected and un-corrected images also showed improvement. This method could also have the ability to resolve artifacts, such as temporary objects (e.g. humans, tractors etc.) from the vegetation background....
International Nuclear Information System (INIS)
Simeonidis, K.; Mourdikoudis, S.; Moulla, M.; Tsiaoussis, I.; Martinez-Boubeta, C.; Angelakeris, M.; Dendrinou-Samara, C.; Kalogirou, O.
2007-01-01
Iron oxide nanoparticles were synthesized by the thermal decomposition of Fe(acac)3 and Fe(CO)5. Three different homogeneous procedures were used for the controlled synthesis of Fe3O4, γ-Fe2O3 and Fe3O4/γ-Fe2O3 mixture nanocrystals. A combination of characterization techniques was used in order to distinguish these oxides. The controllable size, the narrow distribution and the rhombic self-assembly of the nanoparticles were revealed by the high-resolution transmission electron microscopy images and the X-ray powder diffraction results. For the quantitative analysis of the samples manganometry was used. Preliminary magnetic measurements indicated the size and composition dependence of saturation magnetization, a superparamagnetic behavior of the samples and some ferromagnetic features.
Huang, Yong; Wang, Kehong; Zhou, Zhilan; Zhou, Xiaoxiao; Fang, Jimi
2017-03-01
The arc of gas metal arc welding (GMAW) contains abundant information about its stability and droplet transition, which can be effectively characterized by extracting the arc electrical signals. In this study, ensemble empirical mode decomposition (EEMD) was used to evaluate the stability of electrical current signals. The welding electrical signals were first decomposed by EEMD, and then transformed to a Hilbert-Huang spectrum and a marginal spectrum. The marginal spectrum is an approximate distribution of amplitude with frequency of signals, and can be described by a marginal index. Analysis of various welding process parameters showed that the marginal index of current signals increased when the welding process was more stable, and vice versa. Thus EEMD combined with the marginal index can effectively uncover the stability and droplet transition of GMAW.
Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias
2010-03-01
Thermal ablation induced by high intensity focused ultrasound has produced promising clinical results to treat hepatocarcinoma and other liver tumors. However skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply an excitation weight vector to the transducers array which is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold as demonstrated by the measured specific absorption rates.
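The DORT-style weighting described above amounts to projecting the excitation vector onto the orthogonal complement of the subspace of emissions focusing on the ribs. A toy sketch, assuming an orthonormal basis of that subspace has already been obtained from the decomposition (names are ours):

```python
def project_out(e, basis):
    """Remove from excitation vector e its components along the (orthonormal)
    basis vectors spanning the subspace of emissions focusing on the ribs."""
    for q in basis:
        c = sum(ei * qi for ei, qi in zip(e, q))  # coefficient along q
        e = [ei - c * qi for ei, qi in zip(e, q)]  # subtract that component
    return e

# Toy 3-element transducer array: the "rib" subspace is the first channel
ribs = [[1.0, 0.0, 0.0]]
weights = project_out([0.5, 1.0, 1.0], ribs)
```

The resulting weight vector is orthogonal to every rib-focusing emission, so (in this idealized picture) no energy is steered onto the ribs.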
Directory of Open Access Journals (Sweden)
Te-Jen Su
2007-01-01
Full Text Available In this letter, a clonal selection algorithm (CSA) with the singular value decomposition (SVD) method is investigated for the realization of two-dimensional (2D) infinite-impulse response (IIR) filters with arbitrary magnitude responses. The CSA is applied to optimize the sampled frequencies of the transition band of the digital filter, producing a planar response matrix of a 2D IIR digital filter. By using the SVD, the 2D magnitude specification can be decomposed into a pair of 1D filters, and thus the problem of designing a 2D digital filter can be reduced to that of designing a pair of 1D digital filters, or even only one 1D digital filter. The simulation results show that the proposed method achieves better minimum attenuation between the passband and the stopband.
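The SVD step exploits the fact that a separable 2D magnitude response factors into an outer product of two 1D responses. A minimal power-iteration sketch for the dominant singular triple (generic code, not the letter's design procedure):

```python
def svd_rank1(M, iters=50):
    """Dominant singular triple (s, u, v) of matrix M by power iteration,
    so a separable 2D response M ~ s * u v^T factors into two 1D filters."""
    rows, cols = len(M), len(M[0])
    v = [1.0] * cols
    s = 0.0
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(M[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        s = sum(x * x for x in v) ** 0.5
        v = [x / s for x in v]
    return s, u, v

# Separable 2D magnitude spec: outer product of two 1D responses
h1, h2 = [1.0, 2.0], [2.0, 1.0]
M = [[a * b for b in h2] for a in h1]
s, u, v = svd_rank1(M)
```

Each of the two singular vectors (scaled by the singular value) then serves as the magnitude specification of a 1D filter design problem.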
Duality-Free Decomposition Based Data-Driven Stochastic Security-Constrained Unit Commitment
DEFF Research Database (Denmark)
Ding, Tao; Yang, Qingrun; Liu, Xiyuan
2018-01-01
for wind power outputs and then it optimizes the unit commitment under the worst-case probability distribution. However, this model suffers from huge computational burden, as a large number of scenarios are considered. To tackle this issue, a duality-free decomposition method is proposed in this paper......To incorporate the superiority of both stochastic and robust approaches, a data-driven stochastic optimization is employed to solve the security-constrained unit commitment model. This approach makes the most use of the historical data to generate a set of possible probability distributions...... be decomposed into independent sub-problems to be solved in parallel, which further improves the computational efficiency. A numerical study on an IEEE 118-bus system with practical data of a wind power system has demonstrated the effectiveness of the proposal....
Tamellini, L.
2014-01-01
In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis rather than M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.
Multidisciplinary Product Decomposition and Analysis Based on Design Structure Matrix Modeling
DEFF Research Database (Denmark)
Habib, Tufail
2014-01-01
Design structure matrix (DSM) modeling in complex system design supports defining the physical and logical configuration of subsystems, components, and their relationships. This modeling includes product decomposition, identification of interfaces, and structure analysis to increase the architectural understanding of the system. Since product architecture has broad implications in relation to product life cycle issues, in this paper a mechatronic product is decomposed into subsystems and components, and then a DSM model is developed to examine the extent of modularity in the system and to manage multiple interactions across subsystems and components. For this purpose, the Cambridge Advanced Modeler (CAM) software tool is used to develop the system matrix. The analysis of the product (printer) architecture includes clustering and partitioning as well as structure analysis of the system. The DSM analysis is helpful ...
Simeonidis, K.; Mourdikoudis, S.; Moulla, M.; Tsiaoussis, I.; Martinez-Boubeta, C.; Angelakeris, M.; Dendrinou-Samara, C.; Kalogirou, O.
2007-09-01
Iron oxide nanoparticles were synthesized by the thermal decomposition of Fe(acac)3 and Fe(CO)5. Three different homogeneous procedures were used for the controlled synthesis of Fe3O4, γ-Fe2O3 and Fe3O4/γ-Fe2O3 mixture nanocrystals. A combination of characterization techniques was used in order to distinguish these oxides. The controllable size, the narrow distribution and the rhombic self-assembly of the nanoparticles were revealed by the high-resolution transmission electron microscopy images and the X-ray powder diffraction results. For the quantitative analysis of the samples, manganometry was used. Preliminary magnetic measurements indicated the size and composition dependence of the saturation magnetization, a superparamagnetic behavior of the samples and some ferromagnetic features.
Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N
2015-03-01
A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of ultrasonic signals and application of a signal minimisation algorithm on selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and data length of a signal on EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. This methodology is successfully employed for detection of defects in a 50 mm thick coarse grained austenitic stainless steel specimen. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB extra enhancement in SNR is achieved as compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm with the EEMD-processed signal in the proposed methodology proves to be effective for adaptive signal reconstruction with improved signal-to-noise ratio. This methodology was further employed for successful imaging of defects in a B-scan. Copyright © 2014. Published by Elsevier B.V.
Surface EMG decomposition based on K-means clustering and convolution kernel compensation.
Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun
2015-03-01
A new approach has been developed by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method for multichannel surface electromyogram (EMG) decomposition. The KMC method was first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, was conducted to update the estimated IPT. The performance of the proposed K-means clustering-Modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% at a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from the 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscles when a contraction force was held at 8 N by using the KmCKC approach. A "two-source" test was further conducted with 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition. Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.
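The clustering step of the KmCKC pipeline can be illustrated with a minimal, numpy-only Lloyd's algorithm; the EMG-specific parts (IPT estimation and the modified CKC update) are omitted, and the data below are synthetic stand-ins for vectors of multichannel observations:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: cluster the rows of X into k groups."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every observation vector to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its cluster
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# two well-separated synthetic "observation" clouds (stand-ins for the
# vectors of multichannel EMG observations at different time instants)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 8)), rng.normal(3.0, 0.1, (20, 8))])
labels, centers = kmeans(X, k=2)
```

On well-separated clouds like this, Lloyd's algorithm recovers the two groups from almost any initialization; the paper's contribution lies in what is done with the resulting clusters, not in the clustering itself.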
Energy Technology Data Exchange (ETDEWEB)
Iliopoulos, AS; Sun, X [Duke University, Durham, North Carolina (United States); Pitsianis, N [Aristotle University of Thessaloniki (Greece); Duke University, Durham, North Carolina (United States); Yin, FF; Ren, L
2016-06-15
Purpose: To address and lift the limited degree of freedom (DoF) of globally bilinear motion components such as those based on principal components analysis (PCA), for encoding and modeling volumetric deformation motion. Methods: We provide a systematic approach to obtaining a multi-linear decomposition (MLD) and associated motion model from deformation vector field (DVF) data. We had previously introduced MLD for capturing multi-way relationships between DVF variables, without being restricted by the bilinear component format of PCA-based models. PCA-based modeling is commonly used for encoding patient-specific deformation as per planning 4D-CT images, and aiding on-board motion estimation during radiotherapy. However, the bilinear space-time decomposition inherently limits the DoF of such models by the small number of respiratory phases. While this limit is not reached in model studies using analytical or digital phantoms with low-rank motion, it compromises modeling power in the presence of relative motion, asymmetries and hysteresis, etc., which are often observed in patient data. Specifically, a low-DoF model will spuriously couple incoherent motion components, compromising its adaptability to on-board deformation changes. By the multi-linear format of extracted motion components, MLD-based models can encode higher-DoF deformation structure. Results: We conduct mathematical and experimental comparisons between PCA- and MLD-based models. A set of temporally-sampled analytical trajectories provides a synthetic, high-rank DVF; trajectories correspond to respiratory and cardiac motion factors, including different relative frequencies and spatial variations. Additionally, a digital XCAT phantom is used to simulate a lung lesion deforming incoherently with respect to the body, which adheres to a simple respiratory trend. In both cases, coupling of incoherent motion components due to a low model DoF is clearly demonstrated. Conclusion: Multi-linear decomposition can ...
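The DoF limitation described above is easy to see in a small numpy sketch: a PCA-style bilinear decomposition of a snapshot matrix can never yield more components than there are respiratory phases (all names and sizes below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical DVF snapshot matrix: one column per respiratory phase
# (n spatial variables, T phases); sizes and data are illustrative only.
rng = np.random.default_rng(0)
n, T = 3000, 10
snapshots = rng.normal(size=(n, T))

# PCA-style bilinear decomposition via SVD of the mean-removed snapshots
centered = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# at most T components exist, and mean removal costs one more degree of
# freedom, so the last singular value is numerically zero
print(len(s))  # 10
```

However rich the spatial dimension n is, the component count is capped by the phase count T; the multi-linear format lifts exactly this cap.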
Buried object location based on frequency-domain UWB measurements
International Nuclear Information System (INIS)
Soliman, M; Wu, Z
2008-01-01
In this paper, a wideband ground penetrating radar (GPR) system and a proposed frequency-domain data analysis technique are presented for the detection of shallow buried objects such as anti-personnel landmines. The GPR system uses one transmitting antenna and an array of six monopole receiving antenna elements and operates from 1 GHz to 20 GHz. This system is able to acquire, save and analyse data in the frequency domain. A common source or wide-angle reflection and refraction technique has been used for acquiring and processing the data. This technique is effective for the rejection of ground surface clutter. By applying the C-scan scheme, metallic and plastic mine-like targets buried in dry soil can be located.
Sentence-based sentiment analysis with domain adaptation capability
Gezici, Gizem
2013-01-01
Sentiment analysis aims to automatically estimate the sentiment in a given text as positive, objective or negative, possibly together with the strength of the sentiment. Polarity lexicons that indicate how positive or negative each term is are often used as the basis of many sentiment analysis approaches. Domain-specific polarity lexicons are expensive and time-consuming to build; hence, researchers often use a general-purpose or domain-independent lexicon as the basis of their analysis. In t...
Domain-Based Predictive Models for Protein-Protein Interaction Prediction
Directory of Open Access Journals (Sweden)
Chen Xue-Wen
2006-01-01
Full Text Available Protein interactions are of biological interest because they orchestrate a number of cellular processes such as metabolic pathways and immunological recognition. Recently, methods for predicting protein interactions using domain information have been proposed and preliminary results have demonstrated their feasibility. In this paper, we develop two domain-based statistical models (neural networks and decision trees) for protein interaction prediction. Unlike most of the existing methods, which consider only domain pairs (one domain from one protein) and assume that domain-domain interactions are independent of each other, the proposed methods are capable of exploring all possible interactions between domains and make predictions based on all the domains. Compared to maximum-likelihood estimation methods, our experimental results show that the proposed schemes can predict protein-protein interactions with higher specificity and sensitivity, while requiring less computation time. Furthermore, the decision tree-based model can be used to infer the interactions not only between two domains, but among multiple domains as well.
Directory of Open Access Journals (Sweden)
Lei Zhao
2015-10-01
Surveying the Earth's gravity field is an important domain of geodesy, with deep connections to Earth sciences and geo-information. Airborne gravimetry is an effective tool for collecting gravity data with mGal accuracy and a spatial resolution of several kilometers. The main obstacle of airborne gravimetry is extracting the gravity disturbance from measuring data with an extremely low signal-to-noise ratio. In general, the power of the noise concentrates in the higher frequencies of the measuring data, and a low-pass filter can be used to eliminate it. However, the noise may be distributed over a broad frequency range, and a low-pass filter cannot remove the portion that falls within its pass band. In order to improve the accuracy of airborne gravimetry, Empirical Mode Decomposition (EMD) is employed to denoise the measuring data of two primary repeated flights of the strapdown airborne gravimetry system SGA-WZ carried out in Greenland. Compared to the solutions using a finite impulse response (FIR) filter, the new results are improved by 40% and 10% in root mean square (RMS) of internal consistency and external accuracy, respectively.
Symmetric Tensor Decomposition
DEFF Research Database (Denmark)
Brachat, Jerome; Comon, Pierre; Mourrain, Bernard
2010-01-01
We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables ... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First, it permits an efficient computation of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank.
Wang, Qingzhu; Chen, Xiaoming; Zhu, Yihai
2017-09-01
Existing image compression and encryption methods have several shortcomings: they have low reconstruction accuracy and are unsuitable for three-dimensional (3D) images. To overcome these limitations, this paper proposes a tensor-based approach adopting tensor compressive sensing and tensor discrete fractional random transform (TDFRT). The source video images are measured by three key-controlled sensing matrices. Subsequently, the resulting tensor image is further encrypted using 3D cat map and the proposed TDFRT, which is based on higher-order singular value decomposition. A multiway projection algorithm is designed to reconstruct the video images. The proposed algorithm can greatly reduce the data volume and improve the efficiency of the data transmission and key distribution. The simulation results validate the good compression performance, efficiency, and security of the proposed algorithm.
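A minimal sketch of the higher-order SVD on which the proposed TDFRT is built (numpy only; the key-controlled sensing matrices, 3D cat map, and encryption steps of the paper are not reproduced):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: one factor matrix per mode, plus a core tensor."""
    Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]
    core = T
    for m, U in enumerate(Us):
        # multiply mode m of the core by U^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, Us

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 5, 6))
core, Us = hosvd(T)

# multiplying the core back by the factor matrices recovers the tensor
R = core
for m, U in enumerate(Us):
    R = np.moveaxis(np.tensordot(U, np.moveaxis(R, m, 0), axes=1), 0, m)
assert np.allclose(R, T)
```

Because each factor matrix here is square and orthogonal, the reconstruction is exact; compression comes from truncating the factor ranks, and the TDFRT additionally randomizes the transform.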
Zhao, Wei; Xiao, Shixiao; Zhang, Baocan; Huang, Xiaojing; You, Rongyi
2015-12-01
Electrocardiogram (ECG) signals are susceptible to disturbance by 50 Hz power line interference (PLI) in the process of acquisition and conversion. This paper therefore proposes a novel PLI removal algorithm based on morphological component analysis (MCA) and ensemble empirical mode decomposition (EEMD). Firstly, according to the morphological differences in ECG waveform characteristics, the noisy ECG signal was decomposed into a mutated component, a smooth component and a residual component by MCA. Secondly, the intrinsic mode functions (IMFs) of the PLI were filtered out. The noise suppression rate (NSR) and the signal distortion ratio (SDR) were used to evaluate the effect of the de-noising algorithm. Finally, the ECG signals were reconstructed. Based on the experimental comparison, it was concluded that the proposed algorithm had better filtering performance than the improved Levkov algorithm, because it not only effectively filtered the PLI, but also achieved a smaller SDR value.
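The goal of PLI removal can be illustrated with a much simpler baseline than the paper's MCA+EEMD pipeline: fitting and subtracting a 50 Hz sine/cosine pair by least squares. The sampling rate, amplitudes, and phases below are assumptions for a toy signal:

```python
import numpy as np

fs = 500.0                                     # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)              # stand-in for the ECG content
pli = 0.4 * np.sin(2 * np.pi * 50 * t + 0.7)   # 50 Hz power line interference
noisy = ecg + pli

# least-squares fit of a 50 Hz sine/cosine pair, then subtract the fit
A = np.column_stack([np.sin(2 * np.pi * 50 * t), np.cos(2 * np.pi * 50 * t)])
coef, *_ = np.linalg.lstsq(A, noisy, rcond=None)
cleaned = noisy - A @ coef

residual = np.max(np.abs(cleaned - ecg))       # small compared with the PLI
```

This baseline assumes the PLI is a pure, stationary 50 Hz tone; the MCA+EEMD approach of the paper is aimed at the realistic case where interference and signal morphology overlap and drift.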
Sun, Qi; Fu, Shujun
2017-09-20
Fringe orientation is an important feature of fringe patterns and has a wide range of applications such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns, demonstrating that the RSVD produces the best estimation results at a relatively low computational cost.
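A common gradient-field baseline for fringe orientation is the structure tensor, sketched below on a synthetic fringe pattern; the paper's RSVD method additionally regularizes the decomposition of the gradient field, which this toy omits:

```python
import numpy as np

def gradient_orientation(img):
    """Dominant gradient-field orientation via the structure tensor.

    Returns the angle (radians, mod pi) of the dominant gradient
    direction, i.e. the fringe normal; the tensor is averaged over the
    whole patch for simplicity rather than locally smoothed.
    """
    gy, gx = np.gradient(img.astype(float))
    Jxx, Jxy, Jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()
    return (0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)) % np.pi

# synthetic fringe pattern whose normal points at 30 degrees
y, x = np.mgrid[0:64, 0:64]
a = np.deg2rad(30)
img = np.cos(2 * np.pi * (x * np.cos(a) + y * np.sin(a)) / 8.0)
est = gradient_orientation(img)  # close to np.deg2rad(30)
```

On clean fringes the whole-patch structure tensor is already accurate; the regularized methods compared in the paper matter near singular points and in noisy regions, where this plain estimate degrades.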
Using Built-In Domain-Specific Modeling Support to Guide Model-Based Test Generation
Directory of Open Access Journals (Sweden)
Teemu Kanstrén
2012-02-01
We present a model-based testing approach to support automated test generation with domain-specific concepts. The approach involves a language expert, who is skilled at building test models, and domain experts, who know the domain of the system under test. First, we provide a framework to support the language expert in building test models using a full (Java) programming language with the help of simple but powerful modeling elements of the framework. Second, based on the model built with this framework, the toolset automatically forms a domain-specific modeling language that can be used to further constrain and guide test generation from these models by a domain expert. This makes it possible to generate a large set of test cases covering the full model, chosen (constrained) parts of the model, or to manually define specific test cases on top of the model while using concepts familiar to the domain experts.
Smart-phone based electrocardiogram wavelet decomposition and neural network classification
International Nuclear Information System (INIS)
Jannah, N; Hadjiloucas, S; Hwang, F; Galvão, R K H
2013-01-01
This paper discusses ECG classification after parametrizing the ECG waveforms in the wavelet domain. The aim of the work is to develop an accurate classification algorithm that can be used to diagnose cardiac beat abnormalities detected using a mobile platform such as smart-phones. Continuous-time recurrent neural network classifiers are considered for this task. Records from the European ST-T Database are decomposed in the wavelet domain using discrete wavelet transform (DWT) filter banks and the resulting DWT coefficients are filtered and used as inputs for training the neural network classifier. Advantages of the proposed methodology are the reduced memory requirement for the signals, which is of relevance to mobile applications, as well as an improvement in the generalization ability of the neural network due to the more parsimonious representation of the signal at its inputs.
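The decomposition step can be sketched with a plain Haar DWT (the paper uses DWT filter banks on ECG records; the one-beat stand-in signal and the Haar filter choice here are illustrative assumptions):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def wavelet_features(beat, levels=3):
    """Concatenated multi-level DWT coefficients, usable as classifier
    inputs (a plain Haar sketch, not the paper's filter banks)."""
    coeffs = []
    a = np.asarray(beat, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)
    coeffs.append(a)
    return np.concatenate(coeffs)

beat = np.sin(np.linspace(0, 2 * np.pi, 64))   # stand-in for one ECG beat
feats = wavelet_features(beat)
# 3 levels on 64 samples: 32 + 16 + 8 details plus 8 approximation = 64
```

Because the Haar transform is orthonormal, the feature vector preserves the signal's energy; parsimony comes from keeping only a filtered subset of the coefficients, as the abstract describes.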
Kilian Stoffel; Paul Cotofrei; Dong Han
2012-01-01
As an interdisciplinary domain requiring advanced and innovative methodologies, computational forensics is characterized by data that are simultaneously large-scale, uncertain, multidimensional, and approximate. Forensic domain experts, trained to discover hidden patterns in crime data, are limited in their analysis without the assistance of a computational intelligence approach. In this paper, a methodology and an automatic procedure based on fuzzy set theory and designed to infer precis...
Hu, Yegang; Lin, Yicong; Yang, Baoshan; Tang, Guangrui; Liu, Tao; Wang, Yuping; Zhang, Jicong
2017-08-11
In recent years, the source localization technique of magnetoencephalography (MEG) has played a prominent role in cognitive neuroscience and in the diagnosis and treatment of neurological and psychological disorders. However, locating deep brain activities such as in the mesial temporal structures, especially in preoperative evaluation of epilepsy patients, may be more challenging. In this work we have proposed a modified beamforming approach for finding deep sources. First, an iterative spatiotemporal signal decomposition was employed for reconstructing the sensor arrays, which could characterize the intrinsic discriminant features for interpreting sensor signals. Next, a sensor covariance matrix was estimated under the new reconstructed space. Then, a well-known vector beamforming approach, which was a linearly constraint minimum variance (LCMV) approach, was applied to compute the solution for the inverse problem. It can be shown that the proposed source localization approach can give better localization accuracy than two other commonly-used beamforming methods (LCMV, MUSIC) in simulated MEG measurements generated with deep sources. Further, we applied the proposed approach to real MEG data recorded from ten patients with medically-refractory mesial temporal lobe epilepsy (mTLE) for finding epileptogenic zone(s), and there was a good agreement between those findings by the proposed approach and the clinical comprehensive results.
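The LCMV step referenced above computes, for each candidate source, weights that pass its lead field with unit gain while minimizing output variance; a minimal numpy sketch with a synthetic covariance (sensor count and data are illustrative, and the iterative spatiotemporal reconstruction of the paper is omitted):

```python
import numpy as np

def lcmv_weights(C, h, reg=1e-9):
    """LCMV beamformer: w = C^{-1} h / (h^T C^{-1} h)."""
    Ci = np.linalg.inv(C + reg * np.eye(len(C)))  # small ridge for stability
    return (Ci @ h) / (h @ Ci @ h)

rng = np.random.default_rng(0)
n = 16                                    # number of MEG sensors (toy size)
h = rng.normal(size=n)                    # lead-field vector of one source
noise = rng.normal(size=(n, n))
C = noise @ noise.T / n + np.outer(h, h)  # sensor covariance with the source
w = lcmv_weights(C, h)
gain = w @ h                              # unit gain on the source, by design
```

Scanning `lcmv_weights` over a grid of candidate lead fields and mapping the output power is the usual localization procedure; the paper's modification changes the space in which C is estimated, not this weight formula.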
Lahmiri, Salim; Shmuel, Amir
2017-11-01
Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.
Defects diagnosis in laser brazing using near-infrared signals based on empirical mode decomposition
Cheng, Liyong; Mi, Gaoyang; Li, Shuo; Wang, Chunming; Hu, Xiyuan
2018-03-01
Real-time monitoring of laser welding plays a very important role in modern automated production, and online defect diagnosis is necessary. In this study, the status of laser brazing was monitored in real time using an infrared photoelectric sensor. Four kinds of braze seams (healthy weld, unfilled weld, hole weld and rough surface weld) along with corresponding near-infrared signals were obtained. Empirical Mode Decomposition (EMD) was then applied to analyze the near-infrared signals. The results showed that the EMD method performed well in eliminating noise from the near-infrared signals. The correlation coefficient was then used to select the Intrinsic Mode Function (IMF) components most sensitive to the weld defects, and a more accurate signal was reconstructed from the selected IMF components. Simultaneously, the spectrum of the selected IMF components was computed using the fast Fourier transform, and the frequency characteristics were clearly revealed. The frequency energy in different frequency bands was computed to diagnose the defects, and it differed significantly across the four types of weld defects. This approach has proved to be an effective and efficient method for monitoring laser brazing defects.
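The band-energy feature used for defect discrimination can be sketched directly (numpy only; the EMD/IMF-selection stage is skipped, and the signal below, with its frequencies and amplitudes, is an assumed stand-in for a reconstructed near-infrared trace):

```python
import numpy as np

def band_energies(signal, fs, bands):
    """Spectral energy of the signal inside each (lo, hi) frequency band."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

fs = 2000.0
t = np.arange(0, 1.0, 1 / fs)
# synthetic stand-in: a 50 Hz process plus weaker 400 Hz content such as
# a defect might introduce (frequencies and amplitudes are assumptions)
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
e_low, e_high = band_energies(sig, fs, [(0, 200), (200, 800)])
```

A vector of such per-band energies, computed per seam, is the kind of feature the abstract reports as differing significantly across the four defect types.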
Rakitskaya, Tatyana; Truba, Alla; Ennan, Alim; Volkova, Vitaliya
2015-12-01
Samples of the solid component of welding aerosols (SCWAs) were obtained as a result of steel welding by ANO-4, TsL-11, and UONI13/55 electrodes of Ukrainian manufacture. The phase compositions of the samples, both freshly prepared (FP) and modified (M) by water treatment at 60 °C, were studied by X-ray phase analysis and IR spectroscopy. All samples contain magnetite demonstrating its reflex at 2θ ≈ 35° characteristic of cubic spinel as well as manganochromite and iron oxides. FP SCWA-TsL and FP SCWA-UONI contain such phases as CaF2, water-soluble fluorides, chromates, and carbonates of alkali metals. After modification of the SCWA samples, water-soluble phases in their composition are undetectable. The size of magnetite nanoparticles varies from 15 to 68 nm depending on the chemical composition of the electrodes under study. IR spectral investigations confirm the polyphase composition of the SCWAs. In the IR spectra, the biggest differences are apparent in the regions of deformation vibrations of M-O-H bonds and stretching vibrations of M-O bonds (M-Fe, Cr). The catalytic activity of the SCWAs in the reaction of ozone decomposition decreases in the order SCWA-ANO > SCWA-UONI > SCWA-TsL, corresponding to the decrease in the content of catalytically active phases in their compositions.
Directory of Open Access Journals (Sweden)
Vahid Faghih Dinevari
2016-01-01
Wireless capsule endoscopy (WCE) is a new noninvasive instrument which allows direct observation of the gastrointestinal tract to diagnose its related diseases. Because of the large number of images obtained from capsule endoscopy per patient, doctors need a great deal of time to investigate all of them. So, it would be worthwhile to design a system for detecting diseases automatically. In this paper, a new method is presented for automatic detection of tumors in WCE images. This method utilizes the advantages of the discrete wavelet transform (DWT) and singular value decomposition (SVD) algorithms to extract features from different color channels of the WCE images. The extracted features are therefore invariant to rotation and can describe multiresolution characteristics of the WCE images. In order to classify the WCE images, the support vector machine (SVM) method is applied to a data set which includes 400 normal and 400 tumor WCE images. The experimental results show proper performance of the proposed algorithm for detection and isolation of the tumor images, achieving, at best, sensitivity, specificity, and accuracy of 94%, 93%, and 93.5% in the RGB color space, respectively.
Edward Jero, S; Ramu, Palaniappan; Ramakrishnan, S
2014-10-01
ECG steganography provides secured transmission of secret information, such as patient personal information, through ECG signals. This paper proposes an approach that uses the discrete wavelet transform to decompose signals and singular value decomposition (SVD) to embed the secret information into the decomposed ECG signal. The novelty of the proposed method is to embed the watermark using SVD into the two-dimensional (2D) ECG image. The embedding of secret information in a selected sub-band of the decomposed ECG is achieved by replacing the singular values of the decomposed cover image with the singular values of the secret data. The performance assessment of the proposed approach helps identify the suitable sub-band to hide secret data and the signal degradation that will affect diagnosability. Performance is measured using metrics such as Kullback-Leibler divergence (KL), percentage residual difference (PRD), peak signal-to-noise ratio (PSNR) and bit error rate (BER). A dynamic location selection approach for embedding the singular values is also discussed. The proposed approach is demonstrated on the MIT-BIH database and the observations validate that HH is the ideal sub-band to hide data. It is also observed that the signal degradation (less than 0.6%) is very low in the proposed approach, even with the secret data being as large as the sub-band size. So it does not affect diagnosability and is reliable for transmitting patient information.
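The core embedding idea, replacing the cover sub-band's singular values with those of the secret data, can be sketched in a few lines of numpy (random matrices stand in for the wavelet sub-band and patient data; the DWT stage and the quality metrics are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.normal(size=(32, 32))    # stand-in for the 2D ECG sub-band
secret = rng.normal(size=(32, 32))   # stand-in for the secret patient data

# embed: keep the cover's singular vectors, swap in the secret's values
Uc, sc, Vct = np.linalg.svd(cover)
Us, ss, Vst = np.linalg.svd(secret)
watermarked = Uc @ np.diag(ss) @ Vct

# extract: the secret's singular values come back out of the watermarked
# matrix (this toy assumes the receiver holds Us and Vst as side info)
s_rec = np.linalg.svd(watermarked, compute_uv=False)
```

Because the watermarked matrix keeps the cover's singular vectors, its overall structure (and hence the ECG morphology in that sub-band) is largely preserved, which is why the reported diagnosability degradation is small.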
International Nuclear Information System (INIS)
Lenain, Roland
2015-01-01
This thesis is devoted to the implementation of a domain decomposition method applied to the neutron transport equation. The objective of this work is to access high-fidelity deterministic solutions to properly handle heterogeneities located in nuclear reactor cores, for problem sizes ranging from color-sets of assemblies to large reactor core configurations in 2D and 3D. The innovative algorithm developed during the thesis intends to optimize the use of parallelism and memory. The approach also aims to minimize the influence of the parallel implementation on the performances. These goals match the needs of the APOLLO3 project, developed at CEA and supported by EDF and AREVA, which must be a portable code (no optimization for a specific architecture) in order to achieve best-estimate modeling with resources ranging from personal computers to the compute clusters available for engineers' analyses. The proposed algorithm is a Parallel Multigroup-Block Jacobi one. Each sub-domain is considered as a multi-group fixed-source problem with volume sources (fission) and surface sources (interface flux between the sub-domains). The multi-group problem is solved in each sub-domain and a single communication of the interface flux is required at each power iteration. The spectral radius of the resolution algorithm is made similar to that of a classical resolution algorithm with a nonlinear diffusion acceleration method: the well-known Coarse Mesh Finite Difference. In this way, ideal scalability is achievable when the calculation is parallelized. The memory organization, taking advantage of shared memory parallelism, optimizes the resources by avoiding redundant copies of the data shared between the sub-domains. Distributed memory architectures are made available by a hybrid parallel method that combines both paradigms of shared memory parallelism and distributed memory parallelism. For large problems, these architectures provide a greater number of processors and the amount of ...
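The block-Jacobi fixed-source iteration described above can be illustrated on a scalar 1D diffusion stand-in: each sub-domain is solved with interface values taken from the previous outer iteration, and the fixed point matches the global solve. All sizes are illustrative, and this is not the transport solver itself (no energy groups, no acceleration):

```python
import numpy as np

# 1D model problem -u'' = f on [0,1], u(0)=u(1)=0, split into two
# sub-domains that only exchange interface values each outer iteration
n = 40                      # interior points per sub-domain
h = 1.0 / (2 * n + 1)
f = np.ones(2 * n)

def solve_subdomain(f_loc, left, right):
    """Direct solve of one sub-domain with Dirichlet interface data."""
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f_loc.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

u1 = np.zeros(n)
u2 = np.zeros(n)
for _ in range(2000):       # outer (block-Jacobi) iterations
    u1_new = solve_subdomain(f[:n], 0.0, u2[0])
    u2_new = solve_subdomain(f[n:], u1[-1], 0.0)
    u1, u2 = u1_new, u2_new

# reference: one global direct solve over all 2n interior points
N = 2 * n
A = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
u_ref = np.linalg.solve(A, f)
err = np.max(np.abs(np.concatenate([u1, u2]) - u_ref))
```

The slow convergence of this unaccelerated iteration (thousands of sweeps for two tiny sub-domains) is exactly why the thesis pairs the scheme with Coarse Mesh Finite Difference acceleration to keep the spectral radius small.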
Hofmann, Philipp; Sedlmair, Martin; Krauss, Bernhard; Wichmann, Julian L.; Bauer, Ralf W.; Flohr, Thomas G.; Mahnken, Andreas H.
2016-03-01
Osteoporosis is a degenerative bone disease usually diagnosed only at the manifestation of fragility fractures, which severely endanger the health of the elderly in particular. To ensure timely therapeutic countermeasures, noninvasive and widely applicable diagnostic methods are required. Currently the primary quantifiable indicator for bone stability, bone mineral density (BMD), is obtained either by DEXA (dual-energy X-ray absorptiometry) or qCT (quantitative CT). Both have respective advantages and disadvantages, with DEXA being considered the gold standard. For timely diagnosis of osteoporosis, another CT-based method is presented here. A dual energy CT reconstruction workflow is being developed to evaluate BMD from lumbar spine (L1-L4) DE-CT images. The workflow is ROI-based and automated for practical use. A dual energy 3-material decomposition algorithm is used to differentiate bone from soft tissue and fat attenuation. The algorithm uses material attenuation coefficients at different beam energy levels. The bone fraction of the three different tissues is used to calculate the amount of hydroxylapatite in the trabecular bone of the corpus vertebrae inside a predefined ROI. Calibrations have been performed to obtain volumetric bone mineral density (vBMD) without having to add a calibration phantom or to use special scan protocols or hardware. Accuracy and precision depend on image noise and are comparable to qCT images. Clinical indications are in accordance with the DEXA gold standard. The decomposition-based workflow shows bone degradation effects normally not visible on standard CT images, which would induce errors in normal qCT results.
International Nuclear Information System (INIS)
Zhang, Tao; Li, Guoxiu; Yu, Yusong; Sun, Zuoyu; Wang, Meng; Chen, Jun
2014-01-01
Highlights: • The decomposition and combustion processes of an ADN-based thruster are studied. • The distribution of droplets during spray impingement on a wire mesh is obtained. • A two-temperature model is adopted to describe the heat transfer in the porous medium. • The influences of different mass fluxes and porosities are studied. - Abstract: Ammonium dinitramide (ADN) monopropellant is currently the most promising among all ‘green propellants’. In this paper, the decomposition and combustion process of liquid ADN-based ternary mixtures for propulsion is numerically studied. The R–R distribution model is used to set the initial boundary conditions of the droplet distribution resulting from spray impingement on a wire mesh, based on PDA experiments. To simulate the heat-transfer characteristics between the gas and solid phases, a two-temperature porous medium model of the catalytic bed is used. An 11-species, 7-reaction chemistry model is used to study the catalytic and combustion processes. The final distributions of temperature, pressure, and the various species concentrations in the ADN thruster are obtained. The simulation results of the present study agree well with previous experimental data, and the demonstration of the ADN thruster confirms that good steady-state operation is achieved. The effects of spray inlet mass flux and porosity on monopropellant thruster performance are analyzed. The numerical results further show that a larger inlet mass flux results in better thruster performance and that a catalytic bed porosity of 0.5 exhibits the best thruster performance. These findings can serve as a key reference for designing and testing non-toxic aerospace monopropellant thrusters.
International Nuclear Information System (INIS)
Lubis, L.I.; Dincer, I.; Rosen, M.A.
2008-01-01
An extension of a previous Life Cycle Assessment (LCA) of nuclear-based hydrogen production using thermochemical water decomposition is reported. The copper-chlorine thermochemical cycle is considered, and the environmental impacts of the nuclear and thermochemical plants are assessed, while future needs are identified. Environmental impacts are investigated using CML 2001 impact categories. The nuclear fuel cycle and the construction of the hydrogen plant contribute significantly to the total environmental impacts, whereas the operation of the thermochemical hydrogen production plant contributes much less. Changes in the inventory of chemicals needed in the thermochemical plant do not significantly affect the total impacts. Improvement analysis suggests the development of more sustainable processes, particularly in the nuclear plant. Other important and necessary future extensions of the reported research are also provided. (author)
Xie, Wen-Jie; Li, Ming-Xia; Xu, Hai-Chuan; Chen, Wei; Zhou, Wei-Xing; Stanley, H. Eugene
2016-10-01
Traders in a stock market exchange stock shares and form a stock trading network. Trades at different positions of the stock trading network may contain different information. We construct stock trading networks based on the limit order book data and classify traders into k classes using the k-shell decomposition method. We investigate the influences of trading behaviors on the price impact by comparing a closed national market (A-shares) with an international market (B-shares), individuals and institutions, partially filled and filled trades, buyer-initiated and seller-initiated trades, and trades at different positions of a trading network. Institutional traders professionally use some trading strategies to reduce the price impact and individuals at the same positions in the trading network have a higher price impact than institutions. We also find that trades in the core have higher price impacts than those in the peripheral shell.
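The k-shell classification of traders described above can be sketched with the standard k-core peeling algorithm: repeatedly remove nodes of minimum degree, and record for each node the highest k at which it survives. A minimal pure-Python sketch follows (the toy graph and node labels are illustrative, not actual trading data):

```python
from collections import defaultdict

def k_shell(edges):
    """Assign every node of an undirected graph its k-shell (core number).

    A node's shell index is the largest k such that the node survives in
    the k-core, obtained by iteratively peeling minimum-degree nodes.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degree = {n: len(nb) for n, nb in adj.items()}
    shell = {}
    k = 0
    while degree:
        k = max(k, min(degree.values()))
        # peel every node whose current degree is at most k
        peel = [n for n, d in degree.items() if d <= k]
        while peel:
            n = peel.pop()
            if n not in degree:  # already peeled in this pass
                continue
            shell[n] = k
            del degree[n]
            for m in adj[n]:
                if m in degree:
                    degree[m] -= 1
                    if degree[m] <= k:
                        peel.append(m)
    return shell
```

For a triangle a-b-c with a pendant node d attached to a, the triangle nodes get shell index 2 and the pendant gets 1, illustrating the core/periphery distinction the study draws.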
Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos
2013-01-01
This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals obtained through a commercial 16-electrode wireless headset recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking, with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at artifact reduction from head movements, as well as a method to detect blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented. PMID:23948873
Jiang, Wenqian; Zeng, Bo; Yang, Zhou; Li, Gang
2018-01-01
In the non-invasive load monitoring mode, load decomposition can reflect the running state of each appliance, which helps users reduce unnecessary energy costs. In the context of the demand-side management measure of time-of-use (TOU) pricing, this paper proposes a residential load influence analysis method for TOU pricing based on non-intrusive load monitoring data. Relying on current-signal classification of residential loads, the appliance types in a household and the self-elasticity and cross-elasticity over different time periods can be obtained. Tests on actual household load data show that, under the influence of TOU pricing, the use of some appliances is shifted in time, electricity consumption during peak-price periods is reduced while consumption during low-price periods increases, and these changes exhibit a certain regularity.
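The self- and cross-elasticities mentioned above are, generically, ratios of relative changes in demand and price; self-elasticity pairs demand with the price of the same TOU period, cross-elasticity with the price of another period. A minimal sketch using the midpoint (arc) formula, which is one common convention and not necessarily the estimator used in the paper:

```python
def arc_elasticity(q1, q2, p1, p2):
    """Arc (midpoint) elasticity of demand q with respect to price p.

    q1, q2: demand before/after the price change; p1, p2: the prices.
    For a self-elasticity, p is the same period's price; for a
    cross-elasticity, p is the price of another time-of-use period.
    """
    dq = (q2 - q1) / ((q1 + q2) / 2.0)  # relative change in demand
    dp = (p2 - p1) / ((p1 + p2) / 2.0)  # relative change in price
    return dq / dp
```

With illustrative numbers, raising the peak price from 1.0 to 1.5 while peak demand drops from 100 to 80 gives a negative self-elasticity, as expected for a normal good.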
Chundawat, Shishir P S; Vismeh, Ramin; Sharma, Lekh N; Humpula, James F; da Costa Sousa, Leonardo; Chambliss, C Kevin; Jones, A Daniel; Balan, Venkatesh; Dale, Bruce E
2010-11-01
Decomposition products formed or released during ammonia fiber expansion (AFEX) and dilute acid (DA) pretreatment of corn stover (CS) were quantified using robust mass spectrometry-based analytical platforms. Ammonolytic cleavage of cell wall ester linkages during AFEX resulted in the formation of acetamide (25 mg/g AFEX CS) and various phenolic amides (15 mg/g AFEX CS) that are effective nutrients for downstream fermentation. After ammonolysis, Maillard reactions with carbonyl-containing intermediates represent the second largest sink for ammonia during AFEX. On the other hand, several carboxylic acids were formed (e.g. 35 mg acetic acid/g DA CS) during DA pretreatment. Formation of furans was 36-fold lower for AFEX compared to DA treatment, while the yield of carboxylic acids (e.g. lactic and succinic acids) was 100-1000-fold lower during AFEX compared to previous reports using sodium hydroxide as the pretreatment reagent.
Prediction of peptidase category based on functional domain composition.
Xu, Xiaochun; Yu, Dong; Fang, Wei; Cheng, Yushao; Qian, Ziliang; Lu, Wencong; Cai, Yudong; Feng, Kaiyan
2008-10-01
Peptidases play pivotal regulatory roles in conception, birth, digestion, growth, maturation, aging, and death of all organisms. These regulatory roles include activation, synthesis and turnover of proteins. In the proteomics era, computational methods to identify peptidases and catalog them into the six major classes (aspartic peptidases, cysteine peptidases, glutamic peptidases, metallo peptidases, serine peptidases and threonine peptidases) can give an instant glance at the biological functions of a newly identified protein. In this contribution, by combining the nearest neighbor algorithm with functional domain composition, we introduce both an automatic peptidase identifier and an automatic peptidase classifier. The successful identification and classification rates are 93.7% and 96.5% for our peptidase identifier and peptidase classifier, respectively. A free online peptidase identifier and peptidase classifier are provided on our Web page http://pcal.biosino.org/protease_classification.html.
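A nearest-neighbor classifier over functional domain composition can be sketched as follows. The Pfam-style domain identifiers and class labels below are illustrative, and Jaccard similarity on domain sets stands in for whatever similarity measure the authors actually employ:

```python
def jaccard(a, b):
    """Similarity between two domain-composition sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_class(query_domains, training):
    """Assign the peptidase class of the most similar training protein.

    `training` is a list of (domain_set, class_label) pairs; the query is
    labeled with the class of its nearest neighbor in domain space.
    """
    best = max(training, key=lambda item: jaccard(query_domains, item[0]))
    return best[1]
```

A query sharing a trypsin-like domain with a serine-peptidase training example would thus be labeled "serine", even when the domain sets do not match exactly.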
Natural-Annotation-based Unsupervised Construction of Korean-Chinese Domain Dictionary
Liu, Wuying; Wang, Lin
2018-03-01
Large-scale bilingual parallel resources are significant for statistical learning and deep learning in natural language processing. This paper addresses the automatic construction of a Korean-Chinese domain dictionary, and presents a novel unsupervised construction method based on natural annotation in the raw corpus. We first extract all Korean-Chinese word pairs from Korean texts according to natural annotations, then transform the traditional Chinese characters into simplified ones, and finally distill a bilingual domain dictionary after retrieving the simplified Chinese words in an external Chinese domain dictionary. The experimental results show that our method can automatically and efficiently build multiple Korean-Chinese domain dictionaries.
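The extraction step can be illustrated with a regular expression matching a Hangul word immediately followed by a parenthesized run of CJK ideographs, a common natural-annotation pattern in Korean text. This is only a sketch of the idea: the traditional-to-simplified conversion step is omitted, and the example words and dictionary are illustrative:

```python
import re

# A Hangul word followed by a parenthesized run of CJK ideographs,
# e.g. "한국(韓國)" -- the "natural annotation" pattern exploited here.
PAIR = re.compile(r"([\uac00-\ud7a3]+)\(([\u4e00-\u9fff]+)\)")

def extract_pairs(text):
    """Extract (Korean, Chinese) word pairs from naturally annotated text."""
    return PAIR.findall(text)

def build_domain_dictionary(texts, chinese_domain_dict):
    """Keep only pairs whose Chinese side appears in a reference
    Chinese domain dictionary (conversion to simplified characters,
    as done in the paper, is omitted in this sketch)."""
    out = {}
    for text in texts:
        for ko, zh in extract_pairs(text):
            if zh in chinese_domain_dict:
                out[ko] = zh
    return out
```

Running the extractor over a sentence such as "한국(韓國)의 경제(經濟)" yields both annotated pairs; the dictionary filter then distills the in-domain subset.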
Danger, Michael; Cornut, Julien; Chauvet, Eric; Chavez, Paola; Elger, Arnaud; Lecerf, Antoine
2013-07-01
In detritus-based ecosystems, autochthonous primary production contributes very little to the detritus pool. Yet primary producers may still influence the functioning of these ecosystems through complex interactions with decomposers and detritivores. Recent studies have suggested that, in aquatic systems, small amounts of labile carbon (C) (e.g., producer exudates), could increase the mineralization of more recalcitrant organic-matter pools (e.g., leaf litter). This process, called priming effect, should be exacerbated under low-nutrient conditions and may alter the nature of interactions among microbial groups, from competition under low-nutrient conditions to indirect mutualism under high-nutrient conditions. Theoretical models further predict that primary producers may be competitively excluded when allochthonous C sources enter an ecosystem. In this study, the effects of a benthic diatom on aquatic hyphomycetes, bacteria, and leaf litter decomposition were investigated under two nutrient levels in a factorial microcosm experiment simulating detritus-based, headwater stream ecosystems. Contrary to theoretical expectations, diatoms and decomposers were able to coexist under both nutrient conditions. Under low-nutrient conditions, diatoms increased leaf litter decomposition rate by 20% compared to treatments where they were absent. No effect was observed under high-nutrient conditions. The increase in leaf litter mineralization rate induced a positive feedback on diatom densities. We attribute these results to the priming effect of labile C exudates from primary producers. The presence of diatoms in combination with fungal decomposers also promoted decomposer diversity and, under low-nutrient conditions, led to a significant decrease in leaf litter C:P ratio that could improve secondary production. Results from our microcosm experiment suggest new mechanisms by which primary producers may influence organic matter dynamics even in ecosystems where autochthonous
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.
2014-05-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable real-time forecasting of tumor ablation treatment simulation, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method for forecasting the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers a very fast numerical solution to bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In this way, tumor ablation treatment planning is feasible using just a personal computer, thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by the medical imaging modalities used in everyday practice.
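For reference, exact DMD itself reduces to a truncated SVD of the snapshot matrix and a small eigenvalue problem. A minimal NumPy sketch, independent of the bioheat solver described above, assuming paired snapshot matrices X and Y with Y one time step ahead of X:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact dynamic mode decomposition of rank r.

    X and Y are snapshot matrices with Y[:, k] the state one time step
    after X[:, k]; returns the DMD eigenvalues and modes of the best-fit
    linear propagator A with Y ≈ A X.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # project the propagator onto the leading POD subspace
    Atilde = U.conj().T @ Y @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(Atilde)
    modes = (Y @ Vh.conj().T / s) @ W / eigvals  # exact DMD modes
    return eigvals, modes
```

On data generated by a known linear map, the DMD eigenvalues recover the map's eigenvalues, which is what makes the method usable as a cheap forecaster of the full simulation.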
Lim, Hobin; Kim, YoungHee; Song, Teh-Ru Alex; Shen, Xuzhang
2018-03-01
Accurate determination of the seismometer orientation is a prerequisite for seismic studies including, but not limited to, seismic anisotropy. While borehole seismometers on land produce seismic waveform data largely free of human-induced noise, they may have the drawback of an uncertain orientation. This study calculates a harmonic decomposition of teleseismic receiver functions from the P and PP phases and determines the orientation of a seismometer by minimizing a constant term in a harmonic expansion of tangential receiver functions in backazimuth near and at 0 s. This method normalizes the effect of seismic sources and determines the orientation of a seismometer without assuming an isotropic medium. Compared to the method of minimizing the amplitudes of a mean of the tangential receiver functions near and at 0 s, the method yields more accurate orientations in cases where the backazimuthal coverage of earthquake sources is uneven and incomplete (even in the case of ocean bottom seismometers). We apply this method to data from the Korean seismic network (52 broad-band velocity seismometers, 30 of which are borehole sensors) to estimate the sensor orientations in the period 2005-2016. We also track temporal changes in the sensor orientation through changes in the polarity and amplitude of the tangential receiver function. Six borehole stations are confirmed to have experienced a significant orientation change (10°-180°) over the 10 yr period. We demonstrate the usefulness of our method by estimating the orientation of ocean bottom sensors, which are known to have high noise levels during their relatively short deployment periods.
Energy Technology Data Exchange (ETDEWEB)
Rao, Laxminarsimha V., E-mail: laxman@iitk.ac.in [Mechanics and Applied Mathematics Group, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Roy, Subhradeep [Department of Biomedical Engineering and Mechanics (MC 0219), Virginia Tech, 495 Old Turner Street, Blacksburg, VA 24061 (United States); Das, Sovan Lal [Mechanics and Applied Mathematics Group, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India)
2017-01-15
We estimate the equilibrium size distribution of cholesterol-rich micro-domains on a lipid bilayer by solving the Smoluchowski equation for coagulation and fragmentation. Towards this aim, we first derive the coagulation kernels based on the diffusion behaviour of domains moving in a two-dimensional membrane sheet, as this better represents reality. We incorporate three different scenarios of domain diffusion into our coagulation kernel. Subsequently, we investigate the influence of the model parameters on the coagulation and fragmentation behaviour. The observed behaviours of the coagulation and fragmentation kernels are also manifested in the equilibrium domain size distribution and its first moment. Finally, considering liquid domains diffusing in a supported lipid bilayer, we fit the equilibrium domain size distribution to a benchmark solution.
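The coagulation part of the discrete Smoluchowski equation can be sketched with a forward-Euler time step. This toy version omits the fragmentation terms and the membrane-specific, diffusion-derived kernels of the paper; the constant kernel in the usage example is purely illustrative:

```python
import numpy as np

def smoluchowski_step(n, kernel, dt):
    """One forward-Euler step of the discrete Smoluchowski coagulation terms.

    n[k-1] is the number density of clusters of size k (k = 1..K);
    kernel(i, j) is the coagulation rate of sizes i and j. Fragmentation
    is omitted in this sketch, and sizes beyond K are truncated.
    """
    K = len(n)
    dn = np.zeros(K)
    for k in range(1, K + 1):
        # gain: pairs (i, k - i) merging into size k (factor 1/2 avoids
        # double counting)
        gain = 0.5 * sum(kernel(i, k - i) * n[i - 1] * n[k - i - 1]
                         for i in range(1, k))
        # loss: size-k clusters merging with anything
        loss = n[k - 1] * sum(kernel(k, j) * n[j - 1] for j in range(1, K + 1))
        dn[k - 1] = gain - loss
    return n + dt * dn
```

Starting from monomers only, one step produces dimers while conserving total mass (as long as no mass is pushed past the size cutoff K).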
Liang, Hui; Chen, Xiaobo
2017-10-01
A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre functions in the vertical coordinate and Fourier series in the circumferential direction. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation, via integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution; they are removed by extending the boundary integral equation to the interior free surface (a circular disc), on which a null normal derivative of the potential is imposed and the dipole distribution is expressed as a Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid for the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically using the coefficients of the Fourier-Laguerre expansion.
SAR Interferogram Filtering of Shearlet Domain Based on Interferometric Phase Statistics
Directory of Open Access Journals (Sweden)
Yonghong He
2017-02-01
This paper presents a new filtering approach for Synthetic Aperture Radar (SAR) interferometric phase noise reduction in the shearlet domain, depending on the coherent statistical characteristics. Shearlets provide a multidirectional and multiscale decomposition that has advantages over wavelet filtering methods when dealing with noisy phase fringes. Phase noise in SAR interferograms is directly related to the interferometric coherence and the look number of the interferogram. Therefore, an optimal interferogram filter should incorporate information from both of them. The proposed method combines the phase noise standard deviation with the shearlet transform. Experimental results show that the proposed method can reduce the interferogram noise while maintaining the spatial resolution, especially in areas with low coherence.
Decomposition methods for unsupervised learning
DEFF Research Database (Denmark)
Mørup, Morten
2008-01-01
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding … methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity … in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography …
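One of the decompositions covered, non-negative matrix factorization, can be sketched with the classical Lee-Seung multiplicative updates for the Frobenius-norm objective; the initialization and iteration count below are arbitrary choices for illustration:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Factor a non-negative matrix V ≈ W @ H with W, H >= 0.

    Uses Lee-Seung multiplicative updates minimizing ||V - WH||_F;
    the updates preserve non-negativity by construction.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

On data that is exactly non-negative and low-rank, the reconstruction error drives toward zero while both factors stay elementwise non-negative, which is the interpretability property that motivates NMF over unconstrained factor analysis.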
Shi, Feifei
2014-07-10
The formation of passive films on electrodes due to electrolyte decomposition significantly affects the reversibility of Li-ion batteries (LIBs); however, understanding of the electrolyte decomposition process is still lacking. The decomposition products of ethylene carbonate (EC)-based electrolytes on Sn and Ni electrodes are investigated in this study by Fourier transform infrared (FTIR) spectroscopy. The reference compounds, diethyl 2,5-dioxahexane dicarboxylate (DEDOHC) and polyethylene carbonate (poly-EC), were synthesized, and their chemical structures were characterized by FTIR spectroscopy and nuclear magnetic resonance (NMR). Assignment of the vibration frequencies of these compounds was assisted by quantum chemical (Hartree-Fock) calculations. The effect of Li-ion solvation on the FTIR spectra was studied by introducing the synthesized reference compounds into the electrolyte. EC decomposition products formed on Sn and Ni electrodes were identified as DEDOHC and poly-EC by matching the features of surface species formed on the electrodes with reference spectra. The results of this study demonstrate the importance of accounting for the solvation effect in FTIR analysis of the decomposition products forming on LIB electrodes.
International Nuclear Information System (INIS)
Zhu Qin; Peng Xizhe; Wu Kaiya
2012-01-01
Based on the input–output model and comparable-price input–output tables, this paper investigates the indirect carbon emissions from residential consumption in China in 1992–2005, and examines the impacts on the emissions using the structural decomposition method. The results demonstrate that the rise of the residential consumption level played a dominant role in the growth of residential indirect emissions. The persistent decline of the carbon emission intensity of industrial sectors had a significant negative effect on the emissions. The change in the intermediate demand of industrial sectors resulted in an overall positive effect, except in the initial years. Population growth increased the indirect emissions to a certain extent; however, population size is no longer the main driver of the growth of the emissions. The change in the consumption structure showed a weak positive effect, demonstrating the importance for China of controlling and slowing the increase in the emissions while optimizing the residential consumption structure. The results imply that restructuring the economy and improving efficiency, rather than lowering the consumption scale, should be the means adopted by China to achieve its targets of energy conservation and emission reduction. - Highlights: ► We build the input–output model of indirect carbon emissions from residential consumption. ► We calculate the indirect emissions using the comparable price input–output tables. ► We examine the impacts on the indirect emissions using the structural decomposition method. ► The change in the consumption structure showed a weak positive effect on the emissions. ► China's population size is no longer the main reason for the growth of the emissions.
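The core input-output calculation rests on the Leontief inverse: total output x satisfies x = Ax + y, so x = (I - A)^(-1) y, and the emissions embodied in final demand y are f·x for sectoral emission intensities f. A minimal NumPy sketch with illustrative two-sector data (the structural decomposition step itself, which perturbs one factor at a time between years, is not shown):

```python
import numpy as np

def indirect_emissions(f, A, y):
    """Carbon emissions embodied in final demand y.

    f: direct emission intensity per unit output of each sector,
    A: technical-coefficient (input-output) matrix,
    y: final-demand vector (e.g. residential consumption).
    Total output x solves x = A x + y, i.e. x = (I - A)^{-1} y.
    """
    x = np.linalg.solve(np.eye(len(y)) - A, y)
    return f @ x
```

Structural decomposition then attributes the change in this quantity between two years to changes in f, A, and y separately, by swapping in one year's factor at a time.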
Infrared and visible fusion face recognition based on NSCT domain
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is used to process the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
Garces Correa, Agustina; Laciar Leber, Eric
2010-01-01
An algorithm to automatically detect drowsiness episodes has been developed. It uses only one EEG channel to differentiate between stages of alertness and drowsiness. In this work, the feature vectors are built by combining Power Spectral Density (PSD) and Wavelet Transform (WT) measures. The features extracted from the PSD of the EEG signal are: the central frequency, the first quartile frequency, the maximum frequency, the total energy of the spectrum, and the power of the theta and alpha bands. In the wavelet domain, the number of zero crossings and the integral of scales 3, 4 and 5 of the Daubechies-2 WT were computed. Epochs are classified with neural networks. The detection results obtained with this technique are 86.5% for drowsiness stages and 81.7% for alertness segments. These results show that the extracted features and the classifier are able to identify drowsiness EEG segments.
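The theta- and alpha-band power features can be illustrated with a plain periodogram estimate; the original work may use a different PSD estimator, and the band edges below (theta 4-8 Hz, alpha 8-13 Hz) are common conventions rather than the paper's exact values:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the [f_lo, f_hi) Hz band via the periodogram.

    fs is the sampling rate in Hz; the periodogram is |rfft|^2 / N,
    summed over the frequency bins falling inside the band.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].sum()
```

A pure 10 Hz tone, for instance, concentrates its power in the alpha band and leaves the theta band nearly empty, which is the kind of contrast the alertness/drowsiness features exploit.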
A dimension decomposition approach based on iterative observer design for an elliptic Cauchy problem
Majeed, Muhammad Usman
2015-07-13
A state-observer-inspired iterative algorithm is presented to solve a boundary estimation problem for the Laplace equation, using one of the space variables as a time-like variable. A three-dimensional domain with two congruent parallel surfaces is considered. The problem is set up in Cartesian coordinates, and the Laplace equation is re-written as a first-order state equation with state operator matrix A; measurements are provided on the Cauchy data surface with measurement operator C. Conditions for the existence of a strongly continuous semigroup generated by A are studied. Observability conditions for the pair (C, A) are provided in an infinite-dimensional setting. In this setting, a special observability result allows the three-dimensional problem to be decomposed into a set of independent two-dimensional sub-problems over rectangular cross-sections. Numerical simulation results are provided.
Evaluation of Methyl-Binding Domain Based Enrichment Approaches Revisited.
Directory of Open Access Journals (Sweden)
Karolina A Aberg
Methyl-binding domain (MBD) enrichment followed by deep sequencing (MBD-seq) is a robust and cost-efficient approach for methylome-wide association studies (MWAS). MBD-seq has been demonstrated to be capable of identifying differentially methylated regions, detecting previously reported robust associations, and producing findings that replicate with other technologies such as targeted pyrosequencing of bisulfite-converted DNA. There are several commercially available kits that can be used for MBD enrichment. Our previous work has involved MethylMiner (Life Technologies, Foster City, CA, USA), which we chose after careful investigation of its properties. However, in a recent evaluation of five commercially available MBD-enrichment kits, the performance of MethylMiner was deemed poor. Given our positive experience with MethylMiner, we were surprised by this report. In an attempt to reproduce these findings, we here performed a direct comparison of MethylMiner with MethylCap (Diagenode Inc., Denville, NJ, USA), the best performing kit in that study. We find that both MethylMiner and MethylCap are well performing MBD-enrichment kits. However, MethylMiner shows somewhat better enrichment efficiency and lower levels of background "noise". In addition, for the purpose of MWAS, where we want to investigate the majority of CpGs, we find MethylMiner to be superior, as it allows tailoring the enrichment to the regions where most CpGs are located. Using targeted bisulfite sequencing, we confirmed that sites where methylation was detected by either MethylMiner or MethylCap were indeed methylated.
Concomitant prediction of function and fold at the domain level with GO-based profiles.
Lopez, Daniel; Pazos, Florencio
2013-01-01
Predicting the function of newly sequenced proteins is crucial due to the pace at which these raw sequences are being obtained. Almost all resources for predicting protein function assign functional terms to whole chains, and do not distinguish which particular domain is responsible for the allocated function. This is not a limitation of the methodologies themselves, but stems from the fact that, in the databases of functional annotations these methods use for transferring functional terms to new proteins, annotations are made on a whole-chain basis. Nevertheless, domains are the basic evolutionary, and often functional, units of proteins. In many cases, the domains of a protein chain have distinct molecular functions, independent from each other. For that reason, resources with functional annotations at the domain level, as well as methodologies for predicting function for individual domains adapted to these resources, are required. We present a methodology for predicting the molecular function of individual domains, based on a previously developed database of functional annotations at the domain level. The approach, which we show outperforms a standard method based on sequence searches in assigning function, concomitantly predicts the structural fold of the domains and can give hints on the functionally important residues associated with the predicted function.