Non-linear scalable TFETI domain decomposition based contact algorithm
Czech Academy of Sciences Publication Activity Database
Dobiáš, Jiří; Pták, Svatopluk; Dostál, Z.; Vondrák, V.; Kozubek, T.
2010-01-01
Roč. 10, č. 1 (2010), s. 1-10 ISSN 1757-8981. [World Congress on Computational Mechanics/9./. Sydney, 19.07.2010 - 23.07.2010] R&D Projects: GA ČR GA101/08/0574 Institutional research plan: CEZ:AV0Z20760514 Keywords : finite element method * domain decomposition method * contact Subject RIV: BA - General Mathematics http://iopscience.iop.org/1757-899X/10/1/012161/pdf/1757-899X_10_1_012161.pdf
Lubineau, Gilles
2015-03-01
We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Energy Technology Data Exchange (ETDEWEB)
Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^{6} particles on 65,536 MPI tasks.
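As an illustration of the idea (not code from the record), the sketch below balances particle counts across processor sites with a Lloyd-style relaxation: each site is repeatedly moved to the centroid of its Voronoi cell, which for roughly uniform particle densities drives the per-site loads toward equality. The steepest-descent update of the actual algorithm is replaced here by this simpler heuristic, and all names are hypothetical.

```python
import numpy as np

def assign(particles, sites):
    # Voronoi assignment: each particle belongs to its nearest site
    d2 = ((particles[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def imbalance(particles, sites):
    # standard deviation of per-site particle counts (the load)
    counts = np.bincount(assign(particles, sites), minlength=len(sites))
    return counts.std()

def lloyd_balance(particles, sites, iters=30):
    # move each site toward the centroid of its cell (Lloyd relaxation)
    sites = sites.copy()
    for _ in range(iters):
        owner = assign(particles, sites)
        for k in range(len(sites)):
            mine = particles[owner == k]
            if len(mine):
                sites[k] = mine.mean(axis=0)
    return sites

rng = np.random.default_rng(0)
particles = rng.random((2000, 2))                  # quasi-2D particle cloud
sites = np.array([[0.01, 0.01], [0.02, 0.01],
                  [0.01, 0.02], [0.02, 0.02]])     # badly clustered start
before = imbalance(particles, sites)
after = imbalance(particles, lloyd_balance(particles, sites))
```

For uniform densities a centroidal Voronoi configuration roughly equalizes cell populations; the cost-gradient update described in the record additionally handles strongly non-uniform particle distributions.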
Domain decomposition method for solving elliptic problems in unbounded domains
International Nuclear Information System (INIS)
Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.
1991-01-01
Computational aspects of the box domain decomposition (DD) method for solving boundary value problems in an unbounded domain are discussed. A new variant of the DD-method for elliptic problems in unbounded domains is suggested. It is based on a partitioning of the unbounded domain adapted to the given asymptotic decay of the unknown function at infinity. A comparison of computational expenditures is given for the boundary integral method and the suggested DD-algorithm. 29 refs.; 2 figs.; 2 tabs
Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition
Directory of Open Access Journals (Sweden)
Cécile Germain‐Renaud
1999-01-01
Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data-parallelism is possible, but requires most of the tools from the Java palette: the Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.
A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique
Directory of Open Access Journals (Sweden)
Bo-Ao Xu
2016-01-01
This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite difference time-domain (WLP-FDTD) method based on a domain decomposition scheme. By using the domain decomposition finite difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are, respectively, used to eliminate the splitting error and speed up the calculation in different subdomains. A joint calculation scheme is presented to reduce the amount of calculation. With this approach, iteration is not essential to obtain accurate results. A numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.
Vector domain decomposition schemes for parabolic equations
Vabishchevich, P. N.
2017-09-01
A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on the partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.
An acceleration technique for 2D MOC based on Krylov subspace and domain decomposition methods
International Nuclear Information System (INIS)
Zhang Hongbo; Wu Hongchun; Cao Liangzhi
2011-01-01
Highlights: → We convert MOC into a linear system solved by GMRES as an acceleration method. → We use a domain decomposition method to overcome the inefficiency on large matrices. → Parallel technology is applied and a matched ray tracing system is developed. → Results show good efficiency even in large-scale and strong scattering problems. → The emphasis is that the technique is geometry-flexible. - Abstract: The method of characteristics (MOC) has great geometrical flexibility but poor computational efficiency in neutron transport calculations. The generalized minimal residual (GMRES) method, a type of Krylov subspace method, is utilized to accelerate the 2D generalized geometry characteristics solver AutoMOC. In this technique, a linear algebraic equation system for angular flux moments and boundary fluxes is derived to replace the conventional characteristics sweep (i.e. inner iteration) scheme, and the GMRES method is then implemented as an efficient linear system solver. This acceleration method is proved to be reliable in theory and simple to implement. Furthermore, as it introduces no restriction on the geometry treatment, it is suitable for accelerating an arbitrary geometry MOC solver. However, it is observed that the speedup decreases when the matrix becomes larger. The spatial domain decomposition method and multiprocessing parallel technology are then employed to overcome the problem. The calculation domain is partitioned into several sub-domains. For each of them, a smaller matrix is established and solved by GMRES; the adjacent sub-domains are coupled by 'inner-edges', where the trajectory mismatches are treated adequately. Moreover, a matched ray tracing system is developed on the basis of AutoCAD, which allows a user to define the sub-domains conveniently on demand. Numerical results demonstrate that the acceleration techniques are efficient without loss of accuracy, even in the case of large-scale and strong scattering problems.
Combinatorial geometry domain decomposition strategies for Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)
2013-07-01
Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which has been developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2013-01-01
We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then, the singular values are embedded in a detail subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the multiple scaling factors is a difficult problem; hence, a multiobjective ant colony optimization is used to determine these values. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from the problem of a high probability of false positive detection of the watermarks.
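For illustration only (not the authors' code), here is a pure-NumPy toy of the SVD embedding step: the watermark's singular values are added, with scaling factors, to the singular values of a stand-in "subband", and then recovered. The real scheme embeds into a DWT detail subband of a host image and tunes the multiple scaling factors with MOACO; both are omitted here, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
subband = rng.normal(size=(8, 8))                     # stand-in for a DWT detail subband
watermark = (rng.random((8, 8)) > 0.5).astype(float)  # binary watermark

U, s, Vt = np.linalg.svd(watermark)                   # decompose the watermark
alpha = np.full(8, 0.1)                               # multiple scaling factors (MSFs)

# embed: add scaled watermark singular values to the subband's singular values
Ub, sb, Vtb = np.linalg.svd(subband)
marked = Ub @ np.diag(sb + alpha * s) @ Vtb

# extract: recover singular values from the marked subband, rebuild the watermark
_, sm, _ = np.linalg.svd(marked)
recovered = U @ np.diag((sm - sb) / alpha) @ Vt
```

The extraction is exact here because the perturbed singular values remain sorted in decreasing order; with a real image, attacks and quantization make the recovery approximate, which is where the transparency/robustness trade-off controlled by the MSFs enters.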
Domain decomposition multigrid for unstructured grids
Energy Technology Data Exchange (ETDEWEB)
Shapira, Yair
1997-01-01
A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.
Domain decomposition method for solving the neutron diffusion equation
International Nuclear Information System (INIS)
Coulomb, F.
1989-03-01
The aim of this work is to study methods for solving the neutron diffusion equation; we are interested in methods based on a classical finite element discretization and well suited for use on parallel computers. Domain decomposition methods seem to answer this need. This study deals with a decomposition of the domain. A theoretical study is carried out for Lagrange finite elements and some examples are given; in the case of mixed dual finite elements, the study is based on examples [fr]
Spatial domain decomposition for neutron transport problems
International Nuclear Information System (INIS)
Yavuz, M.; Larsen, E.W.
1989-01-01
A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)
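The interface coupling can be illustrated with a minimal sketch (not from the paper): a 1D, single-direction upwind transport sweep is split into subdomains that sweep independently, each using the neighbor's outgoing flux from the previous iteration as its incoming boundary flux. The discretization and all names are assumptions; for a single direction the decomposed iteration reproduces the global sweep after as many iterations as there are subdomains.

```python
import numpy as np

def sweep(incoming, sigma, q, dx):
    # implicit upwind sweep for psi' + sigma*psi = q, left to right
    psi = np.empty(len(q))
    prev = incoming
    for i in range(len(q)):
        prev = (prev + dx * q[i]) / (1.0 + dx * sigma[i])
        psi[i] = prev
    return psi

def decomposed_sweep(incoming, sigma, q, dx, nsub=2, iters=3):
    # sweep each subdomain independently; couple via lagged interface fluxes
    cells = np.array_split(np.arange(len(q)), nsub)
    out = np.zeros(nsub + 1)
    out[0] = incoming
    psi = np.zeros(len(q))
    for _ in range(iters):
        new_out = out.copy()
        for s, idx in enumerate(cells):
            psi[idx] = sweep(out[s], sigma[idx], q[idx], dx)
            new_out[s + 1] = psi[idx][-1]   # outgoing flux for the neighbor
        out = new_out
    return psi

sigma = np.full(10, 0.5)   # total cross section per cell
q = np.ones(10)            # source per cell
ref = sweep(1.0, sigma, q, dx=0.1)             # global, undecomposed sweep
psi = decomposed_sweep(1.0, sigma, q, dx=0.1)  # independent subdomain sweeps
```

Because each subdomain only needs the lagged interface flux, the inner sweeps can run on separate processors, which is exactly the parallelization advantage described above.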
Multilevel domain decomposition for electronic structure calculations
International Nuclear Information System (INIS)
Barrault, M.; Cances, E.; Hager, W.W.; Le Bris, C.
2007-01-01
We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure
Domain decomposition based iterative methods for nonlinear elliptic finite element problems
Energy Technology Data Exchange (ETDEWEB)
Cai, X.C. [Univ. of Colorado, Boulder, CO (United States)
1994-12-31
The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
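The linear building block of such algorithms can be sketched as follows (an illustration, not the author's code): a one-level additive Schwarz preconditioner for a 1D Poisson problem, applied inside a damped Richardson iteration. Each pass solves local Dirichlet problems on overlapping index sets and sums the zero-extended corrections; the subdomain sizes, damping factor, and names are assumptions.

```python
import numpy as np

def poisson_matrix(n, h):
    # standard 1D finite difference Laplacian with Dirichlet ends
    return (np.diag(np.full(n, 2.0))
            + np.diag(np.full(n - 1, -1.0), 1)
            + np.diag(np.full(n - 1, -1.0), -1)) / h**2

def additive_schwarz_solve(A, b, subdomains, theta=0.5, tol=1e-8, iters=800):
    # damped Richardson iteration with a one-level additive Schwarz preconditioner
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = np.zeros_like(b)
        for idx in subdomains:
            # local Dirichlet solve on each overlapping subdomain, extended by zero
            z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        x += theta * z
    return x

n = 50
A = poisson_matrix(n, 1.0 / (n + 1))
b = np.ones(n)
subdomains = [np.arange(0, 30), np.arange(20, 50)]   # generous overlap
x = additive_schwarz_solve(A, b, subdomains)
```

In the nonlinear setting of the record, an inexact Newton outer loop would call a Schwarz-preconditioned linear solver of this kind on each linearized system.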
A PARALLEL NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR STOKES PROBLEMS
Institute of Scientific and Technical Information of China (English)
Mei-qun Jiang; Pei-liang Dai
2006-01-01
A nonoverlapping domain decomposition iterative procedure is developed and analyzed for generalized Stokes problems and their finite element approximate problems in RN (N=2,3). The method is based on a mixed-type consistency condition with two parameters as a transmission condition, together with a derivative-free transmission data updating technique on the artificial interfaces. The method can be applied to a general multi-subdomain decomposition and implemented naturally on parallel machines with simple local communications.
Multiple Shooting and Time Domain Decomposition Methods
Geiger, Michael; Körkel, Stefan; Rannacher, Rolf
2015-01-01
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
Domain decomposition methods for fluid dynamics
International Nuclear Information System (INIS)
Clerc, S.
1995-01-01
A domain decomposition method for steady-state, subsonic fluid dynamics calculations, is proposed. The method is derived from the Schwarz alternating method used for elliptic problems, extended to non-linear hyperbolic problems. Particular emphasis is given on the treatment of boundary conditions. Numerical results are shown for a realistic three-dimensional two-phase flow problem with the FLICA-4 code for PWR cores. (from author). 4 figs., 8 refs
Domain decomposition methods for mortar finite elements
Energy Technology Data Exchange (ETDEWEB)
Widlund, O.
1996-12-31
In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas
2012-05-22
Computational problems involving large-scale data have recently been gaining attention due to better hardware and, hence, the higher dimensionality of images and data sets acquired in applications. In the last couple of years, non-smooth minimization problems such as total variation minimization became increasingly important for the solution of these tasks. While favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.
Domain decomposition methods for the neutron diffusion problem
International Nuclear Information System (INIS)
Guerin, P.; Baudron, A. M.; Lautard, J. J.
2010-01-01
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, simplified transport (SPN) or diffusion approximations are often used. The MINOS solver developed at CEA Saclay uses a mixed dual finite element method for the resolution of these problems and has shown its efficiency. In order to take into account the heterogeneities of the geometry, a very fine mesh is generally required, leading to expensive calculations for industrial applications. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose here two domain decomposition methods based on the MINOS solver. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. Numerical results on parallel computers are presented for the diffusion model on realistic 2D and 3D cores. (authors)
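The first approach can be illustrated with a small algebraic sketch (not the MINOS implementation): eigenmodes of local subdomain blocks of a symmetric positive definite matrix, extended by zero, form the basis for a Galerkin solve of the global problem. When every local mode is kept the reduced solution is exact; keeping only the lowest modes gives the cheap approximate synthesis. The model matrix, subdomain split, and names are assumptions.

```python
import numpy as np

def local_modes(A, idx, m):
    # lowest-m eigenmodes of the local subdomain operator, extended by zero
    _, v = np.linalg.eigh(A[np.ix_(idx, idx)])
    B = np.zeros((A.shape[0], m))
    B[idx, :] = v[:, :m]
    return B

def modal_synthesis(A, b, subdomains, m):
    # Galerkin projection of the global problem onto the local eigenmode basis;
    # lstsq handles the rank deficiency caused by overlapping subdomains
    B = np.hstack([local_modes(A, idx, m) for idx in subdomains])
    c, *_ = np.linalg.lstsq(B.T @ A @ B, B.T @ b, rcond=None)
    return B @ c

n = 20
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))                # SPD model problem
b = np.ones(n)
subdomains = [np.arange(0, 12), np.arange(8, 20)]  # overlapping blocks
x_full = modal_synthesis(A, b, subdomains, m=12)   # all local modes: exact
x_low = modal_synthesis(A, b, subdomains, m=6)     # truncated: approximate
```

The savings come from the truncated case: each subdomain contributes only a few modes, so the reduced system is much smaller than the global one, at the price of an approximation error controlled by the number of retained modes.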
International Nuclear Information System (INIS)
Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David
2015-01-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors
Domain decomposition and multilevel integration for fermions
International Nuclear Information System (INIS)
Ce, Marco; Giusti, Leonardo; Schaefer, Stefan
2016-01-01
The numerical computation of many hadronic correlation functions is exceedingly difficult due to the exponentially decreasing signal-to-noise ratio with the distance between source and sink. Multilevel integration methods, using independent updates of separate regions in space-time, are known to be able to solve such problems but have so far been available only for pure gauge theory. We present first steps in the direction of making such integration schemes amenable to theories with fermions, by factorizing a given observable via an approximated domain decomposition of the quark propagator. This allows for multilevel integration of the (large) factorized contribution to the observable, while its (small) correction can be computed in the standard way.
Domain decomposition methods and parallel computing
International Nuclear Information System (INIS)
Meurant, G.
1991-01-01
In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
Domain decomposition methods for core calculations using the MINOS solver
International Nuclear Information System (INIS)
Guerin, P.; Baudron, A. M.; Lautard, J. J.
2007-01-01
Cell-by-cell homogenized transport calculations of an entire nuclear reactor core are currently too expensive for industrial applications, even if a simplified transport (SPn) approximation is used. In order to take advantage of parallel computers, we propose here two domain decomposition methods using the mixed dual finite element solver MINOS. The first one is a modal synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second one is an iterative method based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the neighboring sub-domains estimated at the previous iteration. For these two methods, we give numerical results which demonstrate their accuracy and their efficiency for the diffusion model on realistic 2D and 3D cores. (authors)
Domain decomposition methods for solving an image problem
Energy Technology Data Exchange (ETDEWEB)
Tsui, W.K.; Tong, C.S. [Hong Kong Baptist College (Hong Kong)
1994-12-31
The domain decomposition method is a technique to break up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential equations and are consequently best suited to solving elliptic partial differential equations. In this paper, the authors apply the so-called covering preconditioner, which is based on information about the operator under investigation and is therefore suitable for various kinds of applications. Specifically, the authors apply the preconditioned domain decomposition method to solve an image restoration problem, i.e., to extract an original image which has been degraded by a known convolution process and additive Gaussian noise.
Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems
Directory of Open Access Journals (Sweden)
Pierre Jolivet
2014-01-01
Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it is shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation-based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, in which the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to produce good accuracy, yet direct methods (e.g. finite differences) are too computationally expensive. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation (SSWR) methods, which are seamless integrations of semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove convergence and provide numerical evidence of the efficiency and accuracy of these methods.
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo; Langer, Andreas; Schönlieb, Carola-Bibiane
2010-01-01
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation
Domain decomposition methods and deflated Krylov subspace iterations
Nabben, R.; Vuik, C.
2006-01-01
The balancing Neumann-Neumann (BNN) and the additive coarse grid correction (BPS) preconditioner are fast and successful preconditioners within domain decomposition methods for solving partial differential equations. For certain elliptic problems these preconditioners lead to condition numbers which
Domain Decomposition: A Bridge between Nature and Parallel Computers
1992-09-01
B., "Domain Decomposition Algorithms for Indefinite Elliptic Problems," SIAM Journal of Scientific and Statistical Computing, Vol. 13, 1992, pp... AD-A256 575 NASA Contractor Report 189709 ICASE Report No. 92-44 ICASE DOMAIN DECOMPOSITION: A BRIDGE BETWEEN NATURE AND PARALLEL COMPUTERS DTIC ...effectively implemented on distributed memory multiprocessors. In 1990 (as reported in Ref. 38 using the tile algorithm), a 103,201-unknown 2D elliptic
Simplified approaches to some nonoverlapping domain decomposition methods
Energy Technology Data Exchange (ETDEWEB)
Xu, Jinchao
1996-12-31
An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method, and other simple technical tools include "local-global" and "global-local" techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method", and the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
Implementation of domain decomposition and data decomposition algorithms in RMC code
International Nuclear Information System (INIS)
Liang, J.G.; Cai, Y.; Wang, K.; She, D.
2013-01-01
The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demands of large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominates the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a 'divide and rule' strategy: problems are divided into different sub-domains to be dealt with separately, and rules are established to ensure the overall results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced
22nd International Conference on Domain Decomposition Methods
Gander, Martin; Halpern, Laurence; Krause, Rolf; Pavarino, Luca
2016-01-01
These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures, and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune
2007-01-01
When performing operational modal analysis the dynamic loading is unknown, however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in frequency domain are estimated by ...
B-spline Collocation with Domain Decomposition Method
International Nuclear Information System (INIS)
Hidayat, M I P; Parman, S; Ariwahjoedi, B
2013-01-01
A global B-spline collocation method has been previously developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which is simply reduced to a Bézier approximation of any degree p with C^0 continuity, has led to the use of high-order B-spline bases in order to achieve high accuracy. The need for high-order B-spline bases in the global method would be more prominent in domains of large dimension. The increased number of collocation points may also lead to ill-conditioning. In this study, overlapping domain decomposition via the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy with the combination technique, and to investigate the influence of the combination technique on the B-spline basis orders needed for a given accuracy. It is shown that the combination method produced higher accuracy with B-spline bases of much lower order than required by the original method. Hence, the approximation stability of the B-spline collocation method was also increased.
Eigenvalue Decomposition-Based Modified Newton Algorithm
Directory of Open Access Journals (Sweden)
Wen-jun Wang
2013-01-01
Full Text Available When the Hessian matrix is not positive, the Newton direction may not be the descending direction. A new method named eigenvalue decomposition-based modified Newton algorithm is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the searching direction. The new searching direction is always the descending direction. The convergence of the algorithm is proven and the conclusion on convergence rate is presented qualitatively. Finally, a numerical experiment is given for comparing the convergence domains of the modified algorithm and the classical algorithm.
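The modification step described in this abstract is simple enough to sketch. Below is a minimal NumPy illustration (the function name is ours, not the paper's): the Hessian is eigendecomposed, negative eigenvalues are replaced by their absolute values, and the rebuilt matrix defines the search direction.

```python
import numpy as np

def modified_newton_direction(hessian, gradient):
    """Compute a descent direction by flipping negative Hessian eigenvalues.

    Sketch of the eigenvalue-decomposition-based modified Newton step:
    eigendecompose H, replace negative eigenvalues with their absolute
    values, reconstruct H, and solve for the search direction.
    """
    eigvals, eigvecs = np.linalg.eigh(hessian)       # H = V diag(w) V^T
    eigvals = np.abs(eigvals)                        # flip negative curvature
    h_mod = eigvecs @ np.diag(eigvals) @ eigvecs.T   # reconstructed Hessian
    return -np.linalg.solve(h_mod, gradient)         # modified Newton direction
```

Since the rebuilt Hessian is positive definite (assuming no zero eigenvalues), the returned direction always satisfies the descent condition g·d < 0.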
Directory of Open Access Journals (Sweden)
MOHAMMED FAIZ ABOALMAALY
2014-10-01
Full Text Available With the continuous revolution of multicore architectures, several parallel programming platforms have been introduced to pave the way for fast and efficient development of parallel algorithms. Broadly, parallel computing takes two forms: Data-Level Parallelism (DLP) or Task-Level Parallelism (TLP). The former distributes data among the available processing elements, while the latter executes independent tasks concurrently. Most parallel programming platforms have built-in techniques to distribute data among processors; these techniques are known as automatic distribution (scheduling). However, owing to the wide range of purposes, variation in data types, amount of distributed data, possible extra computational overhead, and other hardware-dependent factors, manual distribution can achieve better performance than automatic distribution. In this paper, this assumption is investigated by comparing automatic distribution with our newly proposed manual distribution of data among threads. Empirical results for matrix addition and matrix multiplication show a considerable performance gain when manual distribution is applied instead of automatic distribution.
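As a toy illustration of the manual-distribution idea (not the authors' code), the sketch below splits matrix addition into contiguous row blocks, one per worker thread, instead of leaving the partitioning to the runtime. Note that in CPython the GIL limits pure-Python speedups (NumPy releases it for array operations), so this shows the distribution pattern rather than guaranteed gains.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def add_manual(a, b, n_workers=4):
    """Matrix addition with manual (explicit) row-block distribution.

    Rows are split into contiguous blocks, one per worker, so the
    programmer controls which thread owns which slice of the data.
    """
    out = np.empty_like(a)
    # Block boundaries: n_workers contiguous row ranges covering all rows.
    bounds = np.linspace(0, a.shape[0], n_workers + 1, dtype=int)

    def work(lo, hi):                 # each worker owns one row block
        out[lo:hi] = a[lo:hi] + b[lo:hi]

    with ThreadPoolExecutor(n_workers) as pool:
        list(pool.map(work, bounds[:-1], bounds[1:]))
    return out
```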
Domain Decomposition Solvers for Frequency-Domain Finite Element Equations
Copeland, Dylan; Kolmbauer, Michael; Langer, Ulrich
2010-01-01
The paper is devoted to fast iterative solvers for frequency-domain finite element equations approximating linear and nonlinear parabolic initial boundary value problems with time-harmonic excitations. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple linear elliptic system for the amplitudes belonging to the sine- and to the cosine-excitation or a large nonlinear elliptic system for the Fourier coefficients in the linear and nonlinear case, respectively. The fast solution of the corresponding linear and nonlinear system of finite element equations is crucial for the competitiveness of this method. © 2011 Springer-Verlag Berlin Heidelberg.
Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu
2018-04-01
A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) it avoids a double-loop iteration algorithm, which generally has large computational complexity, and (2) it accounts for the local concentration of nonlinear deformation observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors
Energy Technology Data Exchange (ETDEWEB)
Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep
2014-12-01
The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
International Nuclear Information System (INIS)
Guerin, P.
2007-12-01
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, which leads to an eigenvalue problem in the steady-state case. Among deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver, based on a mixed dual finite element method, has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmodes of a local problem solved by MINOS on each sub-domain are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Energy Technology Data Exchange (ETDEWEB)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
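The Lloyd step at the heart of the CVT construction can be sketched for a generic point set as follows. This is a minimal illustration under a uniform-density assumption (function name and details are ours; the paper's Voronoi Particle dynamics and load-balancing equation of state are not modeled here): each computational element is assigned to its nearest Voronoi site, then each site moves to the centroid of its cell, monotonically decreasing the CVT energy.

```python
import numpy as np

def lloyd_partition(points, n_parts, iters=50, seed=0):
    """Lloyd iteration toward a Centroidal Voronoi Tessellation.

    Alternates two steps: (1) Voronoi assignment of points to the
    nearest site, (2) moving each site to the centroid of its cell.
    Returns the final sites and the partition label of each point.
    """
    rng = np.random.default_rng(seed)
    # Seed the sites with randomly chosen points (copy via fancy indexing).
    sites = points[rng.choice(len(points), n_parts, replace=False)]
    for _ in range(iters):
        # Distance from every point to every site -> Voronoi assignment.
        d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
        owner = d.argmin(axis=1)
        for k in range(n_parts):
            members = points[owner == k]
            if len(members):                  # centroid (Lloyd) update
                sites[k] = members.mean(axis=0)
    return sites, owner
```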
Europlexus: a domain decomposition method in explicit dynamics
International Nuclear Information System (INIS)
Faucher, V.; Hariddh, Bung; Combescure, A.
2003-01-01
Explicit time integration methods are used in structural dynamics to simulate fast transient phenomena, such as impacts or explosions. A very fine analysis is required in the vicinity of the loading areas but extending the same method, and especially the same small time-step, to the whole structure frequently yields excessive calculation times. We thus perform a dual Schur domain decomposition, to divide the global problem into several independent ones, to which is added a reduced size interface problem, to ensure connections between sub-domains. Each sub-domain is given its own time-step and its own mesh fineness. Non-matching meshes at the interfaces are handled. An industrial example demonstrates the interest of our approach. (authors)
A TFETI domain decomposition solver for elastoplastic problems
Czech Academy of Sciences Publication Activity Database
Čermák, M.; Kozubek, T.; Sysala, Stanislav; Valdman, J.
2014-01-01
Roč. 231, č. 1 (2014), s. 634-653 ISSN 0096-3003 Institutional support: RVO:68145535 Keywords : elastoplasticity * Total FETI domain decomposition method * Finite element method * Semismooth Newton method Subject RIV: BA - General Mathematics Impact factor: 1.551, year: 2014 http://ac.els-cdn.com/S0096300314000253/1-s2.0-S0096300314000253-main.pdf?_tid=33a29cf4-996a-11e3-8c5a-00000aacb360&acdnat=1392816896_4584697dc26cf934dcf590c63f0dbab7
Neutron transport solver parallelization using a Domain Decomposition method
International Nuclear Information System (INIS)
Van Criekingen, S.; Nataf, F.; Have, P.
2008-01-01
A domain decomposition (DD) method is investigated for the parallel solution of the second-order even-parity form of the time-independent Boltzmann transport equation. The spatial discretization is performed using finite elements, and the angular discretization using spherical harmonic expansions (P N method). The main idea developed here is due to P.L. Lions. It consists in having sub-domains exchanging not only interface point flux values, but also interface flux 'derivative' values. (The word 'derivative' is here used with quotes, because in the case considered here, it in fact consists in the Ω.∇ operator, with Ω the angular variable vector and ∇ the spatial gradient operator.) A parameter α is introduced, as proportionality coefficient between point flux and 'derivative' values. This parameter can be tuned - so far heuristically - to optimize the method. (authors)
Energy Technology Data Exchange (ETDEWEB)
Feng, Xiaobing [Univ. of Tennessee, Knoxville, TN (United States)
1996-12-31
A non-overlapping domain decomposition iterative method is proposed and analyzed for mixed finite element methods for a sequence of noncoercive elliptic systems with radiation boundary conditions. These differential systems describe the motion of a nearly elastic solid in the frequency domain. The convergence of the iterative procedure is demonstrated, and the rate of convergence is derived for the case when the domain is decomposed into subdomains in which each subdomain consists of an individual element associated with the mixed finite elements. The hybridization of mixed finite element methods plays an important role in the construction of the discrete procedure.
Dictionary-Based Tensor Canonical Polyadic Decomposition
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of the extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which forces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored in terms of both parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
Domain decomposition techniques for boundary elements application to fluid flow
Brebbia, C A; Skerget, L
2007-01-01
The sub-domain techniques in the BEM are nowadays finding their place in the toolbox of numerical modellers, especially when dealing with complex 3D problems. We see their main application in conjunction with the classical single-domain BEM approach: part of the domain is solved with the classical single-domain BEM, and part with a BEM sub-domain technique. In the past this has usually been done by coupling the BEM with the FEM; however, it is much more efficient to use a combination of the BEM and a BEM sub-domain technique. The advantage arises from the simplicity of coupling the single-domain and multi-domain solutions, and from the fact that only one formulation needs to be developed, rather than two separate formulations based on different techniques. There are still possibilities for improving the BEM sub-domain techniques. However, considering the increased interest and research in this approach we believe that BEM sub-do...
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
Zheng, Xiang; Yang, Chao; Cai, Xiaochuan; Keyes, David E.
2015-01-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation
Energy Technology Data Exchange (ETDEWEB)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)
2005-07-01
This paper describes a recently-developed extension of our 'multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, Cartesian meshes). The coupling method is then applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh for a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient, and flux error. (authors)
Analysis of generalized Schwarz alternating procedure for domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)
1996-12-31
The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the overlap size between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM. (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
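The simplest instance of the transmission condition ∂u/∂n + Λu, with Λ reduced to a positive scalar p, can be sketched in 1D. Below is a minimal non-overlapping two-subdomain iteration for -u'' = 1 on (0,1) with homogeneous Dirichlet boundary conditions (exact solution x(1-x)/2). All discretization choices (finite differences, one-sided interface derivatives, the value of p) are ours, for illustration only.

```python
import numpy as np

def robin_schwarz_1d(n=50, p=1.0, iters=30):
    """Non-overlapping Schwarz iteration with Robin transmission data
    du/dn + p*u exchanged at the interface, for -u'' = 1 on (0,1)."""
    h = 1.0 / n
    m = n // 2                                   # interface node index
    x = np.linspace(0.0, 1.0, n + 1)
    u1 = np.zeros(m + 1)                         # left subdomain, nodes 0..m
    u2 = np.zeros(n - m + 1)                     # right subdomain, nodes m..n

    def subdomain_solve(npts, g, robin_at_right):
        """Solve -u'' = 1 with one Dirichlet end and one Robin end
        enforcing du/dn + p*u = g (n = outward normal)."""
        A = np.zeros((npts + 1, npts + 1))
        b = np.full(npts + 1, 1.0)
        for i in range(1, npts):                 # interior FD rows
            A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
            A[i, i] = 2.0 / h**2
        if robin_at_right:                       # left subdomain
            A[0, 0], b[0] = 1.0, 0.0             # Dirichlet u(0) = 0
            A[npts, npts] = 1.0 / h + p          # (u_m - u_{m-1})/h + p u_m = g
            A[npts, npts - 1] = -1.0 / h
            b[npts] = g
        else:                                    # right subdomain
            A[0, 0] = 1.0 / h + p                # -(u_1 - u_0)/h + p u_0 = g
            A[0, 1] = -1.0 / h
            b[0] = g
            A[npts, npts], b[npts] = 1.0, 0.0    # Dirichlet u(1) = 0
        return np.linalg.solve(A, b)

    for _ in range(iters):
        g1 = (u2[1] - u2[0]) / h + p * u2[0]       # Robin data from the right
        u1 = subdomain_solve(m, g1, True)
        g2 = -(u1[-1] - u1[-2]) / h + p * u1[-1]   # Robin data from the left
        u2 = subdomain_solve(n - m, g2, False)

    return x, np.concatenate([u1, u2[1:]])
```

With p > 0 the iteration converges despite the zero overlap, which is exactly the mechanism question (1) above asks about; the classical Dirichlet-only SAM would stall in this non-overlapping setting.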
Simulation of two-phase flows by domain decomposition
International Nuclear Information System (INIS)
Dao, T.H.
2013-01-01
This thesis deals with numerical simulations of compressible fluid flows by implicit finite volume methods. Firstly, we studied and implemented an implicit version of the Roe scheme for compressible single-phase and two-phase flows. Thanks to Newton method for solving nonlinear systems, our schemes are conservative. Unfortunately, the resolution of nonlinear systems is very expensive. It is therefore essential to use an efficient algorithm to solve these systems. For large size matrices, we often use iterative methods whose convergence depends on the spectrum. We have studied the spectrum of the linear system and proposed a strategy, called Scaling, to improve the condition number of the matrix. Combined with the classical ILU pre-conditioner, our strategy has reduced significantly the GMRES iterations for local systems and the computation time. We also show some satisfactory results for low Mach-number flows using the implicit centered scheme. We then studied and implemented a domain decomposition method for compressible fluid flows. We have proposed a new interface variable which makes the Schur complement method easy to build and allows us to treat diffusion terms. Using GMRES iterative solver rather than Richardson for the interface system also provides a better performance compared to other methods. We can also decompose the computational domain into any number of sub-domains. Moreover, the Scaling strategy for the interface system has improved the condition number of the matrix and reduced the number of GMRES iterations. In comparison with the classical distributed computing, we have shown that our method is more robust and efficient. (author) [fr
Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP
Chan, Tony F.; Fatoohi, Rod A.
1990-01-01
The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.
International Nuclear Information System (INIS)
Azmy, Y.Y.
1997-01-01
The effect of three communication schemes for solving Arbitrarily High Order Transport (AHOT) methods of the Nodal type on parallel performance is examined via direct measurements and performance models. The target architecture in this study is Oak Ridge National Laboratory's 128-node Paragon XP/S 5 computer, and the parallelization is based on the Parallel Virtual Machine (PVM) library. However, the conclusions reached can be easily generalized to a large class of message-passing platforms and communication software. The three schemes considered here are: (1) PVM's global operations (broadcast and reduce), which utilize the Paragon's native corresponding operations based on spanning-tree routing; (2) the Bucket algorithm, wherein the angular domain decomposition of the mesh sweep is complemented with a spatial domain decomposition of the accumulation of the scalar flux from the angular flux and of the convergence test; (3) a distributed-memory version of the Bucket algorithm that pushes the spatial domain decomposition one step further by actually distributing the fixed source and flux iterates over the memories of the participating processes. The conclusion is that the Bucket algorithm is the most efficient of the three if all participating processes have sufficient memory to hold the entire problem arrays. Otherwise, the third scheme becomes necessary, at an additional cost to speedup and parallel efficiency that is quantifiable via the parallel performance model.
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai
1998-01-01
This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving the sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using the MPI message-passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations and the Euler equations governing compressible fluid flow to demonstrate the performance of the preconditioning algorithm.
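The 2 x 2 block elimination described in the abstract can be sketched in a few lines of pure Python. The small dense matrices below are illustrative stand-ins, not the triangulation-induced subdomain/interface blocks of the paper; all function names are ours.

```python
# Hedged sketch: exact Schur complement elimination for the block system
# [A B; C D][x1; x2] = [f1; f2], with S = D - C A^{-1} B coupling the
# interface unknowns x2.

def gauss_solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    T = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(T[r][col]))
        T[col], T[piv] = T[piv], T[col]
        for r in range(col + 1, n):
            fac = T[r][col] / T[col][col]
            for c in range(col, n + 1):
                T[r][c] -= fac * T[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (T[r][n] - sum(T[r][c] * x[c] for c in range(r + 1, n))) / T[r][r]
    return x

def schur_solve(A, B, C, D, f1, f2):
    """Solve the block system by eliminating the subdomain unknowns x1 first."""
    n1, n2 = len(A), len(D)
    # columns of A^{-1} B, then transpose back to row-major storage
    cols = [gauss_solve(A, [B[i][j] for i in range(n1)]) for j in range(n2)]
    AinvB = [[cols[j][i] for j in range(n2)] for i in range(n1)]
    Ainvf1 = gauss_solve(A, f1)
    # Schur complement S = D - C A^{-1} B acts on the interface unknowns
    S = [[D[i][j] - sum(C[i][k] * AinvB[k][j] for k in range(n1)) for j in range(n2)]
         for i in range(n2)]
    g = [f2[i] - sum(C[i][k] * Ainvf1[k] for k in range(n1)) for i in range(n2)]
    x2 = gauss_solve(S, g)                                   # interface solve
    x1 = gauss_solve(A, [f1[i] - sum(B[i][j] * x2[j] for j in range(n2))
                         for i in range(n1)])                # subdomain back-substitution
    return x1, x2

A = [[4.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
C = [[0.0, 1.0], [1.0, 0.0]]
D = [[5.0, 0.0], [0.0, 5.0]]
x1, x2 = schur_solve(A, B, C, D, [1.0, 2.0], [3.0, 4.0])
```

The paper's approximate-Schur preconditioners replace the exact `A^{-1}` applications above with cheaper surrogates; this sketch shows only the exact elimination they approximate.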
Mechanical and assembly units of viral capsids identified via quasi-rigid domain decomposition.
Directory of Open Access Journals (Sweden)
Guido Polles
Full Text Available Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available.
Energy Technology Data Exchange (ETDEWEB)
Guerin, P
2007-12-15
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem, solved by MINOS on each sub-domain, are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
International Nuclear Information System (INIS)
Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit
2017-01-01
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and though these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iteration. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalability of these algorithms is presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2006-01-01
The presence of harmonic components in the measured responses is unavoidable in many applications of Operational Modal Analysis. This is especially true when measuring on mechanical structures containing rotating or reciprocating parts. This paper describes a new method based on the popular Enhanced Frequency Domain Decomposition technique for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces. Good agreement is found and the method is proven to be an easy-to-use and robust tool for handling responses with deterministic and stochastic content.
Parallel algorithms for nuclear reactor analysis via domain decomposition method
International Nuclear Information System (INIS)
Kim, Yong Hee
1995-02-01
In this thesis, the neutron diffusion equation of reactor physics is discretized by the finite difference method and solved on a parallel computer network composed of T-800 transputers. The T-800 transputer is a message-passing MIMD (multiple instruction streams, multiple data streams) architecture. A parallel variant of the Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides convergence analysis and improvement of the convergence of the algorithm. The convergence of the parallel Schwarz algorithms with DN (or ND), DD, NN, and mixed pseudo-boundary conditions (a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in the two-subdomain case and various underlying features are explored. The analysis shows that the convergence rate of the algorithm depends strongly on the pseudo-boundary conditions, and that the theoretically best one is the mixed boundary conditions (MM conditions). It is also shown that there may be a significant discrepancy between the continuous model analysis and the discrete model analysis. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation of the pseudo-boundary conditions is introduced and a convergence analysis of the algorithm for the two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and that any relaxation (under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
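A minimal serial sketch of the Schwarz alternating procedure with Dirichlet (DD) pseudo-boundary conditions can be given on a 1-D model diffusion problem; the grid sizes and overlap below are illustrative choices of ours, not the thesis' transputer setup or its mixed (MM) conditions.

```python
# Schwarz alternating procedure with Dirichlet (DD) pseudo-boundary conditions
# on two overlapping subdomains of the model problem -u'' = 1 on (0, 1),
# u(0) = u(1) = 0 (exact solution u(x) = x(1-x)/2).

def solve_dirichlet(h, f, ul, ur):
    """Thomas algorithm for -u'' = f on a uniform grid with boundary values ul, ur."""
    n = len(f)
    d = [h * h * v for v in f]
    d[0] += ul
    d[-1] += ur
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, d[0] / 2.0          # stencil (-1, 2, -1)
    for i in range(1, n):
        m = 2.0 + cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (d[i] + dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

N = 39
h = 1.0 / (N + 1)
f = [1.0] * N
u = [0.0] * N                    # initial iterate
i_right, i_left = 24, 14         # Omega1 = nodes 0..23, Omega2 = nodes 14..38 (overlap)
for _ in range(50):
    # Omega1: right pseudo-boundary value taken from the current iterate at node 24
    u[:i_right] = solve_dirichlet(h, f[:i_right], 0.0, u[i_right])
    # Omega2: left pseudo-boundary value from the just-updated iterate at node 13
    u[i_left:] = solve_dirichlet(h, f[i_left:], u[i_left - 1], 0.0)
```

The DN, NN and relaxed variants analysed in the thesis change only the interface condition used in these two subdomain solves; the second-order stencil is exact for the quadratic solution here, so the iterate converges to the exact samples.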
Steganography based on pixel intensity value decomposition
Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.
2014-05-01
This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between the payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in the 1st least significant bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., the 2nd LSB to the 8th most significant bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme affects pixel values less, in terms of quality, than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
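The binary baseline that the evaluated schemes generalize can be sketched directly; the paper's own 16-plane representation is not reproduced here, and the function names are ours.

```python
# Hedged sketch of the standard binary bit-plane decomposition and classic
# 1st-LSB embedding against which the Fibonacci/Lucas/CF schemes are compared.

def bit_planes(pixel, n=8):
    """Split an 8-bit intensity into binary bit-planes, LSB first."""
    return [(pixel >> k) & 1 for k in range(n)]

def recompose(planes):
    """The weighted sum of the planes recovers the intensity exactly."""
    return sum(bit << k for k, bit in enumerate(planes))

def embed_lsb(pixel, secret_bit):
    """1st-LSB embedding: overwrite the lowest plane with a secret bit."""
    return (pixel & ~1) | secret_bit
```

The stego-quality trade-off discussed above comes from how much a one-bit change in plane k perturbs the recomposed value: 2^k in the binary scheme, less in the redundant representations.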
Multiscale analysis of damage using dual and primal domain decomposition techniques
Lloberas-Valls, O.; Everdij, F.P.X.; Rixen, D.J.; Simone, A.; Sluys, L.J.
2014-01-01
In this contribution, dual and primal domain decomposition techniques are studied for the multiscale analysis of failure in quasi-brittle materials. The multiscale strategy essentially consists in decomposing the structure into a number of nonoverlapping domains and considering a refined spatial
Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner
International Nuclear Information System (INIS)
Subber, Waad; Sarkar, Abhijit
2012-01-01
For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified and analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. To keep particle tracking valid during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method
Directory of Open Access Journals (Sweden)
Qing-He Yao
2014-01-01
Full Text Available The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated by comparison with experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and oscillations appeared in past research reported by other researchers. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large scale computation.
Empirical projection-based basis-component decomposition method
Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland
2009-02-01
Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time-consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
Modal Identification from Ambient Responses using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
Modal Identification from Ambient Responses Using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, Lingmi; Andersen, Palle
2000-01-01
In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical...
A non overlapping parallel domain decomposition method applied to the simplified transport equations
International Nuclear Information System (INIS)
Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.
2009-01-01
A reactivity computation requires computing the highest eigenvalue of a generalized eigenvalue problem. An inverse power algorithm is commonly used. Very fine models are difficult to tackle with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires a low development effort, as the inner multigroup solver can be re-used without modification, and allows us to adapt the numerical resolution locally (mesh, finite element order). Numerical results are obtained by a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)
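The outer eigenvalue loop mentioned above can be illustrated with a plain power iteration on a small dense stand-in matrix; the actual code applies an *inverse* power iteration, in which each step is the (domain-decomposed) linear solve the abstract discusses rather than the matrix-vector product used here.

```python
# Hedged sketch of the outer eigenvalue loop: plain power iteration for the
# dominant eigenpair of a small dense matrix (an illustrative stand-in, not
# the simplified transport operator).

def power_iteration(A, sweeps=200):
    """Dominant eigenvalue and eigenvector of a small dense matrix."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(sweeps):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)        # infinity-norm eigenvalue estimate
        x = [v / lam for v in y]     # renormalize the iterate
    return lam, x

lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

For inverse power iteration one would replace the matrix-vector product by a linear solve with A, which is exactly the step the paper approximates with its non-overlapping domain decomposition.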
Energy Technology Data Exchange (ETDEWEB)
Maliassov, S.Y. [Texas A&M Univ., College Station, TX (United States)]
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that constants of spectral equivalence are independent of values of coefficients and mesh step size.
International Nuclear Information System (INIS)
Tang Shaojie; Tang Xiangyang
2012-01-01
Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of the inter-view sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of "salt-and-pepper" noise and mosaic artifacts can be avoided. Conclusions: Since the inter-view sampling rate is taken into account in the projection domain
Domain decomposition solvers for nonlinear multiharmonic finite element equations
Copeland, D. M.
2010-01-01
In many practical applications, for instance, in computational electromagnetics, the excitation is time-harmonic. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear problems. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series and approximating the Fourier coefficients by finite elements, we arrive at a large-scale coupled nonlinear system for determining the finite element approximation to the Fourier coefficients. The construction of fast solvers for such systems is crucial for the efficiency of this multiharmonic approach. In this paper we look at nonlinear, time-harmonic potential problems as simple model problems. We construct and analyze almost optimal solvers for the Jacobi systems arising from the Newton linearization of the large-scale coupled nonlinear system that one has to solve instead of performing the expensive time-integration procedure. © 2010 de Gruyter.
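The truncated Fourier ansatz described above can be written out explicitly; the notation (angular frequency \(\omega\), truncation index \(N\), coefficient functions \(u_k^c, u_k^s\)) is generic and assumed, not taken from the paper:

```latex
u(x,t) \;\approx\; u_0(x) \;+\; \sum_{k=1}^{N}\Big[\, u_k^{c}(x)\cos(k\omega t) \;+\; u_k^{s}(x)\sin(k\omega t) \,\Big]
```

Substituting this ansatz into the nonlinear time-harmonic problem and testing against each harmonic couples all coefficient functions into the single large nonlinear system whose Newton (Jacobi) linearizations the paper preconditions.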
Domain decomposition parallel computing for transient two-phase flow of nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)
2016-05-15
KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal hydraulics analysis of light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message Passing Interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods with the METIS library. For effective memory management, the Compressed Sparse Row (CSR) format is adopted, which is one of the methods to represent a sparse asymmetric matrix. The CSR format stores only the non-zero values and their positions (row and column). By performing verification for the fundamental problem set, the parallelization of the CUPID code has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for fine mesh systems, three different scales of mesh system are considered: 40000 meshes for the coarse mesh system, 320000 meshes for the mid-size mesh system, and 2560000 meshes for the fine mesh system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioners for the matrix solver were compared: a diagonal and an incomplete LU preconditioner. To enhance the parallel performance, OpenMP and MPI hybrid parallel computing for the pressure solver was examined. It is revealed that the scalability of the hybrid calculation was enhanced for multi-core parallel computation.
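The CSR layout described above can be sketched in a few lines; the function names are ours and the matrix is a tiny stand-in, not CUPID's pressure system.

```python
# Hedged sketch of the Compressed Sparse Row (CSR) format: only non-zero
# values are stored, together with their column indices and per-row offsets.

def dense_to_csr(A):
    """Convert a dense row-major matrix to (values, column indices, row pointers)."""
    vals, cols, rowptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0.0:
                vals.append(v)
                cols.append(j)
        rowptr.append(len(vals))      # one offset per finished row
    return vals, cols, rowptr

def csr_matvec(vals, cols, rowptr, x):
    """y = A x touching only the stored non-zeros."""
    return [sum(vals[k] * x[cols[k]] for k in range(rowptr[i], rowptr[i + 1]))
            for i in range(len(rowptr) - 1)]

vals, cols, rowptr = dense_to_csr([[4.0, 0.0, 1.0],
                                   [0.0, 3.0, 0.0],
                                   [2.0, 0.0, 5.0]])
```

For a sparse asymmetric matrix this reduces storage from O(n^2) to O(nnz), which is the memory saving the abstract refers to.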
International Nuclear Information System (INIS)
Haeberlein, F.
2011-01-01
Reactive transport modelling is a basic tool to model chemical reactions and flow processes in porous media. A totally reduced multi-species reactive transport model including kinetic and equilibrium reactions is presented. A structured numerical formulation is developed and different numerical approaches are proposed. Domain decomposition methods offer the possibility to split large problems into smaller subproblems that can be treated in parallel. The class of Schwarz-type domain decomposition methods that have proved to be high-performing algorithms in many fields of applications is presented with a special emphasis on the geometrical viewpoint. Numerical issues for the realisation of geometrical domain decomposition methods and transmission conditions in the context of finite volumes are discussed. We propose and validate numerically a hybrid finite volume scheme for advection-diffusion processes that is particularly well-suited for the use in a domain decomposition context. Optimised Schwarz waveform relaxation methods are studied in detail on a theoretical and numerical level for a two species coupled reactive transport system with linear and nonlinear coupling terms. Well-posedness and convergence results are developed and the influence of the coupling term on the convergence behaviour of the Schwarz algorithm is studied. Finally, we apply a Schwarz waveform relaxation method on the presented multi-species reactive transport system. (author)
Energy Technology Data Exchange (ETDEWEB)
Gaiffe, St
2000-03-23
In this thesis, we are interested in the modeling of fluid flow through porous media with 2-D and 3-D unstructured meshes, and in the use of domain decomposition methods. The behavior of flow through porous media is strongly influenced by heterogeneities: either large-scale lithological discontinuities or quite localized phenomena such as fluid flow in the neighbourhood of wells. In these two typical cases, an accurate consideration of the singularities requires the use of adapted meshes. After having shown the limits of classic meshes, we present the prospects offered by hybrid and flexible meshes. Next, we consider the generalization possibilities of the numerical schemes traditionally used in reservoir simulation and identify two available approaches: mixed finite elements and U-finite volumes. Since the investigated phenomena are also characterized by different time scales, special treatments in terms of time discretization on various parts of the domain are required. We think that the combination of domain decomposition methods with operator splitting techniques may provide a promising approach to obtain high flexibility for local time-step management. Consequently, we develop a new numerical scheme for linear parabolic equations which allows higher flexibility in the local management of space and time steps. To conclude, a priori estimates and error estimates on the two variables of interest, namely the pressure and the velocity, are proposed. (author)
International Nuclear Information System (INIS)
Girardi, E.; Ruggieri, J.M.
2003-01-01
The aim of this paper is to present the latest developments of a domain decomposition method applied to reactor core calculations. In this method, two kinds of balance equations, treated by two different numerical methods dealing with two different unknowns, are coupled. In the first part, the two balance transport equations (first-order and second-order) are presented with the corresponding numerical methods: the Variational Nodal Method and the Discrete Ordinate Nodal Method. In the second part, the Multi-Method/Multi-Domain algorithm is introduced by applying the Schwarz domain decomposition to the multigroup eigenvalue problem of the transport equation. The resulting algorithm is then provided. The projection operators used to couple the two methods are detailed in the last part of the paper. Finally, some preliminary numerical applications on benchmarks are given, showing encouraging results. (authors)
Parallel finite elements with domain decomposition and its pre-processing
International Nuclear Information System (INIS)
Yoshida, A.; Yagawa, G.; Hamada, S.
1993-01-01
This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has drastically changed in recent years. In particular, parallel processing on massively parallel computers or computer networks is considered to be a promising technique. In order to achieve high efficiency in such a parallel computation environment, large task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed the domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system for it. (author)
Exterior domain problems and decomposition of tensor fields in weighted Sobolev spaces
Schwarz, Günter
1996-01-01
The Hodge decomposition is a useful tool for tensor analysis on compact manifolds with boundary. This paper aims at generalising the decomposition to exterior domains G ⊂ ℝ^n. Let L²_a Ω^k(G) be the space of weighted square-integrable differential forms with weight function (1 + |x|²)^a, let d_a be the weighted perturbation of the exterior derivative and δ_a its adjoint. Then L²_a Ω^k(G) splits into the orthogonal sum of the subspaces of the d_a-exact forms with vanishi...
Pitfalls in VAR based return decompositions: A clarification
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...
International Nuclear Information System (INIS)
Zerr, R.J.; Azmy, Y.Y.
2010-01-01
A spatial domain decomposition with a parallel block Jacobi solution algorithm has been developed based on the integral transport matrix formulation of the discrete ordinates approximation for solving the within-group transport equation. The new methodology abandons the typical source iteration scheme and solves directly for the fully converged scalar flux. Four matrix operators are constructed based upon the integral form of the discrete ordinates equations. A single differential mesh sweep is performed to construct these operators. The method is parallelized by decomposing the problem domain into several smaller sub-domains, each treated as an independent problem. The scalar flux of each sub-domain is solved exactly given incoming angular flux boundary conditions. Sub-domain boundary conditions are updated iteratively, and convergence is achieved when the scalar flux error in all cells meets a pre-specified convergence criterion. The method has been implemented in a computer code that was then employed for strong scaling studies of the algorithm's parallel performance via a fixed-size problem in tests ranging from one domain up to one cell per sub-domain. Results indicate that the best parallel performance compared to source iterations occurs for optically thick, highly scattering problems, the variety that is most difficult for the traditional SI scheme to solve. Moreover, the minimum execution time occurs when each sub-domain contains a total of four cells. (authors)
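At the linear-algebra level, the sub-domain iteration described above can be caricatured as a block Jacobi method: each diagonal (sub-domain) block is solved exactly, while coupling to the other blocks is lagged one iteration until the boundary data converge. The matrix and block sizes below are illustrative assumptions, not the integral transport operators of the paper.

```python
import numpy as np

def block_jacobi(A, b, blocks, n_iter=200):
    """Each index block ('sub-domain') is solved exactly; contributions from
    the other blocks are taken from the previous iterate."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x_new = np.empty_like(x)
        for idx in blocks:
            Aii = A[np.ix_(idx, idx)]
            # residual with off-block coupling lagged at the old iterate
            r = b[idx] - A[idx] @ x + Aii @ x[idx]
            x_new[idx] = np.linalg.solve(Aii, r)
        x = x_new
    return x

# Illustrative strictly diagonally dominant system (guarantees convergence)
n = 20
A = (np.diag(3.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
b = np.ones(n)
blocks = [list(range(i, i + 5)) for i in range(0, n, 5)]  # four "sub-domains"
x = block_jacobi(A, b, blocks)
```

Because each block solve is independent within a sweep, the inner loop is the part that parallelizes naturally, one block per processor.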
Energy Technology Data Exchange (ETDEWEB)
Salque, B
1998-07-01
This work deals with the radiosity equation, which describes the transport of light energy through a diffuse medium; its resolution enables us to simulate the presence of light sources. The radiosity equation is an integral equation which admits a unique solution in realistic cases. The different solution methods are reviewed. The radiosity equation cannot be formulated as the integral form of a classical partial differential equation, but this work shows that the technique of domain decomposition can be successfully applied to it if the approach is framed by physical considerations. The method provides a system of independent equations valid for each sub-domain, whose main parameter is the luminance. Some numerical examples give an idea of the convergence of the algorithm. The method is applied to the optimization of the shape of a light reflector.
Energy Technology Data Exchange (ETDEWEB)
Flauraud, E.
2004-05-01
In this thesis, we are interested in using domain decomposition methods for solving fluid flows in faulted porous media. This study comes within the framework of sedimentary basin modeling, whose aim is to predict the presence of possible oil fields in the subsoil. A sedimentary basin is regarded as a heterogeneous porous medium in which fluid flows (water, oil, gas) occur. It is often subdivided into several blocks separated by faults. These faults create discontinuities that have a tremendous effect on the fluid flow in the basin. In this work, we present two approaches to model faults from the mathematical point of view. The first approach consists in considering faults as sub-domains, in the same way as blocks but with their own geological properties. However, because of the very small width of the faults in comparison with the size of the basin, the second and new approach consists in considering faults no longer as sub-domains but as interfaces between the blocks. A mathematical study of the two models is carried out in order to investigate the existence and uniqueness of solutions. Then, we are interested in using domain decomposition methods for solving the previous models. The main part of this study is devoted to the design of Robin interface conditions and to the formulation of the interface problem. The Schwarz algorithm can be seen as a Jacobi method for solving the interface problem. In order to speed up the convergence, this problem can be solved by a Krylov-type algorithm (BiCGStab). We discretize the equations with a finite volume scheme, and perform extensive numerical tests to compare the different methods. (author)
International Nuclear Information System (INIS)
Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.
2010-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which
Moussawi, Ali
2015-02-24
Summary: The post-treatment of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.
International Nuclear Information System (INIS)
Previti, Alberto; Furfaro, Roberto; Picca, Paolo; Ganapol, Barry D.; Mostacci, Domiziano
2011-01-01
This paper deals with finding accurate solutions to photon transport problems in highly heterogeneous media quickly, efficiently and with modest memory resources. We propose an extended version of the analytical discrete ordinates method, coupled with domain decomposition-derived algorithms and non-linear convergence acceleration techniques. Numerical performances are evaluated using a challenging case study available in the literature. A study of accuracy versus computational time and memory requirements is reported for transport calculations that are relevant for remote sensing applications.
Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver
Czech Academy of Sciences Publication Activity Database
Kůs, Pavel; Šístek, Jakub
2017-01-01
Roč. 110, August (2017), s. 34-54 ISSN 0965-9978 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : adaptive mesh refinement * parallel algorithms * domain decomposition Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016 http://www.sciencedirect.com/science/article/pii/S0965997816305737
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2010-01-01
Roč. 5910, - (2010), s. 76-83 ISSN 0302-9743. [International Conference on Large-Scale Scientific Computations, LSSC 2009 /7./. Sozopol, 04.06.2009-08.06.2009] R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z30860518 Keywords : additive matrix * condition number * domain decomposition Subject RIV: BA - General Mathematics www.springerlink.com
Energy Technology Data Exchange (ETDEWEB)
Jemcov, A.; Matovic, M.D. [Queen's Univ., Kingston, Ontario (Canada)
1996-12-31
This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second-order self-adjoint elliptic operator (Poisson equation) with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows a sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is by ordering the matrix of the discrete differential operator in subdomain node groups and then block-eliminating the interface nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem for the subdomain interface.
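The block elimination described above can be sketched as follows. The small SPD matrix is a stand-in assumption for an actual discretized operator, ordered as [interior nodes; interface nodes]:

```python
import numpy as np

rng = np.random.default_rng(0)
n_i, n_g = 8, 3                       # interior (subdomain) and interface nodes
# Assemble a well-conditioned SPD model matrix ordered [interior; interface]
M = rng.standard_normal((n_i + n_g, n_i + n_g))
A = M @ M.T + (n_i + n_g) * np.eye(n_i + n_g)
Aii, Aig = A[:n_i, :n_i], A[:n_i, n_i:]
Agi, Agg = A[n_i:, :n_i], A[n_i:, n_i:]

# Schur complement: the dense interface operator left after eliminating
# the interior nodes (the discrete Steklov-Poincare operator)
S = Agg - Agi @ np.linalg.solve(Aii, Aig)

# Solving with S for the interface unknowns reproduces the full solve
b = rng.standard_normal(n_i + n_g)
x_full = np.linalg.solve(A, b)
rhs_g = b[n_i:] - Agi @ np.linalg.solve(Aii, b[:n_i])
x_g = np.linalg.solve(S, rhs_g)
```

Note that S is dense even when A is sparse, which is exactly what motivates the wavelet-basis compression studied in the paper.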
Pioldi, Fabio; Rizzi, Egidio
2017-07-01
Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification task) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context, use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong-ground-motion modal parameters. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
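The basic (unrefined) FDD step, an SVD of the output cross-spectral matrix at each frequency line followed by peak-picking of the first singular value, can be sketched on synthetic two-channel data. The signal frequencies, amplitudes and segment length are assumptions; the paper's rFDD additionally applies Chebyshev Type II filtering and wavelet analysis, which are not reproduced here.

```python
import numpy as np

def cpsd_matrix(y, fs, nper):
    """Averaged-periodogram estimate of the cross-spectral matrix G[i, j, f]."""
    n_ch, n = y.shape
    win = np.hanning(nper)
    f = np.fft.rfftfreq(nper, 1.0 / fs)
    G = np.zeros((n_ch, n_ch, f.size), dtype=complex)
    n_seg = n // nper
    for s in range(n_seg):
        Y = np.fft.rfft(y[:, s * nper:(s + 1) * nper] * win, axis=1)
        G += Y[:, None, :] * np.conj(Y[None, :, :])
    return f, G / n_seg

fs = 100.0
t = np.arange(0.0, 200.0, 1.0 / fs)
rng = np.random.default_rng(1)
m1 = np.sin(2 * np.pi * 2.0 * t)              # synthetic "mode" at 2 Hz
m2 = np.sin(2 * np.pi * 5.0 * t + 0.3)        # synthetic "mode" at 5 Hz
y = np.stack([1.0 * m1 + 0.5 * m2, 0.6 * m1 - 0.8 * m2])  # two "sensors"
y += 0.1 * rng.standard_normal(y.shape)       # measurement noise

f, G = cpsd_matrix(y, fs, nper=1024)
# FDD: first singular value of G(f) peaks at the natural frequencies,
# and the corresponding singular vector approximates the mode shape
s1 = np.array([np.linalg.svd(G[:, :, k], compute_uv=False)[0]
               for k in range(f.size)])
f_peak = f[np.argmax(s1)]                     # dominant identified frequency
```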
Implicit upwind schemes for computational fluid dynamics. Solution by domain decomposition
International Nuclear Information System (INIS)
Clerc, S.
1998-01-01
In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with the improvement of the computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite volume schemes based on Godunov's approach are unsuited to computing low Mach number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly, or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady-state problem. (author)
Automatic classification of visual evoked potentials based on wavelet decomposition
Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz
2017-04-01
Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the extremes of the time-domain characteristic, called waves. The decision process is complex, so the diagnosis depends significantly on the experience of the doctor. The authors developed a procedure, based on wavelet decomposition and linear discriminant analysis, that ensures automatic classification of visual evoked potentials. The algorithm makes it possible to assign an individual case to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
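A minimal sketch of a wavelet-energy plus linear-discriminant pipeline of the kind described above, using a hand-rolled Haar decomposition and a Fisher discriminant on synthetic "normal" vs "pathological" signals. All signal models, decomposition levels and thresholds are illustrative assumptions, not the authors' clinical protocol.

```python
import numpy as np

def haar_energies(x, levels):
    """Multi-level Haar wavelet decomposition; returns the detail-band
    energies plus the final approximation energy as a feature vector."""
    a = np.asarray(x, dtype=float)
    feats = []
    for _ in range(levels):
        if a.size % 2:
            a = np.append(a, a[-1])           # pad to even length
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)  # next-level approximation
        feats.append(np.sum(detail ** 2))
    feats.append(np.sum(a ** 2))
    return np.array(feats)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 256)

def make_class(n, pathological):
    rows = []
    for _ in range(n):
        s = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(t.size)
        if pathological:
            s += 0.8 * np.sin(2 * np.pi * 40 * t)  # extra high-frequency content
        rows.append(haar_energies(s, levels=4))
    return np.array(rows)

X0, X1 = make_class(40, False), make_class(40, True)
# Fisher linear discriminant: w = Sw^{-1} (m1 - m0), midpoint threshold
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) + np.cov(X1.T)
w = np.linalg.solve(Sw + 1e-9 * np.eye(Sw.shape[0]), m1 - m0)
thr = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
accuracy = 0.5 * ((X1 @ w > thr).mean() + (X0 @ w <= thr).mean())
```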
International Nuclear Information System (INIS)
Berthe, P.M.
2013-01-01
In the context of nuclear waste repositories, we consider the numerical discretization of the non-stationary convection-diffusion equation. Discontinuous physical parameters and heterogeneous space and time scales lead us to use different space and time discretizations in different parts of the domain. In this work, we choose the discrete duality finite volume (DDFV) scheme in space and the discontinuous Galerkin scheme in time, coupled by an optimized Schwarz waveform relaxation (OSWR) domain decomposition method, because this allows the use of non-conforming space-time meshes. The main difficulty lies in finding an upwind discretization of the convective flux which remains local to a sub-domain and such that the multi-domain scheme is equivalent to the mono-domain one. These difficulties are first dealt with in the one-dimensional context, where different discretizations are studied. The chosen scheme introduces a hybrid unknown on the cell interfaces. The idea of upwinding with respect to this hybrid unknown is extended to the DDFV scheme in the two-dimensional setting. The well-posedness of the scheme and of an equivalent multi-domain scheme is shown. The latter is solved by an OSWR algorithm, the convergence of which is proved. The optimized parameters in the Robin transmission conditions are obtained by studying the continuous or discrete convergence rates. Several test cases, one of which is inspired by nuclear waste repositories, illustrate these results. (author)
Parallel computing of a climate model on the dawn 1000 by domain decomposition method
Bi, Xunqiang
1997-12-01
In this paper, the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massively parallel computer made by the National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and to exploit more resources of future massively parallel supercomputers are also discussed.
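In a 2-D decomposition of this kind, each task owns a rectangular latitude-longitude patch of the grid. A generic partitioning sketch (grid dimensions and processor counts are arbitrary assumptions, not the model's actual configuration):

```python
def decompose_1d(n, p):
    """Split n grid points among p ranks as evenly as possible."""
    base, rem = divmod(n, p)
    starts, counts, s = [], [], 0
    for r in range(p):
        c = base + (1 if r < rem else 0)   # first `rem` ranks take one extra
        starts.append(s)
        counts.append(c)
        s += c
    return starts, counts

def decompose_2d(nx, ny, px, py):
    """2-D decomposition: rank (i, j) owns a rectangular patch (start, count)
    in each direction."""
    sx, cx = decompose_1d(nx, px)
    sy, cy = decompose_1d(ny, py)
    return {(i, j): ((sx[i], cx[i]), (sy[j], cy[j]))
            for i in range(px) for j in range(py)}

# e.g. a 144 x 73 lon-lat grid on a 4 x 3 process mesh
patches = decompose_2d(144, 73, 4, 3)
```

Splitting in both directions keeps each patch closer to square, which reduces the halo (boundary-exchange) perimeter per task compared with a 1-D slab decomposition.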
International Nuclear Information System (INIS)
Fischer, J.W.; Azmy, Y.Y.
2003-01-01
A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the discrete ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes: native MPI, the bucket algorithm, and the distributed bucket algorithm, are included in the validation exercise that is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component, which typically increases with P in a manner highly dependent on the global reduction algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize discrete ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large number of
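The three-component model described above can be written T(P) = T_serial + T_parallel/P + T_comm(P). The numbers below are purely hypothetical; only the shape of the model comes from the text, with communication assumed to grow like log2(P) as for a tree-based global reduction:

```python
import math

def run_time(P, t_serial, t_parallel, comm_per_msg, msgs):
    """Serial + parallel/P + communication components of the model."""
    return t_serial + t_parallel / P + comm_per_msg * msgs(P)

# Hypothetical timings (seconds): 2 s serial, 600 s parallelizable work,
# 0.05 s per communication step, log2(P) steps per processor.
msgs = lambda P: math.log2(P) if P > 1 else 0.0
T1 = run_time(1, 2.0, 600.0, 0.05, msgs)
T64 = run_time(64, 2.0, 600.0, 0.05, msgs)
speedup = T1 / T64
```

With these numbers the parallel component (600/64 ≈ 9.4 s) still dwarfs the communication term (0.3 s) at P = 64, illustrating the paper's conclusion for large problems.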
A domain decomposition method for analyzing a coupling between multiple acoustical spaces (L).
Chen, Yuehua; Jin, Guoyong; Liu, Zhigang
2017-05-01
This letter presents a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces. While the Lagrange multiplier technique usually performs well for conditional extremum problems, the present method avoids introducing extra coupling parameters and theoretically ensures the continuity conditions of both sound pressure and particle velocity at the coupling interface. Comparisons with finite element results illustrate the accuracy and efficiency of the present predictions, and the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure is revealed.
Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines
International Nuclear Information System (INIS)
Hunter, M.A.; Haghighat, A.
1993-01-01
Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)
Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media
Galvis, Juan; Efendiev, Yalchin
2010-01-01
In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.
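The spectral-gap idea, that there are as many small eigenvalues as connected high-conductivity components, can be illustrated on a toy weighted graph Laplacian. This is an analogue under stated assumptions, not the paper's actual coarse-space eigenvalue problem:

```python
import numpy as np

def count_below_largest_gap(eigs):
    """Number of eigenvalues below the largest gap in the sorted spectrum."""
    eigs = np.sort(eigs)
    return int(np.argmax(np.diff(eigs))) + 1

# Toy high-contrast medium: 8 nodes, background conductivity 1, and two
# high-conductivity clusters (weight 1e4). Expect 2 small eigenvalues:
# one per connected high-conductivity region.
n = 8
W = np.ones((n, n)) - np.eye(n)            # weak background connections
for grp in ([0, 1, 2, 3], [4, 5, 6, 7]):   # two high-contrast regions
    for i in grp:
        for j in grp:
            if i != j:
                W[i, j] = 1e4
L = np.diag(W.sum(axis=1)) - W             # weighted graph Laplacian
k = count_below_largest_gap(np.linalg.eigvalsh(L))
```

The small eigenvalues here are O(1) while the rest are O(1e4), so the gap (and hence the count used to size the coarse space) is insensitive to the contrast.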
Energy Technology Data Exchange (ETDEWEB)
Clement, F.; Vodicka, A.; Weis, P. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Martin, V. [Institut National de Recherches Agronomiques (INRA), 92 - Chetenay Malabry (France); Di Cosmo, R. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Paris-7 Univ., 75 (France)
2003-07-01
We consider the application of a non-overlapping domain decomposition method with non-matching grids, based on Robin interface conditions, to the problem of flow surrounding an underground nuclear waste disposal. We show with a simple example how one can refine the mesh locally around the storage with this technique. A second aspect is studied in this paper: the coupling between the sub-domains can be achieved in two ways, either directly (i.e. the domain decomposition algorithm is included in the code that solves the problems on the sub-domains) or using code coupling. In the latter case, each sub-domain problem is solved separately and the coupling is performed by another program. We wrote a coupling program in the functional language OCaml, using the OcamlP3l environment devoted to easing parallelism. In this way we both test the code coupling and exploit the natural parallelism of domain decomposition methods. Some simple 2D numerical tests show promising results, and further studies are under way. (authors)
Two-phase flow steam generator simulations on parallel computers using domain decomposition method
International Nuclear Information System (INIS)
Belliard, M.
2003-01-01
Within the framework of the Domain Decomposition Method (DDM), we present industrial steady-state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a fractional-step algorithm, jointly with the Crank-Nicolson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or non-conforming meshes. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)
Domain decomposition method for dynamic faulting under slip-dependent friction
International Nuclear Information System (INIS)
Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie
2004-01-01
The anti-plane shearing problem on a system of finite faults under a slip-dependent friction in a linear elastic domain is considered. Using a Newmark method for the time discretization of the problem, we have obtained an elliptic variational inequality at each time step. An upper bound for the time step size, which is not a CFL condition, is deduced from the solution uniqueness criterion using the first eigenvalue of the tangent problem. Finite element form of the variational inequality is solved by a Schwarz method assuming that the inner nodes of the domain lie in one subdomain and the nodes on the fault lie in other subdomains. Two decompositions of the domain are analyzed, one made up of two subdomains and another one with three subdomains. Numerical experiments are performed to illustrate convergence for a single time step (convergence of the Schwarz algorithm, influence of the mesh size, influence of the time step), convergence in time (instability capturing, energy dissipation, optimal time step) and an application to a relevant physical problem (interacting parallel fault segments)
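The elliptic variational inequality solved at each time step has the structure of an obstacle problem; the mono-domain building block that a Schwarz method then applies subdomain-by-subdomain can be sketched as a projected Gauss-Seidel sweep. The 1D model, load and obstacle shape below are illustrative assumptions, not the fault-friction problem of the paper:

```python
import numpy as np

def projected_gauss_seidel(A, b, psi, n_iter=3000):
    """Find x >= psi with (Ax - b)_i >= 0 and equality wherever x_i > psi_i:
    a Gauss-Seidel update projected onto the constraint set at each node."""
    x = np.maximum(psi, 0.0)
    for _ in range(n_iter):
        for i in range(b.size):
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = max(psi[i], r / A[i, i])   # projection onto x_i >= psi_i
    return x

n = 50
h = 1.0 / (n + 1)
# 1D Laplacian: a membrane pressed by a downward load onto an obstacle
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2
b = -10.0 * np.ones(n)                       # uniform downward load
xs = np.linspace(h, 1.0 - h, n)
psi = -0.05 - 0.3 * np.abs(xs - 0.5)         # hypothetical obstacle shape
u = projected_gauss_seidel(A, b, psi)
```

The iterate stays feasible (u ≥ ψ) at every sweep, and in the contact zone it sits exactly on the obstacle, which is the behaviour the Schwarz subdomain solves must preserve.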
Energy Technology Data Exchange (ETDEWEB)
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-12-01
We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
DOMAIN DECOMPOSITION FOR POROELASTICITY AND ELASTICITY WITH DG JUMPS AND MORTARS
GIRAULT, V.
2011-01-01
We couple a time-dependent poroelastic model in a region with an elastic model in adjacent regions. We discretize each model independently on non-matching grids and we realize a domain decomposition on the interface between the regions by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where the computations of the pressure and displacement are decoupled. We show that the matrix of the interface problem is positive definite and establish error estimates for this scheme. © 2011 World Scientific Publishing Company.
A balancing domain decomposition method by constraints for advection-diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Tu, Xuemin; Li, Jing
2008-12-10
The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A preconditioned GMRES iteration is used to solve a Schur complement system of equations for the subdomain interface variables. In the preconditioning step of each iteration, a partially sub-assembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established, under the condition that the diameters of subdomains are small enough. It is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.
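The preconditioned-GMRES setting can be illustrated on a small nonsymmetric advection-diffusion system. This is a minimal sketch, not the BDDC preconditioner of the paper: an incomplete-LU factorization (exact for this tridiagonal matrix) stands in for the partially sub-assembled solve, and the discretization parameters are arbitrary.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n, eps = 200, 0.01
h = 1.0 / (n + 1)
# Upwind finite differences for -eps*u'' + u' = 1 on (0, 1): a
# nonsymmetric positive definite system like those in the abstract.
A = diags([-(eps / h**2 + 1.0 / h) * np.ones(n - 1),
           (2 * eps / h**2 + 1.0 / h) * np.ones(n),
           -(eps / h**2) * np.ones(n - 1)], [-1, 0, 1], format='csc')
b = np.ones(n)

# ILU preconditioner standing in for the partially sub-assembled solve.
ilu = spilu(A)
M = LinearOperator((n, n), ilu.solve)

x, info = gmres(A, b, M=M)
assert info == 0                                    # converged
assert np.linalg.norm(A @ x - b) <= 1e-3 * np.linalg.norm(b)
```

A good preconditioner clusters the spectrum of the iteration operator, so GMRES converges in very few iterations here; the paper's estimate plays the analogous role for the interface Schur complement.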
Energy Technology Data Exchange (ETDEWEB)
Girardi, E
2004-12-15
A new methodology for the solution of the neutron transport equation, based on domain decomposition, has been developed. This approach allows us to employ different numerical methods together for a whole-core calculation: a variational nodal method, a discrete ordinates nodal method and a method of characteristics. These new developments permit the use of independent spatial and angular expansions and of non-conformal Cartesian and unstructured meshes for each sub-domain, introducing a flexibility of modeling which is not available in today's codes. The effectiveness of our multi-domain/multi-method approach has been tested on several configurations. Among them, one particular application, the benchmark model of the Phebus experimental facility at CEA Cadarache, shows why this new methodology is relevant to problems with strong local heterogeneities. This comparison showed that the decomposition method improves accuracy along with an important reduction of the computing time.
Yücel, Abdulkadir C.
2013-07-01
Reliable and effective wireless communication and tracking systems in mine environments are key to ensure miners' productivity and safety during routine operations and catastrophic events. The design of such systems greatly benefits from simulation tools capable of analyzing electromagnetic (EM) wave propagation in long mine tunnels and large mine galleries. Existing simulation tools for analyzing EM wave propagation in such environments employ modal decompositions (Emslie et al., IEEE Trans. Antennas Propag., 23, 192-205, 1975), ray-tracing techniques (Zhang, IEEE Tran. Vehic. Tech., 5, 1308-1314, 2003), and full wave methods. Modal approaches and ray-tracing techniques cannot accurately account for the presence of miners and their equipment, as well as wall roughness (especially when the latter is comparable to the wavelength). Full-wave methods do not suffer from such restrictions but require prohibitively large computational resources. To partially alleviate this computational burden, a 2D integral equation-based domain decomposition technique has recently been proposed (Bakir et al., in Proc. IEEE Int. Symp. APS, 1-2, 8-14 July 2012). © 2013 IEEE.
Domain similarity based orthology detection.
Bitard-Feildel, Tristan; Kemena, Carsten; Greenwood, Jenny M; Bornberg-Bauer, Erich
2015-05-13
Orthologous protein detection software mostly uses pairwise comparisons of amino-acid sequences to assert whether two proteins are orthologous or not. Accordingly, when the number of sequences for comparison increases, the number of comparisons to compute grows quadratically. A current challenge of bioinformatics research, especially when taking into account the increasing number of sequenced organisms available, is to make this ever-growing number of comparisons computationally feasible in a reasonable amount of time. We propose to speed up the detection of orthologous proteins by using strings of domains to characterize the proteins. We present two new protein similarity measures, a cosine score and a maximal weight matching score based on domain content similarity, and new software, named porthoDom. The qualities of the cosine and the maximal weight matching similarity measures are compared against curated datasets. The measures show that domain content similarities are able to correctly group proteins into their families. Accordingly, the cosine similarity measure is used inside porthoDom, the wrapper developed for proteinortho. porthoDom makes use of domain content similarity measures to group proteins together before searching for orthologs. By using domains instead of amino acid sequences, the reduction of the search space decreases the computational complexity of an all-against-all sequence comparison. We demonstrate that representing and comparing proteins as strings of discrete domains, i.e. as a concatenation of their unique identifiers, allows a drastic simplification of the search space. porthoDom has the advantage of speeding up orthology detection while maintaining a degree of accuracy similar to proteinortho. The implementation of porthoDom is in Python and C++ and is available under the GNU GPL licence 3 at http://www.bornberglab.org/pages/porthoda .
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
Energy Technology Data Exchange (ETDEWEB)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos
International Nuclear Information System (INIS)
Ragusa, J.C.
2003-01-01
The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization; and finally conclude with some future perspectives. Parallel applications are mandatory for fine mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully exploiting the SMP cluster's potential with mixed-mode parallelism. Mixed-mode parallelism can be achieved by combining message passing interface between clusters with OpenMP implicit parallelism within a cluster.
Energy Technology Data Exchange (ETDEWEB)
Saas, L.
2004-05-01
This thesis deals with sedimentary basin modeling, whose goal is the prediction through geological time of the localization and assessment of the quantities of hydrocarbons present in the ground. Due to the natural and evolutionary decomposition of the sedimentary basin into blocks and stratigraphic layers, domain decomposition methods are required to simulate flows of water and of hydrocarbons in the ground. Conservation laws are used to model the flows in the ground and form coupled partial differential equations which must be discretized by a finite volume method. In this report we carry out a study of finite volume methods on non-matching grids solved by domain decomposition methods. We describe a family of finite volume schemes on non-matching grids and we prove that the associated global discretized problem is well posed. Then we give an error estimate. We give two examples of finite volume schemes on non-matching grids and the corresponding theoretical results (constant scheme and linear scheme). Then we present the resolution of the global discretized problem by a domain decomposition method using arbitrary interface conditions (for example Robin conditions). Finally we give numerical results which validate the theoretical results and study the use of finite volume methods on non-matching grids for basin modeling. (author)
International Nuclear Information System (INIS)
Kuo, V.
2016-01-01
Full text: The European Qualifications Framework categorizes learning objectives into three qualifiers, “knowledge”, “skills”, and “competences” (KSCs), to help improve comparability between different fields and disciplines. However, the management of KSCs remains a great challenge given their semantic fuzziness: similar texts may describe different concepts, and different texts may describe similar concepts, across domains. This makes it difficult to index, search and match semantically similar KSCs within an information system so as to facilitate the transfer and mobility of KSCs. We present a working example using a semantic inference method known as Latent Semantic Analysis (LSA), employing a matrix operation called Singular Value Decomposition (SVD), which has been shown to infer semantic associations within unstructured textual data comparable to those of human interpretations. In our example, a few natural language text passages representing KSCs in the nuclear sector are used to demonstrate the capabilities of the system. It can be shown that LSA is able to infer latent semantic associations between texts, and to cluster and match separate text passages semantically based on these associations. We propose this methodology for modelling existing natural language KSCs in the nuclear domain so they can be semantically queried, retrieved and filtered upon request. (author)
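The LSA-via-SVD pipeline can be sketched on a toy term-document matrix. The four short "passages" below are stand-ins invented for illustration, not the nuclear-sector KSC texts of the paper.

```python
import numpy as np

# Latent Semantic Analysis sketch: build a term-document count matrix,
# take a rank-2 truncated SVD, and compare documents in the latent space.

docs = [
    "operate reactor control system",
    "reactor control safety procedure",
    "write report document finding",
    "document report audit finding",
]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T      # documents in the latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Passages about the same topic cluster together in the latent space,
# which is what lets semantically similar KSCs be matched and retrieved.
assert cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2])
```

The truncation to k components is what injects the "latent" associations: documents sharing no words can still land near each other if their words co-occur elsewhere in the corpus.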
Benders’ Decomposition for Curriculum-Based Course Timetabling
DEFF Research Database (Denmark)
Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.
2018-01-01
In this paper we applied Benders’ decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems … feasibility. We compared our algorithm with other approaches from the literature for a total of 32 data instances. We obtained a lower bound on 23 of the instances, which were at least as good as the lower bounds obtained by the state-of-the-art, and on eight of these, our lower bounds were higher. On two of the instances, our lower bound was an improvement of the currently best-known. Lastly, we compared our decomposition to the model without the decomposition on an additional six instances, which are much larger than the other 32. To our knowledge, this was the first time that lower bounds were calculated…
Distributed Prognostics Based on Structural Model Decomposition
National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...
Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology
DEFF Research Database (Denmark)
Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul
2004-01-01
Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition based CAMD methodology has been formulated where the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is...
Asynchronous Task-Based Polar Decomposition on Manycore Architectures
Sukkari, Dalal
2016-10-25
This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
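The polar decomposition itself is easy to demonstrate with the classical (unscaled) Newton iteration, a simpler relative of QDWH; the paper's QDWH variant replaces the explicit inverse with QR factorizations for stability and parallelism. This sketch is illustrative only.

```python
import numpy as np

# Polar decomposition A = U H via the Newton iteration
# X_{k+1} = (X_k + X_k^{-T}) / 2, which drives all singular values of
# X to 1 so that X converges to the orthogonal polar factor U.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))

X = A.copy()
for _ in range(30):                       # plenty for quadratic convergence
    X = 0.5 * (X + np.linalg.inv(X).T)

U = X                                     # orthogonal polar factor
H = U.T @ A                               # symmetric positive (semi)definite factor

assert np.allclose(U.T @ U, np.eye(50), atol=1e-8)
assert np.allclose(U @ H, A, atol=1e-8)
assert np.allclose(H, H.T, atol=1e-8)
```

QDWH achieves the same limit with dynamically chosen weights and inverse-free steps, which is what makes it amenable to the task-based DAG execution described above.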
Advances in audio watermarking based on singular value decomposition
Dhar, Pranab Kumar
2015-01-01
This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptible watermarked sounds. Then, an audio watermarking method in fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...
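The core embed/extract step shared by such SVD-based schemes can be sketched on a synthetic block. This is a minimal, non-blind sketch on a random matrix with made-up parameters; the book's methods operate on DWT/DCT or FFT coefficients of audio and add quantization or Cartesian-polar steps.

```python
import numpy as np

# SVD watermarking sketch: add a watermark sequence to the singular
# values of a host block with strength alpha, then recover it by
# recomputing the singular values (non-blind extraction).

rng = np.random.default_rng(1)
U0, _, Vt0 = np.linalg.svd(rng.standard_normal((8, 8)))
s = np.linspace(10.0, 1.0, 8)        # well-separated singular values
host = U0 @ np.diag(s) @ Vt0         # stand-in transform-domain block

wm = rng.standard_normal(8) * 0.1    # watermark sequence
alpha = 0.05                         # embedding strength (imperceptibility
                                     # vs. robustness trade-off)

# Embed: perturb the singular values, then rebuild the block.
marked = U0 @ np.diag(s + alpha * wm) @ Vt0

# Extract: recompute singular values and invert the embedding.
s_rec = np.linalg.svd(marked, compute_uv=False)
wm_rec = (s_rec - s) / alpha

assert np.allclose(wm_rec, wm, atol=1e-8)
```

Singular values are stable under small perturbations of the block, which is the property that makes such watermarks robust against common signal-processing attacks.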
Heinkenschloss, Matthias
2005-01-01
We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
Structural system identification based on variational mode decomposition
Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.
2018-03-01
In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on modal response data. Finally, after extracting modal responses from available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
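The damping-identification step, a linear fit to the log of the decaying amplitude of one decomposed modal response, can be sketched on a synthetic single-mode signal. The frequency and damping values are invented for illustration; the VMD decomposition itself is assumed to have already been done.

```python
import numpy as np

# Estimate a modal damping ratio from the decaying amplitude of a single
# modal response: the peaks of a damped oscillation decay geometrically,
# so log(peak value) is linear in time with slope -zeta * omega_n.

fn, zeta = 2.0, 0.02                      # natural frequency [Hz], damping
t = np.linspace(0, 10, 2000)
wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)
x = np.exp(-zeta * 2 * np.pi * fn * t) * np.cos(wd * t)  # one modal response

# Locate local maxima (the envelope samples).
peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1

# Linear fit to log amplitude vs. time gives the decay rate.
slope, _ = np.polyfit(t[peaks], np.log(x[peaks]), 1)
zeta_est = -slope / (2 * np.pi * fn)

assert abs(zeta_est - zeta) < 1e-3
```

With real measurements each VMD mode would be processed this way in turn, and the natural frequency would itself come from the mode's center frequency rather than being known in advance.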
Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.
Ze Wang; Chi Man Wong; Feng Wan
2017-07-01
An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both in its native form and when combined with a denoising process.
Zampini, Stefano; Tu, Xuemin
2017-01-01
Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.
Energy Technology Data Exchange (ETDEWEB)
El-Sayed, A.M.A. [Faculty of Science University of Alexandria (Egypt)]. E-mail: amasyed@hotmail.com; Gaber, M. [Faculty of Education Al-Arish, Suez Canal University (Egypt)]. E-mail: mghf408@hotmail.com
2006-11-20
The Adomian decomposition method has been successfully used to find explicit and numerical solutions of time fractional partial differential equations. Different examples of special interest with fractional time and space derivatives of order α, 0 < α ≤ 1, are considered and solved by means of the Adomian decomposition method. The behaviour of the Adomian solutions and the effects of different values of α are shown graphically for some examples.
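The decomposition recursion is easy to demonstrate on an integer-order test problem. The sketch below applies it to du/dt = u, u(0) = 1, where each term is the ordinary integral of the previous one; the paper uses the same recursion with fractional-order integrals instead.

```python
from fractions import Fraction
import math

# Adomian decomposition for du/dt = u, u(0) = 1: u = sum of u_n with
# u_0 = u(0) and u_{n+1}(t) = integral_0^t u_n(s) ds. The recursion
# reproduces the Taylor series of exp(t) term by term.

def integrate_poly(coeffs):
    """Definite integral from 0 to t of a polynomial (coefficient list)."""
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(coeffs)]

terms = [[Fraction(1)]]                  # u_0 = 1
for _ in range(10):
    terms.append(integrate_poly(terms[-1]))

def eval_sum(t):
    return float(sum(sum(c * Fraction(t)**k for k, c in enumerate(p))
                     for p in terms))

# The partial sum of 11 Adomian terms approximates u(1) = e.
assert abs(eval_sum(1) - math.e) < 1e-7
```

For nonlinear problems the integrand u_n is replaced by the so-called Adomian polynomials, but the term-by-term construction of the series is the same.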
A novel method for EMG decomposition based on matched filters
Directory of Open Access Journals (Sweden)
Ailton Luiz Dias Siqueira Júnior
Introduction: Decomposition of electromyography (EMG) signals into the constituent motor unit action potentials (MUAPs) can allow for deeper insights into the underlying processes associated with the neuromuscular system. The vast majority of the methods for EMG decomposition found in the literature depend on complex algorithms and specific instrumentation. As an attempt to contribute to solving these issues, we propose a method based on a bank of matched filters for the decomposition of EMG signals. Methods: Four main units comprise our method: a bank of matched filters, a peak detector, a motor unit classifier and an overlapping resolution module. The system’s performance was evaluated with simulated and real EMG data. Classification accuracy was measured by comparing the responses of the system with known data from the simulator and with the annotations of a human expert. Results: The results show that decomposition of non-overlapping MUAPs can be achieved with up to 99% accuracy for signals with up to 10 active motor units and a signal-to-noise ratio (SNR) of 10 dB. For overlapping MUAPs with up to 10 motor units per signal and an SNR of 20 dB, the technique allows for correct classification of approximately 71% of the MUAPs. The method is capable of processing, decomposing and classifying a 50 ms window of data in less than 5 ms using a standard desktop computer. Conclusion: This article contributes to the ongoing research on EMG decomposition by describing a novel technique capable of delivering high rates of success by means of a fast algorithm, suggesting its possible use in future real-time embedded applications, such as myoelectric prostheses control and biofeedback systems.
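The matched-filter stage can be sketched with a single synthetic template. This toy example detects one known waveform in noise; the waveform shape, signal length and firing times are all invented, and the real system runs a bank of such filters plus a classifier and an overlap resolver.

```python
import numpy as np

# Matched-filter detection: correlate a noisy signal against a known
# template (a stand-in for one MUAP waveform) and take correlation
# peaks as firing instants.

rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, np.pi, 20)) * np.hanning(20)  # toy MUAP
sig_len, true_times = 2000, [300, 900, 1500]

signal = rng.standard_normal(sig_len) * 0.2        # background noise
for t0 in true_times:
    signal[t0:t0 + 20] += template                 # insert firings

# Matched filter = convolution with the time-reversed template.
corr = np.convolve(signal, template[::-1], mode='valid')
thresh = 0.6 * corr.max()
detections = np.where(corr > thresh)[0]

# Collapse each run of above-threshold samples into its strongest sample.
runs = np.split(detections, np.where(np.diff(detections) > 1)[0] + 1)
onsets = [r[np.argmax(corr[r])] for r in runs]

assert all(any(abs(o - t0) <= 5 for o in onsets) for t0 in true_times)
```

The matched filter maximizes output SNR for a known waveform in additive noise, which is why a simple threshold on the correlation already separates firings from background activity here.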
Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods
Directory of Open Access Journals (Sweden)
Feng Ma
2014-01-01
The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while convergence can still be established. Preliminary numerical tests on the stable principal component pursuit problem testify to the advantages of the enlargement.
Topology Based Domain Search (TBDS)
National Research Council Canada - National Science Library
Manning, William
2002-01-01
This effort will explore radical changes in the way Domain Name System (DNS) is used by endpoints in a network to improve the resilience of the endpoint and its applications in the face of dynamically changing infrastructure topology...
International Nuclear Information System (INIS)
Chiba, Gou; Tsuji, Masashi; Shimazu, Yoichiro
2001-01-01
A hierarchical domain decomposition boundary element method (HDD-BEM) that was developed to solve a two-dimensional neutron diffusion equation has been modified to deal with three-dimensional problems. In the HDD-BEM, the domain is decomposed into homogeneous regions. The boundary conditions on the common inner boundaries between decomposed regions and the neutron multiplication factor are initially assumed. With these assumptions, the neutron diffusion equations defined in the decomposed homogeneous regions can be solved respectively by applying the boundary element method. This part corresponds to the 'lower level' calculations. At the 'higher level' calculations, the assumed values, the inner boundary conditions and the neutron multiplication factor, are modified so as to satisfy the continuity conditions for the neutron flux and the neutron currents on the inner boundaries. These procedures of the lower and higher levels are executed alternately and iteratively until the continuity conditions are satisfied within a convergence tolerance. With the hierarchical domain decomposition, it is possible to deal with problems comprising a large number of regions, something that has been difficult with the conventional BEM. In this paper, it is shown that a three-dimensional problem even with 722 regions can be solved with fine accuracy and an acceptable computation time. (author)
Satellite Image Time Series Decomposition Based on EEMD
Directory of Open Access Journals (Sweden)
Yun-long Kong
2015-11-01
Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework of SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose relevant IMFs for the separation of seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results of 16-day compositions of Moderate Resolution Imaging Spectroradiometer (MODIS), Normalized Difference Vegetation Index (NDVI), and Global Environment Monitoring Index (GEMI) time series with disturbance illustrated the effectiveness and stability of the proposed approach to monitoring tasks, such as applications for the detection of abrupt changes.
Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms
Efendiev, Yalchin
2012-02-22
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows one to construct additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A detailed analysis of the abstract setting is provided. The proposed decomposition builds on a method of Galvis and Efendiev [Multiscale Model. Simul. 8 (2010) 1461-1483] developed for second order scalar elliptic problems with high contrast. Applications to the finite element discretizations of the second order elliptic problem in Galerkin and mixed formulation, the Stokes equations, and Brinkman's problem are presented. A number of numerical experiments for these problems in two spatial dimensions are provided. © EDP Sciences, SMAI, 2012.
Variance decomposition-based sensitivity analysis via neural networks
International Nuclear Information System (INIS)
Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo
2003-01-01
This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques, which, however, can be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximate algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
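The variance decomposition underlying this kind of sensitivity analysis can be sketched in a few lines. The two-component reliability model and parameter ranges below are hypothetical stand-ins for the Monte Carlo system model, and the binning estimator is one simple way to approximate the first-order indices:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical reliability model: steady-state unavailability of a
# two-component parallel system with failure rates lam and repair rates mu
def system_unavailability(lam, mu):
    q = lam / (lam + mu)          # per-component unavailability
    return q[:, 0] * q[:, 1]      # system fails only if both are down

N = 100_000
lam = rng.uniform(1e-3, 1e-2, size=(N, 2))
mu = rng.uniform(1e-1, 1.0, size=(N, 2))
y = system_unavailability(lam, mu)

def first_order_index(x, y, bins=50):
    # S_i = Var_x(E[Y | X_i]) / Var(Y), estimated by quantile binning on x
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    counts = np.bincount(idx, minlength=bins)
    sums = np.bincount(idx, weights=y, minlength=bins)
    cond_mean = np.where(counts > 0, sums / np.maximum(counts, 1), y.mean())
    weights = counts / counts.sum()
    return np.sum(weights * (cond_mean - y.mean()) ** 2) / y.var()

for name, x in [("lambda_1", lam[:, 0]), ("mu_1", mu[:, 0])]:
    print(name, round(first_order_index(x, y), 3))
```

In the paper's setting, a trained neural network surrogate would replace `system_unavailability`, so the many evaluations required by the decomposition become cheap.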
Aligning observed and modelled behaviour based on workflow decomposition
Wang, Lu; Du, YuYue; Liu, Wei
2017-09-01
When business processes are mostly supported by information systems, both the availability of event logs generated by these systems and the demand for appropriate process models are increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the volume of event logs. Therefore, a new process mining technique based on a workflow decomposition method is proposed in this paper. Petri nets (PNs) are used to describe business processes, and then conformance checking of event logs and process models is investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
Palm vein recognition based on directional empirical mode decomposition
Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei
2014-04-01
Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from fine to coarse scales. A DEMD-based two-directional linear discriminant analysis (2LDA) approach for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations are extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.
An efficient domain decomposition strategy for wave loads on surface piercing circular cylinders
DEFF Research Database (Denmark)
Paulsen, Bo Terp; Bredmose, Henrik; Bingham, Harry B.
2014-01-01
A fully nonlinear domain decomposed solver is proposed for efficient computations of wave loads on surface piercing structures in the time domain. A fully nonlinear potential flow solver was combined with a fully nonlinear Navier–Stokes/VOF solver via generalized coupling zones of arbitrary shape....... Sensitivity tests of the extent of the inner Navier–Stokes/VOF domain were carried out. Numerical computations of wave loads on surface piercing circular cylinders at intermediate water depths are presented. Four different test cases of increasing complexity were considered: 1) weakly nonlinear regular waves...
A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Eric J. Nava
2012-03-01
This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into epochs. Vehicle assignment is performed sequentially in each epoch, thus improving the model's scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal epoch setting during iterations, regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.
Energy Technology Data Exchange (ETDEWEB)
Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)
2013-02-15
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction, a Fourier expansion in the azimuthal direction, and a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
Directory of Open Access Journals (Sweden)
Jesús García
2012-01-01
Full Text Available The application of a 3D domain decomposition finite-element and spherical mode expansion method to the design of a planar ESPAR (electronically steerable passive array radiator) made with probe-fed circular microstrip patches is presented in this work. A global generalized scattering matrix (GSM) in terms of spherical modes is obtained analytically from the GSMs of the isolated patches by using rotation and translation properties of spherical waves. The whole behaviour of the array is characterized, including all the mutual coupling effects between its elements. This procedure was first validated by analyzing an array of monopoles on a ground plane, and then applied to synthesize a prescribed radiation pattern by optimizing the reactive loads connected to the feeding ports of the array of circular patches by means of a genetic algorithm.
International Nuclear Information System (INIS)
Greenman, G.M.; O'Brien, M.J.; Procassini, R.J.; Joy, K.I.
2009-01-01
Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented.
Modal Identification of Output-Only Systems using Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Zhang, L.; Andersen, P.
2000-01-01
In this paper a new frequency domain technique is introduced for modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
Quantum game theory based on the Schmidt decomposition
International Nuclear Information System (INIS)
Ichikawa, Tsubasa; Tsutsui, Izumi; Cheon, Taksu
2008-01-01
We present a novel formulation of quantum game theory based on the Schmidt decomposition, which has the merit that the entanglement of quantum strategies is manifestly quantified. We apply this formulation to 2-player, 2-strategy symmetric games and obtain a complete set of quantum Nash equilibria. Apart from those available with maximal entanglement, these quantum Nash equilibria are extensions of the Nash equilibria in classical game theory. The phase structure of the equilibria is determined for all values of entanglement, and thereby the possibility of resolving the dilemmas by entanglement in the games of Chicken, the Battle of the Sexes, the Prisoners' Dilemma, and the Stag Hunt is examined. We find that entanglement transforms these dilemmas into each other but cannot resolve them, except in the Stag Hunt game, where the dilemma can be alleviated to a certain degree.
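The Schmidt decomposition that quantifies the entanglement of two-qubit strategies can be computed generically via a singular value decomposition; a minimal sketch (not the authors' game-theoretic machinery, just the decomposition itself):

```python
import numpy as np

def schmidt_decomposition(state):
    """Schmidt decomposition of a two-qubit pure state given as a length-4
    amplitude vector: reshape into the 2x2 coefficient matrix C with
    |psi> = sum_ij C[i, j] |i>|j>; the SVD C = U diag(s) V^H gives the
    Schmidt coefficients s and the local Schmidt bases (columns of U, V)."""
    C = np.asarray(state, dtype=complex).reshape(2, 2)
    U, s, Vh = np.linalg.svd(C)
    return s, U, Vh.conj().T

# maximally entangled Bell state (|00> + |11>)/sqrt(2): two equal coefficients
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
print(schmidt_decomposition(bell)[0])  # -> [0.70710678 0.70710678]

# product state |0>|1>: a single nonzero Schmidt coefficient, no entanglement
prod = np.array([0.0, 1.0, 0.0, 0.0])
print(schmidt_decomposition(prod)[0])  # -> [1. 0.]
```

For a pure state the entanglement entropy follows directly from the coefficients as -sum(s_i^2 * log(s_i^2)).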
Quantum Image Encryption Algorithm Based on Image Correlation Decomposition
Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun
2015-02-01
A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states, and the whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed beforehand, and each is then randomly processed by one of the quantum random-phase gate, quantum rotation gate and Hadamard transform operations. The encrypted image is obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random-phase gate, the rotation angle, the binary sequence and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attacks due to its very large key space and has lower computational complexity than its classical counterparts.
Directory of Open Access Journals (Sweden)
Elias D. Nino-Ruiz
2017-07-01
Full Text Available In this paper, a matrix-free posterior ensemble Kalman filter implementation based on a modified Cholesky decomposition is proposed. The method works as follows: the precision matrix of the background error distribution is estimated based on a modified Cholesky decomposition. The resulting estimator can be expressed in terms of Cholesky factors which can be updated based on a series of rank-one matrices in order to approximate the precision matrix of the analysis distribution. By using this matrix, the posterior ensemble can be built by either sampling from the posterior distribution or using synthetic observations. Furthermore, the computational effort of the proposed method is linear with regard to the model dimension and the number of observed components from the model domain. Experimental tests are performed making use of the Lorenz-96 model. The results reveal that the accuracy of the proposed implementation in terms of root-mean-square error is similar, and in some cases superior, to that of a well-known ensemble Kalman filter (EnKF) implementation: the local ensemble transform Kalman filter. In addition, the results are comparable to those obtained by the EnKF with large ensemble sizes.
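The modified Cholesky estimator of the background precision matrix can be sketched as a sequence of local regressions: each state variable is regressed on a few predecessors, yielding a sparse unit-lower-triangular factor T and a diagonal of residual variances D with precision T^T D^{-1} T. The smooth-field ensemble and the regression radius below are invented stand-ins, not the paper's Lorenz-96 setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def modified_cholesky_precision(X, radius=2):
    """Estimate the precision matrix of the columns of X (rows = ensemble
    members) via a modified Cholesky decomposition: variable i is regressed
    on up to `radius` predecessors, giving B^{-1} = T^T D^{-1} T with T unit
    lower triangular and D the diagonal of residual variances."""
    Xc = X - X.mean(axis=0)
    n = Xc.shape[1]
    T = np.eye(n)
    d = np.empty(n)
    d[0] = Xc[:, 0].var(ddof=1)
    for i in range(1, n):
        pred = slice(max(0, i - radius), i)       # local predecessors
        Z = Xc[:, pred]
        beta, *_ = np.linalg.lstsq(Z, Xc[:, i], rcond=None)
        T[i, pred] = -beta
        d[i] = (Xc[:, i] - Z @ beta).var(ddof=1)  # residual variance
    return T.T @ np.diag(1.0 / d) @ T

# synthetic ensemble drawn from a smooth correlated field (a stand-in
# for a model background): 200 members, 20 state variables
n, members = 20, 200
dist = np.subtract.outer(np.arange(n), np.arange(n))
cov = np.exp(-0.5 * (dist / 3.0) ** 2) + 1e-6 * np.eye(n)
X = rng.standard_normal((members, n)) @ np.linalg.cholesky(cov).T
P = modified_cholesky_precision(X)
print(np.allclose(P, P.T), np.all(np.linalg.eigvalsh(P) > 0))  # -> True True
```

By construction the estimate is symmetric positive definite, and the regression radius keeps it sparse, which is what makes the cost linear in the model dimension.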
Multiple image encryption scheme based on pixel exchange operation and vector decomposition
Xiong, Y.; Quan, C.; Tay, C. J.
2018-02-01
We propose a new multiple image encryption scheme based on a pixel exchange operation and a basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as an input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.
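The pixel exchange idea can be illustrated as an involutive swap between two images controlled by a boolean mask that plays the role of the pixel-position key. The images and mask below are synthetic stand-ins, and this sketch covers only the scrambling step, not the vector decomposition or DRPE stages:

```python
import numpy as np

rng = np.random.default_rng(3)

def pixel_exchange(img1, img2, mask):
    """Swap pixels between two equal-sized images wherever mask is True.
    The mask acts as the pixel-position key: applying the same exchange
    a second time restores the original images."""
    out1, out2 = img1.copy(), img2.copy()
    out1[mask], out2[mask] = img2[mask], img1[mask]
    return out1, out2

img1 = rng.integers(0, 256, (8, 8), dtype=np.uint8)
img2 = rng.integers(0, 256, (8, 8), dtype=np.uint8)
mask = rng.random((8, 8)) < 0.5          # private key

s1, s2 = pixel_exchange(img1, img2, mask)   # scramble
r1, r2 = pixel_exchange(s1, s2, mask)       # apply again to recover
print(np.array_equal(r1, img1) and np.array_equal(r2, img2))  # -> True
```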
Analysis of large fault trees based on functional decomposition
International Nuclear Information System (INIS)
Contini, Sergio; Matuzas, Vaidas
2011-01-01
With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.
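The upper bounds obtainable from minimal cut sets alone can be illustrated on a toy coherent fault tree; the tree, its cut sets and the basic-event unavailabilities below are invented for the sketch:

```python
import itertools
from functools import reduce

# toy coherent fault tree: TOP = (A AND B) OR (A AND C) OR D,
# with independent basic events and assumed unavailabilities
cut_sets = [{"A", "B"}, {"A", "C"}, {"D"}]
p = {"A": 0.1, "B": 0.2, "C": 0.05, "D": 0.01}

def top_event(state):
    # coherent structure function: any minimal cut set fully failed
    return any(all(state[e] for e in cs) for cs in cut_sets)

# exact top-event probability by enumerating all basic-event states
events = sorted(p)
exact = 0.0
for bits in itertools.product((0, 1), repeat=len(events)):
    state = dict(zip(events, bits))
    if top_event(state):
        weight = 1.0
        for e, b in state.items():
            weight *= p[e] if b else 1.0 - p[e]
        exact += weight

# two classical bounds computed from the minimal cut sets alone
mcs_prob = [reduce(lambda acc, e: acc * p[e], cs, 1.0) for cs in cut_sets]
rare_event = sum(mcs_prob)                    # first-order (rare-event) bound
mcub = 1.0 - reduce(lambda acc, q: acc * (1.0 - q), mcs_prob, 1.0)
print(f"exact={exact:.6f}  mcub={mcub:.6f}  rare-event={rare_event:.6f}")
# -> exact=0.033760  mcub=0.034651  rare-event=0.035000
```

For coherent trees the min-cut upper bound (MCUB) always brackets the exact unavailability from above and is tighter than the rare-event sum, as the printed numbers show.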
Analysis of large fault trees based on functional decomposition
Energy Technology Data Exchange (ETDEWEB)
Contini, Sergio, E-mail: sergio.contini@jrc.i [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy); Matuzas, Vaidas [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy)
2011-03-15
Identifying key nodes in multilayer networks based on tensor decomposition.
Wang, Dingjie; Wang, Haitao; Zou, Xiufen
2017-06-01
The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use a fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar charts and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index for identifying truly important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, Gene Ontology functional annotation demonstrates that the top nodes identified by the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods for multilayer networks (including our method and previously published methods) and created visualization software based on a MATLAB GUI, called ENMNFinder, which can be used by other researchers.
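The building block of the CP model used above is the rank-1 decomposition, which can be sketched by alternating least squares on a 3-way tensor (the full EDCPTD centrality on fourth-order tensors is not reproduced here; this only illustrates the factorization step):

```python
import numpy as np

rng = np.random.default_rng(2)

def rank1_cp(T, iters=50):
    """Rank-1 CANDECOMP/PARAFAC approximation of a 3-way tensor by
    alternating least squares: T ~ lam * (a outer b outer c)."""
    a = rng.standard_normal(T.shape[0])
    b = rng.standard_normal(T.shape[1])
    c = rng.standard_normal(T.shape[2])
    lam = 0.0
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
        lam = np.linalg.norm(c); c /= lam
    return lam, a, b, c

# build an exactly rank-1 tensor and recover it
a0, b0, c0 = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 5)))
T = 2.5 * np.einsum('i,j,k->ijk', a0, b0, c0)
lam, a, b, c = rank1_cp(T)
print(np.allclose(lam * np.einsum('i,j,k->ijk', a, b, c), T))  # -> True
```

In a centrality setting, the recovered factor vectors (one per mode) provide per-node and per-layer scores from which an importance index can be assembled.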
AN IMPROVED INTERFEROMETRIC CALIBRATION METHOD BASED ON INDEPENDENT PARAMETER DECOMPOSITION
Directory of Open Access Journals (Sweden)
J. Fan
2018-04-01
Full Text Available Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in a precise digital elevation model (DEM). The aim of interferometric calibration is to obtain a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain an accuracy of DEM products better than 2.43 m in the flat area and 6.97 m in the mountainous area, which proves the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50000 and even larger scales in flat and mountainous areas.
An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition
Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.
2018-04-01
A domain decomposition method for pseudo-spectral electromagnetic simulations of plasmas
International Nuclear Information System (INIS)
Vay, Jean-Luc; Haber, Irving; Godfrey, Brendan B.
2013-01-01
Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates the solution analytically over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization, owing to the global communications associated with global FFTs on the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses, and on Particle-In-Cell simulations of wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell's equations and the finite speed of light to limit the communication of data within guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases that have been presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.
Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Carstensen, Jens Michael
We want to test the applicability of kernel based eigen-decomposition methods, compared to the traditional eigen-decomposition methods. We have implemented and tested three kernel based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral...... image of a page in the book 'hauksbok', which contains Icelandic sagas....
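Gaussian-kernel PCA, the first of the three methods named above, can be sketched directly from the eigen-decomposition of the double-centered kernel matrix; the data below are random stand-ins for the multispectral pixel vectors, and the kernel width is an assumed parameter:

```python
import numpy as np

rng = np.random.default_rng(4)

def gaussian_kernel_pca(X, n_components=2, sigma=1.0):
    """Kernel PCA with a Gaussian (RBF) kernel: eigen-decompose the
    double-centered kernel matrix and project the training samples
    onto the leading feature-space axes."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / (2.0 * sigma**2))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)         # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder to descending
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas                      # kernel principal component scores

# random stand-in for multispectral pixel vectors (100 samples, 6 bands)
X = rng.standard_normal((100, 6))
scores = gaussian_kernel_pca(X, n_components=3, sigma=2.0)
print(scores.shape)  # -> (100, 3)
```

Setting the kernel to a plain inner product recovers ordinary PCA, which is what makes the kernelized and traditional methods directly comparable.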
QR-decomposition based SENSE reconstruction using parallel architecture.
Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad
2018-04-01
Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has a great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU based SENSE reconstruction is evaluated against single and multicore CPU using OpenMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction as compared to multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images.
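Per group of aliased pixels, the SENSE inversion reduces to a small overdetermined complex least-squares solve, which the QR decomposition turns into a stable triangular solve. A minimal sketch with invented dimensions (8 coils, acceleration factor 2) and random stand-ins for the coil sensitivities:

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical encoding matrix: 8 coils, 2 aliased pixels per group,
# so E is a tall 8x2 complex matrix built from coil sensitivities
coils, aliased = 8, 2
E = rng.standard_normal((coils, aliased)) + 1j * rng.standard_normal((coils, aliased))
x_true = rng.standard_normal(aliased) + 1j * rng.standard_normal(aliased)
y = E @ x_true                      # simulated folded coil measurements

# QR decomposition E = QR reduces the least-squares problem to a
# triangular solve: x = R^{-1} (Q^H y)
Q, R = np.linalg.qr(E)
x = np.linalg.solve(R, Q.conj().T @ y)
print(np.allclose(x, x_true))  # -> True
```

Compared with forming the normal equations E^H E, the QR route avoids squaring the condition number, which matters for ill-conditioned sensitivity maps at high acceleration factors.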
Multi-Domain Modeling Based on Modelica
Directory of Open Access Journals (Sweden)
Liu Jun
2016-01-01
Full Text Available With the application of simulation technology to large-scale, multi-field problems, unified multi-domain modeling has become an effective way to solve them. This paper introduces several basic methods and the advantages of multidisciplinary modeling, and focuses on simulation based on the Modelica language. Modelica/MWorks is a newly developed simulation platform featuring an object-oriented, non-causal language for modeling large, multi-domain systems, which makes models easier to grasp, develop and maintain. This article demonstrates a single-degree-of-freedom mechanical vibration system built in MWorks using the special connection mechanism of the Modelica language. This multi-domain modeling approach is simple and feasible, offers high reusability, stays closer to the physical system, and has many other advantages.
Wavelet decomposition based principal component analysis for face recognition using MATLAB
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach which is a wavelet decomposition based principal component analysis for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his or her facial features and bears some resemblance to factor analysis, i.e. extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly its poor discriminatory power and, in particular, the large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and transform domains. The experimental results show that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
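The wavelet-then-PCA pipeline can be sketched with a one-level Haar transform, whose approximation subband shrinks each image by 4x before the eigen-analysis. The "face gallery" below is random data and the component count is an assumed parameter:

```python
import numpy as np

rng = np.random.default_rng(6)

def haar_approx(img):
    """One level of 2-D Haar wavelet decomposition, keeping only the
    low-frequency approximation subband (averages of 2x2 blocks)."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]    # crop to even dimensions
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def pca_project(X, n_components):
    """Project the rows of X onto the leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# hypothetical gallery of 40 face images of size 32x32
faces = rng.random((40, 32, 32))
features = np.stack([haar_approx(f).ravel() for f in faces])   # 40 x 256
scores = pca_project(features, n_components=10)
print(features.shape, scores.shape)  # -> (40, 256) (40, 10)
```

The wavelet step cuts the covariance eigenproblem from 1024-dimensional to 256-dimensional here, which is exactly the computational-load reduction the abstract refers to.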
Energy Technology Data Exchange (ETDEWEB)
Behrens, R.; Minier, L.
1998-03-24
The thermal decomposition of ammonium perchlorate (AP) and ammonium-perchlorate-based composite propellants is studied using the simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) technique. The main objective of the present work is to evaluate whether the STMBMS can provide new data on these materials that will have sufficient detail on the reaction mechanisms and associated reaction kinetics to permit creation of a detailed model of the thermal decomposition process. Such a model is a necessary ingredient to engineering models of ignition and slow-cookoff for these AP-based composite propellants. Results show that the decomposition of pure AP is controlled by two processes. One occurs at lower temperatures (240 to 270 °C), produces mainly H₂O, O₂, Cl₂, N₂O and HCl, and is shown to occur in the solid phase within the AP particles. 200 µm diameter AP particles undergo 25% decomposition in the solid phase, whereas 20 µm diameter AP particles undergo only 13% decomposition. The second process is dissociative sublimation of AP to NH₃ + HClO₄ followed by the decomposition of, and reaction between, these two products in the gas phase. The dissociative sublimation process occurs over the entire temperature range of AP decomposition, but only becomes dominant at temperatures above those for the solid-phase decomposition. AP-based composite propellants are used extensively in both small tactical rocket motors and large strategic rocket systems.
International Nuclear Information System (INIS)
Ogino, Masao
2016-01-01
Actual problems in science and industrial applications are modeled with multiple materials and large-scale unstructured meshes, and finite element analysis has been widely used to solve such problems on parallel computers. However, for large-scale problems, iterative methods for the linear finite element equations suffer from slow convergence or fail to converge, so numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method and is an efficient approach for parallel finite element methods; moreover, the balancing preconditioner achieves robust convergence. However, for problems consisting of very different materials, convergence deteriorates. There has been some research addressing this issue, but it is not suitable for complex shapes and composite materials. In this study, to improve convergence of the balancing preconditioner for multiple materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Numerical results are included which indicate that the proposed method converges robustly with respect to the number of subdomains and performs well compared with the original balancing preconditioner. (author)
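The diagonal-scaling ingredient of the proposed Scaled-BDD method can be illustrated in isolation. The sketch below is a plain Jacobi-preconditioned conjugate gradient on a synthetic SPD system whose badly scaled diagonal mimics strongly contrasting materials; it is not the authors' BDD implementation.

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradient with a diagonal-scaling (Jacobi) preconditioner."""
    M_inv = 1.0 / np.diag(A)          # preconditioner: inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system with a widely varying diagonal (contrast up to 1e4)
n = 50
rng = np.random.default_rng(1)
Q = rng.random((n, n))
A = Q @ Q.T + np.diag(10.0 ** rng.uniform(0, 4, n))
b = rng.random(n)
x = pcg_jacobi(A, b)
print(np.linalg.norm(A @ x - b) < 1e-6)  # True
```

Diagonal scaling equalizes the rows of the badly scaled operator, which is exactly the effect the Scaled-BDD combination exploits at the subdomain level.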
Decomposition based parallel processing technique for efficient collaborative optimization
International Nuclear Information System (INIS)
Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon
2000-01-01
In practical design studies, most designers solve multidisciplinary problems with complex design structures. These problems involve hundreds of analyses and thousands of variables. The sequence in which the processes are solved affects the speed of the total design cycle, so it is very important for the designer to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing a large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy, based on a genetic algorithm, for parallel decomposition of multidisciplinary problems to raise design efficiency, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology
International Nuclear Information System (INIS)
Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted
2017-01-01
This paper introduces a new approach for forecasting solar radiation series 1 h ahead. We investigated several techniques for multiscale decomposition of clear-sky index K_c data, namely Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition (WD). From these methods, we built 11 decomposition components and one residual signal spanning different time scales. We applied classic forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network model), choosing the forecasting method adaptively according to the characteristics of each component. Hence, we propose a modeling process built on a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting with the different multiscale decompositions and forecast models is presented. Multiscale decomposition significantly improves the solar forecast accuracy, particularly with the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method yields additional improvement. For example, in terms of RMSE, the error obtained with the classical NN model is about 25.86%; it decreases to 16.91% with the EMD-hybrid model, 14.06% with the EEMD-hybrid model and 7.86% with the WD-hybrid model. - Highlights: • Hourly forecasting of GHI in a tropical climate with many cloud formation processes. • Clear-sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results with the wavelet-hybrid model in comparison with classical models.
Schultz, A.
2010-12-01
We describe our ongoing efforts to achieve massive parallelization on a novel hybrid GPU testbed machine, currently configured with 12 Intel Westmere Xeon CPU cores (24 parallel computational threads) with 96 GB DDR3 system memory, 4 GPU subsystems which in aggregate contain 960 NVidia Tesla GPU cores with 16 GB dedicated DDR3 GPU memory, and a second interleaved bank of 4 GPU subsystems containing in aggregate 1792 NVidia Fermi GPU cores with 12 GB dedicated DDR5 GPU memory. We are applying domain decomposition methods to a modified version of Weiss' (2001) 3D frequency-domain full-physics EM finite difference code, an open source GPL-licensed f90 code available for download from www.OpenEM.org. This will be the core of a new hybrid 3D inversion that parallelizes frequencies across CPUs and individual forward solutions across GPUs. We describe progress made in modifying the code to use direct solvers in GPU cores dedicated to each small subdomain, iteratively improving the solution by matching adjacent subdomain boundary solutions, rather than iterative Krylov-space sparse solvers as currently applied to the whole domain.
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions
DEFF Research Database (Denmark)
Hansen, Per Christian; Jensen, Søren Holdt
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working Matlab code and applications in speech processing.
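The rank-reduction paradigm the survey builds on can be illustrated with a plain truncated SVD (the diagonal-decomposition case). The signal below is a synthetic rank-2 stand-in for a matrix of speech frames, not the survey's Matlab code.

```python
import numpy as np

def subspace_denoise(frames, rank):
    """Truncated-SVD noise reduction: keep only the signal subspace."""
    u, s, vt = np.linalg.svd(frames, full_matrices=False)
    s[rank:] = 0.0                    # discard the noise subspace
    return u @ np.diag(s) @ vt

# clean rank-2 "speech" frames plus white noise
rng = np.random.default_rng(0)
t = np.arange(64) / 64.0
clean = np.outer(np.sin(2 * np.pi * 5 * t), np.ones(32)) \
      + np.outer(np.cos(2 * np.pi * 9 * t), np.linspace(0, 1, 32))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = subspace_denoise(noisy, rank=2)
err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(denoised - clean)
print(err_den < err_noisy)  # True
```

The rank-revealing triangular decompositions surveyed (ULV, URV, ...) play the same role as the SVD here but are cheaper to update as new frames arrive.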
Energy Technology Data Exchange (ETDEWEB)
Mehboob, Shoaib, E-mail: smehboob@pieas.edu.pk [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Mehmood, Mazhar [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmed, Mushtaq [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Ahmad, Jamil; Tanvir, Muhammad Tauseef [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan); Ahmad, Izhar [National Institute of Lasers and Optronics (NILOP), Nilore 45650, Islamabad (Pakistan); Hassan, Syed Mujtaba ul [National Center for Nanotechnology, Department of Metallurgy and Materials Engineering, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Nilore 45650, Islamabad (Pakistan)
2017-04-15
The objective of this work is to study the changes in optical and dielectric properties with the transformation of aluminum ammonium carbonate hydroxide (AACH) to α-alumina, using terahertz time domain spectroscopy (THz-TDS). The nanostructured AACH was synthesized by hydrothermal treatment of the raw chemicals at 140 °C for 12 h. This AACH was then calcined at different temperatures: it decomposed to an amorphous phase at 400 °C, transformed to δ* + α-alumina at 1000 °C, and finally crystalline α-alumina was obtained at 1200 °C. X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy were employed to identify the phases formed after calcination. The morphology of the samples was studied using scanning electron microscopy (SEM), which revealed that the AACH sample had a rod-like morphology that was retained in the calcined samples. THz-TDS measurements showed that AACH had the lowest refractive index over the measured frequency range. The refractive index at 0.1 THz increased from 2.41 for AACH to 2.58 for the amorphous phase and to 2.87 for the crystalline α-alumina. The real part of the complex permittivity increased with the calcination temperature. Further, the absorption coefficient was highest for AACH and decreased with calcination temperature; the amorphous phase had a higher absorption coefficient than the crystalline alumina. - Highlights: • Aluminum oxide nanostructures were obtained by thermal decomposition of AACH. • Crystalline phases of aluminum oxide have a higher refractive index than the amorphous phase. • The removal of heavier ionic species led to lower absorption of THz radiation.
Parallel processing based decomposition technique for efficient collaborative optimization
International Nuclear Information System (INIS)
Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon
2001-01-01
In practical design studies, most designers solve multidisciplinary problems with large and complex design systems. These problems involve hundreds of analyses and thousands of variables. The sequence in which the processes are solved affects the speed of the total design cycle, so it is very important for the designer to reorder the original design processes to minimize total computational cost. This is accomplished by decomposing a large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy, based on a genetic algorithm, for parallel decomposition of multidisciplinary problems to raise design efficiency, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology
A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting
Directory of Open Access Journals (Sweden)
Chaolong Jia
2015-01-01
Full Text Available Wavelets adapt automatically to the requirements of time-frequency signal analysis, can focus on any detail of a signal, and decompose a function into a representation as a series of simple basis functions, which is of theoretical and practical significance. This paper therefore subdivides track irregularity time series based on wavelet decomposition-reconstruction and seeks the best-fitting forecast models for the detail and approximation signals obtained through wavelet decomposition of the track irregularity time series. On this basis, a piecewise gray-ARMA recursive model based on wavelet decomposition and reconstruction (PG-ARMARWDR) and a piecewise ANN-ARMA recursive model based on wavelet decomposition and reconstruction (PANN-ARMARWDR) are proposed. Comparison and analysis of the two models show that both can achieve higher accuracy.
Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.
2018-01-01
In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using the spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and the EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Of these, Z1 is further Fresnel propagated over distance d, and Z2 is retained as a decryption key. Afterwards, SVD is performed on the Fresnel-propagated output to obtain three decomposed matrices, i.e. one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and the modulated unitary matrices to obtain the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique, which is robust against noise attack, specific attack, and brute force attack. Simulation results are presented in support of the proposed idea.
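The SVD/inverse-SVD step of the scheme can be sketched as follows. The input matrix and the spiral-phase-mask stand-ins are random arrays, and the QR-code, SPT, EMD and Fresnel-propagation steps are omitted; this only shows that modulating the unitary factors is exactly invertible when the masks are known.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.random((8, 8)) + 1j * rng.random((8, 8))   # stand-in for the Fresnel output

# SVD step: Z = U @ diag(s) @ Vh
U, s, Vh = np.linalg.svd(Z)

# modulate the two unitary matrices with phase-mask stand-ins (unit modulus)
spm1 = np.exp(1j * 2 * np.pi * rng.random((8, 8)))
spm2 = np.exp(1j * 2 * np.pi * rng.random((8, 8)))
U_mod, Vh_mod = U * spm1, Vh * spm2

# inverse SVD with the modulated factors gives the encrypted image
encrypted = np.asarray(U_mod @ np.diag(s) @ Vh_mod)

# decryption reverses the modulation before recombining
decrypted = (U_mod / spm1) @ np.diag(s) @ (Vh_mod / spm2)
print(np.allclose(decrypted, Z))  # True
```

Without the masks (the keys), the encrypted matrix alone does not reveal Z, since the modulated factors are no longer the unitary factors of any known decomposition of it.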
Ultra-precision machining induced phase decomposition at surface of Zn-Al based alloy
International Nuclear Information System (INIS)
To, S.; Zhu, Y.H.; Lee, W.B.
2006-01-01
The microstructural changes and phase transformation of an ultra-precision machined Zn-Al based alloy were examined using X-ray diffraction and back-scattered electron microscopy techniques. Decomposition of the Zn-rich η phase and the related changes in crystal orientation were detected at the surface of the ultra-precision machined alloy specimen. The effects of the machining parameters, such as cutting speed and depth of cut, on the phase decomposition are discussed in comparison with tensile- and rolling-induced microstructural changes and phase decomposition
Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng
2017-08-01
Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
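The decomposition the authors build on, a multimode relaxation spectrum expressed as a sum of single-relaxation processes, can be sketched numerically. The strengths and relaxation frequencies below are illustrative stand-ins, not measured values for CO2-N2 or CO2-O2 mixtures.

```python
import numpy as np

def relaxation_absorption(f, strengths, relax_freqs):
    """Multimode acoustic relaxation absorption as the sum of single
    relaxation processes; each mode peaks at its relaxation frequency."""
    f = np.asarray(f, dtype=float)[:, None]
    x = f / np.asarray(relax_freqs, dtype=float)
    # classic single-relaxation shape A * x / (1 + x^2), summed over modes
    return (np.asarray(strengths) * x / (1.0 + x ** 2)).sum(axis=1)

# two hypothetical interior modes: capturing their (strength, frequency)
# pairs from measurements at 2N frequencies is the reconstruction task
freqs = np.logspace(2, 6, 200)                    # 100 Hz .. 1 MHz
alpha = relaxation_absorption(freqs, [1.0, 0.3], [1e3, 1e5])
print(alpha.shape)  # (200,)
```

Each interior mode contributes an independent peak, which is why N single-relaxation processes can be recovered from absorption and sound-speed measurements at 2N frequencies.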
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, introduced by Grigoryan (2006) and used in signal and image processing. Both real and complex nonsingular matrices are considered, and examples of QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
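For reference, the classical Givens-rotation route to QR, which the paper takes as its starting point, can be sketched for a real matrix as follows; this is the textbook method, not the authors' heap-transform variant.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition of a real square matrix by Givens rotations."""
    n = A.shape[0]
    Q = np.eye(n)
    R = A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            if R[i, j] != 0.0:
                r = np.hypot(R[j, j], R[i, j])
                c, s = R[j, j] / r, R[i, j] / r
                # rotate rows j and i so that R[i, j] becomes zero
                G = np.array([[c, s], [-s, c]])
                R[[j, i], :] = G @ R[[j, i], :]
                Q[:, [j, i]] = Q[:, [j, i]] @ G.T   # accumulate Q = G1^T G2^T ...
    return Q, R

rng = np.random.default_rng(0)
A = rng.random((5, 5))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))  # True True
```

Each rotation touches only two rows, which is what makes Givens-type schemes attractive for systolic hardware and for the heap-transform reformulation discussed in the paper.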
International Nuclear Information System (INIS)
Odry, Nans
2016-01-01
Deterministic calculation schemes are devised to numerically solve the neutron transport equation in nuclear reactors. Dealing with core-sized problems is so challenging for computers that dedicated core calculations have no choice but to make simplifying assumptions (assembly-scale then core-scale steps). This PhD work aims at overcoming some of these approximations: thanks to important changes in computer architecture and capacity (HPC), one can nowadays solve 3D core-sized problems using both high mesh refinement and the transport operator. It is an essential step forward in order to perform, in the future, reference calculations using deterministic schemes. This work focuses on a spatial domain decomposition method (DDM). Using massive parallelism, DDM allows much more ambitious computations in terms of both memory requirements and calculation time. Developments were performed inside the Sn core solver Minaret, from the new CEA neutronics platform APOLLO3. Only fast reactors (hexagonal periodicity) are considered, even if all kinds of geometries can be dealt with using Minaret. The work has been divided into four steps: 1) The spatial domain decomposition with no overlap is inserted into the standard algorithmic structure of Minaret. The fundamental idea involves splitting a core-sized problem into smaller, independent, spatial sub-problems. Angular flux is exchanged between adjacent sub-domains. In doing so, all combined sub-problems converge to the global solution at the outcome of an iterative process. Various strategies were explored regarding both data management and algorithm design. Results (k_eff and flux) are systematically compared to the reference in a numerical verification step. 2) Introducing more parallelism is an unprecedented opportunity to heighten the performance of deterministic schemes, and domain decomposition is particularly suited to this. A two-layer hybrid parallelism strategy, suited to HPC, is chosen. It benefits from the
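The fundamental idea of step 1, splitting a global problem into spatial sub-problems that exchange interface values until their combination converges to the global solution, can be illustrated on a toy 1D Poisson problem with an overlapping alternating Schwarz iteration. This is a deliberate simplification: the transport setting exchanges angular flux and uses no overlap, whereas the sketch uses overlap and Dirichlet data.

```python
import numpy as np

# 1D Poisson problem -u'' = 1 on [0, 1], u(0) = u(1) = 0;
# the exact solution is u(x) = x (1 - x) / 2
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)
exact = 0.5 * x * (1.0 - x)

def solve_subdomain(f, left, right):
    """Direct solve of -u'' = f on a sub-interval with Dirichlet data."""
    m = len(f)
    A = (np.diag(np.full(m, 2.0)) + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1)) / h ** 2
    rhs = f.copy()
    rhs[0] += left / h ** 2
    rhs[-1] += right / h ** 2
    return np.linalg.solve(A, rhs)

a, b = 40, 60          # overlap region: grid indices 40..60
for _ in range(30):    # exchange interface values until convergence
    u[1:b] = solve_subdomain(np.ones(b - 1), u[0], u[b])             # left
    u[a + 1:n - 1] = solve_subdomain(np.ones(n - 2 - a), u[a], u[-1])  # right
print(np.max(np.abs(u - exact)) < 1e-6)  # True
```

Each sweep solves two independent sub-problems and passes boundary values across the interface, mirroring (in miniature) the iterative exchange of angular flux between Minaret's sub-domains.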
Video steganography based on bit-plane decomposition of wavelet-transformed video
Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji
2004-06-01
This paper presents a steganography method using lossy compressed video which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in discrete wavelet transformed video are quantized into a bit-plane structure and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding and BPCS steganography, and that of Motion-JPEG2000 and BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for twelve bit representation of wavelet coefficients with no noticeable degradation in video quality.
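The bit-plane decomposition that BPCS steganography relies on can be sketched as follows. For brevity the example swaps an entire bit-plane, whereas BPCS replaces only the noise-like regions of each plane to keep the cover image's quality.

```python
import numpy as np

def bit_planes(img):
    """Decompose an 8-bit image into its 8 bit-planes (plane 0 = LSB)."""
    return [(img >> k) & 1 for k in range(8)]

def embed_in_plane(img, plane, secret_bits):
    """Replace one bit-plane with secret data (whole-plane swap for brevity;
    BPCS would replace only noise-like blocks of the plane)."""
    cleared = img & ~np.uint8(1 << plane)
    return cleared | (secret_bits.astype(np.uint8) << plane)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (8, 8), dtype=np.uint8)
secret = rng.integers(0, 2, (8, 8), dtype=np.uint8)
stego = embed_in_plane(img, 0, secret)
# the secret survives exactly in the LSB plane
print(np.array_equal(bit_planes(stego)[0], secret))  # True
```

In the wavelet-based codecs discussed above, the same decomposition is applied to quantized wavelet coefficients rather than to pixel values.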
Base catalyzed decomposition of toxic and hazardous chemicals
International Nuclear Information System (INIS)
Rogers, C.J.; Kornel, A.; Sparks, H.L.
1991-01-01
There are vast amounts of toxic and hazardous chemicals which have pervaded our environment during the past fifty years, leaving us with serious, crucial problems of remediation and disposal. The accumulation of polychlorinated biphenyls (PCBs), polychlorinated dibenzo-p-dioxins (PCDDs, ''dioxins'') and pesticides in soil, sediments and living systems is a serious problem that is receiving considerable attention because of the cancer-causing nature of these synthetic compounds. In 1989 and 1990, US EPA scientists developed two novel chemical processes to effect the dehalogenation of chlorinated solvents, PCBs, PCDDs, PCDFs, PCP and other pollutants in soil, sludge, sediment and liquids. This improved technology employs hydrogen as a nucleophile to replace halogens on halogenated compounds. Hydrogen as a nucleophile is not influenced by steric hindrance as other nucleophiles are, so complete dehalogenation of organohalogens can be achieved. This report discusses base-catalyzed decomposition of toxic and hazardous chemicals
International Nuclear Information System (INIS)
Mendez, M O; Cerutti, S; Bianchi, A M; Corthout, J; Van Huffel, S; Matteucci, M; Penzel, T
2010-01-01
This study analyses two different methods to detect obstructive sleep apnea (OSA) during sleep based only on the ECG signal. OSA is a common sleep disorder caused by repetitive occlusions of the upper airways, which produces a characteristic pattern on the ECG. ECG features, such as heart rate variability (HRV) and the QRS peak area, contain information suitable for a fast, non-invasive and simple screening of sleep apnea. Fifty recordings freely available on Physionet were included in this analysis, subdivided into a training set and a testing set. We investigated the possibility of using the recently proposed method of empirical mode decomposition (EMD) for this application, comparing the results with those obtained through the well-established wavelet analysis (WA). Using these decomposition techniques, several features were extracted from the ECG signal and complemented with a series of standard HRV time-domain measures. The best-performing feature subset, selected through a sequential feature selection (SFS) method, was used as the input of linear and quadratic discriminant classifiers. In this way we were able to classify the signals on a minute-by-minute basis as apneic or non-apneic with different best-subset sizes, obtaining an accuracy of up to 89% with WA and 85% with EMD. Furthermore, 100% correct discrimination of apneic patients from normal subjects was achieved independently of the feature extractor. Finally, the same procedure was repeated by pooling features from standard HRV time-domain measures, EMD and WA together in order to investigate whether the two decomposition techniques could provide complementary features. The obtained accuracy was 89%, similar to that achieved using only wavelet analysis as the feature extractor; however, some complementary features in EMD and WA are evident.
Sparse time-frequency decomposition based on dictionary adaptation.
Hou, Thomas Y; Shi, Zuoqiang
2016-04-13
In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than a training set in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and polluted by noise and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).
Directory of Open Access Journals (Sweden)
Daniel Marcsa
2015-01-01
Full Text Available The analysis and design of electromechanical devices involves the solution of large sparse linear systems and therefore requires high-performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with a parallel forward-backward solver and with a parallel Preconditioned Conjugate Gradient (PCG) solver is introduced into a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine, considering the electromagnetic field, external circuit and rotor movement. The proposed parallel direct solver and the iterative solver with two preconditioners are analyzed with respect to computational efficiency and the number of solver iterations under the different preconditioners. Simulation results for a rotating machine are also presented.
Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm
Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab
2013-07-01
The DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) is based on an accurate estimation of the eigenvalues and eigenvectors of covariance matrix. QR decomposition is implemented with the Coordinate Rotation DIgital Computer (CORDIC) algorithm. QRD requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (Eigen Value Decomposition) processor based on TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.
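The CORDIC iteration underlying the proposed QRD hardware can be sketched in floating point. Vectoring mode, shown below, rotates a vector onto the x-axis using only shifts and adds, which is the operation each Givens rotation in the triangular systolic array reduces to; an actual hardware implementation would use fixed-point arithmetic and a precomputed gain constant.

```python
import math

def cordic_vector(x, y, iterations=32):
    """CORDIC in vectoring mode: drive y to zero with shift-and-add
    rotations, yielding the magnitude and angle of (x, y). Assumes x > 0."""
    K = 1.0          # accumulated inverse of the CORDIC gain
    angle = 0.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
        d = -1.0 if y > 0 else 1.0
        # micro-rotation by +/- atan(2^-i); 2^-i scaling is a shift in hardware
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        angle -= d * math.atan(2.0 ** (-i))
    return K * x, angle   # magnitude, angle (radians)

mag, ang = cordic_vector(3.0, 4.0)
print(round(mag, 6), round(ang, 6))  # 5.0 0.927295
```

The same micro-rotation, run in "rotation mode" with the recorded angle, applies the Givens rotation to the remaining matrix rows, which is how a CORDIC-based TSA performs the full QR decomposition.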
Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy
Directory of Open Access Journals (Sweden)
Duo Hao
2017-11-01
Full Text Available Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas subtracting this sum from the GMV yields the intentional motions. The proposed stabilization method is compared with several known methods, namely the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD) based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
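The relative-entropy test used to separate jitter modes from intentional-motion modes can be sketched as follows. The two "modes" below are synthetic stand-ins (a smooth pan and white jitter), not the output of an actual VMD.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two histograms (discrete RE);
    eps guards against empty bins."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
# stand-ins for two decomposed motion modes: a smooth intentional pan
# and high-frequency jitter
t = np.linspace(0.0, 1.0, 500)
intentional = np.sin(2 * np.pi * t)
jitter = 0.3 * rng.standard_normal(500)
bins = np.linspace(-1.5, 1.5, 33)
h_int, _ = np.histogram(intentional, bins=bins)
h_jit, _ = np.histogram(jitter, bins=bins)
re = relative_entropy(h_int.astype(float), h_jit.astype(float))
print(re > 0)  # True: the two modes' distributions clearly diverge
```

Modes whose amplitude distributions diverge strongly from the smooth reference are flagged as jitter and summed into the correction term.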
Directory of Open Access Journals (Sweden)
Batakliev Todor
2014-06-01
Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism for the reaction of catalytic ozone decomposition is discussed, based on detailed spectroscopic investigations of the catalytic surface showing the existence of peroxide and superoxide surface intermediates.
International Nuclear Information System (INIS)
Zhao, W; Niu, T; Xing, L; Xiong, G; Elmore, K; Min, J; Zhu, J; Wang, L
2015-01-01
Purpose: To significantly improve dual energy CT (DECT) imaging by establishing a new theoretical framework of image-domain material decomposition with incorporation of edge-preserving techniques. Methods: The proposed algorithm, HYPR-NLM, combines the edge-preserving non-local mean filter (NLM) with the HYPR-LR (Local HighlY constrained backPRojection Reconstruction) framework. Image denoising using the HYPR-LR framework depends on the noise level of the composite image, which is the average of the different energy images; for DECT, the composite image is the average of the high- and low-energy images. To further reduce noise, one may want to increase the window size of the HYPR-LR filter, leading to resolution degradation. By incorporating NLM filtering into the HYPR-LR framework, HYPR-NLM reduces the boosted material-decomposition noise using energy information redundancies as well as the non-local mean. We demonstrate the noise reduction and resolution preservation of the algorithm with both an iodine concentration numerical phantom and clinical patient data by comparing the HYPR-NLM algorithm to direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). Results: The results show the iterative material decomposition method reduces noise to the lowest level and provides improved DECT images. HYPR-NLM significantly reduces noise while preserving the accuracy of quantitative measurement and resolution. For the iodine concentration numerical phantom, the averaged noise levels are about 2.0, 0.7, 0.2 and 0.4 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. For the patient data, the noise levels of the water images are about 0.36, 0.16, 0.12 and 0.13 for direct inversion, HYPR-LR, Iter-DECT and HYPR-NLM, respectively. Difference images of both HYPR-LR and Iter-DECT show an edge effect, while no significant edge effect is shown for HYPR-NLM, suggesting spatial resolution is well preserved for HYPR-NLM. Conclusion: HYPR
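The direct-matrix-inversion baseline against which the proposed methods are compared can be sketched for a noise-free two-material case. The attenuation coefficients below are illustrative numbers, not calibrated DECT values.

```python
import numpy as np

# hypothetical linear attenuation of water and iodine at low/high energy
# (rows: low/high energy; columns: water, iodine) -- illustrative only
M = np.array([[1.0, 25.0],
              [0.8, 12.0]])

# ground-truth material maps: uniform water with an iodine insert
water_true = np.full((16, 16), 1.0)
iodine_true = np.zeros((16, 16))
iodine_true[4:12, 4:12] = 0.01

# forward model: each energy image is a linear mix of the material maps
low = M[0, 0] * water_true + M[0, 1] * iodine_true
high = M[1, 0] * water_true + M[1, 1] * iodine_true

# direct matrix inversion: solve the 2x2 system for every pixel at once
M_inv = np.linalg.inv(M)
stack = np.stack([low.ravel(), high.ravel()])        # shape (2, npix)
water, iodine = (M_inv @ stack).reshape(2, 16, 16)
print(np.allclose(water, water_true), np.allclose(iodine, iodine_true))  # True True
```

In the noise-free case the inversion is exact; with real data this step strongly amplifies noise, which is why the abstract compares it against HYPR-LR, Iter-DECT and HYPR-NLM.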
International Nuclear Information System (INIS)
Umegaki, Kikuo; Miki, Kazuyoshi
1990-01-01
A numerical method is developed to solve three-dimensional incompressible viscous flow in complicated geometry using curvilinear coordinate transformation and a domain decomposition technique. In this approach, a complicated flow domain is decomposed into several subdomains, each of which has an overlapping region with neighboring subdomains. Curvilinear coordinates are numerically generated in each subdomain using the boundary-fitted coordinate transformation technique. A modified SMAC scheme is developed to solve the Navier-Stokes equations, in which the convective terms are discretized by the QUICK method. A fully vectorized computer program is developed on the basis of the proposed method. The program is applied to flow analysis in semicircularly curved, 90° elbow and T-shaped branched pipes. Computational time with the vector processor of the HITAC S-810/20 supercomputer system is reduced to 1/10∼1/20 of that with a scalar processor. (author)
International Nuclear Information System (INIS)
Wang, Yamin; Wu, Lei
2016-01-01
This paper presents a comprehensive analysis of practical challenges of empirical mode decomposition (EMD) based algorithms for wind speed and solar irradiation forecasts that have been largely neglected in the literature, and proposes an alternative approach to mitigate such challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limited number of new data samples, may significantly differ from those used in training the forecasting models. In turn, forecasting models established on the original sub-series may not be suitable for newly decomposed sub-series and have to be retrained frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into forecasting models of individual decomposed sub-series, because the correlation between the original data and environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods can be worse than that of non-decomposition based forecasting models, so they may not be effective in practical cases. Finally, an approximated forecasting model based on EMD is proposed to mitigate these challenges; it achieves better forecasting results than existing EMD-based forecasting algorithms and non-decomposition based forecasting models on practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to
Naya, Tomoki; Kohga, Makoto
2015-04-01
Ammonium nitrate (AN) has attracted much attention as an oxidizer due to its clean burning nature. However, AN-based composite propellants have the disadvantages of a low burning rate and poor ignitability. In this study, we added a nitramine, cyclotrimethylene trinitramine (RDX) or cyclotetramethylene tetranitramine (HMX), as a high-energy material to AN propellants to overcome these disadvantages. The thermal decomposition and burning rate characteristics of the prepared propellants were examined as the ratio of AN to nitramine was varied. In the thermal decomposition process, AN/RDX propellants showed unique mass-loss peaks in the lower temperature range that were not observed for AN or RDX propellants alone: AN and RDX decomposed continuously, almost as a single oxidizer, in the AN/RDX propellant. In contrast, AN/HMX propellants exhibited thermal decomposition characteristics similar to those of AN and HMX individually, which decomposed almost separately in the thermal decomposition of the AN/HMX propellant. The ignitability was improved and the burning rate increased by the addition of nitramine for both AN/RDX and AN/HMX propellants, and the increase in burning rate was greater for AN/RDX than for AN/HMX. The difference in thermal decomposition and burning characteristics was caused by the interaction between AN and RDX.
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean-envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed; despite some effort, they appear to perform poorly and are very time-consuming. So in this paper, an extension of the PDE-based approach to 2-D space is extensively described. The approach has been applied to both signal and image decomposition, and the obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
A novel ECG data compression method based on adaptive Fourier decomposition
Tan, Chunyu; Zhang, Liming
2017-12-01
This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach that can decompose a signal with fast convergence and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not apply any preprocessing operation before compression. Huffman coding is employed for further compression. Validated on 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root-mean-square difference (PRD) of 1.47% on average with N = 8 decomposition steps, and exhibits a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
Adaptive variational mode decomposition method for signal processing based on mode characteristic
Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng
2018-07-01
Variational mode decomposition is a completely non-recursive decomposition model in which all modes are extracted concurrently. However, the model requires a preset mode number, which limits its adaptability, since a large deviation in the preset mode number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to automatically determine the mode number based on characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant, and comparisons against VMD, EMD and EWT were conducted to evaluate its performance. The results indicate that the proposed method is strongly adaptive and robust to noise, and can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions
DEFF Research Database (Denmark)
Hansen, Per Christian; Jensen, Søren Holdt
2007-01-01
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both...... with working Matlab code and applications in speech processing....
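The rank-reduction paradigm the survey describes can be sketched as truncating the SVD of a Hankel (trajectory) matrix built from the noisy signal, then averaging back along anti-diagonals. This is a minimal sketch; the window length, rank, and test signal below are illustrative choices, not values from the survey.

```python
import numpy as np

def hankel(x, m):
    """m-row Hankel (trajectory) matrix of signal x."""
    n = len(x) - m + 1
    return np.array([x[i:i+n] for i in range(m)])

def subspace_denoise(x, m=40, rank=4):
    """Truncated-SVD rank reduction followed by anti-diagonal averaging."""
    H = hankel(x, m)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # best rank-r approximation
    # Average along anti-diagonals to map the matrix back to a signal.
    y = np.zeros(len(x))
    cnt = np.zeros(len(x))
    for i in range(m):
        y[i:i+Hr.shape[1]] += Hr[i]
        cnt[i:i+Hr.shape[1]] += 1
    return y / cnt

rng = np.random.default_rng(0)
t = np.arange(400) / 8000.0
clean = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*880*t)   # two real tones
noisy = clean + 0.3*rng.standard_normal(t.size)
enh = subspace_denoise(noisy)                               # rank 4 = 2 tones
```

Two real sinusoids span a rank-4 signal subspace, so keeping four singular triplets retains the speech-like component while discarding most of the broadband noise.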
Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures
Sukkari, Dalal E.; Ltaief, Hatem; Faverge, Mathieu; Keyes, David E.
2017-01-01
This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.
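The QDWH iteration underlying the implementation can be sketched serially in NumPy. This sketch makes simplifying assumptions the paper does not: the scaling and the lower bound on the smallest singular value are taken from an exact SVD (a scalable code uses cheap norm and condition estimates), matrices are real, and no task-based parallelism or StarPU scheduling is expressed.

```python
import numpy as np

def qdwh_polar(A, tol=1e-12, maxit=20):
    """QR-based dynamically weighted Halley (QDWH) iteration for A = U H.

    Sketch for real square matrices: alpha and l come from an exact SVD
    here, whereas a production code would use inexpensive estimates.
    """
    n = A.shape[0]
    sv = np.linalg.svd(A, compute_uv=False)
    X = A / sv[0]                      # scale so ||X||_2 = 1
    l = sv[-1] / sv[0]                 # lower bound on sigma_min(X)
    I = np.eye(n)
    for _ in range(maxit):
        # Dynamic Halley coefficients; (a, b, c) -> (3, 1, 3) as l -> 1.
        d = (4.0*(1.0 - l*l)/l**4) ** (1.0/3.0)
        a = np.sqrt(1.0 + d) + 0.5*np.sqrt(8.0 - 4.0*d + 8.0*(2.0 - l*l)/(l*l*np.sqrt(1.0 + d)))
        b = (a - 1.0)**2 / 4.0
        c = a + b - 1.0
        # QR-based update: factor [sqrt(c) X; I] instead of forming inverses.
        Q, _ = np.linalg.qr(np.vstack([np.sqrt(c)*X, I]))
        Q1, Q2 = Q[:n], Q[n:]
        Xnew = (b/c)*X + (a - b/c)/np.sqrt(c)*(Q1 @ Q2.T)
        l = min(1.0, l*(a + b*l*l)/(1.0 + c*l*l))
        done = np.linalg.norm(Xnew - X, 'fro') < tol
        X = Xnew
        if done:
            break
    H = X.T @ A
    return X, 0.5*(H + H.T)            # orthogonal U and symmetrized H

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
U, H = qdwh_polar(M)
```

The identity structure of the stacked matrix [sqrt(c) X; I] is what the task-based implementation exploits to cut the QR cost; the dense sketch above ignores that saving.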
A Decomposition-Based Pricing Method for Solving a Large-Scale MILP Model for an Integrated Fishery
Directory of Open Access Journals (Sweden)
M. Babul Hasan
2007-01-01
The IFP can be decomposed into a trawler-scheduling subproblem and a fish-processing subproblem in two different ways by relaxing different sets of constraints. We tried conventional decomposition techniques including subgradient optimization and Dantzig-Wolfe decomposition, both of which were unacceptably slow. We then developed a decomposition-based pricing method for solving the large fishery model, which gives excellent computation times. Numerical results for several planning horizon models are presented.
Kernel based eigenvalue-decomposition methods for analysing ham
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming
2010-01-01
methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. A comparison of the most...... useful factor of PCA and kernel based PCA respectively is shown in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared to the traditional factor. After the orthogonal transformation a simple thresholding
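Plain kernel PCA, the simplest of the kernel transformations mentioned, can be sketched as follows. This is not the kernel MAF/MNF variant the record refers to; it uses an RBF kernel on synthetic concentric-ring data (standing in for the two meat types), with illustrative parameter values.

```python
import numpy as np

def rbf_kernel_pca(X, gamma, k=2):
    """Kernel PCA with an RBF kernel: build K, double-centre it in feature
    space, eigendecompose, and project onto the leading k components."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma*(sq[:, None] + sq[None, :] - 2.0*X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n))/n
    Kc = J @ K @ J                                # centring in feature space
    w, V = np.linalg.eigh(Kc)
    order = np.argsort(w)[::-1][:k]
    w, V = w[order], V[:, order]
    Z = Kc @ (V/np.sqrt(np.maximum(w, 1e-12)))    # training-set projections
    return Z, w

# Two concentric rings: not linearly separable, but radially separable,
# so linear PCA fails while kernel PCA finds a discriminating factor.
rng = np.random.default_rng(1)
ang = rng.uniform(0.0, 2.0*np.pi, 200)
rad = np.r_[np.full(100, 1.0), np.full(100, 3.0)] + 0.05*rng.standard_normal(200)
X = np.c_[rad*np.cos(ang), rad*np.sin(ang)]
Z, w = rbf_kernel_pca(X, gamma=0.5, k=2)
```

The leading kernel component separates the two rings by their class means, mirroring how the kernel factor in the record separates the two types of meat where the traditional factor cannot.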
Michelson interferometer based interleaver design using classic IIR filter decomposition.
Cheng, Chi-Hao; Tang, Shasha
2013-12-16
An elegant method to design a Michelson interferometer based interleaver using a classic infinite impulse response (IIR) filter, such as a Butterworth, Chebyshev, or elliptic filter, as a starting point is presented. The proposed design method allows engineers to design a Michelson interferometer based interleaver from specifications seamlessly. Simulation results are presented to demonstrate the validity of the proposed design method.
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
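The Kronecker structure at the heart of the approach can be sketched directly: a separable 3-D correlation model stores only three small 1-D matrices, and any entry of the huge 3-D correlation matrix is a product of 1-D entries. The Gaussian correlation shape and length scales below are illustrative, not the paper's configuration.

```python
import numpy as np

def corr1d(n, L):
    """1-D Gaussian correlation matrix with length scale L (illustrative)."""
    i = np.arange(n)
    return np.exp(-0.5*((i[:, None] - i[None, :])/L)**2)

nx, ny, nz = 5, 4, 3
Cx, Cy, Cz = corr1d(nx, 2.0), corr1d(ny, 1.5), corr1d(nz, 1.0)

# The separable 3-D correlation is the Kronecker product of the 1-D pieces:
# only 25 + 16 + 9 entries are stored instead of the full 60 x 60 matrix.
C = np.kron(Cx, np.kron(Cy, Cz))

def flat(ix, iy, iz):
    """Flatten a 3-D grid index consistently with the kron ordering."""
    return (ix*ny + iy)*nz + iz
```

At realistic grid sizes the full matrix is never formed; decompositions of the 1-D factors (the paper's low-resolution EOFs plus spline interpolation) substitute for a direct decomposition of C.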
Grid-based electronic structure calculations: The tensor decomposition approach
Energy Technology Data Exchange (ETDEWEB)
Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)
2016-05-01
We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on a low-rank approximation of the three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the use of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules, and clusters of hydrogen atoms. All tests show systematic convergence with the required accuracy.
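Why low-rank structure turns an O(n^3) grid into O(n) storage can be illustrated with a toy orbital. This sketch (not the paper's tensor format) uses the fact that a Gaussian of r^2 is exactly separable, so a truncated SVD of one unfolding recovers it as a single rank-1 term.

```python
import numpy as np

# A Gaussian orbital-like function is exactly separable, i.e. rank 1:
# f(x, y, z) = exp(-(x^2 + y^2 + z^2)) = g(x) g(y) g(z).
n = 64
x = np.linspace(-4.0, 4.0, n)
g = np.exp(-x**2)
F = g[:, None, None]*g[None, :, None]*g[None, None, :]   # full n^3 grid

# Recover the low-rank structure from the mode-1 unfolding via truncated SVD;
# storage drops from n^3 values to O(n) per retained rank-1 term.
U, s, Vt = np.linalg.svd(F.reshape(n, n*n), full_matrices=False)
F1 = s[0]*np.outer(U[:, 0], Vt[0]).reshape(n, n, n)      # rank-1 approximation
```

Real orbitals are not exactly separable, but their singular values decay rapidly, so a modest number of such terms reproduces them to the required accuracy while keeping the linear-in-n scaling.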
Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer
2013-10-01
The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter was achieved by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-test were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. The ANOVA followed by a post hoc Bonferroni was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted features means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV
Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems
Directory of Open Access Journals (Sweden)
Weeraddana Chathuranga
2010-01-01
We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for the joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results compare the performance of the proposed algorithm to Lagrange-relaxation-based suboptimal methods as well as to the optimal exhaustive-search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.
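The two-subproblem structure can be illustrated with a deliberately simplified sketch: assign subcarriers greedily under equal power, then re-optimize power over the chosen gains by weighted water-filling. This is not the paper's approximated primal decomposition; all weights, gains, and budgets are illustrative.

```python
import numpy as np

def waterfill(g, w, P):
    """Weighted water-filling: max sum w_i log(1 + p_i g_i) s.t. sum p_i = P.
    Bisection on the water level mu, with p_i = max(0, w_i*mu - 1/g_i)."""
    lo, hi = 0.0, 1e6
    for _ in range(100):
        mu = 0.5*(lo + hi)
        if np.maximum(0.0, w*mu - 1.0/g).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, w*lo - 1.0/g)   # feasible: total power <= P

rng = np.random.default_rng(0)
K, N, P = 3, 16, 8.0                       # users, subcarriers, power budget
wgt = np.array([1.0, 1.5, 0.8])            # user weights (illustrative)
gain = rng.exponential(1.0, (K, N))        # per-user, per-subcarrier gains

# Subproblem 1: with equal power, give each subcarrier to the user with the
# largest weighted rate.  Subproblem 2: re-optimize power over chosen gains.
p_eq = P/N
assign = np.argmax(wgt[:, None]*np.log1p(p_eq*gain), axis=0)
g_sel = gain[assign, np.arange(N)]
w_sel = wgt[assign]
p_opt = waterfill(g_sel, w_sel, P)
rate_eq = float(np.sum(w_sel*np.log1p(p_eq*g_sel)))
rate_opt = float(np.sum(w_sel*np.log1p(p_opt*g_sel)))
```

Because equal power is a feasible point of the power subproblem, the water-filled allocation can only match or improve the weighted sum rate; the real algorithm additionally iterates the assignment.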
MRI Volume Fusion Based on 3D Shearlet Decompositions.
Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong
2014-01-01
Nowadays many MRI scans can give 3D volume data with different contrasts, but observers may want to view the various contrasts in the same 3D volume. Conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on the 3D band-limited shearlet transform (3D BLST) is proposed. The method is evaluated on MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perceptual impression and the quality indices indicate that the proposed method performs better than conventional 2D wavelet, DT CWT, and 3D wavelet, DT CWT based fusion methods.
Bertrand, G.; Comperat, M.; Lallemant, M.; Watelle, G.
1980-03-01
Copper sulfate pentahydrate dehydration into trihydrate was investigated using monocrystalline platelets with varying crystallographic orientations. The morphological and kinetic features of the trihydrate domains were examined. Different shapes were observed: polygons (parallelograms, hexagons) and ellipses; their conditions of occurrence are reported in the (P, T) diagram. At first (for about 2 min), the ratio of the long to the short axes of elliptical domains changes with time; these subsequently develop homothetically and the rate ratio is then only pressure dependent. Temperature influence is inferred from that of pressure. Polygonal shapes are time dependent and result in ellipses. So far, no model can be put forward. Yet, qualitatively, the polygonal shape of a domain may be explained by the prevalence of the crystal arrangement and the elliptical shape by that of the solid tensorial properties. The influence of those factors might be modulated versus pressure, temperature, interface extent, and, thus, time.
Directory of Open Access Journals (Sweden)
Tsz-Chun Wu
2018-01-01
In aviation, rapidly fluctuating headwind/tailwind may lead to high horizontal windshear, posing potential safety hazards to aircraft. So far, windshear alerts are issued by considering directly the headwind differences measured along the aircraft flight path (e.g. based on Doppler velocities from remote sensing). In this paper, we propose and demonstrate a new methodology for windshear alerting based on spectral decomposition. Through Fourier transformation of the LIDAR-based headwind profiles in 2012 and 2014 at arrival corridors 07LA and 25RA of the Hong Kong International Airport (HKIA), we study the occurrence of windshear in the spectral domain. Using a threshold-based approach, we investigate the performance of single- and multiple-channel detection algorithms and validate the results against pilot reports. With the receiver operating characteristic (ROC) diagram, we demonstrate the feasibility of this approach by showing a comparable performance of the triple-channel detection algorithm and a consistent hit-rate gain of 4.5 to 8% (for 07LA in particular) in quadruple-channel detection against GLYGA, the currently operational algorithm at HKIA. We also observe that some length scales are particularly sensitive to windshear events, which may be closely related to the local geography of HKIA. This study opens a new door for windshear detection in the spectral domain for the aviation community.
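The threshold-per-spectral-channel idea can be sketched as follows: Fourier-transform the along-path headwind profile, take the peak amplitude in each wavelength band ("channel"), and alert if any channel exceeds its threshold. The bands, thresholds, and synthetic profiles below are illustrative, not the GLYGA or LIDAR configuration.

```python
import numpy as np

def band_amplitudes(headwind, dx, bands):
    """FFT the headwind profile and return the peak spectral amplitude
    inside each wavelength band (bands given in metres)."""
    n = len(headwind)
    amp = np.abs(np.fft.rfft(headwind - headwind.mean())) * 2.0/n
    freq = np.fft.rfftfreq(n, dx)                # cycles per metre
    wl = np.full(n//2 + 1, np.inf)
    wl[1:] = 1.0/freq[1:]                        # wavelength per bin
    out = []
    for lo, hi in bands:
        sel = (wl >= lo) & (wl <= hi)
        out.append(amp[sel].max() if sel.any() else 0.0)
    return np.array(out)

def multichannel_alert(headwind, dx, bands, thresholds):
    """Alert if any channel's band amplitude exceeds its threshold."""
    return bool(np.any(band_amplitudes(headwind, dx, bands) > thresholds))

x = np.arange(256) * 100.0                          # 100 m samples along path
bands = [(800, 1600), (1600, 3200), (3200, 6400)]   # wavelength channels (m)
thr = np.array([2.0, 2.0, 2.0])                     # illustrative thresholds
calm = 0.5*np.sin(2*np.pi*x/20000.0)                # gentle long-wave trend
gusty = calm + 4.0*np.sin(2*np.pi*x/2000.0)         # strong 2 km oscillation
alert_gusty = multichannel_alert(gusty, 100.0, bands, thr)
alert_calm = multichannel_alert(calm, 100.0, bands, thr)
```

Adding more channels (triple, quadruple) trades false-alarm rate against hit rate, which is exactly what the ROC comparison in the record quantifies.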
A frequency domain radar interferometric imaging (FII) technique based on high-resolution methods
Luce, H.; Yamamoto, M.; Fukao, S.; Helal, D.; Crochet, M.
2001-01-01
In the present work, we propose a frequency-domain interferometric imaging (FII) technique for better knowledge of the vertical distribution of the atmospheric scatterers detected by MST radars. This is an extension of the dual frequency-domain interferometry (FDI) technique to multiple frequencies. Its objective is to reduce the ambiguity, inherent in the FDI technique, that results from the use of only two adjacent frequencies. Different methods commonly used in antenna array processing are first described within the context of application to the FII technique: Fourier-based imaging, Capon's method, and the singular value decomposition method used with the MUSIC algorithm. Some preliminary simulations and tests performed on data collected with the middle and upper atmosphere (MU) radar (Shigaraki, Japan) are also presented. This work is a first step in the development of the FII technique, which seems very promising.
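Capon's method, one of the estimators listed, can be sketched for this setting: a few closely spaced carriers act like an antenna array, and scatterer range maps to inter-frequency phase shifts. All numbers below (carrier spacing, layer ranges, SNR) are illustrative, not MU-radar parameters.

```python
import numpy as np

c = 3.0e8
f = 46.5e6 + 1.0e5*np.arange(5)          # 5 carriers, 100 kHz apart
ranges_true = np.array([3950.0, 4600.0]) # two scattering layers (m)
steer = lambda r: np.exp(-2j*np.pi*2.0*f*r/c)   # two-way propagation phase

# Snapshots: random complex amplitudes per layer plus receiver noise.
rng = np.random.default_rng(2)
nsnap = 200
Amat = np.column_stack([steer(r) for r in ranges_true])      # (5, 2)
S = rng.standard_normal((2, nsnap)) + 1j*rng.standard_normal((2, nsnap))
noise = 0.1*(rng.standard_normal((5, nsnap)) + 1j*rng.standard_normal((5, nsnap)))
X = Amat @ S + noise

# Capon power spectrum over a range scan: P(r) = 1 / (a(r)^H R^-1 a(r)).
R = X @ X.conj().T/nsnap + 1e-6*np.eye(5)   # diagonally loaded covariance
Rinv = np.linalg.inv(R)
scan = np.arange(3800.0, 4900.0, 2.0)
spec = np.array([1.0/np.real(steer(r).conj() @ Rinv @ steer(r)) for r in scan])
r_hat = scan[np.argmax(spec)]               # strongest-layer range estimate
```

Note the range ambiguity interval c/(2*df) = 1500 m set by the 100 kHz spacing: the scan window must stay narrower than that, which is the multi-frequency analogue of the two-frequency FDI ambiguity the FII technique is designed to reduce.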
Towards Domain-specific Flow-based Languages
DEFF Research Database (Denmark)
Zarrin, Bahram; Baumeister, Hubert; Sarjoughian, Hessam S.
2018-01-01
describe their problems and solutions, instead of using general purpose programming languages. The goal of these languages is to improve the productivity and efficiency of the development and simulation of concurrent scientific models and systems. Moreover, they help to expose parallelism and to specify...... the concurrency within a component or across different independent components. In this paper, we introduce the concept of domain-specific flow-based languages, which allows domain experts to use flow-based languages adapted to a particular problem domain. Flow-based programming is used to support concurrency, while......Due to the significant growth of the demand for data-intensive computing, in addition to the emergence of new parallel and distributed computing technologies, scientists and domain experts are leveraging languages specialized for their problem domain, i.e., domain-specific languages, to help them
Domain Adaptation for Pedestrian Detection Based on Prediction Consistency
Directory of Open Access Journals (Sweden)
Yu Li-ping
2014-01-01
Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source-domain samples with scarce target-domain samples to create a scene-specific pedestrian detector that performs as well as if rich target-domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, derived from the consistency of predictions with the source classifications, to selectively choose the source-domain samples showing positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially with insufficient labeled data in the target scene.
Directory of Open Access Journals (Sweden)
Changyun Liu
2017-01-01
A multisensor scheduling algorithm based on hybrid task decomposition and modified binary particle swarm optimization (MBPSO) is proposed. Firstly, aiming at the complex relationship between sensor resources and tasks, a hybrid task decomposition method is presented and the resource scheduling problem is decomposed into subtasks; the sensor-resource scheduling problem thereby becomes a matching problem between sensors and subtasks. Secondly, a resource-match optimization model based on the sensor resources and tasks is established, which considers several factors such as target priority, detection benefit, handover times, and resource load. Finally, the MBPSO algorithm is proposed to solve the match optimization model effectively, based on improved update rules for particle velocity and position that use a doubt factor and a modified sigmoid function. The experimental results show that the proposed algorithm is better in terms of convergence velocity, searching capability, solution accuracy, and efficiency.
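Standard binary PSO with a sigmoid transfer function, the base algorithm that MBPSO modifies, can be sketched for a toy sensor/subtask matching problem. The paper's doubt factor and modified sigmoid are not reproduced here, and the benefit/cost model is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
S, T = 4, 6                        # sensors, subtasks
benefit = rng.uniform(0.0, 1.0, (S, T))
cost = rng.uniform(0.1, 0.4, (S, T))
budget = 1.5                       # per-sensor resource-load limit

def fitness(x):
    """x is a binary S x T match matrix; penalize overloaded sensors and
    subtasks served by more than one sensor (illustrative model)."""
    load_pen = np.maximum(0.0, (x*cost).sum(axis=1) - budget).sum()
    dup_pen = np.maximum(0.0, x.sum(axis=0) - 1.0).sum()
    return (x*benefit).sum() - 5.0*(load_pen + dup_pen)

npart, iters = 30, 100
pos = (rng.uniform(size=(npart, S, T)) < 0.3).astype(float)
vel = np.zeros((npart, S, T))
pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pfit)].copy()
gfit0 = gfit = pfit.max()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, npart, S, T))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    prob = 1.0/(1.0 + np.exp(-vel))              # sigmoid transfer function
    pos = (rng.uniform(size=pos.shape) < prob).astype(float)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pfit
    pbest[better], pfit[better] = pos[better], fit[better]
    if pfit.max() > gfit:
        gfit = pfit.max()
        gbest = pbest[np.argmax(pfit)].copy()
```

The global best is monotonically non-decreasing by construction; the paper's doubt factor is aimed at the premature-convergence behaviour this plain version can exhibit.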
International Nuclear Information System (INIS)
Masiello, Emiliano; Martin, Brunella; Do, Jean-Michel
2011-01-01
A new development of the IDT solver is presented for large reactor core applications in XYZ geometries. The multigroup discrete-ordinates neutron transport equation is solved using a Domain-Decomposition (DD) method coupled with Coarse-Mesh Finite Differences (CMFD). The latter is used to accelerate the DD convergence rate. In particular, the external power iterations are preconditioned to stabilize the oscillatory behavior of the DD iterative process. A set of critical 2-D and 3-D numerical tests on a single processor is presented to analyze the performance of the method. The results show that the application of CMFD to the DD method is a good candidate for large 3D full-core parallel applications. (author)
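The basic domain-decomposition iteration (without the CMFD acceleration, and for a 1-D diffusion problem rather than multigroup transport) can be sketched as an overlapping alternating Schwarz method: each subdomain is solved with the latest neighbour values as boundary data at the artificial interfaces. Everything below is a deliberately simplified illustration of the DD idea, not the IDT scheme.

```python
import numpy as np

# Alternating (multiplicative) Schwarz on -u'' = f, u(0) = u(1) = 0,
# with two overlapping subdomains on a uniform grid.
n = 101
h = 1.0/(n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi*x)
u_exact = np.sin(np.pi*x)

def solve_sub(u, lo, hi):
    """Direct tridiagonal solve of -u'' = f on nodes lo..hi (inclusive),
    with u[lo] and u[hi] held fixed as Dirichlet data."""
    m = hi - lo - 1
    A = (np.diag(np.full(m, 2.0))
         + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1))
    rhs = h*h*f[lo+1:hi]
    rhs[0] += u[lo]
    rhs[-1] += u[hi]
    u[lo+1:hi] = np.linalg.solve(A, rhs)

u = np.zeros(n)
for sweep in range(60):
    solve_sub(u, 0, 60)    # subdomain 1: [0, 0.6]
    solve_sub(u, 40, 100)  # subdomain 2: [0.4, 1], overlap of 20 nodes
err = np.max(np.abs(u - u_exact))
```

The error contracts geometrically at a rate set by the overlap; a coarse-grid correction such as CMFD removes the slow global error components and is what makes the iteration scale to full-core problems.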
Malagón-Romero, A.; Luque, A.
2018-04-01
At high pressure, electric discharges typically grow as thin, elongated filaments. In a numerical simulation this large aspect ratio should ideally translate into a narrow, cylindrical computational domain that envelops the discharge as closely as possible. However, the development of the discharge is driven by electrostatic interactions and, if the computational domain is not wide enough, the boundary conditions imposed on the electrostatic potential at the external boundary have a strong effect on the discharge. Most numerical codes circumvent this problem either by using a wide computational domain or by calculating the boundary conditions by integrating the Green's function of an infinite domain. Here we describe an accurate and efficient method to impose free boundary conditions in the radial direction for an elongated electric discharge. To facilitate the use of our method we provide a sample implementation. Finally, we apply the method to solve Poisson's equation in cylindrical coordinates with free boundary conditions in both the radial and longitudinal directions. This case is of particular interest for the initial stages of discharges in long gaps or natural discharges in the atmosphere, where it is not practical to extend the simulation volume to be bounded by two electrodes.
Near-lossless multichannel EEG compression based on matrix and tensor decompositions.
Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej
2013-05-01
A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
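The "lossy plus residual coding" principle can be sketched with a truncated-SVD lossy layer and a uniform quantizer on the residual, which bounds the maximum absolute reconstruction error by construction. The coder choice, rank, and error bound below are illustrative stand-ins for the matrix/tensor coders of the paper, and the entropy-coding stage is omitted.

```python
import numpy as np

def lossy_plus_residual(X, rank, eps):
    """Lossy layer: truncated SVD of the channel x time matrix.
    Residual layer: uniform quantization with step 2*eps, so the maximum
    absolute reconstruction error is at most eps."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lossy = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    resid = X - lossy
    q = np.round(resid/(2.0*eps)).astype(np.int32)   # integers -> entropy coder
    return lossy, q

def reconstruct(lossy, q, eps):
    return lossy + q*(2.0*eps)

# Toy 8-channel recording: correlated sinusoids plus small noise.
rng = np.random.default_rng(4)
t = np.arange(512)
X = np.array([np.sin(0.05*t + ph) for ph in np.linspace(0.0, 1.0, 8)])
X += 0.01*rng.standard_normal(X.shape)
lossy, q = lossy_plus_residual(X, rank=2, eps=0.004)
Xr = reconstruct(lossy, q, eps=0.004)
```

Because the channels are phase-shifted copies of one tone, rank 2 captures almost all the energy and the residual integers stay small, which is what makes the residual layer cheap to entropy-code.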
Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform
Zheng, Yang; Chen, Xihao; Zhu, Rui
2017-07-01
Frequency hopping (FH) signals are widely adopted by military communications as a kind of low-probability-of-interception signal, so FH signal detection algorithms are an important research topic. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements simultaneously, owing to the influence of the window function. To solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise from the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm accounts for both time resolution and frequency resolution, and correspondingly the accuracy of FH signal detection is improved.
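The HHT half of such a scheme can be illustrated with scipy's analytic signal: the instantaneous frequency derived from the Hilbert transform exposes the hop instants directly, with no window-induced resolution trade-off. The wavelet denoising stage is omitted, and the signal parameters below are invented for the demo.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# toy FH signal: hops from 100 Hz to 250 Hz at t = 0.5 s
freq = np.where(t < 0.5, 100.0, 250.0)
x = np.sin(2 * np.pi * np.cumsum(freq) / fs)

# instantaneous frequency from the phase of the analytic signal
phase = np.unwrap(np.angle(hilbert(x)))
inst_f = np.diff(phase) * fs / (2 * np.pi)

# robust per-segment estimates (medians avoid edge transients)
f_before = np.median(inst_f[50:450])
f_after = np.median(inst_f[550:950])
```

A large jump in `inst_f` marks a hop, giving sample-level time resolution while the frequency estimate stays exact for a clean tone.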
Tissue artifact removal from respiratory signals based on empirical mode decomposition.
Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-05-01
On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts is therefore critical to effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD-based algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the conventionally applied low-pass filtering confirmed the effectiveness of the technique in tissue artifact removal.
Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition
Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan
2015-01-01
This study presents a divide-and-conquer (DC) approach based on feature-space decomposition for classification. When large-scale datasets are present, typical approaches employ truncated kernel methods on the feature space or DC approaches on the sample space. However, these do not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...
Li, Guohui; Zhang, Songling; Yang, Hong
2017-01-01
Aiming at the irregularity of nonlinear signal and its predicting difficulty, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and clustering analysis is proposed. Firstly, the original data is decomposed by ESMD to obtain the finite number of intrinsic mode functions (IMFs) and residuals. Secondly, the fuzzy c-means is used to cluster the decomposed components, and then the deep belief network (DBN) is used to predict it. Finally, the reconstructed ...
Ford, Neville J.; Connolly, Joseph A.
2009-07-01
We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.
Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta
2016-01-01
This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and the visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of same size after shuffling it and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images, and is robust to withstand several image processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
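The share-generation step can be sketched as below: one share bit per block, obtained by comparing an element of the left singular matrix with its counterpart in the right singular matrix. This is a simplification of the paper's scheme; the sub-band transform, shuffling, and random block selection are omitted, and the element indices are illustrative.

```python
import numpy as np

def share_bit(block):
    """One share bit per block: compare an element of the first column
    of U with the corresponding element of V (simplified)."""
    U, s, Vt = np.linalg.svd(block)
    V = Vt.T
    return 1 if U[0, 0] >= V[0, 0] else 0

rng = np.random.default_rng(1)
blocks = [rng.standard_normal((8, 8)) for _ in range(16)]
share = [share_bit(b) for b in blocks]    # ownership/identification share
```

Because singular vectors are stable under mild image-processing attacks, the recomputed share matches the registered one, which is what makes the verification robust.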
Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition
Directory of Open Access Journals (Sweden)
Chunfu Wu
2015-01-01
Full Text Available For the monocular camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm which is based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when object’s imaging depth and camera’s position extrinsic parameters are unknown. Firstly, the homography’s particular properties caused by mobile robot’s 2-DOF motion are taken into account to induce a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, incorporated with the desired view’s single feature point, are utilized to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are exploited to construct an adaptive regulation control law, followed by the experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient, but also highly precise. Meanwhile, the designed control law can well enable mobile robot position and orientation regulation despite the lack of depth information and camera’s position extrinsic parameters.
Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun
2017-08-07
This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., multiobjective traveling salesman problem (MOTSP), multiobjective project scheduling problem (MOPSP), belong to this problem class and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach. Through this, feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure. And problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, the decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
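The decomposition strategy described above, converting one multiobjective problem into single-objective subproblems via weight vectors, is commonly realized with the weighted Tchebycheff scalarization. The abstract does not name the exact scalarization used, so this is a generic sketch with illustrative values.

```python
import numpy as np

def tchebycheff(f, weights, z_star):
    """Scalarize objective vector f for each weight vector w:
    g(x | w, z*) = max_i w_i * |f_i - z*_i|."""
    return np.max(weights * np.abs(f - z_star), axis=-1)

# two objectives, three subproblems defined by three weight vectors
W = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
z = np.array([0.0, 0.0])            # ideal (reference) point
f = np.array([2.0, 4.0])            # objectives of one candidate solution
g = tchebycheff(f, W, z)            # one scalar fitness per subproblem
```

Each particle (or subpopulation) then optimizes its own scalar `g`, and diversity across the front comes from the spread of the weight vectors.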
Domain observations of Fe and Co based amorphous wires
International Nuclear Information System (INIS)
Takajo, M.; Yamasaki, J.
1993-01-01
Domain observations were made on Fe- and Co-based amorphous magnetic wires that exhibit a large Barkhausen discontinuity during flux reversal. Domain patterns observed on the wire surface were compared with those found on a polished section through the center of the wire. It was confirmed that the Fe-based wire consists of a shell and core region as previously proposed; however, there is a third region between them. This fairly thick transition region is made up of domains at an angle of about 45 degrees to the wire axis, clearly lacking the closure domains of the previous model. The Co-based wire does not have a clear core and shell domain structure. The center of the wire had a classic domain structure expected of uniaxial anisotropy with the easy axis normal to the wire axis. When a model for the residual stress quenched in during cooling of large Fe bars is applied to the wire, the expected anisotropy is consistent with the domain patterns in the Fe-based wire; however, shape anisotropy still plays a dominant role in defining the wire core in the Co-based wire
Harmonic analysis of traction power supply system based on wavelet decomposition
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed railways and heavy-haul transport, AC-drive electric locomotives and EMUs operate at large scale across the country, and the electrified railway has become the main harmonic source in China's power grid. The power quality problems it causes therefore require timely monitoring, assessment, and mitigation. The wavelet transform was developed on the basis of Fourier analysis, with its basic idea coming from harmonic analysis. It has a rigorous theoretical model, inherits and develops the localization idea of the Gabor transform, and overcomes drawbacks such as the fixed window and the lack of discrete orthogonality, making it a widely studied spectral analysis tool. Wavelet analysis takes gradually finer time-domain steps in the high-frequency part so as to focus on any detail of the signal being analyzed, thereby comprehensively analyzing the harmonics of the traction power supply system, while the pyramid algorithm increases the speed of the wavelet decomposition. Matlab simulation shows that wavelet decomposition is effective for harmonic spectrum analysis of the traction power supply system.
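The pyramid (Mallat) algorithm referred to above can be sketched with the simplest filter pair, the Haar wavelet: each level halves the band of the approximation and peels off one detail band. A toy signal with a fundamental and one harmonic stands in for the traction current; the choice of the Haar filter is for brevity, not from the paper.

```python
import numpy as np

def haar_step(x):
    """One level of the Mallat pyramid with the Haar filter pair."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass)
    return a, d

def haar_pyramid(x, levels):
    details = []
    a = x
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
    return a, details

fs = 1024.0
t = np.arange(1024) / fs
# fundamental at 50 Hz plus a 5th harmonic at 250 Hz
x = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 250 * t)
a, details = haar_pyramid(x, levels=3)
```

Because the transform is orthogonal, the signal energy is exactly partitioned across the sub-bands, which is what makes per-band harmonic energy estimates meaningful.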
Design of tailor-made chemical blend using a decomposition-based computer-aided approach
DEFF Research Database (Denmark)
Yunus, Nor Alafiza; Gernaey, Krist; Manan, Z.A.
2011-01-01
Computer aided techniques form an efficient approach to solve chemical product design problems such as the design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product...... methodology for blended liquid products that identifies a set of feasible chemical blends. The blend design problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) model where the objective is to find the optimal blended gasoline or diesel product subject to types of chemicals...... and their compositions and a set of desired target properties of the blended product as design constraints. This blend design problem is solved using a decomposition approach, which eliminates infeasible and/or redundant candidates gradually through a hierarchy of (property) model based constraints. This decomposition...
Ammonia synthesis and decomposition on a Ru-based catalyst modeled by first-principles
DEFF Research Database (Denmark)
Hellman, A.; Honkala, Johanna Karoliina; Remediakis, Ioannis
2009-01-01
A recently published first-principles model for the ammonia synthesis on an unpromoted Ru-based catalyst is extended to also describe ammonia decomposition. In addition, further analysis concerning trends in ammonia productivity, surface conditions during the reaction, and macro-properties, such as apparent activation energies and reaction orders, is provided. All observed trends in activity are captured by the model, and the absolute value of ammonia synthesis/decomposition productivity is predicted to within a factor of 1-100 depending on the experimental conditions. Moreover it is shown: (i) that small changes in the relative adsorption potential energies are sufficient to get a quantitative agreement between theory and experiment (Appendix A) and (ii) that it is possible to reproduce results from the first-principles model by a simple micro-kinetic model (Appendix B).
SAR and Infrared Image Fusion in Complex Contourlet Domain Based on Joint Sparse Representation
Directory of Open Access Journals (Sweden)
Wu Yiquan
2017-08-01
Full Text Available To investigate the problems of the large grayscale difference between infrared and Synthetic Aperture Radar (SAR) images and their fusion image not being fit for human visual perception, we propose a fusion method for SAR and infrared images in the complex contourlet domain based on joint sparse representation. First, we perform complex contourlet decomposition of the infrared and SAR images. Then, we employ the K-Singular Value Decomposition (K-SVD) method to obtain an over-complete dictionary of the low-frequency components of the two source images. Using a joint sparse representation model, we then generate a joint dictionary. We obtain the sparse representation coefficients of the low-frequency components of the source images in the joint dictionary by the Orthogonal Matching Pursuit (OMP) method and select them using the selection maximization strategy. We then reconstruct these components to obtain the fused low-frequency components, and fuse the high-frequency components using two criteria: the coefficient of visual sensitivity and the degree of energy matching. Finally, we obtain the fusion image by the inverse complex contourlet transform. Compared with three classical fusion methods and recently presented fusion methods, e.g., that based on the Non-Subsampled Contourlet Transform (NSCT) and another based on sparse representation, the method we propose in this paper can effectively highlight the salient features of the two source images and inherit their information to the greatest extent.
Converting One Type-Based Abstract Domain to Another
DEFF Research Database (Denmark)
Gallagher, John Patrick; Puebla, German; Albert, Elvira
2006-01-01
The specific problem that motivates this paper is how to obtain abstract descriptions of the meanings of imported predicates (such as built-ins) that can be used when analysing a module of a logic program with respect to some abstract domain. We assume that abstract descriptions of the imported....... We develop a method which has been applied in order to generate call and success patterns from the Ciaopp assertions for built-ins, for any given regular type-based domain. In the paper we present the method as an instance of the more general problem of mapping elements of one abstract domain...
Microresonator-Based Optical Frequency Combs: A Time Domain Perspective
2016-04-19
AFRL-AFOSR-VA-TR-2016-0165 (BRI) Microresonator-Based Optical Frequency Combs: A Time Domain Perspective. Andrew Weiner, Purdue University. Grant FA9550-12-1-0236.
Huang, Yan; Wang, Zhihui
2015-12-01
With the development of FPGA, DSP Builder is widely applied to design system-level algorithms. The algorithm of the CL multi-wavelet is more advanced and effective than scalar wavelets in processing signal decomposition. Thus, a system of the CL multi-wavelet based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware language VHDL by the Signal Compiler block that can be used in Quartus II. After analyzing the energy indicator, it shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.
Tree decomposition based fast search of RNA structures including pseudoknots in genomes.
Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming
2005-01-01
Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective genome searches for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum-valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that the application of the alignment algorithm to search in genomes yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches of tmRNAs in bacterial genomes and of telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based searching tool is free upon request and can be downloaded at http://w.uga.edu/RNA-informatics/software/index.php.
Huang, Wentao; Sun, Hongjian; Wang, Weijie
2017-06-03
Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. In terms of the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction of RSSD's theoretical foundation, and organized by optimization direction, applications of RSSD in mechanical fault diagnosis are categorized into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, as well as corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.
Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots
Directory of Open Access Journals (Sweden)
Ching-Long Shih
2012-08-01
Full Text Available This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method for using either symbolic or on-line numerical computations. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and it can also be applied to biped systems, as well as some simple closed-chain robot systems.
Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian
2017-11-01
A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important class of current environmental pollution sources, are highly carcinogenic. PAH pollutants can be detected by fluorescence spectroscopy; however, the instrument produces noise in the experiment, and weak fluorescent signals can be affected by it, so we propose a way to denoise the spectra and improve the detection. Firstly, we use a fluorescence spectrometer to detect PAHs and obtain fluorescence spectra. Subsequently, the noise is reduced by the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.
Phase-only asymmetric optical cryptosystem based on random modulus decomposition
Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan
2018-06-01
We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is presented for effectively improving the capacity to resist various attacks, including the attack of iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be used in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by Fresnel transform can increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.
Central Decoding for Multiple Description Codes based on Domain Partitioning
Directory of Open Access Journals (Sweden)
M. Spiertz
2006-01-01
Full Text Available Multiple Description Codes (MDC can be used to trade redundancy against packet loss resistance for transmitting data over lossy diversity networks. In this work we focus on MD transform coding based on domain partitioning. Compared to Vaishampayan’s quantizer based MDC, domain based MD coding is a simple approach for generating different descriptions, by using different quantizers for each description. Commonly, only the highest rate quantizer is used for reconstruction. In this paper we investigate the benefit of using the lower rate quantizers to enhance the reconstruction quality at decoder side. The comparison is done on artificial source data and on image data.
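The central-decoding idea investigated above, using the lower-rate quantizer as well rather than the highest-rate one alone, reduces to intersecting the two quantization cells. The sketch below assumes plain uniform quantizers, which is a simplification of the paper's domain-partitioning setup.

```python
import numpy as np

def quantize(x, step):
    """Uniform quantizer: return the centre of the cell containing x."""
    return np.round(x / step) * step

def central_decode(y1, step1, y2, step2):
    """Intersect the two quantizer cells and return the midpoint of the
    intersection, instead of trusting only the finest quantizer."""
    lo = max(y1 - step1 / 2, y2 - step2 / 2)
    hi = min(y1 + step1 / 2, y2 + step2 / 2)
    return (lo + hi) / 2

x = 0.37
y1 = quantize(x, 0.25)                    # coarse description
y2 = quantize(x, 0.10)                    # fine description
xhat = central_decode(y1, 0.25, y2, 0.10) # 0.3625, closer than y2 alone
```

Here the fine description alone reconstructs 0.40 (error 0.03), while the intersection midpoint 0.3625 cuts the error to 0.0075, which is exactly the benefit the paper quantifies.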
Directory of Open Access Journals (Sweden)
Xin Chai
2017-05-01
Full Text Available Electroencephalography (EEG)-based emotion recognition is an important element in psychiatric health diagnosis for patients. However, the underlying EEG sensor signals are always non-stationary if they are sampled from different experimental sessions or subjects. This results in the deterioration of classification performance. Domain adaptation methods offer an effective way to reduce the discrepancy of marginal distribution. However, for EEG sensor signals, both marginal and conditional distributions may be mismatched. In addition, the existing domain adaptation strategies always require a high level of additional computation. To address this problem, a novel strategy named adaptive subspace feature matching (ASFM) is proposed in this paper in order to integrate both the marginal and conditional distributions within a unified framework (without any labeled samples from target subjects). Specifically, we develop a linear transformation function which matches the marginal distributions of the source and target subspaces without a regularization term. This significantly decreases the time complexity of our domain adaptation procedure. As a result, both marginal and conditional distribution discrepancies between the source domain and unlabeled target domain can be reduced, and logistic regression (LR) can be applied to the new source domain in order to train a classifier for use in the target domain, since the aligned source domain follows a distribution similar to that of the target domain. We compare our ASFM method with six typical approaches using a public EEG dataset with three affective states: positive, neutral, and negative. Both offline and online evaluations were performed. The subject-to-subject offline experimental results demonstrate that our approach achieves a mean accuracy and standard deviation of 80.46% and 6.84%, respectively, as compared with a state-of-the-art method, the subspace alignment auto-encoder (SAAE), which
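The marginal-matching step can be illustrated with classic subspace alignment, used here as a simplified stand-in for ASFM (the paper additionally handles conditional distributions and trains an LR classifier on the aligned source; the data below is synthetic).

```python
import numpy as np

def subspace_align(Xs, Xt, d):
    """Project source and target onto d-dimensional PCA subspaces and
    map the source basis onto the target basis (marginal matching only)."""
    Ps = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:d].T
    Pt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:d].T
    M = Ps.T @ Pt                        # alignment transform, no regularizer
    return Xs @ Ps @ M, Xt @ Pt          # source mapped into target subspace

rng = np.random.default_rng(2)
Xs = rng.standard_normal((100, 16))        # source-subject features
Xt = rng.standard_normal((120, 16)) + 0.5  # shifted target-subject features
Zs, Zt = subspace_align(Xs, Xt, d=4)
```

A classifier trained on `Zs` can then be applied to `Zt`, since both now live in the target subspace; dropping the regularization term is what keeps the transform a single closed-form matrix product.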
International Nuclear Information System (INIS)
Tsuji, Masashi; Chiba, Gou
2000-01-01
A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish a high parallel efficiency on distributed-memory message-passing parallel computers. Data exchanges between node processors that are repeated during the iteration processes of HDD-BEM are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, the parallel processing can be executed with only cooperative operations of node processors. In the conventional P-HDD-BEM, the communication overhead was the dominant time-consuming part, and the parallelization efficiency decreased steeply with the increase of the number of processors. With parallelized data communication, the efficiency is affected only by the number of boundary elements assigned to decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the deterioration problem of parallel efficiency and opens a new path to parallel computations of NDEs on distributed-memory message-passing parallel computers. (author)
Biometric identification based on novel frequency domain facial asymmetry measures
Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.
2005-03-01
In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
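A minimal sketch of a frequency-domain asymmetry measure, assuming the simplest possible definition: the spectrum of the difference between a face image and its horizontal mirror (the paper's actual measures are more refined; the "face" below is a synthetic array).

```python
import numpy as np

def asymmetry_features(face):
    """Frequency-domain asymmetry: spectral energy of the difference
    between the image and its horizontal mirror, one score per row."""
    diff = face - face[:, ::-1]
    spectrum = np.abs(np.fft.fft2(diff))
    return spectrum.sum(axis=1)

# a perfectly left-right symmetric "face" vs. a perturbed one
sym = np.tile(np.hanning(32), (32, 1))
rng = np.random.default_rng(3)
asym = sym + 0.1 * rng.standard_normal((32, 32))

f_sym = asymmetry_features(sym)    # ~0 everywhere
f_asym = asymmetry_features(asym)  # strictly positive
```

The feature vector is zero exactly when the face is mirror-symmetric, and working in the frequency domain makes the measure less sensitive to small spatial misalignments than raw intensity differences.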
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
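The dynamic-programming event localization can be sketched as a generic optimal segmentation: place k segments so that the total within-segment distortion is minimal, which is globally optimal unlike the greedy spectral-stability criterion. For brevity, 1-D frames stand in for the spectral vectors the paper uses.

```python
import numpy as np

def dp_segment(frames, k):
    """Optimal k-segmentation of a 1-D frame sequence by dynamic
    programming, minimizing within-segment variance; returns the
    k-1 interior boundary positions."""
    n = len(frames)
    csum = np.cumsum(np.insert(frames, 0, 0.0))
    csum2 = np.cumsum(np.insert(frames ** 2, 0, 0.0))

    def cost(i, j):  # distortion of one segment covering frames[i:j]
        m = j - i
        return (csum2[j] - csum2[i]) - (csum[j] - csum[i]) ** 2 / m

    D = np.full((k + 1, n + 1), float('inf'))
    back = np.zeros((k + 1, n + 1), dtype=int)
    D[0, 0] = 0.0
    for s in range(1, k + 1):
        for j in range(s, n + 1):
            for i in range(s - 1, j):
                c = D[s - 1, i] + cost(i, j)
                if c < D[s, j]:
                    D[s, j], back[s, j] = c, i
    bounds, j = [], n
    for s in range(k, 0, -1):
        j = back[s, j]
        bounds.append(j)
    return sorted(bounds[:-1])

frames = np.array([0., 0., 0., 0., 5., 5., 5., 5., 9., 9., 9., 9.])
events = dp_segment(frames, 3)    # recovers the two true change points
```

On this piecewise-constant toy sequence the optimum has zero distortion, so the recovered boundaries coincide exactly with the true change points.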
Directory of Open Access Journals (Sweden)
Yu-Fei Gao
2017-04-01
Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.
Exact Partial Information Decompositions for Gaussian Systems Based on Dependency Constraints
Directory of Open Access Journals (Sweden)
Jim W. Kay
2018-03-01
Full Text Available The Partial Information Decomposition, introduced by Williams P. L. et al. (2010), provides a theoretical framework to characterize and quantify the structure of multivariate information sharing. A new method (I_dep) has recently been proposed by James R. G. et al. (2017) for computing a two-predictor partial information decomposition over discrete spaces. A lattice of maximum entropy probability models is constructed based on marginal dependency constraints, and the unique information that a particular predictor has about the target is defined as the minimum increase in joint predictor-target mutual information when that particular predictor-target marginal dependency is constrained. Here, we apply the I_dep approach to Gaussian systems, for which the marginally constrained maximum entropy models are Gaussian graphical models. Closed form solutions for the I_dep PID are derived for both univariate and multivariate Gaussian systems. Numerical and graphical illustrations are provided, together with practical and theoretical comparisons of the I_dep PID with the minimum mutual information partial information decomposition (I_mmi), which was discussed by Barrett A. B. (2015). The results obtained using I_dep appear to be more intuitive than those given with other methods, such as I_mmi, in which the redundant and unique information components are constrained to depend only on the predictor-target marginal distributions. In particular, it is proved that the I_mmi method generally produces larger estimates of redundancy and synergy than does the I_dep method. In discussion of the practical examples, the PIDs are complemented by the use of tests of deviance for the comparison of Gaussian graphical models.
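For the univariate case, the I_mmi baseline compared against above has a simple closed form, sketched here under the assumption of zero-mean, unit-variance jointly Gaussian variables (the I_dep construction via Gaussian graphical models is beyond a few lines):

```python
import numpy as np

def mi_gaussian(rho):
    """Mutual information (nats) between two unit-variance jointly
    Gaussian variables with correlation rho."""
    return -0.5 * np.log(1.0 - rho ** 2)

def pid_mmi(r1y, r2y, r12):
    """Barrett's minimum-mutual-information PID for a trivariate
    unit-variance Gaussian system: redundancy is the smaller
    single-predictor mutual information."""
    i1, i2 = mi_gaussian(r1y), mi_gaussian(r2y)
    Sigma = np.array([[1.0, r12, r1y],
                      [r12, 1.0, r2y],
                      [r1y, r2y, 1.0]])
    # I(X1,X2 ; Y) = 0.5 * log(|Sigma_xx| * |Sigma_yy| / |Sigma|), |Sigma_yy| = 1
    joint = 0.5 * np.log(np.linalg.det(Sigma[:2, :2]) / np.linalg.det(Sigma))
    red = min(i1, i2)
    unq1, unq2 = i1 - red, i2 - red
    syn = joint - i1 - i2 + red
    return red, unq1, unq2, syn

# symmetric predictors, uncorrelated with each other: no unique information,
# but positive redundancy and synergy
red, unq1, unq2, syn = pid_mmi(0.5, 0.5, 0.0)
```

By construction the four components sum to the joint mutual information; note how I_mmi forces the unique terms to zero whenever the two predictor-target correlations are equal, which is one source of the inflated redundancy estimates the paper discusses.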
Polarimetric SAR interferometry-based decomposition modelling for reliable scattering retrieval
Agrawal, Neeraj; Kumar, Shashi; Tolpekin, Valentyn
2016-05-01
Fully polarimetric SAR (PolSAR) data are used to retrieve scattering information from a single SAR resolution cell. A single SAR resolution cell may contain contributions from more than one scattering object, so single- or dual-polarized data do not provide all the possible scattering information; fully polarimetric data are used to overcome this problem. A previous study observed that fully polarimetric data of different dates provide different scattering values for the same object, and that the coefficient of determination obtained from linear regression between volume scattering and aboveground biomass (AGB) differs across SAR datasets of different dates. Scattering values are important input elements for modelling forest aboveground biomass. In this research work an approach is proposed to obtain reliable scattering from an interferometric pair of fully polarimetric RADARSAT-2 data. The field survey for data collection was carried out in the Barkot forest from November 10th to December 5th, 2014. Stratified random sampling was used to collect field data for circumference at breast height (CBH) and tree height measurements. Field-measured AGB was compared with the volume scattering elements obtained from decomposition modelling of individual PolSAR images and of the PolInSAR coherency matrix. Yamaguchi 4-component decomposition was implemented to retrieve scattering elements from the SAR data. PolInSAR-based decomposition was the great challenge in this work, and it was implemented with certain assumptions to create a Hermitian coherency matrix from the co-registered polarimetric interferometric pair of SAR data. Regression analysis between field-measured AGB and the volume scattering element obtained from PolInSAR data showed the highest coefficient of determination (0.589). The same regression with volume scattering elements of the individual SAR images showed coefficients of determination of 0.49 and 0.50 for the master and slave images respectively. This study recommends use of
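The diagnostic reported above is the coefficient of determination of a simple linear regression. A minimal sketch of how such an R² is computed from paired volume-scattering and AGB values; the numbers below are illustrative, not the study's data:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a simple linear fit y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    ss_res = np.sum(resid ** 2)            # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# hypothetical volume-scattering values vs. field-measured AGB (t/ha)
vol = np.array([0.12, 0.18, 0.25, 0.31, 0.40, 0.47])
agb = np.array([55.0, 70.0, 95.0, 105.0, 150.0, 160.0])
r2 = r_squared(vol, agb)
```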
Directory of Open Access Journals (Sweden)
Chun Wang
2017-01-01
Full Text Available A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes makespan, total workload, and critical workload. Firstly, a population is initialized by employing an integration of different machine assignment and operation sequencing strategies. Secondly, the multiobjective memetic algorithm based on decomposition is presented by introducing a local search into MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem into several single-objective optimization subproblems, and the weight vectors are grouped by K-means clustering. Some good individuals corresponding to different weight vectors are selected by the tournament mechanism of a local search. In the experiments, the influence of three different aggregation functions is first studied. Moreover, the effect of the proposed local search is investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances, and the experimental results show that the proposed algorithm outperforms or at least has comparable performance to the other algorithms.
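The Tchebycheff approach mentioned above scalarizes the three objectives against a reference point so that each subproblem can compare solutions with a single number. A minimal sketch; the weight vector, reference point, and objective values are hypothetical:

```python
import numpy as np

def tchebycheff(f, weights, z_star):
    """Tchebycheff aggregation: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
    return np.max(weights * np.abs(np.asarray(f) - np.asarray(z_star)))

# three objectives: makespan, total workload, critical workload
z_star = np.array([10.0, 40.0, 12.0])   # ideal (reference) point
w = np.array([0.5, 0.2, 0.3])           # one weight vector of the decomposition

f_a = [12.0, 45.0, 14.0]
f_b = [11.0, 50.0, 13.0]
# the subproblem defined by w prefers the solution with the smaller g-value
better = f_a if tchebycheff(f_a, w, z_star) < tchebycheff(f_b, w, z_star) else f_b
```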
Directory of Open Access Journals (Sweden)
Hui Lu
2014-01-01
Full Text Available The test task scheduling problem (TTSP) is a complex optimization problem with many local optima. In this paper, a hybrid chaotic multiobjective evolutionary algorithm based on decomposition (CMOEA/D) is presented to avoid becoming trapped in local optima and to obtain high-quality solutions. First, we propose an improved integrated encoding scheme (IES) to increase efficiency. Then ten chaotic maps are applied to the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in three phases, that is, the initial population and the crossover and mutation operators. To identify a good approach for hybridizing MOEA/D with chaos and to indicate the effectiveness of the improved IES, several experiments are performed. The Pareto front and the statistical results demonstrate that different chaotic maps in different phases have different effects for solving the TTSP, especially the circle map and the ICMIC map. The degree of similarity between the distribution of a chaotic map and that of the problem is a very essential factor for the application of chaotic maps. In addition, experiments comparing CMOEA/D and variable neighborhood MOEA/D (VNM) indicate that our algorithm has the best performance in solving the TTSP.
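The chaotic maps referred to above generate deterministic yet well-spread sequences in the unit interval that can seed an initial population or drive variation operators. A minimal sketch of three such maps; the parameter values are common textbook choices, not necessarily those of the paper:

```python
import math

def logistic_map(x, mu=4.0):
    # classic logistic map; mu = 4 gives fully chaotic behaviour on (0, 1)
    return mu * x * (1.0 - x)

def circle_map(x, k=0.5, omega=0.2):
    # circle map, folded back into [0, 1) by the modulo
    return (x + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0

def icmic_map(x, a=2.0):
    # iterative chaotic map with infinite collapses; requires x != 0
    return abs(math.sin(a / x))

def chaotic_sequence(map_fn, x0, n):
    seq, x = [], x0
    for _ in range(n):
        x = map_fn(x)
        seq.append(x)
    return seq

pop_seed = chaotic_sequence(logistic_map, 0.37, 20)  # 20 values in [0, 1]
```

Such a sequence would then be rescaled to the decision-variable ranges of the scheduling problem.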
International Nuclear Information System (INIS)
Li, Ning; Liang, Caiping; Yang, Jianguo; Zhou, Rui
2016-01-01
Knock is one of the major constraints on improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding uniformly distributed, finite white Gaussian noise, the EEMD preserves signal continuity across scales and therefore alleviates the mode-mixing problem that occurs in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured in the combustion chamber and the vibration signal measured on the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), the Wigner–Ville distribution (WVD) and the discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated. (paper)
A new solar power output prediction based on hybrid forecast engine and decomposition model.
Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando
2018-06-12
Given the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks and its uncertain nature, PV energy prediction has been pursued by researchers in recent decades. This problem directly affects operation of the power network, and the high volatility of the signal demands an accurate prediction model. A new prediction model based on the Hilbert Huang transform (HHT) and an integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method is used in place of cubic spline curve (CSC) fitting in EMD. The obtained output is then entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified over a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Decomposition of Polarimetric SAR Images Based on Second- and Third-order Statistics Analysis
Kojima, S.; Hensley, S.
2012-12-01
There are many papers concerning the decomposition of polarimetric SAR imagery. Most of them are based on the second-order statistics analysis that Freeman and Durden [1] suggested under the reflection symmetry condition, which implies that the co-polarization and cross-polarization correlations are close to zero. Since then, a number of improvements and enhancements have been proposed to better understand the underlying backscattering mechanisms present in polarimetric SAR images. For example, Yamaguchi et al. [2] added a helix component to Freeman's model and developed a 4-component scattering model for the non-reflection-symmetry condition. In addition, Arii et al. [3] developed an adaptive model-based decomposition method that can estimate both the mean orientation angle and a degree of randomness of the canopy scattering for each pixel in a SAR image, without the reflection symmetry condition. The purpose of this research is to develop a new decomposition method based on second- and third-order statistics analysis to estimate the surface, dihedral, volume and helix scattering components from polarimetric SAR images without specific assumptions concerning the model for the volume scattering. In addition, we evaluate this method using both simulation and real UAVSAR data and compare it with other methods. We express the volume scattering component using the wire formula and formulate the relationship between the backscattered echo and each component, such as the surface, dihedral, volume and helix, via linearization based on second- and third-order statistics. In the third-order statistics, we calculate the correlation of the correlation coefficients for each polarimetric channel and obtain one new relationship equation to estimate each polarization component, such as HH, VV and VH, for the volume. As a result, the equation for the helix component in this method is the same formula as the one in Yamaguchi's method. However, the equation for the volume
Directory of Open Access Journals (Sweden)
Lajmert Paweł
2018-01-01
Full Text Available In the paper, cutting stability in the milling of the nickel-based alloy Inconel 625 is analysed. This problem is often considered theoretically, but the theoretical findings do not always agree with experimental results. For this reason, the paper presents different methods for instability identification during the real machining process. A stability lobe diagram is created based on data obtained in an impact test of an end mill. Next, cutting tests were conducted in which the axial depth of cut was gradually increased in order to find the stability limit. Finally, based on the cutting force measurements, the stability estimation problem is investigated using the recurrence plot technique and the Hilbert vibration decomposition method.
Alsharoa, Ahmad M.
2015-05-01
In this paper, the problem of radio and power resource management in long term evolution heterogeneous networks (LTE HetNets) is investigated. The goal is to minimize the total power consumption of the network while satisfying the user quality of service determined by each target data rate. We study the model where one macrocell base station is placed in the cell center, and multiple small cell base stations and femtocell access points are distributed around it. The dual decomposition technique is adopted to jointly optimize the power and carrier allocation in the downlink direction in addition to the selection of turned off small cell base stations. Our numerical results investigate the performance of the proposed scheme versus different system parameters and show an important saving in terms of total power consumption. © 2015 IEEE.
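The paper's dual decomposition jointly optimizes power, carriers, and base-station switching; as a much smaller illustration of the same Lagrangian machinery, the classic water-filling power allocation can be found by bisecting on a single dual variable (the "water level"). The channel gains and power budget below are made up:

```python
import numpy as np

def waterfill(gains, p_total, tol=1e-9):
    """Sum-rate-optimal power allocation under a total-power budget,
    found by bisection on the dual variable mu (the water level):
    p_k = max(0, mu - 1/g_k)."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + np.sum(1.0 / g)   # bracket guaranteed to contain mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        if p.sum() > p_total:                 # too much water: lower the level
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

# three carriers with decreasing channel gains, total budget of 3 power units
p = waterfill([2.0, 1.0, 0.25], p_total=3.0)
```

The weakest carrier receives no power at all, which mirrors the paper's idea of switching off under-used resources when doing so saves power.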
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed through the use of marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, have some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a principal component analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction in computational costs, with no significant reduction in tracking accuracy.
Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models
International Nuclear Information System (INIS)
Cai, Caifang
2013-01-01
Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach
Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil
2016-01-01
Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows residuals to be designed that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
Directory of Open Access Journals (Sweden)
Jianchang Lu
2015-04-01
Full Text Available Based on the international community's analysis of the present CO2 emissions situation, a Log Mean Divisia Index (LMDI) decomposition model is proposed in this paper, aiming to reflect the decomposition of carbon productivity. The model is designed by analyzing the factors that affect carbon productivity. China's contribution to carbon productivity is analyzed along the dimensions of influencing factors, regional structure and industrial structure. It reaches the following conclusions: (a) economic output, provincial carbon productivity and energy structure are the most influential factors, which is consistent with China's current actual policy; (b) the distribution patterns of economic output, carbon productivity and energy structure in different regions have nothing to do with the traditional Chinese sense of regional economic development patterns; (c) in light of regional protectionism, the actual situation of each region needs to be considered at the same time; (d) in the study of industrial structure, the contribution of industry is the most prominent factor in China's carbon productivity, while industrial restructuring has not yet been done well enough.
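The LMDI technique named above splits the change of a product of factors into exactly additive contributions via the logarithmic mean. A minimal sketch of the additive LMDI-I identity; the Kaya-style factor values are invented for illustration:

```python
import math

def lmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_effects(factors_0, factors_T):
    """Additive LMDI-I: the change in V = prod(x_i) is split exactly into
    one contribution per factor: dV_i = L(V_T, V_0) * ln(x_i_T / x_i_0)."""
    v0 = math.prod(factors_0)
    vT = math.prod(factors_T)
    L = lmean(vT, v0)
    return [L * math.log(xT / x0) for x0, xT in zip(factors_0, factors_T)]

# hypothetical factors: economic output, energy intensity, carbon intensity
base = [100.0, 0.80, 2.5]
target = [130.0, 0.70, 2.3]
effects = lmdi_effects(base, target)
total_change = math.prod(target) - math.prod(base)  # equals sum(effects)
```

The key property is that the per-factor effects sum exactly to the total change, with no residual term.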
International Nuclear Information System (INIS)
Chen, Baojia; He, Zhengjia; Chen, Xuefeng; Cao, Hongrui; Cai, Gaigai; Zi, Yanyang
2011-01-01
Since machinery fault vibration signals are usually multicomponent modulation signals, how to decompose complex signals into a set of mono-components whose instantaneous frequency (IF) has physical sense has become a key issue. Local mean decomposition (LMD) is a new kind of time–frequency analysis approach which can decompose a signal adaptively into a set of product function (PF) components. In this paper, an LMD-based modulation feature extraction method is proposed. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency modulated (FM) signal is the IF. The computed IF and IA are displayed together in the form of a time–frequency representation (TFR). Modulation features can be extracted from spectrum analysis of the IA and IF. In order to make the IF physically meaningful, the phase-unwrapping algorithm and the IF processing method for extrema are presented in detail along with a simulated FM signal example. Besides, the dependence of the LMD method on the signal-to-noise ratio (SNR) is also investigated by analyzing synthetic signals to which Gaussian noise is added. As a result, the recommended critical SNRs for PF decomposition and IF extraction are given for practical application. Successful fault diagnosis on a rolling bearing and a gear of locomotive bogies shows that LMD has better identification capacity for modulation signal processing and is very suitable for failure detection in rotating machinery.
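The abstract defines the IF as the derivative of the unwrapped phase of a purely FM signal. The sketch below illustrates that demodulation step on a synthetic complex (I/Q) chirp; it is a generic illustration of unwrap-then-differentiate, not the paper's LMD pipeline, which obtains the FM signal without forming an analytic signal:

```python
import numpy as np

fs = 1000.0                                    # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f0, f1 = 50.0, 100.0
phase_true = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2)  # 50 -> 100 Hz chirp
z = np.exp(1j * phase_true)                    # complex FM signal (I/Q samples)

ia = np.abs(z)                                 # instantaneous amplitude (IA)
phase = np.unwrap(np.angle(z))                 # unwrapped instantaneous phase
inst_freq = np.gradient(phase, 1 / fs) / (2 * np.pi)  # IF in Hz
```

For this linear chirp the IF at t = 0.5 s is exactly midway between the start and end frequencies, 75 Hz.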
Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Chunhui Zhao
2015-09-01
Full Text Available Node localization is at the core of wireless sensor networks. It can be solved with powerful beacons, which are equipped with global positioning system devices so that their locations are known. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to address the problem that the measurement matrix does not meet the restricted isometry property. Later, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the 1-sparse vector is only approximately sparse, a weighted centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and lower requirements for the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it has great localization performance under low SNR; thus, the proposed scheme is robust.
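The scheme above reduces localization to recovering a 1-sparse vector. In the noiseless case, that recovery is just a matched filter over the dictionary columns, as in this sketch; the grid size, measurement count, and random matrix are arbitrary, and the paper's LU pre-processing and weighted-centroid refinement are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_meas = 50, 8                      # candidate grid cells, measurements
A = rng.standard_normal((n_meas, n_grid))   # measurement (dictionary) matrix

x_true = np.zeros(n_grid)
x_true[17] = 1.0                            # node occupies grid cell 17 (1-sparse)
y = A @ x_true                              # noiseless measurements

# 1-sparse recovery: by Cauchy-Schwarz, the supporting column maximizes
# the normalized correlation with y
scores = np.abs(A.T @ y) / np.linalg.norm(A, axis=0)
k = int(np.argmax(scores))
amp = (A[:, k] @ y) / (A[:, k] @ A[:, k])   # least-squares amplitude on column k
```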
CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition
Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe
2013-01-01
Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, and Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. We then obtain the recovered CT image sequence by combining the recovered low-rank image with the full sequence of recovered sparse images. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed in numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
The Speech multi features fusion perceptual hash algorithm based on tensor decomposition
Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.
2018-03-01
With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or maliciously tampered with. In order to give the speech perception hash algorithm strong robustness and high efficiency, this paper puts forward a speech perception hash algorithm based on tensor decomposition and multiple features. The algorithm applies wavelet packet decomposition to each speech component and extracts the LPCC, LSP and ISP features of each component to constitute the speech feature tensor. Speech authentication is done by generating the hash values through feature matrix quantification using the mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and it is able to resist the attack of common background noise. Also, the algorithm is computationally efficient and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.
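The hash-generation step described above quantifies a feature matrix against its mid-value and compares hashes by bit error rate. A minimal sketch with random stand-ins for the feature matrix; the sizes, noise level, and thresholds are illustrative, not the paper's:

```python
import numpy as np

def perceptual_hash(features):
    """Binarize a feature matrix against its median (mid-value quantification)."""
    f = np.asarray(features, dtype=float)
    return (f > np.median(f)).astype(np.uint8).ravel()

def bit_error_rate(h1, h2):
    """Normalized Hamming distance between two hashes."""
    return float(np.mean(h1 != h2))

rng = np.random.default_rng(0)
feat = rng.standard_normal((12, 16))                    # stand-in feature matrix
noisy = feat + 0.01 * rng.standard_normal(feat.shape)   # content-preserving change
other = rng.standard_normal((12, 16))                   # different content

ber_same = bit_error_rate(perceptual_hash(feat), perceptual_hash(noisy))
ber_diff = bit_error_rate(perceptual_hash(feat), perceptual_hash(other))
```

Authentication then accepts when the bit error rate falls below a threshold: a perturbed version of the same content flips very few bits, while unrelated content disagrees on about half of them.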
Regional income inequality model based on Theil index decomposition and weighted variance coefficient
Sitepu, H. R.; Darnius, O.; Tambunan, W. N.
2018-03-01
Regional income inequality is an important issue in the study of the economic development of a region. Rapid economic development may not be in accordance with people's per capita income. Many experts have suggested methods for measuring regional income inequality. This research used the Theil index and the weighted variation coefficient to measure regional income inequality. Based on the Theil index, the decomposition of regional income into work-force productivity and work-force participation can be presented as a linear relation. Using the economic assumptions for sector j, the sectoral income values, and the work-force rates, the work-force productivity imbalance can be decomposed into inter-sector and intra-sector components. Next, the weighted variation coefficient is defined for the revenue and the productivity of the work force. From the square of the weighted variation coefficient, it was found that the decomposition of the regional revenue imbalance can be analyzed by finding out how much each component contributes to the regional imbalance, which, in this research, was analyzed for nine sectors of economic activity.
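The Theil index used above is attractive precisely because it splits exactly into within-group and between-group terms. A minimal sketch with two hypothetical regions (the income values are invented):

```python
import numpy as np

def theil(y):
    """Theil T index: T = mean( (y/mu) * ln(y/mu) )."""
    y = np.asarray(y, dtype=float)
    r = y / y.mean()
    return float(np.mean(r * np.log(r)))

def theil_decomposition(groups):
    """Split overall Theil T into within- and between-group parts:
    T = sum_g s_g * T_g + sum_g s_g * ln(mean_g / mean), s_g = income share."""
    all_y = np.concatenate(groups)
    Y, mu = all_y.sum(), all_y.mean()
    within = sum((g.sum() / Y) * theil(g) for g in groups)
    between = sum((g.sum() / Y) * np.log(g.mean() / mu) for g in groups)
    return within, between

north = np.array([20.0, 25.0, 30.0, 45.0])       # hypothetical incomes, region 1
south = np.array([8.0, 10.0, 12.0, 15.0, 40.0])  # hypothetical incomes, region 2
within, between = theil_decomposition([north, south])
total = theil(np.concatenate([north, south]))    # equals within + between
```

The decomposition is exact: the overall index equals the income-share-weighted within-region indices plus the between-region term.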
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Directory of Open Access Journals (Sweden)
Naveed ur Rehman
2015-05-01
Full Text Available A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode-mixing and mode-misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same-indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including principal component analysis (PCA), the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Phase stability and decomposition processes in Ti-Al based intermetallics
Energy Technology Data Exchange (ETDEWEB)
Nakai, Kiyomichi [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ono, Toshiaki [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ohtsubo, Hiroyuki [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ohmori, Yasuya [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan)
1995-02-28
The high-temperature phase equilibria and the phase decomposition of the α and β phases were studied by crystallographic analysis of the solidification microstructures of Ti-48at.%Al and Ti-48at.%Al-2at.%X (X=Mn, Cr, Mo) alloys. The effects on phase stability of Zr and O atoms penetrating from the specimen surface were also examined for Ti-48at.%Al and Ti-50at.%Al alloys. The third elements Cr and Mo shift the β phase region to higher Al concentrations, and the β phase orders to the β₂ phase. The Zr and O atoms stabilize the β and α phases respectively. In the Zr-stabilized β phase, α₂ laths form with accompanying surface relief, and stacking faults which relax the elastic strain owing to lattice deformation are introduced after the formation of α₂ order domains. Thus shear is thought to operate after the phase transition from β to α₂ by short-range diffusion. A similar analysis was conducted for the Ti-Al binary system, and the transformation was interpreted from the qualitatively constructed CCT diagram. ((orig.))
WEALTH-BASED INEQUALITY IN CHILD IMMUNIZATION IN INDIA: A DECOMPOSITION APPROACH.
Debnath, Avijit; Bhattacharjee, Nairita
2018-05-01
Despite years of health and medical advancement, children still suffer from infectious diseases that are vaccine preventable. India reacted in 1978 by launching the Expanded Programme on Immunization in an attempt to reduce the incidence of vaccine-preventable diseases (VPDs). Although the nation has made remarkable progress over the years, there is significant variation in immunization coverage across different socioeconomic strata. This study attempted to identify the determinants of wealth-based inequality in child immunization using a new, modified method. The study was based on 11,001 eligible ever-married women aged 15-49 and their children aged 12-23 months. Data were from the third District Level Household and Facility Survey (DLHS-3) of India, 2007-08. Using an approximation of Erreygers' decomposition technique, the study identified unequal access to antenatal care as the main factor associated with inequality in immunization coverage in India.
Analysis of Human's Motions Based on Local Mean Decomposition in Through-wall Radar Detection
Lu, Qi; Liu, Cai; Zeng, Zhaofa; Li, Jing; Zhang, Xuebing
2016-04-01
Observation of human motions through a wall is an important issue in security applications and search and rescue. Radar has advantages for looking through walls where other sensors give low performance or cannot be used at all. Ultrawideband (UWB) radar has high spatial resolution as a result of its ultranarrow pulses. It is able to distinguish closely positioned targets and provide time-lapse information about them. Moreover, UWB radar shows good wall penetration because the inherently short pulses spread their energy over a broad frequency range. Human motions show periodic features including respiration, the swing of arms and legs, and fluctuations of the torso. Detection of human targets is based on the fact that there is always periodic motion due to breathing or other body movements such as walking. The radar gains the reflections from each part of the human body and adds the reflections at each time sample. The periodic movements cause micro-Doppler modulation in the reflected radar signals. Time-frequency analysis methods are considered effective tools to analyze and extract the micro-Doppler effects caused by the periodic movements in the reflected radar signal, such as the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT). The local mean decomposition (LMD), initially developed by Smith (2005), decomposes amplitude- and frequency-modulated signals into a small set of product functions (PFs), each of which is the product of an envelope signal and a frequency modulated signal from which a time-varying instantaneous phase and instantaneous frequency can be derived. By bypassing the Hilbert transform, LMD has no demodulation error arising from the window effect and involves no physically meaningless negative frequencies. Also, the instantaneous attributes obtained by LMD are more stable and precise than those obtained by the empirical mode decomposition (EMD) because LMD uses smoothed local
Restoration in multi-domain GMPLS-based networks
DEFF Research Database (Denmark)
Manolova, Anna; Ruepp, Sarah Renée; Dittmann, Lars
2011-01-01
In this paper, we evaluate the efficiency of using restoration mechanisms in a dynamic multi-domain GMPLS network. Major challenges and solutions are introduced and two well-known restoration schemes (End-to-End and Local-to-End) are evaluated. Additionally, new restoration mechanisms are introduced: one based on the position of a failed link, called Location-Based, and another based on minimizing the additional resources consumed during restoration, called Shortest-New. A complete set of simulations in different network scenarios show where each mechanism is more efficient in terms, such as
Directory of Open Access Journals (Sweden)
Jinlu Sheng
2016-07-01
Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition, Shannon entropy, and an improved kernel principal component analysis model is proposed. First, features are extracted by a time-frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions, so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the characteristics. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features were input into a Morlet wavelet kernel support vector machine to obtain a bearing running state classification model, and the bearing running state was thereby identified. Both test and actual cases were analyzed.
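As a rough illustration of the entropy step described above, the sketch below computes the Shannon entropy of a component's normalized segment-energy distribution. The binning scheme and test signals are invented for illustration; the LMD step itself is assumed to have already produced the component.

```python
import numpy as np

def shannon_entropy(component, n_bins=16):
    """Shannon entropy of a component's normalized energy distribution.
    The component (e.g. one LMD product function) is split into n_bins
    segments; the energy fraction of each segment forms the distribution."""
    segments = np.array_split(np.asarray(component, float), n_bins)
    energies = np.array([np.sum(s**2) for s in segments])
    p = energies / energies.sum()
    p = p[p > 0]                      # treat 0*log(0) as 0
    return -np.sum(p * np.log(p))

# A pure tone spreads its energy evenly; an impulse concentrates it in one bin.
t = np.linspace(0, 1, 1024, endpoint=False)
tone = np.sin(2 * np.pi * 50 * t)
impulse = np.zeros(1024)
impulse[100] = 1.0
print(shannon_entropy(tone), shannon_entropy(impulse))
```

The entropy value then serves as one scalar feature per product function, which is what a downstream classifier such as the support vector machine would consume.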
Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.
Hendler, R W; Shrager, R I
1994-01-01
Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
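The deconvolution idea in this abstract can be sketched with a toy example: hypothetical Gaussian "indicator spectra" form the basis matrix, and the pseudoinverse (computed via SVD inside `numpy.linalg.pinv`) recovers the mixing concentrations. The spectra, concentrations, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pure spectra of three pH indicators, sampled at 60 wavelengths.
wavelengths = np.linspace(400, 700, 60)
def gaussian_band(center, width):
    return np.exp(-((wavelengths - center) / width) ** 2)

A = np.column_stack([gaussian_band(450, 30),
                     gaussian_band(530, 25),
                     gaussian_band(620, 35)])      # 60 x 3 basis matrix

true_conc = np.array([0.2, 0.5, 0.3])
mixture = A @ true_conc + 0.001 * rng.standard_normal(60)  # noisy measurement

# Deconvolution: least-squares concentrations via the pseudoinverse,
# which NumPy computes through the SVD of A.
conc = np.linalg.pinv(A) @ mixture
print(np.round(conc, 3))
```

With a well-conditioned basis and low noise, the recovered concentrations match the true ones closely; the paper's noisy cases correspond to raising the noise term here.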
International Nuclear Information System (INIS)
Slanina, Z.
1987-01-01
Water vapor is treated as an equilibrium mixture of water clusters (H2O)i using quantum-chemical evaluation of the equilibrium constants of water association. The model is adapted to the conditions of atmospheric humidity, and a decomposition algorithm is suggested that uses the temperature and mass concentration of water as input information; it is used to demonstrate the evaluation of water oligomer populations in the Earth's atmosphere. An upper limit of the populations is set based on the water content of saturated aqueous vapor. It is proved that the cluster population in saturated water vapor, as well as in the Earth's atmosphere for a typical temperature/humidity profile, increases with increasing temperature.
Fringe-projection profilometry based on two-dimensional empirical mode decomposition.
Zheng, Suzhen; Cao, Yiping
2013-11-01
In 3D shape measurement, because deformed fringes often contain low-frequency information degraded by random noise and background intensity information, a new fringe-projection profilometry is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method has partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation to retrieve the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a full 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.
An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.
Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P
2009-01-01
Epilepsy is a neurological disorder that affects around 50 million people worldwide. Seizure detection is an important component in the diagnosis of epilepsy. In this study, the Empirical Mode Decomposition (EMD) method was applied to the development of an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of EEG records, then calculates the energy of each IMF and performs the detection based on an energy threshold and a minimum duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In the 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86%, respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
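The detection rule described (IMF energy threshold plus minimum-duration decision) can be sketched as follows. The "IMF" here is a synthetic stand-in, since computing real IMFs would require an EMD implementation, and the window length and thresholds are illustrative, not the paper's values.

```python
import numpy as np

def detect_segments(imf, fs, energy_thresh, min_duration_s, win_s=1.0):
    """Flag windows of a (precomputed) IMF whose energy exceeds a threshold,
    then keep only runs of windows longer than a minimum duration."""
    win = int(win_s * fs)
    n_win = len(imf) // win
    energy = np.array([np.sum(imf[i*win:(i+1)*win]**2) for i in range(n_win)])
    active = energy > energy_thresh
    # group consecutive active windows and apply the minimum-duration rule
    events, start = [], None
    for i, a in enumerate(np.append(active, False)):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if (i - start) * win_s >= min_duration_s:
                events.append((start * win_s, i * win_s))
            start = None
    return events

fs = 256
t = np.arange(0, 60, 1/fs)
imf = 0.1 * np.sin(2*np.pi*10*t)                        # background oscillation
imf[20*fs:30*fs] += np.sin(2*np.pi*8*t[20*fs:30*fs])    # 10 s high-energy burst
print(detect_segments(imf, fs, energy_thresh=20.0, min_duration_s=5.0))
```

The minimum-duration rule is what suppresses isolated high-energy windows that would otherwise count as false detections.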
Synthesis and thermal decomposition kinetics of Th(IV) complex with unsymmetrical Schiff base ligand
International Nuclear Information System (INIS)
Fan Yuhua; Bi Caifeng; Liu Siquan; Yang Lirong; Liu Feng; Ai Xiaokang
2006-01-01
A new unsymmetrical Schiff base ligand (H2LLi) was synthesized using L-lysine, o-vanillin and salicylaldehyde. The thorium(IV) complex of this ligand, [Th(H2L)(NO3)](NO3)2·3H2O, has been prepared and characterized by elemental analyses, IR, UV and molar conductance. The thermal decomposition kinetics of the complex for the second stage was studied under non-isothermal conditions by TG and DTG methods. The kinetic equation may be expressed as: dα/dt = A·e^(-E/RT)·(1/2)(1-α)·[-ln(1-α)]^(-1). The kinetic parameters (E, A), activation entropy ΔS≠ and activation free energy ΔG≠ were also calculated. (author)
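The reported rate law, dα/dt = A·e^(-E/RT)·(1/2)(1-α)·[-ln(1-α)]^(-1), can be evaluated numerically; its conversion-dependent factor matches the Avrami-Erofeev form with n = 1/2. The parameter values below are illustrative, not the paper's fitted values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def rate(alpha, T, A, E):
    """dα/dt = A·exp(-E/RT)·(1/2)(1-α)·[-ln(1-α)]^(-1), valid for 0 < α < 1."""
    return A * np.exp(-E / (R * T)) * 0.5 * (1 - alpha) / (-np.log(1 - alpha))

# Illustrative (assumed, not fitted) Arrhenius parameters:
A_pre, E_act = 1.0e12, 1.5e5   # 1/s, J/mol
for T in (500.0, 550.0, 600.0):
    print(T, rate(0.3, T, A_pre, E_act))
```

As expected from the Arrhenius factor, the decomposition rate at fixed conversion increases with temperature.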
Guo, Wei; Tse, Peter W.
2013-01-01
Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
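The relative root-mean-square error index mentioned above can be sketched as follows; the signals are synthetic and the EEMD step itself is omitted (it would require a library such as PyEMD), so the two "reconstructions" simply stand in for decompositions obtained under different noise levels.

```python
import numpy as np

def relative_rmse(original, reconstructed):
    """Relative root-mean-square error between a signal and its
    reconstruction from selected decomposition components."""
    original = np.asarray(original, float)
    err = original - np.asarray(reconstructed, float)
    return np.sqrt(np.mean(err**2)) / np.sqrt(np.mean(original**2))

t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2*np.pi*5*t)
good = signal + 0.01*np.sin(2*np.pi*50*t)   # small residual left in
bad  = signal + 0.30*np.sin(2*np.pi*50*t)   # large residual left in
print(relative_rmse(signal, good), relative_rmse(signal, bad))
```

In the paper's setting, the EEMD noise level yielding the smallest index would be selected as optimal.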
Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.
Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan
2018-02-01
The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given the fact that the image characteristic is hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background vs. dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic 82Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive 82Rb MP PET simulations. In particular, the myocardium defect in the MP PET images had improved visual as well as contrast versus noise tradeoff. The proposed algorithm was also applied on an 8-min clinical cardiac 82Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
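The L + S split via iterative soft thresholding can be sketched in a generic form (not the paper's exact PET model, which carries additional constraints); the regularization weights and test matrices below are invented for illustration.

```python
import numpy as np

def soft(x, tau):
    """Elementwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft(s, tau)) @ Vt

def lps_decompose(M, lam_l=1.0, lam_s=0.1, n_iter=100):
    """Split M into a low-rank part L (static background) plus a sparse
    part S (dynamic component) by alternating proximal steps."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, lam_l)
        S = soft(M - L, lam_s)
    return L, S

rng = np.random.default_rng(1)
low_rank = np.outer(rng.standard_normal(40), rng.standard_normal(30))
sparse = np.zeros((40, 30))
sparse[rng.integers(0, 40, 20), rng.integers(0, 30, 20)] = 5.0
M = low_rank + sparse
L, S = lps_decompose(M)
print(np.linalg.norm(M - L - S) / np.linalg.norm(M))
```

By construction of the final soft-thresholding step, the entrywise residual of M - L - S is bounded by the sparse regularization weight, so L + S reproduces M closely while separating the two components.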
Primal Recovery from Consensus-Based Dual Decomposition for Distributed Convex Optimization
Simonetto, A.; Jamali-Rad, H.
2015-01-01
Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel
Agent-based Simulation of the Maritime Domain
Directory of Open Access Journals (Sweden)
O. Vaněk
2010-01-01
Full Text Available In this paper, a multi-agent based simulation platform is introduced that focuses on legitimate and illegitimate aspects of maritime traffic, mainly on intercontinental transport through piracy afflicted areas. The extensible architecture presented here comprises several modules controlling the simulation and the life-cycle of the agents, analyzing the simulation output and visualizing the entire simulated domain. The simulation control module is initialized by various configuration scenarios to simulate various real-world situations, such as a pirate ambush, coordinated transit through a transport corridor, or coastal fishing and local traffic. The environmental model provides a rich set of inputs for agents that use the geo-spatial data and the vessel operational characteristics for their reasoning. The agent behavior model based on finite state machines together with planning algorithms allows complex expression of agent behavior, so the resulting simulation output can serve as a substitution for real world data from the maritime domain.
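The finite-state-machine behavior model mentioned above can be sketched minimally; the states, events, and transitions below are invented for illustration and are not the platform's actual behavior model.

```python
# Minimal finite-state-machine sketch for a vessel agent.
class VesselAgent:
    # (state, event) -> next state; unknown events leave the state unchanged.
    TRANSITIONS = {
        ("transit", "pirate_spotted"): "evade",
        ("evade", "threat_cleared"): "transit",
        ("transit", "arrived"): "docked",
    }

    def __init__(self):
        self.state = "transit"

    def handle(self, event):
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

agent = VesselAgent()
print(agent.handle("pirate_spotted"))  # evade
print(agent.handle("threat_cleared"))  # transit
print(agent.handle("arrived"))         # docked
```

In a full simulation, each transition would additionally trigger planning (e.g. computing an evasion route), which is where the planning algorithms mentioned in the abstract plug in.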
Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun
2018-05-01
This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.
Sun, Qianlai; Wang, Yin; Sun, Zhiyi
2018-05-01
For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. For the SVD-based method, the image to be inspected was projected onto its first left and right singular vectors respectively. If there were defects in the image, there would be sharp changes in the projections. The defects may then be determined and located according to sharp changes in the projections of each image to be inspected. This method was simple and practical, but the SVD had to be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it did not have a significant advantage in terms of time consumption over image segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image is considered as the reference image, which is acquired under the same environment as the image to be inspected. The singular vectors of each image to be inspected are replaced by the singular vectors of the reference image, and SVD is performed only once for the reference image, off-line, before detection of the defects, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
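The improved method (SVD on the reference image only, with the inspected image projected onto the reference's singular vectors) can be sketched as follows; the images and jump threshold are synthetic stand-ins for real strip images.

```python
import numpy as np

def defect_profile(image, reference):
    """Project each row of the inspected image onto the first right singular
    vector of a defect-free reference image; sharp changes in the resulting
    profile indicate (and roughly locate) a defect.  The SVD runs only on
    the reference, so it can be done once, off-line."""
    _, _, Vt = np.linalg.svd(reference, full_matrices=False)
    v1 = Vt[0]
    if v1.sum() < 0:          # fix the sign ambiguity of singular vectors
        v1 = -v1
    return image @ v1         # one projection value per row

reference = np.ones((50, 50))            # idealized defect-free strip image
inspected = reference.copy()
inspected[20:23, 10:16] = 5.0            # synthetic defect in rows 20-22

profile = defect_profile(inspected, reference)
jumps = np.flatnonzero(np.abs(np.diff(profile)) > 1.0)
print(jumps)   # row indices where the projection profile changes sharply
```

The sharp-change indices bracket the defect rows, which is the approximate localization the abstract describes; a second projection onto the first left singular vector would localize the defect along the other axis.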
Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang
2015-12-01
As an advanced measurement technique that is non-radiant, non-intrusive, rapid in response, and low in cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm is of low spatial resolution due to the natural ‘soft field’ effect and ‘ill-posed solution’ problems; thus its applicable ranges are greatly limited. In this paper, an original data decomposition method is proposed, in which each ET measurement is decomposed into two independent new data based on the positive and negative sensing areas of the measurement. Consequently, the number of total measuring data is extended to twice the number of the original data, thus effectively reducing the ‘ill-posed solution’ problem. On the other hand, an index to measure the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish between different contributions of various units (pixels) for any ET measurement, and can efficiently reduce the ‘soft field’ effect of the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improvement of spatial resolution.
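The positive/negative-area data decomposition can be sketched on a toy forward model. The sensitivity matrix below is random, standing in for a real ET sensitivity map, and both decomposed data are computed directly from a synthetic phantom, so this only illustrates the doubling of the data, not the paper's full algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sensitivity matrix: 16 measurements x 36 pixels, with mixed-sign
# entries standing in for the positive and negative sensing areas.
J = rng.standard_normal((16, 36))
x_true = np.zeros(36)
x_true[14] = 1.0                      # single-inclusion phantom
b = J @ x_true                        # simulated measurement data

# Decompose each measurement into its positive-area and negative-area
# contributions, doubling the number of data as the abstract describes.
J_pos, J_neg = np.maximum(J, 0.0), np.minimum(J, 0.0)
J_ext = np.vstack([J_pos, J_neg])
b_ext = np.concatenate([J_pos @ x_true, J_neg @ x_true])

# Normalized linear back projection on the extended system.
lbp = (J_ext.T @ b_ext) / np.abs(J_ext).sum(axis=0)
print(lbp.shape)
```

Note that each original measurement is exactly the sum of its two decomposed parts, so no information is lost; the extended system simply separates the positive and negative sensitivity contributions.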
Cicone, A; Liu, J; Zhou, H
2016-04-13
Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we also propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).
Directory of Open Access Journals (Sweden)
Khaled Loukhaoukha
2017-12-01
Full Text Available Among emergent applications of digital watermarking are copyright protection and proof of ownership. Recently, Makbol and Khoo (2013) proposed for these applications a new robust blind image watermarking scheme based on the redundant discrete wavelet transform (RDWT) and the singular value decomposition (SVD). In this paper, we present two ambiguity attacks on this algorithm showing that it fails when used for applications such as owner identification, proof of ownership, and transaction tracking. Keywords: Ambiguity attack, Image watermarking, Singular value decomposition, Redundant discrete wavelet transform
Directory of Open Access Journals (Sweden)
Qinghua Xie
2017-01-01
Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there
Czech Academy of Sciences Publication Activity Database
Asadi, M.; Asadi, Z.; Savaripoor, N.; Dušek, Michal; Eigner, Václav; Shorkaei, M.R.; Sedaghat, M.
2015-01-01
Roč. 136, Feb (2015), 625-634 ISSN 1386-1425 R&D Projects: GA ČR(CZ) GAP204/11/0809 Institutional support: RVO:68378271 Keywords: Oxovanadium(IV) complexes * Schiff base * Kinetics of thermal decomposition * Electrochemistry Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 2.653, year: 2015
Capturing alternative secondary structures of RNA by decomposition of base-pairing probabilities.
Hagio, Taichi; Sakuraba, Shun; Iwakiri, Junichi; Mori, Ryota; Asai, Kiyoshi
2018-02-19
It is known that functional RNAs often switch their functions by forming different secondary structures. Popular tools for RNA secondary structure prediction, however, predict the single 'best' structure and do not produce alternative structures. There are bioinformatics tools to predict suboptimal structures, but it is difficult to detect which alternative secondary structures are essential. We propose a new computational method to detect essential alternative secondary structures from RNA sequences by decomposing the base-pairing probability matrix. The decomposition is calculated by a newly implemented software tool, RintW, which efficiently computes the base-pairing probability distributions over the Hamming distance from arbitrary reference secondary structures. The proposed approach has been demonstrated on the ROSE element RNA thermometer sequence and the Lysine RNA riboswitch, showing that the proposed approach captures conformational changes in secondary structures. We have shown that alternative secondary structures are captured by decomposing base-pairing probabilities over Hamming distance. Source code is available from http://www.ncRNA.org/RintW .
Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.
Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko
2017-07-01
Emotions modulate ECG signals such that they might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subjects has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using the ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency using a kNN classifier in 10-fold cross validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation, including testing with other classifiers and with variation in ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.
Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing
2012-01-01
The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing the arterial stiffness and systemic resistance of the cardiac system suffer from nonstationarity problems or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and BP on cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the properties of arterial stiffness and systemic resistance of a pig's cardiovascular system via ECG and blood pressure (BP). This experiment simulated a sequence of continuous changes of blood pressure arising from a steady condition to high blood pressure by clamping the artery, and the inverse by relaxing the artery. As a hypothesis, the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between ECG and BP on cardiac oscillation. The two assessment results demonstrate the merits of EEMD for signal analysis.
Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen
2018-01-01
With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
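The SVD compression idea (optimize in the dominant singular subspace of the influence matrix, then back-project to recover the beam weights) can be sketched with a least-squares stand-in for the paper's LP model; the influence matrix below is a synthetic low-rank stand-in, not real dosimetry data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy influence matrix: dose at 200 voxels from 120 candidate beams,
# built to be numerically low-rank, mimicking the degeneracy exploited
# by the acceleration technique.
D = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 120))
d_target = rng.standard_normal(200)     # stand-in prescription vector

# Compress the problem into the dominant singular subspace ...
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))        # numerical rank
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k]

# ... solve the reduced k-dimensional problem ...
y = (U_k.T @ d_target) / s_k
# ... and back-project to reconstruct the full beam-weight vector.
w = Vt_k.T @ y

# The reduced solve matches the full least-squares solve in delivered dose.
full_w, *_ = np.linalg.lstsq(D, d_target, rcond=None)
print(np.allclose(D @ w, D @ full_w))
```

The point of the compression is that the reduced problem has dimension equal to the numerical rank (here 15) rather than the beam count (120), which is where the reported speedup comes from; the paper solves an LP with hard/soft constraints in that reduced space instead of the plain least squares shown here.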
DOGMA: domain-based transcriptome and proteome quality assessment.
Dohmen, Elias; Kremer, Lukas P M; Bornberg-Bauer, Erich; Kemena, Carsten
2016-09-01
Genome studies have become cheaper and easier than ever before, due to the decreased costs of high-throughput sequencing and the free availability of analysis software. However, the quality of genome or transcriptome assemblies can vary a lot. Therefore, quality assessment of assemblies and annotations are crucial aspects of genome analysis pipelines. We developed DOGMA, a program for fast and easy quality assessment of transcriptome and proteome data based on conserved protein domains. DOGMA measures the completeness of a given transcriptome or proteome and provides information about domain content for further analysis. DOGMA provides a very fast way to do quality assessment within seconds. DOGMA is implemented in Python and published under the GNU GPL v.3 license. The source code is available at https://ebbgit.uni-muenster.de/domainWorld/DOGMA/. Contacts: e.dohmen@wwu.de or c.kemena@wwu.de. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.; Ghommem, Mehdi; Kagadis, George C.; Katsanos, Konstantinos H.; Loukopoulos, Vassilios C.; Burganos, Vasilis N.; Nikiforidis, George C.
2014-01-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable real-time forecasting of tumor ablation treatment simulations, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must
Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang
2018-05-01
Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the arbitrary manner in which effective singular values are selected weakens the robustness of this technique: improper selection of the effective singular values leads to poor SVD de-noising performance. Moreover, the computational complexity of SVD is too high for real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI) and based on short-time SVD (STSVD), is proposed for flaw feature detection from a measured signal in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments so that the feature information of a transient flaw echo is localized, and the MSI is then obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of the STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulations and experiments show that this technique is very efficient for real-time flaw detection from noisy data.
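The short-time SVD idea can be sketched as follows. This is a minimal numpy illustration on a synthetic noisy A-scan; the window length, hop, the simple row-folding of each segment into a matrix, and the fake flaw echo are all illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic noisy A-scan: a short windowed flaw echo buried in white noise.
n = 1024
t = np.arange(n)
signal = 0.05 * rng.standard_normal(n)
signal[500:532] += np.sin(2 * np.pi * 0.2 * t[:32]) * np.hanning(32)  # flaw echo

def msi(x, win=64, hop=16, rows=8):
    """Maximum singular value indicator via short-time SVD: each overlapping
    window is folded into a rows x (win // rows) matrix and its largest
    singular value is recorded."""
    out = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win].reshape(rows, -1)
        out.append(np.linalg.svd(seg, compute_uv=False)[0])
    return np.array(out)

ind = msi(signal)
peak_sample = int(np.argmax(ind)) * 16   # window start where the MSI peaks
```

The indicator stays near the noise floor except for the windows that overlap the echo, so its argmax localizes the flaw without any singular-value selection step.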
A new approach for crude oil price analysis based on empirical mode decomposition
International Nuclear Information System (INIS)
Zhang, Xun; Wang, Shou-Yang; Lai, K.K.
2008-01-01
The importance of understanding the underlying characteristics of international crude oil price movements attracts much attention from academic researchers and business practitioners. Due to the intrinsic complexity of the oil market, however, most models fail to produce consistently good results. Empirical Mode Decomposition (EMD), recently proposed by Huang et al., appears to be a novel data analysis method for nonlinear and non-stationary time series. By decomposing a time series into a small number of independent intrinsic modes with concrete interpretations, based on scale separation, EMD explains the generation of time series data from a novel perspective. Ensemble EMD (EEMD) is a substantial improvement of EMD that better separates the scales naturally, by adding white noise series to the original time series and then treating the ensemble averages as the true intrinsic modes. In this paper, we extend EEMD to crude oil price analysis. First, three crude oil price series with different time ranges and frequencies are decomposed into several independent intrinsic modes, from high to low frequency. Second, the intrinsic modes are composed into a fluctuating process, a slowly varying part and a trend based on fine-to-coarse reconstruction. The economic meanings of the three components are identified as short-term fluctuations caused by normal supply-demand disequilibrium or other market activities, the effect of the shock of a significant event, and a long-term trend. Finally, EEMD is shown to be a vital technique for crude oil price analysis. (author)
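The fine-to-coarse reconstruction step can be illustrated on synthetic modes. The modes below are stand-ins for EEMD output (a real application would obtain them from an EEMD implementation), and the one-sample t statistic with a 1.96 threshold is one common choice for deciding where the partial sum's mean first departs from zero; both are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
t = np.linspace(0, 10, n)

# Stand-ins for EEMD intrinsic modes, ordered fine to coarse, plus the residue.
noise = 0.3 * rng.standard_normal(n)
modes = [
    noise - noise.mean(),              # high-frequency fluctuation
    0.5 * np.sin(2 * np.pi * 4 * t),   # mid-frequency cycle
    2.0 * np.sin(2 * np.pi * 0.3 * t), # slow cycle (effect of a major event)
    0.5 * t,                           # residue: long-term trend
]

def fine_to_coarse_split(modes):
    """Index of the first fine-to-coarse partial sum whose mean departs
    significantly from zero (one-sample t statistic, ~5% two-sided level)."""
    partial = np.zeros_like(modes[0])
    for i, m in enumerate(modes):
        partial = partial + m
        tstat = partial.mean() / (partial.std(ddof=1) / np.sqrt(len(partial)))
        if abs(tstat) > 1.96:
            return i
    return len(modes)

split = fine_to_coarse_split(modes)
fluctuation = sum(modes[:split])       # short-term supply-demand fluctuations
slow_and_trend = sum(modes[split:])    # slowly varying part plus long-term trend
```

The partial sums of the zero-mean fast modes stay statistically indistinguishable from zero until the trend-bearing residue is added, which is where the series is split into its fluctuating and slowly varying components.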
Directory of Open Access Journals (Sweden)
Yuqi Dong
2016-12-01
Full Text Available Accurate short-term electrical load forecasting plays a pivotal role in the national economy and people's livelihood by providing effective future plans and ensuring a reliable supply of sustainable electricity. Although considerable work has been done to select suitable models and optimize model parameters for short-term electrical load forecasting, few models are built on the characteristics of the time series itself, which has a great impact on forecasting accuracy. For that reason, this paper proposes a hybrid model based on data decomposition that considers the periodicity, trend and randomness of the original electrical load time series. After preprocessing and analyzing the original time series, a generalized regression neural network optimized by a genetic algorithm is used to forecast the short-term electrical load. The experimental results demonstrate that the proposed hybrid model achieves not only a good fit but also a close approximation to the actual values when dealing with non-linear time series data with periodicity, trend and randomness.
Directory of Open Access Journals (Sweden)
Zhiwen Lu
2016-01-01
Full Text Available Multicrack localization in operating rotor systems is still a challenge today. Focusing on this challenge, a new approach based on proper orthogonal decomposition (POD) is proposed for multicrack localization in rotors. A two-disc rotor-bearing system with breathing cracks is established by the finite element method, and simulated sensors are distributed along the rotor to obtain the steady-state transverse responses required by POD. Based on the discontinuities introduced into the proper orthogonal modes (POMs) at the locations of cracks, the characteristic POM (CPOM), which is sensitive to crack locations and robust to noise, is selected for crack localization. Because the CPOM is difficult to use directly for localizing incipient cracks, damage indexes using fractal dimension (FD) and the gapped smoothing method (GSM) are adopted to extract the locations more effectively. The proposed method is validated as effective for multicrack localization by numerical experiments on rotors with different crack configurations, considering the effects of noise. In addition, the feasibility of using fewer sensors is also investigated.
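The POD step plus a damage index can be sketched with numpy. The "sensor responses" below are a made-up rank-one snapshot matrix with a slope discontinuity planted at one sensor, and the gapped smoothing index is reduced to a minimal central-difference residual; both simplifications are assumptions of this sketch, not the paper's FD/GSM formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative steady-state transverse responses: 20 "sensors" x 200 time samples,
# with a crack-like slope discontinuity planted at sensor 12.
x = np.linspace(0, 1, 20)
mode_shape = np.sin(np.pi * x)
mode_shape[12:] += 1.5 * (x[12:] - x[12])   # kink at the "crack" location
snapshots = np.outer(mode_shape, np.sin(2 * np.pi * 1.5 * np.arange(200) / 200))
snapshots += 0.001 * rng.standard_normal(snapshots.shape)

# POD: the proper orthogonal modes are the left singular vectors of the snapshots.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
cpom = U[:, 0]                               # dominant (characteristic) POM

# Minimal gapped-smoothing-style index: predict each interior point from its
# neighbours and use the squared residual as the damage index.
index = np.zeros_like(cpom)
index[1:-1] = (cpom[1:-1] - 0.5 * (cpom[:-2] + cpom[2:])) ** 2
crack_sensor = int(np.argmax(index))
```

The smooth part of the mode shape produces only small residuals, while the slope discontinuity at the crack leaves a localized spike in the index.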
Jiang, Shouyong; Yang, Shengxiang
2016-02-01
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics; for example, it may have a long tail, a sharp peak, or disconnected regions, which significantly degrade the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) divides the whole optimization procedure into two phases. Based on the crowdedness of the solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on existing benchmark MOPs and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm achieves promising performance on these complex problems.
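The core decomposition idea behind MOEA/D (weight vectors plus Tchebycheff aggregation) can be shown on a toy problem. This sketch deliberately omits the paper's two-phase strategy and niche scheme, as well as neighbourhood mating; the problem, the mutation-only variation and all parameters are illustrative assumptions.

```python
import random

random.seed(4)

# Toy bi-objective problem: minimize f1(x) = x and f2(x) = (1 - x)^2 on [0, 1].
# Every x in [0, 1] is Pareto-optimal, so the subproblems should spread across it.
def evaluate(x):
    return (x, (1.0 - x) ** 2)

# Decomposition: each weight vector defines one scalar subproblem.
n_sub = 11
weights = [(i / (n_sub - 1), 1.0 - i / (n_sub - 1)) for i in range(n_sub)]

def tchebycheff(f, w, z):
    """Tchebycheff aggregation g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
    return max(max(wi, 1e-6) * abs(fi - zi) for wi, fi, zi in zip(w, f, z))

# Stripped-down main loop: Gaussian mutation only, one solution per subproblem.
pop = [random.random() for _ in range(n_sub)]
z = [min(evaluate(x)[i] for x in pop) for i in range(2)]    # ideal point estimate
for _ in range(200):
    for k in range(n_sub):
        child = min(max(pop[k] + random.gauss(0, 0.1), 0.0), 1.0)
        f = evaluate(child)
        z = [min(zi, fi) for zi, fi in zip(z, f)]           # update ideal point
        if tchebycheff(f, weights[k], z) < tchebycheff(evaluate(pop[k]), weights[k], z):
            pop[k] = child                                  # child wins its subproblem
```

After a few hundred generations the extreme weight vectors pull their subproblems toward the ends of the front, which is the behaviour the niche scheme in the paper is designed to preserve on discontinuous fronts.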
Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview
International Nuclear Information System (INIS)
Han, G.; Lin, B.; Xu, Z.
2017-01-01
The electrocardiogram (ECG) signal is a nonlinear, non-stationary weak signal that reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise, such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart disease. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, combinations of EMD with other methods, and the EEMD technique. EMD is a promising but imperfect method for processing nonlinear and non-stationary signals such as the ECG, and combining EMD with other algorithms is a good way to improve noise cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in EMD-based ECG signal denoising are outlined.
Energy Technology Data Exchange (ETDEWEB)
Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R. [Pacific Northwest National Lab., Richland, WA (United States); Kim, B.C.; Gavaskar, A.R. [Battelle Columbus Div., OH (United States)
1996-02-01
Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCB), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.
Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang
2017-12-01
A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonous and exclusive trends in the output and technology hierarchies. Based on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were then established based on potential future industrial structural adjustments, and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM (1, 1)). The comparison shows that the results of the two approaches were basically identical, differing by less than 10%. Taking Tianjin, China, as a case, and using data from 2003-2012, this study predicts that industrial water demand will continuously increase, reaching 580 million m3, 776.4 million m3, and approximately 1.09 billion m3 by the years 2015, 2020 and 2025, respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.
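A refined Laspeyres decomposition of the kind the study builds on can be worked through with made-up numbers. The sketch below assumes a single sector with water use W = Q * s * u (output x structure share x water-use intensity) and uses the common refinement that distributes interaction terms equally among the factors; the figures are purely illustrative.

```python
# Three-factor refined Laspeyres decomposition of a change in industrial water
# use W = Q * s * u. Interaction terms are split equally among the factors.
Q0, s0, u0 = 100.0, 0.40, 2.0   # base year: output, structure share, intensity
Q1, s1, u1 = 130.0, 0.35, 1.6   # target year

dQ, ds, du = Q1 - Q0, s1 - s0, u1 - u0
W0, W1 = Q0 * s0 * u0, Q1 * s1 * u1

# Each factor's effect: its pure term, half of each pairwise interaction it
# participates in, and a third of the triple interaction.
eff_output = dQ*s0*u0 + 0.5*dQ*(ds*u0 + s0*du) + dQ*ds*du/3.0
eff_struct = Q0*ds*u0 + 0.5*ds*(dQ*u0 + Q0*du) + dQ*ds*du/3.0
eff_tech   = Q0*s0*du + 0.5*du*(dQ*s0 + Q0*ds) + dQ*ds*du/3.0
```

By construction the three effects sum exactly to the total change W1 - W0, and with falling intensity and share the technology and structure effects come out water-saving (negative) while the output effect is water-increasing.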
A hybrid filtering method based on a novel empirical mode decomposition for friction signals
International Nuclear Information System (INIS)
Li, Chengwei; Zhan, Liwei
2015-01-01
During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulation and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)
International Nuclear Information System (INIS)
Hu Xintao; Zhu Jianxin; Ding Qiong
2011-01-01
Highlights: We study the environmental impacts of two remediation technologies, Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD). Combined midpoint/damage approaches were calculated for the two technologies. The results showed that the major environmental impacts arose from energy consumption, and that BCD has a lower environmental impact than IHTI in terms of single score. - Abstract: Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. The dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB-contaminated sites, with Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) selected as representatives of incineration and non-incineration. A combined midpoint/damage approach was adopted, using SimaPro 7.2 and IMPACT 2002+, to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption of the five subsystems of the IHTI and BCD technologies, respectively. It was found that the major environmental impacts through the whole life cycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts concerning carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication and global warming. In the BCD process, the rotary kiln reactor subsystem presents the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity and respiratory inorganics. In the view of midpoint impacts, the characterization values for global warming from IHTI and
Domain-based small molecule binding site annotation
Directory of Open Access Journals (Sweden)
Dumontier Michel
2006-03-01
Full Text Available Abstract Background Accurate small molecule binding site information for a protein can facilitate studies in drug docking, drug discovery and function prediction, but small molecule binding site protein sequence annotation is sparse. The Small Molecule Interaction Database (SMID), a database of protein domain-small molecule interactions, was created using structural data from the Protein Data Bank (PDB). More importantly, it provides a means to predict small molecule binding sites on proteins with a known or unknown structure and, unlike prior approaches, removes large numbers of false positive hits arising from transitive alignment errors, non-biologically significant small molecules and crystallographic conditions that overpredict ion binding sites. Description Using a set of co-crystallized protein-small molecule structures as a starting point, SMID interactions were generated by identifying protein domains that bind to small molecules, using NCBI's Reverse Position Specific BLAST (RPS-BLAST) algorithm. SMID records are available for viewing at http://smid.blueprint.org. The SMID-BLAST tool provides accurate transitive annotation of small-molecule binding sites for proteins not found in the PDB. Given a protein sequence, SMID-BLAST identifies domains using RPS-BLAST and then lists potential small molecule ligands based on SMID records, as well as their aligned binding sites. A heuristic ligand score is calculated based on E-value, ligand residue identity and domain entropy to assign a level of confidence to the hits found. SMID-BLAST predictions were validated against a set of 793 experimental small molecule interactions from the PDB, of which 472 (60%) identically matched the experimental small molecule and, of these, 344 had greater than 80% of the binding site residues correctly identified. Further, we estimate that 45% of predictions which were not observed in the PDB validation set may be true positives. Conclusion By
Kernel-Based Learning for Domain-Specific Relation Extraction
Basili, Roberto; Giannone, Cristina; Del Vescovo, Chiara; Moschitti, Alessandro; Naggar, Paolo
In a specific process of business intelligence, i.e. investigation of organized crime, empirical language processing technologies can play a crucial role. The analysis of transcriptions of investigative activities, such as police interrogations, for the recognition and storage of complex relations among people and locations is a very difficult and time-consuming task, ultimately relying on pools of experts. We discuss here an inductive relation extraction platform that opens the way to much cheaper and more consistent workflows. The presented empirical investigation shows that accurate results, comparable to those of the expert teams, can be achieved, and parametrization allows fine-tuning of the system behavior to fit domain-specific requirements.
A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.
Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin
2017-01-01
Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most of the current work on FC has assumed temporally stationary networks, but recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC and thereby the formation and dissolution of brain networks. We introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs): first identifying change points at which the network connectivity across subjects shows significant changes, and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects, yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change points are consistent with the a priori known ERN interval, and the results show significant connectivities in medial-frontal regions, consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization: by capturing the topological structure of FCNs, it provides more accurate change-point detection and state summarization.
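The subspace-distance change-point idea can be sketched on synthetic data. This toy version works frame by frame on (subject x connection) matrices rather than on the paper's four-mode tensor, uses a rank-1 dominant subspace, and plants an artificial state switch; all of these are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic FCNs over time: at each of 40 time points, a (subjects x connections)
# matrix drawn around a rank-1 "network state". The state switches at t = 20.
n_sub, n_conn, T = 12, 30, 40
state_a = rng.standard_normal(n_conn)
state_b = rng.standard_normal(n_conn)
frames = []
for t in range(T):
    base = state_a if t < 20 else state_b
    frames.append(np.outer(rng.standard_normal(n_sub), base)
                  + 0.1 * rng.standard_normal((n_sub, n_conn)))

def dominant_subspace(M, r=1):
    """Orthonormal basis of the top-r row space of M."""
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[:r].T

def subspace_distance(A, B):
    """Projection (chordal) distance between equal-dimension subspaces."""
    return np.linalg.norm(A @ A.T - B @ B.T)

dists = [subspace_distance(dominant_subspace(frames[t]),
                           dominant_subspace(frames[t + 1]))
         for t in range(T - 1)]
change_point = int(np.argmax(dists)) + 1    # first frame of the new state
```

Within a state the dominant row space barely moves, so consecutive distances stay small; across the state switch the subspaces are nearly orthogonal and the distance spikes.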
Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.
Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis
2017-07-01
T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method, is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are separately analyzed, and a nonparametric hypothesis test, based on Bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional spectral method alone. Regarding sensitivity, the EMD method also performs better in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
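The spectral method that the EMD block is paired with can be sketched in numpy: align the ST-T segments beat by beat, FFT across beats, and test the power at 0.5 cycles per beat against a nearby noise band. The beat matrix, amplitudes and the threshold of 3 below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic beat series: 64 aligned ST-T segments of 40 samples each, with an
# every-other-beat alternation (25 uV peak-to-peak) added on top of noise.
n_beats, n_samp = 64, 40
stt = 5.0 * rng.standard_normal((n_beats, n_samp))           # noise, in uV
alternans = 25.0 * np.hanning(n_samp)
stt += 0.5 * alternans * ((-1.0) ** np.arange(n_beats))[:, None]

# Spectral method: FFT across beats for each sample, average the power spectra,
# and compare the bin at 0.5 cycles/beat against a nearby noise band.
spec = np.abs(np.fft.rfft(stt, axis=0)) ** 2
agg = spec.mean(axis=1)
alt_power = agg[n_beats // 2]                                # 0.5 cycles/beat bin
noise_band = agg[n_beats // 2 - 8 : n_beats // 2 - 1]
twa_ratio = (alt_power - noise_band.mean()) / noise_band.std()
significant = twa_ratio > 3.0
```

The every-other-beat pattern concentrates its power entirely in the 0.5 cycles/beat bin, so the ratio against the neighbouring noise bins cleanly separates alternans from broadband noise.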
Thermal decomposition of dimethoxymethane and dimethyl carbonate catalyzed by solid acids and bases
International Nuclear Information System (INIS)
Fu Yuchuan; Zhu Haiyan; Shen Jianyi
2005-01-01
The thermal decomposition of dimethoxymethane (DMM) and dimethyl carbonate (DMC) on MgO, H-ZSM-5, SiO2, γ-Al2O3 and ZnO was studied using a fixed bed isothermal reactor equipped with an online gas chromatograph. It was found that DMM was stable on MgO at temperatures up to 623 K, while it decomposed over the acidic H-ZSM-5 with 99% conversion at 423 K. On the other hand, DMC was easily decomposed on both the strong solid base and the strong solid acid: the conversion of DMC was 76% on MgO at 473 K and 98% on H-ZSM-5 at 423 K, and it decomposed even more readily on the amphoteric γ-Al2O3. Both DMM and DMC were relatively stable on SiO2, which possesses little surface acidity or basicity, and even more stable on ZnO, with conversions of DMM and DMC of about 1.5% at 573 K. Thus, metal oxides with either strong acidity or strong basicity are not suitable for the selective oxidation of DMM to DMC, while ZnO may be used as a component for the reaction.
Directory of Open Access Journals (Sweden)
Y. Sun
2017-09-01
Full Text Available Hyperspectral imaging systems can obtain spectral and spatial information simultaneously, with bandwidths at the level of 10 nm or even less. Therefore, hyperspectral remote sensing can detect kinds of objects that cannot be detected by wide-band remote sensing, making it one of the most active areas in remote sensing. In this study, under a fuzzy set of full constraints, a Normalized Multi-Endmember Decomposition Method (NMEDM) for vegetation, water, and soil was proposed to reconstruct hyperspectral data using a large number of high-quality multispectral data and auxiliary spectral library data. This study considered spatial and temporal variation and decreased the calculation time required to reconstruct the hyperspectral data. The results of spectral reconstruction based on NMEDM show that the reconstructed data are of good quality and have practical applications, making spectral feature identification possible. This method also extends the depth and breadth of remote sensing data applications, helping to explore the relationship between multispectral and hyperspectral data.
Automatic decomposition of a complex hologram based on the virtual diffraction plane framework
International Nuclear Information System (INIS)
Jiao, A S M; Tsang, P W M; Lam, Y K; Poon, T-C; Liu, J-P; Lee, C-C
2014-01-01
Holography is a technique for capturing the hologram of a three-dimensional scene. In many applications, it is often pertinent to retain specific items of interest in the hologram, rather than retaining the full information, which may cause distraction in the analytical process that follows. For a real optical image that is captured with a camera or scanner, this process can be realized by applying image segmentation algorithms to decompose an image into its constituent entities. However, because it is different from an optical image, classic image segmentation methods cannot be applied directly to a hologram, as each pixel in the hologram carries holistic, rather than local, information of the object scene. In this paper, we propose a method to perform automatic decomposition of a complex hologram based on a recently proposed technique called the virtual diffraction plane (VDP) framework. Briefly, a complex hologram is back-propagated to a hypothetical plane known as the VDP. Next, the image on the VDP is automatically decomposed, through the use of the segmentation on the magnitude of the VDP image, into multiple sub-VDP images, each representing the diffracted waves of an isolated entity in the scene. Finally, each sub-VDP image is reverted back to a hologram. As such, a complex hologram can be decomposed into a plurality of subholograms, each representing a discrete object in the scene. We have demonstrated the successful performance of our proposed method by decomposing a complex hologram that is captured through the optical scanning holography (OSH) technique. (papers)
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models often fail to predict it accurately. Against this background, a hybrid method is proposed in this paper that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. First, the VMD model adaptively decomposes the original signal (the crude oil price) into mode functions. Second, independent components are separated by ICA, and how they affect the crude oil price is analyzed. Finally, the ARIMA model forecasts the crude oil price; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
Singular value decomposition based feature extraction technique for physiological signal analysis.
Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C
2012-06-01
Multiscale entropy (MSE) is one of the popular techniques used to calculate and describe the complexity of physiological signals. Many studies use this approach to detect changes in the physiological conditions of the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and a support vector machine (SVM) is adopted to classify the different physiological states. A test data set from the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal attained a classification accuracy of 89.157%, higher than that obtained using MSE values (71.084%). The results show that the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could serve as a reference for doctors in the diagnosis of congestive heart failure (CHF).
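The SVD feature-extraction step can be sketched as follows. The signals are synthetic stand-ins for two physiological states, and a nearest-centroid classifier stands in for the paper's SVM so the sketch has no external dependencies; the folding shape and signal parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def svd_features(x, rows=10):
    """Fold a 1-D signal into a rows x cols matrix and return its singular
    value spectrum as a fixed-length feature vector."""
    cols = len(x) // rows
    return np.linalg.svd(x[:rows * cols].reshape(rows, cols), compute_uv=False)

# Two illustrative "physiological states": a regular rhythm vs. a noisier one.
def make_signal(noisy):
    t = np.arange(500)
    return np.sin(2 * np.pi * t / 25) + (1.0 if noisy else 0.1) * rng.standard_normal(500)

train = [(svd_features(make_signal(noisy)), noisy)
         for noisy in (False, True) for _ in range(20)]
centroids = {c: np.mean([f for f, y in train if y == c], axis=0) for c in (False, True)}

def classify(x):
    f = svd_features(x)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

acc = np.mean([classify(make_signal(noisy)) == noisy
               for noisy in (False, True) for _ in range(25)])
```

The regular signal folds into a nearly rank-one matrix (one large singular value, small tail), while the noisy one spreads energy across the whole spectrum, so even this crude classifier separates the two states.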
Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid
Energy Technology Data Exchange (ETDEWEB)
Liu, Lu [University of Tennessee, Knoxville (UTK); Albright, Austin P [ORNL; Rahimpour, Alireza [University of Tennessee, Knoxville (UTK); Guo, Jiandong [University of Tennessee, Knoxville (UTK); Qi, Hairong [University of Tennessee, Knoxville (UTK); Liu, Yilu [University of Tennessee (UTK) and Oak Ridge National Laboratory (ORNL)
2017-01-01
Wide-area measurement systems (WAMSs) are used in smart grid systems to enable efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede effective and efficient analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm. Higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture characteristics such as trends and inter-area oscillations while reducing the data storage requirements.
Radiative transport-based frequency-domain fluorescence tomography
International Nuclear Information System (INIS)
Joshi, Amit; Rasmussen, John C; Sevick-Muraca, Eva M; Wareing, Todd A; McGhee, John
2008-01-01
We report the development of radiative transport model-based fluorescence optical tomography from frequency-domain boundary measurements. The coupled radiative transport model for describing NIR fluorescence propagation in tissue is solved by novel software based on the established Attila(TM) particle transport simulation platform. The proposed scheme enables the prediction of fluorescence measurements with non-contact sources and detectors at a minimal computational cost. An adjoint transport solution-based fluorescence tomography algorithm is implemented on dual grids to efficiently assemble the measurement sensitivity Jacobian matrix. Finally, we demonstrate fluorescence tomography on a realistic computational mouse model to locate nM to μM fluorophore concentration distributions in simulated mouse organs.
Boniface Ngah Epo; Francis Menjo Baye; Nadine Teme Angele Manga
2011-01-01
This study applies the regression-based inequality decomposition technique to explain poverty and inequality trends in Cameroon. We also identify gender-related factors which explain income disparities and discrimination, based on the 2001 and 2007 Cameroon household consumption surveys. The results show that education, health, employment in the formal sector, age cohorts, household size, gender, ownership of farmland and urban versus rural residence explain household economic wellbeing; dispa...
Energy Technology Data Exchange (ETDEWEB)
Han, Sang-Bo [Industry Applications Research Laboratory, Korea Electrotechnology Research Institute, Changwon, Kyeongnam (Korea, Republic of)]; Oda, Tetsuji [Department of Electrical Engineering, The University of Tokyo, Tokyo 113-8656 (Japan)]
2007-05-15
The hybrid barrier discharge plasma process combined with ozone decomposition catalysts was studied experimentally for decomposing dilute trichloroethylene (TCE). Based on a fundamental experiment on catalytic activity for ozone decomposition, MnO{sub 2} was selected for the main experiments because of its higher catalytic ability than other metal oxides. The lower the initial TCE concentration in the working gas, the larger the ozone concentration generated by the barrier discharge plasma treatment. Near-complete decomposition of dichloro-acetylchloride (DCAC) into Cl{sub 2} and CO{sub x} was observed for initial TCE concentrations of less than 250 ppm. Cleavage of the C=C {pi} bond in TCE gave the carbon single bond of DCAC through an oxidation reaction during the barrier discharge plasma treatment. The DCAC was easily broken down in the subsequent catalytic reaction. While the oxygen concentration in the working gas was varied, oxygen radicals in the plasma space reacted strongly with precursors of DCAC compared with those of trichloro-acetaldehyde. A chlorine radical chain reaction is considered a plausible decomposition mechanism in the barrier discharge plasma treatment. The potential energy of oxygen radicals at the surface of the catalyst is considered an important factor in driving the reactive chemical reactions.
International Nuclear Information System (INIS)
Hu, T. Y.; Connolly, S. M.; Lahoda, E. J.; Kriel, W.
2008-01-01
The key interface component between the reactor and chemical systems for the sulfuric acid based processes to make hydrogen is the sulfuric acid decomposition reactor. The materials issues for the decomposition reactor are severe since sulfuric acid must be heated, vaporized and decomposed. SiC has been identified and proven by others to be an acceptable material. However, SiC has a significant design issue when it must be interfaced with metals for connection to the remainder of the process. Westinghouse has developed a design utilizing SiC for the high temperature portions of the reactor that are in contact with the sulfuric acid and polymeric coated steel for low temperature portions. This design is expected to have a reasonable cost for an operating lifetime of 20 years. It can be readily maintained in the field, and is transportable by truck (maximum OD is 4.5 meters). This paper summarizes the detailed engineering design of the Westinghouse Decomposition Reactor and the decomposition reactor's capital cost. (authors)
International Nuclear Information System (INIS)
BEHRENS JR., RICHARD; MINIER, LEANNA M.G.
1999-01-01
A study to characterize the low-temperature reactive processes for o-AP and an AP/HTPB-based propellant (class 1.3) is being conducted in the laboratory using the techniques of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and scanning electron microscopy (SEM). The results presented in this paper are a follow-up of previous work that showed the overall decomposition to be complex and controlled by both physical and chemical processes. The decomposition is characterized by the occurrence of one major event that consumes up to approximately 35% of the AP, depending upon particle size, and leaves behind a porous agglomerate of AP. The major gaseous products released during this event include H(sub 2)O, O(sub 2), Cl(sub 2), N(sub 2)O and HCl. The recent efforts provide further insight into the decomposition processes for o-AP. The temporal behaviors of the gas formation rates (GFRs) for the products indicate that the major decomposition event consists of three chemical channels. The first and third channels are affected by the pressure in the reaction cell and occur at the surface or in the gas phase above the surface of the AP particles. The second channel is not affected by pressure and accounts for the solid-phase reactions characteristic of o-AP. The third channel involves the interactions of the decomposition products with the surface of the AP. SEM images of partially decomposed o-AP provide insight into how the morphology changes as the decomposition progresses. A conceptual model has been developed, based upon the STMBMS and SEM results, that provides a basic description of the processes. The thermal decomposition characteristics of the propellant are evaluated from the identities of the products and the temporal behaviors of their GFRs. First, the volatile components in the propellant evolve from the propellant as it is heated. Second, the hot AP (and HClO(sub 4)) at the AP-binder interface oxidize the binder through reactions that
An ill-conditioning conformal radiotherapy analysis based on singular values decomposition
International Nuclear Information System (INIS)
Lefkopoulos, D.; Grandjean, P.; Bendada, S.; Dominique, C.; Platoni, K.; Schlienger, M.
1995-01-01
Clinical experience in stereotactic radiotherapy of irregular complex lesions has shown that optimization algorithms are necessary to improve the dose distribution. We have developed a general optimization procedure which can be applied to different conformal irradiation techniques. In this presentation the procedure is tested on the stereotactic radiotherapy modality of complex cerebral lesions treated with a multi-isocentric technique based on the 'associated targets methodology'. In this inverse procedure we use singular value decomposition (SVD) analysis, which proposes several optimal solutions for the narrow-beam weights of each isocentre. The SVD analysis quantifies the ill-conditioning of the dosimetric calculation of the stereotactic irradiation using the condition number, which is the ratio of the largest to the smallest singular value. Our dose distribution optimization approach consists of studying the influence of the irradiation parameters on the stereotactic radiotherapy inverse problem. The adjustment of the different irradiation parameters in the 'SVD optimizer' procedure is realized taking into account the ratio of reconstruction quality to computation time, which permits a more efficient use of the 'SVD optimizer' in clinical applications for real 3D lesions. The evaluation criteria for the choice of satisfactory solutions are based on dose-volume histograms and clinical considerations. We present the efficiency of the 'SVD optimizer' in analyzing and predicting the ill-conditioning in stereotactic radiotherapy and in recognizing the topography of the different beams in order to create an optimal reconstructed weighting vector. The planning of stereotactic treatments using the 'SVD optimizer' is examined for mono-isocentrically and complex dual-isocentrically treated lesions. The application of the SVD optimization technique provides conformal dose distributions for complex intracranial lesions. It is a general optimization procedure
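The condition number used in this abstract to quantify ill-conditioning can be sketched in a few lines. The dose-influence matrix, its dimensions, and the truncation threshold below are toy illustrations, not the paper's dosimetric model; the truncated-SVD solve shows one common way to regularize such an inverse problem.

```python
import numpy as np

# Toy stand-in for a dose-influence matrix A mapping 8 beam weights
# to 40 dose points; two beams are made nearly identical on purpose.
rng = np.random.default_rng(1)
A = rng.random((40, 8))
A[:, 7] = A[:, 6] + 1e-6 * rng.standard_normal(40)

s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]          # condition number: largest / smallest singular value
print(f"condition number: {cond:.3e}")

# Truncated-SVD (regularized) reconstruction of weights for a target dose d
d = rng.random(40)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-3 * s[0]))      # keep only well-conditioned modes
w = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])
```

Dropping the near-zero singular value prevents the almost-redundant beam pair from producing wildly oscillating weights.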
Three-Component Decomposition Based on Stokes Vector for Compact Polarimetric SAR
Directory of Open Access Journals (Sweden)
Hanning Wang
2015-09-01
Full Text Available In this paper, a three-component decomposition algorithm is proposed for processing compact polarimetric SAR images. By using the correspondence between the covariance matrix and the Stokes vector, three-component scattering models for CTLR and DCP modes are established. The explicit expression of decomposition results is then derived by setting the contribution of volume scattering as a free parameter. The degree of depolarization is taken as the upper bound of the free parameter, for the constraint that the weighting factor of each scattering component should be nonnegative. Several methods are investigated to estimate the free parameter suitable for decomposition. The feasibility of this algorithm is validated by AIRSAR data over San Francisco and RADARSAT-2 data over Flevoland.
Entropy based classifier for cross-domain opinion mining
Directory of Open Access Journals (Sweden)
Jyoti S. Deshmukh
2018-01-01
Full Text Available In recent years, the growth of social networks has increased people's interest in analyzing reviews and opinions about products before buying them. Consequently, this has given rise to domain adaptation as a prominent area of research in sentiment analysis. A classifier trained on one domain often gives poor results on data from another domain: the expression of sentiment differs in every domain, and labeling each domain separately is very costly and time consuming. Therefore, this study proposes an approach that extracts and classifies opinion words from one domain, called the source domain, and predicts opinion words of another domain, called the target domain, using a semi-supervised approach which combines modified maximum entropy and bipartite graph clustering. A comparison of opinion classification on reviews from four different product domains is presented. The results demonstrate that the proposed method performs relatively well in comparison to the other methods. Comparison with SentiWordNet for domain-specific and domain-independent words reveals that, on average, 72.6% and 88.4% of words, respectively, are correctly classified.
Optimization of dual-energy CT acquisitions for proton therapy using projection-based decomposition.
Vilches-Freixas, Gloria; Létang, Jean Michel; Ducros, Nicolas; Rit, Simon
2017-09-01
Dual-energy computed tomography (DECT) has been presented as a valid alternative to single-energy CT to reduce the uncertainty of the conversion of patient CT numbers to proton stopping power ratio (SPR) of tissues relative to water. The aim of this work was to optimize DECT acquisition protocols from simulations of X-ray images for the treatment planning of proton therapy using a projection-based dual-energy decomposition algorithm. We have investigated the effect of various voltages and tin filtration combinations on the SPR map accuracy and precision, and the influence of the dose allocation between the low-energy (LE) and the high-energy (HE) acquisitions. For all spectra combinations, virtual CT projections of the Gammex phantom were simulated with a realistic energy-integrating detector response model. Two situations were simulated: an ideal case without noise (infinite dose) and a realistic situation with Poisson noise corresponding to a 20 mGy total central dose. To determine the optimal dose balance, the proportion of LE-dose with respect to the total dose was varied from 10% to 90% while keeping the central dose constant, for four dual-energy spectra. SPR images were derived using a two-step projection-based decomposition approach. The ranges of 70 MeV, 90 MeV, and 100 MeV proton beams onto the adult female (AF) reference computational phantom of the ICRP were analytically determined from the reconstructed SPR maps. The energy separation between the incident spectra had a strong impact on the SPR precision. Maximizing the incident energy gap reduced image noise. However, the energy gap was not a good metric to evaluate the accuracy of the SPR. In terms of SPR accuracy, a large variability of the optimal spectra was observed when studying each phantom material separately. The SPR accuracy was almost flat in the 30-70% LE-dose range, while the precision showed a minimum slightly shifted in favor of lower LE-dose. Photon noise in the SPR images (20 mGy dose
Directory of Open Access Journals (Sweden)
Mishra Vinod
2016-01-01
Full Text Available The numerical Laplace transform method is applied to approximate the solution of nonlinear (quadratic) Riccati differential equations, combined with the Adomian decomposition method. A new technique is proposed in this work by reintroducing the unknown function in the Adomian polynomial using the well-known Newton-Raphson formula. The solutions obtained by the iterative algorithm are expressed as an infinite series. The simplicity and efficacy of the method are demonstrated with examples, in which comparisons are made among the exact solutions, the ADM (Adomian decomposition method), the HPM (homotopy perturbation method), the Taylor series method and the proposed scheme.
Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition
Directory of Open Access Journals (Sweden)
yuan Shuai
2017-01-01
Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed with the guidance of the speech rank value to remove the noise. Extensive experiments have been carried out in white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, in terms of yielding less residual noise and lower speech distortion.
Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan
2015-11-01
In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.
Directory of Open Access Journals (Sweden)
Daryl L Moorhead
2013-08-01
Full Text Available We re-examined data from a recent litter decay study to determine if additional insights could be gained to inform decomposition modeling. Rinkes et al. (2013) conducted 14-day laboratory incubations of sugar maple (Acer saccharum) or white oak (Quercus alba) leaves, mixed with sand (0.4% organic C content) or loam (4.1% organic C). They measured microbial biomass C, carbon dioxide efflux, soil ammonium, nitrate, and phosphate concentrations, and β-glucosidase (BG), β-N-acetyl-glucosaminidase (NAG), and acid phosphatase (AP) activities on days 1, 3, and 14. Analyses of relationships among variables yielded different insights than original analyses of individual variables. For example, although respiration rates per g soil were higher for loam than sand, rates per g soil C were actually higher for sand than loam, and rates per g microbial C showed little difference between treatments. Microbial biomass C peaked on day 3 when biomass-specific activities of enzymes were lowest, suggesting uptake of litter C without extracellular hydrolysis. This result refuted a common model assumption that all enzyme production is constitutive and thus proportional to biomass, and/or indicated that part of litter decay is independent of enzyme activity. The length and angle of vectors defined by ratios of enzyme activities (BG/NAG versus BG/AP) represent relative microbial investments in C-acquiring (length) versus N- and P-acquiring (angle) enzymes. Shorter lengths on day 3 suggested low C limitation, whereas greater lengths on day 14 suggested an increase in C limitation with decay. The soils and litter in this study generally had stronger P limitation (angles > 45˚). Reductions in vector angles to < 45˚ for sand by day 14 suggested a shift to N limitation. These relational variables inform enzyme-based models, and are usually much less ambiguous when obtained from a single study in which measurements were made on the same samples than when extrapolated from separate studies.
Hu, Xintao; Zhu, Jianxin; Ding, Qiong
2011-07-15
Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. Dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB-contaminated sites, where Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) were selected as representatives of incineration and non-incineration. A combined midpoint/damage approach was adopted, using SimaPro 7.2 and IMPACT 2002+, to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption of the five subsystems of the IHTI and BCD technologies, respectively. It was found that the major environmental impacts through the whole life cycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts for carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication and global warming. In the BCD process, the rotary kiln reactor subsystem presents the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity and respiratory inorganics. In terms of midpoint impacts, the characterization values for global warming from IHTI and BCD were about 432.35 and 38.5 kg CO(2)-eq per ton of PCB-containing soil, respectively. The LCA results showed that the single score for the BCD environmental impact was 1468.97 Pt while IHTI's score was 2785.15 Pt, which indicates that BCD potentially has a lower environmental impact than IHTI in the PCB-contaminated soil remediation process. Copyright © 2011 Elsevier B.V. All rights reserved.
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series, daily Poaceae pollen concentrations over the period 2006-2014, was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
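A seasonal-trend decomposition of the kind used in this study can be sketched with a classical moving-average version; the sketch below is a simplified numpy-only stand-in for the LOESS-based STL procedure, and the synthetic "pollen" series, its shortened cycle length, and the noise level are illustrative assumptions.

```python
import numpy as np

def classical_decompose(y, period):
    """Additive seasonal-trend decomposition by moving averages:
    a simplified stand-in for LOESS-based STL."""
    kernel = np.ones(period) / period
    # Edge-padded centered moving average estimates the trend
    ypad = np.pad(y, period, mode="edge")
    trend = np.convolve(ypad, kernel, mode="same")[period:-period]
    detrended = y - trend
    # Average each position within the cycle to estimate the seasonal profile
    profile = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(profile, len(y) // period + 1)[: len(y)]
    resid = y - trend - seasonal
    return trend, seasonal, resid

# Synthetic "daily pollen" series: rising trend + annual-like cycle + noise
rng = np.random.default_rng(0)
period, cycles = 73, 8                     # shortened cycle for illustration
t = np.arange(period * cycles)
true_seasonal = 50 * np.maximum(0, np.sin(2 * np.pi * t / period))
y = 0.05 * t + true_seasonal + 5 * rng.standard_normal(t.size)

trend, seasonal, resid = classical_decompose(y, period)
```

As in the abstract, separating the components lets the stochastic residual be modeled on its own, apart from the repeating seasonal signal.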
An Improved Algorithm to Delineate Urban Targets with Model-Based Decomposition of PolSAR Data
Directory of Open Access Journals (Sweden)
Dingfeng Duan
2017-10-01
Full Text Available In model-based decomposition algorithms using polarimetric synthetic aperture radar (PolSAR data, urban targets are typically identified based on the existence of strong double-bounced scattering. However, urban targets with large azimuth orientation angles (AOAs produce strong volumetric scattering that appears similar to scattering characteristics from tree canopies. Due to scattering ambiguity, urban targets can be classified into the vegetation category if the same classification scheme of the model-based PolSAR decomposition algorithms is followed. To resolve the ambiguity and to reduce the misclassification eventually, we introduced a correlation coefficient that characterized scattering mechanisms of urban targets with variable AOAs. Then, an existing volumetric scattering model was modified, and a PolSAR decomposition algorithm developed. The validity and effectiveness of the algorithm were examined using four PolSAR datasets. The algorithm was valid and effective to delineate urban targets with a wide range of AOAs, and applicable to a broad range of ground targets from urban areas, and from upland and flooded forest stands.
Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan
2013-01-01
Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. In this work, conventional radiographic femur bone images are used to analyze the trabecular architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. Two interpolation methods of bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as multiquadric radial basis functions and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.
2018-03-01
In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement, and require oversight by an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes, and second uses the squared envelope spectrum to reveal the presence of cyclostationarity in each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
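The squared envelope spectrum that this abstract builds on can be computed in a few lines. The sketch below is a minimal numpy-only illustration: the carrier and modulation frequencies are made up to mimic an amplitude-modulated bearing-like signature, and the FFT-based analytic signal stands in for a Hilbert transform.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (equivalent to a Hilbert-transform
    construction; assumes even-length input)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0
    h[N // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 1000
t = np.arange(2000) / fs
# Carrier at 100 Hz whose amplitude is modulated at 10 Hz, mimicking a
# weak periodic fault signature riding on a stronger component
x = (1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)) * np.sin(2 * np.pi * 100 * t)

env2 = np.abs(analytic_signal(x)) ** 2          # squared envelope
ses = np.abs(np.fft.rfft(env2 - env2.mean()))   # squared envelope spectrum
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print("dominant cyclic frequency:", freqs[np.argmax(ses)], "Hz")
```

The peak of the squared envelope spectrum sits at the modulation (cyclic) frequency rather than at the carrier, which is exactly why the tool reveals cyclostationarity.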
Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan
2017-08-01
Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method also outperformed filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
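The CCA step at the core of the method above can be sketched compactly with a QR/SVD-based canonical correlation. The MEMD sub-band extraction is omitted here; the simulated EEG channels, candidate frequencies, and noise level are illustrative assumptions, not the study's recordings.

```python
import numpy as np

def cca_maxcorr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    via QR of the centered matrices followed by an SVD."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

fs, T = 250, 2.0
t = np.arange(int(fs * T)) / fs
rng = np.random.default_rng(2)
# Four simulated EEG channels carrying a 10 Hz SSVEP buried in noise
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
                       for _ in range(4)])

scores = {}
for f in (8.0, 10.0, 12.0):                # candidate stimulation frequencies
    ref = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                           np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t)])
    scores[f] = cca_maxcorr(eeg, ref)

print("detected frequency:", max(scores, key=scores.get), "Hz")
```

The reference set includes sine/cosine pairs at the fundamental and second harmonic, which is the usual construction for CCA-based SSVEP detection.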
Evolution based on domain combinations: the case of glutaredoxins
Directory of Open Access Journals (Sweden)
Herrero Enrique
2009-03-01
Full Text Available Abstract. Background: Protein domains represent the basic units in the evolution of proteins. Domain duplication and shuffling by recombination and fusion, followed by divergence, are the most common mechanisms in this process. Such domain fusion and recombination events are predicted to occur only once for a given multidomain architecture. However, other scenarios may be relevant in the evolution of specific proteins, such as convergent evolution of multidomain architectures. With this in mind, we study glutaredoxin (GRX) domains, because these domains of approximately one hundred amino acids are widespread in archaea, bacteria and eukaryotes and participate in fusion proteins. GRXs are responsible for the reduction of protein disulfides or glutathione-protein mixed disulfides and are involved in cellular redox regulation, although their specific roles and targets are often unclear. Results: In this work we analyze the distribution and evolution of GRX proteins in archaea, bacteria and eukaryotes. We study over one thousand GRX proteins, each containing at least one GRX domain, from hundreds of different organisms and trace the origin and evolution of the GRX domain within the tree of life. Conclusion: Our results suggest that single-domain GRX proteins of the CGFS and CPYC classes have each evolved through duplication and divergence from one initial gene that was present in the last common ancestor of all organisms. Remarkably, we identify a case of convergent evolution in domain architecture that involves the GRX domain. Two independent recombination events of a TRX domain to a GRX domain are likely to have occurred, which is an exception to the dominant mechanism of domain architecture evolution.
CSI Frequency Domain Fingerprint-Based Passive Indoor Human Detection
Directory of Open Access Journals (Sweden)
Chong Han
2018-04-01
Full Text Available Passive indoor personnel detection technology is now a hot topic. Existing methods are greatly influenced by environmental changes, and there are problems with the accuracy and robustness of detection. Passive personnel detection based on Wi-Fi not only solves the above problems, but also has the advantages of being low cost and easy to implement, and can be well applied to elderly care and safety monitoring. In this paper, we propose a passive indoor personnel detection method based on Wi-Fi, which we call FDF-PIHD (Frequency Domain Fingerprint-based Passive Indoor Human Detection). Through this method, fine-grained physical-layer Channel State Information (CSI) can be extracted to generate feature fingerprints, so as to help determine the state of the scene by matching online fingerprints with offline fingerprints. In order to improve accuracy, we combine the detection results of three receiving antennas to obtain the final result. The experimental results show that the detection rates of our proposed scheme all reach above 90%, whether the scene is human-free or contains a stationary or moving human. In addition, it can not only detect whether there is a target indoors, but also determine the current state of the target.
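The fingerprint-matching step described above can be sketched as a nearest-neighbor lookup with a majority vote across antennas. The state names, the 30-subcarrier profile dimension, and the synthetic fingerprints below are all illustrative assumptions, not the paper's actual CSI features.

```python
import numpy as np

rng = np.random.default_rng(3)
# Offline fingerprint database: one mean CSI amplitude profile per scene state
states = {
    "empty":      np.ones(30),
    "stationary": np.ones(30) + 0.5 * np.sin(np.linspace(0, np.pi, 30)),
    "moving":     np.ones(30) + rng.random(30),
}

def classify(sample, db):
    """Match one online CSI sample to the closest offline fingerprint."""
    return min(db, key=lambda k: np.linalg.norm(sample - db[k]))

def classify_3ant(samples, db):
    """Majority vote over the three receiving antennas, as in the abstract."""
    votes = [classify(s, db) for s in samples]
    return max(set(votes), key=votes.count)

# Three noisy online observations of the "stationary" state (one per antenna)
online = [states["stationary"] + 0.05 * rng.standard_normal(30) for _ in range(3)]
print(classify_3ant(online, states))  # prints "stationary"
```

Combining antennas makes the decision robust to a single noisy observation, which is the motivation the abstract gives for fusing the three receivers.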
Stereo matching and view interpolation based on image domain triangulation.
Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce
2013-09-01
This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
Torres, A. F.
2011-12-01
Agricultural lands are sources of food and energy for populations around the globe. These lands are vulnerable to the impacts of climate change, including variations in rainfall regimes, weather patterns, and decreased availability of water for irrigation. In addition, it is not unusual for irrigated agriculture to be forced to divert less water in order to make it available for other uses, e.g. human consumption. As part of the implementation of better policies for water control and management, irrigation companies and water user associations have implemented water conveyance and distribution monitoring systems, along with soil moisture sensor networks, in recent decades. These systems allow them to manage and distribute water among users based on their requirements and water availability, while collecting information about actual soil moisture conditions in representative crop fields. In spite of this, the water deliveries requested by farmers/water users are typically based on total water share, tradition, and past experience of irrigation, which in most cases do not correspond to the actual crop evapotranspiration, already affected by climate change. It is therefore necessary to provide actual information about crop water requirements to water users/managers, so they can better quantify the required vs. available water for irrigation events along the irrigation season. For estimating actual evapotranspiration over a spatial extent, the Surface Energy Balance Algorithm for Land (SEBAL) has demonstrated its effectiveness using satellite or airborne data. Nonetheless, the estimation is restricted to the day when the geospatial information was obtained. Without information on precise future daily crop water demand, there is a continuous challenge for the implementation of better water distribution and management policies in the irrigation system. The purpose of this study is to investigate the plausibility of using
Susceptibility of Redundant Versus Singular Clock Domains Implemented in SRAM-Based FPGA TMR Designs
Berg, Melanie D.; LaBel, Kenneth A.; Pellish, Jonathan
2016-01-01
We present the challenges that arise when using redundant clock domains, due to their clock skew. Radiation data show that a singular clock domain (DTMR) provides an improved TMR methodology for SRAM-based FPGAs over redundant clocks.
Chao, T.T.; Sanzolone, R.F.
1992-01-01
Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
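For the linear case mentioned in this record (where such schemes reduce to the standard Schwarz methods), a minimal damped additive Schwarz iteration for a 1D Poisson model problem might look as follows. The grid size, subdomain split, damping factor, and iteration count are illustrative choices, not the authors':

```python
import numpy as np

# 1D Poisson model problem: A x = b on a 40-point grid
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# two overlapping subdomains (index sets) with an 8-point overlap
subdomains = [np.arange(0, 24), np.arange(16, 40)]

x = np.zeros(n)
for _ in range(2000):
    r = b - A @ x
    dx = np.zeros(n)
    for d in subdomains:
        # exact solve of the local problem restricted to the subdomain
        dx[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])
    x += 0.5 * dx            # damped additive Schwarz update

residual = np.linalg.norm(b - A @ x)
```

A multiplicative variant would apply the subdomain corrections sequentially, updating the residual between them; as the abstract notes, that converges faster but parallelizes less well.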
Facial Image Compression Based on Structured Codebooks in Overcomplete Domain
Directory of Open Access Journals (Sweden)
Vila-Forcén JE
2006-01-01
Full Text Available We advocate a facial image compression technique in the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information, and, contrary to the existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols, where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions where side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.
Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition
Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K.; Liang, Hualou
2008-01-01
The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanism of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF)-based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantage of the sparse NTF-based feature extraction approach lies in its capability to yield components common across the space, time, and frequency domains yet discriminative across different conditions, without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ a support vector machine (SVM) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also have certain discriminability, the gamma band feature carries the most discriminative information for bistable perception, and that imposing sparseness constraints on the nonnegative tensor factorization improves extraction of this feature. PMID:18528515
Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang
2013-01-01
China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model–panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions, while in 2009 the pecking order changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development; in restraining the growth of per-capita carbon emissions, the effect of energy efficiency was much greater than that of energy structure. (3) Based on the decomposition, the panel co-integration test of the factors affecting per-capita carbon emissions showed that Central China had the best energy structure elasticity of regional per-capita carbon emissions and ranked first for energy efficiency elasticity, while Western China ranked first for economic development elasticity. PMID:24353753
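The LMDI technique referenced in this record has a standard additive form built on logarithmic-mean weights, which makes the factor contributions sum exactly to the total change. The sketch below illustrates that form on a toy three-factor identity for per-capita emissions; the factor values are invented, not the study's data:

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b)."""
    return (a - b) / (np.log(a) - np.log(b)) if a != b else a

def lmdi(factors0, factors1):
    """Additive LMDI-I: split the change in C = prod(factors)
    into one contribution per factor (positive scalars only)."""
    C0, C1 = np.prod(factors0), np.prod(factors1)
    L = logmean(C1, C0)
    return [L * np.log(f1 / f0) for f0, f1 in zip(factors0, factors1)]

# toy identity: per-capita C = GDP per capita * energy intensity
# * carbon intensity of energy (illustrative numbers)
f0 = [10.0, 0.8, 2.5]    # base year
f1 = [14.0, 0.7, 2.4]    # end year
effects = lmdi(f0, f1)
total = np.prod(f1) - np.prod(f0)
```

Because the weights are logarithmic means, `sum(effects)` reproduces `total` with no residual term, which is the main reason LMDI is preferred over Laspeyres-style decompositions.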
Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing
Energy Technology Data Exchange (ETDEWEB)
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru; Hong, Fan; Peterka, Tom
2018-01-01
Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, within the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that particles are reassigned to processes as evenly as possible, while the newly assigned particles for a process are always located within its block. Results show good load balance and high efficiency of our method.
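The k-d tree idea can be illustrated, minus the paper's constrained-cut and MPI machinery, by a plain recursive median split that balances particle counts across leaves; the particle counts and coordinates below are synthetic:

```python
import numpy as np

def kdtree_partition(points, n_parts, depth=0):
    """Recursively split particle positions near the median so each
    leaf (one per process) receives an almost equal particle count."""
    if n_parts == 1:
        return [points]
    axis = depth % points.shape[1]          # cycle through x, y, z
    order = np.argsort(points[:, axis])
    half = len(points) * (n_parts // 2) // n_parts
    left, right = points[order[:half]], points[order[half:]]
    return (kdtree_partition(left, n_parts // 2, depth + 1) +
            kdtree_partition(right, n_parts - n_parts // 2, depth + 1))

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))                 # 1000 synthetic particles
parts = kdtree_partition(pts, 8)            # decompose for 8 processes
sizes = [len(p) for p in parts]
```

The paper's constraint (cutting planes restricted to the overlap range of duplicated ghost data) is what makes such re-cuts cheap in a distributed setting; it is omitted here.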
Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide
International Nuclear Information System (INIS)
Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun
2014-01-01
This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired pressure level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.
Czech Academy of Sciences Publication Activity Database
Geleyn, J.- F.; Mašek, Jan; Brožková, Radmila; Kuma, P.; Degrauwe, D.; Hello, G.; Pristov, N.
2017-01-01
Roč. 143, č. 704 (2017), s. 1313-1335 ISSN 0035-9009 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:86652079 Keywords : numerical weather prediction * climate models * clouds * parameterization * atmospheres * formulation * absorption * scattering * accurate * database * longwave radiative transfer * broadband approach * idealized optical paths * net exchanged rate decomposition * bracketing * selective intermittency Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 3.444, year: 2016
Probabilistic inference with noisy-threshold models based on a CP tensor decomposition
Czech Academy of Sciences Publication Activity Database
Vomlel, Jiří; Tichavský, Petr
2014-01-01
Roč. 55, č. 4 (2014), s. 1072-1092 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S; GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : Bayesian networks * Probabilistic inference * Candecomp-Parafac tensor decomposition * Symmetric tensor rank Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.451, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/vomlel-0427059.pdf
Qin, Xiwen; Li, Qiaoling; Dong, Xiaogang; Lv, Siqi
2017-01-01
Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet method is also used in the proposed process, the same as EEMD. The results of the comparison show that the EEMD method is more accurate than the wavelet method.
Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
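The LWLR step at the core of the method this record describes is standard and can be sketched in a few lines; the feature-construction and cross-domain parts are omitted, and the toy data and kernel bandwidth below are illustrative:

```python
import numpy as np

def lwlr_predict(X, y, x_query, tau=0.2):
    """Locally Weighted Linear Regression: fit a weighted least-squares
    line around the query point, using a Gaussian kernel of bandwidth
    tau to down-weight distant training points."""
    Xb = np.c_[np.ones(len(X)), X]                 # add intercept column
    q = np.r_[1.0, np.atleast_1d(x_query)]
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return q @ theta

# toy data: y = x^2 is globally non-linear, but local linear fits track it
X = np.linspace(-2, 2, 41).reshape(-1, 1)
y = (X ** 2).ravel()
pred = lwlr_predict(X, y, np.array([1.0]))
```

Because a new weighted fit is solved per query, LWLR has no fixed parametric form, which is the non-parametric property the abstract credits with avoiding under- and overfitting.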
Directory of Open Access Journals (Sweden)
Xu Yu
2018-01-01
Full Text Available Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
Chiu, Chun-Huo; Chao, Anne
2014-01-01
Hill numbers (or the “effective number of species”) are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify “the effective number of equally abundant and (functionally) equally distinct species” in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species and all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional generalizations of
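A sketch of a functional Hill number of order q ≠ 1, as commonly written in this line of work (treat the exact formula as an assumption, not a quotation from the paper), together with the stated reduction to ordinary Hill numbers when all distances are equal:

```python
import numpy as np

def functional_hill(p, d, q):
    """Functional Hill number of order q != 1 (sketch).
    p: relative abundances; d: pairwise functional distance matrix."""
    p, d = np.asarray(p, float), np.asarray(d, float)
    Q = p @ d @ p                          # Rao's quadratic entropy
    pp = np.outer(p, p)
    return np.sum((d / Q) * pp ** q) ** (1.0 / (2 * (1 - q)))

# sanity check: with a constant distance matrix (including the diagonal,
# so the algebra reduces exactly) and uniform abundances, the functional
# Hill number collapses to the ordinary Hill number, i.e. S species
S = 4
p = np.full(S, 1.0 / S)
d = np.ones((S, S))          # toy constant-distance matrix
fh = functional_hill(p, d, q=2)
```

Per the abstract, the mean functional diversity is then Q times this quantity, and the total functional diversity is Q times its square.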
Directory of Open Access Journals (Sweden)
Chun-Huo Chiu
Full Text Available Hill numbers (or the "effective number of species") are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify "the effective number of equally abundant and (functionally) equally distinct species" in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species and all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional
Magee, Daniel J.; Niemeyer, Kyle E.
2018-03-01
The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9× for a range of problem sizes compared with simple GPU versions, and 7-300× compared with parallel CPU versions. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9× worse than a standard implementation for all problem sizes.
A time domain phase-gradient based ISAR autofocus algorithm
CSIR Research Space (South Africa)
Nel, W
2011-10-01
Full Text Available Results on simulated and measured data show that the algorithm performs well. Unlike many other ISAR autofocus techniques, the algorithm does not make use of several computationally intensive iterations between the data and image domains as part...
Liu, Leili; Li, Jie; Zhang, Lingyao; Tian, Siyu
2018-01-15
MgH2, Mg2NiH4, and Mg2CuH3 were prepared, and their structure and hydrogen storage properties were determined through X-ray photoelectron spectroscopy and thermal analysis. The effects of MgH2, Mg2NiH4, and Mg2CuH3 on the thermal decomposition, burning rate, and explosive heat of an ammonium perchlorate-based composite solid propellant were subsequently studied. Results indicated that MgH2, Mg2NiH4, and Mg2CuH3 can decrease the thermal decomposition peak temperature and increase the total heat released during decomposition; these compounds thus improve the thermal decomposition of the propellant. The burning rate of the propellant increased when Mg-based hydrogen storage materials were used as promoters. The burning rate also increased when MgH2 was used instead of Al in the propellant, but its explosive heat was not enlarged, even though the combustion heat of MgH2 is higher than that of Al. A possible mechanism was thus proposed. Copyright © 2017. Published by Elsevier B.V.
Directory of Open Access Journals (Sweden)
Xiaoxing Zhang
2016-11-01
Full Text Available Detection of the decomposition products of sulfur hexafluoride (SF6) is one of the best ways to diagnose early latent insulation faults in gas-insulated equipment, and sudden accidents can be avoided effectively by finding early latent faults. Recently, functionalized graphene, a kind of gas-sensing material, has been reported to show good application prospects in the gas sensor field. Therefore, calculations were performed to analyze the gas-sensing properties of intrinsic graphene (Int-graphene) and a functionalized graphene-based material, Ag-decorated graphene (Ag-graphene), for the decomposition products of SF6, including SO2F2, SOF2, and SO2, based on density functional theory (DFT). We thoroughly investigated a series of parameters characterizing the gas-sensing properties of the adsorption process for single gas molecules (SO2F2, SOF2, SO2) and double gas molecules (2SO2F2, 2SOF2, 2SO2) on Ag-graphene, including adsorption energy, net charge transfer, electronic density of states, and the highest occupied and lowest unoccupied molecular orbitals. The results showed that the Ag atom significantly enhances the electrochemical reactivity of graphene, reflected in the change of conductivity during the adsorption process. SO2F2 and SO2 gas molecules on Ag-graphene presented chemisorption, and the adsorption strength was SO2F2 > SO2, while SOF2 absorption on Ag-graphene was physical adsorption. Thus, we concluded that Ag-graphene shows good selectivity and high sensitivity to SO2F2. The results can provide a helpful guide for exploring Ag-graphene material in experiments for monitoring the insulation status of SF6-insulated equipment based on detecting the decomposition products of SF6.
α-Decomposition for estimating parameters in common cause failure modeling based on causal inference
International Nuclear Information System (INIS)
Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi
2013-01-01
The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. Firstly, a Hybrid Bayesian Network is adopted to reveal the relationship between potential causes and failures. Secondly, because all potential causes have different occurrence frequencies and different abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability. Global α-factors are expressed through explanatory variables (causes' occurrence frequencies) and parameters (decomposed α-factors). Finally, an example is provided to illustrate the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component, and system levels. It can parameterize the CCF risk significance of possible causes and can update the probability distributions of global α-factors. Besides, it provides a reliable way to evaluate uncertainty sources and reduce uncertainty in probabilistic risk assessment. It is recommended to build databases including CCF parameters and the corresponding occurrence frequencies of causes for each targeted system.
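The global α-factors that this record starts from have a simple point estimate from event counts (the cause-level regression and hierarchical Bayesian update are beyond a short sketch); the counts below are invented for illustration:

```python
def alpha_factors(event_counts):
    """Point estimates of global alpha-factors: alpha_k is the fraction
    of observed failure events that involve exactly k components of the
    common cause component group.
    event_counts[k-1] = number of events in which k components failed."""
    total = sum(event_counts)
    return [n / total for n in event_counts]

# toy data for a group of 3 redundant components:
# 90 single failures, 8 double CCF events, 2 triple CCF events
alphas = alpha_factors([90, 8, 2])
```

The α-decomposition of the paper then explains these global fractions through the occurrence frequencies of underlying causes, rather than treating them as primitive.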
DEFF Research Database (Denmark)
Sosnovsky, Sergey; Dolog, Peter; Henze, Nicola
2007-01-01
The effectiveness of an adaptive educational system in many respects depends on the precision of the modeling assumptions it makes about a student. One of the well-known challenges in student modeling is to adequately assess the initial level of a student's knowledge when s/he starts working...... with a system. Sometimes potentially helpful data are available as part of a user model from a system the student used before. The usage of external user modeling information is troublesome because of differences in system architecture, knowledge representation, modeling constraints, etc. In this paper, we...... argue that the implementation of underlying knowledge models in a sharable format, as domain ontologies - along with the application of automatic ontology mapping techniques for model alignment - can help to overcome the "new-user" problem and will greatly widen opportunities for student model translation...
Directory of Open Access Journals (Sweden)
Hong-Juan Li
2013-04-01
Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed to compare the forecasting performance of different models. The results confirm that the proposed model can simultaneously provide forecasts with good accuracy and interpretability.
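A hedged sketch of the hybrid idea in this record: decompose the load series into components, fit one SVR per component on lagged values, and recombine the component forecasts. To stay self-contained, a simple moving-average trend/residual split stands in for EMD here (EMD itself would come from a dedicated package), scikit-learn's `SVR` is assumed available, and the load series is synthetic:

```python
import numpy as np
from sklearn.svm import SVR

def lagged(series, n_lags=4):
    """Build (X, y) pairs: n_lags past values -> next value."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(1)
t = np.arange(300)
load = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, 300)

# stand-in for EMD: split into a smooth component and a residual
kernel = np.ones(24) / 24
smooth = np.convolve(load, kernel, mode="same")
residual = load - smooth

preds = []
for comp in (smooth, residual):          # one SVR per component
    X, y = lagged(comp)
    model = SVR(C=10.0, epsilon=0.1).fit(X[:-24], y[:-24])
    preds.append(model.predict(X[-24:]))  # hold out the last day
forecast = preds[0] + preds[1]            # recombine the components
```

Forecasting each component separately is what gives the hybrid its interpretability: the smooth part carries the daily cycle, the residual the short-term fluctuations.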
Analytical singular-value decomposition of three-dimensional, proximity-based SPECT systems
Energy Technology Data Exchange (ETDEWEB)
Barrett, Harrison H. [Arizona Univ., Tucson, AZ (United States). College of Optical Sciences; Arizona Univ., Tucson, AZ (United States). Center for Gamma-Ray Imaging; Holen, Roel van [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Arizona Univ., Tucson, AZ (United States). Center for Gamma-Ray Imaging
2011-07-01
An operator formalism is introduced for the description of SPECT imaging systems that use solid-angle effects rather than pinholes or collimators, as in recent work by Mitchell and Cherry. The object is treated as a 3D function, without discretization, and the data are 2D functions on the detectors. An analytic singular-value decomposition of the resulting integral operator is performed and used to compute the measurement and null components of the objects. The results of the theory are confirmed with a Landweber algorithm that does not require a system matrix. (orig.)
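The matrix-free flavor of the Landweber algorithm mentioned at the end of this record (no explicit system matrix, only forward and adjoint operators) can be sketched as follows; the toy operator, step size, and iteration count are illustrative:

```python
import numpy as np

def landweber(forward, adjoint, b, x0, lam, n_iter):
    """Matrix-free Landweber iteration: x <- x + lam * A^T (b - A x).
    forward/adjoint are callables, so no system matrix is ever stored."""
    x = x0.copy()
    for _ in range(n_iter):
        x += lam * adjoint(b - forward(x))
    return x

# toy overdetermined problem where A is only available through functions
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 1.0, 2.0])
x = landweber(lambda v: A @ v, lambda r: A.T @ r,
              b, np.zeros(2), lam=0.15, n_iter=2000)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With a step size below 2 over the largest squared singular value, the iterate converges to the least-squares solution, which is why the method pairs naturally with the singular-value analysis described above.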
Progressivity of personal income tax in Croatia: decomposition of tax base and rate effects
Directory of Open Access Journals (Sweden)
Ivica Urban
2006-09-01
Full Text Available This paper presents progressivity breakdowns for the Croatian personal income tax (henceforth PIT) in 1997 and 2004. The decompositions reveal how the elements of the system – tax schedule, allowances, deductions and credits – contribute to the achievement of progressivity, over the quantiles of pre-tax income distribution. Through the use of ‘single parameter’ Gini indices, the social decision maker’s (henceforth SDM) relatively more or less favorable inclination toward taxpayers in the lower tails of pre-tax income distribution is accounted for. Simulations are undertaken to show how the introduction of a flat-rate system would affect progressivity.
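One standard summary of the progressivity such decompositions break down is the Kakwani index: the concentration coefficient of tax liabilities minus the Gini coefficient of pre-tax income. The sketch below uses invented incomes and tax schedules and is not the paper's own decomposition:

```python
import numpy as np

def concentration(values, ranker):
    """Concentration coefficient of `values` when units are ordered by
    `ranker` (equals the Gini coefficient when ranker is values itself)."""
    v = np.asarray(values, float)[np.argsort(ranker)]
    n = len(v)
    ranks = np.arange(1, n + 1)
    return 2 * np.sum(ranks * v) / (n * np.sum(v)) - (n + 1) / n

def kakwani(pre_tax, tax):
    """Kakwani progressivity index; > 0 means the tax is progressive."""
    return concentration(tax, pre_tax) - concentration(pre_tax, pre_tax)

income = np.array([10.0, 20.0, 30.0, 40.0])
flat = 0.2 * income                       # proportional tax
prog = np.array([0.0, 2.0, 6.0, 12.0])    # rising average tax rate
```

A proportional tax leaves the ordering-weighted tax shares identical to income shares, so its Kakwani index is zero, the natural benchmark against which the flat-rate simulations in the paper can be read.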
Directory of Open Access Journals (Sweden)
Xiwen Qin
2017-01-01
Full Text Available Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet method is also used in the proposed process, the same as EEMD. The results of the comparison show that the EEMD method is more accurate than the wavelet method.
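The energy-entropy feature named in this record has a compact definition: the Shannon entropy of each IMF's share of the total signal energy. The sketch below computes it from pre-computed IMFs; the EEMD step itself (available in dedicated packages) is replaced here by synthetic modes:

```python
import numpy as np

def energy_entropy(imfs):
    """Energy entropy of a set of IMFs (one per row): Shannon entropy
    of the per-IMF share of total signal energy."""
    E = np.sum(np.asarray(imfs) ** 2, axis=1)   # energy of each IMF
    p = E / E.sum()                             # energy distribution
    return -np.sum(p * np.log(p))

t = np.linspace(0, 1, 400)
# stand-in for EEMD output: two oscillatory modes at different scales
imfs_skewed = np.vstack([np.sin(2 * np.pi * 5 * t),
                         0.1 * np.sin(2 * np.pi * 50 * t)])
imfs_even = np.vstack([np.sin(2 * np.pi * 5 * t),
                       np.sin(2 * np.pi * 50 * t)])
```

Energy concentrated in few modes gives low entropy, while energy spread evenly gives entropy near log of the mode count, so the scalar tracks how a fault redistributes energy across scales before the RF classifier sees it.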
Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy
Energy Technology Data Exchange (ETDEWEB)
Hünemohr, Nora, E-mail: n.huenemohr@dkfz.de; Greilich, Steffen [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg (Germany); Paganetti, Harald; Seco, Joao [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Jäkel, Oliver [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany and Department of Radiation Oncology and Radiation Therapy, University Hospital of Heidelberg, 69120 Heidelberg (Germany)
2014-06-15
Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱ_e and effective atomic number Z_eff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in ϱ_e that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from ϱ_e and Z_eff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences to the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic protons and carbon ions in 12 tissues which showed the largest differences between single energy CT (SECT) and DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations from ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4)% (SECT) to (0.4±0.3)% with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen, where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would
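The calibration step described in Methods, one linear fit of mass density against relative electron density over the tabulated tissues, can be sketched as follows. The (ϱ_e, mass density) pairs here are invented illustrative values; the real fit runs over the 71 tabulated compositions with lung excluded:

```python
import numpy as np

# Illustrative (rho_e, mass density) pairs for a few tissue-like points.
rho_e = np.array([0.95, 1.00, 1.05, 1.10, 1.30, 1.70])
rho = np.array([0.95, 1.00, 1.06, 1.12, 1.35, 1.85])

slope, intercept = np.polyfit(rho_e, rho, 1)   # one linear fit, whole range

def mass_density(rho_e_measured):
    """Predict mass density (g/cm^3) from the DECT relative electron density."""
    return slope * rho_e_measured + intercept
```

A single global fit trades a small per-tissue bias for robustness against CT-number noise, which matters because SPR and range predictions are sensitive to the assigned density.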
A dimension decomposition approach based on iterative observer design for an elliptic Cauchy problem
Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem
2015-01-01
A state-observer-inspired iterative algorithm is presented to solve a boundary estimation problem for the Laplace equation, using one of the space variables as a time-like variable. A three-dimensional domain with two congruent parallel surfaces
Identifying APT Malware Domain Based on Mobile DNS Logging
Directory of Open Access Journals (Sweden)
Weina Niu
2017-01-01
Full Text Available Advanced Persistent Threat (APT) is a serious threat against sensitive information. Current detection approaches are time-consuming since they detect APT attacks by in-depth analysis of massive amounts of data after data breaches. Specifically, APT attackers make use of DNS to locate their command and control (C&C) servers and victims’ machines. In this paper, we propose an efficient approach to detect APT malware C&C domains with high accuracy by analyzing DNS logs. We first extract 15 features from DNS logs of mobile devices. According to the Alexa ranking and VirusTotal’s judgement result, we give each domain a score. Then, we select the most normal domains by the score metric. Finally, we utilize our anomaly detection algorithm, called Global Abnormal Forest (GAF), to identify malware C&C domains. We conduct a performance analysis to demonstrate that our approach is more efficient than other existing works in terms of calculation efficiency and recognition accuracy. Compared with Local Outlier Factor (LOF), k-Nearest Neighbor (KNN), and Isolation Forest (iForest), our approach obtains more than 99% F-M and R for the detection of C&C domains. Our approach not only reduces the volume of data that needs to be recorded and analyzed but is also applicable to unsupervised learning.
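GAF itself is the paper's own algorithm and is not reproduced here, but the iForest baseline it is compared against can be sketched with scikit-learn on invented per-domain feature vectors (e.g., query counts, TTL statistics, name entropy):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical per-domain features: 200 benign domains clustered near the
# origin plus 5 C&C-like outliers with very different behavior.
benign = rng.normal(0.0, 1.0, size=(200, 5))
cc_like = rng.normal(6.0, 1.0, size=(5, 5))
X = np.vstack([benign, cc_like])

detector = IsolationForest(contamination=0.03, random_state=0)
labels = detector.fit_predict(X)          # -1 = anomaly, 1 = normal
flagged = np.where(labels == -1)[0]       # indices of suspected C&C domains
```

The detector is unsupervised, matching the abstract's point that no labeled C&C traffic is required; the contamination rate is the analyst's prior on how rare malicious domains are.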
Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi
2018-03-01
Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in the geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influences of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influences of geomagnetic disturbance on ionospheric behavior, we select the geomagnetic quiet GIMs to construct the ionospheric tensor. The results indicate that the geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
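The rank-1 decomposition used above to obtain U^{(1)}, U^{(2)}, and U^{(3)} can be sketched as a higher-order power iteration in numpy (dedicated libraries such as TensorLy provide the same via CP decomposition); the toy tensor here is exactly rank-1, so the factors are recovered to numerical precision:

```python
import numpy as np

def rank1_decomposition(T, iters=50):
    """Best rank-1 approximation of a 3-way tensor via alternating
    power iterations (higher-order power method)."""
    u = np.ones(T.shape[0])
    v = np.ones(T.shape[1])
    w = np.ones(T.shape[2])
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)   # scale of the component
    return lam, u, v, w

# Toy rank-1 "time x longitude x latitude" tensor: recovery is exact.
a, b, c = np.arange(1, 4.0), np.arange(1, 5.0), np.arange(1, 6.0)
T = np.einsum('i,j,k->ijk', a, b, c)
lam, u, v, w = rank1_decomposition(T)
```

The three unit vectors play the roles of the time, longitude, and latitude modes: each is the dominant profile of the data along its axis, which is why rank-1 factors capture the large-scale, Sun-driven variation.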
Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm
Iswanto; Wahyunggoro, Oyas; Cahyadi, Adha Imam
2017-04-01
The paper presents a path-planning algorithm for multiple quadrotors so that they move toward the goal quickly while avoiding obstacles in a cluttered area. Path planning poses several problems, including how to reach the goal position quickly and how to avoid both static and dynamic obstacles. To address them, the paper combines a fuzzy logic algorithm with a fuzzy cell decomposition algorithm. Fuzzy logic is an artificial intelligence technique applicable to robot path planning that can detect static and dynamic obstacles. Cell decomposition is a graph-theoretic algorithm used to build a map of robot paths. Using the two algorithms, a robot can reach the goal position and avoid obstacles, but it takes considerable time because they cannot find the shortest path. This paper therefore describes a modification of the algorithms that adds a potential field algorithm, used to assign weight values on the map for each quadrotor under decentralized control, so that each quadrotor moves to the goal position quickly along the shortest path. Simulations show that multiple quadrotors can avoid various obstacles and find the shortest path using the proposed algorithms.
Directory of Open Access Journals (Sweden)
Wang Jiajun
2010-05-01
Full Text Available Abstract Background The inverse problem of fluorescent molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree-structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to alleviate the ill-posedness of the inverse problem. Methods The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function, to tackle the ill-posed nature of the inverse problem. Results Simulation results demonstrate that the tree-structured Schur complement decomposition strategy clearly outperforms the previous methods, such as the conventional Conjugate Gradient (CG) and the Schur CG methods, in both reconstruction accuracy and speed. As compared with the Tikhonov regularization method, the adaptive regularization scheme can significantly alleviate the ill-posedness of the inverse problem. Conclusions The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process.
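The core identity behind any Schur complement decomposition, eliminating one block of unknowns so the global system splits into smaller solves, can be sketched for a single 2x2 block split (the paper applies this recursively along a tree, which is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4
# A well-conditioned 2x2-block system [[A, B], [C, D]] [x1; x2] = [f1; f2].
A = rng.normal(size=(n, n)) + n * np.eye(n)
D = rng.normal(size=(m, m)) + m * np.eye(m)
B = rng.normal(size=(n, m))
C = rng.normal(size=(m, n))
f1 = rng.normal(size=n)
f2 = rng.normal(size=m)

Ainv_B = np.linalg.solve(A, B)              # A^{-1} B
Ainv_f1 = np.linalg.solve(A, f1)            # A^{-1} f1
S = D - C @ Ainv_B                          # Schur complement of A
x2 = np.linalg.solve(S, f2 - C @ Ainv_f1)   # small "coupling" solve
x1 = Ainv_f1 - Ainv_B @ x2                  # back-substitution in the A block

residual = np.linalg.norm(
    np.block([[A, B], [C, D]]) @ np.concatenate([x1, x2])
    - np.concatenate([f1, f2]))
```

Only systems of size n and m are ever factored instead of one of size n+m; applied level by level, this is what yields the speedup reported over the plain CG solvers.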
Automated polyp measurement based on colon structure decomposition for CT colonography
Wang, Huafeng; Li, Lihong C.; Han, Hao; Peng, Hao; Song, Bowen; Wei, Xinzhou; Liang, Zhengrong
2014-03-01
Accurate assessment of colorectal polyp size is of great significance for early diagnosis and management of colorectal cancers. Due to the complexity of colon structure, polyps with diverse geometric characteristics grow from different landform surfaces. In this paper, we present a new colon decomposition approach for polyp measurement. We first apply an efficient maximum a posteriori expectation-maximization (MAP-EM) partial volume segmentation algorithm to achieve an effective electronic cleansing of the colon. The global colon structure is then decomposed into different kinds of morphological shapes, e.g. haustral folds or haustral wall. Meanwhile, the polyp location is identified by an automatic computer-aided detection algorithm. By integrating the colon structure decomposition with the computer-aided detection system, a patch volume of colon polyps is extracted. Thus, polyp size assessment can be achieved by finding abnormal protrusions on a relatively uniform morphological surface of the decomposed colon landform. We evaluated our method via physical phantom and clinical datasets. Experimental results demonstrate the feasibility of our method in consistently quantifying polyp volume and, therefore, facilitating polyp characterization for clinical management.
Directory of Open Access Journals (Sweden)
Aliakbar Dehno Khalaji
2015-06-01
Full Text Available In this paper, plate-like NiO nanoparticles were prepared by one-pot solid-state thermal decomposition of a nickel(II) Schiff base complex as a new precursor. First, the nickel(II) Schiff base precursor was prepared by solid-state grinding of nickel(II) nitrate hexahydrate, Ni(NO3)2∙6H2O, and the Schiff base ligand N,N′-bis(salicylidene)benzene-1,4-diamine for 30 min without using any solvent, catalyst, template or surfactant. It was characterized by Fourier transform infrared spectroscopy (FT-IR) and elemental analysis (CHN). The resultant solid was subsequently annealed in an electrical furnace at 450 °C for 3 h in air. NiO nanoparticles were produced and characterized by X-ray powder diffraction (XRD) over the 2θ range 0-140°, FT-IR spectroscopy, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The XRD and FT-IR results showed that the product is pure and has good crystallinity with a cubic structure, since no characteristic peaks of impurities were observed, while the SEM and TEM results showed that the product consists of tiny, aggregated plate-like particles with a narrow size distribution and sizes between 10 and 40 nm. The results show that the solid-state thermal decomposition method is simple, environmentally friendly, safe and suitable for the preparation of NiO nanoparticles. This method can also be used to synthesize nanoparticles of other metal oxides.
Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi
2014-01-01
The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat areas. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angle, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.
Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising
Directory of Open Access Journals (Sweden)
Yan-Fang Sang
2010-06-01
Full Text Available In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE), and the results are compared and discussed. Then, on the basis of these analytic results, a method for choosing the decomposition level (DL) in wavelet threshold de-noising (WTD) is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white noise testing, which depends on “autocorrelations”, the proposed method uses energy distributions to distinguish real signals from noise in noisy series; therefore, the chosen DL is reliable, and the WTD results of time series can be improved.
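The WEE used above measures how a series' energy spreads across decomposition levels. A sketch with a hand-rolled Haar DWT (PyWavelets' wavedec would normally supply the coefficients): white noise spreads energy over many levels and yields high entropy, while a signal concentrated at a single scale yields near-zero entropy:

```python
import numpy as np

def haar_detail_energies(x, max_level=6):
    """Energy of the Haar detail coefficients at each decomposition level."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(max_level):
        pairs = a[: len(a) // 2 * 2].reshape(-1, 2)
        d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # detail coefficients
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)  # approximation
        energies.append(np.sum(d ** 2))
    return np.array(energies)

def wavelet_energy_entropy(x, max_level=6):
    e = haar_detail_energies(x, max_level)
    p = e / e.sum()                       # energy distribution over levels
    return -np.sum(p * np.log(p + 1e-12))

rng = np.random.default_rng(0)
noise = rng.normal(size=1024)             # energy spread across levels
alt = np.tile([1.0, -1.0], 512)           # all energy at level 1
```

This contrast is exactly what the proposed DL-selection method exploits: noise-like and signal-like series have distinguishable level-wise energy distributions.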
Multidisciplinary Product Decomposition and Analysis Based on Design Structure Matrix Modeling
DEFF Research Database (Denmark)
Habib, Tufail
2014-01-01
Design structure matrix (DSM) modeling in complex system design supports to define physical and logical configuration of subsystems, components, and their relationships. This modeling includes product decomposition, identification of interfaces, and structure analysis to increase the architectural...... interactions across subsystems and components. For this purpose, Cambridge advanced modeler (CAM) software tool is used to develop the system matrix. The analysis of the product (printer) architecture includes clustering, partitioning as well as structure analysis of the system. The DSM analysis is helpful...... understanding of the system. Since product architecture has broad implications in relation to product life cycle issues, in this paper, mechatronic product is decomposed into subsystems and components, and then, DSM model is developed to examine the extent of modularity in the system and to manage multiple...
Tamellini, L.; Le Maître, O.; Nouy, A.
2014-01-01
In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis rather than M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.
Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias
2010-03-01
Thermal ablation induced by high intensity focused ultrasound has produced promising clinical results to treat hepatocarcinoma and other liver tumors. However skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply an excitation weight vector to the transducers array which is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold as demonstrated by the measured specific absorption rates.
Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N
2015-03-01
A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse-grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of ultrasonic signals and the application of a signal minimisation algorithm to selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and signal data length on the EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. This methodology is successfully employed for detection of defects in 50 mm thick coarse-grained austenitic stainless steel specimens. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB additional enhancement in SNR is achieved compared to the sum of selected IMFs approach. The application of the minimisation algorithm to the EEMD-processed signal in the proposed methodology proves effective for adaptive signal reconstruction with improved signal-to-noise ratio. The methodology was further employed for successful imaging of defects in a B-scan. Copyright © 2014. Published by Elsevier B.V.
Surface EMG decomposition based on K-means clustering and convolution kernel compensation.
Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun
2015-03-01
A new approach has been developed by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method for multichannel surface electromyogram (EMG) decomposition. The KMC method was first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, was conducted to update the estimated IPT. The performance of the proposed K-means clustering-Modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% with a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from the 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscles when a contraction force was held at 8 N by using the KmCKC approach. A "two-source" test was further conducted with 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition.  Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.
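The first (KMC) step can be sketched as follows: observation vectors are clustered so that the time instants in each cluster form an initial innervation pulse train. The data are an invented 4-channel toy mixture, and the CKC refinement stage is not shown:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical 4-channel surface EMG: each motor unit imprints its own
# mixing vector on the channels at its firing instants.
v1 = np.array([1.0, 0.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 1.0])
fires1 = np.arange(0, 200, 20)
fires2 = np.arange(10, 200, 20)
X = rng.normal(0.0, 0.05, size=(200, 4))   # baseline noise
X[fires1] += v1
X[fires2] += v2

# Cluster the observation vectors: the time instants falling in one
# cluster give the initial IPT estimate for one motor unit.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
ipt1 = km.labels_[fires1]
ipt2 = km.labels_[fires2]
```

Each firing pattern lands in its own cluster (the third cluster absorbs the baseline), which is exactly the rough IPT initialization the modified CKC iterations then refine.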
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
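The "decomposition and ensemble" principle can be sketched as follows, with a known synthetic split standing in for the CEEMD-derived IMFs and a plain SVR in place of the GWO-optimized SVR:

```python
import numpy as np
from sklearn.svm import SVR

t = np.arange(300, dtype=float)
# Known synthetic split standing in for CEEMD's IMFs: one slow and one
# fast oscillation whose sum is the observed series.
components = [np.sin(t / 20.0), 0.5 * np.sin(t)]
series = sum(components)

def lag_features(x, p=6):
    """Row i holds x[i..i+p-1] and predicts x[i+p] (one-step-ahead lags)."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

ensemble_pred = 0.0
for comp in components:                    # predict each component alone
    X, y = lag_features(comp)
    model = SVR(C=10.0, epsilon=0.01).fit(X[:-20], y[:-20])
    ensemble_pred = ensemble_pred + model.predict(X[-20:])

_, target = lag_features(series)
mae = np.mean(np.abs(ensemble_pred - target[-20:]))
```

Each component is far simpler than the raw series, so the per-component regressors are easy to fit; summing their forecasts reconstructs the series, which is the rationale for the decomposition-ensemble design.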
International Nuclear Information System (INIS)
Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.
2013-01-01
Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
Symmetric Tensor Decomposition
DEFF Research Database (Denmark)
Brachat, Jerome; Comon, Pierre; Mourrain, Bernard
2010-01-01
We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....
International Nuclear Information System (INIS)
Cao, Guangxi; Xu, Wei
2016-01-01
Based on daily price data of carbon emission rights in the futures markets of Certified Emission Reductions (CER) and European Union Allowances (EUA), we analyze the multiscale characteristics of the markets by using empirical mode decomposition (EMD) and EMD-based multifractal detrended fluctuation analysis (MFDFA). The complexity of the daily returns of the CER and EUA futures markets changes with multiple time scales and multilayered features. The two markets also exhibit clear multifractal characteristics and long-range correlation. We employ shuffle and surrogate approaches to analyze the origins of the multifractality. The long-range correlations and fat-tail distributions significantly contribute to the multifractality. Furthermore, we analyze the influence of high returns on the multifractality by using a threshold method. The multifractality of the two futures markets is related to the presence of high values of returns in the price series.
Sun, Qi; Fu, Shujun
2017-09-20
Fringe orientation is an important feature of fringe patterns and has a wide range of applications such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns that demonstrate that the RSVD produces the best estimation results at a cost of relatively less time.
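The gradient-field idea underlying these estimators, including the proposed RSVD, can be sketched in its unregularized form: stack the patch gradients into a matrix and take the right singular vector with the smallest singular value as the fringe direction, since it is orthogonal to the dominant gradient:

```python
import numpy as np

def fringe_orientation(patch):
    """Fringe orientation from the SVD of a patch's gradient field.
    The least-dominant right singular vector is orthogonal to the
    dominant gradient direction, i.e. it points along the fringes."""
    gy, gx = np.gradient(patch.astype(float))
    G = np.column_stack([gx.ravel(), gy.ravel()])
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    vx, vy = Vt[-1]                        # least-dominant direction
    return np.arctan2(vy, vx) % np.pi      # orientation in [0, pi)

# Vertical fringes (intensity varies along x only): orientation ~ pi/2.
x = np.arange(64, dtype=float)
vertical = np.cos(0.5 * x)[None, :] * np.ones((64, 1))
theta = fringe_orientation(vertical)
theta_h = fringe_orientation(vertical.T)   # horizontal fringes: ~ 0 mod pi
```

The RSVD of the abstract adds smoothing regularization on top of this bare SVD step; that refinement is not reproduced here.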
Wang, Lynn T.-N.; Madhavan, Sriram
2018-03-01
A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. Then, DFM scores are computed for the possible layout fixes, and the fix with the best score is applied. The proposed methodology was applied to two 20nm products with a chip area of 11 mm2 on the metal 2 layer. All the hotspots were resolved. The number of DFM spacing violations decreased by 7-15%.
Smart-phone based electrocardiogram wavelet decomposition and neural network classification
International Nuclear Information System (INIS)
Jannah, N; Hadjiloucas, S; Hwang, F; Galvão, R K H
2013-01-01
This paper discusses ECG classification after parametrizing the ECG waveforms in the wavelet domain. The aim of the work is to develop an accurate classification algorithm that can be used to diagnose cardiac beat abnormalities detected using a mobile platform such as a smart-phone. Continuous-time recurrent neural network classifiers are considered for this task. Records from the European ST-T Database are decomposed in the wavelet domain using discrete wavelet transform (DWT) filter banks, and the resulting DWT coefficients are filtered and used as inputs for training the neural network classifier. Advantages of the proposed methodology are the reduced memory requirement for the signals, which is of relevance to mobile applications, as well as an improvement in the generalization ability of the neural network due to the more parsimonious representation of the signal at its inputs.
International Nuclear Information System (INIS)
Tsuji, Masashi; Shimazu, Yoichiro; Michishita, Hiroshi
2005-01-01
A new method for evaluating the decay ratios in a boiling water reactor (BWR) using the singular value decomposition (SVD) method has been proposed. In this method, a signal component closely related to BWR stability can be extracted from the independent components of the neutron noise signal decomposed by the SVD method. However, real-time stability monitoring by the SVD method requires an efficient procedure for screening such components. For efficient screening, an artificial neural network (ANN) with three layers was adopted. The trained ANN was applied to decomposed components of local power range monitor (LPRM) signals that were measured in stability experiments conducted in the Ringhals-1 BWR. In each LPRM signal, multiple candidates were screened from the decomposed components. However, decay ratios could be estimated by introducing appropriate criteria for selecting the most suitable component among the candidates. The estimated decay ratios are almost identical to those evaluated by visual screening in a previous study. The selected components commonly have the largest singular value, the largest decay ratio and the least squared fitting error among the candidates. By virtue of the excellent screening performance of the trained ANN, real-time stability monitoring by the SVD method can be applied in practice. (author)
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2017-04-11
First-principles quantum mechanical calculations with methods such as density functional theory (DFT) allow the accurate calculation of interaction energies between molecules. These interaction energies can be dissected into chemically relevant components such as electrostatics, polarization, and charge transfer using energy decomposition analysis (EDA) approaches. Typically EDA has been used to study interactions between small molecules; however, it has great potential to be applied to large biomolecular assemblies such as protein-protein and protein-ligand interactions. We present an application of EDA calculations to the study of ligands that bind to the thrombin protein, using the ONETEP program for linear-scaling DFT calculations. Our approach goes beyond simply providing the components of the interaction energy; we are also able to provide visual representations of the changes in density that happen as a result of polarization and charge transfer, thus pinpointing the functional groups between the ligand and protein that participate in each kind of interaction. We also demonstrate with this approach that we can focus on studying parts (fragments) of ligands. The method is relatively insensitive to the protocol that is used to prepare the structures, and the results obtained are therefore robust. This is an application to a real protein drug target of a whole new capability where accurate DFT calculations can produce both energetic and visual descriptors of interactions. These descriptors can be used to provide insights for tailoring interactions, as needed for example in drug design.
Lahmiri, Salim; Shmuel, Amir
2017-11-01
Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.
A medium term bulk production cost model based on decomposition techniques
Energy Technology Data Exchange (ETDEWEB)
Ramos, A.; Munoz, L. [Univ. Pontificia Comillas, Madrid (Spain). Inst. de Investigacion Tecnologica; Martinez-Corcoles, F.; Martin-Corrochano, V. [IBERDROLA, Madrid (Spain)
1995-11-01
This model provides the minimum variable cost subject to operating constraints (generation, transmission and fuel constraints). Generation constraints include the power reserve margin with respect to the system peak load, Kirchhoff's first law at each node, hydro energy scheduling, maintenance scheduling, and generation limits. Transmission constraints cover Kirchhoff's second law and transmission limits. The generation and transmission economic dispatch is approximated by the linearized (also called DC) load flow. Network losses are included as a nonlinear approximation. Fuel constraints include minimum consumption quotas and fuel scheduling for domestic coal thermal plants. This production costing problem is formulated as a large-scale nonlinear optimization problem solved by the generalized Benders decomposition method. The master problem determines the inter-period decisions, i.e., maintenance, fuel and hydro scheduling, and each subproblem solves the intra-period decisions, i.e., generation and transmission economic dispatch for one period. The model has been implemented in GAMS, a mathematical programming language. An application to the large-scale Spanish electric power system is presented. 11 refs
Directory of Open Access Journals (Sweden)
Xuejun Chen
2014-01-01
Full Text Available As one of the most promising renewable resources in electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind fluctuates with strong variation, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for the regulatory actions and short-term load dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H weighted average smoothing method, ensemble empirical mode decomposition (EEMD algorithm, and nonlinear autoregressive (NAR neural networks. The chosen datasets are ten-minute wind speed observations, including twelve samples, and our simulation indicates that the proposed methods perform much better than the traditional ones when addressing short-term wind speed forecasting problems.
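The five-three-Hanning (53H) smoother mentioned above is commonly defined as a running median of five, then a running median of three, then a Hanning weighted average (weights 1/4, 1/2, 1/4); a minimal sketch under that assumption (toy data, not the wind-farm series):

```python
def running_median(x, w):
    """Centered running median of window w (window shrinks at the ends)."""
    h = w // 2
    out = []
    for i in range(len(x)):
        window = sorted(x[max(0, i - h):i + h + 1])
        out.append(window[len(window) // 2])
    return out

def hanning(x):
    """Hanning weighted average with weights 1/4, 1/2, 1/4 (endpoints kept)."""
    out = list(x)
    for i in range(1, len(x) - 1):
        out[i] = 0.25 * x[i - 1] + 0.5 * x[i] + 0.25 * x[i + 1]
    return out

def smooth_53h(x):
    # median of 5, then median of 3, then Hanning smoothing
    return hanning(running_median(running_median(x, 5), 3))
```

A single spike in an otherwise flat series is removed entirely by the two median passes before the Hanning weights are applied, which is why such smoothers are used to pre-clean wind-speed observations.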
Zhang, Xiao-bo; Tan, Jun; Song, Peng; Li, Jin-shan; Xia, Dong-ming; Liu, Zhao-lun
2017-01-01
The gradient preconditioning approach based on seismic wave energy can effectively avoid the huge storage consumption in the gradient preconditioning algorithms based on Hessian matrices in time-domain full waveform inversion (FWI), but the accuracy
Managing Time-Based Conflict Across Life Domains In Nigeria: A ...
African Journals Online (AJOL)
Managing Time-Based Conflict Across Life Domains In Nigeria: A Decision Making Perspective. ... which employees in a developing country attempt to resolve time-based conflict between work, family and other activities. A decision making ...
International Nuclear Information System (INIS)
Lenain, Roland
2015-01-01
This thesis is devoted to the implementation of a domain decomposition method applied to the neutron transport equation. The objective of this work is to access high-fidelity deterministic solutions to properly handle heterogeneities located in nuclear reactor cores, for problem sizes ranging from color-sets of assemblies to large reactor core configurations in 2D and 3D. The innovative algorithm developed during the thesis intends to optimize the use of parallelism and memory. The approach also aims to minimize the influence of the parallel implementation on performance. These goals match the needs of the APOLLO3 project, developed at CEA and supported by EDF and AREVA, which must be a portable code (no optimization for a specific architecture) in order to achieve best-estimate modeling with resources ranging from personal computers to the compute clusters available for engineering analyses. The proposed algorithm is a Parallel Multigroup-Block Jacobi one. Each sub-domain is considered as a multi-group fixed-source problem with volume sources (fission) and surface sources (interface flux between the sub-domains). The multi-group problem is solved in each sub-domain and a single communication of the interface flux is required at each power iteration. The spectral radius of the resolution algorithm is made similar to that of a classical resolution algorithm with a nonlinear diffusion acceleration method: the well-known Coarse Mesh Finite Difference. In this way, ideal scalability is achievable when the calculation is parallelized. The memory organization, taking advantage of shared-memory parallelism, optimizes resources by avoiding redundant copies of the data shared between the sub-domains. Distributed-memory architectures are made available by a hybrid parallel method that combines both paradigms of shared-memory parallelism and distributed-memory parallelism. For large problems, these architectures provide a greater number of processors and the amount of
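The sub-domain iteration described above can be caricatured on a tiny linear system: each "sub-domain" solves its own block exactly, and only the coupling (interface) terms, computed from the other block's previous iterate, are exchanged per sweep. A minimal block-Jacobi sketch (toy 4×4 system, not a transport solver):

```python
# Two 2x2 "sub-domains" of a diagonally dominant 4x4 system; each sweep
# exchanges only the off-block (interface) contributions, mirroring the
# single interface-flux communication per power iteration.
A = [[4.0, 1.0, 0.5, 0.0],
     [1.0, 4.0, 0.0, 0.5],
     [0.5, 0.0, 4.0, 1.0],
     [0.0, 0.5, 1.0, 4.0]]
b = [1.0, 2.0, 3.0, 4.0]

def solve2(m, r):  # exact direct solve of one 2x2 block
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(m[1][1] * r[0] - m[0][1] * r[1]) / det,
            (m[0][0] * r[1] - m[1][0] * r[0]) / det]

x = [0.0, 0.0, 0.0, 0.0]
for _ in range(50):
    # each block sees the other only through the coupling terms
    r1 = [b[0] - A[0][2] * x[2] - A[0][3] * x[3],
          b[1] - A[1][2] * x[2] - A[1][3] * x[3]]
    r2 = [b[2] - A[2][0] * x[0] - A[2][1] * x[1],
          b[3] - A[3][0] * x[0] - A[3][1] * x[1]]
    x1 = solve2([[A[0][0], A[0][1]], [A[1][0], A[1][1]]], r1)
    x2 = solve2([[A[2][2], A[2][3]], [A[3][2], A[3][3]]], r2)
    x = x1 + x2

residual = max(abs(sum(A[i][j] * x[j] for j in range(4)) - b[i])
               for i in range(4))
```

Because each block solve is exact and the coupling is weak, the iteration converges geometrically; acceleration schemes such as Coarse Mesh Finite Difference serve to keep this convergence rate bounded as the number of sub-domains grows.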
Kilian Stoffel; Paul Cotofrei; Dong Han
2012-01-01
As an interdisciplinary domain requiring advanced and innovative methodologies, computational forensics is characterized by data that are simultaneously large-scale, uncertain, multidimensional, and approximate. Forensic domain experts, trained to discover hidden patterns in crime data, are limited in their analysis without the assistance of a computational intelligence approach. In this paper, a methodology and an automatic procedure, based on fuzzy set theory and designed to infer precis...
Hofmann, Philipp; Sedlmair, Martin; Krauss, Bernhard; Wichmann, Julian L.; Bauer, Ralf W.; Flohr, Thomas G.; Mahnken, Andreas H.
2016-03-01
Osteoporosis is a degenerative bone disease usually diagnosed at the manifestation of fragility fractures, which severely endanger the health of the elderly in particular. To ensure timely therapeutic countermeasures, noninvasive and widely applicable diagnostic methods are required. Currently, the primary quantifiable indicator of bone stability, bone mineral density (BMD), is obtained either by DEXA (dual-energy X-ray absorptiometry) or qCT (quantitative CT). Both have respective advantages and disadvantages, with DEXA being considered the gold standard. For timely diagnosis of osteoporosis, another CT-based method is presented. A dual-energy CT reconstruction workflow is developed to evaluate BMD from lumbar spine (L1-L4) DE-CT images. The workflow is ROI-based and automated for practical use. A dual-energy three-material decomposition algorithm is used to differentiate bone from soft-tissue and fat attenuation. The algorithm uses material attenuation coefficients at different beam energy levels. The bone fraction of the three different tissues is used to calculate the amount of hydroxylapatite in the trabecular bone of the corpus vertebrae inside a predefined ROI. Calibrations have been performed to obtain volumetric bone mineral density (vBMD) without having to add a calibration phantom or to use special scan protocols or hardware. Accuracy and precision depend on image noise and are comparable to qCT images. Clinical indications are in accordance with the DEXA gold standard. The decomposition-based workflow reveals bone degradation effects normally not visible on standard CT images, which would induce errors in normal qCT results.
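A minimal sketch of the three-material decomposition idea: with attenuation measured at two energies plus the constraint that the volume fractions sum to one, the bone/soft-tissue/fat fractions follow from a 3×3 linear solve. The attenuation coefficients below are illustrative placeholders, not calibrated values:

```python
mu = {  # hypothetical linear attenuation coefficients (low kVp, high kVp)
    "bone":   (1.00, 0.60),
    "tissue": (0.22, 0.19),
    "fat":    (0.19, 0.17),
}

def decompose(mu_low, mu_high):
    """Solve for [f_bone, f_tissue, f_fat] from two-energy measurements."""
    # rows: low-energy mixture, high-energy mixture, volume conservation
    a = [[mu["bone"][0], mu["tissue"][0], mu["fat"][0], mu_low],
         [mu["bone"][1], mu["tissue"][1], mu["fat"][1], mu_high],
         [1.0, 1.0, 1.0, 1.0]]
    # Gaussian elimination with partial pivoting
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(c + 1, 3):
            f = a[r][c] / a[c][c]
            for k in range(c, 4):
                a[r][k] -= f * a[c][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        x[r] = (a[r][3] - sum(a[r][k] * x[k] for k in range(r + 1, 3))) / a[r][r]
    return x
```

In practice the system is poorly conditioned when the basis materials attenuate similarly at both energies, which is why calibration and noise handling matter in the clinical workflow.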
International Nuclear Information System (INIS)
Zhang, Tao; Li, Guoxiu; Yu, Yusong; Sun, Zuoyu; Wang, Meng; Chen, Jun
2014-01-01
Highlights: • The decomposition and combustion process of an ADN-based thruster is studied. • The distribution of droplets is obtained as the spray hits a wire mesh. • A two-temperature model is adopted to describe the heat transfer in porous media. • The influences of different mass fluxes and porosities are studied. - Abstract: Ammonium dinitramide (ADN) monopropellant is currently the most promising among all 'green propellants'. In this paper, the decomposition and combustion process of liquid ADN-based ternary mixtures for propulsion is numerically studied. The R–R distribution model is used to set the initial boundary conditions of the droplet distribution resulting from the spray hitting a wire mesh, based on PDA experiments. To simulate the heat-transfer characteristics between the gas and solid phases, a two-temperature porous medium model of the catalytic bed is used. An 11-species, 7-reaction chemistry model is used to study the catalytic and combustion processes. The final distributions of temperature, pressure, and the various species concentrations in the ADN thruster are obtained. The simulation results of the present study agree well with previous experimental data, and the demonstration of the ADN thruster confirms that good steady-state operation is achieved. The effects of spray inlet mass flux and porosity on monopropellant thruster performance are analyzed. The numerical results further show that a larger inlet mass flux results in better thruster performance and that a catalytic bed porosity of 0.5 exhibits the best thruster performance. These findings can serve as a key reference for designing and testing non-toxic aerospace monopropellant thrusters.
Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos
2013-08-14
This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking, with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple, quick, yet effective computational tool aimed at artifact reduction from head movements, as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.
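The jitter-removal step can be sketched with a one-dimensional Kalman filter under a random-walk position model; the noise variances below are made-up values, not the project's tuned parameters:

```python
def kalman_smooth(zs, q=1e-3, r=0.5):
    """Filter noisy position measurements zs with a random-walk model.

    q: assumed process noise variance, r: assumed measurement noise variance.
    """
    x, p = zs[0], 1.0          # state estimate and its variance
    out = []
    for z in zs:
        p += q                 # predict: position drifts as a random walk
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return out
```

With a small process-noise variance relative to the measurement noise, the steady-state gain is small and high-frequency jitter is strongly attenuated, at the cost of some lag when the cursor genuinely moves.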
Directory of Open Access Journals (Sweden)
Carlos Reyes-Garcia
2013-08-01
Full Text Available This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking, with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD. EMD is proposed as a simple, quick, yet effective computational tool aimed at artifact reduction from head movements, as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.
Xie, Wen-Jie; Li, Ming-Xia; Xu, Hai-Chuan; Chen, Wei; Zhou, Wei-Xing; Stanley, H. Eugene
2016-10-01
Traders in a stock market exchange stock shares and form a stock trading network. Trades at different positions of the stock trading network may contain different information. We construct stock trading networks based on the limit order book data and classify traders into k classes using the k-shell decomposition method. We investigate the influences of trading behaviors on the price impact by comparing a closed national market (A-shares) with an international market (B-shares), individuals and institutions, partially filled and filled trades, buyer-initiated and seller-initiated trades, and trades at different positions of a trading network. Institutional traders professionally use some trading strategies to reduce the price impact and individuals at the same positions in the trading network have a higher price impact than institutions. We also find that trades in the core have higher price impacts than those in the peripheral shell.
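For reference, the k-shell decomposition used above can be sketched by iteratively peeling nodes of insufficient degree; the toy undirected network below stands in for a trading network:

```python
# Sketch of k-shell decomposition: repeatedly peel nodes whose remaining
# degree falls below the current threshold k; a node removed at threshold
# k belongs to shell k - 1 (peripheral nodes get low shells, core nodes high).
def k_shell(adj):
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    shell = {}
    k = 0
    while adj:
        k += 1
        removed = True
        while removed:                 # cascade removals at this threshold
            removed = False
            for u in [u for u, vs in adj.items() if len(vs) < k]:
                shell[u] = k - 1
                for v in adj[u]:
                    adj[v].discard(u)
                del adj[u]
                removed = True
    return shell

# 4-clique (core traders) with one peripheral trader attached to node 1
trades = {1: {2, 3, 4, 5}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}, 5: {1}}
shells = k_shell(trades)
```

Classifying traders by shell index then allows comparing, for instance, the price impact of trades originating in the core versus the periphery, as the study does.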
Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh
2014-01-01
Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are unable to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The investigation was applied to both simulated data and in-vivo (1)H-MRS signals of the human brain. The results justify the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.
Jiang, Wenqian; Zeng, Bo; Yang, Zhou; Li, Gang
2018-01-01
In the non-intrusive load monitoring mode, load decomposition can reflect the running state of each load, which helps the user reduce unnecessary energy costs. In combination with time-of-use pricing, a demand-side management measure, a residential load influence analysis method for time-of-use (TOU) pricing based on non-intrusive load monitoring data is proposed in this paper. Relying on classification of residential loads by their current signals, the user's equipment types and the self-elasticity and cross-elasticity over different time series can be obtained. Tests on actual household load data show that, under the impact of TOU pricing, the operation of some equipment is shifted to other hours and the users' consumption during peak-price periods is reduced, while consumption in other periods increases, with a certain regularity.
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.
2014-05-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable real-time forecasting of tumor ablation treatment simulations, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method to forecast the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers a very fast numerical solution of bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders the use of DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In this way, tumor ablation treatment planning is feasible using just a personal computer, thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by the medical image modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
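A minimal DMD sketch on synthetic "temperature" snapshots (toy rank-one decaying data, not the bioheat model): fit a reduced linear operator from snapshot pairs, then forecast one step past the data.

```python
import numpy as np

# Synthetic snapshots: a spatial profile x0 decaying uniformly in time
rng = np.random.default_rng(1)
n, m = 20, 30
decay = np.exp(-0.05)
x0 = rng.random(n)
X = np.column_stack([x0 * decay**k for k in range(m)])

# Standard DMD: split snapshots into shifted pairs and project on the
# leading singular subspace of X1.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 1                                          # truncation rank
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r].T
Atilde = Ur.T @ X2 @ Vr @ np.linalg.inv(Sr)    # reduced linear operator

# Forecast snapshot m (one step beyond the data)
b = Ur.T @ X[:, -1]
forecast = Ur @ (Atilde @ b)
expected = x0 * decay**m
```

With noise-free rank-one snapshots the reduced operator recovers the decay factor exactly; in the paper's setting the snapshots come from the meshless bioheat solver and the truncation rank is chosen from the singular-value spectrum.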
Danger, Michael; Cornut, Julien; Chauvet, Eric; Chavez, Paola; Elger, Arnaud; Lecerf, Antoine
2013-07-01
In detritus-based ecosystems, autochthonous primary production contributes very little to the detritus pool. Yet primary producers may still influence the functioning of these ecosystems through complex interactions with decomposers and detritivores. Recent studies have suggested that, in aquatic systems, small amounts of labile carbon (C) (e.g., producer exudates), could increase the mineralization of more recalcitrant organic-matter pools (e.g., leaf litter). This process, called priming effect, should be exacerbated under low-nutrient conditions and may alter the nature of interactions among microbial groups, from competition under low-nutrient conditions to indirect mutualism under high-nutrient conditions. Theoretical models further predict that primary producers may be competitively excluded when allochthonous C sources enter an ecosystem. In this study, the effects of a benthic diatom on aquatic hyphomycetes, bacteria, and leaf litter decomposition were investigated under two nutrient levels in a factorial microcosm experiment simulating detritus-based, headwater stream ecosystems. Contrary to theoretical expectations, diatoms and decomposers were able to coexist under both nutrient conditions. Under low-nutrient conditions, diatoms increased leaf litter decomposition rate by 20% compared to treatments where they were absent. No effect was observed under high-nutrient conditions. The increase in leaf litter mineralization rate induced a positive feedback on diatom densities. We attribute these results to the priming effect of labile C exudates from primary producers. The presence of diatoms in combination with fungal decomposers also promoted decomposer diversity and, under low-nutrient conditions, led to a significant decrease in leaf litter C:P ratio that could improve secondary production. Results from our microcosm experiment suggest new mechanisms by which primary producers may influence organic matter dynamics even in ecosystems where autochthonous
Asadi, Mozaffar; Asadi, Zahra; Savaripoor, Nooshin; Dusek, Michal; Eigner, Vaclav; Shorkaei, Mohammad Ranjkesh; Sedaghat, Moslem
2015-02-05
A series of new VO(IV) complexes of tetradentate N2O2 Schiff base ligands (L(1)-L(4)) were synthesized and characterized by FT-IR, UV-vis and elemental analysis. The structure of the complex VOL(1)⋅DMF was also investigated by X-ray crystallography, which revealed a vanadyl center with distorted octahedral coordination in which the 2-aza and 2-oxo coordinating sites of the ligand were perpendicular to the "-yl" oxygen. The electrochemical properties of the vanadyl complexes were investigated by cyclic voltammetry. A good correlation was observed between the oxidation potentials and the electron-withdrawing character of the substituents on the Schiff base ligands, showing the following trend: 5-MeO > 5-H > 5-Br > 5-Cl. Furthermore, the kinetic parameters of thermal decomposition were calculated using the Coats-Redfern equation. According to the Coats-Redfern plots, the kinetics of thermal decomposition of the studied complexes is first-order in all stages, the free energy of activation for each stage is larger than that of the previous one, and the complexes have good thermal stability. The preparation of VOL(1)⋅DMF also yielded another compound, a vanadium oxide [VO]X with a different crystal habit (platelet instead of prism) and without the L(1) ligand, consisting of a V10O28 cage with diaminium and dimethylammonium moieties as counter ions. Because its crystal structure was also new, we report it along with the targeted complex. Copyright © 2014 Elsevier B.V. All rights reserved.
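For context, the Coats-Redfern relation used for such first-order fits is commonly written as (standard textbook form with the usual symbols; not taken verbatim from the paper):

```latex
\ln\!\left[\frac{-\ln(1-\alpha)}{T^{2}}\right]
  \;=\; \ln\!\left[\frac{AR}{\beta E_a}\left(1-\frac{2RT}{E_a}\right)\right]
  \;-\; \frac{E_a}{RT}
```

where $\alpha$ is the conversion fraction, $\beta$ the heating rate, $A$ the pre-exponential factor, $E_a$ the activation energy, and $R$ the gas constant; plotting the left-hand side against $1/T$ yields $E_a$ from the slope.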
Jin, Yang; Ciwei, Gao; Jing, Zhang; Min, Sun; Jie, Yu
2017-05-01
The selection and evaluation of priority domains in Global Energy Internet standard development will help break through the limits of national investment, so that priority is given to standardizing the technical areas with the highest urgency and feasibility. Therefore, in this paper, a Delphi survey process based on technology foresight is put forward, an evaluation index system for priority domains is established, and the index calculation method is determined. Afterwards, statistical methods are used to evaluate the alternative domains. Finally, the top four priority domains are determined as follows: Interconnected Network Planning and Simulation Analysis, Interconnected Network Safety Control and Protection, Intelligent Power Transmission and Transformation, and Internet of Things.
Natural-Annotation-based Unsupervised Construction of Korean-Chinese Domain Dictionary
Liu, Wuying; Wang, Lin
2018-03-01
Large-scale bilingual parallel resources are significant for statistical learning and deep learning in natural language processing. This paper addresses the automatic construction of a Korean-Chinese domain dictionary and presents a novel unsupervised construction method based on natural annotations in the raw corpus. We first extract all Korean-Chinese word pairs from Korean texts according to natural annotations, then transform the traditional Chinese characters into simplified ones, and finally distill out a bilingual domain dictionary by retrieving the simplified Chinese words in an extra Chinese domain dictionary. The experimental results show that our method can automatically and efficiently build multiple Korean-Chinese domain dictionaries.
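A sketch of the natural-annotation idea: Korean technical texts often gloss a term with its Hanja/Chinese form in parentheses. The regex pattern, the toy text, the tiny character mapping, and the mini domain dictionary below are all illustrative assumptions, not the paper's actual resources:

```python
import re

# Korean word immediately followed by a parenthesized Chinese gloss,
# e.g. "영역(領域)" -- the "natural annotation" exploited here.
PAIR = re.compile(r"([\uac00-\ud7a3]+)\(([\u4e00-\u9fff]+)\)")

# Tiny traditional-to-simplified mapping (a real system would use a
# full conversion table)
TRAD2SIMP = {"領": "领", "辭": "辞"}

def extract_pairs(text, domain_dict):
    """Extract Korean-Chinese pairs, simplify, keep in-domain entries."""
    pairs = {}
    for ko, zh in PAIR.findall(text):
        simp = "".join(TRAD2SIMP.get(ch, ch) for ch in zh)
        if simp in domain_dict:   # distill against the Chinese domain dictionary
            pairs[ko] = simp
    return pairs

text = "이 논문은 영역(領域) 사전(辭典) 구축을 다룬다."
result = extract_pairs(text, {"领域", "辞典"})
```

The same three steps (pair extraction, script conversion, dictionary filtering) scale directly to a raw corpus, since no labeled training data is involved.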
Energy Technology Data Exchange (ETDEWEB)
Rao, Laxminarsimha V., E-mail: laxman@iitk.ac.in [Mechanics and Applied Mathematics Group, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Roy, Subhradeep [Department of Biomedical Engineering and Mechanics (MC 0219), Virginia Tech, 495 Old Turner Street, Blacksburg, VA 24061 (United States); Das, Sovan Lal [Mechanics and Applied Mathematics Group, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India)
2017-01-15
We estimate the equilibrium size distribution of cholesterol rich micro-domains on a lipid bilayer by solving Smoluchowski equation for coagulation and fragmentation. Towards this aim, we first derive the coagulation kernels based on the diffusion behaviour of domains moving in a two dimensional membrane sheet, as this represents the reality better. We incorporate three different diffusion scenarios of domain diffusion into our coagulation kernel. Subsequently, we investigate the influence of the parameters in our model on the coagulation and fragmentation behaviour. The observed behaviours of the coagulation and fragmentation kernels are also manifested in the equilibrium domain size distribution and its first moment. Finally, considering the liquid domains diffusing in a supported lipid bilayer, we fit the equilibrium domain size distribution to a benchmark solution.
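The discrete coagulation-fragmentation dynamics can be sketched by explicit Euler integration of a truncated Smoluchowski system; the constant kernel, shatter-to-monomers fragmentation, and rate values below are toy assumptions, not the diffusion-derived kernels of the paper:

```python
# Toy discrete Smoluchowski system: sizes 1..N, constant coagulation
# kernel K (factor conventions absorbed into K), fragmentation that
# shatters a size-k domain into k monomers at rate F.
N, K, F, dt = 30, 1.0, 0.5, 0.001
n = [0.0] * (N + 1)
n[1] = 1.0                       # start from monomers only

def step(n):
    dn = [0.0] * (N + 1)
    for i in range(1, N + 1):
        for j in range(1, N + 1 - i):   # truncate growth beyond size N
            rate = K * n[i] * n[j]
            dn[i] -= rate               # i and j coagulate into i + j
            dn[j] -= rate
            dn[i + j] += rate
    for k in range(2, N + 1):
        dn[k] -= F * n[k]               # size-k domain shatters...
        dn[1] += k * F * n[k]           # ...into k monomers
    return [a + dt * b for a, b in zip(n, dn)]

for _ in range(2000):
    n = step(n)
mass = sum(k * n[k] for k in range(1, N + 1))
```

Mass (the first moment of the size distribution) is conserved by construction here, a useful sanity check before swapping in the size-dependent, diffusion-derived kernels the paper studies.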
Location-based restoration mechanism for multi-domain GMPLS networks
DEFF Research Database (Denmark)
Manolova, Anna Vasileva; Calle, Eusibi; Ruepp, Sarah Renée
2009-01-01
In this paper we propose and evaluate the efficiency of a location-based restoration mechanism in a dynamic multi-domain GMPLS network. We focus on inter-domain link failures and utilize the correlation between the actual position of a failed link along the path with the applied restoration...
The Domain Five Observation Instrument: A Competency-Based Coach Evaluation Tool
Shangraw, Rebecca
2017-01-01
The Domain Five Observation Instrument (DFOI) is a competency-based observation instrument recommended for sport leaders or researchers who wish to evaluate coaches' instructional behaviors. The DFOI includes 10 behavior categories and four timed categories that encompass 34 observable instructional benchmarks outlined in domain five of the…
Liang, Hui; Chen, Xiaobo
2017-10-01
A novel multi-domain method based on an analytical control surface is proposed by combining the use of free-surface Green function and Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain decomposition strategy or multi-block method, the control surface here is not panelized, on which the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre function in vertical coordinate and Fourier series in the circumference. Free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation via integrating test functions orthogonal to base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which are only dependent on the radius of the control surface, are present in the external solution, and they are removed by extending the boundary integral equation to the interior free surface (circular disc) on which the null normal derivative of potential is imposed, and the dipole distribution is expressed as Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. The point collocation is imposed over the body surface and free surface, while the collocation of the Galerkin type is applied on the control surface. The present method is valid in the computation of both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.
SAR Interferogram Filtering of Shearlet Domain Based on Interferometric Phase Statistics
Directory of Open Access Journals (Sweden)
Yonghong He
2017-02-01
Full Text Available This paper presents a new filtering approach for Synthetic Aperture Radar (SAR) interferometric phase noise reduction in the shearlet domain, based on the coherence statistical characteristics. Shearlets provide a multidirectional and multiscale decomposition that has advantages over wavelet filtering methods when dealing with noisy phase fringes. Phase noise in SAR interferograms is directly related to the interferometric coherence and the look number of the interferogram. Therefore, an optimal interferogram filter should incorporate information from both. The proposed method combines the phase noise standard deviation with the shearlet transform. Experimental results show that the proposed method can reduce the interferogram noise while maintaining the spatial resolution, especially in areas with low coherence.
Decomposition methods for unsupervised learning
DEFF Research Database (Denmark)
Mørup, Morten
2008-01-01
This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...
Amorphization of Fe-based alloy via wet mechanical alloying assisted by PCA decomposition
Energy Technology Data Exchange (ETDEWEB)
Neamţu, B.V., E-mail: Bogdan.Neamtu@stm.utcluj.ro [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania); Chicinaş, H.F.; Marinca, T.F. [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania); Isnard, O. [Université Grenoble Alpes, Institut NEEL, F-38042, Grenoble (France); CNRS, Institut NEEL, 25 rue des martyrs, BP166, F-38042, Grenoble (France); Pană, O. [National Institute for Research and Development of Isotopic and Molecular Technologies, 65-103 Donath Street, 400293, Cluj-Napoca (Romania); Chicinaş, I. [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania)
2016-11-01
used as microalloying elements which could provide the required extra amount of metalloids. - Highlights: • Amorphization of Fe{sub 75}Si{sub 20}B{sub 5} alloy via wet mechanical alloying is assisted by PCA decomposition. • Powder amorphization was not achieved even after 140 hours of dry MA. • Wet MA using different PCAs leads to powder amorphization at different MA durations. • Regardless of PCA type, contamination with 2.3 wt% C is needed for amorphization.
International Nuclear Information System (INIS)
Zhu Qin; Peng Xizhe; Wu Kaiya
2012-01-01
Based on the input–output model and comparable-price input–output tables, the current paper investigates the indirect carbon emissions from residential consumption in China in 1992–2005, and examines the impacts on the emissions using the structural decomposition method. The results demonstrate that the rise of the residential consumption level played a dominant role in the growth of residential indirect emissions. The persistent decline of the carbon emission intensity of industrial sectors had a significant negative effect on the emissions. The change in the intermediate demand of industrial sectors resulted in an overall positive effect, except in the initial years. The increase in population raised the indirect emissions to a certain extent; however, population size is no longer the main driver of the growth of the emissions. The change in the consumption structure showed a weak positive effect, demonstrating the importance for China of controlling and slowing down the increase in the emissions while optimizing the residential consumption structure. The results imply that China should pursue restructuring of the economy and efficiency improvements, rather than a lower consumption scale, to achieve its targets of energy conservation and emission reduction. - Highlights: ► We build the input–output model of indirect carbon emissions from residential consumption. ► We calculate the indirect emissions using the comparable-price input–output tables. ► We examine the impacts on the indirect emissions using the structural decomposition method. ► The change in the consumption structure showed a weak positive effect on the emissions. ► China's population size is no longer the main reason for the growth of the emissions.
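For reference, a common additive form of the structural decomposition used in such input–output studies (a sketch with generic notation, not the paper's exact specification) writes emissions as $E = f^{\mathsf T} L\, y$, with $f$ the sectoral emission-intensity vector, $L = (I - A)^{-1}$ the Leontief inverse, and $y$ final demand, so that the change between two years decomposes into intensity, technology, and demand effects:

```latex
\Delta E \;=\; \Delta f^{\mathsf T} L_0\, y_0
        \;+\; f_1^{\mathsf T}\, \Delta L\, y_0
        \;+\; f_1^{\mathsf T} L_1\, \Delta y
```

This is one of several equivalent weightings; in practice the average of the two polar decompositions is often reported to avoid index-order bias.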
Shi, Feifei; Zhao, Hui; Liu, Gao; Ross, Philip N.; Somorjai, Gabor A.; Komvopoulos, Kyriakos
2014-01-01
The formation of passive films on electrodes due to electrolyte decomposition significantly affects the reversibility of Li-ion batteries (LIBs); however, understanding of the electrolyte decomposition process is still lacking. The decomposition products of ethylene carbonate (EC)-based electrolytes on Sn and Ni electrodes are investigated in this study by Fourier transform infrared (FTIR) spectroscopy. The reference compounds, diethyl 2,5-dioxahexane dicarboxylate (DEDOHC) and polyethylene carbonate (poly-EC), were synthesized, and their chemical structures were characterized by FTIR spectroscopy and nuclear magnetic resonance (NMR). Assignment of the vibration frequencies of these compounds was assisted by quantum chemical (Hartree-Fock) calculations. The effect of Li-ion solvation on the FTIR spectra was studied by introducing the synthesized reference compounds into the electrolyte. EC decomposition products formed on Sn and Ni electrodes were identified as DEDOHC and poly-EC by matching the features of surface species formed on the electrodes with reference spectra. The results of this study demonstrate the importance of accounting for the solvation effect in FTIR analysis of the decomposition products forming on LIB electrodes. © 2014 American Chemical Society.
A dimension decomposition approach based on iterative observer design for an elliptic Cauchy problem
Majeed, Muhammad Usman
2015-07-13
A state observer inspired iterative algorithm is presented to solve a boundary estimation problem for the Laplace equation, using one of the space variables as a time-like variable. A three-dimensional domain with two congruent parallel surfaces is considered. The problem is set up in Cartesian coordinates, and the Laplace equation is rewritten as a first-order state equation with state operator matrix A; measurements are provided on the Cauchy data surface with measurement operator C. Conditions for the existence of a strongly continuous semigroup generated by A are studied. Observability conditions for the pair (C, A) are provided in an infinite-dimensional setting. In this setting, the special observability result obtained allows the three-dimensional problem to be decomposed into a set of independent two-dimensional sub-problems over rectangular cross-sections. Numerical simulation results are provided.
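The reformulation described can be written compactly; a sketch, with z taken as the time-like variable, I the identity, and w the first-order state:

```latex
% Laplace equation u_xx + u_yy + u_zz = 0 recast as a first-order
% state equation in the time-like variable z, with state w = (u, u_z)
\frac{\partial}{\partial z}
\begin{pmatrix} u \\ \partial_z u \end{pmatrix}
=
\underbrace{\begin{pmatrix} 0 & I \\ -(\partial_{xx} + \partial_{yy}) & 0 \end{pmatrix}}_{A}
\begin{pmatrix} u \\ \partial_z u \end{pmatrix},
\qquad y = C\,w,
```

where y is the Cauchy data measured on one surface; the observer then iterates in z, driven by the mismatch with y.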
Infrared and visible fusion face recognition based on NSCT domain
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
2018-01-01
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible face fusion recognition. Firstly, NSCT is applied separately to the infrared and visible face images, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied respectively in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
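As one concrete ingredient, the LBP feature mentioned above assigns each pixel an 8-bit code from its neighborhood; a minimal sketch (plain nested lists stand in for images, and the bit ordering is one common convention, not necessarily the paper's):

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern at pixel (r, c): each neighbour
    greater than or equal to the centre contributes one bit, taken
    clockwise starting from the top-left neighbour."""
    center = img[r][c]
    neighbors = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                 img[r][c+1],   img[r+1][c+1], img[r+1][c],
                 img[r+1][c-1], img[r][c-1]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << bit
    return code

# tiny 3x3 example image; the LBP code of its centre pixel
img = [[5, 4, 3],
       [6, 4, 2],
       [7, 8, 1]]
code = lbp_code(img, 1, 1)
```

Histograms of such codes over image blocks form the texture descriptor that is then fed to the score-level fusion stage.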
DEFF Research Database (Denmark)
Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn
2013-01-01
This paper describes a Benders decomposition-based framework for solving the large scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed...
Evaluation of Methyl-Binding Domain Based Enrichment Approaches Revisited.
Directory of Open Access Journals (Sweden)
Karolina A Aberg
Methyl-binding domain (MBD) enrichment followed by deep sequencing (MBD-seq) is a robust and cost-efficient approach for methylome-wide association studies (MWAS). MBD-seq has been demonstrated to be capable of identifying differentially methylated regions, detecting previously reported robust associations and producing findings that replicate with other technologies such as targeted pyrosequencing of bisulfite-converted DNA. There are several kits commercially available that can be used for MBD enrichment. Our previous work has involved MethylMiner (Life Technologies, Foster City, CA, USA), which we chose after careful investigation of its properties. However, in a recent evaluation of five commercially available MBD-enrichment kits the performance of MethylMiner was deemed poor. Given our positive experience with MethylMiner, we were surprised by this report. In an attempt to reproduce these findings we have here performed a direct comparison of MethylMiner with MethylCap (Diagenode Inc, Denville, NJ, USA), the best performing kit in that study. We find that both MethylMiner and MethylCap are well performing MBD-enrichment kits. However, MethylMiner shows somewhat better enrichment efficiency and lower levels of background "noise". In addition, for the purpose of MWAS, where we want to investigate the majority of CpGs, we find MethylMiner to be superior as it allows tailoring the enrichment to the regions where most CpGs are located. Using targeted bisulfite sequencing we confirmed that sites where methylation was detected by either MethylMiner or by MethylCap were indeed methylated.
Concomitant prediction of function and fold at the domain level with GO-based profiles.
Lopez, Daniel; Pazos, Florencio
2013-01-01
Predicting the function of newly sequenced proteins is crucial due to the pace at which these raw sequences are being obtained. Almost all resources for predicting protein function assign functional terms to whole chains, and do not distinguish which particular domain is responsible for the allocated function. This is not a limitation of the methodologies themselves; rather, in the databases of functional annotations that these methods use for transferring functional terms to new proteins, the annotations are made on a whole-chain basis. Nevertheless, domains are the basic evolutionary, and often functional, units of proteins. In many cases, the domains of a protein chain have distinct molecular functions, independent from each other. For that reason, resources with functional annotations at the domain level, as well as methodologies for predicting function for individual domains adapted to these resources, are required. We present a methodology for predicting the molecular function of individual domains, based on a previously developed database of functional annotations at the domain level. The approach, which we show outperforms a standard method based on sequence searches in assigning function, concomitantly predicts the structural fold of the domains and can give hints on the functionally important residues associated with the predicted function.
A domain-based approach to predict protein-protein interactions
Directory of Open Access Journals (Sweden)
Resat Haluk
2007-06-01
Background: Knowing which proteins exist in a certain organism or cell type and how these proteins interact with each other is necessary for the understanding of biological processes at the whole cell level. The determination of protein-protein interaction (PPI) networks has been the subject of extensive research. Despite the development of reasonably successful methods, serious technical difficulties still exist. In this paper we present DomainGA, a quantitative computational approach that uses information about domain-domain interactions to predict the interactions between proteins. Results: DomainGA is a multi-parameter optimization method in which the available PPI information is used to derive a quantitative scoring scheme for the domain-domain pairs. Obtained domain interaction scores are then used to predict whether a pair of proteins interacts. Using the yeast PPI data and a series of tests, we show the robustness and insensitivity of the DomainGA method to the selection of the parameter sets, score ranges, and detection rules. Our DomainGA method achieves very high explanation ratios for the positive and negative PPIs in yeast. Based on our cross-verification tests on human PPIs, comparison of the optimized scores with the structurally observed domain interactions obtained from the iPFAM database, and sensitivity and specificity analysis, we conclude that our DomainGA method shows great promise to be applicable across multiple organisms. Conclusion: We envision the DomainGA as a first step of a multiple tier approach to constructing organism specific PPIs. As it is based on fundamental structural information, the DomainGA approach can be used to create potential PPIs and the accuracy of the constructed interaction template can be further improved using complementary methods. Explanation ratios obtained in the reported test case studies clearly show that the false prediction rates of the template networks constructed
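The prediction step can be sketched as follows. The decision rule (best domain-pair score against a threshold) and the Pfam-style identifiers are illustrative assumptions, since the abstract does not specify the exact detection rule DomainGA settles on; the genetic-algorithm optimization of the scores themselves is omitted:

```python
def predict_interaction(domains_a, domains_b, pair_scores, threshold=0.5):
    """Predict whether two proteins interact from their domain compositions:
    look up every cross domain pair in the learned score table and compare
    the best score against a threshold (hypothetical rule; sketch only)."""
    best = 0.0
    for da in domains_a:
        for db in domains_b:
            key = tuple(sorted((da, db)))          # unordered domain pair
            best = max(best, pair_scores.get(key, 0.0))
    return best >= threshold, best

# toy score table over hypothetical Pfam-style domain identifiers
scores = {("PF00001", "PF00002"): 0.9, ("PF00003", "PF00003"): 0.2}
hit, s = predict_interaction(["PF00001"], ["PF00002", "PF00003"], scores)
```

In DomainGA the table entries are the quantities tuned against the known positive and negative PPIs; the lookup itself stays this cheap.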
Directory of Open Access Journals (Sweden)
Dong Wang
2015-01-01
The traditional polarity comparison based travelling wave protection, which uses the initial wave information, is affected by the initial fault angle, bus structure, and external faults, and it ignores the relationship between the magnitude and polarity of the travelling wave. The resulting failures to trip and malfunctions hinder the further application of this protection principle. Therefore, this paper presents an ultra-high-speed travelling wave protection using an integral based polarity comparison principle. After empirical mode decomposition of the original travelling wave, the first-order intrinsic mode function is used as the protection object. Based on the relationship between the magnitude and polarity of the travelling wave, this paper demonstrates the feasibility of using the travelling wave magnitude, which contains polarity information, as the direction criterion. The direction criterion is integrated over a period after the fault to avoid wave head detection failure. Through PSCAD simulation with a typical 500 kV transmission system, the reliability and sensitivity of the travelling wave protection were verified under the influence of different factors.
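A minimal sketch of the integral based polarity comparison: integrate the first IMF over a post-fault window at each line end and compare the polarities. The sampling rate, window length, and the internal-fault convention (matching polarities) are assumptions for illustration; the EMD stage and wave-head detection are omitted:

```python
def direction_polarity(imf, fs, window_s):
    """Integrate the first IMF over a post-fault window of length window_s
    seconds and return the sign of the integral (+1, -1, or 0)."""
    n = int(window_s * fs)
    s = sum(imf[:n]) / fs
    return (s > 0) - (s < 0)

def internal_fault(imf_end_m, imf_end_n, fs, window_s=0.001):
    """Polarity comparison over the integration window: matching polarities
    at the two line ends indicate an internal fault (assumed convention)."""
    return direction_polarity(imf_end_m, fs, window_s) == \
           direction_polarity(imf_end_n, fs, window_s)

fs = 100_000                 # 100 kHz sampling rate, assumed
wave_m = [1.0] * 120         # toy post-fault records, not real transients
wave_n = [0.8] * 120
```

Because the criterion integrates over a whole window rather than keying on a single wave head, a missed or mistimed wave-head detection no longer flips the decision.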
Directed evolution of the TALE N-terminal domain for recognition of all 5′ bases
Lamb, Brian M.; Mercer, Andrew C.; Barbas, Carlos F.
2013-01-01
Transcription activator-like effector (TALE) proteins can be designed to bind virtually any DNA sequence. General guidelines for design of TALE DNA-binding domains suggest that the 5′-most base of the DNA sequence bound by the TALE (the N0 base) should be a thymine. We quantified the N0 requirement by analysis of the activities of TALE transcription factors (TALE-TF), TALE recombinases (TALE-R) and TALE nucleases (TALENs) with each DNA base at this position. In the absence of a 5′ T, we observed decreases in TALE activity up to >1000-fold in TALE-TF activity, up to 100-fold in TALE-R activity and up to 10-fold reduction in TALEN activity compared with target sequences containing a 5′ T. To develop TALE architectures that recognize all possible N0 bases, we used structure-guided library design coupled with TALE-R activity selections to evolve novel TALE N-terminal domains to accommodate any N0 base. A G-selective domain and broadly reactive domains were isolated and characterized. The engineered TALE domains selected in the TALE-R format demonstrated modularity and were active in TALE-TF and TALEN architectures. Evolved N-terminal domains provide effective and unconstrained TALE-based targeting of any DNA sequence as TALE binding proteins and designer enzymes. PMID:23980031
White, Jeffery A.; Baurle, Robert A.; Passe, Bradley J.; Spiegel, Seth C.; Nishikawa, Hiroaki
2017-01-01
The ability to solve the equations governing the hypersonic turbulent flow of a real gas on unstructured grids using a spatially-elliptic, 2nd-order accurate, cell-centered, finite-volume method has been recently implemented in the VULCAN-CFD code. This paper describes the key numerical methods and techniques that were found to be required to robustly obtain accurate solutions to hypersonic flows on non-hex-dominant unstructured grids. The methods and techniques described include: an augmented-stencil, weighted linear least-squares, cell-average gradient method; a robust multidimensional cell-average gradient-limiter process that is consistent with the augmented stencil of the cell-average gradient method; and a cell-face gradient method that contains a cell skewness sensitive damping term derived using hyperbolic diffusion based concepts. A data-parallel matrix-based symmetric Gauss-Seidel point-implicit scheme, used to solve the governing equations, is described and shown to be more robust and efficient than a matrix-free alternative. In addition, a y+ adaptive turbulent wall boundary condition methodology is presented. This boundary condition methodology is designed to automatically switch between a solve-to-the-wall and a wall-matching-function boundary condition based on the local y+ of the 1st cell center off the wall. The aforementioned methods and techniques are then applied to a series of hypersonic and supersonic turbulent flat plate unit tests to examine the efficiency, robustness and convergence behavior of the implicit scheme and to determine the ability of the solve-to-the-wall and y+ adaptive turbulent wall boundary conditions to reproduce the turbulent law-of-the-wall. Finally, the thermally perfect, chemically frozen, Mach 7.8 turbulent flow of air through a scramjet flow-path is computed and compared with experimental data to demonstrate the robustness, accuracy and convergence behavior of the unstructured-grid solver for a realistic 3-D geometry on
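The weighted linear least-squares cell-average gradient can be sketched generically: fit a linear field to the neighbouring cell averages, with inverse-distance weights. The weighting exponent and stencil here are generic assumptions; the paper's augmented-stencil construction and limiter are omitted:

```python
import numpy as np

def cell_average_gradient(xc, phi_c, xs, phis, p=1.0):
    """Weighted linear least-squares gradient at cell centre xc from
    neighbour cell averages: minimize sum_k w_k (phi_k - phi_c - g.(x_k - xc))^2
    with inverse-distance weights w_k = |x_k - xc|^(-p)."""
    dx = np.asarray(xs, dtype=float) - np.asarray(xc, dtype=float)  # (n, dim)
    dphi = np.asarray(phis, dtype=float) - phi_c                    # (n,)
    w = np.linalg.norm(dx, axis=1) ** (-p)
    g, *_ = np.linalg.lstsq(dx * w[:, None], dphi * w, rcond=None)
    return g

# the reconstruction is exact for a linear field phi = 2x + 3y
centers = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0), (1.0, 1.0)]
vals = [2 * x + 3 * y for x, y in centers]
g = cell_average_gradient((0.0, 0.0), 0.0, centers, vals)
```

Exactness on linear data is the property that makes the scheme 2nd-order accurate; the augmented stencil exists to keep the least-squares system well conditioned on skewed non-hex cells.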
Chen, Jun; Li, Guoxiu; Zhang, Tao; Wang, Meng; Yu, Yusong
2016-12-01
Low toxicity ammonium dinitramide (ADN)-based aerospace propulsion systems currently show promise with regard to applications such as controlling satellite attitude. In the present work, the decomposition and combustion processes of an ADN-based monopropellant thruster were systematically studied, using a thermally stable catalyst to promote the decomposition reaction. The performance of the ADN propulsion system was investigated using a ground test system under vacuum, and the physical properties of the ADN-based propellant were also examined. Using this system, the effects of the preheating temperature and feed pressure on the combustion characteristics and thruster performance during steady state operation were observed. The results indicate that the propellant and catalyst employed during this work, as well as the design and manufacture of the thruster, met performance requirements. Moreover, the 1 N ADN thruster generated a specific impulse of 223 s, demonstrating the efficacy of the new catalyst. The thruster operational parameters (specifically, the preheating temperature and feed pressure) were found to have a significant effect on the decomposition and combustion processes within the thruster, and the performance of the thruster was demonstrated to improve at higher feed pressures and elevated preheating temperatures. A lower temperature of 140 °C was determined to activate the catalytic decomposition and combustion processes more effectively compared with the results obtained using other conditions. The data obtained in this study should be beneficial to future systematic and in-depth investigations of the combustion mechanism and characteristics within an ADN thruster.
Mindfulness-Based Interventions and the Affective Domain of Education
Hyland, Terry
2014-01-01
Thanks largely to the work of Kabat-Zinn and associates applications of mindfulness-based practices have grown exponentially over the last decade or so, particularly in the fields of education, psychology, psychotherapy and mind-body health. Having its origins in Buddhist traditions, the more recent secular and therapeutic applications of the…
Application of the Decomposition Method to the Design Complexity of Computer-based Display
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2012-05-15
The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in process industries. In case of nuclear power plants (NPPs), HMIs have significant implications for the safety of the NPPs since poor implementation of HMIs can impair the operators' information searching ability which is considered as one of the important aspects of human behavior. To support and increase the efficiency of the operators' information searching behavior, advanced HMIs based on computer technology are provided. Operators in advanced main control room (MCR) acquire information through video display units (VDUs), and large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present them in a variety of formats than conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, complexity of the elements becomes greater due to less distinctiveness of each element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. And according to Gestalt theory, people tend to group similar elements based on attributes such as shape, color or pattern based on the principle of similarity. Therefore, it is necessary to consider not only human operator's perception but the number of element consisting of computer-based display
Ma, Xiaojun; Liu, Yan; Wei, Xiaoxue; Li, Yifan; Zheng, Mengchen; Li, Yudong; Cheng, Chaochao; Wu, Yumei; Liu, Zhaonan; Yu, Yuanbo
2017-08-01
Nowadays, environmental problems have become a pressing international issue, and experts and scholars pay increasing attention to energy efficiency. Unlike most studies, which analyze the changes of TFEE across provinces or regional cities, here TFEE is calculated as the ratio of the target energy value to the actual energy input based on data for cities at the prefecture level, which is more accurate. Many studies treat TFP as TFEE in analyses at the provincial level. This paper calculates TFEE more reliably by super-efficiency DEA, observes the changes of TFEE, analyzes its relation with TFP, and shows that TFP is not equal to TFEE. Additionally, the internal influences on the TFEE are obtained via the Malmquist index decomposition, and the external influences on the TFEE are then analyzed based on Tobit models. The results demonstrate that Heilongjiang has the highest TFEE, followed by Jilin, while Liaoning has the lowest TFEE. Finally, some policy suggestions are proposed based on the identified influences on energy efficiency and the study results.
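A toy illustration of the TFEE ratio (target over actual energy input). Real DEA solves a linear program per city over all inputs and outputs; this single-input, best-ratio frontier is a deliberate simplification, and the city data are hypothetical:

```python
def tfee(energy, output):
    """Total-factor energy efficiency as target/actual energy input, with the
    target energy set by the best observed output-per-energy ratio
    (single-input simplification of the paper's super-efficiency DEA)."""
    best = max(o / e for e, o in zip(energy, output))  # frontier ratio
    return [(o / best) / e for e, o in zip(energy, output)]

# three hypothetical cities: actual energy input and economic output
energy = [10.0, 20.0, 15.0]
output = [8.0, 10.0, 9.0]
scores = tfee(energy, output)
```

The frontier city scores 1.0 and every other city's score is the fraction of its energy input that the frontier technology would have needed, which is exactly the target/actual interpretation used in the paper.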
Directory of Open Access Journals (Sweden)
Jing Xu
2015-10-01
In order to guarantee the stable operation of shearers and promote the construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of traditional detectors: large size, contact measurement, and low identification rate. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is applied to the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select the essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and the PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
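The PNN classification stage can be sketched as a Parzen-window classifier over the extracted features. The feature vectors (standing in for IMF energy and standard deviation) and the smoothing parameter are made-up values, and the IEEMD stage is assumed already done:

```python
from math import exp

def pnn_classify(x, train, sigma=0.3):
    """Probabilistic neural network: Gaussian Parzen-window estimate of the
    class-conditional density for each class, predict the argmax."""
    scores = {}
    for label, samples in train.items():
        s = 0.0
        for v in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(x, v))
            s += exp(-d2 / (2 * sigma ** 2))
        scores[label] = s / len(samples)
    return max(scores, key=scores.get)

# toy (energy, std) feature vectors per cutting pattern, hypothetical
train = {
    "coal": [(0.9, 0.1), (1.0, 0.15)],
    "rock": [(0.2, 0.5), (0.25, 0.6)],
}
```

PNN has no iterative training, which is what makes it attractive for the online recognition setting described: adding a labelled sample is just appending it to `train`.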
Directory of Open Access Journals (Sweden)
Boyang Qu
2017-12-01
The intermittency of wind power and the large-scale integration of electric vehicles (EVs) bring new challenges to the reliability and economy of power system dispatching. In this paper, a novel multi-objective dynamic economic emission dispatch (DEED) model is proposed that considers EVs and the uncertainties of wind power. The total fuel cost and pollutant emission are taken as the optimization objectives, and the vehicle-to-grid (V2G) power and the conventional generator output power are set as the decision variables. The stochastic wind power is derived from a Weibull probability distribution function. Under the premise of meeting the system energy and users' travel demands, the charging and discharging behavior of the EVs is dynamically managed. Moreover, we propose a two-step dynamic constraint processing strategy for the decision variables based on a penalty function and, on this basis, improve the Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D). The proposed model and approach are verified on the 10-generator system. The results demonstrate that the proposed DEED model and the improved MOEA/D algorithm are effective and reasonable.
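MOEA/D decomposes a multi-objective problem into scalar subproblems, one per weight vector. A minimal sketch of the standard Tchebycheff scalarization and evenly spread bi-objective weights for the cost/emission case; the objective values and reference point are hypothetical, and the evolutionary loop itself is omitted:

```python
def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization used by MOEA/D: each weight vector defines
    one scalar subproblem g(x | w) = max_i w_i * |f_i(x) - z*_i|."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

def weight_vectors(n):
    """Evenly spread weight vectors for a bi-objective problem."""
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]

# two candidate dispatch solutions: (fuel cost, emission), hypothetical units
z_star = (0.0, 0.0)                                     # ideal point, assumed
g_a = tchebycheff((100.0, 40.0), (0.5, 0.5), z_star)    # balanced subproblem
g_b = tchebycheff((80.0, 90.0), (0.5, 0.5), z_star)
```

Each subproblem is optimized with help from its neighbouring weight vectors; a solution replaces a neighbour's incumbent whenever it has a smaller Tchebycheff value, which is where a penalty-based constraint handling term would be added.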
An Exemplar-Based Multi-View Domain Generalization Framework for Visual Recognition.
Niu, Li; Li, Wen; Xu, Dong; Cai, Jianfei
2018-02-01
In this paper, we propose a new exemplar-based multi-view domain generalization (EMVDG) framework for visual recognition by learning robust classifiers that are able to generalize well to an arbitrary target domain based on the training samples with multiple types of features (i.e., multi-view features). In this framework, we aim to address two issues simultaneously. First, the distribution of training samples (i.e., the source domain) is often considerably different from that of testing samples (i.e., the target domain), so the performance of the classifiers learnt on the source domain may drop significantly on the target domain. Moreover, the testing data are often unseen during the training procedure. Second, when the training data are associated with multi-view features, the recognition performance can be further improved by exploiting the relation among multiple types of features. To address the first issue, considering that it has been shown that fusing multiple SVM classifiers can enhance the domain generalization ability, we build our EMVDG framework upon exemplar SVMs (ESVMs), in which a set of ESVM classifiers are learnt with each one trained based on one positive training sample and all the negative training samples. When the source domain contains multiple latent domains, the learnt ESVM classifiers are expected to be grouped into multiple clusters. To address the second issue, we propose two approaches under the EMVDG framework based on the consensus principle and the complementary principle, respectively. Specifically, we propose an EMVDG_CO method by adding a co-regularizer to enforce the cluster structures of ESVM classifiers on different views to be consistent based on the consensus principle. Inspired by multiple kernel learning, we also propose another EMVDG_MK method by fusing the ESVM classifiers from different views based on the complementary principle. In addition, we further extend our EMVDG framework to exemplar-based multi-view domain
Energy Technology Data Exchange (ETDEWEB)
Zhmetko, D.N., E-mail: sergey.zhmetko@gmail.com [Department of Physics, Zaporizhzhya National University, 66 Zhukovsky Street, 69063 Zaporizhzhya (Ukraine); Zhmetko, S.D. [Department of Physics, Zaporizhzhya National University, 66 Zhukovsky Street, 69063 Zaporizhzhya (Ukraine); Troschenkov, Y.N. [Institute for Magnetism, 36-b Vernadsky Boulevard, 03142 Kyiv (Ukraine); Matsura, A.V. [Department of Physics, Zaporizhzhya National University, 66 Zhukovsky Street, 69063 Zaporizhzhya (Ukraine)
2013-08-15
The frequency dependence of asymmetry of the domain walls velocity relative to the middle plane of amorphous ribbon is investigated. An additional pressure of the same direction acting on each domain wall caused by dependence of eddy current damping on the coordinate of the domain wall is revealed. The microscopic mechanisms of this additional pressure are considered. - Highlights: ► Additional pressure on the domain wall, caused by inhomogeneity of its damping. ► Asymmetry of the coordinate of the nucleation of domain walls and their damping. ► Connection between the components of additional pressure and its direction. ► Interaction of domain walls with the surface defects of the amorphous ribbon.
Highly sensitive immunoassay based on E. coli with autodisplayed Z-domain
International Nuclear Information System (INIS)
Jose, Joachim; Park, Min; Pyun, Jae-Chul
2010-01-01
The Z-domain of protein A is known to bind specifically to the Fc region of antibodies (IgGs). In this work, the Z-domain of protein A was expressed on the outer membrane of Escherichia coli using 'Autodisplay' technology, as a fusion protein with an autotransporter domain. The E. coli with the autodisplayed Z-domain was applied to a sandwich-type immunoassay as a solid support for detection antibodies against a target analyte. As a feasibility demonstration of the E. coli based immunoassay, a C-reactive protein (CRP) assay was carried out using E. coli with the autodisplayed Z-domain. The limit of detection (LOD) and binding capacity of the E. coli based immunoassay indicated far higher sensitivity than conventional ELISA. This far higher sensitivity compared with conventional ELISA was explained by the orientation control of the immobilized antibodies and the mobility of E. coli in the assay matrix. From the test results of sera from 45 rheumatoid arthritis (RA) patients and 15 healthy samples, a cut-off value was established to give optimal sensitivity and selectivity for RA. The CRP test result of each individual sample was compared with ELISA, the reference method for RA diagnosis. This work proved the E. coli with the Z-domain to be feasible for medical diagnosis based on a sandwich-type immunoassay.
Robust Visual Knowledge Transfer via Extreme Learning Machine Based Domain Adaptation.
Zhang, Lei; Zhang, David
2016-08-10
We address the problem of visual knowledge adaptation by leveraging labeled patterns from a source domain and a very limited number of labeled instances in a target domain to learn a robust classifier for visual categorization. This paper proposes a new extreme learning machine based cross-domain network learning framework, called Extreme Learning Machine (ELM) based Domain Adaptation (EDA). It allows us to learn a category transformation and an ELM classifier with random projection by minimizing the -norm of the network output weights and the learning error simultaneously. The unlabeled target data, as useful knowledge, are also integrated as a fidelity term to guarantee stability during cross-domain learning. The framework minimizes the matching error between the learned classifier and a base classifier, such that many existing classifiers can be readily incorporated as base classifiers. The network output weights can not only be determined analytically but are also transferrable. Additionally, a manifold regularization with a Laplacian graph is incorporated, which benefits semi-supervised learning. We further extend the approach to a model of multiple views, referred to as MvEDA. Experiments on benchmark visual datasets for video event recognition and object recognition demonstrate that our EDA methods outperform existing cross-domain learning methods.
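The base ELM mechanism this framework builds on (a fixed random hidden projection followed by analytically determined output weights) can be sketched in a few lines. This is a hedged illustration only, not the paper's EDA method: the domain-adaptation, fidelity, and manifold-regularization terms are omitted, and all names, sizes, and parameters below are illustrative.

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0, reg=1e-3):
    """Train a minimal ELM: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # random (untrained) feature map
    # Ridge-regularised least-squares solution for the output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return (W, b, beta)

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: label is the sign of the first coordinate.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
Y = (X[:, 0] > 0).astype(float).reshape(-1, 1)
model = elm_train(X, Y)
acc = np.mean((elm_predict(model, X) > 0.5) == Y)
```

The "analytically determined" property from the abstract is visible in `elm_train`: only the linear output layer is solved, in one `np.linalg.solve` call.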
Combined failure acoustical diagnosis based on improved frequency domain blind deconvolution
International Nuclear Information System (INIS)
Pan, Nan; Wu, Xing; Chi, YiLin; Liu, Xiaoqin; Liu, Chang
2012-01-01
To address combined-failure extraction for gearboxes in a complex sound field, an acoustic fault detection method based on improved frequency-domain blind deconvolution was proposed. Following the frequency-domain blind deconvolution workflow, morphological filtering was first used to extract modulation features embedded in the observed signals, then the CFPA algorithm was employed to perform complex-domain blind separation, and finally the J-divergence of the spectra was employed as a distance measure to resolve the permutation ambiguity. Experiments using real machine sound signals were carried out. The results demonstrate that this algorithm can be efficiently applied to gearbox combined-failure detection in practice.
Simulation of power fluctuation of wind farms based on frequency domain
DEFF Research Database (Denmark)
Lin, Jin; Sun, Yuanzhang; Li, Guojie
2011-01-01
, however, is incapable of completely explaining the physical mechanism of randomness of power fluctuation. To remedy such a situation, fluctuation modeling based on the frequency domain is proposed. The frequency domain characteristics of stochastic fluctuation on large wind farms are studied using...... the power spectral density of wind speed, the frequency domain model of a wind power generator and the information on weather and geography of the wind farms. The correctness and effectiveness of the model are verified by comparing the measurement data with simulation results of a certain wind farm. © 2011...
Policy-Based Negotiation Engine for Cross-Domain Interoperability
Vatan, Farrokh; Chow, Edward T.
2012-01-01
A successful policy negotiation scheme for Policy-Based Management (PBM) has been implemented. Policy negotiation is the process of determining the "best" communication policy that all of the parties involved can agree on. Specifically, the problem is how to reconcile the various (and possibly conflicting) communication protocols used by different divisions. The solution must use protocols available to all parties involved, and should attempt to do so in the best way possible. Which protocols are commonly available, and what "best" means, will depend on the parties involved and their individual communication priorities.
DRO: domain-based route optimization scheme for nested mobile networks
Directory of Open Access Journals (Sweden)
Chuang Ming-Chin
2011-01-01
Full Text Available Abstract The network mobility (NEMO) basic support protocol is designed to support NEMO management, and to ensure communication continuity between nodes in mobile networks. However, in nested mobile networks, NEMO suffers from the pinball routing problem, which results in long packet transmission delays. To solve the problem, we propose a domain-based route optimization (DRO) scheme that incorporates a domain-based network architecture and ad hoc routing protocols for route optimization. DRO also improves the intra-domain handoff performance, reduces the convergence time during route optimization, and avoids the out-of-sequence packet problem. A detailed performance analysis and simulations were conducted to evaluate the scheme. The results demonstrate that DRO outperforms existing mechanisms in terms of packet transmission delay (i.e., better route optimization), intra-domain handoff latency, convergence time, and packet tunneling overhead.
de Almeida, Andre L. F.; Luciani, Xavier; Stegeman, Alwin; Comon, Pierre
This work proposes a new tensor-based approach to solve the problem of blind identification of underdetermined mixtures of complex-valued sources exploiting the cumulant generating function (CGF) of the observations. We show that a collection of second-order derivatives of the CGF of the
Czech Academy of Sciences Publication Activity Database
Asadi, Z.; Zeinali, A.; Dušek, Michal; Eigner, Václav
2014-01-01
Roč. 46, č. 12 (2014), s. 718-729 ISSN 0538-8066 R&D Projects: GA ČR(CZ) GAP204/11/0809 Institutional support: RVO:68378271 Keywords : uranyl * Schiff base * kinetics * anticancer activity Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.517, year: 2014
Inverse scale space decomposition
DEFF Research Database (Denmark)
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas; Osher, Stanley; Schönlieb, Carola-Bibiane
2012-01-01
Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems
Directory of Open Access Journals (Sweden)
M. Imran
2017-09-01
Full Text Available A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, while the human visual system model and a fuzzy inference system help to improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that areas more prone to noise can carry more information than less prone areas. To achieve security, the location of watermark embedding is kept secret and used as a key at watermark extraction time; for capacity, both singular values and singular vectors are involved in the watermark embedding process. As a result, four conflicting requirements (imperceptibility, robustness, security, and capacity) are achieved, as the results suggest. Both subjective and objective methods are employed to examine the performance of the proposed schemes. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis of imperceptibility, peak signal-to-noise ratio, structural similarity index, visual information fidelity, and normalized color difference are used; for objective analysis of robustness, normalized correlation, bit error rate, normalized Hamming distance, and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed schemes are compared with state-of-the-art watermarking techniques and are found to perform better.
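As a toy sketch of the singular-value embedding idea mentioned above: a watermark can be added to a host matrix's singular values and later recovered using the original singular values as the key. This is a hedged illustration only; the paper's full scheme additionally uses PCA across color channels, a fuzzy inference system for the scaling factor, singular vectors, and secret embedding locations, none of which are reproduced here, and the matrices below are synthetic.

```python
import numpy as np

def rand_orth(n, seed):
    """Random orthogonal matrix via QR of a Gaussian matrix."""
    q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((n, n)))
    return q

def embed(host, watermark, alpha=0.05):
    """Perturb the host's singular values by a scaled watermark."""
    U, s, Vt = np.linalg.svd(host, full_matrices=False)
    s_marked = s + alpha * watermark
    return U @ np.diag(s_marked) @ Vt, s   # original s acts as the extraction key

def extract(marked, key_s, alpha=0.05):
    """Recover the watermark from the marked image's singular values."""
    s_marked = np.linalg.svd(marked, compute_uv=False)
    return (s_marked - key_s) / alpha

# Synthetic host with well-separated singular values, so the small
# perturbation cannot reorder them and extraction stays exact.
U0, V0 = rand_orth(6, 1), rand_orth(6, 2)
host = U0 @ np.diag([9.0, 7.0, 5.0, 3.0, 2.0, 1.0]) @ V0.T
watermark = np.random.default_rng(0).random(6) * 0.5

marked, key = embed(host, watermark)
recovered = extract(marked, key)
```

Because the perturbation (at most 0.025 here) is smaller than the gaps between singular values, `np.linalg.svd` returns the perturbed values in the same order, and subtraction recovers the watermark exactly.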
Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo
2015-05-01
Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with ensemble empirical mode decomposition (EEMD) is presented for forecasting medium- and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique to attain deeper insight into the data characteristics. Then all IMF components and the residue are predicted, respectively, through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series, from Biuliuhe and Mopanshan in China, are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and that the proposed EEMD-ANN model attains significant improvement over the ANN approach in medium- and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
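The decompose-forecast-recombine scheme can be illustrated with a minimal stand-in: a moving-average split replaces EEMD and a least-squares AR(2) fit replaces each ANN. Both substitutions, and all names and numbers below, are illustrative assumptions; a real implementation would use an EEMD library and trained networks.

```python
import numpy as np

def decompose(series, window=5):
    """Split a series into a smooth 'trend' component and a residual
    (a crude stand-in for the EEMD IMF/residue decomposition)."""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    return trend, series - trend

def ar_forecast(component, order=2):
    """One-step-ahead forecast via a least-squares AR(order) fit
    (a stand-in for the per-component ANN models)."""
    X = np.column_stack([component[i:len(component) - order + i]
                         for i in range(order)])
    y = component[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return component[-order:] @ coeffs

rng = np.random.default_rng(0)
runoff = np.sin(np.linspace(0, 6 * np.pi, 120)) + 0.1 * rng.standard_normal(120)

trend, residual = decompose(runoff)
# Forecast each component separately, then sum -- the ensemble idea above.
forecast = ar_forecast(trend) + ar_forecast(residual)
```

The key structural property carries over from the abstract: the components sum back to the original series, so summing per-component forecasts yields a forecast of the original series.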
International Nuclear Information System (INIS)
Wu, Xiaoyang; Liu, Tianyou
2010-01-01
Reflections from a hydrocarbon-saturated zone are generally expected to have a tendency toward low frequency. Previous work has shown the application of seismic spectral decomposition to low-frequency shadow detection. In this paper, we further analyse the characteristics of spectral amplitude in fractured sandstone reservoirs with different fluid saturations using a Wigner–Ville distribution (WVD) based method. We describe the geometric structure of the cross-terms due to the bilinear nature of the WVD and eliminate them using the smoothed pseudo-WVD (SPWVD) with time- and frequency-independent Gaussian kernels as smoothing windows. The SPWVD is finally applied to seismic data from the West Sichuan depression. We focus our study on the comparison of SPWVD spectral amplitudes resulting from different fluid contents. It shows that prolific gas reservoirs feature a higher peak spectral amplitude at a higher peak frequency, which attenuates faster than in low-quality gas reservoirs and dry or wet reservoirs. This can be regarded as a spectral attenuation signature for future exploration in the study area
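A minimal discrete Wigner-Ville distribution (before pseudo-smoothing) can be computed directly from the instantaneous autocorrelation. This hedged sketch shows the mechanism only, not the SPWVD with Gaussian smoothing kernels used in the paper; note the well-known frequency doubling of the WVD kernel, so a tone at normalized frequency f peaks at FFT bin 2fN.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Row n holds the spectrum of the instantaneous autocorrelation at time n."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)           # largest symmetric lag at time n
        kernel = np.zeros(N, dtype=complex)
        for tau in range(-tau_max, tau_max + 1):
            kernel[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.real(np.fft.fft(kernel))    # Hermitian kernel -> real spectrum
    return W

N = 64
t = np.arange(N)
sig = np.exp(2j * np.pi * 0.25 * t)           # analytic tone at 0.25 cycles/sample
W = wigner_ville(sig)
peak_bin = int(np.argmax(W[N // 2]))          # doubling: expect bin 2*0.25*N = 32
```

Smoothing this raw WVD in time and frequency with separable Gaussian windows, as the abstract describes, is what suppresses the cross-terms for multi-component signals.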
Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun
2017-06-01
Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise for ESPI fringe images with various densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise; a low-density fringe image is decomposed into low-density fringes and noise; and a high-density fringe image is decomposed into high-density fringes and noise. We give suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. We then construct several models and numerical algorithms for ESPI fringe images with various densities, and investigate the performance of these models through extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and the coherence enhancing diffusion partial differential equation filter, which may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of experimentally obtained ESPI fringe images with poor quality. The experimental results demonstrate the performance of our proposed method.
Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K
2018-06-01
Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion artifacts, and electrode drifts whose effective elimination remains an open problem. A common methodology is proposed by combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD) to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely, preprandial, postprandial immediately, and postprandial 2 h after food for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of intrinsic mode functions that are obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation on the proposed ICA-EEMD method reveals its ability to provide a higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals for all the cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals to be used in clinical practice.
International Nuclear Information System (INIS)
Chang, C C; Hsiao, T C; Kao, S C; Hsu, H Y
2014-01-01
Arterial blood pressure (ABP) is an important indicator of cardiovascular circulation and presents various intrinsic regulations. It has been found that the intrinsic characteristics of blood vessels can be assessed quantitatively by ABP analysis (called reflection wave analysis (RWA)), but conventional RWA is insufficient for assessment during non-stationary conditions, such as the Valsalva maneuver. Recently, a novel adaptive method called empirical mode decomposition (EMD) was proposed for non-stationary data analysis. This study proposed a RWA algorithm based on EMD (EMD-RWA). A total of 51 subjects participated in this study, including 39 healthy subjects and 12 patients with autonomic nervous system (ANS) dysfunction. The results showed that EMD-RWA provided a reliable estimation of reflection time in baseline and head-up tilt (HUT). Moreover, the estimated reflection time is able to assess the ANS function non-invasively, both in normal, healthy subjects and in the patients with ANS dysfunction. EMD-RWA provides a new approach for reflection time estimation in non-stationary conditions, and also helps with non-invasive ANS assessment. (paper)
Post-processing of Deep Web Information Extraction Based on Domain Ontology
Directory of Open Access Journals (Sweden)
PENG, T.
2013-11-01
Full Text Available Many methods are utilized to extract and process query results in the deep Web, relying on the different structures of Web pages and various database design patterns. However, some semantic meanings and relations are ignored. In this paper, we present an approach for post-processing deep Web query results based on domain ontology, which can utilize these semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks relevant to a specific domain after reducing noisy nodes. The feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology for books, which not only removes the dependence on Web page structure when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting basic information from Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. Precision and recall results show that our proposed method is feasible and efficient.
International Nuclear Information System (INIS)
Tanaka, Yuho; Uruma, Kazunori; Furukawa, Toshihiro; Nakao, Tomoki; Izumi, Kenya; Utsumi, Hiroaki
2017-01-01
This paper deals with an analysis problem for diffusion-ordered NMR spectroscopy (DOSY). DOSY is formulated as a matrix factorization problem for a given observed matrix. A well-known approach to this problem is the direct exponential curve resolution algorithm (DECRA). DECRA is based on singular value decomposition; its advantage is that no initial value is required. However, DECRA requires a long calculation time that depends on the size of the observed matrix, owing to the singular value decomposition, and this is a serious problem in practical use. This paper therefore proposes a new analysis algorithm for DOSY that achieves a short calculation time. To solve the matrix factorization for DOSY without singular value decomposition, this paper focuses on the size of the observed matrix. The observed matrix in DOSY is a rectangular matrix with more columns than rows, owing to measuring-time limitations; thus, the proposed algorithm transforms the given observed matrix into a small observed matrix. The proposed algorithm applies eigenvalue decomposition and a difference approximation to the small observed matrix, and the matrix factorization problem for DOSY is solved. A simulation and a data analysis show that the proposed algorithm achieves a shorter calculation time than DECRA as well as analysis results similar to those of DECRA. (author)
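The cost-saving idea of working with a small transformed matrix rather than the full wide matrix can be illustrated with the standard Gram-matrix identity. This is a hedged sketch of the general principle only, not DECRA or the authors' exact algorithm: for a wide matrix X with far more columns than rows, the singular values follow from an eigendecomposition of the small matrix X Xᵀ.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 500))          # 6 rows, 500 columns: "wide"

# Small 6x6 Gram matrix; eigh on it is far cheaper than svd on X itself.
gram = X @ X.T
eigvals, eigvecs = np.linalg.eigh(gram)
sv_from_eig = np.sqrt(np.sort(eigvals)[::-1])   # descending singular values

# Reference: singular values from a full SVD of the wide matrix.
sv_from_svd = np.linalg.svd(X, compute_uv=False)
```

The two routes agree exactly (the Gram matrix is positive semidefinite, so the square roots are well defined); the paper's contribution is layering exponential-decay modelling on top of this kind of size reduction.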
Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier
2017-02-15
The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs, which increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai in China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
Iron-based Nanocomposite Synthesised by Microwave Plasma Decomposition of Iron Pentacarbonyl
Czech Academy of Sciences Publication Activity Database
David, Bohumil; Pizúrová, Naděžda; Schneeweiss, Oldřich; Hoder, T.; Kudrle, V.; Janča, J.
2007-01-01
Roč. 263, - (2007), s. 147-152 ISSN 1012-0386. [Diffusion and Thermodynamics of Materials /IX/. Brno, 13.09.2006-15.09.2006] R&D Projects: GA ČR GA202/04/0221 Institutional research plan: CEZ:AV0Z20410507 Keywords : iron-based nanopowder * synthesis * microwave plasma method Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.483, year: 2005 http://www.scientific.net/3-908451-35-3/3.html
A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.
Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi
2016-02-01
In extracellular neural recording experiments, detecting neural spikes is an important step for reliable information decoding. A successful implementation in integrated circuits can achieve substantial data volume reduction, potentially enabling wireless operation and closed-loop systems. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method named the exponential component-polynomial component (EC-PC) algorithm. This algorithm features reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are configured on-chip automatically based on input data, which avoids manual parameter tuning. The chip has been tested with both in vivo experiments for functional verification and bench-top experiments for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm^2 for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit-card-sized prototype board has been developed to provide power and data management through a USB port.
Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition
Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.
2005-12-01
Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been used extensively. The difficulty common to all methods is determining sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform and a nonlinear model, so as to employ the merits of both. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are simulated by the nonlinear model. The hybrid methodology is formulated to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that the wavelet-based time series model can be utilized for simulating and forecasting hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
Directory of Open Access Journals (Sweden)
Wei Kong
2014-01-01
Full Text Available Alzheimer's disease (AD) is the most common form of dementia and leads to irreversible neurodegenerative damage of the brain. Finding the dynamic responses of genes, signaling proteins, transcription factor (TF) activities, and regulatory networks across the progressive deterioration of AD would represent a significant advance in discovering the pathogenesis of AD. However, high-throughput technologies for measuring TF activities are not yet available on a genome-wide scale. In this study, based on DNA microarray gene expression data and a priori information on TFs, the network component analysis (NCA) algorithm is applied to determine the TF activities and regulatory influences on target genes (TGs) in incipient, moderate, and severe AD. Based on that, the dynamic gene regulatory networks of the deteriorative course of AD were reconstructed. To select significant genes that are differentially expressed in different courses of AD, independent component analysis (ICA), which outperforms traditional clustering methods and can successfully group one gene into different meaningful biological processes, was used. The molecular biological analysis showed that changes in TF activities and interactions of signaling proteins in mitosis, the cell cycle, immune response, and inflammation play an important role in the deterioration of AD.
Shared Reed-Muller Decision Diagram Based Thermal-Aware AND-XOR Decomposition of Logic Circuits
Directory of Open Access Journals (Sweden)
Apangshu Das
2016-01-01
Full Text Available The increased number of complex functional units exerts a high power density within a very-large-scale integration (VLSI) chip, which results in overheating. Power density translates directly into temperature, which reduces the yield of the circuit. An adverse effect of power-density reduction is an increase in area, so there is a trade-off between area and power density. In this paper, we introduce a Shared Reed-Muller Decision Diagram (SRMDD) based on fixed-polarity AND-XOR decomposition to represent multi-output Boolean functions. By recursively applying transformations and reductions, we obtain a compact SRMDD. A heuristic based on a Genetic Algorithm (GA) increases the sharing of product terms by judicious choice of the polarity of input variables in the SRMDD expansion, and a suitable area and power-density trade-off is enumerated. This is the first effort to incorporate power density as a measure of temperature estimation in the AND-XOR expansion process. The results of logic synthesis are incorporated with physical design in the CADENCE digital synthesis tool to obtain the floor-plan silicon area and power profile. The proposed thermal-aware synthesis has been validated by obtaining the absolute temperature of the synthesized circuits using the HotSpot tool. We have experimented with 29 benchmark circuits. The minimized AND-XOR circuit realization shows average savings of up to 15.23% in silicon area and up to 17.02% in temperature over sum-of-products (SOP) based logic minimization.
Yokoyama, Takao; Miura, Fumihito; Araki, Hiromitsu; Okamura, Kohji; Ito, Takashi
2015-08-12
Base-resolution methylome data generated by whole-genome bisulfite sequencing (WGBS) is often used to segment the genome into domains with distinct methylation levels. However, most segmentation methods include many parameters to be carefully tuned and/or fail to exploit the unsurpassed resolution of the data. Furthermore, there is no simple method that displays the composition of the domains to grasp global trends in each methylome. We propose to use changepoint detection for domain demarcation based on base-resolution methylome data. While the proposed method segments the methylome in a largely comparable manner to conventional approaches, it has only a single parameter to be tuned. Furthermore, it fully exploits the base-resolution of the data to enable simultaneous detection of methylation changes in even contrasting size ranges, such as focal hypermethylation and global hypomethylation in cancer methylomes. We also propose a simple plot termed methylated domain landscape (MDL) that globally displays the size, the methylation level and the number of the domains thus defined, thereby enabling one to intuitively grasp trends in each methylome. Since the pattern of MDL often reflects cell lineages and is largely unaffected by data size, it can serve as a novel signature of methylome. Changepoint detection in base-resolution methylome data followed by MDL plotting provides a novel method for methylome characterization and will facilitate global comparison among various WGBS data differing in size and even species origin.
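A minimal single-changepoint detector conveys the idea of demarcating methylation domains by detecting changes in level. This is illustrative only: the paper's method handles many changepoints at base resolution with a single tuning parameter, whereas the sketch below finds one split by minimising within-segment squared error on an invented toy track.

```python
import numpy as np

def best_changepoint(levels):
    """Return the split index minimising the total within-segment
    squared error over all possible single-split positions."""
    levels = np.asarray(levels, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(1, len(levels)):
        left, right = levels[:k], levels[k:]
        cost = (((left - left.mean()) ** 2).sum()
                + ((right - right.mean()) ** 2).sum())
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(0)
# Hypothetical track: a hypomethylated domain followed by a hypermethylated one.
track = np.concatenate([0.2 + 0.05 * rng.standard_normal(50),
                        0.8 + 0.05 * rng.standard_normal(50)])
cp = best_changepoint(track)   # should land near index 50
```

Applying such a detector recursively (binary segmentation) is one standard way to extend the single-split idea to whole-methylome domain demarcation.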
Energy Technology Data Exchange (ETDEWEB)
Ahn, Hyeong Joon [Dept. of Mechanical Engineering, Soongsil University, Seoul (Korea, Republic of); Kim, Chan Jung [Dept. of Mechanical Design Engineering, Pukyong National University, Busan(Korea, Republic of)
2016-12-15
It is very difficult to directly identify an unstable system with uncertain dynamics from frequency-domain input-output data. In such cases, closed-loop frequency responses calculated using a fictitious feedback can be more identifiable than open-loop data. This paper presents a frequency-domain indirect identification of AMB rotor systems based on a fictitious proportional feedback gain (FPFG). The closed-loop effect due to the FPFG can enhance the detectability of the system by moving the system poles, and significantly weight the target mode in the frequency domain. The effectiveness of the proposed identification method was verified through the frequency-domain identification of active magnetic bearing rotor systems.
Dong, Boliang; Peng, Haihui; Motika, Stephen E; Shi, Xiaodong
2017-08-16
The discovery of photoassisted diazonium activation toward gold(I) oxidation greatly extended the scope of gold redox catalysis by avoiding the use of a strong oxidant. Some practical issues that limit the application of this new type of chemistry are the relatively low efficiency (long reaction time and low conversion) and the strict reaction condition control that is necessary (degassing and an inert reaction environment). Herein, an alternative photo-free condition has been developed through Lewis base induced diazonium activation. With this method, an unreactive Au(I) catalyst was used in combination with Na2CO3 and diazonium salts to produce a Au(III) intermediate. The efficient activation of various substrates, including alkynes, alkenes and allenes, was achieved, followed by rapid Au(III) reductive elimination, which yielded the C-C coupling products in good to excellent yields. Relative to the previously reported photoactivation method, our approach offers greater efficiency and versatility through faster reaction rates and broader reaction scope. Challenging substrates such as electron-rich/neutral allenes, which could not be activated under the photoinitiation conditions (<5 % yield), could be activated to subsequently yield the desired coupling products in good to excellent yield. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A new physics-based method for detecting weak nuclear signals via spectral decomposition
International Nuclear Information System (INIS)
Chan, Kung-Sik; Li, Jinzheng; Eichinger, William; Bai, Erwei
2012-01-01
We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra, obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclei. We propose a Poisson regression model for deducing which nuclides are present in the observed spectrum. In recognition that a radiation source generally comprises few nuclear materials, the underlying Poisson model is sparse, i.e. most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations using a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with SNR as low as -15 dB.
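The linear Poisson mixture view of the spectrum can be sketched with classical multiplicative (Richardson-Lucy/EM) updates followed by a presence threshold. This is a stand-in for the authors' sparsity-penalized iterative algorithm, not a reproduction of it; the library, peak shapes, and threshold below are invented for illustration.

```python
import numpy as np

def fit_mixture(A, counts, iters=200):
    """Multiplicative (EM / Richardson-Lucy) updates maximising the
    Poisson likelihood of counts ~ Poisson(A @ beta) with beta >= 0."""
    beta = np.ones(A.shape[1])
    col_sums = A.sum(axis=0)
    for _ in range(iters):
        rate = np.maximum(A @ beta, 1e-12)   # guard against division by zero
        beta *= (A.T @ (counts / rate)) / col_sums
    return beta

channels = np.arange(64)

def peak(center, width=2.0):
    """Unit-area Gaussian peak standing in for a single-gamma subspectrum."""
    p = np.exp(-0.5 * ((channels - center) / width) ** 2)
    return p / p.sum()

# Hypothetical 3-nuclide library; nuclides 0 and 2 present, nuclide 1 absent.
library = np.column_stack([peak(10.0), peak(30.0), peak(50.0)])
true_beta = np.array([500.0, 0.0, 300.0])
counts = library @ true_beta              # noiseless expected spectrum

beta_hat = fit_mixture(library, counts)
present = beta_hat > 1.0                  # simple presence threshold
```

With well-separated peaks and noiseless counts the coefficient of the absent nuclide collapses toward zero, mirroring the sparsity the penalized-likelihood formulation enforces explicitly on noisy, overlapping spectra.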
Gu, X.; Blackmore, K. L.
2015-01-01
This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…
Towards tool support for spreadsheet-based domain-specific languages
DEFF Research Database (Denmark)
Adam, Marian Sorin; Schultz, Ulrik Pagh
2015-01-01
Spreadsheets are commonly used by non-programmers to store data in a structured form; in some cases this data can be considered a program in a domain-specific language (DSL). Unlike ordinary text-based domain-specific languages, however, there is currently no formalism for expressing the syntax of such spreadsheet-based DSLs (SDSLs), and there is no tool support for automatically generating language infrastructure such as parsers and IDE support. In this paper we define a simple notion of two-dimensional grammars for SDSLs, and show how such grammars can be used for automatically...
Huang, Weichuan; Liu, Yukuai; Luo, Zhen; Hou, Chuangming; Zhao, Wenbo; Yin, Yuewei; Li, Xiaoguang
2018-06-01
The ferroelectric domain reversal dynamics and the corresponding resistance switching, as well as the memristive behaviors, in epitaxial BiFeO3 (BFO, ~150 nm) based multiferroic heterojunctions were systematically investigated. The ferroelectric domain reversal dynamics could be described by the nucleation-limited-switching model with a Lorentzian distribution of logarithmic domain-switching times. By engineering the domain states, multiple and even continuously tunable resistance states, i.e. memristive states, could be achieved in a non-volatile manner. The resistance switching speed can be as fast as 30 ns in the BFO-based multiferroic heterojunctions with a write voltage of ~20 V. By reducing the thickness of BFO, the La0.6Sr0.4MnO3/BFO (~5 nm)/La0.6Sr0.4MnO3 multiferroic tunnel junction (MFTJ) shows an even quicker switching speed (20 ns) at a much lower operation voltage (~4 V). Importantly, the MFTJ exhibits a tunable interfacial magnetoelectric coupling related to the ferroelectric domain switching dynamics. These findings enrich the potential applications of multiferroic BFO based devices in high-speed, low-power, and high-density memories as well as future neuromorphic computational architectures.
A study on group decision-making based fault multi-symptom-domain consensus diagnosis
International Nuclear Information System (INIS)
He Yongyong; Chu Fulei; Zhong Binglin
2001-01-01
In the field of fault diagnosis for rotating machines, conventional methods and neural network based methods are mainly single-symptom-domain methods, whose diagnosis accuracy is not always satisfactory. In this paper, in order to utilize multiple symptom domains to improve diagnosis accuracy, the idea of fault multi-symptom-domain consensus diagnosis is developed. From the point of view of group decision-making, two particular multi-symptom-domain diagnosis strategies are proposed. The proposed strategies use BP (Back-Propagation) neural networks as diagnosis models in the various symptom domains, and then combine the outputs of these networks by two combination schemes, based on Dempster-Shafer evidence theory and fuzzy integral theory, respectively. Finally, a case study pertaining to fault diagnosis for rotor-bearing systems is given in detail, and the results show that the proposed diagnosis strategies are feasible and more efficient than conventional stacked-vector methods.
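One of the two combination schemes mentioned, Dempster-Shafer evidence combination, can be illustrated with a minimal implementation of Dempster's rule. The fault labels and mass values below are invented for the example and are not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)
    with Dempster's rule, renormalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb        # mass falling on the empty set
    k = 1.0 - conflict                     # normalization (assumes k > 0)
    return {h: v / k for h, v in combined.items()}
```

Applied to two networks' outputs over hypothetical faults "unbalance" and "misalign", the combined masses sharpen toward the hypothesis both evidence sources support.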
Diaz Galicia, Miriam Escarlet
2018-05-01
Protein-protein interactions modulate cellular processes in health and disease. However, tracing weak or rare associations or dissociations of proteins is not a trivial task. Kinases are often regulated through interaction partners and, at the same time, themselves regulate cellular interaction networks. Here, the use of kinase domains for creating a synthetic sensor device that reads low-concentration protein-protein interactions and amplifies them to a higher-concentration interaction, which is then translated into a FRET (Fluorescence Resonance Energy Transfer) signal, is proposed. To this end, DNA constructs for interaction amplification (split kinases), positive controls (intact kinase domains), scaffolding proteins, and phosphopeptide-SH2-domain modules for reading kinase activity were assembled, and expression protocols for fusion proteins containing Lyn, Src, and Fak kinase domains in bacterial and cell-free systems were optimized. Two non-overlapping methods for measuring the kinase activity of these proteins were also established and, finally, a protein-fragment complementation assay with the split-kinase constructs was tested. In conclusion, it has been demonstrated that features such as codon optimization, vector design, and expression conditions have an impact on the expression yield and activity of kinase-based proteins. Furthermore, it has been found that the defined PURE cell-free system is insufficient for the active expression of catalytic kinase domains. In contrast, bacterial co-expression with phosphatases produced active kinase fusion proteins for two out of the three tested tyrosine kinase domains.
Sengupta, Tapan K.; Gullapalli, Atchyut
2016-11-01
A spinning cylinder rotating about its axis experiences a transverse force/lift; an account of this basic aerodynamic phenomenon is known in textbooks as the Robins-Magnus effect. Prandtl studied this flow with an inviscid irrotational model and postulated an upper limit on the lift experienced by the cylinder at a critical rotation rate. This non-dimensional rate is the ratio of the surface speed due to rotation to the oncoming free stream speed. Prandtl predicted a maximum lift coefficient of CLmax = 4π at the critical rotation rate of two. In recent times, evidence has shown violations of this upper limit, as in the experiments of Tokumaru and Dimotakis ["The lift of a cylinder executing rotary motions in a uniform flow," J. Fluid Mech. 255, 1-10 (1993)] and in the computed solution of Sengupta et al. ["Temporal flow instability for Magnus-robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)]. In the latter reference, this was explained as a temporal instability affecting the flow at higher Reynolds numbers and rotation rates (>2). Here, we analyze the flow past a rotating cylinder at a super-critical rotation rate (=2.5) by enstrophy-based proper orthogonal decomposition (POD) of direct simulation results. POD identifies the most energetic modes and enables flow field reconstruction with a reduced number of modes. One of the motivations for the present study is to explain the shedding of puffs of vortices at low Reynolds number (Re = 60), for this high rotation rate, due to an instability originating in the vicinity of the cylinder, using the Navier-Stokes equation (NSE) computed from t = 0 to t = 300 following an impulsive start. This instability is also explained through the disturbance mechanical energy equation, which was established earlier in Sengupta et al. ["Temporal flow instability for Magnus-robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)].
International Nuclear Information System (INIS)
Wang, Ke; Wei, Yi-Ming
2016-01-01
Given that different energy inputs play different roles in production, and that energy policy decision making requires an evaluation of productivity change in individual energy inputs to provide insight into the scope for improving the utilization of specific energy inputs, this study develops, based on the Luenberger productivity indicator and data envelopment analysis models, an aggregated specific energy productivity indicator combining the individual energy input productivity indicators that account for the contributions of each specific energy input toward energy productivity change. In addition, these indicators can be further decomposed into four factors: pure efficiency change, scale efficiency change, pure technology change, and scale of technology change. These decompositions enable a determination of which specific energy input is the driving force of energy productivity change and which of the four factors is the primary contributor to energy productivity change. An empirical analysis of China's energy productivity change over the period 1997-2012 indicates that (i) China's energy productivity growth may be overestimated if energy consumption structure is omitted; (ii) with regard to the contribution of specific energy inputs toward energy productivity growth, oil and electricity show positive contributions, but coal and natural gas show negative contributions; (iii) energy-specific productivity changes are mainly caused by technical changes rather than efficiency changes; and (iv) the Porter Hypothesis is partially supported in China, in that carbon emissions control regulations may lead to energy productivity growth. - Highlights: • An energy input specific Luenberger productivity indicator is proposed. • It enables examination of the contribution of specific energy input productivity change. • It can be decomposed to identify pure and scale efficiency changes, as well as pure and scale technical changes. • China's energy productivity growth may
A developmental screening tool for toddlers with multiple domains based on Rasch analysis.
Hwang, Ai-Wen; Chou, Yeh-Tai; Hsieh, Ching-Lin; Hsieh, Wu-Shiun; Liao, Hua-Fang; Wong, Alice May-Kuen
2015-01-01
Using multidomain developmental screening tools is a feasible method for pediatric health care professionals to identify children at risk of developmental problems in multiple domains simultaneously. The purpose of this study was to develop a Rasch-based tool for Multidimensional Screening in Child Development (MuSiC) for children aged 0-3 years. The MuSiC was developed by constructing an item bank based on three commonly used screening tools and validating it against developmental status (at risk for delay or not) in five developmental domains. Parents of a convenience sample of 632 children (aged 3-35.5 months) with and without developmental delays responded to items from the three screening tools funded by health authorities in Taiwan. The item bank was determined by item fit in Rasch analysis for each of the five developmental domains (cognitive skills, language skills, gross motor skills, fine motor skills, and socioadaptive skills). Children's performance scores in logits derived from Rasch analysis were validated against developmental status for each domain using the area under receiver operating characteristic curves. MuSiC, a 75-item developmental screening tool covering five domains, was derived. The diagnostic validity of all five domains was acceptable for all stages of development, except for the infant stage (≤11 months and 15 days). MuSiC can be applied in well-child care visits as a universal screening tool for children aged 1-3 years on multiple domains. Items with sound validity for infants need to be further developed. Copyright © 2014. Published by Elsevier B.V.
Rawski, M.; Jozwiak, L.; Luba, T.
2001-01-01
The functional decomposition of binary and multi-valued discrete functions and relations has been gaining more and more recognition. It has important applications in many fields of modern digital system engineering, such as combinational and sequential logic synthesis for VLSI systems, pattern
Directory of Open Access Journals (Sweden)
Polyakov Vyacheslav Sergeevich
2012-07-01
The optimal composite additive that increases the stiffening time of the cement grout and improves the water resistance and the compressive strength of concrete is a composition of polyacrylates and polymethacrylates with products of thermal decomposition of polyamide-6 and low-molecular polyethylene in the weight ratio of 1:1:0.5.
Lemley, Todd A.
1996-11-01
The rapid change in the telecommunications environment is forcing carriers to re-assess not only their service offering, but also their network management philosophy. The competitive carrier environment has taken away the luxury of throwing technology at a problem by using legacy and proprietary systems and architectures. A more flexible management environment is necessary to gain and maintain operating margins in the new market era. Competitive forces are driving change that gives carriers more choices than those available in legacy and standards-based solutions alone. However, creating an operational support system (OSS) across this gap between legacy and standards has become as dynamic as the services it supports. A philosophy that helps to integrate the legacy and standards systems is domain management. Domain management relates to a specific service or market 'domain' and its associated operational support requirements. It supports a company's definition of its business model, which drives the definition of each domain. It also attempts to maximize current investment while injecting newly available technology in a practical way. The following paragraphs offer an overview of legacy systems, the standards-based philosophy, and the potential of domain management to help bridge the gap between the two types of systems.
Directory of Open Access Journals (Sweden)
Maria Grazia De Giorgi
2014-08-01
A high penetration of wind energy into the electricity market requires a parallel development of efficient wind power forecasting models. Different hybrid forecasting methods were applied to wind power prediction, using historical data and numerical weather predictions (NWP). A comparative study was carried out for the prediction of the power production of a wind farm located in complex terrain. The performance of Least-Squares Support Vector Machine (LS-SVM) with Wavelet Decomposition (WD) was evaluated at different time horizons and compared to hybrid Artificial Neural Network (ANN) based methods. The hybrid methods based on LS-SVM with WD mostly outperform the other methods. A decomposition of the commonly used root mean square error was beneficial for a better understanding of the origin of the differences between prediction and measurement and for comparing the accuracy of the different models. A sensitivity analysis was also carried out in order to underline the impact that each input had on the network training process for the ANN. In the case of ANN with the WD technique, the sensitivity analysis was repeated on each component obtained by the decomposition.
Evolving Rule-Based Systems in two Medical Domains using Genetic Programming
DEFF Research Database (Denmark)
Tsakonas, A.; Dounias, G.; Jantzen, Jan
2004-01-01
We demonstrate, compare and discuss the application of two genetic programming methodologies for the construction of rule-based systems in two medical domains: the diagnosis of Aphasia's subtypes and the classification of Pap-Smear Test examinations. The first approach consists of a scheme...
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix be created from the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
Directory of Open Access Journals (Sweden)
KIM, K.-H.
2011-02-01
The work in this paper pertains to domain-independent vocabulary generation and its use in a category-based small-footprint Language Model (LM). Two major constraints on conventional LMs in the embedded environment are the memory capacity limitation and data sparsity for domain-specific applications. This data sparsity adversely affects vocabulary coverage and LM performance. To overcome these constraints, we define a set of domain-independent categories using a Part-Of-Speech (POS) tagged corpus. We also generate a domain-independent vocabulary based on this set using the corpus and a knowledge base. Then, we propose a mathematical framework for a category-based LM using this set. In this LM, one word can be assigned multiple categories. In order to reduce its memory requirements, we propose a tree-based data structure. In addition, we determine the history length of the category n-gram and the independence assumption applied to category history generation. The proposed vocabulary generation method achieves at least 13.68% relative improvement in coverage for an SMS text corpus, where data are sparse due to the difficulties in data collection. The proposed category-based LM requires only 215 KB, which is 55% and 13% of the size of the conventional category-based LM and the word-based LM, respectively. It also improves performance, achieving 54.9% and 60.6% perplexity reductions compared to the conventional category-based LM and the word-based LM in terms of normalized perplexity.
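The tree-based storage of category n-grams can be illustrated with a count trie: each n-gram is one root-to-node path, and conditional probabilities come from ratios of path counts. This is a generic sketch of the data-structure idea, not the authors' implementation, and the category names are invented.

```python
def make_node():
    return {"count": 0, "children": {}}

class CategoryTrie:
    """Count trie for category n-grams (n <= 3): one path per n-gram."""
    def __init__(self):
        self.root = make_node()

    def add_sequence(self, cats, n=3):
        # register every n-gram (and its prefixes) starting at each position
        for i in range(len(cats)):
            node = self.root
            for c in cats[i:i + n]:
                node = node["children"].setdefault(c, make_node())
                node["count"] += 1

    def count(self, seq):
        if not seq:                        # total unigram count
            return sum(ch["count"] for ch in self.root["children"].values())
        node = self.root
        for c in seq:
            node = node["children"].get(c)
            if node is None:
                return 0
        return node["count"]

    def prob(self, history, cat):
        # maximum-likelihood P(cat | history) = C(history + cat) / C(history)
        denom = self.count(tuple(history))
        return self.count(tuple(history) + (cat,)) / denom if denom else 0.0
```

Storing counts on the path nodes is what keeps the footprint small: shared histories share prefixes instead of being duplicated per n-gram.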
Mathematical modelling of the decomposition of explosives
International Nuclear Information System (INIS)
Smirnov, Lev P
2010-01-01
Studies on mathematical modelling of the molecular and supramolecular structures of explosives and of the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that the solution of problems related to the decomposition kinetics of explosives requires a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry instead of an empirical approach.
Recent progress in synchrotron-based frequency-domain Fourier-transform THz-EPR.
Nehrkorn, Joscha; Holldack, Karsten; Bittl, Robert; Schnegg, Alexander
2017-07-01
We describe frequency-domain Fourier-transform THz-EPR as a method to assign spin-coupling parameters of high-spin (S>1/2) systems with very large zero-field splittings. The instrumental foundations of synchrotron-based FD-FT THz-EPR are presented, alongside a discussion of frequency-domain EPR simulation routines. The capabilities of this approach are demonstrated for selected mono- and multinuclear HS systems. Finally, we discuss remaining challenges and give an outlook on the future prospects of the technique. Copyright © 2017 Elsevier Inc. All rights reserved.
Nonlinear System Identification via Basis Functions Based Time Domain Volterra Model
Directory of Open Access Journals (Sweden)
Yazid Edwar
2014-07-01
This paper proposes a basis-functions-based time-domain Volterra model for nonlinear system identification. The Volterra kernels are expanded using complex exponential basis functions and estimated via a genetic algorithm (GA). The accuracy and practicability of the proposed method are then assessed experimentally on a 1:100 scale model of a prototype truss spar platform. Identification results in the time and frequency domains are presented, and coherence functions are computed to check the quality of the identification results. It is shown that the experimental data and the results of the proposed method are in good agreement.
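The kernel-expansion idea can be sketched for the first-order (linear) Volterra term. The paper uses complex exponential basis functions with GA-estimated coefficients; the sketch below simplifies to real decaying exponentials with fixed, invented coefficients, so only the expansion-and-convolution structure is illustrated.

```python
import math

def volterra1_output(x, coeffs, decays, dt=0.1, memory=50):
    """First-order Volterra term y(n) = dt * sum_m h(m*dt) x(n-m), with the
    kernel expanded on a small set of decaying-exponential basis functions:
        h(m*dt) = sum_k coeffs[k] * exp(-decays[k] * m * dt)
    """
    h = [sum(c * math.exp(-a * m * dt) for c, a in zip(coeffs, decays))
         for m in range(memory)]
    return [dt * sum(h[m] * x[n - m] for m in range(min(memory, n + 1)))
            for n in range(len(x))]
```

Feeding in a discrete impulse recovers the kernel samples themselves, which is a quick sanity check on the convolution.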
Directory of Open Access Journals (Sweden)
Yingni Zhai
2014-10-01
Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed, with which the unscheduled operations can be divided into bottleneck operations and non-bottleneck operations. According to the principle of “Bottleneck leads the performance of the whole manufacturing system” in TOC (Theory of Constraints), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the construction of the sub-problems, some operations in the previously scheduled sub-problem are moved into the subsequent sub-problem for re-optimization; this strategy improves the solution quality of the algorithm. When solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value also improves the solution quality. Research limitations/implications: In this research, some assumptions reduce the complexity of the large-scale scheduling problem: the processing route of each job is predetermined, and the processing time of each operation is fixed; there is no machine breakdown, and no preemption of the operations is allowed. These assumptions should be considered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the
A feature dictionary supporting a multi-domain medical knowledge base.
Naeymi-Rad, F
1989-01-01
Because different terminology is used by physicians of different specialties in different locations to refer to the same feature (signs, symptoms, test results), it is essential that our knowledge development tools provide a means to access a common pool of terms. This paper discusses the design of an online medical dictionary that provides a solution to this problem for developers of multi-domain knowledge bases for MEDAS (Medical Emergency Decision Assistance System). Our Feature Dictionary supports phrase equivalents for features, feature interactions, feature classifications, and translations to the binary features generated by the expert during knowledge creation. It is also used in the conversion of a domain knowledge base to the database used by the MEDAS inference diagnostic sessions. The Feature Dictionary also provides capabilities for complex queries across multiple domains using the supported relations. The Feature Dictionary supports three methods of feature representation: (1) for binary features, (2) for continuous-valued features, and (3) for derived features.
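The core ideas above (phrase equivalents mapping to canonical features, and queries across multiple domains) can be sketched with a small dictionary structure. This is an illustrative toy, not the MEDAS implementation, and all feature names, phrases, and domains are invented.

```python
class FeatureDictionary:
    """Toy feature dictionary: canonical features with phrase equivalents,
    a representation kind, and the domains each feature belongs to."""
    def __init__(self):
        self.features = {}   # canonical name -> {"kind": ..., "domains": set}
        self.synonyms = {}   # phrase -> canonical name

    def add_feature(self, name, kind, domains, phrases=()):
        self.features[name] = {"kind": kind, "domains": set(domains)}
        self.synonyms[name] = name
        for p in phrases:
            self.synonyms[p] = name

    def resolve(self, phrase):
        # map any equivalent phrase to its canonical feature name
        return self.synonyms.get(phrase)

    def in_domains(self, *domains):
        # cross-domain query: features present in all the given domains
        want = set(domains)
        return [n for n, f in self.features.items() if want <= f["domains"]]
```

The synonym table is what lets two specialties use different phrasing while the knowledge base stores a single canonical feature.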
A NEW TECHNIQUE BASED ON CHAOTIC STEGANOGRAPHY AND ENCRYPTION TEXT IN DCT DOMAIN FOR COLOR IMAGE
Directory of Open Access Journals (Sweden)
MELAD J. SAEED
2013-10-01
Image steganography is the art of hiding information in a cover image. This paper presents a new technique based on chaotic steganography and text encryption in the DCT domain for color images, where the DCT is used to transform the original (cover) image from the spatial domain to the frequency domain. The technique uses a chaotic function in two phases: first, to encrypt the secret message; second, to embed it in the DCT coefficients of the cover image. With this new technique, good results are obtained with respect to the important properties of steganography: imperceptibility, assessed by mean square error (MSE), peak signal-to-noise ratio (PSNR) and normalized correlation (NC), and capacity, improved by encoding the secret message characters with variable-length codes and embedding the secret message in one level of the color image only.
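The first phase, chaotic encryption of the secret message, can be illustrated with a logistic-map keystream XORed with the message bytes. The map parameters and key values below are arbitrary choices for the sketch; the paper's actual chaotic function and its DCT embedding phase are not reproduced here.

```python
def logistic_keystream(x0, r, n, burn=100):
    """Byte keystream from the logistic map x <- r*x*(1-x), chaotic near r=4."""
    x = x0
    for _ in range(burn):                  # discard the initial transient
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256.0) % 256)
    return stream

def chaotic_xor(message, x0=0.3579, r=3.99):
    """XOR message bytes with the keystream; the operation is self-inverse,
    so the same call with the same key decrypts."""
    ks = logistic_keystream(x0, r, len(message))
    return bytes(b ^ k for b, k in zip(message, ks))
```

Because XOR is its own inverse, sender and receiver only need to share the key (x0, r); sensitivity to x0 is what the chaotic construction contributes.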
Efficient expression of SRK intracellular domain by a modeling-based protein engineering.
Murase, Kohji; Hirano, Yoshinori; Takayama, Seiji; Hakoshima, Toshio
2017-03-01
S-locus protein kinase (SRK) is a receptor kinase that plays a critical role in self-recognition in the Brassicaceae self-incompatibility (SI) response. SRK is activated by binding of its ligand S-locus protein 11 (SP11), which subsequently induces phosphorylation of the intracellular kinase domain. However, the detailed activation mechanism of SRK is still largely unknown because of the difficulty of stably expressing SRK recombinant proteins. Here, we performed modeling-based protein engineering of the SRK kinase domain for stable expression in Escherichia coli. The engineered SRK intracellular domain was expressed at about 54-fold higher levels than the wild-type SRK, without loss of kinase activity, suggesting it could be useful for further biochemical and structural studies. Copyright © 2016 Elsevier Inc. All rights reserved.
Virtual network embedding in cross-domain network based on topology and resource attributes
Zhu, Lei; Zhang, Zhizhong; Feng, Linlin; Liu, Lilan
2018-03-01
Aiming at network architecture ossification and the diversity of access technologies, this paper studies a cross-domain virtual network embedding algorithm. By analysing the topological attributes of nodes in the virtual and physical networks from local and global perspectives, combined with local network resource properties, we rank the embedding priority of the nodes with PCA and TOPSIS methods. The link load distribution is also considered. On this basis, we propose a cross-domain virtual network embedding algorithm based on topology and resource attributes. The simulation results show that our algorithm increases the acceptance rate of multi-domain virtual network requests compared with existing virtual network embedding algorithms.
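The TOPSIS step used above for node-embedding priority ranks each candidate by its closeness to an ideal alternative and distance from an anti-ideal one. Below is a generic sketch with invented attribute values, all treated as benefit criteria; the paper's actual attribute set and PCA weighting are not reproduced.

```python
import math

def topsis_rank(matrix, weights):
    """Rank candidates (rows) described by benefit-type attributes (columns).

    Returns a closeness score in [0, 1] per row; higher means better.
    Assumes the candidates are not all identical (denominator > 0).
    """
    ncols = len(matrix[0])
    # vector-normalize each column, then apply the attribute weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]   # best value per attribute
    anti = [min(col) for col in zip(*v)]    # worst value per attribute
    scores = []
    for row in v:
        d_plus = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_minus = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, anti)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores
```

A candidate that dominates on every attribute gets score 1.0 and is embedded first; a dominated one gets 0.0.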
Measurement-based management of mental health quality and access in VHA: SAIL mental health domain.
Lemke, Sonne; Boden, Matthew Tyler; Kearney, Lisa K; Krahn, Dean D; Neuman, Matthew J; Schmidt, Eric M; Trafton, Jodie A
2017-02-01
We outline the development of a Mental Health Domain to track accessibility and quality of mental health care in the United States Veterans Health Administration (VHA) as part of a broad-based performance measurement system. This domain adds an important element to national performance improvement efforts by targeting regional and facility leadership and providing them a concise yet comprehensive measure to identify facilities facing challenges in their mental health programs. We present the conceptual framework and rationale behind measure selection and development. The Mental Health Domain covers three important aspects of mental health treatment: Population Coverage, Continuity of Care, and Experience of Care. Each component is a composite of existing and newly adapted measures with moderate to high internal consistency; components are statistically independent or moderately related. Development and dissemination of the Mental Health Domain involved a variety of approaches and benefited from close collaboration between local, regional, and national leadership and from coordination with existing quality-improvement initiatives. During the first year of use, facilities varied in the direction and extent of change. These patterns of change were generally consistent with qualitative information, providing support for the validity of the domain and its component measures. Measure maintenance remains an iterative process as the VHA mental health system and potential data resources continue to evolve. Lessons learned may be helpful to the broader mental health-provider community as mental health care consolidates and becomes increasingly integrated within healthcare systems. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Spinodal decomposition in fluid mixtures
International Nuclear Information System (INIS)
Kawasaki, Kyozi; Koga, Tsuyoshi
1993-01-01
We study the late-stage dynamics of spinodal decomposition in binary fluids by computer simulation of the time-dependent Ginzburg-Landau equation. We obtain a temporally linear growth law for the characteristic length of domains in the late stage. This growth law has been observed in many experiments on binary fluids and indicates that domain growth proceeds by the flow caused by the surface tension of interfaces. We also find that the dynamical scaling law is satisfied in this hydrodynamic domain growth region. By comparing the scaling functions for fluids with those for the case without hydrodynamic effects, we find that the scaling functions for the two systems are different. (author)
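The non-hydrodynamic baseline the authors compare against, conserved Ginzburg-Landau (Cahn-Hilliard) coarsening, can be sketched as an explicit 1-D update on a periodic grid. This is only an illustrative toy: the grid size, time step, and quartic free energy are assumptions, and the hydrodynamic flow term central to the paper is omitted.

```python
import math

def cahn_hilliard_step(phi, dt=0.01, dx=1.0):
    """One explicit Euler step of d(phi)/dt = lap(mu),
    mu = phi^3 - phi - lap(phi), on a periodic 1-D grid.
    The order parameter is conserved (its sum stays constant)."""
    n = len(phi)
    lap = lambda f, i: (f[(i + 1) % n] - 2.0 * f[i] + f[i - 1]) / dx ** 2
    mu = [phi[i] ** 3 - phi[i] - lap(phi, i) for i in range(n)]
    return [phi[i] + dt * lap(mu, i) for i in range(n)]
```

The conservation of the mean composition under this update is the discrete analogue of the conservation law that distinguishes spinodal decomposition from non-conserved ordering.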
Goal-Based Domain Modeling as a Basis for Cross-Disciplinary Systems Engineering
Jarke, Matthias; Nissen, Hans W.; Rose, Thomas; Schmitz, Dominik
Small and medium-sized enterprises (SMEs) are important drivers of innovation. In particular, project-driven SMEs that closely cooperate with their customers have specific needs with regard to information engineering of their development process. They need fast requirements capture, since this is most often included in the (unpaid) offer development phase. At the same time, they need to maintain and extensively reuse the knowledge and experience gathered in previous projects, as this is their core asset. The situation is complicated further if the application field crosses disciplinary boundaries. To bridge the gaps and perspectives, we focus on shared goals and dependencies captured in models at a conceptual level. Such a model-based approach also offers a smarter connection to subsequent development stages, including a high share of automated code generation. In the approach presented here, the agent- and goal-oriented formalism i* is therefore extended by domain models to facilitate information organization. This extension permits a domain model-based similarity search and a model-based transformation towards subsequent development stages. Our approach also addresses the evolution of domain models to reflect the experience from completed projects. The approach is illustrated with a case study on software-intensive control systems in an SME of the automotive domain.
Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.
Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen
2017-08-29
In recent years, sparse representation based classification (SRC) has been one of the most successful methods and has shown impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC will be degraded significantly. To address this problem, in this paper we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which the discriminative features of data in the two domains are learned simultaneously with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed based on the decision rule of SRC, with the objective of learning coupled projection matrices and a common discriminative dictionary such that the between-class sparse reconstruction residuals of data from both domains are maximized and the within-class sparse reconstruction residuals are minimized in the projected low-dimensional space. Thus, the resulting representations can well fit SRC and simultaneously have better discriminant ability. In addition, our method can easily be extended to multiple domains and can be kernelized to deal with the nonlinear structure of data. The optimal solution for the proposed method can be obtained efficiently by the alternating optimization method. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.
Calculus domains modelled using an original bool algebra based on polygons
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain using a Boolean algebra of solid and hollow polygons. The general calculus relations for the geometrical characteristics widely used in mechanical engineering are tested on several shapes of the calculus domain in order to draw conclusions about the most effective ways to discretize the domain. The paper also tests the results of several commercial CAD applications that can compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results versus the number of nodes on the curved boundary of the cross section. The study required the development of original software consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to the very large numbers produced by the spline approximation, which required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
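The kind of solid/hollow polygon composition described above can be sketched with the shoelace formula: each polygon contributes its area and first moments with a plus sign (solid) or minus sign (hollow). This is an illustrative sketch, not the paper's software; the example shapes are invented.

```python
def polygon_area_centroid(verts):
    """Signed area and centroid of a simple polygon via the shoelace formula.
    verts: list of (x, y) vertices, counter-clockwise for positive area."""
    a = cx = cy = 0.0
    n = len(verts)
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return a, (cx / (6 * a), cy / (6 * a))

def composite_properties(solids, hollows):
    """Area and centroid of a domain = union of solids minus hollows."""
    area = sx = sy = 0.0
    for sign, polys in ((1.0, solids), (-1.0, hollows)):
        for p in polys:
            a, (x, y) = polygon_area_centroid(p)
            area += sign * a
            sx += sign * a * x
            sy += sign * a * y
    return area, (sx / area, sy / area)

# 4x4 square with a centered 2x2 square hole: area 16 - 4 = 12
outer = [(0, 0), (4, 0), (4, 4), (0, 4)]
hole = [(1, 1), (3, 1), (3, 3), (1, 3)]
print(composite_properties([outer], [hole]))  # -> (12.0, (2.0, 2.0))
```

Higher-order characteristics (second moments of area, etc.) combine with the same sign convention.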
Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai
2018-01-01
In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is better suited to the human visual system. A three-level discrete wavelet transformation is then applied to the luminance component Y, generating four frequency sub-bands, and singular value decomposition is performed on these sub-bands. In the watermark embedding process, a discrete wavelet transformation is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose suitable scaling factors. Experimental results show that the proposed algorithm performs well in terms of invisibility and robustness.
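The SVD embedding step common to such schemes can be sketched in isolation: the watermark is added to the singular values of the host band, and recovered by comparing singular values against the original. This minimal NumPy sketch omits the YIQ conversion, the DWT, and the differential-evolution search; the fixed `alpha` stands in for the scaling factor that search would choose.

```python
import numpy as np

def embed(host, wm, alpha=0.1):
    """Additively embed wm (1-D, length = min(host.shape)) into the host's
    singular values: S' = S + alpha * wm."""
    U, S, Vt = np.linalg.svd(host, full_matrices=False)
    return U @ np.diag(S + alpha * wm) @ Vt

def extract(marked, host, alpha=0.1):
    """Recover the watermark by comparing singular values with the original
    host (a non-blind scheme, for illustration only)."""
    S_marked = np.linalg.svd(marked, compute_uv=False)
    S_host = np.linalg.svd(host, compute_uv=False)
    return (S_marked - S_host) / alpha

rng = np.random.default_rng(1)
host = rng.random((8, 8))             # toy host band
wm = np.linspace(0.5, 0.1, 8)         # toy watermark signature, descending
marked = embed(host, wm)
print(np.allclose(extract(marked, host), wm, atol=1e-6))  # -> True
```

Keeping the watermark additions descending preserves the ordering of the singular values, so the round trip is exact up to floating-point error.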
International Nuclear Information System (INIS)
Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo
2007-01-01
To address the non-stationary characteristics of differential pressure fluctuation signals in gas-liquid two-phase flow, as well as the slow convergence and liability to local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and the Least Squares Support Vector Machine (LS-SVM) is presented. First, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Function (IMF) components, from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime characteristic vector fed to the LS-SVM classifier, and flow regimes are identified by the output of the classifier. Identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)
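The SVD step between the EMD and the classifier can be sketched as follows. The IMF matrix below is a toy stand-in for real EMD output (EMD itself would come from a dedicated library), and the LS-SVM stage is omitted; only the singular-value feature extraction is shown.

```python
import numpy as np

def svd_features(imfs):
    """Singular values of the IMF feature matrix, used as the flow-regime
    characteristic vector. imfs: (n_imfs, n_samples) array, one IMF per row,
    as produced by an EMD of the pressure fluctuation signal."""
    return np.linalg.svd(imfs, compute_uv=False)

# toy stand-in for EMD output: three "IMFs" at different scales
t = np.linspace(0.0, 1.0, 400)
imfs = np.stack([np.sin(2 * np.pi * 50 * t),    # fast oscillation
                 0.5 * np.sin(2 * np.pi * 8 * t),  # slower oscillation
                 0.1 * t])                       # residual trend
fv = svd_features(imfs)
print(fv.shape)  # -> (3,)
```

The resulting short, ordered vector (one singular value per IMF) is what would be handed to the LS-SVM classifier.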
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So
2017-09-01
A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
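The canonical (CP) low-rank decomposition by alternating least squares can be sketched for a small 3-way tensor. This is a generic NumPy sketch under stated assumptions, not the paper's PES code: it uses pseudo-inverse ALS updates and a toy exactly-rank-2 tensor.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (C-order flattening of the remaining axes)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product: (I, R) x (J, R) -> (I*J, R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_reconstruct(A, B, C):
    """Rebuild the full tensor from CP factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cp_als(T, rank, n_iter=300, seed=0):
    """Fit a rank-`rank` canonical (CP) decomposition of a 3-way tensor by
    alternating least squares, updating one factor matrix at a time."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# recover an exactly rank-2 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.normal(size=(s, 2)) for s in (4, 5, 6))
T = cp_reconstruct(A0, B0, C0)
A, B, C = cp_als(T, rank=2)
err = np.linalg.norm(T - cp_reconstruct(A, B, C)) / np.linalg.norm(T)
print("relative error:", err)
```

Once a PES is in this format, a high-dimensional integral factorizes into the short sum over rank terms of products of one-dimensional integrals, which is the source of the speedup the abstract describes.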
Zilong Zhang; Xingpeng Chen; Peter Heck
2014-01-01
Integrated analysis on socio-economic metabolism could provide a basis for understanding and optimizing regional sustainability. The paper conducted socio-economic metabolism analysis by means of the emergy accounting method coupled with data envelopment analysis and decomposition analysis techniques to assess the sustainability of Qingyang city and its eight sub-region system, as well as to identify the major driving factors of performance change during 2000–2007, to serve as the basis for f...
On the hadron mass decomposition
Lorcé, Cédric
2018-02-01
We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We also show that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, exactly balanced by the attractive force of the gluons.
Targeting lysine specific demethylase 4A (KDM4A) tandem TUDOR domain - A fragment based approach.
Upadhyay, Anup K; Judge, Russell A; Li, Leiming; Pithawalla, Ron; Simanis, Justin; Bodelle, Pierre M; Marin, Violeta L; Henry, Rodger F; Petros, Andrew M; Sun, Chaohong
2018-06-01
The tandem TUDOR domains present in the non-catalytic C-terminal half of the KDM4A, 4B and 4C enzymes play important roles in regulating their chromatin localizations and substrate specificities. They achieve this regulatory role by binding to different tri-methylated lysine residues on histone H3 (H3-K4me3, H3-K23me3) and histone H4 (H4-K20me3) depending upon the specific chromatin environment. In this work, we have used a 2D-NMR based fragment screening approach to identify a novel fragment (1a), which binds to the KDM4A-TUDOR domain and shows modest competition with H3-K4me3 binding in biochemical as well as in vitro cell based assays. A co-crystal structure of KDM4A TUDOR domain in complex with 1a shows that the fragment binds stereo-specifically to the methyl lysine binding pocket forming a network of strong hydrogen bonds and hydrophobic interactions. We anticipate that the fragment 1a can be further developed into a novel allosteric inhibitor of the KDM4 family of enzymes through targeting their C-terminal tandem TUDOR domain. Copyright © 2018 Elsevier Ltd. All rights reserved.
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
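The correlation-based selection over autoencoder weight vectors can be sketched as a greedy filter: a hidden unit is kept only if its weight vector is not strongly correlated with any unit already kept. The threshold and the toy weight matrix below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def select_decorrelated(W, threshold=0.9):
    """Greedily keep hidden units whose weight vectors have |correlation| <
    threshold with every previously kept unit.
    W: (n_units, dim) weight matrix learned by a (sparse) autoencoder."""
    Wc = W - W.mean(axis=1, keepdims=True)       # center each weight vector
    Wc /= np.linalg.norm(Wc, axis=1, keepdims=True)
    kept = []
    for i in range(W.shape[0]):
        if all(abs(Wc[i] @ Wc[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# toy weights: unit 1 duplicates unit 0 (up to scale), unit 2 is distinct
W = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0],
              [4.0, -1.0, 0.0, 2.0]])
print(select_decorrelated(W))  # -> [0, 2]
```

Dropping redundant units this way shrinks the convolutional feature maps computed over large images, which is where the reported ~50% cost reduction in global feature extraction would come from.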
Chen, Zibin; Hong, Liang; Wang, Feifei; An, Xianghai; Wang, Xiaolin; Ringer, Simon; Chen, Long-Qing; Luo, Haosu; Liao, Xiaozhou
2017-12-01
Ferroelectric materials have been extensively explored for applications in high-density nonvolatile memory devices because of their ferroelectric-ferroelastic domain-switching behavior under electric loading or mechanical stress. However, ferroelectric and ferroelastic backswitching causes significant data loss, which affects the reliability of data storage. Here, we apply in situ transmission electron microscopy and phase-field modeling to explore the unique ferroelastic domain-switching kinetics, and their origin, in relaxor-based Pb(Mg1/3Nb2/3)O3-33%PbTiO3 single-crystal pillars under electrical and mechanical stimulation. Results showed that the electric-mechanical hysteresis loop shifted for relaxor-based single-crystal pillars because of the low energy levels of domains in the material and the constraint on the pillars, resulting in various mechanically reversible and irreversible domain-switching states. This phenomenon can potentially be used for advanced bit writing and reading in nonvolatile memories, effectively overcoming the backswitching problem and broadening the types of ferroelectric materials suitable for nonvolatile memory applications.
Chai, Xin; Wang, Qisong; Zhao, Yongping; Liu, Xin; Bai, Ou; Li, Yongqiang
2016-12-01
In electroencephalography (EEG)-based emotion recognition systems, the distribution between the training samples and the testing samples may be mismatched if they are sampled from different experimental sessions or subjects because of user fatigue, different electrode placements, varying impedances, etc. Therefore, it is difficult to directly classify the EEG patterns with a conventional classifier. The domain adaptation method, which is aimed at obtaining a common representation across training and test domains, is an effective method for reducing the distribution discrepancy. However, the existing domain adaptation strategies either employ a linear transformation or learn the nonlinearity mapping without a consistency constraint; they are not sufficiently powerful to obtain a similar distribution from highly non-stationary EEG signals. To address this problem, in this paper, a novel component, called the subspace alignment auto-encoder (SAAE), is proposed. Taking advantage of both nonlinear transformation and a consistency constraint, we combine an auto-encoder network and a subspace alignment solution in a unified framework. As a result, the source domain can be aligned with the target domain together with its class label, and any supervised method can be applied to the new source domain to train a classifier for classification in the target domain, as the aligned source domain follows a distribution similar to that of the target domain. We compared our SAAE method with six typical approaches using a public EEG dataset containing three affective states: positive, neutral, and negative. Subject-to-subject and session-to-session evaluations were performed. The subject-to-subject experimental results demonstrate that our component achieves a mean accuracy of 77.88% in comparison with a state-of-the-art method, TCA, which achieves 73.82% on average. In addition, the average classification accuracy of SAAE in the session-to-session evaluation for all the 15 subjects
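The subspace-alignment idea underlying SAAE can be sketched in its original linear form, without the auto-encoder: PCA bases of source and target are aligned by the matrix M = Ps^T Pt, after which source and target features live in comparable coordinates. The data and dimensions below are invented for illustration.

```python
import numpy as np

def pca_basis(X, d):
    """Top-d principal directions of X (rows = samples), as a (dim, d) matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T

def subspace_align(Xs, Xt, d):
    """Linear subspace alignment: project source data onto its PCA basis,
    then rotate it into the target subspace via M = Ps^T Pt."""
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps.T @ Pt                                  # alignment matrix
    Zs = (Xs - Xs.mean(axis=0)) @ Ps @ M           # aligned source features
    Zt = (Xt - Xt.mean(axis=0)) @ Pt               # target features
    return Zs, Zt

rng = np.random.default_rng(2)
Xs = rng.normal(size=(100, 10))                    # toy source-session EEG features
Xt = rng.normal(size=(80, 10)) @ np.diag(np.linspace(2.0, 0.5, 10))  # shifted target
Zs, Zt = subspace_align(Xs, Xt, d=4)
print(Zs.shape, Zt.shape)  # -> (100, 4) (80, 4)
```

A classifier trained on `Zs` with the source labels can then be applied to `Zt`; SAAE replaces the linear PCA mapping with an auto-encoder while keeping this alignment consistency constraint.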
Maltais-Landry, Gabriel; Neufeld, Katarina; Poon, David; Grant, Nicholas; Nesic, Zoran; Smukler, Sean
2018-04-01
Manure-based soil amendments (herein "amendments") are important fertility sources, but differences among amendment types and management can significantly affect their nutrient value and environmental impacts. A 6-month in situ decomposition experiment was conducted to determine how protection from wintertime rainfall affected nutrient losses and greenhouse gas (GHG) emissions in poultry (broiler chicken and turkey) and horse amendments. Changes in total nutrient concentration were measured every 3 months, changes in ammonium (NH4+) and nitrate (NO3-) concentrations every month, and GHG emissions of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) every 7-14 days. Poultry amendments maintained higher nutrient concentrations (except for K), higher emissions of CO2 and N2O, and lower CH4 emissions than horse amendments. Exposing amendments to rainfall increased total N and NH4+ losses in poultry amendments, P losses in turkey and horse amendments, and K losses and cumulative N2O emissions for all amendments. However, it did not affect CO2 or CH4 emissions. Overall, rainfall exposure would decrease total N inputs by 37% (horse), 59% (broiler chicken), or 74% (turkey) for a given application rate (wet weight basis) after 6 months of decomposition, with similar losses for NH4+ (69-96%), P (41-73%), and K (91-97%). This study confirms the benefits of facilities protected from rainfall to reduce nutrient losses and GHG emissions during amendment decomposition. The impact of rainfall protection on nutrient losses and GHG emissions was monitored during the decomposition of broiler chicken, turkey, and horse manure-based soil amendments. Amendments exposed to rainfall had large ammonium and potassium losses, resulting in a 37-74% decrease in N inputs when compared with amendments protected from rainfall. Nitrous oxide emissions were also higher with rainfall exposure, although it had no effect on carbon dioxide and methane emissions.