Domain decomposition algorithms and computational fluid dynamics
Chan, Tony F.
1988-01-01
Some of the new domain decomposition algorithms are applied to two model problems in computational fluid dynamics: the two-dimensional convection-diffusion problem and the incompressible driven cavity flow problem. First, a brief introduction to the various approaches of domain decomposition is given, and a survey of domain decomposition preconditioners for the operator on the interface separating the subdomains is then presented. For the convection-diffusion problem, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is examined.
Domain decomposition algorithms and computational fluid dynamics
Chan, Tony F.
1988-01-01
In the past several years, domain decomposition has been a very popular topic, motivated in part by the potential for parallelization. While a large body of theory and algorithms has been developed for model elliptic problems, these methods have only recently begun to be tested on realistic applications. The application of some of these methods to two model problems in computational fluid dynamics is investigated: two-dimensional convection-diffusion problems and the incompressible driven cavity flow problem. The construction and analysis of efficient preconditioners for the interface operator, to be used in the iterative solution of the interface problem, are described. For the convection-diffusion problems, the effect of the convection term and its discretization on the performance of some of the preconditioners is discussed. For the driven cavity problem, the effectiveness of a class of boundary probe preconditioners is discussed.
Implementation and performance of a domain decomposition algorithm in Sisal
Energy Technology Data Exchange (ETDEWEB)
DeBoni, T.; Feo, J. [Lawrence Livermore National Lab., CA (United States); Rodrigue, G. [California Univ., Livermore, CA (United States); Muller, J. [State Univ. of New York, Stony Brook, NY (United States)
1993-09-23
Sisal is a general-purpose functional language that hides the complexity of parallel processing, expedites parallel program development, and guarantees determinacy. Parallelism and management of concurrent tasks are realized automatically by the compiler and runtime system. Spatial domain decomposition is a widely-used method that focuses computational resources on the most active, or important, areas of a domain. Many complex programming issues are introduced in parallelizing this method, including: dynamic spatial refinement, dynamic grid partitioning and fusion, task distribution, data distribution, and load balancing. In this paper, we describe a spatial domain decomposition algorithm programmed in Sisal. We explain the compilation process, and present the execution performance of the resultant code on two different multiprocessor systems: a multiprocessor vector supercomputer, and a cache-coherent scalar multiprocessor.
Non-linear scalable TFETI domain decomposition based contact algorithm
Dobiáš, J.; Pták, S.; Dostál, Z.; Vondrák, V.; Kozubek, T.
2010-06-01
The paper is concerned with the application of our original variant of the Finite Element Tearing and Interconnecting (FETI) domain decomposition method, called the Total FETI (TFETI), to solve solid mechanics problems exhibiting geometric, material, and contact non-linearities. The TFETI enforces the prescribed displacements by Lagrange multipliers, so that all the subdomains are 'floating', the kernels of their stiffness matrices are known a priori, and the projector to the natural coarse grid is more effective. The basic theory and relationships of both FETI and TFETI are briefly reviewed and a new version of the solution algorithm is presented. It is shown that applying the TFETI methodology to contact problems converts the original problem to a strictly convex quadratic programming problem with bound and equality constraints, so that effective, in a sense optimal, algorithms can be applied. Numerical experiments show that the method exhibits both numerical and parallel scalability.
Institute of Scientific and Technical Information of China (English)
Igor Boglaev; Matthew Hardy
2008-01-01
This paper presents and analyzes a monotone domain decomposition algorithm for solving nonlinear singularly perturbed reaction-diffusion problems of parabolic type. To solve the nonlinear weighted average finite difference scheme for the partial differential equation, we construct a monotone domain decomposition algorithm based on a Schwarz alternating method and a box-domain decomposition. This algorithm needs only to solve linear discrete systems at each iterative step and converges monotonically to the exact solution of the nonlinear discrete problem. The rate of convergence of the monotone domain decomposition algorithm is estimated. Numerical experiments are presented.
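The Schwarz alternating idea underlying this class of algorithms can be illustrated on a toy linear problem. The sketch below is not the authors' code; it is a minimal illustration, assuming a 1D reaction-diffusion model -eps*u'' + u = f on [0,1] with two overlapping subdomains, each solved in turn with Dirichlet data taken from the latest iterate on the other subdomain:

```python
import numpy as np

def solve_interior(eps, h, f, left, right):
    # Direct solve of -eps*u'' + u = f on the interior points of a uniform
    # grid, with Dirichlet boundary values `left` and `right`.
    n = len(f)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * eps / h**2 + 1.0
        if i > 0:
            A[i, i - 1] = -eps / h**2
        if i < n - 1:
            A[i, i + 1] = -eps / h**2
    b = f.copy()
    b[0] += eps / h**2 * left
    b[-1] += eps / h**2 * right
    return np.linalg.solve(A, b)

def schwarz_alternating(eps=0.1, N=101, overlap=10, iters=30):
    # Alternating Schwarz on two overlapping subdomains of [0,1].
    x = np.linspace(0.0, 1.0, N)
    h = x[1] - x[0]
    f = np.ones(N)
    u = np.zeros(N)                       # zero initial guess and boundary data
    m = N // 2
    a, b = m - overlap, m + overlap       # overlapping interface indices
    for _ in range(iters):
        # subdomain 1 (points 1..b-1): right Dirichlet data from current u[b]
        u[1:b] = solve_interior(eps, h, f[1:b], 0.0, u[b])
        # subdomain 2 (points a+1..N-2): left Dirichlet data from updated u[a]
        u[a + 1:N - 1] = solve_interior(eps, h, f[a + 1:N - 1], u[a], 0.0)
    return x, u, h
```

The iterate converges linearly to the single-domain solve, with a contraction factor improving as the overlap widens.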
Domain decomposition algorithms for mixed methods for second-order elliptic problems
Energy Technology Data Exchange (ETDEWEB)
Chen, Zhangxin; Ewing, R.E.; Lazarov, R.
1996-04-01
In this paper domain decomposition algorithms for mixed finite element methods for linear second-order elliptic problems in R{sup 2} and R{sup 3} are developed. A convergence theory for two-level and multilevel Schwarz methods applied to the algorithms under consideration is given. It is shown that the condition number of these iterative methods is bounded uniformly from above in the same manner as in the theory of domain decomposition methods for conforming and nonconforming finite element methods for the same differential problems. Numerical experiments are presented to illustrate the present techniques. 40 refs., 3 figs., 2 tabs.
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
Energy Technology Data Exchange (ETDEWEB)
Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^{6} particles on 65,536 MPI tasks.
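The cost-gradient idea can be sketched in a few lines. The toy below is an illustrative reconstruction, not the LLNL implementation: the load-variance cost function, the finite-difference gradient, and the step sizes are all assumptions. It displaces Voronoi sites along a numerical steepest-descent direction of the per-cell load imbalance:

```python
import numpy as np

def counts(sites, pts):
    # Assign each particle to its nearest Voronoi site; return per-cell loads.
    d = np.linalg.norm(pts[:, None, :] - sites[None, :, :], axis=2)
    return np.bincount(np.argmin(d, axis=1), minlength=len(sites))

def load_variance(sites, pts):
    c = counts(sites, pts)
    return float(((c - c.mean()) ** 2).sum())

def balance(sites, pts, steps=50, h=0.03, step=0.02):
    # Steepest descent on the load-variance cost: the gradient with respect to
    # the site positions is approximated by central finite differences (h),
    # and sites move a fixed distance (step) along the normalized descent
    # direction.  The best configuration seen so far is returned.
    sites = sites.copy()
    best, best_c = sites.copy(), load_variance(sites, pts)
    for _ in range(steps):
        g = np.zeros_like(sites)
        for k in range(len(sites)):
            for d in range(sites.shape[1]):
                for sgn in (1.0, -1.0):
                    s2 = sites.copy()
                    s2[k, d] += sgn * h
                    g[k, d] += sgn * load_variance(s2, pts)
        g /= 2.0 * h
        gn = np.linalg.norm(g)
        if gn == 0.0:
            break
        sites = sites - step * g / gn
        c = load_variance(sites, pts)
        if c < best_c:
            best, best_c = sites.copy(), c
    return best
```

In a production MD code the gradient would of course be evaluated analytically and in parallel; the sketch only shows the descent-on-imbalance structure.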
Institute of Scientific and Technical Information of China (English)
Wu Zhi-jian; Tang Zhi-long; Kang Li-shan
2003-01-01
This paper presents a parallel two-level evolutionary algorithm based on domain decomposition for solving function optimization problems containing multiple solutions. By combining the characteristics of global search and local search in each sub-domain, the former enables individuals to draw closer to each optimum and keeps the diversity of individuals, while the latter selects local optimal solutions, known as latent solutions, in each sub-domain. In the end, by selecting the global optimal solutions from the latent solutions in each sub-domain, all the optimal solutions can be discovered easily and quickly.
Institute of Scientific and Technical Information of China (English)
LIAO HongLin; SHI HanSheng; SUN ZhiZhong
2009-01-01
Corrected explicit-implicit domain decomposition (CEIDD) algorithms are studied for parallel approximation of semilinear parabolic problems on distributed memory processors. It is natural to divide the spatial domain into some smaller parallel strips and cells using the simplest straight-line interface (SI). By using the Leray-Schauder fixed-point theorem and the discrete energy method, it is shown that the resulting CEIDD-SI algorithm is uniquely solvable, unconditionally stable and convergent. The CEIDD-SI method always suffers from the globalization of data communication when interior boundaries cross into each other inside the domain. To overcome this disadvantage, a composite interface (CI) that consists of straight segments and zigzag fractions is suggested. The corresponding CEIDD-CI algorithm is proven to be solvable, stable and convergent. Numerical experiments are presented to support the theoretical results.
Zhao, Tao; Hwang, Feng-Nan; Cai, Xiao-Chuan
2016-07-01
We consider a quintic polynomial eigenvalue problem arising from the finite volume discretization of a quantum dot simulation problem. The problem is solved by the Jacobi-Davidson (JD) algorithm. Our focus is on how to achieve the quadratic convergence of JD in a way that is not only efficient but also scalable when the number of processor cores is large. For this purpose, we develop a projected two-level Schwarz preconditioned JD algorithm that exploits multilevel domain decomposition techniques. The pyramidal quantum dot calculation is carefully studied to illustrate the efficiency of the proposed method. Numerical experiments confirm that the proposed method has a good scalability for problems with hundreds of millions of unknowns on a parallel computer with more than 10,000 processor cores.
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2006-01-01
Enhanced Frequency Domain Decomposition technique for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces. Good...
Robust Domain Decomposition Preconditioners for Abstract Symmetric Positive Definite Bilinear Forms
Efendiev, Y; Lazarov, R; Willems, J
2011-01-01
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows one to construct additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A de...
Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms
Efendiev, Yalchin
2012-02-22
An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows one to construct additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A detailed analysis of the abstract setting is provided. The proposed decomposition builds on a method of Galvis and Efendiev [Multiscale Model. Simul. 8 (2010) 1461-1483] developed for second order scalar elliptic problems with high contrast. Applications to the finite element discretizations of the second order elliptic problem in Galerkin and mixed formulation, the Stokes equations, and Brinkman's problem are presented. A number of numerical experiments for these problems in two spatial dimensions are provided. © EDP Sciences, SMAI, 2012.
Zampini, Stefano
2017-08-03
Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.
Energy Technology Data Exchange (ETDEWEB)
Dahlgren, Kathryn Marie [California State Univ., Turlock, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Morris, Karla Vanessa [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2014-08-01
The future of extreme-scale computing is expected to magnify the influence of soft faults as a source of inaccuracy or failure in solutions obtained from distributed parallel computations. The development of resilient computational tools represents an essential recourse for understanding the best methods for absorbing the impacts of soft faults without sacrificing solution accuracy. The Rexsss (Resilient Extreme Scale Scientific Simulations) project pursues the development of fault resilient algorithms for solving partial differential equations (PDEs) on distributed systems. Performance analyses of current algorithm implementations assist in the identification of runtime inefficiencies.
A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment
Directory of Open Access Journals (Sweden)
Eric J. Nava
2012-03-01
This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into Epochs. Vehicle assignment is performed sequentially in each Epoch, thus improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal Epoch setting during iterations regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.
Domain Decomposition Based High Performance Parallel Computing
Raju, Mandhapati P
2009-01-01
The study deals with the parallelization of finite element based Navier-Stokes codes using domain decomposition and state-of-the-art sparse direct solvers. There has been significant improvement in the performance of sparse direct solvers. Parallel sparse direct solvers are not found to exhibit good scalability. Hence, the parallelization of sparse direct solvers is done using domain decomposition techniques. A highly efficient sparse direct solver, PARDISO, is used in this study. The scalability of both Newton and modified Newton algorithms is tested.
Energy Technology Data Exchange (ETDEWEB)
Javidi, M. [Department of Mathematics, Iran University of Science and Technology, Narmak, Tehran 16844 (Iran, Islamic Republic of)], E-mail: mo_javidi@yahoo.com; Golbabai, A. [Department of Mathematics, Iran University of Science and Technology, Narmak, Tehran 16844 (Iran, Islamic Republic of)], E-mail: golbabai@iust.ac.ir
2009-01-30
In this study, we use the spectral collocation method with Chebyshev polynomials for spatial derivatives and the fourth order Runge-Kutta method for time integration to solve the generalized Burgers-Huxley equation (GBHE). To reduce round-off error in the spectral collocation (pseudospectral) method we use preconditioning. Firstly, the theory of application of the Chebyshev spectral collocation method with preconditioning (CSCMP) and domain decomposition to the generalized Burgers-Huxley equation is presented. This method yields a system of differential-algebraic equations (DAEs). Secondly, we use the fourth order Runge-Kutta formula for the numerical integration of the system of DAEs. The numerical results obtained in this way have been compared with the exact solution to show the efficiency of the method.
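The method-of-lines structure described here (Chebyshev collocation in space, classical RK4 in time) can be sketched on a simpler test problem. The code below is an illustrative sketch, not the authors' scheme: it applies the standard Chebyshev differentiation matrix (as in Trefethen's cheb routine) to the heat equation with Dirichlet conditions, for which an exact solution is available, and omits preconditioning and domain decomposition:

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on N+1 Gauss-Lobatto points (Trefethen).
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

def heat_spectral(N=16, dt=5e-4, T=0.5):
    # Solve u_t = u_xx on [-1,1], u(+-1)=0, u(x,0)=cos(pi*x/2), by Chebyshev
    # collocation in space and classical fourth-order Runge-Kutta in time.
    D, x = cheb(N)
    D2 = (D @ D)[1:N, 1:N]        # Dirichlet BCs: drop boundary rows/columns
    u = np.cos(np.pi * x[1:N] / 2.0)
    rhs = lambda v: D2 @ v
    for _ in range(int(round(T / dt))):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    exact = np.cos(np.pi * x[1:N] / 2.0) * np.exp(-(np.pi / 2.0) ** 2 * T)
    return np.max(np.abs(u - exact))
```

The small explicit time step reflects the well-known stiffness of Chebyshev differentiation matrices (eigenvalues growing like N^4), one motivation for the preconditioning discussed in the abstract.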
Convergence Analysis of a Domain Decomposition Paradigm
Energy Technology Data Exchange (ETDEWEB)
Bank, R E; Vassilevski, P S
2006-06-12
We describe a domain decomposition algorithm for use in several variants of the parallel adaptive meshing paradigm of Bank and Holst. This algorithm has low communication, makes extensive use of existing sequential solvers, and exploits in several important ways data generated as part of the adaptive meshing paradigm. We show that for an idealized version of the algorithm, the rate of convergence is independent of both the global problem size N and the number of subdomains p used in the domain decomposition partition. Numerical examples illustrate the effectiveness of the procedure.
Directory of Open Access Journals (Sweden)
Wang Zhenhua
2015-02-01
Full Text Available To improve computational efficiency while maintaining calculation accuracy, we study the parallel computation of radiation heat transfer. In this paper, the discrete ordinates method (DOM) and the spatial domain decomposition parallelization (DDP) are combined using the message passing interface (MPI) language. The DDP–DOM computation of the radiation heat transfer within a rectangular furnace is described. When the result of DDP–DOM along a one-dimensional direction is compared with that along multi-dimensional directions, it is found that the latter has higher precision when medium scattering is neglected. Meanwhile, an in-depth study of the convergence of DDP–DOM for radiation heat transfer is made. Analyzing the cause of the weak convergence, we relate the total number of iteration steps at convergence to the number of sub-domains. When the spatial domain is decomposed along one-, two- and three-dimensional directions, different linear relationships between the total number of iteration steps and the number of sub-domains are obtained, and several equations are developed to express these relationships. Using the equations, some phenomena in DDP–DOM can be explained easily. At the same time, the correctness of the equations is verified.
Space-time domain decomposition method for scalar conservation laws
Doucoure, S
2012-01-01
The Space-Time Integrated Least-Squares (STILS) method is considered to analyze a space-time domain decomposition algorithm for scalar conservation laws. Continuous and discrete convergence estimates are given. Next using a time-marching finite element formulation, the STILS solution and its domain decomposition form are numerically compared.
Multiscale Domain Decomposition Methods for Elliptic Problems with High Aspect Ratios
Institute of Scientific and Technical Information of China (English)
Jørg Aarnes; Thomas Y. Hou
2002-01-01
In this paper we study some nonoverlapping domain decomposition methods for solving a class of elliptic problems arising from composite materials and flows in porous media which contain many spatial scales. Our preconditioner differs from traditional domain decomposition preconditioners by using a coarse solver which is adaptive to small scale heterogeneous features. While the convergence rate of traditional domain decomposition algorithms using coarse solvers based on linear or polynomial interpolations may deteriorate in the presence of rapid small scale oscillations or high aspect ratios, our preconditioner is applicable to multiple-scale problems without restrictive assumptions and seems to have a convergence rate nearly independent of the aspect ratio within the substructures. A rigorous convergence analysis based on the Schwarz framework is carried out, and we demonstrate the efficiency and robustness of the proposed preconditioner through numerical experiments which include problems with multiple-scale coefficients, as well as problems with continuous scales.
Bregmanized Domain Decomposition for Image Restoration
Langer, Andreas
2012-05-22
Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems such as total variation minimization became increasingly important for the solution of these tasks. While being favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.
Multiple Shooting and Time Domain Decomposition Methods
Geiger, Michael; Körkel, Stefan; Rannacher, Rolf
2015-01-01
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
Domain decomposition for implicit solvation models.
Cancès, Eric; Maday, Yvon; Stamm, Benjamin
2013-08-07
This article is the first of a series of papers dealing with domain decomposition algorithms for implicit solvent models. We show that, in the framework of the COSMO model, with van der Waals molecular cavities and classical charge distributions, the electrostatic energy contribution to the solvation energy, usually computed by solving an integral equation on the whole surface of the molecular cavity, can be computed more efficiently by using an integral equation formulation of Schwarz's domain decomposition method for boundary value problems. In addition, the so-obtained potential energy surface is smooth, which is a critical property to perform geometry optimization and molecular dynamics simulations. The purpose of this first article is to detail the methodology, set up the theoretical foundations of the approach, and study the accuracies and convergence rates of the resulting algorithms. The full efficiency of the method and its applicability to large molecular systems of biological interest is demonstrated elsewhere.
21st International Conference on Domain Decomposition Methods
Gander, Martin; Halpern, Laurence; Pichot, Géraldine; Sassi, Taoufik; Widlund, Olof
2014-01-01
This volume contains a selection of papers presented at the 21st international conference on domain decomposition methods in science and engineering held in Rennes, France, June 25-29, 2012. Domain decomposition is an active and interdisciplinary research discipline, focusing on the development, analysis and implementation of numerical methods for massively parallel computers. Domain decomposition methods are among the most efficient solvers for large scale applications in science and engineering. They are based on a solid theoretical foundation and shown to be scalable for many important applications. Domain decomposition techniques can also naturally take into account multiscale phenomena. This book contains the most recent results in this important field of research, both mathematically and algorithmically and allows the reader to get an overview of this exciting branch of numerical analysis and scientific computing.
PARTITION PROPERTY OF DOMAIN DECOMPOSITION WITHOUT ELLIPTICITY
Institute of Scientific and Technical Information of China (English)
Mo Mu; Yun-qing Huang
2001-01-01
Partition property plays a central role in domain decomposition methods. Existing theory essentially assumes certain ellipticity. We prove the partition property for problems without ellipticity which are of practical importance. Example applications include implicit schemes applied to degenerate parabolic partial differential equations arising from superconductors, superfluids and liquid crystals. With this partition property, Schwarz algorithms can be applied to general non-elliptic problems with an h-independent optimal convergence rate. Application to the time-dependent Ginzburg-Landau model of superconductivity is illustrated and numerical results are presented.
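A generic one-level additive Schwarz preconditioner, the building block behind the Schwarz algorithms referred to above, can be sketched as follows. This is an illustration on a symmetric 1D Poisson model problem, not the paper's non-elliptic Ginzburg-Landau application; the subdomain count, overlap, and test matrix are assumptions:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg, splu

def poisson1d(n):
    # Standard 3-point finite-difference Laplacian on (0,1), n interior points.
    h = 1.0 / (n + 1)
    return (diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2).tocsr()

def additive_schwarz(A, nsub=8, overlap=4):
    # One-level additive Schwarz: M^{-1} r = sum_k R_k^T A_k^{-1} R_k r over
    # overlapping index blocks, each block solved exactly with a sparse LU.
    n = A.shape[0]
    size = n // nsub
    blocks = []
    for k in range(nsub):
        lo = max(0, k * size - overlap)
        hi = min(n, (k + 1) * size + overlap)
        idx = np.arange(lo, hi)
        Ak = A[idx][:, idx].tocsc()
        blocks.append((idx, splu(Ak)))
    def apply(r):
        z = np.zeros_like(r)
        for idx, lu in blocks:
            z[idx] += lu.solve(r[idx])
        return z
    return LinearOperator((n, n), matvec=apply)
```

Used as the M argument of conjugate gradients, it cuts the iteration count sharply; a coarse space (not shown) would be needed for iteration counts independent of the number of subdomains.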
Robustness Beamforming Algorithms
Directory of Open Access Journals (Sweden)
Sajad Dehghani
2014-04-01
Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem with a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex, but it is converted to a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
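The MVDR weight computation itself is compact. The sketch below uses diagonal loading, a classical and much simpler robustification than the paper's convex reformulation, and an assumed uniform-linear-array scenario, purely for illustration:

```python
import numpy as np

def steering(theta, m, d=0.5):
    # Response of an m-element uniform linear array, spacing d wavelengths.
    return np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))

def mvdr_weights(R, a, loading=0.0):
    # w = R^{-1} a / (a^H R^{-1} a); diagonal loading regularizes R, a common
    # simple guard against covariance and steering-vector errors.
    Rl = R + loading * np.eye(len(a))
    w = np.linalg.solve(Rl, a)
    return w / (a.conj() @ w)
```

By construction w^H a = 1 (distortionless response toward the look direction) while directions occupied by strong interference are suppressed.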
Automated Frequency Domain Decomposition for Operational Modal Analysis
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, Palle; Jacobsen, Niels-Jørgen
2007-01-01
The Frequency Domain Decomposition (FDD) technique is known as one of the most user friendly and powerful techniques for operational modal analysis of structures. However, the classical implementation of the technique requires some user interaction. The present paper describes an algorithm for au...
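The core of the FDD technique, a singular value decomposition of the output cross-spectral density matrix at each frequency line followed by peak-picking, can be sketched as below. This is an illustrative reconstruction on synthetic two-channel data (the mode frequencies, damping, mode shapes, and all processing parameters are assumptions), not the implementation discussed in the paper:

```python
import numpy as np
from scipy import signal

def simulate_response(fs=512, T=100.0, seed=1):
    # Two sensors observing two lightly damped modes (10 Hz and 25 Hz, 1%
    # damping) driven by independent white noise.
    rng = np.random.default_rng(seed)
    n = int(fs * T)
    y = np.zeros((2, n))
    for fn, phi in [(10.0, np.array([1.0, 0.6])), (25.0, np.array([1.0, -0.8]))]:
        w = 2.0 * np.pi * fn
        b, a = signal.bilinear([w * w], [1.0, 2.0 * 0.01 * w, w * w], fs=fs)
        q = signal.lfilter(b, a, rng.standard_normal(n))
        y += np.outer(phi, q)
    return y, fs

def fdd_first_singular_value(y, fs, nperseg=2048):
    # Assemble the cross-spectral density matrix G(f) and return its largest
    # singular value at every frequency line; modes show up as peaks of s1(f).
    m = y.shape[0]
    f, _ = signal.csd(y[0], y[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), m, m), dtype=complex)
    for i in range(m):
        for j in range(m):
            _, G[:, i, j] = signal.csd(y[i], y[j], fs=fs, nperseg=nperseg)
    return f, np.linalg.svd(G, compute_uv=False)[:, 0]
```

The singular vector at each peak estimates the corresponding mode shape; the automation discussed in the paper replaces the manual peak-picking step.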
A TFETI Domain Decomposition Solver for Elastoplastic Problems
Čermák, M; Sysala, S; Valdman, J
2012-01-01
In the paper, we propose an algorithm for the efficient parallel implementation of elastoplastic problems with hardening based on the so-called TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. We consider an associated elastoplastic model with the von Mises plastic criterion and the linear isotropic hardening law. Such a model is discretized by the implicit Euler method in time and the consequent one time step elastoplastic problem by the finite element method in space. The latter results in a system of nonlinear equations with a strongly semismooth and strongly monotone operator. The semismooth Newton method is applied to solve this nonlinear system. Corresponding linearized problems arising in the Newton iterations are solved in parallel by the above mentioned TFETI domain decomposition method. The proposed TFETI based algorithm was implemented in Matlab parallel environment and its performance was illustrated on a 3D elastoplastic benchmark. Numerical results for differ...
Domain Decomposition Methods for Hyperbolic Problems
Indian Academy of Sciences (India)
Pravir Dutt; Subir Singh Lamba
2009-04-01
In this paper a method is developed for solving hyperbolic initial boundary value problems in one space dimension using domain decomposition, which can be extended to problems in several space dimensions. We minimize a functional which is the sum of squares of the L2 norms of the residuals and a term which is the sum of the squares of the L2 norms of the jumps in the function across interdomain boundaries. To make the problem well posed the interdomain boundaries are made to move back and forth at alternate time steps with sufficiently high speed. We construct parallel preconditioners and obtain error estimates for the method. The Schwarz waveform relaxation method is often employed to solve hyperbolic problems using domain decomposition but this technique faces difficulties if the system becomes characteristic at the inter-element boundaries. By making the inter-element boundaries move faster than the fastest wave speed associated with the hyperbolic system we are able to overcome this problem.
Domain decomposition methods for mortar finite elements
Energy Technology Data Exchange (ETDEWEB)
Widlund, O.
1996-12-31
In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.
Towards Robust Image Matching Algorithms
Parsons, Timothy J.
1984-12-01
The rapid advance in digital electronics during recent years has enabled the real-time hardware implementation of many basic image processing techniques, and these methods are finding increasing use in both commercial and military applications where superiority to existing systems can be demonstrated. The potential superiority of an entirely passive, automatic image-processing-based navigation system over the less accurate, active navigation systems based on radar, for example "TERCOM", is evident. By placing a sensor on board an aircraft or missile, together with the appropriate processing power and enough memory to store a reference image or a map of the planned route, large-scale features extracted from the scene available to the sensor can be compared with the same features stored in memory. The difference between the aircraft's actual position and its desired position can then be evaluated and the appropriate navigational correction undertaken. This paper summarises work carried out at British Aerospace Hatfield to investigate various classes of algorithms and solutions which would render a robust image matching system viable for such an automatic system flying at low level with a thermal I.R. sensor.
A Domain Decomposition Method for Time Fractional Reaction-Diffusion Equation
Directory of Open Access Journals (Sweden)
Chunye Gong
2014-01-01
The computational complexity of the one-dimensional time fractional reaction-diffusion equation is O(N2M), compared with O(NM) for the classical integer-order reaction-diffusion equation. Parallel computing is used to overcome this challenge. The domain decomposition method (DDM) embodies large potential for parallelization of the numerical solution of fractional equations and serves as a basis for distributed, parallel computations. A domain decomposition algorithm for the time fractional reaction-diffusion equation with an implicit finite difference method is proposed. The domain decomposition algorithm keeps the same parallelism but needs far fewer iterations than Jacobi iteration in each time step. Numerical experiments are used to verify the efficiency of the obtained algorithm.
Simplified approaches to some nonoverlapping domain decomposition methods
Energy Technology Data Exchange (ETDEWEB)
Xu, Jinchao
1996-12-31
An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method; other simple technical tools include the "local-global" and "global-local" techniques, the former for constructing a subspace preconditioner based on a preconditioner on the whole space, the latter for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method"; the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
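The "parallel subspace correction" (additive Schwarz) idea can be sketched on a 1D Laplacian with two overlapping subdomains, used as a preconditioner inside conjugate gradients. This is a generic illustration without a coarse space, not the talk's abstract framework; the matrix, subdomain split and sizes are arbitrary:

```python
import numpy as np

def additive_schwarz(A, r, subdomains):
    # parallel subspace correction: z = sum_i R_i^T A_i^{-1} R_i r,
    # i.e. independent local Dirichlet solves, summed (no coarse space)
    z = np.zeros_like(r)
    for idx in subdomains:
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

def pcg(A, b, precond, tol=1e-10, maxit=500):
    # standard preconditioned conjugate gradient iteration
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p, rz = z.copy(), r @ z
    for _ in range(maxit):
        Ap = A @ p
        a = rz / (p @ Ap)
        x += a * p
        r -= a * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
b = np.ones(n)
subdomains = [np.arange(0, 24), np.arange(16, 40)]     # overlap of 8 nodes
x = pcg(A, b, lambda r: additive_schwarz(A, r, subdomains))
```

Each local solve is independent, which is exactly what makes the additive variant attractive in parallel.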
Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors
Energy Technology Data Exchange (ETDEWEB)
Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep
2014-12-01
The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors
Heybrock, Simon; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep
2014-01-01
The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.
Fast structural design and analysis via hybrid domain decomposition on massively parallel processors
Farhat, Charbel
1993-01-01
A hybrid domain decomposition framework for static, transient and eigen finite element analyses of structural mechanics problems is presented. Its basic ingredients include physical substructuring and/or automatic mesh partitioning, mapping algorithms, 'gluing' approximations for fast design modifications and evaluations, and fast direct and preconditioned iterative solvers for local and interface subproblems. The overall methodology is illustrated with the structural design of a solar viewing payload that is scheduled to fly in March 1993. This payload has been entirely designed and validated by a group of undergraduate students at the University of Colorado using the proposed hybrid domain decomposition approach on a massively parallel processor. Performance results are reported on the CRAY Y-MP/8 and the iPSC-860/64 Touchstone systems, which represent two extremes of parallel architecture. The hybrid domain decomposition methodology is shown to outperform leading solution algorithms and to exhibit excellent parallel scalability.
Load Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune;
2007-01-01
When performing operational modal analysis the dynamic loading is unknown; however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained and the loading can be estimated by inverse filtering. In this paper loads in the frequency domain are estimated … by analysis of simulated responses of a 4 DOF system, for which the exact modal parameters are known. This estimation approach entails modal identification of the natural eigenfrequencies, mode shapes and damping ratios by the frequency domain decomposition technique. Scaled mode shapes are determined by use…
Higher order statistical frequency domain decomposition for operational modal analysis
Nita, G. M.; Mahgoub, M. A.; Sharyatpanahi, S. G.; Cretu, N. C.; El-Fouly, T. M.
2017-02-01
Experimental methods based on modal analysis under ambient vibrational excitation are often employed to detect structural damage in mechanical systems. Many such frequency domain methods, such as the Basic Frequency Domain (BFD), Frequency Domain Decomposition (FDD), or Enhanced Frequency Domain Decomposition (EFDD) methods, use as a first step a Fast Fourier Transform (FFT) estimate of the power spectral density (PSD) associated with the response of the system. In this study it is shown that higher order statistical estimators such as Spectral Kurtosis (SK) and Sample to Model Ratio (SMR) may be successfully employed not only to more reliably discriminate the response of the system against the ambient noise fluctuations, but also to better identify and separate contributions from closely spaced individual modes. It is shown that an SMR-based Maximum Likelihood curve fitting algorithm may improve the accuracy of the spectral shape and location of the individual modes and, when combined with the SK analysis, provides efficient means to categorize such individual spectral components according to their temporal dynamics as coherent or incoherent system responses to unknown ambient excitations.
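The first step these methods share, an FFT-based cross-PSD estimate followed by an SVD at every frequency line, can be sketched as a basic FDD. This is a generic illustration on synthetic two-channel data (segment averaging without windowing; the SK/SMR estimators of the study are not implemented):

```python
import numpy as np

def fdd_first_singular_value(y, fs, nseg=256):
    """Basic FDD: estimate the cross-PSD matrix G_yy(f) by averaging over
    signal segments, then take the SVD at each frequency line.  The first
    singular value peaks at the natural frequencies, and the corresponding
    singular vector approximates the mode shape there."""
    nch, n = y.shape
    nwin = n // nseg
    freqs = np.fft.rfftfreq(nseg, 1.0 / fs)
    G = np.zeros((len(freqs), nch, nch), dtype=complex)
    for k in range(nwin):
        Y = np.fft.rfft(y[:, k * nseg:(k + 1) * nseg], axis=1)
        G += np.einsum('if,jf->fij', Y, Y.conj()) / nwin   # G[f,i,j] = Y_i Y_j*
    s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
    return freqs, s1

# two-channel response with two well-separated 'modes' plus noise
fs = 128.0
t = np.arange(8192) / fs
rng = np.random.default_rng(0)
y = np.vstack([np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 20 * t),
               0.7 * np.sin(2 * np.pi * 8 * t) - np.sin(2 * np.pi * 20 * t)])
y += 0.1 * rng.standard_normal(y.shape)
freqs, s1 = fdd_first_singular_value(y, fs)
peak = freqs[np.argmax(s1)]
```

The higher-order estimators discussed in the study replace or augment this plain PSD stage to better reject ambient noise.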
Non-conformal domain decomposition methods for time-harmonic Maxwell equations.
Shao, Yang; Peng, Zhen; Lim, Kheng Hwee; Lee, Jin-Fa
2012-09-08
We review non-conformal domain decomposition methods (DDMs) and their applications in solving electrically large and multi-scale electromagnetic (EM) radiation and scattering problems. In particular, a finite-element DDM, together with a finite-element tearing and interconnecting (FETI)-like algorithm, incorporating Robin transmission conditions and an edge corner penalty term, are discussed in detail. We address in full the formulations, and subsequently, their applications to problems with significant amounts of repetitions. The non-conformal DDM approach has also been extended into surface integral equation methods. We elucidate a non-conformal integral equation domain decomposition method and a generalized combined field integral equation method for modelling EM wave scattering from non-penetrable and penetrable targets, respectively. Moreover, a plane wave scattering from a composite mockup fighter jet has been simulated using the newly developed multi-solver domain decomposition method.
A Robust Parsing Algorithm For Link Grammars
Grinberg, D; Sleator, D; Grinberg, Dennis; Lafferty, John; Sleator, Daniel
1995-01-01
In this paper we present a robust parsing algorithm based on the link grammar formalism for parsing natural languages. Our algorithm is a natural extension of the original dynamic programming recognition algorithm which recursively counts the number of linkages between two words in the input sentence. The modified algorithm uses the notion of a null link in order to allow a connection between any pair of adjacent words, regardless of their dictionary definitions. The algorithm proceeds by making three dynamic programming passes. In the first pass, the input is parsed using the original algorithm which enforces the constraints on links to ensure grammaticality. In the second pass, the total cost of each substring of words is computed, where cost is determined by the number of null links necessary to parse the substring. The final pass counts the total number of parses with minimal cost. All of the original pruning techniques have natural counterparts in the robust algorithm. When used together with memoization...
Stability estimates for hybrid coupled domain decomposition methods
Steinbach, Olaf
2003-01-01
Domain decomposition methods are a well established tool for an efficient numerical solution of partial differential equations, in particular for the coupling of different model equations and of different discretization methods. Based on the approximate solution of local boundary value problems either by finite or boundary element methods, the global problem is reduced to an operator equation on the skeleton of the domain decomposition. Different variational formulations then lead to hybrid domain decomposition methods.
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
Review of Robust Video Watermarking Algorithms
Deshpande, Neeta; Manthalkar, R
2010-01-01
There has been a remarkable increase in data exchange over the web and in the widespread use of digital media. As a result, multimedia data transfers have also received a boost. The mounting interest in digital watermarking throughout the last decade is certainly due to the increased need for copyright protection of digital content, further enhanced by its commercial prospects. Applications of video watermarking in copy control, broadcast monitoring, fingerprinting, video authentication, copyright protection, etc. are rising immensely. The main aspects of information hiding are capacity, security and robustness. Capacity deals with the amount of information that can be hidden, security with the difficulty of anyone detecting the information, and robustness with the resistance to modification of the cover content before the concealed information is destroyed. Video watermarking algorithms normally favour robustness. In a robust algorithm it is not possible to eliminate the watermark without rigorous de...
Doubly Constrained Robust Blind Beamforming Algorithm
Directory of Open Access Journals (Sweden)
Xin Song
2013-01-01
We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector by the partial Taylor-series expansion and the Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields higher signal capture performance, achieves greater array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm in beampattern control and output SINR enhancement.
Robustness of Tree Extraction Algorithms from LIDAR
Dumitru, M.; Strimbu, B. M.
2015-12-01
Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing field effort and the price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inversed watershed (subsequently referred to as watershed based), while the second type is based on simultaneous representation of the tree crown as an individual entity and of its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation), and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to parameters, but its results were worse than those of the simultaneous representation algorithm for the majority of parameter sets. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
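The seed-finding step of a watershed-based crown delineation can be sketched as local-maximum detection on a canopy height model raster. This is an illustrative fragment only, not the study's processing chain; the `min_height` threshold and window size are hypothetical parameters:

```python
import numpy as np
from scipy import ndimage

def tree_tops(chm, min_height=2.0, window=3):
    """Local-maximum seed detection on a canopy height model (CHM):
    a cell is a candidate tree top if it equals the maximum of its
    window x window neighbourhood and exceeds a height threshold."""
    is_max = ndimage.maximum_filter(chm, size=window) == chm
    return np.argwhere(is_max & (chm > min_height))

# toy CHM with two isolated crowns
chm = np.zeros((12, 12))
chm[3, 3] = 12.0
chm[8, 9] = 9.0
tops = tree_tops(chm)
```

A full inversed-watershed segmentation would then grow crowns outward from these seeds on the negated CHM, which is the part most sensitive to the parameter choices the study evaluates.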
Directory of Open Access Journals (Sweden)
Dolean Victorita
2014-07-01
Multiphase, compositional porous media flow models lead to the solution of highly heterogeneous systems of Partial Differential Equations (PDEs). We focus on overlapping Schwarz-type methods on parallel computers and on multiscale methods. We present a coarse space [Nataf F., Xiang H., Dolean V., Spillane N. (2011) SIAM J. Sci. Comput. 33(4), 1623-1642] that is robust even in the presence of such heterogeneities. The two-level domain decomposition approach is compared to multiscale methods.
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-04-01
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
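The Lloyd iteration that drives the partition toward a Centroidal Voronoi Tessellation can be sketched on a discrete point set standing in for computational elements. This is a toy illustration of the CVT ingredient only; the paper's Voronoi Particle dynamics and its load-balancing equation of state are not reproduced:

```python
import numpy as np

def lloyd_cvt(points, ngen=4, iters=40, seed=0):
    """Discrete Lloyd iteration toward a Centroidal Voronoi Tessellation:
    assign each sample point to its nearest generator, then move every
    generator to the centroid of its cell.  The CVT energy (sum of squared
    point-to-generator distances) is non-increasing, and the cells become
    compact with small aspect ratios."""
    rng = np.random.default_rng(seed)
    gens = points[rng.choice(len(points), ngen, replace=False)].astype(float)
    energies = []
    for _ in range(iters):
        d2 = ((points[:, None, :] - gens[None, :, :]) ** 2).sum(axis=2)
        label = d2.argmin(axis=1)
        energies.append(d2.min(axis=1).sum())
        for k in range(ngen):
            cell = points[label == k]
            if len(cell):
                gens[k] = cell.mean(axis=0)
    return gens, label, energies

# uniform samples of the unit square stand in for mesh elements
g = np.linspace(0.025, 0.975, 20)
pts = np.array([(xi, yi) for xi in g for yi in g])
gens, label, energies = lloyd_cvt(pts)
```

The monotone decrease of the recorded energy is the property the paper exploits to obtain compact, simply connected partitioning subdomains.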
Robust location algorithm for NLOS environments
Institute of Scientific and Technical Information of China (English)
Huang Jiyan; Wan Qun
2008-01-01
One of the main problems facing accurate location in wireless communication systems is non-line-of-sight (NLOS) propagation. Traditional location algorithms are based on classical techniques that minimize a least-squares objective function, and they lose optimality when the NLOS error distribution deviates from the Gaussian distribution. An effective location algorithm based on a robust objective function is proposed to mitigate NLOS errors. The proposed method does not require prior knowledge of the NLOS error distribution and gives a closed-form solution. A comparison is performed in different NLOS environments between the proposed algorithm and two others (the LS method and Chan's method with an NLOS correction). The proposed algorithm clearly outperforms the other two.
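The paper's estimator is closed-form; as a generic illustration of why a robust objective helps against NLOS bias, here is iteratively reweighted Gauss-Newton trilateration with a Huber-style weight. This is an assumed stand-in, not the authors' algorithm; anchors, the bias magnitude and `delta` are all hypothetical:

```python
import numpy as np

def robust_locate(anchors, ranges, delta=1.0, iters=30):
    """Range-based location via iteratively reweighted Gauss-Newton with a
    Huber weight: residuals larger than delta are down-weighted so that a
    single NLOS-biased range cannot dominate the squared-error fit."""
    x = anchors.mean(axis=0)                      # initial guess: centroid
    for _ in range(iters):
        diff = x - anchors
        d = np.linalg.norm(diff, axis=1)
        res = d - ranges
        w = np.where(np.abs(res) <= delta, 1.0, delta / np.abs(res))
        J = diff / d[:, None]                     # Jacobian of the ranges
        A = (J * w[:, None]).T @ J
        x = x - np.linalg.solve(A, (J * w[:, None]).T @ res)
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
ranges[3] += 5.0                                  # one NLOS-biased range
est_robust = robust_locate(anchors, ranges)
est_ls = robust_locate(anchors, ranges, delta=1e9)   # plain least squares
```

With a very large `delta` all weights become 1 and the iteration degenerates to ordinary least squares, which the biased range pulls noticeably off target.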
Energy Technology Data Exchange (ETDEWEB)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache, CEA/DEN/CAD/DER/SPRC/LEPH, 13 - Saint-Paul Lez Durance (France)
2003-07-01
The aim of this paper is to present the latest developments of a domain decomposition method applied to reactor core calculations. In this method, two kinds of balance equations, treated by two different numerical methods dealing with two different unknowns, are coupled. In the first part, the two balance transport equations (first-order and second-order) are presented together with the corresponding numerical methods: the Variational Nodal Method and the Discrete Ordinate Nodal Method. In the second part, the Multi-Method/Multi-Domain algorithm is introduced by applying the Schwarz domain decomposition to the multigroup eigenvalue problem of the transport equation, and the resulting algorithm is provided. The projection operators used to couple the two methods are detailed in the last part of the paper. Finally, some preliminary numerical applications on benchmarks are given, showing encouraging results. (authors)
Adaptive Aggregation-based Domain Decomposition Multigrid for Twisted Mass Fermions
Alexandrou, Constantia; Finkenrath, Jacob; Frommer, Andreas; Kahl, Karsten; Rottmann, Matthias
2016-01-01
The Adaptive Aggregation-based Domain Decomposition Multigrid method (arXiv:1303.1377) is extended to two degenerate flavors of twisted mass fermions. By fine-tuning the parameters we achieve a speed-up of the order of a hundred times compared to the conjugate gradient algorithm at the physical value of the pion mass. A thorough analysis of the aggregation parameters is presented, which provides novel insight into multigrid methods for lattice QCD independently of the fermion discretization.
A frequency-spatial domain decomposition (FSDD) method for operational modal analysis
Zhang, Lingmi; Wang, Tong; Tamura, Yukio
2010-07-01
Following a brief review of the development of operational modal identification techniques, we describe a new method named frequency-spatial domain decomposition (FSDD), with theoretical background, formulation and algorithm. Three typical applications to civil engineering structures are presented to demonstrate the procedure and features of the method: a large-span stadium roof for finite-element model verification, a highway bridge for damage detection and a long-span cable-stayed bridge for structural health monitoring.
Previti, Alberto; Furfaro, Roberto; Picca, Paolo; Ganapol, Barry D; Mostacci, Domiziano
2011-08-01
This paper deals with finding accurate solutions to photon transport problems in highly heterogeneous media quickly, efficiently and with modest memory resources. We propose an extended version of the analytical discrete ordinates method, coupled with domain decomposition-derived algorithms and non-linear convergence acceleration techniques. Numerical performance is evaluated using a challenging case study available in the literature. A study of accuracy versus computational time and memory requirements is reported for transport calculations that are relevant for remote sensing applications.
Parallel Finite Element Domain Decomposition for Structural/Acoustic Analysis
Nguyen, Duc T.; Tungkahotara, Siroj; Watson, Willie R.; Rajan, Subramaniam D.
2005-01-01
A domain decomposition (DD) formulation for solving sparse linear systems of equations resulting from finite element analysis is presented. The formulation incorporates mixed direct and iterative equation solving strategies and other novel algorithmic ideas that are optimized to take advantage of sparsity and to exploit modern computer architecture, such as memory and parallel computing. The most time-consuming part of the formulation is identified, and the critical roles of direct sparse and iterative solvers within the framework of the formulation are discussed. Experiments on several computer platforms using several complex test matrices are conducted using software based on the formulation. Small-scale structural examples are used to validate the steps in the formulation, and large-scale (1,000,000+ unknowns) duct acoustic examples are used to evaluate it on ORIGIN 2000 processors and on a cluster of 6 PCs (running under the Windows environment). Statistics show that the formulation is efficient in both sequential and parallel computing environments, and that it is significantly faster and consumes less memory than one based on one of the best available commercial parallel sparse solvers.
A Robust Algorithm in Active Queue Management
Institute of Scientific and Technical Information of China (English)
(no author listed)
2005-01-01
A variable-structure-based control scheme is proposed for Active Queue Management (AQM) using a sliding mode algorithm and the reaching law method. This approach aims to address the tradeoff between good performance and robustness with respect to uncertainties in the round-trip time and the number of active connections. Simulation results in ns show that the proposed design significantly outperforms peer AQM schemes in terms of fluctuation in queue length, packet throughput, and loss ratio. The conclusion is that the proposed scheme favors the achievement of AQM objectives owing to its good transient and steady-state performance.
22nd International Conference on Domain Decomposition Methods
Gander, Martin; Halpern, Laurence; Krause, Rolf; Pavarino, Luca
2016-01-01
These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures, and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.
Splitting extrapolation based on domain decomposition for finite element approximations
Institute of Scientific and Technical Information of China (English)
吕涛; 冯勇
1997-01-01
Splitting extrapolation based on domain decomposition for finite element approximations is a new technique for solving large-scale scientific and engineering problems in parallel. By means of domain decomposition, a large-scale multidimensional problem is turned into many discrete problems involving several grid parameters. The multi-variate asymptotic expansions of finite element errors in the independent grid parameters are proved for linear and nonlinear second-order elliptic equations as well as for eigenvalue problems. Therefore, after solving smaller problems of similar sizes in parallel, a global fine-grid approximation with higher accuracy is computed by the splitting extrapolation method.
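The multi-parameter error expansion behind splitting extrapolation can be illustrated with a toy quadrature analogue (the paper treats finite elements, not quadrature). A 2D trapezoid rule has error of the form c1*h1^2 + c2*h2^2 + higher-order terms, so refining each grid parameter separately and combining the three cheap approximations cancels both leading terms:

```python
import numpy as np

def trap2d(f, n1, n2):
    # composite 2D trapezoid rule on the unit square with independent
    # grid parameters h1 = 1/n1 and h2 = 1/n2
    x = np.linspace(0.0, 1.0, n1 + 1)
    y = np.linspace(0.0, 1.0, n2 + 1)
    w1 = np.full(n1 + 1, 1.0); w1[[0, -1]] = 0.5
    w2 = np.full(n2 + 1, 1.0); w2[[0, -1]] = 0.5
    return (w1 @ f(x[:, None], y[None, :]) @ w2) / (n1 * n2)

f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0) ** 2
T00 = trap2d(f, 8, 8)        # T(h1, h2)   = I + c1*h1^2 + c2*h2^2 + ...
T10 = trap2d(f, 16, 8)       # T(h1/2, h2) = I + c1*h1^2/4 + c2*h2^2 + ...
T01 = trap2d(f, 8, 16)       # T(h1, h2/2) = I + c1*h1^2 + c2*h2^2/4 + ...
# combine so that both leading error terms cancel exactly:
extrapolated = (4.0 * (T10 + T01) - 5.0 * T00) / 3.0
```

Each refined computation touches only one grid parameter, which is what allows the smaller problems to be solved independently in parallel before the combination step.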
Energy Technology Data Exchange (ETDEWEB)
Griebel, M. [Technische Universitaet Muenchen (Germany)
1994-12-31
In recent years, it has turned out that many modern iterative algorithms (multigrid schemes, multilevel preconditioners, domain decomposition methods, etc.) for solving problems resulting from the discretization of PDEs can be interpreted as additive (Jacobi-like) or multiplicative (Gauss-Seidel-like) subspace correction methods. The key to their analysis is the study of certain metric properties of the underlying splitting of the discretization space V into a sum of subspaces V_j, j = 1, ..., J, resp. of the variational problem on V into auxiliary problems on these subspaces. Here, the author proposes a modified approach to the abstract convergence theory of these additive and multiplicative Schwarz iterative methods that makes the relation to traditional iteration methods more explicit. To this end he introduces the enlarged Hilbert space V = V_0 x ... x V_J, which is nothing else but the usual Cartesian product of the Hilbert spaces V_j, and uses it in the discretization process. This results in an enlarged, semidefinite linear system to be solved instead of the usual definite system. Then, modern multilevel methods as well as domain decomposition methods simplify to just traditional (block-) iteration methods. Now, the convergence analysis can be carried out directly for these traditional iterations on the enlarged system, making convergence proofs of multilevel and domain decomposition methods clearer or, at least, more classical. The terms that enter the convergence proofs are exactly those of the classical iterative methods, and it remains to estimate them properly. The convergence proof itself follows basically line by line the old proofs for the respective traditional iterative methods. Additionally, new multilevel/domain decomposition methods are constructed straightforwardly by applying other well-known traditional iterative methods to the enlarged system.
Institute of Scientific and Technical Information of China (English)
付朝江; 张武
2006-01-01
The parallel finite element method using the domain decomposition technique is adapted to a distributed parallel environment on a workstation cluster. An algorithm is presented for parallelization of the preconditioned conjugate gradient method based on domain decomposition. Using the developed code, a dam structural analysis problem is solved on the workstation cluster and results are given. The parallel performance is analyzed.
Using domain decomposition in the Jacobi-Davidson method
Genseberger, M.; Sleijpen, G.L.G.; Vorst, H.A. van der
2000-01-01
The Jacobi-Davidson method is suitable for computing solutions of large $n$-dimensional eigenvalue problems. It needs (approximate) solutions of specific $n$-dimensional linear systems. Here we propose a strategy based on a nonoverlapping domain decomposition technique in order to reduce the wall cl
Using domain decomposition in the Jacobi-Davidson method
Genseberger, M.; Sleijpen, G.L.G.; Vorst, H.A. van der
2001-01-01
The Jacobi-Davidson method is suitable for computing solutions of large n-dimensional eigenvalue problems. It needs (approximate) solutions of specific n-dimensional linear systems. Here we propose a strategy based on a nonoverlapping domain decomposition technique in order to reduce the wall c
DOMAIN DECOMPOSITION METHODS WITH NONMATCHING GRIDS FOR THE UNILATERAL PROBLEM
Institute of Scientific and Technical Information of China (English)
Ping Luo; Guo-ping Liang
2002-01-01
This paper is devoted to the construction of domain decomposition methods with nonmatching grids based on mixed finite element methods for the unilateral problem. The existence and uniqueness of solution are discussed and optimal error bounds are obtained. Furthermore, global superconvergence estimates are given.
QPACE 2 and Domain Decomposition on the Intel Xeon Phi
Arts, Paul; Georg, Peter; Glaessle, Benjamin; Heybrock, Simon; Komatsubara, Yu; Lohmayer, Robert; Mages, Simon; Mendl, Bernhard; Meyer, Nils; Parcianello, Alessio; Pleiter, Dirk; Rappl, Florian; Rossi, Mauro; Solbrig, Stefan; Tecchiolli, Giampietro; Wettig, Tilo; Zanier, Gianpaolo
2015-01-01
We give an overview of QPACE 2, which is a custom-designed supercomputer based on Intel Xeon Phi processors, developed in a collaboration of Regensburg University and Eurotech. We give some general recommendations for how to write high-performance code for the Xeon Phi and then discuss our implementation of a domain-decomposition-based solver and present a number of benchmarks.
A NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR EXTERIOR 3-D PROBLEM
Institute of Scientific and Technical Information of China (English)
De-hao Yu; Ji-ming Wu
2001-01-01
In this paper, a nonoverlapping domain decomposition method, which is based on the natural boundary reduction (cf. [4, 13, 15]), is developed to solve the boundary value problem in an exterior three-dimensional domain of general shape. Convergence analyses both for the exterior spherical domain and the general exterior domain are made. Some numerical examples are also provided to illustrate the method.
Martin, R.; Gonzalez Ortiz, A.
In industry as well as in the geophysical community, multiphase flows are modelled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions for each phase. Pressures and velocities are generally determined at a half mesh step from each other, following the staggered grid approach. This ensures stability and prevents oscillations in pressure, and it allows treatment of almost all Reynolds number ranges for all speeds and viscosities. The disadvantages appear when we want to treat more complex geometries or if a generalized curvilinear formulation of the conservation equations is considered: too many interpolations have to be done and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and a Rhie and Chow (1983) interpolation of the collocated variables, essentially the velocities at the interface. The Rhie and Chow interpolation of the velocities at the finite volume interfaces avoids pressure oscillations and checkerboard effects and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd- and 3rd-order shock-capturing schemes of MUSCL/TVD or Van Leer type; the orthogonal stress components are treated implicitly while cross viscous/diffusion terms are treated explicitly. A pentadiagonal system in 2D or a septadiagonal system in 3D must be solved, but here we have chosen to solve 3 tridiagonal linear systems (the so-called Alternating Direction Implicit algorithm), one in each spatial direction, to reduce the cost of computation. Then a multi-correction of interpolated velocities, pressures and volume fractions of each phase is done in the Cartesian frame or the deformed local curvilinear coordinate system until convergence and mass conservation. At the end, the energy conservation equations are solved. In all this process the
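The ADI step mentioned above reduces each spatial direction to a tridiagonal solve, for which the Thomas algorithm is the standard O(n) tool. The sketch below is not the authors' code; the system size and coefficients are invented for illustration:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (a[0] unused),
    b the diagonal, c the super-diagonal (c[-1] unused), d the right-hand side."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One implicit sweep of a diffusion-like operator along a single direction.
n = 6
a = np.full(n, -1.0); a[0] = 0.0
b = np.full(n, 4.0)
c = np.full(n, -1.0); c[-1] = 0.0
d = np.arange(1.0, n + 1)
x = thomas(a, b, c, d)
```

An ADI scheme would call such a solver once per grid line per direction, which is why it is cheaper than the pentadiagonal (2D) or septadiagonal (3D) alternative.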
Adaptive aggregation-based domain decomposition multigrid for twisted mass fermions
Alexandrou, Constantia; Bacchio, Simone; Finkenrath, Jacob; Frommer, Andreas; Kahl, Karsten; Rottmann, Matthias
2016-12-01
The adaptive aggregation-based domain decomposition multigrid method [A. Frommer et al., SIAM J. Sci. Comput. 36, A1581 (2014)] is extended for two degenerate flavors of twisted mass fermions. By fine-tuning the parameters we achieve a speed-up of the order of a hundred times compared to the conjugate gradient algorithm for the physical value of the pion mass. A thorough analysis of the aggregation parameters is presented, which provides a novel insight into multigrid methods for lattice quantum chromodynamics independently of the fermion discretization.
Domain decomposition methods for a class of integro-partial differential equations
Califano, Giovanna; Conte, Dajana
2016-10-01
This paper deals with the construction of Schwarz Waveform Relaxation (SWR) methods for fractional diffusion-wave equations. SWR methods are a class of domain decomposition algorithms to solve evolution problems in parallel and have been mainly developed and analysed for several kinds of PDEs. We first analyse the convergence behaviour of the classical SWR method applied to fractional diffusion-wave equations, showing that Dirichlet boundary conditions at the artificial interfaces slow down the convergence of the method. Then, we construct optimal SWR methods, by providing the transmission conditions which assure convergence in two iterations.
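The role of Dirichlet transmission conditions at artificial interfaces is easiest to see in the classical (steady-state) alternating Schwarz iteration, of which SWR is the space-time analogue. The sketch below, with an assumed model problem and grid sizes (not the paper's fractional equations), solves -u'' = 1 on two overlapping subdomains, exchanging Dirichlet data at every sweep:

```python
import numpy as np

def schwarz_alternating(n=50, overlap=8, sweeps=100):
    """Classical alternating Schwarz for -u'' = 1, u(0) = u(1) = 0, on a grid
    of n interior points, with two overlapping subdomains and Dirichlet
    transmission conditions at the artificial interfaces."""
    h = 1.0 / (n + 1)
    f = np.ones(n) * h * h
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    u = np.zeros(n + 2)                      # includes the boundary points
    left = slice(1, n // 2 + overlap)        # grid points of subdomain 1
    right = slice(n // 2 - overlap, n + 1)   # grid points of subdomain 2
    for _ in range(sweeps):
        for sub in (left, right):
            idx = np.arange(sub.start, sub.stop)
            Aloc = A[np.ix_(idx - 1, idx - 1)]
            rhs = f[idx - 1].copy()
            rhs[0] += u[sub.start - 1]       # Dirichlet data from neighbour
            rhs[-1] += u[sub.stop]
            u[sub] = np.linalg.solve(Aloc, rhs)
    return u

u = schwarz_alternating()
```

The geometric convergence rate of this iteration depends on the overlap width, which is exactly the kind of dependence optimized transmission conditions are designed to remove.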
DOMAIN DECOMPOSITION WITH NON-MATCHING GRIDS FOR COUPLING OF FEM AND NATURAL BEM
Institute of Scientific and Technical Information of China (English)
YANG Jue; HU Qiya; YU Dehao
2005-01-01
In this paper, we introduce a domain decomposition method with non-matching grids for solving Dirichlet exterior boundary problems by coupling of finite element method(FEM) and natural boundary element method(BEM). We first derive the optimal energy error estimate of the nonconforming approximation generated by this method. Then we apply a Dirichlet-Neumann(D-N) alternating algorithm to solve the coupled discrete system. It will be shown that such iterative method possesses the optimal convergence. The numerical experiments testify our theoretical results.
Domain decomposition methods for solving an image problem
Energy Technology Data Exchange (ETDEWEB)
Tsui, W.K.; Tong, C.S. [Hong Kong Baptist College (Hong Kong)
1994-12-31
The domain decomposition method is a technique to break up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential equations and are thus particularly suited to solving elliptic problems. In this paper, the authors apply the so-called covering preconditioner, which is based on the information of the operator under investigation and is therefore suitable for various kinds of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem: to extract an original image which has been degraded by a known convolution process and additive Gaussian noise.
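The restoration problem named above can be illustrated at small scale: with a known blur, Tikhonov regularization plus conjugate gradients on the normal equations recovers the signal. A 1-D sketch with an invented kernel and sizes, not the paper's covering preconditioner:

```python
import numpy as np

def restore(blurred, kernel, lam=1e-2, iters=200):
    """Tikhonov-regularised restoration: minimise ||K u - b||^2 + lam ||u||^2
    via CG on the normal equations (K^T K + lam I) u = K^T b."""
    n = len(blurred)
    # Dense convolution matrix for the known 1-D blur (illustration only).
    K = np.zeros((n, n))
    half = len(kernel) // 2
    for i in range(n):
        for j, k in enumerate(kernel):
            col = i + j - half
            if 0 <= col < n:
                K[i, col] = k
    M = K.T @ K + lam * np.eye(n)
    rhs = K.T @ blurred
    # Plain CG on the SPD system M u = rhs.
    u = np.zeros(n); r = rhs.copy(); p = r.copy(); rr = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rr / (p @ Mp)
        u += alpha * p; r -= alpha * Mp
        rr_new = r @ r
        if rr_new < 1e-20:
            break
        p = r + (rr_new / rr) * p; rr = rr_new
    return u

signal = np.sin(np.linspace(0, 3, 32))
kernel = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(signal, kernel, mode='same')   # known degradation
restored = restore(blurred, kernel, lam=1e-8)        # noise-free, light damping
```

With noise present, lam would be raised to trade fidelity against noise amplification; a better preconditioner mainly reduces the CG iteration count.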
A PARALLEL NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR STOKES PROBLEMS
Institute of Scientific and Technical Information of China (English)
Mei-qun Jiang; Pei-liang Dai
2006-01-01
A nonoverlapping domain decomposition iterative procedure is developed and analyzed for generalized Stokes problems and their finite element approximate problems in RN(N=2,3). The method is based on a mixed-type consistency condition with two parameters as a transmission condition together with a derivative-free transmission data updating technique on the artificial interfaces. The method can be applied to a general multi-subdomain decomposition and implemented on parallel machines with local simple communications naturally.
Overlapping Domain Decomposition Methods with FreeFem++
Jolivet, Pierre; Hecht, Frédéric; Nataf, Frédéric; Prud'Homme, Christophe
2012-01-01
In this note, the performance of a framework for two-level overlapping domain decomposition methods is assessed. Numerical experiments are run on Curie, a Tier-0 system for PRACE, for two second-order elliptic PDEs with highly heterogeneous coefficients: a scalar diffusivity equation and the system of linear elasticity. Those experiments yield systems with up to ten billion unknowns in 2D and one billion unknowns in 3D, solved on a few thousand cores.
Robustness of the ATLAS pixel clustering neural network algorithm
AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration
2016-01-01
Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.
Robust Fault Diagnosis Algorithm for a Class of Nonlinear Systems
Directory of Open Access Journals (Sweden)
Hai-gang Xu
2015-01-01
A robust fault diagnosis algorithm for a class of Lipschitz nonlinear systems is proposed. The novel disturbance constraint condition of the nonlinear system is derived by the group algebra method, and this constraint condition can meet the system stability performance. Besides, the defined robust performance index of the fault diagnosis observer guarantees robustness. Finally, the effectiveness of the proposed algorithm is demonstrated in simulations.
Duality-based domain decomposition with natural coarse-space for variational inequalities
Dostál, Zdenek; Neto, Francisco A. M. Gomes; Santos, Sandra A.
2000-12-01
An efficient non-overlapping domain decomposition algorithm of Neumann-Neumann type for solving variational inequalities arising from the elliptic boundary value problems with inequality boundary conditions has been presented. The discretized problem is first turned by the duality theory of convex programming into a quadratic programming problem with bound and equality constraints and the latter is further modified by means of orthogonal projectors to the natural coarse space introduced recently by Farhat and Roux. The resulting problem is then solved by an augmented Lagrangian type algorithm with an outer loop for the Lagrange multipliers for the equality constraints and an inner loop for the solution of the bound constrained quadratic programming problems. The projectors are shown to guarantee an optimal rate of convergence of iterative solution of auxiliary linear problems. Reported theoretical results and numerical experiments indicate high numerical and parallel scalability of the algorithm.
Robust message authentication code algorithm for digital audio recordings
Zmudzinski, Sascha; Steinebach, Martin
2007-02-01
Current systems and protocols for integrity and authenticity verification of media data do not distinguish between legitimate signal transformation and malicious tampering that manipulates the content. Furthermore, they usually provide no localization or assessment of the relevance of such manipulations with respect to human perception or semantics. We present an algorithm for a robust message authentication code (RMAC) to verify the integrity of audio recordings by means of robust audio fingerprinting and robust perceptual hashing. Experimental results show that the proposed algorithm provides both a high level of distinction between perceptually different audio data and a high robustness against signal transformations that do not change the perceived information.
Semiempirical robust algorithm for investment portfolio formation
Directory of Open Access Journals (Sweden)
Natalja Kaskevič
2013-03-01
When analyzing stock market data, it is common to encounter observations that differ from the overall pattern. This is known as the problem of robustness. The presence of outlying observations in different data sets may strongly influence the results of classical analysis methods (based on the mean and standard deviation) or models built on such data. The problem of outliers can be handled by using robust estimators, thereby making aberrations less influential or ignoring them completely. An example of applying such procedures for outlier elimination in a stock trading system optimization process is presented.
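A common robust substitute for mean/standard-deviation screening is the median-plus-MAD rule alluded to above. A minimal sketch with made-up return data, not the paper's procedure:

```python
import numpy as np

def mad_filter(x, k=3.0):
    """Flag observations further than k robust standard deviations from the
    median, using the median absolute deviation (MAD) as a robust scale."""
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    scale = 1.4826 * mad          # consistent with sigma for normal data
    return np.abs(x - med) <= k * scale

# Hypothetical daily returns with one aberrant observation.
returns = np.array([0.01, -0.02, 0.015, 0.0, -0.01, 0.02, 0.9, 0.005])
mask = mad_filter(returns)
clean = returns[mask]             # the 0.9 outlier is dropped
```

Unlike the mean and standard deviation, both the median and the MAD have a 50% breakdown point, so a single gross outlier cannot mask itself by inflating the scale estimate.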
A robust DCT domain watermarking algorithm based on chaos system
Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo
2009-10-01
Digital watermarking is a kind of technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images involved in copyright transactions. There are many kinds of digital watermarking algorithms. However, existing digital watermarking algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks, signal processing operations and geometric transformations. Furthermore, anyone without knowledge of the key cannot find the position of the embedded watermark. As a result, the watermark is not easy to modify, so this scheme is secure and robust.
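The general embed/detect pattern for key-driven DCT-domain watermarking can be sketched as below. This is a generic spread-spectrum illustration, not the paper's chaos-array scheme: the block length, mid-frequency band and strength are arbitrary choices, and a 1-D block stands in for an image block:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def embed(block, key, strength=2.0, band=slice(8, 24)):
    """Add a key-driven pseudo-random sequence to mid-band DCT coefficients."""
    C = dct_matrix(len(block))
    coeffs = C @ block
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=band.stop - band.start)
    coeffs[band] += strength * pn
    return C.T @ coeffs               # inverse of an orthonormal transform

def detect(block, key, band=slice(8, 24)):
    """Correlation detector: a large response means the watermark is present."""
    C = dct_matrix(len(block))
    coeffs = C @ block
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=band.stop - band.start)
    return float(coeffs[band] @ pn)

signal = np.cos(np.linspace(0, 2, 32))
wm = embed(signal, key=42)
```

Without the key, the pseudo-random sequence (and hence the embedding positions) cannot be regenerated, which is the security property the abstract claims.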
Finite Algorithms for Robust Linear Regression
DEFF Research Database (Denmark)
Madsen, Kaj; Nielsen, Hans Bruun
1990-01-01
The Huber M-estimator for robust linear regression is analyzed. Newton type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may...
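The Huber M-estimator replaces the squared loss beyond a threshold delta with a linear one. A standard way to compute it, iteratively reweighted least squares rather than the paper's finite Newton methods, is sketched below with synthetic data:

```python
import numpy as np

def huber_irls(X, y, delta=1.0, iters=50):
    """Huber M-estimator for linear regression via iteratively reweighted
    least squares (IRLS); a simple stand-in for the Newton-type methods."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary LS start
    for _ in range(iters):
        r = y - X @ beta
        # Huber weights: quadratic loss inside |r| <= delta, linear outside.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0
y[5] += 50.0                          # one gross outlier
X = np.column_stack([np.ones_like(x), x])
beta = huber_irls(X, y)               # slope stays close to 2
```

The outlier receives weight roughly delta/|r|, so its influence on the fit is bounded, whereas ordinary least squares lets it shift the slope substantially.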
Analysis of a wavelet-based robust hash algorithm
Meixner, Albert; Uhl, Andreas
2004-06-01
This paper is a quantitative evaluation of a wavelet-based, robust authentication hashing algorithm. Based on the results of a series of robustness and tampering sensitivity tests, we describe possible shortcomings and propose various modifications to the algorithm to improve its performance. The second part of the paper describes an attack against the scheme. It allows an attacker to modify a tampered image such that its hash value closely matches the hash value of the original.
Steganography: a class of secure and robust algorithms
Bahi, Jacques M; Guyeux, Christophe
2011-01-01
This research work presents a new class of non-blind information hiding algorithms that are stego-secure and robust. They are based on iterations over finite domains having Devaney's topological chaos property. Thanks to a complete formalization of the approach, we prove security against watermark-only attacks for a large class of steganographic algorithms. Finally, a complete study of robustness is given in the frequency (DWT and DCT) domains.
TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization
2016-11-28
Report AFRL-AFOSR-UK-TR-2017-0001; Horst Hamacher, Technische Universität; performance period 15 May 2013 to 12 May 2016. Abstract excerpt: ...magnitude in computational experiments on portfolio optimization problems. The research on this topic has been published as [CG15a], where details can...
Robust adaptive beamforming algorithm based on Bayesian approach
Institute of Scientific and Technical Information of China (English)
Xin SONG; Jinkuan WANG; Yinghua HAN; Han WANG
2008-01-01
The performance of adaptive array beamforming algorithms substantially degrades in practice because of a slight mismatch between actual and presumed array responses to the desired signal. A novel robust adaptive beamforming algorithm based on a Bayesian approach is therefore proposed. The algorithm responds to the current environment by estimating the direction of arrival (DOA) of the actual signal from observations. Computational complexity of the proposed algorithm can thus be reduced compared with other algorithms since the recursive method is used to obtain the inverse matrix. In addition, it has strong robustness to the uncertainty of the actual signal DOA and makes the mean output array signal-to-interference-plus-noise ratio (SINR) consistently approach the optimum. Simulation results show that the proposed algorithm is better in performance than conventional adaptive beamforming algorithms.
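For reference, the non-robust baseline such algorithms improve upon is the classical MVDR (Capon) beamformer, which is exact only when the presumed steering vector is correct. A minimal sketch with an assumed 8-element uniform linear array and an invented interference scenario:

```python
import numpy as np

def mvdr_weights(R, a):
    """Minimum-variance distortionless-response weights:
    w = R^{-1} a / (a^H R^{-1} a), keeping unit gain toward steering vector a."""
    Ria = np.linalg.solve(R, a)
    return Ria / (a.conj() @ Ria)

def steering(n, theta):
    """Steering vector of an n-element uniform linear array,
    half-wavelength spacing, arrival angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

n = 8
a = steering(n, 0.0)                            # desired signal at broadside
jam = steering(n, 0.5)                          # strong interferer off-axis
R = np.eye(n) + 100.0 * np.outer(jam, jam.conj())   # noise + interference
w = mvdr_weights(R, a)
```

The weights keep unit gain toward the presumed DOA while nulling the interferer; a mismatch in `a` is exactly what degrades this solution and what the Bayesian DOA estimation above is meant to correct.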
Energy Technology Data Exchange (ETDEWEB)
Li,Jing; Tu, Xuemin
2008-12-10
A variant of balancing domain decomposition method by constraints (BDDC) is proposed for solving a class of indefinite system of linear equations, which arises from the finite element discretization of the Helmholtz equation of time-harmonic wave propagation in a bounded interior domain. The proposed BDDC algorithm is closely related to the dual-primal finite element tearing and interconnecting algorithm for solving Helmholtz equations (FETI-DPH). Under the condition that the diameters of the subdomains are small enough, the rate of convergence is established which depends polylogarithmically on the dimension of the individual subdomain problems and which improves with the decrease of the subdomain diameters. These results are supported by numerical experiments of solving a Helmholtz equation on a two-dimensional square domain.
An ellipsoid algorithm for probabilistic robust controller design
Kanev, S.K.; de Schutter, B.; Verhaegen, M.H.G.
2003-01-01
In this paper, a new iterative approach to probabilistic robust controller design is presented, which is applicable to any robust controller/filter design problem that can be represented as an LMI feasibility problem. Recently, a probabilistic Subgradient Iteration algorithm was proposed for solving
Robust face recognition algorithm for identification of disaster victims
Gevaert, Wouter J. R.; de With, Peter H. N.
2013-02-01
We present a robust face recognition algorithm for the identification of occluded, injured and mutilated faces with a limited training set per person. In such cases, the conventional face recognition methods fall short due to specific aspects in the classification. The proposed algorithm involves recursive Principal Component Analysis for reconstruction of affected facial parts, followed by a feature extractor based on Gabor wavelets and uniform multi-scale Local Binary Patterns. As a classifier, a Radial Basis Neural Network is employed. In terms of robustness to facial abnormalities, tests show that the proposed algorithm outperforms conventional face recognition algorithms such as the Eigenfaces approach, Local Binary Patterns and the Gabor magnitude method. To mimic real-life conditions in which the algorithm would have to operate, specific databases were constructed and merged with existing partial databases. Experiments on these particular databases show that the proposed algorithm achieves recognition rates beyond 95%.
A Robust Algorithm for Blind Total Variation Restoration
Institute of Scientific and Technical Information of China (English)
Jing Xu; Qian-shun Chang
2008-01-01
Image restoration is a fundamental problem in image processing. Blind image restoration has great value in practical applications. However, it is not an easy problem to solve due to its complexity and difficulty. In this paper, we combine our robust algorithm for a known blur operator with an alternating minimization implicit iterative scheme to deal with the blind deconvolution problem, recover the image and identify the point spread function (PSF). The only assumption needed is that the PSF makes practical physical sense. Numerical experiments demonstrate that this minimization algorithm is efficient and robust over a wide range of PSFs and gives almost the same results as the known-PSF algorithm.
A Security Enhanced Robust Steganography Algorithm for Data Hiding
Directory of Open Access Journals (Sweden)
Siddharth Singh
2012-05-01
In this paper, a new robust steganography algorithm based on the discrete cosine transform (DCT), the Arnold transform and a chaotic system is proposed. The chaotic system is used to generate a random sequence to be used for spreading data in the middle-frequency-band DCT coefficients of the cover image. The security is further enhanced by scrambling the secret data using the Arnold cat map before embedding. The recovery process is blind. A series of experiments is conducted to prove the security and robustness of the proposed algorithm. The experimental results demonstrate that the proposed algorithm achieves higher security and robustness against JPEG compression, addition of noise, low-pass filtering and cropping attacks as compared to other existing algorithms for data hiding in the DCT domain.
23rd International Conference on Domain Decomposition Methods in Science and Engineering
Cai, Xiao-Chuan; Keyes, David; Kim, Hyea; Klawonn, Axel; Park, Eun-Jae; Widlund, Olof
2017-01-01
This book is a collection of papers presented at the 23rd International Conference on Domain Decomposition Methods in Science and Engineering, held on Jeju Island, Korea on July 6-10, 2015. Domain decomposition methods solve boundary value problems by splitting them into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. Domain decomposition methods have considerable potential for parallelization of the finite element methods, and serve as a basis for distributed, parallel computations.
Energy Technology Data Exchange (ETDEWEB)
Gazzaniga, G.; Sacchi, G. [Istituto di Analisi Numerica, Pavia (Italy)
1995-12-01
Different Domain Decomposition techniques for the solution of elliptic boundary-value problems are considered. The results of the implementation on a parallel distributed memory architecture are discussed.
THE DOMAIN DECOMPOSITION TECHNIQUES FOR THE FINITE ELEMENT PROBABILITY COMPUTATIONAL METHODS
Institute of Scientific and Technical Information of China (English)
LIU Xiaoqi
2000-01-01
In this paper, we shall study the domain decomposition techniques for the finite element probability computational methods. These techniques provide a theoretical basis for parallel probability computational methods.
Non-overlapping domain decomposition methods in structural mechanics
Gosselet, Pierre; 10.1007/BF02905857
2012-01-01
The modern design of industrial structures leads to very complex simulations characterized by nonlinearities, high heterogeneities, tortuous geometries... Whatever the modelization may be, such an analysis leads to the solution of a family of large ill-conditioned linear systems. In this paper we study strategies to efficiently solve these linear systems based on non-overlapping domain decomposition methods. We present a review of the most employed approaches and their strong connections. We outline their mechanical interpretations as well as the practical issues that arise when implementing and using them. Numerical properties are illustrated by various assessments from academic to industrial problems. A hybrid approach, mainly designed for multifield problems, is also introduced as it provides a general framework for such approaches.
Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows
Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.
2001-01-01
A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.
DOMAIN DECOMPOSITION FOR POROELASTICITY AND ELASTICITY WITH DG JUMPS AND MORTARS
GIRAULT, V.
2011-01-01
We couple a time-dependent poroelastic model in a region with an elastic model in adjacent regions. We discretize each model independently on non-matching grids and we realize a domain decomposition on the interface between the regions by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where the computations of the pressure and displacement are decoupled. We show that the matrix of the interface problem is positive definite and establish error estimates for this scheme. © 2011 World Scientific Publishing Company.
A balancing domain decomposition method by constraints for advection-diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Tu, Xuemin; Li, Jing
2008-12-10
The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A pre-conditioned GMRES iteration is used to solve a Schur complement system of equations for the subdomain interface variables. In the preconditioning step of each iteration, a partially sub-assembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established, under the condition that the diameters of subdomains are small enough. It is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.
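The Schur complement system on the interface variables that such methods iterate on can be formed explicitly in small cases. The sketch below uses dense NumPy, a 1-D upwinded advection-diffusion matrix and a single interface node, all invented for illustration, with a direct solve standing in for the preconditioned GMRES iteration:

```python
import numpy as np

def schur_interface_solve(A, b, interior, interface):
    """Eliminate the interior unknowns of the subdomains and solve the reduced
    Schur complement system S x_G = g on the interface, then back-substitute."""
    AII = A[np.ix_(interior, interior)]
    AIG = A[np.ix_(interior, interface)]
    AGI = A[np.ix_(interface, interior)]
    AGG = A[np.ix_(interface, interface)]
    bI, bG = b[interior], b[interface]
    AII_inv_AIG = np.linalg.solve(AII, AIG)
    AII_inv_bI = np.linalg.solve(AII, bI)
    S = AGG - AGI @ AII_inv_AIG          # Schur complement on the interface
    g = bG - AGI @ AII_inv_bI
    xG = np.linalg.solve(S, g)           # in practice: preconditioned GMRES
    xI = AII_inv_bI - AII_inv_AIG @ xG
    x = np.empty_like(b)
    x[interior], x[interface] = xI, xG
    return x

n = 21
# Central diffusion plus an upwinded advection term (nonsymmetric matrix).
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
     + 0.5 * (np.eye(n) - np.eye(n, k=-1)))
b = np.ones(n)
interface = np.array([10])               # one interface node splits the domain
interior = np.array([i for i in range(n) if i != 10])
x = schur_interface_solve(A, b, interior, interface)
```

Removing the interface node decouples the interior block into independent subdomain solves, which is what makes the elimination step parallel; BDDC supplies the preconditioner for the iterative solve of S.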
A novel moving mesh method based on the domain decomposition for traveling singular sources problems
Zhou, Xiaoyan; Liang, Keiwei
2012-01-01
This paper studies the numerical solution of traveling singular source problems. A big challenge is that the sources move with different speeds. Our work focuses on a moving mesh method based on domain decomposition. A predictor-corrector algorithm is derived to simulate the positions of the singular sources, which are described by some ordinary differential equations. The whole domain is split into several subdomains according to the positions of the sources, the endpoints of each subdomain being two adjacent sources. In each subdomain, a moving mesh method is applied. Moreover, the computation of the jump $[\dot{u}]$ is avoided and only two different cases need to be discussed in the discretization of the PDE. Furthermore, the new method achieves the desired second-order spatial convergence. Numerical examples are presented to illustrate the convergence rates and the efficiency of the method. The blow-up phenomenon is also investigated for various motions of the sources.
Kuraz, Michal
2016-06-01
Modelling the transport processes in a vadose zone, e.g. modelling contaminant transport or the effect of the soil water regime on changes in soil structure and composition, plays an important role in predicting the reactions of soil biotopes to anthropogenic activity. Water flow is governed by the quasilinear Richards equation. The paper concerns the implementation of a multi-time-step approach for solving the nonlinear Richards equation. When modelling porous media flow with the Richards equation, a stable finite element approximation requires accurate temporal and spatial integration, due to possible convection dominance and the convergence requirements of the nonlinear solver. The method presented here enables an adaptive domain decomposition algorithm together with a multi-time-step treatment of actively changing subdomains.
Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing
Energy Technology Data Exchange (ETDEWEB)
Widlund, Olof B. [New York Univ. (NYU), NY (United States). Courant Inst.
2015-06-09
The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large-scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver for a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
A Novel and Robust Evolution Algorithm for Optimizing Complicated Functions
Gao, Yifeng; Zhao, Ge
2011-01-01
In this paper, a novel mutation operator for the differential evolution algorithm is proposed. A new algorithm called the divergence differential evolution algorithm (DDEA) is developed by combining the new mutation operator with a divergence operator and an assimilation operator (the divergence operator divides the population, and the assimilation operator recombines it), which can detect multiple solutions and is robust in noisy environments. The new algorithm is applied to optimize the Michalewicz function and to track changes in a rain-induced attenuation process. The results based on DDEA are compared with those based on the Differential Evolution Algorithm (DEA). They show that DDEA obtains better results than DEA under the same conditions. The new algorithm is significant for optimizing and tracking the characteristics of MIMO (Multiple Input Multiple Output) channels at millimeter waves.
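The DEA baseline that DDEA modifies is the classic DE/rand/1/bin scheme: a difference-vector mutation, binomial crossover, and greedy selection. A minimal sketch on the sphere function, with parameters chosen arbitrarily and no claim to match the paper's operators:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin: mutation v = a + F*(b - c), binomial crossover
    with rate CR, and greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Three distinct population members, none equal to i.
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            v = np.clip(a + F * (b - c), lo, hi)      # difference mutation
            mask = rng.random(dim) < CR               # binomial crossover
            mask[rng.integers(dim)] = True            # keep >= 1 mutant gene
            trial = np.where(mask, v, X[i])
            ft = f(trial)
            if ft <= fx[i]:                           # greedy selection
                X[i], fx[i] = trial, ft
    return X[fx.argmin()], fx.min()

best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)),
                                        (np.full(3, -5.0), np.full(3, 5.0)))
```

A divergence operator, as described above, would periodically split this single population into subpopulations to keep multiple optima alive before an assimilation step merges them again.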
A Security Enhanced Robust Steganography Algorithm for Data Hiding
Siddharth Singh; Tanveer J. Siddiqui
2012-01-01
In this paper, a new robust steganography algorithm based on the discrete cosine transform (DCT), Arnold transform and a chaotic system is proposed. The chaotic system is used to generate a random sequence for spreading data in the mid-frequency band DCT coefficients of the cover image. Security is further enhanced by scrambling the secret data using the Arnold cat map before embedding. The recovery process is blind. A series of experiments is conducted to prove the security and robust...
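The Arnold cat map scrambling step mentioned in the abstract is concrete enough to sketch. A minimal pure-Python version for an n-by-n block might look like this (the exact variant and iteration count used in the paper may differ):

```python
def arnold(img):
    """One iteration of the Arnold cat map on an n-by-n array:
    (x, y) -> (x + y mod n, x + 2y mod n). A bijection, so it scrambles
    pixel positions without losing information."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold_inverse(img):
    """Exact inverse map: (x, y) -> (2x - y mod n, y - x mod n)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(2 * x - y) % n][(y - x) % n] = img[x][y]
    return out
```

Because the map is periodic, scrambled data can also be recovered by iterating the forward map the right number of times, which is why it suits blind recovery schemes.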
Robustness of the ATLAS pixel clustering neural network algorithm
Sidebo, Per Edvin; The ATLAS collaboration
2016-01-01
Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. The algorithms depend heavily on accurate estimation of the position of particles as they traverse the inner detector elements. An artificial neural network algorithm is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The method recovers otherwise lost tracks in dense environments where particles are separated by distances comparable to the size of the detector read-out elements. Such environments are highly relevant for LHC Run 2, e.g. in searches for heavy resonances. Within the scope of Run 2 track reconstruction performance and upgrades, the robustness of the neural network algorithm will be presented. The robustness has been studied by evaluating the stability of the algorithm's performance under a range of variations in the pixel detector conditions.
Robust reactor power control system design by genetic algorithm
Energy Technology Data Exchange (ETDEWEB)
Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)
1997-12-31
The H-infinity robust controller for the reactor power control system is designed by use of mixed weight sensitivity. The system is configured into the typical two-port model with which the weighting functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, a genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits direct control of the power tracking performance. In addition, actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm yields better performance under realistic constraints. Also, it is found that the genetic algorithm can be used as an effective tool in robust design. 4 refs., 6 figs. (Author)
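As a rough illustration of using a genetic algorithm to tune a design parameter, here is a minimal real-coded GA minimizing a scalar cost (a single scalar stands in for the paper's weighting functions; the cost, operators and constants are all hypothetical):

```python
import random

def genetic_minimize(cost, lo, hi, pop_size=20, generations=40, seed=0):
    """Minimal real-coded GA: tournament selection, arithmetic crossover,
    Gaussian mutation, and elitism (the best individual always survives)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        best = min(pop, key=cost)                      # elitism
        def pick():                                    # binary tournament
            a, b = rng.sample(pop, 2)
            return a if cost(a) < cost(b) else b
        children = [best]
        while len(children) < pop_size:
            w = rng.random()
            child = w * pick() + (1 - w) * pick()      # arithmetic crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))  # Gaussian mutation
            children.append(min(hi, max(lo, child)))   # clip to bounds
        pop = children
    return min(pop, key=cost)
```

In the paper's setting the "cost" would evaluate the closed-loop power tracking performance for a candidate weighting function, with rod velocity and acceleration limits folded in as constraints.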
Thresholded Covering Algorithms for Robust and Max-Min Optimization
Gupta, Anupam; Ravi, R
2009-01-01
The general problem of robust optimization is this: one of several possible scenarios will appear tomorrow, but things are more expensive tomorrow than they are today. What should you anticipatorily buy today, so that the worst-case cost (summed over both days) is minimized? Feige et al. and Khandekar et al. considered the k-robust model where the possible outcomes tomorrow are given by all demand-subsets of size k, and gave algorithms for the set cover problem, and the Steiner tree and facility location problems in this model, respectively. In this paper, we give the following simple and intuitive template for k-robust problems: "having built some anticipatory solution, if there exists a single demand whose augmentation cost is larger than some threshold, augment the anticipatory solution to cover this demand as well, and repeat". In this paper we show that this template gives us improved approximation algorithms for k-robust Steiner tree and set cover, and the first approximation algorithms for k-robust Ste...
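The quoted template can be sketched directly for set cover with singleton demands (a toy instance: the paper's version handles k-element demand subsets and proves approximation guarantees, which this sketch does not attempt):

```python
def thresholded_set_cover(universe, sets, costs, threshold):
    """Toy threshold-augment template for robust set cover: while some single
    demand's augmentation cost exceeds the threshold, buy a cheapest set
    covering it anticipatorily, and repeat."""
    bought = set()  # indices of anticipatorily bought sets

    def augmentation(e):
        """(cost, index) of a cheapest set covering e, or (0, None) if covered."""
        if any(e in sets[i] for i in bought):
            return 0.0, None
        options = [(costs[i], i) for i in range(len(sets)) if e in sets[i]]
        return min(options) if options else (0.0, None)

    changed = True
    while changed:
        changed = False
        for e in universe:
            c, i = augmentation(e)
            if i is not None and c > threshold:
                bought.add(i)
                changed = True
    return bought
```

After termination, every demand can be covered tomorrow at augmentation cost at most the threshold, which is exactly the invariant the template's analysis trades off against the anticipatory cost.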
A Novel Algorithm for Robust Audio Watermarking in Wavelet Domain
Institute of Scientific and Technical Information of China (English)
FU Yu; WANG Bao-bao; LI Chun-ru; QUAN Ning-qiang
2004-01-01
A novel algorithm for digital audio watermarking in the wavelet domain is proposed. First, an original audio signal is decomposed by the discrete wavelet transform at three levels. Then, a discrete watermark is embedded into the coefficients of its intermediate frequencies. Finally, the watermarked audio signal is obtained by wavelet reconstruction. The proposed algorithm makes good use of the multiresolution characteristics of the wavelet transform. The original audio signal is not needed when detecting the watermark correlatively. Simulation results show that the algorithm is inaudible and robust to noise, filtering and resampling.
A robust chaotic algorithm for digital image steganography
Ghebleh, M.; Kanso, A.
2014-06-01
This paper proposes a new robust chaotic algorithm for digital image steganography based on a 3-dimensional chaotic cat map and lifted discrete wavelet transforms. The irregular outputs of the cat map are used to embed a secret message in a digital cover image. Discrete wavelet transforms are used to provide robustness. Sweldens' lifting scheme is applied to ensure integer-to-integer transforms, thus improving the robustness of the algorithm. The suggested scheme is fast, efficient and flexible. Empirical results are presented to showcase the satisfactory performance of our proposed steganographic scheme in terms of its effectiveness (imperceptibility and security) and feasibility. Comparison with some existing transform domain steganographic schemes is also presented.
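Sweldens' lifting scheme, cited above for guaranteeing integer-to-integer transforms, can be illustrated with a one-level integer Haar transform (a minimal sketch, not the particular lifted wavelet used in the paper):

```python
def haar_lift_fwd(x):
    """One level of integer-to-integer Haar via lifting: predict then update.
    Input length must be even; outputs are exact integers."""
    s, d = [], []
    for i in range(0, len(x) - 1, 2):
        di = x[i + 1] - x[i]     # predict step: detail coefficient
        si = x[i] + (di >> 1)    # update step: approximation (floor halving)
        s.append(si)
        d.append(di)
    return s, d

def haar_lift_inv(s, d):
    """Exact inverse: undo the update, then the predict step."""
    x = []
    for si, di in zip(s, d):
        a = si - (di >> 1)
        x.extend([a, a + di])
    return x
```

Because each lifting step is inverted exactly in integer arithmetic, no rounding error accumulates, which is the property the authors exploit for robustness.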
Analysis of generalized Schwarz alternating procedure for domain decomposition
Energy Technology Data Exchange (ETDEWEB)
Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)
1996-12-31
The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the overlap size between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM. (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
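The classical SAM that the generalized procedure extends can be sketched in 1D for the Poisson problem the authors focus on: -u'' = 1 on [0,1] with u(0)=u(1)=0, two overlapping subdomains, Dirichlet interface data taken from the latest global iterate (grid size, interface positions and sweep counts below are illustrative):

```python
def solve_tridiag(a, b, c, d):
    """Thomas algorithm (a: sub-, b: main, c: super-diagonal, d: rhs)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def schwarz_poisson(n=100, i1=60, i2=40, sweeps=50):
    """Alternating Schwarz for -u'' = 1, u(0)=u(1)=0, with overlapping
    subdomains [0, i1*h] and [i2*h, 1] on a shared grid of n+1 nodes."""
    h = 1.0 / n
    u = [0.0] * (n + 1)  # global iterate, homogeneous Dirichlet BCs
    def subsolve(lo, hi):
        m = hi - lo - 1
        a, b, c = [-1.0] * (m - 1), [2.0] * m, [-1.0] * (m - 1)
        d = [h * h] * m
        d[0] += u[lo]    # Dirichlet data at the artificial interfaces
        d[-1] += u[hi]
        u[lo + 1:hi] = solve_tridiag(a, b, c, d)
    for _ in range(sweeps):
        subsolve(0, i1)  # left subdomain uses current u at node i1
        subsolve(i2, n)  # right subdomain uses updated u at node i2
    return u
```

Shrinking the overlap (moving i1 and i2 together) visibly slows convergence, which is exactly the sensitivity the generalized Robin-type interface condition ∂u/∂n + Λu is designed to remove.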
Fast and Robust Stereo Vision Algorithm for Obstacle Detection
Institute of Scientific and Technical Information of China (English)
Yi-peng Zhou
2008-01-01
Binocular computer vision is inspired by biology: after camera calibration, two synchronized images are used to compute three-dimensional depth information from two-dimensional image pixels. In this paper, a fast and robust stereo vision algorithm is described to perform in-vehicle obstacle detection and characterization. The stereo algorithm, which provides a suitable representation of the geometric content of the road scene, is described, and an in-vehicle embedded system is presented. We present the way in which the algorithm is used, and then report experiments on real situations which show that our solution is accurate, reliable and efficient. In particular, both processes are fast, generic, robust to noise and bad conditions, and work even with partial occlusion.
A Robust Algorithm of Contour Extraction for Vehicle Tracking
Institute of Scientific and Technical Information of China (English)
FANZhimin; ZHOUJie; GAODashan
2003-01-01
Contour extraction of moving vehicles is an important and challenging issue in traffic surveillance. In this paper, a robust algorithm is proposed for contour extraction and moving vehicle tracking. First, we establish a modified snake model and utilize the directional information of the edge map to guide the snaxels' behavior. Then an adaptive shape restriction is embedded into the algorithm to govern the scope of the snake's motion, and a Kalman filter is employed to estimate the spatio-temporal relationship between successive frames. In addition, multiple refinements are suggested to compensate for the snake's vulnerability to fake edges. All of them contribute to a robust overall performance in contour extraction and vehicle tracking. Experimental results in real traffic scenes prove the effectiveness of our algorithm. A comparison with conventional snakes is also provided.
A Robust Zero-Watermarking Algorithm for Audio
Directory of Open Access Journals (Sweden)
Jie Zhu
2008-03-01
In traditional watermarking algorithms, the insertion of a watermark into the host signal inevitably introduces some perceptible quality degradation. Another problem is the inherent conflict between imperceptibility and robustness. The zero-watermarking technique can solve these problems successfully. Instead of embedding a watermark, the zero-watermarking technique extracts some essential characteristics from the host signal and uses them for watermark detection. However, most of the available zero-watermarking schemes are designed for still images and their robustness is not satisfactory. In this paper, an efficient and robust zero-watermarking technique for audio signals is presented. The multiresolution characteristic of the discrete wavelet transform (DWT), the energy compaction characteristic of the discrete cosine transform (DCT), and the Gaussian noise suppression property of higher-order cumulants are combined to extract essential features from the host audio signal, which are then used for watermark recovery. Simulation results demonstrate the effectiveness of our scheme in terms of inaudibility, detection reliability, and robustness.
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
Taverniers, Søren; Tartakovsky, Daniel M.
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton-Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
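The core ingredient of a Jacobian-free Newton-Krylov method is approximating the Jacobian-vector product by a directional finite difference, so the Jacobian is never assembled; a Krylov solver such as GMRES only ever needs this matvec. A minimal sketch (the step-size rule here is a simplistic, hypothetical choice; production codes use more careful scaling):

```python
def jf_matvec(F, u, v, eps=1e-7):
    """Approximate J(u) @ v without forming the Jacobian:
    J(u) v ≈ (F(u + h v) - F(u)) / h for a small directional step h."""
    nrm = sum(vi * vi for vi in v) ** 0.5
    if nrm == 0.0:
        return [0.0] * len(v)
    h = eps / nrm  # simplistic step-size rule (hypothetical)
    Fu = F(u)
    Fp = F([ui + h * vi for ui, vi in zip(u, v)])
    return [(fp - f0) / h for fp, f0 in zip(Fp, Fu)]
```

For F(u) = (u0² + u1, u0·u1) the exact Jacobian at u = (1, 2) applied to v = (1, 1) gives (3, 3), which the finite-difference matvec reproduces to high accuracy.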
Ant Colony Algorithm and Simulation for Robust Airport Gate Assignment
Directory of Open Access Journals (Sweden)
Hui Zhao
2014-01-01
Airport gate assignment is a core task of airport ground operations. Because the departure and arrival times of flights may be influenced by many random factors, a gate assignment scheme may encounter gate conflicts and many other problems. This paper aims at finding a robust solution to the airport gate assignment problem. A mixed integer model is proposed to formulate the problem, and an ant colony algorithm is designed to solve this model. Simulation results show that, when robustness is taken into account, the gate assignment scheme's resistance to disturbances is much improved.
An improved robust ADMM algorithm for quantum state tomography
Li, Kezhi; Zhang, Hui; Kuang, Sen; Meng, Fangfang; Cong, Shuang
2016-06-01
In this paper, an improved adaptive-weights alternating direction method of multipliers (ADMM) algorithm is developed to implement the optimization scheme for recovering quantum states that are nearly pure. The proposed approach is superior to many existing methods because it exploits the low-rank property of density matrices, and it can deal with unexpected sparse outliers as well. Numerical experiments are provided to verify our statements by comparing the results to three different optimization algorithms, using both adaptive and fixed weights in the algorithm, with and without external noise. The results indicate that the improved algorithm has better performance in both estimation accuracy and robustness to external noise. Further simulation results show that the successful recovery rate increases when more qubits are estimated, which satisfies compressive sensing theory and makes the proposed approach more promising.
A ROBUST EYE LOCALIZATION ALGORITHM FOR FACE RECOGNITION
Institute of Scientific and Technical Information of China (English)
Zhang Wencong; Li Xin; Yao Peng; Li Bin; Zhuang Zhenquan
2008-01-01
The accuracy of face alignment greatly affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an algorithm for accurate eye localization is essential for accurate face recognition. In this paper, an algorithm is proposed for eye localization. First, an AdaBoost detector is adaptively trained to segment the eye region based on its characteristic gray-level distribution. After that, a fast radial symmetry operator is used to precisely locate the centers of the eyes. Experimental results show that the method can accurately locate the eyes, and that it is robust to variations in face pose, illumination, expression, and accessories.
Development of a robust algorithm to compute reactive azeotropes
Directory of Open Access Journals (Sweden)
M. H. M. Reis
2006-09-01
In this paper, a novel approach for establishing the route to process intensification through the application of two software tools developed to characterize reactive mixtures is presented. A robust algorithm was developed to build reactive phase diagrams and to predict the existence and location of reactive azeotropes. The proposed algorithm does not depend on initial estimates and is able to compute all reactive azeotropes present in the mixture. It also allows verifying that no azeotropes exist, which is a major difficulty in this kind of computation. An additional program was developed to calculate reactive residue curve maps. Results obtained with the developed program were compared with those published in the literature for several mixtures, showing the efficiency and robustness of the developed software.
TOA-BASED ROBUST LOCATION ALGORITHMS FOR WIRELESS CELLULAR NETWORKS
Institute of Scientific and Technical Information of China (English)
Sun Guolin; Guo Wei
2005-01-01
Because of the Non-Line-Of-Sight (NLOS) propagation effect, non-symmetric contamination of measured Time Of Arrival (TOA) data leads to high inaccuracy in conventional TOA-based mobile location techniques. Robust position estimation methods based on bootstrapping M-estimation and the Huber estimator are proposed to mitigate the effects of NLOS propagation on the location error. Simulation results show the improvement over the traditional Least-Squares (LS) algorithm in location accuracy under different channel environments.
Pioldi, Fabio; Rizzi, Egidio
2017-07-01
Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification scenario) under strong ground motions. Structural responses from earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong ground motion modal parameters. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
An Adaptive Robust Watermarking Algorithm for Audio Signals Using SVD
Dutta, Malay Kishore; Pathak, Vinay K.; Gupta, Phalguni
This paper proposes an efficient watermarking algorithm which embeds watermark data adaptively in the audio signal. The algorithm embeds the watermark in the host audio signal in such a way that the degree of embedding (DOE) is adaptive in nature and is chosen in a justified manner according to the localized content of the audio. The watermark embedding regions are selectively chosen in the high-energy regions of the audio signal, which makes the embedding process robust to synchronization attacks. Synchronization codes are added along with the watermark in the wavelet domain, so the embedded data can be subjected to self-synchronization and the synchronization code can be used as a check to combat false alarms that result from data modification due to watermark embedding. The watermark is embedded by quantization of the singular value decompositions in the wavelet domain, which makes the process perceptually transparent. The experimental results suggest that the proposed algorithm maintains good perceptual quality of the audio signal and good robustness against signal processing attacks. Comparative analysis indicates that the proposed algorithm with adaptive DOE has superior performance in comparison to existing uniform DOE.
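Embedding by quantization of singular values is typically a form of quantization index modulation (QIM). A scalar sketch of that idea follows (the paper applies it to SVD coefficients in the wavelet domain; `delta` is a hypothetical quantization step):

```python
def qim_embed(s, bit, delta=1.0):
    """Snap value s onto one of two interleaved quantizer lattices,
    offset by delta/2, selected by the bit to embed."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return round((s - offset) / delta) * delta + offset

def qim_extract(s, delta=1.0):
    """Blind decoding: pick the lattice closest to the received value."""
    d0 = abs(s - qim_embed(s, 0, delta))
    d1 = abs(s - qim_embed(s, 1, delta))
    return 0 if d0 <= d1 else 1
```

Extraction needs no original signal (it is blind), and any perturbation smaller than delta/4 still decodes correctly, which is the source of the robustness/transparency trade-off controlled by the degree of embedding.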
Dynamic Routing Algorithm for Increasing Robustness in Satellite Networks
Institute of Scientific and Technical Information of China (English)
LI Dong-ni; ZHANG Da-kun
2008-01-01
In low earth orbit (LEO) and medium earth orbit (MEO) satellite networks, the network topology changes rapidly because of the high relative speed of the satellites. When some inter-satellite links (ISLs) fail, they cannot be repaired in a short time. In order to increase the robustness of LEO/MEO satellite networks, an effective dynamic routing algorithm is proposed. All the routes to a certain node are found by constructing a destination oriented acyclic directed graph (DOADG) with the node as the destination. In this algorithm, multiple routes are provided, loop-freedom is guaranteed, and as long as the DOADG is maintained, it is not necessary to reroute even if some ISLs fail. Simulation results show that, compared to conventional routing algorithms, it is more efficient and reliable, costs less transmission overhead and converges faster.
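The DOADG idea can be sketched with BFS levels from the destination: every neighbour strictly closer to the destination is a valid next hop, so each node may hold several loop-free routes and many single-ISL failures require no rerouting. This is a simplified sketch (the paper's actual construction and maintenance procedure may differ; the graph is assumed connected):

```python
from collections import deque

def build_doadg(adj, dest):
    """BFS from the destination assigns levels; each node keeps every
    neighbour strictly closer to dest, yielding a DAG oriented at dest."""
    dist = {dest: 0}
    q = deque([dest])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return {u: [v for v in adj[u] if dist[v] < dist[u]]
            for u in adj if u != dest}

def route(succ, src, dest):
    """Follow any available successor; loop-freedom is guaranteed because
    the BFS level strictly decreases along every hop."""
    path = [src]
    while path[-1] != dest:
        path.append(succ[path[-1]][0])
    return path
```

On a ring of six satellites, a node three hops away keeps two successors; if one ISL fails, the surviving successor still reaches the destination without recomputing the DOADG.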
Farhat, Charbel; Rixen, Daniel
1996-01-01
We present an optimal preconditioning algorithm that is equally applicable to the dual (FETI) and primal (Balancing) Schur complement domain decomposition methods, and which successfully addresses the problems of subdomain heterogeneities including the effects of large jumps of coefficients. The proposed preconditioner is derived from energy principles and embeds a new coarsening operator that propagates the error globally and accelerates convergence. The resulting iterative solver is illustrated with the solution of highly heterogeneous elasticity problems.
A Robust Image Hashing Algorithm Resistant Against Geometrical Attacks
Directory of Open Access Journals (Sweden)
Y.L. Liu
2013-12-01
This paper proposes an image hashing method which is robust against common image processing attacks and geometric distortion attacks. In order to resist geometric attacks, the log-polar mapping (LPM) and contourlet transform are employed to obtain the low frequency sub-band image. Then the sub-band image is divided into non-overlapping blocks, and low and middle frequency coefficients are selected from each block after the discrete cosine transform. The singular value decomposition (SVD) is applied in each block to obtain the first digit of the maximum singular value. Finally, the features are scrambled and quantized as the secure hash bits. Experimental results show that the algorithm is not only resistant to common image processing attacks and geometric distortion attacks, but also discriminative to content changes.
Robust Estimation of Trifocal Tensor Using Messy Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
HUMingxing; YUANBaozong; TANGXiaofang
2003-01-01
Given three partially overlapping views of a scene from which a set of point or line correspondences have been extracted, 3D structure and camera motion parameters can be represented by the trifocal tensor, which is the key to many problems of computer vision among three views. This paper addresses the problem of robustly estimating the trifocal tensor employing a new method based on a messy genetic algorithm, which uses each gene to stand for a triplet of correspondences, and takes every chromosome as a minimal subset for trifocal tensor estimation. The method eventually converges to a near-optimal solution and is relatively unaffected by outliers. Experiments with both synthetic data and real images show that our method is more robust and precise than other typical methods because it can efficiently detect and delete bad corresponding points, which include both bad locations and false matches.
Directory of Open Access Journals (Sweden)
Ran Zhao
2015-01-01
The hybrid solvers based on the integral equation domain decomposition method (HS-DDM) are developed for modeling electromagnetic radiation. Based on the philosophy of "divide and conquer," the IE-DDM divides the original multiscale problem into many closed nonoverlapping subdomains. For adjacent subdomains, Robin transmission conditions ensure the continuity of currents, so the meshes of different subdomains are allowed to be nonconformal. It also allows different fast solvers to be used in different subdomains, based on the properties of each subdomain, to reduce time and memory consumption. Here, the multilevel fast multipole algorithm (MLFMA) and the hierarchical (H-) matrix method are combined in the framework of the IE-DDM to enhance its capability and realize efficient solution of multiscale electromagnetic radiation problems. The MLFMA is used to capture propagating wave physics in large, smooth regions, while H-matrices are used to capture evanescent wave physics in small regions which are discretized with dense meshes. Numerical results demonstrate the validity of the HS-DDM.
Directory of Open Access Journals (Sweden)
Sushanta Ghuku
2016-09-01
In the present bar problem, only one such singularity point, arising from the application of a concentrated axial load, is considered. The governing equation of the problem is derived from the equilibrium condition and expressed in variational form with an assumed displacement field by using the direct variational principle. The computational domain is divided into two sub-domains based on the location of the singularity point within the domain. An approximate solution of the governing equation is obtained by assuming a series expression for the unknown variable and applying Galerkin's principle. This approximation is carried out by a linear combination of sets of orthogonal coordinate functions which satisfy prescribed conditions at three points. The three conditions comprise two boundary conditions and another condition at the point of application of the concentrated load. The solution algorithm is implemented with the help of MATLAB® computational simulation software. The problem is also studied by using an energy functional based variational method, and an identical solution is observed. The present analysis highlights the generalized application of the domain decomposition method based on the variational principle in solving structural problems having singularities in the domain.
Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density
Hohl, A.; Delmelle, E. M.; Tang, W.
2015-07-01
Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
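The adaptive decomposition with buffers can be sketched in 2D with a quadtree (the study uses an octree over space-time; the capacity and bandwidth values here are illustrative):

```python
def decompose(points, bounds, cap=4):
    """Recursive quadtree decomposition (a 2D sketch of the octree idea):
    split a region into four quadrants until each leaf holds at most
    `cap` points, so workload per leaf stays bounded."""
    xmin, ymin, xmax, ymax = bounds
    inside = [p for p in points if xmin <= p[0] < xmax and ymin <= p[1] < ymax]
    if len(inside) <= cap:
        return [(bounds, inside)]
    xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2
    leaves = []
    for b in ((xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
              (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)):
        leaves.extend(decompose(inside, b, cap))
    return leaves

def with_buffer(points, bounds, bandwidth):
    """Include neighbouring points within one kernel bandwidth of the
    subdomain, so density estimates near its boundary avoid edge effects."""
    xmin, ymin, xmax, ymax = bounds
    return [p for p in points
            if xmin - bandwidth <= p[0] < xmax + bandwidth
            and ymin - bandwidth <= p[1] < ymax + bandwidth]
```

Because half-open quadrants partition the domain exactly, every point lands in exactly one leaf; the buffered point set of each leaf is what a processor would actually receive for kernel density evaluation.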
A domain decomposition method for modelling Stokes flow in porous materials
Liu, Guangli; Thompson, Karsten E.
2002-04-01
An algorithm is presented for solving the Stokes equation in large disordered two-dimensional porous domains. In this work, it is applied to random packings of discs, but the geometry can be essentially arbitrary. The approach includes the subdivision of the domain and a subsequent application of boundary integral equations to the subdomains. This gives a block diagonal matrix with sparse off-block components that arise from shared variables on internal subdomain boundaries. The global problem is solved using a biconjugate gradient routine with preconditioning. Results show that the effectiveness of the preconditioner is strongly affected by the subdomain structure, from which a methodology is proposed for the domain decomposition step. A minimum is observed in the solution time versus subdomain size, which is governed by the time required for preconditioning, the time for vector multiplications in the biconjugate gradient routine, the iterative convergence rate and issues related to memory allocation. The method is demonstrated on various domains including a random 1000-particle domain. The solution can be used for efficient recovery of point velocities, which is discussed in the context of stochastic modelling of solute transport.
Robust Algorithm for Face Detection in Color Images
Directory of Open Access Journals (Sweden)
Hlaing Htake Khaung Tin
2012-03-01
A robust algorithm is presented for frontal face detection in color images. Face detection is an important task in facial analysis systems in order to have a priori localized faces in a given image. Applications such as face tracking, facial expression recognition, and gesture recognition, for example, have as a pre-requisite that a face is already located in the given image or image sequence. Facial features such as eyes, nose and mouth are automatically detected based on properties of the associated image regions. On detecting a mouth, a nose and two eyes, a face verification step based on eigenface theory is applied to a normalized search space in the image, relative to the distance between the eye feature points. The experiments were carried out on test images taken from the internet and various other randomly selected sources. The algorithm has also been tested in practice with a webcam, giving near real-time performance and good extraction results.
Robust Algorithm Development for Application of Pinch Analysis on HEN
Directory of Open Access Journals (Sweden)
Ritesh Sojitra
2016-10-01
Since its genesis, Pinch Analysis has been continuously evolving and its application widening, reaching new horizons. The original concept of the pinch approach was quite clear and, because of the flexibility of this approach, innumerable applications have been developed in industry. Consequently, a designer can easily get muddled among these variants. Hence, there was a need for a rigorous and robust model which could guide the optimisation engineer in deciding the applicability of the pinch approach and direct the sequential steps of the procedure in a predefined workflow, so that the precision of the approach is ensured. Exploring the various options for a hands-on algorithm that can be coded and interfaced with a GUI, and keeping in mind the difficulties faced by designers, an effort was made to formulate a new algorithm for the optimisation activity. As such, the work aims at easing application hurdles and providing hands-on information to the developer for use during preparation of new application tools. This paper presents a new algorithm, the application of which ensures the developer does not violate basic pinch rules. To achieve this, intermittent check gates are provided in the algorithm, which eliminate violation of predefined basic pinch rules, design philosophy, and engineering standards, and ensure that constraints are adequately considered. On the other side, its sequential instructions to develop the pinch analysis and reiteration promise Maximum Energy Recovery (MER).
Robust kernel-based tracking algorithm with background contrasting
Institute of Scientific and Technical Information of China (English)
Rongli Liu; Zhongliang Jing
2012-01-01
The mean-shift algorithm has achieved considerable success in object tracking due to its simplicity and efficiency. A color histogram is a common feature in the description of an object. However, the kernel-based color histogram may not have the ability to discriminate the object from a cluttered background. To boost the discriminating ability of the feature, based on background contrasting, this letter presents an improved Bhattacharyya similarity metric for mean-shift tracking. Experiments show that the proposed tracker is more robust against background clutter.
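The improved metric itself is specific to the letter, but the baseline it builds on can be sketched. The toy functions below compute the Bhattacharyya coefficient of normalized histograms and a hypothetical background-contrast penalty; the subtractive weighting and the `alpha` parameter are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms (normalized first)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

def contrast_weighted_similarity(target, candidate, background, alpha=0.5):
    """Reward similarity to the target model while penalizing similarity to
    the surrounding background histogram (a simplified 'background
    contrasting' term; alpha is a made-up trade-off weight)."""
    return bhattacharyya(target, candidate) - alpha * bhattacharyya(background, candidate)
```

A candidate region that resembles the background as much as the target scores lower than one that matches only the target, which is the qualitative effect the letter aims for.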
A robust digital watermarking algorithm based on framelet and SVD
Xiao, Moyan; He, Zhibiao; Quan, Tingwei
2015-12-01
Compared with wavelets, framelets have good time-frequency analysis ability and a redundant characteristic. SVD (Singular Value Decomposition) can extract stable features of images that are not easily destroyed. To further improve watermarking techniques, a robust digital watermarking algorithm based on framelets and SVD is proposed. First, an Arnold transform is applied to the grayscale watermark image. Second, a framelet transform is applied to each host-image block, the blocks being sized according to the watermark. The scrambled watermark is then embedded into the largest singular values produced by applying SVD to each coarse band obtained from the framelet transform of the host-image block. Finally, the inverse SVD and inverse framelet transforms yield the embedded coarse band. Experimental results show that the proposed method performs well in robustness and security under common attacks including noise, cropping, filtering, and JPEG compression. Moreover, the watermark imperceptibility of our method is better than that of wavelet-based embedding, and the robustness is stronger than pure framelet embedding without SVD.
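The central embedding step can be illustrated in isolation: the sketch below perturbs only the largest singular value of a block, and omits the Arnold scrambling and framelet stages entirely. The `strength` parameter and the comparison-based extraction are assumptions for this toy, not the paper's exact scheme.

```python
import numpy as np

def embed_svd_watermark(block, wbit, strength=10.0):
    """Embed one watermark bit by perturbing the largest singular value
    of an image block (framelet transform omitted for brevity)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    s = s.copy()
    s[0] += strength * (1.0 if wbit else -1.0)
    return U @ np.diag(s) @ Vt

def extract_svd_watermark(marked, original):
    """Recover the bit (non-blind) by comparing largest singular values."""
    s_m = np.linalg.svd(marked, compute_uv=False)
    s_o = np.linalg.svd(original, compute_uv=False)
    return s_m[0] > s_o[0]
```

Because singular values are stable under small perturbations of the block, the bit tends to survive mild noise, which is the property the abstract relies on.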
A robust algorithm for the contact of viscoelastic materials
Spinu, S.; Cerlinca, D.
2016-08-01
Existing solutions for the contact problem involving viscoelastic materials often require numerical differentiation and integration, as well as resolution of transcendental equations, which can raise convergence issues. The algorithm advanced in this paper can tackle the contact behaviour of the viscoelastic materials without any convergence problems, for arbitrary contact geometry, arbitrary loading programs and complex constitutive models of linear viscoelasticity. An updated algorithm for the elastic frictionless contact, coupled with a semi-analytical method for the computation of viscoelastic displacement, is employed to solve the viscoelastic contact problem at a series of small time increments. The number of equations in the linear system resulting from the geometrical condition of deformation is set by the number of cells in the contact area, which is a priori unknown. A trial-and-error approach is implemented, resulting in a series of linear systems which are solved on evolving contact areas, until static equilibrium equations and complementarity conditions are fully satisfied for every cell in the computational domain. At any iteration, cells with negative pressure are excluded from the contact area, while cells with negative gap (i.e. cells where the contacting bodies are predicted to overlap) are reincluded. The solution is found when pressure is stabilized in relation to the imposed normal load. This robust algorithm is expected to solve a large variety of contact problems involving viscoelastic materials.
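The trial-and-error active-set loop described above can be sketched for a discrete elastic contact with a known compliance matrix. For brevity this toy prescribes the rigid-body approach `delta` rather than the imposed normal load, and omits the viscoelastic time stepping; it keeps only the include/exclude logic for cells with negative pressure or negative gap.

```python
import numpy as np

def contact_active_set(C, h, delta, tol=1e-12):
    """Trial-and-error active-set solve of the discrete contact problem:
    find p >= 0 with gap g = h + C @ p - delta >= 0 and p * g = 0,
    where C is the compliance matrix and h the initial surface separation."""
    n = len(h)
    active = np.ones(n, dtype=bool)          # trial contact area: all cells
    while True:
        p = np.zeros(n)
        idx = np.where(active)[0]
        # enforce zero gap on the trial contact area
        p[idx] = np.linalg.solve(C[np.ix_(idx, idx)], (delta - h)[idx])
        g = h + C @ p - delta
        neg_p = active & (p < -tol)          # tensile cells leave contact
        neg_g = ~active & (g < -tol)         # overlapping cells re-enter
        if not neg_p.any() and not neg_g.any():
            return p, g                      # complementarity satisfied
        active = (active & ~neg_p) | neg_g
```

Each pass solves a linear system whose size is set by the current trial contact area, exactly as the abstract describes for the cell-based formulation.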
A robust localization algorithm in wireless sensor networks
Institute of Scientific and Technical Information of China (English)
Xin LI; Bei HUA; Yi SHANG; Yan XIONG
2008-01-01
Most state-of-the-art localization algorithms in wireless sensor networks (WSNs) are vulnerable to various kinds of location attacks, whereas the secure localization schemes proposed so far are too complex to apply to power-constrained WSNs. This paper provides a distributed robust localization algorithm called Bilateration that employs a unified way to deal with all kinds of location attacks, as well as other kinds of information distortion caused by node malfunction or abnormal environmental noise. Bilateration directly calculates two candidate positions for every two heard anchors, and then uses the average of a maximum set of close-by candidate positions as the location estimate. The basic idea behind Bilateration is that candidate positions calculated from reasonable (i.e., error-bounded) anchor positions and distance measurements tend to be close to each other, whereas candidate positions calculated from false anchor positions or distance measurements are highly unlikely to be close to each other unless the false information is coordinated. By using bilateration instead of classical multilateration to compute the location estimate, Bilateration requires much lower computational complexity, yet still retains the same localization accuracy. This paper also evaluates and compares Bilateration with three multilateration-based localization algorithms, and the simulation results show that Bilateration achieves the best overall performance and is more suitable for real wireless sensor networks.
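A minimal two-dimensional sketch of the candidate-generation and clustering idea, assuming noise-free ranges and a hypothetical closeness tolerance `tol` (the paper's actual consistency test is more involved):

```python
import math

def circle_intersections(p1, r1, p2, r2):
    """Return the (up to two) intersection points of two range circles."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                  # no usable intersection
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = -(y2 - y1) / d, (x2 - x1) / d         # unit normal to the axis
    return [(mx + h * ox, my + h * oy), (mx - h * ox, my - h * oy)]

def bilateration(anchors, dists, tol=1.0):
    """Average the largest set of mutually close candidate positions."""
    cands = []
    for i in range(len(anchors)):
        for j in range(i + 1, len(anchors)):
            cands += circle_intersections(anchors[i], dists[i], anchors[j], dists[j])
    best = []
    for c in cands:                                # pick densest cluster
        close = [q for q in cands if math.hypot(c[0] - q[0], c[1] - q[1]) <= tol]
        if len(close) > len(best):
            best = close
    n = len(best)
    return (sum(p[0] for p in best) / n, sum(p[1] for p in best) / n)
```

Mirror-image candidates and candidates from corrupted anchors fall outside the dense cluster and are simply never averaged in, which is the robustness mechanism the abstract describes.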
Adaptive Aggregation Based Domain Decomposition Multigrid for the Lattice Wilson Dirac Operator
Frommer, Andreas; Krieg, Stefan; Leder, Björn; Rottmann, Matthias
2013-01-01
In lattice QCD computations a substantial amount of work is spent in solving discretized versions of the Dirac equation. Conventional Krylov solvers show critical slowing down for large system sizes and physically interesting parameter regions. We present a domain decomposition adaptive algebraic multigrid method used as a preconditioner to solve the "clover improved" Wilson discretization of the Dirac equation. This approach combines and improves two approaches, namely domain decomposition and adaptive algebraic multigrid, that have previously been used separately in lattice QCD. We show in extensive numerical tests conducted with a parallel production code implementation that considerable speed-up over conventional Krylov subspace methods, domain decomposition methods, and other hierarchical approaches can be achieved for realistic system sizes.
Fast estimation of discretization error for FE problems solved by domain decomposition
Parret-Fréaud, Augustin; Gosselet, Pierre; Feyel, Frédéric; 10.1016/j.cma.2010.07.002
2012-01-01
This paper presents a strategy for a posteriori error estimation for substructured problems solved by non-overlapping domain decomposition methods. We focus on global estimates of the discretization error obtained through the error in the constitutive relation for linear mechanical problems. Our method computes the error estimate in a fully parallel way for both primal (BDD) and dual (FETI) approaches of non-overlapping domain decomposition, whatever the state (converged or not) of the associated iterative solver. Results obtained on an academic problem show that the proposed strategy is efficient, in the sense that a correct estimate is obtained with fully parallel computations; they also indicate that the estimate of the discretization error reaches sufficient precision within very few iterations of the domain decomposition solver, which makes highly effective adaptive computational strategies practical.
Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method
Directory of Open Access Journals (Sweden)
Qing-He Yao
2014-01-01
The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated by comparison with experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and oscillations appeared in past research reported by other authors. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large-scale computation.
A chaos-based robust wavelet-domain watermarking algorithm
Energy Technology Data Exchange (ETDEWEB)
Zhao Dawei E-mail: davidzhaodw@hotmail.com; Chen Guanrong; Liu Wenbo
2004-10-01
In this paper, a chaos-based watermarking algorithm is developed in the wavelet domain for still images. The wavelet transform is commonly applied for watermarking, where the whole image is transformed in the frequency domain. In contrast to this conventional approach, we apply the wavelet transform only locally. We transform the subimage, which is extracted from the original image, in the frequency domain by using DWT and then embed the chaotic watermark into part of the subband coefficients. As usual, the watermark is detected by computing the correlation between the watermarked coefficients and the watermarking signal, where the watermarking threshold is chosen according to the Neyman-Pearson criterion based on some statistical assumptions. Watermark detection is accomplished without using the original image. Simulation results show that we can gain high fidelity and high robustness, especially under the typical attack of geometric operations.
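The correlation detection step might look as follows in a toy setting. The chaotic sequence and local wavelet transform are replaced here by a seeded bipolar mark on synthetic coefficients, and the embedding strength and threshold are illustrative values rather than being derived from the Neyman-Pearson criterion as in the paper.

```python
import numpy as np

def detect_watermark(coeffs, watermark, threshold):
    """Correlation detector: declare the mark present when the normalized
    correlation between the (possibly) marked coefficients and the
    watermark signal exceeds a fixed threshold."""
    c = np.asarray(coeffs, dtype=float)
    w = np.asarray(watermark, dtype=float)
    corr = float(np.dot(c, w)) / len(c)
    return corr > threshold, corr

rng = np.random.default_rng(0)
w = np.sign(rng.standard_normal(1000))   # bipolar stand-in for a chaotic mark
host = rng.standard_normal(1000)         # unmarked subband coefficients
marked = host + 0.5 * w                  # additive embedding, strength 0.5
present, _ = detect_watermark(marked, w, threshold=0.25)
absent, _ = detect_watermark(host, w, threshold=0.25)
```

Detection needs only the watermark signal and the threshold, not the original image, matching the blind-detection property claimed in the abstract.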
Korneev, V. G.
2012-09-01
BPS is a well-known, efficient, and rather general domain decomposition preconditioner of Dirichlet-Dirichlet type, suggested in the famous series of papers by Bramble, Pasciak and Schatz (1986-1989). Since then, it has served as the origin of a whole family of Dirichlet-Dirichlet type domain decomposition preconditioners-solvers for both h and hp discretizations of elliptic problems. For its original version, designed for h discretizations, the named authors proved the bound O(1 + log^2(H/h)) for the relative condition number under some restrictive conditions on the domain decomposition and finite element discretization. Here H/h is the maximal ratio of the characteristic size H of a decomposition subdomain to the mesh parameter h of its discretization. It was assumed that subdomains are images of the reference unit cube under trilinear mappings. Later, similar bounds for h discretizations were proved for more general domain decompositions defined by means of coarse tetrahedral meshes. These results, accompanied by the development of some special tools of analysis aimed at such decompositions, were summarized in the book of Toselli and Widlund (2005). This paper is also confined to h discretizations. We further expand the range of admissible domain decompositions for constructing BPS preconditioners, allowing decomposition subdomains that are convex polyhedrons satisfying some shape-regularity conditions. We prove a bound for the relative condition number with the same dependence on H/h as in the bound given above. Along the way to this result, we simplify the proof of the so-called abstract bound for the relative condition number of the domain decomposition preconditioner. In the part related to the analysis of the interface sub-problem preconditioning, our technical tools are generalizations of those used by Bramble, Pasciak and Schatz.
Experimental robustness of Fourier Ptychography phase retrieval algorithms
Yeh, Li-Hao; Zhong, Jingshan; Tian, Lei; Chen, Michael; Tang, Gongguo; Soltanolkotabi, Mahdi; Waller, Laura
2015-01-01
Fourier ptychography is a new computational microscopy technique that provides gigapixel-scale intensity and phase images with both wide field-of-view and high resolution. By capturing a stack of low-resolution images under different illumination angles, a nonlinear inverse algorithm can be used to computationally reconstruct the high-resolution complex field. Here, we compare and classify multiple proposed inverse algorithms in terms of experimental robustness. We find that the main sources of error are noise, aberrations and mis-calibration (i.e. model mis-match). Using simulations and experiments, we demonstrate that the choice of cost function plays a critical role, with amplitude-based cost functions performing better than intensity-based ones. The reason for this is that Fourier ptychography datasets consist of images from both brightfield and darkfield illumination, representing a large range of measured intensities. Both noise (e.g. Poisson noise) and model mis-match errors are shown to scale with int...
Directory of Open Access Journals (Sweden)
Jesús García
2012-01-01
The application of a 3D domain decomposition finite-element and spherical-mode expansion for the design of a planar ESPAR (electronically steerable passive array radiator) made with probe-fed circular microstrip patches is presented in this work. A global generalized scattering matrix (GSM) in terms of spherical modes is obtained analytically from the GSMs of the isolated patches by using rotation and translation properties of spherical waves. The whole behaviour of the array is characterized, including all the mutual coupling effects between its elements. This procedure was first validated by analyzing an array of monopoles on a ground plane, and then applied to synthesize a prescribed radiation pattern by optimizing the reactive loads connected to the feeding ports of the array of circular patches by means of a genetic algorithm.
Directory of Open Access Journals (Sweden)
Jingang Liang
2016-06-01
Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing consistent domain partitions in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
Robust and accurate detection algorithm for multimode polymer optical FBG sensor system
DEFF Research Database (Denmark)
Ganziy, Denis; Jespersen, O.; Rose, B.
2015-01-01
We propose a novel dynamic gate algorithm (DGA) for robust and fast peak detection. The algorithm uses a threshold-determined detection window and a center-of-gravity algorithm with bias compensation. Our experiment demonstrates that the DGA method is fast and robust with better stability...
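The published DGA details go beyond this abstract, but a generic threshold-windowed center-of-gravity detector, with the window level subtracted as a crude bias compensation, can be sketched (the `threshold_frac` parameter is an assumption):

```python
def cog_peak(wavelengths, power, threshold_frac=0.5):
    """Center-of-gravity peak detection inside a threshold-determined
    window: keep only samples above a fraction of the maximum, subtract
    that level as a simple bias compensation, then take the centroid."""
    level = threshold_frac * max(power)
    num = den = 0.0
    for wl, p in zip(wavelengths, power):
        if p >= level:
            w = p - level            # bias-compensated weight
            num += wl * w
            den += w
    return num / den
```

Subtracting the window level keeps samples just above the threshold from dragging the centroid toward the window edges, which is the usual motivation for bias compensation in centroid detectors.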
Robust Optimization Design Algorithm for High-Frequency TWTs
Wilson, Jeffrey D.; Chevalier, Christine T.
2010-01-01
Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design-optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space-science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.
Multigrid and multilevel domain decomposition for unstructured grids
Energy Technology Data Exchange (ETDEWEB)
Chan, T.; Smith, B.
1994-12-31
Multigrid has proven itself to be a very versatile method for the iterative solution of linear and nonlinear systems of equations arising from the discretization of PDEs. In some applications, however, no natural multilevel structure of grids is available, and these must be generated as part of the solution procedure. In this presentation the authors consider the problem of generating a multigrid algorithm when only a fine, unstructured grid is given. Their techniques generate a sequence of coarser grids by first forming an approximate maximal independent set of the vertices and then applying a Cavendish-type algorithm to form the coarser triangulation. Numerical tests indicate that convergence using this approach can be as fast as standard multigrid on a structured mesh, at least in two dimensions.
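The first coarsening step, forming an approximate maximal independent set of the vertices, can be sketched greedily (the deterministic sweep order is a simplification; practical parallel variants use random priorities):

```python
def maximal_independent_set(adjacency):
    """Greedy maximal independent set of a graph given as
    {vertex: set_of_neighbors}; the selected vertices seed the coarse grid."""
    mis, excluded = set(), set()
    for v in sorted(adjacency):        # deterministic sweep for clarity
        if v not in excluded:
            mis.add(v)
            excluded.add(v)
            excluded |= adjacency[v]   # neighbors may not join the set
    return mis
```

Maximality guarantees every fine vertex is adjacent to a coarse vertex, so the coarse grid covers the domain; retriangulating the selected vertices then yields the next level.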
A Nitsche-based domain decomposition method for hypersingular integral equations
Chouly, Franz
2011-01-01
We introduce and analyze a Nitsche-based domain decomposition method for the solution of hypersingular integral equations. This method allows for discretizations with non-matching grids without the necessity of a Lagrangian multiplier, as opposed to the traditional mortar method. We prove its almost quasi-optimal convergence and underline the theory by a numerical experiment.
Multiscale analysis of damage using dual and primal domain decomposition techniques
Lloberas-Valls, O.; Everdij, F.P.X.; Rixen, D.J.; Simone, A.; Sluys, L.J.
2014-01-01
In this contribution, dual and primal domain decomposition techniques are studied for the multiscale analysis of failure in quasi-brittle materials. The multiscale strategy essentially consists in decomposing the structure into a number of nonoverlapping domains and considering a refined spatial res
High performance domain decomposition methods on massively parallel architectures with FreeFEM++
Jolivet, Pierre; Dolean, Victorita; Hecht, Frédéric; Nataf, Frédéric; Prud'Homme, Christophe; Spillane, Nicole
2012-01-01
In this document, we present a parallel implementation in FreeFem++ of scalable two-level domain decomposition methods. Numerical studies with highly heterogeneous problems are then performed on large clusters in order to assess the performance of our code.
Comparing the Robustness of Evolutionary Algorithms on the Basis of Benchmark Functions
Directory of Open Access Journals (Sweden)
DENIZ ULKER, E.
2013-05-01
In real-world optimization problems, even though the solution quality is of great importance, the robustness of the solution is also an important aspect. This paper investigates how sensitive optimization algorithms are to variations of their control parameters and to the random initialization of the solution set for fixed control parameters. Three well-known evolutionary algorithms are compared: the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm, and the Harmony Search (HS) algorithm. Various benchmark functions with different characteristics are used for the evaluation of these algorithms. The experimental results show that the solution quality of an algorithm is not directly related to its robustness. In particular, a highly robust algorithm can have low solution quality, and an algorithm with high solution quality can be quite sensitive to parameter variations.
cBathy: A robust algorithm for estimating nearshore bathymetry
Plant, Nathaniel G.; Holman, Rob; Holland, K. Todd
2013-01-01
A three-part algorithm is described and tested to provide robust bathymetry maps based solely on long time-series observations of surface wave motions. The first phase consists of frequency-dependent characterization of the wave field, in which dominant frequencies are estimated by Fourier transform while corresponding wave numbers are derived from spatial gradients in cross-spectral phase over analysis tiles that can be small, allowing high spatial resolution. Coherent spatial structures at each frequency are extracted by frequency-dependent empirical orthogonal function (EOF) analysis. In phase two, depths are found that best fit weighted sets of frequency-wave number pairs. These are subsequently smoothed in time in phase three using a Kalman filter that fills gaps in coverage and objectively averages new estimates of variable quality with prior estimates. Objective confidence intervals are returned. Tests at Duck, NC, using 16 surveys collected over 2 years showed a bias and root-mean-square (RMS) error of 0.19 and 0.51 m, respectively, but errors were largest near the offshore limits of analysis (roughly 500 m from the camera) and near the steep shoreline, where analysis tiles mix information from waves, swash, and static dry sand. Performance was excellent for small waves but degraded somewhat with increasing wave height. Sand bars and their small-scale alongshore variability were well resolved. A single ground-truth survey from a dissipative, low-sloping beach (Agate Beach, OR) showed similar errors over a region that extended several kilometers from the camera and reached depths of 14 m. Vector wave number estimates can also be incorporated into data assimilation models of nearshore dynamics.
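Phase three's objective averaging can be illustrated with a scalar Kalman update that weights a new depth estimate by its reported error variance; this is a simplification of the filter actually used, and the `process_var` term is an assumed stand-in for bathymetric change between observations.

```python
def kalman_depth_update(h_prior, var_prior, h_obs, var_obs, process_var=0.0):
    """One scalar Kalman step: blend a prior depth estimate with a new
    one, weighting by their estimated error variances."""
    var_pred = var_prior + process_var        # prior inflated by process noise
    gain = var_pred / (var_pred + var_obs)    # trust in the new observation
    h_post = h_prior + gain * (h_obs - h_prior)
    var_post = (1.0 - gain) * var_pred
    return h_post, var_post
```

When no new estimate arrives for a tile, only the prediction step runs, so the prior depth persists with growing variance, which is how the filter "fills gaps in coverage."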
Markov chain algorithms: a template for building future robust low-power systems.
Deka, Biplab; Birklykke, Alex A; Duwe, Henry; Mansinghka, Vikash K; Kumar, Rakesh
2014-06-28
Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with an inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs), as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications (Boolean satisfiability, sorting, low-density parity-check decoding, and clustering) how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems.
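As one concrete instance, Boolean satisfiability cast as a Markov chain amounts to WalkSAT-style random flips, where an occasional erroneous transition merely perturbs the chain instead of corrupting a deterministic invariant. This is a minimal sketch under that interpretation, not the authors' implementation.

```python
import random

def mc_sat(clauses, n_vars, steps=10000, seed=1):
    """Markov-chain SAT search: from a random assignment, repeatedly pick
    an unsatisfied clause and flip one of its variables at random.
    A literal v > 0 means variable v-1 is true; v < 0 means it is false."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def sat(clause):
        return any(assign[abs(l) - 1] == (l > 0) for l in clause)

    for _ in range(steps):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                      # all clauses satisfied
        lit = rng.choice(rng.choice(unsat))    # random clause, random literal
        assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None
```

A spurious extra flip caused by a hardware fault is indistinguishable from one of the chain's own random moves, so the search degrades gracefully rather than failing, which is the fault-tolerance argument of the paper.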
$\ell_1$-K-SVD: A Robust Dictionary Learning Algorithm With Simultaneous Update
Mukherjee, Subhadip; Basu, Rupam; Seelamantula, Chandra Sekhar
2014-01-01
We develop a dictionary learning algorithm by minimizing the $\ell_1$ distortion metric on the data term, which is known to be robust to non-Gaussian noise contamination. The proposed algorithm exploits the idea of iterative minimization of a weighted $\ell_2$ error. We refer to this algorithm as $\ell_1$-K-SVD, where the dictionary atoms and the corresponding sparse coefficients are simultaneously updated to minimize the $\ell_1$ objective, resulting in noise-robustness. We demonstrate throug...
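The iteratively reweighted $\ell_2$ idea can be shown on a plain $\ell_1$ regression, leaving out the K-SVD dictionary machinery entirely; the `eps` floor on the residuals, which keeps the weights bounded, is a standard IRLS safeguard rather than a detail from this paper.

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    """Minimize ||Ax - b||_1 by iteratively reweighted least squares:
    each pass solves a weighted l2 problem with weights 1/|residual|."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # l2 solution as a start
    for _ in range(iters):
        r = np.abs(A @ x - b)
        w = 1.0 / np.maximum(r, eps)           # large residuals downweighted
        W = A * w[:, None]
        x = np.linalg.solve(A.T @ W, A.T @ (w * b))
    return x
```

Because outliers get small weights, the fit is pulled toward the bulk of the data (the median, in the scalar case) rather than the mean, which is the robustness property the abstract invokes.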
Domain Decomposition of a Constructive Solid Geometry Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
O' Brien, M J; Joy, K I; Procassini, R J; Greenman, G M
2008-12-07
Domain decomposition has been implemented in a Constructive Solid Geometry (CSG) Monte Carlo neutron transport code. Previous methods to parallelize a CSG code relied entirely on particle parallelism; but in our approach we distribute the geometry as well as the particles across processors. This enables calculations whose geometric description is larger than what could fit in memory of a single processor, thus it must be distributed across processors. In addition to enabling very large calculations, we show that domain decomposition can speed up calculations compared to particle parallelism alone. We also show results of a calculation of the proposed Laser Inertial-Confinement Fusion-Fission Energy (LIFE) facility, which has 5.6 million CSG parts.
Moussawi, Ali
2015-02-24
Summary: The post-treatment of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.
A domain decomposition study of massively parallel computing in compressible gas dynamics
Energy Technology Data Exchange (ETDEWEB)
Wong, C.C.; Blottner, F.G.; Payne, J.L. [Sandia National Labs., Albuquerque, NM (United States); Soetrisno, M. [Amtec Engineering, Inc., Bellevue, WA (United States)
1995-01-01
The appropriate utilization of massively parallel computers for solving the Navier-Stokes equations is investigated from an engineering perspective. The issues investigated are: (1) Should strip or patch domain decomposition of the spatial mesh be used to reduce computer time? (2) How many computer nodes should be used for a problem with a given-sized mesh to reduce computer time? (3) Is the convergence of the Navier-Stokes solution procedure (LU-SGS) adversely influenced by the domain decomposition approach? The results of the paper show that the present Navier-Stokes solution technique performs well on a massively parallel computer for transient flow problems. For steady-state problems with a large number of mesh cells, the solution procedure will require significant computer time due to an increased number of iterations to achieve a converged solution. There is an optimum number of computer nodes to use for a problem with a given global mesh size.
A domain decomposition method for the efficient direct simulation of aeroacoustic problems
Utzmann, Jens
2008-01-01
A novel domain decomposition approach is developed in this thesis, which significantly accelerates the direct simulation of aeroacoustic problems. All relevant scales must be resolved with high accuracy, from the small, noise generating flow features (e.g., vortices) to the sound with small pressure amplitudes and large wavelengths. Furthermore, the acoustic waves must be propagated over great distances and without dissipation and dispersion errors. In order to keep the computational effort w...
An improved convergence bound for aggregation-based domain decomposition preconditioners.
Energy Technology Data Exchange (ETDEWEB)
Shadid, John Nicolas; Sala, Marzio; Tuminaro, Raymond Stephen
2005-06-01
In this paper we present a two-level overlapping domain decomposition preconditioner for the finite-element discretization of elliptic problems in two and three dimensions. The computational domain is partitioned into overlapping subdomains, and a coarse space correction, based on aggregation techniques, is added. Our definition of the coarse space does not require the introduction of a coarse grid. We consider a set of assumptions on the coarse basis functions to bound the condition number of the resulting preconditioned system. These assumptions involve only geometrical quantities associated with the aggregates and the subdomains. We prove that the condition number using the two-level additive Schwarz preconditioner is O(H/δ + H₀/δ), where H and H₀ are the diameters of the subdomains and the aggregates, respectively, and δ is the overlap among the subdomains and the aggregates. This extends the bounds presented in [C. Lasser and A. Toselli, Convergence of some two-level overlapping domain decomposition preconditioners with smoothed aggregation coarse spaces, in Recent Developments in Domain Decomposition Methods, Lecture Notes in Comput. Sci. Engrg. 23, L. Pavarino and A. Toselli, eds., Springer-Verlag, Berlin, 2002, pp. 95-117; M. Sala, Domain Decomposition Preconditioners: Theoretical Properties, Application to the Compressible Euler Equations, Parallel Aspects, Ph.D. thesis, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland, 2003; M. Sala, Math. Model. Numer. Anal., 38 (2004), pp. 765-780]. Numerical experiments on a model problem are reported to illustrate the performance of the proposed preconditioner.
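For contrast with the two-level bound above, a one-level additive Schwarz preconditioner (no coarse/aggregation space) is easy to sketch on a 1D Poisson model; the subdomain split, the overlap width, and the 0.5 damping in the Richardson iteration are illustrative choices, not values from the paper.

```python
import numpy as np

def poisson_1d(n):
    """Tridiagonal finite-difference matrix for -u'' on n interior points."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def additive_schwarz(A, r, subdomains):
    """One-level additive Schwarz: sum of local solves R_i^T A_i^{-1} R_i r."""
    z = np.zeros_like(r)
    for idx in subdomains:
        Ai = A[np.ix_(idx, idx)]
        z[idx] += np.linalg.solve(Ai, r[idx])
    return z

n = 20
A = poisson_1d(n)
b = np.ones(n)
subs = [list(range(0, 12)), list(range(8, 20))]   # two subdomains, overlap 4
x = np.zeros(n)
for _ in range(200):                              # damped Richardson iteration
    x = x + 0.5 * additive_schwarz(A, b - A @ x, subs)
```

With only two generously overlapping subdomains this converges quickly; the coarse aggregation space of the paper becomes essential once the number of subdomains grows and one-level convergence deteriorates.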
Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition
Directory of Open Access Journals (Sweden)
Cécile Germain‐Renaud
1999-01-01
Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data parallelism is possible, but requires most of the tools from the Java palette: the Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks, and threads.
Energy Technology Data Exchange (ETDEWEB)
Jemcov, A.; Matovic, M.D. [Queen's Univ., Kingston, Ontario (Canada)]
1996-12-31
This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second-order self-adjoint elliptic operator (Poisson equation), with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows a sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is by ordering the matrix of the discrete differential operator into subdomain node groups and then block-eliminating the interface nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem for the subdomain interface.
Energy Technology Data Exchange (ETDEWEB)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)
2005-07-01
This paper describes a recently-developed extension of our 'Multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even parity equation, cartesian meshes). Then, the coupling method is applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh in describing a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach, in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time, while preserving a good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)
A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis
Jokhio, G. A.; Izzuddin, B. A.
2015-05-01
This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
Robust K-Median and K-Means Clustering Algorithms for Incomplete Data
Directory of Open Access Journals (Sweden)
Jinhua Li
2016-01-01
Full Text Available Incomplete data with missing feature values are prevalent in clustering problems. Traditional clustering methods first estimate the missing values by imputation and then apply the classical clustering algorithms for complete data, such as K-median and K-means. However, in practice, it is often hard to obtain an accurate estimation of the missing values, which deteriorates the performance of clustering. To enhance the robustness of clustering algorithms, this paper represents the missing values by interval data and introduces the concept of a robust cluster objective function. A minimax robust optimization (RO) formulation is presented to provide clustering results, which are insensitive to estimation errors. To solve the proposed RO problem, we propose robust K-median and K-means clustering algorithms with low time and space complexity. Comparisons and analysis of experimental results on both artificially generated and real-world incomplete data sets validate the robustness and effectiveness of the proposed algorithms.
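A minimal sketch of the interval idea: each missing feature becomes an interval, the assignment step uses the worst-case (inner maximum) distance, and centres are updated from interval midpoints. The function names and the midpoint update rule are illustrative simplifications, not the paper's exact RO formulation:

```python
import random

def interval_dist_sq(x, c):
    """Worst-case squared distance between an interval point x
    (a list of (lo, hi) pairs; known values have lo == hi) and a
    crisp centre c -- the inner maximum of a minimax objective."""
    d = 0.0
    for (lo, hi), cj in zip(x, c):
        d += max((lo - cj) ** 2, (hi - cj) ** 2)
    return d

def robust_kmeans(points, k, iters=50, seed=0):
    """K-means on interval data: worst-case assignment, midpoint update."""
    rng = random.Random(seed)
    centres = [[(lo + hi) / 2 for lo, hi in p] for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: interval_dist_sq(p, centres[j]))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # new centre = mean of interval midpoints, per dimension
                centres[j] = [sum((lo + hi) / 2 for lo, hi in dim) / len(cl)
                              for dim in zip(*cl)]
    return centres

pts = [[(0.0, 0.0)], [(0.2, 0.4)], [(10.0, 10.0)], [(9.8, 10.2)]]
centres = robust_kmeans(pts, k=2)
```

On this toy 1D data the two centres land at the midpoint means of the two well-separated groups.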
RoPEUS: A New Robust Algorithm for Static Positioning in Ultrasonic Systems
Directory of Open Access Journals (Sweden)
Christophe Croux
2009-06-01
Full Text Available A well known problem for precise positioning in real environments is the presence of outliers in the measurement sample. Its importance is even bigger in ultrasound based systems since this technology needs a direct line of sight between emitters and receivers. Standard techniques for outlier detection in range based systems do not usually employ robust algorithms, failing when multiple outliers are present. The direct application of standard robust regression algorithms fails in static positioning (where only the current measurement sample is considered) in real ultrasound based systems, mainly due to the limited number of measurements and the geometry effects. This paper presents a new robust algorithm, called RoPEUS, based on MM estimation, that follows a typical two-step strategy: (1) a high breakdown point algorithm to obtain a clean sample, and (2) a refinement algorithm to increase the accuracy of the solution. The main modifications proposed to the standard MM robust algorithm are a built-in check of partial solutions in the first step (rejecting bad geometries) and the off-line calculation of the scale of the measurements. The algorithm is tested with real samples obtained with the 3D-LOCUS ultrasound localization system in an ideal environment without obstacles. These measurements are corrupted with typical outlying patterns to numerically evaluate the algorithm performance with respect to the standard parity space algorithm. The algorithm proves to be robust under single or multiple outliers, providing similar accuracy figures in all cases.
Incremental Sampling Algorithms for Robust Propulsion Control Project
National Aeronautics and Space Administration — Aurora Flight Sciences proposes to develop a system for robust engine control based on incremental sampling, specifically Rapidly-Expanding Random Tree (RRT)...
Institute of Scientific and Technical Information of China (English)
Yi-rang Yuan
2007-01-01
For a coupled system of multilayer dynamics of fluids in porous media, characteristic finite element domain decomposition procedures applicable to parallel computation are put forward. Techniques such as the calculus of variations, domain decomposition, the characteristic method, negative norm estimates, energy methods, and the theory of a priori estimates are adopted. Optimal order estimates in the L2 norm are derived for the error in the approximate solution.
Directory of Open Access Journals (Sweden)
Xing-cai Liu
2014-01-01
Full Text Available The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Seeing that the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented. It combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The encoding design and the steps of the algorithm are presented. Results on an example demonstrate that the model and algorithm are effective. Compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.
The genetic algorithm: A robust method for stress inversion
Thakur, Prithvi; Srivastava, Deepak C.; Gupta, Pravin K.
2017-01-01
The stress inversion of geological or geophysical observations is a nonlinear problem. In most existing methods, it is solved by linearization, under certain assumptions. These linear algorithms not only oversimplify the problem but also are vulnerable to entrapment of the solution in a local optimum. We propose the use of a nonlinear heuristic technique, the genetic algorithm, which searches the global optimum without making any linearizing assumption or simplification. The algorithm mimics the natural evolutionary processes of selection, crossover and mutation, and minimizes a composite misfit function for searching the global optimum, the fittest stress tensor. The validity and efficacy of the algorithm are demonstrated by a series of tests on synthetic and natural fault-slip observations in different tectonic settings and also in situations where the observations are noisy. It is shown that the genetic algorithm is superior to other commonly practised methods, in particular, in those tectonic settings where none of the principal stresses is directed vertically and/or the given data set is noisy.
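The selection/crossover/mutation loop the abstract describes can be sketched generically. The population size, rates, and Gaussian mutation below are illustrative choices, and a simple quadratic misfit stands in for the composite misfit over fault-slip data:

```python
import random

def genetic_minimise(misfit, bounds, pop=60, gens=80, seed=1):
    """Minimal GA sketch: elitist selection, one-point crossover,
    Gaussian mutation. Rates and sizes are illustrative, not the
    parameters used in the paper."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=misfit)
        elite = P[:pop // 2]                       # selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(dim)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.3:                 # mutation
                j = rng.randrange(dim)
                lo, hi = bounds[j]
                child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.3)))
            children.append(child)
        P = elite + children
    return min(P, key=misfit)
```

Because the elite are carried over unchanged, the best misfit is non-increasing across generations, while crossover and mutation keep exploring the search space.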
Robust signal recovery algorithm for structured perturbation compressive sensing
Institute of Scientific and Technical Information of China (English)
Youhua Wang; Jianqiu Zhang
2016-01-01
It is understood that sparse signal recovery with a standard compressive sensing (CS) strategy requires the measurement matrix to be known as a priori. The measurement matrix is, however, often perturbed in a practical application. In order to handle such a case, an optimization problem exploiting the sparsity characteristics of both the perturbations and signals is formulated. An algorithm named the sparse perturbation signal recovery algorithm (SPSRA) is then proposed to solve the formulated optimization problem. The analytical results show that our SPSRA can simultaneously recover the signal and perturbation vectors in an alternating iteration, while the convergence of the SPSRA is also analytically derived and guaranteed. Moreover, the support patterns of the sparse signal and the structured perturbation are shown to be the same and can be exploited to improve the estimation accuracy and reduce the computational complexity of the algorithm. The numerical simulation results verify the effectiveness of the analytical ones.
Robust Algorithms for Multiple View Geometry: Outliers and Optimality
Olof Enqvist
2011-01-01
This thesis is concerned with the geometrical parts of computer vision, or more precisely, with the three-dimensional geometry. The overall aim is to extract geometric information from a set of images. Most methods for estimating the geometry of multiple views rely on the existence of robust solvers for a set of basic problems. Such a basic problem can be estimating the relative orientation of two cameras or estimating the position of a camera given a model of the scene. The f...
Analysis of a Reflectarray by Using an Iterative Domain Decomposition Technique
Directory of Open Access Journals (Sweden)
Carlos Delgado
2012-01-01
Full Text Available We present an efficient method for the analysis of different objects that may contain a complex feeding system and a reflector structure. The approach is based on a domain decomposition technique that divides the geometry into several parts to minimize the vast computational resources required when applying a full wave method. This technique is also parallelized by using the Message Passing Interface to minimize the memory and time requirements of the simulation. A reflectarray analysis serves as an example of the proposed approach.
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, Palle; Zhang, Lingmi
2007-01-01
As a part of a research project co-funded by the European Community, a series of 15 damage tests were performed on a prestressed concrete highway bridge in Switzerland. The ambient response of the bridge was recorded for each damage case. A dense array of instruments allowed the identification...... of a modal model with a total of 408 degrees of freedom. Six modes were identified in the frequency range from 0 to 16.7 Hz. The objective of this paper is to demonstrate the effectiveness of the Frequency Domain Decomposition (FDD) technique for modal identification of large structures. A second objective...
Local Exponential Methods: a domain decomposition approach to exponential time integration of PDEs
Bonaventura, Luca
2015-01-01
A local approach to the time integration of PDEs by exponential methods is proposed, motivated by theoretical estimates by A. Iserles on the decay of off-diagonal terms in the exponentials of sparse matrices. An overlapping domain decomposition technique is outlined, which allows one to replace the computation of a global exponential matrix by a number of independent and easily parallelizable local problems. Advantages and potential problems of the proposed technique are discussed. Numerical experiments on simple, yet relevant model problems show that the resulting method increases computational efficiency with respect to standard implementations of exponential methods.
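The core idea, replacing one global matrix exponential by independent local exponentials on overlapping index windows, can be sketched for a banded operator. The window and overlap sizes and the truncated-Taylor helper are ad hoc choices for illustration, not the method's actual construction:

```python
import numpy as np

def expm_taylor(A, terms=30):
    """Dense matrix exponential by truncated Taylor series
    (adequate here because the blocks have small norm)."""
    E, T = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

def local_exp_apply(A, v, width=8, overlap=6):
    """Approximate exp(A) @ v for banded A via overlapping local
    exponentials, keeping only each window's interior rows."""
    n = len(v)
    out = np.zeros(n)
    start = 0
    while start < n:
        lo, hi = max(0, start - overlap), min(n, start + width + overlap)
        idx = np.arange(lo, hi)
        local = expm_taylor(A[np.ix_(idx, idx)]) @ v[idx]
        keep = np.arange(start, min(n, start + width))
        out[keep] = local[keep - lo]
        start += width
    return out

# banded model operator: A = -I + 0.5*(shift + shift^T), ||A|| <= 2
n = 32
A = -np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
v = np.linspace(-1.0, 1.0, n)
approx = local_exp_apply(A, v)
exact = expm_taylor(A) @ v
```

The off-diagonal decay of exp(A) for banded A is exactly what makes the discarded window tails small, so the local results agree with the global exponential to within the overlap-dependent truncation error.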
Energy Technology Data Exchange (ETDEWEB)
Feng, Xiaobing [Univ. of Tennessee, Knoxville, TN (United States)
1996-12-31
A non-overlapping domain decomposition iterative method is proposed and analyzed for mixed finite element methods for a sequence of noncoercive elliptic systems with radiation boundary conditions. These differential systems describe the motion of a nearly elastic solid in the frequency domain. The convergence of the iterative procedure is demonstrated and the rate of convergence is derived for the case when the domain is decomposed into subdomains in which each subdomain consists of an individual element associated with the mixed finite elements. The hybridization of mixed finite element methods plays an important role in the construction of the discrete procedure.
Institute of Scientific and Technical Information of China (English)
Ju-e Yang; De-hao Yu
2006-01-01
In this paper, we are concerned with a non-overlapping domain decomposition method (DDM) for exterior transmission problems in the plane. Based on the natural boundary integral operator, we combine the DDM with a Dirichlet-to-Neumann (DtN) mapping and provide the numerical analysis with nonmatching grids. The weak continuity of the approximate solutions on the interface is imposed by a dual basis multiplier. We show that this multiplier space yields optimal error estimates and obtain the corresponding rate of convergence. Finally, several numerical examples confirm the theoretical results.
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
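An additive Schwarz preconditioner of the kind referred to above can be sketched on a 1D model problem. The subdomain split and the damped Richardson outer iteration below are illustrative choices, not the paper's setup:

```python
import numpy as np

def additive_schwarz(A, subdomains):
    """Additive Schwarz preconditioner action:
    M^{-1} r = sum_i R_i^T (R_i A R_i^T)^{-1} R_i r,
    with R_i the restriction to subdomain index set i."""
    blocks = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in subdomains]
    def apply(r):
        z = np.zeros_like(r)
        for idx, Ainv in blocks:
            z[idx] += Ainv @ r[idx]   # solve locally, add contributions
        return z
    return apply

# 1D Laplacian, two overlapping subdomains, preconditioned Richardson
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M = additive_schwarz(A, [np.arange(0, 24), np.arange(16, 40)])
x = np.zeros(n)
for _ in range(300):
    x = x + 0.5 * M(b - A @ x)
```

With a generous overlap the preconditioned spectrum is well conditioned, so the damped fixed-point iteration converges quickly; in practice the same operator `M` would be handed to CG.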
Institute of Scientific and Technical Information of China (English)
Ji-ming Wu; De-hao Yu
2000-01-01
In this paper, the overlapping domain decomposition method, which is based on the natural boundary reduction [1] and was first suggested in [2], is applied to solve the exterior boundary value problem of the harmonic equation over a three-dimensional domain. The convergence and error estimates both for the continuous case and the discrete case are given. The contraction factor for the exterior spherical domain is also discussed. Moreover, numerical results are given which show that the accuracy and the convergence are in accord with the theoretical analyses.
Directory of Open Access Journals (Sweden)
Johan Soewanda
2007-01-01
Full Text Available This paper discusses the application of a Robust Hybrid Genetic Algorithm to solve a flow-shop scheduling problem. The proposed algorithm attempted to reach the minimum makespan. PT. FSCM Manufacturing Indonesia Plant 4's case was used as a test case to evaluate the performance of the proposed algorithm. The proposed algorithm was compared to Ant Colony, Genetic-Tabu, Hybrid Genetic Algorithm, and the company's algorithm. We found that the Robust Hybrid Genetic Algorithm produces a statistically better result than the company's, but the same as Ant Colony, Genetic-Tabu, and Hybrid Genetic. In addition, the Robust Hybrid Genetic Algorithm required less computational time than the Hybrid Genetic Algorithm.
A Robust Algorithm for Real-time Endpoint Detection in the Noisy Mobile Environments
Institute of Scientific and Technical Information of China (English)
WU Bian; REN Xiaolin; LIU Chongqing; ZHANG Yaxin
2003-01-01
In speech recognition, endpoint detection must be robust to noise. In low SNR situations, the conventional energy-based endpoint detection algorithms often fail and the performance of the speech recognizer usually degrades distinctly, especially in mobile environments, where the background noise changes dramatically. In this paper, we propose a new algorithm that improves endpoint detection for speech recognition in low SNR and in various noisy environments. The described algorithm not only uses multiple features but also introduces a decision logic to increase the robustness in both low SNR and various noisy mobile environments. To evaluate the new algorithm, we carry out experiments in various noisy mobile environments (e.g., railway station, airport, street), and the performance of the algorithm is significantly improved, especially in low SNR situations. At the same time, the proposed algorithm has a low complexity and is suitable for real-time embedded systems.
Algorithms and Array Design Criteria for Robust Imaging in Interferometry
2016-04-01
The use of optical interferometry as a multi-aperture imaging approach is attracting increasing... on the scene's compactness, sparsity, or smoothness). In particular, a myriad of so-called self-calibration algorithms have been developed (see, e.g
The Genetic Algorithm: A Robust Method for Stress Inversion
Thakur, P.; Srivastava, D. C.; Gupta, P. K.
2016-12-01
The knowledge of stress states in the Earth's crust is a fundamental objective in many tectonic, seismological and engineering geological studies. Geologists and geophysicists routinely practice methods for determination of the stress tensor from inversion of observations on stress indicators, such as faults, earthquakes and calcite twin lamellae. While the stress inversion is essentially a nonlinear problem, it is commonly solved by linearization, under some assumptions, in most existing methods. These algorithms not only oversimplify the problem but are also vulnerable to entrapment of the solution in a local optimum. We propose a nonlinear heuristic technique, the genetic algorithm method, that searches the global optimum without making any linearizing assumption or simplification. The method mimics the natural evolutionary processes of selection, crossover and mutation, and minimises the composite misfit function for searching the global optimum, the fittest stress tensor. The validity of the method is successfully tested on synthetic fault-slip observations in different tectonic settings and also in situations where the observations contain noisy data. These results are compared with those obtained from the other common methods. The genetic algorithm method is superior to other common methods, in particular, in the oblique tectonic settings where none of the principal stresses is directed vertically.
Johan Soewanda; Tanti Octavia; Iwan Halim Sahputra
2007-01-01
This paper discusses the application of Robust Hybrid Genetic Algorithm to solve a flow-shop scheduling problem. The proposed algorithm attempted to reach minimum makespan. PT. FSCM Manufacturing Indonesia Plant 4's case was used as a test case to evaluate the performance of the proposed algorithm. The proposed algorithm was compared to Ant Colony, Genetic-Tabu, Hybrid Genetic Algorithm, and the company's algorithm. We found that Robust Hybrid Genetic produces statistically better result than...
Design of robust stability augmentation system for an airship using genetic algorithm
Institute of Scientific and Technical Information of China (English)
OUYANG Jin; QU Wei-dong; XI Yu-geng
2005-01-01
This paper presents the design of a stability augmentation system (SAS) for the airship, which is robust with respect to parametric plant uncertainties. A robust pole placement approach is adopted in the design, which uses a genetic algorithm (GA) as the optimization tool to derive the most robust solution of the state-feedback gain matrix K. The method can guarantee that the resulting closed-loop poles remain in a specified allocation region despite plant parameter uncertainty. Thus, the longitudinal stability of the airship is augmented by robustly assigning the closed-loop poles in a prescribed region of the left half s-plane.
A Novel Robust Interval Kalman Filter Algorithm for GPS/INS Integrated Navigation
Directory of Open Access Journals (Sweden)
Chen Jiang
2016-01-01
Full Text Available The Kalman filter is widely applied in data fusion of dynamic systems under the assumption that the system and measurement noises are Gaussian distributed. In the literature, the interval Kalman filter was proposed aiming at controlling the influences of the system model uncertainties. The robust Kalman filter has also been proposed to control the effects of outliers. In this paper, a new interval Kalman filter algorithm is proposed by integrating robust estimation and the interval Kalman filter, in which the system noise and the observation noise terms are considered simultaneously. The noise data reduction and the robust estimation methods are both introduced into the proposed interval Kalman filter algorithm. The new algorithm is equal to the standard Kalman filter in terms of computation, but superior in managing outliers. The advantage of the proposed algorithm is demonstrated experimentally using the integrated navigation of the Global Positioning System (GPS) and the Inertial Navigation System (INS).
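The interplay between a Kalman filter and robust estimation can be illustrated in one dimension with a Huber-type reweighting of the measurement noise that limits the influence of outliers. This is a generic sketch, not the paper's interval formulation:

```python
def robust_kalman_1d(zs, q=0.01, r=1.0, k_huber=1.5):
    """1D random-walk Kalman filter; innovations beyond k_huber
    standard deviations get a downweighted (inflated) measurement
    variance, which caps the pull of outliers."""
    x, p = zs[0], 1.0
    out = [x]
    for z in zs[1:]:
        p = p + q                       # predict (random-walk model)
        v = z - x                       # innovation
        s = (p + r) ** 0.5              # innovation std deviation
        w = 1.0 if abs(v) <= k_huber * s else k_huber * s / abs(v)
        k = p / (p + r / w)             # gain with inflated R for outliers
        x = x + k * v
        p = (1 - k) * p
        out.append(x)
    return out
```

On a constant signal with one gross outlier, the reweighted gain barely moves the estimate, whereas a standard filter would jump by the full Kalman gain times the 45-unit innovation.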
Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media
Galvis, Juan
2010-01-01
In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.
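The spectral counting argument, that the number of asymptotically small eigenvalues equals the number of connected high-conductivity regions, has a simple graph analogue. The graph Laplacian below is an illustrative stand-in for the paper's eigenvalue problem, not its actual construction:

```python
import numpy as np

def count_components_spectral(adj, tol=1e-8):
    """Number of near-zero Laplacian eigenvalues equals the number of
    connected components -- the gap in the spectrum separates them
    from the rest, mirroring the coarse-space construction."""
    L = np.diag(adj.sum(axis=1)) - adj
    return int((np.linalg.eigvalsh(L) < tol).sum())

# two disjoint triangles: expect two zero eigenvalues
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    adj[a, b] = adj[b, a] = 1.0
```

In the preconditioner, the eigenfunctions attached to these small eigenvalues are exactly the ones spanned by the coarse space.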
Mechanical and assembly units of viral capsids identified via quasi-rigid domain decomposition.
Directory of Open Access Journals (Sweden)
Guido Polles
Full Text Available Key steps in a viral life-cycle, such as self-assembly of a protective protein container or in some cases also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV), for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available.
Meaningful Clustered Forest: an Automatic and Robust Clustering Algorithm
Tepper, Mariano; Almansa, Andrés
2011-01-01
We propose a new clustering method that can be regarded as a numerical method to compute the proximity gestalt. The method analyzes edge length statistics in the MST of the dataset and provides an a contrario cluster detection criterion. The approach is fully parametric on the chosen distance and can detect arbitrarily shaped clusters. The method is also automatic, in the sense that only a single parameter is left to the user. This parameter has an intuitive interpretation as it controls the expected number of false detections. We show that the iterative application of our method can (1) provide robustness to noise and (2) solve a masking phenomenon in which a highly populated and salient cluster dominates the scene and inhibits the detection of less-populated, but still salient, clusters.
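The MST edge-length idea can be sketched with a plain threshold in place of the a contrario criterion; the `c * median` cut-off below is an illustrative simplification of the paper's detection test:

```python
import math
from statistics import median

def mst_clusters(points, c=3.0):
    """Cluster by cutting MST edges longer than c times the median
    MST edge length -- a simplified stand-in for the a contrario
    edge-length statistics of the paper."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm for the minimum spanning tree
    edges, best = [], {j: (dist(0, j), 0) for j in range(1, n)}
    while best:
        j = min(best, key=lambda t: best[t][0])
        d, i = best.pop(j)
        edges.append((d, i, j))
        for k in best:
            dk = dist(j, k)
            if dk < best[k][0]:
                best[k] = (dk, j)
    cut = c * median(d for d, _, _ in edges)
    # keep short edges; connected components are the clusters
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for d, i, j in edges:
        if d <= cut:
            parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

Because the threshold is derived from the tree's own edge-length statistics, the method needs no preset cluster count and tolerates arbitrarily shaped clusters, which is the property the abstract emphasises.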
A ROBUST TRUST REGION ALGORITHM FOR SOLVING GENERAL NONLINEAR PROGRAMMING
Institute of Scientific and Technical Information of China (English)
Xin-wei Liu; Ya-xiang Yuan
2001-01-01
The trust region approach has been extended to solving nonlinear constrained optimization. Most of these extensions consider only equality constraints and require strong global regularity assumptions. In this paper, a trust region algorithm for solving general nonlinear programming is presented, which solves an unconstrained piecewise quadratic trust region subproblem and a quadratic programming trust region subproblem at each iteration. A new technique for updating the penalty parameter is introduced. Under very mild conditions, the global convergence results are proved. Some local convergence results are also proved. Preliminary numerical results are also reported.
APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION
Directory of Open Access Journals (Sweden)
N. Belavendram
2010-12-01
Full Text Available Parameter optimization can be achieved by many methods such as Monte-Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In GA, there is an initial pool of individuals each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of reproducing genes. The resulting child generation of individuals has a better fitness level akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
Secure and robust steganographic algorithm for binary images
Agaian, Sos S.; Cherukuri, Ravindranath
2006-05-01
In recent years, active research has mainly concentrated on authenticating a signature, tracking a document in a digital library, and tamper detection of a scanned document or secured communication using binary images. Binary image steganographic systems provide a solution for the above issues. The two-color constraint of the image limits the extension of various LSB embedding techniques to the binary case. In this paper, we present a new data hiding system for binary images and scanned documents. The system initially identifies embeddable blocks and enforces specific block statistics to hide sensitive information. The distribution of the flippable pixels in these blocks is highly uneven over the image. A variable block embedding threshold is employed to capitalize on this uneven distribution of pixels. In addition, we also present a measure to find the best cover given a specific file of sensitive information. The simulation was performed over 50 various binary images such as scanned documents, cartoons, and thresholded color images. Simulation results show that (1) the amount of data embedded is comparatively higher than in existing algorithms (such as K.H. Hwang et al. [5], J. Chen et al. [10], M.Y. Wu et al. [9]), and (2) the visual distortion in the cover image is minimal compared with existing algorithms (such as J. Chen et al. [10], M.Y. Wu et al. [9]).
Institute of Scientific and Technical Information of China (English)
YIN Hong; CHEN Zeng-qiang; YUAN Zhu-zhi
2006-01-01
A hyperchaos-based watermarking algorithm is developed in the wavelet domain for images. The algorithm is based on the discrete wavelet transform and combines the communication model with side information. We utilize a suitable scale factor to scale the host image, then construct cosets for embedding the digital watermark according to the scaled version of the host image. Our scheme makes a tradeoff between imperceptibility and robustness, and achieves security. The extraction algorithm is a blind detection algorithm which retrieves the watermark without the original host image. In addition, we propose a new method for watermark encryption with a hyperchaotic sequence. This method overcomes the drawback of the small key space of chaotic sequences and improves watermark security. Simulation results indicate that the algorithm is a well-balanced watermarking method that offers good robustness and imperceptibility.
A robust jet reconstruction algorithm for high-energy lepton colliders
Directory of Open Access Journals (Sweden)
M. Boronat
2015-11-01
Full Text Available We propose a new sequential jet reconstruction algorithm for future lepton colliders at the energy frontier. The Valencia algorithm combines the natural distance criterion for lepton colliders with the greater robustness against backgrounds of algorithms adapted to hadron colliders. Results from a detailed Monte Carlo simulation of tt¯ and ZZ production at future linear e+e− colliders (ILC and CLIC), with a realistic level of background overlaid, show that it achieves better performance in the presence of background than the classical algorithms used at previous e+e− colliders.
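A generic sequential-recombination skeleton of the kind the Valencia algorithm builds on is sketched below. The inter-particle distance shown is the classic Durham measure for e+e− colliders; the Valencia algorithm replaces it (and adds a beam distance) with its own β/γ-parametrised criteria, which are not reproduced here.

```python
# Generic sequential-recombination jet clustering skeleton.  The distance
# is the Durham measure; the Valencia criteria would be swapped in here.
import math

def durham_dij(p, q):
    """p, q = (E, px, py, pz): 2*min(E_i^2, E_j^2)*(1 - cos theta_ij)."""
    cos = (p[1]*q[1] + p[2]*q[2] + p[3]*q[3]) / (
        math.hypot(p[1], p[2], p[3]) * math.hypot(q[1], q[2], q[3]))
    return 2.0 * min(p[0]**2, q[0]**2) * (1.0 - cos)

def cluster(particles, dcut):
    """Merge the closest pair (E-scheme) until all distances exceed dcut."""
    jets = [list(p) for p in particles]
    while len(jets) > 1:
        d, i, j = min((durham_dij(jets[a], jets[b]), a, b)
                      for a in range(len(jets))
                      for b in range(a + 1, len(jets)))
        if d >= dcut:
            break
        merged = [jets[i][k] + jets[j][k] for k in range(4)]
        jets = [jets[k] for k in range(len(jets)) if k not in (i, j)] + [merged]
    return jets
```

A real implementation would also keep per-jet beam distances and use a fixed-R or inclusive variant; this sketch only shows the merge loop.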
Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models
Directory of Open Access Journals (Sweden)
Hiroki Yoshioka
2011-07-01
Full Text Available The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined according to several previously described algorithms. The factor depended on the target spectra, the endmember spectra, and the choice of spectral vegetation index. Numerical simulations were conducted to demonstrate the dependence and the usefulness of the factor in assessing robustness against measurement noise.
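Under the two-endmember assumption, a vegetation-index-based LMM inversion reduces to a single expression. The endmember values below are illustrative placeholders, not values from the study.

```python
# Minimal two-endmember LMM retrieval: a pixel's vegetation index v is
# modelled as FVC*v_veg + (1 - FVC)*v_soil, so FVC follows by inversion.
# The endmember values are illustrative defaults, not from the paper.

def fvc_from_vi(v, v_soil=0.05, v_veg=0.85):
    fvc = (v - v_soil) / (v_veg - v_soil)
    return min(1.0, max(0.0, fvc))      # clip to the physical range [0, 1]
```

Noise in the measured index `v` propagates linearly into FVC here, which is the kind of sensitivity the paper's robustness factor quantifies.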
Directory of Open Access Journals (Sweden)
Kriangkrai Maneerat
2016-01-01
Full Text Available One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause values of received signal strength (RSS) to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or RN-failure scenarios. The proposed fault-tolerant floor determination algorithm is based on the mean of the summation of the strongest RSSs obtained from IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperformed the other floor algorithms and achieved the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm achieved greater than 95% correct floor determination under the scenario in which 40% of RNs failed.
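The core of the RMoS idea described above can be sketched as follows; the choice of k and the data layout are assumptions made for illustration.

```python
# Sketch of the RMoS idea: for each floor, average the k strongest RSS
# readings from that floor's reference nodes (readings from failed RNs
# are simply absent), then pick the floor with the largest mean.

def rmos_floor(rss_by_floor, k=3):
    """rss_by_floor: {floor: [RSS values in dBm from responding RNs]}."""
    best_floor, best_score = None, float('-inf')
    for floor, readings in rss_by_floor.items():
        if not readings:
            continue                    # all RNs on this floor failed
        strongest = sorted(readings, reverse=True)[:k]
        score = sum(strongest) / len(strongest)
        if score > best_score:
            best_floor, best_score = floor, score
    return best_floor
```

Because only the strongest readings per floor are averaged, a failed (missing) RN degrades the score gracefully instead of corrupting it, which is the fault-tolerance property the paper exploits.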
Directory of Open Access Journals (Sweden)
GRANDIN, P. H.
2014-06-01
Full Text Available Recommendation systems based on collaborative filtering are open by nature, which makes them vulnerable to profile injection attacks that insert biased evaluations into the system database in order to manipulate recommendations. In this paper we evaluate the stability and robustness of collaborative filtering algorithms applied to semantic web service recommendation when subjected to random and segment profile injection attacks. We evaluated four algorithms: (1) IMEAN, which makes predictions using the average of the evaluations received by the target item; (2) UMEAN, which makes predictions using the average of the evaluations made by the target user; (3) an algorithm based on the k-nearest neighbor (k-NN) method; and (4) an algorithm based on the k-means clustering method. The experiments showed that the UMEAN algorithm is not affected by the attacks and that IMEAN is the most vulnerable of all algorithms tested. Nevertheless, both UMEAN and IMEAN have little practical application due to the low precision of their predictions. Among the algorithms with intermediate tolerance to attacks but good prediction performance, the algorithm based on k-NN proved to be more robust and stable than the algorithm based on k-means.
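The two baseline predictors are simple enough to state directly; the ratings layout is an assumption.

```python
# Baseline predictors from the paper's comparison, sketched over a
# ratings dict of the form {(user, item): rating}.

def imean(ratings, item):
    """Predict with the mean rating the target item has received."""
    vals = [r for (u, i), r in ratings.items() if i == item]
    return sum(vals) / len(vals)

def umean(ratings, user):
    """Predict with the mean rating the target user has given."""
    vals = [r for (u, i), r in ratings.items() if u == user]
    return sum(vals) / len(vals)
```

UMEAN never looks at the target item, which is consistent with the finding that item-targeted profile injection cannot affect it, while IMEAN averages exactly the values an attacker injects.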
Institute of Scientific and Technical Information of China (English)
Shao Wei; Qian Zuping; Yuan Feng
2007-01-01
A robust phase-only Direct Data Domain Least Squares (D3LS) algorithm based on generalized Rayleigh quotient optimization using a hybrid Genetic Algorithm (GA) is presented in this letter. The optimization efficiency and computational speed are improved via the hybrid GA, composed of the standard GA and the Nelder-Mead simplex algorithm. First, the objective function, in the form of a generalized Rayleigh quotient, is derived via the standard D3LS algorithm. It is then taken as a fitness function, and the unknown phases of all adaptive weights are taken as decision variables. The nonlinear optimization is then performed via the hybrid GA to obtain the optimized phase-only adaptive weights. As a phase-only adaptive algorithm, the proposed algorithm is simpler than conventional algorithms when it comes to hardware implementation. Moreover, it processes only a single snapshot of data, as opposed to forming a sample covariance matrix and performing matrix inversion. Simulation results show that the proposed algorithm has good signal recovery and interference nulling performance, superior to that of the phase-only D3LS algorithm based on the standard GA.
Lubineau, Gilles
2015-03-01
We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is solved first over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.
Rey, Valentine; Rey, Christian
2016-01-01
This article deals with the computation of guaranteed lower bounds of the error in the framework of finite element (FE) and domain decomposition (DD) methods. In addition to being fully parallel to compute, the proposed lower bounds separate the algebraic error (due to the use of a DD iterative solver) from the discretization error (due to the FE), which enables the steering of the iterative solver by the discretization error. These lower bounds are also used to improve goal-oriented error estimation in a substructured context. Assessments on 2D static linear mechanics problems illustrate the relevance of the separation of error sources and the lower bounds' independence from the substructuring. We also steer the iterative solver by an objective of precision on a quantity of interest. This strategy consists of a sequence of solves and takes advantage of adaptive remeshing and recycling of search directions.
Domain decomposition, multi-level integration and exponential noise reduction in lattice QCD
Energy Technology Data Exchange (ETDEWEB)
Ce, Marco [Scuola Normale Superiore, Pisa (Italy); INFN, Sezione di Pisa (Italy); Giusti, Leonardo [Univ. di Milano-Bicocca (Italy). Dipt. di Fisica; INFN, Sezione di Milano-Bicocca (Italy); Schaefer, Stefan [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC
2016-01-15
We explore the possibility of computing fermionic correlators on the lattice by combining a domain decomposition with a multi-level integration scheme. The quark propagator is expanded in a series of terms with a well-defined hierarchical structure: the higher the order of a term, the (exponentially) smaller its magnitude and the less local its dependence on the gauge field. Once inserted in a Wick contraction, the gauge-field dependence of the terms in the resulting series can be factorized so that it is suitable for multi-level Monte Carlo integration. We test the strategy in quenched QCD by computing the disconnected correlator of two flavor-diagonal pseudoscalar densities, and a nucleon two-point function. In both cases we observe a significant exponential increase of the signal-to-noise ratio.
A stabilized explicit Lagrange multiplier based domain decomposition method for parabolic problems
Zheng, Zheming; Simeon, Bernd; Petzold, Linda
2008-05-01
A fully explicit, stabilized domain decomposition method for solving moderately stiff parabolic partial differential equations (PDEs) is presented. Writing the semi-discretized equations as a differential-algebraic equation (DAE) system where the interface continuity constraints between subdomains are enforced by Lagrange multipliers, the method uses the Runge-Kutta-Chebyshev projection scheme to integrate the DAE explicitly and to enforce the constraints by a projection. With mass lumping techniques and node-to-node matching grids, the method is fully explicit without solving any linear system. A stability analysis is presented to show the extended stability property of the method. The method is straightforward to implement and to parallelize. Numerical results demonstrate that it has excellent performance.
Identification of the Swiss Z24 Highway Bridge by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Andersen, P.
2002-01-01
This paper presents the results of the modal identification of the Swiss highway bridge Z24. A series of 15 progressive damage tests were performed on the bridge before it was demolished in autumn 1998, and the ambient response of the bridge was recorded for each damage case. In this paper the modal properties are identified from the ambient responses by frequency domain decomposition. 6 modes were identified for all 15 damage cases. The identification was carried out for the full 3D data case, i.e. including all measurements (a total of 291 channels), a reduced 2D case including 153 channels, and finally a 1D case including 20 channels. The modal properties for the different damage cases are compared with the modal properties of the undamaged bridge. Deviations in frequencies, damping ratios and MAC values are used as monitoring variables. From these results it can be concluded that frequencies...
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
It was proved numerically that the Domain Decomposition Method (DDM) with one-layer overlapping grids is identical to the block iterative method for linear algebraic equations. The results obtained using DDM are in reasonable agreement with the results of full-domain simulation. With the three-dimensional solver developed by the authors, the flow field in a pipe was simulated using the DDM with one-layer overlapping grids and with patched grids, respectively. Both cases led to a convergent solution. Further research shows the superiority of the DDM with one-layer overlapping grids over the DDM with patched grids. A comparison between the numerical results obtained by the authors and the experimental results given by Enayet [3] shows that the numerical results are reasonable.
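The equivalence between overlapping-grid DDM and block iteration can be illustrated on the 1D Laplace problem, whose local Dirichlet solves are exact linear interpolations. This toy sketch is of course not the authors' 3D flow solver.

```python
# Alternating Schwarz on -u'' = 0, u(0)=0, u(1)=1, with two subdomains
# sharing a one-layer overlap.  Each local Dirichlet solve of the Laplace
# problem is just linear interpolation between the subdomain end values;
# exchanging those end values every sweep is a block iteration in disguise.

def schwarz_1d(n=9, sweeps=60):
    u = [0.0] * (n + 2)                 # grid x_i = i/(n+1)
    u[n + 1] = 1.0
    mid = (n + 1) // 2
    left = (0, mid + 1)                 # subdomain 1: interior nodes 1..mid
    right = (mid - 1, n + 1)            # subdomain 2: interior mid..n, overlap
    for _ in range(sweeps):
        for a, b in (left, right):      # local solve = linear interpolation
            for i in range(a + 1, b):
                u[i] = u[a] + (u[b] - u[a]) * (i - a) / (b - a)
    return u
```

The iterate converges geometrically to the exact solution u(x) = x, with a contraction factor set by the overlap width, mirroring the convergence behaviour of the equivalent block iteration.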
Stochastic domain decomposition for the solution of the two-dimensional magnetotelluric problem
Bihlo, Alexander; Haynes, Ronald D; Loredo-Osti, J Concepcion
2016-01-01
Stochastic domain decomposition is proposed as a novel method for solving the two-dimensional Maxwell's equations as used in the magnetotelluric method. The stochastic form of the exact solution of Maxwell's equations is evaluated using Monte-Carlo methods taking into consideration that the domain may be divided into neighboring sub-domains. These sub-domains can be naturally chosen by splitting the sub-surface domain into regions of constant (or at least continuous) conductivity. The solution over each sub-domain is obtained by solving Maxwell's equations in the strong form. The sub-domain solver used for this purpose is a meshless method resting on radial basis function based finite differences. The method is demonstrated by solving a number of classical magnetotelluric problems, including the quarter-space problem, the block-in-half-space problem and the triangle-in-half-space problem.
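The flavour of a stochastic (Monte Carlo) evaluation of the solution can be conveyed with the classical walk-on-spheres estimator for Laplace's equation on the unit square; this is a simplified stand-in for the Feynman-Kac machinery the full Maxwell system requires. With harmonic boundary data g(x, y) = x, the estimate should approach u(x, y) = x.

```python
# Walk-on-spheres estimator: u(p) = E[g(exit point)] for Laplace's
# equation.  Each walk jumps to a uniform point on the largest circle
# inscribed in the domain, until it lands within eps of the boundary.
import math, random

def boundary_dist(x, y):
    return min(x, 1.0 - x, y, 1.0 - y)

def walk_on_spheres(x, y, g, eps=1e-3):
    while True:
        r = boundary_dist(x, y)
        if r < eps:                      # close enough: snap to boundary
            cands = [(x, (0.0, y)), (1.0 - x, (1.0, y)),
                     (y, (x, 0.0)), (1.0 - y, (x, 1.0))]
            _, (bx, by) = min(cands)     # nearest edge point
            return g(bx, by)
        t = random.uniform(0.0, 2.0 * math.pi)
        x, y = x + r * math.cos(t), y + r * math.sin(t)

def estimate(x, y, g, n=3000):
    return sum(walk_on_spheres(x, y, g) for _ in range(n)) / n
```

In a stochastic domain decomposition, such walks are used only to estimate the solution at the sub-domain interfaces; each sub-domain is then solved independently (and in parallel) with those interface values as boundary data.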
On the convergence rate of a parallel nonoverlapping domain decomposition method
Institute of Scientific and Technical Information of China (English)
QIN LiZhen; SHI ZhongCi; XU XueJun
2008-01-01
In recent years, a nonoverlapping domain decomposition iterative procedure, which is based on using Robin-type boundary conditions as information transmission conditions on the subdomain interfaces, has been developed and analyzed. It is known that the convergence rate of this method is 1 - O(h), where h is the mesh size. In this paper, the convergence rate is improved to 1 - O(h^{1/2}H^{-1/2}) in some cases by choosing a suitable parameter, where H is the subdomain size. Counterexamples are constructed to show that our convergence estimates are sharp, i.e., the convergence rate cannot be better than 1 - O(h^{1/2}H^{-1/2}) in a certain case no matter how the parameter is chosen.
A Robust Algorithm Based on Object Contours and Order Matching for Disparity Map Post-Processing
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
Based on the features of stereo image content and the properties of natural objects, we redefine the general order matching constraint with an object contour restriction. According to the modified order matching constraint, we propose a robust algorithm for disparity-map post-processing. Verified by computer simulations using synthetic stereo images with given disparities, our new algorithm proves to be not only efficient in disparity error detection and correction but also very robust. In particular, it resolves a severe problem in the algorithm proposed in Ref. [3]: when there are large differences among the depths of objects in a scene, that algorithm makes mistakes during disparity error detection and correction.
Collins, Emmanuel G., Jr.; Richter, Stephen
1990-01-01
One well known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory, pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.
Domain decomposition parallel computing for transient two-phase flow of nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)
2016-05-15
KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal hydraulics analysis of light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message Passing Interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods with the METIS library. For effective memory management, the Compressed Sparse Row (CSR) format is adopted, which is one of the methods to represent a sparse asymmetric matrix: it stores only the non-zero values and their positions (row and column). By performing verification on the fundamental problem set, the parallelization of CUPID has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for fine mesh systems, three different scales of mesh system are considered: 40000 meshes for the coarse system, 320000 meshes for the mid-size system, and 2560000 meshes for the fine system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioners for the matrix solver were compared: diagonal and incomplete LU. To further enhance parallel performance, hybrid OpenMP/MPI parallel computing for the pressure solver was examined, and the scalability of the hybrid calculation was found to be enhanced for multi-core parallel computation.
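The CSR storage described above is easy to make concrete; a minimal build-and-multiply sketch:

```python
# CSR stores only the non-zeros of a sparse matrix in three arrays:
# vals (non-zero values), cols (their column indices), and ptr, where
# row i occupies vals[ptr[i]:ptr[i+1]].  Minimal build + mat-vec:

def dense_to_csr(a):
    vals, cols, ptr = [], [], [0]
    for row in a:
        for j, v in enumerate(row):
            if v != 0.0:
                vals.append(v)
                cols.append(j)
        ptr.append(len(vals))           # end of this row's slice
    return vals, cols, ptr

def csr_matvec(vals, cols, ptr, x):
    y = []
    for i in range(len(ptr) - 1):
        y.append(sum(vals[k] * x[cols[k]] for k in range(ptr[i], ptr[i + 1])))
    return y
```

For the asymmetric matrices arising in two-phase flow, this layout keeps memory proportional to the number of non-zeros rather than to the full mesh-squared size.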
Energy Technology Data Exchange (ETDEWEB)
Jo, Yu Gwon; Cho, Nam Zin [KAIST, Daejeon (Korea, Republic of)
2014-10-15
The OLG iteration scheme uses overlapping regions for each local problem solved by continuous-energy MC calculation to reduce errors from the inaccurate boundary conditions (BCs) that are caused by discretization in space, energy, and angle. However, the overlapping regions increase the computational burden, and the discretized BCs for the continuous-energy MC calculation result in an inaccurate global p-CMFD solution. On the other hand, there have also been several studies on direct domain-decomposed MC calculation, where each processor simulates particles within its own domain and exchanges the particles crossing the domain boundary with other processors at a certain frequency. The efficiency of this method depends on the message checking frequency and the buffer size, and it must overcome the load-imbalance problem for better parallel efficiency. Recently, the fission and surface source (FSS) iteration method, based on banking both fission and surface sources for the next iteration (i.e., cycle), was proposed to give exact BCs for nonoverlapping local problems in domain decomposition, and was tested on one-dimensional continuous-energy reactor problems. In this paper, the FSS iteration method is combined with a source splitting scheme, based on the number of sampled sources, to reduce the load-imbalance problem in domain-based parallelism and to achieve global variance reduction. The performance is tested on a two-dimensional continuous-energy reactor problem with domain-based parallelism and compared with the FSS iteration without source splitting. Numerical results show the improvements of the FSS iteration with source splitting.
The domain interface method in non-conforming domain decomposition multifield problems
Lloberas-Valls, O.; Cafiero, M.; Cante, J.; Ferrer, A.; Oliver, J.
2017-04-01
The Domain Interface Method (DIM) is extended in this contribution to the case of mixed fields as encountered in multiphysics problems. The essence of the non-conforming domain decomposition technique consists in a discretization of a fictitious zero-thickness interface, as in the original methodology, and continuity of the solution fields across the domains is satisfied by incorporating the corresponding Lagrange multipliers. The multifield DIM inherits the advantages of its irreducible version in the sense that the connection between non-matching meshes, with possibly geometrically non-conforming interfaces, is accounted for by the automatic Delaunay interface discretization without considering master and slave surfaces or intermediate surface projections, as done in many established techniques, e.g. mortar methods. The multifield enhancement identifies the Lagrange multiplier field and incorporates its contribution into the weak variational form, accounting for the corresponding consistent stabilization term based on a Nitsche method. This type of constraint enforcement circumvents the appearance of instabilities when the Ladyzhenskaya-Babuška-Brezzi (LBB) condition is not fulfilled by the chosen discretization. The domain decomposition framework is assessed in a large deformation setting for mixed displacement/pressure formulations and coupled thermomechanical problems. The continuity of the mixed field is studied in well-selected benchmark problems for both mixed formulations, and the objectivity of the response is compared to reference monolithic solutions. Results suggest that the presented strategy shows sufficient potential to be a valuable tool in situations where the evolving physics at particular domains require the use of different spatial discretizations or field interpolations.
Robust Mean Change-Point Detecting through Laplace Linear Regression Using EM Algorithm
Directory of Open Access Journals (Sweden)
Fengkai Yang
2014-01-01
normal distribution, we developed the expectation maximization (EM) algorithm to estimate the position of the mean change-point. We investigated the performance of the algorithm through different simulations, finding that our method is robust to the distribution of the errors and effective in estimating the position of the mean change-point. Finally, we applied our method to the classical Holbert data and detected a change-point.
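The EM machinery is not reproduced here; the sketch below applies the same Laplace idea directly, scanning every split point and keeping the one that minimises the summed absolute deviations from the two segment medians (the Laplace maximum-likelihood criterion for a fixed split). It is a simplified stand-in for the paper's EM estimator, not the paper's method.

```python
# Robust mean change-point detection by least absolute deviations:
# for each candidate split, score both segments by their summed
# absolute deviation from the segment median, and keep the best split.
import statistics

def lad_changepoint(xs):
    def cost(seg):
        m = statistics.median(seg)
        return sum(abs(v - m) for v in seg)
    best_k, best = None, float('inf')
    for k in range(2, len(xs) - 1):          # split: xs[:k] | xs[k:]
        c = cost(xs[:k]) + cost(xs[k:])
        if c < best:
            best_k, best = k, c
    return best_k
```

Using medians rather than means is what makes the criterion robust to heavy-tailed errors, the same motivation as the Laplace model in the paper.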
Robustness of "cut and splice" genetic algorithms in the structural optimization of atomic clusters
Froltsov, V.; Reuter, K.
2009-01-01
We return to the geometry optimization problem of Lennard-Jones clusters to analyze the performance dependence of 'cut and splice' genetic algorithms (GAs) on the employed population size. We generally find that admixing twinning mutation moves leads to an improved robustness of the algorithm efficiency with respect to this a priori unknown technical parameter. The resulting very stable performance of the corresponding mutation + mating GA implementation over a wide range of population sizes...
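The 'cut and splice' mating move itself is geometric and compact. The sketch below cuts both parents by the same horizontal plane and joins complementary halves; real implementations additionally rotate the parents randomly and adjust the plane so the atom count balances. Both simplifications are assumptions of this illustration.

```python
# Core 'cut and splice' mating move, sketched: cut both parent clusters
# by the same horizontal plane and join complementary halves, keeping
# the atom count fixed.  Unbalanced cuts are rejected here; production
# codes instead shift the plane until the split balances.

def cut_and_splice(parent_a, parent_b, z_cut=0.0):
    """Parents are lists of (x, y, z) atom positions of equal length."""
    child = [p for p in parent_a if p[2] >= z_cut] + \
            [p for p in parent_b if p[2] < z_cut]
    if len(child) != len(parent_a):      # unbalanced cut: reject
        return None
    return child
```

A twinning mutation of the kind the paper admixes would, by contrast, modify a single cluster (e.g. rotating one half against the other) rather than combining two parents.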
Energy Technology Data Exchange (ETDEWEB)
Zhidkov, E.P.; Mazurkevich, G.E.; Khoromsky, B.N.
1989-01-01
A method of domain decomposition with cross-points (box decomposition) is used for the solution of finite-difference elliptic boundary value problems in a rectangle and in a parallelepiped. The capacitance matrix and preconditioners for the iterative solution of the arising algebraic problem are constructed by means of Poincare-Steklov operators. The convergence properties of the iterative algorithms depend on local characteristics of the subdomains and on the number N of unknowns in one direction in the subdomains, and are independent of the number of subdomains and of jumps of the elliptic operator coefficients as long as these jumps only occur across the subdomain boundaries. The dependence of convergence on the discretization of the problem is given by ln N for two-dimensional problems and by √N ln N for three-dimensional problems. The results of numerical experiments illustrating the convergence properties are presented. 18 refs., 3 figs., 4 tabs.
Energy Technology Data Exchange (ETDEWEB)
Guerin, P
2007-12-15
The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver, based on a mixed dual finite element method, has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2D and 3D cores. (author)
Robust Vision-Based Pose Estimation Algorithm for AN Uav with Known Gravity Vector
Kniaz, V. V.
2016-06-01
Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness from the external orientation estimation algorithm. The accuracy of the solution depends strongly on the number of reference points visible in the given image. The problem has an analytical solution only if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In that case a solution can still be found if the direction of the gravity vector in the camera coordinate system is known. A number of algorithms for external orientation estimation from 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors for complex reference point configurations. This paper focuses on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. Experimental evaluation of the algorithm proved its computational efficiency and its robustness against errors in reference point positions and complex configurations.
ROBUST ZERO-WATERMARK ALGORITHMS BASED ON NUMERICAL RELATIONSHIP BETWEEN ADJACENT BLOCKS
Institute of Scientific and Technical Information of China (English)
Zhang Yifeng; Jia Chengwei; Wang Xuechen; Wang Kai; Pei Wenjiang
2012-01-01
In this paper, three robust zero-watermark algorithms named Direct Current coefficient RElationship (DC-RE), CUmulant combined Singular Value Decomposition (CU-SVD), and CUmulant combined Singular Value Decomposition RElationship (CU-SVD-RE) are proposed. The DC-RE algorithm derives the feature vector from the relationship between the DC coefficients of adjacent blocks, CU-SVD derives it from the singular values of third-order cumulants, and CU-SVD-RE combines the essence of the first two: it derives the feature vector from the relationship between singular values of third-order cumulants. Lying at the intersection of watermarking and cryptography, zero-watermark algorithms achieve robustness without modifying the carrier. Numerical simulations show that the CU-SVD-RE and DC-RE algorithms perform better under geometric attacks, and that all three proposed algorithms are robust to various attacks, such as median filtering, salt-and-pepper noise, and Gaussian low-pass filtering.
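The DC-RE feature construction can be sketched as follows: the DC coefficient of a block is, up to scale, its pixel mean, and each feature bit records whether one block's DC value exceeds the next one's. The raster scan order and 8x8 block size are assumptions of this sketch.

```python
# Sketch of a DC-RE style feature: per-block DC values (block means, up
# to the DCT scale factor) compared pairwise in raster-scan order.

def dc_re_feature(img, bs=8):
    h, w = len(img), len(img[0])
    dcs = []
    for r0 in range(0, h, bs):
        for c0 in range(0, w, bs):
            block = [img[r][c] for r in range(r0, r0 + bs)
                               for c in range(c0, c0 + bs)]
            dcs.append(sum(block) / len(block))
    return [1 if dcs[i] >= dcs[i + 1] else 0 for i in range(len(dcs) - 1)]
```

Because the feature is derived from the image rather than embedded into it, the carrier is left untouched, which is the defining property of zero-watermarking; the inter-block relationship also tends to survive mild filtering and noise, which underlies the reported robustness.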
Energy Technology Data Exchange (ETDEWEB)
Flauraud, E.
2004-05-01
In this thesis, we are interested in using domain decomposition methods for solving fluid flows in faulted porous media. This study comes within the framework of sedimentary basin modeling, whose aim is to predict the presence of possible oil fields in the subsoil. A sedimentary basin is regarded as a heterogeneous porous medium in which fluid flows (water, oil, gas) occur. It is often subdivided into several blocks separated by faults. These faults create discontinuities that have a tremendous effect on the fluid flow in the basin. In this work, we present two approaches to modeling faults mathematically. The first approach considers faults as sub-domains, in the same way as blocks but with their own geological properties. However, because faults are very thin in comparison with the size of the basin, the second and new approach considers faults no longer as sub-domains but as interfaces between the blocks. A mathematical study of the two models is carried out to investigate the existence and uniqueness of solutions. We then turn to domain decomposition methods for solving these models. The main part of this study is devoted to the design of Robin interface conditions and the formulation of the interface problem. The Schwarz algorithm can be seen as a Jacobi method for solving the interface problem; to speed up convergence, the problem can instead be solved by a Krylov-type algorithm (BICGSTAB). We discretize the equations with a finite volume scheme and perform extensive numerical tests to compare the different methods. (author)
Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection
Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas
2011-01-01
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
NIC: a robust background extraction algorithm for foreground detection in dynamic scenes
Huynh-The, Thien; Banos, Oresti; Lee, Sungyoung; Kang, Byeong Ho; Kim, Eun-Soo; Le-Tien, Thuong
2016-01-01
This paper presents a robust foreground detection method capable of adapting to different motion speeds in scenes. A key contribution of this paper is the background estimation using a proposed novel algorithm, neighbor-based intensity correction (NIC), that identifies and modifies the motion pixels
A Robust Formant Extraction Algorithm Combining Spectral Peak Picking and Root Polishing
Directory of Open Access Journals (Sweden)
Seo Kwang-deok
2006-01-01
Full Text Available We propose a robust formant extraction algorithm that combines spectral peak picking, formant-location examination for peak-merger checking, and root extraction. The spectral peak picking method is employed to locate the formant candidates, and root extraction is used for solving the peak-merger problem. The location of and the distance between the extracted formants are also utilized to efficiently identify suspected peak mergers. The proposed algorithm does not require much computation, and is shown to be superior to previous formant extraction algorithms through extensive tests using the TIMIT speech database.
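The spectral peak picking stage can be sketched as follows. This is a generic illustration on a hypothetical toy signal, not the paper's TIMIT-tuned pipeline, and it omits the root-extraction and peak-merger stages entirely.

```python
import numpy as np

def spectral_peaks(signal, fs, n_fft=1024, rel_floor=0.1):
    """Return candidate formant frequencies as local maxima of the magnitude
    spectrum that rise above a relative amplitude floor."""
    win = np.hanning(len(signal))
    mag = np.abs(np.fft.rfft(signal * win, n_fft))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    floor = rel_floor * mag.max()
    return [freqs[k] for k in range(1, len(mag) - 1)
            if mag[k] >= mag[k - 1] and mag[k] > mag[k + 1] and mag[k] > floor]

fs = 8000
t = np.arange(1024) / fs
# Toy "vowel": two dominant spectral peaks at 700 Hz and 1200 Hz.
sig = np.sin(2 * np.pi * 700 * t) + 0.8 * np.sin(2 * np.pi * 1200 * t)
cands = spectral_peaks(sig, fs)
```

The amplitude floor keeps window sidelobes from being reported as candidates; on real speech the subsequent merger-checking stage would decide whether a single broad peak hides two merged formants.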
Lim, Gerald
2005-03-01
I have implemented a simple and robust numerical technique for calculating axisymmetric equilibrium shapes of one-component lipid bilayer vesicles. This so-called Tethered Infinitesimal Tori and Spheres (TITS) Algorithm gives shapes that are automatically stable with respect to axisymmetric perturbations. The latest version of this algorithm can, but is not restricted to, impose constraints on any of three geometrical quantities: the area, volume and pole-to-pole distance (in the case of tether formation). In this talk, I will introduce the basic principles of the TITS Algorithm and demonstrate its versatility through a few example shape calculations involving the Helfrich and Area Difference Elasticity bending free energies.
Robust protein microarray image segmentation using improved seeded region growing algorithm
Institute of Scientific and Technical Information of China (English)
Liqiang Wang(王立强); Xuxiang Ni(倪旭翔); Zukang Lu(陆祖康)
2003-01-01
Protein microarray technology has recently emerged as a powerful tool for biomedical research. Before automatic analysis of protein microarray images, protein spots in the images must be determined appropriately by a spot segmentation algorithm. In this paper, an improved seeded region growing (ISRG) algorithm for protein microarray segmentation is presented: the seeds are obtained by finding the positions of the printed spots, and the protein spot regions are grown from these seeds. The experimental results show that the presented algorithm is accurate for adaptive shape segmentation and robust for protein microarray images contaminated by noise.
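A minimal seeded region growing loop, under the assumption that a seed is available at each printed-spot position, might look like the sketch below; it grows by comparing candidate pixels with the region's running mean, while the paper's ISRG adds further refinements.

```python
from collections import deque
import numpy as np

def seeded_region_growing(img, seeds, tol=10.0):
    """Grow labelled regions from seed pixels; a pixel joins a region when its
    intensity lies within `tol` of that region's running mean."""
    labels = np.zeros(img.shape, dtype=int)
    stats = {}                                   # label -> [intensity sum, count]
    q = deque()
    for lab, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = lab
        stats[lab] = [float(img[r, c]), 1]
        q.append((r, c))
    while q:
        r, c = q.popleft()
        lab = labels[r, c]
        mean = stats[lab][0] / stats[lab][1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
            nr, nc = r + dr, c + dc
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and labels[nr, nc] == 0 and abs(img[nr, nc] - mean) <= tol):
                labels[nr, nc] = lab
                stats[lab][0] += float(img[nr, nc])
                stats[lab][1] += 1
                q.append((nr, nc))
    return labels

# Synthetic "spot": bright disc (200) on a dark background (20).
img = np.full((32, 32), 20.0)
yy, xx = np.mgrid[:32, :32]
img[(yy - 16) ** 2 + (xx - 16) ** 2 <= 64] = 200.0   # radius-8 spot
labels = seeded_region_growing(img, [(16, 16)])
```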
Institute of Scientific and Technical Information of China (English)
Luo Chang
2006-01-01
In this work, a system of parabolic equations with discontinuous coefficients is studied. The domain decomposition method, modified by a characteristic finite element procedure, is applied. A function is defined to approximate the fluxes on inner boundaries by using the solution at the previous level; thus parallelism is achieved. Convergence analysis and an error estimate are also presented.
Sidorenko, Pavel; Avnat, Zohar; Cohen, Oren
2016-01-01
Frequency-resolved optical gating (FROG) is probably the most popular technique for complete characterization of ultrashort laser pulses. In FROG, a reconstruction algorithm retrieves the pulse from a measured spectrogram, yet current FROG reconstruction algorithms require and exhibit several restricting features that weaken FROG's performance. For example, the delay step must correspond to the spectral bandwidth measured with large enough SNR, a condition that limits the temporal resolution of the reconstructed pulse, obscures measurements of weak broadband pulses, and makes measurement of broadband mid-IR pulses hard and slow because the spectrograms become huge. We develop a new approach for FROG reconstruction, based on ptychography (a scanning coherent diffraction imaging technique), that removes many of the algorithmic restrictions. The ptychographic reconstruction algorithm is significantly faster and more robust to noise than current FROG algorithms, which are based on generalized projections (GP). We d...
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
This paper addresses the problems of parameter estimation of multivariable stationary stochastic systems on the basis of observed output data. The main contribution is to employ the expectation-maximisation (EM) method as a means for computing the maximum-likelihood (ML) parameter estimate of the system. A closed form of the expectation of the studied system subject to Gaussian noise is derived, and the parameter choice that maximizes the expectation is also proposed. This results in an iterative algorithm for parameter estimation, and a robust algorithm implementation based on QR-factorization and Cholesky factorization is also discussed. Moreover, algorithmic properties such as the non-decreasing likelihood value, necessary and sufficient conditions for the algorithm to arrive at a local stationary parameter, the convergence rate, and the factors affecting the convergence rate are analyzed. A simulation study shows that the proposed algorithm has attractive properties such as numerical stability and avoidance of difficult initial conditions.
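The E-step/M-step structure and the non-decreasing-likelihood property discussed above can be illustrated on a much simpler model. The sketch below runs EM on a 1D two-component Gaussian mixture rather than on the paper's multivariable stochastic system; it is a generic illustration of the mechanism only.

```python
import numpy as np

def em_gmm_1d(x, n_iter=30):
    """EM for a two-component 1D Gaussian mixture; returns parameters and the
    log-likelihood trace, which EM guarantees to be non-decreasing."""
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    ll_trace = []
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        ll_trace.append(np.log(dens.sum(axis=1)).sum())
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi, ll_trace

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
mu, var, pi, ll = em_gmm_1d(x)
```

The monotone log-likelihood trace is the same property the paper proves for its system-identification setting; checking it numerically is a cheap sanity test of any EM implementation.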
Robust Automatic Focus Algorithm for Low Contrast Images Using a New Contrast Measure
Directory of Open Access Journals (Sweden)
Jinshan Tang
2011-08-01
Full Text Available Low contrast images, suffering from a lack of sharpness, are easily influenced by noise. As a result, many local false peaks may be generated in contrast measurements, making it difficult for the camera's passive auto-focus system to perform its function of locating the focused peak. In this paper, a new passive auto-focus algorithm is proposed to address this problem. First, a noise-reduction preprocessing step is introduced to make the algorithm robust to both additive and multiplicative noise. Then, a new contrast measure is presented to suppress local false peaks, ensuring the presence of a well-defined focused peak. To gauge the performance of the algorithm, a modified peak search algorithm is used in the experiments. The experimental results from an actual digital camera validate the effectiveness of the proposed algorithm.
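A minimal passive auto-focus loop can be sketched as follows, using a simple gradient-energy contrast measure and a box-filter noise-reduction preprocessing. The specific measures and the synthetic focus stack are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def tenengrad(img):
    """Gradient-energy focus measure; sharper images score higher."""
    gx = img[:, 2:] - img[:, :-2]
    gy = img[2:, :] - img[:-2, :]
    return float((gx ** 2).sum() + (gy ** 2).sum())

def denoise(img, k=3):
    """Box-filter preprocessing to suppress noise-induced false peaks."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dr in range(k):
        for dc in range(k):
            out += p[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

def blur(img, steps):
    """Crude defocus model: repeated box filtering."""
    out = img
    for _ in range(steps):
        out = denoise(out)
    return out

rng = np.random.default_rng(2)
sharp = rng.uniform(0, 1, (64, 64))
# Focus sweep of 7 lens positions: position 3 is in focus (least blur);
# sensor noise is added at every position.
stack = [blur(sharp, abs(p - 3)) + rng.normal(0, 0.01, sharp.shape) for p in range(7)]
scores = [tenengrad(denoise(frame)) for frame in stack]
best = int(np.argmax(scores))
```

Denoising before measuring contrast flattens the spurious local maxima that noise would otherwise create, so a simple peak search over `scores` lands on the in-focus position.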
LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm
Acikmese, Behcet; Carson, John M., III
2007-01-01
This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in the publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is for a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Rizzuto, Enrico; Narasimhan, Harikrishna
2012-01-01
More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made the need for requirements on the robustness of structures......, a theoretical and risk-based framework is presented which facilitates the quantification of robustness, and thus supports the formulation of pre-normative guidelines....
The Evolutionary Algorithm to Find Robust Pareto-Optimal Solutions over Time
Directory of Open Access Journals (Sweden)
Meirong Chen
2015-01-01
Full Text Available In dynamic multiobjective optimization problems, the environmental parameters change over time, which makes the true Pareto fronts shift. So far, most research on dynamic multiobjective optimization methods has concentrated on detecting the changed environment and triggering population-based optimization methods so as to track the moving Pareto fronts over time. Yet, in many real-world applications, it is not necessary to find the optimal nondominated solutions in each dynamic environment. To address this weakness, a novel method called the robust Pareto-optimal solution over time is proposed. It is in fact to replace the optimal Pareto front at each time-varying moment with a series of robust Pareto-optimal solutions. This means that each robust solution can fit more than one time-varying moment. Two metrics, the average survival time and the average robust generational distance, are presented to measure the robustness of the robust Pareto solution set. Another contribution is to construct the algorithm framework searching for robust Pareto-optimal solutions over time based on the survival time. Experimental results indicate that this definition is a more practical and time-saving method of addressing dynamic multiobjective optimization problems changing over time.
A Robust and Fast Non-Local Means Algorithm for Image Denoising
Institute of Scientific and Technical Information of China (English)
Yan-Li Liu; Jin Wang; Xi Chen; Yan-Wen Guo; Qun-Sheng Peng
2008-01-01
In the paper, we propose a robust and fast image denoising method. The approach integrates both the Non-Local Means algorithm and the Laplacian pyramid. Given an image to be denoised, we first decompose it into a Laplacian pyramid. Exploiting the redundancy property of the Laplacian pyramid, we then perform non-local means on every level image of the pyramid. Essentially, we use the similarity of image features in the Laplacian pyramid as weights to denoise the image. Since the features extracted in the Laplacian pyramid are localized in spatial position and scale, they are much better able to describe the image, and computing the similarity between them is more reasonable and more robust. Also, based on the efficient Summed Square Image (SSI) scheme and the Fast Fourier Transform (FFT), we present an accelerating algorithm to break the bottleneck of the non-local means algorithm: the similarity computation of comparison windows. After speedup, our algorithm is fifty times faster than the original non-local means algorithm. Experiments demonstrate the effectiveness of our algorithm.
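The Laplacian pyramid decomposition that the method denoises level by level can be sketched as follows. The filters here (block-average downsampling, nearest-neighbour upsampling) are placeholder choices; reconstruction is exact by construction, because each level stores precisely the detail lost by downsampling.

```python
import numpy as np

def down2(img):
    """2x downsample by 2x2 block averaging."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(img, shape):
    """Nearest-neighbour 2x upsample back to `shape`."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def build_laplacian(img, levels=3):
    """Laplacian pyramid: each level is the detail lost by one downsampling."""
    pyr, g = [], img.astype(float)
    for _ in range(levels):
        g_next = down2(g)
        pyr.append(g - up2(g_next, g.shape))   # band-pass detail level
        g = g_next
    pyr.append(g)                              # coarsest (Gaussian) level
    return pyr

def reconstruct(pyr):
    """Exact inverse: add details back from coarse to fine."""
    g = pyr[-1]
    for lap in reversed(pyr[:-1]):
        g = lap + up2(g, lap.shape)
    return g

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, (64, 64))
pyr = build_laplacian(img)
err = np.abs(reconstruct(pyr) - img).max()
```

A pyramid-domain denoiser, as in the paper, would filter each `pyr[i]` (for instance with non-local means) before calling `reconstruct`, so that similarity is measured on features localized in both position and scale.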
Robustness analysis of EGFR signaling network with a multi-objective evolutionary algorithm.
Zou, Xiufen; Liu, Minzhong; Pan, Zishu
2008-01-01
Robustness, the ability to maintain performance in the face of perturbations and uncertainty, is believed to be a necessary property of biological systems. In this paper, we address the issue of robustness in an important signal transduction network, the epidermal growth factor receptor (EGFR) network. First, we analyze the robustness of the EGFR signaling network using all rate constants against the Gaussian variation of what was described as "the reference parameter set" in the previous study [Kholodenko, B.N., Demin, O.V., Moehren, G., Hoek, J.B., 1999. Quantification of short term signaling by the epidermal growth factor receptor. J. Biol. Chem. 274, 30169-30181]. The simulation results show that signal time, signal duration and signal amplitude of the EGFR signaling network are relatively non-robust against the simultaneous variation of the reference parameter set. Second, robustness is quantified using some statistical quantities. Finally, a multi-objective evolutionary algorithm (MOEA) is presented to search for reaction rate constants which optimize the robustness of the network, and is compared with NSGA-II, a representative of the class of modern multi-objective evolutionary algorithms. Our simulation results demonstrate that signal time, signal duration and signal amplitude of the four key components, the most downstream variable in each of the pathways R-Sh-G-S, R-PLP, R-G-S and the phosphorylated receptor RP in the EGFR signaling network, have better robustness for the optimized parameter sets than for the reference parameter set and NSGA-II. These results can provide valuable insight into experimental designs and the dynamics of the signal-response relationship between the dimerized and activated EGFR and the activation of downstream proteins.
Directory of Open Access Journals (Sweden)
Dries Raymaekers
2010-08-01
Full Text Available A seasonally robust algorithm for the retrieval of Suspended Particulate Matter (SPM) in the Scheldt River from hyperspectral images is presented. This algorithm can be applied without the need to simultaneously acquire samples (from vessels and pontoons). Especially in dynamic environments such as estuaries, this leads to a large reduction of costs, both in equipment and personnel. The algorithm was established empirically using in situ data of the water-leaving reflectance obtained over the tidal cycle during different seasons and different years. Different bands and band combinations were tested. Strong correlations were obtained for exponential relationships between band ratios and SPM concentration. The best performing relationships are validated using airborne hyperspectral data acquired in June 2005 and October 2007 at different moments in the tidal cycle. A band ratio algorithm (710 nm/596 nm) was successfully applied to a hyperspectral AHS image of the Scheldt River to obtain an SPM concentration map.
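The exponential band-ratio retrieval can be sketched as below. The coefficients are placeholders, since the paper fits them empirically against in situ SPM samples; the sketch also shows how such coefficients follow from a linear fit of log(SPM) against the band ratio.

```python
import numpy as np

def spm_from_ratio(r710, r596, a=2.0, b=3.0):
    """Exponential band-ratio retrieval: SPM = a * exp(b * R(710)/R(596)).
    The coefficients a, b here are hypothetical placeholders; in practice
    they are fitted from matched reflectance / SPM field samples."""
    return a * np.exp(b * (r710 / r596))

def fit_coefficients(ratio, spm):
    """Recover a, b by linear regression on log(SPM) = log(a) + b * ratio."""
    b, log_a = np.polyfit(ratio, np.log(spm), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(4)
ratio = rng.uniform(0.2, 1.2, 50)        # synthetic 710 nm / 596 nm band ratios
spm = spm_from_ratio(ratio, np.ones(50), a=1.8, b=2.5)  # noise-free synthetic truth
a_hat, b_hat = fit_coefficients(ratio, spm)
```

On real data the fit would be repeated across seasons and tidal phases to check that a single coefficient pair remains valid, which is what "seasonally robust" means here.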
Directory of Open Access Journals (Sweden)
Carlo Ruzzo
2016-10-01
Full Text Available System identification of offshore floating platforms is usually performed by testing small-scale models in wave tanks, where controlled conditions, such as still water for free decay tests and regular and irregular wave loading, can be represented. However, this approach may result in constraints on model dimensions, testing time, and costs of the experimental activity. For such reasons, intermediate-scale field modelling of offshore floating structures may become an interesting as well as cost-effective alternative in the near future. Clearly, since the open sea is not a controlled environment, traditional system identification may become challenging and less precise. In this paper, a new approach based on the Frequency Domain Decomposition (FDD) method for Operational Modal Analysis is proposed and validated against numerical simulations in ANSYS AQWA v.16.0 on a simple spar-type structure. The results obtained match well with numerical predictions, showing that this new approach, opportunely coupled with more traditional wave-tank techniques, proves very promising for field-site identification of model structures.
Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat
2015-01-01
Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. This method is advantageous to retrieve not only the resonance frequencies of the investigated structure, but also the corresponding modal shapes without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
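The core of FDD, building the cross-power spectral density matrix from simultaneous recordings and tracking its first singular value across frequency, can be sketched as follows. This is a minimal two-channel version with a synthetic 5 Hz mode; the singular vectors, omitted here, would give the mode shapes.

```python
import numpy as np

def fdd_first_singular_values(records, fs, nseg=256):
    """Frequency domain decomposition sketch: average segment FFT outer
    products into a cross-power spectral density matrix, then take its first
    singular value at each frequency; peaks indicate resonances."""
    n_ch, n = records.shape
    n_blocks = n // nseg
    win = np.hanning(nseg)
    csd = np.zeros((nseg // 2 + 1, n_ch, n_ch), dtype=complex)
    for b in range(n_blocks):
        X = np.fft.rfft(records[:, b * nseg:(b + 1) * nseg] * win, axis=1)
        csd += np.einsum('if,jf->fij', X, X.conj())   # csd[f] = X(f) X(f)^H
    csd /= n_blocks
    s1 = np.array([np.linalg.svd(csd[k], compute_uv=False)[0]
                   for k in range(csd.shape[0])])
    freqs = np.fft.rfftfreq(nseg, 1.0 / fs)
    return freqs, s1

fs = 128
t = np.arange(fs * 64) / fs
rng = np.random.default_rng(5)
# Two "sensors" observing a common 5 Hz mode plus independent noise.
mode = np.sin(2 * np.pi * 5 * t)
records = np.vstack([1.0 * mode + 0.1 * rng.standard_normal(t.size),
                     0.6 * mode + 0.1 * rng.standard_normal(t.size)])
freqs, s1 = fdd_first_singular_values(records, fs)
f_peak = freqs[np.argmax(s1)]
```

At a resonance the CPSD matrix is close to rank one, so the first singular value peaks and the corresponding left singular vector approximates the mode shape, without needing an absolute reference sensor.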
Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong
2017-03-01
Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for the subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent, while this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time-consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of the velocity estimation for models with large perturbations, as well as to guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM, and the FTIM. The estimated velocity distributions, the relative errors, and the elapsed time all demonstrate the validity of the proposed DDM.
Weighing Efficiency-Robustness in Supply Chain Disruption by Multi-Objective Firefly Algorithm
Directory of Open Access Journals (Sweden)
Tong Shu
2016-03-01
Full Text Available This paper investigates various supply chain disruptions in terms of scenario planning, including node disruption and chain disruption; namely, disruptions in distribution centers and disruptions between manufacturing centers and distribution centers. Meanwhile, it also considers simultaneous disruption of one node or a number of nodes, simultaneous disruption of one chain or a number of chains, and the corresponding mathematical models and examples involving numerous manufacturing centers and diverse products. Robustness of the supply chain network design is examined by weighing efficiency against robustness during supply chain disruptions. Efficiency is represented by operating cost; robustness is indicated by the expected disruption cost; and the weighing problem is solved by the multi-objective firefly algorithm for consistency in the results. It has been shown that the total cost achieved by the optimal objective function is lower than that when supply chains operate at maximum efficiency. In other words, the decrease in expected disruption cost from improving robustness is greater than the increase in operating cost from reducing efficiency, thus leading to a cost advantage. Consequently, by approximating the Pareto front of the trade-off between efficiency and robustness, enterprises can choose appropriate efficiency and robustness for their longer-term development.
Sohrabi, Foad; Davidson, Timothy N.
2016-06-01
We consider the problem of power allocation for the single-cell multi-user (MU) multiple-input single-output (MISO) downlink with quality-of-service (QoS) constraints. The base station acquires an estimate of the channels and, for a given beamforming structure, designs the power allocation so as to minimize the total transmission power required to ensure that target signal-to-interference-and-noise ratios at the receivers are met, subject to a specified outage probability. We consider scenarios in which the errors in the base station's channel estimates can be modelled as being zero-mean and Gaussian. Such a model is particularly suitable for time division duplex (TDD) systems with quasi-static channels, in which the base station estimates the channel during the uplink phase. Under that model, we employ a precise deterministic characterization of the outage probability to transform the chance-constrained formulation to a deterministic one. Although that deterministic formulation is not convex, we develop a coordinate descent algorithm that can be shown to converge to a globally optimal solution when the starting point is feasible. Insight into the structure of the deterministic formulation yields approximations that result in coordinate update algorithms with good performance and significantly lower computational cost. The proposed algorithms provide better performance than existing robust power loading algorithms that are based on tractable conservative approximations, and can even provide better performance than robust precoding algorithms based on such approximations.
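A much-simplified version of the power allocation problem, with perfect channel knowledge and no outage constraint, is solved by the classical fixed-point update in which each user's power is raised just enough to meet its SINR target given the others' current powers. The sketch below illustrates that coordinate-style update, not the paper's chance-constrained algorithm.

```python
import numpy as np

def power_allocation(G, targets, noise=1.0, n_iter=200):
    """Fixed-point power control: each user's power is updated so its SINR
    exactly meets its target given the other users' current powers."""
    n = len(targets)
    p = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):      # coordinate-style (Gauss-Seidel) update
            interference = noise + sum(G[i, j] * p[j] for j in range(n) if j != i)
            p[i] = targets[i] * interference / G[i, i]
    return p

def sinr(G, p, noise=1.0):
    sig = np.diag(G) * p
    intf = G @ p - sig + noise
    return sig / intf

G = np.array([[4.0, 0.3],       # channel power gains (assumed perfectly known)
              [0.2, 3.0]])
targets = np.array([2.0, 1.5])  # per-user SINR targets
p = power_allocation(G, targets)
```

For feasible targets this iteration converges to the minimal-total-power solution; the paper's contribution is handling the case where `G` is only an estimate, replacing the hard SINR constraints with outage-probability constraints.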
GNSS receiver autonomous integrity monitoring (RAIM) algorithm based on robust estimation
Institute of Scientific and Technical Information of China (English)
Yuanxi Yang; Junyi Xu
2016-01-01
Integrity is significant for safety-of-life applications. Receiver autonomous integrity monitoring (RAIM) has been developed to provide integrity service for civil aviation. The conventional RAIM algorithm is only suitable for single-fault detection in a single GNSS constellation. However, multiple satellite failures should be considered when more than one satellite navigation system is adopted. To detect and exclude multiple faults, most current algorithms perform an iterative procedure considering all possible fault models, which leads to a heavy computation burden. An alternative RAIM is presented in this paper based on multiple satellite constellations (for example, GPS and BeiDou (BDS)) and robust estimation for multi-fault detection and exclusion, which can not only detect multiple failures, but also control the influence of near-failure observations. Besides, the RAIM algorithm based on robust estimation is more efficient than current RAIM algorithms for multiple constellations and multiple faults. Finally, the algorithm is tested with GPS/BeiDou data.
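The robust-estimation ingredient can be illustrated with iteratively reweighted least squares using Huber weights, which downweights faulty observations instead of excluding them outright. This generic sketch is not the paper's GNSS-specific RAIM procedure.

```python
import numpy as np

def huber_irls(A, y, k=1.345, n_iter=20):
    """Iteratively reweighted least squares with Huber weights: observations
    with large standardized residuals get weight k/|u| instead of 1, so a
    faulty measurement cannot dominate the fit."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    w = np.ones(len(y))
    for _ in range(n_iter):
        r = y - A @ x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)            # Huber weight function
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)     # weighted normal equations
    return x, w

rng = np.random.default_rng(6)
A = np.column_stack([np.ones(30), rng.uniform(-5, 5, 30)])
x_true = np.array([1.0, 2.0])
y = A @ x_true + rng.normal(0, 0.05, 30)
y[7] += 50.0                                        # one gross fault
x_hat, w = huber_irls(A, y)
```

In a RAIM context the rows of `A` would be satellite geometry vectors and `y` the pseudorange residuals; the final weights flag the faulty satellites (small `w`) while still bounding the influence of near-failure observations.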
A FAST AND ROBUST ALGORITHM FOR ROAD EDGES EXTRACTION FROM LIDAR DATA
Directory of Open Access Journals (Sweden)
K. Qiu
2016-06-01
Full Text Available Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges fast and robustly is a challenging task. In this paper, we present a fast and robust algorithm for the automatic extraction of road edges from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads around edges have differences in elevation, and road edges with pavement are seen in two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes which contain only pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, there is a serious problem in that the rough and refined planes are usually extracted badly due to rough roads and varying density of the point cloud. To eliminate the influence of rough roads, a technique similar to the difference of DSM (digital surface model) and DTM (digital terrain model) is used, and we also propose a method which adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method on multiple datasets (e.g. urban road, highway, and some rural roads). We use the same parameters throughout the experiments, and our algorithm can achieve real-time processing speeds.
a Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data
Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen
2016-06-01
Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges fast and robustly is a challenging task. In this paper, we present a fast and robust algorithm for the automatic extraction of road edges from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads around edges have differences in elevation, and road edges with pavement are seen in two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes which contain only pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, there is a serious problem in that the rough and refined planes are usually extracted badly due to rough roads and varying density of the point cloud. To eliminate the influence of rough roads, a technique similar to the difference of DSM (digital surface model) and DTM (digital terrain model) is used, and we also propose a method which adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method on multiple datasets (e.g. urban road, highway, and some rural roads). We use the same parameters throughout the experiments, and our algorithm can achieve real-time processing speeds.
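The RANSAC plane extraction step at the heart of the algorithm can be sketched as follows, on synthetic points rather than LiDAR data; the refined-plane and edge-tracing stages are omitted.

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.05, rng=None):
    """RANSAC plane fit: repeatedly fit a plane through 3 random points and
    keep the hypothesis explaining the most inliers."""
    rng = rng or np.random.default_rng()
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        normal /= norm
        d = -normal @ p0
        inliers = np.abs(pts @ normal + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

rng = np.random.default_rng(7)
road = np.column_stack([rng.uniform(0, 10, 300), rng.uniform(0, 10, 300),
                        rng.normal(0, 0.01, 300)])   # points near the z = 0 plane
clutter = rng.uniform(0, 10, (60, 3))                # off-plane clutter
(normal, d), inliers = ransac_plane(np.vstack([road, clutter]), rng=rng)
```

In the full pipeline this rough plane would then be split into refined pavement-only planes, whose intersections and elevation jumps give the road edges.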
Institute of Scientific and Technical Information of China (English)
Junbao Li; Jeng-Shyang Pan
2008-01-01
In the real-world application of face recognition systems, owing to the difficulty of collecting samples or the storage space of systems, only one sample image per person is stored in the system; this is the so-called one-sample-per-person problem. Moreover, pose and illumination have an impact on recognition performance. We propose a novel pose- and illumination-robust algorithm for face recognition with a single training image per person to overcome the above limitations. Experimental results show that the proposed algorithm is an efficient and practical approach for face recognition.
Robust CPD Algorithm for Non-Rigid Point Set Registration Based on Structure Information.
Peng, Lei; Li, Guangyao; Xiao, Mang; Xie, Li
2016-01-01
Recently, the Coherent Point Drift (CPD) algorithm has become a very popular and efficient method for point set registration. However, this method does not take into consideration the neighborhood structure information of points to find the correspondence and requires a manual assignment of the outlier ratio. Therefore, CPD is not robust for large degrees of degradation. In this paper, an improved method is proposed to overcome the two limitations of CPD. A structure descriptor, such as shape context, is used to perform the auxiliary calculation of the correspondence, and the proportion of each GMM component is adjusted by the similarity. The outlier ratio is formulated in the EM framework so that it can be automatically calculated and optimized iteratively. The experimental results on both synthetic data and real data demonstrate that the proposed method described here is more robust to deformation, noise, occlusion, and outliers than CPD and other state-of-the-art algorithms.
Luque-Baena, R M; Urda, D; Gonzalo Claros, M; Franco, L; Jerez, J M
2014-06-01
Genetic algorithms are widely used in the estimation of expression profiles from microarrays data. However, these techniques are unable to produce stable and robust solutions suitable to use in clinical and biomedical studies. This paper presents a novel two-stage evolutionary strategy for gene feature selection combining the genetic algorithm with biological information extracted from the KEGG database. A comparative study is carried out over public data from three different types of cancer (leukemia, lung cancer and prostate cancer). Even though the analyses only use features having KEGG information, the results demonstrate that this two-stage evolutionary strategy increased the consistency, robustness and accuracy of a blind discrimination among relapsed and healthy individuals. Therefore, this approach could facilitate the definition of gene signatures for the clinical prognosis and diagnostic of cancer diseases in a near future. Additionally, it could also be used for biological knowledge discovery about the studied disease.
Acikmese, Ahmet Behcet; Carson, John M., III
2006-01-01
A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part, obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics, and (ii) a feedback part, designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of the initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.
A Bio-Inspired Robust Adaptive Random Search Algorithm for Distributed Beamforming
Tseng, Chia-Shiang; Lin, Che
2010-01-01
A bio-inspired robust adaptive random search algorithm (BioRARSA), designed for distributed beamforming for sensor and relay networks, is proposed in this work. It has been shown via a systematic framework that BioRARSA converges in probability and that its convergence time scales linearly with the number of distributed transmitters. More importantly, extensive simulation results demonstrate that the proposed BioRARSA outperforms existing adaptive distributed beamforming schemes by as much as 29.8% on average. This increase in performance results from the fact that BioRARSA can adaptively adjust its sampling stepsize via the "swim" behavior inspired by the bacterial foraging mechanism. Hence, the convergence time of BioRARSA is insensitive to the initial sampling stepsize of the algorithm, which makes it robust against the dynamic nature of distributed wireless networks.
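The adaptive-stepsize idea, expanding the sampling stepsize after a successful move (the "swim") and contracting it after a failure, can be sketched with a generic random search. This illustrates the mechanism only, not the distributed-beamforming protocol itself.

```python
import numpy as np

def adaptive_random_search(f, x0, n_iter=500, step=1.0, grow=2.0, shrink=0.5, rng=None):
    """Random search whose stepsize expands after a success ("swim" further in
    a promising region) and contracts after a failure (refine locally)."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(len(x))
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= grow                      # success: be bolder
        else:
            step = max(step * shrink, 1e-6)   # failure: search more finely
    return x, fx

rng = np.random.default_rng(8)
x_best, f_best = adaptive_random_search(lambda v: float(np.sum(np.square(v))),
                                        x0=np.ones(5) * 3.0, rng=rng)
```

Because the stepsize self-tunes, the final result is largely insensitive to its initial value, which mirrors the robustness property claimed for BioRARSA.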
Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models
Hiroki Yoshioka; Kenta Obata
2011-01-01
The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could ...
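For a two-endmember linear mixture model, the FVC estimate has a closed form, and the sketch below also shows how an error in the measured reflectance propagates into the estimate. The endmember spectra are illustrative assumptions.

```python
import numpy as np

def fvc_lmm(reflectance, r_veg, r_soil):
    """Two-endmember linear mixture model: R = f*Rv + (1-f)*Rs, solved for the
    vegetation fraction f by least squares over the spectral bands."""
    num = (reflectance - r_soil) @ (r_veg - r_soil)
    den = (r_veg - r_soil) @ (r_veg - r_soil)
    return float(np.clip(num / den, 0.0, 1.0))

r_veg = np.array([0.05, 0.45])            # assumed endmember spectra (red, NIR)
r_soil = np.array([0.20, 0.25])
mixed = 0.3 * r_veg + 0.7 * r_soil        # noise-free 30% vegetation pixel
f = fvc_lmm(mixed, r_veg, r_soil)
noisy = mixed + np.array([0.01, -0.01])   # measurement error perturbs the estimate
f_noisy = fvc_lmm(noisy, r_veg, r_soil)
```

The size of `f_noisy - f` for a given reflectance error is exactly the kind of noise-sensitivity that the comparison above evaluates across different LMM-based retrieval algorithms.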
Aggarwal, Er Deepak; Anantdeep, Er
2010-01-01
Capacity, robustness, and perceptual quality of watermark data are very important issues to be considered, and much research is devoted to increasing these parameters in digital image watermarking, as there is always a tradeoff among them. In this paper an efficient DWT-based watermarking algorithm that improves payload and robustness without affecting the perceptual quality of the image data is discussed. The aim of the paper is to employ nested watermarks in the wavelet domain, which increases the capacity and ultimately the robustness against attacks, and to select different scaling factor values for the LL and HH bands during embedding so that no visible artifacts are created, keeping the original and watermarked images similar.
A robust color image watermarking technique using modified Imperialist Competitive Algorithm.
Moghaddam, Mohsen Ebrahimi; Nemati, Nasibeh
2013-12-10
In this paper, a novel robust watermarking technique using the Imperialist Competitive Algorithm (ICA) in the spatial domain is proposed to protect the intellectual property rights of color images. The proposed method tries to insert the watermark in the blocks which are selected by a modified ICA. In this method, ICA has been customized for watermarking. The color band for watermark insertion is selected based on the color dynamic range in each block. Besides, the procedure for selecting blocks for watermark insertion and extraction is designed to ensure higher fidelity, robustness, and resilience to several possible image attacks. The experimental results showed that the proposed method created watermarked images with better PSNRs and more robustness against several attacks, such as additive noise and blurring, compared to related works. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Fault-tolerant Algorithms for Tick-Generation in Asynchronous Logic: Robust Pulse Generation
Dolev, Danny; Lenzen, Christoph; Schmid, Ulrich
2011-01-01
Today's hardware technology presents a new challenge in designing robust systems. Deep submicron VLSI technology introduced transient and permanent faults that were never considered in low-level system designs in the past. Still, robustness of that part of the system is crucial and needs to be guaranteed for any successful product. Distributed systems, on the other hand, have been dealing with similar issues for decades. However, neither the basic abstractions nor the complexity of contemporary fault-tolerant distributed algorithms match the peculiarities of hardware implementations. This paper is part of an attempt to overcome this gap between theory and practice for the clock synchronization problem. Solving this task sufficiently well will make it possible to build a very robust high-precision clocking system for hardware designs like systems-on-chip in critical applications. As our first building block, we describe and prove correct a novel Byzantine fault-tolerant self-stabilizing pulse syn...
A Fast, Accurate and Robust Algorithm For Transferring Radiation in Three-Dimensional Space
Cen, R
2002-01-01
We have developed an algorithm for transferring radiation in three-dimensional space. The algorithm computes radiation source and sink terms using the Fast Fourier Transform (FFT) method, based on a formulation in which the integral of any quantity (such as emissivity or opacity) over any volume may be written in the classic convolution form. The algorithm is fast with the computational time scaling as N (log N)^2, where N is the number of grid points of a simulation box, independent of the number of radiation sources. Furthermore, in this formulation one can naturally account for both local radiation sources and diffuse background as well as any extra external sources, all in a completely self-consistent fashion. Finally, the algorithm is completely stable and robust. While the algorithm is generally applicable, we test it on a set of problems that encompass a wide range of situations in cosmological applications, demonstrating that the algorithm is accurate. These tests show that the algorithm produces resu...
Robust algorithm for arrhythmia classification in ECG using extreme learning machine
Directory of Open Access Journals (Sweden)
Shin Kwangsoo
2009-10-01
Full Text Available Abstract Background Recently, extensive studies have been carried out on arrhythmia classification algorithms using artificial intelligence pattern recognition methods such as neural networks. To improve practicality, many studies have focused on the learning speed and accuracy of neural networks. However, algorithms based on neural networks still have some problems concerning practical application, such as slow learning speeds and unstable performance caused by local minima. Methods In this paper we propose a novel arrhythmia classification algorithm which has a fast learning speed and high accuracy, and uses Morphology Filtering, Principal Component Analysis and the Extreme Learning Machine (ELM). The proposed algorithm can classify six beat types: normal beat, left bundle branch block, right bundle branch block, premature ventricular contraction, atrial premature beat, and paced beat. Results The experimental results on the entire MIT-BIH arrhythmia database demonstrate that the performances of the proposed algorithm are 98.00% in terms of average sensitivity, 97.95% in terms of average specificity, and 98.72% in terms of average accuracy. These accuracy levels are higher than or comparable with those of existing methods. We make a comparative study of algorithms using an ELM, a back propagation neural network (BPNN), a radial basis function network (RBFN), or a support vector machine (SVM). Concerning learning time, the proposed algorithm using the ELM is about 290, 70, and 3 times faster than algorithms using a BPNN, RBFN, and SVM, respectively. Conclusion The proposed algorithm shows effective accuracy performance with a short learning time. In addition we ascertained the robustness of the proposed algorithm by evaluating it on the entire MIT-BIH arrhythmia database.
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature points extraction and matching is one of the key steps for robotic motion estimation which largely influences the precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed issues. For more robustness in challenging environment e.g., rough terrain or planetary surface, this paper presents a robust outliers elimination method based on Euclidean Distance Constraint (EDC) and Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points are extracted from the current left and right synchronous images and the Brute Force (BF) matcher is used to find the correspondences between the two images for the Space Intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image of the next time matches the feature points with the current left images, the EDC and RANSAC are iteratively performed. After the above mentioned, there are exceptional remaining mismatched points in some cases, for which the third time RANSAC is applied to eliminate the effects of those outliers in the estimation of the ego-motion parameters (Interior Orientation and Exterior Orientation). The proposed approach has been tested on a real-world vehicle dataset and the result benefits from its high robustness.
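The RANSAC-with-distance-threshold idea described above can be sketched in a deliberately simplified form: here matches are scored against a 2-D translation model (a stand-in for the full ego-motion estimation), and correspondences whose Euclidean residual exceeds a predefined threshold are flagged as outliers. The point sets and threshold are illustrative assumptions.

```python
import random

def ransac_translation(src, dst, thresh=0.5, iters=100, seed=3):
    # Estimate a 2-D translation between matched point sets and keep the
    # hypothesis with the most inliers (residual below `thresh`).
    random.seed(seed)
    best_t, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        i = random.randrange(len(src))            # minimal sample: 1 match
        tx = dst[i][0] - src[i][0]
        ty = dst[i][1] - src[i][1]
        inliers = [j for j, (p, q) in enumerate(zip(src, dst))
                   if ((q[0] - p[0] - tx) ** 2
                       + (q[1] - p[1] - ty) ** 2) ** 0.5 < thresh]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

src = [(0, 0), (1, 0), (0, 1), (2, 2), (3, 1)]
dst = [(2, 1), (3, 1), (2, 2), (9, 9), (5, 2)]    # true shift (2,1); index 3 is a mismatch
t, inl = ransac_translation(src, dst)
```

In the paper's pipeline this kind of consensus step is run repeatedly (after stereo matching and again between frames) to keep mismatches out of the motion estimate.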
Robust Control Algorithm for a Two Cart System and an Inverted Pendulum
Wilson, Chris L.; Capo-Lugo, Pedro
2011-01-01
The Rectilinear Control System can be used to simulate a launch vehicle during liftoff. Several control schemes have been developed that can control different dynamic models of the rectilinear plant. A robust control algorithm was developed that can control a pendulum to maintain an inverted position. A fluid slosh tank will be attached to the pendulum in order to test robustness in the presence of unknown slosh characteristics. The rectilinear plant consists of a DC motor and three carts mounted in series. Each cart's weight can be adjusted with brass masses and the carts can be coupled with springs. The pendulum is mounted on the first cart and an adjustable air damper can be attached to the third cart if desired. Each cart and the pendulum have a quadrature encoder to determine position. Full state feedback was implemented in order to develop the control algorithm, along with a state estimator to determine the velocity states of the system. A MATLAB program was used to convert the state space matrices from continuous time to discrete time. This program also used a desired phase margin and damping ratio to determine the feedback gain matrix that would be used in the LabVIEW program. This experiment will allow engineers to gain a better understanding of liquid propellant slosh dynamics, therefore enabling them to develop more robust control algorithms for launch vehicle systems.
An integer optimization algorithm for robust identification of non-linear gene regulatory networks
Directory of Open Access Journals (Sweden)
Chemmangattuvalappil Nishanth
2012-09-01
Full Text Available Abstract Background Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision-making processes. Advancement in high-throughput experimental techniques has initiated innovative data-driven analysis of gene regulatory networks. However, the inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions, and algorithms that directly exploit basic biological traits remain few. Such algorithms are expected to be efficient in their performance and robust in their prediction. Results We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time series gene expression data in the presence of significant experimental noise and non-linear behavior. In this novel formalism, we have addressed data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we have incorporated non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the sparsity of biological interactions. Towards that end, the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component. The objective function is targeted to minimize the network connections subject to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental data-sets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and small discrepancy between the actual and predicted kinetic parameters
A Simple and Robust Event-Detection Algorithm for Single-Cell Impedance Cytometry.
Caselli, Federica; Bisegna, Paolo
2016-02-01
Microfluidic impedance cytometry is emerging as a powerful label-free technique for the characterization of single biological cells. In order to increase the sensitivity and the specificity of the technique, suited digital signal processing methods are required to extract meaningful information from measured impedance data. In this study, a simple and robust event-detection algorithm for impedance cytometry is presented. Since a differential measuring scheme is generally adopted, the signal recorded when a cell passes through the sensing region of the device exhibits a typical odd-symmetric pattern. This feature is exploited twice by the proposed algorithm: first, a preliminary segmentation, based on the correlation of the data stream with the simplest odd-symmetric template, is performed; then, the quality of detected events is established by evaluating their E2O index, that is, a measure of the ratio between their even and odd parts. A thorough performance analysis is reported, showing the robustness of the algorithm with respect to parameter choice and noise level. In terms of sensitivity and positive predictive value, an overall performance of 94.9% and 98.5%, respectively, was achieved on two datasets relevant to microfluidic chips with very different characteristics, considering three noise levels. The present algorithm can foster the role of impedance cytometry in single-cell analysis, which is the new frontier in "Omics."
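The E2O index described above has a direct numerical form: split a window centered on the candidate event into its even and odd parts about the midpoint and take the ratio of their energies; a genuine odd-symmetric (bipolar) cell transit yields a small value. This is a hedged sketch of that one step, with made-up sample windows, not the paper's full detection pipeline.

```python
def e2o_index(signal):
    # Decompose an odd-length window about its center sample into even and
    # odd parts, and return the even-to-odd energy ratio (the E2O index).
    c = len(signal) // 2
    even = [(signal[c + k] + signal[c - k]) / 2 for k in range(c + 1)]
    odd = [(signal[c + k] - signal[c - k]) / 2 for k in range(c + 1)]
    e_even = sum(v * v for v in even)
    e_odd = sum(v * v for v in odd)
    return e_even / e_odd if e_odd > 0 else float("inf")

event = [0.0, -0.5, -1.0, 0.0, 1.0, 0.5, 0.0]   # odd-symmetric: true event
noise = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]     # purely even: no event
```

Thresholding this ratio is what lets the algorithm reject segments that correlate with the template by chance but lack the differential signature.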
Robust Airfoil Optimization with Multi-objective Estimation of Distribution Algorithm
Institute of Scientific and Technical Information of China (English)
Zhong Xiaoping; Ding Jifeng; Li Weiji; Zhang Yong
2008-01-01
A transonic airfoil designed by means of classical point-optimization may exhibit dramatically inferior performance under off-design conditions. To overcome this shortcoming, robust design is proposed to find the optimal profile of an airfoil that maintains its performance in an uncertain environment. The robust airfoil optimization is aimed at minimizing the mean values and variances of drag coefficients while satisfying the lift and thickness constraints over a range of Mach numbers. A multi-objective estimation of distribution algorithm is applied to the robust airfoil optimization on the basis of the RAE2822 benchmark airfoil. The shape of the airfoil is obtained by superposing ten Hicks-Henne shape functions upon the benchmark airfoil. A set of design points is selected according to a uniform design table for aerodynamic evaluation. A Kriging model of the drag coefficient is constructed with those points to reduce computing costs. Over the Mach range from 0.7 to 0.8, the airfoil generated by the robust optimization has a configuration characterized by a supercritical airfoil with low drag coefficients. The small fluctuation in its drag coefficients means that the performance of the robust airfoil is insensitive to variation of the Mach number.
Institute of Scientific and Technical Information of China (English)
(no author listed)
2006-01-01
Recent advances in 3D spatial data capture, such as high resolution satellite images and laser scanning as well as corresponding data processing and modeling technologies, have led to the generation of large amounts of datasets on terrains, buildings, roads and other features. The rapid transmission and visualization of 3D models has become a 'bottleneck' of internet-based applications. This paper proposes a robust algorithm to generate multi-resolution models for rapid visualization and network transmission of 3D models. Experiments were undertaken to evaluate the performance of the proposed algorithm. Experimental results demonstrate that the proposed algorithm achieves good performance in terms of running speed, accuracy, encoding of multi-resolution models, and network transmission.
ISS-based robust adaptive fuzzy algorithm for maintaining a ship's track
Institute of Scientific and Technical Information of China (English)
(no author listed)
2007-01-01
This paper focuses on the problem of linear track keeping for marine surface vessels. The influence exerted by sea currents on the kinematic equation of ships is considered first. Input-to-state stability (ISS) theory is used to verify that the system is input-to-state stable. Combining the Nussbaum gain with backstepping techniques, a robust adaptive fuzzy algorithm is presented, employing fuzzy systems as approximators for the unknown nonlinearities in the system. It is proved that the proposed algorithm guarantees that all signals in the closed-loop system are ultimately bounded. Consequently, a ship's linear track-keeping control can be implemented. Simulation results using Dalian Maritime University's ocean-going training ship 'YULONG' are presented to validate the effectiveness of the proposed algorithm.
Directory of Open Access Journals (Sweden)
Shashwat Pathak
2016-09-01
Full Text Available This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera; both are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on three texture features: uniformity, intensity and standard deviation. These features are first computed and mapped with the diagnostic opinion of an eye expert to define the basic threshold of the screening system, and later tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using the proposed system has been suggested, which confirms the telemedicine application of the proposed system.
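The three screening features named above (uniformity, intensity, standard deviation) are standard first-order texture statistics and can be sketched directly; the decision thresholds below are illustrative assumptions, not the expert-mapped thresholds from the study. A cloudy lens region tends to be more uniform and less varied than a clear one.

```python
def texture_features(pixels, bins=16):
    # First-order texture statistics on a flat list of grey levels in
    # [0, 255]: mean intensity, standard deviation, and histogram
    # uniformity (sum of squared bin probabilities).
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    hist = [0] * bins
    for p in pixels:
        hist[min(bins - 1, int(p * bins / 256))] += 1
    uniformity = sum((h / n) ** 2 for h in hist)
    return mean, std, uniformity

def screen(pixels, u_thresh=0.5, std_thresh=30.0):
    # Hypothetical decision rule: flag as cataractous when the region is
    # highly uniform and has low intensity spread (thresholds illustrative).
    mean, std, uniformity = texture_features(pixels)
    return uniformity > u_thresh and std < std_thresh

cloudy = [200] * 90 + [190] * 10          # nearly uniform grey region
clear = list(range(0, 256, 2))            # widely spread intensities
```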
Advanced Credit-Assignment CMAC Algorithm for Robust Self-Learning and Self-Maintenance Machine
Institute of Scientific and Technical Information of China (English)
ZHANG Lei(张蕾); LEE Jay; CAO Qixin(曹其新); WANG Lei(王磊)
2004-01-01
Smart machine necessitates self-learning capabilities to assess its own performance and predict its behavior. To achieve self-maintenance intelligence, robust and fast learning algorithms need to be embedded in machine for real-time decision. This paper presents a credit-assignment cerebellar model articulation controller (CA-CMAC) algorithm to reduce learning interference in machine learning. The developed algorithms on credit matrix and the credit correlation matrix are presented. The error of the training sample distributed to the activated memory cell is proportional to the cell's credibility, which is determined by its activated times. The convergence processes of CA-CMAC in cyclic learning are further analyzed with two convergence theorems. In addition, simulation results on the inverse kinematics of 2-degree-of-freedom planar robot arm are used to prove the convergence theorems and show that CA-CMAC converges faster than conventional machine learning.
A robust background regression based score estimation algorithm for hyperspectral anomaly detection
Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei
2016-12-01
Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with some of the other state-of-the-art anomaly detection methods, and is easy to implement.
Weng, Jing-Feng; Lo, Yu-Lung
2012-05-07
For 3D objects with height discontinuities, the image reconstruction performance of interferometric systems is adversely affected by the presence of noise in the wrapped phase map. Various schemes have been proposed for detecting residual noise, speckle noise and noise at the lateral surfaces of the discontinuities. However, in most schemes, some noisy pixels are missed and noise detection errors occur. Accordingly, this paper proposes two robust filters (designated as Filters A and B, respectively) for improving the performance of the phase unwrapping process for objects with height discontinuities. Filter A comprises a noise and phase jump detection scheme and an adaptive median filter, while Filter B replaces the detected noise with the median phase value of an N × N mask centered on the noisy pixel. Filter A enables most of the noise and detection errors in the wrapped phase map to be removed. Filter B then detects and corrects any remaining noise or detection errors during the phase unwrapping process. Three reconstruction paths are proposed, Path I, Path II and Path III. Path I combines the path-dependent MACY algorithm with Filters A and B, while Paths II and III combine the path-independent cellular automata (CA) algorithm with Filters A and B. In Path II, the CA algorithm operates on the whole wrapped phase map, while in Path III, the CA algorithm operates on multiple sub-maps of the wrapped phase map. The simulation and experimental results confirm that the three reconstruction paths provide a robust and precise reconstruction performance given appropriate values of the parameters used in the detection scheme and filters, respectively. However, the CA algorithm used in Paths II and III is relatively inefficient in identifying the most suitable unwrapping paths. Thus, of the three paths, Path I yields the lowest runtime.
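Two building blocks of this pipeline are easy to show in one-dimensional form: the classic unwrapping step (add the multiple of 2*pi that minimizes each neighbour difference) and a Filter-B-style repair that overwrites a flagged noisy sample with the median of its neighbourhood. This is a hedged 1-D sketch, not the paper's 2-D MACY/CA implementation.

```python
import math

def unwrap(phases):
    # Classic 1-D phase unwrapping: whenever the wrapped difference between
    # neighbours exceeds pi in magnitude, add the multiple of 2*pi that
    # brings it back into (-pi, pi].
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out

def median_replace(phases, noisy_idx, half=1):
    # Filter-B-style repair (sketch): replace each flagged noisy sample
    # with the median of its local window before unwrapping.
    out = list(phases)
    for i in noisy_idx:
        lo, hi = max(0, i - half), min(len(phases), i + half + 1)
        window = sorted(phases[lo:hi])
        out[i] = window[len(window) // 2]
    return out

true_phase = [0.4 * i for i in range(20)]                 # smooth ramp
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap(wrapped)                               # matches the ramp
```

A single impulse left in `wrapped` would propagate through every later sample of a path-dependent unwrap, which is why the noise is detected and repaired first.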
A new algorithm for the robust optimization of rotor-bearing systems
Lopez, R. H.; Ritto, T. G.; Sampaio, Rubens; Souza de Cursi, J. E.
2014-08-01
This article presents a new algorithm for the robust optimization of rotor-bearing systems. The goal of the optimization problem is to find the values of a set of parameters for which the natural frequencies of the system are as far away as possible from the rotational speeds of the machine. To accomplish this, the penalization proposed by Ritto, Lopez, Sampaio, and Souza de Cursi in 2011 is employed. Since the rotor-bearing system is subject to uncertainties, such a penalization is modelled as a random variable. The robust optimization is performed by minimizing the expected value and variance of the penalization, resulting in a multi-objective optimization problem (MOP). The objective function of this MOP is known to be non-convex and it is shown that its resulting Pareto front (PF) is also non-convex. Thus, a new algorithm is proposed for solving the MOP: the normal boundary intersection (NBI) is employed to discretize the PF handling its non-convexity, and a global optimization algorithm based on a restart procedure and local searches are employed to minimize the NBI subproblems tackling the non-convexity of the objective function. A numerical analysis section shows the advantage of using the proposed algorithm over the weighted-sum (WS) and NSGA-II approaches. In comparison with the WS, the proposed approach obtains a much more even and useful set of Pareto points. Compared with the NSGA-II approach, the proposed algorithm provides a better approximation of the PF requiring much lower computational cost.
Directory of Open Access Journals (Sweden)
V. Jayaraj
2010-08-01
Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine-detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine-detail preservation even at high mixed-noise density. It performs both spatial and temporal filtering for removal of the noises in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median of Squares as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from the visual point of view and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.
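The detect-then-replace idea behind decision-based filtering can be sketched for the salt-and-pepper case: only pixels at the extreme grey levels are treated as corrupted and replaced by the median of their uncorrupted neighbours, so clean pixels (and hence edges) pass through untouched. This is a minimal assumption-laden sketch, not the adaptive variance-driven filter of the paper.

```python
def decision_based_filter(img, lo=0, hi=255):
    # Decision-based median filtering: detect impulses (pixels stuck at the
    # extremes), replace each with the median of the clean neighbours in a
    # 3x3 window, and leave all other pixels unchanged.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] in (lo, hi):                    # impulse detected
                good = [img[j][i]
                        for j in range(max(0, y - 1), min(h, y + 2))
                        for i in range(max(0, x - 1), min(w, x + 2))
                        if img[j][i] not in (lo, hi)]
                if good:
                    out[y][x] = sorted(good)[len(good) // 2]
    return out

img = [[100, 100, 100],
       [100, 255, 100],    # salt impulse
       [100, 100,   0]]    # pepper impulse
clean = decision_based_filter(img)
```

The full algorithm generalizes this decision step by estimating the local noise variance and switching among filters accordingly.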
A fast and Robust Algorithm for general inequality/equality constrained minimum time problems
Energy Technology Data Exchange (ETDEWEB)
Briessen, B. [Sandia National Labs., Albuquerque, NM (United States); Sadegh, N. [Georgia Inst. of Tech., Atlanta, GA (United States). School of Mechanical Engineering
1995-12-01
This paper presents a new algorithm for solving general inequality/equality constrained minimum time problems. The algorithm's solution time is linear in the number of Runge-Kutta steps and the number of parameters used to discretize the control input history. The method is being applied to a three-link redundant robotic arm with torque bounds, joint angle bounds, and a specified tip path. It solves case after case within a graphical user interface in which the user chooses the initial joint angles and the tip path with a mouse. Solve times are from 30 to 120 seconds on a Hewlett Packard workstation. A zero torque history is always used in the initial guess, and the algorithm has never crashed, indicating its robustness. The algorithm solves for a feasible solution for a large trajectory execution time t_f, then reduces t_f by a small amount and re-solves. The fixed-time re-solve uses a new method of finding a near-minimum-2-norm solution to a set of linear equations and inequalities that achieves quadratic convergence to a feasible solution of the full nonlinear problem.
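The reduce-and-re-solve strategy in the last sentences can be shown in skeletal form: starting from a comfortably feasible execution time, shrink t_f in small steps as long as the (here abstract) fixed-time solver still succeeds. The feasibility predicate below is a stand-in for the full trajectory solver.

```python
def minimum_time(feasible, tf0=10.0, dt=0.25):
    # Reduce-and-re-solve: decrease the trajectory execution time t_f in
    # steps of dt while the fixed-time subproblem remains feasible.
    tf = tf0
    assert feasible(tf), "initial guess must be feasible"
    while tf - dt > 0 and feasible(tf - dt):
        tf -= dt
    return tf

# Stand-in for the constrained solver: feasible iff t_f >= 3.1 (made up).
tf_min = minimum_time(lambda tf: tf >= 3.1)   # stops at 3.25
```

Each re-solve is warm-started from the previous solution in the actual algorithm, which is what keeps the overall cost low despite the many fixed-time solves.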
A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing
Abu Jbara, Khaled F.
2015-05-01
This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman Filter, which can integrate other sensory information such as position and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show an accurate tracking of the runway edges during the landing phase under various lighting conditions. Also, it suggests that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using Hardware in the Loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.
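The Kalman-filter update used to smooth the video-based runway position estimates can be illustrated with a minimal scalar filter; the constant-position model and noise values below are illustrative assumptions, not the paper's tuned filter (which also fuses attitude and position sensors).

```python
def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    # Scalar Kalman filter: constant-position model with process noise q
    # and measurement noise r; returns the filtered estimate sequence.
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                       # predict: uncertainty grows
        k = p / (p + r)                 # Kalman gain
        x = x + k * (z - x)             # update with the new measurement
        p = (1 - k) * p                 # uncertainty shrinks after update
        out.append(x)
    return out

noisy = [5.1, 4.8, 5.3, 4.9, 5.2, 5.0, 4.7, 5.1]   # jittery edge position
smooth = kalman_1d(noisy, x0=5.0)
```

Because each update is a convex combination of prediction and measurement, a single bad frame (e.g. under turbulence or glare) only partially perturbs the track.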
Zheng, Xiang
2015-03-01
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.
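A scalar analogue conveys the fully implicit time-stepping idea: each step solves the nonlinear equation x_new = x_old + dt * f(x_new) with Newton's method, the role played (in parallel, on the full discretized system) by the Newton-Krylov-Schwarz solver above. The stiff test ODE is an illustrative stand-in for the CHC equation.

```python
def implicit_euler_newton(f, dfdx, x0, dt, steps, tol=1e-12):
    # Implicit (backward) Euler: at each step, solve the residual
    # g(x_new) = x_new - x_old - dt*f(x_new) = 0 by Newton iteration.
    x = x0
    for _ in range(steps):
        xn = x                                    # initial Newton guess
        for _ in range(50):
            g = xn - x - dt * f(xn)               # residual of the step
            xn -= g / (1.0 - dt * dfdx(xn))       # Newton correction
            if abs(g) < tol:
                break
        x = xn
    return x

# Stiff linear decay dx/dt = -10x, integrated to t = 1.
x_end = implicit_euler_newton(lambda x: -10 * x, lambda x: -10.0,
                              1.0, 0.001, 1000)
```

The implicit scheme stays stable for time steps far larger than an explicit scheme would tolerate, which is what enables the adaptive time-stepping strategy described above.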
Robust state feedback controller design of STATCOM using chaotic optimization algorithm
Directory of Open Access Journals (Sweden)
Safari Amin
2010-01-01
Full Text Available In this paper, a new technique for the design of a robust state feedback controller for the static synchronous compensator (STATCOM) using the Chaotic Optimization Algorithm (COA) is presented. The design is formulated as an optimization problem which is solved by the COA. Since chaotic planning enjoys reliability, ergodicity and stochastic features, the proposed technique performs chaos mapping using Lozi map chaotic sequences, which increases its convergence rate. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results reveal that the proposed controller has an excellent capability in damping power system low-frequency oscillations and greatly enhances the dynamic stability of the power system. Moreover, the system performance analysis under different operating conditions shows that the phase-based controller is superior compared to the magnitude-based controller.
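The Lozi map mentioned above is a simple piecewise-linear chaotic generator, x_{k+1} = 1 - a*|x_k| + y_k, y_{k+1} = b*x_k, and the basic chaotic-search idea is to rescale its ergodic iterates into the parameter range and keep the best candidate. The parameters, initial point, and toy cost function below are illustrative assumptions, not the paper's controller design problem.

```python
def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    # Lozi map iterates; a = 1.7, b = 0.5 are the standard chaotic
    # parameters, giving a bounded, non-repeating sequence.
    xs = []
    x, y = x0, y0
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs.append(x)
    return xs

def chaotic_search(f, lo, hi, n=500):
    # Minimal chaotic optimization: map the chaotic iterates into the
    # search interval [lo, hi] and keep the best candidate seen.
    seq = lozi_sequence(n)
    m, M = min(seq), max(seq)
    best_x, best_f = None, float("inf")
    for s in seq:
        x = lo + (hi - lo) * (s - m) / (M - m)    # rescale into [lo, hi]
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# Toy damping-controller "cost" with its minimum at gain = 2 (made up).
gain, cost = chaotic_search(lambda k: (k - 2.0) ** 2, 0.0, 5.0)
```

Ergodicity of the chaotic sequence is what substitutes for randomness here: the iterates visit the whole interval without getting trapped, which is the claimed advantage over plain random sampling.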
VisiMark1_0: An Assistance Tool for Evaluating Robustness of Video Watermarking Algorithms
Directory of Open Access Journals (Sweden)
Neeta Deshpande
2013-04-01
Full Text Available The paper proposes VisiMark1_0, a tool to assist in evaluating the robustness of video watermarking algorithms, since evaluating a video watermarking algorithm for robustness with the available tools is a tedious task. It is our belief that researchers in robust video watermarking need a tool that assists in the evaluation procedure irrespective of the design algorithm. This tool provides a test bed of various attacks. The input to the tool is a watermarked video; the outputs are the attacked videos, the evaluated parameters (PSNR, MSE, MSAD and DELTA), graphical comparisons of the attacked and watermarked videos for all parameters needed by researchers, and an attack report. Provision for comparing any two videos is an additional facility provided in the tool. The attacks implemented in VisiMark1_0 fall into three main categories. First, video attacks: frame dropping, frame averaging, frame swapping, changing the sequence of scenes, changing the frame rate, fade and dissolve, contrast stretching, motion blurring, chroma sampling and inter-frame averaging are some of the novel offerings in this category. Second, geometrical attacks: apart from the traditional rotation, scaling and cropping attacks for images, VisiMark1_0 contributes sharpening, shearing, flipping, up/down-sampling and dithering attacks for video. Third, signal-processing attacks: conventional noising, denoising and filtering attacks for images are incorporated for video, along with a pixel-removal attack as a novel contribution. VisiMark1_0 is an endeavor to design a tool for evaluating a raw video (currently an .avi file), incorporating various attacks, with the prospect of supporting numerous video formats in the near future.
A Novel Robust Communication Algorithm for Distributed Secondary Control of Islanded MicroGrids
DEFF Research Database (Denmark)
Shafiee, Qobad; Dragicevic, Tomislav; Vasquez, Juan Carlos;
2013-01-01
Distributed secondary control (DSC) is a new approach for MicroGrids (MGs) in which frequency, voltage and power regulation is carried out locally in each unit, avoiding the use of a central controller. Due to the constrained traffic pattern required by the secondary control, it is viable to implement… dedicated local area communication functionality among the local controllers. This paper presents a new, wireless-based robust communication algorithm for DSC of MGs designed to avoid communication bottlenecks and enable the plug-and-play capability of new DGs. Real-time simulation and experimental results…
Institute of Scientific and Technical Information of China (English)
TENG Pengxiao; MA Chizhou; YANG Yichun; LI Xiaodong
2007-01-01
A robust algorithm for direction of arrival (DOA) estimation of coherent wideband sources in unknown correlated noise fields was investigated. The noise is usually unknown and correlated among sensors in practical applications, especially for arrays with comparatively small apertures. Spatially correlated noise incurs an increase in focusing error and a severe degradation of the DOA estimation; therefore, a method of focusing transformation based on differentiating the covariance matrix was proposed to eliminate the noise and hence reduce the focusing error. The simulation and experimental results demonstrate the effectiveness of the proposed method.
Robust Face Location and Tracking Using Optical Flow and Genetic Algorithms
Institute of Scientific and Technical Information of China (English)
WANG Yanjiang; YUAN Baozong
2001-01-01
This paper presents a new and robust approach to the detection, localization and tracking of a human face in image sequences. First, a fast algorithm based on neighbor-point reliability is proposed to calculate the optical flow, which is used to extract the motion region. Then knowledge of the hair and the head is used to locate the face area. For face tracking, a new genetic-algorithm-based dynamic template-matching method is applied to search for the new position of the face in each new video frame. Experimental results show that the proposed face tracking method is fast and robust to illumination, face poses, facial expressions and image distractors such as facial occlusion by hands.
Autopiquer - a Robust and Reliable Peak Detection Algorithm for Mass Spectrometry
Kilgour, David P. A.; Hughes, Sam; Kilgour, Samantha L.; Mackay, C. Logan; Palmblad, Magnus; Tran, Bao Quoc; Goo, Young Ah; Ernst, Robert K.; Clarke, David J.; Goodlett, David R.
2017-02-01
We present a simple algorithm for robust and unsupervised peak detection by determining a noise threshold in isotopically resolved mass spectrometry data. Solving this problem will greatly reduce the subjective and time-consuming manual picking of mass spectral peaks and so will prove beneficial in many research applications. The Autopiquer approach uses autocorrelation to test for the presence of (isotopic) structure in overlapping windows across the spectrum. Within each window, a noise threshold is optimized to remove the most unstructured data, whilst keeping as much of the (isotopic) structure as possible. This algorithm has been successfully demonstrated for both peak detection and spectral compression on data from many different classes of mass spectrometer and for different sample types, and this approach should also be extendible to other types of data that contain regularly spaced discrete peaks.
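The core test in the abstract above, using autocorrelation to decide whether a spectrum window contains regular (isotopic) structure or only noise, can be sketched as follows. The lag, cutoff and synthetic signals are illustrative assumptions, not values from the Autopiquer paper.

```python
# Minimal sketch of the Autopiquer idea: normalised autocorrelation at the
# expected peak spacing separates structured windows from pure noise.
import random

def autocorr(y, lag):
    """Normalised autocorrelation of y at a given lag."""
    n = len(y)
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y)
    if var == 0:
        return 0.0
    cov = sum((y[i] - mu) * (y[i + lag] - mu) for i in range(n - lag))
    return cov / var

def has_structure(y, lag, cutoff=0.3):
    """Flag a window as structured if autocorrelation at `lag` is high."""
    return autocorr(y, lag) > cutoff

random.seed(1)
periodic = [1.0 if i % 5 == 0 else 0.0 for i in range(100)]  # peaks every 5 bins
noise = [random.random() for _ in range(100)]
```

In the full algorithm this test is applied in overlapping windows while a noise threshold is raised until the unstructured part of the data is removed.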
Schultz, A.
2010-12-01
describe our ongoing efforts to achieve massive parallelization on a novel hybrid GPU testbed machine currently configured with 12 Intel Westmere Xeon CPU cores (or 24 parallel computational threads) with 96 GB DDR3 system memory, 4 GPU subsystems which in aggregate contain 960 NVidia Tesla GPU cores with 16 GB dedicated DDR3 GPU memory, and a second interleved bank of 4 GPU subsystems containing in aggregate 1792 NVidia Fermi GPU cores with 12 GB dedicated DDR5 GPU memory. We are applying domain decomposition methods to a modified version of Weiss' (2001) 3D frequency domain full physics EM finite difference code, an open source GPL licensed f90 code available for download from www.OpenEM.org. This will be the core of a new hybrid 3D inversion that parallelizes frequencies across CPUs and individual forward solutions across GPUs. We describe progress made in modifying the code to use direct solvers in GPU cores dedicated to each small subdomain, iteratively improving the solution by matching adjacent subdomain boundary solutions, rather than iterative Krylov space sparse solvers as currently applied to the whole domain.
Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.
Clark, Darin P; Badea, Cristian T
2014-11-07
Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg mL(-1)), gold (0.9 mg mL(-1)), and gadolinium (2.9 mg mL(-1)) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.
Family of image compression algorithms which are robust to transmission errors
Creusere, Charles D.
1996-10-01
In this work, we present a new family of image compression algorithms derived from Shapiro's embedded zerotree wavelet (EZW) coder. These new algorithms introduce robustness to transmission errors into the bit stream while still preserving its embedded structure. This is done by partitioning the wavelet coefficients into groups, coding each group independently, and interleaving the bit streams for transmission; thus, if one bit is corrupted, only one of the bit streams is truncated in the decoder. If each group of wavelet coefficients uniformly spans the entire image, then the objective and subjective qualities of the reconstructed image are very good. To illustrate the advantages of this new family, we compare it to the conventional EZW coder. For example, one variation has a peak signal to noise ratio (PSNR) slightly lower than that of the conventional algorithm when no errors occur, but when a single error occurs at bit 1000, the PSNR of the new coder is well over 5 dB higher for both test images. Finally, we note that the new algorithms do not increase the complexity of the overall system and, in fact, they are far more easily parallelized than the conventional EZW coder.
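The error-containment mechanism described above, independent substreams interleaved round-robin so that one corrupted bit truncates only one substream, can be sketched directly. Bitstreams are plain '0'/'1' strings here; the real coder emits embedded zerotree bits per coefficient group.

```python
# Sketch of the interleaving scheme: round-robin merge of independently
# coded bitstreams, and the inverse split performed at the decoder.

def interleave(streams):
    """Round-robin interleave equal-length bitstreams for transmission."""
    return ''.join(bits[i] for i in range(len(streams[0])) for bits in streams)

def deinterleave(tx, n_streams):
    """Split a received bitstream back into its n_streams substreams."""
    return [tx[k::n_streams] for k in range(n_streams)]
```

After deinterleaving, each substream is decoded independently, so a bit error at position p in the channel only truncates the one substream it falls in.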
A Novel Robust Scene Change Detection Algorithm for Autonomous Robots Using Mixtures of Gaussians
Directory of Open Access Journals (Sweden)
Luis J. Manso
2014-02-01
Interest in change detection techniques has considerably increased during recent years in the field of autonomous robotics. This is partly because changes in a robot's working environment are useful for several robotic skills (e.g., spatial cognition, modelling or navigation) and applications (e.g., surveillance or guidance robots). Changes are usually detected by comparing current data provided by the robot's sensors with a previously known map or model of the environment. When the data consists of a large point cloud, dealing with it is a computationally expensive task, mainly due to the number of points and their redundancy. Using Gaussian Mixture Models (GMM) instead of raw point clouds leads to a more compact feature space that can be used to efficiently process the input data. This allows us to successfully segment the set of 3D points acquired by the sensor and reduce the computational load of the change detection algorithm. However, the segmentation of the environment as a Mixture of Gaussians has some problems that need to be properly addressed. In this paper, a novel change detection algorithm is described in order to improve the robustness and reduce the computational cost of previous approaches. The proposal is based on the classic Expectation Maximization (EM) algorithm, for which different selection criteria are evaluated. As demonstrated in the experimental results section, the proposed change detection algorithm achieves the detection of changes in the robot's working environment faster and more accurately than similar approaches.
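The EM step at the heart of the GMM-based approach can be illustrated in one dimension: fit a two-component Gaussian mixture by alternating responsibilities (E-step) and parameter updates (M-step). This scalar sketch only shows the mechanics; the paper works on 3-D point clouds, and the initialisation and data here are our own toy choices.

```python
# Compact 1-D EM for a 2-component Gaussian mixture, the core routine
# behind GMM-based environment segmentation.
import math, random

def em_gmm_1d(xs, iters=50):
    """Returns (means, stds, weights) of a 2-component mixture fit to xs."""
    mu = [min(xs), max(xs)]          # crude but effective initialisation
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sd[k] = max(1e-3, math.sqrt(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk))
            w[k] = nk / len(xs)
    return mu, sd, w

random.seed(0)
data = [random.gauss(0, 0.5) for _ in range(200)] + \
       [random.gauss(10, 0.5) for _ in range(200)]
mu, sd, w = em_gmm_1d(data)
```

In the change detector, points with low likelihood under the mixture fitted to the reference model are the candidates flagged as changes.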
An improved robust blind motion de-blurring algorithm for remote sensing images
He, Yulong; Liu, Jin; Liang, Yonghui
2016-10-01
Shift-invariant motion blur can be modeled as a convolution of the true latent image with the blur kernel, plus additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm which proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main factor behind low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation: first, we adopt a multi-scale scheme to make sure that the edge map is constructed accurately; second, an effective salient edge selection method based on Relative Total Variation (RTV) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to the local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.
Hu, Yi-Chung
2014-01-01
On the basis of fuzzy regression, computational intelligence models such as neural networks can be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When the training data are not contaminated by outliers, such models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms that resist outliers in interval regression analysis have been an interesting area of research. Several computational intelligence approaches are effective at resisting outliers, but their required parameters depend on whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct the robust nonlinear interval regression model using genetic algorithms. Outliers beyond or beneath the data interval then have only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.
Robust guide-star tracking algorithm proposed for Gravity Probe-B relativity mission
Gwo, Dz-Hung
1997-09-01
The Gravity Probe-B cryogenic star-tracking telescope provides the inertial pointing reference, as established by a distant guide star, with milli-arc-second resolution for the NASA/Stanford relativity gyroscope experiment. The star image of the f/27 Cassegrainian telescope is split onto two focal planes by a 50/50 intensity splitter, with each resultant image further divided by a roof-prism reflector to generate the quadrant pointing information within a few arc-seconds of the guide-star direction. Conventionally, the quadrant pointing information is derived through the difference-and-sum algorithm. In this article, an alternative simple yet robust algorithm is proposed and compared with the conventional one in the following aspects: (1) requirements on near-perfect star-image division; (2) optimization in selecting the null direction; (3) compensation of null-direction drift due to differential aging of the photon detectors; (4) operational definitions of response sensitivity, linearity, and linear range of motion measurement; (5) robustness in system redundancy in terms of options for single-detector pointing per axis.
Kubota, S.; Kanomata, K.; Momiyama, K.; Suzuki, T.; Hirose, F.
2013-10-01
We propose an optimization algorithm to design multilayer antireflection (AR) structures, robust against variations in layer thicknesses, for organic photovoltaic cells. When a set of available materials is given, the proposed method searches for the material and thickness of each AR layer that maximize the short-circuit current density (Jsc). The algorithm obtains a set of solutions, including optimal and quasi-optimal ones, at the same time, so that they can be compared directly. In addition, the effects of deviations in the thicknesses of the AR layers are examined for the (quasi-)optimal solutions obtained. The expected decrease in AR performance is estimated by calculating the changes in Jsc when the thicknesses of all AR layers are varied independently. We show that some of the quasi-optimal solutions may have a simpler layer configuration and can be more robust against deviations in film thickness than the optimal solution. This method indicates the importance of actively searching valuable, non-optimal solutions for the practical design of AR films. We also discuss the optical conditions that lead to light absorption in the back metal contact and the effects of changing the active layer thickness.
Balandin, DV; Kogan, MM
2004-01-01
An algorithm for checking feasibility of the robust H-infinity control problem for systems with time-varying norm-bounded uncertainty is suggested. This algorithm is an iterative procedure, on each step of which an optimization problem for a linear function under convex constraints determined by LMIs is solved.
Sarjaš, Andrej; Chowdhury, Amor; Svečko, Rajko
2016-09-01
This paper presents the synthesis of an optimal robust controller design using the polynomial pole placement technique and a multi-criteria optimisation procedure via an evolutionary computation algorithm, differential evolution. The main idea of the design is to provide a reliable fixed-order robust controller structure and efficient closed-loop performance with a preselected nominal characteristic polynomial. The multi-criteria objective functions have quasi-convex properties that significantly improve the convergence and regularity of the optimal/sub-optimal solution. The fundamental aim of the proposed design is to optimise those quasi-convex functions with fixed closed-loop characteristic polynomials, whose properties are unrelated and hard to present within formal algebraic frameworks. The objective functions are derived from different closed-loop criteria, such as robustness in the H∞ metric, time performance indexes, controller structures, stability properties, etc. Finally, the design results from the example verify the efficiency of the controller design and also indicate broader possibilities for different optimisation criteria and control structures.
Robust evaluation of time series classification algorithms for structural health monitoring
Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.
2014-03-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.
Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi
2017-10-01
This paper deals with the robust safety design optimization of a rail vehicle system moving on short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety considers simultaneously the derailment angle and its standard deviation, taking the uncertainties of the design parameters into account. The obtained results show that the robust design significantly reduces the sensitivity of rail vehicle safety to design parameter uncertainties compared to the deterministic design and to results from the literature.
Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems
Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine
2016-12-01
A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt meshes of moving boundaries problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of initial meshes. In most situations, MMAs distribute mesh deformation while preserving a good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has been proven efficient to adapt meshes for various realistic aerodynamic motions: a symmetric wing that has suffered large tip bending and twisting and the high-lift components of a swept wing that has moved to different flight stages. Finally, the fluid flow problem has been solved on meshes that have moved and they have produced results close to experimental ones. However, for situations where moving boundaries are too close to each other, more improvements need to be made or other approaches should be taken, such as an overset grid method.
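The first stage of the combination described above, propagating prescribed boundary displacements to interior mesh nodes with Inverse Distance Weighting, can be sketched compactly. The 2-D setting and the power parameter are illustrative choices, not the paper's exact configuration.

```python
# Sketch of IDW mesh motion: each interior node moves by a
# distance-weighted blend of the boundary-node displacements.

def idw_displace(node, boundary_pts, boundary_disp, power=2.0):
    """Displacement of an interior node as the IDW blend of boundary displacements."""
    num = [0.0, 0.0]
    den = 0.0
    for (bx, by), (dx, dy) in zip(boundary_pts, boundary_disp):
        d2 = (node[0] - bx) ** 2 + (node[1] - by) ** 2
        if d2 == 0.0:               # node sits on the boundary: copy its motion
            return (dx, dy)
        wgt = d2 ** (-power / 2.0)  # weight decays as 1 / distance^power
        num[0] += wgt * dx
        num[1] += wgt * dy
        den += wgt
    return (num[0] / den, num[1] / den)
```

In the full MMA this displacement field is followed by GETMe smoothing and, if needed, an untangling pass to revert inverted elements.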
Robust signal recognition algorithm based on machine learning in heterogeneous networks
Institute of Scientific and Technical Information of China (English)
Xiaokai Liu; Rong Li; Chenglin Zhao; Pengbiao Wang
2016-01-01
There are various heterogeneous networks through which terminals can be served with a better quality of service, and signal system recognition and classification contribute greatly to this process. However, in low signal-to-noise ratio (SNR) circumstances or under time-varying multipath channels, the majority of the existing algorithms for signal recognition already face limitations. In this paper, we present a robust signal recognition method based upon the original and latest updated version of the extreme learning machine (ELM) to help users switch between networks. The ELM uses signal characteristics to distinguish systems. The superiority of this algorithm lies in the random choice of hidden nodes and in the fact that it determines the output weights analytically, which results in lower complexity. Theoretically, the algorithm tends to offer good generalization performance at an extremely fast learning speed. Moreover, we implement the GSM/WCDMA/LTE models in the Matlab environment using the Simulink tools. The simulations reveal that the signals can be recognized successfully, achieving 95% accuracy in a low-SNR (0 dB) environment in a time-varying multipath Rayleigh fading channel.
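The two ELM properties the abstract highlights, randomly chosen hidden nodes and analytically determined output weights, can be shown in miniature. This is a generic ELM sketch on a toy 1-D regression task, not the paper's GSM/WCDMA/LTE classifier; the hidden-layer size, ridge term and seed are our own assumptions.

```python
# Miniature extreme learning machine: random tanh hidden layer, output
# weights solved analytically via ridge-regularised normal equations.
import math, random

def solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_train(xs, ys, hidden=20, ridge=1e-3, seed=7):
    """Random hidden weights are never trained; only beta is solved for."""
    rng = random.Random(seed)
    W = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(hidden)]
    H = [[math.tanh(w * x + b) for (w, b) in W] for x in xs]
    # Normal equations: (H^T H + ridge*I) beta = H^T y
    A = [[sum(H[i][p] * H[i][q] for i in range(len(xs)))
          + (ridge if p == q else 0.0) for q in range(hidden)] for p in range(hidden)]
    rhs = [sum(H[i][p] * ys[i] for i in range(len(xs))) for p in range(hidden)]
    return W, solve(A, rhs)

def elm_predict(model, x):
    W, beta = model
    return sum(b * math.tanh(w * x + c) for (w, c), b in zip(W, beta))

xs = [i / 10.0 for i in range(-30, 31)]
ys = [math.sin(x) for x in xs]
model = elm_train(xs, ys)
mse = sum((elm_predict(model, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

Because only the linear output layer is fitted, training reduces to one linear solve, which is the source of the low complexity and fast learning the abstract claims.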
Fast and robust ray casting algorithms for virtual X-ray imaging
Freud, N.; Duvauchelle, P.; Létang, J. M.; Babot, D.
2006-07-01
Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation.
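The key kernel identified above, the path length a ray travels through an object, can be illustrated with the classic slab method from 3-D rendering on an axis-aligned box. The simulator handles general 3-D objects; a box keeps this sketch short and is our own simplification.

```python
# Slab-method sketch: length of the segment of a ray inside an
# axis-aligned bounding box (AABB), 0.0 if the ray misses it.

def ray_box_path_length(origin, direction, box_min, box_max):
    """Path length of a unit-direction ray through an AABB."""
    t_near, t_far = float('-inf'), float('inf')
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not (lo <= o <= hi):   # ray parallel to slab and outside it
                return 0.0
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    if t_far < t_near or t_far < 0.0:  # slabs do not overlap, or box behind ray
        return 0.0
    return t_far - max(t_near, 0.0)
```

In an attenuation computation, this length is what multiplies the linear attenuation coefficient of the traversed material in the Beer-Lambert exponent.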
Fast and robust ray casting algorithms for virtual X-ray imaging
Energy Technology Data Exchange (ETDEWEB)
Freud, N. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: Nicolas.Freud@insa-lyon.fr; Duvauchelle, P. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Letang, J.M. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)
2006-07-15
Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation.
New algorithm for robust H2/H∞ filtering with error variance assignment
Institute of Scientific and Technical Information of China (English)
刘立恒; 邓正隆; 王广雄
2004-01-01
We consider the robust H2/H∞ filtering problem for linear perturbed systems with steady-state error variance assignment. The generalized inverse technique of matrix is introduced, and a new algorithm is developed. After two Riccati equations are solved, the filter can be obtained directly, and the following three performance requirements are simultaneously satisfied: The filtering process is asymptotically stable; the steady-state variance of the estimation error of each state is not more than the individual prespecified upper bound; the transfer function from exogenous noise inputs to error state outputs meets the prespecified H∞ norm upper bound constraint. A numerical example is provided to demonstrate the flexibility of the proposed design approach.
Robust motion control design for dual-axis motion platform using evolutionary algorithm
Indian Academy of Sciences (India)
Horn-Yong Jan; Chun-Liang Lin; Ching-Huei Huang; Thong-Shing Hwang
2008-12-01
This paper presents a new approach to the dual-axis control design problem for a mechatronic platform. The cross-coupling effect leading to contour errors is effectively resolved by incorporating a neural-net-based decoupling compensator. Conditions for robust stability are derived to ensure closed-loop system stability with the decoupling compensator. An evolutionary algorithm possessing a universal solution-seeking capability is proposed for finding the optimal connecting weights of the neural compensator and the PID control gains for the two axis control loops. Numerical studies and a real-world experiment on a watch cambered-surface polishing platform have verified the performance and applicability of our proposed design.
Directory of Open Access Journals (Sweden)
Yahya AL-Nabhani
2015-10-01
Digital watermarking, which has been proven effective for protecting digital data, has recently gained considerable research interest. This study aims to develop an enhanced technique for producing watermarked images with high invisibility. During extraction, watermarks can be successfully extracted without the need for the original image. We have developed a discrete wavelet transform with a Haar filter to embed a binary watermark image in selected coefficient blocks. A probabilistic neural network is used to extract the watermark image. To evaluate the efficiency of the algorithm and the quality of the extracted watermark images, we used widely known image quality measures, such as the peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC). Results indicate the excellent invisibility of the watermarked image (PSNR = 68.27 dB), as well as exceptional watermark extraction (NCC = 0.9779). Experimental results reveal that the proposed watermarking algorithm yields watermarked images with superior imperceptibility and robustness to common attacks, such as JPEG compression, rotation, Gaussian noise, cropping, and median filtering.
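Wavelet-domain embedding of the kind described above can be sketched in one dimension: a one-level Haar transform, one watermark bit written into a detail coefficient by quantisation, and the inverse transform. The paper uses 2-D DWT coefficient blocks with a neural-network extractor; the 1-D signal, quantisation-based embedding and step size here are our own simplifications.

```python
# Stripped-down wavelet watermarking: one-level 1-D Haar DWT, embed one
# bit in a detail coefficient by quantisation, reconstruct, extract.

def haar_fwd(x):
    """One-level Haar DWT of an even-length sequence -> (approx, detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def embed_bit(x, bit, idx=0, step=4.0):
    """Quantise detail coefficient `idx` to an even (bit 0) or odd (bit 1)
    multiple of step/2, then reconstruct the watermarked signal."""
    a, d = haar_fwd(x)
    q = round(d[idx] / step) * step
    d[idx] = q + (step / 2.0 if bit else 0.0)
    return haar_inv(a, d)

def extract_bit(x, idx=0, step=4.0):
    """Recover the bit from the parity of the quantised detail coefficient."""
    _, d = haar_fwd(x)
    return int(round(d[idx] / (step / 2.0))) % 2
```

Extraction survives any perturbation of the marked coefficient smaller than step/4, which is the basic trade-off between robustness (larger step) and invisibility (smaller step).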
A Robust and Efficient Algorithm for Tool Recognition and Localization for Space Station Robot
Directory of Open Access Journals (Sweden)
Lingbo Cheng
2014-12-01
This paper studies a robust target recognition and localization method for a maintenance robot in a space station. Its main goal is to handle the target affine transformation caused by microgravity, the strong reflection and refraction of sunlight and lamplight in the cabin, and occlusion by other objects. In this method, an Affine Scale Invariant Feature Transform (Affine-SIFT) algorithm is proposed to extract enough local feature points with full affine invariance, and stable matching points are obtained from these points for target recognition by the selected Random Sample Consensus (RANSAC) algorithm. Then, in order to localize the target, an effective and appropriate 3D grasping scope of the target is defined, and we determine and evaluate the grasping precision with the estimated affine transformation parameters presented in this paper. Finally, the threshold of RANSAC is optimized to enhance the accuracy and efficiency of target recognition and localization, and the ranges of illumination, viewing distance and viewpoint angle over which the robot can obtain effective image data are evaluated using the Root-Mean-Square Error (RMSE). An experimental system to simulate the illumination environment in a space station is established. Extensive experiments have been carried out, and the results show both the validity of the proposed definition of the grasping scope and the feasibility of the proposed recognition and localization method.
Directory of Open Access Journals (Sweden)
Jinkwon Kim
The purpose of this research is to develop an intuitive and robust real-time QRS detection algorithm based on the physiological characteristics of the electrocardiogram waveform. The proposed algorithm finds the QRS complex based on the dual criteria of the amplitude and duration of the QRS complex. It consists of simple operations, such as a finite impulse response filter, differentiation, and thresholding, without complex and computationally expensive operations like a wavelet transformation. The QRS detection performance is evaluated using both the MIT-BIH arrhythmia database and the AHA ECG database (a total of 435,700 beats). The sensitivity (SE) and positive predictivity value (PPV) were 99.85% and 99.86%, respectively. By database, the SE and PPV were 99.90% and 99.91% for the MIT-BIH database and 99.84% and 99.84% for the AHA database, respectively. The result of the noisy environment test using record 119 from the MIT-BIH database indicated that the proposed method was scarcely affected by noise above 5 dB SNR (SE = 100%, PPV > 98%) without the need for an additional de-noising or back-searching process.
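The amplitude-and-duration idea can be sketched with a toy detector in the Pan-Tompkins style: difference, square, integrate, then apply an amplitude threshold plus a refractory (duration) criterion. The 80 ms window, half-maximum threshold, and 200 ms refractory period below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def detect_qrs(ecg, fs, amp_frac=0.5, min_rr=0.2):
    """Toy QRS detector: difference, square, smooth, dual-criterion threshold."""
    diff = np.diff(ecg)                   # differentiation emphasizes steep QRS slopes
    energy = diff ** 2
    win = max(1, int(0.08 * fs))          # ~80 ms integration window
    smooth = np.convolve(energy, np.ones(win) / win, mode="same")
    thresh = amp_frac * smooth.max()      # amplitude criterion
    peaks, last = [], -np.inf
    for i in range(1, len(smooth) - 1):
        if smooth[i] >= thresh and smooth[i] >= smooth[i - 1] and smooth[i] > smooth[i + 1]:
            if (i - last) / fs > min_rr:  # duration (refractory) criterion
                peaks.append(i)
                last = i
    return peaks
```

A real detector would replace the plain difference with a band-pass FIR filter and adapt the threshold beat by beat, but the dual-criterion structure is the same.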
Directory of Open Access Journals (Sweden)
Fabrizio Cutolo
2016-09-01
In the context of surgical navigation systems based on augmented reality (AR), the key challenge is to ensure the highest degree of realism in merging computer-generated elements with live views of the surgical scene. This paper presents an algorithm suited for wearable stereoscopic augmented-reality video see-through systems for use in a clinical scenario. A video-based tracking solution is proposed that relies on stereo localization of three monochromatic markers rigidly constrained to the scene. A PnP-based optimization step is introduced to separately refine the pose of the two cameras. Video-based tracking methods using monochromatic markers are robust to uncontrollable and/or inconsistent lighting conditions. The two-stage camera pose estimation algorithm provides sub-pixel registration accuracy. From both a technological and an ergonomic standpoint, the proposed approach represents an effective solution for implementing wearable AR-based surgical navigation systems wherever rigid anatomies are involved.
IPED2X: a robust pedigree reconstruction algorithm for complicated pedigrees.
He, Dan; Eskin, Eleazar
2014-12-01
Reconstruction of family trees, or pedigree reconstruction, for a group of individuals is a fundamental problem in genetics. Some recent methods have been developed to reconstruct pedigrees using genotype data only. These methods are accurate and efficient for simple pedigrees, which contain only sibling relationships, where two individuals share the same pair of parents. The most recent method, IPED2, is able to handle complicated pedigrees with half-sibling relationships, where two individuals share only one parent. However, that method has been shown to miss many true positive half-sibling relationships, as it removes all suspicious half-sibling relationships during the parent-construction process. In this work, we propose a novel method, IPED2X, which deploys a more robust algorithm for parent construction in pedigrees by considering more possible operations rather than simple deletion. We convert the parent-construction problem into a graph labeling problem and propose a more effective labeling algorithm. Our experiments show that IPED2X is more powerful at capturing true half-sibling relationships, which further leads to better reconstruction accuracy.
A Robust Learning Algorithm for Noisy Data
Institute of Scientific and Technical Information of China (English)
李杰星; 章云; 符曦
2000-01-01
Considering the limitations of the LS energy function used in the BP algorithm, this paper proposes a robust learning algorithm based on a study of how clustering suppresses the effects of random noise, together with intensified training on high-quality examples. Simulation results demonstrate that the robust algorithm is clearly superior to the BP algorithm in disturbance rejection and convergence.
A robust algorithm for optimizing protein structures with NMR chemical shifts.
Berjanskii, Mark; Arndt, David; Liang, Yongjie; Wishart, David S
2015-11-01
Over the past decade, a number of methods have been developed to determine the approximate structure of proteins using minimal NMR experimental information such as chemical shifts alone, sparse NOEs alone or a combination of comparative modeling data and chemical shifts. However, there have been relatively few methods that allow these approximate models to be substantively refined or improved using the available NMR chemical shift data. Here, we present a novel method, called Chemical Shift driven Genetic Algorithm for biased Molecular Dynamics (CS-GAMDy), for the robust optimization of protein structures using experimental NMR chemical shifts. The method incorporates knowledge-based scoring functions and structural information derived from NMR chemical shifts via a unique combination of multi-objective MD biasing, a genetic algorithm, and the widely used XPLOR molecular modelling language. Using this approach, we demonstrate that CS-GAMDy is able to refine and/or fold models that are as much as 10 Å (RMSD) away from the correct structure using only NMR chemical shift data. CS-GAMDy is also able to refine a wide range of approximate or mildly erroneous protein structures to more closely match the known/correct structure and the known/correct chemical shifts. We believe CS-GAMDy will allow protein models generated by sparse restraint or chemical-shift-only methods to achieve sufficiently high quality to be considered fully refined and "PDB worthy". The CS-GAMDy algorithm is explained in detail and its performance is compared over a range of refinement scenarios with several commonly used protein structure refinement protocols. The program has been designed to be easily installed and easily used and is available at http://www.gamdy.ca.
A robust algorithm for optimizing protein structures with NMR chemical shifts
Energy Technology Data Exchange (ETDEWEB)
Berjanskii, Mark; Arndt, David; Liang, Yongjie; Wishart, David S., E-mail: david.wishart@ualberta.ca [University of Alberta, Department of Computing Science (Canada)
2015-11-15
Over the past decade, a number of methods have been developed to determine the approximate structure of proteins using minimal NMR experimental information such as chemical shifts alone, sparse NOEs alone or a combination of comparative modeling data and chemical shifts. However, there have been relatively few methods that allow these approximate models to be substantively refined or improved using the available NMR chemical shift data. Here, we present a novel method, called Chemical Shift driven Genetic Algorithm for biased Molecular Dynamics (CS-GAMDy), for the robust optimization of protein structures using experimental NMR chemical shifts. The method incorporates knowledge-based scoring functions and structural information derived from NMR chemical shifts via a unique combination of multi-objective MD biasing, a genetic algorithm, and the widely used XPLOR molecular modelling language. Using this approach, we demonstrate that CS-GAMDy is able to refine and/or fold models that are as much as 10 Å (RMSD) away from the correct structure using only NMR chemical shift data. CS-GAMDy is also able to refine a wide range of approximate or mildly erroneous protein structures to more closely match the known/correct structure and the known/correct chemical shifts. We believe CS-GAMDy will allow protein models generated by sparse restraint or chemical-shift-only methods to achieve sufficiently high quality to be considered fully refined and “PDB worthy”. The CS-GAMDy algorithm is explained in detail and its performance is compared over a range of refinement scenarios with several commonly used protein structure refinement protocols. The program has been designed to be easily installed and easily used and is available at http://www.gamdy.ca.
Directory of Open Access Journals (Sweden)
Bassem Sheta
2012-11-01
The UAV industry is growing rapidly in an attempt to serve both military and commercial applications. A crucial aspect of UAV development is the reduction of navigational sensor costs while maintaining accurate navigation. Combining visual sensor solutions with traditional navigation sensors is proving significantly promising for replacing traditional IMU or GPS systems in many mission scenarios. The basic concept behind Vision-Based Navigation (VBN) is to find matches between a set of features in real-time images captured by the imaging sensor on the UAV and database images. A scale- and rotation-invariant image matching algorithm is a key element of VBN for aerial vehicles. Matches between the geo-referenced database images and the newly captured real-time images are determined by employing the fast Speeded Up Robust Features (SURF) algorithm. The SURF algorithm consists mainly of two steps: the first is the detection of points of interest, and the second is the creation of descriptors for each of these points. In this research paper, two major factors are investigated and tested to efficiently create the descriptors for each point of interest. The first factor is the dimension of the descriptor for a given point of interest. The dimension is affected by the number of descriptor sub-regions, which consequently affects the matching time and the accuracy. SURF performance has been investigated and tested using different descriptor dimensions. The second factor is the number of sample points in each sub-region used to build the descriptor of the point of interest. SURF performance has been investigated and tested by changing the number of sample points in each sub-region, which affects the matching accuracy. The impact of these factors on SURF performance, and consequently on UAV VBN, is assessed.
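How the number of sub-regions sets the descriptor dimension can be sketched with a simplified SURF-style descriptor: each of the n_sub x n_sub sub-regions contributes four statistics (sum of dx, sum of dy, sum of |dx|, sum of |dy|), giving 4*n_sub^2 dimensions (64 for the standard 4x4 layout). This is a loose illustration using plain image gradients, not SURF's Haar-wavelet responses, integral images, or orientation assignment.

```python
import numpy as np

def surf_like_descriptor(patch, n_sub=4):
    """SURF-style descriptor from a square patch: 4 stats per sub-region."""
    dy, dx = np.gradient(patch.astype(float))   # stand-in for Haar responses
    h = patch.shape[0] // n_sub                 # assumes a square patch
    feats = []
    for i in range(n_sub):
        for j in range(n_sub):
            sx = dx[i * h:(i + 1) * h, j * h:(j + 1) * h]
            sy = dy[i * h:(i + 1) * h, j * h:(j + 1) * h]
            feats += [sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()]
    v = np.array(feats)
    return v / (np.linalg.norm(v) + 1e-12)      # normalization gives contrast invariance
```

Raising `n_sub` from 4 to 5 grows the descriptor from 64 to 100 dimensions, which is exactly the matching-time/accuracy trade-off the abstract investigates.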
Directory of Open Access Journals (Sweden)
L.H. Yang
1992-01-01
A molecular dynamics algorithm for performing large-scale simulations using the Parallel C Preprocessor (PCP) programming paradigm on the BBN TC2000, a massively parallel computer, is discussed. The algorithm uses a linked-cell data structure to obtain the near neighbors of each atom as time evolves. Each processor is assigned to a geometric domain containing many subcells, and the storage for that domain is private to the processor. Within this scheme, interdomain (i.e., interprocessor) communication is minimized.
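The linked-cell idea can be sketched in serial form: with a cell side of at least the cutoff radius, every neighbor of an atom lies in its own cell or one of the 26 adjacent cells, so the pair search is linear rather than quadratic in the number of atoms. This sketch assumes a cubic periodic box and omits the domain-private storage and interprocessor aspects described above.

```python
import numpy as np
from collections import defaultdict

def build_cells(pos, box, rcut):
    """Assign atoms to cells of side >= rcut so neighbors lie in adjacent cells."""
    ncell = max(1, int(box // rcut))
    side = box / ncell
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // side).astype(int) % ncell)].append(i)
    return cells, ncell

def neighbors(pos, box, rcut):
    """Pairs within rcut (periodic), searching only the 27 surrounding cells."""
    cells, ncell = build_cells(pos, box, rcut)
    pairs = set()
    for (cx, cy, cz), atoms in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                    for i in atoms:
                        for j in cells.get(nb, ()):
                            if i < j:
                                d = pos[i] - pos[j]
                                d -= box * np.round(d / box)  # minimum image
                                if np.dot(d, d) < rcut ** 2:
                                    pairs.add((i, j))
    return pairs
```

In the parallel version described in the abstract, each processor owns a block of cells and only the atoms in the one-cell-thick boundary layer need to be communicated.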
Domain decomposition based iterative methods for nonlinear elliptic finite element problems
Energy Technology Data Exchange (ETDEWEB)
Cai, X.C. [Univ. of Colorado, Boulder, CO (United States)
1994-12-31
The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
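The linear building block of these methods, a one-level damped additive Schwarz iteration, can be sketched on a 1-D Poisson model problem. This is only the linear Schwarz kernel; the nonlinear setting described in the abstract wraps it in an inexact Newton loop, and the subdomain sizes and damping factor here are illustrative choices.

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 1-D finite-difference/FEM Laplacian with mesh size h."""
    return (np.diag(np.full(n, 2.0))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h ** 2

def additive_schwarz(A, b, subdomains, n_sweeps=50):
    """One-level additive Schwarz: sum overlapping local corrections each sweep."""
    x = np.zeros_like(b)
    for _ in range(n_sweeps):
        r = b - A @ x
        dx = np.zeros_like(x)
        for idx in subdomains:                          # overlapping index sets
            Ai = A[np.ix_(idx, idx)]                    # local Dirichlet problem
            dx[idx] += np.linalg.solve(Ai, r[idx])
        x = x + 0.5 * dx                                # damping for the overlap region
    return x
```

The local solves are independent, which is what makes the additive variant attractive for parallelization; the multiplicative variant applies the corrections sequentially and typically converges in fewer sweeps.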
Transonic solutions of a wing/pylon/finned store using hybrid domain decomposition
Newman, James C., III; Baysal, Oktay
1992-01-01
Transonic Euler calculations about a complex multicomponent configuration are presented. The 3D Euler equations are solved utilizing an upwind-biased, alternating-direction-implicit, approximately factored multigrid algorithm. Computational results are compared with experimental data for the finned store in the carriage position.
Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos
Energy Technology Data Exchange (ETDEWEB)
Ragusa, J.C. [CEA Saclay, Direction de l' Energie Nucleaire, Service d' Etudes des Reacteurs et de Modelisations Avancees (DEN/SERMA), 91 - Gif sur Yvette (France)
2003-07-01
The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, using either the Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. The opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism; the mixed dual flux solver and its parallelization are then presented; the third section describes the isotopic depletion solver and its parallelization; and we finally conclude with some future perspectives. Parallel applications are mandatory for fine-mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive-based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared-memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the MINOS solver is only the first step towards fully exploiting the potential of SMP clusters with mixed-mode parallelism. Mixed-mode parallelism can be achieved by combining message passing between clusters with OpenMP implicit parallelism within a cluster.
Directory of Open Access Journals (Sweden)
Pan Shao
2016-03-01
This study presents a novel approach for unsupervised change detection in multitemporal remotely sensed images. The method addresses the analysis of the difference image by proposing a novel and robust semi-supervised fuzzy C-means (RSFCM) clustering algorithm. The advantage of the RSFCM is that it further introduces pseudolabels from the difference image, compared with existing change detection methods, which mainly use difference intensity levels and spatial context. First, the patterns with a high probability of belonging to the changed or unchanged class are identified by selectively thresholding the difference image histogram. Second, the pseudolabels of these nearly certain pixel patterns are jointly exploited with the intensity levels and spatial information in the properly defined RSFCM classifier in order to discriminate the changed pixels from the unchanged pixels. Specifically, labeling knowledge is used to guide the RSFCM clustering process to enhance the change information and obtain a more accurate membership; information on spatial context helps to lower the effect of noise and outliers by modifying the membership. RSFCM can detect more changes and provide noise immunity by the synergistic exploitation of pseudolabels and spatial context. The two main contributions of this study are as follows: (1) it proposes the idea of combining the three information types from the difference image, namely, (a) intensity levels, (b) labels, and (c) spatial context; and (2) it develops the novel RSFCM algorithm for image segmentation and forms the proposed change detection framework. The proposed method is effective and efficient for change detection, as confirmed by the six experimental results of this study.
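The clustering core that RSFCM extends is plain fuzzy C-means, which alternates membership and centroid updates on the difference-image intensities. The sketch below shows only that unmodified FCM baseline; the pseudolabel guidance and spatial-context terms that distinguish RSFCM are omitted.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, rng=0):
    """Plain FCM: alternate membership and centroid updates (no spatial term)."""
    rng = np.random.default_rng(rng)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        W = U ** m                                  # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = d ** (-p) / np.sum(d ** (-p), axis=1, keepdims=True)
    return U, centers
```

In RSFCM, the membership update would additionally be biased toward the pseudolabels of the nearly certain pixels and smoothed by the memberships of neighboring pixels.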
A Robust Wireless Sensor Network Localization Algorithm
Institute of Scientific and Technical Information of China (English)
宋伟; 樊孝明; 王玫
2012-01-01
To address the non-line-of-sight (NLOS) errors that degrade accuracy in wireless sensor network localization systems, a robust localization algorithm, Combination Median Least Squares (CMLS), is proposed. The sensitivity of least-squares multilateration to outliers and the robustness of the least-absolute-deviation method are analyzed, and the proposed algorithm avoids the computational difficulty of the least-absolute-deviation method while still suppressing NLOS errors. Simulation results show the good performance of the proposed algorithm when a few non-line-of-sight errors exist. Furthermore, the algorithm is independent of the physical-layer ranging signal and does not need any prior information about the statistical properties of the measurement errors or a characterization of the environment where the sensor nodes are deployed, so it has practical value.
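The robustness contrast the abstract relies on can be seen in a self-contained sketch: ordinary least squares is skewed by a single NLOS-like outlier, while an L1 (least-absolute-deviation) fit, here computed by iteratively reweighted least squares, largely ignores it. This illustrates the underlying principle only, not the CMLS algorithm itself, and uses a 1-D line fit rather than multilateration.

```python
import numpy as np

def least_squares(X, y):
    """Ordinary LS fit of y = a*x + b; returns (a, b)."""
    A = np.column_stack([X, np.ones(len(X))])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def least_absolute(X, y, n_iter=50, eps=1e-6):
    """L1 fit via iteratively reweighted least squares (IRLS)."""
    A = np.column_stack([X, np.ones(len(X))])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]      # start from the LS solution
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - A @ beta), eps)   # downweight large residuals
        beta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return beta
```

The same effect is what makes median-based and L1-based estimators attractive under NLOS bias, which is always positive and can be large.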
Jing, Yu; Wang, Yaxuan; Liu, Jianxin; Liu, Zhaoxia
2015-08-01
Edge detection is a crucial method for locating and estimating the quantity of oil slicks when oil spills at sea. In this paper, we present a robust active contour edge detection algorithm for oil spill remote sensing images. In the proposed algorithm, we define a local Gaussian data fitting energy term with spatially varying means and variances, and this data fitting energy term is introduced into a global minimization active contour (GMAC) framework. The energy function is minimized quickly via a dual formulation of the weighted total variation norm. The proposed algorithm avoids local minima, does not require the definition of an initial contour, and is robust to the weak boundaries, high noise, and severe intensity inhomogeneity existing in oil slick remote sensing images. Furthermore, edge detection of the oil slick and correction of intensity inhomogeneity are achieved simultaneously via the proposed algorithm. The experimental results show a superior performance of the proposed algorithm over state-of-the-art edge detection algorithms. In addition, the proposed algorithm can also deal with special images in which the object and background have the same intensity means but different variances.
A robust SEM auto-focus algorithm using multiple band-pass filters
Harada, Minoru; Obara, Kenji; Nakamae, Koji
2017-01-01
An auto-focus algorithm using multiple band-pass filters for a scanning electron microscope (SEM) is proposed. To acquire sharp images of various kinds of defects in SEM defect observation in semiconductor manufacturing, the auto-focus process must be robust. A method for designing a band-pass filter for calculating the ‘focus measure’ (a key parameter of the auto-focus process) is proposed. To achieve an optimal frequency response for various images, multiple band-pass filters are introduced. In the proposed method, two series of focus measures are calculated using the multiple band-pass filters independently, and one series is selected according to its reliability. The signal-to-noise ratio of an image required for acceptable auto-focus precision is determined by simulation using pseudo-images. In an experiment applying the proposed method to real images, the auto-focus success rate is improved from 79.4% to 95.6%.
Random weights, robust lattice rules and the geometry of the cbc$r$c algorithm
Dick, Josef
2011-01-01
In this paper we study lattice rules, which are cubature formulae for approximating integrands over the unit cube $[0,1]^s$ from a weighted reproducing kernel Hilbert space. We assume that the weights are independent random variables with a given mean and variance, for two reasons stemming from practical applications: (i) it is usually not known in practice how to choose the weights, and by assuming that the weights are random variables we obtain constructions of lattice rules that are robust with respect to the weights, which, to some extent, removes the necessity to choose the weights carefully; (ii) in practice it is convenient to use the same lattice rule for many different integrands, and since the best choice of weights may vary from integrand to integrand, treating the weights as random variables does justice to how lattice rules are used in applications. We also study a generalized version which uses $r$ constraints, which we call the cbc$r$c (component-by-component with $r$ constraints) algorithm. We show that...
A robust algorithm for estimation of depth map for 3D shape recovery
Malik, Aamir Saeed; Choi, Tae-Sun
2006-02-01
Three-dimensional shape recovery from one or multiple observations is a challenging problem in computer vision. In this paper, we present a new focus measure for the calculation of a depth map. That depth map can further be used in techniques and algorithms leading to the recovery of the three-dimensional structure of an object, which is required in many high-level vision applications. The presented focus measure has shown robustness in the presence of noise as compared with earlier focus measures. This new focus measure is based on an optical transfer function implemented using the discrete cosine transform, and its results are compared with earlier focus measures, including the sum of modified Laplacian (SML) and Tenenbaum focus measures. With this new focus measure, the results without any noise are almost similar in nature to those of the earlier focus measures; however, a drastic improvement over the others is observed in the presence of noise. The proposed focus measure is applied to a test image, to a sequence of 97 simulated cone images, and to a sequence of 97 real cone images. Gaussian noise was added to the images, as arises in practice from factors such as electronic circuit noise and sensor noise under poor illumination and/or high temperature.
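The general idea of a transform-domain focus measure can be sketched as follows: sharper images concentrate more energy in the mid and high DCT frequencies, so summing squared coefficients outside a low-frequency block yields a sharpness score. The particular band boundary (`keep`) below is an invented illustration, not the paper's optical-transfer-function-based measure.

```python
import numpy as np
from scipy.fft import dctn

def dct_focus_measure(img, keep=0.25):
    """Sharpness score: energy in mid/high DCT frequencies (illustrative band)."""
    C = dctn(img.astype(float), norm="ortho")
    h, w = C.shape
    k = int(keep * min(h, w))
    C[:k, :k] = 0.0              # discard the low-frequency (DC/illumination) block
    return float(np.sum(C ** 2))
```

In shape-from-focus, such a measure is evaluated per pixel neighborhood across the image stack, and the lens position maximizing it gives the depth estimate at that pixel.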
Energy Technology Data Exchange (ETDEWEB)
Stathopoulos, A.; Fischer, C.F. [Vanderbilt Univ., Nashville, TN (United States); Saad, Y.
1994-12-31
The solution of the large, sparse, symmetric eigenvalue problem Ax = λx is central to many scientific applications. Among the many iterative methods that attempt to solve this problem, the Lanczos and the Generalized Davidson (GD) methods are the most widely used. The Lanczos method builds an orthogonal basis for the Krylov subspace, from which the required eigenvectors are approximated through a Rayleigh-Ritz procedure. Each Lanczos iteration is economical to compute, but the number of iterations may grow significantly for difficult problems. The GD method can be considered a preconditioned version of Lanczos. In each step the Rayleigh-Ritz procedure is solved and explicit orthogonalization of the preconditioned residual (M - λI)^{-1}(A - λI)x is performed. Therefore, the GD method attempts to improve convergence and robustness at the expense of a more complicated step.
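The Lanczos process described above can be sketched in a few lines: it builds an orthonormal basis V for the Krylov subspace and a tridiagonal matrix T whose eigenvalues (Ritz values) approximate the extreme eigenvalues of A. Full reorthogonalization is included here for numerical stability; production codes use cheaper selective schemes.

```python
import numpy as np

def lanczos(A, v0, k):
    """k-step Lanczos: orthonormal basis V and tridiagonal T, with full reorth."""
    n = len(v0)
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    v = v0 / np.linalg.norm(v0)
    for j in range(k):
        V[:, j] = v
        w = A @ v
        alpha[j] = v @ w                         # diagonal entry (Rayleigh quotient)
        w -= alpha[j] * v
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]       # three-term recurrence
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            v = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T
```

The GD method replaces the fixed three-term recurrence with an explicitly orthogonalized, preconditioned residual at each step, trading a more expensive iteration for faster convergence on difficult spectra.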
M. Genseberger (Menno)
2008-01-01
Most computational work in Jacobi-Davidson [9], an iterative method for large-scale eigenvalue problems, is due to a so-called correction equation. In [5] a strategy for the approximate solution of the correction equation was proposed. This strategy is based on a domain decomposition
Torque Optimization Algorithm for SRM Drives Using a Robust Predictive Strategy
DEFF Research Database (Denmark)
Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika
2010-01-01
This paper presents a new torque optimization algorithm to maximize the torque generated by an SRM drive. The new algorithm uses a predictive strategy. The behaviour of the SRM demands a sequential algorithm. To preserve the advantages of SRM drives (a simple and rugged topology), the new algorithm...
Directory of Open Access Journals (Sweden)
Xue Li
2015-01-01
State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noise have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noise. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in terms of robustness and reliability; and (3) guidelines for requirements on battery system identification and sensor selection.
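The closed-loop observer idea behind these estimators can be sketched with a one-state model: coulomb counting propagates the SOC, and a PI correction on the voltage prediction error pulls the estimate back when the initial SOC or the model is wrong. The linear OCV curve, internal resistance, and gains below are invented for illustration and are not from the paper.

```python
import numpy as np

def soc_pi_observer(current, voltage, dt, Q, kp=0.5, ki=0.1, soc0=0.5):
    """Coulomb counting + PI correction on the terminal-voltage error.
    Assumes a hypothetical linear OCV model ocv(soc) = 3.0 + 1.2*soc and R = 0.01 ohm,
    with positive current meaning discharge."""
    R = 0.01
    soc, integ, est = soc0, 0.0, []
    for i_k, v_k in zip(current, voltage):
        v_pred = 3.0 + 1.2 * soc - R * i_k      # predicted terminal voltage
        err = v_k - v_pred                       # innovation drives the correction
        integ += ki * err * dt
        soc += (-i_k * dt / Q) + kp * err * dt + integ * dt
        soc = min(max(soc, 0.0), 1.0)
        est.append(soc)
    return np.array(est)
```

An extended Kalman filter replaces the fixed PI gains with a time-varying gain computed from the model and noise covariances, which is exactly where the robustness trade-offs studied in the paper arise.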
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2015-01-01
Developers of image processing routines rely on benchmark data sets for qualitative comparisons of new image analysis algorithms and pipelines. Such data sets need to include artifacts that occlude and distort the information to be extracted from an image. Robustness, the quality of an algorithm with respect to the amount of distortion, is often important. However, using available benchmark data sets, an evaluation of illumination robustness is difficult or even impossible due to missing ground truth data about object margins and classes and missing information about the distortion. We present a new framework for robustness evaluation. The key aspect is an image benchmark containing 9 object classes and the required ground truth for segmentation and classification. Varying levels of shading and background noise are integrated to distort the data set. To quantify illumination robustness, we provide measures of image quality, segmentation and classification success, and robustness. We set a high value on giving users easy access to the new benchmark; therefore, all routines are provided within a software package, but they can also easily be replaced to emphasize other aspects.
Institute of Scientific and Technical Information of China (English)
褚菲; 马小平; 王福利; 贾润达
2015-01-01
A novel approach for constructing a robust Mamdani fuzzy system was proposed, consisting of an efficient robust estimator (partial robust M-regression, PRM) in the parameter-learning phase of the initial fuzzy system, and an improved subtractive clustering algorithm in the fuzzy-rule-selection phase. The weights obtained in PRM, which give protection against noise and outliers, were incorporated into the potential measure of the subtractive clustering algorithm to enhance the robustness of the fuzzy-rule clustering process, and a compact Mamdani-type fuzzy system was established after the parameters in the consequent parts of the rules were re-estimated by partial least squares (PLS). The main characteristics of the new approach are its simplicity and its ability to construct a fuzzy system quickly and robustly. Simulation and experiment results show that the proposed approach achieves satisfactory results in various kinds of data domains with noise and outliers. Compared with D-SVD and ARRBFN, the proposed approach yields far fewer rules and lower RMSE values.
The Use of Domain Decomposition in Accelerating the Convergence of Quasihyperbolic Systems
Parent, Bernard; Sislian, Jean P.
2002-06-01
This paper proposes an alternate form of the active-domain method [K. Nakahashi and E. Saitoh, AIAA J. 35, 1280 (1997)] that is applicable to streamwise-separated flows. Named the "marching window," the algorithm consists of performing pseudo-time iterations on a minimal-width subdomain composed of a sequence of cross-stream planes of nodes. The upstream boundary of the subdomain is positioned such that all nodes upstream exhibit a residual smaller than the user-specified convergence threshold. The advancement of the downstream boundary follows the advancement of the upstream boundary, except in zones of significant streamwise ellipticity, where a streamwise-ellipticity sensor ensures its continuous progress. Compared with the standard pseudo-time-marching approach, the marching window decreases the work required for convergence by up to 24 times for flows with little streamwise ellipticity and by up to eight times for flows with large streamwise-separated regions. Storage is reduced by up to six times by not allocating memory to nodes outside the computational subdomain. The marching window satisfies the same convergence criterion as standard pseudo-time-stepping methods, hence resulting in the same converged solution within the tolerance of the user-specified convergence threshold. The algorithm is not restricted to a particular discretization stencil or pseudo-time-stepping scheme and is used here with the Yee-Roe scheme and block-implicit approximate factorization, solving the Favre-averaged Navier-Stokes (FANS) equations closed by the Wilcox k-ω turbulence model. The eigenstructure of the FANS equations is also presented.
Directory of Open Access Journals (Sweden)
Fei Song
2014-01-01
Full Text Available This paper proposes a robust fault-tolerant control algorithm for satellite stabilization based on an active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurements of the reaction flywheels are used for fault detection. If a reaction flywheel fault is detected, the faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite keeps working safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm to improve the performance of the attitude control system. A series of simulation results demonstrates the superior performance of the proposed robust fault-tolerant control algorithm.
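The extended-state-observer component of such a scheme is easy to illustrate in isolation. The sketch below is a linear ESO for a toy first-order plant with a constant lumped disturbance; the plant model, gains, and bandwidth value are illustrative assumptions, not the paper's satellite dynamics:

```python
def simulate_eso(omega=20.0, dt=1e-3, steps=2000, disturbance=0.5):
    """Linear extended state observer (ESO) for the plant x' = f + u.

    The ESO estimates the state x (as z1) and the lumped disturbance f
    (as z2) from the measurement y = x alone, using the common
    bandwidth parameterization l1 = 2*omega, l2 = omega**2.
    """
    l1, l2 = 2.0 * omega, omega ** 2
    x = 0.0            # true plant state
    z1, z2 = 0.0, 0.0  # observer estimates of x and of the disturbance
    u = 0.0            # control input, held at zero for this demo
    for _ in range(steps):
        e = x - z1                   # innovation from measurement y = x
        z1 += dt * (z2 + u + l1 * e)
        z2 += dt * (l2 * e)
        x += dt * (disturbance + u)  # true dynamics, constant disturbance
    return z1, z2

z1, z2 = simulate_eso()
# z2 converges to the unknown disturbance (0.5) and z1 tracks x,
# which is what lets the controller compensate unmodeled dynamics
```

Raising `omega` speeds up the estimate at the cost of noise sensitivity, which is exactly the trade-off the paper tunes with the artificial bee colony search.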
A flexible and robust neural network IASI-NH3 retrieval algorithm
Whitburn, S.; Van Damme, M.; Clarisse, L.; Bauduin, S.; Heald, C. L.; Hadji-Lazaro, J.; Hurtmans, D.; Zondlo, M. A.; Clerbaux, C.; Coheur, P.-F.
2016-06-01
In this paper, we describe a new flexible and robust NH3 retrieval algorithm for measurements of the Infrared Atmospheric Sounding Interferometer (IASI). The method is based on the calculation of a hyperspectral range index (HRI) and its subsequent conversion to NH3 columns via a neural network. It is an extension of the method presented in Van Damme et al. (2014a), which used lookup tables (LUTs) for the radiance-concentration conversion. The new method inherits the advantages of the LUT-based method while providing several significant improvements: (1) complete temperature and humidity vertical profiles can be accounted for; (2) third-party NH3 vertical profile information can be used; (3) the reported positive biases of the LUT retrieval are reduced; and (4) a full measurement uncertainty characterization is provided. A running theme in this study, related to item (2), is the importance of the assumed vertical NH3 profile. We demonstrate the advantages of allowing variable profile shapes in the retrieval. As an example, we analyze how the retrievals change when all NH3 is assumed to be confined to the boundary layer. We analyze different averaging procedures in use for NH3 in the literature, introduced to cope with the variable measurement sensitivity, and derive globally averaged distributions for the year 2013. A comparison with a GEOS-Chem modeled global distribution is also presented, showing generally good correspondence (within ±3 × 1015 molecules cm-2) over most of the Northern Hemisphere. However, IASI finds mean columns about 1-1.5 × 1016 molecules cm-2 (˜50-60%) lower than GEOS-Chem for India and the North China Plain.
Zhou, Mingxing; Liu, Jing
2017-02-01
Designing robust networks has attracted increasing attention in recent years. Most existing work focuses on improving the robustness of networks against a specific type of attack. However, networks that are robust against one type of attack may not be robust against another, and in real-world situations different types of attacks may happen simultaneously. Therefore, we use Pearson's correlation coefficient to analyze the correlation between different types of attacks, model the robustness measures against attack types that are negatively correlated as objectives, and formulate the problem of optimizing the robustness of networks against multiple malicious attacks as a multiobjective optimization problem. Furthermore, to solve this problem effectively, we propose a two-phase multiobjective evolutionary algorithm, labeled MOEA-RSFMMA. In MOEA-RSFMMA, a single-objective sampling phase is first used to generate a good initial population for the subsequent two-objective optimization phase. This two-phase optimizing pattern balances the computational cost of the two objectives and improves the search efficiency. In the experiments, both synthetic scale-free networks and real-world networks are used to validate the performance of MOEA-RSFMMA, and both local and global characteristics of networks in different parts of the obtained Pareto fronts are studied. The results show that the networks in different parts of the Pareto fronts reflect different properties, providing various choices for decision makers.
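One common choice of robustness objective in such studies is the average fraction of nodes remaining in the largest connected component as nodes are removed in decreasing-degree order. A minimal sketch of that measure (the example graph and the smallest-id tie-breaking rule are illustrative assumptions, not details from the paper):

```python
from collections import deque

def largest_component(nodes, adj):
    """Size of the largest connected component via BFS."""
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best

def robustness_R(adj):
    """Average largest-component fraction under a degree-targeted attack.

    R = (1/N) * sum over removal steps of s(Q), where s(Q) is the
    fraction of the original N nodes in the largest component after
    removing the Q highest-degree nodes (ties broken by node id).
    """
    n = len(adj)
    alive = set(adj)
    total = 0.0
    for _ in range(n):
        # recompute degrees on the surviving subgraph, remove the hub
        target = max(alive, key=lambda u: (sum(v in alive for v in adj[u]), -u))
        alive.remove(target)
        total += largest_component(alive, adj) / n
    return total / n

# star graph: one hub connected to four leaves
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(robustness_R(star))  # 0.16: the attack removes the hub first
```

A random-failure variant of the same measure (remove nodes uniformly at random) typically correlates negatively with this one on hub-dominated graphs, which is the tension the multiobjective formulation exploits.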
Energy Technology Data Exchange (ETDEWEB)
Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)
2014-02-18
This paper presents a robust frequency-diversity-based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. For each selected band, a signal is reconstructed in which a defect is considered present when all frequency components have a uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied to austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples; consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is robust, whereas SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.
Bai, Mingsian R; Tung, Chih-Wei; Lee, Chih-Chung
2005-05-01
An optimal design technique for loudspeaker arrays for cross-talk cancellation, with application to three-dimensional audio, is presented. An array focusing scheme is derived on the basis of inverse propagation, which relates the transducers to a set of chosen control points. Tikhonov regularization is employed in designing the inverse cancellation filters. An extensive analysis is conducted to explore cancellation performance and robustness issues. To best compromise between the performance and robustness of the cross-talk cancellation system, optimal configurations are obtained with the aid of the Taguchi method and the genetic algorithm (GA). The proposed systems are further justified by physical as well as subjective experiments. The results reveal that a large number of loudspeakers, a closely spaced configuration, and an optimal control-point design all contribute to the robustness of cross-talk cancellation systems (CCS) against head misalignment.
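At each frequency, Tikhonov-regularized inverse filter design reduces to a regularized pseudo-inverse of the plant matrix. A single-frequency sketch (the 2×2 plant matrix and the β values here are illustrative assumptions):

```python
import numpy as np

def tikhonov_inverse(G, beta):
    """Regularized inverse filter matrix H = (G^H G + beta*I)^(-1) G^H.

    G maps loudspeaker signals to control-point pressures at one
    frequency; beta trades cancellation depth for robustness (larger
    beta limits filter gain and sensitivity to plant errors).
    """
    n = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + beta * np.eye(n), G.conj().T)

# illustrative 2x2 plant at one frequency (complex transfer functions)
G = np.array([[1.0 + 0.2j, 0.6 - 0.1j],
              [0.5 + 0.3j, 1.1 + 0.0j]])
H = tikhonov_inverse(G, beta=1e-6)
# with tiny beta, H @ G is close to the identity (near-perfect
# cross-talk cancellation); increasing beta yields a lower-gain,
# more robust filter
```

Sweeping β and the loudspeaker geometry over such single-frequency models is the kind of design-space exploration the Taguchi/GA search in the paper automates.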
Ying, Jinyong
2016-01-01
The size-modified Poisson-Boltzmann equation (SMPBE) is one important variant of the popular dielectric model, the Poisson-Boltzmann equation (PBE), to reflect ionic size effects in the prediction of electrostatics for a biomolecule in an ionic solvent. In this paper, a new SMPBE hybrid solver is developed using a solution decomposition, Schwarz overlapping domain decomposition, finite elements, and finite differences. It is then programmed as a software package in C, Fortran, and Python based on the state-of-the-art finite element library DOLFIN from the FEniCS project. This software package is well validated on a Born ball model with an analytical solution and on a dipole model with known physical properties. Numerical results on six proteins with different net charges demonstrate its high performance. Finally, this new SMPBE hybrid solver is shown to be numerically stable and convergent in the calculation of electrostatic solvation free energy for 216 biomolecules and binding free energy for a DNA-drug com...
Stupfel, Bruno; Lecouvez, Matthieu
2016-10-01
For the solution of the time-harmonic electromagnetic scattering problem by inhomogeneous 3-D objects, a one-way domain decomposition method (DDM) is considered: the computational domain is partitioned into concentric subdomains, on the interfaces of which Robin-type transmission conditions (TCs) are prescribed; an integral representation of the electromagnetic fields on the outer boundary constitutes an exact radiation condition. The global system obtained after discretization of the finite element (FE) formulations is solved via a Krylov subspace iterative method (GMRES). It is preconditioned in such a way that, essentially, only the solution of the FE subsystems in each subdomain is required. This is made possible by a computationally cheap H(curl)-H(div) transformation performed on the interfaces that separate the two outermost subdomains. The eigenvalues of the preconditioned matrix of the system are bounded by two, and optimized values of the coefficients involved in the local TCs on the interfaces are determined so as to maximize the minimum eigenvalue. Numerical experiments are presented that illustrate the numerical accuracy of this technique and its fast convergence, and that justify the choices made for the optimized coefficients.
Yücel, Abdulkadir C.
2013-07-01
Reliable and effective wireless communication and tracking systems in mine environments are key to ensuring miners' productivity and safety during routine operations and catastrophic events. The design of such systems greatly benefits from simulation tools capable of analyzing electromagnetic (EM) wave propagation in long mine tunnels and large mine galleries. Existing simulation tools for analyzing EM wave propagation in such environments employ modal decompositions (Emslie et al., IEEE Trans. Antennas Propag., 23, 192-205, 1975), ray-tracing techniques (Zhang, IEEE Trans. Vehic. Tech., 5, 1308-1314, 2003), and full wave methods. Modal approaches and ray-tracing techniques cannot accurately account for the presence of miners and their equipment, as well as wall roughness (especially when the latter is comparable to the wavelength). Full-wave methods do not suffer from such restrictions but require prohibitively large computational resources. To partially alleviate this computational burden, a 2D integral equation-based domain decomposition technique has recently been proposed (Bakir et al., in Proc. IEEE Int. Symp. APS, 1-2, 8-14 July 2012). © 2013 IEEE.
More accurate recombination prediction in HIV-1 using a robust decoding algorithm for HMMs
Directory of Open Access Journals (Sweden)
Brown Daniel G
2011-05-01
Full Text Available Abstract Background Identifying recombinations in HIV is important for studying the epidemiology of the virus and aids in the design of potential vaccines and treatments. The previous widely-used tool for this task uses the Viterbi algorithm in a hidden Markov model to model recombinant sequences. Results We apply a new decoding algorithm for this HMM that improves prediction accuracy. Exactly locating breakpoints is usually impossible, since different subtypes are highly conserved in some sequence regions. Our algorithm identifies these sites up to a certain error tolerance. Our new algorithm is more accurate in predicting the location of recombination breakpoints. Our implementation of the algorithm is available at http://www.cs.uwaterloo.ca/~jmtruszk/jphmm_balls.tar.gz. Conclusions By explicitly accounting for uncertainty in breakpoint positions, our algorithm offers more reliable predictions of recombination breakpoints in HIV-1. We also document a new domain of use for our new decoding approach in HMMs.
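For reference, the Viterbi decoding that the earlier tool relies on is a short dynamic program; a generic sketch with toy parameters (not the jpHMM subtype model used for HIV-1):

```python
def viterbi(obs, start, trans, emit):
    """Most probable hidden-state path of an HMM (max-product DP).

    start[i]: initial probability of state i
    trans[i][j]: transition probability from state i to state j
    emit[i][o]: probability that state i emits symbol o
    """
    n = len(start)
    v = [start[i] * emit[i][obs[0]] for i in range(n)]  # best-path probs
    back = []                                           # backpointers
    for o in obs[1:]:
        ptr, nv = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: v[i] * trans[i][j])
            ptr.append(best)
            nv.append(v[best] * trans[best][j] * emit[j][o])
        back.append(ptr)
        v = nv
    path = [max(range(n), key=lambda i: v[i])]  # best final state
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]  # two symbols: 0 and 1
print(viterbi([0, 0, 1], start, trans, emit))  # [0, 0, 1]
```

The paper's point is that this single best path is overconfident about breakpoint positions in conserved regions; their decoder instead reports state assignments that are reliable up to a positional tolerance.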
Afik, Eldad
2013-01-01
Three-dimensional particle tracking is an essential tool for studying dynamics under the microscope, such as cellular trafficking, bacterial taxis, and fluid dynamics in microfluidic devices. The 3D position of a fluorescent particle can be determined using 2D imaging alone, by measuring the diffraction rings generated by an out-of-focus particle imaged on a single camera. Here I present a ring-detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from particle proximity. It is capable of real-time analysis thanks to its high performance and low memory footprint. Many of the algorithmic concepts introduced can be advantageous in other cases, particularly for sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. The image analysis algorithm, which is an offspring of the full 3D circle Hough transform, addresses the need to efficiently trace the trajectories of several particles concurrent...
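The circle Hough transform that the algorithm descends from can be sketched in its simplest fixed-radius 2D form (synthetic edge points; the paper's detector adds the robustness and performance refinements described above):

```python
import numpy as np

def hough_circle_center(points, radius, shape, n_angles=360):
    """Vote for circle centers given edge points and a known radius.

    Each edge point votes for all centers lying at `radius` from it;
    the accumulator peak is the most supported center.
    """
    acc = np.zeros(shape, dtype=int)
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for x, y in points:
        cx = np.rint(x - radius * np.cos(theta)).astype(int)
        cy = np.rint(y - radius * np.sin(theta)).astype(int)
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)  # unbuffered accumulation
    return np.unravel_index(np.argmax(acc), shape)

# synthetic ring of edge points around (20, 25) with radius 8
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
pts = np.column_stack((20 + 8 * np.cos(t), 25 + 8 * np.sin(t)))
center = hough_circle_center(pts, radius=8, shape=(50, 50))
# accumulator peak lands at the true center (20, 25)
```

In practice the radius is a third unknown encoding the defocus depth, which is why the full method is a 3D (x, y, r) Hough transform.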
Ma, Junqing; Song, Aiguo; Xiao, Jing
2012-10-29
Coupling errors are major threats to the accuracy of 3-axis force sensors. The design of decoupling algorithms is a challenging topic due to the uncertainty of coupling errors. Conventional nonlinear decoupling algorithms based on a standard neural network (NN) are sometimes unstable due to overfitting. In order to avoid overfitting and minimize the negative effect of random noise and gross errors in calibration data, we propose a novel nonlinear static decoupling algorithm based on the establishment of a coupling error model. Instead of regarding the whole system as a black box, as in conventional algorithms, the coupling error model is designed from the principle of coupling errors, in which the nonlinear relationships between forces and coupling errors in each dimension are calculated separately. Six separate Support Vector Regressions (SVRs) are employed for their ability to perform adaptive, nonlinear data fitting. The decoupling performance of the proposed algorithm is compared with the conventional method using data obtained from the static calibration experiment of a 3-axis force sensor. Experimental results show that the proposed decoupling algorithm gives more robust performance with high efficiency and decoupling accuracy, and can thus potentially be applied to the decoupling of 3-axis force sensors.
Robust 3-D Algorithm for Flare Planning and Guidance for Impaired Aircraft Project
National Aeronautics and Space Administration — Development of a robust nonlinear guidance law for planning and executing the flare-touchdown maneuver for impaired aircraft under adverse wind conditions is...
Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A
2015-10-26
Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
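The core refinement loop — re-keying each atom by its current rank plus its sorted neighbor ranks, then re-sorting until the partition stabilizes — can be sketched on a plain graph. This is a generic illustration, not the RDKit implementation, and it omits the chirality and symmetry-breaking invariants the paper introduces:

```python
def canonical_ranks(adj):
    """Canonical node ranks by iterative stable refinement.

    adj: list of neighbor lists. Starting from the degree invariant,
    each node is re-keyed by (own rank, sorted neighbor ranks) and
    ranks are reassigned by sorted key order until the partition
    stabilizes. Automorphic nodes keep equal ranks.
    """
    ranks = [len(nbrs) for nbrs in adj]
    while True:
        keys = [(ranks[i], tuple(sorted(ranks[j] for j in adj[i])))
                for i in range(len(adj))]
        uniq = sorted(set(keys))
        new = [uniq.index(k) for k in keys]
        if new == ranks:
            return ranks
        ranks = new

def canonical_edges(adj):
    """Edge set rewritten with canonical labels (labeling-invariant)."""
    r = canonical_ranks(adj)
    return sorted({tuple(sorted((r[u], r[v])))
                   for u in range(len(adj)) for v in adj[u]})

# a small tree and a relabeled copy yield the same canonical edges
g1 = [[1], [0, 2, 4], [1, 3], [2], [1]]
g2 = [[2, 4, 3], [4], [0], [0], [0, 1]]  # same tree, nodes permuted
print(canonical_edges(g1) == canonical_edges(g2))  # True
```

Because the new key always includes the old rank, the partition only ever gets finer, so the loop terminates; breaking the remaining ties among symmetric atoms deterministically is exactly where the paper's new invariants come in.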
Directory of Open Access Journals (Sweden)
Wing Kam Fung
2010-02-01
Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependant CATT. Two robust tests, MAX3 and the genetic model selection (GMS, were recently proposed. Their asymptotic null distributions are often obtained by Monte-Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data of 17 most significant SNPs reported in four genome-wide association studies.
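The CATT itself is straightforward to compute; a sketch for a 2×3 case-control genotype table under additive scores (this is only the base statistic, not the MAX3 or GMS machinery the article analyzes):

```python
import math

def catt(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend test statistic Z for a 2x3 table.

    cases[i], controls[i]: genotype counts; scores: per-genotype dose
    ((0, 1, 2) is the additive model). Returns the trend statistic Z,
    asymptotically standard normal under the null of no association.
    """
    R, S = sum(cases), sum(controls)
    N = R + S
    n = [c + d for c, d in zip(cases, controls)]
    T = sum(t * (S * r - R * s) for t, r, s in zip(scores, cases, controls))
    k = len(scores)
    var = (R * S / N) * (
        sum(t * t * ni * (N - ni) for t, ni in zip(scores, n))
        - 2 * sum(scores[i] * scores[j] * n[i] * n[j]
                  for i in range(k) for j in range(i + 1, k)))
    return T / math.sqrt(var)

# proportional genotype frequencies in cases and controls: no trend
print(catt([10, 20, 30], [20, 40, 60]))  # 0.0
```

MAX3 takes the maximum of this statistic under recessive, additive, and dominant scores — which is why the correlation structure between the three components, studied in the article, determines its null distribution.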
Rey, Valentine; Gosselet, Pierre
2013-01-01
This paper deals with the estimation of the distance between the solution of a static linear mechanics problem and its finite element approximation computed with a non-overlapping domain decomposition method (FETI or BDD). We propose a new strict upper bound on the error which separates the contribution of the iterative solver from the contribution of the discretization. Numerical assessments show that the bound is sharp and enables us to define an objective stopping criterion for the iterative solver.
Jones, Adam; Utyuzhnikov, Sergey
2017-08-01
Turbulent flow in a ribbed channel is studied using an efficient near-wall domain decomposition (NDD) method. The NDD approach is formulated by splitting the computational domain into an inner and outer region, with an interface boundary between the two. The computational mesh covers the outer region, and the flow in this region is solved using the open-source CFD code Code_Saturne with special boundary conditions on the interface boundary, called interface boundary conditions (IBCs). The IBCs are of Robin type and incorporate the effect of the inner region on the flow in the outer region. IBCs are formulated in terms of the distance from the interface boundary to the wall in the inner region. It is demonstrated that up to 90% of the region between the ribs in the ribbed passage can be removed from the computational mesh with an error on the friction factor within 2.5%. In addition, computations with NDD are faster than computations based on low Reynolds number (LRN) models by a factor of five. Different rib heights can be studied with the same mesh in the outer region without affecting the accuracy of the friction factor. This is tested with six different rib heights in an example of a design optimisation study. It is found that the friction factors computed with NDD are almost identical to the fully-resolved results. When used for inverse problems, NDD is considerably more efficient than LRN computations because only one computation needs to be performed and only one mesh needs to be generated.
An efficient and robust algorithm for parallel groupwise registration of bone surfaces.
van de Giessen, Martijn; Vos, Frans M; Grimbergen, Cornelis A; van Vliet, Lucas J; Streekstra, Geert J
2012-01-01
In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds, thereby minimizing the total deformation. The registration algorithm alternates between a computationally expensive, but parallelizable, deformation step of the mean shape to each example shape and a very inexpensive step updating the mean shape. The algorithm is evaluated by comparing it to a state-of-the-art registration algorithm. Bone surfaces of wrists, segmented from CT data with a voxel size of 0.3 × 0.3 × 0.3 mm³, serve as an example test set. The negligible bias and registration error of about 0.12 mm for the proposed algorithm are similar to those of the state-of-the-art algorithm used for comparison. However, current point cloud registration algorithms usually have computational and memory costs that increase quadratically with the number of point clouds, whereas the proposed algorithm has linearly increasing costs, allowing the registration of a much larger number of shapes: 48 versus 8 on the hardware used.
A Review of Fast L(1)-Minimization Algorithms for Robust Face Recognition
2010-07-01
processing and optimization communities in the last five years or so. In CS theory ∗This work was partially supported by NSF IIS 08-49292, NSF ECCS 07...good approximate solutions. The estimation error of Homotopy is slightly higher than that of the other four algorithms. 3. In terms of speed, L1LS and...linearly with the sparsity ratio, while the other algorithms are relatively unaffected. Thus, Homotopy is more suitable for scenarios where the unknown
An L1-TV algorithm for robust perspective photometric stereo with spatially-varying lightings
DEFF Research Database (Denmark)
Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis
2015-01-01
We tackle the problem of perspective 3D-reconstruction of Lambertian surfaces through photometric stereo, in the presence of outliers to Lambert's law, depth discontinuities, and unknown spatially-varying lightings. To this purpose, we introduce a robust $L^1$-TV variational formulation of the re...
Directory of Open Access Journals (Sweden)
Amir Houshang Arab Avval
Full Text Available This paper proposes a novel robust blind steganography scheme for embedding an audio signal into the edges of a color image based on a chaotic map and the LSB method, which differs from some existing works. In this paper, we employ the LSB substitution technique ...
An Efficient Encryption Algorithm for P2P Networks Robust Against Man-in-the-Middle Adversary
Directory of Open Access Journals (Sweden)
Roohallah Rastaghi
2012-11-01
Full Text Available Peer-to-peer (P2P) networks have become popular as a new paradigm for information exchange and are being used in many applications such as file sharing, distributed computing, video conferencing, VoIP, and radio and TV broadcasting. This popularity comes with security implications and vulnerabilities that need to be addressed. In particular, due to the direct communication between two end nodes in P2P networks, these networks are potentially vulnerable to man-in-the-middle attacks. In this paper, we propose a new public-key cryptosystem for P2P networks that is robust against a man-in-the-middle adversary. This cryptosystem is based on RSA and the knapsack problem. Our precoding-based algorithm uses the knapsack problem to perform permutation and to pad random data into the message. We show that, compared with other proposed cryptosystems, our algorithm is more efficient and is fully secure against an active adversary.
Energy Technology Data Exchange (ETDEWEB)
Clerc, S
1998-07-01
In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with the improvement of computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite volume schemes based on Godunov's approach are ill-suited to computing low Mach number flows; a modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady-state problem. (author)
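The Schwarz-type alternating iteration mentioned here is easiest to see on a 1D Poisson model problem with two overlapping subdomains (a toy linear sketch; the thesis applies the idea to the nonlinear steady-state systems arising from implicit finite-volume discretizations):

```python
import numpy as np

def solve_subdomain(f, h, left, right):
    """Direct solve of -u'' = f on a subinterval with Dirichlet ends."""
    m = len(f)  # number of interior unknowns
    A = (np.diag(np.full(m, 2.0)) + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1)) / h**2
    b = f.copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

def alternating_schwarz(n=21, iters=100):
    """-u'' = 1 on (0,1), u(0) = u(1) = 0, two overlapping subdomains."""
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    f = np.ones(n)
    a_lo, a_hi = 0, 13     # subdomain A: nodes 0..13
    b_lo, b_hi = 7, n - 1  # subdomain B: nodes 7..20 (overlap 7..13)
    for _ in range(iters):
        # solve on A using the current value at node a_hi as boundary data
        u[a_lo + 1:a_hi] = solve_subdomain(f[a_lo + 1:a_hi], h, u[a_lo], u[a_hi])
        # then on B using the freshly updated value at node b_lo
        u[b_lo + 1:b_hi] = solve_subdomain(f[b_lo + 1:b_hi], h, u[b_lo], u[b_hi])
    return u

u = alternating_schwarz()
x = np.linspace(0.0, 1.0, 21)
# converges to the exact solution x*(1-x)/2 (the 3-point stencil is
# exact for quadratics), with a contraction rate set by the overlap
```

The larger the overlap between the two subdomains, the faster the interface values stop changing, which is the basic convergence mechanism the thesis proves for its Schwarz-type algorithm.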
Robust multi-scale clustering of large DNA microarray datasets with the consensus algorithm
DEFF Research Database (Denmark)
Grotkjær, Thomas; Winther, Ole; Regenberg, Birgitte
2006-01-01
Motivation: Hierarchical and relocation clustering (e.g. K-means and self-organizing maps) have been successful tools in the display and analysis of whole genome DNA microarray expression data. However, the results of hierarchical clustering are sensitive to outliers, and most relocation methods...... analysis by collecting re-occurring clustering patterns in a co-occurrence matrix. The results show that consensus clustering obtained from clustering multiple times with Variational Bayes Mixtures of Gaussians or K-means significantly reduces the classification error rate for a simulated dataset....... The method is flexible and it is possible to find consensus clusters from different clustering algorithms. Thus, the algorithm can be used as a framework to test in a quantitative manner the homogeneity of different clustering algorithms. We compare the method with a number of state-of-the-art clustering...
A Review of Fast l1-Minimization Algorithms for Robust Face Recognition
Yang, Allen Y; Zhou, Zihan; Sastry, S Shankar; Ma, Yi
2010-01-01
l1-minimization refers to finding the minimum l1-norm solution to an underdetermined linear system b=Ax. It has recently received much attention, mainly motivated by the new compressive sensing theory that shows that under quite general conditions the minimum l1-norm solution is also the sparsest solution to the system of linear equations. Although the underlying problem is a linear program, conventional algorithms such as interior-point methods suffer from poor scalability for large-scale real world problems. A number of accelerated algorithms have been recently proposed that take advantage of the special structure of the l1-minimization problem. In this paper, we provide a comprehensive review of five representative approaches, namely, Gradient Projection, Homotopy, Iterative Shrinkage-Thresholding, Proximal Gradient, and Augmented Lagrange Multiplier. The work is intended to fill in a gap in the existing literature to systematically benchmark the performance of these algorithms using a consistent experimen...
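Of the five families reviewed, Iterative Shrinkage-Thresholding is the most compact to sketch; a minimal ISTA for min ½‖Ax−b‖² + λ‖x‖₁ on random toy data (constant step 1/L; accelerated variants such as FISTA change only the step sequence):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=3000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((15, 30))   # underdetermined system
x_true = np.zeros(30)
x_true[[3, 11, 22]] = [1.0, -2.0, 1.5]
b = A @ x_true
x = ista(A, b, lam=0.05)
# x is a sparse approximate solution with small residual; the
# shrinkage step drives most coordinates exactly to zero
```

With the fixed step 1/L the objective is guaranteed non-increasing, but convergence is only O(1/k), which is exactly the slowness the accelerated and homotopy-type methods in the review address.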
Energy Technology Data Exchange (ETDEWEB)
Kim, Jong-Soo; Choe, Gyu-Yeong; Lee, Byoung-Kuk [School of Information and Communication Engineering, Sungkyunkwan University, 300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do 440-746 (Korea, Republic of); Kang, Hyun-Soo [R and D Center, Advanced Drive Technology (ADT) Company, 689-26 Geumjeong-dong, Gunpo-si, Gyeonggi-do 435-862 (Korea, Republic of)
2011-05-15
The low-frequency current ripple in grid-connected fuel cell systems is generated by dc-ac inverter operation, which produces a 60 Hz fundamental component, and has harmful effects on the fuel cell stack itself: it slows cathode surface responses, increases fuel consumption by more than 10%, creates oxygen starvation, reduces the operating lifetime, and can incur nuisance tripping such as an overload condition. For these reasons, low-frequency current ripple makes the fuel cell system unstable and shortens the lifetime of the stack. This paper presents a fast and robust control algorithm to eliminate low-frequency current ripple in grid-connected fuel cell systems. Compared with conventional methods, in the proposed control algorithm the dc link voltage controller is shifted from the dc-dc converter to the dc-ac inverter, so that the dc-ac inverter handles dc link voltage control and output current control simultaneously with the help of a power balancing technique. The results indicate that the proposed algorithm can not only completely eliminate current ripple but also significantly reduce the overshoot or undershoot during transient states without any extra hardware. The validity of the proposed algorithm is verified by computer simulations and by experiments with a 1 kW laboratory prototype. (author)
DEFF Research Database (Denmark)
Dollerup, Niels; Jepsen, Michael S.; Frier, Christian;
2014-01-01
A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium equations and a number of convex non-linear constraints from the yield criteria. The computational robustness has been improved by eliminating a large number of the equilibrium equations a priori, leaving only the statically redundant variables as free optimization variables. The elimination of equilibrium equations is based on an optimized numbering of elements and stress variables following the frontal method approach used in the standard finite element method. The optimized numbering secures sparsity in the formulation. The convex non-linear yield criteria are treated directly in the interior point...
Mostafa, Khaled; Darwish, Ahmed M.
1999-01-01
The problem of cursive script segmentation is an essential one for handwritten character recognition. This is especially true for Arabic text, where cursive is the only mode even for typewritten fonts. In this paper, we present a generalized segmentation approach for handwritten Arabic cursive scripts. The proposed approach is based on the analysis of the upper and lower contours of the word. The algorithm searches for local minima points along the upper contour and local maxima points along the lower contour of the word. These points are then marked as potential letter boundaries (PLBs). A set of rules, based on the nature of Arabic cursive scripts, is then applied to both upper and lower PLB points to eliminate some of the improper ones. A matching process between upper and lower PLBs is then performed in order to obtain the minimum number of non-overlapping PLBs for each word. The output of the proposed segmentation algorithm is a set of labeled primitives that represent the Arabic word. In order to reconstruct the original word from its corresponding primitives and diacritics, a novel binding and dot assignment algorithm is introduced. The algorithm achieved a correct segmentation rate of 97.7% when tested on samples of loosely constrained handwritten cursive script words consisting of 7922 characters written by 14 different writers.
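The PLB-detection step amounts to finding local extrema along the two 1D contour sequences; a minimal sketch (synthetic contour values; the paper's rule set and upper/lower matching stage are not reproduced):

```python
def local_extrema(contour, mode="min"):
    """Indices of strict local minima or maxima in a 1D sequence.

    For a word's upper contour, local minima are candidate letter
    boundaries (PLBs); for the lower contour, local maxima are used.
    """
    cmp = (lambda a, b: a < b) if mode == "min" else (lambda a, b: a > b)
    return [i for i in range(1, len(contour) - 1)
            if cmp(contour[i], contour[i - 1]) and cmp(contour[i], contour[i + 1])]

# synthetic upper-contour heights along the width of a word
upper = [5, 7, 6, 4, 6, 8, 7, 9, 5, 6]
print(local_extrema(upper, "min"))  # [3, 6, 8]
print(local_extrema(upper, "max"))  # [1, 5, 7]
```

The real algorithm then prunes these candidates with script-specific rules and keeps only upper/lower pairs that agree, which is what brings the boundary set down to one per letter.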
Robust Layout Synthesis of a MEM Crab-Leg Resonator Using a Constrained Genetic Algorithm
DEFF Research Database (Denmark)
Fan, Zhun; Achiche, Sofiane
2007-01-01
optimization problem with certain assumptions and treated by a special constrained genetic algorithm. The MEM design used for validation is a crab-leg resonator taken from the literature. The results show that the approach proposed in this research can lead to design results that meet the target performance...
DART: a robust algorithm for fast reconstruction of three-dimensional grain maps
DEFF Research Database (Denmark)
Batenburg, K.J.; Sijbers, J.; Poulsen, Henning Friis;
2010-01-01
classical tomography. To test the properties of the algorithm, three-dimensional X-ray diffraction microscopy data are simulated and reconstructed with DART as well as by a conventional iterative technique, namely SIRT (simultaneous iterative reconstruction technique). For 100 × 100 pixel reconstructions...
DEFF Research Database (Denmark)
Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars
2013-01-01
of the precalculation step, which utilizes the principles of the well-known frontal method. The succeeding optimization algorithm is also significantly optimized by applying a parallel implementation, which eliminates the exponential growth in computational time relative to the number of elements...
Increasing the robustness of a preconditioned filtered-X LMS algorithm
Fraanje, P.R.; Verhaegen, M.; Doelman, N.J.
2004-01-01
This letter presents a robustification of the preconditioned Filtered-X LMS algorithm proposed by Elliott et al. The method optimizes the average performance for probabilistic uncertainty in the secondary path and relaxes the SPR condition for global convergence. It also prevents large amplificatio
Tilly, David; Ahnesjö, Anders
2015-07-01
A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction-specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios to enable the probabilistic evaluation of a treatment plan. For each treatment scenario, the algorithm calculates the total dose by perturbing a pre-calculated dose, separately for the primary and scatter dose components, for the nominal conditions. The ratio of the scenario-specific accumulated fluence to the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel. Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate in a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm, and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy were almost indistinguishable.
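The fluence-ratio perturbation described above can be sketched as follows. The 1-D arrays and the smoothing kernel are illustrative stand-ins (the paper regularises with a dose pencil kernel on real fluence maps), not the authors' implementation:

```python
import numpy as np

# Sketch: a scenario dose is approximated by scaling a pre-calculated
# nominal dose with the ratio of the scenario-accumulated fluence to the
# expected (infinite-fraction) fluence, regularised by convolution.
def perturbed_dose(dose_nominal, fluence_scenario, fluence_expected, kernel):
    ratio = fluence_scenario / fluence_expected
    # Convolution damps irregularities in the ratio (stand-in kernel here,
    # where the paper uses a dose pencil kernel).
    ratio_smooth = np.convolve(ratio, kernel, mode="same")
    return dose_nominal * ratio_smooth

# With identical scenario and expected fluence, the interior dose is unchanged.
dose = perturbed_dose(np.ones(9), np.ones(9), np.ones(9),
                      np.array([0.25, 0.5, 0.25]))
```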
DEFF Research Database (Denmark)
Cesari, Matteo; Mehlsen, Jesper; Mehlsen, Anne-Birgitte
2016-01-01
T-wave amplitude (TWA) is a well-known index of the autonomic innervation of the myocardium. However, until now it has been evaluated only manually or with simple and inefficient algorithms. In this paper, we developed a new robust single-lead electrocardiogram (ECG) T-wave delineation algorithm...
Ji, H F; Huang, M Y; Xu, S Y; Wang, N; Wang, S
2016-01-01
The Robust Conjugate Direction Search (RCDS) method is used to optimize the collimation system for the Rapid Cycling Synchrotron (RCS) of the China Spallation Neutron Source (CSNS). The parameters of secondary collimators are optimized for a better performance of the collimation system. To improve the efficiency of the optimization, the Objective Ring Beam Injection and Tracking (ORBIT) parallel module combined with MATLAB parallel computing is used, which can run multiple ORBIT instances simultaneously. This study presents a way to find an optimal parameter combination of the secondary collimators for a machine model in preparation for CSNS/RCS commissioning.
Meta-algorithmics patterns for robust, low cost, high quality systems
Simske, Steven J
2013-01-01
The confluence of cloud computing, parallelism and advanced machine intelligence approaches has created a world in which the optimum knowledge system will usually be architected from the combination of two or more knowledge-generating systems. There is a need, then, to provide a reusable, broadly-applicable set of design patterns to empower the intelligent system architect to take advantage of this opportunity. This book explains how to design and build intelligent systems that are optimized for changing system requirements (adaptability), optimized for changing system input (robustness), an
Towards a robust algorithm to determine topological domains from colocalization data
Directory of Open Access Journals (Sweden)
Alexander P. Moscalets
2015-09-01
Full Text Available One of the most important tasks in understanding the complex spatial organization of the genome consists in extracting information about this spatial organization, the function and structure of chromatin topological domains from existing experimental data, in particular, from genome colocalization (Hi-C) matrices. Here we present an algorithm allowing one to reveal the underlying hierarchical domain structure of a polymer conformation from analyzing the modularity of colocalization matrices. We also test this algorithm on several model polymer structures: equilibrium globules, random fractal globules and regular fractal (Peano) conformations. We define what we call a spectrum of cluster borders, and show that these spectra behave strikingly differently for equilibrium and fractal conformations, allowing us to suggest an additional criterion to identify fractal polymer conformations.
Robustness of Consensus Algorithms for Networks with Communication Delays and Switching Topology
Shida, Takanobu; Ohmori, Hiromitsu
In this paper, we study consensus problems of continuous-time multiagent systems. A multiagent system forms a communication network where each agent receives information from its neighbors. This information is used to obtain the control inputs necessary for achieving consensus. It is assumed that delays exist in communication networks. Communication delays are time dependent and differ for each communication channel. In addition, it is assumed that communication networks have switching topologies, and feedback gains are time varying. Under these assumptions, we show that a network system consisting of first-order agents is bounded and find a condition under which it achieves consensus. Stability is shown by using the Lyapunov theorem. In addition, we extend the consensus algorithm for first-order systems to an output consensus algorithm for high-order systems.
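The delay-free, fixed-topology core of the first-order consensus protocol can be sketched in discrete time. The ring topology, gain, and step count below are illustrative assumptions; the paper's actual contributions (communication delays, switching topologies, time-varying gains) are deliberately omitted:

```python
# Each agent moves toward the states of its neighbors:
# x_i <- x_i + eps * sum_j (x_j - x_i) over neighbors j of i.
def consensus_step(x, neighbors, eps=0.1):
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Hypothetical 4-agent ring topology with symmetric links.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [1.0, 2.0, 3.0, 4.0]
for _ in range(200):
    x = consensus_step(x, neighbors)
# Symmetric links preserve the state sum, so the agents converge to the
# average of the initial states (here 2.5).
```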
Towards a robust algorithm to determine topological domains from colocalization data
Moscalets, Alexander P; Tamm, Mikhail V
2016-01-01
One of the most important tasks in understanding the complex spatial organization of the genome consists in extracting information about this spatial organization, the function and structure of chromatin topological domains from existing experimental data, in particular, from genome colocalization (Hi-C) matrices. Here we present an algorithm allowing one to reveal the underlying hierarchical domain structure of a polymer conformation from analyzing the modularity of colocalization matrices. We also test this algorithm on several model polymer structures: equilibrium globules, random fractal globules and regular fractal (Peano) conformations. We define what we call a spectrum of cluster borders, and show that these spectra behave strikingly differently for equilibrium and fractal conformations, allowing us to suggest an additional criterion to identify fractal polymer conformations.
Ranganadh Narayanam*
2013-01-01
The Voice Activity Detection (VAD) problem considers detecting the presence of speech in a noisy signal. The speech/non-speech classification task is not as trivial as it appears, and most VAD algorithms fail when the level of background noise increases. In this research, we present a new technique for Voice Activity Detection (VAD) in EEG-collected brain stem speech evoked potentials data [7, 8, 9]. It is a spectral subtraction method in which we have developed ou...
2012-09-01
The laboratory includes several visible PixeLINK CMOS machine vision cameras and an LWIR microbolometer camera. ... All results reported in this paper using ... of nearly log-spaced positions, resulting in a 101-image sequence; the minimum and maximum calibrated shifts are 0.001 and 49.974 pixels. ... the four algorithms identified above, and the results are presented in Fig. 10 as a function of true (calibrated) shift. Results are shown on the ...
A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain
Directory of Open Access Journals (Sweden)
E. M. Ismaili Aalaoui
2009-02-01
Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail to work because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a viewpoint different from the above to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.
Li, Wei; Saleeb, Atef F.
1995-01-01
This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present start-of-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) interative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of
2015-01-01
In this paper, a robust algorithm for fault diagnosis of power system equipment based on a failure-sensitive matrix (FSM) is presented. The FSM is a dynamic matrix structure updated by multiple measurements (online) and test results (offline) on the systems. The algorithm uses many different artificial intelligence and expert system methods for adaptively detecting the location of faults, emerging failures, and causes of failures. In this algorithm, all data obtained from the power transforme...
Energy Technology Data Exchange (ETDEWEB)
Labaria, George R. [Univ. of California, Santa Cruz, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Warrick, Abbie L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, Peter M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kalantar, Daniel H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-01-12
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
Munhoven, G.
2013-03-01
The total alkalinity-pH equation, which relates total alkalinity and pH for a given set of total concentrations of the acid-base systems that contribute to total alkalinity in a given water sample, is reviewed and its mathematical properties established. We prove that the equation function is strictly monotone and always has exactly one positive root. Different commonly used approximations are discussed and compared. An original method to derive appropriate initial values for the iterative solution of the cubic polynomial equation based upon carbonate-borate-alkalinity is presented. We then review different methods that have been used to solve the total alkalinity-pH equation, with a main focus on biogeochemical models. The shortcomings and limitations of these methods are identified and discussed. We then present two variants of a new, robust and universally convergent algorithm to solve the total alkalinity-pH equation. This algorithm does not require any a priori knowledge of the solution. The iterative procedure is shown to converge from any starting value to the physical solution. The extra computational cost for the convergence security is only 10-15% compared to the fastest algorithm in our test series.
Ji, Hong-Fei; Jiao, Yi; Huang, Ming-Yang; Xu, Shou-Yan; Wang, Na; Wang, Sheng
2016-09-01
The Robust Conjugate Direction Search (RCDS) method is used to optimize the collimation system for the Rapid Cycling Synchrotron (RCS) of the China Spallation Neutron Source (CSNS). The parameters of secondary collimators are optimized for a better performance of the collimation system. To improve the efficiency of the optimization, the Objective Ring Beam Injection and Tracking (ORBIT) parallel module combined with MATLAB parallel computing is used, which can run multiple ORBIT instances simultaneously. This study presents a way to find an optimal parameter combination of the secondary collimators for a machine model in preparation for CSNS/RCS commissioning. Supported by National Natural Science Foundation of China (11475202, 11405187, 11205185) and Youth Innovation Promotion Association of Chinese Academy of Sciences (2015009)
Multi-focus image fusion and robust encryption algorithm based on compressive sensing
Xiao, Di; Wang, Lan; Xiang, Tao; Wang, Yong
2017-06-01
Multi-focus image fusion schemes have been studied in recent years. However, little work has been done in multi-focus image transmission security. This paper proposes a scheme that can reduce data transmission volume and resist various attacks. First, multi-focus image fusion based on wavelet decomposition can generate complete scene images and optimize the perception of the human eye. The fused images are sparsely represented with DCT and sampled with structurally random matrix (SRM), which reduces the data volume and realizes the initial encryption. Then the obtained measurements are further encrypted to resist noise and crop attack through combining permutation and diffusion stages. At the receiver, the cipher images can be jointly decrypted and reconstructed. Simulation results demonstrate the security and robustness of the proposed scheme.
SuperPatchMatch: an Algorithm for Robust Correspondences using Superpixel Patches.
Giraud, Remi; Ta, Vinh-Thong; Bugeau, Aurelie; Coupe, Pierrik; Papadakis, Nicolas
2017-05-29
Superpixels have become very popular in many computer vision applications. Nevertheless, they remain underexploited since the superpixel decomposition may produce irregular and unstable segmentation results due to its dependency on the image content. In this paper, we first introduce a novel structure, a superpixel-based patch, called a SuperPatch. The proposed structure, based on the superpixel neighborhood, leads to a robust descriptor since spatial information is naturally included. The generalization of the PatchMatch method to SuperPatches, named SuperPatchMatch, is introduced. Finally, we propose a framework to perform fast segmentation and labeling from an image database, and demonstrate the potential of our approach by outperforming, in terms of computational cost and accuracy, state-of-the-art methods on both face labeling and medical image segmentation.
Class Dependent LDA Optimization Using Genetic Algorithm for Robust MFCC Extraction
Abbasian, Houman; Nasersharif, Babak; Akbari, Ahmad
Linear Discriminant Analysis (LDA) finds transformations that maximize the between-class scatter and minimize the within-class scatter. In this paper, we propose a method to use class-dependent LDA for speech recognition and MFCC extraction. To this end, we first use the logarithm of clean speech Mel filter bank energies (LMFE) of each class; then we obtain the class-dependent LDA transformation matrix using a multidimensional genetic algorithm (MGA) and use this matrix in place of the DCT in MFCC feature extraction. The experimental results show that the proposed speech recognition and optimization methods using class-dependent LDA achieve a significant isolated-word recognition rate on the Aurora2 database.
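For reference, the two scatter matrices that LDA trades off can be computed as below. This is a generic sketch of standard LDA, not the paper's class-dependent, GA-optimized variant, and the data is made up:

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter of samples X, labels y."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)            # spread within class c
        diff = (mc - mean).reshape(-1, 1)
        Sb += len(Xc) * diff @ diff.T            # spread of class means
    return Sw, Sb

# Tiny illustrative dataset with two classes.
X = np.array([[1.0, 2.0], [2.0, 3.0], [5.0, 1.0], [6.0, 2.0]])
y = np.array([0, 0, 1, 1])
Sw, Sb = scatter_matrices(X, y)
# The standard LDA directions are the leading eigenvectors of inv(Sw) @ Sb;
# Sw + Sb equals the total scatter matrix, the identity LDA relies on.
```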
Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.
2016-11-01
The Least Squares (LS), Least Median of Squares (LMdS), Reweighted Least Squares (RLS) and Trimmed Least Squares (TLS) estimators are used to obtain parameter estimates of AR models using the DE algorithm. The empirical study indicated that the RLS estimator seems very reasonable because it has a smaller root mean square error (RMSE), particularly for the Gaussian AR(1) process with unknown drift and additive outliers. Moreover, while LS performs well on shorter processes with a smaller percentage and magnitude of additive outliers (AOS), RLS and TLS compare favorably with LS for longer AR processes. Thus, this study recommends the Reweighted Least Squares estimator as an alternative to the LS estimator in the case of autoregressive processes with additive outliers. The experiment also demonstrates that the Differential Evolution (DE) algorithm obtains optimal solutions for fitting first-order autoregressive processes with outliers using these estimators. At the request of all authors of the paper, and with the agreement of the Proceedings Editor, an updated version of this article was published on 15 December 2016. The original version supplied to AIP Publishing contained errors in some of the mathematical equations and in Table 2. The errors have been corrected in the updated and re-published article.
Institute of Scientific and Technical Information of China (English)
杜其奎; 余德浩
2001-01-01
Some new domain decomposition methods based on natural boundary reduction are suggested for overlapping and non-overlapping domains. A two-dimensional scalar wave equation is taken as a model to illustrate these methods. The governing equation is discretized in time, leading to a time-stepping scheme in which an exterior elliptic problem has to be solved at each time step. A Dirichlet-Neumann method and a Schwarz alternating method are proposed. For the Schwarz alternating method, the convergence of the algorithm and the contraction factor for an exterior circular domain are given. Finally, some numerical examples are devoted to illustrating these methods.
Some nonlinear space decomposition algorithms
Energy Technology Data Exchange (ETDEWEB)
Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)
1996-12-31
Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
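In the linear elliptic case, the additive Schwarz method the abstract mentions reduces to a simple block-correction iteration. A minimal sketch on a 1-D Poisson system with two overlapping index blocks, where the problem size, overlap, damping factor, and iteration count are illustrative choices:

```python
import numpy as np

# 1-D Poisson model problem A x = b (tridiagonal second-difference matrix).
n = 21
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Two overlapping "subdomains" as index blocks (overlap on indices 8..12).
blocks = [np.arange(0, 13), np.arange(8, 21)]

x = np.zeros(n)
for _ in range(300):
    r = b - A @ x
    dx = np.zeros(n)
    for idx in blocks:                       # subdomain solves; these are
        Ai = A[np.ix_(idx, idx)]             # independent, hence parallel
        dx[idx] += np.linalg.solve(Ai, r[idx])
    x += 0.5 * dx                            # damping keeps the additive sum stable
# x now approximates the solution of A x = b.
```

The multiplicative variant would instead apply the block corrections sequentially, updating the residual in between, which converges faster but serializes the subdomain solves.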
Torquato, S; Jiao, Y
2010-12-01
We have formulated the problem of generating dense packings of nonoverlapping, nontiling nonspherical particles within an adaptive fundamental cell subject to periodic boundary conditions as an optimization problem called the adaptive-shrinking cell (ASC) formulation [S. Torquato and Y. Jiao, Phys. Rev. E 80, 041104 (2009)]. Because the objective function and impenetrability constraints can be exactly linearized for sphere packings with a size distribution in d-dimensional Euclidean space R(d), it is most suitable and natural to solve the corresponding ASC optimization problem using sequential-linear-programming (SLP) techniques. We implement an SLP solution to produce robustly a wide spectrum of jammed sphere packings in R(d) for d=2, 3, 4, 5, and 6 with a diversity of disorder and densities up to the respective maximal densities. A novel feature of this deterministic algorithm is that it can produce a broad range of inherent structures (locally maximally dense and mechanically stable packings), besides the usual disordered ones (such as the maximally random jammed state), with very small computational cost compared to that of the best known packing algorithms by tuning the radius of the influence sphere. For example, in three dimensions, we show that it can produce with high probability a variety of strictly jammed packings with a packing density anywhere in the wide range [0.6, 0.7408...], where π/√18 = 0.7408... corresponds to the density of the densest packing. We also apply the algorithm to generate various disordered packings as well as the maximally dense packings for d=2, 4, 5, and 6. Our jammed sphere packings are characterized and compared to the corresponding packings generated by the well-known Lubachevsky-Stillinger (LS) molecular-dynamics packing algorithm. Compared to the LS procedure, our SLP protocol is able to ensure that the final packings are truly jammed, produces disordered jammed packings with anomalously low densities, and is appreciably
Rapid and robust medical image elastic registration using mean shift algorithm
Institute of Scientific and Technical Information of China (English)
Xuan Yang; Jihong Pei
2008-01-01
In landmark-based image registration, estimating the landmark correspondence plays an important role. In this letter, a novel landmark correspondence estimation technique using the mean shift algorithm is proposed. Image corner points are detected as landmarks, and mean shift iterations are adopted to find the most probable corresponding point positions in two images. Mutual information between the intensities of two local regions is computed to eliminate mismatched points. A multi-level estimation (MLE) technique is proposed to improve the stability of the correspondence estimation. Experiments show that the corresponding landmarks are located precisely. The proposed technique is shown to be feasible and rapid in experiments on various mono-modal medical images.
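The core mean shift iteration is simple enough to sketch in 1-D. This is a generic Gaussian-kernel version on made-up data; the letter applies the same idea in 2-D image neighborhoods around corner points:

```python
import math

def mean_shift(points, start, bandwidth=1.0, iters=50):
    """Iteratively move toward the kernel-weighted mean of the samples,
    converging to a mode of the kernel density estimate."""
    x = start
    for _ in range(iters):
        weights = [math.exp(-((p - x) ** 2) / (2.0 * bandwidth ** 2))
                   for p in points]
        x = sum(w * p for w, p in zip(weights, points)) / sum(weights)
    return x

# Hypothetical 1-D data: a dense cluster near 0 plus a distant point.
pts = [-0.2, -0.1, 0.0, 0.1, 0.2, 5.0]
mode = mean_shift(pts, start=1.0, bandwidth=0.5)
# The iterate drifts into the dense cluster and settles near its mode at 0;
# the outlier at 5.0 has negligible kernel weight.
```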
A robust algorithm for moving interface of multi-material fluids based on Riemann solutions
Institute of Scientific and Technical Information of China (English)
Xueying Zhang; Ning Zhao
2006-01-01
In this paper, the numerical simulation of interface problems for multi-material fluids is studied. The level set function is designed to capture the location of the material interface. For multi-dimensional and multi-material fluids, the modified ghost fluid method needs a Riemann solution to renew the variable states near the interface. Here we present a new convenient and effective algorithm for solving the Riemann problem in the normal direction. The extrapolated variables are populated by Taylor series expansions in that direction. An anti-diffusive high order WENO difference scheme with a limiter is adopted for the numerical simulation. Finally, we implement a series of numerical experiments on multi-material flows. The obtained results are satisfactory, compared to those of other methods.
Ahmed, N; Zheng, Ziyi; Mueller, K
2012-12-01
Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate evaluation platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be easily seduced to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise" which serves as a prototypical example for how the evaluation of visualization algorithms can be mapped into a fun and addicting activity, allowing this task to be accomplished in an extensive yet cost effective way. Finally, we sketch out a framework that transcends from the pure evaluation of existing visualization methods to the design of a new one.
Parasuraman, Ramviyas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-01-01
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide red...
Touati, Julien; Bologna, Marco; Schwein, Adeline; Migliavacca, Francesco; Garbey, Marc
2017-07-01
Centerlines of blood vessels are useful tools for making important anatomical measurements (length, diameter, area) which cannot be accurately obtained using 2D images. In this paper, a new method for centerline extraction of vascular trees is presented. By using computational fluid dynamics (CFD), we are able to obtain a robust and purely functional centerline that supports better measurements than classical, purely geometry-based centerlines. We show that the CFD-based centerline is within a few pixels of the geometrical centerline where the latter is defined (far away from inlets/outlets and from the branches). We show that the centerline computed with our method is not affected by the traditional errors of other classical volume-based algorithms, such as topological thinning, and could be a potential alternative to be considered in future studies.
Institute of Scientific and Technical Information of China (English)
Mojtahedi,A.; Lotfollahi Yaghin,M.A.; Hassanzadeh,Y.; Abbasidoust,F.; Ettefagh,M.M.; Aminfar,M.H.
2012-01-01
Steel jacket-type platforms are a common kind of offshore structure, and health monitoring is an important issue in their safety assessment. In the present study, a new damage detection method is adopted for this kind of structure and inspected experimentally using a laboratory model. The method is investigated with the aim of developing a robust damage detection technique that is less sensitive to both measurement and analytical model uncertainties. For this purpose, incorporation of the artificial immune system with weighted attributes (AISWA) method into finite element (FE) model updating is proposed and compared with other methods to explore its effectiveness in damage identification. Based on mimicking immune recognition, noise simulation, and attribute weighting, the method offers important advantages and has high success rates. Therefore, it is proposed as a suitable method for detecting failures in large civil engineering structures with complicated structural geometry, such as the considered case study.
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-12-12
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
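The relay-positioning scheme described above combines RSS smoothing (exponential moving averaging) with stochastic gradient ascent. A minimal one-dimensional sketch of these two ingredients, assuming toy `rss_server`/`rss_client` signal models rather than the paper's actual multi-sensor sampling:

```python
import random

def ema(samples, alpha=0.3):
    """Exponential moving average filter, as used to smooth noisy RSS readings."""
    out, acc = [], samples[0]
    for s in samples:
        acc = alpha * s + (1 - alpha) * acc
        out.append(acc)
    return out

def balance_position(rss_server, rss_client, x0, step=0.05, iters=200):
    """Stochastic-gradient-ascent sketch: move the relay along one axis to
    maximize the minimum of the two link strengths (a balancing objective)."""
    x = x0
    for _ in range(iters):
        f = lambda p: min(rss_server(p), rss_client(p))
        h = 0.01
        grad = (f(x + h) - f(x - h)) / (2 * h)  # finite-difference gradient
        x += step * grad + random.gauss(0, 1e-3)  # stochastic perturbation
    return x
```

With two link-strength curves peaking at opposite ends of the segment, the relay settles near the point that balances them; the real algorithm operates on measured RSS rather than analytic signal models.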
Directory of Open Access Journals (Sweden)
Stull C.J.
2012-07-01
Full Text Available The Info-Gap Decision Theory (IGDT) is here adopted to assess the robustness of a technique aimed at identifying the optimal excitation signal within a structural health monitoring (SHM) procedure. Given limited system response measurements and ever-present physical limits on the level of excitation, the ultimate goal of the technique is to improve the detectability of damage by increasing the difference between measurable outputs of the undamaged and damaged system. In particular, a 2-DOF mass-spring-damper system characterized by a nonlinear stiffness is considered. Uncertainty is introduced into the system in the form of deviations of its parameters (mass, stiffness, damping ratio…) from their nominal values. Variations in the performance of the technique are then evaluated both in terms of changes in the estimated difference between the responses of the damaged and undamaged system and in terms of deviations of the identified optimal input signal from its nominal estimate. Finally, plots of the performance of the analyzed algorithm for different levels of uncertainty are obtained, showing which parameters are more sensitive to the presence of uncertainty and thus enabling a clear evaluation of its robustness.
PCBDDC: A Class of Robust Dual-Primal Methods in PETSc
Zampini, Stefano
2016-10-27
A class of preconditioners based on balancing domain decomposition by constraints methods is introduced in the Portable, Extensible Toolkit for Scientific Computation (PETSc). The algorithm and the underlying nonoverlapping domain decomposition framework are described with a specific focus on their current implementation in the library. Available user customizations are also presented, together with an experimental interface to the finite element tearing and interconnecting dual-primal methods within PETSc. Large-scale parallel numerical results are provided for the latest version of the code, which is able to tackle symmetric positive definite problems with highly heterogeneous distributions of the coefficients. Current limitations and future extensions of the preconditioner class are also discussed.
Muceli, Silvia; Jiang, Ning; Farina, Dario
2014-05-01
Previous research proposed the extraction of myoelectric control signals by linear factorization of multi-channel electromyogram (EMG) recordings from forearm muscles. This paper further analyses the theoretical basis for dimensionality reduction in high-density EMG signals from forearm muscles. Moreover, it shows that the factorization of muscular activation patterns into weights and activation signals by non-negative matrix factorization (NMF) is robust with respect to the channel configuration from which the EMG signals are obtained. High-density surface EMG signals were recorded from the forearm muscles of six individuals. Weights and activation signals extracted offline from 10 channel configurations with varying channel numbers (6, 8, 16, 192 channels) were highly similar. Additionally, the method proved to be robust against electrode shifts in both the transversal and longitudinal directions with respect to the muscle fibers. In a second experiment, six subjects directly used the activation signals extracted from high-density EMG for online goal-directed control tasks involving simultaneous and proportional control of two degrees of freedom of the wrist. The synergy weights for this control task were extracted from a reference configuration, and activation signals were calculated online from the reference configuration as well as from the two shifted configurations, simulating electrode shift. Despite the electrode shift, the task completion rate, task completion time, and execution efficiency were generally not statistically different among electrode configurations. Online performances were also mostly similar when using either 6, 8, or 16 EMG channels. The robustness of the method to the number and location of channels, proved both offline and online, indicates that EMG signals recorded from forearm muscles can be approximated as linear instantaneous mixtures of activation signals and justifies the use of linear factorization algorithms for extracting, in a
Fritz, Sean
2015-01-01
In this study, an interplanetary space flight mission design is established to obtain the minimum $\Delta V$ required for a rendezvous and sample return mission from an asteroid. Given the initial (observed) conditions of an asteroid, a (robust) genetic algorithm is implemented to determine the optimal choice of $\Delta V$ required for the rendezvous. Robustness of the optimum solution is demonstrated through incorporated bounded uncertainties in the outbound $\Delta V$ maneuver via the genetic fitness function. The improved algorithm results in a solution with improved robustness and reduced sensitivity to propulsive errors in the outbound maneuver. This is achieved over a solution optimized solely on $\Delta V$, while keeping the increase in $\Delta V$ to a minimum, as desired. Outcomes of the analysis provide significant results in terms of improved robustness in asteroid rendezvous missions.
Ambikasaran, Sivaram
2015-01-01
Using accurate multi-component diffusion treatment in numerical combustion studies remains formidable due to the computational cost associated with solving for diffusion velocities. To obtain the diffusion velocities for low-density gases, one needs to solve the Stefan-Maxwell equations along with the zero diffusion flux criteria, which scales as $\mathcal{O}(N^3)$ when solved exactly. In this article, we propose an accurate, fast, direct and robust algorithm to compute multi-component diffusion velocities. To our knowledge, this is the first provably accurate algorithm (the solution can be obtained up to an arbitrary degree of precision) scaling at a computational complexity of $\mathcal{O}(N)$ in finite precision. The key idea involves leveraging the fact that the matrix of the reciprocal of the binary diffusivities, $V$, is low rank, with its rank being independent of the number of species involved. The low rank representation of matrix $V$ is computed in a fast manner at a computational complexity of $\...
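The $\mathcal{O}(N)$ claim above hinges on exploiting low-rank structure. As a generic illustration of why low rank buys linear-time solves (this is not the authors' algorithm), the Sherman-Morrison identity solves a diagonal-plus-rank-one system in $\mathcal{O}(N)$:

```python
def solve_diag_plus_rank1(d, u, v, b):
    """Solve (D + u v^T) x = b in O(N) via the Sherman-Morrison formula:
    x = D^{-1} b - (v^T D^{-1} b) / (1 + v^T D^{-1} u) * D^{-1} u,
    where D = diag(d). Every step is a vector operation, hence O(N)."""
    dib = [bi / di for bi, di in zip(b, d)]      # D^{-1} b
    diu = [ui / di for ui, di in zip(u, d)]      # D^{-1} u
    denom = 1.0 + sum(vi * y for vi, y in zip(v, diu))
    coef = sum(vi * y for vi, y in zip(v, dib)) / denom
    return [p - coef * q for p, q in zip(dib, diu)]
```

A rank-$r$ correction generalizes this via the Woodbury identity at cost $\mathcal{O}(r^2 N)$, which stays linear in $N$ when, as claimed above, the rank is independent of the number of species.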
Energy Technology Data Exchange (ETDEWEB)
Gaiffe, St.
2000-03-23
In this thesis, we are interested in the modeling of fluid flow through porous media with 2-D and 3-D unstructured meshes, and in the use of domain decomposition methods. The behavior of flow through porous media is strongly influenced by heterogeneities: either large-scale lithological discontinuities or quite localized phenomena such as fluid flow in the neighbourhood of wells. In these two typical cases, an accurate consideration of the singularities requires the use of adapted meshes. After having shown the limits of classic meshes, we present the prospects offered by hybrid and flexible meshes. Next, we consider the possibilities of generalizing the numerical schemes traditionally used in reservoir simulation, and we identify two available approaches: mixed finite elements and U-finite volumes. Since the investigated phenomena are also characterized by different time scales, special treatment of the time discretization on various parts of the domain is required. We think that the combination of domain decomposition methods with operator splitting techniques may provide a promising approach to obtaining high flexibility in local time-step management. Consequently, we develop a new numerical scheme for linear parabolic equations which allows higher flexibility in the management of local space and time steps. To conclude, a priori estimates and error estimates on the two variables of interest, namely the pressure and the velocity, are proposed. (author)
a Robust Registration Algorithm for Point Clouds from Uav Images for Change Detection
Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.
2016-06-01
Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs
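ICP-style registration, as discussed above, alternates nearest-neighbour correspondence with a closed-form rigid fit. A sketch of the fitting half in 2D, assuming correspondences are already known (the paper's pipeline instead co-registers clouds through bundle adjustment with self-calibration):

```python
import math

def rigid_fit_2d(src, dst):
    """One ICP fitting step: closed-form 2D rigid transform (rotation theta +
    translation t) that best maps paired points src -> dst in least squares.
    The optimal angle is atan2 of the cross- and dot-product sums of the
    centered point pairs."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s = [(x - csx, y - csy) for x, y in src]   # centered source
    d = [(x - cdx, y - cdy) for x, y in dst]   # centered destination
    num = sum(sx * dy - sy * dx for (sx, sy), (dx, dy) in zip(s, d))
    den = sum(sx * dx + sy * dy for (sx, sy), (dx, dy) in zip(s, d))
    theta = math.atan2(num, den)
    c, si = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - si * csy)            # translation maps rotated
    ty = cdy - (si * csx + c * csy)            # source centroid onto dst's
    return theta, (tx, ty)
```

Full ICP would re-estimate correspondences by nearest-neighbour search and repeat; the local-minimum failures mentioned above arise exactly when that correspondence step locks onto the wrong matches.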
Schiattarella, Vincenzo; Spiller, Dario; Curti, Fabio
2017-04-01
This work proposes a novel technique for star pattern recognition for the Lost-in-Space problem, named the Multi-Poles Algorithm. This technique is especially designed to ensure a reliable identification of stars when there is a large number of false objects in the image, such as Single Event Upsets, hot pixels or other celestial bodies. The algorithm identifies the stars using three phases: the acceptance phase, the verification phase and the confirmation phase. The acceptance phase uses a polar technique to yield a set of accepted stars. The verification phase performs a cross-check between two sets of accepted stars, providing a new set of verified stars. Finally, the confirmation phase introduces an additional check to discard or to keep a verified star. As a result, this procedure guarantees high robustness to false objects in the acquired images. A reliable simulator is developed to test the algorithm and obtain accurate numerical results. The star tracker is simulated as a 1024 × 1024 Active Pixel Sensor with a 20° Field of View. The sensor noises are added using suitable distribution models. The stars are simulated using the Hipparcos catalog with magnitudes corrected according to the instrumental response of the sensor. The Single Event Upsets are modeled based on typical shapes detected from some missions. The tests are conducted through a Monte Carlo analysis covering the entire celestial sphere. The numerical results are obtained for both a fixed and a variable attitude configuration. In the first case, the angular velocity is zero and the simulations give a success rate of 100% considering a number of false objects up to six times the number of the cataloged stars in the image. The success rate decreases to 66% when the number of false objects is increased to fifteen times the number of cataloged stars. For moderate angular velocities, preliminary results are given for constant rate and direction. By increasing the angular rate, the performances of the
Scalable and Robust BDDC Preconditioners for Reservoir and Electromagnetics Modeling
Zampini, S.
2015-09-13
The purpose of the study is to show the effectiveness of recent algorithmic advances in Balancing Domain Decomposition by Constraints (BDDC) preconditioners for the solution of elliptic PDEs with highly heterogeneous coefficients, and discretized by means of the finite element method. Applications to large linear systems generated by div- and curl- conforming finite elements discretizations commonly arising in the contexts of modelling reservoirs and electromagnetics will be presented.
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multioutput (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques where simulation results show the potential and advantages of the new approach.
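The GA procedure described above maximizes a fitness function over candidate reduced models. A toy real-coded genetic algorithm illustrating the selection/crossover/mutation loop such methods rely on; the operators and parameters here are generic textbook choices, not those of the paper:

```python
import random

def genetic_maximize(fitness, lo, hi, pop_size=30, gens=60, mut=0.1, seed=1):
    """Minimal real-coded genetic algorithm: binary tournament selection,
    blend crossover, and Gaussian mutation, maximizing `fitness` on [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # binary tournament selection
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) > fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            w = rng.random()
            child = w * p1 + (1 - w) * p2     # blend crossover
            child += rng.gauss(0, mut)        # Gaussian mutation
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
    return max(pop, key=fitness)
```

In the MOR setting above, the chromosome would instead encode the entries of the transformed state matrix, and the fitness would penalize the response deviation between the full and reduced models.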
Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm
Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne
2010-02-01
Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate their position with the precision necessary for a gripping robot to pick them up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidate object matches. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, here exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable to a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
Energy Technology Data Exchange (ETDEWEB)
Yi, Jianbing, E-mail: yijianbing8@163.com [College of Information Engineering, Shenzhen University, Shenzhen, Guangdong 518000, China and College of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, Jiangxi 341000 (China); Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn [College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong 518000 (China); Chen, Guoliang, E-mail: glchen@szu.edu.cn [National High Performance Computing Center at Shenzhen, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong 518000 (China)
2015-10-15
Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performance of the authors’ method is evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation obtained by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to develop efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce a good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation to Schwarz Waveform Relaxation methods. Two versions are proposed respectively based on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of efficiency and accuracy of these methods.
Schaller, Matthieu; Chalk, Aidan B G; Draper, Peter W
2016-01-01
We present a new open-source cosmological code, called SWIFT, designed to solve the equations of hydrodynamics using a particle-based approach (Smoothed Particle Hydrodynamics) on hybrid shared/distributed-memory architectures. SWIFT was designed from the bottom up to provide excellent strong scaling on both commodity clusters (Tier-2 systems) and Top100 supercomputers (Tier-0 systems), without relying on architecture-specific features or specialized accelerator hardware. This performance is due to three main computational approaches: (1) Task-based parallelism for shared-memory parallelism, which provides fine-grained load balancing and thus strong scaling on large numbers of cores. (2) Graph-based domain decomposition, which uses the task graph to decompose the simulation domain such that the work, as opposed to just the data (as is the case with most partitioning schemes), is equally distributed across all nodes. (3) Fully dynamic and asynchronous communication, in which communication is modelled as just anot...
Aagaard, Brad T; Williams, Charles A
2013-01-01
We employ a domain decomposition approach with Lagrange multipliers to implement fault slip in a finite-element code, PyLith, for use in both quasi-static and dynamic crustal deformation applications. This integrated approach to solving both quasi-static and dynamic simulations leverages common finite-element data structures and implementations of various boundary conditions, discretization schemes, and bulk and fault rheologies. We have developed a custom preconditioner for the Lagrange multiplier portion of the system of equations that provides excellent scalability with problem size compared to conventional additive Schwarz methods. We demonstrate application of this approach using benchmarks for both quasi-static viscoelastic deformation and dynamic spontaneous rupture propagation that verify the numerical implementation in PyLith.
Bougeard, M. L.
In recent years, robustness is a problem that has been given much attention in the statistical literature. While it is now clear that no single robust regression procedure is best, the L1 and the Huber-M estimators are currently attracting considerable attention when the errors have a contaminated Gaussian distribution. Nevertheless, they cannot be expressed analytically, so finding efficient algorithms to produce them for large data sets is still a field of active research. In this paper, the author first discusses the early contributions of Laplace and others to the L1 problem. Then, he presents new algorithms based on the Spingarn partial inverse-proximal approach that take into account both the primal and dual aspects of the M-estimation problem. It is shown how the method can be easily extended to handle constrained problems. The result is a family of highly parallel algorithms attractive for large-scale problems. Astrometrical applications are considered.
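Since Huber-M estimates have no closed form, one standard iterative route (distinct from the Spingarn partial-inverse method discussed above) is iteratively reweighted least squares. A sketch for the simplest case, a robust location estimate:

```python
def huber_location(xs, c=1.345, iters=50):
    """Huber M-estimate of location via iteratively reweighted least squares:
    residuals within c of the current estimate get weight 1, larger residuals
    get the down-weight c/|r|, so outliers pull far less than in a mean."""
    mu = sorted(xs)[len(xs) // 2]   # start from (roughly) the median
    for _ in range(iters):
        ws = []
        for x in xs:
            r = abs(x - mu)
            ws.append(1.0 if r <= c else c / r)
        mu = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    return mu
```

With a contaminated sample such as [0.9, 1.0, 1.1, 1.0, 8.0], the mean is dragged to 2.4 while the Huber estimate stays near the inliers; the same reweighting idea extends to regression by solving a weighted least-squares problem at each iteration.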
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
Energy Technology Data Exchange (ETDEWEB)
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithms development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available; it was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS (version 3). As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the
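As a minimal concrete example of the preconditioned Krylov solvers this project targets, here is a conjugate gradient iteration with a Jacobi (diagonal) preconditioner; production preconditioners of the ILU or multilevel kind discussed above replace the diagonal step with a sparse triangular solve:

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for a dense SPD matrix A (list of
    rows), with M = diag(A) as the (Jacobi) preconditioner."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x for x = 0
    minv = [1.0 / A[i][i] for i in range(n)]   # M^{-1} for M = diag(A)
    z = [minv[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

The entire point of the robustness work above is the quality of `minv`: a better approximation of the action of $A^{-1}$ cuts the iteration count, especially for indefinite or Helmholtz-type systems where the diagonal alone is hopeless.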
Saeed, Fahad
2009-01-01
Multiple Sequence Alignment (MSA) is one of the most computationally intensive tasks in computational biology. The best known existing solutions for multiple sequence alignment take several hours (in some cases days) of computation time to align, for example, 2000 homologous sequences of average length 300. Inspired by the sample sort approach in parallel processing, in this paper we propose a highly scalable multiprocessor solution for the MSA problem in phylogenetically diverse sequences. Our method employs an intelligent scheme to partition the set of sequences into smaller subsets using a k-mer-count-based similarity index, referred to as the k-mer rank. Each subset is then independently aligned in parallel using any sequential approach. Further fine-tuning of the local alignments is achieved using constraints derived from a global ancestor of the entire set. The proposed Sample-Align-D algorithm has been implemented on a cluster of workstations using the MPI message-passing library. The accuracy of the proposed solutio...
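The partitioning idea above can be sketched with k-mer counts alone. In this simplified sketch, each sequence is ranked by its k-mer similarity to a sampled pivot sequence and contiguous rank buckets form the subsets; the actual Sample-Align-D k-mer rank and splitter selection are more elaborate, so the scoring below is an illustrative assumption.

```python
# Sketch of k-mer-count-based partitioning for MSA: rank sequences by
# shared k-mer content with a sampled pivot, then cut the ranked list
# into subsets that can be aligned independently and in parallel.

from collections import Counter

def kmer_counts(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_similarity(a_counts, b_counts):
    """Number of k-mer occurrences shared by two count profiles."""
    return sum(min(c, b_counts[m]) for m, c in a_counts.items())

def partition(seqs, n_subsets, k=3):
    pivot = kmer_counts(seqs[0], k)            # sampled reference sequence
    ranked = sorted(seqs,
                    key=lambda s: kmer_similarity(kmer_counts(s, k), pivot))
    size = (len(ranked) + n_subsets - 1) // n_subsets
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

subsets = partition(["ACGTACGT", "ACGTACGA", "TTTTGGGG", "TTTTGGGC"], 2)
```

With this toy input, the two TTTT-family sequences (sharing no 3-mers with the pivot) land in one subset and the two ACGT-family sequences in the other.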
DEFF Research Database (Denmark)
Aanæs, Henrik; Fisker, Rune; Åström, Kalle;
2002-01-01
Factorization algorithms for recovering structure and motion from an image stream have many advantages, but they usually require a set of well-tracked features. Such a set is in general not available in practical applications. There is thus a need for making factorization algorithms deal effectively with errors in the tracked features. We propose a new and computationally efficient algorithm for applying an arbitrary error function in the factorization scheme. This algorithm enables the use of robust statistical techniques and arbitrary noise models for the individual features. These techniques and models enable the factorization scheme to deal effectively with mismatched features, missing features, and noise on the individual features. The proposed approach further includes a new method for Euclidean reconstruction that significantly improves convergence of the factorization algorithms...
Directory of Open Access Journals (Sweden)
MOHAMMED FAIZ ABOALMAALY
2014-10-01
With the ongoing revolution in multicore architecture, several parallel programming platforms have been introduced to pave the way for fast and efficient development of parallel algorithms. Broadly, parallel computing can take two forms: Data-Level Parallelism (DLP) or Task-Level Parallelism (TLP). The former distributes data among the available processing elements, while the latter executes independent tasks concurrently. Most parallel programming platforms have built-in techniques to distribute data among processors; these techniques are known as automatic distribution (scheduling). However, owing to their wide range of purposes, variation in data types, amount of distributed data, possible extra computational overhead, and other hardware-dependent factors, manual distribution can achieve better performance than automatic distribution. In this paper, this assumption is investigated by comparing automatic distribution against our newly proposed manual distribution of data among threads. Empirical results for matrix addition and matrix multiplication show a considerable performance gain when manual distribution is applied instead of automatic distribution.
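The manual-distribution idea above amounts to assigning each thread an explicit contiguous block of the data. The sketch below shows that partitioning logic for matrix addition; note that in CPython the GIL prevents an actual speedup for this workload, so this illustrates the distribution scheme only, not the paper's measured performance.

```python
# Manual block distribution of matrix rows among threads, mirroring the
# paper's matrix-addition experiment. Each thread owns the disjoint row
# range [lo, hi), so no locking is needed on the output.

import threading

def add_rows(A, B, C, lo, hi):
    for i in range(lo, hi):                    # this thread's rows only
        C[i] = [a + b for a, b in zip(A[i], B[i])]

def parallel_add(A, B, n_threads=4):
    n = len(A)
    C = [None] * n
    chunk = (n + n_threads - 1) // n_threads   # manual block size
    threads = [threading.Thread(
                   target=add_rows,
                   args=(A, B, C, t * chunk, min(n, (t + 1) * chunk)))
               for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return C

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
B = [[10, 20], [30, 40], [50, 60], [70, 80]]
C = parallel_add(A, B, 2)
```

An automatic scheduler would instead hand out rows (or finer-grained chunks) at runtime; the comparison in the paper is between exactly these two policies.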
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2015-01-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought in for comparison. Several validation approaches that can work on real data without ground truths are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod. PMID:25087857
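The first key technique named above, mean shift, iteratively moves an estimate to the mean of the samples inside a window until it settles on a local mode. A generic 1D version can be sketched as follows; the flat kernel, bandwidth, and data are illustrative, not the tagged-MRI spectral peaks RACE operates on.

```python
# Generic 1D mean-shift mode seeking: repeatedly replace the current
# estimate by the mean of all samples within the bandwidth window.

def mean_shift_1d(samples, start, bandwidth=1.0, tol=1e-6, max_iter=100):
    x = start
    for _ in range(max_iter):
        window = [s for s in samples if abs(s - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)      # shift to the local mean
        if abs(new_x - x) < tol:
            return new_x
        x = new_x
    return x

# Samples clustered near 5.0 and 10.0; starting at 4.0 the iteration
# climbs to the nearby mode around 5.0.
data = [4.8, 4.9, 5.0, 5.1, 5.2, 9.7, 9.9, 10.1]
mode = mean_shift_1d(data, start=4.0, bandwidth=1.0)
```

In the CF-estimation setting, the "samples" would be frequency-domain magnitudes and the converged mode gives the center-frequency estimate.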
Directory of Open Access Journals (Sweden)
Walton Surrey M
2005-03-01
Background: Cost-utility analysis (CUA) using SF-36/SF-12 data has been facilitated by the development of several preference-based algorithms. The purpose of this study was to illustrate how decision-making could be affected by the choice of preference-based algorithm for the SF-36 and SF-12, and to provide some guidance on selecting an appropriate algorithm. Methods: Two sets of data were used: (1) a clinical trial of adult asthma patients; and (2) a longitudinal study of post-stroke patients. Incremental costs were assumed to be $2000 per year over standard treatment, with QALY gains realized over a 1-year period. Ten published algorithms were identified, denoted by first author: Brazier (SF-36), Brazier (SF-12), Shmueli, Fryback, Lundberg, Nichol, Franks (3 algorithms), and Lawrence. Incremental cost-utility ratios (ICURs) for each algorithm, stated in dollars per quality-adjusted life year ($/QALY), were ranked and compared between datasets. Results: In the asthma patients, estimated ICURs ranged from Lawrence's SF-12 algorithm at $30,769/QALY (95% CI: 26,316 to 36,697) to Brazier's SF-36 algorithm at $63,492/QALY (95% CI: 48,780 to 83,333). ICURs for the stroke cohort varied slightly more dramatically. The MEPS-based algorithm by Franks et al. provided the lowest ICUR at $27,972/QALY (95% CI: 20,942 to 41,667). The Fryback and Shmueli algorithms provided ICURs that were greater than $50,000/QALY and did not have confidence intervals overlapping with most of the other algorithms. The ICUR-based ranking of algorithms was strongly correlated between the asthma and stroke datasets (r = 0.60). Conclusion: SF-36/SF-12 preference-based algorithms produced a wide range of ICURs that could potentially lead to different reimbursement decisions. Brazier's SF-36 and SF-12 algorithms have a strong methodological and theoretical basis and tended to generate relatively higher ICUR estimates, considerations that support a preference for these algorithms over the
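The ICUR arithmetic used above is simply incremental cost divided by incremental QALYs. The sketch below reproduces the study's $2000/year incremental cost; the QALY gains are hypothetical values back-computed from the reported asthma-cohort ICURs for illustration, not data from the paper.

```python
# Incremental cost-utility ratio (ICUR) in $/QALY, and ranking of
# algorithms by ICUR as done in the study. The qaly gains below are
# illustrative back-computed values, not published results.

def icur(incremental_cost, qaly_gain):
    """Incremental cost-utility ratio, $ per quality-adjusted life year."""
    return incremental_cost / qaly_gain

# $2000/year incremental cost over standard treatment
gains = {"lawrence_sf12": 0.065,   # hypothetical mean QALY gain
         "brazier_sf36": 0.0315}   # hypothetical mean QALY gain
ratios = {name: icur(2000.0, g) for name, g in gains.items()}
ranking = sorted(ratios, key=ratios.get)       # lowest $/QALY first
```

With these gains the two ICURs come out near the $30,769 and $63,492 endpoints reported for the asthma cohort, showing how a roughly two-fold difference in estimated utility gain doubles the ICUR.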
Energy Technology Data Exchange (ETDEWEB)
Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)
2013-02-15
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method has been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
Ervik, Åsmund; Müller, Bernhard
2014-01-01
To leverage the last two decades' transition in High-Performance Computing (HPC) towards clusters of compute nodes bound together with fast interconnects, a modern scalable CFD code must be able to efficiently distribute work amongst several nodes using the Message Passing Interface (MPI). MPI can enable very large simulations running on very large clusters, but it is necessary that the bulk of the CFD code be written with MPI in mind, an obstacle to parallelizing an existing serial code. In this work we present the results of extending an existing two-phase 3D Navier-Stokes solver, which was completely serial, to a parallel execution model using MPI. The 3D Navier-Stokes equations for two immiscible incompressible fluids are solved by the continuum surface force method, while the location of the interface is determined by the level-set method. We employ the Portable Extensible Toolkit for Scientific Computing (PETSc) for domain decomposition (DD) in a framework where only a fraction of the code needs to be a...
Yi, Xi; Wang, Xin; Chen, Weiting; Wan, Wenbo; Zhao, Huijuan; Gao, Feng
2014-05-01
The common approach to diffuse optical tomography is to solve a nonlinear and ill-posed inverse problem using a linearized iteration process that involves repeated use of the forward and inverse solvers on an appropriately discretized domain of interest. This scheme normally imposes severe computation and storage burdens in applications to large-sized tissues, such as breast tumor diagnosis and brain functional imaging, and prevents the use of matrix-based linear inversions for improved image quality. To cope with these difficulties, we propose in this paper a parallelized full domain-decomposition scheme, which divides the whole domain into several overlapping subdomains and solves the corresponding sub-inversions independently within the framework of Schwarz-type iterations, supported by a combined multicore CPU and multithread graphics processing unit (GPU) parallelization strategy. The numerical and phantom experiments both demonstrate that the proposed method can effectively reduce the computation time and memory occupation for large-sized problems and improve the quantitative performance of the reconstruction.
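The Schwarz-type iteration referred to above alternates subdomain solves, each using the latest values from the overlapping neighbor as boundary data. As a minimal, self-contained illustration (a toy 1D Laplace problem with two overlapping subdomains and direct tridiagonal solves, not the optical tomography inversion), the scheme looks like this:

```python
# Overlapping (multiplicative) Schwarz iteration on -u'' = 0, u(0)=0,
# u(1)=1, whose exact solution is u(x)=x. Each subdomain is solved
# exactly with the Thomas algorithm, using boundary data taken from the
# current global iterate in the overlap region.

def thomas(a, b, c, d):
    """Tridiagonal solve; a = sub-, b = main-, c = super-diagonal."""
    n = len(d)
    b, d = b[:], d[:]
    for i in range(1, n):                      # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def schwarz_laplace(N=20, overlap=4, sweeps=50):
    u = [0.0] * (N + 1)
    u[N] = 1.0
    mid = N // 2
    lo2, hi1 = mid - overlap // 2, mid + overlap // 2
    for _ in range(sweeps):
        # subdomain 1: unknowns 1..hi1, right boundary data from u[hi1+1]
        n1 = hi1
        u[1:hi1 + 1] = thomas([0.0] + [-1.0] * (n1 - 1), [2.0] * n1,
                              [-1.0] * (n1 - 1) + [0.0],
                              [u[0]] + [0.0] * (n1 - 2) + [u[hi1 + 1]])
        # subdomain 2: unknowns lo2..N-1, left boundary data from u[lo2-1]
        n2 = N - lo2
        u[lo2:N] = thomas([0.0] + [-1.0] * (n2 - 1), [2.0] * n2,
                          [-1.0] * (n2 - 1) + [0.0],
                          [u[lo2 - 1]] + [0.0] * (n2 - 2) + [u[N]])
    return u

u = schwarz_laplace()
```

In the paper's additive (parallel) variant the subdomain solves within a sweep are independent, which is what the CPU/GPU parallelization exploits; the overlap width controls the convergence rate in both variants.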
Directory of Open Access Journals (Sweden)
Hengkai Guo
Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, Two-step Auto-labeling Conditional Iterative Closest Points (TACICP), is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR) imaging. Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain a robust rigid transformation and label configurations. The labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are then introduced to handle non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm achieves an average registration error of less than 0.2 mm with no failure case, which is superior to state-of-the-art feature-based methods.
Yuan, Wu; Kut, Carmen; Liang, Wenxuan; Li, Xingde
2017-03-01
Cancer is known to alter the local optical properties of tissues. The detection of OCT-based optical attenuation provides a quantitative method to efficiently differentiate cancer from non-cancer tissues. In particular, the intraoperative use of quantitative OCT is able to provide direct visual guidance in real time for accurate identification of cancer tissues, especially those without any obvious structural layers, such as brain cancer. However, current methods are suboptimal in providing high-speed and accurate OCT attenuation mapping for intraoperative brain cancer detection. In this paper, we report a novel frequency-domain (FD) algorithm to enable robust and fast characterization of optical attenuation as derived from OCT intensity images. The performance of this FD algorithm was compared with traditional fitting methods by analyzing datasets containing images from freshly resected human brain cancer and from a silica phantom acquired by a 1310 nm swept-source OCT (SS-OCT) system. With graphics processing unit (GPU)-based CUDA C/C++ implementation, this new attenuation mapping algorithm can offer robust and accurate quantitative interpretation of OCT images in real time during brain surgery.
Sweeney, Timothy E; Chen, Albert C; Gevaert, Olivier
2015-11-19
In order to discover new subsets (clusters) of a data set, researchers often use algorithms that perform unsupervised clustering, namely, the algorithmic separation of a dataset into some number of distinct clusters. Deciding whether a particular separation (or number of clusters, K) is correct is a sort of 'dark art', with multiple techniques available for assessing the validity of unsupervised clustering algorithms. Here, we present a new technique for unsupervised clustering that uses multiple clustering algorithms, multiple validity metrics, and progressively bigger subsets of the data to produce an intuitive 3D map of cluster stability that can help determine the optimal number of clusters in a data set, a technique we call COmbined Mapping of Multiple clUsteriNg ALgorithms (COMMUNAL). COMMUNAL locally optimizes algorithms and validity measures for the data being used. We show its application to simulated data with a known K, and then apply this technique to several well-known cancer gene expression datasets, showing that COMMUNAL provides new insights into clustering behavior and stability in all tested cases. COMMUNAL is shown to be a useful tool for determining K in complex biological datasets, and is freely available as a package for R.
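The core loop behind any "choose K" procedure is to run a clustering algorithm for several values of K and score each separation with a validity measure. COMMUNAL combines multiple algorithms, metrics, and data subsets; the sketch below uses a single algorithm (a tiny 1D k-means) and a single metric (within-cluster sum of squares) purely to show the scan, and is not the COMMUNAL method itself.

```python
# Scan candidate K values with one clustering algorithm and one
# validity metric. For well-separated data, the metric drops sharply
# at the true K and then plateaus.

def kmeans_1d(xs, k, iters=50):
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * i / max(k - 1, 1) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

def wss(groups, centers):
    """Within-cluster sum of squares: lower means tighter clusters."""
    return sum((x - c) ** 2 for g, c in zip(groups, centers) for x in g)

data = [1.0, 1.1, 1.2, 10.0, 10.1, 10.2]       # two well-separated clusters
scores = {}
for k in (1, 2, 3):
    centers, groups = kmeans_1d(data, k)
    scores[k] = wss(groups, centers)
```

Here the score collapses going from K=1 to K=2 and barely improves at K=3, the "elbow" signature; COMMUNAL's contribution is to make this judgment robust by mapping such scores across algorithms, metrics, and subset sizes.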
Energy Technology Data Exchange (ETDEWEB)
Salazar A, Daniel E. [Division de Computacion Evolutiva (CEANI), Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Universidad de Las Palmas de Gran Canaria. Canary Islands (Spain)]. E-mail: danielsalazaraponte@gmail.com; Rocco S, Claudio M. [Universidad Central de Venezuela, Facultad de Ingenieria, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve
2007-06-15
This paper extends the approach proposed by the second author in [Rocco et al. Robust design using a hybrid-cellular-evolutionary and interval-arithmetic approach: a reliability application. In: Tarantola S, Saltelli A, editors. SAMO 2001: Methodological advances and useful applications of sensitivity analysis. Reliab Eng Syst Saf 2003;79(2):149-59 [special issue
Robust Ranking Algorithms for One-class Collaborative Filtering
Institute of Scientific and Technical Information of China (English)
李改; 李磊
2015-01-01
The problem of ranking for one-class collaborative filtering (OCCF) is a research focus. One drawback of the existing ranking algorithms for OCCF is noise sensitivity: noisy training data can strongly influence the training process and degrade the accuracy of the algorithm. In this paper, to address this noise sensitivity, we propose two robust ranking algorithms for OCCF based on pairwise sigmoid and fidelity loss functions, which are flexible and can easily be adopted by the popular matrix factorization (MF) model and the K-nearest-neighbor (KNN) model. We use stochastic gradient descent with bootstrap sampling to optimize the two robust ranking algorithms. Experimental results on three practical datasets containing a large amount of noisy data show that the proposed algorithms outperform several state-of-the-art ranking algorithms for OCCF in terms of different evaluation metrics.
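The building block of such a pairwise-loss ranker is a stochastic-gradient step that pushes an observed item's score above an unobserved item's score for the same user. The sketch below implements one descent step on a pairwise sigmoid loss over a matrix-factorization model; the exact loss weighting, sampling scheme, and hyperparameters of the paper's algorithms differ, so treat this as an illustrative assumption.

```python
# One SGD step on the pairwise sigmoid loss L = sigmoid(-(x_ui - x_uj)),
# where x_ui is the MF score for user u and item i. Bounded losses of
# this kind are what give the method its robustness to noisy pairs.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_pair_step(P, Q, u, i, j, lr=0.1, reg=0.01):
    """Push observed item i above unobserved item j for user u."""
    x = sum(pu * (qi - qj) for pu, qi, qj in zip(P[u], Q[i], Q[j]))
    s = sigmoid(-x)
    g = s * (1.0 - s)                          # equals -dL/dx
    for f in range(len(P[u])):
        pu, qi, qj = P[u][f], Q[i][f], Q[j][f]
        P[u][f] += lr * (g * (qi - qj) - reg * pu)
        Q[i][f] += lr * (g * pu - reg * qi)
        Q[j][f] += lr * (-g * pu - reg * qj)

P = [[0.1, -0.05]]                   # one user, two latent factors
Q = [[0.05, 0.1], [-0.1, 0.02]]      # item 0 observed, item 1 unobserved
for _ in range(500):                 # in the full method, (u, i, j) triples
    sgd_pair_step(P, Q, 0, 0, 1)     # are bootstrap-sampled from the data
score = lambda u, i: sum(a * b for a, b in zip(P[u], Q[i]))
```

Because the sigmoid loss saturates for badly violated pairs, a single noisy observation contributes a bounded gradient, unlike unbounded losses such as the raw hinge or squared error.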
Fast and Robust LOG-FAST Corner Algorithm
Institute of Scientific and Technical Information of China (English)
梁艳菊; 李庆; 陈大鹏; 颜学究
2012-01-01
Based on the time-efficient FAST algorithm, this paper describes a fast and robust LOG-FAST corner algorithm. Histogram equalization, a form of image enhancement, is first applied to the original image to sharpen useful image information and improve illumination invariance. The image is then convolved with a Laplacian-of-Gaussian operator to achieve Gaussian smoothing and edge enhancement while maximally suppressing noise. Finally, the FAST algorithm is applied to produce LOG-FAST corners. The new corner detector not only retains the time efficiency of FAST, but is also illumination invariant, noise tolerant, and robust. Experiments show that the LOG-FAST algorithm achieves a detection time of 0.05 s on noisy 640 x 480 images, comparable detection performance on images with varying illumination, and a repeatability of 98 percent. Owing to this performance, the LOG-FAST algorithm can be used in real-time video processing applications such as intelligent vehicle warning systems.
Dingle, Nicole M; Harris, Michael T
2005-06-15
The pendant and sessile drop profile analysis using the finite element method (PSDA-FEM) is an algorithm which allows simultaneous determination of the interfacial tension (gamma) and contact angle (theta(c)) from sessile drop profiles. The PSDA-FEM algorithm solves the nonlinear second-order spherical coordinate form of the Young-Laplace equation. Thus, the boundary conditions at the drop apex and contact position of the drop with the substrate are required to solve for the drop profile coordinates. The boundary condition at the position where the drop contacts the substrate may be specified as a fixed contact line or fixed contact angle. This paper will focus on the fixed contact angle boundary condition for sessile drops on a substrate and how this boundary condition is used in the PSDA-FEM curve-fitting algorithm. The PSDA-FEM algorithm has been tested using simulated drop shapes with and without the addition of random error to the drop profile coordinates. The random error is varied to simulate the effect of camera resolution on the estimates of gamma and theta(c) values obtained from the curve-fitting algorithm. The error in the experimental values for gamma from sessile drops of water on acrylic and Mazola corn oil on acrylic falls within the predicted range of errors obtained for gamma values from simulated sessile drop profiles with randomized errors that are comparable in magnitude to the resolution of the experimental setup.
Munhoven, G.
2013-08-01
The total alkalinity-pH equation, which relates total alkalinity and pH for a given set of total concentrations of the acid-base systems that contribute to total alkalinity in a given water sample, is reviewed and its mathematical properties established. We prove that the equation function is strictly monotone and always has exactly one positive root. Different commonly used approximations are discussed and compared. An original method to derive appropriate initial values for the iterative solution of the cubic polynomial equation based upon carbonate-borate-alkalinity is presented. We then review different methods that have been used to solve the total alkalinity-pH equation, with a main focus on biogeochemical models. The shortcomings and limitations of these methods are made out and discussed. We then present two variants of a new, robust and universally convergent algorithm to solve the total alkalinity-pH equation. This algorithm does not require any a priori knowledge of the solution. SolveSAPHE (Solver Suite for Alkalinity-PH Equations) provides reference implementations of several variants of the new algorithm in Fortran 90, together with new implementations of other, previously published solvers. The new iterative procedure is shown to converge from any starting value to the physical solution. The extra computational cost for the convergence security is only 10-15% compared to the fastest algorithm in our test series.
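A universally convergent solve of the kind described above can be illustrated with bisection on a reduced alkalinity equation. The sketch below keeps only the carbonate and water contributions and uses round, illustrative dissociation constants; it shows why bracketing guarantees convergence from any start, but it is not the SolveSAPHE algorithm, which handles the full set of acid-base systems and converges faster.

```python
# Safeguarded (log-space bisection) solve of a reduced total
# alkalinity-pH equation: Alk(h) = carbonate alkalinity + [OH-] - [H+].
# Alk(h) is strictly decreasing in h = [H+], so exactly one root is
# bracketed and the iteration cannot fail.

import math

K1, K2, KW = 1.0e-6, 7.9e-10, 1.0e-14   # illustrative constants, not a
                                        # recommended seawater set

def alk(h, dic):
    """Total alkalinity as a function of [H+], carbonate + water only."""
    carb = dic * (K1 * h + 2.0 * K1 * K2) / (h * h + K1 * h + K1 * K2)
    return carb + KW / h - h

def solve_ph(alk_target, dic, h_lo=1e-12, h_hi=1e-2, iters=100):
    for _ in range(iters):
        h_mid = (h_lo * h_hi) ** 0.5           # bisect in log space
        if alk(h_mid, dic) > alk_target:
            h_lo = h_mid                       # root lies at higher [H+]
        else:
            h_hi = h_mid
    return -math.log10((h_lo * h_hi) ** 0.5)

ph = solve_ph(2.2e-3, 2.0e-3)   # Alk = 2.2 meq/kg, DIC = 2.0 mmol/kg
```

Bisection needs no starting guess near the root, which is exactly the "no a priori knowledge of the solution" property claimed for the new algorithm; the paper's scheme achieves the same guarantee with far fewer function evaluations.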
Directory of Open Access Journals (Sweden)
Ren Zhi Ying.
2014-04-01
The dual-tree complex wavelet transform (DT-CWT) exhibits shift invariance, directional selectivity, perfect reconstruction (PR), and limited redundancy, and can effectively separate various surface components. However, at the nano scale the morphology contains pits and convexities and is more complex to characterize. This paper presents an improved approach which can simultaneously separate reference and waviness components while remaining robust against abnormal signals. We include a bilateral filtering (BF) stage in the DT-CWT to address these imaging problems. To verify the feasibility of the new method and test its performance, we used a computer simulation based on three generations of wavelets and the improved DT-CWT, and conducted two case studies. Our results show that the improved DT-CWT not only enhances filtering robustness under abnormal interference, but also accurately and reliably recovers the reference and waviness components of 3-D nano-scale surfaces.
2014-01-01
afloat as a surface warfare officer trained in naval nuclear propulsion, including Assistant Reactor Officer on USS ENTERPRISE (CVN-65) and Chief Staff...integer sequences, were formulated based on coprime modular systems. Symmetrical number systems include the symmetrical number system (SNS), the optimum...real number x we write log x for the maximum between 2 and the natural logarithm of x. II. ROBUST SYMMETRICAL NUMBER SYSTEM The RSNS is a modular based
Hoogerheide, L.F.; Opschoor, A.; van Dijk, Nico M.
2012-01-01
This discussion paper was published in the Journal of Econometrics (2012). Vol. 171(2), 101-120. A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of...
An O(n log n) Version of the Averbakh-Berman Algorithm for the Robust Median of a Tree
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Georgiadis, Loukas; Katriel, Irit
2008-01-01
We show that the minmax regret median of a tree can be found in O(n log n) time. This is obtained by a modification of Averbakh and Berman's O(n log² n)-time algorithm: we design a dynamic solution to their bottleneck subproblem of finding the middle of every root-leaf path in a tree.
A robust algorithm for time-varying parameter estimation
Institute of Scientific and Technical Information of China (English)
夏传良
2001-01-01
Time-varying parameter estimation is very important for the control of dynamic systems. For a general model, Z. G. Han proposed a time-varying parameter estimation algorithm which, under certain conditions, has good properties, but it does not consider robustness. For another model, Goodwin proposed a projection algorithm with a dead zone, and the dead zone is what gives that algorithm its robustness. Based on Goodwin's dead-zone projection algorithm, and by introducing a dead zone into Z. G. Han's dynamic-system parameter estimation algorithm, a new algorithm is obtained. The new algorithm both tracks the time-varying parameters of the dynamic system and possesses robustness, and under certain conditions it also has a fast tracking property.
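The dead-zone mechanism can be sketched in a few lines: the estimate is updated by a normalized projection step only when the prediction error exceeds a bound on the disturbance, and is frozen otherwise, so bounded noise cannot drive the estimate away. The scalar-regressor setup, dead-zone width, and data below are illustrative, not the specific algorithms of Han or Goodwin.

```python
# Dead-zone projection update for y = phi . theta + disturbance, with
# |disturbance| <= delta. Inside the dead zone the estimate is frozen,
# which is the source of the scheme's robustness.

def dead_zone_step(theta, phi, y, delta):
    e = y - sum(p * t for p, t in zip(phi, theta))   # prediction error
    if abs(e) <= delta:                        # plausibly pure noise: freeze
        return theta
    denom = 1.0 + sum(p * p for p in phi)      # normalized projection step
    return [t + p * e / denom for t, p in zip(theta, phi)]

# True parameters [2.0, -1.0]; disturbances bounded by 0.05.
true_theta = [2.0, -1.0]
theta = [0.0, 0.0]
data = [([1.0, 0.5], 0.04), ([0.3, 1.0], -0.05), ([1.0, -1.0], 0.02)] * 50
for phi, d in data:
    y = sum(p * t for p, t in zip(phi, true_theta)) + d
    theta = dead_zone_step(theta, phi, y, delta=0.05)
```

With persistently exciting regressors the estimate converges into a neighborhood of the true parameters whose size is set by the dead-zone width, after which most updates are skipped.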
Dashora, Nirvikar
2012-07-01
require an immediate solution to attack this problem. Hence, an alternative approach is chosen in which TEC depletions are ignored for GIVE estimation. This approach requires further attention to accommodate it in the processing software for a near-real-time solution for the concerned user in the Indian zone. Nonetheless, as a prime concern, to preclude a particular satellite link affected by TEC depletion, a reference receiver or user requires an algorithm that can compute the TEC and detect depletions in TEC in near real time. To answer this, a novel TEC depletion detector algorithm and software has been developed which can be used for any SBAS in India. The algorithm was initially tested on recorded data from ground-based dual-frequency GPS receivers of the GAGAN project. Data from 18-20 stations with a 30-second sampling interval were obtained for the years 2004 and 2005. The algorithm has been tuned to the Indian ionosphere and shows great success in detecting TEC depletions with minimal false alarms. This is because of a specific property of the algorithm: it rejects the smooth fall in TEC in the post-sunset ionosphere. Depletions in TEC are characterized by a sudden fall and immediate recovery in the level of TEC for a given line of sight. Since our algorithm extracts only such signatures and thereby minimizes false alarms, it may reduce the burden on operational systems. We present this algorithm in detail. Another important facet of this algorithm is its scientific use in the automatic analysis of large amounts of continuous GPS data. We have analyzed the aforementioned data with a MATLAB-based script and obtained significant statistical results. The temporal duration and depth of TEC depletions were obtained over the entire Indian region, providing new insight into the phenomena of EPBs and TEC depletions.
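The signature described above, a sudden fall in TEC followed by an immediate recovery, with smooth post-sunset declines rejected, can be sketched as a simple series scan. The thresholds, window length, and toy series below are illustrative assumptions, not the tuned GAGAN values or the paper's actual detector.

```python
# Toy TEC-depletion detector: flag a sample only when TEC drops sharply
# relative to the previous sample AND partially recovers within a short
# window. A smooth monotonic decline never trips either condition.

def detect_depletions(tec, drop=3.0, window=5):
    """Return indices where TEC falls by more than `drop` TECU and
    recovers by at least drop/2 within `window` samples."""
    events = []
    for i in range(1, len(tec) - 1):
        fall = tec[i - 1] - tec[i]
        if fall > drop:                        # sudden fall...
            recovered = any(tec[j] > tec[i] + drop / 2.0
                            for j in range(i + 1,
                                           min(i + 1 + window, len(tec))))
            if recovered:                      # ...with immediate recovery
                events.append(i)
    return events

smooth_decline = [30 - 0.5 * k for k in range(20)]           # post-sunset fall
depletion = [30.0] * 8 + [22.0, 21.5, 29.0] + [30.0] * 8     # one bubble
```

Requiring the recovery leg is what suppresses false alarms on the smooth post-sunset decline, the property the abstract singles out.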
Institute of Scientific and Technical Information of China (English)
方勇; 张烨
2008-01-01
In underdetermined blind source separation, more sources are to be estimated from fewer observed mixtures, without knowing either the source signals or the mixing matrix. This paper presents a robust clustering algorithm for underdetermined blind separation of sparse sources with an unknown number of sources in the presence of noise. It uses the robust competitive agglomeration (RCA) algorithm to estimate the source number and the mixing matrix, and the source signals are then recovered by interior-point linear programming. Simulation results show good performance of the proposed algorithm for underdetermined blind source separation (UBSS).
Robust image steganography algorithm based on singular value
Institute of Scientific and Technical Information of China (English)
姚楚茂; 汤光明; 辜刚林
2015-01-01
An image steganography algorithm based on singular value decomposition (SVD) and matrix encoding is proposed, which improves the robustness and imperceptibility of the data-hiding method. After analyzing the stability of the singular values and using EMD (exploiting modification direction) encoding, secret bits are embedded into the singular values obtained from the singular value decomposition of image blocks. The stability of the singular values ensures the robustness of the algorithm, and the EMD encoding method improves its embedding efficiency. Experimental results show that the algorithm introduces little embedding distortion and has better robustness than the original method. It can be applied to image steganography in noisy environments.
Mallick, Rajnish; Ganguli, Ranjan; Seetharama Bhat, M.
2015-09-01
The objective of this study is to determine an optimal trailing edge flap configuration and flap location to achieve minimum hub vibration levels and flap actuation power simultaneously. An aeroelastic analysis of a soft in-plane four-bladed rotor is performed in conjunction with optimal control. A second-order polynomial response surface based on an orthogonal array (OA) with 3-level design describes both the objectives adequately. Two new orthogonal arrays called MGB2P-OA and MGB4P-OA are proposed to generate nonlinear response surfaces with all interaction terms for two and four parameters, respectively. A multi-objective bat algorithm (MOBA) approach is used to obtain the optimal design point for the mutually conflicting objectives. MOBA is a recently developed nature-inspired metaheuristic optimization algorithm that is based on the echolocation behaviour of bats. It is found that MOBA inspired Pareto optimal trailing edge flap design reduces vibration levels by 73% and flap actuation power by 27% in comparison with the baseline design.
Heuristic algorithm for incorporating robustness into airline fleet planning
Institute of Scientific and Technical Information of China (English)
汪瑜; 孙宏
2013-01-01
Traditional airline fleet planning methods cannot reflect the robustness of the fleet composition. To address this shortcoming for airlines operating a single-base linear route structure, this paper takes the minimum number of aircraft types deployed at the base airport as the objective, subject to flight-pairing fleet assignment cost constraints, flight-pairing fleet assignment uniqueness constraints, and constraints on the minimum number of aircraft of each selected type, so as to incorporate robustness into the fleet planning model. Combined with the requirement of only one competitive aircraft type in the desired fleet composition, a simulated annealing based heuristic algorithm is designed for the model. An empirical example with 39 flight pairings and 6 candidate aircraft types shows that the fleet composition derived from the traditional planning method uses three aircraft types while the proposed algorithm uses only two, and the resulting fleet adapts well to market fluctuations, so the algorithm is feasible.
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Directory of Open Access Journals (Sweden)
Eusebio Eduardo Hernández Martinez
2013-01-01
Full Text Available In robotics, solving the direct kinematics problem (DKP for parallel robots is very often more difficult and time consuming than for their serial counterparts. The problem is stated as follows: given the joint variables, the Cartesian variables should be computed, namely the pose of the mobile platform. Most of the time, the DKP requires solving a non‐linear system of equations. In addition, given that the system could be non‐convex, Newton or Quasi‐Newton (Dogleg based solvers get trapped on local minima. The capacity of such kinds of solvers to find an adequate solution strongly depends on the starting point. A well‐known problem is the selection of such a starting point, which requires a priori information about the neighbouring region of the solution. In order to circumvent this issue, this article proposes an efficient method to select and to generate the starting point based on probabilistic learning. Experiments and discussion are presented to show the method performance. The method successfully avoids getting trapped on local minima without the need for human intervention, which increases its robustness when compared with a single Dogleg approach. This proposal can be extended to other structures, to any non‐linear system of equations, and of course, to non‐linear optimization problems.
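The role of the starting point can be seen even in a one-variable Newton iteration. The sketch below stands in for the learned prior with a simple smallest-residual choice among candidate starts; the residual function and the candidate set are hypothetical:

```python
def newton(f, df, x0, tol=1e-12, itmax=50):
    """Plain Newton iteration; which root it finds depends on x0."""
    x = x0
    for _ in range(itmax):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# hypothetical residual with two roots at +/- sqrt(2)
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

# stand-in for the learned prior: pick the candidate start with the
# smallest residual instead of a human-chosen initial guess
candidates = [-3.0, -0.5, 0.7, 3.0]
start = min(candidates, key=lambda c: abs(f(c)))
root = newton(f, df, start)
```

Any mechanism that ranks candidate starts, here a residual check, there a probabilistic model, plays the same role: it removes the human-in-the-loop choice of the initial guess.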
Gallenne, A; Kervella, P; Monnier, J D; Schaefer, G H; Baron, F; Breitfelder, J; Bouquin, J B Le; Roettenbacher, R M; Gieren, W; Pietrzynski, G; McAlister, H; Brummelaar, T ten; Sturmann, J; Sturmann, L; Turner, N; Ridgway, S; Kraus, S
2015-01-01
Long-baseline interferometry is an important technique to spatially resolve binary or multiple systems in close orbits. By combining several telescopes together and spectrally dispersing the light, it is possible to detect faint components around bright stars. Aims. We provide a rigorous and detailed method to search for high-contrast companions around stars, determine the detection level, and estimate the dynamic range from interferometric observations. We developed the code CANDID (Companion Analysis and Non-Detection in Interferometric Data), a set of Python tools that allows us to search systematically for point-source, high-contrast companions and estimate the detection limit. The search procedure is made on an N × N grid of fits, whose minimum needed resolution is estimated a posteriori. It includes a tool to estimate the detection level of the companion in numbers of sigma. The code CANDID also incorporates a robust method to set a 3σ detection limit on the flux ratio, which is based on an a...
Improved lossless robust image watermarking algorithm based on block difference
Institute of Scientific and Technical Information of China (English)
尚冠宇; 韩万兵; 郭凡新; 邓小鸿
2013-01-01
To address the shortcomings of existing lossless robust watermarking algorithms based on image block differences, an improved algorithm based on Huffman coding and K-means clustering is proposed. In the embedding procedure, Huffman coding is used to reduce the watermark overhead, such as the marked information of image blocks, thereby increasing the effective embedding capacity. In the extraction procedure, K-means clustering resolves the possible overlap between the 1-bit zone and the 0-bit zone, improving extraction accuracy. Experimental results show that, compared with existing related algorithms, the proposed algorithm has clear advantages in embedding capacity and robustness.
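The overlap-resolution step can be illustrated with a tiny two-cluster 1-D k-means, here applied to hypothetical block-difference values; the Huffman stage is omitted:

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means, used to separate 0-bit and 1-bit difference zones."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

diffs = [2, 3, 4, 40, 42, 44, 3, 41]      # hypothetical block differences
c0, c1 = kmeans_1d(diffs)
threshold = (c0 + c1) / 2                 # bit = 1 when a difference exceeds this
```

Clustering the observed differences, rather than using a fixed cutoff, is what lets the extractor adapt when the two zones drift toward each other after an attack.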
Directory of Open Access Journals (Sweden)
Sid-Ahmed Selouani
2003-07-01
Full Text Available Limiting the decrease in performance due to acoustic environment changes remains a major challenge for continuous speech recognition (CSR) systems. We propose a novel approach which combines the Karhunen-Loève transform (KLT) in the mel-frequency domain with a genetic algorithm (GA) to enhance the data representing corrupted speech. The idea consists of projecting noisy speech parameters onto the space generated by the genetically optimized principal axes issued from the KLT. The enhanced parameters increase the recognition rate for highly interfering noise environments. The proposed hybrid technique, when included in the front-end of an HTK-based CSR system, outperforms the conventional recognition process in severe interfering car noise environments for a wide range of signal-to-noise ratios (SNRs) varying from 16 dB to −4 dB. We also showed the effectiveness of the KLT-GA method in recognizing speech subject to telephone channel degradations.
Selouani, Sid-Ahmed; O'Shaughnessy, Douglas
2003-12-01
Limiting the decrease in performance due to acoustic environment changes remains a major challenge for continuous speech recognition (CSR) systems. We propose a novel approach which combines the Karhunen-Loève transform (KLT) in the mel-frequency domain with a genetic algorithm (GA) to enhance the data representing corrupted speech. The idea consists of projecting noisy speech parameters onto the space generated by the genetically optimized principal axis issued from the KLT. The enhanced parameters increase the recognition rate for highly interfering noise environments. The proposed hybrid technique, when included in the front-end of an HTK-based CSR system, outperforms that of the conventional recognition process in severe interfering car noise environments for a wide range of signal-to-noise ratios (SNRs) varying from 16 dB to −4 dB. We also showed the effectiveness of the KLT-GA method in recognizing speech subject to telephone channel degradations.
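A minimal sketch of the KLT step follows, assuming synthetic feature vectors in place of real mel-frequency parameters and omitting the genetic optimization of the axes, i.e. it is a plain principal-axis projection, not the full KLT-GA method:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical feature vectors (frames x mel coefficients), two dominant directions
frames = rng.normal(size=(200, 12)) @ np.diag([5.0, 3.0] + [0.3] * 10)

# Karhunen-Loeve transform: eigenvectors of the feature covariance matrix
cov = np.cov(frames, rowvar=False)
w, v = np.linalg.eigh(cov)
axes = v[:, np.argsort(w)[::-1][:2]]   # the two leading principal axes

# project the noisy parameters onto the retained subspace
enhanced = frames @ axes @ axes.T
```

Discarding the trailing axes removes the low-variance directions where additive noise dominates, which is the enhancement effect the GA then tunes.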
Shabani, Hamed; Vahidi, Behrooz; Ebrahimpour, Majid
2013-01-01
A new PID controller for resistant differential control against load disturbance is introduced that can be used for load frequency control (LFC) applications. The parameters of the controller have been specified using the imperialist competitive algorithm (ICA). Load disturbance, which is due to continuous and rapid changes of small loads, is always a problem for load frequency control of power systems. This paper introduces a new method, based on a filtering technique, that eliminates the effect of this kind of disturbance. The objective is frequency regulation in each area of the power system and the reduction of power transfer between control areas, so the parameters of the proposed controller have been specified over a wide range of load changes by means of ICA to achieve the best dynamic frequency response. To evaluate the effectiveness of the proposed controller, a three-area power system is simulated in MATLAB/SIMULINK. Each area has different generation units and therefore uses controllers with different parameters. Finally, a comparison between the proposed controller and two other prevalent PI controllers, optimized by GA and neural networks, demonstrates the advantages of this controller over the others. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Y. Srinivas
2012-09-01
Full Text Available The applications of intelligent techniques have increased exponentially in recent days to study most of the non-linear parameters. In particular, the behavior of the earth resembles such non-linear applications. An efficient tool is needed for the interpretation of geophysical parameters to study the subsurface of the earth. Artificial Neural Networks (ANN) perform certain tasks if the structure of the network is modified accordingly for the purpose it has been used. The three most robust networks were taken and comparatively analyzed for their performance to choose the appropriate network. The single-layer feed-forward neural network with the back propagation algorithm is chosen as one of the well-suited networks after comparing the results. Initially, certain synthetic data sets of all three-layer curves have been taken for training the network, and the network is validated by the field datasets collected from the Tuticorin Coastal Region (78°7′30″E and 8°48′45″N), Tamil Nadu, India. The interpretation has been done successfully using the corresponding learning algorithm in the present study. With proper training of back propagation networks, the method yields the resistivity and thickness of the subsurface layer model of the field resistivity data with respect to the synthetic data used earlier to train the appropriate network. The network is trained with more Vertical Electrical Sounding (VES) data, and this trained network is demonstrated with the field data. The groundwater table depth also has been modeled.
Robust NLOS Error Mitigation Algorithm Based on Neural Network
Institute of Scientific and Technical Information of China (English)
王建辉; 崔维嘉; 胡捍英
2011-01-01
A Non-Line-of-Sight (NLOS) error mitigation algorithm based on a Kalman filter and a neural network (NN) is proposed. According to the features of Time of Arrival (TOA) measurements and the statistical characteristics of NLOS errors, the condition under which the Kalman filter output is an unbiased estimate is derived; the NN is used to estimate the environment parameters appearing in this condition, thereby achieving NLOS error mitigation. Simulation results show that the algorithm performs well in both estimation accuracy and robustness.
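A scalar Kalman filter over ToA-like range measurements illustrates the filtering half of the method; the NN-estimated environment parameters are omitted, and the noise settings and 50 m "truth" are assumptions for the sketch:

```python
def kalman_1d(zs, q=1e-3, r=1.0):
    """Scalar Kalman filter for a near-constant state observed in noise."""
    x, p = zs[0], 1.0
    estimates = []
    for z in zs:
        p = p + q                 # predict: the state is (almost) constant
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# hypothetical ToA ranges oscillating around a 50 m truth
zs = [49.0, 51.0] * 100
est = kalman_1d(zs)
```

In the paper's setting, the bias of such a filter under NLOS errors is what the derived unbiasedness condition characterizes; the NN supplies the environment-dependent quantities in that condition.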
Directory of Open Access Journals (Sweden)
P. Hanappe
2011-09-01
Full Text Available We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations.
The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Element than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60 % of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
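The task-queue scheduling of independent air columns described above can be sketched with a thread pool; `radiate` is a hypothetical stand-in for the per-column radiation kernel, and the column data are synthetic:

```python
from concurrent.futures import ThreadPoolExecutor

def radiate(column):
    """Hypothetical stand-in for the per-column radiation computation."""
    return sum(x * x for x in column)

# one independent work item per air column, drawn from a shared queue
columns = [[float(i + j) for j in range(4)] for i in range(100)]
with ThreadPoolExecutor(max_workers=4) as pool:
    fluxes = list(pool.map(radiate, columns))
```

Because each column is independent, a pool of workers pulling from one queue balances the load automatically, which is the scheduling idea the paper substitutes for a fixed MPI domain decomposition.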
Object-Based Watermarking Algorithm Robust to Geometric Transformation Attacks
Institute of Scientific and Technical Information of China (English)
谌志鹏; 邹建成
2012-01-01
Object-based coding in the MPEG-4 standard makes it possible to access and manipulate objects within a video sequence, but this also raises copyright protection and multimedia authentication issues. An object-based watermarking algorithm is proposed that can still correctly extract the watermark after a video object is moved from one sequence into another. The algorithm corrects the rotation angle and scaling of the video object using generalized Radon transforms and embeds the watermark in selected quantized SA-DCT coefficients. Experiments show that the algorithm introduces little distortion, integrates well with the MPEG-4 codec, and resists geometric attacks such as rotation and scaling.
Institute of Scientific and Technical Information of China (English)
ZHANG Hong-mei
2015-01-01
In this paper, a modified additive Schwarz finite difference algorithm is applied to the heat conduction equation discretized by a compact difference scheme. The algorithm is based on domain decomposition and subspace correction. The basic idea is to introduce a partition of unity and to distribute the corrections reasonably over the overlap regions. A residual correction is carried out on each subspace, while the computation is completely parallel. Theoretical analysis shows that the method is fully parallel.
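A minimal sketch of an overlapping Schwarz iteration for the 1-D steady problem -u'' = f with zero boundary values follows; both subdomain solves use the same previous iterate, as in an additive method. The compact scheme, the partition of unity, and the parallel execution are omitted, and the grid size and overlap are illustrative:

```python
import numpy as np

n, h = 49, 1.0 / 50                     # interior points of -u'' = 1 on (0, 1)
f = np.ones(n)
u = np.zeros(n + 2)                     # u[0] = u[n+1] = 0 are the boundary values

# two overlapping subdomains (interior index ranges)
blocks = [list(range(1, 30)), list(range(21, n + 1))]

def solve_block(u, idx):
    """Exact tridiagonal solve on one subdomain, using current interface values."""
    m = len(idx)
    A = (np.diag(np.full(m, 2.0)) + np.diag(np.full(m - 1, -1.0), 1)
         + np.diag(np.full(m - 1, -1.0), -1))
    b = h * h * f[[i - 1 for i in idx]]
    b[0] += u[idx[0] - 1]               # left interface / boundary value
    b[-1] += u[idx[-1] + 1]             # right interface / boundary value
    return np.linalg.solve(A, b)

for _ in range(30):                     # Schwarz sweeps, both solves from the old iterate
    corrections = [solve_block(u, idx) for idx in blocks]
    for idx, c in zip(blocks, corrections):
        u[idx] = c                      # in the overlap the second block's values win

x = np.linspace(0.0, 1.0, n + 2)
exact = 0.5 * x * (1.0 - x)             # solution of -u'' = 1, u(0) = u(1) = 0
```

The interface error contracts geometrically with a rate set by the overlap width, which is why a generous overlap pays for its extra work.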
Directory of Open Access Journals (Sweden)
Tromeur-Dervout Damien
2013-12-01
Full Text Available This paper deals with the representation of the trace of iterative Schwarz solutions at the interfaces of a domain decomposition in order to approximate the interface error operator adaptively. This allows a cost-effective acceleration of the convergence of the iterative method by extending Aitken's convergence acceleration technique to the vectorial case. The first representation is based on building a nonuniform discrete Fourier transform defined on a non-regular grid. We show how to construct a Fourier basis of dimension N+1 on this grid by numerically building a sesquilinear form, establish its exact accuracy in representing trigonometric polynomials of degree N/2, and its spectral approximation property, which depends on the continuity of the function to approximate. The decay of the Fourier-like modes of the approximation of the trace of the iterative solution at the interfaces provides an estimate for adaptively selecting the modes involved in the acceleration. The drawback of this approach is its dependence on the continuity of the trace of the iterated solution at the interfaces. The second representation, purely algebraic, uses a singular value decomposition of the trace of the iterative solution at the interfaces to provide a set of orthogonal singular vectors whose associated singular values provide an estimate to adapt the acceleration. The resulting Aitken-Schwarz methodology is then applied to large-scale computing on a 3D linear Darcy flow where the permeability follows a log-normal random distribution.
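The scalar version of Aitken's delta-squared process, which the paper extends to vector-valued interface traces, can be sketched as:

```python
def aitken(seq):
    """Aitken's delta-squared extrapolation of a linearly converging sequence."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        d = x2 - 2 * x1 + x0
        out.append(x2 - (x2 - x1) ** 2 / d if d != 0 else x2)
    return out

# fixed-point iterates x_{k+1} = 0.5 * x_k + 1, converging linearly to 2
xs = [0.0]
for _ in range(6):
    xs.append(0.5 * xs[-1] + 1.0)
acc = aitken(xs)
```

For an iteration with an exactly constant contraction ratio, as here, the extrapolation recovers the limit from any three consecutive iterates, which is the property that makes it attractive for accelerating Schwarz interface traces.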
A Robust Steerable Pyramid-Based Image Matching Algorithm
Institute of Scientific and Technical Information of China (English)
张科; 王红梅; 李言俊
2005-01-01
Image matching is an active topic in computer vision and image processing. A steerable pyramid is used to overcome the sensitivity of the wavelet transform to image translation and rotation. The proposed algorithm considers rigid transformations between images. The approximate rotation angle is first computed by a ring-projection method at the lowest level of the image decomposition; interest points are then extracted as matching features at the remaining levels, and the Hausdorff distance is used to measure the similarity between the reference image and the input image to refine the matching result. Through this coarse-to-fine procedure, the final matching results are obtained. Experimental results show that the algorithm is accurate and robust.
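The similarity measure used for refinement can be sketched directly; the two point sets below stand in for hypothetical interest points extracted from the reference and input images:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(X, Y):
        # farthest any point of X is from its nearest neighbour in Y
        return max(min(d(p, q) for q in Y) for p in X)
    return max(directed(A, B), directed(B, A))

ref   = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
moved = [(0.1, 0.0), (1.0, 0.0), (0.0, 1.0)]
dist = hausdorff(ref, moved)
```

Unlike a one-to-one point correspondence, the Hausdorff distance tolerates missing and spurious interest points, which is what makes it suitable as a robust matching score.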
A Lagrange multiplier based divide and conquer finite element algorithm
Farhat, C.
1991-01-01
A novel domain decomposition method based on a hybrid variational principle is presented. Prior to any computation, a given finite element mesh is torn into a set of totally disconnected submeshes. First, an incomplete solution is computed in each subdomain. Next, the compatibility of the displacement field at the interface nodes is enforced via discrete, polynomial and/or piecewise polynomial Lagrange multipliers. In the static case, each floating subdomain induces a local singularity that is resolved very efficiently. The interface problem associated with this domain decomposition method is, in general, indefinite and of variable size. A dedicated conjugate projected gradient algorithm is developed for solving the latter problem when it is not feasible to explicitly assemble the interface operator. When implemented on local memory multiprocessors, the proposed methodology requires less interprocessor communication than the classical method of substructuring. It is also suitable for parallel/vector computers with shared memory and compares favorably with factorization based parallel direct methods.
Fuss, Franz Konstantin
2013-01-01
Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
Directory of Open Access Journals (Sweden)
Franz Konstantin Fuss
2013-01-01
Full Text Available Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.
Servin, Manuel; Garnica, Guillermo
2016-01-01
Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength PSA (DW-PSA) is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis presented here may be used for interferometric contouring of discontinuous industrial objects. DW-PSA may also be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, their sign...
Davis, Tyler W.; Prentice, I. Colin; Stocker, Benjamin D.; Thomas, Rebecca T.; Whitley, Rhys J.; Wang, Han; Evans, Bradley J.; Gallego-Sala, Angela V.; Sykes, Martin T.; Cramer, Wolfgang
2017-02-01
Bioclimatic indices for use in studies of ecosystem function, species distribution, and vegetation dynamics under changing climate scenarios depend on estimates of surface fluxes and other quantities, such as radiation, evapotranspiration and soil moisture, for which direct observations are sparse. These quantities can be derived indirectly from meteorological variables, such as near-surface air temperature, precipitation and cloudiness. Here we present a consolidated set of simple process-led algorithms for simulating habitats (SPLASH) allowing robust approximations of key quantities at ecologically relevant timescales. We specify equations, derivations, simplifications, and assumptions for the estimation of daily and monthly quantities of top-of-the-atmosphere solar radiation, net surface radiation, photosynthetic photon flux density, evapotranspiration (potential, equilibrium, and actual), condensation, soil moisture, and runoff, based on analysis of their relationship to fundamental climatic drivers. The climatic drivers include a minimum of three meteorological inputs: precipitation, air temperature, and fraction of bright sunshine hours. Indices, such as the moisture index, the climatic water deficit, and the Priestley-Taylor coefficient, are also defined. The SPLASH code is transcribed in C++, FORTRAN, Python, and R. A total of 1 year of results are presented at the local and global scales to exemplify the spatiotemporal patterns of daily and monthly model outputs along with comparisons to other model results.
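One of the quantities listed, daily top-of-the-atmosphere solar radiation, follows from standard formulas for solar declination and the sunset hour angle. The sketch below uses a common FAO-56-style formulation rather than SPLASH's exact code, so constants and the output should be taken as illustrative:

```python
import math

def toa_radiation(lat_deg, doy, gsc=1360.8):
    """Daily top-of-atmosphere solar radiation in J m-2 d-1 (FAO-56-style formula)."""
    lat = math.radians(lat_deg)
    decl = 0.409 * math.sin(2 * math.pi * doy / 365 - 1.39)       # solar declination
    dr = 1 + 0.033 * math.cos(2 * math.pi * doy / 365)            # distance factor
    cos_ws = max(-1.0, min(1.0, -math.tan(lat) * math.tan(decl)))
    ws = math.acos(cos_ws)                                        # sunset hour angle
    return (86400 / math.pi) * gsc * dr * (
        ws * math.sin(lat) * math.sin(decl)
        + math.cos(lat) * math.cos(decl) * math.sin(ws))

equator_equinox = toa_radiation(0.0, 80)   # roughly 3.7e7 J m-2 d-1
```

The clamping of `cos_ws` handles polar day and polar night, where the sun never sets or never rises and the integral degenerates.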
Robust Interference Alignment Algorithm Based on QR Decomposition
Institute of Scientific and Technical Information of China (English)
2015-01-01
Most interference alignment algorithms assume that the senders have perfect Channel State Information (CSI), but in practical communication systems the CSI is often erroneous due to channel estimation error, feedback delay, and other factors. A robust interference alignment algorithm based on QR decomposition is therefore presented. First, QR decomposition is used to preprocess the jointly received signal containing errors, eliminating half of the interference terms. The precoding matrices are then designed by minimizing the interference power leaked from each sender to the unintended receivers, and the interference suppression matrices are designed under the Minimum Mean Square Error (MMSE) criterion. Finally, simulations under both perfect and erroneous CSI verify that the proposed algorithm effectively improves system performance.
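The triangularizing effect of the QR preprocessing step can be sketched on a small hypothetical channel matrix; the error model, the precoder design, and the MMSE suppression stage are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(4, 4))        # hypothetical joint channel matrix
x = rng.normal(size=4)             # transmitted symbols
y = H @ x                          # jointly received signal

# QR preprocessing: multiplying by Q.T triangularizes the system, so each
# row of z mixes only the current stream with later ones, removing half
# of the cross terms before the precoder/suppression design
Q, R = np.linalg.qr(H)
z = Q.T @ y                        # equals R @ x up to round-off
x_hat = np.linalg.solve(R, z)      # back-substitution recovers the symbols
```

In the paper the same triangular structure is what allows half of the erroneous interference terms to be cancelled before the optimization begins.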
Institute of Scientific and Technical Information of China (English)
李春祥; 赵德奇; 蓝声宁
2015-01-01
Aiming at the stability problem of the LSSVM-LQR intelligent control algorithm for reducing seismic responses of structures, a stability controlling algorithm is proposed to ensure good robustness of the LSSVM-LQR intelligent control algorithm. The main idea is to impose a limitation on the control forces. If the control force limitation is satisfied, the control procedure continues to run; if it is not, the control procedure automatically exits and the stability controlling algorithm takes over instead. The whole procedure is referred to as the stable/robust LSSVM-LQR intelligent control algorithm, which ensures the stability of the system mainly by controlling actuator operation through adjusted feedback. Numerical results show that the developed stability controlling algorithm effectively guarantees the stability/robustness of the time-delay LSSVM-LQR intelligent control algorithm, and that the time-delay LSSVM-LQR algorithm and the stable/robust time-delay LSSVM-LQR algorithm complement each other in application.
Robust blind video watermarking algorithm based on DWT-DCT-SVD
Institute of Scientific and Technical Information of China (English)
陈玉麟; 梁栋; 张成; 鲍文霞
2015-01-01
This paper proposes a blind video watermarking algorithm based on DWT (discrete wavelet transform), DCT (discrete cosine transform), and SVD (singular value decomposition). Key frames are selected quickly using the color difference between the red and green channels of the video frames; the blue component of each key frame then undergoes a multi-level DWT. The selected sub-band is scrambled by an Arnold transform, and the watermark is embedded into the singular values of the scrambled sub-band. When the watermarked video is attacked, the small differences among the color channels of a color image and the stability of the singular value decomposition allow the green component of the watermarked key frames to replace the blue component of the original key frames, so the watermark can be extracted blindly. Experimental results demonstrate that the algorithm is robust against noise addition, filtering, cropping, frame scrambling, frame averaging, and MPEG (Moving Picture Experts Group) compression.
A Robust and Efficient Facial Feature Tracking Algorithm
Institute of Scientific and Technical Information of China (English)
黄琛; 丁晓青; 方驰
2012-01-01
Facial feature tracking obtains precise information about facial components beyond the coarse face position and moving track, and is important to computer vision. The active appearance model (AAM) is one of the most effective methods to describe facial feature point locations. However, its high-dimensional parameter space and gradient-descent optimization make it sensitive to initial parameters and prone to local minima, so AAM-based trackers cannot simultaneously handle large pose, illumination, and expression changes well. In the framework of a multi-view AAM, a real-time pose estimation algorithm is proposed that combines random forests and linear discriminant analysis (LDA) to estimate and update the head pose during tracking, effectively handling large pose changes in video. To improve robustness to variations in illumination and expression, a modified online appearance model (OAM) is proposed to evaluate tracking accuracy, and the AAM texture model is adaptively updated through incremental principal component analysis (PCA) learning, which greatly improves tracking stability and the model's ability to cope with illumination and expression changes. Experimental results show that the proposed algorithm performs well in accuracy, robustness, and real-time performance for facial feature point tracking in video.
Salehi, S.; Karami, M.; Fensholt, R.
2016-06-01
Lichens, the dominant autotrophs of polar and subpolar ecosystems, commonly encrust rock outcrops. Spectral mixing of lichens and bare rock can shift diagnostic spectral features of materials of interest, leading to misinterpretation and false positives if mapping is based on perfect spectral-matching methodologies. Therefore, the ability to distinguish lichen coverage from rock, and to decompose a mixed pixel into a collection of pure reflectance spectra, can improve the applicability of hyperspectral methods for mineral exploration. The objective of this study is to propose a robust lichen index that can be used to estimate lichen coverage regardless of the mineral composition of the underlying rocks. The performance of three index structures (ratio, normalized ratio, and subtraction) is investigated using synthetic linear mixtures of pure rock and lichen spectra with prescribed mixing ratios. Laboratory spectroscopic data are obtained from lichen-covered samples collected from the Karrat, Liverpool Land, and Sisimiut regions in Greenland. The spectra are then resampled to Hyperspectral Mapper (HyMAP) resolution in order to further investigate the functionality of the indices for the airborne platform. At both resolutions, a Pattern Search (PS) algorithm is used to identify the optimal band wavelengths and bandwidths for the lichen index. The band optimization procedure revealed that the ratio between R894-1246 and R1110 explains most of the variability in the hyperspectral data at the original laboratory resolution (R2=0.769), whereas the normalized index incorporating R1106-1121 and R904-1251 yields the best results at the HyMAP resolution (R2=0.765).
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique for interpolating the energy-loss function (ELF) in solids sampled from empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges: the sampled ELFs are usually very heterogeneous, can originate from various sources (so that "data gaps" can appear), and may contain significant discontinuities and multiple high outliers. As a result, an interpolation based on such data may not predict physically reasonable results. Reliable interpolation tools suitable for ELF applications should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log-log scaling transform of the data, by which the non-uniformity of the sampled data distribution is considerably reduced. The transformed data are then interpolated by the local monotonicity-preserving Steffen spline. The result is a piecewise-smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations; local extrema can occur only at grid points, where they are given by the data, and never between two adjacent grid points. The proposed technique gives the most accurate results, and its computational time is short, making it feasible to use this simple method for practical problems involving the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
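The Steffen interpolation step the abstract relies on can be sketched directly from Steffen's 1990 slope-limiting formula. The one-sided end conditions below are a simplification, and the authors' log-log scaling would amount to wrapping these calls in `np.log10` / `10**`.

```python
import numpy as np

def steffen_slopes(x, y):
    """Steffen (1990) monotonicity-preserving slopes at the data points."""
    h = np.diff(x)
    s = np.diff(y) / h
    d = np.zeros_like(y)
    for i in range(1, len(x) - 1):
        p = (s[i-1]*h[i] + s[i]*h[i-1]) / (h[i-1] + h[i])
        d[i] = (np.sign(s[i-1]) + np.sign(s[i])) * min(
            abs(s[i-1]), abs(s[i]), 0.5 * abs(p))
    d[0], d[-1] = s[0], s[-1]          # simple one-sided end conditions
    return d

def steffen_eval(x, y, xq):
    """Piecewise-cubic Hermite evaluation with Steffen slopes."""
    d = steffen_slopes(x, y)
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h, t = x[i+1] - x[i], xq - x[i]
    s = (y[i+1] - y[i]) / h
    a = (d[i] + d[i+1] - 2*s) / h**2
    b = (3*s - 2*d[i] - d[i+1]) / h
    return y[i] + d[i]*t + b*t**2 + a*t**3
```

On monotone data the interpolant stays monotone in each interval, which is exactly the "no spurious oscillations" property the abstract describes.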
International Conference on Robust Statistics
Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter
2003-01-01
Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.
Progressive refinement for robust image registration
Institute of Scientific and Technical Information of China (English)
Li Song; Yuanhua Zhou; Jun Zhou
2005-01-01
A new image registration algorithm with a robust cost function and progressive refinement estimation is developed on the basis of the direct method (DM). The robustness lies in M-estimation, which suppresses large local noise and outliers.
Some Massively Parallel Algorithms from Nature
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
We introduce work on parallel problem solvers from physics and biology being developed by the research team at the State Key Laboratory of Software Engineering, Wuhan University. Results on parallel solvers include the following areas: evolutionary algorithms, based on imitating the evolutionary processes of nature, for parallel problem solving, especially parallel optimization and model building; and asynchronous parallel algorithms based on domain decomposition, inspired by physical analogies such as the elastic relaxation process and the annealing process, for scientific computation, especially the solution of nonlinear mathematical physics problems. All these algorithms share the following common characteristics: inherent parallelism, self-adaptation, and self-organization, because the basic ideas of these solvers come from imitating natural evolutionary processes.
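As a minimal sketch of the evolutionary-algorithm idea mentioned above (a generic elitist Gaussian-mutation scheme, not the laboratory's actual solvers; the population size, mutation width, and decay rate are illustrative):

```python
import numpy as np

def evolve(f, dim, pop=40, gens=60, sigma=0.3, decay=0.95, seed=0):
    """Minimal elitist (mu + lambda) evolutionary minimizer:
    Gaussian mutation, truncation selection, cooling mutation step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, (pop, dim))
    for _ in range(gens):
        kids = x + rng.normal(0.0, sigma, x.shape)   # Gaussian mutation
        both = np.vstack([x, kids])                  # parents survive (elitism)
        fit = np.array([f(v) for v in both])
        x = both[np.argsort(fit)[:pop]]              # truncation selection
        sigma *= decay                               # shrink the search radius
    return x[0]
```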
Institute of Scientific and Technical Information of China (English)
王彤; 张令弥
2006-01-01
An operational modal analysis method based on Frequency and Spatial Domain Decomposition (FSDD) is proposed. The method extends the Complex Mode Indicator Function (CMIF), a classical experimental modal analysis technique requiring both input and output data, to output-only operational modal analysis. FSDD uses the singular value decomposition to separate the signal space from the noise space, takes the singular value curves as the modal indicator, and uses the singular vectors as weighting functions to obtain an enhanced power spectral density (PSD) for each mode; the enhanced PSD curves are then fitted by least squares in the frequency domain to obtain accurate modal frequency and damping parameters. The FSDD algorithm is validated on a simulated two-story building example and on measured data from the well-known Z24 highway bridge in Switzerland.
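The singular-value mode-indicator curve at the heart of CMIF/FSDD can be sketched as follows: form the output cross-spectral matrix by Welch-style averaging and take its largest singular value at each frequency line (the segment length and window choice are illustrative; the enhanced-PSD fitting stage is omitted).

```python
import numpy as np

def first_sv_curve(y, nperseg=128):
    """Mode-indicator curve: largest singular value of the output
    cross-spectral matrix at each frequency line (Welch-style averaging).
    y has shape (n_channels, n_samples)."""
    n_ch, n = y.shape
    nseg = n // nperseg
    win = np.hanning(nperseg)
    G = np.zeros((nperseg // 2 + 1, n_ch, n_ch), complex)
    for k in range(nseg):
        seg = y[:, k*nperseg:(k+1)*nperseg] * win
        Y = np.fft.rfft(seg, axis=1)                      # n_ch x n_freq
        G += np.einsum('if,jf->fij', Y, Y.conj()) / nseg  # averaged cross-spectra
    return np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
```

Peaks of the returned curve indicate candidate modal frequencies.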
Institute of Scientific and Technical Information of China (English)
丁旭; 何建忠
2014-01-01
To address the poor robustness of global features and the high computational complexity of local features in image perceptual hashing, an improved perceptual hashing algorithm based on the DCT and SURF is proposed. Using the discrete cosine transform (DCT) as the global feature and the SURF descriptor as the local feature, the hashing functions for both and the fusion of the two features are given, followed by the algorithm's application to image authentication. Experimental results show that the algorithm achieves good robustness and efficiency.
Institute of Scientific and Technical Information of China (English)
陈刚; 吴小辰; 柳勇军; 李鹏; 廖瑞金; 王予疆; 何潜
2011-01-01
Frequency Domain Decomposition (FDD) is introduced, with some modifications, to identify low-frequency oscillation modes from steady-state ambient measurements in power systems. The relation between the maximum singular value of the PSD matrix and the system eigenvalues under poorly damped modes is derived in detail from the relations among the frequency response function (FRF), the output power spectral density (PSD), and the system eigenvalues. The Modal Amplitude Coherence (MAC) is used to determine the size of the single-mode region in the neighborhood of a peak on the maximum singular value curve. Finally, the modal frequency and damping ratio are estimated by the least squares method. The technique is illustrated on a linear time-invariant system, the WECC 9-bus system, and ambient PMU measurements to show its effectiveness. The method is suitable for identifying poorly damped oscillatory modes, is robust to noise, and has significant value for real-time application.
Institute of Scientific and Technical Information of China (English)
商丽媛; 谭清美
2014-01-01
Hub location is an important issue in the optimal design of hub-and-spoke networks, and hub covering is one type of hub location problem. Considering uncertainty in the hub construction costs and in the distances between nodes, a stochastic p-robust multiple-allocation hub set covering model is proposed by combining stochastic optimization and robust optimization. A binary quantum-behaved particle swarm optimization algorithm is improved with an immune mechanism, and the resulting immune quantum-behaved particle swarm optimization algorithm is used to solve the model. A simulation example demonstrates the feasibility and effectiveness of the proposed model and algorithm.
Robust Face Skin Selection for Unobtrusive Vital Signs Monitoring
Ding, M.; Van Leest, A.J.
2010-01-01
In this report we develop an algorithm that robustly selects face skin. The algorithm has been tested on a set of challenging sequences. It is robust to partial occlusions, rotation of the head, and spectral changes of the illumination.
Robust Automated Identification of Martian Impact Craters
Stepinski, T. F.; Mendenhall, M. P.; Bue, B. D.
2007-03-01
Robust automatic identification of Martian craters is achieved by a computer algorithm acting on topographic data. The algorithm outperforms manual counts; derived crater sizes and depths are comparable to those measured manually.
Institute of Scientific and Technical Information of China (English)
张亮; 高井祥; 李增科; 王坚
2014-01-01
Because robust extended Kalman filtering (EKF) involves iterative computation at every epoch, a robust EKF algorithm based on Vondrak-filter gross-error detection is proposed and applied to GPS navigation and positioning. Gross errors in the observations are first identified and located; the robust EKF model is then applied only at the affected epochs. To test the new model, dynamic GPS data were collected, and double-difference observation equations and a velocity-and-acceleration Kalman state equation were built. The experimental results show that when the observations contain gross errors, the new model resists their influence; compared with the traditional robust EKF, it avoids robust iteration at every epoch and improves the efficiency of the navigation solution.
DEFF Research Database (Denmark)
Gorm Hansen, Birgitte
2012-01-01
as the analytical framework for describing the complex relationship between academic science and its so-called "external" habitat. Although relational skills and adaptability do seem to be at the heart of successful research management, the key to success does not lie with the ability to assimilate to industrial...... knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and push scientists away from...... and industrial interests. The paper concludes by stressing the potential danger of policy habitats which have promoted the evolution of robust scientists based on a competitive system where only the fittest survive. Robust scientists, it is argued, have the potential to become a new "invasive species...
Damping Estimation by Frequency Domain Decomposition
DEFF Research Database (Denmark)
Brincker, Rune; Ventura, C. E.; Andersen, P.
2001-01-01
frequencies can be accurately estimated without being limited by the frequency resolution of the discrete Fourier transform. It is explained how the spectral density matrix is decomposed into a set of single degree of freedom systems, and how the individual SDOF auto spectral density functions are transformed...
Institute of Scientific and Technical Information of China (English)
高社生; 宋飞彪; 姜微微
2011-01-01
In order to improve the positioning accuracy of the strapdown inertial navigation system (SINS)/celestial navigation system (CNS)/synthetic aperture radar (SAR) integrated navigation system, this paper presents a robust adaptive model predictive filtering algorithm that builds on model predictive filtering and robust adaptive filtering. The algorithm first estimates the model error in real time with model predictive filtering and corrects the system model accordingly, suppressing the effect of model errors on the accuracy of the navigation solution; it then controls the influence of abnormal observations on the solution through a robust adaptive factor. The proposed algorithm is applied to a SINS/CNS/SAR integrated navigation system in simulation and compared with the robust adaptive filter. The results demonstrate that the attitude, velocity, and position errors obtained by the proposed algorithm remain within [-0.2', +0.2'], [-0.3 m/s, +0.3 m/s], and [-6 m, +6 m] respectively, clearly outperforming the robust adaptive filter and showing that the algorithm effectively suppresses the effects of system model errors and abnormal observations on the navigation solution, improving the accuracy of the integrated navigation solution.
Dao, Duy; Salehizadeh, S M A; Noj, Yeon; Chong, Jo Woon; Cho, Chae; Mcmanus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H
2016-10-21
Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study we present a novel approach, "TifMA," based on using the Time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the non-usable part of the corrupted data. The term "non-usable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training data set. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were lab-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a lab-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon Entropy and time-domain variability-SVM approaches. This last is an approach recently developed
Robust Self Tuning Controllers
DEFF Research Database (Denmark)
Poulsen, Niels Kjølstad
1985-01-01
The present thesis concerns robustness properties of adaptive controllers. It addresses methods for robustifying self-tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator...... has several operation modes and a detector for controlling the mode. A special self-tuning controller has been developed to regulate plants with changing time delays....
The Application of Robust Algorithm in Space Coordinate Conversion
Institute of Scientific and Technical Information of China (English)
倪飞; 崔桂官
2011-01-01
To address the problem that poor coordinate accuracy at the common points directly degrades the accuracy of transformations between space rectangular coordinate systems, this paper explores the use of robust estimation theory to suppress the influence of gross errors in the common-point coordinates. In a concrete coordinate transformation example, the Tukey, IGG1, and IGG3 weight functions are each used in the iterative computation to complete the transformation. The results show that robust estimation, applied to transformations between space rectangular coordinate systems, can reduce the influence of gross errors at the common points and improve the reliability of the transformation results.
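The iterative robust estimation scheme described above can be sketched as iteratively reweighted least squares with the IGG3 weight function (the thresholds `k0`, `k1` and the MAD scale estimate are common defaults rather than values from the paper; the same loop accepts a Tukey or IGG1 weight function instead).

```python
import numpy as np

def igg3_weight(v, k0=1.5, k1=3.0):
    """IGG3 weight function applied to standardized residuals v."""
    a = np.abs(v)
    w = np.ones_like(a)
    mid = (a > k0) & (a <= k1)
    w[mid] = (k0 / a[mid]) * ((k1 - a[mid]) / (k1 - k0)) ** 2
    w[a > k1] = 0.0                    # reject gross errors outright
    return w

def robust_lsq(A, l, iters=10):
    """Iteratively reweighted LS: solve A x ~ l, downweighting outliers."""
    w = np.ones(len(l))
    for _ in range(iters):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
        v = l - A @ x
        sigma = 1.4826 * np.median(np.abs(v)) + 1e-12   # robust scale (MAD)
        w = igg3_weight(v / sigma)
    return x
```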
A Radiometric Varying Robust Stereo Matching Algorithm
Institute of Scientific and Technical Information of China (English)
曹晓倩; 马彩文
2014-01-01
To improve the matching rate for stereo image pairs under radiometric variation, a novel stereo matching algorithm based on an improved epipolar distance transform in log-chromaticity space is proposed. In log-chromaticity space, the intensity ratio of the stereo pair is first computed from a raw disparity map; the epipolar distance transform is then applied to the left and right images using intensity-deviation parameters proportional to that ratio; finally, the disparity map is computed by belief propagation. In theory, the matching rate of the proposed algorithm is independent of radiometric variations, including differences in the light source's position, spectrum, and intensity and in the cameras' parameter settings. Experimental results show that the matching rate improves by up to 60% over the original epipolar distance transform algorithm, and by up to 78% over state-of-the-art algorithms such as ANCC (adaptive normalized cross-correlation) on textureless image pairs.
Robust identification for rational fractional transfer functions
Institute of Scientific and Technical Information of China (English)
王书宁
1997-01-01
An algorithm is proposed for robust identification of a rational fractional transfer function with a fixed degree under the framework of worst-case/deterministic robust identification. The convergence of the algorithm is proven. Its feasibility is shown with a numerical example.
Institute of Scientific and Technical Information of China (English)
王静; 郁梅; 李文锋; 骆挺
2016-01-01
To address copyright protection for High Efficiency Video Coding (HEVC) video streams, a new zero-watermarking algorithm robust to re-quantization transcoding is proposed. First, statistical analysis of re-quantization transcoding shows that coding unit (CU) depths are highly stable: only a fraction of the depths shift, and almost always to adjacent depths. Then, to increase the robustness of the depth feature, the CU depths are divided into two groups and mapped to the binary values '0' and '1'. Finally, an XOR operation is performed between the encrypted feature information and copyright information scrambled by a chaotic algorithm, and the outcome, together with a timestamp, forms the ultimately registered zero-watermark. Experimental results show that the algorithm is strongly robust to re-quantization transcoding over a range of quantization parameters and to other common signal attacks.
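The zero-watermark construction reduces to an XOR between a binary content feature (here, the CU-depth bits) and scrambled copyright bits; nothing is written into the video itself. A sketch, with a keyed random permutation standing in for the paper's chaotic scrambling:

```python
import numpy as np

def zero_watermark(feature_bits, copyright_bits, key=42):
    """Zero-watermark: XOR the content feature bits with the scrambled
    copyright bits; the result is registered, not embedded."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(len(copyright_bits))   # stand-in for chaotic scrambling
    scrambled = copyright_bits[perm]
    return feature_bits ^ scrambled, perm

def verify(zero_wm, feature_bits, perm):
    """Recover the copyright bits from features of a (possibly transcoded) video."""
    scrambled = zero_wm ^ feature_bits            # XOR cancels the feature
    out = np.empty_like(scrambled)
    out[perm] = scrambled                         # invert the scrambling
    return out
```

Because the CU depths are stable under re-quantization, the recovered bits stay close to the registered copyright information even after transcoding.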
Zampini, Stefano
2016-06-02
Balancing Domain Decomposition by Constraints (BDDC) methods have proven to be powerful preconditioners for large and sparse linear systems arising from the finite element discretization of elliptic PDEs. Condition number bounds can be theoretically established that are independent of the number of subdomains of the decomposition. The core of the methods resides in the design of a larger and partially discontinuous finite element space that allows for fast application of the preconditioner, where Cholesky factorizations of the subdomain finite element problems are additively combined with a coarse, global solver. Multilevel and highly-scalable algorithms can be obtained by replacing the coarse Cholesky solver with a coarse BDDC preconditioner. BDDC methods have the remarkable ability to control the condition number, since the coarse space of the preconditioner can be adaptively enriched at the cost of solving local eigenproblems. The proper identification of these eigenproblems extends the robustness of the methods to any heterogeneity in the distribution of the coefficients of the PDEs, not only when the coefficients jumps align with the subdomain boundaries or when the high contrast regions are confined to lie in the interior of the subdomains. The specific adaptive technique considered in this paper does not depend upon any interaction of discretization and partition; it relies purely on algebraic operations. Coarse space adaptation in BDDC methods has attractive algorithmic properties, since the technique enhances the concurrency and the arithmetic intensity of the preconditioning step of the sparse implicit solver with the aim of controlling the number of iterations of the Krylov method in a black-box fashion, thus reducing the number of global synchronization steps and matrix vector multiplications needed by the iterative solver; data movement and memory bound kernels in the solve phase can be thus limited at the expense of extra local ops during the setup of
A Robust Circle Detection Algorithm for C-arm X-ray Calibration Model Images
Institute of Scientific and Technical Information of China (English)
薛庆平; 李瑾
2015-01-01
This paper proposes a robust circle detection algorithm based on the cumulative probability of the center, for locating the circular markers in X-ray images of the calibration model used with a C-arm. The algorithm tolerates the noise and target distortion in X-ray images, and, by exploiting the characteristics of the marker points on the calibration model, the detection range and precision can be set flexibly, which both widens the scope of application and greatly reduces computation time. Experiments on calibration model images show that, compared with Hough-transform circle detection, the method achieves higher detection accuracy, a lower miss rate, and strong noise resistance.
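The center-accumulation idea can be sketched as a Hough-style vote for circle centers at a known radius (the paper's probability weighting and its adaptive range/precision controls are omitted):

```python
import numpy as np

def hough_circle_center(points, radius, shape):
    """Accumulate center votes for a known radius; the peak of the
    accumulator is the most probable circle center."""
    acc = np.zeros(shape)
    theta = np.linspace(0, 2*np.pi, 90, endpoint=False)
    for x, y in points:
        # every edge point votes for all centers at distance `radius`
        cx = np.round(x - radius*np.cos(theta)).astype(int)
        cy = np.round(y - radius*np.sin(theta)).astype(int)
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)       # unbuffered accumulation
    return np.unravel_index(np.argmax(acc), acc.shape)
```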
Banerjee, S; Grebogi, C; Banerjee, Soumitro; Yorke, James A.; Grebogi, Celso
1998-01-01
It has been proposed to make practical use of chaos in communication, in enhancing mixing in chemical processes, and in spreading the spectrum of switch-mode power supplies to avoid electromagnetic interference. It is however known that for most smooth chaotic systems, there is a dense set of periodic windows for any range of parameter values. Therefore in practical systems working in chaotic mode, a slight inadvertent fluctuation of a parameter may take the system out of chaos. We say a chaotic attractor is robust if, for its parameter values, there exists a neighborhood in the parameter space with no periodic attractor and the chaotic attractor is unique in that neighborhood. In this paper we show that robust chaos can occur in piecewise smooth systems and obtain the conditions for its occurrence. We illustrate this phenomenon with a practical example from electrical engineering.
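The contrast between robust chaos in piecewise-smooth maps and the dense periodic windows of smooth maps can be illustrated with a Lyapunov-exponent estimate. The 1-D tent map below is a hedged stand-in for the paper's piecewise-smooth normal form: its exponent is ln(a) > 0 for every slope a in (1, 2], with no windows, whereas the smooth logistic map has a stable period-3 window near r = 3.83.

```python
import numpy as np

def lyapunov(f, df, x0, n=20000, burn=1000):
    """Average log-derivative along an orbit; positive => chaos."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = f(x)
    s = 0.0
    for _ in range(n):
        s += np.log(abs(df(x)) + 1e-300)  # guard against log(0)
        x = f(x)
    return s / n

# Tent map (piecewise linear): |f'| = a everywhere, so lambda = ln(a)
# for EVERY a in (1, 2] -- no periodic windows, i.e. robust chaos.
tent = lambda a: (lambda x: a*min(x, 1.0 - x), lambda x: a if x < 0.5 else -a)
# Logistic map (smooth): dense periodic windows, e.g. period 3 at r = 3.83.
logi = lambda r: (lambda x: r*x*(1.0 - x), lambda x: r*(1.0 - 2.0*x))
```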
Čίžek, Pavel; Härdle, Wolfgang Karl
2006-01-01
Econometrics often deals with data under conditions that are, from the statistical point of view, non-standard, such as heteroscedasticity or measurement errors, and the estimation methods thus need either to be adapted to such conditions or to be at least insensitive to them. Methods insensitive to violations of certain assumptions, for example to the presence of heteroscedasticity, are in a broad sense referred to as robust (e.g., to heteroscedasticity). On the other hand, there is also a mor...
Robust statistical methods with R
Jureckova, Jana
2005-01-01
Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application.The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance of measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameter, large sample properties, and goodness-of-fit tests. It...
Robust blind beamforming algorithm based on third-order cumulants
Institute of Scientific and Technical Information of China (English)
王荣博; 侯朝焕
2011-01-01
The goal of blind beamforming is to recover source signals from the array output alone, without any prior information about the array manifold. For blind separation of independent sources, Cardoso and Souloumiac proposed an effective blind beamforming method based on fourth-order cumulants, JADE (Joint Approximate Diagonalization of Eigen-matrices). However, the high computational complexity of this method limits its applicability in practice. In many applications the source signals obey asymmetric distributions and therefore have nonzero third-order cumulants. This paper proposes a new method to separate independent sources with nonzero third-order cumulants, which estimates the mixing matrix by jointly diagonalizing several matrices formed from the third-order cumulants of the array data. Compared with JADE, the proposed method has much lower computational complexity and is more robust to the estimation errors caused by finite sample sizes.
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C; Shindler, A; Wenger, U
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at beta=5.6 and at pion masses ranging from 380 MeV to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the ``Berlin Wall'' figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
HMC algorithm with multiple time scale integration and mass preconditioning
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
Robust SIFT Image Matching Algorithm Based on Unsupervised Learning
Institute of Scientific and Technical Information of China (English)
袭著有
2014-01-01
The SIFT (scale-invariant feature transform) descriptor is commonly used in image matching because of its invariance to scale, rotation, and illumination. In practical applications, however, isolated points and noise points among the feature points can cause mismatches. Moreover, because SIFT feature points record the relationship between a feature point and its surroundings at different scales, feature points from two structurally different images can have similar descriptions and be matched to each other after feature extraction. To solve this problem, this paper proposes a feature point matching method based on the SIFT algorithm that uses unsupervised learning to classify the matching points and eliminate the abnormal ones, achieving a second, precise feature matching stage.
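The abstract does not specify the unsupervised classifier, so as an illustrative stand-in the sketch below rejects abnormal matches by treating the match displacement vectors as a single cluster and dropping points far from the median motion (in MAD units):

```python
import numpy as np

def filter_matches(pts_a, pts_b, thresh=3.0):
    """Unsupervised mismatch rejection: drop putative matches whose
    displacement vector lies far from the dominant motion."""
    d = pts_b - pts_a                                  # displacement per match
    med = np.median(d, axis=0)                         # robust cluster center
    mad = 1.4826 * np.median(np.abs(d - med), axis=0) + 1e-9
    z = np.abs(d - med) / mad                          # robust z-scores
    return (z < thresh).all(axis=1)                    # keep-mask
```

This assumes a roughly rigid scene motion; for multi-motion scenes a clustering step (e.g. k-means on the displacements) would replace the single median.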
Cross validation based robust-SL0 algorithm for target parameter extraction
Institute of Scientific and Technical Information of China (English)
贺亚鹏; 庄珊娜; 张燕洪; 朱晓华
2012-01-01
Exploiting the spatial sparsity of radar targets, a compressive-sensing-based pseudo-random step frequency radar (CS-PRSFR) is studied. First, the CS-PRSFR target echo is analyzed and a target parameter extraction model is constructed. Then, because traditional sparse signal reconstruction algorithms are inapplicable when the noise statistics are unknown, a cross-validation-based robust SL0 (CV-RSL0) target parameter extraction algorithm is proposed. Thanks to the strong incoherence of its sensing matrix, the CS-PRSFR achieves higher joint range-velocity resolution. The proposed algorithm requires no prior knowledge of the noise statistics, and as the signal-to-noise ratio increases, its parameter extraction performance rapidly approaches the lower bound of the best estimator. Simulation results demonstrate the correctness and efficiency of the method.
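The core SL0 iteration underlying CV-RSL0 can be sketched as follows (this is Mohimani et al.'s standard smoothed-L0 method, without the cross-validation stopping rule the paper adds; the step size and decrease factor are typical defaults, not values from the paper):

```python
import numpy as np

def sl0(A, y, sigma_min=1e-4, sigma_decrease=0.7, mu=2.0, L=5):
    """Smoothed-L0 sparse recovery: gradient steps on a Gaussian surrogate
    of the L0 norm with decreasing smoothing width sigma, projecting back
    onto {x : Ax = y} after every step."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-L2 initial solution
    sigma = 2 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(L):
            delta = x * np.exp(-x**2 / (2 * sigma**2))
            x = x - mu * delta            # move toward a sparser x
            x = x - A_pinv @ (A @ x - y)  # project onto the feasible set
        sigma *= sigma_decrease           # sharpen the L0 surrogate
    return x
```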
Robust Color Image Watermarking Algorithm Based on 3D-DCT
Institute of Scientific and Technical Information of China (English)
熊祥光; 韦立
2015-01-01
Because common color image watermarking algorithms embed the watermark only in the luminance component or in each color component separately, the transparency and robustness of the watermarked image cannot be traded off well. A new RGB color image watermarking algorithm based on the three-dimensional discrete cosine transform (3D-DCT) is therefore proposed. First, to enhance the security of the watermark, the watermark image is processed by XOR encryption and Arnold scrambling. Second, the RGB color image is divided into non-overlapping blocks, and a 3D-DCT is applied to each block. Finally, the first 3D-DCT coefficient of each block is modified by a quantization method to embed the watermark. Experimental results show that the proposed algorithm has good transparency and robustness, and it has practical value for color image copyright protection.
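The abstract does not spell out the quantization rule for the first 3D-DCT coefficient; a common choice it may resemble is quantization index modulation (QIM), sketched here with an illustrative step size `delta`:

```python
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    """Embed one bit in a DCT coefficient by quantising it onto the
    lattice for bit 0 (multiples of delta) or bit 1 (offset by delta/2)."""
    offset = (delta / 2.0) * bit
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    """Recover the bit by checking which lattice the coefficient is closer to."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

c = 37.3
w0 = qim_embed(c, 0)   # -> 40.0
w1 = qim_embed(c, 1)   # -> 36.0
```

Larger `delta` gives more robustness to attacks (the bit survives perturbations up to `delta/4`) at the cost of transparency, which is exactly the trade-off the abstract discusses.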
Institute of Scientific and Technical Information of China (English)
王江峰; 伍贻兆; Périaux J
2004-01-01
The optimization efficiency of evolutionary algorithms (genetic algorithms, GAs) and deterministic algorithms (conjugate gradient, CG) is compared. Both methods are combined with the Nash strategy, a decentralized optimization strategy from game theory, and applied to an optimal control problem. The problem consists in simulating the perfect potential flow around a NACA0012 airfoil using a domain decomposition method (DDM): the global computational domain is split into overlapping subdomains, and the matching of local solutions on the interfaces is obtained with four different algorithms that recover the global solution from local subproblems on the subdomains and their interfaces. The different algorithms produce comparable numerical results and show that their independence from gradient information makes GA-based algorithms serious and robust tools for high-dimensional or nonlinear problems.
Institute of Scientific and Technical Information of China (English)
Pankaj Kumar SRIVASTAVA; Manoj KUMAR
2012-01-01
A numerical algorithm is developed for the approximation of the solution to certain boundary value problems involving the third-order ordinary differential equation associated with draining and coating flows. The authors show that the approximate solutions obtained by the numerical algorithm, developed using nonpolynomial quintic spline functions, are better than those produced by other spline and domain decomposition methods. The algorithm is tested on two problems associated with draining and coating flows to demonstrate the practical usefulness of the approach.
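The nonpolynomial quintic spline construction itself is not reproduced in the abstract; as a point of comparison, here is a generic shooting sketch for the same class of third-order two-point BVPs (the test problem and tolerances are illustrative):

```python
import numpy as np

def shoot_third_order(f, a, b, c, n=200):
    """Solve u''' = f(x, u, u', u'') on [0, 1] with u(0)=a, u'(0)=b,
    u(1)=c by shooting: integrate the IVP with RK4 and use the secant
    method to find the missing initial curvature u''(0)."""
    def integrate(s):
        h = 1.0 / n
        y = np.array([a, b, s], float)        # state (u, u', u'')
        xs, us = [0.0], [a]
        def F(x, y):
            return np.array([y[1], y[2], f(x, y[0], y[1], y[2])])
        x = 0.0
        for _ in range(n):
            k1 = F(x, y); k2 = F(x + h/2, y + h/2*k1)
            k3 = F(x + h/2, y + h/2*k2); k4 = F(x + h, y + h*k3)
            y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
            xs.append(x); us.append(y[0])
        return y[0], np.array(xs), np.array(us)

    s0, s1 = 0.0, 1.0
    r0 = integrate(s0)[0] - c
    for _ in range(30):
        r1 = integrate(s1)[0] - c
        if abs(r1) < 1e-12:
            break
        s0, s1, r0 = s1, s1 - r1 * (s1 - s0) / (r1 - r0), r1
    return integrate(s1)

# test problem in the style of draining/coating BVPs: u''' = 6 with
# u(0)=0, u'(0)=-1, u(1)=0 has the exact solution u(x) = x^3 - x
uend, xs, us = shoot_third_order(lambda x, u, up, upp: 6.0, 0.0, -1.0, 0.0)
```

For linear problems the terminal residual is linear in the shooting parameter, so the secant iteration terminates in two steps.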
Software Cost Estimation Based on the Robust Regression Algorithm
Institute of Scientific and Technical Information of China (English)
孙士兵; 马莉
2008-01-01
As software systems have grown in scale and complexity, a software crisis, typified by widespread schedule delays, budget overruns, and quality defects, has been evident since the late 1960s. When estimating a software project, historical data on related software organizations or products are usually available, and making full use of these data is very helpful for forecasting future projects. Robust regression analysis (RRA) is a commonly used and effective data-driven method of this kind. Building on a review and comparison of RRA research results, this paper focuses on the problems that arise when software cost estimation data are analyzed with traditional regression, and effectively resolves the masking effect caused by outliers. Systematic and comprehensive simulation experiments applying robust regression to software cost estimation data show that the method effectively overcomes the masking effect of outliers and yields satisfactory results.
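As a hedged illustration of how robust regression defeats the masking effect described above, here is a minimal IRLS sketch with Huber weights (the paper's exact estimator and data set are not given; the model below is synthetic):

```python
import numpy as np

def irls_huber(X, y, c=1.345, iters=50):
    """Iteratively reweighted least squares with Huber weights:
    points with large residuals are progressively down-weighted,
    so outliers cannot mask one another as they do under OLS."""
    X = np.column_stack([np.ones(len(X)), X])    # add intercept
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS starting point
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)         # Huber weights
        W = np.sqrt(w)
        beta = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]
    return beta

x = np.arange(20, dtype=float)
y = 3.0 + 2.0 * x                  # hypothetical linear effort model
y[[3, 7]] += 60.0                  # two gross outliers (masking candidates)
beta = irls_huber(x, y)            # recovers intercept ~3, slope ~2
```

An ordinary least-squares fit on the same data is pulled toward the outliers; the reweighted fit recovers the clean trend.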
Validation of community robustness
Carissimo, Annamaria; Defeis, Italia
2016-01-01
The large amount of work on community detection and its applications leaves unaddressed one important question: the statistical validation of the results. In this paper we present a methodology able to clearly detect if the community structure found by some algorithms is statistically significant or is a result of chance, merely due to edge positions in the network. Given a community detection method and a network of interest, our proposal examines the stability of the partition recovered against random perturbations of the original graph structure. To address this issue, we specify a perturbation strategy and a null model to build a set of procedures based on a special measure of clustering distance, namely Variation of Information, using tools set up for functional data analysis. The procedures determine whether the obtained clustering departs significantly from the null model. This strongly supports the robustness against perturbation of the algorithm used to identify the community structure. We show the r...
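Variation of Information, the clustering distance these procedures are built on, can be computed directly from two label vectors over the same nodes; a minimal sketch:

```python
from collections import Counter
from math import log

def variation_of_information(part_a, part_b):
    """Variation of Information between two partitions given as lists
    of cluster labels over the same n nodes:
    VI(A, B) = H(A|B) + H(B|A).  VI is a true metric on partitions."""
    n = len(part_a)
    pa, pb = Counter(part_a), Counter(part_b)
    joint = Counter(zip(part_a, part_b))
    vi = 0.0
    for (a, b), nab in joint.items():
        pab = nab / n
        vi -= pab * (log(pab / (pa[a] / n)) + log(pab / (pb[b] / n)))
    return vi

identical = variation_of_information([0, 0, 1, 1], [1, 1, 0, 0])  # relabelling: VI = 0
split     = variation_of_information([0, 0, 0, 0], [0, 0, 1, 1])  # VI = log 2
```

Because VI is invariant to label permutations, it compares the partitions themselves, which is what makes it suitable for measuring stability under perturbation.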
DEFF Research Database (Denmark)
Gorm Hansen, Birgitte
The concepts of “socially robust knowledge” and “mode 2 knowledge production” (Nowotny 2003, Gibbons et al. 1994) have migrated from STS into research policy practices. Both STS scholars and policy makers have been known to promote the idea that the way forward for today’s scientist is to jump from the ivory tower and learn how to create high-flying synergies with citizens, corporations and governments. In STS as well as in Danish research policy it has thus been argued that scientists will gain more support and enjoy greater success in their work by “externalizing” their research and adapting their interests to the needs of outside actors. However, when studying the concrete strategies of such successful scientists, matters seem a bit more complicated. Based on interviews with a plant biologist working in GMO, the paper uses the biological concepts of field participants...
Institute of Scientific and Technical Information of China (English)
黄赞; 张宪民; 陈忠
2011-01-01
Aiming at the problems that illumination variation, noise, motion discontinuity and large displacements can severely degrade motion estimation accuracy in micro-motion measurement based on computer microvision, a robust multi-scale micro-motion measurement algorithm based on homomorphic filtering is proposed. First, homomorphic filtering is used for image enhancement, correcting uneven brightness and increasing contrast. Then a biweight function automatically adjusts the weights of data points according to their residual errors and removes points with excessive residuals, and a multi-scale pyramid estimates the motion vectors accurately through coarse-to-fine iterations. Experimental results show that the new algorithm is robust: it effectively weakens the interference of uneven illumination and reduces the influence of outliers caused by noise and motion discontinuity, and the accuracy of micro-motion measurement reaches 0.01 pixels.
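The biweight reweighting step can be sketched with the standard Tukey biweight function (the tuning constant 4.685 is the conventional 95%-efficiency choice, not necessarily the paper's):

```python
import numpy as np

def biweight(residuals, c=4.685):
    """Tukey biweight: weight = (1 - (r/c)^2)^2 for |r| <= c and
    exactly 0 outside, so data points with excessive residuals are
    removed entirely from the estimate."""
    r = np.asarray(residuals, float)
    u = r / c
    w = (1.0 - u**2)**2
    w[np.abs(u) > 1.0] = 0.0
    return w

w = biweight([0.0, 2.0, 10.0])   # full weight, partial weight, zero weight
```

Unlike Huber weights, the biweight redescends to zero, which matches the abstract's claim that points with excessive residuals are removed rather than merely down-weighted.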
Novel robust beamforming algorithm using sequential quadratic programming
Institute of Scientific and Technical Information of China (English)
虞泓波; 冯大政; 解虎
2016-01-01
To address the possible performance loss and high computational complexity of robust beamforming based on steering vector estimation with as little prior information as possible, which is conventionally solved by the semidefinite relaxation (SDR) approach, a novel robust beamforming algorithm using sequential quadratic programming (SQP) is proposed. The original non-convex problem is linearly approximated by a convex subproblem using a first-order Taylor series, and the optimal solution is found by solving the convex subproblem iteratively. Moreover, considering mismatch of the sample covariance matrix, an SQP-WC method based on worst-case performance optimization is presented to improve the performance of the SQP method. Theoretical analysis and simulation results show that the proposed SQP algorithm converges quickly and its convergence point approximates the optimal solution of the original problem, so the SQP method effectively reduces the computational cost compared with the SDR method; furthermore, the SQP-WC method effectively improves the performance of the SQP method even with a small parameter value.
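The paper's actual subproblem (steering-vector estimation under covariance mismatch) is more involved, but the sequential-linearization idea itself can be illustrated on a toy nonconvex program, minimizing x^T R x on the unit sphere, where the linearized equality constraint yields a closed-form convex subproblem at each step:

```python
import numpy as np

def sqp_min_quadratic_on_sphere(R, x0, iters=50):
    """Minimise x^T R x subject to ||x||^2 = 1 by sequential convex
    subproblems: at each iterate x_k the nonconvex equality is replaced
    by its first-order Taylor linearisation x_k^T x = (1 + ||x_k||^2)/2,
    and the resulting equality-constrained QP is solved in closed form
    via its Lagrange multiplier."""
    x = x0 / np.linalg.norm(x0)
    Rinv = np.linalg.inv(R)
    for _ in range(iters):
        b = (1.0 + x @ x) / 2.0        # linearised constraint level
        y = Rinv @ x
        x = b * y / (x @ y)            # closed-form QP solution
        x = x / np.linalg.norm(x)      # project back onto the sphere
    return x

R = np.diag([4.0, 1.0, 9.0])
x = sqp_min_quadratic_on_sphere(R, np.array([1.0, 1.0, 1.0]))
# converges to the eigenvector of the smallest eigenvalue of R
```

Each subproblem is solved in O(n^2) once R^{-1} is cached, which is the kind of per-iteration saving over an SDR solve that the abstract refers to.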
Directory of Open Access Journals (Sweden)
Galal Omer
2016-04-01
Full Text Available Leaf area index (LAI) is an important biophysical trait for forest ecosystem and ecological modeling, as it plays a key role in forest productivity and structural characteristics. Ground-based methods such as handheld optical instruments for predicting LAI are subjective, expensive and time-consuming. The advent of very high spatial resolution multispectral data and robust machine learning regression algorithms such as support vector machines (SVM) and artificial neural networks (ANN) has provided an opportunity to estimate LAI at the tree species level. The objective of this study was therefore to test the utility of spectral vegetation indices (SVI) calculated from multispectral WorldView-2 (WV-2) data in predicting LAI at the tree species level using the SVM and ANN machine learning regression algorithms. We further tested whether there are significant differences between the LAI of intact and fragmented (open) indigenous forest ecosystems at the tree species level. The study shows that LAI at the tree species level could be estimated more accurately using the fragmented stratum data than the intact stratum data. Specifically, accurate LAI predictions were achieved for Hymenocardia ulmoides using the fragmented stratum data and the SVM regression model based on a validation dataset (R2Val = 0.75, RMSEVal = 0.05, 1.37% of the mean). Our study further showed that the SVM regression approach achieved more accurate models for predicting the LAI of the six endangered tree species than the ANN regression method. It is concluded that the successful application of the WV-2 data, SVM and ANN methods in predicting the LAI of six endangered tree species in the Dukuduku indigenous forest could help in making informed decisions and policies regarding the management, protection and conservation of these endangered tree species.
Institute of Scientific and Technical Information of China (English)
皇甫宜耿; 王毅; 赵冬冬; 梁波
2015-01-01
Based on the super-twisting algorithm of high-order sliding mode control, this paper designs a controller for a full-bridge inverter. The core idea is to move the discontinuous control law to a higher-order sliding manifold, essentially eliminating the chattering effect of first-order sliding mode. To verify the feasibility and effectiveness of the algorithm, control systems are simulated and compared in MATLAB. The simulation results and their analysis show preliminarily that: (1) in steady state, high-order sliding mode control tracks better than typical PI control; (2) under large disturbances at the input and the load, high-order sliding mode control shows strong robustness and is insensitive to input and load disturbances.
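A minimal sketch of the super-twisting law on a perturbed integrator rather than the paper's full-bridge inverter model (gains, disturbance, and horizon below are illustrative, chosen to satisfy the standard sufficient conditions):

```python
import math

def simulate_super_twisting(k1=2.0, k2=1.1, dt=1e-4, T=10.0):
    """Super-twisting control of a perturbed integrator s' = u + d(t):
        u = -k1 * sqrt(|s|) * sign(s) + v,    v' = -k2 * sign(s).
    The discontinuity acts on v', one level higher, so u is continuous
    and chattering in s is suppressed despite the matched disturbance."""
    s, v = 1.0, 0.0
    for i in range(int(T / dt)):
        d = 0.3 * math.sin(2.0 * math.pi * 0.2 * i * dt)  # bounded disturbance
        sgn = (s > 0) - (s < 0)
        u = -k1 * math.sqrt(abs(s)) * sgn + v
        v -= k2 * sgn * dt                                 # Euler integration
        s += (u + d) * dt
    return s

s_final = simulate_super_twisting()   # sliding variable driven near zero
```

The integral term v learns the disturbance, which is why the controller remains insensitive to the input and load perturbations the abstract describes.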
Robust and Efficient Parametric Face Alignment
Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja
2011-01-01
We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient
Institute of Scientific and Technical Information of China (English)
崔得龙; 左敬龙; 彭志平
2011-01-01
A novel image Hash algorithm using Harris corner detection and the invariant centroid is proposed. Starting from the mathematical model of the affine transform and exploiting the invariance of the image centroid under affine transformations, the Euclidean distances between the Harris corners and the invariant centroid are computed as the feature vector; the feature vector is then quantized and encoded to generate the image Hash. Experimental results show that the proposed scheme is robust against perceptually acceptable modifications to the image, such as JPEG compression and filtering, while sensitive to excessive changes and malicious tampering. Security of the Hash is guaranteed by the use of secret keys.
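Corner detection aside, the distance-to-centroid feature and its quantization can be sketched as follows (the normalization, quantizer, and level count are illustrative, not the paper's):

```python
import numpy as np

def centroid_distance_hash(corners, n_levels=16):
    """Given corner coordinates (e.g. from a Harris detector, omitted
    here), compute Euclidean distances to their centroid, normalise by
    the largest distance, and quantise into n_levels bins to form a
    compact hash vector."""
    pts = np.asarray(corners, float)
    c = pts.mean(axis=0)                          # centroid of the corner set
    d = np.linalg.norm(pts - c, axis=1)
    d = d / (d.max() + 1e-12)                     # scale normalisation
    return np.minimum((d * n_levels).astype(int), n_levels - 1)

h1 = centroid_distance_hash([(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)])
# the same shape rotated 90 degrees and translated gives the same hash
h2 = centroid_distance_hash([(10, 0), (10, 4), (6, 4), (6, 0), (8, 2)])
```

Because both the centroid and the relative distances move consistently under such transforms, the hash stays stable under them while structural tampering changes the corner set and hence the hash.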
Control algorithm for multiscale flow simulations of water
DEFF Research Database (Denmark)
Kotsalis, E. M.; Walther, Jens Honore; Kaxiras, E.
2009-01-01
We present a multiscale algorithm to couple atomistic water models with continuum incompressible flow simulations via a Schwarz domain decomposition approach. The coupling introduces an inhomogeneity in the description of the atomistic domain and prevents the use of periodic boundary conditions. The use of a mass conserving specular wall results in turn in spurious oscillations in the density profile of the atomistic description of water. These oscillations can be eliminated by using an external boundary force that effectively accounts for the virial component of the pressure. In this Rapid Communication, we extend a control algorithm, previously introduced for monatomic molecules, to the case of atomistic water and demonstrate the effectiveness of this approach. The proposed computational method is validated for the cases of equilibrium and Couette flow of water.