WorldWideScience

Sample records for robust domain-decomposition algorithm

  1. Parallel algorithms for nuclear reactor analysis via domain decomposition method

    International Nuclear Information System (INIS)

    Kim, Yong Hee

    1995-02-01

    In this thesis, the neutron diffusion equation of reactor physics is discretized by the finite difference method and solved on a parallel computer network composed of T-800 transputers. The T-800 transputer is a message-passing MIMD (multiple instruction streams and multiple data streams) architecture. A parallel variant of the Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides convergence analysis and improvements of the convergence of the algorithm. The convergence of the parallel Schwarz algorithm with DN (or ND), DD, NN, and mixed pseudo-boundary conditions (a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in the two-subdomain case, and various underlying features are explored. The analysis shows that the convergence rate of the algorithm depends strongly on the pseudo-boundary conditions and that the theoretically best choice is the mixed boundary conditions (MM conditions). It is also shown that there may be a significant discrepancy between the continuous-model and discrete-model analyses. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation of the pseudo-boundary conditions is introduced and the convergence analysis of the algorithm for the two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and that any relaxation (under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed-source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
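
    As a concrete illustration of the two-subdomain parallel Schwarz iteration with relaxed Dirichlet pseudo-boundary (interface) conditions, the following Python sketch applies it to a 1D fixed-source diffusion problem. The grid size, overlap width, choice of Dirichlet pseudo-boundary conditions and relaxation factor are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def solve_subdomain(n, h, D, sigma_a, src, left, right):
    """Solve -D u'' + sigma_a u = src on n interior points with Dirichlet end values."""
    A = np.diag(np.full(n, 2.0 * D / h**2 + sigma_a))
    A += np.diag(np.full(n - 1, -D / h**2), 1) + np.diag(np.full(n - 1, -D / h**2), -1)
    b = np.full(n, src)
    b[0] += D / h**2 * left
    b[-1] += D / h**2 * right
    return np.linalg.solve(A, b)

# Illustrative problem data (assumed, not taken from the thesis)
N, D, sigma_a, src = 99, 1.0, 0.5, 1.0        # interior points, diffusion coef., absorption, source
h = 1.0 / (N + 1)
split, overlap = N // 2, 6                    # overlapping two-subdomain decomposition
i1_end, i2_beg = split + overlap, split - overlap
theta = 0.7                                   # under-relaxation factor for the pseudo-boundary values

g1 = g2 = 0.0                                 # Dirichlet pseudo-boundary (interface) values
for it in range(200):
    u1 = solve_subdomain(i1_end, h, D, sigma_a, src, 0.0, g1)        # subdomain 1 (left)
    u2 = solve_subdomain(N - i2_beg, h, D, sigma_a, src, g2, 0.0)    # subdomain 2 (right)
    g1_new = u2[i1_end - i2_beg]              # trace of u2 at the right pseudo-boundary of subdomain 1
    g2_new = u1[i2_beg - 1]                   # trace of u1 at the left pseudo-boundary of subdomain 2
    if max(abs(g1_new - g1), abs(g2_new - g2)) < 1e-10:
        break
    g1 = theta * g1_new + (1.0 - theta) * g1  # relaxed update of the pseudo-boundary conditions
    g2 = theta * g2_new + (1.0 - theta) * g2

u = np.empty(N)
u[:i1_end] = u1
u[i2_beg:] = u2                               # overlap region taken from subdomain 2
print(f"outer Schwarz iterations: {it + 1}")
```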

  2. Non-linear scalable TFETI domain decomposition based contact algorithm

    Czech Academy of Sciences Publication Activity Database

    Dobiáš, Jiří; Pták, Svatopluk; Dostál, Z.; Vondrák, V.; Kozubek, T.

    2010-01-01

    Roč. 10, č. 1 (2010), s. 1-10 ISSN 1757-8981. [World Congress on Computational Mechanics/9./. Sydney, 19.07.2010 - 23.07.2010] R&D Projects: GA ČR GA101/08/0574 Institutional research plan: CEZ:AV0Z20760514 Keywords : finite element method * domain decomposition method * contact Subject RIV: BA - General Mathematics http://iopscience.iop.org/1757-899X/10/1/012161/pdf/1757-899X_10_1_012161.pdf

  3. Using Enhanced Frequency Domain Decomposition as a Robust Technique to Harmonic Excitation in Operational Modal Analysis

    DEFF Research Database (Denmark)

    Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune

    2006-01-01

    The presence of harmonic components in the measured responses is unavoidable in many applications of Operational Modal Analysis. This is especially true when measuring on mechanical structures containing rotating or reciprocating parts. This paper describes a new method based on the popular Enhanced Frequency Domain Decomposition technique for eliminating the influence of these harmonic components in the modal parameter extraction process. For various experiments, the quality of the method is assessed and compared to the results obtained using broadband stochastic excitation forces. Good agreement is found and the method is proven to be an easy-to-use and robust tool for handling responses with deterministic and stochastic content.

  4. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The application of the Monte Carlo method to reactor physics analysis is somewhat restricted by the excessive memory demand of large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. Tally data turn out to dominate the memory cost and should therefore be the focus when addressing the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a 'divide and rule' strategy: the problem is divided into sub-domains that are dealt with separately, and rules are established to make sure the combined results are correct. Tally data decomposition consists of two parts: data partitioning and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatial parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced
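
    A minimal sketch of the tally-data-decomposition idea described above: the global tally mesh is partitioned so that each rank stores only its own slice, and scored contributions are routed to the rank that owns them. The mesh size, rank count and contiguous partition are assumptions for illustration; in RMC the exchange would be performed with message passing rather than in-process buffers.

```python
import numpy as np

# Illustrative sizes (assumed): a 1D tally mesh of n_bins cells split across n_ranks processes
n_bins, n_ranks = 1_000_000, 8
edges = np.linspace(0, n_bins, n_ranks + 1, dtype=np.int64)   # contiguous partition

# Each rank allocates only its slice of the tally array instead of the full mesh
local_tally = [np.zeros(edges[r + 1] - edges[r]) for r in range(n_ranks)]

def owner(bin_index):
    """Rank that owns a given global tally bin under the contiguous partition."""
    return int(np.searchsorted(edges, bin_index, side="right") - 1)

# Scoring: particles produce (bin, score) pairs anywhere in the mesh; each pair is
# accumulated into a per-destination buffer and then flushed to the owning rank
rng = np.random.default_rng(0)
bins = rng.integers(0, n_bins, size=100_000)
scores = rng.random(100_000)

buffers = [[] for _ in range(n_ranks)]                 # per-destination send buffers
for b, s in zip(bins, scores):
    buffers[owner(b)].append((b, s))

for r, buf in enumerate(buffers):                      # "communication" step (MPI in practice)
    for b, s in buf:
        local_tally[r][b - edges[r]] += s

print("total scored:", sum(t.sum() for t in local_tally))
```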

  5. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10⁶ particles on 65,536 MPI tasks.
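
    A toy sketch of the load-balancing idea: each sub-domain is the Voronoi cell of a site, the cost measures the deviation of per-cell particle loads from the ideal share, and the sites are displaced along an estimated steepest-descent direction. The cost function, finite-difference gradient and backtracking step are simplifying assumptions, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.normal(size=(20_000, 2))            # clustered particles, so uniform sites are imbalanced
sites = rng.uniform(-2.0, 2.0, size=(8, 2))         # one Voronoi site per processor/sub-domain

def loads(sites):
    """Fraction of particles in each Voronoi cell (nearest-site assignment)."""
    d2 = ((particles[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    c = np.bincount(d2.argmin(axis=1), minlength=len(sites))
    return c / len(particles)

def cost(sites):
    """Load-imbalance cost: squared deviation from the ideal 1/P share."""
    f = loads(sites)
    return ((f - 1.0 / len(sites)) ** 2).sum()

eps, step = 0.05, 0.5                               # FD increment and initial displacement (assumed)
for it in range(60):
    base, grad = cost(sites), np.zeros_like(sites)
    for i in range(sites.shape[0]):                 # crude finite-difference gradient estimate
        for k in range(2):
            pert = sites.copy()
            pert[i, k] += eps
            grad[i, k] = (cost(pert) - base) / eps
    gnorm = np.linalg.norm(grad)
    if gnorm == 0.0:
        break
    trial = sites - step * grad / gnorm             # displace sites along the steepest-descent direction
    if cost(trial) < base:
        sites = trial
    else:
        step *= 0.5                                 # backtrack if the move did not reduce the cost

print("relative load imbalance:", loads(sites).std() * len(sites))
```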

  6. Combined spatial/angular domain decomposition SN algorithms for shared memory parallel machines

    International Nuclear Information System (INIS)

    Hunter, M.A.; Haghighat, A.

    1993-01-01

    Several parallel processing algorithms on the basis of spatial and angular domain decomposition methods are developed and incorporated into a two-dimensional discrete ordinates transport theory code. These algorithms divide the spatial and angular domains into independent subdomains so that the flux calculations within each subdomain can be processed simultaneously. Two spatial parallel algorithms (Block-Jacobi, red-black), one angular parallel algorithm (η-level), and their combinations are implemented on an eight processor CRAY Y-MP. Parallel performances of the algorithms are measured using a series of fixed source RZ geometry problems. Some of the results are also compared with those executed on an IBM 3090/600J machine. (orig.)

  7. A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique

    Directory of Open Access Journals (Sweden)

    Bo-Ao Xu

    2016-01-01

    Full Text Available This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite difference time-domain (WLP-FDTD) method based on a domain decomposition scheme. By using the domain decomposition finite difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are used, respectively, to eliminate the splitting error and to speed up the calculation in the different subdomains. A joint calculation scheme is presented to reduce the amount of calculation. With this approach, iteration is not essential to obtain accurate results. A numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.

  8. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    Science.gov (United States)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
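
    A minimal dense sketch of the interior/interface reordering and the exact Schur complement solve, shown for a 1D Laplacian split into two non-overlapping subdomains; the paper's approximate Schur complements, coarse-space role and parallel implementation are not reproduced.

```python
import numpy as np

# 1D Laplacian on n interior unknowns (stand-in for the discretization matrix)
n = 101
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Partition: two non-overlapping subdomains separated by a single interface vertex
mid = n // 2
I1, I2, G = np.arange(0, mid), np.arange(mid + 1, n), np.array([mid])
I = np.concatenate([I1, I2])                    # subdomain (interior) unknowns first, interface last

# Natural 2x2 block partition induced by the reordering
AII = A[np.ix_(I, I)]                           # block diagonal: the subdomains decouple
AIG = A[np.ix_(I, G)]
AGI = A[np.ix_(G, I)]
AGG = A[np.ix_(G, G)]
bI, bG = b[I], b[G]

# Block elimination: the Schur complement S couples the subdomains through the interface
S = AGG - AGI @ np.linalg.solve(AII, AIG)
xG = np.linalg.solve(S, bG - AGI @ np.linalg.solve(AII, bI))   # interface solve
xI = np.linalg.solve(AII, bI - AIG @ xG)                       # back-substitute the interiors

x = np.empty(n)
x[I], x[G] = xI, xG
print("residual:", np.linalg.norm(A @ x - b))
```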

  9. Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms

    KAUST Repository

    Efendiev, Yalchin

    2012-02-22

    An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract framework include practically important problems in porous media applications such as: the scalar elliptic (pressure) equation and the stream function formulation of its mixed form, Stokes' and Brinkman's equations. The constant in the corresponding abstract energy estimate is shown to be robust with respect to mesh parameters as well as the contrast, which is defined as the ratio of high and low values of the conductivity (or permeability). The derived stable decomposition allows the construction of additive overlapping Schwarz iterative methods with condition numbers uniformly bounded with respect to the contrast and mesh parameters. The coarse spaces are obtained by patching together the eigenfunctions corresponding to the smallest eigenvalues of certain local problems. A detailed analysis of the abstract setting is provided. The proposed decomposition builds on a method of Galvis and Efendiev [Multiscale Model. Simul. 8 (2010) 1461-1483] developed for second order scalar elliptic problems with high contrast. Applications to the finite element discretizations of the second order elliptic problem in Galerkin and mixed formulation, the Stokes equations, and Brinkman's problem are presented. A number of numerical experiments for these problems in two spatial dimensions are provided. © EDP Sciences, SMAI, 2012.

  10. Multilevel Balancing Domain Decomposition by Constraints Deluxe Algorithms with Adaptive Coarse Spaces for Flow in Porous Media

    KAUST Repository

    Zampini, Stefano; Tu, Xuemin

    2017-01-01

    Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.

  11. A parallel algorithm for solving the multidimensional within-group discrete ordinates equations with spatial domain decomposition - 104

    International Nuclear Information System (INIS)

    Zerr, R.J.; Azmy, Y.Y.

    2010-01-01

    A spatial domain decomposition with a parallel block Jacobi solution algorithm has been developed based on the integral transport matrix formulation of the discrete ordinates approximation for solving the within-group transport equation. The new methodology abandons the typical source iteration scheme and solves directly for the fully converged scalar flux. Four matrix operators are constructed based upon the integral form of the discrete ordinates equations. A single differential mesh sweep is performed to construct these operators. The method is parallelized by decomposing the problem domain into several smaller sub-domains, each treated as an independent problem. The scalar flux of each sub-domain is solved exactly given incoming angular flux boundary conditions. Sub-domain boundary conditions are updated iteratively, and convergence is achieved when the scalar flux error in all cells meets a pre-specified convergence criterion. The method has been implemented in a computer code that was then employed for strong scaling studies of the algorithm's parallel performance via a fixed-size problem in tests ranging from one domain up to one cell per sub-domain. Results indicate that the best parallel performance compared to source iterations occurs for optically thick, highly scattering problems, the variety that is most difficult for the traditional SI scheme to solve. Moreover, the minimum execution time occurs when each sub-domain contains a total of four cells. (authors)

  12. Multilevel Balancing Domain Decomposition by Constraints Deluxe Algorithms with Adaptive Coarse Spaces for Flow in Porous Media

    KAUST Repository

    Zampini, Stefano

    2017-08-03

    Multilevel balancing domain decomposition by constraints (BDDC) deluxe algorithms are developed for the saddle point problems arising from mixed formulations of Darcy flow in porous media. In addition to the standard no-net-flux constraints on each face, adaptive primal constraints obtained from the solutions of local generalized eigenvalue problems are included to control the condition number. Special deluxe scaling and local generalized eigenvalue problems are designed in order to make sure that these additional primal variables lie in a benign subspace in which the preconditioned operator is positive definite. The current multilevel theory for BDDC methods for porous media flow is complemented with an efficient algorithm for the computation of the so-called malign part of the solution, which is needed to make sure the rest of the solution can be obtained using the conjugate gradient iterates lying in the benign subspace. We also propose a new technique, based on the Sherman--Morrison formula, that lets us preserve the complexity of the subdomain local solvers. Condition number estimates are provided under certain standard assumptions. Extensive numerical experiments confirm the theoretical estimates; additional numerical results prove the effectiveness of the method with higher order elements and high-contrast problems from real-world applications.

  13. A Temporal Domain Decomposition Algorithmic Scheme for Large-Scale Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Eric J. Nava

    2012-03-01

    This paper presents a temporal decomposition scheme for large spatial- and temporal-scale dynamic traffic assignment, in which the entire analysis period is divided into epochs. Vehicle assignment is performed sequentially in each epoch, improving the model scalability and confining the peak run-time memory requirement regardless of the total analysis period. A proposed self-tuning scheme adaptively searches for the run-time-optimal epoch setting during iterations, regardless of the characteristics of the modeled network. Extensive numerical experiments confirm the promising performance of the proposed algorithmic schemes.

  14. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  15. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    International Nuclear Information System (INIS)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z.

    2013-01-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  16. Bregmanized Domain Decomposition for Image Restoration

    KAUST Repository

    Langer, Andreas

    2012-05-22

    Computational problems of large-scale data have recently gained attention due to better hardware and, hence, the higher dimensionality of images and data sets acquired in applications. In the last couple of years, non-smooth minimization problems such as total variation minimization have become increasingly important for the solution of these tasks. While favorable due to the improved enhancement of images compared to smooth imaging approaches, non-smooth minimization problems typically scale badly with the dimension of the data. Hence, for large imaging problems solved by total variation minimization, domain decomposition algorithms have been proposed, aiming to split one large problem into N > 1 smaller problems which can be solved on parallel CPUs. The N subproblems constitute constrained minimization problems, where the constraint enforces the support of the minimizer to be the respective subdomain. In this paper we discuss a fast computational algorithm to solve domain decomposition for total variation minimization. In particular, we accelerate the computation of the subproblems by nested Bregman iterations. We propose a Bregmanized Operator Splitting-Split Bregman (BOS-SB) algorithm, which enforces the restriction onto the respective subdomain by a Bregman iteration that is subsequently solved by a Split Bregman strategy. The computational performance of this new approach is discussed for its application to image inpainting and image deblurring. It turns out that the proposed new solution technique is up to three times faster than the iterative algorithm currently used in domain decomposition methods for total variation minimization. © Springer Science+Business Media, LLC 2012.

  17. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  18. Domain Decomposition: A Bridge between Nature and Parallel Computers

    Science.gov (United States)

    1992-09-01

    B., "Domain Decomposition Algorithms for Indefinite Elliptic Problems," S"IAM Journal of S; cientific and Statistical (’omputing, Vol. 13, 1992, pp...AD-A256 575 NASA Contractor Report 189709 ICASE Report No. 92-44 ICASE DOMAIN DECOMPOSITION: A BRIDGE BETWEEN NATURE AND PARALLEL COMPUTERS DTIC dE...effectively implemented on dis- tributed memory multiprocessors. In 1990 (as reported in Ref. 38 using the tile algo- rithm), a 103,201-unknown 2D elliptic

  19. Domain decomposition methods and parallel computing

    International Nuclear Information System (INIS)

    Meurant, G.

    1991-01-01

    In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from the discretization of scientific computing problems described by systems of partial differential equations. We show how to obtain a discrete finite-dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems encountered in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
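
    As an illustration of how domain decomposition enters the conjugate gradient iteration, the sketch below runs CG preconditioned by a one-level additive Schwarz (overlapping block) preconditioner on a model 1D Laplacian. The problem size, number of subdomains and overlap are assumed values, and no coarse space is included.

```python
import numpy as np

n = 400
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # model SPD system
b = np.random.default_rng(0).random(n)

# Overlapping index blocks, one per "subdomain" (sizes and overlap are assumed)
nsub, size, overlap = 8, n // 8, 10
blocks = [np.arange(max(0, i * size - overlap), min(n, (i + 1) * size + overlap)) for i in range(nsub)]
local_inv = [np.linalg.inv(A[np.ix_(idx, idx)]) for idx in blocks]

def M_inv(r):
    """One-level additive Schwarz preconditioner: sum of local subdomain solves."""
    z = np.zeros_like(r)
    for idx, Ainv in zip(blocks, local_inv):
        z[idx] += Ainv @ r[idx]
    return z

# Preconditioned conjugate gradient
x = np.zeros(n)
r = b - A @ x
z = M_inv(r)
p = z.copy()
rz = r @ z
for it in range(n):
    Ap = A @ p
    alpha = rz / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
        break
    z = M_inv(r)
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new

print(f"PCG converged in {it + 1} iterations, residual {np.linalg.norm(A @ x - b):.2e}")
```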

  20. Spatial domain decomposition for neutron transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.; Larsen, E.W.

    1989-01-01

    A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)

  1. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response beamforming is presented, which is made robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem containing a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted into a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
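
    The paper solves a nonconvex quadratically constrained problem by an interior-point method; that formulation is not reproduced here. As a simpler, commonly used stand-in, the sketch below computes MVDR weights with diagonal loading, which also adds robustness against covariance and steering-vector errors. The array geometry, directions, signal powers and loading levels are assumed values.

```python
import numpy as np

def steering(theta_deg, n_ant, spacing=0.5):
    """Uniform linear array steering vector (half-wavelength spacing)."""
    k = np.arange(n_ant)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(0)
n_ant, n_snap = 10, 200
a_true = steering(12.0, n_ant)                   # actual signal direction
a_nom = steering(10.0, n_ant)                    # presumed (mismatched) steering vector
interf = steering(-40.0, n_ant)

# Snapshots: desired signal + strong interference + noise, then a sample covariance estimate
X = (np.sqrt(10.0) * a_true[:, None] * rng.standard_normal(n_snap)
     + np.sqrt(100.0) * interf[:, None] * rng.standard_normal(n_snap)
     + (rng.standard_normal((n_ant, n_snap)) + 1j * rng.standard_normal((n_ant, n_snap))) / np.sqrt(2))
R = X @ X.conj().T / n_snap

def mvdr(R, a, loading=0.0):
    """MVDR weights; diagonal loading adds robustness to covariance/steering errors."""
    Rl = R + loading * np.eye(len(a))
    w = np.linalg.solve(Rl, a)
    return w / (a.conj() @ w)

for load in (0.0, 10.0):                         # the loading level is an assumed tuning parameter
    w = mvdr(R, a_nom, load)
    sig_gain = abs(w.conj() @ a_true)
    int_gain = abs(w.conj() @ interf)
    print(f"loading={load:5.1f}  |gain to signal|={sig_gain:.3f}  |gain to interferer|={int_gain:.4f}")
```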

  2. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit

    2017-01-01

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges remain in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  3. Domain decomposition method for solving elliptic problems in unbounded domains

    International Nuclear Information System (INIS)

    Khoromskij, B.N.; Mazurkevich, G.E.; Zhidkov, E.P.

    1991-01-01

    Computational aspects of the box domain decomposition (DD) method for solving boundary value problems in an unbounded domain are discussed. A new variant of the DD method for elliptic problems in unbounded domains is suggested. It is based on a partitioning of the unbounded domain adapted to the given asymptotic decay of the unknown function at infinity. A comparison of computational expenditures is given for the boundary integral method and the suggested DD algorithm. 29 refs.; 2 figs.; 2 tabs

  4. Multilevel domain decomposition for electronic structure calculations

    International Nuclear Information System (INIS)

    Barrault, M.; Cances, E.; Hager, W.W.; Le Bris, C.

    2007-01-01

    We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure

  5. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on a partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  6. Simulation of two-phase flows by domain decomposition

    International Nuclear Information System (INIS)

    Dao, T.H.

    2013-01-01

    This thesis deals with numerical simulations of compressible fluid flows by implicit finite volume methods. Firstly, we studied and implemented an implicit version of the Roe scheme for compressible single-phase and two-phase flows. Thanks to the Newton method for solving nonlinear systems, our schemes are conservative. Unfortunately, the resolution of the nonlinear systems is very expensive. It is therefore essential to use an efficient algorithm to solve these systems. For large matrices, we often use iterative methods whose convergence depends on the spectrum. We studied the spectrum of the linear system and proposed a strategy, called Scaling, to improve the condition number of the matrix. Combined with the classical ILU preconditioner, our strategy significantly reduced the number of GMRES iterations for local systems and the computation time. We also show some satisfactory results for low Mach-number flows using the implicit centered scheme. We then studied and implemented a domain decomposition method for compressible fluid flows. We proposed a new interface variable which makes the Schur complement method easy to build and allows us to treat diffusion terms. Using the GMRES iterative solver rather than Richardson iterations for the interface system also provides better performance compared to other methods. We can also decompose the computational domain into any number of sub-domains. Moreover, the Scaling strategy for the interface system improved the condition number of the matrix and reduced the number of GMRES iterations. In comparison with classical distributed computing, we have shown that our method is more robust and efficient. (author) [fr]
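
    A minimal SciPy sketch of the general idea of combining a scaling strategy with an ILU-preconditioned GMRES solve: a badly scaled sparse system is symmetrically equilibrated with a diagonal scaling before being handed to GMRES. The model matrix and the simple Jacobi-type scaling are assumptions; the thesis' Roe-scheme Jacobians and its specific Scaling strategy are not reproduced.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Badly scaled model system: a sparse Laplacian with rows/columns of very different magnitudes
n = 2000
rng = np.random.default_rng(0)
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
d = sp.diags(10.0 ** rng.uniform(-3, 3, n))
A = (d @ lap @ d).tocsr()
b = rng.random(n)

# Simple symmetric diagonal (Jacobi) scaling: D^{-1/2} A D^{-1/2}
s = 1.0 / np.sqrt(A.diagonal())
S = sp.diags(s)
As = (S @ A @ S).tocsr()
bs = s * b

# ILU preconditioner for GMRES on the scaled system
ilu = spla.spilu(As.tocsc(), drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(As.shape, ilu.solve)

iters = {"count": 0}
def cb(res):
    iters["count"] += 1

ys, info = spla.gmres(As, bs, M=M, restart=50, callback=cb)
x = s * ys                                  # undo the scaling to recover the original unknowns
print("gmres info:", info, " iterations:", iters["count"],
      " relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```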

  7. Domain decomposition methods for fluid dynamics

    International Nuclear Information System (INIS)

    Clerc, S.

    1995-01-01

    A domain decomposition method for steady-state, subsonic fluid dynamics calculations, is proposed. The method is derived from the Schwarz alternating method used for elliptic problems, extended to non-linear hyperbolic problems. Particular emphasis is given on the treatment of boundary conditions. Numerical results are shown for a realistic three-dimensional two-phase flow problem with the FLICA-4 code for PWR cores. (from author). 4 figs., 8 refs

  8. Domain decomposition multigrid for unstructured grids

    Energy Technology Data Exchange (ETDEWEB)

    Shapira, Yair

    1997-01-01

    A two-level preconditioning method for the solution of elliptic boundary value problems using finite element schemes on possibly unstructured meshes is introduced. It is based on a domain decomposition and a Galerkin scheme for the coarse level vertex unknowns. For both the implementation and the analysis, it is not required that the curves of discontinuity in the coefficients of the PDE match the interfaces between subdomains. Generalizations to nonmatching or overlapping grids are made.

  9. Domain decomposition methods for mortar finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Widlund, O.

    1996-12-31

    In the last few years, domain decomposition methods, previously developed and tested for standard finite element methods and elliptic problems, have been extended and modified to work for mortar and other nonconforming finite element methods. A survey will be given of work carried out jointly with Yves Achdou, Mario Casarin, Maksymilian Dryja and Yvon Maday. Results on the p- and h-p-version finite elements will also be discussed.

  10. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    Science.gov (United States)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  11. Simplified approaches to some nonoverlapping domain decomposition methods

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method, and other simple technical tools include "local-global" and "global-local" techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method", and the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.

  12. Lattice QCD with Domain Decomposition on Intel Xeon Phi Co-Processors

    Energy Technology Data Exchange (ETDEWEB)

    Heybrock, Simon; Joo, Balint; Kalamkar, Dhiraj D; Smelyanskiy, Mikhail; Vaidyanathan, Karthikeyan; Wettig, Tilo; Dubey, Pradeep

    2014-12-01

    The gap between the cost of moving data and the cost of computing continues to grow, making it ever harder to design iterative solvers on extreme-scale architectures. This problem can be alleviated by alternative algorithms that reduce the amount of data movement. We investigate this in the context of Lattice Quantum Chromodynamics and implement such an alternative solver algorithm, based on domain decomposition, on Intel Xeon Phi co-processor (KNC) clusters. We demonstrate close-to-linear on-chip scaling to all 60 cores of the KNC. With a mix of single- and half-precision the domain-decomposition method sustains 400-500 Gflop/s per chip. Compared to an optimized KNC implementation of a standard solver [1], our full multi-node domain-decomposition solver strong-scales to more nodes and reduces the time-to-solution by a factor of 5.

  13. Domain decomposition and multilevel integration for fermions

    International Nuclear Information System (INIS)

    Ce, Marco; Giusti, Leonardo; Schaefer, Stefan

    2016-01-01

    The numerical computation of many hadronic correlation functions is exceedingly difficult due to the exponentially decreasing signal-to-noise ratio with the distance between source and sink. Multilevel integration methods, using independent updates of separate regions in space-time, are known to be able to solve such problems but have so far been available only for pure gauge theory. We present first steps in the direction of making such integration schemes amenable to theories with fermions, by factorizing a given observable via an approximated domain decomposition of the quark propagator. This allows for multilevel integration of the (large) factorized contribution to the observable, while its (small) correction can be computed in the standard way.

  14. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

  15. Time space domain decomposition methods for reactive transport - Application to CO2 geological storage

    International Nuclear Information System (INIS)

    Haeberlein, F.

    2011-01-01

    Reactive transport modelling is a basic tool to model chemical reactions and flow processes in porous media. A totally reduced multi-species reactive transport model including kinetic and equilibrium reactions is presented. A structured numerical formulation is developed and different numerical approaches are proposed. Domain decomposition methods offer the possibility to split large problems into smaller subproblems that can be treated in parallel. The class of Schwarz-type domain decomposition methods that have proved to be high-performing algorithms in many fields of applications is presented with a special emphasis on the geometrical viewpoint. Numerical issues for the realisation of geometrical domain decomposition methods and transmission conditions in the context of finite volumes are discussed. We propose and validate numerically a hybrid finite volume scheme for advection-diffusion processes that is particularly well-suited for the use in a domain decomposition context. Optimised Schwarz waveform relaxation methods are studied in detail on a theoretical and numerical level for a two species coupled reactive transport system with linear and nonlinear coupling terms. Well-posedness and convergence results are developed and the influence of the coupling term on the convergence behaviour of the Schwarz algorithm is studied. Finally, we apply a Schwarz waveform relaxation method on the presented multi-species reactive transport system. (author)

  16. Two-phase flow steam generator simulations on parallel computers using domain decomposition method

    International Nuclear Information System (INIS)

    Belliard, M.

    2003-01-01

    Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicholson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)

  17. Robust Algebraic Multilevel Methods and Algorithms

    CERN Document Server

    Kraus, Johannes

    2009-01-01

    This book deals with algorithms for the solution of linear systems of algebraic equations with large-scale sparse matrices, with a focus on problems that are obtained after discretization of partial differential equations using finite element methods. Provides a systematic presentation of the recent advances in robust algebraic multilevel methods. Can be used for advanced courses on the topic.

  18. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
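
    A toy sketch of the Lloyd iteration on which the CVP partitioning builds: alternate nearest-site assignment of samples and moving each site to the centroid of its Voronoi cell, which monotonically decreases the CVT energy. The sample-based centroids over a uniform density, the site count and the stopping tolerance are assumptions; the physics-motivated Voronoi Particle relaxation itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.uniform(0.0, 1.0, size=(50_000, 2))    # Monte Carlo samples of a uniform density
sites = rng.uniform(0.0, 1.0, size=(16, 2))          # generators, one per partitioning subdomain

for it in range(100):
    # Assignment step: nearest-site (Voronoi) assignment of the samples
    d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    # Update step: move each site to the centroid of its Voronoi cell
    new_sites = sites.copy()
    for i in range(len(sites)):
        members = samples[assign == i]
        if len(members):
            new_sites[i] = members.mean(axis=0)
    shift = np.linalg.norm(new_sites - sites, axis=1).max()
    sites = new_sites
    if shift < 1e-4:                                  # stop once the generators barely move
        break

d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
print(f"Lloyd iterations: {it + 1}, CVT energy: {d2.min(axis=1).mean():.5f}")
```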

  19. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo; Langer, Andreas; Schö nlieb, Carola-Bibiane

    2010-01-01

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation

  20. Domain decomposition methods and deflated Krylov subspace iterations

    NARCIS (Netherlands)

    Nabben, R.; Vuik, C.

    2006-01-01

    The balancing Neumann-Neumann (BNN) and the additive coarse grid correction (BPS) preconditioner are fast and successful preconditioners within domain decomposition methods for solving partial differential equations. For certain elliptic problems these preconditioners lead to condition numbers which

  1. Mixed first- and second-order transport method using domain decomposition techniques for reactor core calculations

    International Nuclear Information System (INIS)

    Girardi, E.; Ruggieri, J.M.

    2003-01-01

    The aim of this paper is to present the latest developments made on a domain decomposition method applied to reactor core calculations. In this method, two kinds of balance equations, with two different numerical methods dealing with two different unknowns, are coupled. In the first part, the two transport balance equations (first-order and second-order) are presented together with the corresponding numerical methods: the Variational Nodal Method and the Discrete Ordinates Nodal Method. In the second part, the Multi-Method/Multi-Domain algorithm is introduced by applying the Schwarz domain decomposition to the multigroup eigenvalue problem of the transport equation. The resulting algorithm is then provided. The projection operators used to couple the two methods are detailed in the last part of the paper. Finally, some preliminary numerical applications on benchmarks are given, showing encouraging results. (authors)

  2. A robust human face detection algorithm

    Science.gov (United States)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histograms, morphological processing, and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
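
    A hedged sketch of the three-stage pipeline on a synthetic image: a crude skin-colour rule in normalized-RGB space (a stand-in for the paper's skin colour histogram), morphological opening/closing, and a geometrical aspect-ratio check of the connected components. The thresholds and sizes are assumptions, and the mouth/eye verification step is omitted.

```python
import numpy as np
from scipy import ndimage

# Synthetic 200x300 RGB image with one skin-toned elliptical blob (stand-in for a face)
h, w = 200, 300
img = np.full((h, w, 3), 40, dtype=float)            # dark background
yy, xx = np.mgrid[0:h, 0:w]
face = ((yy - 90) / 45) ** 2 + ((xx - 150) / 35) ** 2 <= 1.0
img[face] = (200.0, 140.0, 110.0)                    # skin-like colour

# Stage 1: skin-colour segmentation (simple normalized-RGB rule; the paper uses a histogram model)
s = img.sum(axis=2) + 1e-6
r, g = img[..., 0] / s, img[..., 1] / s
skin = (r > 0.38) & (r < 0.55) & (g > 0.25) & (g < 0.37)

# Stage 2: morphological processing to remove speckle and fill small holes
skin = ndimage.binary_opening(skin, structure=np.ones((5, 5)))
skin = ndimage.binary_closing(skin, structure=np.ones((9, 9)))

# Stage 3: geometrical analysis of connected components (face-like size and aspect ratio)
labels, n = ndimage.label(skin)
for sl in ndimage.find_objects(labels):
    height, width = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
    if height * width > 500 and 0.8 <= height / width <= 2.0:
        print("face candidate at rows", (sl[0].start, sl[0].stop), "cols", (sl[1].start, sl[1].stop))
```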

  3. Solving radiative transfer problems in highly heterogeneous media via domain decomposition and convergence acceleration techniques

    International Nuclear Information System (INIS)

    Previti, Alberto; Furfaro, Roberto; Picca, Paolo; Ganapol, Barry D.; Mostacci, Domiziano

    2011-01-01

    This paper deals with finding accurate solutions for photon transport problems in highly heterogeneous media quickly, efficiently, and with modest memory resources. We propose an extended version of the analytical discrete ordinates method, coupled with domain decomposition-derived algorithms and non-linear convergence acceleration techniques. Numerical performance is evaluated using a challenging case study available in the literature. A study of accuracy versus computational time and memory requirements is reported for transport calculations relevant to remote sensing applications.

  4. Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver

    Czech Academy of Sciences Publication Activity Database

    Kůs, Pavel; Šístek, Jakub

    2017-01-01

    Roč. 110, August (2017), s. 34-54 ISSN 0965-9978 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : adaptive mesh refinement * parallel algorithms * domain decomposition Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016 http://www.sciencedirect.com/science/article/pii/S0965997816305737

  5. Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver

    Czech Academy of Sciences Publication Activity Database

    Kůs, Pavel; Šístek, Jakub

    2017-01-01

    Roč. 110, August (2017), s. 34-54 ISSN 0965-9978 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : adaptive mesh refinement * parallel algorithms * domain decomposition Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 3.000, year: 2016 http://www.sciencedirect.com/science/article/pii/S0965997816305737

  6. B-spline Collocation with Domain Decomposition Method

    International Nuclear Information System (INIS)

    Hidayat, M I P; Parman, S; Ariwahjoedi, B

    2013-01-01

    A global B-spline collocation method has previously been developed and successfully implemented by the present authors for solving elliptic partial differential equations in arbitrary complex domains. However, the global B-spline approximation, which simply reduces to a Bezier approximation of any degree p with C⁰ continuity, has led to the use of B-spline bases of high order in order to achieve high accuracy. The need for B-spline bases of high order in the global method is more prominent in domains of large dimension, and for increased numbers of collocation points it may also lead to ill-conditioning problems. In this study, overlapping domain decomposition based on the multiplicative Schwarz algorithm is combined with the global method. Our objective is two-fold: to improve the accuracy through the combination technique, and to investigate the influence of the combination technique on the B-spline basis orders required for a given accuracy. It is shown that the combination method produces higher accuracy with B-spline bases of much lower order than those needed in the initial method. Hence, the approximation stability of the B-spline collocation method is also increased.

  7. TAO-robust backpropagation learning algorithm.

    Science.gov (United States)

    Pernía-Espinoza, Alpha V; Ordieres-Meré, Joaquín B; Martínez-de-Pisón, Francisco J; González-Marcos, Ana

    2005-03-01

    In several fields, such as industrial modelling, multilayer feedforward neural networks are often used as universal function approximators. These supervised neural networks are commonly trained with the traditional backpropagation learning algorithm, which minimises the mean squared error (mse) of the training data. However, in the presence of corrupted data (outliers) this training scheme may produce wrong models. We combine the benefits of the non-linear regression model tau-estimates [introduced by Tabatabai, M. A., Argyros, I. K., Robust Estimation and testing for general nonlinear regression models. Applied Mathematics and Computation 58 (1993) 85-101] with the backpropagation algorithm to produce the TAO-robust learning algorithm, in order to deal with the problems of modelling with outliers. The cost function of this approach has a bounded influence function given by the weighted average of two psi functions, one corresponding to a very robust estimate and the other to a highly efficient estimate. The advantages of the proposed algorithm are studied with an example.
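
    The TAO-robust cost itself (the weighted average of two psi functions) is not reproduced here; as a hedged stand-in, the sketch below trains a small one-hidden-layer network by backpropagation with a bounded-influence Huber-type gradient and compares it with plain MSE on data containing gross outliers, to illustrate why bounding the influence function resists corrupted targets. Network size, learning rate and the Huber threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * x)                                            # clean target
out = rng.choice(len(x), 20, replace=False)
y_train = y.copy()
y_train[out] += rng.choice([-4.0, 4.0], size=(20, 1))        # 10% gross outliers

def train(loss_grad, epochs=3000, lr=0.05, hidden=20):
    """Train a 1-hidden-layer tanh network with full-batch gradient descent."""
    W1 = rng.standard_normal((1, hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.5
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(x @ W1 + b1)
        pred = h @ W2 + b2
        d = loss_grad(pred - y_train) / len(x)               # dLoss/dpred, averaged over the batch
        gW2, gb2 = h.T @ d, d.sum(0)
        dh = (d @ W2.T) * (1 - h ** 2)
        gW1, gb1 = x.T @ dh, dh.sum(0)
        W1 -= lr * gW1
        b1 -= lr * gb1
        W2 -= lr * gW2
        b2 -= lr * gb2
    return np.tanh(x @ W1 + b1) @ W2 + b2

mse_grad = lambda e: 2 * e                                   # unbounded influence function
huber_grad = lambda e, c=0.5: np.clip(e, -c, c)              # bounded influence function (robust)

for name, g in [("MSE", mse_grad), ("Huber", huber_grad)]:
    pred = train(g)
    print(f"{name:6s} fit error vs clean target: {np.abs(pred - y).mean():.3f}")
```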

  8. Communication strategies for angular domain decomposition of transport calculations on message passing multiprocessors

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1997-01-01

    The effect of three communication schemes for solving Arbitrarily High Order Transport (AHOT) methods of the Nodal type on parallel performance is examined via direct measurements and performance models. The target architecture in this study is Oak Ridge National Laboratory's 128-node Paragon XP/S 5 computer, and the parallelization is based on the Parallel Virtual Machine (PVM) library. However, the conclusions reached can be easily generalized to a large class of message passing platforms and communication software. The three schemes considered here are: (1) PVM's global operations (broadcast and reduce), which utilize the Paragon's native corresponding operations based on spanning tree routing; (2) the Bucket algorithm, wherein the angular domain decomposition of the mesh sweep is complemented with a spatial domain decomposition of the accumulation of the scalar flux from the angular flux and of the convergence test; (3) a distributed memory version of the Bucket algorithm that pushes the spatial domain decomposition one step further by actually distributing the fixed source and flux iterates over the memories of the participating processes. The conclusion is that the Bucket algorithm is the most efficient of the three if all participating processes have sufficient memory to hold the entire problem arrays; otherwise, the third scheme becomes necessary, at an additional cost to speedup and parallel efficiency that is quantifiable via the parallel performance model

  9. Robust MST-Based Clustering Algorithm.

    Science.gov (United States)

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters may be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode obtained by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
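
    The letter's density-based coarsening and minimax grouping are not reproduced here; the sketch below shows the classical MST clustering baseline that the robust algorithm improves on (build the MST, cut the k-1 heaviest edges, read clusters off the connected components), using SciPy's graph routines on illustrative data.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
# Three well-separated 2D blobs (illustrative data)
pts = np.vstack([rng.normal(c, 0.15, size=(60, 2)) for c in ((0, 0), (3, 0), (1.5, 2.5))])

def mst_clusters(points, k):
    """Classical MST clustering: remove the k-1 heaviest MST edges, return component labels."""
    dist = squareform(pdist(points))
    mst = minimum_spanning_tree(csr_matrix(dist)).toarray()
    edges = np.argwhere(mst > 0)
    weights = mst[edges[:, 0], edges[:, 1]]
    cut = edges[np.argsort(weights)[-(k - 1):]]          # the k-1 heaviest edges link the clusters
    mst[cut[:, 0], cut[:, 1]] = 0.0                      # cut them
    n_comp, labels = connected_components(csr_matrix(mst), directed=False)
    return labels

labels = mst_clusters(pts, k=3)
for c in np.unique(labels):
    print(f"cluster {c}: {np.sum(labels == c)} points, centre {pts[labels == c].mean(axis=0).round(2)}")
```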

  10. Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP

    Science.gov (United States)

    Chan, Tony F.; Fatoohi, Rod A.

    1990-01-01

    The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.

  11. 22nd International Conference on Domain Decomposition Methods

    CERN Document Server

    Gander, Martin; Halpern, Laurence; Krause, Rolf; Pavarino, Luca

    2016-01-01

    These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures, and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.

  12. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.

    2007-12-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver, based on a mixed dual finite element method, has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)

  13. Parallel performance of the angular versus spatial domain decomposition for discrete ordinates transport methods

    International Nuclear Information System (INIS)

    Fischer, J.W.; Azmy, Y.Y.

    2003-01-01

    A previously reported parallel performance model for Angular Domain Decomposition (ADD) of the Discrete Ordinates method for solving multidimensional neutron transport problems is revisited for further validation. Three communication schemes, native MPI, the bucket algorithm, and the distributed bucket algorithm, are included in the validation exercise that is successfully conducted on a Beowulf cluster. The parallel performance model is comprised of three components: serial, parallel, and communication. The serial component is largely independent of the number of participating processors, P, while the parallel component decreases like 1/P. These two components are independent of the communication scheme, in contrast with the communication component, which typically increases with P in a manner highly dependent on the global reduce algorithm. Correct trends for each component and each communication scheme were measured for the Arbitrarily High Order Transport (AHOT) code, thus validating the performance models. Furthermore, extensive experiments illustrate the superiority of the bucket algorithm. The primary question addressed in this research is: for a given problem size, which domain decomposition method, angular or spatial, is best suited to parallelize Discrete Ordinates methods on a specific computational platform? We address this question for three-dimensional applications via parallel performance models that include parameters specifying the problem size and system performance: the above-mentioned ADD model, and a previously constructed and validated Spatial Domain Decomposition (SDD) model. We conclude that for large problems the parallel component dwarfs the communication component even on moderately large numbers of processors. The main advantages of SDD are: (a) scalability to higher numbers of processors, of the order of the number of computational cells; (b) smaller memory requirement; (c) better performance than ADD on high-end platforms and large numbers of processors.
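
    The three-component model above translates directly into a small analytical calculation. The sketch below evaluates run time, speedup, and efficiency under an assumed logarithmic communication term (as for a spanning-tree global reduce); all timing constants are illustrative assumptions rather than measurements from the record.

```python
# Illustrative evaluation of the three-component parallel performance model
# T(P) = T_serial + T_parallel / P + T_comm(P). The logarithmic communication term and
# all timing constants below are assumptions, not values taken from the record.
import math

def run_time(p, t_serial=2.0, t_parallel=960.0, t_msg=0.05):
    t_comm = t_msg * math.ceil(math.log2(p)) if p > 1 else 0.0
    return t_serial + t_parallel / p + t_comm

t1 = run_time(1)
for p in (1, 4, 16, 64, 256):
    tp = run_time(p)
    speedup = t1 / tp
    print(f"P={p:4d}  T={tp:8.2f}s  speedup={speedup:7.2f}  efficiency={speedup / p:5.2f}")
```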

  14. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

    Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation-based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.

  15. Domain decomposition methods for solving an image problem

    Energy Technology Data Exchange (ETDEWEB)

    Tsui, W.K.; Tong, C.S. [Hong Kong Baptist College (Hong Kong)

    1994-12-31

    The domain decomposition method is a technique to break up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential equations and are therefore particularly useful for solving elliptic partial differential equations. In this paper, the authors apply the so-called covering preconditioner, which is based on the information of the operator under investigation and is therefore suitable for various kinds of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem. The image restoration problem is to extract an original image which has been degraded by a known convolution process and additive Gaussian noise.

  16. A PARALLEL NONOVERLAPPING DOMAIN DECOMPOSITION METHOD FOR STOKES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Mei-qun Jiang; Pei-liang Dai

    2006-01-01

    A nonoverlapping domain decomposition iterative procedure is developed and analyzed for generalized Stokes problems and their finite element approximations in R^N (N = 2, 3). The method is based on a mixed-type consistency condition with two parameters as a transmission condition, together with a derivative-free transmission data updating technique on the artificial interfaces. The method can be applied to a general multi-subdomain decomposition and implemented naturally on parallel machines with simple local communications.

  17. Domain decomposition method for solving the neutron diffusion equation

    International Nuclear Information System (INIS)

    Coulomb, F.

    1989-03-01

    The aim of this work is to study methods for solving the neutron diffusion equation; we are interested in methods based on a classical finite element discretization and well suited for use on parallel computers. Domain decomposition methods seem to answer this preoccupation. This study deals with a decomposition of the domain. A theoretical study is carried out for Lagrange finite elements and some examples are given; in the case of mixed dual finite elements, the study is based on examples. [fr]

  18. An Experiment of Robust Parallel Algorithm for the Eigenvalue problem of a Multigroup Neutron Diffusion based on modified FETI-DP : Part 2

    International Nuclear Information System (INIS)

    Chang, Jonghwa

    2014-01-01

    Today, we can use a computer cluster consisting of a few hundred CPUs on a reasonable budget. Such a computer system enables us to do detailed modeling of a reactor core. The detailed modeling will improve the safety and the economics of a nuclear reactor by eliminating unnecessary conservatism or missing considerations. To take advantage of such a cluster computer, efficient parallel algorithms must be developed. The mechanical structure analysis community has studied the domain decomposition method to solve the stress-strain equation using finite element methods. One of the most successful domain decomposition methods in terms of robustness is FETI-DP. We modified the original FETI-DP to solve the eigenvalue problem for the multi-group diffusion problem in a previous study. In this study, we report the result of recent modifications to handle the three-dimensional subdomain partitioning and the sub-domain multi-group problem. The modified FETI-DP algorithm has been successfully applied to the eigenvalue problem of the multi-group neutron diffusion equation. The overall CPU time decreases as the number of sub-domains (partitions) increases. However, there may be a limit to this decrease, since the growing number of primal points increases the CPU time spent on the solution of the global equation. Even distribution of the computational load (criterion a) is important to achieve fast computation. The subdomain partition can be performed effectively using a suitable graph partitioning package such as MeTIS.

  19. An Experiment of Robust Parallel Algorithm for the Eigenvalue problem of a Multigroup Neutron Diffusion based on modified FETI-DP : Part 2

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jonghwa [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Today, we can use a computer cluster consisting of a few hundred CPUs on a reasonable budget. Such a computer system enables us to do detailed modeling of a reactor core. The detailed modeling will improve the safety and the economics of a nuclear reactor by eliminating unnecessary conservatism or missing considerations. To take advantage of such a cluster computer, efficient parallel algorithms must be developed. The mechanical structure analysis community has studied the domain decomposition method to solve the stress-strain equation using finite element methods. One of the most successful domain decomposition methods in terms of robustness is FETI-DP. We modified the original FETI-DP to solve the eigenvalue problem for the multi-group diffusion problem in a previous study. In this study, we report the result of recent modifications to handle the three-dimensional subdomain partitioning and the sub-domain multi-group problem. The modified FETI-DP algorithm has been successfully applied to the eigenvalue problem of the multi-group neutron diffusion equation. The overall CPU time decreases as the number of sub-domains (partitions) increases. However, there may be a limit to this decrease, since the growing number of primal points increases the CPU time spent on the solution of the global equation. Even distribution of the computational load (criterion a) is important to achieve fast computation. The subdomain partition can be performed effectively using a suitable graph partitioning package such as MeTIS.

  20. An Experiment of Robust Parallel Algorithm for the Eigenvalue problem of a Multigroup Neutron Diffusion based on modified FETI-DP

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jonghwa [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Parallelization of Monte Carlo simulation is widely adopted. There are also several parallel algorithms developed for SN transport theory using the parallel wave sweeping algorithm and for the CPM using parallel ray tracing. For practical reactor physics applications, the thermal feedback and burnup effects on the multigroup cross sections should be considered. In this respect, the domain decomposition method (DDM) is suitable for distributing the expensive cross section calculation work. Parallel transport and diffusion codes based on the Raviart-Thomas mixed finite element method were developed. However, most of the developed methods rely on the heuristic convergence of flux and current at the domain interfaces, and convergence was not attained in some cases. The mechanical stress computation community has also worked on the DDM to solve the stress-strain equation using finite element methods. The most successful domain decomposition method in terms of robustness is FETI-DP. We have modified the original FETI-DP to solve the eigenvalue problem for the multigroup diffusion problem in this study.

  1. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  2. A Novel Evolutionary Algorithm for Designing Robust Analog Filters

    Directory of Open Access Journals (Sweden)

    Shaobo Li

    2018-03-01

    Designing robust circuits that withstand environmental perturbation and device degradation is critical for many applications. Traditional robust circuit design is mainly done by tuning parameters to improve system robustness. However, the topological structure of a system may set a limit on the robustness achievable through parameter tuning. This paper proposes a new evolutionary algorithm for robust design that exploits the open-ended topological search capability of genetic programming (GP) coupled with bond graph modeling. We applied our GP-based robust design (GPRD) algorithm to evolve robust lowpass and highpass analog filters. Compared with a traditional robust design approach based on a state-of-the-art real-parameter genetic algorithm (GA), our GPRD algorithm, with a fitness criterion rewarding robustness with respect to parameter perturbations, can evolve more robust filters than what was achieved through parameter tuning alone. We also find that inappropriate GA tuning may mislead the search process and that multiple-simulation and perturbed fitness evaluation methods for evolving robustness have complementary behaviors, with no absolute advantage of one over the other.

  3. A TFETI domain decomposition solver for elastoplastic problems

    Czech Academy of Sciences Publication Activity Database

    Čermák, M.; Kozubek, T.; Sysala, Stanislav; Valdman, J.

    2014-01-01

    Roč. 231, č. 1 (2014), s. 634-653 ISSN 0096-3003 Institutional support: RVO:68145535 Keywords : elastoplasticity * Total FETI domain decomposition method * Finite element method * Semismooth Newton method Subject RIV: BA - General Mathematics Impact factor: 1.551, year: 2014 http://ac.els-cdn.com/S0096300314000253/1-s2.0-S0096300314000253-main.pdf?_tid=33a29cf4-996a-11e3-8c5a-00000aacb360&acdnat=1392816896_4584697dc26cf934dcf590c63f0dbab7

  4. Domain decomposition methods for the neutron diffusion problem

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2010-01-01

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, simplified transport (SPN) or diffusion approximations are often used. The MINOS solver developed at CEA Saclay uses a mixed dual finite element method for the resolution of these problems, and has shown its efficiency. In order to take into account the heterogeneities of the geometry, a very fine mesh is generally required, which leads to expensive calculations for industrial applications. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose here two domain decomposition methods based on the MINOS solver. The first approach is a component mode synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is an iterative method based on a non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. Numerical results on parallel computers are presented for the diffusion model on realistic 2D and 3D cores. (authors)

  5. Domain decomposition methods for core calculations using the MINOS solver

    International Nuclear Information System (INIS)

    Guerin, P.; Baudron, A. M.; Lautard, J. J.

    2007-01-01

    Cell-by-cell homogenized transport calculations of an entire nuclear reactor core are currently too expensive for industrial applications, even if a simplified transport (SPn) approximation is used. In order to take advantage of parallel computers, we propose here two domain decomposition methods using the mixed dual finite element solver MINOS. The first one is a modal synthesis method on overlapping sub-domains: several eigenmode solutions of a local problem on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second one is an iterative method based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, we solve the problem on each sub-domain with the interface conditions given by the solutions on the adjacent sub-domains estimated at the previous iteration. For these two methods, we give numerical results which demonstrate their accuracy and their efficiency for the diffusion model on realistic 2D and 3D cores. (authors)

  6. Finite Algorithms for Robust Linear Regression

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun

    1990-01-01

    The Huber M-estimator for robust linear regression is analyzed. Newton type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may...
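
    The record analyzes Newton-type methods for the Huber M-estimator; as a simple illustration of what the estimator computes, the sketch below instead uses the common iteratively reweighted least squares (IRLS) route. The tuning constant, the scale estimate, and the synthetic data are illustrative assumptions, and this is not the paper's finite Newton algorithm.

```python
# Huber M-estimator for linear regression via IRLS -- a common alternative to the
# Newton-type methods analyzed in the record (not the paper's algorithm).
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50, tol=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares start
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12
        u = np.abs(r / scale)
        w = np.where(u <= c, 1.0, c / u)          # Huber weights: 1 inside, c/|r| outside
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.linalg.norm(beta_new - beta) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
y[:5] += 10.0                                      # gross outliers
print("robust fit:", huber_irls(X, y))
```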

  7. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
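
    The flavour of the modified algorithm, a Widrow-Hoff update whose reduction factors decay with time so that the weights settle despite bounded noise, can be illustrated in a few lines. The sketch below is not the paper's exact rule; the decay schedule, noise bound, and data are assumptions.

```python
# Widrow-Hoff (LMS) style adaptation with time-varying reduction factors, in the spirit
# of the modified algorithm described in the record (a sketch, not its exact rule).
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([0.5, -1.0, 2.0])   # unknown weights to be learned
w = np.zeros(3)

for k in range(1, 2001):
    x = rng.normal(size=3)                  # input pattern
    noise = 0.05 * rng.uniform(-1.0, 1.0)   # bounded measurement noise
    d = w_true @ x + noise                  # desired (noisy) output
    mu = 1.0 / (10.0 + k)                   # decaying reduction factor
    w += mu * (d - w @ x) * x               # Widrow-Hoff update

print("estimated weights:", np.round(w, 3))
```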

  8. Europlexus: a domain decomposition method in explicit dynamics

    International Nuclear Information System (INIS)

    Faucher, V.; Hariddh, Bung; Combescure, A.

    2003-01-01

    Explicit time integration methods are used in structural dynamics to simulate fast transient phenomena, such as impacts or explosions. A very fine analysis is required in the vicinity of the loading areas but extending the same method, and especially the same small time-step, to the whole structure frequently yields excessive calculation times. We thus perform a dual Schur domain decomposition, to divide the global problem into several independent ones, to which is added a reduced size interface problem, to ensure connections between sub-domains. Each sub-domain is given its own time-step and its own mesh fineness. Non-matching meshes at the interfaces are handled. An industrial example demonstrates the interest of our approach. (authors)

  9. Neutron transport solver parallelization using a Domain Decomposition method

    International Nuclear Information System (INIS)

    Van Criekingen, S.; Nataf, F.; Have, P.

    2008-01-01

    A domain decomposition (DD) method is investigated for the parallel solution of the second-order even-parity form of the time-independent Boltzmann transport equation. The spatial discretization is performed using finite elements, and the angular discretization using spherical harmonic expansions (P N method). The main idea developed here is due to P.L. Lions. It consists in having sub-domains exchange not only interface point flux values, but also interface flux 'derivative' values. (The word 'derivative' is used with quotes because, in the case considered here, it in fact corresponds to the Ω.∇ operator, with Ω the angular variable vector and ∇ the spatial gradient operator.) A parameter α is introduced as a proportionality coefficient between point flux and 'derivative' values. This parameter can be tuned - so far heuristically - to optimize the method. (authors)

  10. DC Algorithm for Extended Robust Support Vector Machine.

    Science.gov (United States)

    Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

    2017-05-01

    Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and Eν-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with both types of nonconvexity while never entailing more computations than either Eν-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.

  11. A HYBRID ALGORITHM FOR THE ROBUST GRAPH COLORING PROBLEM

    Directory of Open Access Journals (Sweden)

    Román Anselmo Mora Gutiérrez

    2016-08-01

    A hybrid algorithm which combines mathematical programming techniques (Kruskal's algorithm and the strategy of maintaining arc consistency to solve the constraint satisfaction problem, CSP) with heuristic methods (the musical composition method and DSATUR) to solve the robust graph coloring problem (RGCP) is proposed in this paper. Experimental results show that this algorithm outperforms the other algorithms presented in the literature.

  12. DOMAIN DECOMPOSITION FOR POROELASTICITY AND ELASTICITY WITH DG JUMPS AND MORTARS

    KAUST Repository

    GIRAULT, V.

    2011-01-01

    We couple a time-dependent poroelastic model in a region with an elastic model in adjacent regions. We discretize each model independently on non-matching grids and we realize a domain decomposition on the interface between the regions by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where the computations of the pressure and displacement are decoupled. We show that the matrix of the interface problem is positive definite and establish error estimates for this scheme. © 2011 World Scientific Publishing Company.

  13. A non overlapping parallel domain decomposition method applied to the simplified transport equations

    International Nuclear Information System (INIS)

    Lathuiliere, B.; Barrault, M.; Ramet, P.; Roman, J.

    2009-01-01

    A reactivity computation requires computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult to tackle with our sequential solver, based on the simplified transport equations, in terms of memory consumption and computational time. We therefore propose a non-overlapping domain decomposition method for the approximate resolution of the linear system to be solved at each inverse power iteration. Our method requires a low development effort, as the inner multigroup solver can be re-used without modification, and it allows us to adapt the numerical resolution locally (mesh, finite element order). Numerical results are obtained with a parallel implementation of the method on two different cases with a pin-by-pin discretization. These results are analyzed in terms of memory consumption and parallel efficiency. (authors)
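
    The outer iteration referred to above is the standard power iteration for the k-eigenvalue of a generalized eigenvalue problem. The sketch below runs it on a toy one-group 1D diffusion problem; the cross sections and mesh are illustrative assumptions, and the record's domain-decomposed inner solver is replaced by a direct sparse factorization.

```python
# Power iteration for the k-eigenvalue of a toy one-group 1D diffusion problem
# A*phi = (1/k) F*phi. Cross sections and mesh are illustrative assumptions; the
# record's domain-decomposed inner solver is replaced by a direct sparse factorization.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200                                        # spatial cells over a 100 cm slab
h = 100.0 / n
D, sigma_a, nu_sigma_f = 1.0, 0.010, 0.012     # assumed one-group constants

# Loss operator A = -D d2/dx2 + sigma_a with zero-flux boundaries.
A = sp.diags([-D / h**2, 2.0 * D / h**2 + sigma_a, -D / h**2],
             offsets=[-1, 0, 1], shape=(n, n), format="csc")
F = nu_sigma_f * sp.identity(n, format="csc")  # fission production operator

solve_A = spla.factorized(A)                   # reuse the factorization at every iteration
phi, k = np.ones(n), 1.0
for it in range(200):
    src = F @ phi
    phi_new = solve_A(src / k)
    k_new = k * (F @ phi_new).sum() / src.sum()
    if abs(k_new - k) < 1e-10:
        break
    phi, k = phi_new / np.linalg.norm(phi_new), k_new

print(f"k-effective ~ {k_new:.5f} after {it + 1} outer iterations")
```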

  14. A balancing domain decomposition method by constraints for advection-diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Tu, Xuemin; Li, Jing

    2008-12-10

    The balancing domain decomposition methods by constraints are extended to solving nonsymmetric, positive definite linear systems resulting from the finite element discretization of advection-diffusion equations. A pre-conditioned GMRES iteration is used to solve a Schur complement system of equations for the subdomain interface variables. In the preconditioning step of each iteration, a partially sub-assembled finite element problem is solved. A convergence rate estimate for the GMRES iteration is established, under the condition that the diameters of subdomains are small enough. It is independent of the number of subdomains and grows only slowly with the subdomain problem size. Numerical experiments for several two-dimensional advection-diffusion problems illustrate the fast convergence of the proposed algorithm.

  15. Domain Decomposition strategy for pin-wise full-core Monte Carlo depletion calculation with the reactor Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)

    2016-06-15

    Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.

  16. A robust firearm identification algorithm of forensic ballistics specimens

    Science.gov (United States)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    There are several inherent difficulties in the existing firearm identification algorithms, including the need for physical interpretation and long processing times. Therefore, the aim of this study is to propose a robust algorithm for firearm identification based on extracting a set of informative features from the segmented region of interest (ROI) of simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least squares estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model, and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.

  17. Green cloud environment by using robust planning algorithm

    Directory of Open Access Journals (Sweden)

    Jyoti Thaman

    2017-11-01

    Cloud computing provides a framework for seamless access to resources through a network. Access to resources is quantified through SLAs between service providers and users. Service providers try to exploit their resources as fully as possible and to reduce their idle time. Growing energy concerns make matters even harder for service providers. Users' requests are served by allocating their tasks to resources in cloud and grid environments through scheduling and planning algorithms. With only a few planning algorithms in existence, planning and scheduling algorithms are rarely differentiated. This paper proposes a robust hybrid planning algorithm, Robust Heterogeneous-Earliest-Finish-Time (RHEFT), for binding tasks to VMs. The allocation of tasks to VMs is based on a novel task matching algorithm called Interior Scheduling. The performance of the proposed RHEFT algorithm is compared with Heterogeneous-Earliest-Finish-Time (HEFT) and Distributed HEFT (DHEFT) for various parameters like utilization ratio, makespan, speed-up, and energy consumption. RHEFT's consistent performance against HEFT and DHEFT has established the robustness of the hybrid planning algorithm through rigorous simulations.

  18. A Robust Parallel Algorithm for Combinatorial Compressed Sensing

    Science.gov (United States)

    Mendoza-Smith, Rodrigo; Tanner, Jared W.; Wechsung, Florian

    2018-04-01

    In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k$ nonzero entries can be recovered from its sketch $Ax$ in time proportional to $\mathrm{nnz}(A)$ by the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios, in the setting of Gaussian signals and Gaussian additive noise.

  19. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1997-12-31

    The H∞ robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, the genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits direct control of the power tracking performance. In addition, the actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm achieves better performance under realistic constraints. Also, it is found that the genetic algorithm could be used as an effective tool in robust design. 4 refs., 6 figs. (Author)

  20. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1998-12-31

    The H∞ robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, the genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits direct control of the power tracking performance. In addition, the actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm achieves better performance under realistic constraints. Also, it is found that the genetic algorithm could be used as an effective tool in robust design. 4 refs., 6 figs. (Author)

  1. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. The algorithms depend heavily on accurate estimation of the position of particles as they traverse the inner detector elements. An artificial neural network algorithm is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The method recovers otherwise lost tracks in dense environments where particles are separated by distances comparable to the size of the detector read-out elements. Such environments are highly relevant for LHC run 2, e.g. in searches for heavy resonances. Within the scope of run 2 track reconstruction performance and upgrades, the robustness of the neural network algorithm is presented. The robustness has been studied by evaluating the stability of the algorithm's performance under a range of variations in the pixel detector conditions.

  2. Analysis of generalized Schwarz alternating procedure for domain decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Engquist, B.; Zhao, Hongkai [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the size of the overlap between subdomains, which is not desirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a convex combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM. (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
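
    The sensitivity to the overlap size that motivates this work is easy to reproduce on a model problem. The sketch below runs the classical Dirichlet-data alternating Schwarz method on a 1D Poisson problem with two overlapping subdomains and counts the sweeps needed for convergence at several overlap widths; it does not implement the generalized ∂u/∂n + Λu transmission condition, and the mesh size, overlaps, and tolerance are illustrative assumptions.

```python
# Classical (Dirichlet-data) alternating Schwarz for -u'' = f on (0,1), u(0)=u(1)=0,
# illustrating how the number of sweeps grows as the overlap shrinks. This sketches the
# classical SAM only, not the generalized du/dn + Lambda*u transmission condition.
import numpy as np

def poisson_solve(x, left, right, f):
    """Solve -u'' = f on the nodes x with Dirichlet values left, right."""
    h = x[1] - x[0]
    m = len(x) - 2                                   # number of interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f(x[1:-1])
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.concatenate(([left], np.linalg.solve(A, b), [right]))

n = 200
x = np.linspace(0.0, 1.0, n + 1)
f = lambda s: np.pi**2 * np.sin(np.pi * s)
u_mono = poisson_solve(x, 0.0, 0.0, f)               # mono-domain reference solution

for overlap in (4, 20, 60):                          # overlap width in grid cells
    m1, m2 = (n - overlap) // 2, (n + overlap) // 2  # left/right interface indices
    u = np.zeros(n + 1)
    for it in range(1, 501):
        u[: m2 + 1] = poisson_solve(x[: m2 + 1], 0.0, u[m2], f)  # sweep on (0, x_m2)
        u[m1:] = poisson_solve(x[m1:], u[m1], 0.0, f)            # sweep on (x_m1, 1)
        if np.max(np.abs(u - u_mono)) < 1e-6:
            break
    print(f"overlap = {overlap:3d} cells -> {it:3d} Schwarz sweeps")
```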

  3. Geomagnetic matching navigation algorithm based on robust estimation

    Science.gov (United States)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    Outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and the principle of robust estimation behind it is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude, and heading errors is obtained. The robust target function is constructed from the weight function and this mathematical expression. The geomagnetic matching problem is then converted to the solution of nonlinear equations, and Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is decreased to 7.75% of that of the conventional mean square difference (MSD) algorithm and to 18.39% of that of the conventional iterative contour matching algorithm when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017°, while the other two algorithms fail to match, when the outlier is 400 nT.

  4. A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems

    Energy Technology Data Exchange (ETDEWEB)

    Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2017-02-01

    Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
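
    As a minimal illustration of the Jacobian-free Newton-Krylov machinery used for the tight coupling, the sketch below solves a monolithic steady 1D nonlinear diffusion problem with SciPy's newton_krylov. It shows the JfNK solve only: the record's domain decomposition, stochastic boundary forcing, and conservative interface treatment are not reproduced, and the diffusivity law, source, and mesh are assumptions.

```python
# Jacobian-free Newton-Krylov (JfNK) solve of a steady 1D nonlinear diffusion problem
# d/dx( D(u) du/dx ) + s = 0 with D(u) = 1 + u^2, u(0) = 0, u(1) = 1. This shows only
# the JfNK mechanics; all coefficients below are assumed for illustration.
import numpy as np
from scipy.optimize import newton_krylov

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
source = 1.0                                    # assumed constant volumetric source

def residual(u_int):
    """Discrete residual on the interior nodes, with Dirichlet values appended."""
    u = np.concatenate(([0.0], u_int, [1.0]))
    d = 1.0 + u**2                              # nonlinear diffusivity
    d_face = 0.5 * (d[:-1] + d[1:])             # arithmetic average at the cell faces
    flux = d_face * (u[1:] - u[:-1]) / h        # fluxes at the n-1 faces
    return (flux[1:] - flux[:-1]) / h + source  # divergence of the flux plus source

u0 = np.linspace(0.0, 1.0, n)[1:-1]             # initial guess: linear profile
u_int = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
print("max |residual| at the JfNK solution:", np.abs(residual(u_int)).max())
```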

  5. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the needs of efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G, and B components of the raw data is used to initialize the following iterative method. After that, the bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291, and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
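
    A common building block for such pipelines is the classical gray-world statistic, which is the kind of channel-statistics initial estimate mentioned above. The sketch below implements only that statistic; the record's iterative refinement, adaptive step adjustment, and demosaicing stage are not reproduced, and the toy image is an assumption.

```python
# Gray-world white balance: scale the R and B channels so that all channel means match
# the green mean. This is only a classical channel-statistics step, not the record's
# full iterative algorithm.
import numpy as np

def gray_world(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 1]; returns a balanced copy."""
    means = rgb.reshape(-1, 3).mean(axis=0)       # per-channel means (R, G, B)
    gains = means[1] / np.maximum(means, 1e-6)    # green channel taken as the reference
    return np.clip(rgb * gains, 0.0, 1.0)

# Toy image with a bluish cast: after balancing, the channel means should be nearly equal.
rng = np.random.default_rng(3)
img = np.clip(rng.uniform(0.2, 0.8, size=(64, 64, 3)) * np.array([0.8, 1.0, 1.3]), 0.0, 1.0)
balanced = gray_world(img)
print("channel means before:", img.reshape(-1, 3).mean(axis=0).round(3))
print("channel means after :", balanced.reshape(-1, 3).mean(axis=0).round(3))
```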

  6. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  7. Ant Colony Algorithm and Simulation for Robust Airport Gate Assignment

    Directory of Open Access Journals (Sweden)

    Hui Zhao

    2014-01-01

    Airport gate assignment is a core task for airport ground operations. Because the departure and arrival times of flights may be influenced by many random factors, an airport gate assignment scheme may encounter gate conflicts and many other problems. This paper aims at finding a robust solution for the airport gate assignment problem. A mixed integer model is proposed to formulate the problem, and an ant colony algorithm is designed to solve this model. Simulation results show that, when robustness is taken into consideration, the anti-disturbance capability of the airport gate assignment scheme is much improved.

  8. A robust color image watermarking algorithm against rotation attacks

    Science.gov (United States)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on the quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by the QWT, and then the coefficients of the four low-frequency subbands are transformed by the DCT. An original binary watermark, scrambled by an Arnold map and an iterated sine chaotic system, is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extraction. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.

  9. A refined Frequency Domain Decomposition tool for structural modal monitoring in earthquake engineering

    Science.gov (United States)

    Pioldi, Fabio; Rizzi, Egidio

    2017-07-01

    Output-only structural identification is developed by a refined Frequency Domain Decomposition (rFDD) approach, towards assessing the current modal properties of heavily damped buildings (a challenging identification task) under strong ground motions. Structural responses to earthquake excitations are taken as input signals for the identification algorithm. A new dedicated computational procedure, based on coupled Chebyshev Type II bandpass filters, is outlined for the effective estimation of natural frequencies, mode shapes, and modal damping ratios. The identification technique is also coupled with a Gabor Wavelet Transform, resulting in an effective and self-contained time-frequency analysis framework. Simulated response signals generated by shear-type frames (with variable structural features) are used as a necessary validation condition. In this context, use is made of a complete set of seismic records taken from the FEMA P695 database, i.e. all 44 "Far-Field" (22 NS, 22 WE) earthquake signals. The modal estimates are statistically compared to their target values, proving the accuracy of the developed algorithm in providing prompt and accurate estimates of all current strong ground motion modal parameters. At this stage, such an analysis tool may be employed for convenient application in the realm of Earthquake Engineering, towards potential Structural Health Monitoring and damage detection purposes.
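
    The refined procedure builds on the basic Frequency Domain Decomposition step: estimate the output cross-spectral density matrix, take its singular value decomposition at every frequency line, and read natural frequencies from the peaks of the first singular value. The sketch below shows only that basic step on a synthetic two-channel response; the record's Chebyshev Type II filtering, wavelet coupling, and damping estimation are not reproduced, and the simulated modes, mode shapes, and noise levels are assumptions.

```python
# Basic Frequency Domain Decomposition step: SVD of the output cross-spectral density
# matrix at every frequency line; peaks of the first singular value indicate natural
# frequencies. A sketch of the underlying step only, not of the refined rFDD procedure.
import numpy as np
from scipy.signal import csd, fftconvolve, find_peaks

fs, T = 200.0, 120.0                              # sampling rate [Hz] and duration [s]
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(4)

def modal_response(f, zeta):
    """Unit-variance response of a lightly damped SDOF mode to white-noise excitation."""
    wd = 2.0 * np.pi * f * np.sqrt(1.0 - zeta**2)
    h = np.exp(-zeta * 2.0 * np.pi * f * t) * np.sin(wd * t)   # impulse response
    q = fftconvolve(rng.normal(size=t.size), h)[: t.size]
    return q / q.std()

q1, q2 = modal_response(3.0, 0.02), modal_response(8.0, 0.02)  # modes near 3 Hz and 8 Hz
y = np.vstack([
    1.0 * q1 + 0.5 * q2 + 0.01 * rng.normal(size=t.size),      # channel 1
    0.6 * q1 - 0.8 * q2 + 0.01 * rng.normal(size=t.size),      # channel 2
])

nch, nseg = y.shape[0], 2048
freqs, G = None, None
for i in range(nch):
    for j in range(nch):
        f_axis, Gij = csd(y[i], y[j], fs=fs, nperseg=nseg)
        if G is None:
            freqs, G = f_axis, np.zeros((f_axis.size, nch, nch), dtype=complex)
        G[:, i, j] = Gij

s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(freqs.size)])
peaks, _ = find_peaks(s1, prominence=0.1 * s1.max())
print("identified natural frequencies [Hz]:", np.round(freqs[peaks], 2))
```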

  10. Implicit upwind schemes for computational fluid dynamics. Solution by domain decomposition

    International Nuclear Information System (INIS)

    Clerc, S.

    1998-01-01

    In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with the improvement of the computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite volume schemes based on Godunov's approach are unsuited to computing low Mach number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady-state problem. (author)

  11. ROBUST ALGORITHMS OF PARAMETRIC ESTIMATION IN SOME STABILIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A.A. Vedyakov

    2016-07-01

    Subject of Research. The paper considers the task of keeping dynamic systems in a stable state by ensuring the stability of the trivial solution for various dynamic systems in the learning regime, by means of tuning their parameters. Method. The problems are solved by applying the ideology of constructing robust finitely convergent algorithms. Main Results. The concepts of parametric algorithmization of stability and of steady asymptotic stability are introduced, and results are presented on the synthesis of coarse gradient algorithms that solve the posed problems in a finite number of iterations. Practical Relevance. The results may be used for solving practical stabilization tasks in the operation of various engineering structures and devices.

  12. Robust Semi-Supervised Manifold Learning Algorithm for Classification

    Directory of Open Access Journals (Sweden)

    Mingxia Chen

    2018-01-01

    In recent years, manifold learning methods have been widely used in data classification to tackle the curse of dimensionality, since they can discover the potential intrinsic low-dimensional structures of high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms have been proposed to predict the labels of the unlabeled points, taking label information into account. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first predicted, and then a regularization term is constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.

  13. Optimized waveform relaxation domain decomposition method for discrete finite volume non stationary convection diffusion equation

    International Nuclear Information System (INIS)

    Berthe, P.M.

    2013-01-01

    In the context of nuclear waste repositories, we consider the numerical discretization of the non-stationary convection-diffusion equation. Discontinuous physical parameters and heterogeneous space and time scales lead us to use different space and time discretizations in different parts of the domain. In this work, we choose the discrete duality finite volume (DDFV) scheme and the discontinuous Galerkin scheme in time, coupled by an optimized Schwarz waveform relaxation (OSWR) domain decomposition method, because this allows the use of non-conforming space-time meshes. The main difficulty lies in finding an upwind discretization of the convective flux which remains local to a sub-domain and such that the multi-domain scheme is equivalent to the mono-domain one. These difficulties are first dealt with in the one-dimensional context, where different discretizations are studied. The chosen scheme introduces a hybrid unknown on the cell interfaces. The idea of upwinding with respect to this hybrid unknown is extended to the DDFV scheme in the two-dimensional setting. The well-posedness of the scheme and of an equivalent multi-domain scheme is shown. The latter is solved by an OSWR algorithm, the convergence of which is proved. The optimized parameters in the Robin transmission conditions are obtained by studying the continuous or discrete convergence rates. Several test cases, one of which is inspired by nuclear waste repositories, illustrate these results. (author) [fr]

  14. ROBUST CONTROL ALGORITHM FOR MULTIVARIABLE PLANTS WITH QUANTIZED OUTPUT

    Directory of Open Access Journals (Sweden)

    A. A. Margun

    2017-01-01

    The paper deals with a robust output control algorithm for multivariable plants under disturbances. The plant is described by a system of linear differential equations with known relative degrees. The plant parameters are unknown but belong to a known closed bounded set. The plant state vector is unmeasured, and the plant output is measured only via a static quantizer. The control algorithm is based on the high-gain feedback method. The developed controller provides exponential convergence of the tracking error to a bounded region, whose bounds depend on the quantizer parameters and the value of the external disturbances. Experimental validation of the proposed control algorithm is performed on the Twin Rotor MIMO System laboratory bench, a helicopter-like model with two degrees of freedom (pitch and yaw). DC motors are used as actuators, and the output signals are measured via optical encoders. A mathematical model of the laboratory bench is obtained. The proposed algorithm was compared with a proportional-integral-differential controller under output quantization. The obtained results confirm the efficiency of the proposed controller.

  15. Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method

    Directory of Open Access Journals (Sweden)

    Qing-He Yao

    2014-01-01

    The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the connection between the flow field and the concentration field. The linear systems arising from the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the domain decomposition system. Numerical results are validated by comparison with experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and oscillations appeared in past research reported by other researchers. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large-scale computation.

  16. Multigrid and multilevel domain decomposition for unstructured grids

    Energy Technology Data Exchange (ETDEWEB)

    Chan, T.; Smith, B.

    1994-12-31

    Multigrid has proven itself to be a very versatile method for the iterative solution of linear and nonlinear systems of equations arising from the discretization of PDEs. In some applications, however, no natural multilevel structure of grids is available, and these must be generated as part of the solution procedure. In this presentation the authors consider the problem of generating a multigrid algorithm when only a fine, unstructured grid is given. Their techniques generate a sequence of coarser grids by first forming an approximate maximal independent set of the vertices and then applying a Cavendish-type algorithm to form the coarser triangulation. Numerical tests indicate that convergence using this approach can be as fast as standard multigrid on a structured mesh, at least in two dimensions.
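
    The coarsening step starts from an approximate maximal independent set of fine-grid vertices, which a simple greedy sweep already produces. The sketch below shows only that greedy selection on a small grid graph; the Cavendish-type retriangulation of the selected vertices is not shown, and the adjacency-list representation is an assumption.

```python
# Greedy maximal independent set (MIS) of fine-grid vertices -- the first step of the
# coarsening strategy described above. The retriangulation step is not shown.
def maximal_independent_set(adjacency):
    """adjacency: dict mapping each vertex to an iterable of neighbouring vertices."""
    selected, blocked = set(), set()
    for v in adjacency:                  # any sweep order yields a maximal independent set
        if v not in blocked:
            selected.add(v)
            blocked.add(v)
            blocked.update(adjacency[v]) # neighbours of a selected vertex are excluded
    return selected

# Small structured example: a 4 x 4 grid graph whose vertices are (i, j) tuples.
grid = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= i + di < 4 and 0 <= j + dj < 4]
        for i in range(4) for j in range(4)}
coarse = maximal_independent_set(grid)
print(f"{len(coarse)} of {len(grid)} vertices kept on the coarse level:", sorted(coarse))
```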

  17. Robust ray-tracing algorithms for interactive dose rate evaluation

    International Nuclear Information System (INIS)

    Perrotte, L.

    2011-01-01

    More than ever, it is essential today to develop simulation tools to rapidly evaluate the dose rate received by operators working on nuclear sites. In order to easily study numerous different intervention scenarios, the computation times of the available software all have to be lowered. This mainly implies accelerating the geometrical computations needed for the dose rate evaluation. These computations consist in finding and sorting the whole list of intersections between a big 3D scene and multiple groups of 'radiative' rays meeting at the point where the dose has to be measured. In order to perform all these computations in less than a second, we first propose a GPU algorithm that enables the efficient management of one big group of coherent rays. Then we present a modification of this algorithm that guarantees the robustness of the ray-triangle intersection tests through the elimination of the precision issues due to floating-point arithmetic. This modification does not require the definition of scene-dependent coefficients ('epsilon' style) and only implies a small loss of performance (less than 10%). Finally we propose an efficient strategy to handle multiple ray groups (corresponding to multiple radiative objects) which uses the previous results. Thanks to these improvements, we are able to perform an interactive and robust dose rate evaluation on big 3D scenes: all of the intersections (more than 13 million) between 700 000 triangles and 12 groups of 100 000 rays each are found, sorted along each ray and transferred to the CPU in 470 milliseconds. (author) [fr]

  18. Planar ESPAR Array Design with Nonsymmetrical Pattern by Means of Finite-Element Method, Domain Decomposition, and Spherical Wave Expansion

    Directory of Open Access Journals (Sweden)

    Jesús García

    2012-01-01

    Full Text Available The application of a 3D domain decomposition finite-element and spherical mode expansion for the design of a planar ESPAR (electronically steerable passive array radiator) made with probe-fed circular microstrip patches is presented in this work. A global generalized scattering matrix (GSM) in terms of spherical modes is obtained analytically from the GSMs of the isolated patches by using rotation and translation properties of spherical waves. The whole behaviour of the array is characterized, including all the mutual coupling effects between its elements. This procedure has first been validated by analyzing an array of monopoles on a ground plane, and then it has been applied to synthesize a prescribed radiation pattern by optimizing the reactive loads connected to the feeding ports of the array of circular patches by means of a genetic algorithm.

  19. Domain decomposition with local refinement for flow simulation around a nuclear waste disposal site: direct computation versus simulation using code coupling with OCamlP3L

    Energy Technology Data Exchange (ETDEWEB)

    Clement, F.; Vodicka, A.; Weis, P. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Martin, V. [Institut National de Recherches Agronomiques (INRA), 92 - Chetenay Malabry (France); Di Cosmo, R. [Institut National de Recherches Agronomiques (INRA), 78 - Le Chesnay (France); Paris-7 Univ., 75 (France)

    2003-07-01

    We consider the application of a non-overlapping domain decomposition method with non-matching grids, based on Robin interface conditions, to the problem of flow surrounding an underground nuclear waste disposal site. We show with a simple example how one can refine the mesh locally around the storage with this technique. A second aspect is studied in this paper. The coupling between the sub-domains can be achieved in two ways: either directly (i.e. the domain decomposition algorithm is included in the code that solves the problems on the sub-domains) or using code coupling. In the latter case, each sub-domain problem is solved separately and the coupling is performed by another program. We wrote a coupling program in the functional language OCaml, using the OCamlP3L environment devoted to easing parallelism. In this way we test the code coupling and, at the same time, exploit the natural parallelism of domain decomposition methods. Some simple 2D numerical tests show promising results, and further studies are under way. (authors)

  20. Domain decomposition with local refinement for flow simulation around a nuclear waste disposal site: direct computation versus simulation using code coupling with OCamlP3L

    International Nuclear Information System (INIS)

    Clement, F.; Vodicka, A.; Weis, P.; Martin, V.; Di Cosmo, R.

    2003-01-01

    We consider the application of a non-overlapping domain decomposition method with non-matching grids, based on Robin interface conditions, to the problem of flow surrounding an underground nuclear waste disposal site. We show with a simple example how one can refine the mesh locally around the storage with this technique. A second aspect is studied in this paper. The coupling between the sub-domains can be achieved in two ways: either directly (i.e. the domain decomposition algorithm is included in the code that solves the problems on the sub-domains) or using code coupling. In the latter case, each sub-domain problem is solved separately and the coupling is performed by another program. We wrote a coupling program in the functional language OCaml, using the OCamlP3L environment devoted to easing parallelism. In this way we test the code coupling and, at the same time, exploit the natural parallelism of domain decomposition methods. Some simple 2D numerical tests show promising results, and further studies are under way. (authors)

  1. A robust star identification algorithm with star shortlisting

    Science.gov (United States)

    Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon

    2018-05-01

    A star tracker provides the most accurate attitude solution in terms of arc seconds compared to the other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space (LIS)" mode. Star pattern recognition, also known as star identification algorithm, forms the most crucial part of a star tracker in the LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. Firstly, the star IDs are shortlisted based on worst-case patch mismatch, and later stars are identified in the image by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images having magnitude uncertainty, noise stars, positional deviation, and varying size of the field of view. The proposed idea is also benchmarked with the state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on the 3104 real star images captured by a star tracker SST-20S currently mounted on a satellite. The proposed technique can achieve an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results depict that the proposed technique is highly robust and achieves a high speed of identification suitable for actual space applications.
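
    The core geometric test behind the shortlisting and the sequential angular match described above is the comparison of measured inter-star angles with catalogued ones. The Python sketch below is a hedged illustration of that single test only; the star IDs, the catalogue of pair angles and the tolerance are invented values, and the worst-case patch mismatch shortlisting of the record is not reproduced.

        import numpy as np

        def angular_separation(v1, v2):
            # Angle (radians) between two unit line-of-sight vectors.
            return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))

        def match_pair(observed_angle, catalog_pairs, tol=1e-4):
            # Return the catalogue star-ID pairs whose inter-star angle is within tol
            # of the observed angle -- the basic test behind angular-match voting.
            return [(i, j) for (i, j), ang in catalog_pairs.items()
                    if abs(ang - observed_angle) < tol]

        # Illustrative catalogue: inter-star angles (radians) for a few star-ID pairs.
        catalog_pairs = {(101, 205): 0.0213, (101, 330): 0.0897, (205, 330): 0.0704}

        # Two detected stars as unit vectors in the sensor frame (made-up values).
        s1 = np.array([0.0, 0.0, 1.0])
        s2 = np.array([np.sin(0.0213), 0.0, np.cos(0.0213)])
        print(match_pair(angular_separation(s1, s2), catalog_pairs))    # [(101, 205)]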

  2. Robust and accurate detection algorithm for multimode polymer optical FBG sensor system

    DEFF Research Database (Denmark)

    Ganziy, Denis; Jespersen, O.; Rose, B.

    2015-01-01

    We propose a novel dynamic gate algorithm (DGA) for robust and fast peak detection. The algorithm uses a threshold determined detection window and center of gravity algorithm with bias compensation. Our experiment demonstrates that the DGA method is fast and robust with better stability and accur...
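
    The record describes a dynamic gate algorithm built from a threshold-determined detection window and a centre-of-gravity calculation with bias compensation. The sketch below is a generic, hedged version of such a gated centre-of-gravity peak estimator for an FBG reflection spectrum; the threshold level, the crude baseline subtraction and the synthetic signal are assumptions and not the published DGA.

        import numpy as np

        def center_of_gravity_peak(wavelengths, intensities, rel_threshold=0.5):
            # Gate (detection window): samples whose intensity exceeds
            # rel_threshold * max(intensity). The peak estimate is the
            # intensity-weighted mean wavelength inside that gate; the gate
            # minimum is subtracted as a simple bias compensation.
            intensities = np.asarray(intensities, dtype=float)
            wavelengths = np.asarray(wavelengths, dtype=float)
            gate = intensities >= rel_threshold * intensities.max()
            w = intensities[gate] - intensities[gate].min()
            return np.sum(wavelengths[gate] * w) / np.sum(w)

        # Synthetic Gaussian reflection peak centred at 1550.10 nm with noise.
        rng = np.random.default_rng(3)
        lam = np.linspace(1549.5, 1550.7, 600)
        signal = np.exp(-((lam - 1550.10) / 0.05) ** 2) + 0.01 * rng.standard_normal(lam.size)
        print(f"estimated peak: {center_of_gravity_peak(lam, signal):.3f} nm")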

  3. Multiscale analysis of damage using dual and primal domain decomposition techniques

    NARCIS (Netherlands)

    Lloberas-Valls, O.; Everdij, F.P.X.; Rixen, D.J.; Simone, A.; Sluys, L.J.

    2014-01-01

    In this contribution, dual and primal domain decomposition techniques are studied for the multiscale analysis of failure in quasi-brittle materials. The multiscale strategy essentially consists in decomposing the structure into a number of nonoverlapping domains and considering a refined spatial

  4. Robust digital image inpainting algorithm in the wireless environment

    Science.gov (United States)

    Karapetyan, G.; Sarukhanyan, H. G.; Agaian, S. S.

    2014-05-01

    and implementation steps of the presented algorithm. Furthermore, the simulation results show that the presented method is among the state-of-the-art and compares favorably against many available methods in the wireless environment. Robustness in the wireless environment with respect to the shape of the manually selected "marked" region is also illustrated. Currently, we are working on the expansion of this work to video and 3-D data.

  5. Hybrid and Parallel Domain-Decomposition Methods Development to Enable Monte Carlo for Reactor Analyses

    International Nuclear Information System (INIS)

    Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which

  6. Hybrid and parallel domain-decomposition methods development to enable Monte Carlo for reactor analyses

    International Nuclear Information System (INIS)

    Wagner, J.C.; Mosher, S.W.; Evans, T.M.; Peplow, D.E.; Turner, J.A.

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method

  7. Finite element analysis of multi-material models using a balancing domain decomposition method combined with the diagonal scaling preconditioner

    International Nuclear Information System (INIS)

    Ogino, Masao

    2016-01-01

    Actual problems in science and industrial applications are modeled by multi-materials and large-scale unstructured meshes, and the finite element analysis has been widely used to solve such problems on parallel computers. However, for large-scale problems, the iterative methods for linear finite element equations suffer from slow or no convergence. Therefore, numerical methods having both robust convergence and scalable parallel efficiency are in great demand. The domain decomposition method is well known as an iterative substructuring method, and is an efficient approach for parallel finite element methods. Moreover, the balancing preconditioner achieves robust convergence. However, in the case of problems consisting of very different materials, the convergence deteriorates. There is some research addressing this issue; however, it is not suitable for cases of complex shapes and composite materials. In this study, to improve the convergence of the balancing preconditioner for multi-materials, a balancing preconditioner combined with the diagonal scaling preconditioner, called the Scaled-BDD method, is proposed. Some numerical results are included which indicate that the proposed method has robust convergence with respect to the number of subdomains and shows high performance compared with the original balancing preconditioner. (author)
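
    The Scaled-BDD idea above augments the balancing preconditioner with diagonal scaling so that strongly different material coefficients no longer spoil the conditioning. The sketch below only illustrates the diagonal-scaling ingredient: it assembles a toy one-dimensional two-material stiffness matrix and compares its condition number before and after symmetric diagonal (Jacobi) scaling. The coefficient contrast and the mesh are invented, and the balancing (BDD) part is not reproduced.

        import numpy as np

        def symmetric_diagonal_scaling(A):
            # D^{-1/2} A D^{-1/2} with D = diag(A): the plain diagonal-scaling
            # (Jacobi) ingredient of the Scaled-BDD method described above.
            d = 1.0 / np.sqrt(np.diag(A))
            return d[:, None] * A * d[None, :]

        # Toy 1D "two-material" stiffness matrix: element coefficient 1 on the left
        # half of the domain and 1e6 on the right half (a crude stand-in for very
        # different materials); Dirichlet boundary nodes are eliminated.
        n = 40
        coeff = np.where(np.arange(n + 1) < (n + 1) // 2, 1.0, 1.0e6)
        A = np.zeros((n, n))
        for e in range(n + 1):                  # element e connects nodes e-1 and e
            for i in (e - 1, e):
                if 0 <= i < n:
                    A[i, i] += coeff[e]
            if 1 <= e <= n - 1:
                A[e - 1, e] -= coeff[e]
                A[e, e - 1] -= coeff[e]

        print("condition number, unscaled: %.2e" % np.linalg.cond(A))
        print("condition number, scaled  : %.2e" % np.linalg.cond(symmetric_diagonal_scaling(A)))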

  8. Robust domain decomposition preconditioners for abstract symmetric positive definite bilinear forms

    KAUST Repository

    Efendiev, Yalchin; Galvis, Juan; Lazarov, Raytcho; Willems, Joerg

    2012-01-01

    An abstract framework for constructing stable decompositions of the spaces corresponding to general symmetric positive definite problems into "local" subspaces and a global "coarse" space is developed. Particular applications of this abstract

  9. Comparing the Robustness of Evolutionary Algorithms on the Basis of Benchmark Functions

    Directory of Open Access Journals (Sweden)

    DENIZ ULKER, E.

    2013-05-01

    Full Text Available In real-world optimization problems, even though the solution quality is of great importance, the robustness of the solution is also an important aspect. This paper investigates how sensitive the optimization algorithms are to variations of the control parameters and to the random initialization of the solution set for fixed control parameters. The comparison is performed on three well-known evolutionary algorithms: the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm and the Harmony Search (HS) algorithm. Various benchmark functions with different characteristics are used for the evaluation of these algorithms. The experimental results show that the solution quality of the algorithms is not directly related to their robustness. In particular, an algorithm that is highly robust can have a low solution quality, or an algorithm that has a high quality of solution can be quite sensitive to parameter variations.

  10. Primal Domain Decomposition Method with Direct and Iterative Solver for Circuit-Field-Torque Coupled Parallel Finite Element Method to Electric Machine Modelling

    Directory of Open Access Journals (Sweden)

    Daniel Marcsa

    2015-01-01

    Full Text Available The analysis and design of electromechanical devices involve the solution of large sparse linear systems and therefore require high performance algorithms. In this paper, the primal Domain Decomposition Method (DDM) with a parallel forward-backward solver and with a parallel Preconditioned Conjugate Gradient (PCG) solver is introduced into a two-dimensional parallel time-stepping finite element formulation to analyze a rotating machine considering the electromagnetic field, the external circuit and the rotor movement. The proposed parallel direct and iterative solvers with two preconditioners are analyzed concerning their computational efficiency and the number of iterations of the solver with different preconditioners. Simulation results of a rotating machine are also presented.

  11. A 3D domain decomposition approach for the identification of spatially varying elastic material parameters

    KAUST Repository

    Moussawi, Ali

    2015-02-24

    Summary: The post-treatment of (3D) displacement fields for the identification of spatially varying elastic material parameters is a large inverse problem that remains out of reach for massive 3D structures. We explore here the potential of the constitutive compatibility method for tackling such an inverse problem, provided an appropriate domain decomposition technique is introduced. In the method described here, the statically admissible stress field that can be related through the known constitutive symmetry to the kinematic observations is sought through minimization of an objective function, which measures the violation of constitutive compatibility. After this stress reconstruction, the local material parameters are identified with the given kinematic observations using the constitutive equation. Here, we first adapt this method to solve 3D identification problems and then implement it within a domain decomposition framework which allows for reduced computational load when handling larger problems.

  12. Parallel finite elements with domain decomposition and its pre-processing

    International Nuclear Information System (INIS)

    Yoshida, A.; Yagawa, G.; Hamada, S.

    1993-01-01

    This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has drastically changed in recent years. In particular, parallel processing on massively parallel computers or computer networks is considered to be a promising technique. In order to achieve high efficiency in such a parallel computation environment, large granularity of tasks and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such a parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system for it. (author)

  13. TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization

    Science.gov (United States)

    2016-11-28

    Only fragments of this report are indexed. A presentation of the methods is given in the book chapter [CG16d], and Section 4.4 treats robust timetable information problems. The remaining fragments belong to the bibliography, including A. Ben-Tal and A. Nemirovski, Robust convex optimization, Mathematics of Operations Research, 23(4):769-805, and M. Goerigk, A note on upper bounds to the robust knapsack problem with discrete scenarios, Annals of Operations Research, 223(1):461-469, 2014.

  14. Robustness of Multiple Clustering Algorithms on Hyperspectral Images

    National Research Council Canada - National Science Library

    Williams, Jason P

    2007-01-01

    ... Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two-dimensional dataset in order to discover potential problems with the algorithms...

  15. An additive matrix preconditioning method with application for domain decomposition and two-level matrix partitionings

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe

    2010-01-01

    Roč. 5910, - (2010), s. 76-83 ISSN 0302-9743. [International Conference on Large-Scale Scientific Computations, LSSC 2009 /7./. Sozopol, 04.06.2009-08.06.2009] R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z30860518 Keywords : additive matrix * condition number * domain decomposition Subject RIV: BA - General Mathematics www.springerlink.com

  16. Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition

    Directory of Open Access Journals (Sweden)

    Cécile Germain‐Renaud

    1999-01-01

    Full Text Available Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data-parallelism is possible, but requires most of the tools from the Java palette: Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.

  17. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    Science.gov (United States)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  18. Unstructured characteristic method embedded with variational nodal method using domain decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)

    2005-07-01

    This paper describes a recently-developed extension of our 'Multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, Cartesian meshes). Then, the coupling method is applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh to describe a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving a good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)

  19. Unstructured characteristic method embedded with variational nodal method using domain decomposition techniques

    International Nuclear Information System (INIS)

    Girardi, E.; Ruggieri, J.M.

    2005-01-01

    This paper describes a recently-developed extension of our 'Multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, Cartesian meshes). Then, the coupling method is applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh to describe a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving a good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)

  20. Representation of discrete Steklov-Poincare operator arising in domain decomposition methods in wavelet basis

    Energy Technology Data Exchange (ETDEWEB)

    Jemcov, A.; Matovic, M.D. [Queen's Univ., Kingston, Ontario (Canada)]

    1996-12-31

    This paper examines the sparse representation and preconditioning of a discrete Steklov-Poincare operator which arises in domain decomposition methods. A non-overlapping domain decomposition method is applied to a second-order self-adjoint elliptic operator (Poisson equation), with homogeneous boundary conditions, as a model problem. It is shown that the discrete Steklov-Poincare operator allows a sparse representation with a bounded condition number in a wavelet basis if the transformation is followed by thresholding and rescaling. These two steps combined enable the effective use of Krylov subspace methods as an iterative solution procedure for the system of linear equations. Finding the solution of an interface problem in domain decomposition methods, known as a Schur complement problem, has been shown to be equivalent to the discrete form of the Steklov-Poincare operator. A common way to obtain the Schur complement matrix is to order the matrix of the discrete differential operator into subdomain node groups and then block-eliminate the interface nodes. The result is a dense matrix which corresponds to the interface problem. This is equivalent to reducing the original problem to several smaller differential problems and one boundary integral equation problem for the subdomain interface.
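
    The block elimination mentioned in the record (ordering the unknowns into subdomain groups and eliminating the interior ones) is what produces the dense interface operator. A minimal numpy sketch of that step is given below; the 1D Poisson matrix and the single interface node are illustrative assumptions, and the wavelet transformation, thresholding and rescaling of the paper are not shown.

        import numpy as np

        def schur_complement(A, interior, interface):
            # With the ordering [interior, interface], A = [[Aii, Aib], [Abi, Abb]] and
            # S = Abb - Abi * Aii^{-1} * Aib is the Schur complement, i.e. the discrete
            # Steklov-Poincare operator acting on the interface unknowns.
            Aii = A[np.ix_(interior, interior)]
            Aib = A[np.ix_(interior, interface)]
            Abi = A[np.ix_(interface, interior)]
            Abb = A[np.ix_(interface, interface)]
            return Abb - Abi @ np.linalg.solve(Aii, Aib)

        # 1D Poisson matrix on 7 unknowns; node 3 is the interface between two subdomains.
        n = 7
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        S = schur_complement(A, interior=[0, 1, 2, 4, 5, 6], interface=[3])
        print(S)    # a 1x1 dense matrix; its value reflects the two subdomain stiffnesses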

  1. Markov chain algorithms: a template for building future robust low-power systems

    Science.gov (United States)

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

    Although computational systems are looking towards post CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs) as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030

  2. Robust K-Median and K-Means Clustering Algorithms for Incomplete Data

    Directory of Open Access Journals (Sweden)

    Jinhua Li

    2016-01-01

    Full Text Available Incomplete data with missing feature values are prevalent in clustering problems. Traditional clustering methods first estimate the missing values by imputation and then apply classical clustering algorithms for complete data, such as K-median and K-means. However, in practice, it is often hard to obtain an accurate estimation of the missing values, which deteriorates the performance of clustering. To enhance the robustness of clustering algorithms, this paper represents the missing values by interval data and introduces the concept of a robust cluster objective function. A minimax robust optimization (RO) formulation is presented to provide clustering results which are insensitive to estimation errors. To solve the proposed RO problem, we propose robust K-median and K-means clustering algorithms with low time and space complexity. Comparisons and analysis of experimental results on both artificially generated and real-world incomplete data sets validate the robustness and effectiveness of the proposed algorithms.
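
    As a hedged illustration of the interval representation and the minimax assignment described above, the sketch below implements a strongly simplified robust K-means for interval-valued data: the distance of a point to a centre is taken as the worst case over the missing-value intervals, while the centre update simply uses interval midpoints. The data, the interval width and this update rule are assumptions for illustration and do not reproduce the published algorithms.

        import numpy as np

        def worst_case_sq_distance(lo, hi, center):
            # Worst-case squared distance between a box [lo, hi] and a centre: for each
            # coordinate the interval endpoint farthest from the centre is taken.
            return np.sum(np.maximum((lo - center) ** 2, (hi - center) ** 2), axis=1)

        def robust_kmeans(lo, hi, k, iters=50, seed=0):
            # Simplified robust K-means for interval-valued data (lo <= x <= hi);
            # observed features have lo == hi, missing features carry a genuine interval.
            rng = np.random.default_rng(seed)
            mid = 0.5 * (lo + hi)
            centers = mid[rng.choice(len(mid), size=k, replace=False)]
            for _ in range(iters):
                d = np.stack([worst_case_sq_distance(lo, hi, c) for c in centers], axis=1)
                labels = d.argmin(axis=1)                        # minimax assignment
                for j in range(k):                               # midpoint-based update
                    if np.any(labels == j):
                        centers[j] = mid[labels == j].mean(axis=0)
            return labels, centers

        # Two well-separated 2D clusters; about 20% of the entries are "missing" intervals.
        rng = np.random.default_rng(1)
        x = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
        lo, hi = x.copy(), x.copy()
        mask = rng.random(x.shape) < 0.2
        lo[mask], hi[mask] = x[mask] - 1.0, x[mask] + 1.0        # interval for missing values
        labels, centers = robust_kmeans(lo, hi, k=2)
        print("cluster sizes:", np.bincount(labels))
        print("centres:\n", centers)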

  3. Domain decomposition method using a hybrid parallelism and a low-order acceleration for solving the Sn transport equation on unstructured geometry

    International Nuclear Information System (INIS)

    Odry, Nans

    2016-01-01

    Deterministic calculation schemes are devised to numerically solve the neutron transport equation in nuclear reactors. Dealing with core-sized problems is very challenging for computers, so much so that dedicated core calculations have no choice but to allow simplifying assumptions (assembly-scale then core-scale steps). The PhD work aims at overcoming some of these approximations: thanks to important changes in computer architecture and capacities (HPC), nowadays one can solve 3D core-sized problems using both a high mesh refinement and the transport operator. It is an essential step forward in order to perform, in the future, reference calculations using deterministic schemes. This work focuses on a spatial domain decomposition method (DDM). Using massive parallelism, DDM allows much more ambitious computations in terms of both memory requirements and calculation time. Developments were performed inside the Sn core solver Minaret, from the new CEA neutronics platform APOLLO3. Only fast reactors (hexagonal periodicity) are considered, even if all kinds of geometries can be dealt with using Minaret. The work has been divided into four steps: 1) The spatial domain decomposition with no overlap is inserted into the standard algorithmic structure of Minaret. The fundamental idea involves splitting a core-sized problem into smaller, independent, spatial sub-problems. The angular flux is exchanged between adjacent sub-domains. In doing so, all combined sub-problems converge to the global solution at the outcome of an iterative process. Various strategies were explored regarding both data management and algorithm design. Results (k eff and flux) are systematically compared to the reference in a numerical verification step. 2) Introducing more parallelism is an unprecedented opportunity to heighten the performance of deterministic schemes. Domain decomposition is particularly suited to this. A two-layer hybrid parallelism strategy, suited to HPC, is chosen. It benefits from the

  4. A Robust Level-Set Algorithm for Centerline Extraction

    NARCIS (Netherlands)

    Telea, Alexandru; Vilanova, Anna

    2003-01-01

    We present a robust method for extracting 3D centerlines from volumetric datasets. We start from a 2D skeletonization method to locate voxels centered with respect to three orthogonal slicing directions. Next, we introduce a new detection criterion to extract the centerline voxels from the above

  5. Parallel computing of a climate model on the dawn 1000 by domain decomposition method

    Science.gov (United States)

    Bi, Xunqiang

    1997-12-01

    In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massive parallel computer made by National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputation are also discussed.

  6. A domain decomposition method for analyzing a coupling between multiple acoustical spaces (L).

    Science.gov (United States)

    Chen, Yuehua; Jin, Guoyong; Liu, Zhigang

    2017-05-01

    This letter presents a domain decomposition method to predict the acoustic characteristics of an arbitrary enclosure made up of any number of sub-spaces. While the Lagrange multiplier technique usually has good performance for conditional extremum problems, the present method avoids involving extra coupling parameters and theoretically ensures the continuity conditions of both sound pressure and particle velocity at the coupling interface. Comparisons with the finite element results illustrate the accuracy and efficiency of the present predictions and the effect of coupling parameters between sub-spaces on the natural frequencies and mode shapes of the overall enclosure is revealed.

  7. A mixed finite element domain decomposition method for nearly elastic wave equations in the frequency domain

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Xiaobing [Univ. of Tennessee, Knoxville, TN (United States)

    1996-12-31

    A non-overlapping domain decomposition iterative method is proposed and analyzed for mixed finite element methods for a sequence of noncoercive elliptic systems with radiation boundary conditions. These differential systems describe the motion of a nearly elastic solid in the frequency domain. The convergence of the iterative procedure is demonstrated and the rate of convergence is derived for the case when the domain is decomposed into subdomains in which each subdomain consists of an individual element associated with the mixed finite elements. The hybridization of mixed finite element methods plays an important role in the construction of the discrete procedure.

  8. Robust consensus algorithm for multi-agent systems with exogenous disturbances under convergence conditions

    Science.gov (United States)

    Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong

    2014-09-01

    In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with exogenous bounded interference is resolved and the disturbances are restrained simultaneously. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimisation of the proposed rational and individual cost functions according to the goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example is employed to demonstrate that our results are effective and more capable of restraining exogenous disturbances than those in the existing literature.

  9. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    Science.gov (United States)

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation value over the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented. It combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The design of the coding and the steps of the algorithm are described. The result of the example demonstrates that the model and algorithm are effective. Compared with the expected value case, the number of disadvantageous scenarios in the robust model reduces from 163 to 21, which shows that the result of the robust model is more reliable.
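
    The objective used in the record combines the expected cost with a deviation term over the scenarios. The sketch below evaluates a few candidate freight-centre configurations under a handful of demand scenarios with an objective of that type; the sites, probabilities, cost table and deviation weight are invented, and brute-force enumeration stands in for the cloud adaptive clonal selection search.

        import numpy as np

        # Illustrative data: 3 candidate freight-centre sites, 4 demand scenarios.
        # cost[subset] would normally come from an assignment/transport subproblem;
        # here a made-up cost per scenario is attached to each subset of opened sites.
        scenario_prob = np.array([0.4, 0.3, 0.2, 0.1])
        cost = {
            (0,):      np.array([120, 150, 210, 300]),
            (1,):      np.array([130, 140, 190, 280]),
            (0, 1):    np.array([160, 165, 175, 190]),
            (0, 2):    np.array([170, 172, 180, 195]),
            (1, 2):    np.array([175, 176, 178, 185]),
            (0, 1, 2): np.array([210, 210, 212, 215]),
        }

        def robust_objective(costs, prob, weight=1.0):
            # Expected cost plus a weighted deviation term over the scenarios,
            # penalising configurations that behave badly in unfavourable scenarios.
            expected = float(prob @ costs)
            deviation = float(np.sqrt(prob @ (costs - expected) ** 2))
            return expected + weight * deviation

        best_robust = min(cost, key=lambda s: robust_objective(cost[s], scenario_prob))
        best_expect = min(cost, key=lambda s: float(scenario_prob @ cost[s]))
        print("robust choice of freight centres :", best_robust)
        print("expected-value choice            :", best_expect)

    In this made-up instance the expected-value objective and the robust objective select different configurations, which is the kind of behaviour the record attributes to the influence of disadvantageous scenarios.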

  10. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    Full Text Available Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model was proposed. The robust optimization model takes the expected cost and the deviation value over the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) was presented. It combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The design of the coding and the steps of the algorithm are described. The result of the example demonstrates that the model and algorithm are effective. Compared with the expected value case, the number of disadvantageous scenarios in the robust model reduces from 163 to 21, which shows that the result of the robust model is more reliable.

  11. Robust perception algorithms for road and track autonomous following

    Science.gov (United States)

    Marion, Vincent; Lecointe, Olivier; Lewandowski, Cecile; Morillon, Joel G.; Aufrere, Romuald; Marcotegui, Beatrix; Chapuis, Roland; Beucher, Serge

    2004-09-01

    The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational add-on value." The paper details the "road and track following" theme (named AUT2), whose main purpose was to develop a vision-based sub-system to automatically detect the roadsides of an extended range of roads and tracks suitable for military missions. To achieve this goal, efforts focused on three main areas: (1) Improvement of image quality at the algorithms' inputs, thanks to the selection of adapted video cameras and the development of a THALES patented algorithm: it removes in real time most of the disturbing shadows in images taken in natural environments, enhances contrast and reduces reflection effects due to films of water. (2) Selection and improvement of two complementary algorithms (one segment-oriented, the other region-based). (3) Development of a fusion process between both algorithms, which feeds in real time a road model with the best available data. Each previous step has been developed so that the global perception process is reliable and safe: as an example, the process continuously evaluates itself and outputs confidence criteria qualifying the roadside detection. The paper presents the processes in detail, together with the results obtained from the military acceptance tests that were passed, which trigger the next step: autonomous track following (named AUT3).

  12. Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm

    Science.gov (United States)

    2009-03-10

    Only a fragment of this report is indexed: a step of the algorithm that replaces xfar by xint or, otherwise, generates a new individual using the Sobol pseudo-random sequence generator within the upper and lower bounds of the variables, followed by bibliography entries including Deb, K., Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, 2002, and Sobol, I. M., "Uniformly Distributed Sequences".

  13. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology.

    Science.gov (United States)

    Woldegebriel, Michael; Gonsalves, John; van Asten, Arian; Vivó-Truyols, Gabriel

    2016-02-16

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically employed. For data analysis, almost all commonly applied algorithms are threshold-based (frequentist). These algorithms examine the value of a certain measurement (e.g., peak height) to decide whether a certain xenobiotic of interest (XOI) is present/absent, yielding a binary output. Frequentist methods pose a problem when several sources of information [e.g., shape of the chromatographic peak, isotopic distribution, estimated mass-to-charge ratio (m/z), adduct, etc.] need to be combined, requiring the approach to make arbitrary decisions at substep levels of data analysis. We hereby introduce a novel Bayesian probabilistic algorithm for toxicological screening. The method tackles the problem with a different strategy. It is not aimed at reaching a final conclusion regarding the presence of the XOI, but it estimates its probability. The algorithm effectively and efficiently combines all possible pieces of evidence from the chromatogram and calculates the posterior probability of the presence/absence of XOI features. This way, the model can accommodate more information by updating the probability if extra evidence is acquired. The final probabilistic result assists the end user to make a final decision with respect to the presence/absence of the xenobiotic. The Bayesian method was validated and found to perform better (in terms of false positives and false negatives) than the vendor-supplied software package.
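
    The key point of the record is that the pieces of evidence (peak shape, isotopic distribution, mass accuracy, adduct, ...) are combined into a posterior probability rather than thresholded one by one. The sketch below shows a naive-Bayes-style combination of per-evidence likelihood ratios with a prior, in log-odds form; the prior, the likelihood ratios and the independence assumption are illustrative and not the published model.

        import math

        def posterior_presence(prior, likelihood_ratios):
            # Combine independent pieces of evidence into P(compound present | data).
            # Each entry of likelihood_ratios is P(evidence | present) / P(evidence | absent).
            # Working in log-odds keeps the combination numerically stable and makes it
            # easy to update the probability when additional evidence arrives.
            log_odds = math.log(prior / (1.0 - prior))
            for lr in likelihood_ratios:
                log_odds += math.log(lr)
            return 1.0 / (1.0 + math.exp(-log_odds))

        # Invented example: low prior, but mass accuracy, isotope pattern and peak shape
        # all favour presence of the xenobiotic of interest.
        prior = 0.01
        evidence = {"mass accuracy": 30.0, "isotope pattern": 12.0, "peak shape": 4.0}
        p = posterior_presence(prior, evidence.values())
        print(f"posterior probability of presence: {p:.3f}")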

  14. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.

  15. Domain Decomposition Preconditioners for Multiscale Flows in High-Contrast Media

    KAUST Repository

    Galvis, Juan; Efendiev, Yalchin

    2010-01-01

    In this paper, we study domain decomposition preconditioners for multiscale flows in high-contrast media. We consider flow equations governed by elliptic equations in heterogeneous media with a large contrast in the coefficients. Our main goal is to develop domain decomposition preconditioners with the condition number that is independent of the contrast when there are variations within coarse regions. This is accomplished by designing coarse-scale spaces and interpolators that represent important features of the solution within each coarse region. The important features are characterized by the connectivities of high-conductivity regions. To detect these connectivities, we introduce an eigenvalue problem that automatically detects high-conductivity regions via a large gap in the spectrum. A main observation is that this eigenvalue problem has a few small, asymptotically vanishing eigenvalues. The number of these small eigenvalues is the same as the number of connected high-conductivity regions. The coarse spaces are constructed such that they span eigenfunctions corresponding to these small eigenvalues. These spaces are used within two-level additive Schwarz preconditioners as well as overlapping methods for the Schur complement to design preconditioners. We show that the condition number of the preconditioned systems is independent of the contrast. More detailed studies are performed for the case when the high-conductivity region is connected within coarse block neighborhoods. Our numerical experiments confirm the theoretical results presented in this paper. © 2010 Society for Industrial and Applied Mathematics.
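
    A hedged sketch of the spectral-gap idea described above is given below: a generalized eigenvalue problem is solved on a coarse region and the eigenvectors below the largest gap in the spectrum are kept as coarse basis functions. The toy matrices represent two high-conductivity blocks coupled by a weak link and are invented for illustration; they are not the weighted local problems defined in the paper.

        import numpy as np
        from scipy.linalg import eigh

        def coarse_space_from_eigenproblem(A_local, M_local):
            # Solve the generalized eigenproblem A x = lambda M x on one coarse region
            # and keep the eigenvectors whose eigenvalues lie below the largest gap in
            # the spectrum; following the record, their number should match the number
            # of connected high-conductivity regions.
            vals, vecs = eigh(A_local, M_local)
            keep = int(np.argmax(np.diff(vals))) + 1
            return vals[:keep], vecs[:, :keep]

        # Toy high-contrast example: two strongly coupled pairs of nodes joined by a
        # weak link of weight eps (a graph-Laplacian-like matrix).
        eps = 1e-4
        A = np.array([[ 1.0, -1.0,        0.0,        0.0],
                      [-1.0,  1.0 + eps, -eps,        0.0],
                      [ 0.0, -eps,        1.0 + eps, -1.0],
                      [ 0.0,  0.0,       -1.0,        1.0]])
        M = np.eye(4)
        vals, basis = coarse_space_from_eigenproblem(A, M)
        print("eigenvalues kept for the coarse space:", vals)
        print("coarse basis shape:", basis.shape)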

  16. Mechanical and assembly units of viral capsids identified via quasi-rigid domain decomposition.

    Directory of Open Access Journals (Sweden)

    Guido Polles

    Full Text Available Key steps in a viral life-cycle, such as self-assembly of a protective protein container or, in some cases, also subsequent maturation events, are governed by the interplay of physico-chemical mechanisms involving various spatial and temporal scales. These salient aspects of a viral life cycle are hence well described and rationalised from a mesoscopic perspective. Accordingly, various experimental and computational efforts have been directed towards identifying the fundamental building blocks that are instrumental for the mechanical response, or constitute the assembly units, of a few specific viral shells. Motivated by these earlier studies we introduce and apply a general and efficient computational scheme for identifying the stable domains of a given viral capsid. The method is based on elastic network models and quasi-rigid domain decomposition. It is first applied to a heterogeneous set of well-characterized viruses (CCMV, MS2, STNV, STMV) for which the known mechanical or assembly domains are correctly identified. The validated method is next applied to other viral particles such as L-A, Pariacoto and polyoma viruses, whose fundamental functional domains are still unknown or debated and for which we formulate verifiable predictions. The numerical code implementing the domain decomposition strategy is made freely available.

  17. ROBUST-HYBRID GENETIC ALGORITHM FOR A FLOW-SHOP SCHEDULING PROBLEM (A Case Study at PT FSCM Manufacturing Indonesia)

    Directory of Open Access Journals (Sweden)

    Johan Soewanda

    2007-01-01

    Full Text Available This paper discusses the application of a Robust Hybrid Genetic Algorithm to solve a flow-shop scheduling problem. The proposed algorithm attempts to reach the minimum makespan. The case of PT FSCM Manufacturing Indonesia Plant 4 was used as a test case to evaluate the performance of the proposed algorithm. The proposed algorithm was compared to Ant Colony, Genetic-Tabu, Hybrid Genetic Algorithm, and the company's algorithm. We found that the Robust Hybrid Genetic Algorithm produces statistically better results than the company's algorithm, but the same as Ant Colony, Genetic-Tabu, and Hybrid Genetic. In addition, the Robust Hybrid Genetic Algorithm required less computational time than the Hybrid Genetic Algorithm.
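
    All of the compared algorithms in the record search over job permutations for the one with the smallest makespan. The sketch below only shows the evaluation step, the standard completion-time recurrence of a permutation flow shop; the processing times are made up and the genetic search itself is not reproduced.

        def makespan(sequence, proc_times):
            # Makespan of a permutation flow shop. proc_times[j][m] is the processing
            # time of job j on machine m; all jobs visit the machines in the same order.
            # completion[m] holds, at any moment, the completion time of the last
            # operation scheduled on machine m.
            n_machines = len(proc_times[0])
            completion = [0] * n_machines
            for job in sequence:
                for m in range(n_machines):
                    start = max(completion[m], completion[m - 1] if m > 0 else 0)
                    completion[m] = start + proc_times[job][m]
            return completion[-1]

        # Illustrative 4-job, 3-machine instance (processing times are made up).
        proc = [[5, 3, 6], [4, 6, 2], [3, 5, 4], [6, 2, 5]]
        print(makespan([0, 1, 2, 3], proc))    # makespan of one candidate sequence
        print(makespan([2, 0, 3, 1], proc))    # another sequence, another makespan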

  18. Robust Cyclic MUSIC Algorithm for Finding Directions in Impulsive Noise Environment

    Directory of Open Access Journals (Sweden)

    Sen Li

    2017-01-01

    Full Text Available This paper addresses the issue of direction finding of a cyclostationary signal under impulsive noise environments modeled by α-stable distribution. Since α-stable distribution does not have finite second-order statistics, the conventional cyclic correlation-based signal-selective direction finding algorithms do not work effectively. To resolve this problem, we define two robust cyclic correlation functions which are derived from robust statistics property of the correntropy and the nonlinear transformation, respectively. The MUSIC algorithm with the robust cyclic correlation matrix of the received signals of arrays is then used to estimate the direction of cyclostationary signal in the presence of impulsive noise. The computer simulation results demonstrate that the two proposed robust cyclic correlation-based algorithms outperform the conventional cyclic correlation and the fractional lower order cyclic correlation based methods.

  19. A Modified LQG Algorithm (MLQG) for Robust Control of Nonlinear Multivariable Systems

    Directory of Open Access Journals (Sweden)

    Jens G. Balchen

    1993-07-01

    Full Text Available The original LQG algorithm is often characterized by its lack of robustness. This is because in the design of the estimator (Kalman filter) the process disturbance is assumed to be white noise. If the estimator is to give good estimates, the Kalman gain must be increased, which means that the estimator fails to be robust. A solution to this problem is to replace the proportional Kalman gain matrix by a dynamic PI algorithm and the proportional LQ feedback gain matrix by a PI algorithm. A tuning method is developed which facilitates the tuning of the modified LQG control system (MLQG) by only two tuning parameters.

  20. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    Science.gov (United States)

    Kurien, Binoy George

    Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.
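
    The classical observables mentioned above can be illustrated with a few lines of code: the bispectrum is the triple product of the complex visibilities around a telescope triangle, and its phase (the closure phase) is unaffected by per-telescope atmospheric piston errors. The sketch below uses synthetic visibility phases and random piston values as assumptions; it does not reproduce the generalized observables or the lattice-based phase recovery of the thesis.

        import numpy as np

        rng = np.random.default_rng(0)

        # True (object) visibility phases on the three baselines of a telescope triangle
        # (1,2), (2,3), (3,1); values are arbitrary for the illustration.
        phi_12, phi_23, phi_31 = 0.7, -1.1, 0.3
        true_closure = phi_12 + phi_23 + phi_31

        # Atmospheric piston phases at the three telescopes corrupt each baseline as
        # phi_ij_measured = phi_ij + (theta_i - theta_j).
        theta = rng.uniform(-np.pi, np.pi, size=3)
        v12 = np.exp(1j * (phi_12 + theta[0] - theta[1]))
        v23 = np.exp(1j * (phi_23 + theta[1] - theta[2]))
        v31 = np.exp(1j * (phi_31 + theta[2] - theta[0]))

        bispectrum = v12 * v23 * v31            # triple product of baseline visibilities
        closure_phase = np.angle(bispectrum)    # telescope-dependent terms cancel
        print(f"true closure phase     : {true_closure:+.6f} rad")
        print(f"measured closure phase : {closure_phase:+.6f} rad")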

  1. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Full Text Available Parameter optimization can be achieved by many methods, such as Monte-Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In a GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a 'genetic chromosome'. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of the reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
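
    As a hedged companion to the description above, the sketch below implements a basic real-coded genetic algorithm with tournament selection, blend crossover, Gaussian mutation and elitism. The objective function is a simple placeholder (the seven-factor Wheatstone-bridge equation of the record is not reproduced), and the population size, bounds and rates are illustrative choices.

        import numpy as np

        rng = np.random.default_rng(42)

        def objective(x):
            # Placeholder for the seven-factor nonlinear response to be optimized;
            # the actual Wheatstone-bridge equation of the record is not reproduced.
            return np.sum((x - np.linspace(0.5, 3.5, x.shape[-1])) ** 2, axis=-1)

        def genetic_algorithm(obj, dim=7, pop_size=60, generations=200,
                              lower=0.0, upper=5.0, mutation_rate=0.1):
            pop = rng.uniform(lower, upper, size=(pop_size, dim))
            for _ in range(generations):
                fitness = obj(pop)
                # Tournament selection: the fitter of two random individuals reproduces.
                i, j = rng.integers(pop_size, size=(2, pop_size))
                parents = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
                # Arithmetic (blend) crossover between consecutive parents.
                alpha = rng.random((pop_size, 1))
                children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
                # Gaussian mutation on a small fraction of the genes.
                mask = rng.random(children.shape) < mutation_rate
                children = np.clip(children + mask * rng.normal(0, 0.2, children.shape),
                                   lower, upper)
                # Elitism: keep the best individual of the current generation.
                children[0] = pop[np.argmin(fitness)]
                pop = children
            best = pop[np.argmin(obj(pop))]
            return best, obj(best)

        best_x, best_f = genetic_algorithm(objective)
        print("best factors :", np.round(best_x, 3))
        print("best response:", best_f)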

  2. A multi-frame particle tracking algorithm robust against input noise

    International Nuclear Information System (INIS)

    Li, Dongning; Zhang, Yuanhui; Sun, Yigang; Yan, Wei

    2008-01-01

    The performance of a particle tracking algorithm which detects particle trajectories from discretely recorded particle positions can be substantially hindered by the input noise. In this paper, a particle tracking algorithm is developed which is robust against input noise. This algorithm employs the regression method instead of the extrapolation method usually employed by existing algorithms to predict future particle positions. If a trajectory cannot be linked to a particle at a frame, the algorithm can still proceed by trying to find a candidate at the next frame. The connectivity of the tracked trajectories is inspected to remove false ones. The algorithm is validated with synthetic data. The result shows that the algorithm is superior to traditional algorithms in tracking long trajectories.

  3. Developing robust arsenic awareness prediction models using machine learning algorithms.

    Science.gov (United States)

    Singh, Sushant K; Taylor, Robert W; Rahman, Mohammad Mahmudur; Pradhan, Biswajeet

    2018-04-01

    Arsenic awareness plays a vital role in ensuring the sustainability of arsenic mitigation technologies. Thus far, however, few studies have dealt with the sustainability of such technologies and their associated socioeconomic dimensions. As a result, arsenic awareness prediction has not yet been fully conceptualized. Accordingly, this study evaluated arsenic awareness among arsenic-affected communities in rural India, using a structured questionnaire to record socioeconomic, demographic, and other sociobehavioral factors with an eye to assessing their association with and influence on arsenic awareness. First a logistic regression model was applied and its results compared with those produced by six state-of-the-art machine-learning algorithms (Support Vector Machine [SVM], Kernel-SVM, Decision Tree [DT], k-Nearest Neighbor [k-NN], Naïve Bayes [NB], and Random Forests [RF]) as measured by their accuracy at predicting arsenic awareness. Most (63%) of the surveyed population was found to be arsenic-aware. Significant arsenic awareness predictors were divided into three types: (1) socioeconomic factors: caste, education level, and occupation; (2) water and sanitation behavior factors: number of family members involved in water collection, distance traveled and time spent for water collection, places for defecation, and materials used for handwashing after defecation; and (3) social capital and trust factors: presence of anganwadi and people's trust in other community members, NGOs, and private agencies. Moreover, individuals with larger social networks contributed positively to arsenic awareness in the communities. Results indicated that both the SVM and the RF algorithms outperformed the others at overall prediction of arsenic awareness, a nonlinear classification problem. Lower-caste, less educated, and unemployed members of the population were found to be the most vulnerable, requiring immediate arsenic mitigation. To this end, local social institutions and NGOs could play a

  4. Basis adaptation and domain decomposition for steady-state partial differential equations with random coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.

    2017-12-01

    We present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.

  5. A domain decomposition approach for full-field measurements based identification of local elastic parameters

    KAUST Repository

    Lubineau, Gilles

    2015-03-01

    We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.

  6. Domain decomposition parallel computing for transient two-phase flow of nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)

    2016-05-15

    KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal-hydraulics analysis of light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message Passing Interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods with the METIS library. For effective memory management, the Compressed Sparse Row (CSR) format is adopted, which is one of the methods for representing a sparse asymmetric matrix. The CSR format stores only the non-zero values and their positions (row and column). By performing verification on the fundamental problem set, the parallelization of CUPID has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for fine mesh systems, three different scales of mesh system are considered: 40,000 meshes for the coarse mesh system, 320,000 meshes for the mid-size mesh system, and 2,560,000 meshes for the fine mesh system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioners for the matrix solver were compared: a diagonal and an incomplete LU preconditioner. To further enhance the parallel performance, hybrid OpenMP/MPI parallel computing for the pressure solver was examined. It is revealed that the scalability of the hybrid calculation was enhanced for multi-core parallel computation.
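
    The CSR storage mentioned above can be illustrated with a small example: only the non-zero values, their column indices and one row-pointer entry per row are kept, which is what makes sparse matrix-vector products cheap inside iterative pressure solvers. The matrix below is an arbitrary toy example, unrelated to CUPID's actual matrices.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix

    A = np.array([[4.0, 0.0, 0.0, 1.0],
                  [0.0, 3.0, 0.0, 0.0],
                  [2.0, 0.0, 5.0, 0.0],
                  [0.0, 0.0, 1.0, 6.0]])
    A_csr = csr_matrix(A)

    print(A_csr.data)     # non-zero values:      [4. 1. 3. 2. 5. 1. 6.]
    print(A_csr.indices)  # their column indices: [0 3 1 0 2 2 3]
    print(A_csr.indptr)   # row start offsets:    [0 2 3 5 7]

    x = np.ones(4)
    print(A_csr @ x)      # sparse matrix-vector product as used by iterative solvers
    ```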

  7. An acceleration technique for 2D MOC based on Krylov subspace and domain decomposition methods

    International Nuclear Information System (INIS)

    Zhang Hongbo; Wu Hongchun; Cao Liangzhi

    2011-01-01

    Highlights: → We convert MOC into a linear system solved by GMRES as an acceleration method. → We use a domain decomposition method to overcome the inefficiency on large matrices. → Parallel technology is applied and a matched ray tracing system is developed. → Results show good efficiency even in large-scale and strong scattering problems. → The emphasis is that the technique is geometry-flexible. - Abstract: The method of characteristics (MOC) has great geometrical flexibility but poor computational efficiency in neutron transport calculations. The generalized minimal residual (GMRES) method, a type of Krylov subspace method, is utilized to accelerate a 2D generalized geometry characteristics solver, AutoMOC. In this technique, a linear algebraic equation system for the angular flux moments and boundary fluxes is derived to replace the conventional characteristics sweep (i.e. inner iteration) scheme, and the GMRES method is then implemented as an efficient linear system solver. This acceleration method is proved to be reliable in theory and simple to implement. Furthermore, as it introduces no restriction on the geometry treatment, it is suitable for accelerating an arbitrary geometry MOC solver. However, it is observed that the speedup decreases when the matrix becomes larger. The spatial domain decomposition method and multiprocessing parallel technology are then employed to overcome the problem. The calculation domain is partitioned into several sub-domains. For each of them, a smaller matrix is established and solved by GMRES, and the adjacent sub-domains are coupled by 'inner-edges', where the trajectory mismatches are considered adequately. Moreover, a matched ray tracing system is developed on the basis of AutoCAD, which allows a user to define the sub-domains on demand conveniently. Numerical results demonstrate that the acceleration techniques are efficient without loss of accuracy, even in the case of large-scale and strong scattering problems.
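
    The essential step, replacing the inner sweep by a Krylov solve of an explicit linear system, can be sketched with a generic sparse system as below. The tridiagonal matrix is only a placeholder for the actual MOC coefficient matrix, and the ILU preconditioner stands in for whatever preconditioning a production solver would use.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    # Placeholder sparse system A x = b (the real matrix couples angular flux
    # moments and boundary fluxes assembled from the characteristics sweep).
    n = 200
    A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Optional ILU preconditioner to keep the Krylov iteration count low.
    ilu = spilu(A)
    M = LinearOperator((n, n), matvec=ilu.solve)

    x, info = gmres(A, b, M=M, restart=30, maxiter=500)
    print("converged" if info == 0 else f"info={info}",
          np.linalg.norm(A @ x - b))     # residual norm of the solution
    ```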

  8. A Self-embedding Robust Digital Watermarking Algorithm with Blind Detection

    Directory of Open Access Journals (Sweden)

    Gong Yunfeng

    2014-08-01

    In order to achieve perfectly blind detection for a robust watermarking algorithm, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. Firstly, the original image is divided into non-overlapping image blocks, and wavelet decomposition coefficients are obtained by a lifting-based wavelet transform (LWT) in every image block. Secondly, the low-frequency coefficients of the block images are selected and approximately represented as the product of a base matrix and a coefficient matrix using NMF. The feature vector representing the original image is then obtained by quantizing the coefficient matrix, and finally the robust watermark is embedded in the low-frequency LWT coefficients by adaptive quantization. Experimental results show that the scheme is robust against common signal processing attacks, while perfectly blind detection is achieved.

  9. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

    One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause values of the received signal strength (RSS) to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or RN-failure scenarios. The proposed fault-tolerant floor determination algorithm is based on the mean of the summation of the strongest RSSs obtained from IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperformed the other floor algorithms and achieved the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm achieves greater than 95% correct floor determination under the scenario in which 40% of the RNs failed.
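
    The mean-of-strongest-RSS idea at the heart of the method can be sketched as follows. This is an illustrative reduction of RMoS rather than the published algorithm: the number of strongest readings k and the example RSS values are assumptions, and readings from failed RNs are modeled simply by their absence.

    ```python
    import numpy as np

    def determine_floor(rss_by_floor, k=3):
        """Pick the floor whose k strongest RSS readings (dBm) have the largest mean."""
        best_floor, best_score = None, -np.inf
        for floor, rss in rss_by_floor.items():
            if not rss:                                   # all RNs on this floor failed
                continue
            strongest = np.sort(np.asarray(rss, dtype=float))[-k:]   # k least-negative values
            score = strongest.mean()
            if score > best_score:
                best_floor, best_score = floor, score
        return best_floor

    # One online-phase measurement: floor 2's reference nodes are received strongest.
    readings = {1: [-78, -81, -90], 2: [-55, -60, -62, -70], 3: [-72, -75]}
    print(determine_floor(readings))                      # -> 2
    ```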

  10. Assessing the Stability and Robustness of Semantic Web Services Recommendation Algorithms Under Profile Injection Attacks

    Directory of Open Access Journals (Sweden)

    GRANDIN, P. H.

    2014-06-01

    Recommendation systems based on collaborative filtering are open by nature, which makes them vulnerable to profile injection attacks that insert biased evaluations into the system database in order to manipulate recommendations. In this paper we evaluate the stability and robustness of collaborative filtering algorithms applied to semantic web services recommendation when subjected to random and segment profile injection attacks. We evaluated four algorithms: (1) IMEAN, which makes predictions using the average of the evaluations received by the target item; (2) UMEAN, which makes predictions using the average of the evaluations made by the target user; (3) an algorithm based on the k-nearest neighbor (k-NN) method; and (4) an algorithm based on the k-means clustering method. The experiments showed that the UMEAN algorithm is not affected by the attacks and that IMEAN is the most vulnerable of all the algorithms tested. Nevertheless, both UMEAN and IMEAN have little practical application due to the low precision of their predictions. Among the algorithms with intermediate tolerance to attacks but with good prediction performance, the algorithm based on k-NN proved to be more robust and stable than the algorithm based on k-means.

  11. Domain decomposition for the computation of radiosity in lighting simulation; Decomposition de domaines pour le calcul de la radiosite en simulation d'eclairage

    Energy Technology Data Exchange (ETDEWEB)

    Salque, B

    1998-07-01

    This work deals with the equation of radiosity. This equation describes the transport of light energy through a diffuse medium, and its resolution enables us to simulate the presence of light sources. The equation of radiosity is an integral equation which admits a unique solution in realistic cases. The different solution methods are reviewed. The equation of radiosity cannot be formulated as the integral form of a classical partial differential equation, but this work shows that the technique of domain decomposition can be successfully applied to the equation of radiosity if this approach is framed by physical considerations. This method provides a system of independent equations, valid for each sub-domain, whose main parameter is the luminance. Some numerical examples give an idea of the convergence of the algorithm. This method is applied to the optimization of the shape of a light reflector.

  12. Domain decomposition methods for the mixed dual formulation of the critical neutron diffusion problem; Methodes de decomposition de domaine pour la formulation mixte duale du probleme critique de la diffusion des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, P

    2007-12-15

    The neutronic simulation of a nuclear reactor core is performed using the neutron transport equation, and leads to an eigenvalue problem in the steady-state case. Among the deterministic resolution methods, the diffusion approximation is often used. For this problem, the MINOS solver based on a mixed dual finite element method has shown its efficiency. In order to take advantage of parallel computers, and to reduce the computing time and the local memory requirement, we propose in this dissertation two domain decomposition methods for the resolution of the mixed dual form of the eigenvalue neutron diffusion problem. The first approach is a component mode synthesis method on overlapping sub-domains. Several eigenmode solutions of a local problem solved by MINOS on each sub-domain are taken as basis functions used for the resolution of the global problem on the whole domain. The second approach is a modified iterative Schwarz algorithm based on non-overlapping domain decomposition with Robin interface conditions. At each iteration, the problem is solved on each sub-domain by MINOS with the interface conditions deduced from the solutions on the adjacent sub-domains at the previous iteration. The iterations allow the simultaneous convergence of the domain decomposition and the eigenvalue problem. We demonstrate the accuracy and the parallel efficiency of these two methods with numerical results for the diffusion model on realistic 2- and 3-dimensional cores. (author)
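
    For readers unfamiliar with Schwarz-type methods, the sketch below shows the classical overlapping alternating Schwarz iteration on a 1D Poisson problem, where each sub-domain solve takes its interface value from the current solution on the neighbouring sub-domain. It is deliberately much simpler than the methods of this record (non-overlapping decomposition, Robin interface conditions, eigenvalue solver); the grid size, overlap and number of sweeps are arbitrary choices.

    ```python
    import numpy as np

    # Alternating Schwarz for -u'' = 1 on (0, 1) with u(0) = u(1) = 0,
    # split into two overlapping sub-domains.
    n = 101
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = np.ones(n)
    u = np.zeros(n)                 # current global iterate (holds interface values)

    iL = slice(0, 60)               # left sub-domain: nodes 0..59
    iR = slice(40, n)               # right sub-domain: nodes 40..100 (overlap 40..59)

    def dirichlet_solve(f_loc, left_bc, right_bc):
        """Solve -u'' = f on one sub-domain with Dirichlet values at both ends."""
        m = len(f_loc)
        A = (np.diag(2.0 * np.ones(m - 2)) +
             np.diag(-np.ones(m - 3), 1) +
             np.diag(-np.ones(m - 3), -1)) / h**2
        rhs = f_loc[1:-1].copy()
        rhs[0] += left_bc / h**2
        rhs[-1] += right_bc / h**2
        u_loc = np.empty(m)
        u_loc[0], u_loc[-1] = left_bc, right_bc
        u_loc[1:-1] = np.linalg.solve(A, rhs)
        return u_loc

    for _ in range(30):                                  # Schwarz sweeps
        u[iL] = dirichlet_solve(f[iL], 0.0, u[59])       # left solve, interface value from right
        u[iR] = dirichlet_solve(f[iR], u[40], 0.0)       # right solve, interface value from left

    exact = 0.5 * x * (1.0 - x)
    print(np.max(np.abs(u - exact)))                     # tends to zero as the sweeps converge
    ```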

  13. Domain decomposition methods for flows in faulted porous media; Methodes de decomposition de domaine pour les ecoulements en milieux poreux failles

    Energy Technology Data Exchange (ETDEWEB)

    Flauraud, E.

    2004-05-01

    In this thesis, we are interested in using domain decomposition methods for solving fluid flows in faulted porous media. This study falls within the framework of sedimentary basin modeling, whose aim is to predict the presence of possible oil fields in the subsoil. A sedimentary basin is regarded as a heterogeneous porous medium in which fluid flows (water, oil, gas) occur. It is often subdivided into several blocks separated by faults. These faults create discontinuities that have a tremendous effect on the fluid flow in the basin. In this work, we present two approaches to modeling faults from the mathematical point of view. The first approach consists in considering faults as sub-domains, in the same way as blocks but with their own geological properties. However, because of the very small width of the faults in comparison with the size of the basin, the second and new approach consists in considering faults no longer as sub-domains, but as interfaces between the blocks. A mathematical study of the two models is carried out in order to investigate the existence and the uniqueness of solutions. Then, we are interested in using domain decomposition methods for solving the previous models. The main part of this study is devoted to the design of Robin interface conditions and to the formulation of the interface problem. The Schwarz algorithm can be seen as a Jacobi method for solving the interface problem. In order to speed up the convergence, this problem can be solved by a Krylov-type algorithm (BICGSTAB). We discretize the equations with a finite volume scheme, and perform extensive numerical tests to compare the different methods. (author)

  14. Robust PD Sway Control of a Lifted Load for a Crane Using a Genetic Algorithm

    Science.gov (United States)

    Kawada, Kazuo; Sogo, Hiroyuki; Yamamoto, Toru; Mada, Yasuhiro

    PID control schemes continue to be widely used for most industrial control systems. This is mainly because PID controllers have simple control structures, and are simple to maintain and tune. However, it is difficult to find a set of suitable control parameters in the case of time-varying and/or nonlinear systems. For such problems, robust controllers have been proposed. Although it is important to choose a suitable nominal model when designing a robust controller, this is not usually easy. In this paper, a new robust PD controller design scheme is proposed, which utilizes a genetic algorithm.

  15. Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model

    Directory of Open Access Journals (Sweden)

    WANG Bin

    2015-06-01

    Based on the Newton-Gauss iterative algorithm for weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model utilizes the standardized residuals to construct the weight factor function, and the square root of the variance component estimator with robustness is obtained by introducing the median method. Therefore, robustness in both the observation and structure spaces can be achieved simultaneously. To obtain the standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression of the cofactor matrix of the WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiment indicates that the model proposed in this paper exhibits satisfactory robustness for the gross-error handling problem of WTLS; the obtained parameters show no significant difference from the results of WTLS without gross errors. Therefore, it is superior to a robust weighted total least squares model constructed directly with the residuals.

  16. A Robust Formant Extraction Algorithm Combining Spectral Peak Picking and Root Polishing

    Directory of Open Access Journals (Sweden)

    Seo Kwang-deok

    2006-01-01

    We propose a robust formant extraction algorithm that combines spectral peak picking, examination of formant locations for peak-merger checking, and root extraction. The spectral peak picking method is employed to locate the formant candidates, and root extraction is used for solving the peak-merger problem. The locations of and distances between the extracted formants are also utilized to efficiently find suspected peak mergers. The proposed algorithm does not require much computation, and is shown to be superior to previous formant extraction algorithms through extensive tests using the TIMIT speech database.

  17. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...

  18. An outlook on robust model predictive control algorithms : Reflections on performance and computational aspects

    NARCIS (Netherlands)

    Saltik, M.B.; Özkan, L.; Ludlage, J.H.A.; Weiland, S.; Van den Hof, P.M.J.

    2018-01-01

    In this paper, we discuss the model predictive control algorithms that are tailored for uncertain systems. Robustness notions with respect to both deterministic (or set based) and stochastic uncertainties are discussed and contributions are reviewed in the model predictive control literature. We

  19. An algorithm for robust non-linear analysis of radioimmunoassays and other bioassays

    International Nuclear Information System (INIS)

    Normolle, D.P.

    1993-01-01

    The four-parameter logistic function is an appropriate model for many types of bioassays that have continuous response variables, such as radioimmunoassays. By modelling the variance of replicates in an assay, one can modify the usual parameter estimation techniques (for example, Gauss-Newton or Marquardt-Levenberg) to produce parameter estimates for the standard curve that are robust against outlying observations. This article describes the computation of robust (M-) estimates for the parameters of the four-parameter logistic function. It describes techniques for modelling the variance structure of the replicates, modifications to the usual iterative algorithms for parameter estimation in non-linear models, and a formula for inverse confidence intervals. To demonstrate the algorithm, the article presents examples where the robustly estimated four-parameter logistic model is compared with the logit-log and four-parameter logistic models with least-squares estimates. (author)
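
    The four-parameter logistic model and a robust fit of it can be sketched as follows. The sketch uses a generic robust loss (soft L1) available in SciPy rather than the article's specific M-estimator and replicate-variance model, and the synthetic standard-curve data are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def four_pl(x, a, b, c, d):
        """Four-parameter logistic response as a function of dose/concentration."""
        return d + (a - d) / (1.0 + (x / c) ** b)

    # Synthetic standard curve with one outlying replicate.
    x = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
    y = four_pl(x, 2.0, 1.2, 5.0, 0.1) + np.random.default_rng(1).normal(0.0, 0.02, x.size)
    y[3] += 0.8                                    # outlier

    def residuals(p):
        return four_pl(x, *p) - y

    p0 = [y.max(), 1.0, np.median(x), y.min()]
    # loss="soft_l1" down-weights large residuals, an M-estimation-like alternative
    # to plain least squares (Gauss-Newton / Marquardt-Levenberg).
    fit = least_squares(residuals, p0, loss="soft_l1", f_scale=0.05)
    print(fit.x)                                   # close to the generating parameters
    ```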

  20. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    Science.gov (United States)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  1. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    Directory of Open Access Journals (Sweden)

    Yang Jian

    2007-01-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that end, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  2. A Robust Planning Algorithm for Groups of Entities in Discrete Spaces

    Directory of Open Access Journals (Sweden)

    Igor Wojnicki

    2015-07-01

    Automated planning is a well-established field of artificial intelligence (AI), with applications in route finding, robotics and operational research, among others. The task of developing a plan is often solved by finding a path in a graph representing the search domain; a robust plan consists of numerous paths that can be chosen if the execution of the best (optimal) one fails. While robust planning for a single entity is rather simple, development of a robust plan for multiple entities in a common environment can lead to combinatorial explosion. This paper proposes a novel hybrid approach, joining heuristic search and the wavefront algorithm to provide a plan featuring robustness in the areas where it is needed, while maintaining a low level of computational complexity.

  3. A FAST AND ROBUST ALGORITHM FOR ROAD EDGES EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    K. Qiu

    2016-06-01

    Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads show a difference in elevation around their edges, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes containing only pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, a serious problem is that the rough and refined planes are often extracted badly because of rough road surfaces and the varying density of the point cloud. To eliminate the influence of rough roads, a technique similar to differencing the DSM (digital surface model) and DTM (digital terrain model) is used, and we also propose a method that adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method on multiple datasets (e.g. urban roads, highways, and some rural roads). We use the same parameters throughout the experiments and our algorithm achieves real-time processing speeds.

  4. Weighing Efficiency-Robustness in Supply Chain Disruption by Multi-Objective Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Tong Shu

    2016-03-01

    This paper investigates various supply chain disruptions in terms of scenario planning, including node disruption and chain disruption, namely disruptions in distribution centers and disruptions between manufacturing centers and distribution centers. It also considers simultaneous disruption of one node or a number of nodes, simultaneous disruption of one chain or a number of chains, and the corresponding mathematical models and examples involving numerous manufacturing centers and diverse products. Robustness of the supply chain network design is examined by weighing efficiency against robustness during supply chain disruptions. Efficiency is represented by operating cost; robustness is indicated by the expected disruption cost; and the weighing problem is solved by the multi-objective firefly algorithm for consistency in the results. It is shown that the total cost achieved by the optimal objective function is lower than the total cost at the point of maximum supply chain efficiency. In other words, the decrease in expected disruption cost obtained by improving robustness in supply chains is greater than the increase in operating cost from reduced efficiency, thus leading to a cost advantage. Consequently, by approximating the Pareto front of the trade-off between efficiency and robustness, enterprises can choose an appropriate balance of efficiency and robustness for their longer-term development.

  5. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    Science.gov (United States)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feedforward part and (ii) a feedback part. The feedforward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of the initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.

  6. Robust video watermarking via optimization algorithm for quantization of pseudo-random semi-global statistics

    Science.gov (United States)

    Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam

    2005-03-01

    In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.

  7. A robust controller design method for feedback substitution schemes using genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Mirsha M; Hadjiloucas, Sillas; Becerra, Victor M, E-mail: s.hadjiloucas@reading.ac.uk [Cybernetics, School of Systems Engineering, University of Reading, RG6 6AY (United Kingdom)

    2011-08-17

    Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subjected to arbitrary noise power gain and robustness constraints, a Pareto-front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization for the controller weights (MOGA). A clonal selection algorithm is used to further provide a directed search of the GA towards the Pareto front. We demonstrate that with the proposed methodology, it is possible to design higher order controllers with superior performance in terms of response time, noise power gain and robustness.

  8. Ensemble of data-driven prognostic algorithms for robust prediction of remaining useful life

    International Nuclear Information System (INIS)

    Hu Chao; Youn, Byeng D.; Wang Pingfeng; Taek Yoon, Joung

    2012-01-01

    Prognostics aims at determining whether a failure of an engineered system (e.g., a nuclear power plant) is impending and estimating the remaining useful life (RUL) before the failure occurs. The traditional data-driven prognostic approach is to construct multiple candidate algorithms using a training data set, evaluate their respective performance using a testing data set, and select the one with the best performance while discarding all the others. This approach has three shortcomings: (i) the selected standalone algorithm may not be robust; (ii) it wastes the resources for constructing the algorithms that are discarded; (iii) it requires the testing data in addition to the training data. To overcome these drawbacks, this paper proposes an ensemble data-driven prognostic approach which combines multiple member algorithms with a weighted-sum formulation. Three weighting schemes, namely the accuracy-based weighting, diversity-based weighting and optimization-based weighting, are proposed to determine the weights of member algorithms. The k-fold cross validation (CV) is employed to estimate the prediction error required by the weighting schemes. The results obtained from three case studies suggest that the ensemble approach with any weighting scheme gives more accurate RUL predictions compared to any sole algorithm when member algorithms producing diverse RUL predictions have comparable prediction accuracy and that the optimization-based weighting scheme gives the best overall performance among the three weighting schemes.
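
    The weighted-sum ensemble idea can be sketched as follows using the accuracy-based weighting, in which each member algorithm receives a weight inversely proportional to its k-fold cross-validation error. The member models, the synthetic degradation data and the weighting details below are illustrative assumptions, not the paper's case-study setup.

    ```python
    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import Ridge
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import KNeighborsRegressor

    def accuracy_weights(models, X, y, n_splits=5):
        """Weight each member by the inverse of its k-fold cross-validation error."""
        errors = np.zeros(len(models))
        for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
            for i, model in enumerate(models):
                model.fit(X[train], y[train])
                errors[i] += np.mean((model.predict(X[test]) - y[test]) ** 2)
        inv = 1.0 / errors
        return inv / inv.sum()

    def ensemble_predict(models, weights, X_new):
        """Weighted-sum RUL prediction from all member algorithms."""
        preds = np.column_stack([m.predict(X_new) for m in models])
        return preds @ weights

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, (200, 4))                        # synthetic health indicators
    y = 100.0 - 60.0 * X[:, 0] - 30.0 * X[:, 1] + rng.normal(0.0, 2.0, 200)   # synthetic RUL

    members = [Ridge(alpha=1.0),
               RandomForestRegressor(n_estimators=100, random_state=0),
               KNeighborsRegressor(n_neighbors=5)]
    w = accuracy_weights(members, X, y)
    for m in members:                                          # refit members on all data
        m.fit(X, y)
    print("weights:", w)
    print("ensemble RUL estimates:", ensemble_predict(members, w, X[:3]))
    ```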

  9. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    Science.gov (United States)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for the diagnosis of heart diseases. Good-quality ECG is utilized by physicians for the interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for the removal of random noise, such as muscle artifact, from the ECG signal. A transform-domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. In the TVGLMS, the robust variable step size is achieved by using the Griffiths' gradient, which uses the cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is based on the discrete cosine transform (DCT) and uses the symmetry property of the signal to represent it in the frequency domain with fewer coefficients than the discrete Fourier transform (DFT). The algorithm is implemented as an adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform-domain LMS (TLMS) algorithm, in the presence of both white and colored observation noise. The reduction in convergence error achieved by the new algorithm with desired-signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than a traditional adaptive filter using the LMS algorithm at retaining the geometrical characteristics of the ECG signal.
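
    As background, the sketch below shows a plain time-domain LMS adaptive line enhancer: the filter predicts the correlated (quasi-periodic) part of the input from a delayed copy of itself, so broadband random noise, which is unpredictable, is rejected. It is a simplified stand-in for the proposed transform-domain variable-step-size Griffiths' algorithm; the filter order, delay and step size are assumptions.

    ```python
    import numpy as np

    def adaptive_line_enhancer(x, order=32, delay=8, mu=0.002):
        """Plain LMS adaptive line enhancer; returns the enhanced (predicted) signal."""
        n = len(x)
        w = np.zeros(order)
        y = np.zeros(n)
        for k in range(order + delay, n):
            u = x[k - delay - order:k - delay][::-1]   # delayed tap-input vector
            y[k] = w @ u                               # prediction of the correlated part
            e = x[k] - y[k]                            # prediction error drives adaptation
            w += 2.0 * mu * e * u                      # LMS weight update
        return y

    fs = 360.0
    t = np.arange(0.0, 4.0, 1.0 / fs)
    clean = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 2.4 * t)  # toy quasi-periodic signal
    noisy = clean + np.random.default_rng(0).normal(0.0, 0.5, t.size)
    enhanced = adaptive_line_enhancer(noisy)
    print(np.std(noisy - clean), np.std(enhanced[500:] - clean[500:]))
    ```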

  10. A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms

    International Nuclear Information System (INIS)

    Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats

    2008-01-01

    This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique.

  11. A ROBUST METHOD FOR STEREO VISUAL ODOMETRY BASED ON MULTIPLE EUCLIDEAN DISTANCE CONSTRAINT AND RANSAC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Q. Zhou

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences the precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image of the next time step is matched against the current left image, the EDC and RANSAC are iteratively performed. After these steps, exceptional mismatched points may still remain in some cases, so RANSAC is applied a third time to eliminate the effects of those outliers in the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
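
    The feature-matching front end (ORB extraction, brute-force Hamming matching, RANSAC-based mismatch rejection) can be sketched with OpenCV as follows. The fundamental-matrix RANSAC below is a generic outlier filter standing in for the paper's combination of Euclidean distance constraints and repeated RANSAC stages, and the image file names are placeholders.

    ```python
    import cv2
    import numpy as np

    img_left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder stereo pair
    img_right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)      # brute-force Hamming matching
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC: keep only matches consistent with the epipolar geometry.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = [m for m, keep in zip(matches, inlier_mask.ravel()) if keep]
    print(len(matches), "raw matches ->", len(inliers), "RANSAC inliers")
    ```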

  12. An integer optimization algorithm for robust identification of non-linear gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Chemmangattuvalappil Nishanth

    2012-09-01

    Background: Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision-making processes. Advancement in high-throughput experimental techniques has initiated innovative data-driven analysis of gene regulatory networks. However, the inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions. Furthermore, robust algorithms that directly exploit basic biological traits are few. Such algorithms are expected to be efficient in their performance and robust in their predictions. Results: We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time series gene expression data in the presence of significant experimental noise and non-linear behavior. In this novel formulation, we have addressed data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we have incorporated non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the sparsity of biological interactions. Towards that, the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component. The objective function is targeted to minimize the network connections subject to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental data sets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and small discrepancy between the actual and predicted kinetic parameters.

  13. Robust and unobtrusive algorithm based on position independence for step detection

    Science.gov (United States)

    Qiu, KeCheng; Li, MengYang; Luo, YiHan

    2018-04-01

    Running is becoming one of the most popular forms of exercise, and monitoring steps can help users better understand their running and improve exercise efficiency. In this paper, we design and implement a robust and unobtrusive algorithm for step detection under real-world conditions, based on independence from device position. It applies a Butterworth filter to suppress high-frequency interference, and then employs a mathematical projection to transform the coordinate system, which solves the problem of the unknown position of the smartphone. Finally, a sliding window is used to suppress false peaks. The algorithm was tested with eight participants on the Android 7.0 platform. In our experiments, the results show that the proposed algorithm achieves the desired effect regardless of device pose.
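
    A minimal version of the pipeline (orientation-independent magnitude signal, Butterworth low-pass filtering, then peak picking with a minimum peak distance playing the role of the sliding window) might look like the sketch below. The sampling rate, cutoff frequency and thresholds are assumptions, not the paper's parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def count_steps(acc_xyz, fs=50.0):
        """Count steps from 3-axis accelerometer samples, independent of device pose."""
        mag = np.linalg.norm(acc_xyz, axis=1)                # orientation-independent magnitude
        mag = mag - mag.mean()                               # remove the gravity offset
        b, a = butter(4, 3.0 / (fs / 2.0), btype="low")      # keep the ~2 Hz step rhythm
        smooth = filtfilt(b, a, mag)
        peaks, _ = find_peaks(smooth, height=0.5, distance=int(0.3 * fs))
        return len(peaks)

    # Synthetic walk: about 2 steps per second for 10 s, plus sensor noise.
    fs = 50.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    acc = np.column_stack([0.3 * rng.normal(size=t.size),
                           0.3 * rng.normal(size=t.size),
                           9.81 + 2.0 * np.maximum(np.sin(2 * np.pi * 2.0 * t), 0.0)])
    print(count_steps(acc, fs))                              # roughly 20 steps
    ```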

  14. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    Science.gov (United States)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The progress of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed which is based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, a candidate-region-based license plate detection and segmentation method is developed. Secondly, a new feature extraction model is designed that combines three sets of features. Thirdly, the license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method has increased to 95.7% and the processing time has decreased to 51.4 ms.

  15. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    Directory of Open Access Journals (Sweden)

    Shashwat Pathak

    2016-09-01

    This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, available methods for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera; both are very expensive. The main motivation behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on texture features: uniformity, intensity and standard deviation. These features are first computed and mapped to the diagnostic opinion of an eye expert to define the basic threshold of the screening system, and the system is later tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using the proposed system is suggested, which confirms the telemedicine application of the proposed system.

  16. A fast, robust algorithm for power line interference cancellation in neural recording

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed algorithm features a highly robust operation, fast adaptation to
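
    For contrast with the adaptive method described above, the sketch below shows the simplest non-adaptive alternative: fixed IIR notch filters at the mains frequency and its first harmonics. Unlike the proposed algorithm, this baseline cannot track drifts in the interference frequency, amplitude or phase; the sampling rate, Q factor and signal levels are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 1000.0
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 2.0, 1.0 / fs)
    neural = 20e-6 * rng.standard_normal(t.size)              # stand-in wideband recording
    interference = 100e-6 * np.sin(2 * np.pi * 50.3 * t)      # mains slightly off 50 Hz
    x = neural + interference

    y = x.copy()
    for f0 in (50.0, 100.0, 150.0):                           # fundamental and first harmonics
        b, a = iirnotch(f0, Q=30.0, fs=fs)
        y = filtfilt(b, a, y)                                 # zero-phase fixed notch

    print(np.std(x - neural), np.std(y - neural))             # residual interference before/after
    ```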

  17. Robust

    DEFF Research Database (Denmark)

    2017-01-01

    ‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November 2017. Researchers and PhD fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish...

  18. Collateral missing value imputation: a new robust missing value estimation algorithm for microarray data.

    Science.gov (United States)

    Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S

    2005-05-15

    Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate these values as accurately as possible before using these algorithms. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least squares regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least squares imputation (LSImpute) and K-nearest neighbour (KNN) imputation. All these methods were rigorously tested to estimate missing values in three separate non-time-series (ovarian cancer based) datasets and one time-series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods for both types of data, at the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE

  19. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in hyperspectral image analysis and processing in recent years. The most important issue for hyperspectral anomaly detection is background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to form the foundation of the regression. Furthermore, a manifold regularization term, which exploits the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-NN score estimation method is applied to the robust background and potential anomaly datasets to produce the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with some other state-of-the-art anomaly detection methods, and is easy to implement.

  20. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    KAUST Repository

    Zheng, Xiang

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.

  1. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    International Nuclear Information System (INIS)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-01-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors

  2. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    Science.gov (United States)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.

  3. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

    In order to improve robustness and imperceptibility in practical applications, a novel audio watermarking algorithm with strong robustness is proposed by exploiting the multi-resolution characteristic of the discrete wavelet transform (DWT) and the energy compaction capability of the discrete cosine transform (DCT). The human auditory system is insensitive to minor changes in the frequency components of the audio signal, so watermarks can be embedded by slightly modifying the frequency components of the audio signal. The audio fragments segmented from the cover audio signal are decomposed by the DWT to obtain several groups of wavelet coefficients in different frequency bands, and then the fourth-level detail coefficients are selected and divided into a former packet and a latter packet, each of which undergoes the DCT to obtain a set of transform domain coefficients (TDC). Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to a specific embedding rule. The watermark extraction is blind, without the carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness when resisting various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.

  4. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as its basic principle, and constructs the joint log-likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images of better quality and so improve the convergence of the blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solution for AO image restoration, addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performance of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.

  5. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing a corrupted pixel based on the estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noises in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from a visual point of view and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  6. Robust and Low-Complexity Timing Synchronization Algorithm and its Architecture for ADSRC Applications

    Directory of Open Access Journals (Sweden)

    KIM, J.

    2009-10-01

    Full Text Available 5.9 GHz advanced dedicated short range communications (ADSRC) is a short-to-medium range communication standard that supports both public safety and private operations in roadside-to-vehicle and vehicle-to-vehicle communication environments. The core physical-layer technology of ADSRC is orthogonal frequency division multiplexing (OFDM), which is sensitive to timing synchronization errors. In this paper, a robust and low-complexity timing synchronization algorithm suitable for the ADSRC system and its efficient hardware architecture are proposed. The proposed architecture is implemented on a Xilinx Virtex-II XC2V1000 Field Programmable Gate Array (FPGA). The proposed algorithm is based on a cross-correlation technique, which is employed to detect the starting point of the short training symbol and the guard interval of the long training symbol. Synchronization error rate (SER) evaluation results and post-layout simulation results show that the proposed algorithm is efficient in high-mobility environments. The post-layout implementation results demonstrate the robustness and low complexity of the proposed architecture.
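
    The cross-correlation operation at the heart of such a timing synchronizer can be sketched as follows; the training symbol, normalization and signal model are illustrative assumptions rather than the actual ADSRC preamble.

```python
# Minimal sketch of cross-correlation timing acquisition: correlate the received
# samples against a stored short training symbol and take the normalized peak as
# the packet start. The synthetic symbol and signal model are assumptions.
import numpy as np

def coarse_timing(rx, short_sym):
    corr = np.abs(np.correlate(rx, short_sym, mode="valid"))
    energy = np.convolve(np.abs(rx) ** 2, np.ones(len(short_sym)), mode="valid")
    metric = corr / np.sqrt(energy * np.sum(np.abs(short_sym) ** 2) + 1e-12)
    return int(np.argmax(metric)), metric

rng = np.random.default_rng(1)
short_sym = np.exp(2j * np.pi * rng.random(16))            # stored training symbol
rx = np.concatenate([0.01 * rng.standard_normal(100),      # noise before the packet
                     short_sym,
                     0.01 * rng.standard_normal(100)])
start, _ = coarse_timing(rx, short_sym)
print(start)                                               # expected: 100
```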

  7. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

    This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation-based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman Filter, which can integrate other sensory information such as position and attitude angle estimates to allow more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show accurate tracking of the runway edges during the landing phase under various lighting conditions, and suggest that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using Hardware in the Loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.

  8. Robust state feedback controller design of STATCOM using chaotic optimization algorithm

    Directory of Open Access Journals (Sweden)

    Safari Amin

    2010-01-01

    Full Text Available In this paper, a new technique for the design of a robust state feedback controller for the static synchronous compensator (STATCOM) using the Chaotic Optimization Algorithm (COA) is presented. The design is formulated as an optimization problem which is solved by the COA. Since chaotic planning enjoys reliability, ergodicity and stochastic features, the proposed technique performs the chaos mapping using Lozi map chaotic sequences, which increases its convergence rate. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results reveal that the proposed controller has an excellent capability in damping power system low frequency oscillations and greatly enhances the dynamic stability of power systems. Moreover, the system performance analysis under different operating conditions shows that the phase based controller is superior compared to the magnitude based controller.
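
    As a small illustration of the chaotic-search ingredient, the sketch below generates a Lozi-map sequence and rescales it to a candidate parameter interval; the map parameters and the interval are assumed for illustration only.

```python
# Minimal sketch of the chaotic-sequence ingredient: Lozi-map iterates rescaled to a
# search interval of candidate controller gains. Map parameters and the interval are
# illustrative assumptions.
import numpy as np

def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x          # Lozi map
        xs[i] = x
    return xs

def to_interval(seq, lo, hi):
    s = (seq - seq.min()) / (seq.max() - seq.min() + 1e-12)
    return lo + s * (hi - lo)

candidate_gains = to_interval(lozi_sequence(200), 0.0, 100.0)
print(candidate_gains[:5])
```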

  9. Massive parallelization of a 3D finite difference electromagnetic forward solution using domain decomposition methods on multiple CUDA enabled GPUs

    Science.gov (United States)

    Schultz, A.

    2010-12-01

    We describe our ongoing efforts to achieve massive parallelization on a novel hybrid GPU testbed machine currently configured with 12 Intel Westmere Xeon CPU cores (or 24 parallel computational threads) with 96 GB DDR3 system memory, 4 GPU subsystems which in aggregate contain 960 NVidia Tesla GPU cores with 16 GB dedicated DDR3 GPU memory, and a second interleaved bank of 4 GPU subsystems containing in aggregate 1792 NVidia Fermi GPU cores with 12 GB dedicated DDR5 GPU memory. We are applying domain decomposition methods to a modified version of Weiss' (2001) 3D frequency domain full physics EM finite difference code, an open source GPL licensed f90 code available for download from www.OpenEM.org. This will be the core of a new hybrid 3D inversion that parallelizes frequencies across CPUs and individual forward solutions across GPUs. We describe progress made in modifying the code to use direct solvers in GPU cores dedicated to each small subdomain, iteratively improving the solution by matching adjacent subdomain boundary solutions, rather than the iterative Krylov-space sparse solvers currently applied to the whole domain.

  10. Autopiquer - a Robust and Reliable Peak Detection Algorithm for Mass Spectrometry.

    Science.gov (United States)

    Kilgour, David P A; Hughes, Sam; Kilgour, Samantha L; Mackay, C Logan; Palmblad, Magnus; Tran, Bao Quoc; Goo, Young Ah; Ernst, Robert K; Clarke, David J; Goodlett, David R

    2017-02-01

    We present a simple algorithm for robust and unsupervised peak detection by determining a noise threshold in isotopically resolved mass spectrometry data. Solving this problem will greatly reduce the subjective and time-consuming manual picking of mass spectral peaks and so will prove beneficial in many research applications. The Autopiquer approach uses autocorrelation to test for the presence of (isotopic) structure in overlapping windows across the spectrum. Within each window, a noise threshold is optimized to remove the most unstructured data, whilst keeping as much of the (isotopic) structure as possible. This algorithm has been successfully demonstrated for both peak detection and spectral compression on data from many different classes of mass spectrometer and for different sample types, and this approach should also be extendible to other types of data that contain regularly spaced discrete peaks.
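
    The windowed autocorrelation test can be sketched as follows; the structure score and the threshold search below are illustrative stand-ins for Autopiquer's actual criterion.

```python
# Minimal sketch of the windowed autocorrelation idea: within one window, raise a
# noise threshold as long as the autocorrelation at the expected isotope spacing
# stays close to its unthresholded value. The score and the 'keep' fraction are
# illustrative stand-ins for Autopiquer's criterion.
import numpy as np

def structure_score(window, spacing):
    w = window - window.mean()
    ac = np.correlate(w, w, mode="full")[len(w) - 1:]
    return ac[spacing] / (ac[0] + 1e-12)            # autocorrelation at the spacing

def window_threshold(window, spacing, levels=50, keep=0.5):
    base = structure_score(window, spacing)
    best = 0.0
    for t in np.linspace(0.0, window.max(), levels):
        clipped = np.where(window >= t, window, 0.0)
        if structure_score(clipped, spacing) >= keep * base:
            best = t                                # highest threshold still structured
        else:
            break
    return best

x = np.arange(400)
window = 0.05 * np.random.rand(400)
window[::7] += np.exp(-((x[::7] - 200) / 80.0) ** 2)   # isotope-like peaks every 7 bins
print(window_threshold(window, spacing=7))
```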

  11. A Robust Inversion Algorithm for Surface Leaf and Soil Temperatures Using the Vegetation Clumping Index

    Directory of Open Access Journals (Sweden)

    Zunjian Bian

    2017-07-01

    Full Text Available The inversion of land surface component temperatures is an essential source of information for mapping heat fluxes and the angular normalization of thermal infrared (TIR) observations. Leaf and soil temperatures can be retrieved using multiple-view-angle TIR observations. In a satellite-scale pixel, the clumping effect of vegetation is usually present, but it is not completely considered during the inversion process. Therefore, we introduce a simple inversion procedure that uses gap frequency with a clumping index (GCI) to retrieve leaf and soil temperatures over both crop and forest canopies. Simulated datasets corresponding to turbid vegetation, regularly planted crops and randomly distributed forest were generated using a radiosity model and were used to test the proposed inversion algorithm. The results indicated that the GCI algorithm performed well for both crop and forest canopies, with root mean squared errors of less than 1.0 °C against simulated values. The proposed inversion algorithm was also validated using measured datasets over orchard, maize and wheat canopies. Similar results were achieved, demonstrating that using the clumping index can improve inversion results. Given its straightforward form and robust performance for both crop and forest canopies, we recommend the GCI algorithm as a foundation for future satellite-based applications.

  12. A Robust Dynamic Heart-Rate Detection Algorithm Framework During Intense Physical Activities Using Photoplethysmographic Signals

    Directory of Open Access Journals (Sweden)

    Jiajia Song

    2017-10-01

    Full Text Available Dynamic, accurate heart-rate (HR) estimation using a photoplethysmogram (PPG) during intense physical activities is always challenging due to corruption by motion artifacts (MAs). It is difficult to reconstruct a clean signal and extract HR from contaminated PPG. This paper proposes a robust HR-estimation algorithm framework that uses one-channel PPG and tri-axis acceleration data to reconstruct the PPG and calculate the HR based on features of the PPG and spectral analysis. Firstly, the signal is checked for the presence of MAs. Then, the spectral peaks corresponding to the acceleration data are filtered from the periodogram of the PPG when MAs exist. Different signal-processing methods are applied based on the number of remaining PPG spectral peaks. The main MA-removal algorithm (NFEEMD) includes a repeated single-notch filter and ensemble empirical mode decomposition. Finally, HR calibration is designed to ensure the accuracy of HR tracking. The NFEEMD algorithm was performed on the 23 datasets from the 2015 IEEE Signal Processing Cup Database. The average estimation errors were 1.12 BPM (12 training datasets), 2.63 BPM (10 testing datasets) and 1.87 BPM (all 23 datasets), respectively. The Pearson correlation was 0.992. The experimental results illustrate that the proposed algorithm is not only suitable for HR estimation during continuous activities, like slow running (13 training datasets), but also for intense physical activities with acceleration, like arm exercise (10 testing datasets).
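
    The spectral MA-filtering step (removing acceleration-related peaks from the PPG periodogram before picking the heart-rate peak) can be sketched as follows; the band limits, notch bandwidth and toy signals are illustrative assumptions, not the NFEEMD implementation.

```python
# Minimal sketch of the spectral MA-filtering step: notch out acceleration-related
# peaks from the PPG periodogram, then pick the heart-rate peak inside a plausible
# band. Band limits, notch bandwidth and the toy signals are assumptions.
import numpy as np
from scipy.signal import periodogram

def hr_estimate(ppg, acc, fs=125.0, notch_bw=0.1):
    f, p = periodogram(ppg, fs)
    for axis in acc:                                 # acc shape: (3, n_samples)
        fa, pa = periodogram(axis, fs)
        f_ma = fa[np.argmax(pa)]                     # dominant motion frequency
        p[np.abs(f - f_ma) < notch_bw] = 0.0         # crude spectral notch
    band = np.where((f >= 0.7) & (f <= 3.5), p, 0.0) # 42-210 BPM search band
    return 60.0 * f[np.argmax(band)]

fs = 125.0
t = np.arange(0, 8, 1 / fs)
ppg = np.sin(2 * np.pi * 1.75 * t) + 0.8 * np.sin(2 * np.pi * 2.5 * t)  # HR + MA
acc = np.vstack([np.sin(2 * np.pi * 2.5 * t)] * 3)                      # arm cadence
print(hr_estimate(ppg, acc, fs))                     # ~105 BPM
```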

  13. A robust algorithm to solve the signal setting problem considering different traffic assignment approaches

    Directory of Open Access Journals (Sweden)

    Adacher Ludovica

    2017-12-01

    Full Text Available In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.

  14. Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.

    Science.gov (United States)

    Hu, Yi-Chung

    2014-01-01

    On the basis of fuzzy regression, computational intelligence models such as neural networks can be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches depend on whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses a multilayer perceptron to construct the robust nonlinear interval regression model using a genetic algorithm. Outliers beyond or beneath the data interval impose only a slight effect on the determination of the data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.

  15. Particle Filter-Based Target Tracking Algorithm for Magnetic Resonance-Guided Respiratory Compensation : Robustness and Accuracy Assessment

    NARCIS (Netherlands)

    Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N

    PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized

  16. Fast and robust ray casting algorithms for virtual X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Duvauchelle, P.; Letang, J.M.; Babot, D.

    2006-01-01

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation

  17. Fast and robust ray casting algorithms for virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: Nicolas.Freud@insa-lyon.fr; Duvauchelle, P. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Letang, J.M. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2006-07-15

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation.

  18. Development of the hierarchical domain decomposition boundary element method for solving the three-dimensional multiregion neutron diffusion equations

    International Nuclear Information System (INIS)

    Chiba, Gou; Tsuji, Masashi; Shimazu, Yoichiro

    2001-01-01

    A hierarchical domain decomposition boundary element method (HDD-BEM) that was developed to solve a two-dimensional neutron diffusion equation has been modified to deal with three-dimensional problems. In the HDD-BEM, the domain is decomposed into homogeneous regions. The boundary conditions on the common inner boundaries between decomposed regions and the neutron multiplication factor are initially assumed. With these assumptions, the neutron diffusion equations defined in the decomposed homogeneous regions can be solved respectively by applying the boundary element method. This part corresponds to the 'lower level' calculations. At the 'higher level' calculations, the assumed values, the inner boundary conditions and the neutron multiplication factor, are modified so as to satisfy the continuity conditions for the neutron flux and the neutron currents on the inner boundaries. These procedures of the lower and higher levels are executed alternately and iteratively until the continuity conditions are satisfied within a convergence tolerance. With the hierarchical domain decomposition, it is possible to deal with problems composed of a large number of regions, something that has been difficult with the conventional BEM. In this paper, it is shown that a three-dimensional problem with as many as 722 regions can be solved with fine accuracy and acceptable computation time. (author)

  19. A hybrid multi-objective imperialist competitive algorithm and Monte Carlo method for robust safety design of a rail vehicle

    Science.gov (United States)

    Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi

    2017-10-01

    This paper deals with the robust safety design optimization of a rail vehicle system moving on short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety considers simultaneously the derailment angle and its standard deviation, taking the design parameter uncertainties into account. The obtained results show that the robust design significantly reduces the sensitivity of rail vehicle safety to the design parameter uncertainties compared to the deterministic design and to literature results.

  20. Robust total energy demand estimation with a hybrid Variable Neighborhood Search – Extreme Learning Machine algorithm

    International Nuclear Information System (INIS)

    Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.

    2016-01-01

    Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood algorithm. • Socio-economic variables are used, and a one-year-ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out on real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of previous approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called the Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, by including an exponential prediction model. While previous approaches consider the number of macroeconomic variables used for prediction to be a parameter of the algorithm (i.e., it is fixed a priori), the proposed Variable Neighborhood Search method optimizes both the number of variables and the choice of the best ones. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments on a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.
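
    The Extreme Learning Machine step (random hidden layer, output weights from a single least-squares solve) can be sketched as below; the Variable Neighborhood Search feature-selection stage is omitted and the regression data are synthetic placeholders.

```python
# Minimal sketch of the Extreme Learning Machine stage: random hidden layer, output
# weights from a single least-squares solve. The VNS feature-selection stage is
# omitted and the regression data are synthetic placeholders.
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (not trained)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.random.randn(200, 6)                          # e.g. six selected indicators
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * np.random.randn(200)
W, b, beta = elm_fit(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # in-sample MSE
```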

  1. DOMAIN DECOMPOSITION FOR POROELASTICITY AND ELASTICITY WITH DG JUMPS AND MORTARS

    KAUST Repository

    GIRAULT, V.; PENCHEVA, G.; WHEELER, M. F.; WILDEY, T.

    2011-01-01

    by introducing DG jumps and mortars. The unknowns are condensed on the interface, so that at each time step, the computation in each subdomain can be performed in parallel. In addition, by extrapolating the displacement, we present an algorithm where

  2. A general and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale

    Science.gov (United States)

    Decherchi, Sergio; Rocchia, Walter

    2013-01-01

    We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection of the highly symmetric structure of fullerene, which fails when attempted by MSMS and has problems in EDTSurf. In terms of timings, NanoShaper builds the Skin surface three times faster than the single threaded version in Lindow et al. on a 100,000 atoms protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver. Its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after Hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a middle-range workstation. PMID:23577073

  3. Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers

    Directory of Open Access Journals (Sweden)

    Fabrizio Cutolo

    2016-09-01

    Full Text Available In the context of surgical navigation systems based on augmented reality (AR), the key challenge is to ensure the highest degree of realism in merging computer-generated elements with live views of the surgical scene. This paper presents an algorithm suited for wearable stereoscopic augmented reality video see-through systems for use in a clinical scenario. A video-based tracking solution is proposed that relies on stereo localization of three monochromatic markers rigidly constrained to the scene. A PnP-based optimization step is introduced to refine separately the pose of the two cameras. Video-based tracking methods using monochromatic markers are robust to non-controllable and/or inconsistent lighting conditions. The two-stage camera pose estimation algorithm provides sub-pixel registration accuracy. From a technological and an ergonomic standpoint, the proposed approach represents an effective solution to the implementation of wearable AR-based surgical navigation systems wherever rigid anatomies are involved.

  4. A Robust and Efficient Algorithm for Tool Recognition and Localization for Space Station Robot

    Directory of Open Access Journals (Sweden)

    Lingbo Cheng

    2014-12-01

    Full Text Available This paper studies a robust target recognition and localization method for a maintenance robot in a space station. Its main goal is to handle the target affine transformation caused by microgravity, the strong reflection and refraction of sunlight and lamplight in the cabin, and the occlusion of other objects. In this method, an Affine Scale Invariant Feature Transform (Affine-SIFT) algorithm is proposed to extract enough local feature points with full affine invariance, and stable matching points are obtained from these feature points for target recognition using the Random Sample Consensus (RANSAC) algorithm. Then, in order to localize the target, an effective and appropriate 3D grasping scope of the target is defined, and we determine and evaluate the grasping precision with the estimated affine transformation parameters presented in this paper. Finally, the threshold of RANSAC is optimized to enhance the accuracy and efficiency of target recognition and localization, and the ranges of illumination, viewing distance and viewpoint angle over which the robot can obtain effective image data are evaluated by the Root-Mean-Square Error (RMSE). An experimental system to simulate the illumination environment in a space station was established. Extensive experiments have been carried out, and the results show both the validity of the proposed definition of the grasping scope and the feasibility of the proposed recognition and localization method.

  5. A robust algorithm for optimizing protein structures with NMR chemical shifts

    Energy Technology Data Exchange (ETDEWEB)

    Berjanskii, Mark; Arndt, David; Liang, Yongjie; Wishart, David S., E-mail: david.wishart@ualberta.ca [University of Alberta, Department of Computing Science (Canada)

    2015-11-15

    Over the past decade, a number of methods have been developed to determine the approximate structure of proteins using minimal NMR experimental information such as chemical shifts alone, sparse NOEs alone or a combination of comparative modeling data and chemical shifts. However, there have been relatively few methods that allow these approximate models to be substantively refined or improved using the available NMR chemical shift data. Here, we present a novel method, called Chemical Shift driven Genetic Algorithm for biased Molecular Dynamics (CS-GAMDy), for the robust optimization of protein structures using experimental NMR chemical shifts. The method incorporates knowledge-based scoring functions and structural information derived from NMR chemical shifts via a unique combination of multi-objective MD biasing, a genetic algorithm, and the widely used XPLOR molecular modelling language. Using this approach, we demonstrate that CS-GAMDy is able to refine and/or fold models that are as much as 10 Å (RMSD) away from the correct structure using only NMR chemical shift data. CS-GAMDy is also able to refine a wide range of approximate or mildly erroneous protein structures to more closely match the known/correct structure and the known/correct chemical shifts. We believe CS-GAMDy will allow protein models generated by sparse restraint or chemical-shift-only methods to achieve sufficiently high quality to be considered fully refined and “PDB worthy”. The CS-GAMDy algorithm is explained in detail and its performance is compared over a range of refinement scenarios with several commonly used protein structure refinement protocols. The program has been designed to be easily installed and easily used and is available at http://www.gamdy.ca.

  6. Domain decomposition based iterative methods for nonlinear elliptic finite element problems

    Energy Technology Data Exchange (ETDEWEB)

    Cai, X.C. [Univ. of Colorado, Boulder, CO (United States)

    1994-12-31

    The class of overlapping Schwarz algorithms has been extensively studied for linear elliptic finite element problems. In this presentation, the author considers the solution of systems of nonlinear algebraic equations arising from the finite element discretization of some nonlinear elliptic equations. Several overlapping Schwarz algorithms, including the additive and multiplicative versions, with inexact Newton acceleration will be discussed. The author shows that the convergence rate of Newton's method is independent of the mesh size used in the finite element discretization, and also independent of the number of subdomains into which the original domain is decomposed. Numerical examples will be presented.
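
    A minimal sketch of the overlapping Schwarz idea for the linear case, with two overlapping subdomains and alternating (multiplicative) subdomain solves on a 1D Poisson problem, is given below; it omits the nonlinear terms and inexact Newton acceleration discussed in the abstract, and the grid and overlap are illustrative.

```python
# Minimal sketch: alternating (multiplicative) overlapping Schwarz on a 1D Poisson
# problem -u'' = 1 with two subdomains. The nonlinear terms and inexact Newton
# acceleration from the abstract are omitted; grid and overlap are illustrative.
import numpy as np

n, overlap = 101, 10
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)                                   # Dirichlet data u(0) = u(1) = 0
mid = n // 2

def solve_subdomain(lo, hi, left_bc, right_bc):
    """Direct solve on the interior nodes lo+1 .. hi-1 with given boundary values."""
    m = hi - lo - 1
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = f[lo + 1:hi].copy()
    rhs[0] += left_bc / h**2
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

for _ in range(20):                               # Schwarz iterations
    u[1:mid + overlap] = solve_subdomain(0, mid + overlap, u[0], u[mid + overlap])
    u[mid - overlap + 1:n - 1] = solve_subdomain(mid - overlap, n - 1,
                                                 u[mid - overlap], u[n - 1])

print(np.max(np.abs(u - x * (1.0 - x) / 2.0)))    # exact solution of -u'' = 1
```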

  7. Enhancements to the Combinatorial Geometry Particle Tracker in the Mercury Monte Carlo Transport Code: Embedded Meshes and Domain Decomposition

    International Nuclear Information System (INIS)

    Greenman, G.M.; O'Brien, M.J.; Procassini, R.J.; Joy, K.I.

    2009-01-01

    Two enhancements to the combinatorial geometry (CG) particle tracker in the Mercury Monte Carlo transport code are presented. The first enhancement is a hybrid particle tracker wherein a mesh region is embedded within a CG region. This method permits efficient calculations of problems that contain both large-scale heterogeneous and homogeneous regions. The second enhancement relates to the addition of parallelism within the CG tracker via spatial domain decomposition. This permits calculations of problems with a large degree of geometric complexity, which are not possible through particle parallelism alone. In this method, the cells are decomposed across processors and a particle is communicated to an adjacent processor when it tracks to an interprocessor boundary. Applications that demonstrate the efficacy of these new methods are presented

  8. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2003-01-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, either using Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization; and finally conclude with some future perspectives. Parallel applications are mandatory for fine mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the SMPs cluster potential with a mixed mode parallelism. Mixed mode parallelism can be achieved by combining message passing interface between clusters with OpenMP implicit parallelism within a cluster

  9. Application of multi-thread computing and domain decomposition to the 3-D neutronics Fem code Cronos

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Direction de l' Energie Nucleaire, Service d' Etudes des Reacteurs et de Modelisations Avancees (DEN/SERMA), 91 - Gif sur Yvette (France)

    2003-07-01

    The purpose of this paper is to present the parallelization of the flux solver and the isotopic depletion module of the code, either using Message Passing Interface (MPI) or OpenMP. Thread parallelism using OpenMP was used to parallelize the mixed dual FEM (finite element method) flux solver MINOS. Investigations regarding the opportunity of mixing parallelism paradigms will be discussed. The isotopic depletion module was parallelized using domain decomposition and MPI. An attempt at using OpenMP was unsuccessful and will be explained. This paper is organized as follows: the first section recalls the different types of parallelism. The mixed dual flux solver and its parallelization are then presented. In the third section, we describe the isotopic depletion solver and its parallelization; and finally conclude with some future perspectives. Parallel applications are mandatory for fine mesh 3-dimensional transport and simplified transport multigroup calculations. The MINOS solver of the FEM neutronics code CRONOS2 was parallelized using the directive based standard OpenMP. An efficiency of 80% (resp. 60%) was achieved with 2 (resp. 4) threads. Parallelization of the isotopic depletion solver was obtained using domain decomposition principles and MPI. Efficiencies greater than 90% were reached. These parallel implementations were tested on a shared memory symmetric multiprocessor (SMP) cluster machine. The OpenMP implementation in the solver MINOS is only the first step towards fully using the SMPs cluster potential with a mixed mode parallelism. Mixed mode parallelism can be achieved by combining message passing interface between clusters with OpenMP implicit parallelism within a cluster.

  10. System optimization for HVAC energy management using the robust evolutionary algorithm

    International Nuclear Information System (INIS)

    Fong, K.F.; Hanby, V.I.; Chow, T.T.

    2009-01-01

    For an installed centralized heating, ventilating and air conditioning (HVAC) system, appropriate energy management measures would achieve energy conservation targets through the optimal control and operation. The performance optimization of conventional HVAC systems may be handled by operation experience, but it may not cover different optimization scenarios and parameters in response to a variety of load and weather conditions. In this regard, it is common to apply the suitable simulation-optimization technique to model the system then determine the required operation parameters. The particular plant simulation models can be built up by either using the available simulation programs or a system of mathematical expressions. To handle the simulation models, iterations would be involved in the numerical solution methods. Since the gradient information is not easily available due to the complex nature of equations, the traditional gradient-based optimization methods are not applicable for this kind of system models. For the heuristic optimization methods, the continual search is commonly necessary, and the system function call is required for each search. The frequency of simulation function calls would then be a time-determining step, and an efficient optimization method is crucial, in order to find the solution through a number of function calls in a reasonable computational period. In this paper, the robust evolutionary algorithm (REA) is presented to tackle this nature of the HVAC simulation models. REA is based on one of the paradigms of evolutionary algorithm, evolution strategy, which is a stochastic population-based searching technique emphasized on mutation. The REA, which incorporates the Cauchy deterministic mutation, tournament selection and arithmetic recombination, would provide a synergetic effect for optimal search. The REA is effective to cope with the complex simulation models, as well as those represented by explicit mathematical expressions of
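
    The REA ingredients named in the abstract (Cauchy mutation, tournament selection, arithmetic recombination) can be sketched as below, with a placeholder objective standing in for the HVAC simulation call; population size, bounds and mutation scale are illustrative assumptions.

```python
# Minimal sketch of the REA ingredients named above: tournament selection, arithmetic
# recombination and Cauchy mutation, driving a placeholder objective that stands in
# for one HVAC simulation call. Population size, bounds and scale are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                                  # placeholder for a simulation call
    return np.sum((x - 1.5) ** 2)

def rea(dim=4, pop=20, gens=100, scale=0.3, lo=0.0, hi=3.0):
    P = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        fit = np.array([objective(p) for p in P])
        idx = rng.integers(0, pop, (pop, 2))       # binary tournament selection
        winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = P[winners]
        mates = parents[rng.permutation(pop)]      # arithmetic recombination
        w = rng.random((pop, 1))
        children = w * parents + (1.0 - w) * mates
        children += scale * rng.standard_cauchy((pop, dim))   # Cauchy mutation
        P = np.clip(children, lo, hi)
    fit = np.array([objective(p) for p in P])
    return P[np.argmin(fit)]

print(rea())                                       # expected near [1.5, 1.5, 1.5, 1.5]
```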

  11. Overlapping domain decomposition preconditioners for the generalized Davidson method for the eigenvalue problem

    Energy Technology Data Exchange (ETDEWEB)

    Stathopoulos, A.; Fischer, C.F. [Vanderbilt Univ., Nashville, TN (United States); Saad, Y.

    1994-12-31

    The solution of the large, sparse, symmetric eigenvalue problem, Ax = λx, is central to many scientific applications. Among many iterative methods that attempt to solve this problem, the Lanczos and the Generalized Davidson (GD) are the most widely used methods. The Lanczos method builds an orthogonal basis for the Krylov subspace, from which the required eigenvectors are approximated through a Rayleigh-Ritz procedure. Each Lanczos iteration is economical to compute but the number of iterations may grow significantly for difficult problems. The GD method can be considered a preconditioned version of Lanczos. In each step the Rayleigh-Ritz procedure is solved and explicit orthogonalization of the preconditioned residual, (M − λI)^{-1}(A − λI)x, is performed. Therefore, the GD method attempts to improve convergence and robustness at the expense of a more complicated step.

  12. Color-SIFT model: a robust and an accurate shot boundary detection algorithm

    Science.gov (United States)

    Sharmila Kumari, M.; Shekar, B. H.

    2010-02-01

    In this paper, a new technique called the color-SIFT model is devised for shot boundary detection. Unlike the scale invariant feature transform model, which uses only grayscale information and misses important visual information regarding color, here we adopt different color planes to extract keypoints which are subsequently used to detect shot boundaries. The basic SIFT model has four stages, namely scale-space peak selection, keypoint localization, orientation assignment and keypoint description, and all four stages are employed to extract key descriptors in each color plane. The proposed model works on three different color planes, and a fusion is made to take a decision on the number of keypoint matches for shot boundary identification; it is hence different from the color global scale invariant feature transform, which works on quantized images. In addition, the proposed algorithm possesses invariance to linear transformations and is robust to occlusion and noisy environments. Experiments have been conducted on the standard TRECVID video database to reveal the performance of the proposed model.

  13. A robust, efficient and accurate β- pdf integration algorithm in nonpremixed turbulent combustion

    International Nuclear Information System (INIS)

    Liu, H.; Lien, F.S.; Chui, E.

    2005-01-01

    Among many presumed-shape pdf approaches, the presumed β-function pdf is widely used in nonpremixed turbulent combustion models in the literature. However, singularity difficulties at Z = 0 and 1, Z being the mixture fraction, may be encountered in the numerical integration of the β-function pdf, and there are few publications addressing this issue to date. The present study proposes an efficient, robust and accurate algorithm to overcome these numerical difficulties. The present treatment of the β-pdf integration is first used in the Burke-Schumann solution in conjunction with the k-ε turbulence model in the case of CH4/H2 bluff-body jets and flames. Afterwards it is extended to a more complex model, the laminar flamelet model, for the same flow. Numerical results obtained by using the proposed β-pdf integration method are compared to experimental values of the velocity field, temperature and constituent mass fractions to illustrate the efficiency and accuracy of the present method. (author)
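
    One common way to sidestep the endpoint singularities when integrating against a β-pdf is to assign the end-interval probability mass analytically via the regularized incomplete beta function and to use ordinary quadrature on the regular interior; the sketch below illustrates that idea and is not the paper's specific algorithm.

```python
# Minimal sketch of one way to handle the endpoint singularities: give the two end
# intervals their probability mass analytically through the regularized incomplete
# beta function (betainc), and integrate the regular interior numerically. This is
# an illustration, not the paper's specific algorithm.
import numpy as np
from scipy.special import betainc
from scipy.stats import beta as beta_dist

def beta_pdf_mean(phi, a, b, eps=1e-4, n=2000):
    w_lo = betainc(a, b, eps)                     # mass of [0, eps], possibly singular at 0
    w_hi = 1.0 - betainc(a, b, 1.0 - eps)         # mass of [1 - eps, 1]
    z = np.linspace(eps, 1.0 - eps, n)
    dz = z[1] - z[0]
    interior = np.sum(phi(z) * beta_dist.pdf(z, a, b)) * dz
    return phi(0.0) * w_lo + interior + phi(1.0) * w_hi

# Example: a Burke-Schumann-like piecewise-linear temperature profile in mixture fraction.
phi = lambda z: np.where(z < 0.3, z / 0.3, (1.0 - z) / 0.7)
print(beta_pdf_mean(phi, a=0.2, b=0.5))
```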

  14. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    Science.gov (United States)

    Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments from simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography society with an easy-to-use and robust algorithm for DT.

  15. Domain decomposition method for dynamic faulting under slip-dependent friction

    International Nuclear Information System (INIS)

    Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie

    2004-01-01

    The anti-plane shearing problem on a system of finite faults under a slip-dependent friction in a linear elastic domain is considered. Using a Newmark method for the time discretization of the problem, we have obtained an elliptic variational inequality at each time step. An upper bound for the time step size, which is not a CFL condition, is deduced from the solution uniqueness criterion using the first eigenvalue of the tangent problem. Finite element form of the variational inequality is solved by a Schwarz method assuming that the inner nodes of the domain lie in one subdomain and the nodes on the fault lie in other subdomains. Two decompositions of the domain are analyzed, one made up of two subdomains and another one with three subdomains. Numerical experiments are performed to illustrate convergence for a single time step (convergence of the Schwarz algorithm, influence of the mesh size, influence of the time step), convergence in time (instability capturing, energy dissipation, optimal time step) and an application to a relevant physical problem (interacting parallel fault segments)

  16. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noise have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noise. Using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, and this deterioration is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selection.

  17. Robust Fault-Tolerant Control for Satellite Attitude Stabilization Based on Active Disturbance Rejection Approach with Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Song

    2014-01-01

    Full Text Available This paper proposes a robust fault-tolerant control algorithm for satellite attitude stabilization based on an active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurement of each reaction flywheel is used for fault detection. If any reaction flywheel fault is detected, the corresponding faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite keeps working safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm so as to improve the performance of the attitude control system. A series of simulation experiments demonstrates the performance superiority of the proposed robust fault-tolerant control algorithm.

  18. Robust iris segmentation through parameterization of the Chan-Vese algorithm

    CSIR Research Space (South Africa)

    Mabuza-Hocquet, G

    2015-06-01

    Full Text Available The performance of an iris recognition system relies on automated processes from the segmentation stage to the matching stage. Each stage has traditional algorithms used successfully over the years. The drawback is that these algorithms assume...

  19. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    Science.gov (United States)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
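
    A forward block Gauss-Seidel sweep on a block tridiagonal system, the basic operation discussed above, can be sketched as follows; the diagonally dominant random blocks are placeholders for the time-subinterval blocks of the DTOC optimality system, for which, as noted above, GS alone need not converge and is used instead as a preconditioner.

```python
# Minimal sketch of a forward block Gauss-Seidel sweep on a block tridiagonal system.
# The diagonally dominant random blocks are placeholders for the time-subinterval
# blocks of the DTOC optimality system (where GS alone may diverge and is used as a
# preconditioner instead).
import numpy as np

rng = np.random.default_rng(0)
N, m = 8, 4                                        # time blocks, block size
D = [5.0 * np.eye(m) + 0.1 * rng.standard_normal((m, m)) for _ in range(N)]
L = [0.3 * rng.standard_normal((m, m)) for _ in range(N - 1)]   # sub-diagonal blocks
U = [0.3 * rng.standard_normal((m, m)) for _ in range(N - 1)]   # super-diagonal blocks
b = [rng.standard_normal(m) for _ in range(N)]

def block_gs(x, sweeps=50):
    for _ in range(sweeps):
        for i in range(N):                         # forward sweep over time blocks
            r = b[i].copy()
            if i > 0:
                r -= L[i - 1] @ x[i - 1]
            if i < N - 1:
                r -= U[i] @ x[i + 1]
            x[i] = np.linalg.solve(D[i], r)        # "subinterval" solve
    return x

x = block_gs([np.zeros(m) for _ in range(N)])
res = [b[i] - D[i] @ x[i]
       - (L[i - 1] @ x[i - 1] if i > 0 else 0.0)
       - (U[i] @ x[i + 1] if i < N - 1 else 0.0) for i in range(N)]
print(max(np.linalg.norm(r) for r in res))         # small if the sweep converged
```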

  20. A fast-multipole domain decomposition integral equation solver for characterizing electromagnetic wave propagation in mine environments

    KAUST Repository

    Yücel, Abdulkadir C.

    2013-07-01

    Reliable and effective wireless communication and tracking systems in mine environments are key to ensure miners' productivity and safety during routine operations and catastrophic events. The design of such systems greatly benefits from simulation tools capable of analyzing electromagnetic (EM) wave propagation in long mine tunnels and large mine galleries. Existing simulation tools for analyzing EM wave propagation in such environments employ modal decompositions (Emslie et al., IEEE Trans. Antennas Propag., 23, 192-205, 1975), ray-tracing techniques (Zhang, IEEE Tran. Vehic. Tech., 5, 1308-1314, 2003), and full wave methods. Modal approaches and ray-tracing techniques cannot accurately account for the presence of miners and their equipment, as well as wall roughness (especially when the latter is comparable to the wavelength). Full-wave methods do not suffer from such restrictions but require prohibitively large computational resources. To partially alleviate this computational burden, a 2D integral equation-based domain decomposition technique has recently been proposed (Bakir et al., in Proc. IEEE Int. Symp. APS, 1-2, 8-14 July 2012). © 2013 IEEE.

  1. A Robust Algorithm to Determine the Topology of Space from the Cosmic Microwave Background Radiation

    OpenAIRE

    Weeks, Jeffrey R.

    2001-01-01

    Satellite measurements of the cosmic microwave background radiation will soon provide an opportunity to test whether the universe is multiply connected. This paper presents a new algorithm for deducing the topology of the universe from the microwave background data. Unlike an older algorithm, the new algorithm gives the curvature of space and the radius of the last scattering surface as outputs, rather than requiring them as inputs. The new algorithm is also more tolerant of erro...

  2. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    Energy Technology Data Exchange (ETDEWEB)

    Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)

    2014-02-18

    This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on the parameter selection, especially when the signal to noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band in which a defect is present when all frequency components are in uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect position. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied on the austenitic steel and high nickel alloy (HNA) samples with 5MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20dB for both samples and consequently, defects are more visible in B-scan images created from the large amount of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.

  3. Robust Layout Synthesis of a MEM Crab-Leg Resonator Using a Constrained Genetic Algorithm

    DEFF Research Database (Denmark)

    Fan, Zhun; Achiche, Sofiane

    2007-01-01

    The research work carried out in this paper introduces a robust design method for layout synthesis of MEM resonator subject to inherent geometric uncertainties such as the fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained...

  4. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-01-01

    and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center

  5. An efficient and robust algorithm for parallel groupwise registration of bone surfaces

    NARCIS (Netherlands)

    van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.

    2012-01-01

    In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm

  6. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al, presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
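
    For reference, the Point of Closest Approach step that the paper analyses can be sketched as follows: find the closest point between the incoming and outgoing track lines and attribute the scattering angle to it. The example track points and directions are arbitrary.

```python
# Minimal sketch of the PoCA step: find the point of closest approach between the
# incoming and outgoing muon tracks and attribute the scattering angle to it.
# The example track points and directions are arbitrary.
import numpy as np

def poca(p1, d1, p2, d2):
    """Closest point between lines p1 + t*d1 and p2 + s*d2, plus the scattering angle."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:                          # near-parallel tracks
        t, s = 0.0, e / c
    else:
        t, s = (b * e - c * d) / denom, (a * e - b * d) / denom
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2)), angle

point, theta = poca(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.1, -1.0]),
                    np.array([0.05, 0.2, -1.0]), np.array([0.0, 0.05, -1.0]))
print(point, theta)
```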

  7. Get Your Atoms in Order--An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm.

    Science.gov (United States)

    Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A

    2015-10-26

    Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
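
    A minimal usage sketch of canonical atom ranking and canonical SMILES round-tripping in RDKit, where the algorithm described above is implemented (the example molecule and the random renumbering are illustrative):

    ```python
    from rdkit import Chem
    import random

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")     # aspirin, as an example
    print(list(Chem.CanonicalRankAtoms(mol)))             # canonical index per atom

    # Randomly renumber the input atoms and check that the canonical SMILES
    # (and hence any identifier derived from it) is unchanged.
    order = list(range(mol.GetNumAtoms()))
    random.shuffle(order)
    renumbered = Chem.RenumberAtoms(mol, order)
    assert Chem.MolToSmiles(renumbered) == Chem.MolToSmiles(mol)
    ```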

  8. Simple Algorithms to Calculate Asymptotic Null Distributions of Robust Tests in Case-Control Genetic Association Studies in R

    Directory of Open Access Journals (Sweden)

    Wing Kam Fung

    2010-02-01

    Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT) is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance) is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependent CATT. Two robust tests, MAX3 and the genetic model selection (GMS), were recently proposed. Their asymptotic null distributions are often obtained by Monte-Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data for the 17 most significant SNPs reported in four genome-wide association studies.

  9. Combining model based and data based techniques in a robust bridge health monitoring algorithm.

    Science.gov (United States)

    2014-09-01

    Structural Health Monitoring (SHM) aims to analyze civil, mechanical and aerospace systems in order to assess : incipient damage occurrence. In this project, we are concerned with the development of an algorithm within the : SHM paradigm for applicat...

  10. An effective, robust and parallel implementation of an interior point algorithm for limit state optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Frier, Christian

    2014-01-01

    A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium...

  11. A L1-TV algorithm for robust perspective photometric stereo with spatially-varying lightings

    DEFF Research Database (Denmark)

    Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis

    2015-01-01

    We tackle the problem of perspective 3D-reconstruction of Lambertian surfaces through photometric stereo, in the presence of outliers to Lambert's law, depth discontinuities, and unknown spatially-varying lightings. To this purpose, we introduce a robust $L^1$-TV variational formulation of the re...

  12. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    Science.gov (United States)

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

    Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuation under the influence of non-line-of-sight (NLOS) propagation in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixed model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixed model (GMM). A dissimilarity matrix is built to generate relative coordinates of nodes by a multidimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization well without being provided with prior knowledge. The experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
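
    A rough sketch of the MDS and coordinate-transformation steps described above, using scikit-learn and SciPy. The RSSI-to-distance step and the Gaussian-mixture denoising are omitted, and this is not the authors' implementation:

    ```python
    import numpy as np
    from sklearn.manifold import MDS
    from scipy.linalg import orthogonal_procrustes

    def localize(dissimilarity, anchor_idx, anchor_xy):
        """dissimilarity: (n, n) symmetric matrix of estimated inter-node distances;
        anchor_idx / anchor_xy: indices and true coordinates of the anchor nodes."""
        rel = MDS(n_components=2, dissimilarity="precomputed",
                  random_state=0).fit_transform(dissimilarity)
        # Map relative coordinates onto the anchors' frame (rotation + translation).
        a_rel = rel[anchor_idx] - rel[anchor_idx].mean(axis=0)
        a_true = anchor_xy - anchor_xy.mean(axis=0)
        R, _ = orthogonal_procrustes(a_rel, a_true)
        return (rel - rel[anchor_idx].mean(axis=0)) @ R + anchor_xy.mean(axis=0)
    ```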

  13. A robust and accurate binning algorithm for metagenomic sequences with arbitrary species abundance ratio.

    Science.gov (United States)

    Leung, Henry C M; Yiu, S M; Yang, Bin; Peng, Yu; Wang, Yi; Liu, Zhihua; Chen, Jingchi; Qin, Junjie; Li, Ruiqiang; Chin, Francis Y L

    2011-06-01

    With the rapid development of next-generation sequencing techniques, metagenomics, also known as environmental genomics, has emerged as an exciting research area that enables us to analyze the microbial environment in which we live. An important step for metagenomic data analysis is the identification and taxonomic characterization of DNA fragments (reads or contigs) resulting from sequencing a sample of mixed species. This step is referred to as 'binning'. Binning algorithms that are based on sequence similarity and sequence composition markers rely heavily on the reference genomes of known microorganisms or phylogenetic markers. Due to the limited availability of reference genomes and the bias and low availability of markers, these algorithms may not be applicable in all cases. Unsupervised binning algorithms which can handle fragments from unknown species provide an alternative approach. However, existing unsupervised binning algorithms only work on datasets either with balanced species abundance ratios or rather different abundance ratios, but not both. In this article, we present MetaCluster 3.0, an integrated binning method based on the unsupervised top-down separation and bottom-up merging strategy, which can bin metagenomic fragments of species with very balanced abundance ratios (say 1:1) to very different abundance ratios (e.g. 1:24) with consistently higher accuracy than existing methods. MetaCluster 3.0 can be downloaded at http://i.cs.hku.hk/~alse/MetaCluster/.

  14. Robust low frequency current ripple elimination algorithm for grid-connected fuel cell systems with power balancing technique

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong-Soo; Choe, Gyu-Yeong; Lee, Byoung-Kuk [School of Information and Communication Engineering, Sungkyunkwan University, 300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do 440-746 (Korea, Republic of); Kang, Hyun-Soo [R and D Center, Advanced Drive Technology (ADT) Company, 689-26 Geumjeong-dong, Gunpo-si, Gyeonggi-do 435-862 (Korea, Republic of)

    2011-05-15

    The low frequency current ripple in grid-connected fuel cell systems is generated by dc-ac inverter operation, which produces a 60 Hz fundamental component, and has harmful effects on the fuel cell stack itself, such as slowing cathode surface responses, increasing fuel consumption by more than 10%, creating oxygen starvation, reducing the operating lifetime, and incurring nuisance tripping such as in overload situations. For these reasons, low frequency current ripple makes the fuel cell system unstable and shortens the lifetime of the fuel cell stack. This paper presents a fast and robust control algorithm to eliminate low frequency current ripple in grid-connected fuel cell systems. Compared with conventional methods, in the proposed control algorithm the dc link voltage controller is shifted from the dc-dc converter to the dc-ac inverter, so that the dc-ac inverter handles dc link voltage control and output current control simultaneously with the help of a power balancing technique. The results indicate that the proposed algorithm can not only completely eliminate the current ripple but also significantly reduce the overshoot or undershoot during transient states without any extra hardware. The validity of the proposed algorithm is verified by computer simulations and by experiments with a 1 kW laboratory prototype. (author)

  15. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Millman, D. L. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States); Griesheimer, D. P.; Nease, B. R. [Bechtel Marine Propulsion Corporation, Bertis Atomic Power Laboratory (United States); Snoeyink, J. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)

    2012-07-01

    In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
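
    The decomposition idea, reduced to its simplest form, can be sketched as follows. This is an illustrative toy, not the authors' algorithm: it subdivides an axis-aligned box until cells are (heuristically) classified as inside/outside or small enough to be finished off with Monte Carlo sampling.

    ```python
    import numpy as np

    def volume(inside, lo, hi, tol, depth=0, max_depth=6, samples=128):
        """Estimate the volume of the region {x : inside(x)} within box [lo, hi].
        'inside' stands for the CSG membership test; tol controls when a mixed
        cell is finished off with Monte Carlo sampling."""
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        box_vol = np.prod(hi - lo)
        corners = np.array([[hi[i] if (b >> i) & 1 else lo[i] for i in range(3)]
                            for b in range(8)])
        flags = [inside(c) for c in corners]
        if depth >= max_depth or box_vol < tol:
            pts = lo + np.random.rand(samples, 3) * (hi - lo)
            return box_vol * np.mean([inside(p) for p in pts])
        if all(flags):        # heuristically treat as fully inside (sketch only)
            return box_vol
        if not any(flags):    # heuristically treat as fully outside (sketch only)
            return 0.0
        mid = 0.5 * (lo + hi)
        total = 0.0
        for b in range(8):    # recurse into the eight sub-boxes
            bits = [(b >> i) & 1 for i in range(3)]
            c_lo = np.where(bits, mid, lo)
            c_hi = np.where(bits, hi, mid)
            total += volume(inside, c_lo, c_hi, tol, depth + 1, max_depth, samples)
        return total

    # Example: volume of a unit-radius sphere inside [-1, 1]^3 (expected ~4.19).
    est = volume(lambda p: p @ p <= 1.0, [-1, -1, -1], [1, 1, 1], tol=1e-3)
    ```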

  16. Robust volume calculations for Constructive Solid Geometry (CSG) components in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Millman, D. L.; Griesheimer, D. P.; Nease, B. R.; Snoeyink, J.

    2012-01-01

    In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)

  17. Robust and rapid algorithms facilitate large-scale whole genome sequencing downstream analysis in an integrative framework.

    Science.gov (United States)

    Li, Miaoxin; Li, Jiang; Li, Mulin Jun; Pan, Zhicheng; Hsu, Jacob Shujui; Liu, Dajiang J; Zhan, Xiaowei; Wang, Junwen; Song, Youqiang; Sham, Pak Chung

    2017-05-19

    Whole genome sequencing (WGS) is a promising strategy to unravel variants or genes responsible for human diseases and traits. However, there is a lack of robust platforms for a comprehensive downstream analysis. In the present study, we first proposed three novel algorithms, sequence gap-filled gene feature annotation, bit-block encoded genotypes and sectional fast access to text lines, to address three fundamental problems. The three algorithms then formed the infrastructure of a robust parallel computing framework, KGGSeq, for integrating downstream analysis functions for whole genome sequencing data. KGGSeq has been equipped with a comprehensive set of analysis functions for quality control, filtration, annotation, pathogenic prediction and statistical tests. In tests with whole genome sequencing data from the 1000 Genomes Project, KGGSeq annotated several thousand more reliable non-synonymous variants than other widely used tools (e.g. ANNOVAR and SNPEff). It took only around half an hour on a small server with 10 CPUs to access genotypes of ∼60 million variants of 2504 subjects, while a popular alternative tool required around one day. KGGSeq's bit-block genotype format used 1.5% or less space to flexibly represent phased or unphased genotypes with multiple alleles, and was over 1000 times faster at calculating genotypic correlation. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
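
    The bit-block idea of packing genotypes into a few bits each can be illustrated with numpy. This is a simplified sketch for unphased biallelic genotypes at 2 bits per call; KGGSeq's actual on-disk format is more elaborate.

    ```python
    import numpy as np

    # Genotype codes: 0/1/2 copies of the alternate allele, 3 = missing.
    genotypes = np.array([0, 1, 2, 3, 1, 0, 2, 2, 0], dtype=np.uint8)

    def pack_2bit(g):
        """Pack 2-bit genotype codes, four per byte (lowest bits first)."""
        g = np.asarray(g, dtype=np.uint8)
        padded = np.zeros(-(-len(g) // 4) * 4, dtype=np.uint8)
        padded[:len(g)] = g
        quads = padded.reshape(-1, 4)
        return (quads[:, 0] | (quads[:, 1] << 2) |
                (quads[:, 2] << 4) | (quads[:, 3] << 6)).astype(np.uint8)

    def unpack_2bit(packed, n):
        b = np.asarray(packed, dtype=np.uint8)
        out = np.stack([(b >> shift) & 0b11 for shift in (0, 2, 4, 6)], axis=1)
        return out.reshape(-1)[:n]

    packed = pack_2bit(genotypes)          # 9 genotypes -> 3 bytes instead of 9
    assert np.array_equal(unpack_2bit(packed, len(genotypes)), genotypes)
    ```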

  18. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Directory of Open Access Journals (Sweden)

    Prajakta Desai

    Full Text Available Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  19. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Science.gov (United States)

    Desai, Prajakta; Loke, Seng W; Desai, Aniruddha

    2017-01-01

    Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  20. Towards a robust algorithm to determine topological domains from colocalization data

    Directory of Open Access Journals (Sweden)

    Alexander P. Moscalets

    2015-09-01

    Full Text Available One of the most important tasks in understanding the complex spatial organization of the genome consists in extracting information about this spatial organization, the function and structure of chromatin topological domains from existing experimental data, in particular from genome colocalization (Hi-C) matrices. Here we present an algorithm that reveals the underlying hierarchical domain structure of a polymer conformation by analyzing the modularity of colocalization matrices. We also test this algorithm on several model polymer structures: equilibrium globules, random fractal globules and regular fractal (Peano) conformations. We define what we call a spectrum of cluster borders, and show that these spectra behave strikingly differently for equilibrium and fractal conformations, allowing us to suggest an additional criterion to identify fractal polymer conformations.

  1. Meta-algorithmics patterns for robust, low cost, high quality systems

    CERN Document Server

    Simske, Steven J

    2013-01-01

    The confluence of cloud computing, parallelism and advanced machine intelligence approaches has created a world in which the optimum knowledge system will usually be architected from the combination of two or more knowledge-generating systems. There is a need, then, to provide a reusable, broadly-applicable set of design patterns to empower the intelligent system architect to take advantage of this opportunity. This book explains how to design and build intelligent systems that are optimized for changing system requirements (adaptability), optimized for changing system input (robustness), an

  2. Robust surface registration using salient anatomical features for image-guided liver surgery: Algorithm and validation

    OpenAIRE

    Clements, Logan W.; Chapman, William C.; Dawant, Benoit M.; Galloway, Robert L.; Miga, Michael I.

    2008-01-01

    A successful surface-based image-to-physical space registration in image-guided liver surgery (IGLS) is critical to provide reliable guidance information to surgeons and pertinent surface displacement data for use in deformation correction algorithms. The current protocol used to perform the image-to-physical space registration involves an initial pose estimation provided by a point based registration of anatomical landmarks identifiable in both the preoperative tomograms and the intraoperati...

  3. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    Ibn-Elhaj E

    2009-01-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail to work because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a viewpoint different from the above to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulation, we used a database freely available on the web.
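
    For reference, the phase-correlation baseline mentioned above can be sketched in a few lines (integer-pixel version; the paper's bispectrum-based subpixel estimator is considerably more involved and is not reproduced here):

    ```python
    import numpy as np

    def phase_correlation_shift(frame_a, frame_b):
        """Estimate the integer-pixel translation between two frames from the
        peak of the normalized cross-power spectrum."""
        fa = np.fft.fft2(frame_a)
        fb = np.fft.fft2(frame_b)
        cross_power = fa * np.conj(fb)
        cross_power /= np.abs(cross_power) + 1e-12      # keep phase only
        correlation = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(correlation), correlation.shape)
        # Wrap indices above the midpoint to negative shifts.
        shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
        return tuple(shifts)

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    shifted = np.roll(img, (3, -5), axis=(0, 1))
    print(phase_correlation_shift(shifted, img))   # expected (3, -5)
    ```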

  4. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    E. M. Ismaili Aalaoui

    2009-02-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques fail to work because they also estimate the noise spatial correlation. In this paper, we have studied this topic from a viewpoint different from the above to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and a matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulation, we used a database freely available on the web.

  5. An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS

    Directory of Open Access Journals (Sweden)

    Shou Yu-Wen

    2010-01-01

    Full Text Available We propose an efficient algorithm for removing shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a brand-new and complete structure in feature combination as well as analysis for orienting and labeling moving shadows, so as to extract the defined objects in foregrounds more easily in each snapshot of the original video files, which are acquired in real traffic situations. Moreover, we make use of a Gaussian Mixture Model (GMM) for background removal and detection of moving shadows in our tested images, and define two indices for characterizing non-shadowed regions, where one indicates the characteristics of lines and the other is characterized by the gray-scale information of the images, which helps us to build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of our moving shadow algorithm, we apply it to a practical application of traffic flow detection in ITS (Intelligent Transportation System): vehicle counting. Our algorithm shows a fast processing speed, 13.84 ms/frame, and improves the accuracy rate by 4%~10% for the three tested videos in the experimental results of vehicle counting.
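
    A minimal OpenCV sketch of GMM background subtraction with shadow labelling, in the spirit of the pipeline above (OpenCV's MOG2 marks shadow pixels with the value 127; the darkening-ratio analysis and line features of the paper are not reproduced, and the input file name is a placeholder):

    ```python
    import cv2

    cap = cv2.VideoCapture("traffic.mp4")                     # assumed input video
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        foreground = (mask == 255).astype("uint8") * 255      # moving objects
        shadows = (mask == 127).astype("uint8") * 255         # detected shadows
        cv2.imshow("foreground (shadows removed)", foreground)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()
    ```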

  6. A novel robust and efficient algorithm for charge particle tracking in high background flux

    International Nuclear Information System (INIS)

    Fanelli, C; Cisbani, E; Dotto, A Del

    2015-01-01

    The high luminosity that will be reached in the new generation of High Energy Particle and Nuclear physics experiments implies high background rates and large tracker occupancy, and therefore represents a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity of up to 10^39 cm^-2 s^-1. To this end, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information - time and charge - is exploited to minimize the number of hits to associate; (ii) a dedicated Neural Network (NN) has been designed for a fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results. (paper)
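
    To illustrate the last step of the pipeline (Kalman filtering followed by Rauch-Tung-Striebel smoothing), here is a minimal one-dimensional constant-velocity sketch with numpy; the GEM hit model, the neural-network association, and all detector specifics are outside its scope, and the noise parameters are arbitrary:

    ```python
    import numpy as np

    def kalman_rts(measurements, dt=1.0, q=1e-3, r=0.25):
        """1D constant-velocity Kalman filter followed by an RTS smoother."""
        F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
        H = np.array([[1.0, 0.0]])                     # we observe position only
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        R = np.array([[r]])
        n = len(measurements)
        x_pred, P_pred = np.zeros((n, 2)), np.zeros((n, 2, 2))
        x_filt, P_filt = np.zeros((n, 2)), np.zeros((n, 2, 2))
        x, P = np.array([measurements[0], 0.0]), np.eye(2)
        for k, z in enumerate(measurements):
            x_pred[k], P_pred[k] = F @ x, F @ P @ F.T + Q          # predict
            S = H @ P_pred[k] @ H.T + R
            K = P_pred[k] @ H.T @ np.linalg.inv(S)                 # Kalman gain
            x = x_pred[k] + K @ (np.atleast_1d(z) - H @ x_pred[k]) # update
            P = (np.eye(2) - K @ H) @ P_pred[k]
            x_filt[k], P_filt[k] = x, P
        x_smooth = x_filt.copy()
        for k in range(n - 2, -1, -1):                             # RTS backward pass
            C = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])
            x_smooth[k] = x_filt[k] + C @ (x_smooth[k + 1] - x_pred[k + 1])
        return x_smooth

    track = kalman_rts(np.cumsum(np.full(50, 0.5)) + 0.5 * np.random.randn(50))
    ```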

  7. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, being the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images with, say, illumination changes, but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of each classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies, weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.

  8. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

    In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced complexity stochastic split vector quantizer optimized for a noisy channel. For transmission over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with the split vector quantizer. After that, we applied the LSF-SSCOVQ-RC encoder (with weighted distance) for the robust encoding of LSF parameters of the 2.4 Kbits/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP, ensures better performance than the original MELP MSVQ at 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields a significant improvement in LSF encoding performance by ensuring reliable transmission over a noisy channel.

  9. Application of a New Robust ECG T-Wave Delineation Algorithm for the Evaluation of the Autonomic Innervation of the Myocardium

    DEFF Research Database (Denmark)

    Cesari, Matteo; Mehlsen, Jesper; Mehlsen, Anne-Birgitte

    2016-01-01

    T-wave amplitude (TWA) is a well-known index of the autonomic innervation of the myocardium. However, until now it has been evaluated only manually or with simple and inefficient algorithms. In this paper, we developed a new robust single-lead electrocardiogram (ECG) T-wave delineation algorithm...

  10. A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures

    Science.gov (United States)

    2014-01-01

    Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in

  11. Robust Algorithms for Detecting a Change in a Stochastic Process with Infinite Memory

    Science.gov (United States)

    1988-03-01

    ...breakdown point and the additional assumption of φ-mixing on the nominal measures. Then Huber's ... influence function. The structure of the optimal algorithm ... are i.i.d. sequences of Gaussian random variables, with identical variance σ². ... For the breakdown point and the influence function we will use ... algebraic sign for i=0,1. Here z will be chosen such that it leads to worst case or earliest breakdown. Next, the influence function measures ...

  12. Robust Longitudinal Aircraft- Control Based on an Adaptive Fuzzy-Logic Algorithm

    Directory of Open Access Journals (Sweden)

    Abdel-Latif Elshafei

    2002-06-01

    Full Text Available To study the aircraft response to a fast pull-up manoeuvre, a short period approximation of the longitudinal model is considered. The model is highly nonlinear and includes parametric uncertainties. To cope with a wide range of command signals, a robust adaptive fuzzy logic controller is proposed. The proposed controller adopts a dynamic inversion approach. Since feedback linearization is practically imperfect, robustifying and adaptive components are included in the control law to compensate for modeling errors and achieve acceptable tracking errors. Two fuzzy systems are implemented. The first system models the nominal values of the system’s nonlinearity. The second system is an adaptive one that compensates for modeling errors. The derivation of the control law based on a dynamic game approach is given in detail. Stability of the closed-loop control system is also verified. Simulation results based on an F16-model illustrate a successful tracking performance of the proposed controller.

  13. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    International Nuclear Information System (INIS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
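
    Thin-plate-spline warp estimation of the kind described above can be sketched with SciPy's RBF interpolator (the comb-pulse detection, streak-camera specifics, and NIF production code are not part of this sketch; the control points below are synthetic):

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Distorted and true (undistorted) positions of calibration comb points.
    # These control points are illustrative, not NIF calibration data.
    rng = np.random.default_rng(1)
    true_pts = rng.uniform(0, 1000, size=(200, 2))
    distorted_pts = true_pts + 5.0 * np.sin(true_pts / 150.0)   # synthetic warp

    # Fit a thin-plate-spline mapping from distorted to true coordinates.
    warp = RBFInterpolator(distorted_pts, true_pts, kernel="thin_plate_spline",
                           smoothing=1e-3)

    # Apply the correction to arbitrary image coordinates.
    corrected = warp(distorted_pts)
    print(np.max(np.abs(corrected - true_pts)))   # residual after correction
    ```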

  14. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Labaria, George R. [Univ. of California, Santa Cruz, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Warrick, Abbie L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, Peter M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kalantar, Daniel H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.

  15. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
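
    A minimal numerical illustration of the additive Schwarz idea that these algorithms generalize: an overlapping two-subdomain sweep for a 1D Poisson problem. The grid size, overlap, and damping factor are arbitrary choices, and the nonlinear extensions of the paper are not shown.

    ```python
    import numpy as np

    n, overlap, n_iter = 100, 6, 200
    h = 1.0 / (n + 1)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = np.ones(n)

    # Two overlapping subdomains covering indices [0, mid+overlap) and [mid-overlap, n).
    mid = n // 2
    subdomains = [np.arange(0, mid + overlap), np.arange(mid - overlap, n)]

    u = np.zeros(n)
    for _ in range(n_iter):
        residual = f - A @ u
        correction = np.zeros(n)
        for idx in subdomains:
            # Solve the local problem restricted to the subdomain (additive update).
            Ai = A[np.ix_(idx, idx)]
            correction[idx] += np.linalg.solve(Ai, residual[idx])
        u += 0.5 * correction          # damping for the additive variant

    print(np.linalg.norm(f - A @ u))   # residual norm after the Schwarz sweeps
    ```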

  16. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    CERN Document Server

    Parasuraman, Ramviyas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide red...

  17. Square-Wave Voltage Injection Algorithm for PMSM Position Sensorless Control With High Robustness to Voltage Errors

    DEFF Research Database (Denmark)

    Ni, Ronggang; Xu, Dianguo; Blaabjerg, Frede

    2017-01-01

    Rotor position estimated with high-frequency (HF) voltage injection methods can be distorted by voltage errors due to inverter nonlinearities, motor resistance, and rotational voltage drops, etc. This paper proposes an improved HF square-wave voltage injection algorithm, which is robust to voltage errors without any compensation and meanwhile has less fluctuation in the position estimation error. The average position estimation error is investigated based on the analysis of phase harmonic inductances, and deduced in the form of the phase shift of the second-order harmonic inductances to derive its relationship with the magnetic field distortion. Position estimation errors caused by higher order harmonic inductances and voltage harmonics generated by the SVPWM are also discussed. Both simulations and experiments are carried out based on a commercial PMSM to verify the superiority of the proposed method.

  18. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Science.gov (United States)

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
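
    The RSS smoothing and self-positioning steps lend themselves to a compact sketch. The actual RSO algorithm, multi-sensor spatial diversity, and robot interface are beyond this illustration, and the radio model below is an assumed log-distance toy model:

    ```python
    import numpy as np

    def ema(samples, alpha=0.2):
        """Exponential moving average filter over raw RSS samples."""
        out = [samples[0]]
        for s in samples[1:]:
            out.append(alpha * s + (1 - alpha) * out[-1])
        return np.array(out)

    def rss(node_xy, relay_xy, rng):
        """Toy log-distance RSS model with noise (dBm); purely illustrative."""
        d = np.linalg.norm(node_xy - relay_xy) + 1e-6
        return -40.0 - 20.0 * np.log10(d) + rng.normal(0, 2.0)

    rng = np.random.default_rng(0)
    server, client = np.array([0.0, 0.0]), np.array([10.0, 0.0])
    relay = np.array([2.0, 3.0])
    step = 0.2

    for _ in range(100):
        # Objective: balance the two links by maximizing the weaker (minimum) RSS.
        def weaker_rss(p):
            s = np.mean([rss(server, p, rng) for _ in range(10)])
            c = np.mean([rss(client, p, rng) for _ in range(10)])
            return min(s, c)
        # Stochastic finite-difference gradient ascent on the relay position.
        grad = np.array([(weaker_rss(relay + step * e) - weaker_rss(relay - step * e))
                         / (2 * step) for e in np.eye(2)])
        relay = relay + 0.1 * grad

    print(relay)   # should drift towards the midpoint between server and client
    ```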

  19. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Directory of Open Access Journals (Sweden)

    Ramviyas Parasuraman

    2014-12-01

    Full Text Available The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.

  20. Hybrid meshes and domain decomposition for the modeling of oil reservoirs; Maillages hybrides et decomposition de domaine pour la modelisation des reservoirs petroliers

    Energy Technology Data Exchange (ETDEWEB)

    Gaiffe, St

    2000-03-23

    In this thesis, we are interested in the modeling of fluid flow through porous media with 2-D and 3-D unstructured meshes, and in the use of domain decomposition methods. The behavior of flow through porous media is strongly influenced by heterogeneities: either large-scale lithological discontinuities or quite localized phenomena such as fluid flow in the neighbourhood of wells. In these two typical cases, an accurate consideration of the singularities requires the use of adapted meshes. After having shown the limits of classic meshes we present the future prospects offered by hybrid and flexible meshes. Next, we consider the generalization possibilities of the numerical schemes traditionally used in reservoir simulation and we draw two available approaches: mixed finite elements and U-finite volumes. The investigated phenomena being also characterized by different time-scales, special treatments in terms of time discretization on various parts of the domain are required. We think that the combination of domain decomposition methods with operator splitting techniques may provide a promising approach to obtain high flexibility for local time-step management. Consequently, we develop a new numerical scheme for linear parabolic equations which allows greater flexibility in the management of local space and time steps. To conclude, a priori estimates and error estimates on the two variables of interest, namely the pressure and the velocity, are proposed. (author)

  1. A Robust Registration Algorithm for Point Clouds from UAV Images for Change Detection

    Science.gov (United States)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs
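
    For context, the classical ICP loop that the paper improves upon can be written compactly (rigid point-to-point ICP with an SVD-based transform update; this is the baseline procedure, not the image-based method proposed above):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, n_iter=50):
        """Point-to-point ICP: iteratively match nearest neighbours and update a
        rigid transform (R, t) aligning 'source' onto 'target'."""
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        for _ in range(n_iter):
            _, idx = tree.query(src)                       # nearest-neighbour matches
            matched = target[idx]
            mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
            U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                       # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total
    ```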

  2. A ROBUST REGISTRATION ALGORITHM FOR POINT CLOUDS FROM UAV IMAGES FOR CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    A. Al-Rawabdeh

    2016-06-01

    Full Text Available Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images

  3. PCBDDC: A Class of Robust Dual-Primal Methods in PETSc

    KAUST Repository

    Zampini, Stefano

    2016-01-01

    A class of preconditioners based on balancing domain decomposition by constraints methods is introduced in the Portable, Extensible Toolkit for Scientific Computation (PETSc). The algorithm and the underlying nonoverlapping domain decomposition framework are described with a specific focus on their current implementation in the library. Available user customizations are also presented, together with an experimental interface to the finite element tearing and interconnecting dual-primal methods within PETSc. Large-scale parallel numerical results are provided for the latest version of the code, which is able to tackle symmetric positive definite problems with highly heterogeneous distributions of the coefficients. Current limitations and future extensions of the preconditioner class are also discussed.
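
    A hedged petsc4py sketch of how a user would select the BDDC preconditioner described above. The matrix is assumed to be assembled in PETSc's unassembled MATIS (subdomain-wise) format, which BDDC requires; that assembly, and the hypothetical `A_is` and `b` objects, are not shown here.

    ```python
    from petsc4py import PETSc

    # 'A_is' is assumed to be a MATIS (unassembled, subdomain-wise) matrix and
    # 'b' a compatible right-hand-side vector, both set up elsewhere.
    def solve_with_bddc(A_is, b):
        ksp = PETSc.KSP().create(PETSc.COMM_WORLD)
        ksp.setOperators(A_is)
        ksp.setType(PETSc.KSP.Type.CG)   # SPD problems, as in the paper's tests
        pc = ksp.getPC()
        pc.setType("bddc")               # balancing domain decomposition by constraints
        ksp.setFromOptions()             # allow command-line overrides of BDDC options
        x = b.duplicate()
        ksp.solve(b, x)
        return x
    ```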

  4. PCBDDC: A Class of Robust Dual-Primal Methods in PETSc

    KAUST Repository

    Zampini, Stefano

    2016-10-27

    A class of preconditioners based on balancing domain decomposition by constraints methods is introduced in the Portable, Extensible Toolkit for Scientific Computation (PETSc). The algorithm and the underlying nonoverlapping domain decomposition framework are described with a specific focus on their current implementation in the library. Available user customizations are also presented, together with an experimental interface to the finite element tearing and interconnecting dual-primal methods within PETSc. Large-scale parallel numerical results are provided for the latest version of the code, which is able to tackle symmetric positive definite problems with highly heterogeneous distributions of the coefficients. Current limitations and future extensions of the preconditioner class are also discussed.

  5. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    Science.gov (United States)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  6. Robust species taxonomy assignment algorithm for 16S rRNA NGS reads: application to oral carcinoma samples

    Directory of Open Access Journals (Sweden)

    Nezar Noor Al-Hebshi

    2015-09-01

    Full Text Available Background: Usefulness of next-generation sequencing (NGS) in assessing bacteria associated with oral squamous cell carcinoma (OSCC) has been undermined by inability to classify reads to the species level. Objective: The purpose of this study was to develop a robust algorithm for species-level classification of NGS reads from oral samples and to pilot test it for profiling bacteria within OSCC tissues. Methods: Bacterial 16S V1-V3 libraries were prepared from three OSCC DNA samples and sequenced using 454's FLX chemistry. High-quality, well-aligned, and non-chimeric reads ≥350 bp were classified using a novel, multi-stage algorithm that involves matching reads to reference sequences in revised versions of the Human Oral Microbiome Database (HOMD), HOMD extended (HOMDEXT), and Greengene Gold (GGG) at alignment coverage and percentage identity ≥98%, followed by assignment to species level based on top hit reference sequences. Priority was given to hits in HOMD, then HOMDEXT and finally GGG. Unmatched reads were subject to operational taxonomic unit analysis. Results: Nearly 92.8% of the reads were matched to updated-HOMD 13.2, 1.83% to trusted-HOMDEXT, and 1.36% to modified-GGG. Of all matched reads, 99.6% were classified to species level. A total of 228 species-level taxa were identified, representing 11 phyla; the most abundant were Proteobacteria, Bacteroidetes, Firmicutes, Fusobacteria, and Actinobacteria. Thirty-five species-level taxa were detected in all samples. On average, Prevotella oris, Neisseria flava, Neisseria flavescens/subflava, Fusobacterium nucleatum ss polymorphum, Aggregatibacter segnis, Streptococcus mitis, and Fusobacterium periodontium were the most abundant. Bacteroides fragilis, a species rarely isolated from the oral cavity, was detected in two samples. Conclusion: This multi-stage algorithm maximizes the fraction of reads classified to the species level while ensuring reliable classification by giving priority to the
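
    The priority logic of the multi-stage assignment can be paraphrased in a few lines of Python (the hit structure is a placeholder; alignment itself, OTU analysis, and quality filtering are not shown):

    ```python
    # Each read has candidate hits per database: (species, percent_identity, coverage).
    # Priority order follows the paper: HOMD first, then HOMDEXT, then GGG.
    PRIORITY = ["HOMD", "HOMDEXT", "GGG"]
    MIN_IDENTITY = 98.0
    MIN_COVERAGE = 98.0

    def assign_species(hits_by_db):
        """hits_by_db: dict mapping database name -> list of (species, identity, coverage)."""
        for db in PRIORITY:
            qualified = [h for h in hits_by_db.get(db, [])
                         if h[1] >= MIN_IDENTITY and h[2] >= MIN_COVERAGE]
            if qualified:
                # Assign the species of the top hit within the first database
                # that yields a qualifying match.
                return db, max(qualified, key=lambda h: h[1])[0]
        return None, None        # unmatched reads go to OTU analysis instead

    db, species = assign_species({
        "HOMD": [("Streptococcus mitis", 99.1, 100.0)],
        "GGG": [("Neisseria flava", 98.5, 99.0)],
    })
    ```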

  7. Domain decomposition and CMFD acceleration applied to discrete-ordinate methods for the solution of the neutron transport equation in XYZ geometries

    International Nuclear Information System (INIS)

    Masiello, Emiliano; Martin, Brunella; Do, Jean-Michel

    2011-01-01

    A new development for the IDT solver is presented for large reactor core applications in XYZ geometries. The multigroup discrete-ordinate neutron transport equation is solved using a Domain-Decomposition (DD) method coupled with the Coarse-Mesh Finite Differences (CMFD). The latter is used for accelerating the DD convergence rate. In particular, the external power iterations are preconditioned to stabilize the oscillatory behavior of the DD iterative process. A set of critical 2-D and 3-D numerical tests on a single processor is presented for the analysis of the performance of the method. The results show that the application of the CMFD to the DD can be a good candidate for large 3D full-core parallel applications. (author)

  8. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime

    Science.gov (United States)

    Lorin, E.; Yang, X.; Antoine, X.

    2016-06-01

    The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, where the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to produce good accuracy, yet direct methods (e.g. finite difference) are too computationally expensive. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be used directly with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation into Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove convergence and provide numerical evidence of the efficiency and accuracy of these methods.

  9. A numerical method for the solution of three-dimensional incompressible viscous flow using the boundary-fitted curvilinear coordinate transformation and domain decomposition technique

    International Nuclear Information System (INIS)

    Umegaki, Kikuo; Miki, Kazuyoshi

    1990-01-01

    A numerical method is developed to solve three-dimensional incompressible viscous flow in complicated geometry using curvilinear coordinate transformation and a domain decomposition technique. In this approach, a complicated flow domain is decomposed into several subdomains, each of which has an overlapping region with neighboring subdomains. Curvilinear coordinates are numerically generated in each subdomain using the boundary-fitted coordinate transformation technique. A modified SMAC scheme is developed to solve the Navier-Stokes equations, in which the convective terms are discretized by the QUICK method. A fully vectorized computer program is developed on the basis of the proposed method. The program is applied to flow analysis in semicircularly curved, 90° elbow, and T-shaped branched pipes. Computational time with the vector processor of the HITAC S-810/20 supercomputer system is reduced to 1/10-1/20 of that with a scalar processor. (author)
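
    The overlapping-subdomain exchange at the heart of such approaches can be illustrated on a much simpler model problem. The sketch below is an alternating Schwarz iteration for a 1D Poisson equation, not the SMAC/QUICK flow solver of the paper; the grid size, overlap width, and number of sweeps are arbitrary choices.

```python
import numpy as np

# Toy alternating Schwarz iteration for -u'' = 1 on (0,1), u(0) = u(1) = 0,
# with two overlapping subdomains. Illustrative only; not the paper's flow solver.
n = 101                        # global grid points
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)

left = slice(0, 60)            # subdomain 1: indices 0..59
right = slice(40, n)           # subdomain 2: indices 40..100 (overlap 40..59)

def solve_subdomain(u, sl):
    """Direct solve of the tridiagonal Dirichlet problem on subdomain sl, using the
    current values of u at the subdomain ends as pseudo-boundary data."""
    m = sl.stop - sl.start - 2                    # interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[sl.start + 1: sl.stop - 1].copy()
    b[0] += u[sl.start] / h**2                    # pseudo-boundary values from the overlap
    b[-1] += u[sl.stop - 1] / h**2
    u[sl.start + 1: sl.stop - 1] = np.linalg.solve(A, b)

for sweep in range(30):                           # Schwarz sweeps
    solve_subdomain(u, left)
    solve_subdomain(u, right)

x = np.linspace(0, 1, n)
print("max error:", np.abs(u - 0.5 * x * (1 - x)).max())   # exact solution is x(1-x)/2
```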

  10. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems, both single-input single-output (SISO) and multi-input multi-output (MIMO), is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have a significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.

  11. A robust Hough transform algorithm for determining the radiation centers of circular and rectangular fields with subpixel accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Du Weiliang; Yang, James [Department of Radiation Physics, University of Texas M D Anderson Cancer Center, 1515 Holcombe Blvd, Unit 94, Houston, TX 77030 (United States)], E-mail: wdu@mdanderson.org

    2009-02-07

    Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error was translated to 0.02 mm, which was much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation.

  12. Performance and robustness of optimal fractional fuzzy PID controllers for pitch control of a wind turbine using chaotic optimization algorithms.

    Science.gov (United States)

    Asgharnia, Amirhossein; Shahnazi, Reza; Jamali, Ali

    2018-05-11

    The most studied controller for pitch control of wind turbines is the proportional-integral-derivative (PID) controller. However, due to uncertainties in wind turbine modeling and wind speed profiles, the need for more effective controllers is inevitable. On the other hand, the parameters of the PID controller are usually unknown and must be selected by the designer, which is neither a straightforward task nor guaranteed to be optimal. To cope with these drawbacks, in this paper, two advanced controllers called fuzzy PID (FPID) and fractional-order fuzzy PID (FOFPID) are proposed to improve the pitch control performance. Meanwhile, chaotic evolutionary optimization methods are used to find the parameters of the controllers. Using evolutionary optimization methods not only gives us the unknown parameters of the controllers but also guarantees optimality with respect to the chosen objective function. To improve the performance of the evolutionary algorithms, chaotic maps are used. All the optimization procedures are applied to the two-mass model of a 5-MW wind turbine. The proposed optimal controllers are validated using the FAST simulator developed by NREL. Simulation results demonstrate that the FOFPID controller can achieve better performance and robustness while guaranteeing less fatigue damage at different wind speeds in comparison to the FPID, fractional-order PID (FOPID) and gain-scheduling PID (GSPID) controllers. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Control algorithm for multiscale flow simulations of water

    DEFF Research Database (Denmark)

    Kotsalis, E. M.; Walther, Jens Honore; Kaxiras, E.

    2009-01-01

    We present a multiscale algorithm to couple atomistic water models with continuum incompressible flow simulations via a Schwarz domain decomposition approach. The coupling introduces an inhomogeneity in the description of the atomistic domain and prevents the use of periodic boundary conditions...

  14. Contribution to the development of methods for nuclear reactor core calculations with APOLLO3 code: domain decomposition in transport theory with nonlinear diffusion acceleration for 2D and 3D geometries

    International Nuclear Information System (INIS)

    Lenain, Roland

    2015-01-01

    This thesis is devoted to the implementation of a domain decomposition method applied to the neutron transport equation. The objective of this work is to access high-fidelity deterministic solutions to properly handle heterogeneities located in nuclear reactor cores, for problem sizes ranging from color-sets of assemblies to large reactor core configurations in 2D and 3D. The innovative algorithm developed during the thesis aims to optimize the use of parallelism and memory. The approach also aims to minimize the influence of the parallel implementation on the performance. These goals match the needs of the APOLLO3 project, developed at CEA and supported by EDF and AREVA, which must be a portable code (no optimization for a specific architecture) in order to achieve best-estimate modeling with resources ranging from personal computers to compute clusters available for engineering analyses. The proposed algorithm is a Parallel Multigroup-Block Jacobi one. Each sub-domain is considered as a multi-group fixed-source problem with volume sources (fission) and surface sources (interface flux between the sub-domains). The multi-group problem is solved in each sub-domain and a single communication of the interface flux is required at each power iteration. The spectral radius of the resolution algorithm is made similar to that of a classical resolution algorithm with a nonlinear diffusion acceleration method: the well-known Coarse Mesh Finite Difference. In this way an ideal scalability is achievable when the calculation is parallelized. The memory organization, taking advantage of shared-memory parallelism, optimizes the resources by avoiding redundant copies of the data shared between the sub-domains. Distributed-memory architectures are made available by a hybrid parallel method that combines both paradigms of shared-memory parallelism and distributed-memory parallelism. For large problems, these architectures provide a greater number of processors and the amount of

  15. Scalable and Robust BDDC Preconditioners for Reservoir and Electromagnetics Modeling

    KAUST Repository

    Zampini, S.; Widlund, O.B.; Keyes, David E.

    2015-01-01

    The purpose of the study is to show the effectiveness of recent algorithmic advances in Balancing Domain Decomposition by Constraints (BDDC) preconditioners for the solution of elliptic PDEs with highly heterogeneous coefficients, discretized by means of the finite element method. Applications to large linear systems generated by div- and curl-conforming finite element discretizations commonly arising in the contexts of modelling reservoirs and electromagnetics will be presented.

  16. Scalable and Robust BDDC Preconditioners for Reservoir and Electromagnetics Modeling

    KAUST Repository

    Zampini, S.

    2015-09-13

    The purpose of the study is to show the effectiveness of recent algorithmic advances in Balancing Domain Decomposition by Constraints (BDDC) preconditioners for the solution of elliptic PDEs with highly heterogeneous coefficients, discretized by means of the finite element method. Applications to large linear systems generated by div- and curl-conforming finite element discretizations commonly arising in the contexts of modelling reservoirs and electromagnetics will be presented.

  17. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  18. PERFORMANCE ANALYSIS BETWEEN EXPLICIT SCHEDULING AND IMPLICIT SCHEDULING OF PARALLEL ARRAY-BASED DOMAIN DECOMPOSITION USING OPENMP

    Directory of Open Access Journals (Sweden)

    MOHAMMED FAIZ ABOALMAALY

    2014-10-01

    Full Text Available With the continuous revolution of multicore architectures, several parallel programming platforms have been introduced in order to pave the way for fast and efficient development of parallel algorithms. Broadly, parallel computing can be carried out in two forms: Data-Level Parallelism (DLP) or Task-Level Parallelism (TLP). The former is achieved by distributing data among the available processing elements, while the latter is based on executing independent tasks concurrently. Most parallel programming platforms have built-in techniques to distribute the data among processors; these techniques are technically known as automatic distribution (scheduling). However, due to their wide range of purposes, variation of data types, amount of distributed data, possibility of extra computational overhead and other hardware-dependent factors, manual distribution can achieve better performance than automatic distribution. In this paper, this assumption is investigated by conducting a comparison between automatic distribution and our newly proposed manual distribution of data among threads in parallel. Empirical results for matrix addition and matrix multiplication show a considerable performance gain when manual distribution is applied instead of automatic distribution.
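
    As a language-neutral analogy (Python threads rather than OpenMP), the sketch below contrasts an "automatic" distribution, where the pool is handed one tiny task per row, with a "manual" distribution into a few large, balanced blocks chosen by the programmer; the arrays and sizes are illustrative only.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np, os

# Illustrative analogy to automatic vs. manual work distribution (not OpenMP itself).
A = np.random.rand(2000, 2000)
B = np.random.rand(2000, 2000)

def add_rows(block):
    i0, i1 = block
    return A[i0:i1] + B[i0:i1]

def manual_chunks(n_rows, n_workers):
    # Explicit, evenly sized row blocks chosen by the programmer.
    step = (n_rows + n_workers - 1) // n_workers
    return [(i, min(i + step, n_rows)) for i in range(0, n_rows, step)]

workers = os.cpu_count() or 4
with ThreadPoolExecutor(max_workers=workers) as ex:
    # "Automatic": one tiny task per row; scheduling is left entirely to the pool.
    auto = list(ex.map(add_rows, [(i, i + 1) for i in range(A.shape[0])]))
    # "Manual": a few large, balanced blocks decided up front (usually less overhead).
    manual = list(ex.map(add_rows, manual_chunks(A.shape[0], workers)))

assert np.allclose(np.vstack(auto), A + B)
assert np.allclose(np.vstack(manual), A + B)
```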

  19. Design of Robust Supertwisting Algorithm Based Second-Order Sliding Mode Controller for Nonlinear Systems with Both Matched and Unmatched Uncertainty

    Directory of Open Access Journals (Sweden)

    Marwa Jouini

    2017-01-01

    Full Text Available This paper proposes a robust supertwisting algorithm (STA) design for nonlinear systems where both matched and unmatched uncertainties are considered. The main contribution is a novel STA structure that ensures the desired performance of the uncertain nonlinear system. The modified algorithm is formed of double closed-loop feedback, in which two linear terms are added to the classical STA. In addition, an integral sliding mode switching surface is proposed to establish the attractiveness and reachability of the sliding mode. Sufficient conditions are derived to guarantee exact differentiation stability in finite time based on Lyapunov function theory. Finally, a comparative study for a variable-length pendulum system illustrates the robustness and the effectiveness of the proposed approach compared to other STA schemes.
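
    One possible reading of "two linear terms added to the classical STA" is sketched below as a toy simulation: the square-root and integral terms of the classical supertwisting law are augmented with proportional terms in both loops. The plant, disturbance, and gains are assumptions made for illustration, not values from the paper.

```python
import numpy as np

# Minimal simulation sketch of a supertwisting-type controller with two added linear
# terms (one possible reading of the abstract; gains, plant and disturbance are assumptions).
dt, T = 1e-3, 5.0
k1, k2, k3, k4 = 1.5, 1.1, 2.0, 2.0          # classical STA gains + added linear gains
s, v = 1.0, 0.0                               # sliding variable and integral state
hist = []

for step in range(int(T / dt)):
    t = step * dt
    d = 0.4 * np.sin(2.0 * t)                # bounded matched disturbance (assumed)
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) - k3 * s + v
    v += dt * (-k2 * np.sign(s) - k4 * s)    # integral (discontinuous) part
    s += dt * (u + d)                        # toy plant: s_dot = u + d
    hist.append(s)

print("|s| after 5 s:", abs(hist[-1]))       # should be close to zero
```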

  20. High-speed parallel solution of the neutron diffusion equation with the hierarchical domain decomposition boundary element method incorporating parallel communications

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Chiba, Gou

    2000-01-01

    A hierarchical domain decomposition boundary element method (HDD-BEM) for solving the multiregion neutron diffusion equation (NDE) has been fully parallelized, both for numerical computations and for data communications, to accomplish high parallel efficiency on distributed-memory message-passing parallel computers. Data exchanges between node processors that are repeated during the iteration processes of HDD-BEM are implemented without any intervention of the host processor that was used to supervise parallel processing in the conventional parallelized HDD-BEM (P-HDD-BEM). Thus, the parallel processing can be executed with only cooperative operations of node processors. The communication overhead was the dominant time-consuming part in the conventional P-HDD-BEM, and the parallelization efficiency decreased steeply with the increase of the number of processors. With the parallel data communication, the efficiency is affected only by the number of boundary elements assigned to decomposed subregions, and the communication overhead can be drastically reduced. This feature can be particularly advantageous in the analysis of three-dimensional problems where a large number of processors are required. The proposed P-HDD-BEM offers a promising solution to the deterioration problem of parallel efficiency and opens a new path to parallel computations of NDEs on distributed-memory message-passing parallel computers. (author)

  1. A pseudo-spectral method for the simulation of poro-elastic seismic wave propagation in 2D polar coordinates using domain decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Sidler, Rolf, E-mail: rsidler@gmail.com [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland); Carcione, José M. [Istituto Nazionale di Oceanografia e di Geofisica Sperimentale (OGS), Borgo Grotta Gigante 42c, 34010 Sgonico, Trieste (Italy); Holliger, Klaus [Center for Research of the Terrestrial Environment, University of Lausanne, CH-1015 Lausanne (Switzerland)

    2013-02-15

    We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as of yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge–Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid–solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.

  2. Are decisions using cost-utility analyses robust to choice of SF-36/SF-12 preference-based algorithm?

    Directory of Open Access Journals (Sweden)

    Walton Surrey M

    2005-03-01

    Full Text Available Abstract Background Cost utility analysis (CUA) using SF-36/SF-12 data has been facilitated by the development of several preference-based algorithms. The purpose of this study was to illustrate how decision-making could be affected by the choice of preference-based algorithms for the SF-36 and SF-12, and provide some guidance on selecting an appropriate algorithm. Methods Two sets of data were used: (1) a clinical trial of adult asthma patients; and (2) a longitudinal study of post-stroke patients. Incremental costs were assumed to be $2000 per year over standard treatment, and QALY gains realized over a 1-year period. Ten published algorithms were identified, denoted by first author: Brazier (SF-36), Brazier (SF-12), Shmueli, Fryback, Lundberg, Nichol, Franks (3 algorithms), and Lawrence. Incremental cost-utility ratios (ICURs) for each algorithm, stated in dollars per quality-adjusted life year ($/QALY), were ranked and compared between datasets. Results In the asthma patients, estimated ICURs ranged from Lawrence's SF-12 algorithm at $30,769/QALY (95% CI: 26,316 to 36,697) to Brazier's SF-36 algorithm at $63,492/QALY (95% CI: 48,780 to 83,333). ICURs for the stroke cohort varied slightly more dramatically. The MEPS-based algorithm by Franks et al. provided the lowest ICUR at $27,972/QALY (95% CI: 20,942 to 41,667). The Fryback and Shmueli algorithms provided ICURs that were greater than $50,000/QALY and did not have confidence intervals that overlapped with most of the other algorithms. The ICUR-based ranking of algorithms was strongly correlated between the asthma and stroke datasets (r = 0.60). Conclusion SF-36/SF-12 preference-based algorithms produced a wide range of ICURs that could potentially lead to different reimbursement decisions. Brazier's SF-36 and SF-12 algorithms have a strong methodological and theoretical basis and tended to generate relatively higher ICUR estimates, considerations that support a preference for these algorithms over the
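
    The ratio itself is simple arithmetic: ICUR = incremental cost / incremental QALYs. In the sketch below the $2000 incremental cost comes from the abstract, while the utility gains are merely back-calculated from the ICURs quoted there, to show how the choice of scoring algorithm moves the ratio.

```python
# ICUR = incremental cost / incremental QALYs. The $2000/year incremental cost and 1-year
# horizon follow the abstract; the utility gains below are back-calculated from the quoted
# ICURs purely to illustrate how the preference-based algorithm choice shifts the result.
incremental_cost = 2000.0

utility_gain = {                    # illustrative mean utility improvement per algorithm
    "Lawrence (SF-12)": 0.0650,
    "Brazier (SF-36)": 0.0315,
}

for algo, dq in utility_gain.items():
    icur = incremental_cost / dq    # dollars per quality-adjusted life year
    print(f"{algo:18s}  ICUR = ${icur:,.0f}/QALY")
```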

  3. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  4. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper, a false-alarm-aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and the performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.

  5. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Directory of Open Access Journals (Sweden)

    Hengkai Guo

    Full Text Available Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods.

  6. Solving advanced multi-objective robust designs by means of multiple objective evolutionary algorithms (MOEA): A reliability application

    Energy Technology Data Exchange (ETDEWEB)

    Salazar A, Daniel E. [Division de Computacion Evolutiva (CEANI), Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Universidad de Las Palmas de Gran Canaria. Canary Islands (Spain)]. E-mail: danielsalazaraponte@gmail.com; Rocco S, Claudio M. [Universidad Central de Venezuela, Facultad de Ingenieria, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve

    2007-06-15

    This paper extends the approach proposed by the second author in [Rocco et al. Robust design using a hybrid-cellular-evolutionary and interval-arithmetic approach: a reliability application. In: Tarantola S, Saltelli A, editors. SAMO 2001: Methodological advances and useful applications of sensitivity analysis. Reliab Eng Syst Saf 2003;79(2):149-59 [special issue

  7. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    Science.gov (United States)

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that takes into account any non-normality in the data, and by integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with the miRNAs selected by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.

  8. Robustness and precision of an automatic marker detection algorithm for online prostate daily targeting using a standard V-EPID.

    Science.gov (United States)

    Aubin, S; Beaulieu, L; Pouliot, S; Pouliot, J; Roy, R; Girouard, L M; Martel-Brisson, N; Vigneault, E; Laverdière, J

    2003-07-01

    An algorithm for the daily localization of the prostate using implanted markers and a standard video-based electronic portal imaging device (V-EPID) has been tested. Prior to planning, three gold markers were implanted in the prostate of seven patients. The clinical images were acquired with a BeamViewPlus 2.1 V-EPID for each field during the normal course of radiotherapy treatment and are used off-line to determine the ability of the automatic marker detection algorithm to adequately and consistently detect the markers. Clinical images were obtained with various dose levels ranging from 2.5 to 75 MU. The algorithm is based on marker attenuation characterization in the portal image and spatial distribution. A total of 1182 clinical images were taken. The results show an average efficiency of 93% for markers detected individually and 85% for the group of markers. This algorithm accomplishes the detection and validation in 0.20-0.40 s. When the center of mass of the group of implanted markers is used, all displacements can be corrected to within 1.0 mm in 84% of the cases and within 1.5 mm in 97% of cases. The standard video-based EPID tested provides excellent marker detection capability even with low dose levels. The V-EPID can be used successfully with radiopaque markers and the automatic detection algorithm to track and correct the daily setup deviations due to organ motion.

  9. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  10. An O(n log n) Version of the Averbakh-Berman Algorithm for the Robust Median of a Tree

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Georgiadis, Loukas; Katriel, Irit

    2008-01-01

    We show that the minmax regret median of a tree can be found in O(n log n) time. This is obtained by a modification of Averbakh and Berman's O(n log² n)-time algorithm: We design a dynamic solution to their bottleneck subproblem of finding the middle of every root-leaf path in a tree....

  11. Trajectory Tracking of a Tri-Rotor Aerial Vehicle Using an MRAC-Based Robust Hybrid Control Algorithm

    Directory of Open Access Journals (Sweden)

    Zain Anwar Ali

    2017-01-01

    Full Text Available In this paper, a novel Model Reference Adaptive Control (MRAC)-based hybrid control algorithm is presented for the trajectory tracking of a tri-rotor Unmanned Aerial Vehicle (UAV). The mathematical model of the tri-rotor is based on the Newton–Euler formula, whereas the MRAC-based hybrid controller consists of Fuzzy Proportional Integral Derivative (F-PID) and Fuzzy Proportional Derivative (F-PD) controllers. MRAC is used as the main controller for the dynamics, while the parameters of the adaptive controller are fine-tuned by the F-PD controller for the altitude control subsystem and the F-PID controller for the attitude control subsystem of the UAV. The stability of the system is ensured and proven by Lyapunov stability analysis. The proposed control algorithm is tested and verified using computer simulations for the trajectory tracking of the desired path as an input. The effectiveness of our proposed algorithm is compared with F-PID and the Fuzzy Logic Controller (FLC). Our proposed controller exhibits much smaller steady-state error and quick error convergence in the presence of disturbances, noise, and model uncertainties.

  12. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-05-02

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSAs) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength (DW) PSAs is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis herein presented may be used for interferometric contouring of discontinuous industrial objects. Also, DW-PSAs may be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To this date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, their signal-to-noise ratio, or their detuning and harmonic robustness have been given. Here for the first time a fully general procedure for designing DW-PSAs (or triple-wavelength PSAs) with desired spectrum, signal-to-noise ratio and detuning robustness is given. We finally generalize the DW-PSA approach to temporal PSAs with a higher number of wavelengths.
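
    For orientation only, the sketch below shows the familiar single-wavelength four-step PSA (a standard textbook construction with π/2 phase steps), not the dual-wavelength formulas synthesized in the paper.

```python
import numpy as np

# Standard single-wavelength four-step phase-shifting algorithm, shown only as background
# to the dual-wavelength synthesis discussed above. Phase steps of pi/2 are assumed.
rng = np.random.default_rng(0)
phi_true = rng.uniform(-np.pi, np.pi, size=(64, 64))     # synthetic phase map
a, b = 1.0, 0.7                                           # background and modulation

frames = [a + b * np.cos(phi_true + n * np.pi / 2) for n in range(4)]
I0, I1, I2, I3 = frames

phi_est = np.arctan2(I3 - I1, I0 - I2)                    # wrapped phase estimate
wrap_err = np.angle(np.exp(1j * (phi_est - phi_true)))    # compare modulo 2*pi
print("max |error|:", np.max(np.abs(wrap_err)))           # essentially machine precision
```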

  13. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  14. An Enhanced Data Integrity Model In Mobile Cloud Environment Using Digital Signature Algorithm And Robust Reversible Watermarking

    Directory of Open Access Journals (Sweden)

    Boukari Souley

    2017-10-01

    Full Text Available The use of handheld devices such as smartphones to access multimedia content in the cloud is growing with the rise of information technology. Mobile cloud computing is increasingly used today because it allows users to access a variety of resources in the cloud, such as images, video, audio, and software applications, with minimal use of built-in resources such as storage and memory, by relying on those available in the cloud. The major challenge faced by mobile cloud computing is security. Watermarking and digital signatures are techniques used to provide security and authentication for user data in the cloud. Watermarking is a technique used to embed digital data within multimedia content, such as an image, video, or audio, in order to prevent unauthorized access to that content by intruders, whereas a digital signature is used to identify and verify user data when accessed. In this work we implemented a digital signature and robust reversible image watermarking in order to enhance mobile cloud computing security and the integrity of data by providing a double authentication technique. The results obtained show the effectiveness of combining the two techniques, robust reversible watermarking and digital signature, in providing strong authentication that ensures data integrity and allows the original watermarked content to be extracted without changes.
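
    A toy sketch of the two ingredients, an integrity tag over the image bytes and an embedded watermark, is given below. It uses an HMAC as a stand-in for a full digital-signature algorithm and a plain least-significant-bit watermark, so it is neither the paper's robust reversible watermarking nor its DSA implementation.

```python
import hashlib, hmac
import numpy as np

# Toy illustration only: an integrity tag plus a least-significant-bit watermark.
# NOT the paper's robust reversible watermarking; an HMAC stands in for a digital signature.
key = b"shared-secret-key"                       # assumption: symmetric key for the demo
image = np.random.randint(0, 256, (128, 128), dtype=np.uint8)

# 1. Embed a short watermark in the least significant bits of the first pixels.
mark = np.frombuffer(b"owner-id", dtype=np.uint8)
bits = np.unpackbits(mark)
flat = image.flatten()                           # flatten() returns a writable copy
flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
watermarked = flat.reshape(image.shape)

# 2. Produce an integrity tag over the watermarked content.
tag = hmac.new(key, watermarked.tobytes(), hashlib.sha256).hexdigest()

# 3. Verification: recompute the tag and extract the embedded watermark.
ok = hmac.compare_digest(tag, hmac.new(key, watermarked.tobytes(), hashlib.sha256).hexdigest())
recovered = np.packbits(watermarked.flatten()[:bits.size] & 1).tobytes()
print(ok, recovered)                             # True b'owner-id'
```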

  15. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  16. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n − 2 is the solution to the above ...

  17. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    Science.gov (United States)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed, and severe occlusions happen frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is therefore a challenging object-tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of the skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  18. Experimental implementation of a robust damped-oscillation control algorithm on a full-sized, two-degree-of-freedom, AC induction motor-driven crane

    International Nuclear Information System (INIS)

    Kress, R.L.; Jansen, J.F.; Noakes, M.W.

    1994-01-01

    When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purpose of this paper is to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom, industrial crane; to describe the experimental evaluation of the controller, including robustness to payload length changes; to explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and to provide a theoretical description of the controller.
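
    The abstract does not spell out the control law, so the sketch below shows a generic, widely used damped-oscillation technique, a zero-vibration (ZV) input shaper whose impulse times depend on the payload cable length; it illustrates the general idea only and should not be read as the ORNL algorithm.

```python
import numpy as np

# Zero-vibration (ZV) input-shaper sketch for a suspended payload. This is a generic,
# well-known damped-oscillation technique shown for illustration; the abstract does not
# state that the ORNL controller is an input shaper.
g = 9.81

def zv_shaper(L, zeta=0.0):
    """Return impulse amplitudes and times for a pendulum with cable length L [m]."""
    wn = np.sqrt(g / L)                          # natural frequency of the payload swing
    wd = wn * np.sqrt(1.0 - zeta**2)             # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    A = np.array([1.0, K]) / (1.0 + K)           # impulse amplitudes (sum to 1)
    t = np.array([0.0, np.pi / wd])              # second impulse half a swing period later
    return A, t

A, t = zv_shaper(L=6.0)                          # 6 m cable length, hypothetical value
print("amplitudes:", A, "times [s]:", t)
# Convolving the commanded velocity profile with these impulses moves the trolley so the
# swing excited by the first impulse is cancelled by the second; the times change with L.
```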

  19. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    Science.gov (United States)

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing the evolution of cells. The main idea for tracking is the use of two distance functions: the first computed from the cells in the initial frame and the second from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame and the second forces them to stay close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. This approach can be generalized to 3D + time video analysis, where spatio-temporal tubes are 4D objects.

  20. A robust multi-frequency mixing algorithm for suppression of rivet signal in GMR inspection of riveted structures

    Science.gov (United States)

    Safdernejad, Morteza S.; Karpenko, Oleksii; Ye, Chaofeng; Udpa, Lalita; Udpa, Satish

    2016-02-01

    The advent of Giant Magneto-Resistive (GMR) technology permits development of novel highly sensitive array probes for Eddy Current (EC) inspection of multi-layer riveted structures. Multi-frequency GMR measurements with different EC penetration depths show promise for detection of bottom layer notches at fastener sites. However, the distortion of the induced magnetic field due to flaws is dominated by the strong fastener signal, which makes defect detection and classification a challenging problem. This issue is more pronounced for ferromagnetic fasteners that concentrate most of the magnetic flux. In the present work, a novel multi-frequency mixing algorithm is proposed to suppress the rivet signal response and enhance the defect detection capability of the GMR array probe. The algorithm is baseline-free and does not require any assumptions about the sample geometry being inspected. Fastener signal suppression is based upon the random sample consensus (RANSAC) method, which iteratively estimates parameters of a mathematical model from a set of observed data with outliers. Bottom layer defects at fastener sites are simulated as EDM notches of different lengths. Performance of the proposed multi-frequency mixing approach is evaluated on finite element data and experimental GMR measurements obtained with unidirectional planar current excitation. Initial results are promising, demonstrating the feasibility of the approach.
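
    A minimal RANSAC sketch of the mixing idea: estimate a linear coefficient relating two frequency channels from the (dominant) fastener response, treating defect-affected samples as outliers, then subtract the fitted contribution. The data, the one-parameter model y = a·x, and the thresholds are all illustrative assumptions.

```python
import numpy as np

# Minimal RANSAC sketch for estimating a linear mixing coefficient between two frequency
# channels so that the dominant fastener response cancels when one channel is subtracted
# from the other. Data, thresholds and the simple y = a*x model are illustrative only.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                    # channel 1 response (mostly rivet signal)
a_true = 0.8
y = a_true * x + 0.01 * rng.normal(size=n)
y[:25] += 0.5                             # a few samples perturbed by a defect (outliers)

best_a, best_inliers = None, -1
for _ in range(200):                      # random sample consensus iterations
    i = rng.integers(n)
    if abs(x[i]) < 1e-6:
        continue
    a = y[i] / x[i]                       # model fitted from a minimal (one-point) sample
    inliers = np.sum(np.abs(y - a * x) < 0.05)
    if inliers > best_inliers:
        best_a, best_inliers = a, inliers

mixed = y - best_a * x                    # rivet contribution suppressed, defect remains
print("estimated coefficient:", round(float(best_a), 3), "inliers:", best_inliers)
```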

  1. The Key Role of the Vector Optimization Algorithm and Robust Design Approach for the Design of Polygeneration Systems

    Directory of Open Access Journals (Sweden)

    Alfredo Gimelli

    2018-04-01

    Full Text Available In recent decades, growing concerns about global warming and climate change effects have led to specific directives, especially in Europe, promoting the use of primary energy-saving techniques and renewable energy systems. The increasingly stringent requirements for carbon dioxide reduction have led to a more widespread adoption of distributed energy systems. In particular, besides renewable energy systems for power generation, one of the most effective techniques used to face the energy-saving challenges has been the adoption of polygeneration plants for combined heating, cooling, and electricity generation. This technique offers the possibility to achieve a considerable enhancement in energy and cost savings as well as a simultaneous reduction of greenhouse gas emissions. However, the use of small-scale polygeneration systems does not ensure the achievement of mandatory, but sometimes conflicting, aims without the proper sizing and operation of the plant. This paper is focused on a methodology based on vector optimization algorithms and developed by the authors for the identification of optimal polygeneration plant solutions. To this aim, a specific calculation algorithm for the study of cogeneration systems has also been developed. This paper provides, after a detailed description of the proposed methodology, some specific applications to the study of combined heat and power (CHP) and organic Rankine cycle (ORC) plants, thus highlighting the potential of the proposed techniques and the main results achieved.

  2. A fast and robust bulk-loading algorithm for indexing very large digital elevation datasets II. Experimental results

    Science.gov (United States)

    Rodríguez, Félix R.; Barrena, Manuel

    2011-07-01

    The spatial indexing of, eventually, all the available topographic information of the Earth is a highly valuable tool for different geoscientific application domains. The Shuttle Radar Topography Mission (SRTM) collected and made available to the public one of the world's largest digital elevation models (DEMs). With the aim of providing easier and faster access to these data and improving their further analysis and processing, we have indexed the SRTM DEM by means of a spatial index based on the kd-tree data structure, called the Q-tree. This paper is the second in a two-part series and presents a thorough performance analysis to validate the efficiency of the Q-tree bulk-loading algorithm. We investigate performance by measuring elapsed time in different contexts, analyzing disk space usage, testing response time for typical queries, and validating the balance of the final index structure. In addition, the paper includes performance comparisons with Oracle 11g that help to understand the real cost of our proposal. Our tests show that the proposed algorithm outperforms Oracle 11g, using around 9% of the elapsed time, taking six times less storage with more than 96% page utilization, and providing faster response times to spatial queries issued on 4.5 million points. In addition, the behavior of the spatial index has been successfully tested on both an open GIS (VT Builder) and a visualizer tool derived from it.

  3. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk (A, C); the (N0 + 1)th disk is moved from A to C directly ...

  4. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    International Nuclear Information System (INIS)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo; Martinet, Philippe

    2008-01-01

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to let the robot pass through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity in the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which used to require ad hoc searching in other works. As a result, the end-effector tracking error, which damped least squares introduces through the damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots.
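
    A numpy sketch of a damped-least-squares step with one damping factor per singular value, shaped by a Gaussian of the singular value itself, is given below. The Gaussian form, the maximum damping factor, and the width are placeholder assumptions for the quantities the paper tunes with a genetic algorithm.

```python
import numpy as np

# Damped-least-squares inverse-kinematics step with one damping factor per singular value,
# shaped by a Gaussian of the singular value. lambda_max and sigma0 are placeholders for
# the quantities the paper tunes with a genetic algorithm.
def dls_step(J, dx, lambda_max=0.1, sigma0=0.05):
    U, S, Vt = np.linalg.svd(J, full_matrices=False)
    # Gaussian damping: strong near sigma = 0 (singularity), vanishing for large sigma.
    lambdas = lambda_max * np.exp(-(S / sigma0) ** 2)
    gains = S / (S**2 + lambdas**2)               # damped inverse of each singular value
    return Vt.T @ (gains * (U.T @ dx))

# Example: a 2-link planar arm close to a singular (outstretched) configuration.
l1 = l2 = 1.0
q = np.array([0.0, 1e-3])                         # nearly singular joint angles
J = np.array([[-l1*np.sin(q[0]) - l2*np.sin(q[0]+q[1]), -l2*np.sin(q[0]+q[1])],
              [ l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),  l2*np.cos(q[0]+q[1])]])
dq = dls_step(J, dx=np.array([0.0, 0.01]))
print("joint update:", dq)                        # stays bounded even near the singularity
```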

  5. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    Energy Technology Data Exchange (ETDEWEB)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo [Sungkyunkwan University, Suwon (Korea, Republic of); Martinet, Philippe [Blaise Pascal University, Clermont-Ferrand Cedex (France)

    2008-07-15

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to let the robot pass through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity in the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which used to require ad hoc searching in other works. As a result, the end-effector tracking error, which damped least squares introduces through the damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots.

  6. Robust information encryption diffractive-imaging-based scheme with special phase retrieval algorithm for a customized data container

    Science.gov (United States)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong; Zhou, Nanrun

    2018-06-01

    The diffractive-imaging-based encryption (DIBE) scheme has aroused wide interest due to its compact architecture and low requirements on experimental conditions. Nevertheless, the primary information can hardly be recovered exactly in real applications when speckle noise and potential occlusion affect the ciphertext. To deal with this issue, a customized data container (CDC) is introduced into DIBE and a new phase retrieval algorithm (PRA) for plaintext retrieval is proposed. The PRA, designed according to the peculiarity of the CDC, combines two key techniques from previous approaches, i.e., the input support constraint and median filtering. The proposed scheme can fully guarantee the reconstruction of the primary information despite heavy noise or occlusion, and its effectiveness and feasibility have been demonstrated with simulation results.

  7. Towards a Robust Solution of the Non-Linear Kinematics for the General Stewart Platform with Estimation of Distribution Algorithms

    Directory of Open Access Journals (Sweden)

    Eusebio Eduardo Hernández Martinez

    2013-01-01

    Full Text Available In robotics, solving the direct kinematics problem (DKP) for parallel robots is very often more difficult and time consuming than for their serial counterparts. The problem is stated as follows: given the joint variables, the Cartesian variables should be computed, namely the pose of the mobile platform. Most of the time, the DKP requires solving a non-linear system of equations. In addition, given that the system could be non-convex, Newton or Quasi-Newton (Dogleg) based solvers get trapped in local minima. The capacity of such solvers to find an adequate solution strongly depends on the starting point. A well-known problem is the selection of such a starting point, which requires a priori information about the neighbouring region of the solution. In order to circumvent this issue, this article proposes an efficient method to select and generate the starting point based on probabilistic learning. Experiments and discussion are presented to show the method's performance. The method successfully avoids getting trapped in local minima without the need for human intervention, which increases its robustness when compared with a single Dogleg approach. This proposal can be extended to other structures, to any non-linear system of equations, and of course, to non-linear optimization problems.

  8. On the Use of Evolutionary Algorithms to Improve the Robustness of Continuous Speech Recognition Systems in Adverse Conditions

    Directory of Open Access Journals (Sweden)

    Sid-Ahmed Selouani

    2003-07-01

    Full Text Available Limiting the decrease in performance due to acoustic environment changes remains a major challenge for continuous speech recognition (CSR) systems. We propose a novel approach which combines the Karhunen-Loève transform (KLT) in the mel-frequency domain with a genetic algorithm (GA) to enhance the data representing corrupted speech. The idea consists of projecting noisy speech parameters onto the space generated by the genetically optimized principal axes issued from the KLT. The enhanced parameters increase the recognition rate in highly interfering noise environments. The proposed hybrid technique, when included in the front-end of an HTK-based CSR system, outperforms the conventional recognition process in severe interfering car noise environments for a wide range of signal-to-noise ratios (SNRs) varying from 16 dB to −4 dB. We also show the effectiveness of the KLT-GA method in recognizing speech subject to telephone channel degradations.

  9. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  10. FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm

    Directory of Open Access Journals (Sweden)

    P. Hanappe

    2011-09-01

    Full Text Available We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations.

    The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Element than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60 % of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
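
    A schematic analogue of the "one task per pack of four columns" scheduling is sketched below with a Python thread pool, where vectorised numpy operations stand in for the SIMD instructions; the radiative kernel is a placeholder, not the FAMOUS radiation code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Schematic analogue of the column scheduling described above: a thread pool works through
# a queue of tasks, each covering a pack of four air columns processed with vectorised
# (SIMD-like) numpy operations. radiative_kernel is a placeholder, not the FAMOUS code.
n_columns, n_levels = 1024, 40
atmosphere = np.random.rand(n_columns, n_levels)

def radiative_kernel(pack):
    # Placeholder per-pack computation over four columns at once.
    return np.cumsum(pack, axis=1) * 0.5

packs = [atmosphere[i:i + 4] for i in range(0, n_columns, 4)]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(radiative_kernel, packs))

fluxes = np.vstack(results)
assert fluxes.shape == (n_columns, n_levels)
```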

  11. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    KAUST Repository

    Zheng, Xiang; Yang, Chao; Cai, Xiaochuan; Keyes, David E.

    2015-01-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation

  12. A Robust Algorithm of Multiquadric Method Based on an Improved Huber Loss Function for Interpolating Remote-Sensing-Derived Elevation Data Sets

    Directory of Open Access Journals (Sweden)

    Chuanfa Chen

    2015-03-01

    Full Text Available Remote-sensing-derived elevation data sets often suffer from noise and outliers due to various reasons, such as the physical limitations of sensors, multiple reflectance, occlusions and low contrast of texture. Outliers generally have a seriously negative effect on DEM construction. Some interpolation methods like ordinary kriging (OK) are capable of smoothing noise inherent in sample points, but are sensitive to outliers. In this paper, a robust algorithm of multiquadric method (MQ) based on an Improved Huber loss function (MQ-IH) has been developed to decrease the impact of outliers on DEM construction. Theoretically, the improved Huber loss function is null for outliers, quadratic for small errors, and linear for others. Simulated data sets drawn from a mathematical surface with different error distributions were employed to analyze the robustness of MQ-IH. Results indicate that MQ-IH obtains a good balance between efficiency and robustness. Namely, the performance of MQ-IH is comparable to those of the classical MQ and MQ based on the Classical Huber loss function (MQ-CH) when sample points follow a normal distribution, and the former outperforms the latter two when sample points are subject to outliers. For example, for the Cauchy error distribution with the location parameter of 0 and scale parameter of 1, the root mean square errors (RMSEs) of MQ-CH and the classical MQ are 0.3916 and 1.4591, respectively, whereas that of MQ-IH is 0.3698. The performance of MQ-IH is further evaluated by qualitative and quantitative analysis through a real-world example of DEM construction with the stereo-images-derived elevation points. Results demonstrate that compared with the classical interpolation methods, including natural neighbor (NN), OK and ANUDEM (a program that calculates regular grid digital elevation models (DEMs) with sensible shape and drainage structure from arbitrarily large topographic data sets), and two versions of MQ, including the
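
    The loss shape described above (quadratic for small residuals, linear for moderate ones, and null for outliers) can be made concrete with a simple piecewise weight function. The sketch below is only an illustration of that shape; the thresholds a and b are assumed values, not those used by MQ-IH.

      import numpy as np

      def improved_huber_weight(r, a=1.345, b=3.0):
          # Piecewise weight in the spirit of the description above: weight 1 in the
          # quadratic region, a/|r| in the linear region, and 0 for outliers.
          # The thresholds a and b are illustrative assumptions.
          r = np.abs(np.asarray(r, dtype=float))
          w = np.ones_like(r)
          linear = (r > a) & (r <= b)
          w[linear] = a / r[linear]
          w[r > b] = 0.0
          return w

      print(improved_huber_weight([0.5, 2.0, 10.0]))   # -> [1.0, 0.6725, 0.0]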

  13. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    Directory of Open Access Journals (Sweden)

    Franz Konstantin Fuss

    2013-01-01

    Full Text Available Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal’s time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  14. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    Science.gov (United States)

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  15. Geometrically Flexible and Efficient Flow Analysis of High Speed Vehicles Via Domain Decomposition, Part 1: Unstructured-Grid Solver for High Speed Flows

    Science.gov (United States)

    White, Jeffery A.; Baurle, Robert A.; Passe, Bradley J.; Spiegel, Seth C.; Nishikawa, Hiroaki

    2017-01-01

    The ability to solve the equations governing the hypersonic turbulent flow of a real gas on unstructured grids using a spatially-elliptic, 2nd-order accurate, cell-centered, finite-volume method has been recently implemented in the VULCAN-CFD code. This paper describes the key numerical methods and techniques that were found to be required to robustly obtain accurate solutions to hypersonic flows on non-hex-dominant unstructured grids. The methods and techniques described include: an augmented stencil, weighted linear least squares, cell-average gradient method, a robust multidimensional cell-average gradient-limiter process that is consistent with the augmented stencil of the cell-average gradient method and a cell-face gradient method that contains a cell skewness sensitive damping term derived using hyperbolic diffusion based concepts. A data-parallel matrix-based symmetric Gauss-Seidel point-implicit scheme, used to solve the governing equations, is described and shown to be more robust and efficient than a matrix-free alternative. In addition, a y+ adaptive turbulent wall boundary condition methodology is presented. This boundary condition methodology is designed to automatically switch between a solve-to-the-wall and a wall-matching-function boundary condition based on the local y+ of the 1st cell center off the wall. The aforementioned methods and techniques are then applied to a series of hypersonic and supersonic turbulent flat plate unit tests to examine the efficiency, robustness and convergence behavior of the implicit scheme and to determine the ability of the solve-to-the-wall and y+ adaptive turbulent wall boundary conditions to reproduce the turbulent law-of-the-wall. Finally, the thermally perfect, chemically frozen, Mach 7.8 turbulent flow of air through a scramjet flow-path is computed and compared with experimental data to demonstrate the robustness, accuracy and convergence behavior of the unstructured-grid solver for a realistic 3-D geometry on
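
    A stripped-down sketch of a weighted linear least-squares cell-average gradient of the kind mentioned above: the gradient at a cell centre is fitted from differences to stencil neighbours, with inverse-distance weights. The stencil, weights and data here are invented for illustration and do not reproduce the VULCAN-CFD formulation.

      import numpy as np

      def ls_gradient(xc, uc, nbr_xc, nbr_uc):
          # Weighted least-squares fit of grad(u) at cell centre xc from neighbouring
          # cell-average values. Inverse-distance weighting is an assumed choice.
          dx = nbr_xc - xc                     # (n, dim) displacements to neighbours
          du = nbr_uc - uc                     # (n,) differences of cell averages
          w = 1.0 / np.linalg.norm(dx, axis=1)
          grad, *_ = np.linalg.lstsq(dx * w[:, None], du * w, rcond=None)
          return grad

      xc = np.array([0.0, 0.0])
      nbr_x = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.2], [0.3, -1.0]])
      nbr_u = np.array([2.0, 3.0, 0.1, -0.9])  # synthetic neighbouring cell averages
      print(ls_gradient(xc, 1.0, nbr_x, nbr_u))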

  16. IDENTIFICATION OF A ROBUST LICHEN INDEX FOR THE DECONVOLUTION OF LICHEN AND ROCK MIXTURES USING PATTERN SEARCH ALGORITHM (CASE STUDY: GREENLAND)

    Directory of Open Access Journals (Sweden)

    S. Salehi

    2016-06-01

    Full Text Available Lichens are the dominant autotrophs of polar and subpolar ecosystems and commonly encrust rock outcrops. Spectral mixing of lichens and bare rock can shift diagnostic spectral features of materials of interest, thus leading to misinterpretation and false positives if mapping is done based on perfect spectral matching methodologies. Therefore, the ability to distinguish lichen coverage from rock and to decompose a mixed pixel into a collection of pure reflectance spectra can improve the applicability of hyperspectral methods for mineral exploration. The objective of this study is to propose a robust lichen index that can be used to estimate lichen coverage, regardless of the mineral composition of the underlying rocks. The performance of three index structures of ratio, normalized ratio and subtraction has been investigated using synthetic linear mixtures of pure rock and lichen spectra with prescribed mixing ratios. Laboratory spectroscopic data are obtained from lichen covered samples collected from Karrat, Liverpool Land, and Sisimiut regions in Greenland. The spectra are then resampled to Hyperspectral Mapper (HyMAP) resolution, in order to further investigate the functionality of the indices for the airborne platform. In both resolutions, a Pattern Search (PS) algorithm is used to identify the optimal band wavelengths and bandwidths for the lichen index. The results of our band optimization procedure revealed that the ratio between R894-1246 and R1110 explains most of the variability in the hyperspectral data at the original laboratory resolution (R2=0.769). However, the normalized index incorporating R1106-1121 and R904-1251 yields the best results for the HyMAP resolution (R2=0.765).
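
    The three index structures investigated above (ratio, normalized ratio and subtraction of band reflectances) are easy to state explicitly. The sketch below builds synthetic linear mixtures of invented rock and lichen spectra and scores candidate band pairs for a normalized-ratio index with a crude exhaustive search standing in for the Pattern Search optimisation; all spectra, band positions and numbers are fabricated for illustration only.

      import numpy as np

      rng = np.random.default_rng(1)
      wavelengths = np.linspace(450, 2450, 200)          # synthetic band centres (nm)
      rock = 0.2 + 0.1 * np.sin(wavelengths / 300.0)     # invented endmember spectra
      lichen = 0.3 + 0.2 * np.exp(-((wavelengths - 1100.0) / 250.0) ** 2)

      fractions = np.linspace(0.0, 1.0, 21)
      mixtures = np.outer(fractions, lichen) + np.outer(1 - fractions, rock)

      def normalized_ratio(spectra, i, j):
          # One of the three index structures discussed above: (R_i - R_j) / (R_i + R_j).
          return (spectra[:, i] - spectra[:, j]) / (spectra[:, i] + spectra[:, j])

      # Exhaustive search over band pairs as a crude stand-in for Pattern Search.
      best = max((np.corrcoef(normalized_ratio(mixtures, i, j), fractions)[0, 1] ** 2, i, j)
                 for i in range(0, 200, 5) for j in range(0, 200, 5) if i != j)
      r2, i, j = best
      print(f"best R^2 = {r2:.3f} at bands {wavelengths[i]:.0f} nm and {wavelengths[j]:.0f} nm")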

  17. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms by which the non-uniformity of sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
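
    The interpolation strategy described above (transform to log–log space, then apply a local monotonicity-preserving spline) can be sketched briefly. SciPy does not ship a Steffen spline, so the example below substitutes PCHIP, a different but likewise local and monotonicity-preserving interpolant; the sample mesh and ELF values are synthetic.

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      energy = np.array([1.0, 3.0, 10.0, 40.0, 120.0, 500.0])   # synthetic mesh (eV)
      elf = np.array([0.02, 0.15, 0.90, 0.40, 0.08, 0.01])      # synthetic ELF samples

      # Interpolate in log-log space; PCHIP stands in for the Steffen spline here.
      interp = PchipInterpolator(np.log(energy), np.log(elf))

      def elf_of(e):
          # Evaluate in log-log space and transform back to linear scale.
          return np.exp(interp(np.log(e)))

      print(elf_of(np.array([2.0, 25.0, 300.0])))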

  18. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms by which the non-uniformity of sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  19. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with changing time delay.

  20. Robust loss functions for boosting.

    Science.gov (United States)

    Kanamori, Takafumi; Takenouchi, Takashi; Eguchi, Shinto; Murata, Noboru

    2007-08-01

    Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, Adaboost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied. Based on the concept of robust statistics, we propose a transformation of loss functions that makes boosting algorithms robust against extreme outliers. Next, the truncation of loss functions is applied to contamination models that describe the occurrence of mislabels near decision boundaries. Numerical experiments illustrate that the proposed loss functions derived from the contamination models are useful for handling highly noisy data in comparison with other loss functions.
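
    To make the idea of transforming or truncating a loss concrete, the sketch below contrasts AdaBoost's unbounded exponential loss with a simple capped version, so that extreme outliers (very negative margins) cannot dominate the fit. The particular cap is an arbitrary illustration, not the transformation proposed in the letter.

      import numpy as np

      def exp_loss(margin):
          # AdaBoost's exponential loss grows without bound for badly misclassified points.
          return np.exp(-margin)

      def truncated_exp_loss(margin, cap=5.0):
          # Illustrative robustified variant: the penalty is capped, so extreme outliers
          # contribute a bounded amount. The cap value is an assumption.
          return np.minimum(np.exp(-margin), cap)

      margins = np.array([-4.0, -1.0, 0.0, 2.0])
      print(exp_loss(margins))            # approx [54.598  2.718  1.     0.135]
      print(truncated_exp_loss(margins))  # approx [5.     2.718  1.     0.135]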

  1. Robust Seismic Normal Modes Computation in Radial Earth Models and A Novel Classification Based on Intersection Points of Waveguides

    Science.gov (United States)

    Ye, J.; Shi, J.; De Hoop, M. V.

    2017-12-01

    We develop a robust algorithm to compute seismic normal modes in a spherically symmetric, non-rotating Earth. A well-known problem is the cross-contamination of modes near "intersections" of dispersion curves for separate waveguides. Our novel computational approach completely avoids artificial degeneracies by guaranteeing orthonormality among the eigenfunctions. We extend Wiggins' and Buland's work, and reformulate the Sturm-Liouville problem as a generalized eigenvalue problem with the Rayleigh-Ritz Galerkin method. A special projection operator incorporating the gravity terms proposed by de Hoop and a displacement/pressure formulation are utilized in the fluid outer core to project out the essential spectrum. Moreover, the weak variational form enables us to achieve high accuracy across the solid-fluid boundary, especially for Stoneley modes, which have exponentially decaying behavior. We also employ the mixed finite element technique to avoid spurious pressure modes arising from discretization schemes and a numerical inf-sup test is performed following Bathe's work. In addition, the self-gravitation terms are reformulated to avoid computations outside the Earth, thanks to the domain decomposition technique. Our package enables us to study the physical properties of intersection points of waveguides. According to Okal's classification theory, the group velocities should be continuous within a branch of the same mode family. However, we have found that there will be a small "bump" near intersection points, which is consistent with Miropol'sky's observation. In fact, we can loosely regard Earth's surface and the CMB as independent waveguides. For those modes that are far from the intersection points, their eigenfunctions are localized in the corresponding waveguides. However, those that are close to intersection points will have physical features of both waveguides, which means they cannot be classified in either family. Our results improve on Okal
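
    The Rayleigh-Ritz Galerkin discretisation mentioned above leads to a generalized eigenvalue problem A x = lambda B x with a symmetric stiffness-like matrix A and a symmetric positive definite mass-like matrix B. The toy sketch below solves such a problem with SciPy on random matrices (not an Earth model) and checks the B-orthonormality of the eigenvectors, the property the abstract emphasises for avoiding cross-contamination of modes.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(0)
      n = 6
      M = rng.random((n, n))
      A = M + M.T + n * np.eye(n)        # symmetric "stiffness-like" matrix (toy)
      N_ = rng.random((n, n))
      B = N_ @ N_.T + n * np.eye(n)      # symmetric positive definite "mass-like" matrix (toy)

      # Generalized symmetric eigenproblem A x = lambda B x.
      eigvals, eigvecs = eigh(A, B)
      print(eigvals)
      print(np.allclose(eigvecs.T @ B @ eigvecs, np.eye(n)))   # B-orthonormal eigenvectors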

  2. Load Estimation by Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Pedersen, Ivar Chr. Bjerg; Hansen, Søren Mosegaard; Brincker, Rune

    2007-01-01

    When performing operational modal analysis the dynamic loading is unknown, however, once the modal properties of the structure have been estimated, the transfer matrix can be obtained, and the loading can be estimated by inverse filtering. In this paper loads in frequency domain are estimated by ...

  3. Bregmanized Domain Decomposition for Image Restoration

    KAUST Repository

    Langer, Andreas; Osher, Stanley; Schö nlieb, Carola-Bibiane

    2012-01-01

    Computational problems of large-scale data are gaining attention recently due to better hardware and hence, higher dimensionality of images and data sets acquired in applications. In the last couple of years non-smooth minimization problems

  4. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  5. A Robust and Self-Paced BCI System Based on a Four Class SSVEP Paradigm: Algorithms and Protocols for a High-Transfer-Rate Direct Brain Communication

    Directory of Open Access Journals (Sweden)

    Sergio Parini

    2009-01-01

    Full Text Available In this paper, we present, with particular focus on the adopted processing and identification chain and protocol-related solutions, a whole self-paced brain-computer interface system based on a 4-class steady-state visual evoked potentials (SSVEPs) paradigm. The proposed system incorporates an automated spatial filtering technique centred on the common spatial patterns (CSPs) method, an autoscaled and effective signal feature extraction which is used to provide unsupervised biofeedback, and a robust self-paced classifier based on the discriminant analysis theory. The adopted operating protocol is structured in a screening, training, and testing phase aimed at collecting user-specific information regarding best stimulation frequencies, optimal sources identification, and overall system processing chain calibration in only a few minutes. The system, validated on 11 healthy/pathologic subjects, has proven to be reliable in terms of achievable communication speed (up to 70 bit/min) and very robust to false positive identifications.
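
    Common spatial patterns, the spatial filtering step mentioned above, can be computed from two class-conditional covariance matrices via a generalized eigendecomposition. The sketch below shows that generic computation on synthetic multichannel trials; it is a textbook CSP illustration, not the automated variant used in the described BCI system.

      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b):
          # trials_*: arrays of shape (n_trials, n_channels, n_samples).
          # Returns spatial filters (one per row) ordered by the variance ratio
          # between the two classes.
          def mean_cov(trials):
              return np.mean([np.cov(t) for t in trials], axis=0)
          Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
          # Generalized eigenproblem Ca w = lambda (Ca + Cb) w: extreme eigenvalues give
          # filters maximising variance for one class while minimising it for the other.
          vals, vecs = eigh(Ca, Ca + Cb)
          order = np.argsort(vals)[::-1]
          return vecs[:, order].T

      rng = np.random.default_rng(0)
      scale = np.array([2.0, 1, 1, 1, 1, 1, 1, 1])[:, None]   # channel 0 stronger in class A
      trials_a = rng.normal(size=(20, 8, 256)) * scale
      trials_b = rng.normal(size=(20, 8, 256))
      W = csp_filters(trials_a, trials_b)
      print(W.shape)   # (8, 8): one spatial filter per row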

  6. Computational plasticity algorithm for particle dynamics simulations

    Science.gov (United States)

    Krabbenhoft, K.; Lyamin, A. V.; Vignes, C.

    2018-01-01

    The problem of particle dynamics simulation is interpreted in the framework of computational plasticity leading to an algorithm which is mathematically indistinguishable from the common implicit scheme widely used in the finite element analysis of elastoplastic boundary value problems. This algorithm provides somewhat of a unification of two particle methods, the discrete element method and the contact dynamics method, which usually are thought of as being quite disparate. In particular, it is shown that the former appears as the special case where the time stepping is explicit while the use of implicit time stepping leads to the kind of schemes usually labelled contact dynamics methods. The framing of particle dynamics simulation within computational plasticity paves the way for new approaches similar (or identical) to those frequently employed in nonlinear finite element analysis. These include mixed implicit-explicit time stepping, dynamic relaxation and domain decomposition schemes.

  7. A Robust Motion Artifact Detection Algorithm for Accurate Detection of Heart Rates From Photoplethysmographic Signals Using Time-Frequency Spectral Features.

    Science.gov (United States)

    Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H

    2017-09-01

    Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach

  8. Manipulation Robustness of Collaborative Filtering

    OpenAIRE

    Benjamin Van Roy; Xiang Yan

    2010-01-01

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions and hence have become targets of manipulation by unscrupulous vendors. We demonstrate that nearest neighbors algorithms, which are widely used in commercial systems, are highly susceptible to manipulation and introduce new collaborative filtering algorithms that are relatively robust.

  9. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function

    OpenAIRE

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-01

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF-theory to dual and multi-wavelength PSA-synthesis when several simultaneous laser-colors are present. The FTF-based synthesis for dual-wavelength PSA (DW-PSA) is optimized for high signal-to-noise ratio and minimum number of temporal phase-shifted interferograms. The DW-PSA synthesi...

  10. Robust factorization

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Fisker, Rune; Åström, Kalle

    2002-01-01

    Factorization algorithms for recovering structure and motion from an image stream have many advantages, but they usually require a set of well-tracked features. Such a set is in generally not available in practical applications. There is thus a need for making factorization algorithms deal effect...

  11. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    Science.gov (United States)

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified using specialized equipment operated by optometrists manually. The proposed work aims to provide an efficient imaging solution which can help in automating the process of Glaucoma diagnosis using computer vision techniques from digital fundus images. The proposed method segments the optic disc using a geometrical feature based strategic framework which improves the detection accuracy and makes the algorithm invariant to illumination and noise. Corner thresholding and point contour joining based novel methods are proposed to construct smooth contours of Optic Disc. Based on a clinical approach as used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region and identifies the points at which the vessels first bend from the optic disc boundary, and connects them to obtain the contours of Optic Cup. The proposed method has been compared with the ground truth marked by the medical experts and the similarity parameters, used to determine the performance of the proposed method, have yielded a high similarity of segmentation. The proposed method has achieved a macro-averaged f-score of 0.9485 and accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant and can be used for Glaucoma screening over a large population and will work in real time. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    ...... knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and push scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political......

  13. On the Robustness and Prospects of Adaptive BDDC Methods for Finite Element Discretizations of Elliptic PDEs with High-Contrast Coefficients

    KAUST Repository

    Zampini, Stefano

    2016-06-02

    Balancing Domain Decomposition by Constraints (BDDC) methods have proven to be powerful preconditioners for large and sparse linear systems arising from the finite element discretization of elliptic PDEs. Condition number bounds can be theoretically established that are independent of the number of subdomains of the decomposition. The core of the methods resides in the design of a larger and partially discontinuous finite element space that allows for fast application of the preconditioner, where Cholesky factorizations of the subdomain finite element problems are additively combined with a coarse, global solver. Multilevel and highly-scalable algorithms can be obtained by replacing the coarse Cholesky solver with a coarse BDDC preconditioner. BDDC methods have the remarkable ability to control the condition number, since the coarse space of the preconditioner can be adaptively enriched at the cost of solving local eigenproblems. The proper identification of these eigenproblems extends the robustness of the methods to any heterogeneity in the distribution of the coefficients of the PDEs, not only when the coefficient jumps align with the subdomain boundaries or when the high contrast regions are confined to lie in the interior of the subdomains. The specific adaptive technique considered in this paper does not depend upon any interaction of discretization and partition; it relies purely on algebraic operations. Coarse space adaptation in BDDC methods has attractive algorithmic properties, since the technique enhances the concurrency and the arithmetic intensity of the preconditioning step of the sparse implicit solver with the aim of controlling the number of iterations of the Krylov method in a black-box fashion, thus reducing the number of global synchronization steps and matrix vector multiplications needed by the iterative solver; data movement and memory bound kernels in the solve phase can thus be limited at the expense of extra local ops during the setup of

  14. HMC algorithm with multiple time scale integration and mass preconditioning

    Science.gov (United States)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
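
    A toy sketch of multiple time scale (nested leapfrog) integration of the kind referred to above: a cheap force is integrated with a fine step inside each coarse step of an expensive force, in the Sexton-Weingarten style. The split potential, step sizes and step counts are invented; this is not the lattice QCD implementation discussed in the abstract.

      # Toy Hamiltonian H = p^2/2 + V_cheap(q) + V_expensive(q) with an invented split.
      def f_cheap(q):
          return -4.0 * q            # force from the "cheap" part of the action

      def f_expensive(q):
          return -0.1 * q ** 3       # force from the "expensive" part of the action

      def nested_leapfrog(q, p, dt, n_outer, n_inner):
          # Outer leapfrog uses the expensive force; each outer step wraps n_inner fine
          # leapfrog steps driven by the cheap force (multiple time scale integration).
          for _ in range(n_outer):
              p += 0.5 * dt * f_expensive(q)
              h = dt / n_inner
              for _ in range(n_inner):
                  p += 0.5 * h * f_cheap(q)
                  q += h * p
                  p += 0.5 * h * f_cheap(q)
              p += 0.5 * dt * f_expensive(q)
          return q, p

      print(nested_leapfrog(1.0, 0.0, dt=0.1, n_outer=50, n_inner=5))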

  15. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.

  16. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application.The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance of measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameter, large sample properties, and goodness-of-fit tests. It...

  17. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  18. Robust Return Algorithm for Anisotropic Plasticity Models

    DEFF Research Database (Denmark)

    Tidemann, L.; Krenk, Steen

    2017-01-01

    Plasticity models can be defined by an energy potential, a plastic flow potential and a yield surface. The energy potential defines the relation between the observable elastic strains ϒe and the energy conjugate stresses Τe and between the non-observable internal strains i and the energy conjugat...

  19. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  20. Implementation of a parallel algorithm for spherical SN calculations on the IBM 3090

    International Nuclear Information System (INIS)

    Haghighat, A.; Lawrence, R.D.

    1989-01-01

    Parallel SN algorithms based on domain decomposition in angle are straightforward to develop in Cartesian geometry because the computation of the angular fluxes for a specific discrete ordinate can be performed independently of all other angles. This is not the case for curvilinear geometries, where the angular redistribution component of the discretized streaming operator results in coupling between angular fluxes along adjacent discrete ordinates. Previously, the authors developed a parallel algorithm for SN calculations in spherical geometry and examined its iterative convergence for criticality and detector problems with differing scattering/absorption ratios. In this paper, the authors describe the implementation of the algorithm on an IBM 3090 Model 400 (four processors) and present computational results illustrating the efficiency of the algorithm relative to serial execution

  1. Robust and Efficient Parametric Face Alignment

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient

  2. Mesh Partitioning Algorithm Based on Parallel Finite Element Analysis and Its Actualization

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2013-01-01

    Full Text Available In parallel computing based on finite element analysis, domain decomposition is a key preprocessing technique. Generally, a domain decomposition of a mesh can be realized through partitioning of a graph which is converted from a finite element mesh. This paper discusses the method for graph partitioning and the way to carry out mesh partitioning. Relevant software packages are introduced, and the data structure and key functions of Metis and ParMetis are described. The writing, compiling, and testing of a mesh partitioning interface program based on these key functions are performed. The results indicate objective laws and characteristics that guide users of graph partitioning algorithms and software in writing PFEM programs, and good partitioning results can be achieved by carrying out mesh partitioning through the program. The interface program can also be used directly by engineering researchers as a module of PFEM software. This lowers the barrier to applying graph partitioning algorithms, improves calculation efficiency, and promotes the application of graph theory and parallel computing.
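
    Metis and ParMetis operate on the graph derived from the mesh; as a self-contained stand-in that avoids depending on their APIs, the sketch below partitions element centroids by recursive coordinate bisection. It only conveys what a balanced mesh partition for PFEM preprocessing looks like and is not a graph partitioner of Metis quality.

      import numpy as np

      def recursive_bisection(points, n_parts):
          # Assigns an integer part label to every point by splitting along the longest
          # axis at the median, recursively. n_parts is assumed to be a power of two.
          labels = np.zeros(len(points), dtype=int)

          def split(idx, part, parts_left):
              if parts_left == 1:
                  labels[idx] = part
                  return
              pts = points[idx]
              axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))
              order = np.argsort(pts[:, axis])
              half = len(idx) // 2
              split(idx[order[:half]], 2 * part, parts_left // 2)
              split(idx[order[half:]], 2 * part + 1, parts_left // 2)

          split(np.arange(len(points)), 0, n_parts)
          return labels

      rng = np.random.default_rng(0)
      centroids = rng.random((1000, 2))          # synthetic element centroids
      parts = recursive_bisection(centroids, 4)
      print(np.bincount(parts))                  # roughly equal-sized subdomains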

  3. Ins-Robust Primitive Words

    OpenAIRE

    Srivastava, Amit Kumar; Kapoor, Kalpesh

    2017-01-01

    Let Q be the set of primitive words over a finite alphabet with at least two symbols. We characterize a class of primitive words, Q_I, referred to as ins-robust primitive words, which remain primitive on insertion of any letter from the alphabet and present some properties that characterize words in the set Q_I. It is shown that the language Q_I is dense. We prove that the language of primitive words that are not ins-robust is not context-free. We also present a linear time algorithm to reco...

  4. A hybrid algorithm for parallel molecular dynamics simulations

    Science.gov (United States)

    Mangiardi, Chris M.; Meyer, R.

    2017-10-01

    This article describes algorithms for the hybrid parallelization and SIMD vectorization of molecular dynamics simulations with short-range forces. The parallelization method combines domain decomposition with a thread-based parallelization approach. The goal of the work is to enable efficient simulations of very large (tens of millions of atoms) and inhomogeneous systems on many-core processors with hundreds or thousands of cores and SIMD units with large vector sizes. In order to test the efficiency of the method, simulations of a variety of configurations with up to 74 million atoms have been performed. Results are shown that were obtained on multi-core systems with Sandy Bridge and Haswell processors as well as systems with Xeon Phi many-core processors.

  5. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so that the phenotype stability of a biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of the functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental

  6. Robust power system frequency control

    CERN Document Server

    Bevrani, Hassan

    2014-01-01

    This updated edition of the industry standard reference on power system frequency control provides practical, systematic and flexible algorithms for regulating load frequency, offering new solutions to the technical challenges introduced by the escalating role of distributed generation and renewable energy sources in smart electric grids. The author emphasizes the physical constraints and practical engineering issues related to frequency in a deregulated environment, while fostering a conceptual understanding of frequency regulation and robust control techniques. The resulting control strategi

  7. Robust image authentication in the presence of noise

    CERN Document Server

    2015-01-01

    This book addresses the problems that hinder image authentication in the presence of noise. It considers the advantages and disadvantages of existing algorithms for image authentication and shows new approaches and solutions for robust image authentication. The state of the art algorithms are compared and, furthermore, innovative approaches and algorithms are introduced. The introduced algorithms are applied to improve image authentication, watermarking and biometry.    Aside from presenting new directions and algorithms for robust image authentication in the presence of noise, as well as image correction, this book also:   Provides an overview of the state of the art algorithms for image authentication in the presence of noise and modifications, as well as a comparison of these algorithms, Presents novel algorithms for robust image authentication, whereby the image is tried to be corrected and authenticated, Examines different views for the solution of problems connected to image authentication in the pre...

  8. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example, in the context of designing a food product, is finding the best composition

  9. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility of increasing robustness through animal breeding without loss of production. At the start of the project, a robust

  10. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.

  11. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate the weights to each fragment, this method firstly defines discrimination and uniqueness for local fragment, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the current valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, fragment-based color histogram provides a structured and effective description for the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.

  12. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge are a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  13. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have...... attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural...... systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...

  14. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  15. Domain decomposition techniques for boundary elements application to fluid flow

    CERN Document Server

    Brebbia, C A; Skerget, L

    2007-01-01

    The sub-domain techniques in the BEM are nowadays finding its place in the toolbox of numerical modellers, especially when dealing with complex 3D problems. We see their main application in conjunction with the classical BEM approach, which is based on a single domain, when part of the domain needs to be solved using a single domain approach, the classical BEM, and part needs to be solved using a domain approach, BEM subdomain technique. This has usually been done in the past by coupling the BEM with the FEM, however, it is much more efficient to use a combination of the BEM and a BEM sub-domain technique. The advantage arises from the simplicity of coupling the single domain and multi-domain solutions, and from the fact that only one formulation needs to be developed, rather than two separate formulations based on different techniques. There are still possibilities for improving the BEM sub-domain techniques. However, considering the increased interest and research in this approach we believe that BEM sub-do...

  16. Domain decomposition solvers for nonlinear multiharmonic finite element equations

    KAUST Repository

    Copeland, D. M.

    2010-01-01

    In many practical applications, for instance, in computational electromagnetics, the excitation is time-harmonic. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear problems. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series and approximating the Fourier coefficients by finite elements, we arrive at a large-scale coupled nonlinear system for determining the finite element approximation to the Fourier coefficients. The construction of fast solvers for such systems is very crucial for the efficiency of this multiharmonic approach. In this paper we look at nonlinear, time-harmonic potential problems as simple model problems. We construct and analyze almost optimal solvers for the Jacobi systems arising from the Newton linearization of the large-scale coupled nonlinear system that one has to solve instead of performing the expensive time-integration procedure. © 2010 de Gruyter.
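
    The truncated Fourier expansion described above can be written out explicitly. In generic multiharmonic notation (the symbols below are chosen for illustration, not quoted from the paper), the time-periodic solution is approximated as

        u(x,t) \approx u_0(x) + \sum_{k=1}^{N} \left[ u_k^c(x) \cos(k\omega t) + u_k^s(x) \sin(k\omega t) \right],

    so that, after finite element approximation of the Fourier coefficient functions u_k^c and u_k^s, the expensive time-integration procedure is replaced by one large coupled algebraic system for all coefficients, which is nonlinear whenever the underlying problem is nonlinear.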

  17. Domain Decomposition Solvers for Frequency-Domain Finite Element Equations

    KAUST Repository

    Copeland, Dylan; Kolmbauer, Michael; Langer, Ulrich

    2010-01-01

    The paper is devoted to fast iterative solvers for frequency-domain finite element equations approximating linear and nonlinear parabolic initial boundary value problems with time-harmonic excitations. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple linear elliptic system for the amplitudes belonging to the sine- and to the cosine-excitation or a large nonlinear elliptic system for the Fourier coefficients in the linear and nonlinear case, respectively. The fast solution of the corresponding linear and nonlinear system of finite element equations is crucial for the competitiveness of this method. © 2011 Springer-Verlag Berlin Heidelberg.

  18. Modal Identification from Ambient Responses using Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Brincker, Rune; Zhang, L.; Andersen, P.

    2000-01-01

    In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical ...
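
    The computation at the heart of frequency domain decomposition is a singular value decomposition of the output spectral density matrix at each frequency line; peaks of the first singular value are then picked as modes. The sketch below shows that step for synthetic two-channel ambient data using SciPy's spectral estimator; it is a bare-bones illustration, not the identification procedure of the paper.

      import numpy as np
      from scipy.signal import csd

      def fdd_first_singular_values(responses, fs, nperseg=1024):
          # responses: array of shape (n_channels, n_samples) of ambient output data.
          # Builds the cross power spectral density matrix G(f) and returns, for every
          # frequency line, the first singular value of G(f).
          n_ch = responses.shape[0]
          f, _ = csd(responses[0], responses[0], fs=fs, nperseg=nperseg)
          G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
          for i in range(n_ch):
              for j in range(n_ch):
                  _, G[:, i, j] = csd(responses[i], responses[j], fs=fs, nperseg=nperseg)
          return f, np.linalg.svd(G, compute_uv=False)[:, 0]

      rng = np.random.default_rng(0)
      t = np.arange(0, 60, 1 / 256)                      # 60 s of data at 256 Hz
      mode = np.sin(2 * np.pi * 12.5 * t)                # synthetic 12.5 Hz "mode"
      data = np.vstack([mode + 0.5 * rng.normal(size=t.size),
                        0.7 * mode + 0.5 * rng.normal(size=t.size)])
      f, s1 = fdd_first_singular_values(data, fs=256)
      print(f[np.argmax(s1)])                            # peak near 12.5 Hz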

  19. Domain Decomposition Solvers for Frequency-Domain Finite Element Equations

    KAUST Repository

    Copeland, Dylan

    2010-10-05

    The paper is devoted to fast iterative solvers for frequency-domain finite element equations approximating linear and nonlinear parabolic initial boundary value problems with time-harmonic excitations. Switching from the time domain to the frequency domain allows us to replace the expensive time-integration procedure by the solution of a simple linear elliptic system for the amplitudes belonging to the sine- and to the cosine-excitation or a large nonlinear elliptic system for the Fourier coefficients in the linear and nonlinear case, respectively. The fast solution of the corresponding linear and nonlinear system of finite element equations is crucial for the competitiveness of this method. © 2011 Springer-Verlag Berlin Heidelberg.

  20. Modal Identification from Ambient Responses Using Frequency Domain Decomposition

    DEFF Research Database (Denmark)

    Brincker, Rune; Zhang, Lingmi; Andersen, Palle

    2000-01-01

    In this paper a new frequency domain technique is introduced for the modal identification from ambient responses, i.e. in the case where the modal parameters must be estimated without knowing the input exciting the system. By its user friendliness the technique is closely related to the classical...
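
    The basic frequency domain decomposition step can be sketched compactly: estimate the output cross-spectral density matrix, take its singular value decomposition at every frequency line, and read modes off the peaks of the first singular value. The example below uses synthetic two-channel data and standard SciPy routines; it is a schematic illustration, not the authors' implementation.

```python
# Frequency Domain Decomposition sketch: estimate the cross power spectral
# density (CPSD) matrix of the measured outputs and take its SVD at every
# frequency line; peaks of the first singular value indicate modes, and the
# corresponding singular vector approximates the mode shape. Synthetic
# two-channel data are used here purely for illustration.
import numpy as np
from scipy.signal import butter, csd, lfilter

fs = 256.0
n = int(60 * fs)
rng = np.random.default_rng(0)

# synthetic "ambient" response: one narrow-band modal coordinate near 12 Hz
# (band-pass filtered white noise) seen through an assumed mode shape, plus
# independent measurement noise
b, a = butter(4, [11.0, 13.0], btype="bandpass", fs=fs)
q = lfilter(b, a, rng.standard_normal(n))
shape = np.array([1.0, -0.6])                 # assumed mode shape
y = np.outer(shape, q) + 0.05 * rng.standard_normal((2, n))

# CPSD matrix G(f) for all channel pairs, then SVD at every frequency line
nper = 1024
f, _ = csd(y[0], y[0], fs=fs, nperseg=nper)
G = np.empty((len(f), 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=nper)

s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
print(f"first-singular-value peak at {f[np.argmax(s1)]:.2f} Hz")  # near 12 Hz
```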

  1. Domain decomposition solvers for nonlinear multiharmonic finite element equations

    KAUST Repository

    Copeland, D. M.; Langer, U.

    2010-01-01

    of a simple elliptic equation for the amplitude. This is true for linear problems, but not for nonlinear problems. However, due to the periodicity of the solution, we can expand the solution in a Fourier series. Truncating this Fourier series

  2. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  3. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
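
    Of the four methods named above, the FFT-based spectral algorithm is the simplest to sketch. The following is a minimal 1D periodic Poisson solve in which every Fourier mode decouples, which is the source of the totally parallel character; it says nothing about the hypercube implementation discussed in the abstract.

```python
# FFT-based spectral solve of the 1D periodic Poisson problem -u'' = f:
# in Fourier space each mode decouples, u_hat(k) = f_hat(k) / k^2 for k != 0,
# so the "solve" is a pointwise division. Illustrative only.
import numpy as np

n = 128
L = 2 * np.pi
x = np.arange(n) * L / n
f = np.cos(3 * x)                              # right-hand side with zero mean

k = np.fft.rfftfreq(n, d=L / n) * 2 * np.pi    # wavenumbers 0, 1, 2, ...
f_hat = np.fft.rfft(f)
u_hat = np.zeros_like(f_hat)
u_hat[1:] = f_hat[1:] / k[1:] ** 2             # skip k = 0 (zero-mean solution)
u = np.fft.irfft(u_hat, n=n)

print(np.max(np.abs(u - np.cos(3 * x) / 9)))   # exact solution is cos(3x)/9
```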

  4. Handling Occlusions for Robust Augmented Reality Systems

    Directory of Open Access Journals (Sweden)

    Maidi Madjid

    2010-01-01

    Full Text Available In Augmented Reality applications, human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene and this requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach based on a direct method combined with the Kalman filter. An important factor for providing a robust Augmented Reality system is the correct handling of target occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method using an optical flow approach to track visible points and maintain the virtual graphics overlay when targets are not identified. Our proposed real-time algorithm is tested with different camera viewpoints under various image conditions and is shown to be accurate and robust.

  5. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely...... ‘COST TU0601: Robustness of Structures’ was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...... the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled...

  6. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  7. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  8. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  9. Efficient algorithms for flow simulation related to nuclear reactor safety

    International Nuclear Information System (INIS)

    Gornak, Tatiana

    2013-01-01

    Safety analysis is of ultimate importance for operating Nuclear Power Plants (NPP). The overall modeling and simulation of physical and chemical processes occurring in the course of an accident is an interdisciplinary problem and has origins in fluid dynamics, numerical analysis, reactor technology and computer programming. The aim of the study is therefore to create the foundations of a multi-dimensional non-isothermal fluid model for a NPP containment and a software tool based on it. The numerical simulations allow to analyze and predict the behavior of NPP systems under different working and accident conditions, and to develop proper action plans for minimizing the risks of accidents, and/or minimizing the consequences of possible accidents. A very large number of scenarios have to be simulated, and at the same time acceptable accuracy for the critical parameters, such as radioactive pollution, temperature, etc., has to be achieved. The existing software tools are either too slow, or not accurate enough. This thesis deals with developing customized algorithms and software tools for the simulation of isothermal and non-isothermal flows in the containment pool of a NPP. Requirements for such software are formulated, and proper algorithms are presented. The goal of the work is to achieve a balance between accuracy and speed of calculation, and to develop a customized algorithm for this special case. Different discretization and solution approaches are studied, and those which correspond best to the formulated goal are selected, adjusted, and where possible, analysed. A fast directional splitting algorithm for the Navier-Stokes equations in complicated geometries, in the presence of solid and porous obstacles, is at the core of the algorithm. Developing a suitable pre-processor and customized domain decomposition algorithms is an essential part of the overall algorithm and software. Results from numerical simulations in test geometries and in real geometries are presented and discussed.

  10. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-emission (EEM) data was developed. The results show...

  11. Robust Trajectory Design in Highly Perturbed Environments Leveraging Continuation Methods, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Research is proposed to investigate continuation methods to improve the robustness of trajectory design algorithms for spacecraft in highly perturbed dynamical...

  12. Discrete Riccati equation solutions: Distributed algorithms

    Directory of Open Access Journals (Sweden)

    D. G. Lainiotis

    1996-01-01

    Full Text Available In this paper new distributed algorithms for the solution of the discrete Riccati equation are introduced. The algorithms are used to provide robust and computationally efficient solutions to the discrete Riccati equation. The proposed distributed algorithms are theoretically interesting and computationally attractive.
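
    As a plain sequential reference point (not one of the distributed algorithms of the paper), the steady-state solution of the filter-type discrete Riccati equation can be obtained by iterating the Riccati difference equation until it converges; the system matrices below are illustrative.

```python
# Fixed-point iteration of the (filter-type) discrete Riccati equation
#   P <- A P A^T - A P C^T (C P C^T + R)^{-1} C P A^T + Q
# run until the update stalls; the limit is the steady-state error covariance.
# The matrices are illustrative; this is a plain sequential reference, not one
# of the distributed algorithms proposed in the paper.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

P = np.eye(2)
for _ in range(1000):
    S = C @ P @ C.T + R
    P_next = A @ P @ A.T - A @ P @ C.T @ np.linalg.solve(S, C @ P @ A.T) + Q
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

print(P)
```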

  13. Classic, adaptive and robust control algorithms applied to dynamic systems with time-varying parameters

    Directory of Open Access Journals (Sweden)

    Orlando – Regalón Anias

    2012-11-01

    Full Text Available Many dynamic systems have first-order mathematical models with time-varying parameters. In these cases the classical tools do not always yield a control system that is stable, has good dynamic performance and adequately rejects disturbances when the plant model deviates from the nominal one for which the controller was designed. In this work the behaviour of three control strategies is evaluated in the presence of parameter variations and disturbances: classical control, adaptive control and robust control. A comparative study is carried out in terms of design complexity, computational cost of the implementation and sensitivity to parameter variations and/or disturbances. The conclusions provide criteria for choosing the most appropriate strategy, depending on the dynamic requirements of the application and on the available technical means.

  14. Robust matching for voice recognition

    Science.gov (United States)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  15. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  16. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More...... frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new structures essential....... According to Danish design rules robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review...

  17. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on ‘Robustness of structures' has started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  18. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...

  19. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new struct...... of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines....

  20. A robust and fast generic voltage sag detection technique

    DEFF Research Database (Denmark)

    L. Dantas, Joacillo; Lima, Francisco Kleber A.; Branco, Carlos Gustavo C.

    2015-01-01

    In this paper, a fast and robust voltage sag detection algorithm, named VPS2D, is introduced. Using the DSOGI, the algorithm creates a virtual positive sequence voltage and monitors the fundamental voltage component of each phase. After calculating the aggregate value in the αβ-reference fram...

  1. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  2. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
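
    A minimal, self-contained illustration of the basic concepts (selection, crossover, mutation) is sketched below on a toy bit-string problem; it is generic textbook material, not the software tool described in the abstract.

```python
# Minimal genetic algorithm maximizing a toy fitness function: the number of
# ones in a fixed-length bit string. Tournament selection, one-point crossover
# and bit-flip mutation illustrate the basic concepts only.
import random

BITS, POP, GENS, PMUT = 40, 60, 80, 0.01

def fitness(ind):
    return sum(ind)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for gen in range(GENS):
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, BITS)                 # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [1 - b if random.random() < PMUT else b for b in child]
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", BITS)
```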

  3. A robust multilevel simultaneous eigenvalue solver

    Science.gov (United States)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat appropriately these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
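
    The Rayleigh-Ritz projection that the new technique generalizes can be shown in isolation with a plain inverse subspace iteration for a few of the smallest eigenpairs of a symmetric matrix. In the sketch below a small 1D Laplacian stands in for the Schrödinger-type operators; the multilevel machinery, cluster handling and backrotations of the paper are not reproduced.

```python
# Plain (single-level) inverse subspace iteration with a Rayleigh-Ritz
# projection, computing the q smallest eigenpairs of a symmetric matrix.
import numpy as np

n, q = 200, 4
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Laplacian

rng = np.random.default_rng(0)
V = np.linalg.qr(rng.standard_normal((n, q)))[0]           # initial subspace
for _ in range(50):
    V = np.linalg.solve(A, V)           # inverse iteration enriches low modes
    V, _ = np.linalg.qr(V)              # re-orthonormalize the basis
    H = V.T @ A @ V                     # Rayleigh-Ritz: project A onto span(V)
    w, S = np.linalg.eigh(H)            # small q x q eigenproblem
    V = V @ S                           # rotate basis to the Ritz vectors

exact = np.sort(2.0 - 2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1)))[:q]
print(np.max(np.abs(w - exact)))        # Ritz values vs exact eigenvalues
```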

  4. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts for judging an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents and its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing examples from the literature and providing a new counter-example. At the end it presents a useful finite-sample and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we compare fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  5. Classic, adaptive and robust control algorithms applied to dynamic systems with time-varying parameters

    Directory of Open Access Journals (Sweden)

    Orlando Regalón Anias

    2012-11-01

    Full Text Available Many dynamic systems have first-order mathematical models with time-varying parameters. In these cases the classical tools do not always yield a control system that is stable, has good dynamic performance and adequately rejects disturbances when the plant model deviates from the nominal one for which the controller was designed. In this work the behaviour of three control strategies is evaluated in the presence of parameter variations and disturbances: classical control, adaptive control and robust control. A comparative study is carried out in terms of design complexity, computational cost of the implementation and sensitivity to parameter variations and/or disturbances. The conclusions provide criteria for choosing the most appropriate strategy, depending on the dynamic requirements of the application and on the available technical means.

  6. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm is invented. Our results show the trained new ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.

  7. Robust Pseudo-Hierarchical Support Vector Clustering

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Sjöstrand, Karl; Olafsdóttir, Hildur

    2007-01-01

    Support vector clustering (SVC) has proven to be an efficient algorithm for clustering of noisy and high-dimensional data sets, with applications within many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method...... for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial...

  8. The algorithm and program complex for splitting into parts the records of acoustic waves recorded during the operation of a plasma actuator flush-mounted in a model plane nozzle, with the purpose of analyzing their robust spectral and correlation characteristics

    International Nuclear Information System (INIS)

    Chernousov, A D; Malakhov, D V; Skvortsova, N N

    2014-01-01

    The development of new technologies for reducing aircraft engine noise is currently an acute problem, including directional action on the noise based on the interaction of plasma disturbances with the sound-generating pulsations. One type of device built on this principle is being developed at GPI RAS: plasma actuators (groups of mutually coupled discharge gaps arranged around the perimeter of the nozzle) of various shapes and forms. In this paper an algorithm is developed that separates individual pulses from the experimental data acquired during the operation of a plasma actuator flush-mounted in a model plane nozzle. The algorithm can be adjusted manually to a variety of situations (operation of the actuator in a nozzle with or without airflow, adjustment to different frequencies and pulse durations of the actuator). A program complex is developed on the basis of MATLAB software, designed for building robust spectral and autocovariance functions of the acoustic signals recorded during the experiments with the nozzle model and a working actuator.

  9. Robust efficient estimation of heart rate pulse from video

    Science.gov (United States)

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed with normal illumination show the algorithm is comparable with pulse oximeter devices both in accuracy and sensitivity. PMID:24761294
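
    The signal-processing part of such a method can be sketched in a few lines: average a skin region per frame, move to log space, remove the slow trend, and locate the dominant spectral peak in the physiological band. The sketch below runs on a synthetic intensity trace and is not the authors' full model of light interaction with skin.

```python
# Heart-rate estimation sketch: the per-frame mean intensity of a skin region
# is taken to log space (cf. the pixel-quotient idea), detrended, and the
# dominant frequency in the 0.7-3 Hz band is reported as the pulse rate.
# A synthetic trace replaces real video frames here.
import numpy as np

fps = 30.0
t = np.arange(0, 20, 1 / fps)
rng = np.random.default_rng(0)
# synthetic per-frame mean skin intensity: slow illumination drift + 72 bpm pulse
trace = 120 * (1 + 0.02 * np.sin(2 * np.pi * 0.05 * t)) \
            * (1 + 0.01 * np.sin(2 * np.pi * 1.2 * t)) + rng.normal(0, 0.05, t.size)

x = np.log(trace)
x = x - np.polyval(np.polyfit(t, x, 2), t)      # remove slow drift

freqs = np.fft.rfftfreq(x.size, d=1 / fps)
spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
band = (freqs >= 0.7) & (freqs <= 3.0)
bpm = 60 * freqs[band][np.argmax(spec[band])]
print(f"estimated pulse: {bpm:.1f} bpm")        # expected near 72 bpm
```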

  10. Robust Parameter Coordination for Multidisciplinary Design

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper introduces a robust parameter coordination method to analyze parameter uncertainties so as to predict conflicts and coordinate parameters in multidisciplinary design. The proposed method is based on a constraint network, which gives a formulated model to analyze the coupling effects between design variables and product specifications. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively to enhance the design robustness. To solve this constraint network model, a general consistent algorithm framework is designed and implemented with interval arithmetic and the genetic algorithm, which can deal with both algebraic and ordinary differential equations. With the help of this method, designers can infer the consistent solution space from the given specifications. A case study involving the design of a bogie dumping system demonstrates the usefulness of this approach.

  11. An algorithm for online optimization of accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States); Corbett, Jeff [SLAC National Accelerator Lab., Menlo Park, CA (United States); Safranek, James [SLAC National Accelerator Lab., Menlo Park, CA (United States); Wu, Juhao [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2013-10-01

    We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which considers the random noise in bracketing the minimum and uses parabolic fit of data points that uniformly sample the bracketed zone. Moreover, it is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
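
    The noise-robust line optimizer that distinguishes RCDS from a plain Powell search can be illustrated in isolation: sample the bracketed interval uniformly, fit a parabola to the noisy objective values, and take the fitted minimum. The sketch below shows only that ingredient under illustrative assumptions and is not the published RCDS code.

```python
# One ingredient of a noise-robust line search: instead of trusting single
# noisy evaluations, sample the bracketed interval uniformly, fit a parabola
# to all samples, and return its minimum (clipped to the bracket).
import numpy as np

def robust_line_min(f, lo, hi, n_samples=15):
    x = np.linspace(lo, hi, n_samples)
    y = np.array([f(xi) for xi in x])
    a, b, c = np.polyfit(x, y, 2)          # least-squares parabola a*x^2+b*x+c
    if a <= 0:                             # no upward curvature: fall back to
        return float(x[np.argmin(y)])      # the best sampled point
    return float(np.clip(-b / (2 * a), lo, hi))

rng = np.random.default_rng(3)
noisy = lambda x: (x - 1.3) ** 2 + rng.normal(0, 0.05)   # noisy 1D objective
print(robust_line_min(noisy, -2.0, 4.0))                  # expected near 1.3
```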

  12. Spatial updating grand canonical Monte Carlo algorithms for fluid simulation: generalization to continuous potentials and parallel implementation.

    Science.gov (United States)

    O'Keeffe, C J; Ren, Ruichao; Orkoulas, G

    2007-11-21

    Spatial updating grand canonical Monte Carlo algorithms are generalizations of random and sequential updating algorithms for lattice systems to continuum fluid models. The elementary steps, insertions or removals, are constructed by generating points in space either at random (random updating) or in a prescribed order (sequential updating). These algorithms have previously been developed only for systems of impenetrable spheres for which no particle overlap occurs. In this work, spatial updating grand canonical algorithms are generalized to continuous, soft-core potentials to account for overlapping configurations. Results on two- and three-dimensional Lennard-Jones fluids indicate that spatial updating grand canonical algorithms, both random and sequential, converge faster than standard grand canonical algorithms. Spatial algorithms based on sequential updating not only exhibit the fastest convergence but also are ideal for parallel implementation due to the absence of strict detailed balance and the nature of the updating that minimizes interprocessor communication. Parallel simulation results for three-dimensional Lennard-Jones fluids show a substantial reduction of simulation time for systems of moderate and large size. The efficiency improvement by parallel processing through domain decomposition is always in addition to the efficiency improvement by sequential updating.

  13. A fully robust PARAFAC method for analyzing fluorescence data

    DEFF Research Database (Denmark)

    Engelen, Sanne; Frosch, Stina; Jørgensen, Bo

    2009-01-01

    and Rayleigh scatter. Recently, a robust PARAFAC method that circumvents the harmful effects of outlying samples has been developed. For removing the scatter effects on the final PARAFAC model, different techniques exist. More recently, an automated scatter identification tool has been constructed. However......, there still exists no robust method for handling fluorescence data encountering both outlying EEM landscapes and scatter. In this paper, we present an iterative algorithm where the robust PARAFAC method and the scatter identification tool are alternately performed. A fully automated robust PARAFAC method...

  14. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  15. Sparse alignment for robust tensor learning.

    Science.gov (United States)

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.

  16. Neuromorphic Configurable Architecture for Robust Motion Estimation

    Directory of Open Access Journals (Sweden)

    Guillermo Botella

    2008-01-01

    Full Text Available The robustness of the human visual system recovering motion estimation in almost any visual situation is enviable, performing enormous calculation tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Currently, there are several optical flow algorithms, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows nature templates as inspiration in the design of architectures and makes use of a specific model of human visual motion perception: Multichannel Gradient Model (McGM. This novel customizable architecture of a neuromorphic robust optical flow can be constructed with FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes the resource usage and performance data, and the comparison with actual systems. This hardware has many application fields like object recognition, navigation, or tracking in difficult environments due to its bioinspired and robustness properties.

  17. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

    This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect the long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  18. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  19. Competition improves robustness against loss of information

    Directory of Open Access Journals (Sweden)

    Arash eKermani Kolankeh

    2015-03-01

    Full Text Available A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model response properties to the recorded responses from biological organisms. However, as several algorithms were able to demonstrate some degree of similarity to biological data based on the existing criteria, we focus on the robustness against loss of information in the form of occlusions as an additional constraint for better understanding the algorithmic level of early vision in the brain. We try to investigate the influence of competition mechanisms on the robustness. Therefore, we compared four methods employing different competition mechanisms, namely, independent component analysis, non-negative matrix factorization with sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of those methods is known to be capable of developing receptive fields comparable to those of V1 simple-cells. Since measuring the robustness of methods having simple-cell like receptive fields against occlusion is difficult, we measure the robustness using the classification accuracy on the MNIST handwritten digit dataset. For this, we trained all methods on the training set of the MNIST handwritten digits dataset and tested them on a MNIST test set with different levels of occlusions. We observe that methods which employ competitive mechanisms have higher robustness against loss of information. Also, the kind of competition mechanism plays an important role in robustness. Global feedback inhibition as employed in predictive coding/biased competition has an advantage compared to local lateral inhibition learned by an anti-Hebb rule.

  20. Robust boosting via convex optimization

    Science.gov (United States)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. o How to adapt boosting to regression problems
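
    As background for the boosting discussion, a minimal AdaBoost loop with decision stumps on a toy one-dimensional data set is sketched below; it is generic textbook boosting, not the regularized, noise-robust variants developed in this work.

```python
# Minimal AdaBoost with decision stumps: each round fits a weighted threshold
# classifier, computes its weighted error, and re-weights the training examples
# so that the next stump focuses on the mistakes. Generic textbook boosting.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = np.where(np.abs(X) < 0.5, 1, -1)              # +1 inside, -1 outside

def fit_stump(X, y, w):
    """Best threshold/polarity decision stump under example weights w."""
    best = None
    for thr in np.unique(X):
        for pol in (1, -1):
            pred = np.where(X < thr, pol, -pol)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

w = np.full(len(X), 1 / len(X))
stumps, alphas = [], []
for _ in range(20):
    err, thr, pol = fit_stump(X, y, w)
    err = max(err, 1e-12)
    alpha = 0.5 * np.log((1 - err) / err)          # hypothesis weight
    pred = np.where(X < thr, pol, -pol)
    w = w * np.exp(-alpha * y * pred)              # re-weight examples
    w = w / w.sum()
    stumps.append((thr, pol))
    alphas.append(alpha)

F = sum(a * np.where(X < thr, pol, -pol) for a, (thr, pol) in zip(alphas, stumps))
print("training accuracy:", np.mean(np.sign(F) == y))
```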

  1. Robust online Hamiltonian learning

    International Nuclear Information System (INIS)

    Granade, Christopher E; Ferrie, Christopher; Wiebe, Nathan; Cory, D G

    2012-01-01

    In this work we combine two distinct machine learning methodologies, sequential Monte Carlo and Bayesian experimental design, and apply them to the problem of inferring the dynamical parameters of a quantum system. We design the algorithm with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online (during experimental data collection), avoiding the need for storage and post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment-to-experiment, and also when additional noise processes are present and unknown. The algorithm also numerically estimates the Cramer–Rao lower bound, certifying its own performance. (paper)

  2. Robust plasmonic substrates

    DEFF Research Database (Denmark)

    Kostiučenko, Oksana; Fiutowski, Jacek; Tamulevicius, Tomas

    2014-01-01

    Robustness is a key issue for the applications of plasmonic substrates such as tip-enhanced Raman spectroscopy, surface-enhanced spectroscopies, enhanced optical biosensing, optical and optoelectronic plasmonic nanosensors and others. A novel approach for the fabrication of robust plasmonic...... substrates is presented, which relies on the coverage of gold nanostructures with diamond-like carbon (DLC) thin films of thicknesses 25, 55 and 105 nm. DLC thin films were grown by direct hydrocarbon ion beam deposition. In order to find the optimum balance between optical and mechanical properties...

  3. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  4. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations.

    Science.gov (United States)

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-07-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
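
    The temperature replica-exchange step implemented by such packages can be stated very compactly: a swap between replicas i and j is accepted with the Metropolis probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The sketch below shows only that bookkeeping with made-up energies and says nothing about GENESIS internals.

```python
# Temperature replica-exchange (T-REMD) bookkeeping: neighbouring replicas i, j
# swap configurations with Metropolis probability
#     min(1, exp[(beta_i - beta_j) * (E_i - E_j)]).
# The energies below are made up for illustration; no MD engine is involved.
import math
import random

kB = 0.0019872041                     # kcal/(mol*K), illustrative units
temps = [300.0, 320.0, 341.0, 364.0]  # replica temperature ladder
energies = [-1203.4, -1189.7, -1175.2, -1158.9]   # hypothetical potential energies
betas = [1.0 / (kB * T) for T in temps]

random.seed(0)
for i in range(len(temps) - 1):
    j = i + 1
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if delta >= 0 or random.random() < math.exp(delta):
        energies[i], energies[j] = energies[j], energies[i]   # swap configurations
        print(f"swap accepted between replicas {i} and {j}")
    else:
        print(f"swap rejected between replicas {i} and {j}")
```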

  5. PET functional volume delineation: a robustness and repeatability study

    International Nuclear Information System (INIS)

    Hatt, Mathieu; Cheze-le Rest, Catherine; Albarghach, Nidal; Pradier, Olivier; Visvikis, Dimitris

    2011-01-01

    Current state-of-the-art algorithms for functional uptake volume segmentation in PET imaging consist of threshold-based approaches, whose parameters often require specific optimization for a given scanner and associated reconstruction algorithms. Different advanced image segmentation approaches previously proposed and extensively validated, such as among others fuzzy C-means (FCM) clustering, or fuzzy locally adaptive bayesian (FLAB) algorithm have the potential to improve the robustness of functional uptake volume measurements. The objective of this study was to investigate robustness and repeatability with respect to various scanner models, reconstruction algorithms and acquisition conditions. Robustness was evaluated using a series of IEC phantom acquisitions carried out on different PET/CT scanners (Philips Gemini and Gemini Time-of-Flight, Siemens Biograph and GE Discovery LS) with their associated reconstruction algorithms (RAMLA, TF MLEM, OSEM). A range of acquisition parameters (contrast, duration) and reconstruction parameters (voxel size) were considered for each scanner model, and the repeatability of each method was evaluated on simulated and clinical tumours and compared to manual delineation. For all the scanner models, acquisition parameters and reconstruction algorithms considered, the FLAB algorithm demonstrated higher robustness in delineation of the spheres with low mean errors (10%) and variability (5%), with respect to threshold-based methodologies and FCM. The repeatability provided by all segmentation algorithms considered was very high with a negligible variability of <5% in comparison to that associated with manual delineation (5-35%). The use of advanced image segmentation algorithms may not only allow high accuracy as previously demonstrated, but also provide a robust and repeatable tool to aid physicians as an initial guess in determining functional volumes in PET. (orig.)
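
    The threshold-based baseline that such advanced methods are compared against can be sketched in a few lines: keep voxels above a fixed fraction of the maximum uptake and retain the connected component containing that maximum. The 40% fraction and the synthetic image below are illustrative; this is neither FLAB nor FCM.

```python
# Threshold-based functional volume delineation (the baseline the abstract
# compares against): keep voxels above a fixed fraction of the maximum uptake
# and retain only the connected component that contains the maximum.
import numpy as np
from scipy import ndimage

# synthetic PET-like volume: warm background, one hot sphere, Gaussian noise
z, y, x = np.mgrid[0:40, 0:40, 0:40]
img = 1.0 + 4.0 * ((x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 8 ** 2)
img = img + np.random.default_rng(0).normal(0, 0.2, img.shape)

frac = 0.40
mask = img >= frac * img.max()
labels, _ = ndimage.label(mask)
seed_label = labels[np.unravel_index(np.argmax(img), img.shape)]
volume_voxels = int(np.sum(labels == seed_label))
print("delineated volume:", volume_voxels, "voxels (true sphere ~",
      int(4 / 3 * np.pi * 8 ** 3), "voxels)")
```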

  6. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  7. Robust hashing for 3D models

    Science.gov (United States)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. With the increment of their usage, their number and thereby the challenge to correctly identify them increases. Content identification is commonly done by cryptographic hashes. However, they fail as a solution in application scenarios such as computer aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. conversion or compression operations, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intending to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR), also the probability calculation of hash collision is included. The introduced hashing algorithm is kept adaptive e.g. in hash length, to serve as a proper tool for all applications in practice.

  8. Robust surgery loading

    NARCIS (Netherlands)

    Hans, Elias W.; Wullink, Gerhard; van Houdenhoven, Mark; Kazemier, Geert

    2008-01-01

    We consider the robust surgery loading problem for a hospital’s operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus cancelled patients. This

  9. Robustness Envelopes of Networks

    NARCIS (Netherlands)

    Trajanovski, S.; Martín-Hernández, J.; Winterbach, W.; Van Mieghem, P.

    2013-01-01

    We study the robustness of networks under node removal, considering random node failure, as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case

  10. Robust Circle Detection Using Harmony Search

    Directory of Open Access Journals (Sweden)

    Jaco Fourie

    2017-01-01

    Full Text Available Automatic circle detection is an important element of many image processing algorithms. Traditionally the Hough transform has been used to find circular objects in images but more modern approaches that make use of heuristic optimisation techniques have been developed. These are often used in large complex images where the presence of noise or limited computational resources make the Hough transform impractical. Previous research on the use of the Harmony Search (HS) in circle detection showed that HS is an attractive alternative to many of the modern circle detectors based on heuristic optimisers like genetic algorithms and simulated annealing. We propose improvements to this work that enable our algorithm to robustly find multiple circles in larger data sets and still work on realistic images that are heavily corrupted by noisy edges.

  11. Robust Adaptive Thresholder For Document Scanning Applications

    Science.gov (United States)

    Hsing, To R.

    1982-12-01

    In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory type algorithm which can dynamically update the black and white reference level to optimize a local adaptive threshold function. High image quality can be obtained with this algorithm for different types of simulated test patterns. The software algorithm is described and experimental results are presented to illustrate the procedure. Results also show that the techniques described here can be used for real-time signal processing in varied applications.
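
    A generic local-mean adaptive threshold (not the memory-type algorithm of the paper) already illustrates why adaptivity helps with shading: each pixel is compared with the mean of its neighbourhood minus a small offset, so a slowly varying background is tracked instead of overwhelming a single global threshold.

```python
# Generic local-mean adaptive thresholding for document images: each pixel is
# binarized against the mean of its local window minus a small offset, so slow
# shading/background variations are tracked instead of breaking a single global
# threshold. Standard technique, not the memory-type algorithm of the paper.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
h, w = 128, 256
# synthetic "scanned page": dark text strokes on a background with shading
page = np.full((h, w), 200.0)
page[40:50, 20:240] = 60.0                       # a line of "text"
page[80:90, 20:240] = 60.0
shade = np.linspace(0, -80, w)[None, :]          # optical shading across the page
img = page + shade + rng.normal(0, 3, (h, w))

window, offset = 31, 10.0
local_mean = ndimage.uniform_filter(img, size=window)
binary = img < (local_mean - offset)             # True where ink is detected

print("ink pixels found:", int(binary.sum()), "of", 2 * 10 * 220, "true ink pixels")
```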

  12. Approximate truncation robust computed tomography—ATRACT

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Maier, Andreas

    2013-01-01

    We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm targets reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)

  13. An iterative algorithm for solving the multidimensional neutron diffusion nodal method equations on parallel computers

    International Nuclear Information System (INIS)

    Kirk, B.L.; Azmy, Y.Y.

    1992-01-01

    In this paper the one-group, steady-state neutron diffusion equation in two-dimensional Cartesian geometry is solved using the nodal integral method. The discrete variable equations comprise loosely coupled sets of equations representing the nodal balance of neutrons, as well as neutron current continuity along rows or columns of computational cells. An iterative algorithm that is more suitable for solving large problems concurrently is derived based on the decomposition of the spatial domain and is accelerated using successive overrelaxation. This algorithm is very well suited for parallel computers, especially since the spatial domain decomposition occurs naturally, so that the number of iterations required for convergence does not depend on the number of processors participating in the calculation. Implementation of the authors' algorithm on the Intel iPSC/2 hypercube and Sequent Balance 8000 parallel computer is presented, and measured speedup and efficiency for test problems are reported. The results suggest that the efficiency of the hypercube quickly deteriorates when many processors are used, while the Sequent Balance retains very high efficiency for a comparable number of participating processors. This leads to the conjecture that message-passing parallel computers are not as well suited for this algorithm as shared-memory machines
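
    As a toy illustration of a spatial domain-decomposition iteration of the kind discussed above (not the nodal integral method or its parallel implementation), the sketch below solves a one-dimensional, one-group fixed-source diffusion problem on two overlapping subdomains with an alternating Schwarz iteration. The grid size, cross sections and overlap width are illustrative assumptions.

      import numpy as np

      def solve_subdomain(phi_left, phi_right, n, h, D, sigma_a, S):
          # Direct solve of -D*phi'' + sigma_a*phi = S on one subdomain with
          # Dirichlet pseudo-boundary values phi_left and phi_right.
          a, b = 2.0 * D / h**2 + sigma_a, -D / h**2
          A = (np.diag(np.full(n, a)) + np.diag(np.full(n - 1, b), 1)
               + np.diag(np.full(n - 1, b), -1))
          rhs = np.full(n, S)
          rhs[0] -= b * phi_left
          rhs[-1] -= b * phi_right
          return np.linalg.solve(A, rhs)

      def schwarz_two_domains(n=100, overlap=10, D=1.0, sigma_a=0.2, S=1.0, tol=1e-8, max_iter=200):
          # Alternating Schwarz iteration on (0, 1) with phi = 0 at both outer boundaries.
          h = 1.0 / (n + 1)
          phi = np.zeros(n)
          mid = n // 2
          nr, nl = mid + overlap, mid - overlap        # pseudo-boundary (interface) indices
          for it in range(max_iter):
              phi_old = phi.copy()
              # Subdomain 1: right pseudo-boundary value taken from the current global iterate.
              phi[:nr] = solve_subdomain(0.0, phi[nr], nr, h, D, sigma_a, S)
              # Subdomain 2: left pseudo-boundary value taken from the freshly updated iterate.
              phi[nl:] = solve_subdomain(phi[nl - 1], 0.0, n - nl, h, D, sigma_a, S)
              if np.max(np.abs(phi - phi_old)) < tol:
                  return phi, it + 1
          return phi, max_iter

      if __name__ == "__main__":
          flux, iters = schwarz_two_domains()
          print("converged in", iters, "Schwarz iterations, peak flux:", round(float(flux.max()), 4))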

  14. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.; Secomb, Timothy W.; Pries, Axel R.; Smith, Nicolas P.; Shipley, Rebecca J.

    2015-01-01

    algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules

  16. Synthesis of fixed-architecture, robust H2 and H∞ controllers

    Directory of Open Access Journals (Sweden)

    Emmanuel G. Collins

    2000-01-01

    Full Text Available This paper discusses and compares the synthesis of fixed-architecture controllers that guarantee either robust H2 or H∞ performance. The synthesis is accomplished by solving a Riccati equation feasibility problem resulting from mixed structured singular value theory with Popov multipliers. Whereas the algorithm for robust H2 performance had been previously implemented, a major contribution described in this paper is the implementation of the much more complex algorithm for robust H∞ performance. Both robust H2 and H∞, controllers are designed for a benchmark problem and a comparison is made between the resulting controllers and control algorithms. It is found that the numerical algorithm for robust H∞ performance is much more computationally intensive than that for robust H2 performance. Both controllers are found to have smaller bandwidth, lower control authority and to be less conservative than controllers obtained using complex structured singular value synthesis.

  17. Research in Parallel Algorithms and Software for Computational Aerosciences

    Science.gov (United States)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  18. A robust classic.

    Science.gov (United States)

    Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus

    2011-01-01

    In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs that can be explained as a result of unbiased but noisy learning.

  19. Robust Airline Schedules

    OpenAIRE

    Eggenberg, Niklaus; Salani, Matteo; Bierlaire, Michel

    2010-01-01

    Due to economic pressure, industries tend, when planning, to focus on optimizing the expected profit or the yield. The consequence of highly optimized solutions is an increased sensitivity to uncertainty. This generates additional "operational" costs, incurred by possible modifications of the original plan to be performed when reality does not reflect what was expected in the planning phase. The modern research trend focuses on "robustness" of solutions instead of yield or profit. Although ro...

  20. The Crane Robust Control

    Directory of Open Access Journals (Sweden)

    Marek Hicar

    2004-01-01

    Full Text Available The article is about a control design for the complete structure of the crane: crab, bridge and crane uplift. The most important unknown parameters for the simulations are the burden weight and the length of the hanging rope. We use robust control for the crab and bridge to ensure adaptivity to burden weight and rope length. Robust control is designed for the current control of the crab and bridge; for this it is necessary to know the range of the unknown parameters. The whole robust range is split into subintervals, and after correct identification of the unknown parameters the most suitable robust controllers are chosen. The most important condition for the crab and bridge motion is to avoid burden swinging in the final position. The crab and bridge drives are realised by asynchronous motors fed from frequency converters. The crane uplift uses a burden weight observer in combination with the uplift, crab and bridge drives, cooperating through their parameters: burden weight, rope length, and crab and bridge position. The controllers are designed by the state control method, preferably with a disturbance observer which identifies the burden weight as a disturbance. The system works in both modes, with an empty hook as well as at maximum load: burden uplifting and dropping down.

  1. A Hybrid Algorithm for Optimizing Multi- Modal Functions

    Institute of Scientific and Technical Information of China (English)

    Li Qinghua; Yang Shida; Ruan Youlin

    2006-01-01

    A new genetic algorithm based on musical performance is presented. Its novelty is that it mimics the musical process of searching for a perfect state of harmony, which greatly increases its robustness and gives it a new interpretation. Combining the advantages of this new genetic algorithm, the simplex algorithm and tabu search, a hybrid algorithm is proposed. In order to verify the effectiveness of the hybrid algorithm, it is applied to solving some typical numerical function optimization problems which are poorly solved by traditional genetic algorithms. The experimental results show that the hybrid algorithm is fast and reliable.

  2. Improved numerical algorithm and experimental validation of a system thermal-hydraulic/CFD coupling method for multi-scale transient simulations of pool-type reactors

    International Nuclear Information System (INIS)

    Toti, A.; Vierendeels, J.; Belloni, F.

    2017-01-01

    Highlights: • A system thermal-hydraulic/CFD coupling methodology is proposed for high-fidelity transient flow analyses. • The method is based on domain decomposition and an implicit numerical scheme. • A novel interface Quasi-Newton algorithm is implemented to improve stability and convergence rate. • Preliminary validation analyses on the TALL-3D experiment. - Abstract: The paper describes the development and validation of a coupling methodology between the best-estimate system thermal-hydraulic code RELAP5-3D and the CFD code FLUENT, conceived for high-fidelity plant-scale safety analyses of pool-type reactors. The computational tool is developed to assess the impact of three-dimensional phenomena occurring in accidental transients such as loss of flow (LOF) in the research reactor MYRRHA, currently in the design phase at the Belgian Nuclear Research Centre, SCK•CEN. A partitioned, implicit domain decomposition coupling algorithm is implemented, in which the coupled domains exchange thermal-hydraulics variables at coupling boundary interfaces. Numerical stability and interface convergence rates are improved by a novel interface Quasi-Newton algorithm, which is compared in this paper with previously tested numerical schemes. The developed computational method has been assessed for validation purposes against the experiment performed at the test facility TALL-3D, operated by the Royal Institute of Technology (KTH) in Sweden. This paper details the results of the simulation of a loss of forced convection test, showing the capability of the developed methodology to predict transients influenced by local three-dimensional phenomena.
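
    The interface quasi-Newton idea can be illustrated with a small multi-secant (Anderson/IQN-ILS-type) acceleration of a generic interface fixed-point map; the sketch below is a hypothetical single-vector version in which H stands for one pass through both coupled solvers exchanging interface data. The under-relaxation factor and the toy linear operator in the demo are illustrative assumptions, not the RELAP5-3D/FLUENT coupling itself.

      import numpy as np

      def interface_quasi_newton(H, x0, tol=1e-10, max_iter=50, omega=0.1):
          # Accelerates the interface fixed-point iteration x = H(x) with a
          # least-squares multi-secant (quasi-Newton) update on the residual r = H(x) - x.
          x = np.asarray(x0, dtype=float).copy()
          x_tilde = H(x)
          r = x_tilde - x
          V, W = [], []                              # residual / state difference histories
          r_prev, xt_prev = r.copy(), x_tilde.copy()
          x = x + omega * r                          # first step: plain under-relaxation
          for k in range(1, max_iter):
              x_tilde = H(x)
              r = x_tilde - x
              if np.linalg.norm(r) < tol:
                  return x_tilde, k
              V.append(r - r_prev)                   # new residual-difference column
              W.append(x_tilde - xt_prev)            # matching state-difference column
              r_prev, xt_prev = r.copy(), x_tilde.copy()
              alpha, *_ = np.linalg.lstsq(np.column_stack(V), -r, rcond=None)
              x = x_tilde + np.column_stack(W) @ alpha   # quasi-Newton interface update
          return x_tilde, max_iter

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          n = 8
          Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
          A = Q @ np.diag(np.linspace(0.1, 0.95, n)) @ Q.T    # slowly converging fixed-point map
          b = rng.normal(size=n)
          H = lambda x: A @ x + b
          x_star = np.linalg.solve(np.eye(n) - A, b)
          x, iters = interface_quasi_newton(H, np.zeros(n))
          print("quasi-Newton iterations:", iters, " error:", float(np.linalg.norm(x - x_star)))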

  3. Design and implementation of robust controllers for a gait trainer.

    Science.gov (United States)

    Wang, F C; Yu, C H; Chou, T Y

    2009-08-01

    This paper applies robust algorithms to control an active gait trainer for children with walking disabilities. Compared with traditional rehabilitation procedures, in which two or three trainers are required to assist the patient, a motor-driven mechanism was constructed to improve the efficiency of the procedures. First, a six-bar mechanism was designed and constructed to mimic the trajectory of children's ankles in walking. Second, system identification techniques were applied to obtain system transfer functions at different operating points by experiments. Third, robust control algorithms were used to design H∞ robust controllers for the system. Finally, the designed controllers were implemented to verify experimentally the system performance. From the results, the proposed robust control strategies are shown to be effective.

  4. Robust precision alignment algorithm for micro tube laser forming

    NARCIS (Netherlands)

    Folkersma, Ger; Brouwer, Dannis Michel; Römer, Gerardus Richardus, Bernardus, Engelina; Herder, Justus Laurens

    2016-01-01

    Tube laser forming on a small diameter tube can be used as a high precision actuator to permanently align small (optical)components. Applications, such as the alignment of optical fibers to photonic integrated circuits, often require sub-micron alignment accuracy. Although the process causes

  5. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Chen Qu

    2017-09-01

    Full Text Available The CMOS (Complementary Metal-Oxide-Semiconductor) is a new type of solid-state image sensor device widely used in object tracking, object recognition, intelligent navigation fields, and so on. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing a reduction in image contrast, color distortion problems, and so on. In view of this, we propose a novel dehazing approach based on a local consistent Markov random field (MRF) framework. The neighboring clique in traditional MRF is extended to the non-neighboring clique, which is defined on local consistent blocks based on two clues, where both the atmospheric light and transmission map satisfy the character of local consistency. In this framework, our model can strengthen the restriction of the whole image while incorporating more sophisticated statistical priors, resulting in more expressive power of modeling, thus effectively solving inadequate detail recovery and alleviating color distortion. Moreover, the local consistent MRF framework can obtain details while maintaining better results for dehazing, which effectively improves the image quality captured by the CMOS image sensor. Experimental results verified that the method proposed has the combined advantages of detail recovery and color preservation.
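
    The record above proposes a locally consistent MRF model; the sketch below instead shows the much simpler dark-channel-prior baseline (He et al.), which uses the same atmospheric-light and transmission-map quantities, assuming SciPy is available. The patch size, the omega and t0 constants and the synthetic test scene are illustrative assumptions, not the authors' method.

      import numpy as np
      from scipy.ndimage import minimum_filter

      def dark_channel(img, patch=15):
          # Per-pixel minimum over the colour channels and a local square patch.
          return minimum_filter(img.min(axis=2), size=patch)

      def dehaze_dark_channel(img, omega=0.95, t0=0.1, patch=15):
          # Baseline dehazing with the haze model I = J*t + A*(1 - t).
          dark = dark_channel(img, patch)
          n = max(1, int(dark.size * 0.001))                     # brightest 0.1% dark-channel pixels
          idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
          A = img[idx].mean(axis=0)                              # atmospheric light estimate
          t = 1.0 - omega * dark_channel(img / A, patch)         # transmission map estimate
          t = np.clip(t, t0, 1.0)[..., None]
          J = (img - A) / t + A                                  # recovered scene radiance
          return np.clip(J, 0.0, 1.0), t[..., 0]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          clean = rng.uniform(0.0, 0.6, size=(64, 64, 3))        # synthetic haze-free scene
          hazy = clean * 0.5 + 0.9 * (1.0 - 0.5)                 # add synthetic homogeneous haze
          dehazed, t_est = dehaze_dark_channel(hazy)
          print("mean estimated transmission:", round(float(t_est.mean()), 3))
          print("mean absolute recovery error:", round(float(np.abs(dehazed - clean).mean()), 3))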

  6. A fast and robust hepatocyte quantification algorithm including vein processing

    Directory of Open Access Journals (Sweden)

    Homeyer André

    2010-03-01

    Full Text Available Abstract Background Quantification of different types of cells is often needed for analysis of histological images. In our project, we compute the relative number of proliferating hepatocytes for the evaluation of the regeneration process after partial hepatectomy in normal rat livers. Results Our presented automatic approach for hepatocyte (HC quantification is suitable for the analysis of an entire digitized histological section given in form of a series of images. It is the main part of an automatic hepatocyte quantification tool that allows for the computation of the ratio between the number of proliferating HC-nuclei and the total number of all HC-nuclei for a series of images in one processing run. The processing pipeline allows us to obtain desired and valuable results for a wide range of images with different properties without additional parameter adjustment. Comparing the obtained segmentation results with a manually retrieved segmentation mask which is considered to be the ground truth, we achieve results with sensitivity above 90% and false positive fraction below 15%. Conclusions The proposed automatic procedure gives results with high sensitivity and low false positive fraction and can be applied to process entire stained sections.

  7. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology

    NARCIS (Netherlands)

    Woldegebriel, M.; Gonsalves, J.; van Asten, A.; Vivó-Truyols, G.

    2016-01-01

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically

  8. DWT-based blind and robust watermarking using SPIHT algorithm ...

    Indian Academy of Sciences (India)

    TOSHANLAL MEENPAL

    2018-02-07

  9. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    Science.gov (United States)

    2016-04-01

  10. Numerical evaluation of a robust self-triggered MPC algorithm

    NARCIS (Netherlands)

    Brunner, F.D.; Heemels, W.P.M.H.; Allgöwer, F.

    2016-01-01

    We present numerical examples demonstrating the efficacy of a recently proposed self-triggered model predictive control scheme for disturbed linear discrete-time systems with hard constraints on the input and state. In order to reduce the amount of communication between the controller and the

  11. Robustness and Optimization of Complex Networks : Reconstructability, Algorithms and Modeling

    NARCIS (Netherlands)

    Liu, D.

    2013-01-01

    The infrastructure networks, including the Internet, telecommunication networks, electrical power grids, transportation networks (road, railway, waterway, and airway networks), gas networks and water networks, are becoming more and more complex. The complex infrastructure networks are crucial to our

  12. Algoritmo de recocido simulado para la descomposición robusta del horizonte de tiempo en problemas de planeación de producción A simulated annealing algorithm for the robust decomposition of temporal horizons in production planning problems

    Directory of Open Access Journals (Sweden)

    José Fidel Torres Delgado

    2007-06-01

    Full Text Available The problem of robust decomposition of temporal horizons in production planning was first introduced by Torres [1]. Later, in [2], Torres suggests starting with an integer solution found by dynamic programming, and then using a simulated annealing algorithm to improve it. According to [2], more needs to be known about the impact of the control parameters in the simulated annealing algorithm, and their sensitivity with respect to the quality of the solutions. In this work we develop this idea and analyze in depth the ability of the simulated annealing algorithm to improve the initial solution. As a result of the computational experiments conducted, we determined that the cooling scheme and the cooling rate have a significant effect on the quality of the final solution. It was also established that the solution found depends strongly on the characteristics of the operations plan, with better solutions found for plans with shorter time horizons.

  13. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some effort is made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)

  14. Robust linear registration of CT images using random regression forests

    Science.gov (United States)

    Konukoglu, Ender; Criminisi, Antonio; Pathak, Sayan; Robertson, Duncan; White, Steve; Haynor, David; Siddiqui, Khan

    2011-03-01

    Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies [1], cross-modality fusion [2], and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness [4,5]. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures - represented as axis-aligned bounding boxes [6]. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state-of-the-art Elastix toolbox [7]. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.

  15. Asymmetric forecasting and commitment policy in a robust control problem

    OpenAIRE

    Taro Ikeda

    2013-01-01

    This paper provides results regarding asymmetric forecasting and commitment monetary policy with a robust control algorithm. Previous studies provide no clarification of the connection between asymmetric preference and robust commitment policy. Three results emerge from general equilibrium modeling with asymmetric preference: (i) the condition for system stability implies an average inflation bias with respect to asymmetry; (ii) the effect of asymmetry can be mitigated if policy mak...

  16. Robust Fringe Projection Profilometry via Sparse Representation.

    Science.gov (United States)

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail to perform if the captured fringe images have a complex scene, such as multiple or occluded objects. This introduces great difficulty to the phase unwrapping process of an FPP system and can result in serious distortion in the final reconstructed 3D model. The proposed algorithm encodes the period order information, which is essential to phase unwrapping, into texture patterns and embeds them into the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. This information is then used to assist the phase unwrapping process in dealing with the different artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well whether the fringe images have a simple or a complex scene, or are affected by the ambient lighting of the working environment.

  17. Towards Robust Multiagent Plans

    Science.gov (United States)

    2016-01-20

  18. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  20. Track filtering by robust neural network

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Kisel', I.V.; Konotopskaya, E.V.; Ososkov, G.A.

    1993-01-01

    In the present paper we study the following problems of track information extraction by the artificial neural network (ANN) rotor model: providing an initial ANN configuration by an algorithm general enough to be applicable to any discrete detector in or out of a magnetic field; robustness to heavily contaminated raw data (up to 100% signal-to-noise ratio); and stability with respect to growing event multiplicity. These problems were addressed by corresponding innovations of our model, namely: by a special one-dimensional histogramming, by multiplying the weights by a specially designed robust multiplier, and by replacing the simulated annealing schedule by ANN dynamics with an optimally fixed temperature. Our approach is valid for both circular and straight (non-magnetic) tracks and is tested on 2D simulated data contaminated by 100% noise points distributed uniformly. To be closer to reality in our simulation, we keep the parameters of the cylindrical spectrometer ARES. 12 refs.; 9 figs

  1. Robustness analysis method for orbit control

    Science.gov (United States)

    Zhang, Jingrui; Yang, Keying; Qi, Rui; Zhao, Shuge; Li, Yanyan

    2017-08-01

    Satellite orbits require periodical maintenance due to the presence of perturbations. However, random errors caused by inaccurate orbit determination and thrust implementation may lead to failure of the orbit control strategy. Therefore, it is necessary to analyze the robustness of the orbit control methods. Feasible strategies which are tolerant to errors of a certain magnitude can be developed to perform reliable orbit control for the satellite. In this paper, first, the orbital dynamic model is formulated by Gauss' form of the planetary equation using the mean orbit elements; the atmospheric drag and the Earth's non-spherical perturbations are taken into consideration in this model. Second, an impulsive control strategy employing the differential correction algorithm is developed to maintain the satellite trajectory parameters in given ranges. Finally, the robustness of the impulsive control method is analyzed through Monte Carlo simulations while taking orbit determination error and thrust error into account.
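
    The Monte Carlo robustness analysis can be illustrated generically; the sketch below perturbs a nominal impulsive correction of a single scalar orbital parameter with random orbit-determination and thrust-implementation errors and estimates the probability that the parameter stays inside its deadband. The one-dimensional station-keeping model and all error magnitudes are illustrative assumptions, not the paper's dynamics.

      import numpy as np

      def maintenance_success_rate(n_trials=20000, drift=-0.02, horizon=30.0, deadband=0.5,
                                   sigma_nav=0.05, sigma_thrust=0.03, cycles=12, seed=0):
          # Monte Carlo estimate of the probability that impulsive corrections keep a
          # scalar orbital parameter inside +/- deadband over a number of control cycles.
          rng = np.random.default_rng(seed)
          successes = 0
          for _ in range(n_trials):
              x, ok = 0.0, True                              # true parameter offset
              for _ in range(cycles):
                  measured = x + rng.normal(0.0, sigma_nav)  # orbit-determination error
                  dv = -measured - drift * horizon / 2.0     # nominal correction impulse
                  dv *= 1.0 + rng.normal(0.0, sigma_thrust)  # thrust-implementation error
                  x += dv                                    # apply the imperfect impulse
                  x += drift * horizon                       # coast with deterministic drift
                  if abs(x) > deadband:
                      ok = False
                      break
              successes += ok
          return successes / n_trials

      if __name__ == "__main__":
          for s_nav in (0.02, 0.05, 0.10):
              p = maintenance_success_rate(sigma_nav=s_nav)
              print(f"navigation error sigma = {s_nav:.2f} -> success probability {p:.3f}")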

  2. Robust haptic large distance telemanipulation for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Heck, D.J.F., E-mail: d.j.f.heck@tue.nl [Eindhoven University of Technology, Department of Mechanical Engineering, Eindhoven (Netherlands); Heemskerk, C.J.M.; Koning, J.F. [Heemskerk Innovative Technologies, Sassenheim (Netherlands); Abbasi, A.; Nijmeijer, H. [Eindhoven University of Technology, Department of Mechanical Engineering, Eindhoven (Netherlands)

    2013-10-15

    Highlights: • ITER remote handling maintenance can be controlled safely over a large distance. • Bilateral teleoperation experiments were performed in a local network. • Wave variables make the controller robust against constant communication delays. • Master and slave position synchronization guaranteed by proportional action. -- Abstract: During shutdowns, maintenance crews are expected to work in 24/6 shifts to perform critical remote handling maintenance tasks on the ITER system. In this article, we investigate the possibility to safely perform these haptic maintenance tasks remotely from control stations located anywhere around the world. To guarantee stability in time delayed bilateral teleoperation, the symmetric position tracking controller using wave variables is selected. This algorithm guarantees robustness against communication delays, can eliminate wave reflections and provide position synchronization of the master and slave devices. Experiments have been conducted under realistic local network bandwidth, latency and jitter constraints. They show sufficient transparency even for substantial communication delays.

  3. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  4. Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties

    Science.gov (United States)

    Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing

    This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.

  5. Research on robust optimization of emergency logistics network considering the time dependence characteristic

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun

    2017-06-01

    Considering the time dependence of the emergency logistics network and the complexity of the environment in which the network exists, this paper combines time-dependent network optimization theory with robust discrete optimization theory and builds a robust, time-dependent emergency logistics network optimization model that maximizes the timeliness of emergency logistics. On this basis, considering the complexity of the dynamic network and the time dependence of the edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the time dependence and robustness of the network. Finally, a case study has been carried out in order to test the validity of this robust optimization model and its algorithm, and the effect of different values of the regulation factor was analyzed, given the importance of the control factor in finding the optimal path. The analysis results show that the model and algorithm have good timeliness and strong robustness.

  6. Robust Learning Control Design for Quantum Unitary Transformations.

    Science.gov (United States)

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems including robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementations of quantum unitary transformations.

  7. Robustness of Populations in Stochastic Environments

    DEFF Research Database (Denmark)

    Gießen, Christian; Kötzing, Timo

    2016-01-01

    We consider stochastic versions of OneMax and LeadingOnes and analyze the performance of evolutionary algorithms with and without populations on these problems. It is known that the (1+1) EA on OneMax performs well in the presence of very small noise, but poorly for higher noise levels. We extend...... the abilities of the (1+1) EA. Larger population sizes are even more beneficial; we consider both parent and offspring populations. In this sense, populations are robust in these stochastic settings....

  8. Robust PID Controller for a Pneumatic Actuator

    Directory of Open Access Journals (Sweden)

    Skarpetis Michael G.

    2016-01-01

    Full Text Available In this paper, the position control of a pneumatic actuator using a robust PID controller is presented. The parameters of the PID controller are computed using a Hurwitz invariability technique enriched with a simulated annealing algorithm. The nonlinear model involves uncertain parameters due to the linearization of the servo valve, variations of the initial volume of the cylinder and variation of the external load. The problem is proven to be solvable and the controller parameters are chosen to provide a suboptimal solution for tracking error minimization. Simulation results are presented for the nonlinear model.
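
    The record above tunes the PID gains with a Hurwitz invariability technique enriched by simulated annealing; the sketch below shows only the simulated-annealing ingredient, tuning (Kp, Ki, Kd) for a toy second-order plant by minimizing an integral-of-absolute-error cost. The plant model, cost function and cooling schedule are illustrative assumptions, not the paper's design procedure.

      import math
      import random

      def step_response_cost(gains, a=1.0, b=4.0, dt=0.01, t_end=5.0):
          # Integral of absolute tracking error of a PID loop around the toy plant
          # y'' + a*y' = b*u, simulated with explicit Euler integration.
          kp, ki, kd = gains
          y = yd = integ = cost = 0.0
          e_prev = 1.0
          for _ in range(int(t_end / dt)):
              e = 1.0 - y                                    # unit step reference
              integ += e * dt
              deriv = (e - e_prev) / dt
              u = kp * e + ki * integ + kd * deriv           # PID control law
              u = max(-10.0, min(10.0, u))                   # actuator saturation
              yd += (b * u - a * yd) * dt
              y += yd * dt
              cost += abs(e) * dt
              e_prev = e
          return cost

      def anneal_pid(iters=3000, t0=1.0, cooling=0.998, seed=0):
          # Simulated annealing over the PID gains (Kp, Ki, Kd).
          rng = random.Random(seed)
          gains = [1.0, 0.1, 0.1]
          cost = step_response_cost(gains)
          best, best_cost = list(gains), cost
          temp = t0
          for _ in range(iters):
              cand = [max(0.0, g + rng.gauss(0.0, 0.2)) for g in gains]
              c = step_response_cost(cand)
              if c < cost or rng.random() < math.exp((cost - c) / max(temp, 1e-9)):
                  gains, cost = cand, c                      # accept the (possibly worse) move
                  if c < best_cost:
                      best, best_cost = list(cand), c
              temp *= cooling                                # geometric cooling schedule
          return best, best_cost

      if __name__ == "__main__":
          g, c = anneal_pid()
          print("tuned gains (Kp, Ki, Kd):", [round(v, 3) for v in g], " IAE:", round(c, 4))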

  9. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
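
    A drastically simplified, one-component stand-in for the trimmed averaging idea is sketched below: unit-normalized observations are sign-aligned to the current estimate and combined with a per-coordinate trimmed mean, and the result is compared with standard PCA under gross outliers. This is an illustrative sketch only, not the Trimmed Grassmann Average algorithm of the record; the trimming fraction and test data are assumptions.

      import numpy as np

      def trimmed_average_direction(X, trim=0.2, n_iter=10):
          # Robust estimate of a dominant direction: sign-align the unit-normalized
          # observations to the current estimate, then take a per-coordinate trimmed mean.
          U = X / np.linalg.norm(X, axis=1, keepdims=True)   # observations on the unit sphere
          q = U[0].copy()
          k = int(trim * len(U))                             # samples trimmed from each tail
          for _ in range(n_iter):
              signs = np.sign(U @ q)
              signs[signs == 0] = 1.0                        # align antipodal points
              aligned = signs[:, None] * U
              q_new = np.sort(aligned, axis=0)[k:len(U) - k].mean(axis=0)
              q_new /= np.linalg.norm(q_new)
              if np.allclose(q_new, q):
                  break
              q = q_new
          return q

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          true_dir = np.array([1.0, 0.0, 0.0])
          X = rng.normal(size=(500, 1)) * true_dir + 0.05 * rng.normal(size=(500, 3))
          X[:50] += rng.uniform(-50.0, 50.0, size=(50, 3))   # gross pixel-style outliers
          robust = trimmed_average_direction(X)
          _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
          print("robust |cos| with true direction:", round(abs(float(robust @ true_dir)), 4))
          print("PCA    |cos| with true direction:", round(abs(float(Vt[0] @ true_dir)), 4))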

  10. Robust classification using mixtures of dependency networks

    DEFF Research Database (Denmark)

    Gámez, José A.; Mateo, Juan L.; Nielsen, Thomas Dyhre

    2008-01-01

    Dependency networks have previously been proposed as alternatives to e.g. Bayesian networks by supporting fast algorithms for automatic learning. Recently dependency networks have also been proposed as classification models, but as with e.g. general probabilistic inference, the reported speed-ups are often obtained at the expense of accuracy. In this paper we try to address this issue through the use of mixtures of dependency networks. To reduce learning time and improve robustness when dealing with data-sparse classes, we outline methods for reusing calculations across mixture components. Finally...

  11. Reconfigurable Robust Routing for Mobile Outreach Network

    Science.gov (United States)

    Lin, Ching-Fang

    2010-01-01

    The Reconfigurable Robust Routing for Mobile Outreach Network (R3MOON) provides advanced communications networking technologies suitable for the lunar surface environment and applications. The R3MOON technology is based on a detailed concept of operations tailored for lunar surface networks, and includes intelligent routing algorithms and wireless mesh network implementation on AGNC's Coremicro Robots. The product's features include an integrated communication solution incorporating energy efficiency and disruption-tolerance in a mobile ad hoc network, and a real-time control module to provide researchers and engineers a convenient tool for reconfiguration, investigation, and management.

  12. Robust indexing for automatic data collection

    International Nuclear Information System (INIS)

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-01-01

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation

  13. Project Robust Scheduling Based on the Scattered Buffer Technology

    Directory of Open Access Journals (Sweden)

    Nansheng Pang

    2018-04-01

    Full Text Available The research object in this paper is the sub-network formed by the predecessors' effect on the solution activity. Three types of influencing factors from the predecessors that delay the starting time of the solution activity on the longest path are studied, and the degree to which each type of factor delays the solution activity's starting time is analyzed. On this basis, through a comprehensive analysis of the various factors that influence the solution activity, this paper proposes a metric for evaluating the solution robustness of the project schedule, and this metric is taken as the optimization goal. This paper also adopts an iterative process to design a scattered-buffer heuristic algorithm based on robust scheduling with time buffers. At the same time, the resource flow network is introduced in this algorithm, and a tabu search algorithm is used to solve the baseline schedule. For the generation of the resource flow network in the baseline schedule, a resource allocation algorithm that makes maximum use of the precedence relations is designed. Finally, the algorithm proposed in this paper and several algorithms from the previous literature are compared in a simulation experiment; the experimental results show that the proposed algorithm is reasonable and feasible.

  14. Salmon: Robust Proxy Distribution for Censorship Circumvention

    Directory of Open Access Journals (Sweden)

    Douglas Frederick

    2016-10-01

    Full Text Available Many governments block their citizens’ access to much of the Internet. Simple workarounds are unreliable; censors quickly discover and patch them. Previously proposed robust approaches either have non-trivial obstacles to deployment, or rely on low-performance covert channels that cannot support typical Internet usage such as streaming video. We present Salmon, an incrementally deployable system designed to resist a censor with the resources of the “Great Firewall” of China. Salmon relies on a network of volunteers in uncensored countries to run proxy servers. Although any member of the public can become a user, Salmon protects the bulk of its servers from being discovered and blocked by the censor via an algorithm for quickly identifying malicious users. The algorithm entails identifying some users as especially trustworthy or suspicious, based on their actions. We impede Sybil attacks by requiring either an unobtrusive check of a social network account, or a referral from a trustworthy user.

  15. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task- and experience-related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  16. Passion, Robustness and Perseverance

    DEFF Research Database (Denmark)

    Lim, Miguel Antonio; Lund, Rebecca

    2016-01-01

    Evaluation and merit in the measured university are increasingly based on taken-for-granted assumptions about the “ideal academic”. We suggest that the scholar now needs to show that she is passionate about her work and that she gains pleasure from pursuing her craft. We suggest that passion...... and pleasure achieve an exalted status as something compulsory. The scholar ought to feel passionate about her work and signal that she takes pleasure also in the difficult moments. Passion has become a signal of robustness and perseverance in a job market characterised by funding shortages, increased pressure...... way to demonstrate their potential and, crucially, their passion for their work. Drawing on the literature on technologies of governance, we reflect on what is captured and what is left out by these two evaluation instruments. We suggest that bibliometric analysis at the individual level is deeply...

  17. Robust Optical Flow Estimation

    Directory of Open Access Journals (Sweden)

    Javier Sánchez Pérez

    2013-10-01

    Full Text Available In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.

  18. Robust snapshot interferometric spectropolarimetry.

    Science.gov (United States)

    Kim, Daesuk; Seo, Yoonho; Yoon, Yonghee; Dembele, Vamara; Yoon, Jae Woong; Lee, Kyu Jin; Magnusson, Robert

    2016-05-15

    This Letter describes a Stokes vector measurement method based on a snapshot interferometric common-path spectropolarimeter. The proposed scheme, which employs an interferometric polarization-modulation module, can extract the spectral polarimetric parameters Ψ(k) and Δ(k) of a transmissive anisotropic object by which an accurate Stokes vector can be calculated in the spectral domain. It is inherently strongly robust to the object 3D pose variation, since it is designed distinctly so that the measured object can be placed outside of the interferometric module. Experiments are conducted to verify the feasibility of the proposed system. The proposed snapshot scheme enables us to extract the spectral Stokes vector of a transmissive anisotropic object within tens of msec with high accuracy.

  19. Robust point matching via vector field consensus.

    Science.gov (United States)

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint) we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.

  20. Robustness Analyses of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Hald, Frederik

    2013-01-01

    The robustness of structural systems has obtained a renewed interest arising from a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for the robustness of structures and provide strategies and methods to obtain robustness. Therefore, a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to the robustness of timber structures and discusses the consequences of such robustness issues for the future development of timber structures.

  1. Distributed Robust Optimization in Networked System.

    Science.gov (United States)

    Wang, Shengnan; Li, Chunguang

    2016-10-11

    In this paper, we consider a distributed robust optimization (DRO) problem, where multiple agents in a networked system cooperatively minimize a global convex objective function with respect to a global variable under global constraints. The objective function can be represented by a sum of local objective functions. The global constraints contain some uncertain parameters which are partially known, and can be characterized by some inequality constraints. After problem transformation, we adopt the Lagrangian primal-dual method to solve this problem. We prove that the primal and dual optimal solutions of the problem are restricted to some specific sets, and we give a method to construct these sets. Then, we propose a DRO algorithm to find the primal-dual optimal solutions of the Lagrangian function, which consists of a subgradient step, a projection step, and a diffusion step; in the projection step of the algorithm, the optimized variables are projected onto the specific sets to guarantee the boundedness of the subgradients. Convergence analysis and numerical simulations verifying the performance of the proposed algorithm are then provided. Further, for the nonconvex DRO problem, the corresponding approach and algorithm framework are also provided.
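
    The projection and subgradient ingredients of the scheme can be illustrated in a stripped-down, single-agent setting; the sketch below minimizes a nondifferentiable convex function over a box by projected subgradient descent with a diminishing step size. The example objective, the box constraint and the step-size rule are illustrative assumptions, not the networked primal-dual algorithm of the paper.

      import numpy as np

      C = np.array([2.0, -3.0])                              # target point of the l1 objective

      def objective(x):
          return np.abs(x - C).sum()                         # f(x) = ||x - c||_1 (nondifferentiable)

      def subgrad(x):
          return np.sign(x - C)                              # a valid subgradient of f at x

      def project_box(x, lo=-1.0, hi=1.0):
          return np.clip(x, lo, hi)                          # Euclidean projection onto the box

      def projected_subgradient(x0, steps=500, step0=1.0):
          # Projected subgradient method with diminishing step size step0 / sqrt(k).
          x = project_box(np.asarray(x0, dtype=float))
          best = x.copy()
          for k in range(1, steps + 1):
              x = project_box(x - (step0 / np.sqrt(k)) * subgrad(x))
              if objective(x) < objective(best):
                  best = x.copy()                            # keep the best iterate seen so far
          return best

      if __name__ == "__main__":
          x_opt = projected_subgradient(np.zeros(2))
          # Expected result: close to [1, -1], the box point nearest to c = (2, -3).
          print("approximate minimizer over the box:", np.round(x_opt, 3))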

  2. Robust Optimization of Fourth Party Logistics Network Design under Disruptions

    Directory of Open Access Journals (Sweden)

    Jia Li

    2015-01-01

    Full Text Available The Fourth Party Logistics (4PL) network faces disruptions of various sorts in a dynamic and complex environment. In order to explore the robustness of the network, the 4PL network design with consideration of random disruptions is studied. The purpose of the research is to construct a 4PL network that can provide satisfactory service to customers at a lower cost when disruptions strike. Based on the definition of β-robustness, a robust optimization model of 4PL network design under disruptions is established. Given the NP-hard nature of the problem, an artificial fish swarm algorithm (AFSA) and a genetic algorithm (GA) are developed. The effectiveness of the algorithms is tested and compared using simulation examples. By comparing the optimal solutions of the 4PL network for different robustness levels, it is shown that the robust optimization model can effectively evade market risks and save costs to the greatest extent when applied to 4PL network design.

  3. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize... To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing...

  4. Robust self-triggered MPC for constrained linear systems

    NARCIS (Netherlands)

    Brunner, F.D.; Heemels, W.P.M.H.; Allgöwer, F.

    2014-01-01

    In this paper we propose a robust self-triggered model predictive control algorithm for linear systems with additive bounded disturbances and hard constraints on the inputs and state. In self-triggered control, at every sampling instant the time until the next sampling instant is computed online

  5. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  6. Intrinsic Grassmann Averages for Online Linear and Robust Subspace Learning

    DEFF Research Database (Denmark)

    Chakraborty, Rudrasis; Hauberg, Søren; Vemuri, Baba C.

    2017-01-01

    Principal Component Analysis (PCA) is a fundamental method for estimating a linear subspace approximation to high-dimensional data. Many algorithms exist in literature to achieve a statistically robust version of PCA called RPCA. In this paper, we present a geometric framework for computing the p...

  7. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

    Full Text Available A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and being reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: (1) a constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; and (2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it

  8. Effects of a random noisy oracle on search algorithm complexity

    International Nuclear Information System (INIS)

    Shenvi, Neil; Brown, Kenneth R.; Whaley, K. Birgitta

    2003-01-01

    Grover's algorithm provides a quadratic speed-up over classical algorithms for unstructured database or library searches. This paper examines the robustness of Grover's search algorithm to a random phase error in the oracle and analyzes the complexity of the search process as a function of the scaling of the oracle error with database or library size. Both the discrete- and continuous-time implementations of the search algorithm are investigated. It is shown that unless the oracle phase error scales as O(N^(-1/4)), neither the discrete- nor the continuous-time implementation of Grover's algorithm is scalably robust to this error in the absence of error correction.
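
To make the role of the oracle phase error concrete, the following small simulation (an illustration only, not the paper's analysis) runs discrete-time Grover iterations in which the oracle applies a phase of pi plus a Gaussian error of standard deviation sigma to a single marked item; the database size, sigma values, and number of trials are arbitrary assumptions.

```python
# Toy simulation of discrete-time Grover search with a noisy oracle phase.
import numpy as np

def grover_success_probability(n_items, sigma, marked=0, rng=None):
    rng = np.random.default_rng(rng)
    state = np.full(n_items, 1.0 / np.sqrt(n_items), dtype=complex)
    n_iter = int(np.floor(np.pi / 4 * np.sqrt(n_items)))
    for _ in range(n_iter):
        # noisy oracle: ideally flips the sign of the marked amplitude,
        # here it applies exp(i*(pi + delta)) with random phase error delta
        delta = rng.normal(0.0, sigma)
        state[marked] *= np.exp(1j * (np.pi + delta))
        # inversion about the mean (Grover diffusion operator)
        state = 2.0 * state.mean() - state
    return abs(state[marked]) ** 2

if __name__ == "__main__":
    for sigma in (0.0, 0.1, 0.5, 1.0):
        p = np.mean([grover_success_probability(1024, sigma, rng=s)
                     for s in range(20)])
        print(f"sigma={sigma:4.1f}  mean success probability ~ {p:.3f}")
```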

  9. ADSL Transceivers Applying DSM and Their Nonstationary Noise Robustness

    Directory of Open Access Journals (Sweden)

    Bostoen Tom

    2006-01-01

    Full Text Available Dynamic spectrum management (DSM) comprises a new set of techniques for multiuser power allocation and/or detection in digital subscriber line (DSL) networks. At the Alcatel Research and Innovation Labs, we have recently developed a DSM test bed, which allows the performance of DSM algorithms to be evaluated in practice. With this test bed, we have evaluated the performance of a DSM level-1 algorithm known as iterative water-filling in an ADSL scenario. This paper describes, on the one hand, the performance gains achieved with iterative water-filling and, on the other hand, the nonstationary noise robustness of DSM-enabled ADSL modems. It will be shown that DSM trades off nonstationary noise robustness for performance improvements. A new bit swap procedure is then introduced to increase the noise robustness when applying DSM.
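
The DSM level-1 algorithm mentioned above, iterative water-filling, can be sketched as follows: each user repeatedly water-fills its transmit power against the interference currently generated by the other users. The channel model, the bisection on the water level, and all names (H, noise, budgets) are simplifications assumed for this example, not the test-bed implementation.

```python
# Minimal sketch of iterative water-filling for a toy multiuser setting.
import numpy as np

def waterfill(inv_snr, power_budget, tol=1e-9):
    """Single-user water-filling: maximize sum log2(1 + p_k / inv_snr_k)."""
    lo, hi = 0.0, inv_snr.max() + power_budget
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)                     # candidate water level
        if np.maximum(mu - inv_snr, 0.0).sum() > power_budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - inv_snr, 0.0)

def iterative_waterfilling(H, noise, budgets, n_sweeps=50):
    """H[u, v, k]: gain from transmitter v into receiver u on tone k."""
    n_users, _, n_tones = H.shape
    p = np.zeros((n_users, n_tones))
    for _ in range(n_sweeps):
        for u in range(n_users):
            interference = noise[u] + sum(H[u, v] * p[v]
                                          for v in range(n_users) if v != u)
            p[u] = waterfill(interference / H[u, u], budgets[u])
    return p
```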

  10. Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    Directory of Open Access Journals (Sweden)

    M. Cedillo-Hernandez

    2015-04-01

    Full Text Available In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which is robust against several geometric and signal processing distortions. The trade-off between payload, robustness and imperceptibility is a very important aspect that has to be considered when a watermarking algorithm is designed. In our proposed scheme, prior to being embedded into the image, the watermark signal is encoded using a convolutional encoder, which performs forward error correction and thereby achieves better robustness. The embedding process is then carried out in the discrete cosine transform (DCT) domain of an image, using an image normalization technique to accomplish robustness against geometric and signal processing distortions. The embedded watermark coded bits are extracted and decoded using the Viterbi algorithm. In order to determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequence. The quality of the watermarked image is measured using the well-known indices Peak Signal to Noise Ratio (PSNR), Visual Information Fidelity (VIF) and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is obtained using the Normalized Color Difference (NCD) measure. The experimental results show that the proposed method provides good performance in terms of imperceptibility and robustness. A comparison between the proposed method and previously reported methods based on different techniques is also provided.
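
To illustrate the general flavor of DCT-domain embedding, the sketch below hides one bit per 8x8 block by quantization index modulation of a single mid-frequency coefficient. The embedding position COEFF and step STEP are arbitrary assumptions, and the paper's image normalization, convolutional encoding and Viterbi decoding stages are deliberately omitted.

```python
# Simplified DCT-domain bit embedding/extraction (one bit per 8x8 block).
import numpy as np
from scipy.fft import dctn, idctn

COEFF, STEP = (3, 4), 12.0   # embedding position and quantization step (assumed)

def embed_bits(image, bits):
    img = image.astype(float).copy()
    idx = 0
    for r in range(0, img.shape[0] - 7, 8):
        for c in range(0, img.shape[1] - 7, 8):
            if idx >= len(bits):
                return img
            block = dctn(img[r:r+8, c:c+8], norm="ortho")
            q = np.round(block[COEFF] / STEP)
            if int(q) % 2 != bits[idx]:          # force parity to the bit value
                q += 1
            block[COEFF] = q * STEP
            img[r:r+8, c:c+8] = idctn(block, norm="ortho")
            idx += 1
    return img

def extract_bits(image, n_bits):
    img, bits = image.astype(float), []
    for r in range(0, img.shape[0] - 7, 8):
        for c in range(0, img.shape[1] - 7, 8):
            if len(bits) >= n_bits:
                return bits
            block = dctn(img[r:r+8, c:c+8], norm="ortho")
            bits.append(int(np.round(block[COEFF] / STEP)) % 2)
    return bits
```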

  11. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    Full Text Available The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick case, a Swedish legal scandal in which a patient at a mental institution was wrongfully convicted of eight murders.

  12. A robust nonlinear filter for image restoration.

    Science.gov (United States)

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, robustness is considered in a broader sense, since outliers may also be due to an inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
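
As a toy illustration of the least trimmed squares idea behind such filters, the sketch below estimates a location value in a sliding window from the h best-fitting samples and discards the rest; the window length and trimming fraction are assumed values, and this is not the authors' filter design.

```python
# Least-trimmed-squares (LTS) location filter for a 1-D signal.
import numpy as np

def lts_location(window, h):
    """LTS estimate of a constant signal level from a 1-D window."""
    s = np.sort(window)
    best_sse, best_mean = np.inf, s.mean()
    # for a location model the optimal h-subset is contiguous in sorted order
    for start in range(len(s) - h + 1):
        sub = s[start:start + h]
        mu = sub.mean()
        sse = np.sum((sub - mu) ** 2)
        if sse < best_sse:
            best_sse, best_mean = sse, mu
    return best_mean

def lts_filter(signal, window=9, trim=0.7):
    h = max(2, int(round(trim * window)))        # number of samples kept
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    return np.array([lts_location(padded[i:i + window], h)
                     for i in range(len(signal))])
```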

  13. Hyperspectral Unmixing with Robust Collaborative Sparse Regression

    Directory of Open Access Journals (Sweden)

    Chang Li

    2016-07-01

    Full Text Available Recently, sparse unmixing (SU) of hyperspectral data has received particular attention for analyzing remote sensing images. However, most SU methods are based on the commonly admitted linear mixing model (LMM), which ignores possible nonlinear effects (i.e., nonlinearity). In this paper, we propose a new method named robust collaborative sparse regression (RCSR), based on the robust LMM (rLMM), for hyperspectral unmixing. The rLMM takes the nonlinearity into consideration, and the nonlinearity is treated as an outlier, which has an underlying sparse property. The RCSR simultaneously takes into consideration the collaborative sparse property of the abundance and the sparsely distributed additive property of the outlier, which can be formulated as a robust joint sparse regression problem. The inexact augmented Lagrangian method (IALM) is used to optimize the proposed RCSR. Qualitative and quantitative experiments on synthetic datasets and real hyperspectral images demonstrate that the proposed RCSR is efficient for solving the hyperspectral SU problem compared with four other state-of-the-art algorithms.

  14. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
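
Tukey's half-space depth, which the study uses to select robust parameter vectors, can be approximated by random projections as sketched below; the number of directions, the depth threshold in the usage comment, and the array names are illustrative assumptions rather than the study's implementation.

```python
# Random-projection approximation of Tukey's half-space depth.
import numpy as np

def approx_halfspace_depth(point, reference, n_dirs=2000, rng=0):
    """Approximate depth of `point` w.r.t. the rows of `reference`."""
    rng = np.random.default_rng(rng)
    dirs = rng.normal(size=(n_dirs, reference.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj_ref = reference @ dirs.T            # (n_ref, n_dirs)
    proj_pt = point @ dirs.T                 # (n_dirs,)
    below = (proj_ref <= proj_pt).mean(axis=0)
    # depth ~ smallest fraction on either side over all sampled directions
    return float(np.minimum(below, 1.0 - below).min())

# usage (hypothetical arrays): keep "deep" parameter vectors among the
# well-performing ones, e.g.
# depths = np.array([approx_halfspace_depth(p, best_set) for p in params])
# robust_params = params[depths > 0.2]
```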

  15. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).

  16. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  17. ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    A. Nurunnabi

    2017-05-01

    Full Text Available This paper investigates the problem of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not study the presence of outliers, and are not statistically robust. However, mobile laser scanning in particular often produces incomplete data, as street poles, for example, are only scanned from the road. Moreover, the existence of outliers is common. Outliers may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components as obtained by RPCA allow the cylinder direction to be estimated more accurately, and an existing efficient circle fitting algorithm following robust regression principles properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different sizes of radius. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fits cylinders with an average radius of 3.63 meters (m); the proposed method, on the other hand, fits cylinders with an average radius of 1.02 m. The algorithm has potential in applications such as fitting cylindrical objects (e.g., light and traffic poles), diameter at breast height estimation for trees, and building and bridge information modelling.
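
A condensed sketch of the two-stage idea (axis from principal directions, then a circle fit in the orthogonal plane) is given below; plain SVD plus iterative trimming stands in for the paper's RPCA and robust regression components, and the trimming fraction and iteration count are assumptions.

```python
# Simple cylinder fitting: PCA axis estimate + trimmed algebraic circle fit.
import numpy as np

def fit_circle(points2d):
    """Algebraic (Kasa) circle fit: returns center (2,) and radius."""
    x, y = points2d[:, 0], points2d[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([cx, cy]), np.sqrt(c + cx ** 2 + cy ** 2)

def fit_cylinder(points, keep=0.8, n_iter=5):
    centered = points - points.mean(axis=0)
    # cylinder axis ~ direction of largest variance for elongated scans
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    proj = centered @ vt[1:].T               # coordinates in the plane ⟂ axis
    idx = np.arange(len(proj))
    for _ in range(n_iter):
        center, radius = fit_circle(proj[idx])
        resid = np.abs(np.linalg.norm(proj - center, axis=1) - radius)
        idx = np.argsort(resid)[: int(keep * len(proj))]  # trim worst points
    return axis, center, radius
```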

  18. Design of Robust Adaptive Array Processors for Non-Stationary Ocean Environments

    National Research Council Canada - National Science Library

    Wage, Kathleen E

    2009-01-01

    The overall goal of this project is to design adaptive array processing algorithms that have good transient performance, are robust to mismatch, work with low sample support, and incorporate waveguide...

  19. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
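
The kinematic condition of steering referred to above can be illustrated with a simple bicycle-model sketch that places the instantaneous center of rotation at a prescribed point; the geometry, frame convention, and function name are assumptions, and the paper's dynamic corrections and feedback loop are not modeled.

```python
# Kinematic 4WS sketch: steering angles that place the instantaneous center
# of rotation (ICR) at a given point in the vehicle frame (x forward, y left).
import math

def four_ws_angles(a, b, icr_xy):
    """Front and rear steering angles for ICR at icr_xy; a and b are the
    distances from the CG to the front and rear axles (y_c != 0)."""
    x_c, y_c = icr_xy
    delta_front = math.atan((a - x_c) / y_c)
    delta_rear = math.atan((-b - x_c) / y_c)
    return delta_front, delta_rear

# Example: road curvature center 20 m to the left of the CG. With the ICR on
# the rear-axle line (x_c = -b) the rear angle is zero, which recovers
# ordinary front-wheel (Ackermann-like) steering.
print(four_ws_angles(a=1.2, b=1.4, icr_xy=(0.0, 20.0)))
print(four_ws_angles(a=1.2, b=1.4, icr_xy=(-1.4, 20.0)))
```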

  20. On the Robustness and Prospects of Adaptive BDDC Methods for Finite Element Discretizations of Elliptic PDEs with High-Contrast Coefficients

    KAUST Repository

    Zampini, Stefano; Keyes, David E.

    2016-01-01

    Balancing Domain Decomposition by Constraints (BDDC) methods have proven to be powerful preconditioners for large and sparse linear systems arising from the finite element discretization of elliptic PDEs. Condition number bounds can be theoretically

  1. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  2. Robustness of IPTV business models

    NARCIS (Netherlands)

    Bouwman, H.; Zhengjia, M.; Duin, P. van der; Limonard, S.

    2008-01-01

    The final stage in the STOF method is an evaluation of the robustness of the design, for which the method provides some guidelines. For many innovative services, the future holds numerous uncertainties, which makes evaluating the robustness of a business model a difficult task. In this chapter, we

  3. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2009-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure.

  4. Parallel Algorithms for Monte Carlo Particle Transport Simulation on Exascale Computing Architectures

    Science.gov (United States)

    Romano, Paul Kollath

    Monte Carlo particle transport methods are being considered as a viable option for high-fidelity simulation of nuclear reactors. While Monte Carlo methods offer several potential advantages over deterministic methods, there are a number of algorithmic shortcomings that would prevent their immediate adoption for full-core analyses. In this thesis, algorithms are proposed both to ameliorate the degradation in parallel efficiency typically observed for large numbers of processors and to offer a means of decomposing the large tally data that will be needed for reactor analysis. A nearest-neighbor fission bank algorithm was proposed and subsequently implemented in the OpenMC Monte Carlo code. A theoretical analysis of the communication pattern shows that the expected cost is O(√N), whereas traditional fission bank algorithms are O(N) at best. The algorithm was tested on two supercomputers, the Intrepid Blue Gene/P and the Titan Cray XK7, and demonstrated nearly linear parallel scaling up to 163,840 processor cores on a full-core benchmark problem. An algorithm for reducing network communication arising from tally reduction was analyzed and implemented in OpenMC. The proposed algorithm groups only particle histories on a single processor into batches for tally purposes; in doing so, it prevents all network communication for tallies until the very end of the simulation. The algorithm was tested, again on a full-core benchmark, and shown to reduce network communication substantially. A model was developed to predict the impact of load imbalances on the performance of domain decomposed simulations. The analysis demonstrated that load imbalances in domain decomposed simulations arise from two distinct phenomena: non-uniform particle densities and non-uniform spatial leakage. The dominant performance penalty for domain decomposition was shown to come from these physical effects rather than insufficient network bandwidth or high latency. The model predictions were verified with

  5. The research on optimization of auto supply chain network robust model under macroeconomic fluctuations

    International Nuclear Information System (INIS)

    Guo, Chunxiang; Liu, Xiaoli; Jin, Maozhu; Lv, Zhihan

    2016-01-01

    Considering the uncertainty of the macroeconomic environment, a robust optimization method is studied for constructing and designing the automotive supply chain network. Based on the definition of a robust solution, a robust optimization model is built for integrated supply chain network design, consisting of a supplier selection problem and a facility location–distribution problem. A tabu search algorithm is proposed for supply chain node configuration, and the influence of the level of uncertainty on the robust results is analyzed. By comparing the performance of supply chain network designs obtained with the stochastic programming model and the robust optimization model, the rational layout of the supply chain network under macroeconomic fluctuations is determined. Finally, comparative tests validate that the tabu search algorithm performs well in terms of convergence and computational time. It is also indicated that the robust optimization model can reduce investment risks effectively when applied to supply chain network design.

  6. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-09-14

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, the image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  7. Robustness Analysis of Visual Question Answering Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-11-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, the image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  8. Robustness Analysis of Visual Question Answering Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-01-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, the image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  9. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong; Alfadly, Modar; Ghanem, Bernard

    2017-01-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, the image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  10. Automatic Circuit Design and Optimization Using Modified PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Subhash Patel

    2016-04-01

    Full Text Available In this work, we propose a modified PSO algorithm-based optimizer for automatic circuit design. The performance of the modified PSO algorithm is compared with two other evolutionary algorithms, namely the ABC algorithm and the standard PSO algorithm, by designing a two-stage CMOS operational amplifier and a bulk-driven OTA in 130 nm technology. The results show the robustness of the proposed algorithm. With the modified PSO algorithm, the average design error for the two-stage op-amp is only 0.054%, in contrast to 3.04% for the standard PSO algorithm and 5.45% for the ABC algorithm. For the bulk-driven OTA, the average design error is 1.32% with MPSO, compared to 4.70% with the ABC algorithm and 5.63% with the standard PSO algorithm.
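
For readers unfamiliar with the underlying optimizer, a generic (unmodified) particle swarm optimization loop is sketched below; the inertia and acceleration coefficients, the placeholder objective, and the bound handling are assumptions, and the paper's specific modifications and circuit performance evaluation are not reproduced.

```python
# Generic particle swarm optimization loop over box-constrained parameters.
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.72, c1=1.49, c2=1.49, rng=0):
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, float).T          # bounds: [(lo, hi), ...]
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy usage: a stand-in "design error"; real use would call a circuit simulator
best, err = pso(lambda p: np.sum((p - 0.3) ** 2), bounds=[(0, 1)] * 4)
```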

  11. On robust parameter estimation in brain-computer interfacing

    Science.gov (United States)

    Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert

    2017-12-01

    Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
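
The hierarchical view of robustness described above (treating whole trials, rather than single samples, as the units that may be outlying) can be illustrated with a simple trial-wise reweighting of the mean; this is only a sketch under assumed data shapes and tuning constants, not the minimum divergence estimators derived in the paper.

```python
# Trial-wise robust mean: down-weight whole trials whose averages are atypical.
import numpy as np

def trialwise_robust_mean(trials, n_iter=10, c=2.0):
    """trials: (n_trials, n_samples, n_channels) EEG-like data."""
    trial_means = trials.mean(axis=1)              # (n_trials, n_channels)
    mu = trial_means.mean(axis=0)
    for _ in range(n_iter):
        dist = np.linalg.norm(trial_means - mu, axis=1)
        scale = np.median(dist) + 1e-12
        w = np.where(dist <= c * scale, 1.0, c * scale / dist)  # Huber-like weights
        mu = (w[:, None] * trial_means).sum(axis=0) / w.sum()
    return mu
```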

  12. Using Spread Spectrum Transform for Fast and Robust Simultaneous Measurement in Active Sensors with Multiple Emitters

    DEFF Research Database (Denmark)

    Harbo, Anders La-Cour; Stoustrup, Jakob

    2002-01-01

    We present a signal processing algorithm for making robust and simultaneous measurements in an active sensor, which has one or more emitters and a receiver, and which employs some sort of signal processing hardware. Robustness means low sensitivity to time and frequency localized disturbances......-cost active sensors....

  13. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  14. A Case Study of a Multiobjective Elitist Recombinative Genetic Algorithm with Coevolutionary Sharing

    NARCIS (Netherlands)

    Neef, R.M.; Thierens, D.; Arciszewski, H.F.R.

    1999-01-01

    We present a multiobjective genetic algorithm that incorporates various genetic algorithm techniques that have been proven to be efficient and robust in their problem domain. More specifically, we integrate rank based selection, adaptive niching through coevolutionary sharing, elitist recombination,

  15. A case study of a multiobjective recombinative genetic algorithm with coevolutionary sharing

    NARCIS (Netherlands)

    Neef, R.M.; Thierens, D.; Arciszewski, H.F.R.

    1999-01-01

    We present a multiobjective genetic algorithm that incorporates various genetic algorithm techniques that have been proven to be efficient and robust in their problem domain. More specifically, we integrate rank based selection, adaptive niching through coevolutionary sharing, elitist recombination,

  16. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for evaluation of robustness of structural systems, incl. bridges and buildings. Typically, modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, although the importance of robustness for structural design is widely recognized, the code requirements are not specified in detail, which makes their practical use difficult. This paper describes a theoretical and risk-based framework to form the basis for quantification of robustness and for pre-normative guidelines...

  17. Robustness of airline route networks

    Science.gov (United States)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining their routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all of its geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.

  18. Algorithms for Protein Structure Prediction

    DEFF Research Database (Denmark)

    Paluszewski, Martin

    Here we present three different approaches for reconstruction of C-traces from predictable measures. In our first approach [63, 62], the C-trace is positioned on a lattice and a tabu-search algorithm is applied to find minimum energy structures. The energy function is based on half-sphere-exposure (HSE) ... and the tabu search is shown to be more robust than standard Monte Carlo search. In the second approach for reconstruction of C-traces, an exact branch and bound algorithm has been developed [67, 65]. The model is discrete and makes use of secondary structure predictions, HSE, CN and radius of gyration. We show how to compute good lower bounds for partial structures very fast. Using these lower bounds, we are able to find global minimum structures in a huge conformational space in reasonable time. We show that many of these global minimum structures are of good quality compared to the native structure. Our branch and bound algorithm

  19. An Evolutionary Approach for Robust Layout Synthesis of MEMS

    DEFF Research Database (Denmark)

    Fan, Zhun; Wang, Jiachuan; Goodman, Erik

    2005-01-01

    The paper introduces a robust design method for layout synthesis of MEM resonators subject to inherent geometric uncertainties, such as fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained optimisation problem after certain assumptions and treated with a multiobjective genetic algorithm (MOGA), a special type of evolutionary computing approach. A case study based on layout synthesis of a comb-driven MEM resonator shows that the approach proposed in this paper can lead to design results that meet the target performance and are less...

  20. Multitarget Approaches to Robust Navigation

    Data.gov (United States)

    National Aeronautics and Space Administration — The performance, stability, and statistical consistency of a vehicle's navigation algorithm are vitally important to the success and safety of its mission....