WorldWideScience

Sample records for robust geometric algorithms

  1. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  2. Research on Geometric Positioning Algorithm of License Plate in Multidimensional Parameter Space

    Directory of Open Access Journals (Sweden)

    Yinhua Huan

    2014-05-01

    Considering the features of commonly used vehicle license plate location methods, a new geometric location algorithm is proposed to search for a consistent location of the license plate feature in reference images across a multidimensional parameter space. The geometric location algorithm mainly comprises model training and real-time search. It adapts not only to linear and non-linear gray-scale changes but also to changes of scale and angle. Compared with mainstream locating software under the same test conditions, numerical results show that the position deviation of the geometric positioning algorithm is less than 0.5 pixel without taking the multidimensional parameter space into account; taking it into account, the position deviation is less than 1.0 pixel and the angle deviation is less than 1.0 degree. The algorithm is robust, simple, practical and better than the traditional method.

  3. A robust human face detection algorithm

    Science.gov (United States)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of a skin color histogram, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
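
    To make the skin-colour stage of such a pipeline concrete, the sketch below thresholds an RGB image in YCbCr space and cleans the mask with morphological operations before extracting candidate regions. The threshold values, structuring-element sizes and the minimum-area parameter are illustrative assumptions, not values taken from the cited paper, and the mouth/eye verification step is omitted.

```python
import numpy as np
from scipy import ndimage  # binary morphology and connected-component labelling

def skin_candidate_mask(rgb):
    """Rough skin-colour segmentation in YCbCr followed by morphological cleanup.

    The threshold values below are illustrative, not those of the cited paper.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    mask = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
    # Opening removes speckle; closing fills small holes inside skin blobs.
    mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))
    mask = ndimage.binary_closing(mask, structure=np.ones((9, 9)))
    return mask

def candidate_regions(mask, min_area=400):
    """Label connected skin blobs and keep those large enough to hold a face."""
    labels, n = ndimage.label(mask)
    regions = ndimage.find_objects(labels)
    return [sl for i, sl in enumerate(regions)
            if (labels[sl] == i + 1).sum() >= min_area]
```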

  4. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both the geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  5. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  6. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text familiarises readers with interval arithmetic and related tools for obtaining reliable and validated results and logically correct decisions in a variety of geometric computations, together with means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, a new powerful algorithm which improves many geometric computations and makes th

  7. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  8. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotype polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry
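
    As a point of reference for one of the problems listed above, the following is a standard O(n log n) convex hull routine (Andrew's monotone chain); it is a textbook baseline, not one of the refined algorithms developed in the thesis.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull, O(n log n) after sorting.

    A standard textbook algorithm given only to illustrate the kind of problem
    the thesis addresses; it is not the thesis's own refined method.
    """
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    # Positive cross product means a counter-clockwise turn o -> a -> b.
    cross = lambda o, a, b: (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]   # counter-clockwise hull, no duplicate endpoints

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]))
```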

  9. Robust Geometric Control of a Distillation Column

    DEFF Research Database (Denmark)

    Kymmel, Mogens; Andersen, Henrik Weisberg

    1987-01-01

    A frequency domain method, which makes it possible to adjust multivariable controllers with respect to both nominal performance and robustness, is presented. The basic idea in the approach is that the designer assigns objectives such as steady-state tracking, maximum resonance peaks, bandwidth, m...... is used to examine and improve geometric control of a binary distillation column....

  10. A Color Image Watermarking Scheme Resistant against Geometrical Attacks

    Directory of Open Access Journals (Sweden)

    Y. Xing

    2010-04-01

    Geometrical attacks are still a problem for many digital watermarking algorithms at present. In this paper, we propose a watermarking algorithm for color images resistant to geometrical distortions (rotation and scaling). The singular value decomposition is used for watermark embedding and extraction. The log-polar mapping (LPM) and phase correlation method are used to register the position of the geometrical distortion suffered by the watermarked image. Experiments with different kinds of color images and watermarks demonstrate that the watermarking algorithm is robust to common image processing attacks, especially geometrical attacks.

  11. TAO-robust backpropagation learning algorithm.

    Science.gov (United States)

    Pernía-Espinoza, Alpha V; Ordieres-Meré, Joaquín B; Martínez-de-Pisón, Francisco J; González-Marcos, Ana

    2005-03-01

    In several fields, such as industrial modelling, multilayer feedforward neural networks are often used as universal function approximators. These supervised neural networks are commonly trained by a traditional backpropagation learning scheme, which minimises the mean squared error (MSE) of the training data. However, in the presence of corrupted data (outliers) this training scheme may produce wrong models. We combine the benefits of the non-linear regression model tau-estimates [introduced by Tabatabai, M. A., Argyros, I. K., Robust estimation and testing for general nonlinear regression models, Applied Mathematics and Computation 58 (1993) 85-101] with the backpropagation algorithm to produce the TAO-robust learning algorithm, in order to deal with the problems of modelling with outliers. The cost function of this approach has a bounded influence function given by the weighted average of two psi functions, one corresponding to a very robust estimate and the other to a highly efficient estimate. The advantages of the proposed algorithm are studied with an example.
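
    The sketch below illustrates the bounded-influence idea in a backpropagation step: the residual that would drive an MSE gradient is passed through a clipped (Huber-type) psi function, so outliers cannot dominate the update. The TAO estimator of the paper averages two psi functions; using a single Huber psi, a one-hidden-layer network and the learning rate shown are simplifying assumptions.

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Bounded influence function: linear for small residuals, clipped beyond c.

    The TAO estimator of the paper averages two psi functions; a single Huber
    psi is used here only to illustrate the bounded-influence idea.
    """
    return np.clip(r, -c, c)

def train_step(X, y, W1, b1, W2, b2, lr=1e-2):
    """One backprop step of a 1-hidden-layer net with a robust loss gradient."""
    h = np.tanh(X @ W1 + b1)          # hidden layer, shape (n, m)
    pred = h @ W2 + b2                # linear output, shape (n, 1)
    r = pred - y                      # residuals
    g = huber_psi(r) / len(y)         # robust replacement for the raw MSE gradient r
    dW2 = h.T @ g; db2 = g.sum(0)
    dh = g @ W2.T * (1 - h ** 2)      # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    return W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2
```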

  12. Structure-preserving geometric algorithms for plasma physics and beam physics

    Science.gov (United States)

    Qin, Hong

    2017-10-01

    Standard algorithms in plasma physics and beam physics do not possess the long-term accuracy and fidelity required in the study of multi-scale dynamics, because they do not preserve the geometric structures of the physical systems, such as local energy-momentum conservation, the symplectic structure and gauge symmetry. As a result, numerical errors accumulate coherently with time and long-term simulation results are not reliable. To overcome this difficulty, structure-preserving geometric algorithms have been developed since 2008. This new generation of algorithms utilizes advanced techniques, such as interpolating differential forms, canonical and non-canonical symplectic integrators, and finite element exterior calculus, to guarantee gauge symmetry and charge conservation, and the conservation of energy-momentum and the symplectic structure. It is our vision that future numerical capabilities in plasma physics and beam physics will be based on structure-preserving geometric algorithms.

  13. Fast decoding algorithms for geometric coded apertures

    International Nuclear Information System (INIS)

    Byard, Kevin

    2015-01-01

    Fast decoding algorithms are described for the class of coded aperture designs known as geometric coded apertures which were introduced by Gourlay and Stephen. When compared to the direct decoding method, the algorithms significantly reduce the number of calculations required when performing the decoding for these apertures and hence speed up the decoding process. Experimental tests confirm the efficacy of these fast algorithms, demonstrating a speed up of approximately two to three orders of magnitude over direct decoding.

  14. A Novel Evolutionary Algorithm for Designing Robust Analog Filters

    Directory of Open Access Journals (Sweden)

    Shaobo Li

    2018-03-01

    Designing robust circuits that withstand environmental perturbation and device degradation is critical for many applications. Traditional robust circuit design is mainly done by tuning parameters to improve system robustness. However, the topological structure of a system may set a limit on the robustness achievable through parameter tuning. This paper proposes a new evolutionary algorithm for robust design that exploits the open-ended topological search capability of genetic programming (GP) coupled with bond graph modeling. We applied our GP-based robust design (GPRD) algorithm to evolve robust lowpass and highpass analog filters. Compared with a traditional robust design approach based on a state-of-the-art real-parameter genetic algorithm (GA), our GPRD algorithm, with a fitness criterion rewarding robustness with respect to parameter perturbations, can evolve more robust filters than what was achieved through parameter tuning alone. We also find that inappropriate GA tuning may mislead the search process and that multiple-simulation and perturbed fitness evaluation methods for evolving robustness have complementary behaviors, with no absolute advantage of one over the other.

  15. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    Science.gov (United States)

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  16. DC Algorithm for Extended Robust Support Vector Machine.

    Science.gov (United States)

    Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

    2017-05-01

    Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and Eν-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with two types of nonconvexity while never entailing more computations than either Eν-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.

  17. Green cloud environment by using robust planning algorithm

    Directory of Open Access Journals (Sweden)

    Jyoti Thaman

    2017-11-01

    Cloud computing provides a framework for seamless access to resources through a network. Access to resources is quantified through SLAs between service providers and users. Service providers try to best exploit their resources and reduce their idle times; growing energy concerns make this even more pressing. Users' requests are served by allocating user tasks to resources in cloud and grid environments through scheduling and planning algorithms. With only a few planning algorithms in existence, planning and scheduling algorithms are rarely differentiated. This paper proposes a robust hybrid planning algorithm, Robust Heterogeneous-Earliest-Finish-Time (RHEFT), for binding tasks to VMs. The allocation of tasks to VMs is based on a novel task matching algorithm called Interior Scheduling. The performance of the proposed RHEFT algorithm is compared with Heterogeneous-Earliest-Finish-Time (HEFT) and Distributed HEFT (DHEFT) for various parameters such as utilization ratio, makespan, speed-up and energy consumption. RHEFT's consistent performance against HEFT and DHEFT establishes the robustness of the hybrid planning algorithm through rigorous simulations.

  18. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  19. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution...

  20. Robust ray-tracing algorithms for interactive dose rate evaluation

    International Nuclear Information System (INIS)

    Perrotte, L.

    2011-01-01

    More than ever, it is essential today to develop simulation tools to rapidly evaluate the dose rate received by operators working on nuclear sites. In order to easily study numerous different intervention scenarios, the computation times of available software have to be lowered. This mainly means accelerating the geometrical computations needed for the dose rate evaluation. These computations consist in finding and sorting the whole list of intersections between a large 3D scene and multiple groups of 'radiative' rays meeting at the point where the dose has to be measured. In order to perform all these computations in less than a second, we first propose a GPU algorithm that enables the efficient management of one large group of coherent rays. Then we present a modification of this algorithm that guarantees the robustness of the ray-triangle intersection tests through the elimination of the precision issues due to floating-point arithmetic. This modification does not require the definition of scene-dependent coefficients ('epsilon' style) and only implies a small loss of performance (less than 10%). Finally we propose an efficient strategy to handle multiple ray groups (corresponding to multiple radiative objects) which uses the previous results. Thanks to these improvements, we are able to perform an interactive and robust dose rate evaluation on large 3D scenes: all of the intersections (more than 13 million) between 700 000 triangles and 12 groups of 100 000 rays each are found, sorted along each ray and transferred to the CPU in 470 milliseconds. (author) [fr]

  1. Robust MST-Based Clustering Algorithm.

    Science.gov (United States)

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters may be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
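
    For context, the classic MST clustering baseline that the letter improves on cuts the heaviest tree edges; a minimal sketch using SciPy is given below. The paper's density-based coarsening, supernode construction and minimax grouping are not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_clustering(points, k):
    """Classic MST clustering: cut the k-1 heaviest MST edges.

    This is the textbook baseline the letter argues is not robust to outliers;
    the paper's own density-based coarsening and minimax grouping are omitted.
    """
    d = squareform(pdist(points))                 # dense pairwise Euclidean distances
    mst = minimum_spanning_tree(d).toarray()      # (n, n) array of MST edge weights
    edges = np.argwhere(mst > 0)
    weights = mst[mst > 0]
    # Remove the k-1 heaviest edges to split the tree into k components.
    for idx in np.argsort(weights)[len(weights) - (k - 1):]:
        i, j = edges[idx]
        mst[i, j] = 0
    _, labels = connected_components(mst, directed=False)
    return labels
```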

  2. A Feed-forward Geometrical Compensation and Adaptive Feedback Control Algorithm for Hydraulic Robot Manipulators

    DEFF Research Database (Denmark)

    Conrad, Finn; Zhou, Jianjun; Gabacik, Andrzej

    1998-01-01

    Invited paper presents a new control algorithm based on feed-forward geometrical compensation strategy combined with adaptive feedback control.

  3. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of the learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in each adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
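
    A minimal sketch of the analysed setting is given below: a Widrow-Hoff (LMS-type) update with a time-varying reduction factor, driven by a single repeated input pattern whose target is corrupted by bounded noise. The 1/t decay of the step size is an illustrative choice; the paper's exact reduction-factor schedule and bounds are not reproduced.

```python
import numpy as np

def adapt_perceptron(x, target_fn, epochs=100):
    """Widrow-Hoff (LMS) adaptation with a time-varying reduction factor.

    The same input pattern x is presented in every cycle, as in the analysed
    case; the 1/t decay of the step size is illustrative, not the paper's rule.
    """
    w = np.zeros_like(x, dtype=float)
    for t in range(1, epochs + 1):
        eta = 1.0 / t                      # decaying reduction factor
        noisy_target = target_fn(t)        # measurement corrupted by bounded noise
        error = noisy_target - w @ x
        w = w + eta * error * x / (x @ x)  # normalised Widrow-Hoff update
    return w

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, -1.0])
w_true = np.array([0.5, -0.3, 1.0])
w_hat = adapt_perceptron(x, lambda t: w_true @ x + rng.uniform(-0.1, 0.1))
print(w_hat @ x, w_true @ x)   # the pattern response converges despite the noise
```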

  4. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    Science.gov (United States)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for the diagnosis of heart diseases. Good quality ECG is utilized by physicians for the interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for the removal of random noise, such as muscle artifact, from the ECG signal. A transform domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size is achieved by using the Griffiths' gradient, which uses the cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses the symmetry property of the signal to represent the signal in the frequency domain with fewer frequency coefficients than the discrete Fourier transform (DFT). The algorithm is implemented for an adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform domain LMS (TLMS) algorithm, both in the presence of white and colored observation noise. The reduction in convergence error achieved by the new algorithm with desired signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than a traditional adaptive filter using the LMS algorithm in terms of retaining the geometrical characteristics of the ECG signal.

  5. A Robust Parallel Algorithm for Combinatorial Compressed Sensing

    Science.gov (United States)

    Mendoza-Smith, Rodrigo; Tanner, Jared W.; Wechsung, Florian

    2018-04-01

    In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k < n$ nonzero entries can be recovered from its sketch $Ax$ by the Parallel-$\ell_0$ decoding algorithm with a computational cost governed by $\mathrm{nnz}(A)$, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments presented show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios in the setting of Gaussian signals and Gaussian additive noise.

  6. Optimizing clinical performance and geometrical robustness of a new electrode device for intracranial tumor electroporation

    DEFF Research Database (Denmark)

    Mahmood, Faisal; Gehl, Julie

    2011-01-01

    and genes to intracranial tumors in humans, and demonstrate a method to optimize the design (i.e. geometry) of the electrode device prototype to improve both clinical performance and geometrical tolerance (robustness). We have employed a semiempirical objective function based on constraints similar to those...... sensitive to random geometrical deviations. The method is readily applicable to other electrode configurations....

  7. Area collapse algorithm computing new curve of 2D geometric objects

    Science.gov (United States)

    Buczek, Michał Mateusz

    2017-06-01

    The processing of cartographic data demands human involvement. Up-to-date algorithms try to automate a part of this process. The goal is to obtain a digital model, or additional information about the shape and topology of the input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents the topological and geometrical characteristics of the input data. Its computation relies on algorithms such as medial axis, skeletonization, erosion, thinning, area collapse and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects, for example a polygon with a polygonal chain, or a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon, and it has a linear plot without branches. The computational process is automatic and repeatable. The requirements on the input data are discussed. The author analyzes results based on the method of computing the ends of the output polygonal chains. Additional methods to improve the results are explored. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing the hatching of embankments is described.

  8. Geomagnetic matching navigation algorithm based on robust estimation

    Science.gov (United States)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    Outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and its principle of robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion of the geomagnetic information, a mathematical expression of the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. Then the geomagnetic matching problem is converted to the solution of nonlinear equations. Finally, Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is decreased to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° while the other two algorithms fail to match when the outlier is 400 nT.
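
    The down-weighting mechanism can be illustrated with a generic robust estimation loop: a Huber-type weight function inside iteratively reweighted least squares, shown below. The paper's specific weight function, the Taylor-expanded matching equations and the Newton iteration are not reproduced; this sketch only shows how outlying residuals lose influence.

```python
import numpy as np

def huber_weights(residuals, c=1.345):
    """Robust weight function: unit weight for small residuals, down-weight outliers."""
    a = np.abs(residuals)
    return np.where(a <= c, 1.0, c / a)

def irls(A, b, iters=20):
    """Iteratively reweighted least squares with Huber weights.

    A generic illustration of robust estimation; the paper applies the same
    down-weighting idea to geomagnetic matching residuals before Newton steps.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = b - A @ x
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale estimate (MAD)
        w = huber_weights(r / scale)
        x = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * b, rcond=None)[0]
    return x
```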

  9. Robust K-Median and K-Means Clustering Algorithms for Incomplete Data

    Directory of Open Access Journals (Sweden)

    Jinhua Li

    2016-01-01

    Incomplete data with missing feature values are prevalent in clustering problems. Traditional clustering methods first estimate the missing values by imputation and then apply classical clustering algorithms for complete data, such as K-median and K-means. However, in practice, it is often hard to obtain an accurate estimation of the missing values, which deteriorates the performance of clustering. To enhance the robustness of clustering algorithms, this paper represents the missing values by interval data and introduces the concept of a robust cluster objective function. A minimax robust optimization (RO) formulation is presented to provide clustering results which are insensitive to estimation errors. To solve the proposed RO problem, we propose robust K-median and K-means clustering algorithms with low time and space complexity. Comparisons and analysis of experimental results on both artificially generated and real-world incomplete data sets validate the robustness and effectiveness of the proposed algorithms.

  10. Markov chain algorithms: a template for building future robust low-power systems

    Science.gov (United States)

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

    Although computational systems are looking towards post-CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with an inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs), as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors at high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
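
    As a toy instance of casting an application as a Markov chain, the sketch below is a random-walk SAT solver in which a fraction of the transitions are deliberately corrupted to mimic an unreliable substrate; the chain still reaches a satisfying assignment on small satisfiable instances. The error-injection scheme and parameters are assumptions for illustration, not the paper's experimental setup.

```python
import random

def mc_sat(clauses, n_vars, max_flips=100000, error_rate=0.01):
    """Random-walk (Markov-chain) SAT solver with injected transition errors.

    Each step flips a variable from an unsatisfied clause; with probability
    error_rate the flip is corrupted (a random variable is flipped instead),
    mimicking an unreliable substrate. Every state remains a legal assignment,
    so occasional bad transitions only delay convergence.
    """
    assign = [random.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assign[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return assign
        var = abs(random.choice(random.choice(unsat)))
        if random.random() < error_rate:        # corrupted transition
            var = random.randint(1, n_vars)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(mc_sat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```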

  11. Robust and accurate detection algorithm for multimode polymer optical FBG sensor system

    DEFF Research Database (Denmark)

    Ganziy, Denis; Jespersen, O.; Rose, B.

    2015-01-01

    We propose a novel dynamic gate algorithm (DGA) for robust and fast peak detection. The algorithm uses a threshold determined detection window and center of gravity algorithm with bias compensation. Our experiment demonstrates that the DGA method is fast and robust with better stability and accur...
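
    A minimal sketch of the two ingredients named in the abstract, a threshold-determined detection window and a centre-of-gravity estimate, is given below for a sampled FBG-like spectrum; the dynamic gating logic and bias compensation of the actual DGA are not reproduced, and the threshold value is an assumption.

```python
import numpy as np

def cog_peak(wavelengths, power, rel_threshold=0.5):
    """Peak wavelength via a threshold-gated centre-of-gravity estimate.

    Samples above rel_threshold * max define the detection window; the peak is
    the power-weighted mean wavelength inside it. The paper's dynamic gating
    and bias compensation are not reproduced in this sketch.
    """
    gate = power >= rel_threshold * power.max()
    w = wavelengths[gate]
    p = power[gate] - rel_threshold * power.max()   # subtract the gate level as a baseline
    return np.sum(w * p) / np.sum(p)

wl = np.linspace(1549.0, 1551.0, 501)
spectrum = np.exp(-((wl - 1550.123) / 0.05) ** 2) + 0.01 * np.random.rand(wl.size)
print(cog_peak(wl, spectrum))   # close to 1550.123 nm despite the noise floor
```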

  12. Comparing the Robustness of Evolutionary Algorithms on the Basis of Benchmark Functions

    Directory of Open Access Journals (Sweden)

    DENIZ ULKER, E.

    2013-05-01

    In real-world optimization problems, even though solution quality is of great importance, the robustness of the solution is also an important aspect. This paper investigates how optimization algorithms are sensitive to variations of control parameters and to the random initialization of the solution set for fixed control parameters. The comparison is performed for three well-known evolutionary algorithms: the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm and the Harmony Search (HS) algorithm. Various benchmark functions with different characteristics are used for the evaluation of these algorithms. The experimental results show that the solution quality of the algorithms is not directly related to their robustness. In particular, an algorithm that is highly robust can have low solution quality, or an algorithm that has high solution quality can be quite sensitive to parameter variations.

  13. Parallel algorithms for geometric connected component labeling on a hypercube multiprocessor

    Science.gov (United States)

    Belkhale, K. P.; Banerjee, P.

    1992-01-01

    Different algorithms for the geometric connected component labeling (GCCL) problem are defined, each of which involves d stages of message passing for a d-dimensional hypercube. The major idea is that in each stage a hypercube multiprocessor increases its knowledge of the domain. The algorithms under consideration include the QUAD algorithm for a small number of processors and the Overlap Quad algorithm for a large number of processors, subject to the locality of the connected sets. These algorithms differ in their run time, memory requirements, and message complexity. They were implemented on an Intel iPSC2/D4/MX hypercube.

  14. A robust firearm identification algorithm of forensic ballistics specimens

    Science.gov (United States)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    There are several inherent difficulties with existing firearm identification algorithms, including the need for physical interpretation and the time they consume. Therefore, the aim of this study is to propose a robust algorithm for firearm identification based on extracting a set of informative features from a segmented region of interest (ROI) of simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least square estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.

  15. Robust topology optimization accounting for geometric imperfections

    DEFF Research Database (Denmark)

    Schevenels, M.; Jansen, M.; Lombaert, Geert

    2013-01-01

    performance. As a consequence, the actual structure may be far from optimal. In this paper, a robust approach to topology optimization is presented, taking into account two types of geometric imperfections: variations of (1) the crosssections and (2) the locations of structural elements. The first type...... is modeled by means of a scalar non-Gaussian random field, which is represented as a translation process. The underlying Gaussian field is simulated by means of the EOLE method. The second type of imperfections is modeled as a Gaussian vector-valued random field, which is simulated directly by means...... of the EOLE method. In each iteration of the optimization process, the relevant statistics of the structural response are evaluated by means of a Monte Carlo simulation. The proposed methodology is successfully applied to a test problem involving the design of a compliant mechanism (for the first type...

  16. Robust Cyclic MUSIC Algorithm for Finding Directions in Impulsive Noise Environment

    Directory of Open Access Journals (Sweden)

    Sen Li

    2017-01-01

    This paper addresses the issue of direction finding for a cyclostationary signal under impulsive noise environments modeled by the α-stable distribution. Since the α-stable distribution does not have finite second-order statistics, the conventional cyclic correlation-based signal-selective direction finding algorithms do not work effectively. To resolve this problem, we define two robust cyclic correlation functions which are derived from the robust statistics property of the correntropy and a nonlinear transformation, respectively. The MUSIC algorithm with the robust cyclic correlation matrix of the signals received by the array is then used to estimate the direction of the cyclostationary signal in the presence of impulsive noise. The computer simulation results demonstrate that the two proposed robust cyclic correlation-based algorithms outperform the conventional cyclic correlation and the fractional lower order cyclic correlation based methods.

  17. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1998-12-31

    The H∞ robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, a genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits direct control of the power tracking performance. In addition, actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm results in better performance under the realistic constraints. Also, it is found that the genetic algorithm can be used as an effective tool in robust design. 4 refs., 6 figs. (Author)

  18. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1997-12-31

    The H∞ robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, a genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits direct control of the power tracking performance. In addition, actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm results in better performance under the realistic constraints. Also, it is found that the genetic algorithm can be used as an effective tool in robust design. 4 refs., 6 figs. (Author)

  19. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the need for efficiency and accuracy in an embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, the bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
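
    The sketch below illustrates the general shape of such a pipeline with a gray-world gain estimate refined by a damped iterative update; the initial-parameter formula, demosaicing step and adaptive-step rule of the cited embedded implementation are not reproduced, and the damping factor is an assumption.

```python
import numpy as np

def white_balance(rgb, iters=5, step=0.5):
    """Iterative gray-world white balance with a damped (adaptive-style) step.

    Initial gains come from the per-channel means of the input statistics; each
    iteration nudges the gains toward equal channel means. The step size and
    iteration count are illustrative, not the cited embedded implementation.
    """
    img = rgb.astype(np.float64)
    gains = np.ones(3)
    for _ in range(iters):
        means = (img * gains).reshape(-1, 3).mean(axis=0)
        target = means.mean()                           # gray-world assumption
        gains *= 1.0 + step * (target / means - 1.0)    # damped multiplicative update
    out = np.clip(img * gains, 0, 255)                  # assumes 8-bit output range
    return out.astype(np.uint8), gains
```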

  20. A MATCHING METHOD TO REDUCE THE INFLUENCE OF SAR GEOMETRIC DEFORMATION

    Directory of Open Access Journals (Sweden)

    C. Gao

    2018-04-01

    There are large geometrical deformations in SAR images, including foreshortening, layover and shadow, which lead to SAR image matching with low accuracy. Especially in complex terrain areas, control points are difficult to obtain and matching is difficult to achieve. Considering the impact of geometric distortions in SAR image pairs, a matching algorithm combining speeded up robust features (SURF) and summed normalized cross correlation (SNCC) is proposed, which can avoid the influence of SAR geometric deformation. Firstly, the SURF algorithm is utilized to predict the search area. Then the matching point pairs are selected based on summed normalized cross correlation. Finally, false match points are eliminated by a bidirectional consistency check. The SURF algorithm can control the range of matching points, matching points extracted from the deformation area are eliminated, and matching points with a stable and even distribution are obtained. The experimental results demonstrate that the proposed algorithm has high precision and can effectively avoid the effect of geometric distortion on SAR image matching, meeting the accuracy requirements of block adjustment with sparse control points.
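
    The correlation stage can be illustrated with a plain normalized cross correlation search inside a window, as sketched below; the SURF-based prediction of that window, the summation scheme of SNCC and the bidirectional consistency check described in the abstract are not reproduced.

```python
import numpy as np

def ncc(template, window):
    """Normalized cross correlation between a template and an equally sized window."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum()) + 1e-12
    return float((t * w).sum() / denom)

def best_match(template, search_area):
    """Exhaustive NCC matching of a template inside a (predicted) search area."""
    th, tw = template.shape
    best, best_pos = -1.0, (0, 0)
    for i in range(search_area.shape[0] - th + 1):
        for j in range(search_area.shape[1] - tw + 1):
            score = ncc(template, search_area[i:i + th, j:j + tw])
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```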

  1. Parallel Algorithm of Geometrical Hashing Based on NumPy Package and Processes Pool

    Directory of Open Access Journals (Sweden)

    Klyachin Vladimir Aleksandrovich

    2015-10-01

    The article considers the problem of multi-dimensional geometric hashing. The paper describes a mathematical model of geometric hashing and considers an example of its use in point localization problems. A method of constructing the corresponding hash matrix by a parallel algorithm is considered. In this paper an algorithm of parallel geometric hashing using the «pool of processes» development pattern is proposed. The implementation of the algorithm is executed using the Python programming language and the NumPy package for manipulating multidimensional data. To implement the process pool it is proposed to use the ProcessPoolExecutor class imported from the module concurrent.futures, which is included in the distribution of the Python interpreter since version 3.2. All the solutions are presented in the paper by corresponding UML class diagrams. The designed GeomNash package includes the classes Data, Result, GeomHash and Job. The results of the developed program are presented in corresponding graphs. Also, the article presents the theoretical justification for applying a process pool to the implementation of parallel algorithms. The condition t2 > (p/(p-1))*t1 for the appropriateness of a process pool is obtained, where t1 is the time to transmit a unit of data between processes and t2 is the time to process a unit of data on one processor.
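
    A minimal sketch of the approach described (NumPy for the hash matrix, concurrent.futures.ProcessPoolExecutor for the pool) is given below; the grid resolution, coordinate range and chunking are assumptions, and the package's Data/Result/GeomHash/Job class structure is not reproduced.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

GRID = 64          # hash grid resolution per axis (assumption)
LO, HI = 0.0, 1.0  # assumed coordinate range of the points

def hash_chunk(points):
    """Count how many points of one chunk fall into each cell of a GRID x GRID grid."""
    cells = np.floor((points - LO) / (HI - LO) * GRID).astype(int)
    cells = np.clip(cells, 0, GRID - 1)
    counts = np.zeros((GRID, GRID), dtype=np.int64)
    np.add.at(counts, (cells[:, 0], cells[:, 1]), 1)
    return counts

def parallel_hash(points, workers=4):
    """Build the geometric hash matrix by distributing chunks over a process pool."""
    chunks = np.array_split(points, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partial = pool.map(hash_chunk, chunks)
    return sum(partial)

if __name__ == "__main__":           # guard required for process pools on spawn platforms
    pts = np.random.default_rng(0).random((1_000_000, 2))
    print(parallel_hash(pts).sum())  # equals the number of points
```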

  2. Robust Semi-Supervised Manifold Learning Algorithm for Classification

    Directory of Open Access Journals (Sweden)

    Mingxia Chen

    2018-01-01

    In recent years, manifold learning methods have been widely used in data classification to tackle the curse of dimensionality, since they can discover the potential intrinsic low-dimensional structures of high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms are proposed to predict the labels of the unlabeled points, taking label information into account. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first predicted, and then a regularization term is constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.

  3. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. The algorithms depend heavily on accurate estimation of the position of particles as they traverse the inner detector elements. An artificial neural network algorithm is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The method recovers otherwise lost tracks in dense environments where particles are separated by distances comparable to the size of the detector read-out elements. Such environments are highly relevant for LHC Run 2, e.g. in searches for heavy resonances. Within the scope of Run 2 track reconstruction performance and upgrades, the robustness of the neural network algorithm will be presented. The robustness has been studied by evaluating the stability of the algorithm's performance under a range of variations in the pixel detector conditions.

  4. Robust Algebraic Multilevel Methods and Algorithms

    CERN Document Server

    Kraus, Johannes

    2009-01-01

    This book deals with algorithms for the solution of linear systems of algebraic equations with large-scale sparse matrices, with a focus on problems that are obtained after discretization of partial differential equations using finite element methods. Provides a systematic presentation of the recent advances in robust algebraic multilevel methods. Can be used for advanced courses on the topic.

  5. A Self-embedding Robust Digital Watermarking Algorithm with Blind Detection

    Directory of Open Access Journals (Sweden)

    Gong Yunfeng

    2014-08-01

    In order to achieve fully blind detection for a robust watermarking algorithm, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. Firstly the original image is divided into non-overlapping image blocks and decomposition coefficients are obtained by a lifting-based wavelet transform in every image block. Secondly the low-frequency coefficients of the block images are selected and approximately represented as a product of a base matrix and a coefficient matrix using NMF. Then the feature vector representing the original image is obtained by quantizing the coefficient matrix, and finally the robust watermark is embedded in the low-frequency coefficients of the LWT by adaptive quantization. Experimental results show that the scheme is robust against common signal processing attacks, while perfect blind detection is achieved.

  6. A HYBRID ALGORITHM FOR THE ROBUST GRAPH COLORING PROBLEM

    Directory of Open Access Journals (Sweden)

    Román Anselmo Mora Gutiérrez

    2016-08-01

    A hybrid algorithm which combines mathematical programming techniques (Kruskal's algorithm and the strategy of maintaining arc consistency to solve the constraint satisfaction problem, CSP) and heuristic methods (the musical composition method and DSATUR) to solve the robust graph coloring problem (RGCP) is proposed in this paper. Experimental results show that this algorithm is better than the other algorithms presented in the literature.

  7. Robust consensus algorithm for multi-agent systems with exogenous disturbances under convergence conditions

    Science.gov (United States)

    Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong

    2014-09-01

    In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with exogenous bounded interference is resolved and the disturbances are restrained simultaneously. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimisation of our proposed rational and individual cost functions according to the goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example is employed to demonstrate that our results are effective and more capable of restraining exogenous disturbances than those in the existing literature.
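
    For readers unfamiliar with the protocol structure being controlled, the sketch below runs a plain discrete-time consensus update on a fixed graph with a bounded exogenous disturbance added to each agent; the H∞/differential-game controller design of the paper is not reproduced, and the gain, noise bound and graph are illustrative assumptions.

```python
import numpy as np

def consensus(adjacency, x0, steps=200, eps=0.1, noise=0.01, seed=0):
    """Discrete-time consensus x_i <- x_i + eps * sum_j a_ij (x_j - x_i) + w_i.

    w_i is a bounded exogenous disturbance; eps must be small relative to the
    maximum node degree for the nominal (disturbance-free) protocol to converge.
    """
    rng = np.random.default_rng(seed)
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        w = noise * rng.uniform(-1, 1, size=x.shape)   # bounded disturbance
        x = x - eps * (L @ x) + w
    return x

ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
print(consensus(ring, [1.0, 3.0, 5.0, 7.0]))   # values cluster near the average, 4.0
```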

  8. Robust Layout Synthesis of a MEM Crab-Leg Resonator Using a Constrained Genetic Algorithm

    DEFF Research Database (Denmark)

    Fan, Zhun; Achiche, Sofiane

    2007-01-01

    The research work carried out in this paper introduces a robust design method for layout synthesis of MEM resonator subject to inherent geometric uncertainties such as the fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained...

  9. Frequency-Domain Robust Performance Condition for Controller Uncertainty in SISO LTI Systems: A Geometric Approach

    Directory of Open Access Journals (Sweden)

    Vahid Raissi Dehkordi

    2009-01-01

    This paper deals with the robust performance problem of a linear time-invariant control system in the presence of robust controller uncertainty. Assuming that plant uncertainty is modeled as an additive perturbation, a geometrical approach is followed in order to find a necessary and sufficient condition for robust performance in the form of a bound on the magnitude of the controller uncertainty. This frequency domain bound is derived by converting the problem into an optimization problem, whose solution is shown to be more time-efficient than a conventional structured singular value calculation. The bound on controller uncertainty can be used in controller order reduction and implementation problems.

  10. A Prefiltered Cuckoo Search Algorithm with Geometric Operators for Solving Sudoku Problems

    Directory of Open Access Journals (Sweden)

    Ricardo Soto

    2014-01-01

    The Sudoku is a famous logic-placement game, originally popularized in Japan and today widely employed as a pastime and as a testbed for search algorithms. The classic Sudoku consists in filling a 9×9 grid, divided into nine 3×3 regions, so that each column, row, and region contains different digits from 1 to 9. This game is known to be NP-complete, and various complete and incomplete search algorithms exist that are able to solve different instances of it. In this paper, we present a new cuckoo search algorithm for solving Sudoku puzzles combining prefiltering phases and geometric operations. The geometric operators allow one to correctly move toward promising regions of the combinatorial space, while the prefiltering phases are able to delete in advance from the domains the values that do not lead to any feasible solution. This integration leads to more efficient domain filtering and, as a consequence, to a faster solving process. We illustrate encouraging experimental results where our approach noticeably competes with the best approximate methods reported in the literature.

  11. A robust color image watermarking algorithm against rotation attacks

    Science.gov (United States)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on the quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by the QWT, and then the coefficients of the four low-frequency subbands are transformed by the DCT. An original binary watermark, scrambled by the Arnold map and an iterated sine chaotic system, is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extraction. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.

  12. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    Science.gov (United States)

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value over the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which can improve the convergence rate. The encoding design and the procedure of the algorithm are given. Results of the example demonstrate that the model and algorithm are effective. Compared with the expected value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.

  13. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    Full Text Available Railway freight center location is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed. The robust optimization model takes the expected cost and the deviation value over the scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The coding design and the procedure of the algorithm are described. The results of the example demonstrate that the model and algorithm are effective. Compared with the expected value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.

  14. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    Science.gov (United States)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  15. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    Directory of Open Access Journals (Sweden)

    Yang Jian

    2007-01-01

    Full Text Available We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  16. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System

    Directory of Open Access Journals (Sweden)

    Litian Duan

    2016-11-01

    Full Text Available In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags at different time slots or frequency channels in order to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule reader operation from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: first, a geometric distribution function combined with a fairness consideration is introduced to generate feasible scheduling schemes for reader operation; after that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme, achieving a larger effective interrogation range.
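
    The role of the geometric distribution can be illustrated with the small sketch below, which draws a candidate time-slot assignment for each reader so that early slots are favoured while later slots remain possible; the parameter value and clipping rule are assumptions, not the paper's generator.

```python
import numpy as np

# Illustrative candidate-schedule generator: slot indices drawn from a geometric
# distribution (p is an assumed success probability), clipped to the slot range.
def draw_schedule(num_readers, num_slots, p=0.4, rng=None):
    rng = rng or np.random.default_rng()
    slots = rng.geometric(p, size=num_readers) - 1   # 0-based slot indices
    return np.clip(slots, 0, num_slots - 1)
```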

  17. Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    Directory of Open Access Journals (Sweden)

    M. Cedillo-Hernandez

    2015-04-01

    Full Text Available In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which is robust against several geometric and signal processing distortions. The trade-off between payload, robustness and imperceptibility is an important aspect that has to be considered when a watermarking algorithm is designed. In the proposed scheme, before being embedded into the image, the watermark signal is encoded using a convolutional encoder, which performs forward error correction and achieves better robustness. The embedding process is then carried out in the discrete cosine transform (DCT) domain of the image, using an image normalization technique to accomplish robustness against geometric and signal processing distortions. The embedded watermark code bits are extracted and decoded using the Viterbi algorithm. In order to determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequences. The quality of the watermarked image is measured using the well-known indices Peak Signal to Noise Ratio (PSNR), Visual Information Fidelity (VIF) and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is obtained by using the Normalized Color Difference (NCD) measure. The experimental results show that the proposed method provides good performance in terms of imperceptibility and robustness. A comparison between the proposed method and previously reported methods based on different techniques is also provided.
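
    The BER-based detection decision is easy to sketch; the threshold value below is an assumption chosen only for illustration.

```python
import numpy as np

# Decide watermark presence from the bit error rate between the decoded and the
# original watermark bit sequences (threshold is an illustrative assumption).
def watermark_present(original_bits, recovered_bits, threshold=0.2):
    original = np.asarray(original_bits)
    recovered = np.asarray(recovered_bits)
    ber = float(np.mean(original != recovered))
    return ber < threshold, ber
```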

  18. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  19. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  20. Intrinsic Grassmann Averages for Online Linear and Robust Subspace Learning

    DEFF Research Database (Denmark)

    Chakraborty, Rudrasis; Hauberg, Søren; Vemuri, Baba C.

    2017-01-01

    Principal Component Analysis (PCA) is a fundamental method for estimating a linear subspace approximation to high-dimensional data. Many algorithms exist in literature to achieve a statistically robust version of PCA called RPCA. In this paper, we present a geometric framework for computing the p...

  1. A multi-frame particle tracking algorithm robust against input noise

    International Nuclear Information System (INIS)

    Li, Dongning; Zhang, Yuanhui; Sun, Yigang; Yan, Wei

    2008-01-01

    The performance of a particle tracking algorithm which detects particle trajectories from discretely recorded particle positions can be substantially hindered by input noise. In this paper, a particle tracking algorithm is developed which is robust against input noise. This algorithm employs a regression method, instead of the extrapolation method used by existing algorithms, to predict future particle positions. If a trajectory cannot be linked to a particle at a frame, the algorithm can still proceed by trying to find a candidate at the next frame. The connectivity of tracked trajectories is inspected to remove the false ones. The algorithm is validated with synthetic data. The result shows that the algorithm is superior to traditional algorithms in tracking long trajectories.
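
    The regression idea can be sketched as fitting a low-order polynomial to the last few recorded positions of a trajectory and evaluating it one frame ahead; the window size and polynomial order below are assumptions, not values from the paper.

```python
import numpy as np

# Predict the next position of a trajectory by per-coordinate polynomial
# regression over a sliding window of past positions (shape (n, dims)).
def predict_next_position(positions, window=5, order=2):
    pts = np.asarray(positions, dtype=float)[-window:]
    t = np.arange(len(pts))
    next_t = len(pts)
    return np.array([np.polyval(np.polyfit(t, pts[:, d], order), next_t)
                     for d in range(pts.shape[1])])
```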

  2. Robust point matching via vector field consensus.

    Science.gov (United States)

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.

  3. A Modified LQG Algorithm (MLQG) for Robust Control of Nonlinear Multivariable Systems

    Directory of Open Access Journals (Sweden)

    Jens G. Balchen

    1993-07-01

    Full Text Available The original LQG algorithm is often characterized by its lack of robustness. This is because in the design of the estimator (Kalman filter) the process disturbance is assumed to be white noise. If the estimator is to give good estimates, the Kalman gain must be increased, which means that the estimator fails to be robust. A solution to this problem is to replace the proportional Kalman gain matrix by a dynamic PI algorithm and the proportional LQ feedback gain matrix by a PI algorithm. A tuning method is developed which facilitates the tuning of a modified LQG control system (MLQG) with only two tuning parameters.
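
    A minimal discrete-time sketch of the idea, assuming known plant matrices, is shown below: the proportional (Kalman-type) correction of the estimator is replaced by a proportional-integral correction on the output error, with Kp and Ki as the two tuning gains. This only illustrates the structure, not the MLQG design procedure itself.

```python
import numpy as np

# One step of a PI-corrected state estimator: the output error drives both a
# proportional and an integral correction term (Kp, Ki are tuning gain matrices).
def pi_observer_step(A, B, C, Kp, Ki, x_hat, err_int, u, y):
    y_err = y - C @ x_hat
    err_int = err_int + y_err
    x_next = A @ x_hat + B @ u + Kp @ y_err + Ki @ err_int
    return x_next, err_int
```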

  4. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

    Full Text Available One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause the values of received signal strength (RSS) to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or the RN-failure scenarios. The proposed fault-tolerant floor algorithm is based on the mean of the summation of the strongest RSSs obtained from the IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperforms the other floor algorithms and achieves the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm can achieve greater than 95% correct floor determination under the scenario in which 40% of RNs failed.
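
    The decision rule can be sketched as follows: for each floor, average the sum of the strongest K RSS readings from that floor's reference nodes across scans, and pick the floor with the largest value. The value of K and the data layout are assumptions, not the paper's parameters.

```python
import numpy as np

# Floor decision sketch: rss_by_floor maps a floor id to a list of scans, each
# scan being the RSS readings received from that floor's reference nodes.
def determine_floor(rss_by_floor, k=3):
    scores = {}
    for floor, scans in rss_by_floor.items():
        sums = [np.sum(np.sort(scan)[-k:]) for scan in scans if len(scan) > 0]
        scores[floor] = np.mean(sums) if sums else -np.inf
    return max(scores, key=scores.get)
```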

  5. Constant-work-space algorithms for geometric problems

    Directory of Open Access Journals (Sweden)

    Tetsuo Asano

    2011-07-01

    Full Text Available Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings by slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of a Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete (Jakoby and Tantau 2003), this constrained problem can be solved in quadratic time using only constant work-space.

  6. ROBUST-HYBRID GENETIC ALGORITHM FOR A FLOW-SHOP SCHEDULING PROBLEM (A Case Study at PT FSCM Manufacturing Indonesia)

    Directory of Open Access Journals (Sweden)

    Johan Soewanda

    2007-01-01

    Full Text Available This paper discusses the application of a Robust Hybrid Genetic Algorithm to solve a flow-shop scheduling problem. The proposed algorithm attempts to reach the minimum makespan. PT. FSCM Manufacturing Indonesia Plant 4's case was used as a test case to evaluate the performance of the proposed algorithm. The proposed algorithm was compared to Ant Colony, Genetic-Tabu, Hybrid Genetic Algorithm, and the company's algorithm. We found that the Robust Hybrid Genetic Algorithm produces statistically better results than the company's algorithm, and results comparable to those of Ant Colony, Genetic-Tabu, and Hybrid Genetic. In addition, the Robust Hybrid Genetic Algorithm required less computational time than the Hybrid Genetic Algorithm.
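
    The quantity minimized by the genetic algorithm, the makespan of a permutation flow-shop schedule, can be computed with the standard recursion sketched below (an illustrative helper, not taken from the paper).

```python
import numpy as np

# Makespan of a permutation flow-shop schedule: completion time of the last job
# on the last machine. processing_times[j][m] = time of job j on machine m.
def makespan(processing_times, job_order):
    p = np.asarray(processing_times, dtype=float)[list(job_order)]
    n_jobs, n_machines = p.shape
    completion = np.zeros((n_jobs, n_machines))
    for j in range(n_jobs):
        for m in range(n_machines):
            prev_job = completion[j - 1, m] if j > 0 else 0.0
            prev_machine = completion[j, m - 1] if m > 0 else 0.0
            completion[j, m] = max(prev_job, prev_machine) + p[j, m]
    return completion[-1, -1]
```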

  7. A methodology for the geometric design of heat recovery steam generators applying genetic algorithms

    International Nuclear Information System (INIS)

    Durán, M. Dolores; Valdés, Manuel; Rovira, Antonio; Rincón, E.

    2013-01-01

    This paper shows how the geometric design of heat recovery steam generators (HRSG) can be achieved. The method calculates the product of the overall heat transfer coefficient (U) by the area of the heat exchange surface (A) as a function of certain thermodynamic design parameters of the HRSG. A genetic algorithm is then applied to determine the best set of geometric parameters which comply with the desired UA product and, at the same time, result in a small heat exchange area and low pressure losses in the HRSG. In order to test this method, the design was applied to the HRSG of an existing plant and the results obtained were compared with the real exchange area of the steam generator. The findings show that the methodology is sound and offers reliable results even for complex HRSG designs. -- Highlights: ► The paper shows a methodology for the geometric design of heat recovery steam generators. ► Calculates product of the overall heat transfer coefficient by heat exchange area as a function of certain HRSG thermodynamic design parameters. ► It is a complement for the thermoeconomic optimization method. ► Genetic algorithms are used for solving the optimization problem

  8. A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms

    International Nuclear Information System (INIS)

    Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats

    2008-01-01

    This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD-electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods are improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique
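
    The diagonal scaling at the heart of the preconditioner can be sketched as follows; the helper below simply maps parameters to and from a scaled space in which the (estimated) Fisher information has a unit diagonal. Names and the functional form are illustrative, not the paper's code.

```python
import numpy as np

# Build scaling maps from the diagonal of the estimated Fisher information so
# that, in the scaled parameter space, the information matrix has unit diagonal.
def fisher_scaling(fisher_diagonal):
    d = np.sqrt(np.asarray(fisher_diagonal, dtype=float))
    to_scaled = lambda params: np.asarray(params) * d     # well-conditioned space
    to_physical = lambda params: np.asarray(params) / d   # back to physical units
    return to_scaled, to_physical
```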

  9. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  10. Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model

    Directory of Open Access Journals (Sweden)

    WANG Bin

    2015-06-01

    Full Text Available Based on the Newton-Gauss iterative algorithm for weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model utilizes the standardized residuals to construct the weight factor function, and a robust estimate of the square root of the variance component is obtained by introducing the median method. Therefore, robustness in both the observation and structure spaces can be achieved simultaneously. To obtain the standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression of the cofactor matrix of the WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiment indicates that the model proposed in this paper exhibits satisfactory robustness in handling gross errors in WTLS; the obtained parameters show no significant difference from the results of WTLS without gross errors. Therefore, it is superior to the robust weighted total least squares model constructed directly with residuals.
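
    The two robust ingredients mentioned above can be illustrated with a small sketch: a median-based (MAD) estimate of the residual scale and a weight factor function that down-weights observations with large standardized residuals. A Huber-type weight and the usual MAD constant are used here purely as examples; the paper's exact choices may differ.

```python
import numpy as np

# Median-based scale estimate (MAD) and a Huber-type weight factor function
# applied to standardized residuals; k is an assumed tuning constant.
def mad_scale(residuals):
    r = np.asarray(residuals, dtype=float)
    return 1.4826 * np.median(np.abs(r - np.median(r)))

def robust_weights(residuals, k=1.5):
    sigma = mad_scale(residuals)
    u = np.abs(np.asarray(residuals, dtype=float)) / sigma
    return np.where(u <= k, 1.0, k / u)
```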

  11. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Science.gov (United States)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, it is a challenging task to robustly reconstruct the complex 3D geometry of automobile castings. However, 3D scanning data are usually corrupted by noise and the scanning resolution is low; these effects normally lead to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. In order to relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of automobile castings. The proposed neural network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around a key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into the 3D space to learn the geometric feature representation. Finally, the training labels are automatically generated for deep learning based on an existing RGB-D reconstruction algorithm, which accesses the same global key-matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well and retained 81.6% matching accuracy at 95% closed loop. For sparse geometric castings with initial matching failure, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.

  12. ROBUST ALGORITHMS OF PARAMETRIC ESTIMATION IN SOME STABILIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A.A. Vedyakov

    2016-07-01

    Full Text Available Subject of Research. The paper considers the tasks of keeping dynamic systems in a stable state by ensuring the stability of the trivial solution for various dynamic systems in the learning (tuning) mode with the aid of parameter adjustment. Method. The problems are solved by applying the ideology of constructing robust finitely convergent algorithms. Main Results. The concepts of parametric algorithmization of stability and steady asymptotic stability are introduced, and results are presented on the synthesis of coarsened gradient algorithms that solve the posed problems in a finite number of iterations. Practical Relevance. The results may be used for solving practical stabilization tasks in the operation of various engineering structures and devices.

  13. Numerical nonlinear complex geometrical optics algorithm for the 3D Calderón problem

    DEFF Research Database (Denmark)

    Delbary, Fabrice; Knudsen, Kim

    2014-01-01

    to the generalized Laplace equation. The 3D problem was solved in theory in late 1980s using complex geometrical optics solutions and a scattering transform. Several approximations to the reconstruction method have been suggested and implemented numerically in the literature, but here, for the first time, a complete...... computer implementation of the full nonlinear algorithm is given. First a boundary integral equation is solved by a Nystrom method for the traces of the complex geometrical optics solutions, second the scattering transform is computed and inverted using fast Fourier transform, and finally a boundary value...

  14. An algorithm for robust non-linear analysis of radioimmunoassays and other bioassays

    International Nuclear Information System (INIS)

    Normolle, D.P.

    1993-01-01

    The four-parameter logistic function is an appropriate model for many types of bioassays that have continuous response variables, such as radioimmunoassays. By modelling the variance of replicates in an assay, one can modify the usual parameter estimation techniques (for example, Gauss-Newton or Marquardt-Levenberg) to produce parameter estimates for the standard curve that are robust against outlying observations. This article describes the computation of robust (M-) estimates for the parameters of the four-parameter logistic function. It describes techniques for modelling the variance structure of the replicates, modifications to the usual iterative algorithms for parameter estimation in non-linear models, and a formula for inverse confidence intervals. To demonstrate the algorithm, the article presents examples where the robustly estimated four-parameter logistic model is compared with the logit-log and four-parameter logistic models with least-squares estimates. (author)
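
    As an illustration of robust fitting for this model (not the article's specific M-estimation algorithm), the sketch below fits the four-parameter logistic curve with a bounded-influence soft-L1 loss, so that outlying replicates have limited effect on the standard-curve parameters.

```python
import numpy as np
from scipy.optimize import least_squares

# Four-parameter logistic: a = response at zero dose, d = response at infinite
# dose, c = ED50, b = slope. Robustness comes from the soft-L1 loss.
def four_pl(x, a, b, c, d):
    return d + (a - d) / (1.0 + (x / c) ** b)

def fit_4pl_robust(x, y, p0=(1.0, 1.0, 1.0, 0.0)):
    x, y = np.asarray(x, float), np.asarray(y, float)
    residuals = lambda p: four_pl(x, *p) - y
    fit = least_squares(residuals, p0, loss='soft_l1')
    return fit.x   # (a, b, c, d)
```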

  15. An Evolutionary Approach for Robust Layout Synthesis of MEMS

    DEFF Research Database (Denmark)

    Fan, Zhun; Wang, Jiachuan; Goodman, Erik

    2005-01-01

    The paper introduces a robust design method for layout synthesis of MEM resonators subject to inherent geometric uncertainties such as the fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained optimisation problem after certain...... assumptions and treated with multiobjective genetic algorithm (MOGA), a special type of evolutionary computing approaches. Case study based on layout synthesis of a comb-driven MEM resonator shows that the approach proposed in this paper can lead to design results that meet the target performance and are less...

  16. An improved geometric algorithm for calculating the topology of lattice gauge fields

    International Nuclear Information System (INIS)

    Pugh, D.J.R.; Teper, M.; Oxford Univ.

    1989-01-01

    We implement the algorithm of Phillips and Stone on a hypercubic, periodic lattice and show that at currently accessible couplings the SU(2) topological charge so calculated is dominated by short-distance fluctuations. We propose and test an improvement to rid the measure of such lattice artifacts. We find that the improved algorithm produces a topological susceptibility that is consistent with that obtained by the alternative cooling method, thus resolving the controversial discrepancy between geometric and cooling methods. We briefly discuss the reasons for this and point out that our improvement is likely to be particularly effective when applied to the case of SU(3). (orig.)

  17. Mobile Watermarking against Geometrical Distortions

    Directory of Open Access Journals (Sweden)

    Jing Zhang

    2015-08-01

    Full Text Available Mobile watermarking robust to geometrical distortions is still a great challenge. In mobile watermarking, efficient computation is necessary because mobile devices have very limited resources due to power consumption. In this paper, we propose a low-complexity geometrically resilient watermarking approach based on the optimal tradeoff circular harmonic function (OTCHF) correlation filter and the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter. Owing to the rotation, translation and scale tolerance properties of these two kinds of filters, the proposed watermark detector can be robust to geometrical attacks. The embedded watermark is weighted by a perceptual mask which matches very well the properties of the human visual system. Before correlation, a whitening process is utilized to improve watermark detection reliability. Experimental results demonstrate that the proposed watermarking approach is computationally efficient and robust to geometrical distortions.

  18. Robust real-time pattern matching using bayesian sequential hypothesis testing.

    Science.gov (United States)

    Pele, Ofir; Werman, Michael

    2008-08-01

    This paper describes a method for robust real time pattern matching. We first introduce a family of image distance measures, the "Image Hamming Distance Family". Members of this family are robust to occlusion, small geometrical transforms, light changes and non-rigid deformations. We then present a novel Bayesian framework for sequential hypothesis testing on finite populations. Based on this framework, we design an optimal rejection/acceptance sampling algorithm. This algorithm quickly determines whether two images are similar with respect to a member of the Image Hamming Distance Family. We also present a fast framework that designs a near-optimal sampling algorithm. Extensive experimental results show that the sequential sampling algorithm performance is excellent. Implemented on a Pentium 4 3 GHz processor, detection of a pattern with 2197 pixels, in 640 x 480 pixel frames, where in each frame the pattern rotated and was highly occluded, proceeds at only 0.022 seconds per frame.
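
    The sequential idea can be sketched as follows: sample pixel positions one at a time, maintain a running mismatch (Hamming) rate, and stop early once the pattern can be confidently accepted or rejected. The thresholds and sample counts below are assumptions for illustration, not the paper's optimal Bayesian design.

```python
import numpy as np

# Sequential accept/reject sketch over a running Hamming-distance estimate.
def sequential_match(pattern, window, max_samples=200, accept=0.2, reject=0.5,
                     min_samples=30, rng=None):
    rng = rng or np.random.default_rng()
    idx = rng.permutation(pattern.size)[:max_samples]
    mismatches = 0
    rate = 1.0
    for k, i in enumerate(idx, start=1):
        mismatches += int(pattern.flat[i] != window.flat[i])
        rate = mismatches / k
        if k >= min_samples:
            if rate <= accept:
                return True      # similar enough: accept early
            if rate >= reject:
                return False     # clearly different: reject early
    return rate <= (accept + reject) / 2
```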

  19. A Robust Planning Algorithm for Groups of Entities in Discrete Spaces

    Directory of Open Access Journals (Sweden)

    Igor Wojnicki

    2015-07-01

    Full Text Available Automated planning is a well-established field of artificial intelligence (AI), with applications in route finding, robotics and operational research, among others. The task of developing a plan is often solved by finding a path in a graph representing the search domain; a robust plan consists of numerous paths that can be chosen if the execution of the best (optimal) one fails. While robust planning for a single entity is rather simple, development of a robust plan for multiple entities in a common environment can lead to combinatorial explosion. This paper proposes a novel hybrid approach, joining heuristic search and the wavefront algorithm, to provide a plan featuring robustness in areas where it is needed, while maintaining a low level of computational complexity.
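
    As background for the wavefront component, the toy sketch below performs a breadth-first expansion from the goal over a 4-connected grid, producing a distance field from which any cell can reach the goal by following decreasing values; it illustrates the classic wavefront idea, not the paper's hybrid planner.

```python
from collections import deque

# Wavefront (grassfire) distance field: grid[r][c] == 0 for free cells, 1 for
# obstacles; returns {cell: distance-to-goal} for every reachable free cell.
def wavefront(grid, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist
```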

  20. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented, which is robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem that contains a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted into a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
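
    For reference, the standard (non-robust) MVDR solution that such robust formulations modify is w = R^{-1} a / (a^H R^{-1} a); a minimal sketch is given below.

```python
import numpy as np

# Classical MVDR weights for covariance matrix R and steering vector a.
def mvdr_weights(R, a):
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)
```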

  1. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    Science.gov (United States)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. In the proposed method, only negligible visual distortion is observed in the host model. Initially, a preprocessing is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the capability of retrieving the information after attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing and quantization attacks. It is also robust against geometric transformation and vertex and face reordering attacks. Moreover, the proposed algorithm is designed so that it is robust against the cropping attack. Simulation results confirm that the watermarked models undergo very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method has good performance against mesh smoothing attacks.

  2. An integer optimization algorithm for robust identification of non-linear gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Chemmangattuvalappil Nishanth

    2012-09-01

    Full Text Available Background: Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision-making processes. Advancement in high-throughput experimental techniques has initiated innovative data-driven analysis of gene regulatory networks. However, the inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions. Furthermore, robust algorithms that directly exploit basic biological traits are few. Such algorithms are expected to be efficient in their performance and robust in their prediction. Results: We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time series gene expression data in the presence of significant experimental noise and non-linear behavior. In this novel formulism, we have addressed data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we have incorporated non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the trait of sparsity of biological interactions. Towards that, the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component. The objective function is targeted to minimize the network connections subject to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental data-sets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and small discrepancy between the actual and predicted kinetic parameters.

  3. Canonical symplectic structure and structure-preserving geometric algorithms for Schrödinger-Maxwell systems

    Science.gov (United States)

    Chen, Qiang; Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei

    2017-11-01

    An infinite dimensional canonical symplectic structure and structure-preserving geometric algorithms are developed for the photon-matter interactions described by the Schrödinger-Maxwell equations. The algorithms preserve the symplectic structure of the system and the unitary nature of the wavefunctions, and bound the energy error of the simulation for all time-steps. This new numerical capability enables us to carry out first-principle based simulation study of important photon-matter interactions, such as the high harmonic generation and stabilization of ionization, with long-term accuracy and fidelity.

  4. Ant Colony Algorithm and Simulation for Robust Airport Gate Assignment

    Directory of Open Access Journals (Sweden)

    Hui Zhao

    2014-01-01

    Full Text Available Airport gate assignment is a core task of airport ground operations. Since the departure and arrival times of flights may be influenced by many random factors, the airport gate assignment scheme may encounter gate conflicts and many other problems. This paper aims at finding a robust solution for the airport gate assignment problem. A mixed integer model is proposed to formulate the problem, and an ant colony algorithm is designed to solve this model. Simulation results show that, when robustness is taken into consideration, the anti-disturbance ability of the airport gate assignment scheme is much improved.

  5. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  6. ROBUST CONTROL ALGORITHM FOR MULTIVARIABLE PLANTS WITH QUANTIZED OUTPUT

    Directory of Open Access Journals (Sweden)

    A. A. Margun

    2017-01-01

    Full Text Available The paper deals with a robust output control algorithm for multivariable plants under disturbances. A plant is described by a system of linear differential equations with known relative degrees. Plant parameters are unknown but belong to a known closed bounded set. The plant state vector is unmeasured. The plant output is measured only via a static quantizer. The control algorithm is based on the high-gain feedback method. The developed controller provides exponential convergence of the tracking error to a bounded region. The bounds of the region depend on the quantizer parameters and the value of the external disturbances. Experimental validation of the proposed control algorithm is performed with the use of the Twin Rotor MIMO System laboratory bench. This bench is a helicopter-like model with two degrees of freedom (pitch and yaw). DC motors are used as actuators. The output signals are measured via optical encoders. A mathematical model of the laboratory bench is obtained. The proposed algorithm was compared with a proportional-integral-derivative controller under output quantization. The obtained results confirm the efficiency of the proposed controller.

  7. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    Science.gov (United States)

    Zhang, Pengcheng; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Coatrieux, Jean-Louis; Li, Baosheng; Shu, Huazhong

    2013-09-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements.

  8. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    International Nuclear Information System (INIS)

    Zhang Pengcheng; Coatrieux, Jean-Louis; Shu Huazhong; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Li Baosheng

    2013-01-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements. (paper)

  9. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    Science.gov (United States)

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
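
    To make the R/t terminology concrete, the sketch below shows a standard least-squares rigid alignment (Kabsch-style) that recovers a rotation matrix and translation vector mapping one 3D point set onto another; it is background illustration only, not the paper's registration pipeline.

```python
import numpy as np

# Estimate R, t such that R @ src[i] + t ≈ dst[i] in the least-squares sense.
def rigid_transform(src, dst):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```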

  10. Assessing the Stability and Robustness of Semantic Web Services Recommendation Algorithms Under Profile Injection Attacks

    Directory of Open Access Journals (Sweden)

    GRANDIN, P. H.

    2014-06-01

    Full Text Available Recommendation systems based on collaborative filtering are open by nature, which makes them vulnerable to profile injection attacks that insert biased evaluations into the system database in order to manipulate recommendations. In this paper we evaluate the stability and robustness of collaborative filtering algorithms applied to semantic web services recommendation when submitted to random and segment profile injection attacks. We evaluated four algorithms: (1) IMEAN, which makes predictions using the average of the evaluations received by the target item; (2) UMEAN, which makes predictions using the average of the evaluations made by the target user; (3) an algorithm based on the k-nearest neighbor (k-NN) method; and (4) an algorithm based on the k-means clustering method. The experiments showed that the UMEAN algorithm is not affected by the attacks and that IMEAN is the most vulnerable of all the algorithms tested. Nevertheless, both UMEAN and IMEAN have little practical application due to the low precision of their predictions. Among the algorithms with intermediate tolerance to attacks but with good prediction performance, the algorithm based on k-NN proved to be more robust and stable than the algorithm based on k-means.

  11. Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.

    Science.gov (United States)

    Hu, Yi-Chung

    2014-01-01

    On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches are related to whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses multilayer perceptron to construct the robust nonlinear interval regression model using the genetic algorithm. Outliers beyond or beneath the data interval will impose slight effect on the determination of data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.

  12. Robust and unobtrusive algorithm based on position independence for step detection

    Science.gov (United States)

    Qiu, KeCheng; Li, MengYang; Luo, YiHan

    2018-04-01

    Running is becoming one of the most popular forms of exercise, and monitoring steps can help users better understand their running process and improve exercise efficiency. In this paper, we design and implement a robust and unobtrusive algorithm, based on position independence, for step detection under real-world conditions. It applies a Butterworth filter to suppress high-frequency interference and then employs a mathematical projection to transform the coordinate system, solving the problem of the unknown position of the smartphone. Finally, a sliding window is used to suppress false peaks. The algorithm was tested with eight participants on the Android 7.0 platform. In our experiments, the results show that the proposed algorithm achieves the desired effect regardless of device pose.
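
    A step-counting sketch in the same spirit is shown below; it uses the acceleration magnitude (instead of the paper's projection) to stay independent of phone orientation, a low-pass Butterworth filter, and simple peak picking. The sampling rate, cutoff frequency and peak parameters are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Count steps from raw 3-axis accelerometer samples (shape (n, 3), rate fs Hz).
def count_steps(acc_xyz, fs=50.0, cutoff=3.0):
    mag = np.linalg.norm(np.asarray(acc_xyz, float), axis=1)  # orientation-free
    b, a = butter(2, cutoff / (fs / 2), btype='low')          # low-pass filter
    smooth = filtfilt(b, a, mag - mag.mean())
    peaks, _ = find_peaks(smooth, height=0.5, distance=int(0.3 * fs))
    return len(peaks)
```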

  13. A fast, robust algorithm for power line interference cancellation in neural recording

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed algorithm features a highly robust operation, fast adaptation to
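
    A much simpler, non-adaptive illustration of the interference model is sketched below: the amplitude and phase of each power-line harmonic are estimated over a window by ordinary least squares on sine/cosine regressors, and the reconstructed interference is subtracted. The paper's algorithm instead tracks the fundamental frequency adaptively and updates the harmonics recursively; this sketch only conveys the harmonic-subtraction idea.

```python
import numpy as np

# Fit sin/cos amplitudes for each harmonic of f0 over the whole window and
# subtract the reconstructed power-line interference from the recording x.
def remove_powerline(x, fs, f0=50.0, n_harmonics=3):
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x)) / fs
    regressors = []
    for k in range(1, n_harmonics + 1):
        regressors += [np.sin(2 * np.pi * k * f0 * t),
                       np.cos(2 * np.pi * k * f0 * t)]
    A = np.column_stack(regressors)
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return x - A @ coeffs
```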

  14. A Robust Formant Extraction Algorithm Combining Spectral Peak Picking and Root Polishing

    Directory of Open Access Journals (Sweden)

    Seo Kwang-deok

    2006-01-01

    Full Text Available We propose a robust formant extraction algorithm that combines spectral peak picking, examination of formant locations for peak-merger checking, and root extraction. The spectral peak picking method is employed to locate the formant candidates, and root extraction is used to solve the peak-merger problem. The locations of, and distances between, the extracted formants are also utilized to efficiently detect suspected peak mergers. The proposed algorithm does not require much computation and is shown to be superior to previous formant extraction algorithms through extensive tests using the TIMIT speech database.

  15. Color Image Secret Watermarking Erase and Write Algorithm Based on SIFT

    Science.gov (United States)

    Qu, Jubao

    Using the adaptive characteristics of SIFT image features, write and erase operations are implemented for hiding and extracting a secret watermark in color images. The experimental results show that this algorithm has better imperceptibility and, at the same time, is robust against geometric attacks and common signal processing attacks.

  16. A FAST AND ROBUST ALGORITHM FOR ROAD EDGES EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    K. Qiu

    2016-06-01

    Full Text Available Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges fast and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most road edges exhibit a difference in elevation, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane using the RANSAC algorithm, and then multiple refined planes containing only pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, a serious problem is that the rough and refined planes are often extracted badly due to rough road surfaces and varying point cloud density. To eliminate the influence of rough road surfaces, a technique similar to taking the difference between the DSM (digital surface model) and the DTM (digital terrain model) is used, and we also propose a method that adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method on multiple datasets (e.g., urban roads, highways, and some rural roads). We use the same parameters throughout the experiments, and our algorithm achieves real-time processing speeds.
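
    The "rough plane" step can be illustrated with the minimal RANSAC plane-fitting sketch below; the iteration count and inlier threshold are illustrative assumptions.

```python
import numpy as np

# RANSAC plane fit: repeatedly fit a plane to 3 random points and keep the model
# with the most inliers (points within `threshold` of the plane).
def ransac_plane(points, n_iter=200, threshold=0.05, rng=None):
    rng = rng or np.random.default_rng()
    pts = np.asarray(points, float)
    best_model, best_inliers = None, np.array([], dtype=int)
    for _ in range(n_iter):
        p1, p2, p3 = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                     # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((pts - p1) @ normal)
        inliers = np.where(dist < threshold)[0]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (normal, p1), inliers
    return best_model, best_inliers
```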

  17. Designing an Iterative Learning Control Algorithm Based on Process History using limited post process geometrical information

    DEFF Research Database (Denmark)

    Endelt, Benny Ørtoft; Volk, Wolfram

    2013-01-01

    , there is a number of obstacles which need to be addressed before an industrial implementation is possible, e.g. the proposed control algorithms are often limited by the ability to sample process data with both sufficient accuracy and robustness - this lack of robust sampling technologies is one of the main barriers...

  18. A robust controller design method for feedback substitution schemes using genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Mirsha M; Hadjiloucas, Sillas; Becerra, Victor M, E-mail: s.hadjiloucas@reading.ac.uk [Cybernetics, School of Systems Engineering, University of Reading, RG6 6AY (United Kingdom)]

    2011-08-17

    Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subjected to arbitrary noise power gain and robustness constraints, a Pareto-front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization for the controller weights (MOGA). A clonal selection algorithm is used to further provide a directed search of the GA towards the Pareto front. We demonstrate that with the proposed methodology, it is possible to design higher order controllers with superior performance in terms of response time, noise power gain and robustness.

  19. Artefacts in geometric phase analysis of compound materials.

    Science.gov (United States)

    Peters, Jonathan J P; Beanland, Richard; Alexe, Marin; Cockburn, John W; Revin, Dmitry G; Zhang, Shiyong Y; Sanchez, Ana M

    2015-10-01

    The geometric phase analysis (GPA) algorithm is known as a robust and straightforward technique that can be used to measure lattice strains in high resolution transmission electron microscope (TEM) images. It is also attractive for analysis of aberration-corrected scanning TEM (ac-STEM) images that resolve every atom column, since it uses Fourier transforms and does not require real-space peak detection and assignment to appropriate sublattices. Here it is demonstrated that, in ac-STEM images of compound materials with compositionally distinct atom columns, an additional geometric phase is present in the Fourier transform. If the structure changes from one area to another in the image (e.g. across an interface), the change in this additional phase will appear as a strain in conventional GPA, even if there is no lattice strain. Strategies to avoid this pitfall are outlined. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Geometrical framework for robust portfolio optimization

    OpenAIRE

    Bazovkin, Pavel

    2014-01-01

    We consider a vector-valued multivariate risk measure that depends on the user's profile given by the user's utility. It is constructed on the basis of weighted-mean trimmed regions and represents the solution of an optimization problem. The key feature of this measure is convexity. We apply the measure to the portfolio selection problem, employing different measures of performance as objective functions in a common geometrical framework.

  1. Robust surface registration using N-points approximate congruent sets

    Directory of Open Access Journals (Sweden)

    Yao Jian

    2011-01-01

    Scans acquired by 3D sensors are typically represented in a local coordinate system. When multiple scans, taken from different locations, represent the same scene, these must be registered to a common reference frame. We propose a fast and robust registration approach to automatically align two scans by finding two sets of N-points that are approximately congruent under rigid transformation, leading to a good estimate of the transformation between their corresponding point clouds. Given two scans, our algorithm randomly searches for the best sets of congruent groups of points using a RANSAC-based approach. To successfully and reliably align two scans when there is only a small overlap, we improve the basic RANSAC random selection step by employing a weight function that approximates the probability of each pair of points in one scan matching a pair in the other. The search time to find pairs of congruent sets of N-points is greatly reduced by employing a fast search codebook based on both binary and multi-dimensional lookup tables. Moreover, we introduce a novel indicator of the overlapping region quality which is used to verify the estimated rigid transformation and to improve the alignment robustness. Our framework is general enough to incorporate and efficiently combine different point descriptors derived from geometric and texture-based feature points or scene geometrical characteristics. We also present a method to improve the matching effectiveness of texture feature descriptors by extracting them from an atlas of rectified images recovered from the scan reflectance image. Our algorithm is robust with respect to different sampling densities and also resilient to noise and outliers. We demonstrate its robustness and efficiency on several challenging scan datasets with varying degrees of noise, outliers and extent of overlap, acquired from indoor and outdoor scenarios.
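
    As a rough illustration of the rigid-alignment core (not the authors' N-point congruent-set search or codebook), the sketch below runs a plain RANSAC loop over assumed putative correspondences src[i] <-> dst[i], estimating each sample's rotation and translation with the Kabsch SVD solution and keeping the hypothesis with the most inliers.

    ```python
    import numpy as np

    def rigid_from_correspondences(src, dst):
        """Least-squares rotation R and translation t with dst ~ R @ src + t (Kabsch)."""
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        return R, cd - R @ cs

    def ransac_align(src, dst, iters=500, tol=0.05, seed=0):
        rng = np.random.default_rng(seed)
        best, best_inliers = None, 0
        for _ in range(iters):
            idx = rng.choice(len(src), size=3, replace=False)
            R, t = rigid_from_correspondences(src[idx], dst[idx])
            resid = np.linalg.norm((src @ R.T + t) - dst, axis=1)
            inliers = int((resid < tol).sum())
            if inliers > best_inliers:
                best, best_inliers = (R, t), inliers
        return best, best_inliers
    ```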

  2. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. By using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selections.
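
    To make the filter-based estimation concrete, here is a deliberately simplified one-state extended Kalman filter for SOC, assuming a linear OCV(SOC) curve and a single ohmic resistance r0; the paper's full battery models, aging effects and the PI-controlled and H∞ observers are not reproduced.

    ```python
    import numpy as np

    def ocv(soc):              # assumed open-circuit-voltage curve [V]
        return 3.2 + 0.9 * soc

    def docv_dsoc(soc):        # its slope, the measurement Jacobian
        return 0.9

    def ekf_soc(current, voltage, dt, capacity_As, r0=0.01, q=1e-7, r=1e-3):
        soc, p = 0.5, 1.0      # initial guess and error variance
        estimates = []
        for i, v in zip(current, voltage):
            # predict: coulomb counting (discharge current taken positive)
            soc = np.clip(soc - i * dt / capacity_As, 0.0, 1.0)
            p = p + q
            # update with the terminal-voltage measurement
            h = docv_dsoc(soc)
            k = p * h / (h * p * h + r)
            soc = np.clip(soc + k * (v - (ocv(soc) - r0 * i)), 0.0, 1.0)
            p = (1 - k * h) * p
            estimates.append(soc)
        return np.array(estimates)
    ```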

  3. Robust and Low-Complexity Timing Synchronization Algorithm and its Architecture for ADSRC Applications

    Directory of Open Access Journals (Sweden)

    KIM, J.

    2009-10-01

    5.9 GHz advanced dedicated short range communications (ADSRC) is a short-to-medium range communication standard that supports both public safety and private operations in roadside-to-vehicle and vehicle-to-vehicle communication environments. The core technology of the physical layer in ADSRC is orthogonal frequency division multiplexing (OFDM), which is sensitive to timing synchronization error. In this paper, a robust and low-complexity timing synchronization algorithm suitable for the ADSRC system and its efficient hardware architecture are proposed. The implementation of the proposed architecture is performed with a Xilinx Virtex-II XC2V1000 Field Programmable Gate Array (FPGA). The proposed algorithm is based on a cross-correlation technique, which is employed to detect the starting point of the short training symbol and the guard interval of the long training symbol. Synchronization error rate (SER) evaluation results and post-layout simulation results show that the proposed algorithm is efficient in high-mobility environments. The post-layout results of implementation demonstrate the robustness and low complexity of the proposed architecture.
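
    A toy version of the cross-correlation acquisition metric described above (not the paper's hardware architecture) is sketched below, assuming the complex baseband samples rx and the known short training symbol are available as NumPy arrays.

    ```python
    import numpy as np

    def coarse_timing(rx, training):
        """Return the sample index at which the training symbol best matches rx."""
        L = len(training)
        # normalised cross-correlation magnitude at each candidate offset
        metric = np.array([
            abs(np.vdot(training, rx[n:n + L])) /
            (np.linalg.norm(rx[n:n + L]) * np.linalg.norm(training) + 1e-12)
            for n in range(len(rx) - L)
        ])
        return int(np.argmax(metric))
    ```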

  4. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

    This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman Filter, which can integrate other sensory information such as position and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show an accurate tracking of the runway edges during the landing phase under various lighting conditions. Also, it suggests that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using Hardware in the Loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.

  5. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.

    Science.gov (United States)

    Huang, Shuqiang; Tao, Ming

    2017-01-22

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively avoid trapping of the population in a local optimum. With the addition of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms.
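
    The pairwise-competition update that distinguishes CSO from PSO can be sketched as follows; the gateway-deployment objective is replaced here by a generic cost function f, and the adaptive opposition-based search and parameter adaptation of the paper are omitted.

    ```python
    import numpy as np

    def cso_minimize(f, dim, n=40, iters=200, lb=-10.0, ub=10.0, phi=0.1, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(lb, ub, size=(n, dim))
        v = np.zeros((n, dim))
        for _ in range(iters):
            mean = x.mean(axis=0)
            fit = np.apply_along_axis(f, 1, x)
            order = rng.permutation(n)
            for a, b in zip(order[0::2], order[1::2]):
                w, l = (a, b) if fit[a] <= fit[b] else (b, a)   # winner / loser
                r1, r2, r3 = rng.random((3, dim))
                # the loser learns from the winner and from the swarm mean
                v[l] = r1 * v[l] + r2 * (x[w] - x[l]) + phi * r3 * (mean - x[l])
                x[l] = np.clip(x[l] + v[l], lb, ub)
        best = np.argmin(np.apply_along_axis(f, 1, x))
        return x[best], f(x[best])
    ```

    For instance, cso_minimize(lambda p: np.sum(p ** 2), dim=5) drives the swarm toward the origin; the winners are left untouched, which is what preserves swarm diversity.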

  6. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Shuqiang Huang

    2017-01-01

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively avoid trapping of the population in a local optimum. With the addition of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms.

  7. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems

    Science.gov (United States)

    Huang, Shuqiang; Tao, Ming

    2017-01-01

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively avoid trapping of the population in a local optimum. With the addition of an adaptive opposition-based search and the ability to adjust parameters dynamically, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms. PMID:28117735

  8. Particle Filter-Based Target Tracking Algorithm for Magnetic Resonance-Guided Respiratory Compensation : Robustness and Accuracy Assessment

    NARCIS (Netherlands)

    Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N

    PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized

  9. DETERMINATION ALGORITHM OF OPTIMAL GEOMETRICAL PARAMETERS FOR COMPONENTS OF FREIGHT CARS ON THE BASIS OF GENERALIZED MATHEMATICAL MODELS

    Directory of Open Access Journals (Sweden)

    O. V. Fomin

    2013-10-01

    Purpose. Presentation of the features and an example of the use of the proposed algorithm for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models, realized on a computer. Methodology. The developed approach to the search for optimal geometrical parameters can be described as the determination of the optimal decision from a selected set of possible variants. Findings. The presented application example of the proposed algorithm proved its operating capacity and efficiency of use. Originality. The procedure for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models was formalized in the paper. Practical value. Practical introduction of the research results for universal open cars allows one to reduce the tare weight of the design and accordingly to increase the carrying capacity by almost 100 kg, with an improvement in strength characteristics. Taking into account the size of the car fleet, this will provide a considerable economic effect in production and operation. The proposed approach is oriented towards widely distributed software packages (for example, Microsoft Excel), which are used by the technical services of most enterprises, and does not require additional capital investment (acquisition of specialized programs and corresponding technical staff training). This confirms the correctness of the research direction. The proposed algorithm can be used for the solution of other optimization tasks on the basis of generalized mathematical models.

  10. A generic algorithm for constructing hierarchical representations of geometric objects

    International Nuclear Information System (INIS)

    Xavier, P.G.

    1995-01-01

    For a number of years, robotics researchers have exploited hierarchical representations of geometrical objects and scenes in motion-planning, collision-avoidance, and simulation. However, few general techniques exist for automatically constructing them. We present a generic, bottom-up algorithm that uses a heuristic clustering technique to produce balanced, coherent hierarchies. Its worst-case running time is O(N^2 log N), but for non-pathological cases it is O(N log N), where N is the number of input primitives. We have completed a preliminary C++ implementation for input collections of 3D convex polygons and 3D convex polyhedra and conducted simple experiments with scenes of up to 12,000 polygons, which take only a few minutes to process. We present examples using spheres and convex hulls as hierarchy primitives
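
    A naive bottom-up construction in the same spirit (greedily merging the pair whose combined bounding sphere is smallest) can be sketched as below over point primitives; this brute-force illustration is cubic per build and ignores the paper's heuristic clustering and balancing, which are what make near O(N log N) behaviour possible.

    ```python
    import numpy as np

    def merge(s1, s2):
        """Bounding sphere (centre, radius) enclosing two spheres."""
        (c1, r1), (c2, r2) = s1, s2
        d = np.linalg.norm(c2 - c1)
        if d + r2 <= r1:
            return s1
        if d + r1 <= r2:
            return s2
        r = (d + r1 + r2) / 2.0
        c = c1 + (c2 - c1) * ((r - r1) / d)
        return (c, r)

    def build_hierarchy(points):
        # each node is (bounding sphere, left child, right child); leaves have no children
        nodes = [((np.asarray(p, float), 0.0), None, None) for p in points]
        while len(nodes) > 1:
            best = None
            for i in range(len(nodes)):
                for j in range(i + 1, len(nodes)):
                    r = merge(nodes[i][0], nodes[j][0])[1]
                    if best is None or r < best[0]:
                        best = (r, i, j)
            _, i, j = best
            parent = (merge(nodes[i][0], nodes[j][0]), nodes[i], nodes[j])
            nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [parent]
        return nodes[0]
    ```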

  11. Robust video watermarking via optimization algorithm for quantization of pseudo-random semi-global statistics

    Science.gov (United States)

    Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam

    2005-03-01

    In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.

  12. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
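
    For reference, a bare-bones Levenberg-Marquardt iteration (the standard algorithm the abstract refers to) is sketched below with a finite-difference Jacobian; the geodesic-motion improvement proposed by the authors is not implemented here.

    ```python
    import numpy as np

    def levenberg_marquardt(residuals, p0, iters=100, lam=1e-3, eps=1e-7):
        """Minimise sum(residuals(p)**2) starting from p0; residuals returns a vector."""
        p = np.asarray(p0, dtype=float)
        for _ in range(iters):
            r = residuals(p)
            J = np.column_stack([
                (residuals(p + eps * e) - r) / eps for e in np.eye(len(p))
            ])
            g = J.T @ r
            step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -g)
            if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
                p, lam = p + step, lam * 0.5   # accept the step, trust the model more
            else:
                lam *= 10.0                    # reject the step, damp harder
        return p
    ```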

  13. Geometric reconstruction methods for electron tomography

    DEFF Research Database (Denmark)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts...... and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed...

  14. Weighing Efficiency-Robustness in Supply Chain Disruption by Multi-Objective Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Tong Shu

    2016-03-01

    This paper investigates various supply chain disruptions in terms of scenario planning, including node disruption and chain disruption; namely, disruptions in distribution centers and disruptions between manufacturing centers and distribution centers. Meanwhile, it also focuses on the simultaneous disruption on one node or a number of nodes, simultaneous disruption in one chain or a number of chains and the corresponding mathematical models and exemplification in relation to numerous manufacturing centers and diverse products. Robustness of the design of the supply chain network is examined by weighing efficiency against robustness during supply chain disruptions. Efficiency is represented by operating cost; robustness is indicated by the expected disruption cost and the weighing issue is calculated by the multi-objective firefly algorithm for consistency in the results. It has been shown that the total cost achieved by the optimal target function is lower than that at the most effective time of supply chains. In other words, the decrease of expected disruption cost by improving robustness in supply chains is greater than the increase of operating cost by reducing efficiency, thus leading to cost advantage. Consequently, by approximating the Pareto Front Chart of weighing between efficiency and robustness, enterprises can choose appropriate efficiency and robustness for their longer-term development.

  15. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  16. Fast and robust ray casting algorithms for virtual X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Duvauchelle, P.; Letang, J.M.; Babot, D.

    2006-01-01

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation
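
    The central quantity in such ray casting is the path length travelled by a ray inside each object; a minimal sketch for a single analytic primitive (a sphere) is shown below, with attenuation then given by exp(-mu * length). Mesh or CSG traversal, scattering and secondary emission from the paper are not modelled.

    ```python
    import numpy as np

    def path_length_in_sphere(origin, direction, centre, radius):
        """Chord length of the forward ray origin + t*direction inside the sphere."""
        d = np.asarray(direction, float)
        d = d / np.linalg.norm(d)
        oc = np.asarray(origin, float) - np.asarray(centre, float)
        b = np.dot(oc, d)
        disc = b * b - (np.dot(oc, oc) - radius * radius)
        if disc <= 0.0:
            return 0.0                       # ray misses the sphere
        t1, t2 = -b - np.sqrt(disc), -b + np.sqrt(disc)
        t1, t2 = max(t1, 0.0), max(t2, 0.0)  # clip to the forward half-line
        return t2 - t1
    ```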

  17. Fast and robust ray casting algorithms for virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: Nicolas.Freud@insa-lyon.fr; Duvauchelle, P. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Letang, J.M. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2006-07-15

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation.

  18. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    Science.gov (United States)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or
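
    As background, the classic GTM correction that sGTM refines can be sketched as follows: blur each region mask with the scanner PSF to obtain its regional spread function (RSF), assemble the transfer matrix of mutual spill-over, and solve the resulting linear system for the true regional means. A Gaussian PSF is assumed here, and the sGTM spill-over-between-RSFs variant itself is not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gtm_correct(pet, region_masks, psf_sigma_vox):
        """pet: 3D array; region_masks: list of boolean 3D arrays; returns corrected means."""
        rsfs = [gaussian_filter(m.astype(float), psf_sigma_vox) for m in region_masks]
        n = len(region_masks)
        gtm = np.empty((n, n))
        observed = np.empty(n)
        for i, mask in enumerate(region_masks):
            observed[i] = pet[mask].mean()
            for j in range(n):
                gtm[i, j] = rsfs[j][mask].mean()   # spill of region j into region i
        return np.linalg.solve(gtm, observed)
    ```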

  19. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Caldwell, Curtis; Kusano, Maggie; Poon, Ian

    2012-01-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or

  20. Robust PD Sway Control of a Lifted Load for a Crane Using a Genetic Algorithm

    Science.gov (United States)

    Kawada, Kazuo; Sogo, Hiroyuki; Yamamoto, Toru; Mada, Yasuhiro

    PID control schemes continue to be widely used for most industrial control systems. This is mainly because PID controllers have simple control structures, and are simple to maintain and tune. However, it is difficult to find a set of suitable control parameters in the case of time-varying and/or nonlinear systems. For such a problem, the robust controller has been proposed. Although it is important to choose a suitable nominal model in designing the robust controller, this is not usually easy. In this paper, a new robust PD controller design scheme is proposed, which utilizes a genetic algorithm.

  1. An outlook on robust model predictive control algorithms : Reflections on performance and computational aspects

    NARCIS (Netherlands)

    Saltik, M.B.; Özkan, L.; Ludlage, J.H.A.; Weiland, S.; Van den Hof, P.M.J.

    2018-01-01

    In this paper, we discuss the model predictive control algorithms that are tailored for uncertain systems. Robustness notions with respect to both deterministic (or set based) and stochastic uncertainties are discussed and contributions are reviewed in the model predictive control literature. We

  2. Robust Fault-Tolerant Control for Satellite Attitude Stabilization Based on Active Disturbance Rejection Approach with Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Song

    2014-01-01

    This paper proposes a robust fault-tolerant control algorithm for satellite stabilization based on an active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurement of the reaction flywheels is adopted for fault detection. If any reaction flywheel fault is detected, the corresponding faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite is working safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm so as to improve the performance of the attitude control system. A series of simulation experiment results demonstrate the performance superiority of the proposed robust fault-tolerant control algorithm.

  3. Robustness of holonomic quantum gates

    International Nuclear Information System (INIS)

    Solinas, P.; Zanardi, P.; Zanghi, N.

    2005-01-01

    Full text: If the driving field fluctuates during the quantum evolution, this produces errors in the applied operator. The holonomic (and geometrical) quantum gates are believed to be robust against some kinds of noise. Because of their geometrical dependence, the holonomic operators can be robust against this kind of noise; in fact, if the fluctuations are fast enough they cancel out, leaving the final operator unchanged. I present numerical studies of holonomic quantum gates subject to this parametric noise; the fidelity between the noisy and the ideal evolution is calculated for different noise correlation times. The holonomic quantum gates seem robust not only for fast fluctuating fields but also for slow fluctuating fields. These results can be explained as due to the geometrical feature of the holonomic operator: for fast fluctuating fields the fluctuations are canceled out, while for slow fluctuating fields the fluctuations do not perturb the loop in the parameter space. (author)

  4. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

    In order to improve the robustness and imperceptibility in practical application, a novel audio watermarking algorithm with strong robustness is proposed by exploring the multi-resolution characteristic of the discrete wavelet transform (DWT) and the energy compaction capability of the discrete cosine transform (DCT). The human auditory system is insensitive to minor changes in the frequency components of the audio signal, so the watermarks can be embedded by slightly modifying the frequency components of the audio signal. The audio fragments segmented from the cover audio signal are decomposed by DWT to obtain several groups of wavelet coefficients with different frequency bands, and then the fourth-level detail coefficients are selected and divided into a former packet and a latter packet, which are subjected to DCT to get two sets of transform domain coefficients (TDC), respectively. Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to a special embedding rule. The watermark extraction is blind, without the carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness when resisting various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.
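
    A sketch of the embedding path described above (level-4 DWT detail coefficients, split into two packets, DCT, adjustment of the average amplitudes) is given below. The record does not state the exact embedding rule, so the rule used here (bit 1 pushes the former packet's mean |DCT| above the latter's by a margin, bit 0 the reverse) is an assumption for illustration; pywt and SciPy are assumed to be available, and the segment is assumed long enough for a level-4 decomposition.

    ```python
    import numpy as np
    import pywt
    from scipy.fft import dct, idct

    def embed_bit(segment, bit, margin=0.02, wavelet='db4'):
        coeffs = pywt.wavedec(segment, wavelet, level=4)
        d4 = coeffs[1]                               # fourth-level detail coefficients
        half = len(d4) // 2
        a = dct(d4[:half], norm='ortho')             # former packet in the DCT domain
        b = dct(d4[half:], norm='ortho')             # latter packet in the DCT domain
        ma, mb = np.mean(np.abs(a)), np.mean(np.abs(b))
        target = (mb + margin) if bit else max(mb - margin, 1e-9)
        a *= target / (ma + 1e-12)                   # rescale the former packet's average amplitude
        coeffs[1] = np.concatenate([idct(a, norm='ortho'), idct(b, norm='ortho')])
        return pywt.waverec(coeffs, wavelet)[:len(segment)]
    ```

    Detection would simply recompute the two packet means and compare them, which is why no carrier signal is needed at the extractor.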

  5. Collateral missing value imputation: a new robust missing value estimation algorithm for microarray data.

    Science.gov (United States)

    Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S

    2005-05-15

    Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate these values as accurately as possible before using these algorithms. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time-series (ovarian cancer) datasets and one time-series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation capability of missing values compared with other methods for both series types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE

  6. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    Science.gov (United States)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feedforward part and (ii) a feedback part. The feedforward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.

  7. Reconstruction of an InAs nanowire using geometric tomography

    DEFF Research Database (Denmark)

    Pennington, Robert S.; König, Stefan; Alpers, Andreas

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality....... When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize....

  8. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    Science.gov (United States)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). At first, we discuss the combination of bundle adjustment and GAs that we have proposed in order to improve 3D reconstruction efficiency and success. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which reliably increases the convergence rate. The extracted geometric constraints are used first to obtain an estimated value of the focal length that helps us in the initialization step. Matching of homologous points and constraints is used to estimate the 3D model. In fact, our new method gives us a lot of advantages: reducing the number of estimated parameters in the optimization step, decreasing the number of images used, saving time and stabilizing the good quality of the 3D results. At the end, without any prior information about our 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand, in an easy way. Various data and examples are used to highlight the efficiency and competitiveness of our approach.

  9. Autopiquer - a Robust and Reliable Peak Detection Algorithm for Mass Spectrometry.

    Science.gov (United States)

    Kilgour, David P A; Hughes, Sam; Kilgour, Samantha L; Mackay, C Logan; Palmblad, Magnus; Tran, Bao Quoc; Goo, Young Ah; Ernst, Robert K; Clarke, David J; Goodlett, David R

    2017-02-01

    We present a simple algorithm for robust and unsupervised peak detection by determining a noise threshold in isotopically resolved mass spectrometry data. Solving this problem will greatly reduce the subjective and time-consuming manual picking of mass spectral peaks and so will prove beneficial in many research applications. The Autopiquer approach uses autocorrelation to test for the presence of (isotopic) structure in overlapping windows across the spectrum. Within each window, a noise threshold is optimized to remove the most unstructured data, whilst keeping as much of the (isotopic) structure as possible. This algorithm has been successfully demonstrated for both peak detection and spectral compression on data from many different classes of mass spectrometer and for different sample types, and this approach should also be extendible to other types of data that contain regularly spaced discrete peaks.
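
    The autocorrelation test at the heart of the approach can be illustrated roughly as follows, assuming an evenly sampled intensity array and a known expected isotope spacing in samples; window size, spacing and threshold are placeholder values, and the threshold-optimisation loop of the published method is not reproduced.

    ```python
    import numpy as np

    def has_isotopic_structure(window, spacing_pts, min_score=0.3):
        """True if the window shows a strong autocorrelation peak at the isotope spacing."""
        x = window - window.mean()
        denom = np.dot(x, x)
        if denom == 0.0:
            return False
        score = np.dot(x[:-spacing_pts], x[spacing_pts:]) / denom
        return score > min_score

    def structured_windows(intensities, win=2048, spacing_pts=100):
        # half-overlapping windows flagged as containing (isotopic) structure
        return [i for i in range(0, len(intensities) - win, win // 2)
                if has_isotopic_structure(intensities[i:i + win], spacing_pts)]
    ```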

  10. Robust state feedback controller design of STATCOM using chaotic optimization algorithm

    Directory of Open Access Journals (Sweden)

    Safari Amin

    2010-01-01

    In this paper, a new technique for the design of a robust state feedback controller for a static synchronous compensator (STATCOM) using a Chaotic Optimization Algorithm (COA) is presented. The design is formulated as an optimization problem which is solved by the COA. Since chaotic planning enjoys reliability, ergodicity and a stochastic feature, the proposed technique performs chaos mapping using Lozi map chaotic sequences, which increases its convergence rate. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results reveal that the proposed controller has an excellent capability in damping power system low frequency oscillations and greatly enhances the dynamic stability of power systems. Moreover, the system performance analysis under different operating conditions shows that the phase-based controller is superior compared to the magnitude-based controller.

  11. Robust Image Hashing Using Radon Transform and Invariant Features

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2016-09-01

    A robust image hashing method based on the Radon transform and invariant features is proposed for image authentication, image retrieval, and image detection. Specifically, an input image is first converted into a counterpart with a normalized size. Then the invariant centroid algorithm is applied to obtain the invariant feature point and the surrounding circular area, and the Radon transform is employed to acquire the mapping coefficient matrix of the area. Finally, the hashing sequence is generated by combining the feature vectors and the invariant moments calculated from the coefficient matrix. Experimental results show that this method can resist not only normal image processing operations but also some geometric distortions. Comparisons of receiver operating characteristic (ROC) curves indicate that the proposed method outperforms some existing methods in the trade-off between perceptual robustness and discrimination.

  12. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    Science.gov (United States)

    Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments from simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography society with an easy-to-use and robust algorithm for DT.

  13. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    Science.gov (United States)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time

  14. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...

  15. Mechanical Model of Geometric Cell and Topological Algorithm for Cell Dynamics from Single-Cell to Formation of Monolayered Tissues with Pattern

    KAUST Repository

    Kachalo, Së ma; Naveed, Hammad; Cao, Youfang; Zhao, Jieling; Liang, Jie

    2015-01-01

    development, and other emerging behavior. Here we describe a cell model and an efficient geometric algorithm for studying the dynamic process of tissue formation in 2D (e.g. epithelial tissues). Our approach improves upon previous methods by incorporating

  16. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves better ROC curves, AUC values, and background-anomaly separation than some of the other state-of-the-art anomaly detection methods, and is easy to implement

  17. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noises in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms from the visual point of view and in Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  18. Ensemble of data-driven prognostic algorithms for robust prediction of remaining useful life

    International Nuclear Information System (INIS)

    Hu Chao; Youn, Byeng D.; Wang Pingfeng; Taek Yoon, Joung

    2012-01-01

    Prognostics aims at determining whether a failure of an engineered system (e.g., a nuclear power plant) is impending and estimating the remaining useful life (RUL) before the failure occurs. The traditional data-driven prognostic approach is to construct multiple candidate algorithms using a training data set, evaluate their respective performance using a testing data set, and select the one with the best performance while discarding all the others. This approach has three shortcomings: (i) the selected standalone algorithm may not be robust; (ii) it wastes the resources for constructing the algorithms that are discarded; (iii) it requires the testing data in addition to the training data. To overcome these drawbacks, this paper proposes an ensemble data-driven prognostic approach which combines multiple member algorithms with a weighted-sum formulation. Three weighting schemes, namely the accuracy-based weighting, diversity-based weighting and optimization-based weighting, are proposed to determine the weights of member algorithms. The k-fold cross validation (CV) is employed to estimate the prediction error required by the weighting schemes. The results obtained from three case studies suggest that the ensemble approach with any weighting scheme gives more accurate RUL predictions compared to any sole algorithm when member algorithms producing diverse RUL predictions have comparable prediction accuracy and that the optimization-based weighting scheme gives the best overall performance among the three weighting schemes.
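
    The accuracy-based weighting scheme can be sketched as follows: estimate each member algorithm's k-fold cross-validation error and weight its RUL prediction by the inverse of that error. Member models are assumed here to expose a generic scikit-learn-style fit/predict interface; the diversity- and optimization-based schemes from the paper are not shown.

    ```python
    import numpy as np
    from sklearn.model_selection import KFold

    def accuracy_weights(members, X, y, k=5):
        """Inverse-CV-error weights for a list of fit/predict member models."""
        errors = np.zeros(len(members))
        for train, test in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
            for m, model in enumerate(members):
                model.fit(X[train], y[train])
                errors[m] += np.mean((model.predict(X[test]) - y[test]) ** 2)
        w = 1.0 / (errors + 1e-12)
        return w / w.sum()

    def ensemble_rul(members, weights, X_new):
        """Weighted-sum RUL prediction from the member algorithms."""
        preds = np.column_stack([m.predict(X_new) for m in members])
        return preds @ weights
    ```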

  19. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Science.gov (United States)

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  20. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme...... for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed....

  1. Computationally Efficient Robust Color Image Watermarking Using Fast Walsh Hadamard Transform

    Directory of Open Access Journals (Sweden)

    Suja Kalarikkal Pullayikodi

    2017-10-01

    A watermark is the copy deterrence mechanism used in a multimedia signal that is to be protected from hacking and piracy, in such a way that it can later be extracted from the watermarked signal by the decoder. Watermarking can be used in various applications such as authentication, video indexing, copyright protection and access control. In this paper a new CDMA (Code Division Multiple Access)-based robust watermarking algorithm using a customized 8 × 8 Walsh Hadamard Transform is proposed for color images, and a detailed performance and robustness analysis has been performed. The paper studies in detail the effect of spreading code length, number of spreading codes and type of spreading codes on the performance of the watermarking system. Compared to existing techniques the proposed scheme is computationally more efficient and consumes much less time for execution. Furthermore, the proposed scheme is robust and survives most of the common signal processing and geometric attacks.
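
    The CDMA idea behind such a scheme (not the authors' exact transform pipeline) can be sketched as below: each watermark bit is spread with a row of an 8 × 8 Walsh-Hadamard matrix and superimposed on host transform coefficients, and correlation with the same rows recovers the bits. Colour handling, block transforms and perceptual gains are simplified assumptions, and host interference is ignored in this toy extractor.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def embed(host_coeffs, bits, gain=0.5):
        H = hadamard(8).astype(float)           # 8 orthogonal spreading codes of length 8
        mark = np.zeros_like(host_coeffs, dtype=float)
        for i, bit in enumerate(bits):
            code = H[i % 8]
            mark[i * 8:(i + 1) * 8] = gain * (1 if bit else -1) * code
        return host_coeffs + mark

    def extract(watermarked, n_bits):
        H = hadamard(8).astype(float)
        # correlate each 8-sample block with its spreading code; sign gives the bit
        return [int(np.dot(watermarked[i * 8:(i + 1) * 8], H[i % 8]) > 0)
                for i in range(n_bits)]
    ```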

  2. A content-based digital image watermarking scheme resistant to local geometric distortions

    International Nuclear Information System (INIS)

    Yang, Hong-ying; Chen, Li-li; Wang, Xiang-yang

    2011-01-01

    Geometric distortion is known as one of the most difficult attacks to resist, as it can desynchronize the location of the watermark and hence cause incorrect watermark detection. Geometric distortion can be decomposed into two classes: global affine transforms and local geometric distortions. Most countermeasures proposed in the literature only address the problem of global affine transforms. It is a challenging problem to design a robust image watermarking scheme against local geometric distortions. In this paper, we propose a new content-based digital image watermarking scheme with good visual quality and reasonable resistance against local geometric distortions. Firstly, the robust feature points, which can survive various common image processing and global affine transforms, are extracted by using a multi-scale SIFT (scale invariant feature transform) detector. Then, the affine covariant local feature regions (LFRs) are constructed adaptively according to the feature scale and local invariant centroid. Finally, the digital watermark is embedded into the affine covariant LFRs by modulating the magnitudes of discrete Fourier transform (DFT) coefficients. By binding the watermark with the affine covariant LFRs, the watermark detection can be done without synchronization error. Experimental results show that the proposed image watermarking is not only invisible and robust against common image processing operations such as sharpening, noise addition and JPEG compression, but also robust against global affine transforms and local geometric distortions.

  3. Artefacts in geometric phase analysis of compound materials

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Jonathan J.P., E-mail: j.j.p.peters@warwick.ac.uk [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom); Beanland, Richard; Alexe, Marin [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom); Cockburn, John W.; Revin, Dmitry G.; Zhang, Shiyong Y. [Department of Physics and Astronomy, University of Sheffield, Hounsfield Road, Sheffield S3 7RH (United Kingdom); Sanchez, Ana M., E-mail: a.m.sanchez@warwick.ac.uk [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom)

    2015-10-15

    The geometric phase analysis (GPA) algorithm is known as a robust and straightforward technique that can be used to measure lattice strains in high resolution transmission electron microscope (TEM) images. It is also attractive for analysis of aberration-corrected scanning TEM (ac-STEM) images that resolve every atom column, since it uses Fourier transforms and does not require real-space peak detection and assignment to appropriate sublattices. Here it is demonstrated that, in ac-STEM images of compound materials with compositionally distinct atom columns, an additional geometric phase is present in the Fourier transform. If the structure changes from one area to another in the image (e.g. across an interface), the change in this additional phase will appear as a strain in conventional GPA, even if there is no lattice strain. Strategies to avoid this pitfall are outlined. - Highlights: • GPA is shown to produce incorrect strains when applied to images of compound materials. • A mathematical description is laid out for why GPA can produce artefacts. • The artefact is demonstrated using experimental and simulated data. • A ‘rule’ is set to avoid this artefact in GPA.

  4. A robust star identification algorithm with star shortlisting

    Science.gov (United States)

    Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon

    2018-05-01

    A star tracker provides the most accurate attitude solution in terms of arc seconds compared to the other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space (LIS)" mode. Star pattern recognition, also known as star identification algorithm, forms the most crucial part of a star tracker in the LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. Firstly, the star IDs are shortlisted based on worst-case patch mismatch, and later stars are identified in the image by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images having magnitude uncertainty, noise stars, positional deviation, and varying size of the field of view. The proposed idea is also benchmarked with the state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on the 3104 real star images captured by a star tracker SST-20S currently mounted on a satellite. The proposed technique can achieve an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results depict that the proposed technique is highly robust and achieves a high speed of identification suitable for actual space applications.

  5. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.

  6. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    Science.gov (United States)

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied for the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
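    For low orders, the surface-based idea can be illustrated with the classical divergence-theorem decomposition into signed tetrahedra, which yields the zeroth- and first-order moments of a watertight, consistently oriented triangle mesh. This is a generic sketch, not the paper's exact or series algorithms.

```python
import numpy as np

def mesh_moments(vertices, faces):
    """Zeroth- and first-order geometric moments of a closed triangle mesh.

    Each face is joined to the origin to form a signed tetrahedron; summing the
    signed contributions gives the volume and the volume-weighted centroid
    (assumes a consistently oriented, watertight mesh)."""
    volume = 0.0
    first = np.zeros(3)
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        v = np.dot(a, np.cross(b, c)) / 6.0      # signed tetrahedron volume
        volume += v
        first += v * (a + b + c) / 4.0           # integral of (x, y, z) over the tetrahedron
    centroid = first / volume
    return volume, first, centroid
```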

  7. ALGORITHM FOR THE AUTOMATIC ESTIMATION OF AGRICULTURAL TREE GEOMETRIC PARAMETERS USING AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    E. Hadaś

    2016-06-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired for orchard areas allow orchard structures and geometric parameters of individual trees to be determined. In this research we propose an automatic method to determine geometric parameters of individual olive trees using ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For points located inside each polygon, we select the maximum height and the minimum height and then we estimate the tree height and the crown base height. We use the first two components of the Principal Component Analysis (PCA) as the estimators for crown diameters. The α-shape algorithm requires the radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R with reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points/m². We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
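    The height and PCA-based crown-diameter estimators can be sketched as follows for a single detected crown; the α-shape crown delineation itself is not shown, and the array layout is an assumption.

```python
import numpy as np

def tree_parameters(crown_points):
    """Estimate tree height, crown base height and two crown diameters from the
    normalized points of one detected crown (N x 3 array, heights above ground).

    The two diameters are taken as the point-cloud extents along the first two
    principal axes of the horizontal coordinates, loosely following the
    PCA-based estimator described above."""
    z = crown_points[:, 2]
    tree_height = z.max()
    crown_base_height = z.min()
    xy = crown_points[:, :2] - crown_points[:, :2].mean(axis=0)
    _, _, axes = np.linalg.svd(xy, full_matrices=False)   # rows: principal directions
    proj = xy @ axes.T
    d_long = proj[:, 0].max() - proj[:, 0].min()
    d_short = proj[:, 1].max() - proj[:, 1].min()
    return tree_height, crown_base_height, d_long, d_short
```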

  8. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for the shape-coded structured light in which the pattern is designed as grid shape with embedded geometrical shapes. In our decoding method, advancements are made at three steps. First, a multi-template feature detection algorithm is introduced to detect the feature point which is the intersection of each two orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and the deep neural network technique is applied for the accurate classification of pattern elements. Before that, a training dataset is established, which contains a mass of pattern elements with various blurring and distortions. Third, an error correction mechanism based on epipolar constraint, coplanarity constraint and topological constraint is presented to reduce the false matches. In the experiments, several complex objects including human hand are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also owns strong robustness to surface color and complex textures.

  9. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    Energy Technology Data Exchange (ETDEWEB)

    Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)

    2014-02-18

    This paper presents a robust frequency-diversity-based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on the parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band, in which a defect is present when all frequency components are in uniform sign. Combining all reconstructed signals through averaging gives a probability profile of the potential defect position. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied to the austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances the SNR by 20 dB for both samples and, consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.

  10. A ROBUST METHOD FOR STEREO VISUAL ODOMETRY BASED ON MULTIPLE EUCLIDEAN DISTANCE CONSTRAINT AND RANSAC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Q. Zhou

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences the precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on the Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the feature points of the left image at the next time step are matched against the current left image, EDC and RANSAC are performed iteratively. After these steps, some mismatched points may still remain, so RANSAC is applied a third time to eliminate the effects of those outliers in the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
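    A minimal sketch of the ORB matching and RANSAC outlier rejection steps using OpenCV; it substitutes a generic fundamental-matrix RANSAC for the paper's EDC-plus-RANSAC cascade, so it should be read as an illustration of the pipeline rather than the method itself.

```python
import cv2
import numpy as np

def match_with_ransac(img_left, img_right, ransac_thresh=1.0):
    """ORB feature matching between a synchronous stereo pair, followed by
    RANSAC on the fundamental matrix to reject outlier correspondences."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, ransac_thresh, 0.99)
    inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
    return inliers
```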

  11. Geometric Algorithms for Part Orienting and Probing

    NARCIS (Netherlands)

    Panahi, F.

    2015-01-01

    In this thesis, detailed solutions are presented to several problems dealing with geometric shape and orientation of an object in the field of robotics and automation. We first have considered a general model for shape variations that allows variation along the entire boundary of an object, both in

  12. Reconstruction of an InAs nanowire using geometric and algebraic tomography

    International Nuclear Information System (INIS)

    Pennington, R S; Boothroyd, C B; König, S; Alpers, A; Dunin-Borkowski, R E

    2011-01-01

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality. When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize.

  13. Reconstruction of an InAs nanowire using geometric and algebraic tomography

    DEFF Research Database (Denmark)

    Pennington, Robert S.; König, S.; Alpers, A.

    2011-01-01

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality. When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize.

  14. A hybrid multi-objective imperialist competitive algorithm and Monte Carlo method for robust safety design of a rail vehicle

    Science.gov (United States)

    Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi

    2017-10-01

    This paper deals with the robust safety design optimization of a rail vehicle system moving on short-radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety simultaneously considers the derailment angle and its standard deviation, taking the design parameter uncertainties into account. The obtained results show that the robust design significantly reduces the sensitivity of rail vehicle safety to the design parameter uncertainties compared to the deterministic design and to literature results.

  15. Geometric differential evolution for combinatorial and programs spaces.

    Science.gov (United States)

    Moraglio, A; Togelius, J; Silva, S

    2013-01-01

    Geometric differential evolution (GDE) is a recently introduced formal generalization of traditional differential evolution (DE) that can be used to derive specific differential evolution algorithms for both continuous and combinatorial spaces retaining the same geometric interpretation of the dynamics of the DE search across representations. In this article, we first review the theory behind the GDE algorithm, then, we use this framework to formally derive specific GDE for search spaces associated with binary strings, permutations, vectors of permutations and genetic programs. The resulting algorithms are representation-specific differential evolution algorithms searching the target spaces by acting directly on their underlying representations. We present experimental results for each of the new algorithms on a number of well-known problems comprising NK-landscapes, TSP, and Sudoku, for binary strings, permutations, and vectors of permutations. We also present results for the regression, artificial ant, parity, and multiplexer problems within the genetic programming domain. Experiments show that overall the new DE algorithms are competitive with well-tuned standard search algorithms.

  16. Application of a New Robust ECG T-Wave Delineation Algorithm for the Evaluation of the Autonomic Innervation of the Myocardium

    DEFF Research Database (Denmark)

    Cesari, Matteo; Mehlsen, Jesper; Mehlsen, Anne-Birgitte

    2016-01-01

    T-wave amplitude (TWA) is a well-known index of the autonomic innervation of the myocardium. However, until now it has been evaluated only manually or with simple and inefficient algorithms. In this paper, we developed a new robust single-lead electrocardiogram (ECG) T-wave delineation algorithm...

  17. A mixture model for robust registration in Kinect sensor

    Science.gov (United States)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images with a nonparametric model. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  18. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    Science.gov (United States)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  19. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of the algorithms gives a unique, single best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
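    Tukey's half-space depth is expensive to compute exactly in higher dimensions, but a Monte Carlo approximation over random projection directions conveys the idea. This sketch assumes the parameter vectors are rows of a NumPy array and is not the study's implementation.

```python
import numpy as np

def halfspace_depth(point, cloud, n_directions=1000, seed=0):
    """Monte Carlo approximation of Tukey's half-space depth of one parameter
    vector `point` with respect to a cloud of parameter vectors (rows of `cloud`)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_directions, len(point)))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = (cloud - point) @ dirs.T            # shape: (n_points, n_directions)
    frac = (proj >= 0).mean(axis=0)            # fraction on one side of each hyperplane
    return float(np.minimum(frac, 1.0 - frac).min())
```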

  20. Correcting geometric and photometric distortion of document images on a smartphone

    Science.gov (United States)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
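    The perspective part of the geometric correction can be illustrated with a single homography warp once the four page corners are known; this OpenCV sketch assumes the corners have already been detected (an alternative to working with the vanishing points directly) and uses a placeholder output size.

```python
import cv2
import numpy as np

def correct_perspective(image, corners, out_w=1240, out_h=1754):
    """Warp a photographed document page to a fronto-parallel view.

    `corners` are the four page corners (top-left, top-right, bottom-right,
    bottom-left); the output size roughly matches an A4 page at 150 dpi."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w - 1, 0], [out_w - 1, out_h - 1], [0, out_h - 1]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (out_w, out_h))
```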

  1. Robustness against parametric noise of nonideal holonomic gates

    International Nuclear Information System (INIS)

    Lupo, Cosmo; Aniello, Paolo; Napolitano, Mario; Florio, Giuseppe

    2007-01-01

    Holonomic gates for quantum computation are commonly considered to be robust against certain kinds of parametric noise, the cause of this robustness being the geometric character of the transformation achieved in the adiabatic limit. On the other hand, the effects of decoherence are expected to become more and more relevant when the adiabatic limit is approached. Starting from the system described by Florio et al. [Phys. Rev. A 73, 022327 (2006)], here we discuss the behavior of nonideal holonomic gates at finite operational time, i.e., long before the adiabatic limit is reached. We have considered several models of parametric noise and studied the robustness of finite-time gates. The results obtained suggest that the finite-time gates present some effects of cancellation of the perturbations introduced by the noise which mimic the geometrical cancellation effect of standard holonomic gates. Nevertheless, a careful analysis of the results leads to the conclusion that these effects are related to a dynamical instead of a geometrical feature

  2. Robustness against parametric noise of nonideal holonomic gates

    Science.gov (United States)

    Lupo, Cosmo; Aniello, Paolo; Napolitano, Mario; Florio, Giuseppe

    2007-07-01

    Holonomic gates for quantum computation are commonly considered to be robust against certain kinds of parametric noise, the cause of this robustness being the geometric character of the transformation achieved in the adiabatic limit. On the other hand, the effects of decoherence are expected to become more and more relevant when the adiabatic limit is approached. Starting from the system described by Florio [Phys. Rev. A 73, 022327 (2006)], here we discuss the behavior of nonideal holonomic gates at finite operational time, i.e., long before the adiabatic limit is reached. We have considered several models of parametric noise and studied the robustness of finite-time gates. The results obtained suggest that the finite-time gates present some effects of cancellation of the perturbations introduced by the noise which mimic the geometrical cancellation effect of standard holonomic gates. Nevertheless, a careful analysis of the results leads to the conclusion that these effects are related to a dynamical instead of a geometrical feature.

  3. Implementation of an algorithm for cylindrical object identification using range data

    Science.gov (United States)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
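    The "normal equations of an over-determined linear system" step for locating circular arcs corresponds to the standard algebraic (Kasa) circle fit, sketched below for a slice of 2-D points; this is a generic fit, not the authors' exact implementation.

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit to noisy arc points from one range-data slice.

    Solves the over-determined linear system of the algebraic circle equation
    x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense (Kasa fit)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = (-D / 2.0, -E / 2.0)
    radius = np.sqrt(D**2 / 4.0 + E**2 / 4.0 - F)
    return center, radius
```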

  4. Geometric reconstruction methods for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)

    2013-05-15

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.

  5. Geometric reconstruction methods for electron tomography

    International Nuclear Information System (INIS)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand

  6. A Robust Obstacle Avoidance for Service Robot Using Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2011-03-01

    The objective of this paper is to propose a robust obstacle avoidance method for a service robot in an indoor environment. The method uses information about static obstacles on the landmark, obtained by edge detection. The speed and direction of people walking as moving obstacles are obtained by a single camera using a tracking and recognition system, with distance measurement provided by 3 ultrasonic sensors. A new geometrical model and maneuvering method for moving-obstacle avoidance is introduced and combined with a Bayesian approach for state estimation. The obstacle avoidance problem is formulated using decision theory, prior and posterior distributions and a loss function to determine an optimal response based on inaccurate sensor data. Algorithms for the moving-obstacle avoidance method are proposed, and experimental results from their implementation on the service robot are presented. Various experiments show that the proposed method is very fast and robust, and it has been successfully implemented on the service robot Srikandi II, equipped with a 4-DOF arm, developed in our laboratory.

  7. A general modeling method for I-V characteristics of geometrically and electrically configured photovoltaic arrays

    International Nuclear Information System (INIS)

    Liu Guangyu; Nguang, Sing Kiong; Partridge, Ashton

    2011-01-01

    Highlights: → A novel and general method is proposed for modeling PV arrays or modules. → A robust algorithm is used for the first time to improve the convergence to solution. → Auxiliary functions in other general methods are not compulsory in our method. → It is novel that geometric configuration is also incorporated. → A case study is performed to show the approach's advantages and unique features. - Abstract: A general method for modeling typical photovoltaic (PV) arrays and modules is proposed to find the exact current and voltage relationship of PV arrays or modules of geometrically and electrically different configurations. Nonlinear characteristic equations of electrical devices in solar array or module systems are numerically constructed without adding any virtual electrical components. Then, a robust damped Newton method is used to find exact I-V relationship of these general nonlinear equations, where the convergence is guaranteed. The model can deal with different mismatch effects such as different configurations of bypass diodes, and partial shading. Geometry coordinates of PV components are also considered to facilitate the modeling of the actual physical configuration. Simulation of a PV array with 48 modules, partially shaded by a concrete structure, is performed to verify the effectiveness and advantages of the proposed method.
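    The damped Newton idea can be illustrated on the single-diode cell equation, whose exponential term makes an undamped Newton step prone to divergence. The sketch below solves for the cell current at a given voltage with a simple residual-halving damping rule; all device parameters are placeholder values, and the paper's full array model with bypass diodes and shading is not reproduced.

```python
import numpy as np

def diode_residual(v, i, iph, i0, rs, rsh, n=1.3, vt=0.02585):
    """Implicit single-diode equation f(i) = 0 at terminal voltage v."""
    return iph - i0 * (np.exp((v + i * rs) / (n * vt)) - 1.0) - (v + i * rs) / rsh - i

def solve_current(v, iph=8.0, i0=1e-9, rs=0.01, rsh=100.0, tol=1e-10, max_iter=100):
    """Damped Newton iteration for the cell current at a given voltage.

    The step is halved until the residual decreases, which keeps the iteration
    from diverging on the stiff exponential term."""
    i = iph                                    # the photocurrent is a reasonable start
    for _ in range(max_iter):
        f = diode_residual(v, i, iph, i0, rs, rsh)
        if abs(f) < tol:
            break
        h = 1e-8                               # numerical derivative df/di
        df = (diode_residual(v, i + h, iph, i0, rs, rsh) - f) / h
        step, lam = -f / df, 1.0
        while lam > 1e-6 and abs(diode_residual(v, i + lam * step, iph, i0, rs, rsh)) > abs(f):
            lam *= 0.5                         # damping: halve the step until it helps
        i += lam * step
    return i
```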

  8. Robust loss functions for boosting.

    Science.gov (United States)

    Kanamori, Takafumi; Takenouchi, Takashi; Eguchi, Shinto; Murata, Noboru

    2007-08-01

    Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, Adaboost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied. Based on the concept of robust statistics, we propose a transformation of loss functions that makes boosting algorithms robust against extreme outliers. Next, the truncation of loss functions is applied to contamination models that describe the occurrence of mislabels near decision boundaries. Numerical experiments illustrate that the proposed loss functions derived from the contamination models are useful for handling highly noisy data in comparison with other loss functions.

  9. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    © 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  10. Almagest, a new trackless ring finding algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lamanna, G., E-mail: gianluca.lamanna@cern.ch

    2014-12-01

    A fast ring finding algorithm is a crucial point to allow the use of RICH detectors in on-line trigger selection. The present algorithms are either too slow (with respect to the incoming data rate) or need the information coming from a tracking system. Digital image techniques that assume limited computing power (such as the Hough transform) are not perfectly robust with respect to noise immunity. We present a novel technique based on Ptolemy's theorem for multi-ring pattern recognition. Starting from purely geometrical considerations, this algorithm (also known as “Almagest”) allows fast and trackless ring reconstruction, with spatial resolution comparable with other offline techniques. Almagest is particularly suitable for parallel implementation on multi-core machines. Preliminary tests on GPUs (multi-core video card processors) show that, thanks to an execution time smaller than 10 μs per event, this algorithm could be employed for on-line selection in trigger systems. The use case of the NA62 RICH trigger, based on GPUs, will be discussed. - Highlights: • A new algorithm for fast multiple ring searching in RICH detectors is presented. • The Almagest algorithm exploits the computing power of graphics processors (GPUs). • A preliminary implementation for on-line triggering in the NA62 experiment shows encouraging results.
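    Ptolemy's theorem states that four points lying in order on a circle satisfy |AC||BD| = |AB||CD| + |BC||AD|, which gives a circle-membership test that needs no center or radius estimate. A minimal check of this condition (not the full Almagest multi-ring logic) might look like:

```python
import numpy as np

def ptolemy_residual(a, b, c, d):
    """Relative violation of Ptolemy's identity |AC||BD| = |AB||CD| + |BC||AD|.

    For four hit positions taken in order around a candidate ring, the residual
    is close to zero only if the points are (nearly) concyclic, so it can serve
    as a cheap pre-test before any circle parameters are computed."""
    dist = lambda p, q: np.linalg.norm(np.asarray(p) - np.asarray(q))
    lhs = dist(a, c) * dist(b, d)
    rhs = dist(a, b) * dist(c, d) + dist(b, c) * dist(a, d)
    return abs(lhs - rhs) / max(lhs, rhs)

# Example: four points on the unit circle satisfy the identity up to rounding.
pts = [(np.cos(t), np.sin(t)) for t in (0.1, 1.0, 2.5, 4.0)]
assert ptolemy_residual(*pts) < 1e-9
```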

  11. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    International Nuclear Information System (INIS)

    Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.

    2013-01-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 × 10^4) cores for topology map generation and excellent scaling on O(1 × 10^5) cores for the data transfer operation with meshes of O(1 × 10^9) elements. (authors)

  12. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    Energy Technology Data Exchange (ETDEWEB)

    Slattery, S. R.; Wilson, P. P. H. [Department of Engineering Physics, University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Pawlowski, R. P. [Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185 (United States)

    2013-07-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 × 10^4) cores for topology map generation and excellent scaling on O(1 × 10^5) cores for the data transfer operation with meshes of O(1 × 10^9) elements. (authors)

  13. Geometrical features assessment of liver's tumor with application of artificial neural network evolved by imperialist competitive algorithm.

    Science.gov (United States)

    Keshavarz, M; Mojra, A

    2015-05-01

    Geometrical features of a cancerous tumor embedded in biological soft tissue, including tumor size and depth, are a necessity in the follow-up procedure and making suitable therapeutic decisions. In this paper, a new socio-politically motivated global search strategy which is called imperialist competitive algorithm (ICA) is implemented to train a feed forward neural network (FFNN) to estimate the tumor's geometrical characteristics (FFNNICA). First, a viscoelastic model of liver tissue is constructed by using a series of in vitro uniaxial and relaxation test data. Then, 163 samples of the tissue including a tumor with different depths and diameters are generated by making use of PYTHON programming to link the ABAQUS and MATLAB together. Next, the samples are divided into 123 samples as training dataset and 40 samples as testing dataset. Training inputs of the network are mechanical parameters extracted from palpation of the tissue through a developing noninvasive technology called artificial tactile sensing (ATS). Last, to evaluate the FFNNICA performance, outputs of the network including tumor's depth and diameter are compared with desired values for both training and testing datasets. Deviations of the outputs from desired values are calculated by a regression analysis. Statistical analysis is also performed by measuring Root Mean Square Error (RMSE) and Efficiency (E). RMSE in diameter and depth estimations are 0.50 mm and 1.49, respectively, for the testing dataset. Results affirm that the proposed optimization algorithm for training neural network can be useful to characterize soft tissue tumors accurately by employing an artificial palpation approach. Copyright © 2015 John Wiley & Sons, Ltd.

  14. A Robust Geometric Model for Argument Classification

    Science.gov (United States)

    Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego

    Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have been recently shown to benefit from rich sets of syntactic features. However argument classification is also highly dependent on the semantics of the involved lexicals. Empirical studies have shown that domain dependence of lexical information causes large performance drops in outside domain tests. In this paper a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.

  15. Geometrical methods in learning theory

    International Nuclear Information System (INIS)

    Burdet, G.; Combe, Ph.; Nencka, H.

    2001-01-01

    The methods of information theory provide natural approaches to learning algorithms in the case of stochastic formal neural networks. Most of the classical techniques are based on some extremization principle. A geometrical interpretation of the associated algorithms provides a powerful tool for understanding the learning process and its stability and offers a framework for discussing possible new learning rules. An illustration is given using sequential and parallel learning in the Boltzmann machine

  16. Development and validation of a novel large field of view phantom and a software module for the quality assurance of geometric distortion in magnetic resonance imaging.

    Science.gov (United States)

    Torfeh, Tarraf; Hammoud, Rabih; McGarry, Maeve; Al-Hammadi, Noora; Perkins, Gregory

    2015-09-01

    To develop and validate a large field of view phantom and a quality assurance software tool for the assessment and characterization of geometric distortion in MRI scanners commissioned for radiation therapy planning. A purpose-built phantom was developed consisting of 357 rods (6 mm in diameter) of polymethyl methacrylate separated by 20 mm intervals, providing a three-dimensional array of control points at known spatial locations covering a large field of view up to a diameter of 420 mm. An in-house software module was developed to allow automatic geometric distortion assessment. This software module was validated against a virtual dataset of the phantom that reproduced the exact geometry of the physical phantom, but with known translational and rotational displacements and warping. For validation experiments, clinical MRI sequences were acquired with and without the application of a commercial 3D distortion correction algorithm (Gradwarp™). The software module was used to characterize and assess system-related geometric distortion in the sequences relative to a benchmark CT dataset, and the efficacy of the vendor geometric distortion correction algorithms (GDC) was also assessed. Results from the validation of the software against virtual images demonstrate the algorithm's ability to accurately calculate geometric distortion with sub-pixel precision by the extraction of rods and quantization of displacements. Geometric distortion was assessed for the typical sequences used in radiotherapy applications and over a clinically relevant 420 mm field of view (FOV). As expected, distortion increased towards the edges of the FOV and with increasing FOV. For all assessed sequences, the vendor GDC was able to reduce the mean distortion to below 1 mm over fields of view of 5, 10, 15 and 20 cm radius, respectively. Results from the application of the developed phantoms and algorithms demonstrate a high level of precision. The results indicate that this

  17. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Woods, K; DiCostanzo, D; Gupta, N [Ohio State University Columbus, OH (United States)

    2016-06-15

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  18. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    International Nuclear Information System (INIS)

    Woods, K; DiCostanzo, D; Gupta, N

    2016-01-01

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  19. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    Directory of Open Access Journals (Sweden)

    Shashwat Pathak

    2016-09-01

    This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera; both are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on texture features: uniformity, intensity and standard deviation. These features are first computed and mapped to the diagnostic opinion of an eye expert to define the basic thresholds of the screening system, and later tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using our proposed system has been suggested, which confirms the telemedicine application of the proposed system.
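    A rough sketch of threshold-based screening on the three texture features named above; the feature definitions follow common usage (histogram energy for uniformity) and the threshold values are illustrative placeholders, not the clinically derived ones from the paper.

```python
import numpy as np

def cataract_features(eye_gray):
    """Mean intensity, standard deviation and histogram uniformity (energy)
    of a grayscale lens/pupil region."""
    pixels = eye_gray.astype(float).ravel()
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    p = hist / hist.sum()
    return pixels.mean(), pixels.std(), float(np.sum(p ** 2))

def screen(eye_gray, t_mean=120.0, t_std=25.0, t_uniformity=0.02):
    """Flag a probable cataract when the region is bright, low-contrast and
    uniform; the thresholds here are placeholders only."""
    mean, std, uniformity = cataract_features(eye_gray)
    return mean > t_mean and std < t_std and uniformity > t_uniformity
```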

  20. Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers

    Directory of Open Access Journals (Sweden)

    Fabrizio Cutolo

    2016-09-01

    In the context of surgical navigation systems based on augmented reality (AR), the key challenge is to ensure the highest degree of realism in merging computer-generated elements with live views of the surgical scene. This paper presents an algorithm suited for wearable stereoscopic augmented reality video see-through systems for use in a clinical scenario. A video-based tracking solution is proposed that relies on stereo localization of three monochromatic markers rigidly constrained to the scene. A PnP-based optimization step is introduced to separately refine the poses of the two cameras. Video-based tracking methods using monochromatic markers are robust to non-controllable and/or inconsistent lighting conditions. The two-stage camera pose estimation algorithm provides sub-pixel registration accuracy. From a technological and an ergonomic standpoint, the proposed approach represents an effective solution to the implementation of wearable AR-based surgical navigation systems wherever rigid anatomies are involved.

  1. Geometrical determinations of IMRT photon pencil-beam path in radiotherapy wedges and limit divergence angle with the Anisotropic Analytic Algorithm (AAA

    Directory of Open Access Journals (Sweden)

    Francisco Casesnoves

    2014-08-01

    Full Text Available Purpose: Static wedge filters (WF) are commonly used in radiation therapy, for forward and/or inverse planning. We calculated the exact 2D/3D geometrical pathway of the photon beam through the usual alloy WF, in order to get a better dose related to the beam intensity attenuation factor(s) after the beam has passed through the WF. The objective was to provide a general formulation in the Anisotropic Analytical Algorithm (AAA) model coordinate system (depending on collimator/wedge angles) that can also be applied to other models. Additionally, a second purpose of this study was to develop an integral formulation for the 3D wedge exponential factor with statistical approximations, with an introduction to the limit angle/conformal wedge. Methods: The radiotherapy model used to develop this mathematical task is the classical superposition-convolution algorithm, AAA (developed by Ulmer and Harder). We worked with optimal geometrical approximations to make the computational IMRT calculations quicker and reduce the planning-system time. Analytic geometry and computational techniques to carry out simulations (for standard wedges) are detailed and developed sharply. Integral developments and integral-statistical approximations are explained. The beam-divergence limit angle for optimal wedge filtration formulas is calculated and sketched, with geometrical approximations. Fundamental trigonometry is used for this purpose. Results: Extensive simulation tables for WF of 15º, 30º, 45º, and 60º are shown with errors. As a result, it is possible to determine the best individual treatment dose distribution for each patient. We presented these basic simulations/numerical examples for standard manufactured WF with a straight sloping surface, to check the accuracy/errors of the calculations. Simulation results give low RMS/relative error values (formulated for WF of 15º, 30º, 45º, and 60º). Conclusion: We obtained a series of formulas of analytic geometry for WF that can be applied for any particular dose

  2. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    Science.gov (United States)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The success of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. First, a license plate candidate region detection and segmentation method is developed. Second, a new feature extraction model is designed that combines three sets of features. Third, the license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method increases to 95.7% and the processing time decreases to 51.4 ms.

  3. A high-accuracy image registration algorithm using phase-only correlation for dental radiographs

    International Nuclear Information System (INIS)

    Ito, Koichi; Nikaido, Akira; Aoki, Takafumi; Kosuge, Eiko; Kawamata, Ryota; Kashima, Isamu

    2008-01-01

    Dental radiographs have been used for the accurate assessment and treatment of dental diseases. Nonlinear deformation between two dental radiographs may be observed, even if they are taken from the same oral regions of the subject. For an accurate diagnosis, complete geometric registration between radiographs is required. This paper presents an efficient dental radiograph registration algorithm using the Phase-Only Correlation (POC) function. The use of phase components in 2D (two-dimensional) discrete Fourier transforms of dental radiograph images makes it possible to achieve highly robust image registration and recognition. Experimental evaluation using a dental radiograph database indicates that the proposed algorithm exhibits efficient recognition performance even for distorted radiographs. (author)
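
    A minimal phase-only correlation sketch (NumPy only) is given below to make the idea concrete; it recovers an integer translation from the POC peak, whereas the published algorithm also handles sub-pixel displacement and nonlinear deformation.

    ```python
    import numpy as np

    def phase_only_correlation(f, g, eps=1e-9):
        """Estimate the integer translation between two same-sized grayscale images
        using the phase-only correlation (POC) function."""
        F = np.fft.fft2(f)
        G = np.fft.fft2(g)
        cross = F * np.conj(G)
        r = np.real(np.fft.ifft2(cross / (np.abs(cross) + eps)))  # normalized cross-power spectrum
        peak = np.unravel_index(np.argmax(r), r.shape)            # location of the correlation peak
        # Convert the peak position to a signed shift (dy, dx).
        shift = [p if p <= s // 2 else p - s for p, s in zip(peak, r.shape)]
        return tuple(shift), r[peak]

    # Toy usage: a shifted copy of a random image should give back the shift
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    shifted = np.roll(img, (3, -5), axis=(0, 1))
    print(phase_only_correlation(shifted, img))   # -> ((3, -5), peak value close to 1)
    ```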

  4. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    Science.gov (United States)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using statistics of the neighboring points within a radius r. Then, the algorithm estimates the curvature of the point cloud data using a conicoid (paraboloid) fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, and the cluster centers are regarded as the new points. The experimental results show that this approach handles different scales and intensities of noise in point clouds with high precision while preserving features. It is also robust to different noise models.
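
    The clustering step can be sketched as a weighted fuzzy c-means update in which a per-point weight (here assumed to be a precomputed curvature feature value) biases the cluster centers; the outlier removal and paraboloid curvature estimation stages of the paper are not reproduced.

    ```python
    import numpy as np

    def weighted_fcm(points, weights, n_clusters=8, m=2.0, n_iter=50, seed=0):
        """Weighted fuzzy c-means: 'weights' (one value per point) could encode a
        curvature-based feature; a larger weight pulls centers toward that point."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), n_clusters, replace=False)]
        for _ in range(n_iter):
            # squared distances from every point to every center, shape (N, C)
            d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
            # fuzzy memberships: u_ij proportional to d2_ij^(-1/(m-1)), normalized per point
            inv = d2 ** (-1.0 / (m - 1.0))
            u = inv / inv.sum(axis=1, keepdims=True)
            # weighted center update: v_i = sum_j w_j u_ij^m x_j / sum_j w_j u_ij^m
            w_um = weights[:, None] * u ** m
            centers = (w_um[:, :, None] * points[:, None, :]).sum(0) / w_um.sum(0)[:, None]
        return centers, u

    # Toy usage on a random cloud with uniform weights
    pts = np.random.default_rng(1).random((500, 3))
    centers, memberships = weighted_fcm(pts, np.ones(len(pts)), n_clusters=4)
    print(centers.shape, memberships.shape)   # (4, 3) (500, 4)
    ```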

  5. Height and Tilt Geometric Texture

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Desbrun, Mathieu; Bærentzen, Jakob Andreas

    2009-01-01

    compromise between functionality and simplicity: it can efficiently handle and process geometric texture too complex to be represented as a height field, without having recourse to full blown mesh editing algorithms. The height-and-tilt representation proposed here is fully intrinsic to the mesh, making...

  6. Non-Markovian effect on the geometric phase of a dissipative qubit

    International Nuclear Information System (INIS)

    Chen Juanjuan; Tong Qingjun; An Junhong; Luo Honggang; Oh, C. H.

    2010-01-01

    We studied the geometric phase of a two-level atom coupled to an environment with Lorentzian spectral density. The non-Markovian effect on the geometric phase is explored analytically and numerically. In the weak coupling limit, the lowest-order correction to the geometric phase is derived analytically, and the general case is calculated numerically. It is found that the correction to the geometric phase is significant if the spectral width is small, and in this case the non-Markovian dynamics has a significant impact on the geometric phase. When the spectral width increases, the correction to the geometric phase becomes negligible, which shows the robustness of the geometric phase to environmental white noise. The result is significant for quantum information processing based on the geometric phase.

  7. Geometrical methods for power network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bellucci, Stefano; Tiwari, Bhupendra Nath [Istituto Nazioneale di Fisica Nucleare, Frascati, Rome (Italy). Lab. Nazionali di Frascati; Gupta, Neeraj [Indian Institute of Technology, Kanpur (India). Dept. of Electrical Engineering

    2013-02-01

    Uses advanced geometrical methods to analyse power networks. Provides a self-contained and tutorial introduction. Includes a fully worked-out example for the IEEE 5 bus system. This book is a short introduction to power system planning and operation using advanced geometrical methods. The approach is based on well-known insights and techniques developed in theoretical physics in the context of Riemannian manifolds. The proof of principle and robustness of this approach is examined in the context of the IEEE 5 bus system. This work addresses applied mathematicians, theoretical physicists and power engineers interested in novel mathematical approaches to power network theory.

  8. Design of Robust Supertwisting Algorithm Based Second-Order Sliding Mode Controller for Nonlinear Systems with Both Matched and Unmatched Uncertainty

    Directory of Open Access Journals (Sweden)

    Marwa Jouini

    2017-01-01

    Full Text Available This paper proposes a robust supertwisting algorithm (STA) design for nonlinear systems in which both matched and unmatched uncertainties are considered. The main contribution is a novel STA structure conceived to ensure the desired performance of the uncertain nonlinear system. The modified algorithm is formed of a double closed-loop feedback, in which two linear terms are added to the classical STA. In addition, an integral sliding mode switching surface is proposed to ensure the attractiveness and reachability of the sliding mode. Sufficient conditions are derived to guarantee exact differentiation stability in finite time based on Lyapunov function theory. Finally, a comparative study on a variable-length pendulum system illustrates the robustness and effectiveness of the proposed approach compared to other STA schemes.
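
    For orientation, the sketch below simulates the classical (unmodified) super-twisting algorithm on a scalar sliding variable with a bounded disturbance; the gains, disturbance and reduced sliding dynamics are illustrative assumptions, and the paper's added linear terms and integral sliding surface are not included.

    ```python
    import numpy as np

    def simulate_sta(k1=3.0, k2=1.1, dt=1e-3, t_end=5.0):
        """Classical super-twisting algorithm driving a sliding variable s toward zero
        despite a bounded matched disturbance (simple Euler integration)."""
        n = int(t_end / dt)
        s, v = 1.0, 0.0                    # sliding variable and integral control term
        history = np.empty(n)
        for i in range(n):
            t = i * dt
            d = 0.2 * np.sin(t)            # illustrative disturbance with small derivative bound
            u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
            v_dot = -k2 * np.sign(s)
            s_dot = u + d                  # assume the sliding dynamics reduce to s' = u + d
            s += dt * s_dot
            v += dt * v_dot
            history[i] = s
        return history

    trace = simulate_sta()
    # s settles into a small neighborhood of zero limited by the discretization step
    print("final |s|:", abs(trace[-1]))
    ```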

  9. Image understanding using geometric context

    Science.gov (United States)

    Zhang, Xiaochun; Liu, Chuancai

    2017-07-01

    A Gibbs-sampler-based topic model for image annotation, which takes into account the interaction between visual geometric context and the related topic, is presented. Most existing topic models for scene annotation use segmentation-based algorithms. However, topic models using a segmentation algorithm alone can sometimes produce erroneous results when used to annotate real-life scene pictures. Therefore, our algorithm makes use of peaks of the image surface instead of segmentation regions. Existing approaches use the SIFT algorithm and treat the peaks as round blob features. In this paper, the peaks are treated as anisotropic blob features, which model low-level visual elements more precisely. In order to better utilize visual features, our model not only takes the visual codewords into consideration, but also considers the influence of visual properties, such as orientation, width, length and color, on topic formation. The basic idea is based on the assumption that different topics produce distinct visual appearances, and that different visual appearances help to distinguish topics. During the learning stage, each topic is associated with a set of distributions of visual properties, which depicts the appearance of the topic. This paper considers more geometric properties, which reduce topic uncertainty and help learn the images better. Tested on the Corel5K, SAIAPR-TC12 and Espgame100k datasets, our method performs moderately better than some state-of-the-art methods.

  10. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.
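
    To make the tree-growing loop concrete, here is a minimal 2D RRT (not RRT* or TG-RRT*) with circular obstacles; the workspace bounds, step size and obstacle layout are arbitrary choices for the sketch.

    ```python
    import numpy as np

    def rrt(start, goal, obstacles, step=0.5, n_iter=5000, goal_tol=0.5, seed=1):
        """Minimal 2D RRT: grow a tree from 'start' by steering toward random samples,
        stopping once a node lands within 'goal_tol' of 'goal'.
        'obstacles' is a list of (center, radius) circles; workspace assumed to be [0,10]^2."""
        rng = np.random.default_rng(seed)
        nodes = [np.asarray(start, float)]
        parent = [-1]

        def collision_free(p):
            return all(np.linalg.norm(p - np.asarray(c)) > r for c, r in obstacles)

        for _ in range(n_iter):
            sample = rng.uniform(0.0, 10.0, size=2)
            i_near = int(np.argmin([np.linalg.norm(sample - q) for q in nodes]))
            direction = sample - nodes[i_near]
            dist = np.linalg.norm(direction)
            if dist < 1e-9:
                continue
            new = nodes[i_near] + step * direction / dist
            if not collision_free(new):
                continue
            nodes.append(new)
            parent.append(i_near)
            if np.linalg.norm(new - np.asarray(goal)) < goal_tol:
                # backtrack from the newest node to the root to recover the path
                path, i = [], len(nodes) - 1
                while i != -1:
                    path.append(nodes[i])
                    i = parent[i]
                return path[::-1]
        return None

    path = rrt(start=(0.5, 0.5), goal=(9.0, 9.0), obstacles=[((5.0, 5.0), 1.5)])
    print("path length (nodes):", None if path is None else len(path))
    ```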

  11. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
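
    As one concrete instance of this construction (a standard textbook result rather than a formula taken from the paper), the interval (box) uncertainty set leads to the following robust counterpart of a single linear constraint:

    ```latex
    % Uncertain constraint with interval (box) uncertainty:
    %   \sum_j \tilde a_j x_j \le b, \qquad
    %   \tilde a_j \in [\bar a_j - \hat a_j,\ \bar a_j + \hat a_j].
    % Worst-case (robust) counterpart:
    \sum_j \bar a_j x_j + \sum_j \hat a_j \lvert x_j \rvert \le b,
    % which is linearized with auxiliary variables u_j \ge |x_j|:
    \sum_j \bar a_j x_j + \sum_j \hat a_j u_j \le b,
    \qquad -u_j \le x_j \le u_j \quad \forall j.
    ```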

  12. Robust digital image inpainting algorithm in the wireless environment

    Science.gov (United States)

    Karapetyan, G.; Sarukhanyan, H. G.; Agaian, S. S.

    2014-05-01

    and implementation steps of the presented algorithm. Furthermore, the simulation results show that the presented method is among the state-of-the-art and compares favorably against many available methods in the wireless environment. Robustness in the wireless environment with respect to the shape of the manually selected "marked" region is also illustrated. Currently, we are working on the expansion of this work to video and 3-D data.

  13. Robust image authentication in the presence of noise

    CERN Document Server

    2015-01-01

    This book addresses the problems that hinder image authentication in the presence of noise. It considers the advantages and disadvantages of existing algorithms for image authentication and shows new approaches and solutions for robust image authentication. The state of the art algorithms are compared and, furthermore, innovative approaches and algorithms are introduced. The introduced algorithms are applied to improve image authentication, watermarking and biometry.    Aside from presenting new directions and algorithms for robust image authentication in the presence of noise, as well as image correction, this book also:   Provides an overview of the state of the art algorithms for image authentication in the presence of noise and modifications, as well as a comparison of these algorithms, Presents novel algorithms for robust image authentication, whereby the image is tried to be corrected and authenticated, Examines different views for the solution of problems connected to image authentication in the pre...

  14. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
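
    The Poisson maximum-likelihood core of such methods can be sketched with a plain multi-frame Richardson-Lucy update, assuming the per-frame point spread functions are already known; the paper's regularization, frame selection and PSF estimation steps are omitted here.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def multiframe_richardson_lucy(frames, psfs, n_iter=30):
        """Multi-frame Richardson-Lucy deconvolution: the maximum-likelihood estimate
        under Poisson noise, assuming each frame's PSF is known."""
        estimate = np.full_like(frames[0], frames[0].mean(), dtype=float)
        for _ in range(n_iter):
            correction = np.zeros_like(estimate)
            for y, h in zip(frames, psfs):
                blurred = fftconvolve(estimate, h, mode="same") + 1e-12
                correction += fftconvolve(y / blurred, h[::-1, ::-1], mode="same")
            estimate *= correction / len(frames)
        return estimate

    # Toy usage: two blurred, Poisson-noisy copies of a point-like object
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[32, 32] = 100.0
    h = np.hanning(9); psf = np.outer(h, h); psf /= psf.sum()
    frames = [rng.poisson(fftconvolve(truth, psf, mode="same") + 1e-3).astype(float)
              for _ in range(2)]
    restored = multiframe_richardson_lucy(frames, [psf, psf])
    print("peak of restored image at:", np.unravel_index(np.argmax(restored), restored.shape))
    ```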

  15. Algorithms and file structures for computational geometry

    International Nuclear Information System (INIS)

    Hinrichs, K.; Nievergelt, J.

    1983-01-01

    Algorithms for solving geometric problems and file structures for storing large amounts of geometric data are of increasing importance in computer graphics and computer-aided design. As examples of recent progress in computational geometry, we explain plane-sweep algorithms, which solve various topological and geometric problems efficiently; and we present the grid file, an adaptable, symmetric multi-key file structure that provides efficient access to multi-dimensional data along any space dimension. (orig.)

  16. Simple Algorithms to Calculate Asymptotic Null Distributions of Robust Tests in Case-Control Genetic Association Studies in R

    Directory of Open Access Journals (Sweden)

    Wing Kam Fung

    2010-02-01

    Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT) is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance) is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependent CATT. Two robust tests, MAX3 and the genetic model selection (GMS), were recently proposed. Their asymptotic null distributions are often obtained by Monte Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how the components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computational intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data for the 17 most significant SNPs reported in four genome-wide association studies.
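
    For reference, a small sketch of the CATT itself (with the usual additive score vector and the asymptotic variance) is shown below; it does not reproduce the MAX3/GMS null-distribution algorithms that are the paper's contribution, and it is written in Python rather than R.

    ```python
    import numpy as np
    from scipy.stats import norm

    def catt(cases, controls, scores=(0.0, 0.5, 1.0)):
        """Cochran-Armitage trend test for genotype counts (AA, Aa, aa) in cases and
        controls; returns the Z statistic and the two-sided asymptotic p-value."""
        r = np.asarray(cases, float)      # case counts per genotype
        s = np.asarray(controls, float)   # control counts per genotype
        x = np.asarray(scores, float)     # additive score vector
        n = r + s
        R, S, N = r.sum(), s.sum(), n.sum()
        num = np.sum(x * (S * r - R * s))
        # asymptotic variance (using N in place of N - 1)
        var = (R * S / N) * (N * np.sum(x**2 * n) - np.sum(x * n) ** 2)
        z = num / np.sqrt(var)
        return z, 2.0 * norm.sf(abs(z))

    z, p = catt(cases=(30, 60, 40), controls=(50, 55, 25))
    print(f"Z = {z:.3f}, p = {p:.4f}")
    ```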

  17. Comparison of two (geometric) algorithms for auto OMA

    DEFF Research Database (Denmark)

    Juul, Martin; Olsen, Peter; Balling, Ole

    2018-01-01

    parameters. The two algorithms are compared and illustrated on simulated data. Different choices of distance measures are discussed and evaluated. It is illustrated how a simple distance measure outperforms traditional distance measures from other Auto OMA algorithms. Traditional measures are unable...

  18. A scalable geometric multigrid solver for nonsymmetric elliptic systems with application to variable-density flows

    Science.gov (United States)

    Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.

    2018-03-01

    A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations is determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O(N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 10⁴, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 10⁴ to 10⁵ unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
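
    As a much-reduced illustration of the geometric multigrid idea, the sketch below runs a two-grid correction cycle (weighted Jacobi smoothing, full-weighting restriction, linear prolongation, direct coarse solve) on the 1D Poisson equation; nothing about the paper's nonsymmetric variable-density systems or parallel partitioning is represented.

    ```python
    import numpy as np

    def jacobi(u, f, h, n_sweeps=3, omega=2.0 / 3.0):
        """Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs."""
        for _ in range(n_sweeps):
            u_new = u.copy()
            u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            u = u_new
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def two_grid(u, f, h):
        """One two-grid cycle: pre-smooth, coarse-grid correction (direct solve), post-smooth."""
        u = jacobi(u, f, h)
        r = residual(u, f, h)
        nc, hc = (u.size + 1) // 2, 2 * h
        rc = np.zeros(nc)                       # full-weighting restriction of the residual
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        # direct solve of the coarse tridiagonal system A_c e_c = r_c
        A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
             - np.diag(np.ones(nc - 3), -1)) / (hc * hc)
        ec = np.zeros(nc)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])
        e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)   # linear prolongation
        return jacobi(u + e, f, h)

    n = 129                                     # grid points, including the boundaries
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi**2 * np.sin(np.pi * x)            # exact solution is sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = two_grid(u, f, h)
    # the remaining error is at the level of the discretization error, ~5e-5
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```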

  19. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  20. Geometric Semantic Genetic Programming Algorithm and Slump Prediction

    OpenAIRE

    Xu, Juncai; Shen, Zhenzhong; Ren, Qingwen; Xie, Xin; Yang, Zhengyu

    2017-01-01

    Research on the performance of recycled concrete as a building material is currently an important subject. Given the complex composition of recycled concrete, conventional methods for forecasting slump scarcely obtain satisfactory results. Based on the theory of nonlinear prediction methods, we propose a recycled concrete slump prediction model based on geometric semantic genetic programming (GSGP) combined with recycled concrete features. Tests show that the model can accurately p...

  1. Uhlmann's geometric phase in presence of isotropic decoherence

    International Nuclear Information System (INIS)

    Tidstroem, Jonas; Sjoeqvist, Erik

    2003-01-01

    Uhlmann's mixed state geometric phase [Rep. Math. Phys. 24, 229 (1986)] is analyzed in the case of a qubit affected by isotropic decoherence treated in the Markovian approximation. It is demonstrated that this phase decreases rapidly with increasing decoherence rate and that it is most fragile to weak decoherence for pure or nearly pure initial states. In the unitary case, we compare Uhlmann's geometric phase for mixed states with that occurring in standard Mach-Zehnder interferometry [Phys. Rev. Lett. 85, 2845 (2000)] and show that the latter is more robust to reduction in the length of the Bloch vector. We also describe how Uhlmann's geometric phase in the present case could in principle be realized experimentally

  2. A robust algorithm for optimizing protein structures with NMR chemical shifts

    Energy Technology Data Exchange (ETDEWEB)

    Berjanskii, Mark; Arndt, David; Liang, Yongjie; Wishart, David S., E-mail: david.wishart@ualberta.ca [University of Alberta, Department of Computing Science (Canada)

    2015-11-15

    Over the past decade, a number of methods have been developed to determine the approximate structure of proteins using minimal NMR experimental information such as chemical shifts alone, sparse NOEs alone or a combination of comparative modeling data and chemical shifts. However, there have been relatively few methods that allow these approximate models to be substantively refined or improved using the available NMR chemical shift data. Here, we present a novel method, called Chemical Shift driven Genetic Algorithm for biased Molecular Dynamics (CS-GAMDy), for the robust optimization of protein structures using experimental NMR chemical shifts. The method incorporates knowledge-based scoring functions and structural information derived from NMR chemical shifts via a unique combination of multi-objective MD biasing, a genetic algorithm, and the widely used XPLOR molecular modelling language. Using this approach, we demonstrate that CS-GAMDy is able to refine and/or fold models that are as much as 10 Å (RMSD) away from the correct structure using only NMR chemical shift data. CS-GAMDy is also able to refine a wide range of approximate or mildly erroneous protein structures to more closely match the known/correct structure and the known/correct chemical shifts. We believe CS-GAMDy will allow protein models generated by sparse restraint or chemical-shift-only methods to achieve sufficiently high quality to be considered fully refined and “PDB worthy”. The CS-GAMDy algorithm is explained in detail and its performance is compared over a range of refinement scenarios with several commonly used protein structure refinement protocols. The program has been designed to be easily installed and easily used and is available at http://www.gamdy.ca.

  3. A differential-geometric approach to generalized linear models with grouped predictors

    NARCIS (Netherlands)

    Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.

    We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important

  4. Geometrical analysis of woven fabric microstructure based on micron-resolution computed tomography data

    Science.gov (United States)

    Krieger, Helga; Seide, Gunnar; Gries, Thomas; Stapleton, Scott E.

    2018-04-01

    The global mechanical properties of textiles such as elasticity and strength, as well as transport properties such as permeability depend strongly on the microstructure of the textile. Textiles are heterogeneous structures with highly anisotropic material properties, including local fiber orientation and local fiber volume fraction. In this paper, an algorithm is presented to generate a virtual 3D-model of a woven fabric architecture with information about the local fiber orientation and the local fiber volume fraction. The geometric data of the woven fabric impregnated with resin was obtained by micron-resolution computed tomography (μCT). The volumetric μCT-scan was discretized into cells and the microstructure of each cell was analyzed and homogenized. Furthermore, the discretized data was used to calculate the local permeability tensors of each cell. An example application of the analyzed data is the simulation of the resin flow through a woven fabric based on the determined local permeability tensors and on Darcy's law. The presented algorithm is an automated and robust method of going from μCT-scans to structural or flow models.

  5. Synthesis of fixed-architecture, robust H2 and H∞ controllers

    Directory of Open Access Journals (Sweden)

    Emmanuel G. Collins

    2000-01-01

    Full Text Available This paper discusses and compares the synthesis of fixed-architecture controllers that guarantee either robust H2 or H∞ performance. The synthesis is accomplished by solving a Riccati equation feasibility problem resulting from mixed structured singular value theory with Popov multipliers. Whereas the algorithm for robust H2 performance had been previously implemented, a major contribution described in this paper is the implementation of the much more complex algorithm for robust H∞ performance. Both robust H2 and H∞ controllers are designed for a benchmark problem, and a comparison is made between the resulting controllers and control algorithms. It is found that the numerical algorithm for robust H∞ performance is much more computationally intensive than that for robust H2 performance. Both controllers are found to have smaller bandwidth, lower control authority and to be less conservative than controllers obtained using complex structured singular value synthesis.

  6. Stock price prediction using geometric Brownian motion

    Science.gov (United States)

    Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM

    2018-03-01

    Geometric Brownian motion is a mathematical model for predicting the future price of a stock. The steps carried out before the stock price prediction are formulating the expected stock price and determining the 95% confidence level. In stock price prediction using the geometric Brownian motion model, the algorithm starts by calculating the returns, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price and determining the 95% confidence level. Based on the research, the output analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, supported by a forecast MAPE value ≤ 20%.
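
    A compact sketch of this workflow under the usual GBM assumptions is given below; the toy price series, forecast horizon and the use of a Monte Carlo percentile band for the 95% interval are illustrative choices, not the paper's exact procedure.

    ```python
    import numpy as np

    def fit_gbm(prices, dt=1.0):
        """Estimate drift mu and volatility sigma from a historical price series
        using log-returns (dt = 1 trading day here)."""
        log_ret = np.diff(np.log(prices))
        sigma = log_ret.std(ddof=1) / np.sqrt(dt)
        mu = log_ret.mean() / dt + 0.5 * sigma**2   # since E[log-return] = (mu - sigma^2/2) dt
        return mu, sigma

    def simulate_gbm(s0, mu, sigma, n_steps, n_paths=1000, dt=1.0, seed=0):
        """Simulate GBM paths S_t = S_0 exp((mu - sigma^2/2) t + sigma W_t)."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((n_paths, n_steps))
        increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
        return s0 * np.exp(np.cumsum(increments, axis=1))

    history = np.array([100.0, 101.2, 100.5, 102.3, 103.1, 102.8, 104.0])  # toy prices
    mu, sigma = fit_gbm(history)
    paths = simulate_gbm(history[-1], mu, sigma, n_steps=20)
    forecast = paths.mean(axis=0)                             # expected price path
    lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)        # 95% band
    print("20-step expected price:", forecast[-1], "95% interval:", (lo[-1], hi[-1]))
    ```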

  7. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Science.gov (United States)

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2014-01-01

    We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.

  8. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Directory of Open Access Journals (Sweden)

    Marco Congedo

    Full Text Available We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.
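
    For comparison, the plain fixed-point (Karcher-flow) iteration for the Fisher-information geometric mean can be sketched as follows; this is the kind of iterative baseline the AJD-based approximation is meant to sidestep, not the method proposed in the paper.

    ```python
    import numpy as np
    from scipy.linalg import sqrtm, logm, expm, inv

    def spd_geometric_mean(mats, n_iter=50, tol=1e-10):
        """Fixed-point iteration for the geometric (Karcher) mean of SPD matrices
        under the affine-invariant (Fisher information) metric."""
        X = sum(mats) / len(mats)                  # arithmetic mean as the initial guess
        for _ in range(n_iter):
            Xh = np.real(sqrtm(X))                 # X^{1/2}
            Xih = inv(Xh)                          # X^{-1/2}
            # average of the matrix logarithms in the tangent space at X
            T = sum(np.real(logm(Xih @ C @ Xih)) for C in mats) / len(mats)
            X_new = Xh @ np.real(expm(T)) @ Xh
            if np.linalg.norm(X_new - X, ord="fro") < tol:
                return X_new
            X = X_new
        return X

    # Toy usage on random SPD matrices
    rng = np.random.default_rng(0)
    mats = []
    for _ in range(5):
        A = rng.standard_normal((4, 4))
        mats.append(A @ A.T + 4.0 * np.eye(4))
    G = spd_geometric_mean(mats)
    print("symmetric:", np.allclose(G, G.T, atol=1e-8))
    ```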

  9. A Robust Inversion Algorithm for Surface Leaf and Soil Temperatures Using the Vegetation Clumping Index

    Directory of Open Access Journals (Sweden)

    Zunjian Bian

    2017-07-01

    Full Text Available The inversion of land surface component temperatures is an essential source of information for mapping heat fluxes and the angular normalization of thermal infrared (TIR) observations. Leaf and soil temperatures can be retrieved using multiple-view-angle TIR observations. In a satellite-scale pixel, the clumping effect of vegetation is usually present, but it is not completely considered during the inversion process. Therefore, we introduced a simple inversion procedure that uses gap frequency with a clumping index (GCI) for leaf and soil temperatures over both crop and forest canopies. Simulated datasets corresponding to turbid vegetation, regularly planted crops and randomly distributed forest were generated using a radiosity model and were used to test the proposed inversion algorithm. The results indicated that the GCI algorithm performed well for both crop and forest canopies, with root mean squared errors of less than 1.0 °C against simulated values. The proposed inversion algorithm was also validated using measured datasets over orchard, maize and wheat canopies. Similar results were achieved, demonstrating that using the clumping index can improve inversion results. In all evaluations, we recommend using the GCI algorithm as a foundation for future satellite-based applications due to its straightforward form and robust performance for both crop and forest canopies using the vegetation clumping index.

  10. Finite Algorithms for Robust Linear Regression

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun

    1990-01-01

    The Huber M-estimator for robust linear regression is analyzed. Newton type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may...
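
    The Huber M-estimate itself can be computed with the simpler iteratively reweighted least squares (IRLS) route sketched below; the paper analyzes finite Newton-type methods instead, and the tuning constant and MAD-based scale here are conventional choices.

    ```python
    import numpy as np

    def huber_irls(X, y, delta=1.345, n_iter=50, tol=1e-8):
        """Huber M-estimator for linear regression via iteratively reweighted least
        squares. 'delta' is the usual Huber tuning constant."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]         # ordinary LS start
        for _ in range(n_iter):
            r = y - X @ beta
            scale = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale estimate (MAD)
            a = np.abs(r / scale)
            w = np.where(a <= delta, 1.0, delta / a)        # Huber weights
            beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            if np.linalg.norm(beta_new - beta) < tol:
                return beta_new
            beta = beta_new
        return beta

    # Toy usage with one gross outlier
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])
    y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(50)
    y[0] += 50.0                                            # outlier
    print("robust fit:", huber_irls(X, y))                  # close to [1.0, 2.0]
    ```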

  11. Robust and rapid algorithms facilitate large-scale whole genome sequencing downstream analysis in an integrative framework.

    Science.gov (United States)

    Li, Miaoxin; Li, Jiang; Li, Mulin Jun; Pan, Zhicheng; Hsu, Jacob Shujui; Liu, Dajiang J; Zhan, Xiaowei; Wang, Junwen; Song, Youqiang; Sham, Pak Chung

    2017-05-19

    Whole genome sequencing (WGS) is a promising strategy to unravel variants or genes responsible for human diseases and traits. However, there is a lack of robust platforms for a comprehensive downstream analysis. In the present study, we first proposed three novel algorithms (sequence gap-filled gene feature annotation, bit-block encoded genotypes and sectional fast access to text lines) to address three fundamental problems. The three algorithms then formed the infrastructure of a robust parallel computing framework, KGGSeq, for integrating downstream analysis functions for whole genome sequencing data. KGGSeq has been equipped with a comprehensive set of analysis functions for quality control, filtration, annotation, pathogenic prediction and statistical tests. In tests with whole genome sequencing data from the 1000 Genomes Project, KGGSeq annotated several thousand more reliable non-synonymous variants than other widely used tools (e.g. ANNOVAR and SNPEff). It took only around half an hour on a small server with 10 CPUs to access genotypes of ∼60 million variants of 2504 subjects, while a popular alternative tool required around one day. KGGSeq's bit-block genotype format used 1.5% or less space to flexibly represent phased or unphased genotypes with multiple alleles, and was over 1000 times faster at calculating genotypic correlation. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
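
    The exact bit-block layout used by KGGSeq is not described here; the sketch below only illustrates the general idea of packing biallelic genotype codes into two bits each with NumPy, which is where the large space savings come from.

    ```python
    import numpy as np

    # Genotype codes: 0 = hom-ref, 1 = het, 2 = hom-alt, 3 = missing (2 bits each).
    def pack_genotypes(geno):
        """Pack an array of genotype codes (values 0-3) into bytes, 4 genotypes per byte."""
        geno = np.asarray(geno, dtype=np.uint8)
        pad = (-len(geno)) % 4
        geno = np.concatenate([geno, np.full(pad, 3, dtype=np.uint8)])   # pad with 'missing'
        g = geno.reshape(-1, 4)
        return (g[:, 0] | (g[:, 1] << 2) | (g[:, 2] << 4) | (g[:, 3] << 6)).astype(np.uint8)

    def unpack_genotypes(packed, n):
        """Inverse of pack_genotypes for the first n genotypes."""
        p = np.asarray(packed, dtype=np.uint8)
        g = np.stack([(p >> shift) & 0b11 for shift in (0, 2, 4, 6)], axis=1).reshape(-1)
        return g[:n]

    codes = np.array([0, 1, 2, 3, 1, 1, 0])
    packed = pack_genotypes(codes)
    print(packed.nbytes, "bytes for", codes.size, "genotypes")
    print(np.array_equal(unpack_genotypes(packed, codes.size), codes))   # True
    ```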

  12. Robust Fourier Watermarking for ID Images on Smart Card Plastic Supports

    Directory of Open Access Journals (Sweden)

    RIAD, R.

    2016-11-01

    Full Text Available Security checking can be improved by watermarking identity (ID) images printed on smart card plastic supports. The major challenge is resistance to attacks: printing the images on the plastic cards, durability and other attacks, and then scanning the image from the plastic card. In this work, a robust watermarking technique is presented in this context. It is composed of three main mechanisms. The first is a watermarking algorithm based on the Fourier transform to cope with global geometric distortions. The second comprises a filter that reduces image blurring. The third attenuates color degradations. Experiments on 400 ID images show that the Wiener filter strongly improves the detection rate and outperforms competitive algorithms (blind deconvolution and unsharp filter). Color corrections also enhance the watermarking score. The whole scheme has high efficiency and a low computational cost. This makes it compatible with the desired industrial constraints, i.e. the watermark is to be invisible, the error rate must be lower than 1%, and the detection of the mark should be fast and simple for the user.

  13. Automated sulcal depth measurement on cortical surface reflecting geometrical properties of sulci.

    Directory of Open Access Journals (Sweden)

    Hyuk Jin Yun

    Full Text Available Sulcal depth, one of the quantitative measures of the cerebral cortex, has been widely used as an important marker for brain morphological studies. Several studies have employed Euclidean (EUD) or geodesic (GED) algorithms to measure sulcal depth, which have the limitation of ignoring sulcal geometry in highly convoluted regions, resulting in under- or overestimated depth. In this study, we propose an automated measurement of sulcal depth on the cortical surface that reflects the geometrical properties of sulci, named the adaptive distance transform (ADT). We first defined the volume region of cerebrospinal fluid between the 3D convex hull and the cortical surface, and constructed local coordinates for that restricted region. Dijkstra’s algorithm was then used to compute the shortest paths from the convex hull to the vertices of the cortical surface based on the local coordinates, which may be the most appropriate approach for defining sulcal depth. We applied our algorithm to both a clinical dataset including patients with mild Alzheimer’s disease (AD) and 25 normal controls, and a simulated dataset whose shape was similar to a single sulcus. The mean sulcal depth in the mild AD group was significantly lower than in controls (p = 0.007; normal [mean±SD]: 7.29±0.23 mm, AD: 7.11±0.29 mm), and the area under the receiver operating characteristic curve was relatively high, with a value of 0.818. Results from the clinical dataset, which were consistent with former studies using EUD or GED, demonstrated that ADT was sensitive to cortical atrophy. The robustness of ADT against inter-individual variability was highlighted through the simulated dataset. ADT showed a low and constant normalized difference between the depth of the simulated data and the calculated depth, whereas EUD and GED had high and variable differences. We suggest that ADT is more robust than EUD or GED and might be a useful alternative algorithm for measuring sulcal depth.

  14. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Directory of Open Access Journals (Sweden)

    Prajakta Desai

    Full Text Available Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  15. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Science.gov (United States)

    Desai, Prajakta; Loke, Seng W; Desai, Aniruddha

    2017-01-01

    Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  16. Efficient Geometric Sound Propagation Using Visibility Culling

    Science.gov (United States)

    Chandak, Anish

    2011-07-01

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying

  17. Get Your Atoms in Order--An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm.

    Science.gov (United States)

    Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A

    2015-10-26

    Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
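
    The following toy sketch illustrates the general refinement-by-stable-sort idea (it is not the RDKit algorithm and omits the paper's new chirality and symmetry invariants): atom ranks are repeatedly recomputed from each atom's current rank together with the sorted ranks of its neighbours until a fixed point is reached.

    ```python
    def canonical_ranks(atoms, bonds):
        """Toy Morgan-style canonical ranking by iterative refinement.

        atoms: list of initial atom invariants (e.g. element symbols)
        bonds: list of (i, j) index pairs
        Ties that survive refinement are left unresolved here; a full canonicalizer
        would break them deterministically and continue refining.
        """
        n = len(atoms)
        neighbors = [[] for _ in range(n)]
        for i, j in bonds:
            neighbors[i].append(j)
            neighbors[j].append(i)

        def ranks_from_keys(keys):
            # stable sort; atoms with equal keys receive equal ranks
            order = sorted(range(n), key=lambda i: keys[i])
            ranks, current = [0] * n, 0
            for pos, i in enumerate(order):
                if pos > 0 and keys[i] != keys[order[pos - 1]]:
                    current = pos
                ranks[i] = current
            return ranks

        ranks = ranks_from_keys(list(atoms))
        while True:
            keys = [(ranks[i], tuple(sorted(ranks[j] for j in neighbors[i])))
                    for i in range(n)]
            new_ranks = ranks_from_keys(keys)
            if new_ranks == ranks:      # fixed point: no equivalence class was split
                return ranks
            ranks = new_ranks

    # Toy example: the heavy atoms of ethanol, C-C-O
    print(canonical_ranks(["C", "C", "O"], [(0, 1), (1, 2)]))   # -> [0, 1, 2]
    ```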

  18. A robust algorithm to solve the signal setting problem considering different traffic assignment approaches

    Directory of Open Access Journals (Sweden)

    Adacher Ludovica

    2017-12-01

    Full Text Available In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.

  19. Robust glint detection through homography normalization

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Roholm, Lars; García Ferreiros, Iván

    2014-01-01

    A novel normalization principle for robust glint detection is presented. The method is based on geometric properties of corneal reflections and allows for simple and effective detection of glints even in the presence of several spurious and identically appearing reflections. The method is tested...

  20. Implementation and efficiency of two geometric stiffening approaches

    International Nuclear Information System (INIS)

    Lugris, Urbano; Naya, Miguel A.; Perez, Jose A.; Cuadrado, Javier

    2008-01-01

    When the modeling of flexible bodies is required in multibody systems, the floating frame of reference formulations are probably the most efficient methods available. In the case of beams undergoing high speed rotations, the geometric stiffening effect can appear due to geometric nonlinearities, and it is often not captured by the aforementioned methods, since it is common to linearize the elastic forces assuming small deformations. The present work discusses the implementation of different existing methods developed to consider such geometric nonlinearities within a floating frame of reference formulation in natural coordinates, making emphasis on the relation between efficiency and accuracy of the resulting algorithms, seeking to provide practical criteria of use

  1. TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization

    Science.gov (United States)

    2016-11-28


  2. PET functional volume delineation: a robustness and repeatability study

    International Nuclear Information System (INIS)

    Hatt, Mathieu; Cheze-le Rest, Catherine; Albarghach, Nidal; Pradier, Olivier; Visvikis, Dimitris

    2011-01-01

    Current state-of-the-art algorithms for functional uptake volume segmentation in PET imaging consist of threshold-based approaches, whose parameters often require specific optimization for a given scanner and associated reconstruction algorithms. Different advanced image segmentation approaches previously proposed and extensively validated, such as, among others, fuzzy C-means (FCM) clustering or the fuzzy locally adaptive Bayesian (FLAB) algorithm, have the potential to improve the robustness of functional uptake volume measurements. The objective of this study was to investigate robustness and repeatability with respect to various scanner models, reconstruction algorithms and acquisition conditions. Robustness was evaluated using a series of IEC phantom acquisitions carried out on different PET/CT scanners (Philips Gemini and Gemini Time-of-Flight, Siemens Biograph and GE Discovery LS) with their associated reconstruction algorithms (RAMLA, TF MLEM, OSEM). A range of acquisition parameters (contrast, duration) and reconstruction parameters (voxel size) were considered for each scanner model, and the repeatability of each method was evaluated on simulated and clinical tumours and compared to manual delineation. For all the scanner models, acquisition parameters and reconstruction algorithms considered, the FLAB algorithm demonstrated higher robustness in delineation of the spheres with low mean errors (10%) and variability (5%), with respect to threshold-based methodologies and FCM. The repeatability provided by all segmentation algorithms considered was very high with a negligible variability of <5% in comparison to that associated with manual delineation (5-35%). The use of advanced image segmentation algorithms may not only allow high accuracy as previously demonstrated, but also provide a robust and repeatable tool to aid physicians as an initial guess in determining functional volumes in PET. (orig.)

  3. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with changing time delay.

  4. Robust Spacecraft Component Detection in Point Clouds

    Directory of Open Access Journals (Sweden)

    Quanmao Wei

    2018-03-01

    Full Text Available Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  5. Robust Spacecraft Component Detection in Point Clouds.

    Science.gov (United States)

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.
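
    Both records above describe fitting geometric primitives (planes, cylinders, cuboids) to spacecraft point clouds; the paper itself uses energy-based model fitting and a Hough transform, which are not reproduced here. The sketch below shows only the simpler, generic idea of primitive detection with a basic RANSAC plane fit on a synthetic cloud; the inlier tolerance and iteration count are arbitrary placeholder values.

```python
import numpy as np

def ransac_plane(points, n_iter=500, inlier_tol=0.02, seed=0):
    """Fit one dominant plane (n.x + d = 0) to an (N, 3) point cloud with RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iter):
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)
        inliers = np.flatnonzero(dist < inlier_tol)
        if inliers.size > best_inliers.size:
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# synthetic "solar panel": a noisy plane plus random clutter
rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, (2000, 2))
panel = np.column_stack([xy, 0.5 + 0.005 * rng.normal(size=2000)])
clutter = rng.uniform(-1, 1, (500, 3))
model, inliers = ransac_plane(np.vstack([panel, clutter]))
print("plane normal:", model[0], " inliers:", inliers.size)
```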

  6. Geometric integrator for simulations in the canonical ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Tapias, Diego, E-mail: diego.tapias@nucleares.unam.mx [Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico); Sanders, David P., E-mail: dpsanders@ciencias.unam.mx [Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico); Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Bravetti, Alessandro, E-mail: alessandro.bravetti@iimas.unam.mx [Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico)

    2016-08-28

    We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.

  7. Geometric integrator for simulations in the canonical ensemble

    International Nuclear Information System (INIS)

    Tapias, Diego; Sanders, David P.; Bravetti, Alessandro

    2016-01-01

    We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.
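
    The central claim of these two records is that a non-geometric integrator induces a drift in the invariant quantity while a geometric one does not. The toy comparison below illustrates that phenomenon on a plain harmonic oscillator (explicit Euler versus symplectic Euler); it is not the density-dynamics integrator of the paper, and the step size and horizon are arbitrary.

```python
import numpy as np

def integrate(q0, p0, dt, steps, method):
    """Harmonic oscillator H = (p^2 + q^2)/2, explicit Euler vs. symplectic Euler."""
    q, p = q0, p0
    energy = np.empty(steps)
    for k in range(steps):
        if method == "euler":        # non-geometric update: the invariant drifts
            q, p = q + dt * p, p - dt * q
        else:                        # symplectic Euler: the invariant stays bounded
            p = p - dt * q
            q = q + dt * p
        energy[k] = 0.5 * (p * p + q * q)
    return energy

e_euler = integrate(1.0, 0.0, 0.05, 20000, "euler")
e_symp = integrate(1.0, 0.0, 0.05, 20000, "symplectic")
print("explicit Euler   final energy: %.3e  (started at 0.5)" % e_euler[-1])
print("symplectic Euler max |drift| : %.4f" % np.max(np.abs(e_symp - 0.5)))
```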

  8. A Robust Dynamic Heart-Rate Detection Algorithm Framework During Intense Physical Activities Using Photoplethysmographic Signals

    Directory of Open Access Journals (Sweden)

    Jiajia Song

    2017-10-01

    Full Text Available Dynamic accurate heart-rate (HR) estimation using a photoplethysmogram (PPG) during intense physical activities is always challenging due to corruption by motion artifacts (MAs). It is difficult to reconstruct a clean signal and extract HR from contaminated PPG. This paper proposes a robust HR-estimation algorithm framework that uses one-channel PPG and tri-axis acceleration data to reconstruct the PPG and calculate the HR based on features of the PPG and spectral analysis. Firstly, the signal is judged by the presence of MAs. Then, the spectral peaks corresponding to acceleration data are filtered from the periodogram of the PPG when MAs exist. Different signal-processing methods are applied based on the amount of remaining PPG spectral peaks. The main MA-removal algorithm (NFEEMD) includes the repeated single-notch filter and ensemble empirical mode decomposition. Finally, HR calibration is designed to ensure the accuracy of HR tracking. The NFEEMD algorithm was performed on the 23 datasets from the 2015 IEEE Signal Processing Cup Database. The average estimation errors were 1.12 BPM (12 training datasets), 2.63 BPM (10 testing datasets) and 1.87 BPM (all 23 datasets), respectively. The Pearson correlation was 0.992. The experiment results illustrate that the proposed algorithm is not only suitable for HR estimation during continuous activities, like slow running (13 training datasets), but also for intense physical activities with acceleration, like arm exercise (10 testing datasets).
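
    One step of the framework described above is removing, from the PPG periodogram, the spectral peaks that coincide with peaks of the acceleration signal. The sketch below illustrates only that step on synthetic signals using SciPy periodograms; the EEMD stage, notch filtering and HR calibration of NFEEMD are not reproduced, and the 0.1 Hz matching tolerance and signal parameters are invented for the example.

```python
import numpy as np
from scipy.signal import periodogram

fs = 125.0                        # sampling rate (Hz), typical for wrist PPG
t = np.arange(0, 8.0, 1.0 / fs)
hr_hz, cadence_hz = 1.8, 2.5      # "true" heart rate (108 BPM) and arm-swing rate
ppg = np.sin(2 * np.pi * hr_hz * t) + 0.8 * np.sin(2 * np.pi * cadence_hz * t)
acc = np.sin(2 * np.pi * cadence_hz * t)     # motion seen by the accelerometer

f_ppg, p_ppg = periodogram(ppg, fs)
f_acc, p_acc = periodogram(acc, fs)

# crude acceleration peak picking: bins above 10% of the maximum power
acc_peaks = f_acc[p_acc > 0.1 * p_acc.max()]

# discard PPG bins that lie within 0.1 Hz of any acceleration peak
keep = np.all(np.abs(f_ppg[:, None] - acc_peaks[None, :]) > 0.1, axis=1)
cleaned = np.where(keep, p_ppg, 0.0)

band = (f_ppg > 0.7) & (f_ppg < 3.5)         # plausible heart-rate band
est_hz = f_ppg[band][np.argmax(cleaned[band])]
print(f"estimated HR: {60 * est_hz:.1f} BPM")
```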

  9. Geometric Algebra Computing

    CERN Document Server

    Corrochano, Eduardo Bayro

    2010-01-01

    This book presents contributions from a global selection of experts in the field. This useful text offers new insights and solutions for the development of theorems, algorithms and advanced methods for real-time applications across a range of disciplines. Written in an accessible style, the discussion of all applications is enhanced by the inclusion of numerous examples, figures and experimental analysis. Features: provides a thorough discussion of several tasks for image processing, pattern recognition, computer vision, robotics and computer graphics using the geometric algebra framework; int

  10. Geometric scalar theory of gravity beyond spherical symmetry

    Science.gov (United States)

    Moschella, U.; Novello, M.

    2017-04-01

    We construct several exact solutions for a recently proposed geometric scalar theory of gravity. We focus on a class of axisymmetric geometries and a big-bang-like geometry and discuss their Lorentzian character. The axisymmetric solutions are parametrized by an integer angular momentum l. The l = 0 (spherical) case gives rise to the Schwarzschild geometry. The other solutions have naked singular surfaces. While not a priori obvious, all the solutions that we present here are globally Lorentzian. The Lorentzian signature appears to be a robust property of the disformal geometries solving the vacuum geometric scalar theory of gravity equations.

  11. Universal geometrical module for MARS program

    International Nuclear Information System (INIS)

    Talanov, V.V.

    1992-01-01

    The geometrical program module for modeling hadron and electromagnetic cascades, which compares the physical coordinates of a particle's current state with one of the auxiliary cells, is described. The whole medium wherein the particles are tracked is divided into a certain number of auxiliary cells. The identification algorithm for the cell through which the particle trajectory passes is considered in detail. The described algorithm for cell identification was developed for the MARS program and realized in the form of a set of subprograms written in the FORTRAN language. 4 refs., 1 tab

  12. Geometrically Consistent Mesh Modification

    KAUST Repository

    Bonito, A.

    2010-01-01

    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  13. Synthesis of fixed-architecture, robust H2 and H∞ controllers

    Directory of Open Access Journals (Sweden)

    Collins Jr. Emmanuel G.

    2000-01-01

    Full Text Available This paper discusses and compares the synthesis of fixed-architecture controllers that guarantee either robust H2 or H∞ performance. The synthesis is accomplished by solving a Riccati equation feasibility problem resulting from mixed structured singular value theory with Popov multipliers. Whereas the algorithm for robust H2 performance had been previously implemented, a major contribution described in this paper is the implementation of the much more complex algorithm for robust H∞ performance. Both robust H2 and H∞ controllers are designed for a benchmark problem and a comparison is made between the resulting controllers and control algorithms. It is found that the numerical algorithm for robust H∞ performance is much more computationally intensive than that for robust H2 performance. Both controllers are found to have smaller bandwidth, lower control authority and to be less conservative than controllers obtained using complex structured singular value synthesis.

  14. Multiscale geometric modeling of macromolecules I: Cartesian representation

    Science.gov (United States)

    Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2014-01-01

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the

  15. Multiscale geometric modeling of macromolecules I: Cartesian representation

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin [Department of Mathematics, Michigan State University, MI 48824 (United States); Feng, Xin [Department of Computer Science and Engineering, Michigan State University, MI 48824 (United States); Chen, Zhan [Department of Mathematics, Michigan State University, MI 48824 (United States); Tong, Yiying [Department of Computer Science and Engineering, Michigan State University, MI 48824 (United States); Wei, Guo-Wei, E-mail: wei@math.msu.edu [Department of Mathematics, Michigan State University, MI 48824 (United States); Department of Biochemistry and Molecular Biology, Michigan State University, MI 48824 (United States)

    2014-01-15

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace–Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the

  16. Positioning performance analysis of the time sum of arrival algorithm with error features

    Science.gov (United States)

    Gong, Feng-xun; Ma, Yan-qiu

    2018-03-01

    The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed and used to present the distribution of the location ambiguity region for a four-base-station layout. The performance analysis then starts from this four-station case by calculating the RMSE and the GDOP variation. Subsequently, the performance of the TSOA algorithm is examined as the location parameters are varied, including the number of base stations and the station layout, revealing the characteristics and performance of TSOA localization. The trends in RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm, which can be used to reduce the blind zone and the false location rate of MLAT systems.
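
    GDOP in the record above quantifies how station geometry amplifies ranging noise. As a hedged illustration (for a plain 2-D time-of-arrival model with a clock-bias column, not the TSOA formulation of the paper), the snippet below computes GDOP from the geometry matrix for a four-station layout at two target positions.

```python
import numpy as np

def gdop(stations, target):
    """Geometric dilution of precision for 2-D time-of-arrival positioning.

    Each row of the geometry matrix holds the unit line-of-sight vector from the
    target to a station plus a clock-bias column; GDOP = sqrt(trace((H^T H)^-1)).
    """
    stations = np.asarray(stations, dtype=float)
    diff = stations - np.asarray(target, dtype=float)
    dist = np.linalg.norm(diff, axis=1)
    H = np.column_stack([diff / dist[:, None], np.ones(len(stations))])
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

# four base stations on a square, target in the centre vs. off to one side
square = [(0, 0), (0, 10), (10, 0), (10, 10)]
print("GDOP at centre :", round(gdop(square, (5, 5)), 2))
print("GDOP off-centre:", round(gdop(square, (20, 5)), 2))
```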

  17. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  18. Quantum algorithms for topological and geometric analysis of data

    Science.gov (United States)

    Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo

    2016-01-01

    Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491
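
    Betti numbers themselves are classical quantities; the quantum speed-up discussed above is not reproduced here. As a small reference point, the snippet below computes beta_0 (connected components) and beta_1 (independent loops) of a graph classically from the rank of its boundary matrix.

```python
import numpy as np

def betti_numbers_graph(n_vertices, edges):
    """Betti numbers beta_0 and beta_1 of a graph seen as a 1-dim simplicial complex.

    Uses the classical formulas beta_0 = V - rank(d1) and beta_1 = E - rank(d1),
    where d1 is the vertex-edge boundary matrix.
    """
    d1 = np.zeros((n_vertices, len(edges)))
    for j, (u, v) in enumerate(edges):
        d1[u, j] = -1.0
        d1[v, j] = 1.0
    r = np.linalg.matrix_rank(d1)
    return n_vertices - r, len(edges) - r

# hollow triangle (one loop) plus an isolated vertex (two components)
b0, b1 = betti_numbers_graph(4, [(0, 1), (1, 2), (2, 0)])
print("connected components:", b0, " independent loops:", b1)   # -> 2, 1
```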

  19. Creating geometrically robust designs for highly sensitive problems using topology optimization: Acoustic cavity design

    DEFF Research Database (Denmark)

    Christiansen, Rasmus E.; Lazarov, Boyan S.; Jensen, Jakob S.

    2015-01-01

    Resonance and wave-propagation problems are known to be highly sensitive towards parameter variations. This paper discusses topology optimization formulations for creating designs that perform robustly under spatial variations for acoustic cavity problems. For several structural problems, robust...... and limitations are discussed. In addition, a known explicit penalization approach is considered for comparison. For near-uniform spatial variations it is shown that highly robust designs can be obtained using the double filter approach. It is finally demonstrated that taking non-uniform variations into account...... further improves the robustness of the designs....

  20. Robust Template Decomposition without Weight Restriction for Cellular Neural Networks Implementing Arbitrary Boolean Functions Using Support Vector Classifiers

    Directory of Open Access Journals (Sweden)

    Yih-Lon Lin

    2013-01-01

    Full Text Available If the given Boolean function is linearly separable, a robust uncoupled cellular neural network can be designed as a maximal margin classifier. On the other hand, if the given Boolean function is linearly separable but has a small geometric margin or it is not linearly separable, a popular approach is to find a sequence of robust uncoupled cellular neural networks implementing the given Boolean function. In the past research works using this approach, the control template parameters and thresholds are restricted to assume only a given finite set of integers, and this is certainly unnecessary for the template design. In this study, we try to remove this restriction. Minterm- and maxterm-based decomposition algorithms utilizing the soft margin and maximal margin support vector classifiers are proposed to design a sequence of robust templates implementing an arbitrary Boolean function. Several illustrative examples are simulated to demonstrate the efficiency of the proposed method by comparing our results with those produced by other decomposition methods with restricted weights.

  1. Robustness of Multiple Clustering Algorithms on Hyperspectral Images

    National Research Council Canada - National Science Library

    Williams, Jason P

    2007-01-01

    .... Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two dimensional dataset in order to discover potential problems with the algorithms...

  2. Symplectic Geometric Algorithms for Hamiltonian Systems

    CERN Document Server

    Feng, Kang

    2010-01-01

    "Symplectic Geometry Algorithms for Hamiltonian Systems" will be useful not only for numerical analysts, but also for those in theoretical physics, computational chemistry, celestial mechanics, etc. The book generalizes and develops the generating function and Hamilton-Jacobi equation theory from the perspective of the symplectic geometry and symplectic algebra. It will be a useful resource for engineers and scientists in the fields of quantum theory, astrophysics, atomic and molecular dynamics, climate prediction, oil exploration, etc. Therefore a systematic research and development

  3. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    International Nuclear Information System (INIS)

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-01-01

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  4. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    Science.gov (United States)

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
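
    Both records describe a projection-based solver for a class of convex optimization problems; the actual IMPT solver is not reproduced here. The sketch below only illustrates the generic projected-gradient pattern on a box-constrained least-squares problem, with made-up data and a conservative step size.

```python
import numpy as np

def projected_gradient(A, b, lower, upper, n_iter=2000):
    """Minimise ||Ax - b||^2 subject to lower <= x <= upper by projected gradient.

    Each iteration takes a gradient step and then projects back onto the box,
    the basic pattern behind projection-type solvers for convex problems.
    """
    x = np.clip(np.zeros(A.shape[1]), lower, upper)
    step = 0.5 / np.linalg.norm(A, 2) ** 2     # safe step below 1/L
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)
        x = np.clip(x - step * grad, lower, upper)   # projection onto the box
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)
x = projected_gradient(A, b, lower=0.0, upper=1.0)
print("solution within box:", bool(x.min() >= 0.0 and x.max() <= 1.0))
print("objective:", round(float(np.sum((A @ x - b) ** 2)), 3))
```

    Treatment planning adds dose-volume objectives and many more variables, but the project-then-step structure is the same basic idea.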

  5. Robust low frequency current ripple elimination algorithm for grid-connected fuel cell systems with power balancing technique

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong-Soo; Choe, Gyu-Yeong; Lee, Byoung-Kuk [School of Information and Communication Engineering, Sungkyunkwan University, 300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do 440-746 (Korea, Republic of); Kang, Hyun-Soo [R and D Center, Advanced Drive Technology (ADT) Company, 689-26 Geumjeong-dong, Gunpo-si, Gyeonggi-do 435-862 (Korea, Republic of)

    2011-05-15

    The low frequency current ripple in grid-connected fuel cell systems is generated by dc-ac inverter operation, which produces a 60 Hz fundamental component, and has harmful effects on the fuel cell stack itself, such as slowing cathode surface responses, increasing fuel consumption by more than 10%, creating oxygen starvation, reducing the operating lifetime, and incurring nuisance tripping such as overload situations. For these reasons, low frequency current ripple makes the fuel cell system unstable and shortens the lifetime of the fuel cell stack. This paper presents a fast and robust control algorithm to eliminate low frequency current ripple in grid-connected fuel cell systems. Compared with conventional methods, in the proposed control algorithm the dc link voltage controller is shifted from the dc-dc converter to the dc-ac inverter, so that the dc-ac inverter handles dc link voltage control and output current control simultaneously with the help of a power balancing technique. The results indicate that the proposed algorithm can not only completely eliminate the current ripple but also significantly reduce the overshoot or undershoot during transient states without any extra hardware. The validity of the proposed algorithm is verified by computer simulations and also by experiments with a 1 kW laboratory prototype. (author)

  6. Geometrical superresolved imaging using nonperiodic spatial masking.

    Science.gov (United States)

    Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram

    2009-03-01

    The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by lack of spatial sampling points as well as by the shape of every sampling pixel that generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrical superresolved imaging without relation to the actual shape of the pixels.

  7. 2D Vector Field Simplification Based on Robustness

    KAUST Repository

    Skraba, Primoz

    2014-03-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. These geometric metrics do not consider the flow magnitude, an important physical property of the flow. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness, which provides a complementary view on flow structure compared to the traditional topological-skeleton-based approaches. Robustness enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory, has fewer boundary restrictions, and so can handle more general cases. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. © 2014 IEEE.

  8. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery

    Science.gov (United States)

    Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa

    2017-09-01

    A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of the kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects from big noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, the random kernel sinks for fast kernel matrix approximation and the two-stage algorithm for optimizing initial pure endmembers are utilized to improve its computational efficiency in realistic implementations of RKADA, respectively. The optimization equation of RKADA is solved by using the block coordinate descent scheme and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed to make comparisons with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms, vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms, the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all the six methods in terms of spectral angle distance (SAD) and root-mean-square-error (RMSE). Moreover, the RKADA has short computational times in offline

  9. Color-SIFT model: a robust and an accurate shot boundary detection algorithm

    Science.gov (United States)

    Sharmila Kumari, M.; Shekar, B. H.

    2010-02-01

    In this paper, a new technique called the color-SIFT model is devised for shot boundary detection. Unlike the scale invariant feature transform model, which uses only grayscale information and misses important visual information regarding color, here we have adopted different color planes to extract keypoints which are subsequently used to detect shot boundaries. The basic SIFT model has four stages, namely scale-space peak selection, keypoint localization, orientation assignment and keypoint descriptor, and all four stages were employed to extract key descriptors in each color plane. The proposed model works on three different color planes and a fusion has been made to take a decision on the number of keypoint matches for shot boundary identification; it is hence different from the color global scale invariant feature transform that works on quantized images. In addition, the proposed algorithm possesses invariance to linear transformations and is robust to occlusion and noisy environments. Experiments have been conducted on the standard TRECVID video database to reveal the performance of the proposed model.
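
    A minimal building block of such a model is counting SIFT keypoint matches between consecutive frames; a sharp drop in matches suggests a shot boundary. The sketch below shows only that building block in grayscale using OpenCV (it assumes cv2.SIFT_create is available, i.e. OpenCV 4.4 or later); the per-color-plane fusion and the actual decision rule of the paper are not reproduced, and the toy frames exist only to exercise the code.

```python
import cv2   # assumes OpenCV >= 4.4, where cv2.SIFT_create is available
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def keypoint_matches(frame_a, frame_b, ratio=0.75):
    """Count Lowe-ratio-filtered SIFT matches between two grayscale frames."""
    _, desc_a = sift.detectAndCompute(frame_a, None)
    _, desc_b = sift.detectAndCompute(frame_b, None)
    if desc_a is None or desc_b is None or len(desc_b) < 2:
        return 0
    good = 0
    for pair in matcher.knnMatch(desc_a, desc_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

# toy frames: blurred noise, a shifted copy (same shot) and unrelated content (a cut)
def blob_frame(seed):
    noise = np.random.default_rng(seed).integers(0, 255, (240, 320), dtype=np.uint8)
    return cv2.GaussianBlur(noise, (0, 0), 3)

frame1 = blob_frame(1)
frame2 = np.roll(frame1, 5, axis=1)      # same scene, small camera motion
frame3 = blob_frame(2)                   # different scene
print("matches within the same shot:", keypoint_matches(frame1, frame2))
print("matches across a cut        :", keypoint_matches(frame1, frame3))
```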

  10. Symmetry and Algorithmic Complexity of Polyominoes and Polyhedral Graphs

    KAUST Repository

    Zenil, Hector

    2018-02-24

    We introduce a definition of algorithmic symmetry able to capture essential aspects of geometric symmetry. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov-Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity---both theoretical and numerical---with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize properties of polyominoes, polytopes, regular and quasi-regular polyhedra as well as polyhedral networks, thereby demonstrating its profiling capabilities.

  11. Symmetry and Algorithmic Complexity of Polyominoes and Polyhedral Graphs

    KAUST Repository

    Zenil, Hector; Kiani, Narsis A.; Tegner, Jesper

    2018-01-01

    We introduce a definition of algorithmic symmetry able to capture essential aspects of geometric symmetry. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov-Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity---both theoretical and numerical---with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize properties of polyominoes, polytopes, regular and quasi-regular polyhedra as well as polyhedral networks, thereby demonstrating its profiling capabilities.
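
    The abstracts above mention approximating algorithmic complexity by lossless compression. The toy snippet below applies that idea in the crudest possible form, comparing the zlib-compressed size of the row-major encodings of a regular and an irregular 8x8 binary shape; compressed length is only a rough upper-bound proxy and not the Algorithmic Probability estimator used in the paper.

```python
import random
import zlib

def compressed_size(grid):
    """Length in bytes of the zlib-compressed row-major 0/1 encoding of a grid."""
    flat = "".join(str(cell) for row in grid for cell in row)
    return len(zlib.compress(flat.encode(), 9))

random.seed(0)
square = [[1] * 8 for _ in range(8)]                                      # highly regular
scrambled = [[random.randint(0, 1) for _ in range(8)] for _ in range(8)]  # irregular

print("regular 8x8 square :", compressed_size(square), "bytes")
print("scrambled 8x8 shape:", compressed_size(scrambled), "bytes")
```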

  12. Discrete geometric analysis of message passing algorithm on graphs

    Science.gov (United States)

    Watanabe, Yusuke

    2010-04-01

    We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required. However, they are often computationally intractable. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e. having no cycle, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and possibly exhibits oscillatory and non-convergent behaviors. The thematic question of this thesis is "How are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph?" The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for analysis of the LBP algorithm, connecting properties of the graph and of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems including (non) convexity of the Bethe free energy, the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
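
    As a concrete reference for the sum-product updates analysed in this thesis, the sketch below runs loopy belief propagation on a tiny binary pairwise model over a triangle (a single cycle) and compares the resulting marginal with brute-force enumeration; the potentials are arbitrary and the fixed-iteration, damping-free schedule is a simplification.

```python
import itertools
import numpy as np

# a tiny binary pairwise model on a triangle (one cycle, so LBP is only approximate)
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]
unary = {i: np.array([1.0, 2.0 - 0.3 * i]) for i in nodes}            # phi_i(x_i)
pairwise = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}     # psi_ij(x_i, x_j)

def psi(i, j):
    return pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T

neighbors = {i: [j for j in nodes if (i, j) in pairwise or (j, i) in pairwise]
             for i in nodes}

# loopy belief propagation: message msg[(i, j)] is a distribution over x_j
msg = {(i, j): np.ones(2) for i in nodes for j in neighbors[i]}
for _ in range(50):
    new = {}
    for (i, j) in msg:
        incoming = np.prod([msg[(k, i)] for k in neighbors[i] if k != j], axis=0)
        m = psi(i, j).T @ (unary[i] * incoming)       # sum-product update
        new[(i, j)] = m / m.sum()
    msg = new

def belief(i):
    b = unary[i] * np.prod([msg[(k, i)] for k in neighbors[i]], axis=0)
    return b / b.sum()

def exact_marginal(i):
    """Brute-force marginal for comparison with the LBP belief."""
    p = np.zeros(2)
    for x in itertools.product([0, 1], repeat=len(nodes)):
        w = np.prod([unary[n][x[n]] for n in nodes])
        w *= np.prod([psi(a, c)[x[a], x[c]] for (a, c) in edges])
        p[x[i]] += w
    return p / p.sum()

print("LBP   P(x0):", np.round(belief(0), 4))
print("exact P(x0):", np.round(exact_marginal(0), 4))
```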

  13. Robust linear registration of CT images using random regression forests

    Science.gov (United States)

    Konukoglu, Ender; Criminisi, Antonio; Pathak, Sayan; Robertson, Duncan; White, Steve; Haynor, David; Siddiqui, Khan

    2011-03-01

    Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies [1], cross-modality fusion [2], and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness [4,5]. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures - represented as axis aligned bounding boxes [6]. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state of the art Elastix toolbox [7]. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.

  14. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  15. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    Science.gov (United States)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.

  16. Robust multi-objective calibration strategies – possibilities for improving flood forecasting

    Directory of Open Access Journals (Sweden)

    G. H. Schmitz

    2012-10-01

    Full Text Available Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods have become popular for identifying parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations one single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This is regardless of whether the objective is aggregated from several criteria that measure different (possibly opposite) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that treat the problem of model calibration simply as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can result in different near-optimum parameter vectors that can lead to a very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration problems using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies confirmed the effectiveness of this method. However, all ROPE approaches published so far just identify
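
    The record above refers to identifying the set of Pareto-optimal (non-dominated) parameter vectors under multiple objectives. The snippet below is a minimal non-dominated filter assuming all objectives are to be minimised; the data-depth machinery of ROPE itself is not reproduced, and the random "error" values are placeholders for calibration results.

```python
import numpy as np

def non_dominated(objectives):
    """Boolean mask of Pareto non-dominated rows, all objectives to be minimised.

    A row is dominated if another row is no worse in every objective and
    strictly better in at least one.
    """
    obj = np.asarray(objectives, dtype=float)
    mask = np.ones(len(obj), dtype=bool)
    for i in range(len(obj)):
        dominated = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
        mask[i] = not dominated.any()
    return mask

# toy "calibration" results: two error measures for each candidate parameter vector
rng = np.random.default_rng(0)
errors = rng.random((200, 2))
front = errors[non_dominated(errors)]
print("non-dominated parameter vectors:", len(front), "of", len(errors))
```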

  17. Revival and robustness of Bures distance discord under decoherence channels

    International Nuclear Information System (INIS)

    Shi, Jia-dong; Wang, Dong; Ma, Yang-cheng; Ye, Liu

    2016-01-01

    In this paper, we demonstrate the revival and robustness of Bures distance discord in comparison with entanglement under local decoherent evolutions. The results show that in depolarizing channel Bures distance discord revives after a dark point of time, while entanglement will damp into death without revival. In addition, in hybrid channel the declining initial condition can enable Bures distance discord to decay more smoothly within a limited time, but speed up the death of entanglement. In this sense, Bures distance discord is typically more robust against decoherence than entanglement. Furthermore, we also provide a geometric interpretation concerning these phenomena. - Highlights: • Bures distance discord is more robust against decoherence than entanglement. • Bures distance discord revives after a dark point of time, while entanglement damps to death. • The initial condition enables Bures distance discord to damp smoothly, but it speeds up the death of entanglement. • A geometric interpretation concerning these phenomena has been provided.

  18. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Science.gov (United States)

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734

  19. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Directory of Open Access Journals (Sweden)

    Ramviyas Parasuraman

    2014-12-01

    Full Text Available The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.

  20. Genetic algorithm approach

    African Journals Online (AJOL)

    Structure / acentric factor relationship of alcohols and phenols: genetic ... descriptors of geometrical type selected by genetic algorithm, among more than 1600 ..... Practical handbook of genetic algorithms: Applications Volume I; CRC Press.

  1. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  2. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so phenotype stability of biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental

  3. System optimization for HVAC energy management using the robust evolutionary algorithm

    International Nuclear Information System (INIS)

    Fong, K.F.; Hanby, V.I.; Chow, T.T.

    2009-01-01

    For an installed centralized heating, ventilating and air conditioning (HVAC) system, appropriate energy management measures would achieve energy conservation targets through optimal control and operation. The performance optimization of conventional HVAC systems may be handled by operating experience, but this may not cover the different optimization scenarios and parameters that arise in response to a variety of load and weather conditions. In this regard, it is common to apply a suitable simulation-optimization technique to model the system and then determine the required operation parameters. To handle the simulation models, iterations are involved in the numerical solution methods. Since gradient information is not easily available due to the complex nature of the equations, traditional gradient-based optimization methods are not applicable to this kind of system model. Heuristic optimization methods typically require a continual search, with a system function call for each search step. The frequency of simulation function calls then becomes the time-determining step, and an efficient optimization method is crucial in order to find the solution through a reasonable number of function calls in an acceptable computational period. In this paper, the robust evolutionary algorithm (REA) is presented to tackle this nature of the HVAC simulation models. REA is based on one of the paradigms of evolutionary algorithms, the evolution strategy, which is a stochastic population-based search technique with an emphasis on mutation. The REA, which incorporates Cauchy deterministic mutation, tournament selection and arithmetic recombination, provides a synergetic effect for the optimal search. The REA is effective in coping with complex simulation models, as well as those represented by explicit mathematical expressions of
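
    The REA described above combines Cauchy mutation, tournament selection and arithmetic recombination. The sketch below is a generic (mu + lambda) evolution strategy with those three ingredients applied to a toy sphere function standing in for an expensive HVAC simulation call; the population sizes, mutation scale and search bounds are arbitrary, and this is not the authors' REA implementation.

```python
import numpy as np

def evolution_strategy(objective, dim, mu=10, lam=40, sigma=0.3,
                       generations=200, seed=0):
    """Minimise `objective` with a simple (mu + lambda) ES using Cauchy mutation,
    binary tournament parent selection and arithmetic (average) recombination."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (mu, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(generations):
        children = []
        for _ in range(lam):
            a = rng.integers(mu, size=2)              # tournament for parent 1
            b = rng.integers(mu, size=2)              # tournament for parent 2
            p1 = pop[a[np.argmin(fit[a])]]
            p2 = pop[b[np.argmin(fit[b])]]
            child = 0.5 * (p1 + p2)                   # arithmetic recombination
            child = child + sigma * rng.standard_cauchy(dim)  # heavy-tailed mutation
            children.append(child)
        children = np.array(children)
        child_fit = np.array([objective(x) for x in children])
        # (mu + lambda) survivor selection: keep the best mu of parents + children
        all_pop = np.vstack([pop, children])
        all_fit = np.concatenate([fit, child_fit])
        best = np.argsort(all_fit)[:mu]
        pop, fit = all_pop[best], all_fit[best]
    return pop[0], fit[0]

# toy objective standing in for a costly HVAC simulation call
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = evolution_strategy(sphere, dim=5)
print("best objective found:", round(f_best, 6))
```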

  4. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  5. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate the weights to each fragment, this method firstly defines discrimination and uniqueness for local fragment, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the current valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, fragment-based color histogram provides a structured and effective description for the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.

  6. Robust Optimization of Fourth Party Logistics Network Design under Disruptions

    Directory of Open Access Journals (Sweden)

    Jia Li

    2015-01-01

    Full Text Available The Fourth Party Logistics (4PL) network faces disruptions of various sorts in a dynamic and complex environment. In order to explore the robustness of the network, 4PL network design under random disruptions is studied. The purpose of the research is to construct a 4PL network that can still provide satisfactory service to customers at a lower cost when disruptions strike. Based on the definition of β-robustness, a robust optimization model of 4PL network design under disruptions is established. Because the problem is NP-hard, an artificial fish swarm algorithm (AFSA) and a genetic algorithm (GA) are developed. The effectiveness of the algorithms is tested and compared on simulation examples. By comparing the optimal solutions of the 4PL network for different robustness levels, it is shown that the robust optimization model can effectively hedge against market risks and minimize cost when applied to 4PL network design.

  7. Manipulation Robustness of Collaborative Filtering

    OpenAIRE

    Benjamin Van Roy; Xiang Yan

    2010-01-01

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions and hence have become targets of manipulation by unscrupulous vendors. We demonstrate that nearest neighbors algorithms, which are widely used in commercial systems, are highly susceptible to manipulation and introduce new collaborative filtering algorithms that are relatively robust.

  8. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  9. Probabilistic active recognition of multiple objects using Hough-based geometric matching features

    CSIR Research Space (South Africa)

    Govender, N

    2015-01-01

    Full Text Available be recognized simultaneously, and occlusion and clutter (through distracter objects) is common. We propose a representation for object viewpoints using Hough transform based geometric matching features, which are robust in such circumstances. We show how...

  10. Accurate technique for complete geometric calibration of cone-beam computed tomography systems

    International Nuclear Information System (INIS)

    Cho Youngbin; Moseley, Douglas J.; Siewerdsen, Jeffrey H.; Jaffray, David A.

    2005-01-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems have been constructed using this approach on a clinical linear accelerator (Elekta Synergy RP) and on an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry: twelve BBs are spaced evenly at 30 deg in each of two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat-panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and it can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The algorithm estimates the geometric parameters with a high level of accuracy, such that the quality of the CT reconstruction is not degraded by estimation error. Sensitivity analysis shows an uncertainty of 0.01 deg (around the beam direction) to 0.3 deg (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (along the beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory bench cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0.1 mm in

  11. A robust, efficient and accurate β- pdf integration algorithm in nonpremixed turbulent combustion

    International Nuclear Information System (INIS)

    Liu, H.; Lien, F.S.; Chui, E.

    2005-01-01

    Among the many presumed-shape pdf approaches, the presumed β-function pdf is widely used in nonpremixed turbulent combustion models in the literature. However, singularity difficulties at Z = 0 and Z = 1, Z being the mixture fraction, may be encountered in the numerical integration of the β-function pdf, and there are few publications addressing this issue to date. The present study proposes an efficient, robust and accurate algorithm to overcome these numerical difficulties. The present treatment of the β-pdf integration is first used in the Burke-Schumann solution in conjunction with the k-ε turbulence model in the case of CH4/H2 bluff-body jets and flames. Afterward it is extended to a more complex model, the laminar flamelet model, for the same flow. Numerical results obtained using the proposed β-pdf integration method are compared to experimental values of the velocity field, temperature and constituent mass fractions to illustrate the efficiency and accuracy of the present method. (author)
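
    A common way to handle the integrable singularities of the β-pdf at Z = 0 and Z = 1 is to assign the probability mass of two thin end intervals to the boundary values of the integrand and integrate numerically only over the smooth interior. The sketch below illustrates that generic treatment; it is not the specific algorithm of the record, and the piecewise-linear profile and β parameters are invented for illustration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def beta_pdf_mean(phi, mean_z, var_z, eps=1e-6, n=2000):
    """Approximate <phi> = int_0^1 phi(Z) P(Z) dZ for a presumed beta pdf.

    The end intervals [0, eps] and [1 - eps, 1], where the pdf may be singular,
    are treated analytically by assigning their probability mass to phi(0) and
    phi(1); the smooth interior is integrated with the trapezoidal rule."""
    g = mean_z * (1.0 - mean_z) / var_z - 1.0      # beta parameters from mean and variance
    a, b = mean_z * g, (1.0 - mean_z) * g
    dist = stats.beta(a, b)
    w0 = dist.cdf(eps)                             # mass of the possibly singular end intervals
    w1 = 1.0 - dist.cdf(1.0 - eps)
    z = np.linspace(eps, 1.0 - eps, n)
    interior = trapezoid(phi(z) * dist.pdf(z), z)
    return w0 * phi(0.0) + w1 * phi(1.0) + interior

if __name__ == "__main__":
    zst = 0.3                                      # illustrative stoichiometric mixture fraction
    phi = lambda z: np.where(z < zst, z / zst, (1.0 - z) / (1.0 - zst))
    print(beta_pdf_mean(phi, mean_z=0.4, var_z=0.02))
```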

  12. Geometrical primitives reconstruction from image sequence in an interactive context

    International Nuclear Information System (INIS)

    Monchal, L.; Aubry, P.

    1995-01-01

    We propose a method to recover 3D geometrical shape from an image sequence, in a context of man-machine co-operation. The human operator has to point out the edges of an object in the first image and choose a corresponding geometrical model. The algorithm then tracks each relevant 2D segment describing surface discontinuities or limbs through the images. Knowing the motion of the camera between images, the position and size of the virtual object are deduced by minimising a function which describes how well the virtual object is linked to the extracted segments of the sequence, its geometrical model and the pieces of information given by the operator. (author). 13 refs., 7 figs., 8 tabs

  13. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    International Nuclear Information System (INIS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.

  14. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Labaria, George R. [Univ. of California, Santa Cruz, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Warrick, Abbie L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, Peter M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kalantar, Daniel H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
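
    The thin-plate-spline warp correction described in the record above maps measured fiducial (comb) positions onto their known ideal positions and then applies the fitted smooth deformation to the data image coordinates. The sketch below shows only the core fitting step, using SciPy's radial basis function interpolator with a thin-plate kernel; the fiducial coordinates and the synthetic distortion are placeholders, not NIF calibration data.

```python
import numpy as np
from scipy.interpolate import Rbf

def fit_tps_warp(measured_xy, ideal_xy):
    """Fit a thin-plate-spline mapping from distorted (measured) image
    coordinates to ideal coordinates; returns a callable warp(x, y)."""
    mx, my = measured_xy[:, 0], measured_xy[:, 1]
    ix, iy = ideal_xy[:, 0], ideal_xy[:, 1]
    # one TPS surface per output coordinate
    fx = Rbf(mx, my, ix, function="thin_plate")
    fy = Rbf(mx, my, iy, function="thin_plate")
    return lambda x, y: (fx(x, y), fy(x, y))

if __name__ == "__main__":
    # synthetic comb points distorted by a smooth nonlinearity (illustrative only)
    gx, gy = np.meshgrid(np.linspace(0, 1000, 11), np.linspace(0, 1000, 5))
    ideal = np.column_stack([gx.ravel(), gy.ravel()])
    measured = ideal + 5.0 * np.sin(ideal / 200.0)
    warp = fit_tps_warp(measured, ideal)
    x_corr, y_corr = warp(measured[:, 0], measured[:, 1])
    print(np.max(np.abs(x_corr - ideal[:, 0])))   # near-zero residual at the fiducials
```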

  15. Research on robust optimization of emergency logistics network considering the time dependence characteristic

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun

    2017-06-01

    Considering the time dependence of the emergency logistics network and the complexity of the environment in which the network operates, this paper combines time-dependent network optimization theory with robust discrete optimization theory and builds a robust dynamic network optimization model for emergency logistics that maximizes its timeliness. On this basis, considering the complexity of the dynamic network and the time dependence of the edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the time dependence and robustness of the network. Finally, a case study is carried out to verify the validity of the robust optimization model and its algorithm, and the effect of different regulation factors is analyzed, given the importance of the control factor value in finding the optimal path. The analysis results show that the model and algorithm have good timeliness and strong robustness.

  16. Experimental realization of universal geometric quantum gates with solid-state spins.

    Science.gov (United States)

    Zu, C; Wang, W-B; He, L; Zhang, W-G; Dai, C-Y; Wang, F; Duan, L-M

    2014-10-02

    Experimental realization of a universal set of quantum logic gates is the central requirement for the implementation of a quantum computer. In an 'all-geometric' approach to quantum computation, the quantum gates are implemented using Berry phases and their non-Abelian extensions, holonomies, from geometric transformation of quantum states in the Hilbert space. Apart from its fundamental interest and rich mathematical structure, the geometric approach has some built-in noise-resilience features. On the experimental side, geometric phases and holonomies have been observed in thermal ensembles of liquid molecules using nuclear magnetic resonance; however, such systems are known to be non-scalable for the purposes of quantum computing. There are proposals to implement geometric quantum computation in scalable experimental platforms such as trapped ions, superconducting quantum bits and quantum dots, and a recent experiment has realized geometric single-bit gates in a superconducting system. Here we report the experimental realization of a universal set of geometric quantum gates using the solid-state spins of diamond nitrogen-vacancy centres. These diamond defects provide a scalable experimental platform with the potential for room-temperature quantum computing, which has attracted strong interest in recent years. Our experiment shows that all-geometric and potentially robust quantum computation can be realized with solid-state spin quantum bits, making use of recent advances in the coherent control of this system.

  17. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    Science.gov (United States)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
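
    The γ-index analysis referred to above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal one-dimensional sketch of that computation on depth-dose curves is given below; the 3 mm / 3% criteria and the synthetic dose profiles are illustrative assumptions, not data from the study.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta=3.0, dd=0.03):
    """1D gamma index of an evaluated dose curve against a reference.

    For every reference point, take the minimum over all evaluated points of
    sqrt((dx/DTA)^2 + (dD/(dd*Dmax))^2); gamma <= 1 means the point passes
    the combined dose-difference / distance-to-agreement criterion."""
    d_norm = dd * d_ref.max()
    dx = (x_ref[:, None] - x_eval[None, :]) / dta
    dD = (d_ref[:, None] - d_eval[None, :]) / d_norm
    return np.sqrt(dx ** 2 + dD ** 2).min(axis=1)

if __name__ == "__main__":
    z = np.linspace(0, 200, 401)                  # depth in mm (synthetic)
    ref = 100.0 * np.exp(-z / 120.0)              # reference depth dose
    ev = 101.0 * np.exp(-(z - 1.0) / 120.0)       # slightly shifted and scaled curve
    gamma = gamma_index_1d(z, ref, z, ev)
    print("pass rate:", np.mean(gamma <= 1.0))
```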

  18. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Directory of Open Access Journals (Sweden)

    Hengkai Guo

    Full Text Available Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain a robust rigid transformation and label configurations. The labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are then introduced to handle the non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm achieves an average registration error of less than 0.2 mm with no failure case, which is superior to state-of-the-art feature-based methods.

  19. A Robust and Efficient Algorithm for Tool Recognition and Localization for Space Station Robot

    Directory of Open Access Journals (Sweden)

    Lingbo Cheng

    2014-12-01

    Full Text Available This paper studies a robust target recognition and localization method for a maintenance robot in a space station. Its main goal is to cope with affine transformations of the target caused by microgravity, with the strong reflection and refraction of sunlight and lamplight in the cabin, and with occlusion by other objects. In this method, an Affine Scale Invariant Feature Transform (Affine-SIFT) algorithm is used to extract enough fully affine-invariant local feature points, and stable matching points are obtained from them for target recognition by the Random Sample Consensus (RANSAC) algorithm. Then, in order to localize the target, an effective and appropriate 3D grasping scope of the target is defined, and the grasping precision is determined and evaluated with the estimated affine transformation parameters presented in this paper. Finally, the RANSAC threshold is optimized to enhance the accuracy and efficiency of target recognition and localization, and the ranges of illumination, viewing distance and viewpoint angle over which the robot obtains effective image data are evaluated by the Root-Mean-Square Error (RMSE). An experimental system to simulate the illumination environment in a space station was established. Extensive experiments have been carried out, and the results show both the validity of the proposed definition of the grasping scope and the feasibility of the proposed recognition and localization method.
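
    The recognition core of the record above rests on affine-invariant keypoint matching followed by RANSAC filtering. A minimal OpenCV sketch of that pipeline, using plain SIFT rather than the full Affine-SIFT extension and a RANSAC-estimated homography, is shown below; the file names, ratio test and inlier threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def recognize(template_path, scene_path, min_inliers=15, ratio=0.75):
    """Match SIFT keypoints, filter them with RANSAC, and report whether the
    template (tool) appears in the scene. Sketch only, not the paper's Affine-SIFT."""
    tmpl = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(tmpl, None)
    kp2, des2 = sift.detectAndCompute(scene, None)
    # Lowe's ratio test on the two nearest neighbours
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    if len(good) < min_inliers:
        return False, None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects matches that do not fit a single homography
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return False, None
    return int(mask.sum()) >= min_inliers, H

if __name__ == "__main__":
    found, H = recognize("tool_template.png", "cabin_scene.png")   # hypothetical files
    print("tool found:", found)
```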

  20. An algorithm for online optimization of accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States); Corbett, Jeff [SLAC National Accelerator Lab., Menlo Park, CA (United States); Safranek, James [SLAC National Accelerator Lab., Menlo Park, CA (United States); Wu, Juhao [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2013-10-01

    We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which considers the random noise in bracketing the minimum and uses parabolic fit of data points that uniformly sample the bracketed zone. Moreover, it is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
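
    The robust line optimizer inside RCDS brackets the minimum while allowing for noise and then fits a parabola to uniformly sampled points inside the bracket. A compact sketch of that parabolic-fit step along one search direction is shown below; the noise level, sample count and objective are illustrative assumptions rather than the published implementation.

```python
import numpy as np

def noisy_line_minimum(f, x0, direction, half_width=1.0, n_samples=11):
    """Estimate the minimum of f along x0 + t*direction for t in
    [-half_width, half_width] with a least-squares parabolic fit of
    uniformly sampled, possibly noisy, evaluations (RCDS-style sketch)."""
    t = np.linspace(-half_width, half_width, n_samples)
    y = np.array([f(x0 + ti * direction) for ti in t])
    a, b, c = np.polyfit(t, y, deg=2)            # y ~ a t^2 + b t + c
    if a <= 0:                                   # no usable curvature: take the best sample
        t_min = t[np.argmin(y)]
    else:
        t_min = np.clip(-b / (2.0 * a), t[0], t[-1])
    return x0 + t_min * direction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: np.sum((x - 1.0) ** 2) + 0.01 * rng.normal()   # noisy objective
    x = np.zeros(2)
    for d in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):       # one sweep over two directions
        x = noisy_line_minimum(f, x, d)
    print(x)   # close to [1, 1] despite the noise
```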

  1. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...

  2. Robust total energy demand estimation with a hybrid Variable Neighborhood Search – Extreme Learning Machine algorithm

    International Nuclear Information System (INIS)

    Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.

    2016-01-01

    Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood Search algorithm. • Socio-economic variables are used, and a one-year-ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out on real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called the Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, using an exponential prediction model. While previous approaches consider the number of macroeconomic variables used for prediction to be a parameter of the algorithm (i.e., fixed a priori), the proposed Variable Neighborhood Search method optimizes both the number of variables and which variables are selected. After this feature selection step, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments on a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when the crisis years, which are a real challenge, are included.
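
    The Extreme Learning Machine used in the second stage of the hybrid approach is a single-hidden-layer network whose input weights are random and whose output weights are obtained in closed form by least squares. A minimal sketch follows; the synthetic data stand in for the socio-economic variables of the paper and the network size is an arbitrary assumption.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random hidden layer + least-squares output weights."""
    def __init__(self, n_hidden=50, seed=None):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))   # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                 # random biases
        H = np.tanh(X @ self.W + self.b)                             # hidden-layer activations
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)            # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))          # hypothetical selected macroeconomic features
    y = 2.0 * X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.normal(size=200)
    model = ELM(n_hidden=100, seed=2).fit(X[:150], y[:150])
    print("mean absolute error:", np.abs(model.predict(X[150:]) - y[150:]).mean())
```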

  3. Robust and Effective Component-based Banknote Recognition by SURF Features.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, YingLi

    2011-01-01

    Camera-based computer vision technology is able to assist visually impaired people to automatically recognize banknotes. A good banknote recognition algorithm for blind or visually impaired people should have the following features: 1) 100% accuracy, and 2) robustness to various conditions in different environments and occlusions. Most existing algorithms of banknote recognition are limited to work for restricted conditions. In this paper we propose a component-based framework for banknote recognition by using Speeded Up Robust Features (SURF). The component-based framework is effective in collecting more class-specific information and robust in dealing with partial occlusion and viewpoint changes. Furthermore, the evaluation of SURF demonstrates its effectiveness in handling background noise, image rotation, scale, and illumination changes. To authenticate the robustness and generalizability of the proposed approach, we have collected a large dataset of banknotes from a variety of conditions including occlusion, cluttered background, rotation, and changes of illumination, scaling, and viewpoints. The proposed algorithm achieves 100% recognition rate on our challenging dataset.

  4. Fifth SIAM conference on geometric design 97: Final program and abstracts. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    The meeting was divided into the following sessions: (1) CAD/CAM; (2) Curve/Surface Design; (3) Geometric Algorithms; (4) Multiresolution Methods; (5) Robotics; (6) Solid Modeling; and (7) Visualization. This report contains the abstracts of papers presented at the meeting. Preceding the conference there was a short course entitled "Wavelets for Geometric Modeling and Computer Graphics".

  5. Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing

    Science.gov (United States)

    Li-Chee-Ming, J.; Armenakis, C.

    2017-05-01

    This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
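
    Geometric hashing, the core of the matching scheme above, stores every model point in a hash table keyed by its coordinates expressed in a reference frame defined by a pair of basis points, and then lets basis pairs from the query vote for the model whose entries they hit. The 2D sketch below illustrates the general technique rather than the paper's panoramic variant; the bin size and the random point sets are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict
from itertools import permutations

def to_frame(points, p0, p1):
    """Express points in the similarity-invariant frame of the basis pair (p0, p1)."""
    v = p1 - p0
    scale = np.hypot(*v)
    c, s = v / scale
    R = np.array([[c, s], [-s, c]])          # rotates the basis vector onto the x-axis
    return (points - p0) @ R.T / scale

def build_table(models, bin_size=0.05):
    """Hash every non-basis model point under every ordered basis pair."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j in permutations(range(len(pts)), 2):
            others = np.delete(np.arange(len(pts)), [i, j])
            for uv in to_frame(pts[others], pts[i], pts[j]):
                key = tuple(np.round(uv / bin_size).astype(int))
                table[key].append(name)
    return table

def recognize(scene, table, bin_size=0.05):
    """Let every scene basis pair vote; return the model with the most votes."""
    votes = defaultdict(int)
    for i, j in permutations(range(len(scene)), 2):
        others = np.delete(np.arange(len(scene)), [i, j])
        for uv in to_frame(scene[others], scene[i], scene[j]):
            key = tuple(np.round(uv / bin_size).astype(int))
            for name in table.get(key, ()):
                votes[name] += 1
    return max(votes, key=votes.get) if votes else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    models = {"A": rng.random((6, 2)), "B": rng.random((6, 2))}
    table = build_table(models)
    # a rotated, scaled and translated copy of model "A" should win the vote
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    scene = 3.0 * models["A"] @ R.T + np.array([5.0, -2.0])
    print(recognize(scene, table))            # -> "A"
```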

  6. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    Science.gov (United States)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for images with non-rigid deformations, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanning images affected by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually require artificial markers, and it is a rather challenging task to locate the points accurately and obtain accurate homologous point sets. In this paper, we propose an automatic non-rigid image registration algorithm that consists of three main steps. To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest-neighbor algorithm. Based on the accurate homologous point sets, the two images are registered with a TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively; the last is carried out on non-rigidly deformed remote sensing images. The three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.

  7. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application.The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance of measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameter, large sample properties, and goodness-of-fit tests. It...

  8. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, Histfactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.

  9. Sparse alignment for robust tensor learning.

    Science.gov (United States)

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.

  10. Robust efficient estimation of heart rate pulse from video

    Science.gov (United States)

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed under normal illumination show the algorithm is comparable with pulse oximeter devices in both accuracy and sensitivity. PMID:24761294
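
    The pulse estimate above comes from a log-space quotient of skin pixel intensities over time followed by spectral analysis. The sketch below assumes a pre-extracted per-frame mean intensity of a skin region and recovers the dominant frequency in a physiological band; the frame rate, band limits and synthetic signal are illustrative assumptions, not the paper's exact processing chain.

```python
import numpy as np

def heart_rate_bpm(mean_intensity, fps, band=(0.7, 3.0)):
    """Estimate heart rate from a per-frame mean skin intensity series.

    The log of the intensity ratio between consecutive frames approximates the
    change in blood concentration; its dominant frequency within the
    physiological band (here 0.7-3 Hz, i.e. 42-180 bpm) gives the pulse."""
    s = np.diff(np.log(np.asarray(mean_intensity, dtype=float)))
    s = s - s.mean()
    spectrum = np.abs(np.fft.rfft(s * np.hanning(s.size)))
    freqs = np.fft.rfftfreq(s.size, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[in_band][np.argmax(spectrum[in_band])]

if __name__ == "__main__":
    fps, t = 30.0, np.arange(0, 20, 1.0 / 30.0)
    # synthetic skin signal: slow illumination drift plus a 1.2 Hz (72 bpm) pulse
    signal = 100.0 + 2.0 * np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 1.2 * t)
    print(heart_rate_bpm(signal, fps))   # approximately 72
```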

  11. Ins-Robust Primitive Words

    OpenAIRE

    Srivastava, Amit Kumar; Kapoor, Kalpesh

    2017-01-01

    Let Q be the set of primitive words over a finite alphabet with at least two symbols. We characterize a class of primitive words, Q_I, referred to as ins-robust primitive words, which remain primitive on insertion of any letter from the alphabet, and present some properties that characterize words in the set Q_I. It is shown that the language Q_I is dense. We prove that the language of primitive words that are not ins-robust is not context-free. We also present a linear time algorithm to reco...
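
    A word is primitive when it is not a proper power of a shorter word; the standard check is that w is primitive iff w does not occur inside w+w except at positions 0 and |w|. The naive test below simply re-checks primitivity after every possible single-letter insertion; it is a quadratic-time illustration of the definition, not the linear-time algorithm referenced in the record.

```python
def is_primitive(w: str) -> bool:
    """w is primitive iff its only nontrivial occurrence in w+w starts at len(w)."""
    return bool(w) and (w + w).find(w, 1) == len(w)

def is_ins_robust(w: str, alphabet: str) -> bool:
    """Naive check: w is primitive and stays primitive after inserting any letter anywhere."""
    if not is_primitive(w):
        return False
    return all(is_primitive(w[:i] + a + w[i:])
               for i in range(len(w) + 1) for a in alphabet)

if __name__ == "__main__":
    print(is_primitive("abab"))         # False: (ab)^2
    print(is_primitive("aba"))          # True
    print(is_ins_robust("aba", "ab"))   # False: inserting 'b' in front gives "baba" = (ba)^2
```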

  12. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-04-01

    Full Text Available Device-free localization (DFL) is becoming one of the new technologies in the wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In a radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce a parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of the ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantage that the affected links used by the ELM in the online phase can be different from those used for training in the offline phase, and it is more robust in dealing with uncertain combinations of detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), and back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.

  13. A Robust Level-Set Algorithm for Centerline Extraction

    NARCIS (Netherlands)

    Telea, Alexandru; Vilanova, Anna

    2003-01-01

    We present a robust method for extracting 3D centerlines from volumetric datasets. We start from a 2D skeletonization method to locate voxels centered with respect to three orthogonal slicing directions. Next, we introduce a new detection criterion to extract the centerline voxels from the above

  14. Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm

    Science.gov (United States)

    2009-03-10

    ... xfar by xint; else, generate a new individual using the Sobol pseudo-random sequence generator within the upper and lower bounds of the variables. ... References: 12. Deb, K., Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, 2002. 13. Sobol, I. M., "Uniformly Distributed Sequences..."

  15. A general and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale

    Science.gov (United States)

    Decherchi, Sergio; Rocchia, Walter

    2013-01-01

    We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection of the highly symmetric structure of fullerene, which fails when attempted by MSMS and has problems in EDTSurf. In terms of timings, NanoShaper builds the Skin surface three times faster than the single threaded version in Lindow et al. on a 100,000 atoms protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver. Its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after Hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a middle-range workstation. PMID:23577073

  16. ROBUST CYLINDER FITTING IN THREE-DIMENSIONAL POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    A. Nurunnabi

    2017-05-01

    Full Text Available This paper investigates the problem of cylinder fitting in laser scanning three-dimensional Point Cloud Data (PCD). Most existing methods require full cylinder data, do not study the presence of outliers, and are not statistically robust. But mobile laser scanning in particular often produces incomplete data, as street poles for example are only scanned from the road. Moreover, the existence of outliers is common. Outliers may occur as random or systematic errors, and may be scattered and/or clustered. In this paper, we present a statistically robust cylinder fitting algorithm for PCD that combines Robust Principal Component Analysis (RPCA) with robust regression. Robust principal components as obtained by RPCA allow cylinder directions to be estimated more accurately, and an existing efficient circle fitting algorithm following robust regression principles properly fits the cylinder. We demonstrate the performance of the proposed method on artificial and real PCD. Results show that the proposed method provides more accurate and robust results: (i) in the presence of noise and a high percentage of outliers, (ii) for incomplete as well as complete data, (iii) for small and large numbers of points, and (iv) for different sizes of radius. On 1000 simulated quarter cylinders of 1 m radius with 10% outliers, a PCA-based method fitted cylinders with an average radius of 3.63 m; the proposed method, on the other hand, fitted cylinders with an average radius of 1.02 m. The algorithm has potential in applications such as fitting cylindrical objects (e.g., light and traffic poles), diameter-at-breast-height estimation for trees, and building and bridge information modelling.
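
    A non-robust baseline for the task above estimates the cylinder axis with ordinary PCA and then fits a circle to the points projected onto the plane perpendicular to that axis (here with the algebraic Kasa fit). The sketch below is given only to make the pipeline concrete: it assumes the cylinder is longer than it is wide and few outliers, and it is expected to degrade in exactly the outlier scenarios the record addresses with RPCA and robust regression.

```python
import numpy as np

def fit_cylinder_pca(points):
    """Baseline cylinder fit: PCA axis + algebraic (Kasa) circle fit in the
    plane perpendicular to the axis (non-robust sketch, not the paper's method)."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    # axis direction: principal component with the largest variance
    _, _, Vt = np.linalg.svd(P - centroid, full_matrices=False)
    axis = Vt[0]
    # orthonormal basis (u, v) of the plane perpendicular to the axis
    u = np.cross(axis, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    # project points onto that plane and fit a circle algebraically
    Q = (P - centroid) @ np.column_stack([u, v])
    A = np.column_stack([2.0 * Q, np.ones(len(Q))])
    b = (Q ** 2).sum(axis=1)
    (cx, cy, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c0 + cx ** 2 + cy ** 2)
    center = centroid + cx * u + cy * v
    return center, axis, radius

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = rng.uniform(0, 5, 2000)                    # extent along the axis
    ang = rng.uniform(0, 0.5 * np.pi, 2000)        # quarter cylinder of radius 1
    pts = np.column_stack([np.cos(ang), np.sin(ang), t]) + 0.01 * rng.normal(size=(2000, 3))
    center, axis, radius = fit_cylinder_pca(pts)
    print(radius)                                  # close to 1.0 for clean data
```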

  17. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.

  18. Digital and discrete geometry theory and algorithms

    CERN Document Server

    Chen, Li

    2014-01-01

    This book provides comprehensive coverage of the modern methods for geometric problems in the computing sciences. It also covers concurrent topics in data sciences including geometric processing, manifold learning, Google search, cloud data, and R-tree for wireless networks and BigData. The author investigates digital geometry and its related constructive methods in discrete geometry, offering detailed methods and algorithms. The book is divided into five sections: basic geometry; digital curves, surfaces and manifolds; discretely represented objects; geometric computation and processing; and a

  19. Tenth workshop on the algorithmic foundations of robotics (WAFR)

    CERN Document Server

    Lozano-Perez, Tomas; Roy, Nicholas; Rus, Daniela; Algorithmic foundations of robotics X

    2013-01-01

    Algorithms are a fundamental component of robotic systems. Robot algorithms process inputs from sensors that provide noisy and partial data, build geometric and physical models of the world, plan high- and low-level actions at different time horizons, and execute these actions on actuators with limited precision. The design and analysis of robot algorithms raise a unique combination of questions from many fields, including control theory, computational geometry and topology, geometrical and physical modeling, reasoning under uncertainty, probabilistic algorithms, game theory, and theoretical computer science. The Workshop on Algorithmic Foundations of Robotics (WAFR) is a single-track meeting of leading researchers in the field of robot algorithms. Since its inception in 1994, WAFR has been held every other year, and has provided one of the premier venues for the publication of some of the field's most important and lasting contributions. This book contains the proceedings of the tenth WAFR, held on June 13-15, 201...

  20. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence with an algorithm based on minimizing the distance between two descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.

  1. Design and implementation of robust controllers for a gait trainer.

    Science.gov (United States)

    Wang, F C; Yu, C H; Chou, T Y

    2009-08-01

    This paper applies robust algorithms to control an active gait trainer for children with walking disabilities. Compared with traditional rehabilitation procedures, in which two or three trainers are required to assist the patient, a motor-driven mechanism was constructed to improve the efficiency of the procedures. First, a six-bar mechanism was designed and constructed to mimic the trajectory of children's ankles in walking. Second, system identification techniques were applied to obtain system transfer functions at different operating points by experiments. Third, robust control algorithms were used to design H-infinity robust controllers for the system. Finally, the designed controllers were implemented to experimentally verify the system performance. From the results, the proposed robust control strategies are shown to be effective.

  2. SU-E-J-94: Geometric and Dosimetric Evaluation of Deformation Image Registration Algorithms Using Virtual Phantoms Generated From Patients with Lung Cancer

    International Nuclear Information System (INIS)

    Shen, Z; Greskovich, J; Xia, P; Bzdusek, K

    2015-01-01

    Purpose: To generate virtual phantoms with clinically relevant deformation and use them to objectively evaluate geometric and dosimetric uncertainties of deformable image registration (DIR) algorithms. Methods: Ten lung cancer patients undergoing adaptive 3DCRT planning were selected. For each patient, a pair of planning CT (pCT) and replanning CT (rCT) were used as the basis for virtual phantom generation. Manually adjusted meshes were created for selected ROIs (e.g. PTV, lungs, spinal cord, esophagus, and heart) on pCT and rCT. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was used to deform pCT to generate a simulated replanning CT (srCT) that was closely matched to rCT. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten virtual phantoms. The images, ROIs, and doses were mapped from pCT to srCT using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.85 to 0.96 for Demons, from 0.86 to 0.97 for intensity-based, and from 0.76 to 0.95 for B-Spline. The average Hausdorff distances for selected ROIs were from 2.2 to 5.4 mm for Demons, from 2.3 to 6.8 mm for intensity-based, and from 2.4 to 11.4 mm for B-Spline. The average absolute dose errors for selected ROIs were from 0.2 to 0.6 Gy for Demons, from 0.1 to 0.5 Gy for intensity-based, and from 0.5 to 1.5 Gy for B-Spline. Conclusion: Virtual phantoms were modeled after patients with lung cancer and were clinically relevant for adaptive radiotherapy treatment replanning. Virtual phantoms with known DVFs serve as references and can provide a fair comparison when evaluating different DIRs. Demons and intensity-based DIRs were shown to have smaller geometric and dosimetric uncertainties than B-Spline. Z Shen: None; K Bzdusek: an employee of Philips Healthcare; J Greskovich: None; P Xia
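
    The geometric part of the evaluation above rests on two standard measures: the Dice coefficient between mapped and reference ROI masks, and the Hausdorff distance between their point sets. A minimal sketch of both, on synthetic binary masks standing in for the mapped ROIs, is given below; for contour-based evaluation the surface voxels would be used instead of all voxels.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """Dice coefficient of two binary ROI masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two ROI point sets."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

if __name__ == "__main__":
    # two overlapping spheres on a coarse grid as stand-ins for mapped ROIs
    z, y, x = np.mgrid[0:40, 0:40, 0:40]
    roi_ref = (x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 10 ** 2
    roi_map = (x - 22) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 10 ** 2
    print("Dice:", dice(roi_ref, roi_map))
    pts_ref = np.argwhere(roi_ref)        # voxel coordinates (sketch: all voxels, not surfaces)
    pts_map = np.argwhere(roi_map)
    print("Hausdorff (voxels):", hausdorff(pts_ref, pts_map))
```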

  3. Non-stoquastic Hamiltonians in quantum annealing via geometric phases

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel A.

    2017-09-01

    We argue that a complete description of quantum annealing implemented with continuous variables must take into account the non-adiabatic Aharonov-Anandan geometric phase that arises when the system Hamiltonian changes during the anneal. We show that this geometric effect leads to the appearance of non-stoquasticity in the effective quantum Ising Hamiltonians that are typically used to describe quantum annealing with flux qubits. We explicitly demonstrate the effect of this geometric non-stoquasticity when quantum annealing is performed with a system of one and two coupled flux qubits. The realization of non-stoquastic Hamiltonians has important implications from a computational complexity perspective, since it is believed that in many cases quantum annealing with stoquastic Hamiltonians can be efficiently simulated via classical algorithms such as Quantum Monte Carlo. It is well known that the direct implementation of non-stoquastic Hamiltonians with flux qubits is particularly challenging. Our results suggest an alternative path for the implementation of non-stoquasticity via geometric phases that can be exploited for computational purposes.

  4. Geometric Accuracy Investigations of SEVIRI High Resolution Visible (HRV) Level 1.5 Imagery

    Directory of Open Access Journals (Sweden)

    Sultan Kocaman Aksakal

    2013-05-01

    Full Text Available GCOS (Global Climate Observing System) is a long-term program for monitoring the climate, detecting changes, and assessing their impacts. Remote sensing techniques are increasingly used for climate-related measurements. Imagery from the SEVIRI instrument on board the European geostationary satellites Meteosat-8 and Meteosat-9 is often used for the estimation of essential climate variables. In a joint project between the Swiss GCOS Office and ETH Zurich, the geometric accuracy and temporal stability of the 1-km resolution HRV channel imagery of SEVIRI have been evaluated over Switzerland. A set of tools and algorithms has been developed for the investigations. Statistical analysis and blunder detection have been integrated into the process for robust evaluation. The relative accuracy is evaluated by tracking large numbers of feature points in consecutive HRV images taken at 15-minute intervals. For the absolute accuracy evaluation, lakes in Switzerland and its surroundings are used as reference: 20 lakes digitized from Landsat orthophotos are transformed into the HRV images and matched via 2D translation terms at sub-pixel level. The algorithms are tested using HRV images taken on 24 days in 2008 (2 days per month). The results show that 2D shifts of up to 8 pixels are present in both relative and absolute terms.

  5. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    Science.gov (United States)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The degree of wear of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters of interest mainly include flange thickness and flange height. A line-structured laser is projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit, with image acquisition handled in hardware-interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed: the algorithm first divides the image into smaller squares and then extracts the squares belonging to the target by fusing the k-means and STING clustering image segmentation algorithms. The segmentation time is less than 0.97 ms, a considerable acceleration ratio compared with serial computation on the CPU, which greatly improves the real-time image processing capacity. When a wheel set passes at limited speed, the system, placed along the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.

  6. Robust gates for holonomic quantum computation

    International Nuclear Information System (INIS)

    Florio, Giuseppe; Pascazio, Saverio; Facchi, Paolo; Fazio, Rosario; Giovannetti, Vittorio

    2006-01-01

    Non-Abelian geometric phases are attracting increasing interest because of possible experimental application in quantum computation. We study the effects of the environment (modeled as an ensemble of harmonic oscillators) on a holonomic transformation and write the corresponding master equation. The solution is analytically and numerically investigated and the behavior of the fidelity analyzed: fidelity revivals are observed and an optimal finite operation time is determined at which the gate is most robust against noise

  7. Algorithms for Planar Graphs and Graphs in Metric Spaces

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    structural properties that can be exploited. For instance, a road network or a wire layout on a microchip is typically (near-)planar and distances in the network are often defined w.r.t. the Euclidean or the rectilinear metric. Specialized algorithms that take advantage of such properties are often orders of magnitude faster than the corresponding algorithms for general graphs. The first and main part of this thesis focuses on the development of efficient planar graph algorithms. The most important contributions include a faster single-source shortest path algorithm, a distance oracle with subquadratic... for geometric graphs and graphs embedded in metric spaces. Roughly speaking, the stretch factor is a real value expressing how well a (geo-)metric graph approximates the underlying complete graph w.r.t. distances. We give improved algorithms for computing the stretch factor of a given graph and for augmenting

  8. Handling Occlusions for Robust Augmented Reality Systems

    Directory of Open Access Journals (Sweden)

    Maidi Madjid

    2010-01-01

    Full Text Available In Augmented Reality applications, human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene, and this requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach based on a direct method combined with the Kalman filter. An important factor in providing a robust Augmented Reality system is the correct handling of target occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method with an optical flow approach that tracks visible points and maintains the virtual graphics overlay when targets are not identified. Our proposed real-time algorithm is tested with different camera viewpoints under various image conditions and is shown to be accurate and robust.

  9. Development changes of geometric layout product, developed by means of computer aided design

    Directory of Open Access Journals (Sweden)

    С.Г. Кєворков

    2007-01-01

    Full Text Available This paper contains results of the development of a methodology for forming modifications of a product geometrical mockup created by means of a CAD system. The process of changing CAD data (assembly structures, parts) and its influence on the product structure is considered. The algorithm for creating assembly versions, which produces a product structure with a given serial number, is analysed. Algorithms for creating the CAD user environment, restricting CAD objects, and cancelling CAD objects are developed.

  10. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log–polar transform (LPT) is introduced into the star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it able to reduce the star identification time. The logarithmic values of the plane distances between the navigation and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to make it able to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition rate and robustness by the proposed algorithm are better than those by the LPT algorithm and the modified grid algorithm. (paper)
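
    The rotation invariance exploited above comes from the fact that, in a log-polar representation centred on the navigation star, an image rotation becomes a circular shift while radial (distance) information is unaffected. The toy sketch below builds a rotation-invariant feature vector from the logarithms of the plane distances to neighbour stars; it mirrors the idea of the record, not its exact feature construction or matching scheme.

```python
import numpy as np

def star_feature(center, neighbours, k=5):
    """Rotation-invariant pattern for a navigation star: sorted logarithms of the
    plane distances to its k nearest neighbour stars (toy version)."""
    d = np.sort(np.linalg.norm(np.asarray(neighbours) - np.asarray(center), axis=1))[:k]
    return np.log(d)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    stars = rng.uniform(-1, 1, size=(30, 2))       # synthetic star field
    center, rest = stars[0], stars[1:]
    f0 = star_feature(center, rest)
    # rotate the whole field about the boresight: the feature vector is unchanged
    theta = 1.1
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    f1 = star_feature(center @ R.T, rest @ R.T)
    print(np.allclose(f0, f1))                     # True
```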

  11. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available As the pixel information of a depth image is derived from distance information, there can be mismatched pairs in the palm area when implementing the SURF algorithm with a KINECT sensor for static sign language recognition. This paper proposes a feature point selection algorithm that filters the SURF feature points step by step, based on the number of feature points within an adaptive radius r and on the distance between point pairs. It not only greatly improves the recognition rate, but also ensures robustness to environmental factors such as skin color, illumination intensity, complex backgrounds, and angle and scale changes. The experiment results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.
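
    As a rough illustration of this kind of two-stage filtering, here is a minimal Python sketch that keeps feature points with enough neighbors inside a radius and then removes near-duplicate points. The thresholds, the fixed radius, and the greedy second pass are assumptions for illustration, not the paper's exact selection rule.

```python
import numpy as np

def select_feature_points(points, radius, min_neighbors=3, min_separation=2.0):
    """Filter 2-D feature points in two passes (illustrative sketch).

    Pass 1 keeps points that have at least `min_neighbors` other points
    within `radius` (dense, reliable regions).  Pass 2 greedily drops
    points closer than `min_separation` to an already accepted point,
    removing near-duplicate matches.  All thresholds are assumed values.
    """
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    neighbor_counts = (d < radius).sum(axis=1) - 1      # exclude the point itself
    dense = pts[neighbor_counts >= min_neighbors]

    kept = []
    for p in dense:
        if all(np.linalg.norm(p - q) >= min_separation for q in kept):
            kept.append(p)
    return np.array(kept)

# toy usage: 50 random keypoints in a 100x100 image
pts = np.random.RandomState(0).rand(50, 2) * 100
print(select_feature_points(pts, radius=15.0).shape)
```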

  12. Re-verification of a Lip Synchronization Protocol using Robust Reachability

    Directory of Open Access Journals (Sweden)

    Piotr Kordy

    2010-03-01

    Full Text Available The timed automata formalism is an important model for specifying and analysing real-time systems. Robustness is the correctness of the model in the presence of small drifts on clocks or imprecision in testing guards. A symbolic algorithm for the analysis of the robustness of timed automata has been implemented. In this paper, we re-analyse an industrial case, a lip synchronization protocol, using the new robust reachability algorithm. This lip synchronization protocol is an interesting case because timing aspects are crucial for the correctness of the protocol. Several versions of the model are considered: with an ideal video stream, with anchored jitter, and with non-anchored jitter.

  13. A robust multilevel simultaneous eigenvalue solver

    Science.gov (United States)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat appropriately these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
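
    For readers unfamiliar with the Rayleigh-Ritz step that the new ML projection generalizes, below is a minimal NumPy sketch of the plain (single-level) version. It is only a reference point; the paper's ML-generalized projection, backrotations, and FMG formulation are not reproduced here.

```python
import numpy as np

def rayleigh_ritz(A, V):
    """One plain Rayleigh-Ritz step: approximate eigenpairs of a symmetric
    matrix A from the subspace spanned by the columns of V (illustrative
    sketch; the article's ML-generalized projection is not shown)."""
    Q, _ = np.linalg.qr(V)              # orthonormal basis of the subspace
    H = Q.T @ A @ Q                     # small projected matrix
    theta, S = np.linalg.eigh(H)        # Ritz values and coefficients
    return theta, Q @ S                 # Ritz values, Ritz vectors

# toy usage: refine 3 random trial vectors for a small symmetric matrix
rng = np.random.RandomState(0)
A = rng.rand(20, 20)
A = 0.5 * (A + A.T)
theta, X = rayleigh_ritz(A, rng.rand(20, 3))
print(theta)
```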

  14. Spatial Precision in Magnetic Resonance Imaging–Guided Radiation Therapy: The Role of Geometric Distortion

    Energy Technology Data Exchange (ETDEWEB)

    Weygand, Joseph, E-mail: jw2899@columbia.edu [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Fuller, Clifton David [The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Ibbott, Geoffrey S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Mohamed, Abdallah S.R. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Department of Clinical Oncology and Nuclear Medicine, Alexandria University, Alexandria (Egypt); Ding, Yao [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Yang, Jinzhong [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Hwang, Ken-Pin [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Wang, Jihong [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States)

    2016-07-15

    Because magnetic resonance imaging–guided radiation therapy (MRIgRT) offers exquisite soft tissue contrast and the ability to image tissues in arbitrary planes, the interest in this technology has increased dramatically in recent years. However, intrinsic geometric distortion stemming from both the system hardware and the magnetic properties of the patient affects MR images and compromises the spatial integrity of MRI-based radiation treatment planning, given that for real-time MRIgRT, precision within 2 mm is desired. In this article, we discuss the causes of geometric distortion, describe some well-known distortion correction algorithms, and review geometric distortion measurements from 12 studies, while taking into account relevant imaging parameters. Eleven of the studies reported phantom measurements quantifying system-dependent geometric distortion, while 2 studies reported simulation data quantifying magnetic susceptibility–induced geometric distortion. Of the 11 studies investigating system-dependent geometric distortion, 5 reported maximum measurements less than 2 mm. The simulation studies demonstrated that magnetic susceptibility–induced distortion is typically smaller than system-dependent distortion but still nonnegligible, with maximum distortion ranging from 2.1 to 2.6 mm at a field strength of 1.5 T. As expected, anatomic landmarks containing interfaces between air and soft tissue had the largest distortions. The evidence indicates that geometric distortion reduces the spatial integrity of MRI-based radiation treatment planning and likely diminishes the efficacy of MRIgRT. Better phantom measurement techniques and more effective distortion correction algorithms are needed to achieve the desired spatial precision.

  15. Spatial Precision in Magnetic Resonance Imaging–Guided Radiation Therapy: The Role of Geometric Distortion

    International Nuclear Information System (INIS)

    Weygand, Joseph; Fuller, Clifton David; Ibbott, Geoffrey S.; Mohamed, Abdallah S.R.; Ding, Yao; Yang, Jinzhong; Hwang, Ken-Pin; Wang, Jihong

    2016-01-01

    Because magnetic resonance imaging–guided radiation therapy (MRIgRT) offers exquisite soft tissue contrast and the ability to image tissues in arbitrary planes, the interest in this technology has increased dramatically in recent years. However, intrinsic geometric distortion stemming from both the system hardware and the magnetic properties of the patient affects MR images and compromises the spatial integrity of MRI-based radiation treatment planning, given that for real-time MRIgRT, precision within 2 mm is desired. In this article, we discuss the causes of geometric distortion, describe some well-known distortion correction algorithms, and review geometric distortion measurements from 12 studies, while taking into account relevant imaging parameters. Eleven of the studies reported phantom measurements quantifying system-dependent geometric distortion, while 2 studies reported simulation data quantifying magnetic susceptibility–induced geometric distortion. Of the 11 studies investigating system-dependent geometric distortion, 5 reported maximum measurements less than 2 mm. The simulation studies demonstrated that magnetic susceptibility–induced distortion is typically smaller than system-dependent distortion but still nonnegligible, with maximum distortion ranging from 2.1 to 2.6 mm at a field strength of 1.5 T. As expected, anatomic landmarks containing interfaces between air and soft tissue had the largest distortions. The evidence indicates that geometric distortion reduces the spatial integrity of MRI-based radiation treatment planning and likely diminishes the efficacy of MRIgRT. Better phantom measurement techniques and more effective distortion correction algorithms are needed to achieve the desired spatial precision.

  16. Square-Wave Voltage Injection Algorithm for PMSM Position Sensorless Control With High Robustness to Voltage Errors

    DEFF Research Database (Denmark)

    Ni, Ronggang; Xu, Dianguo; Blaabjerg, Frede

    2017-01-01

    Rotor position estimated with high-frequency (HF) voltage injection methods can be distorted by voltage errors due to inverter nonlinearities, motor resistance, and rotational voltage drops, etc. This paper proposes an improved HF square-wave voltage injection algorithm, which is robust to voltage errors without any compensations and meanwhile has less fluctuation in the position estimation error. The average position estimation error is investigated based on the analysis of phase harmonic inductances, and deduced in the form of the phase shift of the second-order harmonic inductances to derive its relationship with the magnetic field distortion. Position estimation errors caused by higher order harmonic inductances and voltage harmonics generated by the SVPWM are also discussed. Both simulations and experiments are carried out based on a commercial PMSM to verify the superiority of the proposed method.

  17. On robust parameter estimation in brain-computer interfacing

    Science.gov (United States)

    Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert

    2017-12-01

    Objective. The reliable estimation of parameters such as the mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for the successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
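
    To illustrate the general idea of trial-aware robust estimation (not the minimum divergence estimator derived in the paper), here is a minimal Python sketch that discards the most atypical trials before pooling samples for the mean and covariance. The trimming fraction and the median-distance criterion are assumptions for illustration only.

```python
import numpy as np

def trimmed_mean_cov(trials, trim_fraction=0.1):
    """Robust mean/covariance over EEG trials (illustrative sketch).

    Each trial (samples x channels) is summarized by its own mean; trials
    whose means lie farthest from the median trial mean are discarded
    before pooling.  This mimics trial-level outlier handling but is not
    the estimator proposed in the paper.
    """
    trial_means = np.array([t.mean(axis=0) for t in trials])
    center = np.median(trial_means, axis=0)
    dist = np.linalg.norm(trial_means - center, axis=1)
    n_keep = max(1, int(round(len(trials) * (1.0 - trim_fraction))))
    keep = np.argsort(dist)[:n_keep]
    pooled = np.vstack([trials[i] for i in keep])
    return pooled.mean(axis=0), np.cov(pooled, rowvar=False)

# toy usage: 20 trials, 100 samples, 4 channels, one corrupted trial
rng = np.random.RandomState(0)
trials = [rng.randn(100, 4) for _ in range(20)]
trials[3] += 50.0                     # simulated loose-electrode artifact
mu, Sigma = trimmed_mean_cov(trials)
print(mu.shape, Sigma.shape)
```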

  18. Non-convex polygons clustering algorithm

    Directory of Open Access Journals (Sweden)

    Kruglikov Alexey

    2016-01-01

    Full Text Available A clustering algorithm is proposed to be used as a preliminary step in motion planning. It is tightly coupled to the applied problem statement, i.e. it uses parameters that are meaningful only with respect to that problem. Using geometric properties for polygon clustering yields better computation time than general-purpose algorithms. As a result, a special form of map optimized for fast motion planning is constructed.

  19. Robust Parameter Coordination for Multidisciplinary Design

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper introduces a robust parameter coordination method to analyze parameter uncertainties, predict conflicts, and coordinate parameters in multidisciplinary design. The proposed method is based on a constraint network, which provides a formal model for analyzing the coupling effects between design variables and product specifications. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively and to enhance design robustness. To solve the constraint network model, a general consistency algorithm framework is designed and implemented with interval arithmetic and a genetic algorithm, which can handle both algebraic and ordinary differential equations. With this method, designers can infer the consistent solution space from the given specifications. A case study involving the design of a bogie dumping system demonstrates the usefulness of this approach.
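
    The interval-box idea can be illustrated with a few lines of interval arithmetic: an uncertain parameter is a closed interval, products and sums of intervals bound all possible values, and a constraint is checkable against a specification interval. The class below is a toy sketch of that idea, not the paper's consistency algorithm or its coupling with the genetic algorithm.

```python
class Interval:
    """Closed interval [lo, hi] with just enough arithmetic for a toy
    constraint check (illustrative sketch of the interval-box idea)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = float(lo), float(hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def intersects(self, other):
        return self.lo <= other.hi and other.lo <= self.hi

# toy usage: can an uncertain design still meet a specification interval?
x = Interval(1.8, 2.2)            # uncertain design variable
y = Interval(0.9, 1.1)            # uncertain design variable
spec = Interval(1.5, 2.5)         # required range for the product x * y
print((x * y).intersects(spec))   # True -> a consistent solution may exist
```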

  20. Neuromorphic Configurable Architecture for Robust Motion Estimation

    Directory of Open Access Journals (Sweden)

    Guillermo Botella

    2008-01-01

    Full Text Available The robustness of the human visual system recovering motion estimation in almost any visual situation is enviable, performing enormous calculation tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Currently, there are several optical flow algorithms, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows nature templates as inspiration in the design of architectures and makes use of a specific model of human visual motion perception: Multichannel Gradient Model (McGM. This novel customizable architecture of a neuromorphic robust optical flow can be constructed with FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes the resource usage and performance data, and the comparison with actual systems. This hardware has many application fields like object recognition, navigation, or tracking in difficult environments due to its bioinspired and robustness properties.

  1. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  2. ADORE-GA: Genetic algorithm variant of the ADORE algorithm for ROP detector layout optimization in CANDU reactors

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an algorithm for CANDU ROP Detector Layout Optimization. ► ADORE-GA is a Genetic Algorithm variant of the ADORE algorithm. ► Robustness test of ADORE-GA algorithm is presented in this paper. - Abstract: The regional overpower protection (ROP) systems protect CANDU® reactors against overpower in the fuel that could reduce the safety margin-to-dryout. The overpower could originate from a localized power peaking within the core or a general increase in the global core power level. The design of the detector layout for ROP systems is a challenging discrete optimization problem. In recent years, two algorithms have been developed to find a quasi optimal solution to this detector layout optimization problem. Both of these algorithms utilize the simulated annealing (SA) algorithm as their optimization engine. In the present paper, an alternative optimization algorithm, namely the genetic algorithm (GA), has been implemented as the optimization engine. The implementation is done within the ADORE algorithm. Results from evaluating the effects of using various mutation rates and crossover parameters are presented in this paper. It has been demonstrated that the algorithm is sufficiently robust in producing similar quality solutions.

  3. Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT

    Directory of Open Access Journals (Sweden)

    P. Ubaidulla

    2009-01-01

    Full Text Available We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.

  4. Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT

    Science.gov (United States)

    Ubaidulla, P.; Chockalingam, A.

    2009-12-01

    We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.

  5. Robust iris segmentation through parameterization of the Chan-Vese algorithm

    CSIR Research Space (South Africa)

    Mabuza-Hocquet, G

    2015-06-01

    Full Text Available The performance of an iris recognition system relies on automated processes from the segmentation stage to the matching stage. Each stage has traditional algorithms used successfully over the years. The drawback is that these algorithms assume...

  6. Robust buckling optimization of laminated composite structures using discrete material optimization considering “worst” shape imperfections

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik

    2015-01-01

    Robust buckling optimal design of laminated composite structures is conducted in this work. Optimal designs are obtained by considering geometric imperfections in the optimization procedure. Discrete Material Optimization is applied to obtain optimal laminate designs. The optimal geometric imperfection is represented by the “worst” shape imperfection. The two optimization problems are combined through the recurrence optimization. Hereby the imperfection sensitivity of the considered structures can be studied. The recurrence optimization is demonstrated through a U-profile and a cylindrical panel example. The imperfection sensitivity of the optimized structure decreases during the recurrence optimization for both examples, hence robust buckling optimal structures are designed.

  7. Discrete Riccati equation solutions: Distributed algorithms

    Directory of Open Access Journals (Sweden)

    D. G. Lainiotis

    1996-01-01

    Full Text Available In this paper new distributed algorithms for the solution of the discrete Riccati equation are introduced. The algorithms are used to provide robust and computational efficient solutions to the discrete Riccati equation. The proposed distributed algorithms are theoretically interesting and computationally attractive.

  8. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al, presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
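
    For reference, a minimal Python sketch of the PoCA computation that the paper analyzes is given below: the assumed scattering point is taken as the midpoint of the shortest segment between the incoming and outgoing muon tracks. This is only the baseline PoCA step; the Angle Statistics Reconstruction algorithm itself is not reproduced here.

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach between two straight tracks (sketch).

    p1, d1: a point on and the direction of the incoming muon track.
    p2, d2: a point on and the direction of the outgoing track.
    Returns the midpoint of the shortest segment joining the two lines,
    which PoCA-based reconstructions use as the assumed scattering point.
    """
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w0 = np.asarray(p1, float) - np.asarray(p2, float)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # near-parallel tracks: fix s = 0
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    closest1 = np.asarray(p1, float) + s * d1
    closest2 = np.asarray(p2, float) + t * d2
    return 0.5 * (closest1 + closest2)

# toy usage: incoming track from above, outgoing track deflected sideways
print(poca([0, 0, 10], [0, 0, -1], [1, 0, -10], [0.1, 0, -1]))
```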

  9. Geometrical-optics approximation of forward scattering by coated particles.

    Science.gov (United States)

    Xu, Feng; Cai, Xiaoshu; Ren, Kuanfang

    2004-03-20

    By means of geometrical optics we present an approximation algorithm with which to accelerate the computation of scattering intensity distribution within a forward angular range (0 degrees-60 degrees) for coated particles illuminated by a collimated incident beam. Phases of emerging rays are exactly calculated to improve the approximation precision. This method proves effective for transparent and tiny absorbent particles with size parameters larger than 75 but fails to give good approximation results at scattering angles at which refractive rays are absent. When the absorption coefficient of a particle is greater than 0.01, the geometrical optics approximation is effective only for forward small angles, typically less than 10 degrees or so.

  10. Geometric Modeling and Reasoning of Human-Centered Freeform Products

    CERN Document Server

    Wang, Charlie C L

    2013-01-01

    The recent trend in user-customized product design requires the shape of products to be automatically adjusted according to the human body’s shape, so that people will feel more comfortable when wearing these products.  Geometric approaches can be used to design the freeform shape of products worn by people, which can greatly improve the efficiency of design processes in various industries involving customized products (e.g., garment design, toy design, jewel design, shoe design, and design of medical devices, etc.). These products are usually composed of very complex geometric shapes (represented by free-form surfaces), and are not driven by a parameter table but a digital human model with free-form shapes or part of human bodies (e.g., wrist, foot, and head models).   Geometric Modeling and Reasoning of Human-Centered Freeform Products introduces the algorithms of human body reconstruction, freeform product modeling, constraining and reconstructing freeform products, and shape optimization for improving...

  11. Robust hashing for 3D models

    Science.gov (United States)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their usage increases, so do their number and thereby the challenge of correctly identifying them. Content identification is commonly done with cryptographic hashes. However, these fail in application scenarios such as computer-aided design (CAD), scientific visualization, or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods, built to withstand desired alterations of the model as well as malicious attacks intending to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security, and runtime performance, as well as the False Acceptance Rate (FAR) and False Rejection Rate (FRR); the probability of hash collisions is also calculated. The hashing algorithm is kept adaptive, e.g. in hash length, to serve as a practical tool for applications in practice.

  12. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...

  13. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. The watermark data is embedded into a feature point of a 3D printing model by changing the vector length of the feature point in OXY space based on the reference length. The x and y coordinates of the feature point will be then changed according to the changed vector length that has been embedded with a watermark. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks, such as rotation, scaling, and translation. The proposed algorithm provides a better method than the conventional works, and the accuracy of the proposed algorithm is much higher than previous methods.
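
    A minimal sketch of the embedding step described above is given below: one bit is encoded by re-quantizing the vector length of a feature point in the OXY plane relative to a reference length, leaving the z coordinate fixed by the slicing plane. The quantization step size and the even/odd parity convention are assumptions (a common QIM-style choice), not the paper's exact scheme.

```python
import math

def embed_bit(feature_point, reference_length, bit, delta=0.01):
    """Embed one watermark bit into a feature point (illustrative sketch).

    The point's vector length in the OXY plane is re-quantized relative to
    `reference_length` so that the quantization index is even for bit 0 and
    odd for bit 1.  The step size `delta` and the parity rule are assumed.
    The z coordinate, fixed by the slicing plane, is left unchanged.
    """
    x, y, z = feature_point
    length = math.hypot(x, y)
    ratio = length / reference_length
    q = round(ratio / delta)
    if q % 2 != bit:
        q += 1                                   # move to the required parity
    new_length = q * delta * reference_length
    scale = new_length / length if length > 0 else 1.0
    return (x * scale, y * scale, z)

# toy usage: embed bit 1 into one feature point of a sliced layer
print(embed_bit((10.0, 5.0, 2.5), reference_length=20.0, bit=1))
```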

  14. A novel algorithm for fast grasping of unknown objects using C-shape configuration

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Meijer, Jonathan; Wisse, Martijn

    2018-02-01

    Increasing grasping efficiency is very important for the robots to grasp unknown objects especially subjected to unfamiliar environments. To achieve this, a new algorithm is proposed based on the C-shape configuration. Specifically, the geometric model of the used under-actuated gripper is approximated as a C-shape. To obtain an appropriate graspable position, this C-shape configuration is applied to fit geometric model of an unknown object. The geometric model of unknown object is constructed by using a single-view partial point cloud. To examine the algorithm using simulations, a comparison of the commonly used motion planners is made. The motion planner with the highest number of solved runs, lowest computing time and the shortest path length is chosen to execute grasps found by this grasping algorithm. The simulation results demonstrate that excellent grasping efficiency is achieved by adopting our algorithm. To validate this algorithm, experiment tests are carried out using a UR5 robot arm and an under-actuated gripper. The experimental results show that steady grasping actions are obtained. Hence, this research provides a novel algorithm for fast grasping of unknown objects.

  15. Robust and Efficient Parametric Face Alignment

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient

  16. Robust Satisficing Decision Making for Unmanned Aerial Vehicle Complex Missions under Severe Uncertainty.

    Directory of Open Access Journals (Sweden)

    Xiaoting Ji

    Full Text Available This paper presents a robust satisficing decision-making method for Unmanned Aerial Vehicles (UAVs) executing complex missions in an uncertain environment. Motivated by the info-gap decision theory, we formulate this problem as a novel robust satisficing optimization problem, of which the objective is to maximize the robustness while satisfying some desired mission requirements. Specifically, a new info-gap based Markov Decision Process (IMDP) is constructed to abstract the uncertain UAV system and specify the complex mission requirements with the Linear Temporal Logic (LTL). A robust satisficing policy is obtained to maximize the robustness to the uncertain IMDP while ensuring a desired probability of satisfying the LTL specifications. To this end, we propose a two-stage robust satisficing solution strategy which consists of the construction of a product IMDP and the generation of a robust satisficing policy. In the first stage, a product IMDP is constructed by combining the IMDP with an automaton representing the LTL specifications. In the second, an algorithm based on robust dynamic programming is proposed to generate a robust satisficing policy, while an associated robustness evaluation algorithm is presented to evaluate the robustness. Finally, through Monte Carlo simulation, the effectiveness of our algorithms is demonstrated on a UAV search mission under severe uncertainty so that the resulting policy can maximize the robustness while reaching the desired performance level. Furthermore, by comparing the proposed method with other robust decision-making methods, it can be concluded that our policy can tolerate higher uncertainty so that the desired performance level can be guaranteed, which indicates that the proposed method is much more effective in real applications.

  17. Sensitivity Analysis of the Scattering-Based SARBM3D Despeckling Algorithm.

    Science.gov (United States)

    Di Simone, Alessio

    2016-06-25

    Synthetic Aperture Radar (SAR) imagery greatly suffers from multiplicative speckle noise, typical of coherent image acquisition sensors, such as SAR systems. Therefore, a proper and accurate despeckling preprocessing step is almost mandatory to aid the interpretation and processing of SAR data by human users and computer algorithms, respectively. Very recently, a scattering-oriented version of the popular SAR Block-Matching 3D (SARBM3D) despeckling filter, named Scattering-Based (SB)-SARBM3D, was proposed. The new filter is based on the a priori knowledge of the local topography of the scene. In this paper, an experimental sensitivity analysis of the above-mentioned despeckling algorithm is carried out, and the main results are shown and discussed. In particular, the role of both electromagnetic and geometrical parameters of the surface and the impact of its scattering behavior are investigated. Furthermore, a comprehensive sensitivity analysis of the SB-SARBM3D filter against the Digital Elevation Model (DEM) resolution and the SAR image-DEM coregistration step is also provided. The sensitivity analysis shows a significant robustness of the algorithm against most of the surface parameters, while the DEM resolution plays a key role in the despeckling process. Furthermore, the SB-SARBM3D algorithm outperforms the original SARBM3D in the presence of the most realistic scattering behaviors of the surface. An actual scenario is also presented to assess the DEM role in real-life conditions.

  18. A fully robust PARAFAC method for analyzing fluorescence data

    DEFF Research Database (Denmark)

    Engelen, Sanne; Frosch, Stina; Jørgensen, Bo

    2009-01-01

    and Rayleigh scatter. Recently, a robust PARAFAC method that circumvents the harmful effects of outlying samples has been developed. Different techniques exist for removing the scatter effects from the final PARAFAC model, and an automated scatter identification tool has recently been constructed. However, there still exists no robust method for handling fluorescence data containing both outlying EEM landscapes and scatter. In this paper, we present an iterative algorithm in which the robust PARAFAC method and the scatter identification tool are performed alternately. A fully automated robust PARAFAC method...

  19. A GEOMETRICAL HEIGHT SCALE FOR SUNSPOT PENUMBRAE

    International Nuclear Information System (INIS)

    Puschmann, K. G.; Ruiz Cobo, B.; MartInez Pillet, V.

    2010-01-01

    Inversions of spectropolarimetric observations of penumbral filaments deliver the stratification of different physical quantities in an optical depth scale. However, without establishing a geometrical height scale, their three-dimensional geometrical structure cannot be derived. This is crucial in understanding the correct spatial variation of physical properties in the penumbral atmosphere and to provide insights into the mechanism capable of explaining the observed penumbral brightness. The aim of this work is to determine a global geometrical height scale in the penumbra by minimizing the divergence of the magnetic field vector and the deviations from static equilibrium as imposed by a force balance equation that includes pressure gradients, gravity, and the Lorentz force. Optical depth models are derived from the inversion of spectropolarimetric data of an active region observed with the Solar Optical Telescope on board the Hinode satellite. We use a genetic algorithm to determine the boundary condition for the inference of geometrical heights. The retrieved geometrical height scale permits the evaluation of the Wilson depression at each pixel and the correlation of physical quantities at each height. Our results fit into the uncombed penumbral scenario, i.e., a penumbra composed of flux tubes with channeled mass flow and with a weaker and more horizontal magnetic field as compared with the background field. The ascending material is hotter and denser than their surroundings. We do not find evidence of overturning convection or field-free regions in the inner penumbral area analyzed. The penumbral brightness can be explained by the energy transfer of the ascending mass carried by the Evershed flow, if the physical quantities below z = -75 km are extrapolated from the results of the inversion.

  20. Robust Face Recognition Via Gabor Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Hao Yu-Juan

    2016-01-01

    Full Text Available Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results. However, face feature extraction based on sparse representation is too simple, and the resulting coefficients are not truly sparse. In this paper, we improve the classification algorithm by fusing sparse representation with Gabor features; the improved Gabor feature overcomes the problem of high feature dimensionality, reduces the computation and storage cost, and enhances the robustness of the algorithm to environmental changes. Since the classification efficiency of sparse representation is determined by the collaborative representation, we simplify the L1-norm sparse constraint to a least-squares constraint, which makes the sparse coefficients positive and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression, and pose variations in face recognition, and that the recognition rate of the algorithm is improved.
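
    To show what replacing the L1 constraint with a least-squares (collaborative representation) constraint looks like in practice, here is a minimal Python sketch of ridge-regularized coding followed by class-wise residual classification. The regularization value and the use of raw feature vectors (rather than the paper's Gabor features) are assumptions for illustration.

```python
import numpy as np

def crc_classify(dictionary, labels, query, lam=0.01):
    """Collaborative-representation classification (illustrative sketch).

    `dictionary`: (d x n) matrix whose columns are training samples,
    `labels`: length-n array of class labels, `query`: length-d vector.
    The query is coded over all training samples with a ridge (L2)
    constraint instead of an L1 sparse constraint, then assigned to the
    class whose samples reconstruct it with the smallest residual.
    """
    D = np.asarray(dictionary, float)
    labels = np.asarray(labels)
    y = np.asarray(query, float)
    n = D.shape[1]
    # ridge-regularized least-squares code: a single linear solve
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = (labels == c)
        res = np.linalg.norm(y - D[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best
```

    One practical appeal of this relaxation is that the code is obtained from a single closed-form linear solve, whereas L1-based sparse coding normally requires an iterative solver.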

  1. Robust perception algorithms for road and track autonomous following

    Science.gov (United States)

    Marion, Vincent; Lecointe, Olivier; Lewandowski, Cecile; Morillon, Joel G.; Aufrere, Romuald; Marcotegui, Beatrix; Chapuis, Roland; Beucher, Serge

    2004-09-01

    The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational add-on value." The paper details the "road and track following" theme (named AUT2), whose main purpose was to develop a vision-based sub-system to automatically detect the roadsides of an extended range of roads and tracks suitable for military missions. To achieve this goal, efforts focused on three main areas: (1) improvement of image quality at the algorithms' inputs, thanks to the selection of adapted video cameras and the development of a THALES-patented algorithm that removes in real time most of the disturbing shadows in images taken in natural environments, enhances contrast, and lowers reflection effects due to films of water; (2) selection and improvement of two complementary algorithms (one segment-oriented, the other region-based); (3) development of a fusion process between both algorithms, which feeds a road model in real time with the best available data. Each step has been developed so that the global perception process is reliable and safe: for example, the process continuously evaluates itself and outputs confidence criteria qualifying the roadside detection. The paper presents the processes in detail, together with the results obtained from the military acceptance tests that were passed, which trigger the next step: autonomous track following (named AUT3).

  2. Designing Phononic Crystals with Wide and Robust Band Gaps

    Science.gov (United States)

    Jia, Zian; Chen, Yanyu; Yang, Haoxiang; Wang, Lifeng

    2018-04-01

    Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneous wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter one is dominating), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps are then discussed from two aspects: robustness to geometric randomness (manufacture defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each designing parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  3. Designing Phononic Crystals with Wide and Robust Band Gaps

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yanyu [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jia, Zian [State University of New York at Stony Brook; Yang, Haoxiang [State University of New York at Stony Brook; Wang, Lifeng [State University of New York at Stony Brook

    2018-04-16

    Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneous wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter one is dominating), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps are then discussed from two aspects: robustness to geometric randomness (manufacture defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each designing parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  4. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    CERN Document Server

    Parasuraman, Ramviyas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide red...

  5. A new method for robust video watermarking resistant against key estimation attacks

    Science.gov (United States)

    Mitekin, Vitaly

    2015-12-01

    This paper presents a new method for high-capacity robust digital video watermarking, together with algorithms for embedding and extracting the watermark based on this method. The proposed method uses password-based two-dimensional pseudonoise arrays for watermark embedding, making brute-force attacks aimed at steganographic key retrieval mostly impractical. The proposed algorithm for generating two-dimensional "noise-like" watermarking patterns also significantly decreases the watermark collision probability (i.e., the probability of correct watermark detection and extraction using an incorrect steganographic key or password). Experimental research in this work also shows that a simple correlation-based watermark detection procedure can be used, providing watermark robustness against lossy compression and watermark estimation attacks. At the same time, without decreasing the robustness of the embedded watermark, the average complexity of a brute-force key retrieval attack can be increased to 10^14 watermark extraction attempts (compared to 10^4-10^6 for known robust watermarking schemes). Experimental results also show that at the lowest embedding intensity the watermark preserves its robustness against lossy compression of the host video while preserving higher video quality (PSNR up to 51 dB) compared to known wavelet-based and DCT-based watermarking algorithms.
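
    The general flavor of password-seeded pseudonoise embedding and correlation-based detection can be sketched in a few lines of Python for a single frame. The hash-based seeding, embedding strength, and detection threshold below are assumptions for illustration, not the parameters or pattern construction used in the paper.

```python
import hashlib
import numpy as np

def pn_pattern(password, shape):
    """Password-seeded +/-1 pseudonoise array (illustrative sketch)."""
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest()[:4], "big")
    rng = np.random.RandomState(seed)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(frame, password, strength=2.0):
    """Add the PN pattern to one luminance frame."""
    return frame + strength * pn_pattern(password, frame.shape)

def detect(frame, password, threshold=0.5):
    """Normalized correlation between the frame and the PN pattern."""
    w = pn_pattern(password, frame.shape)
    score = float(np.mean((frame - frame.mean()) * w))
    return score, score > threshold

# toy usage: detection succeeds only with the correct password
frame = np.random.RandomState(1).rand(64, 64) * 255
marked = embed(frame, "secret")
print(detect(marked, "secret")[1], detect(marked, "wrong-key")[1])
```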

  6. Geometric description of images as topographic maps

    CERN Document Server

    Caselles, Vicent

    2010-01-01

    This volume discusses the basic geometric contents of an image and presents a tree data structure to handle those contents efficiently. The nodes of the tree are derived from connected components of level sets of the intensity, while the edges represent inclusion information. Grain filters, morphological operators simplifying these geometric contents, are analyzed and several applications to image comparison and registration, and to edge and corner detection, are presented. The mathematically inclined reader may be most interested in Chapters 2 to 6, which generalize the topological Morse description to continuous or semicontinuous functions, while mathematical morphologists may more closely consider grain filters in Chapter 3. Computer scientists will find algorithmic considerations in Chapters 6 and 7, the full justification of which may be found in Chapters 2 and 4 respectively. Lastly, all readers can learn more about the motivation for this work in the image processing applications presented in Chapter 8...

  7. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    Science.gov (United States)

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in time lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing evolutions of cells. The main idea for tracking is the usage of two distance functions--first one from the cells in the initial frame and second one from segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame and the second one forces them to be close to centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. This approach can be generalized to a 3D + time video analysis, where spatio-temporal tubes are 4D objects.
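
    As a rough illustration of how two distance fields can be combined into a single trajectory cost, the sketch below uses SciPy's Euclidean distance transform: one term attracts trajectories toward the cells of the initial frame, the other keeps them away from segmented boundaries (i.e. near the tube centerlines). The function name, the weights, and the use of a plain Euclidean transform are assumptions; the paper's distance functions and backward-tracking procedure are not reproduced.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def tracking_cost(initial_cell_mask, boundary_mask, w_init=1.0, w_center=1.0):
    """Combine two distance fields into one cost map (illustrative sketch).

    `initial_cell_mask`: True at the cells of the initial frame.
    `boundary_mask`: True on the segmented spatio-temporal boundaries.
    Low cost means close to the initial cells and far from boundaries.
    """
    dist_to_initial = distance_transform_edt(~initial_cell_mask)
    dist_to_boundary = distance_transform_edt(~boundary_mask)
    return w_init * dist_to_initial - w_center * dist_to_boundary

# toy usage on a single 100x100 slice; a backward-tracking step could then
# move each cell center to the lowest-cost position in a local neighborhood
init = np.zeros((100, 100), dtype=bool); init[40:45, 40:45] = True
bnd = np.zeros((100, 100), dtype=bool); bnd[::10, :] = True
cost = tracking_cost(init, bnd)
print(cost.shape)
```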

  8. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  9. Radio resource management using geometric water-filling

    CERN Document Server

    He, Peter; Zhou, Sheng; Niu, Zhisheng

    2014-01-01

    This brief introduces the fundamental theory and development of managing radio resources using a water-filling algorithm that can optimize system performance in wireless communication. Geometric Water-Filling (GWF) is a crucial underlying tool in emerging communication systems such as multiple input multiple output systems, cognitive radio systems, and green communication systems. Early chapters introduce emerging wireless technologies and provide a detailed analysis of water-filling. The brief investigates single user and multi-user issues of radio resource management, allocation of resources

  10. ROBUST PERSON TRACKING WITH MULTIPLE NON-OVERLAPPING CAMERAS IN AN OUTDOOR ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    S. Hellwig

    2012-07-01

    Full Text Available The aim of our work is to combine multiple cameras for robust tracking of persons in an outdoor environment. Although surveillance is a well-established field, many algorithms impose constraints such as overlapping fields of view or precise calibration of the cameras to improve results. Applying such systems in a realistic outdoor environment is often difficult. Our aim is to be largely independent of the camera setup and the observed scene, so that existing cameras can be used. Our algorithm therefore needs to be capable of working with both overlapping and non-overlapping fields of view. We propose an algorithm that allows a flexible combination of different static cameras with varying properties. Another requirement for practical application is that the algorithm works online; our system processes the data at runtime and provides results immediately. In addition to seeking flexibility in the camera setup, we present a specific approach that combines state-of-the-art algorithms in order to be robust to environmental influences. We present results that indicate good performance of our algorithm in different scenarios and show its robustness to different types of image artifacts. In addition, we demonstrate that our algorithm is able to match persons between cameras in a non-overlapping scenario.

  11. Project Robust Scheduling Based on the Scattered Buffer Technology

    Directory of Open Access Journals (Sweden)

    Nansheng Pang

    2018-04-01

    Full Text Available The research object in this paper is the subnetwork formed by the predecessors' effect on the solution activity. The paper studies three types of influencing factors from the predecessors that delay the starting time of the solution activity on the longest path, and analyzes how strongly each type of factor delays that starting time. On this basis, through a comprehensive analysis of the various factors that influence the solution activity, the paper proposes a metric for evaluating the solution robustness of the project schedule and takes this metric as the optimization goal. The paper also adopts an iterative process to design a scattered-buffer heuristic algorithm based on robust time-buffer scheduling. In this algorithm, a resource flow network is introduced, and a tabu search algorithm is used to solve the baseline schedule. To generate the resource flow network for the baseline schedule, the algorithm uses a resource allocation procedure that makes maximum use of the precedence relations. Finally, the proposed algorithm and several algorithms from previous literature are compared in simulation experiments; the results show that the proposed algorithm is reasonable and feasible.

  12. A robust and fast generic voltage sag detection technique

    DEFF Research Database (Denmark)

    L. Dantas, Joacillo; Lima, Francisco Kleber A.; Branco, Carlos Gustavo C.

    2015-01-01

    In this paper, a fast and robust voltage sag detection algorithm, named VPS2D, is introduced. Using the DSOGI, the algorithm creates a virtual positive-sequence voltage and monitors the fundamental voltage component of each phase. After calculating the aggregate value in the αβ-reference fram...
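
    A minimal numerical sketch of the detection idea described above, assuming the aggregate value is the magnitude of the positive-sequence voltage in the αβ frame and that a sag is flagged when it drops below a fixed per-unit threshold; the function names, the 0.9 p.u. limit, and the toy signal are illustrative assumptions, not the VPS2D implementation.

    ```python
    import numpy as np

    def aggregate_voltage(v_alpha, v_beta):
        """Magnitude of the (virtual) positive-sequence voltage from its
        alpha-beta components."""
        return np.sqrt(v_alpha ** 2 + v_beta ** 2)

    def detect_sag(v_alpha, v_beta, v_nominal=1.0, threshold=0.9):
        """Flag a sag whenever the aggregate value drops below a fraction of
        the nominal voltage (0.9 p.u. is a commonly used limit)."""
        return aggregate_voltage(v_alpha, v_beta) < threshold * v_nominal

    # Toy signal: 1.0 p.u. for 50 ms, then a 40% sag
    t = np.linspace(0, 0.1, 1000)
    amp = np.where(t < 0.05, 1.0, 0.6)
    v_alpha = amp * np.cos(2 * np.pi * 50 * t)
    v_beta = amp * np.sin(2 * np.pi * 50 * t)
    print(detect_sag(v_alpha, v_beta).any())  # True: sag detected after t = 50 ms
    ```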

  13. A geometric framework for evaluating rare variant tests of association.

    Science.gov (United States)

    Liu, Keli; Fast, Shannon; Zawistowski, Matthew; Tintle, Nathan L

    2013-05-01

    The wave of next-generation sequencing data has arrived. However, many questions still remain about how to best analyze sequence data, particularly the contribution of rare genetic variants to human disease. Numerous statistical methods have been proposed to aggregate association signals across multiple rare variant sites in an effort to increase statistical power; however, the precise relation between the tests is often not well understood. We present a geometric representation for rare variant data in which rare allele counts in case and control samples are treated as vectors in Euclidean space. The geometric framework facilitates a rigorous classification of existing rare variant tests into two broad categories: tests for a difference in the lengths of the case and control vectors, and joint tests for a difference in either the lengths or angles of the two vectors. We demonstrate that genetic architecture of a trait, including the number and frequency of risk alleles, directly relates to the behavior of the length and joint tests. Hence, the geometric framework allows prediction of which tests will perform best under different disease models. Furthermore, the structure of the geometric framework immediately suggests additional classes and types of rare variant tests. We consider two general classes of tests which show robustness to noncausal and protective variants. The geometric framework introduces a novel and unique method to assess current rare variant methodology and provides guidelines for both applied and theoretical researchers. © 2013 Wiley Periodicals, Inc.
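
    The quantities the framework is built on can be illustrated with a short sketch: rare-allele counts per site in cases and controls are treated as Euclidean vectors, and their lengths and mutual angle are the statistics that length tests and joint tests respectively act on. The function and the toy counts below are hypothetical illustrations, not the authors' code.

    ```python
    import numpy as np

    def length_and_angle(case_counts, control_counts):
        """Treat per-site rare-allele counts in cases and controls as vectors in
        Euclidean space; return their lengths and the angle between them."""
        x = np.asarray(case_counts, dtype=float)
        y = np.asarray(control_counts, dtype=float)
        len_case, len_control = np.linalg.norm(x), np.linalg.norm(y)
        cos_angle = np.dot(x, y) / (len_case * len_control)
        return len_case, len_control, np.arccos(np.clip(cos_angle, -1.0, 1.0))

    # Toy counts at five rare-variant sites
    print(length_and_angle([3, 0, 2, 1, 4], [1, 1, 0, 1, 1]))
    # A "length" test compares the two lengths; a "joint" test also reacts
    # when the angle between the vectors departs from zero.
    ```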

  14. A geometrical model for DNA organization in bacteria.

    Directory of Open Access Journals (Sweden)

    Mathias Buenemann

    Full Text Available Recent experimental studies have revealed that bacteria, such as C. crescentus, show a remarkable spatial ordering of their chromosome. A strong linear correlation has been found between the position of genes on the chromosomal map and their spatial position in the cellular volume. We show that this correlation can be explained by a purely geometrical model. Namely, self-avoidance of DNA, specific positioning of one or a few DNA loci (such as the origin or terminus), together with the action of DNA compaction proteins (that organize the chromosome into topological domains), are sufficient to obtain a linear arrangement of the chromosome along the cell axis. We develop a Monte-Carlo method that allows us to test our model numerically and to analyze the dependence of the spatial ordering on various physiologically relevant parameters. We show that the proposed geometrical ordering mechanism is robust and universal (i.e. it does not depend on specific bacterial details). The geometrical mechanism should work in all bacteria that have compacted chromosomes with spatially fixed regions. We use our model to make specific and experimentally testable predictions about the spatial arrangement of the chromosome in mutants of C. crescentus and the growth-stage dependent ordering in E. coli.

  15. ADSL Transceivers Applying DSM and Their Nonstationary Noise Robustness

    Directory of Open Access Journals (Sweden)

    Bostoen Tom

    2006-01-01

    Full Text Available Dynamic spectrum management (DSM) comprises a new set of techniques for multiuser power allocation and/or detection in digital subscriber line (DSL) networks. At the Alcatel Research and Innovation Labs, we have recently developed a DSM test bed, which allows the performance of DSM algorithms to be evaluated in practice. With this test bed, we have evaluated the performance of a DSM level-1 algorithm known as iterative water-filling in an ADSL scenario. This paper describes, on the one hand, the performance gains achieved with iterative water-filling and, on the other hand, the nonstationary noise robustness of DSM-enabled ADSL modems. It will be shown that DSM trades off nonstationary noise robustness for performance improvements. A new bit swap procedure is then introduced to increase the noise robustness when applying DSM.
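
    For readers unfamiliar with the level-1 algorithm mentioned above, the sketch below shows the single-user water-filling step that iterative water-filling repeats per user, treating the other users' crosstalk as part of the noise; the function name and example numbers are illustrative assumptions, not the test-bed implementation.

    ```python
    import numpy as np

    def water_filling(gains, total_power):
        """Single-user water-filling: split total_power across tones with
        gain-to-noise ratios `gains`, maximizing the sum of log(1 + g * p)."""
        g = np.asarray(gains, dtype=float)
        order = np.argsort(g)[::-1]              # strongest tones first
        g_sorted = g[order]
        power = np.zeros_like(g)
        for k in range(len(g), 0, -1):           # try the k best tones
            mu = (total_power + np.sum(1.0 / g_sorted[:k])) / k   # water level
            p = mu - 1.0 / g_sorted[:k]
            if p[-1] >= 0:                       # all k allocations non-negative
                power[order[:k]] = p
                break
        return power

    print(water_filling([2.0, 0.5, 1.0, 0.1], total_power=1.0))
    ```

    In an iterative water-filling loop, each modem would rerun this step with its gain-to-noise ratios recomputed while the other users' latest spectra are treated as noise, until the allocations stop changing.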

  16. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    International Nuclear Information System (INIS)

    Daoud, M.; Ahl Laamara, R.

    2012-01-01

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states

  17. Geometric measure of pairwise quantum discord for superpositions of multipartite generalized coherent states

    Energy Technology Data Exchange (ETDEWEB)

    Daoud, M., E-mail: m_daoud@hotmail.com [Department of Physics, Faculty of Sciences, University Ibnou Zohr, Agadir (Morocco); Ahl Laamara, R., E-mail: ahllaamara@gmail.com [LPHE-Modeling and Simulation, Faculty of Sciences, University Mohammed V, Rabat (Morocco); Centre of Physics and Mathematics, CPM, CNESTEN, Rabat (Morocco)

    2012-07-16

    We give the explicit expressions of the pairwise quantum correlations present in superpositions of multipartite coherent states. Special attention is devoted to the evaluation of the geometric quantum discord. The dynamics of quantum correlations under a dephasing channel is analyzed. A comparison of geometric measure of quantum discord with that of concurrence shows that quantum discord in multipartite coherent states is more resilient to dissipative environments than is quantum entanglement. To illustrate our results, we consider some special superpositions of Weyl–Heisenberg, SU(2) and SU(1,1) coherent states which interpolate between Werner and Greenberger–Horne–Zeilinger states. -- Highlights: ► Pairwise quantum correlations in multipartite coherent states. ► Explicit expression of geometric quantum discord. ► Entanglement sudden death and quantum discord robustness. ► Generalized coherent states interpolating between Werner and Greenberger–Horne–Zeilinger states.

  18. A Recursive Decomposition Algorithm for 3D Assembly Geometric Constraint System with Closed-loops

    Institute of Scientific and Technical Information of China (English)

    黄学良; 李娜; 陈立平

    2013-01-01

    Numerical methods are usually employed to solve 3D assembly geometric constraint systems with closed-loops, which cannot be decomposed by existing decomposition methods, but their inherent inefficiency and instability cannot be overcome. In this paper, based on an analysis of the structural constraint of a serial kinematic chain and the topological structure of the geometric constraint closed-loop graph, a recursive decomposition algorithm for 3D geometric constraint systems with closed-loops is proposed. The basic idea of the proposed algorithm is to introduce an equivalent geometric constraint combination to substitute the structural constraint of the serial kinematic chain, and to separate geometric constraint subsystems that can be solved independently from the geometric constraint system with closed-loops. The proposed method can decompose most 3D geometric constraint closed-loop systems, which previously had to be solved by integral numerical iteration, into a series of geometric constraint subsystems between two rigid bodies that can be solved by analytical or reasoning methods, so that the computational efficiency and stability are improved dramatically. Finally, a typical example is given to validate the correctness and effectiveness of the proposed method.

  19. A self-organizing algorithm for modeling protein loops.

    Directory of Open Access Journals (Sweden)

    Pu Liu

    2009-08-01

    Full Text Available Protein loops, the flexible short segments connecting two stable secondary structural units in proteins, play a critical role in protein structure and function. Constructing chemically sensible conformations of protein loops that seamlessly bridge the gap between the anchor points without introducing any steric collisions remains an open challenge. A variety of algorithms have been developed to tackle the loop closure problem, ranging from inverse kinematics to knowledge-based approaches that utilize pre-existing fragments extracted from known protein structures. However, many of these approaches focus on the generation of conformations that mainly satisfy the fixed end point condition, leaving the steric constraints to be resolved in subsequent post-processing steps. In the present work, we describe a simple solution that simultaneously satisfies not only the end point and steric conditions, but also chirality and planarity constraints. Starting from random initial atomic coordinates, each individual conformation is generated independently by using a simple alternating scheme of pairwise distance adjustments of randomly chosen atoms, followed by fast geometric matching of the conformationally rigid components of the constituent amino acids. The method is conceptually simple, numerically stable and computationally efficient. Very importantly, additional constraints, such as those derived from NMR experiments, hydrogen bonds or salt bridges, can be incorporated into the algorithm in a straightforward and inexpensive way, making the method ideal for solving more complex multi-loop problems. The remarkable performance and robustness of the algorithm are demonstrated on a set of protein loops of length 4, 8, and 12 that have been used in previous studies.
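
    A rough sketch of the core self-organizing move, assuming it behaves like a stochastic pairwise distance adjustment: two atoms participating in a randomly chosen constraint are moved along their connecting line so their distance approaches the target. The fast geometric matching of rigid amino-acid components is omitted, and all names and the toy chain are illustrative rather than the published algorithm.

    ```python
    import numpy as np

    def adjust_pair(coords, i, j, target, rate=0.5):
        """Move atoms i and j along their connecting line so that their
        distance approaches `target` (a bond length, bound or anchor)."""
        d = coords[j] - coords[i]
        dist = np.linalg.norm(d) + 1e-12
        step = rate * (dist - target) / (2.0 * dist) * d
        coords[i] += step
        coords[j] -= step

    def self_organize(n_atoms, constraints, n_steps=20000, rng=None):
        """Start from random coordinates and repeatedly apply pairwise
        distance adjustments for randomly chosen constraints."""
        if rng is None:
            rng = np.random.default_rng(0)
        coords = rng.normal(size=(n_atoms, 3))
        pairs = list(constraints)
        for _ in range(n_steps):
            i, j = pairs[rng.integers(len(pairs))]
            adjust_pair(coords, i, j, constraints[(i, j)])
        return coords

    # Toy example: four pseudo-atoms forced into a unit-length chain
    coords = self_organize(4, {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0})
    print(np.linalg.norm(coords[1] - coords[0]))  # close to 1.0
    ```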

  20. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    Science.gov (United States)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and became one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. Obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results outperforming previous facial expression recognition studies using partial face features, and comparable results to studies using whole face information, only slightly lower by ~2.5% compared to the best whole face facial recognition system while using only ~1/3 of the facial region.
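
    A hedged sketch of the feature-selection stage described above, using scikit-learn's SequentialFeatureSelector as a stand-in for the SFS step and synthetic data in place of the geometric eye/eyebrow features; the dataset sizes and parameter values are arbitrary illustrations, not the iFER configuration.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.svm import SVC

    # Synthetic stand-in for the geometric eye/eyebrow features (5 expression classes)
    X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                               n_classes=5, n_clusters_per_class=1, random_state=0)

    svm = SVC(kernel="rbf", gamma="scale")
    sfs = SequentialFeatureSelector(svm, n_features_to_select=6,
                                    direction="forward", cv=3)
    sfs.fit(X, y)
    print("selected feature indices:", list(sfs.get_support(indices=True)))

    # The reduced feature set is then fed to the SVM classifier
    svm.fit(sfs.transform(X), y)
    print("training accuracy:", svm.score(sfs.transform(X), y))
    ```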

  1. Fractal supersymmetric QM, Geometric Probability and the Riemann Hypothesis

    CERN Document Server

    Castro, C

    2004-01-01

    The Riemann hypothesis (RH) states that the nontrivial zeros of the Riemann zeta-function are of the form $s_n = 1/2 + i\lambda_n$. Earlier work on the RH based on supersymmetric QM, whose potential was related to the Gauss-Jacobi theta series, provides the proper framework to construct a well-defined algorithm to compute the probability of finding a zero (an infinity of zeros) on the critical line. Geometric probability theory furnishes the answer to the very difficult question of whether the probability that the RH is true is indeed equal to unity or not. To test the validity of this geometric probabilistic framework for computing the probability that the RH is true, we apply it directly to the hyperbolic sine function $\sinh(s)$, which obeys a trivial analog of the RH (the HSRH). Its zeros are equally spaced on the imaginary axis, $s_n = 0 + in\pi$. The geometric probability of finding a zero (and an infinity of zeros) on the imaginary axis is exactly unity. We proceed with a fractal supersymme...

  2. A geometric viewpoint on generalized hydrodynamics

    Directory of Open Access Journals (Sweden)

    Benjamin Doyon

    2018-01-01

    Full Text Available Generalized hydrodynamics (GHD) is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective (“dressed”) velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.

  3. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    Science.gov (United States)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration is able to provide high-accuracy geometric coordinates for spaceborne SAR images through accurate geometric parameters in the Range-Doppler model derived from ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only used for single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve these problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image that contains GCPs. Secondly, a point tracking algorithm is used to obtain tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study the SAR image hybrid geometric calibration method using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. GPS data extracted from a GNSS receiver are used to assess the plane accuracy after calibration. The results after geometric calibration with sparse GCPs show that the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  4. Competition improves robustness against loss of information

    Directory of Open Access Journals (Sweden)

    Arash eKermani Kolankeh

    2015-03-01

    Full Text Available A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model response properties to the responses recorded from biological organisms. However, as several algorithms were able to demonstrate some degree of similarity to biological data based on the existing criteria, we focus on robustness against loss of information in the form of occlusions as an additional constraint for better understanding the algorithmic level of early vision in the brain. We investigate the influence of competition mechanisms on this robustness. Therefore, we compared four methods employing different competition mechanisms, namely, independent component analysis, non-negative matrix factorization with sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of those methods is known to be capable of developing receptive fields comparable to those of V1 simple cells. Since measuring the robustness of methods having simple-cell-like receptive fields against occlusion is difficult, we measure robustness using the classification accuracy on the MNIST handwritten digit dataset. For this we trained all methods on the training set of the MNIST handwritten digits dataset and tested them on a MNIST test set with different levels of occlusion. We observe that methods which employ competitive mechanisms have higher robustness against loss of information. The kind of competition mechanism also plays an important role in robustness: global feedback inhibition, as employed in predictive coding/biased competition, has an advantage over local lateral inhibition learned by an anti-Hebb rule.

  5. A Collision-Free G2 Continuous Path-Smoothing Algorithm Using Quadratic Polynomial Interpolation

    Directory of Open Access Journals (Sweden)

    Seong-Ryong Chang

    2014-12-01

    Full Text Available Most path-planning algorithms are used to obtain a collision-free path without considering continuity. On the other hand, a continuous path is needed for stable movement. In this paper, the searched path is converted into a G2 continuous path using the modified quadratic polynomial and membership function interpolation (QPMI) algorithm. It is simple, unique and provides a good geometric interpretation. In addition, a collision-checking and improvement algorithm is proposed. The collision-checking algorithm checks a smoothed path for collisions; if collisions are detected, the collision improvement algorithm modifies the colliding path into a collision-free path. The collision improvement algorithm uses a geometric method based on the perpendicular line between a collision position and the colliding piecewise linear path. A sub-waypoint is added, and the QPMI algorithm is applied again. As a result, the collision-smoothed path is converted into a collision-free smooth path without changing the continuity.
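
    The collision-improvement step lends itself to a small geometric sketch: project the collision point onto the offending path segment (the foot of the perpendicular) and insert a sub-waypoint there before re-running the QPMI smoothing. The clearance offset, the push-away direction and the function names below are assumptions for illustration, not the paper's exact construction.

    ```python
    import numpy as np

    def perpendicular_foot(p, a, b):
        """Foot of the perpendicular from collision point p onto segment a-b,
        clamped to the segment."""
        p, a, b = map(np.asarray, (p, a, b))
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return a + t * ab

    def add_sub_waypoint(path, collision_point, seg_index, clearance=0.5):
        """Place a sub-waypoint beyond the foot of the perpendicular, pushing
        the path away from the obstacle before smoothing is applied again."""
        a, b = path[seg_index], path[seg_index + 1]
        foot = perpendicular_foot(collision_point, a, b)
        away = foot - np.asarray(collision_point, dtype=float)
        away /= np.linalg.norm(away) + 1e-12
        new_wp = tuple(foot + clearance * away)
        return path[:seg_index + 1] + [new_wp] + path[seg_index + 1:]

    path = [(0.0, 0.0), (4.0, 0.0), (8.0, 3.0)]
    print(add_sub_waypoint(path, collision_point=(2.0, 1.0), seg_index=0))
    ```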

  6. Robust species taxonomy assignment algorithm for 16S rRNA NGS reads: application to oral carcinoma samples

    Directory of Open Access Journals (Sweden)

    Nezar Noor Al-Hebshi

    2015-09-01

    Full Text Available Background: Usefulness of next-generation sequencing (NGS) in assessing bacteria associated with oral squamous cell carcinoma (OSCC) has been undermined by inability to classify reads to the species level. Objective: The purpose of this study was to develop a robust algorithm for species-level classification of NGS reads from oral samples and to pilot test it for profiling bacteria within OSCC tissues. Methods: Bacterial 16S V1-V3 libraries were prepared from three OSCC DNA samples and sequenced using 454's FLX chemistry. High-quality, well-aligned, and non-chimeric reads ≥350 bp were classified using a novel, multi-stage algorithm that involves matching reads to reference sequences in revised versions of the Human Oral Microbiome Database (HOMD), HOMD extended (HOMDEXT), and Greengene Gold (GGG) at alignment coverage and percentage identity ≥98%, followed by assignment to species level based on top hit reference sequences. Priority was given to hits in HOMD, then HOMDEXT and finally GGG. Unmatched reads were subject to operational taxonomic unit analysis. Results: Nearly 92.8% of the reads were matched to updated-HOMD 13.2, 1.83% to trusted-HOMDEXT, and 1.36% to modified-GGG. Of all matched reads, 99.6% were classified to species level. A total of 228 species-level taxa were identified, representing 11 phyla; the most abundant were Proteobacteria, Bacteroidetes, Firmicutes, Fusobacteria, and Actinobacteria. Thirty-five species-level taxa were detected in all samples. On average, Prevotella oris, Neisseria flava, Neisseria flavescens/subflava, Fusobacterium nucleatum ss polymorphum, Aggregatibacter segnis, Streptococcus mitis, and Fusobacterium periodontium were the most abundant. Bacteroides fragilis, a species rarely isolated from the oral cavity, was detected in two samples. Conclusion: This multi-stage algorithm maximizes the fraction of reads classified to the species level while ensuring reliable classification by giving priority to the
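
    The priority logic of the multi-stage assignment can be sketched compactly. The match() backend, which would wrap the actual alignment against each database, is a hypothetical placeholder; only the database ordering and the ≥98% identity/coverage gate follow the description above.

    ```python
    DATABASE_PRIORITY = ["HOMD", "HOMDEXT", "GGG"]

    def assign_species(read, match, identity_cutoff=98.0, coverage_cutoff=98.0):
        """Assign a 16S read to species level using the highest-priority database
        that yields a qualifying top hit; unmatched reads return None and would
        go on to operational taxonomic unit analysis."""
        for db in DATABASE_PRIORITY:
            hit = match(read, db)  # hypothetical: (species, %identity, %coverage) or None
            if hit is None:
                continue
            species, identity, coverage = hit
            if identity >= identity_cutoff and coverage >= coverage_cutoff:
                return db, species
        return None

    # Toy backend: only HOMD returns a qualifying hit for this read
    fake_match = lambda read, db: ("Prevotella oris", 99.1, 100.0) if db == "HOMD" else None
    print(assign_species("ACGT...", fake_match))  # ('HOMD', 'Prevotella oris')
    ```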

  7. Mixing geometric and radiometric features for change classification

    Science.gov (United States)

    Fournier, Alexandre; Descombes, Xavier; Zerubia, Josiane

    2008-02-01

    Most basic change detection algorithms use a pixel-based approach. While such an approach is well suited to monitoring changes over large areas (such as urban growth monitoring) in low-resolution images, an object-based approach seems more relevant when change detection is specifically aimed at targets such as small buildings and vehicles. In this paper, we present an approach that mixes radiometric and geometric features to qualify the changed zones. The goal is to establish links (appearance, disappearance, substitution ...) between the detected changes and the underlying objects. We proceed by first clustering the change map (containing each pixel's bitemporal radiosity) into different classes using the entropy-kmeans algorithm. Assuming that most man-made objects have a polygonal shape, a polygonal approximation algorithm is then used to characterize the resulting zone shapes, allowing us to refine the primary rough classification by integrating the polygon orientations into the state space. Tests are currently conducted on Quickbird data.

  8. Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map

    Directory of Open Access Journals (Sweden)

    Cristina Costa

    2004-09-01

    Full Text Available The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It will be shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure a uniform quality on the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.

  9. Simulation Experiment on Landing Site Selection Using a Simple Geometric Approach

    Science.gov (United States)

    Zhao, W.; Tong, X.; Xie, H.; Jin, Y.; Liu, S.; Wu, D.; Liu, X.; Guo, L.; Zhou, Q.

    2017-07-01

    Safe landing is an important part of a planetary exploration mission. Even fine-scale terrain hazards (such as rocks, small craters, and steep slopes, which would not be accurately detected from orbital reconnaissance) can pose a serious risk to a planetary lander or rover and the scientific instruments on board. In this paper, a simple geometric approach to planetary landing hazard detection and safe landing site selection is proposed. To achieve a full implementation of this algorithm, two easy-to-compute metrics are presented for extracting terrain slope and roughness information. Unlike conventional methods, which must perform robust plane fitting and elevation interpolation for DEM generation, in this work hazards are identified by processing the LiDAR point cloud directly. For safe landing site selection, a Generalized Voronoi Diagram is constructed; based on the idea of the maximum empty circle, the safest landing site can be determined. In this algorithm, hazards are treated as general polygons, without special simplification (e.g. regarding hazards as discrete circles or ellipses), so the treatment of hazards conforms more closely to a real planetary exploration scenario. To validate the approach, a simulated planetary terrain model was constructed using volcanic ash with rocks in an indoor environment. A commercial laser scanner mounted on a rail was used to scan the terrain surface at different hanging positions. The results demonstrate that fair hazard detection capability and reasonable site selection were obtained compared with a conventional method, while consuming less computational time and memory. Hence, it is a feasible candidate approach for future precision landing site selection on planetary surfaces.

  10. SIMULATION EXPERIMENT ON LANDING SITE SELECTION USING A SIMPLE GEOMETRIC APPROACH

    Directory of Open Access Journals (Sweden)

    W. Zhao

    2017-07-01

    Full Text Available Safe landing is an important part of a planetary exploration mission. Even fine-scale terrain hazards (such as rocks, small craters, and steep slopes, which would not be accurately detected from orbital reconnaissance) can pose a serious risk to a planetary lander or rover and the scientific instruments on board. In this paper, a simple geometric approach to planetary landing hazard detection and safe landing site selection is proposed. To achieve a full implementation of this algorithm, two easy-to-compute metrics are presented for extracting terrain slope and roughness information. Unlike conventional methods, which must perform robust plane fitting and elevation interpolation for DEM generation, in this work hazards are identified by processing the LiDAR point cloud directly. For safe landing site selection, a Generalized Voronoi Diagram is constructed; based on the idea of the maximum empty circle, the safest landing site can be determined. In this algorithm, hazards are treated as general polygons, without special simplification (e.g. regarding hazards as discrete circles or ellipses), so the treatment of hazards conforms more closely to a real planetary exploration scenario. To validate the approach, a simulated planetary terrain model was constructed using volcanic ash with rocks in an indoor environment. A commercial laser scanner mounted on a rail was used to scan the terrain surface at different hanging positions. The results demonstrate that fair hazard detection capability and reasonable site selection were obtained compared with a conventional method, while consuming less computational time and memory. Hence, it is a feasible candidate approach for future precision landing site selection on planetary surfaces.
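
    A toy sketch of the site-selection criterion shared by both records above: among candidate points, choose the one whose clearance to the nearest hazard polygon is largest (the maximum-empty-circle idea). A dense candidate grid and sampled polygon edges stand in for the Generalized Voronoi Diagram construction, and all names and the toy scene are illustrative assumptions.

    ```python
    import numpy as np

    def clearance(point, hazard_polygons, samples_per_edge=20):
        """Distance from a candidate site to the nearest hazard, with each
        polygon edge approximated by densely sampled points."""
        p = np.asarray(point, dtype=float)
        best = np.inf
        for poly in hazard_polygons:
            verts = np.asarray(poly, dtype=float)
            for a, b in zip(verts, np.roll(verts, -1, axis=0)):
                pts = a + np.linspace(0.0, 1.0, samples_per_edge)[:, None] * (b - a)
                best = min(best, np.min(np.linalg.norm(pts - p, axis=1)))
        return best

    def safest_site(candidates, hazard_polygons):
        """Maximum-empty-circle criterion: the candidate farthest from all hazards."""
        scores = [clearance(c, hazard_polygons) for c in candidates]
        k = int(np.argmax(scores))
        return candidates[k], scores[k]

    # Toy scene: two rectangular hazards and a coarse candidate grid
    hazards = [[(1, 1), (2, 1), (2, 2), (1, 2)], [(6, 5), (7, 5), (7, 7), (6, 7)]]
    grid = [(x, y) for x in range(10) for y in range(10)]
    print(safest_site(grid, hazards))
    ```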

  11. Multiple-image hiding using super resolution reconstruction in high-frequency domains

    Science.gov (United States)

    Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua

    2017-12-01

    In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to preserve the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks, owing to the memory-distributed property of elemental images; (2) increased quality of the reconstructed secret images, as the scheme utilizes the modified super-resolution reconstruction algorithm. The simulation results show that the proposed multiple-image hiding method outperforms other similar hiding methods and is robust to attacks such as Gaussian noise and JPEG compression.

  12. Robust modified GA based multi-stage fuzzy LFC

    International Nuclear Information System (INIS)

    Shayeghi, H.; Jalili, A.; Shayanfar, H.A.

    2007-01-01

    In this paper, a robust genetic algorithm (GA) based multi-stage fuzzy (MSF) controller is proposed for solution of the load frequency control (LFC) problem in a restructured power system that operates under deregulation based on the bilateral policy scheme. In this strategy, the control signal is tuned online from the knowledge base and the fuzzy inference; this approach requires fewer resources and uses two rule base sets. In the proposed method, exact tuning of the membership functions is very important for achieving the desired level of robust performance. Thus, to reduce the design effort and find a better fuzzy system control, the membership functions are designed automatically by a modified genetic algorithm. Classical genetic algorithms are powerful search techniques for finding the global optimal area; however, the global optimum value is not guaranteed, and the speed of convergence can be greatly reduced. To overcome this drawback, a modified genetic algorithm is used to tune the membership functions of the proposed MSF controller. The effectiveness of the proposed method is demonstrated on a three-area restructured power system with possible contracted scenarios under large load demand and area disturbances, in comparison with multi-stage fuzzy and classical fuzzy PID controllers through the FD and ITAE performance indices. The evaluation of the results shows that the proposed control strategy achieves good robust performance for a wide range of system parameters and load changes in the presence of system nonlinearities and is superior to the other controllers. Moreover, this newly developed control strategy has a simple structure, does not require an accurate model of the plant and is fairly easy to implement, which can be useful for real-world complex power systems.

  13. Robust modified GA based multi-stage fuzzy LFC

    Energy Technology Data Exchange (ETDEWEB)

    Shayeghi, H. [Technical Engineering Department, The University of Mohaghegh Ardebili, Daneshkah St., Ardebil (Iran); Jalili, A. [Electrical Engineering Group, Islamic Azad University, Ardebil Branch, Ardebil (Iran); Shayanfar, H.A. [Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran)

    2007-05-15

    In this paper, a robust genetic algorithm (GA) based multi-stage fuzzy (MSF) controller is proposed for solution of the load frequency control (LFC) problem in a restructured power system that operates under deregulation based on the bilateral policy scheme. In this strategy, the control signal is tuned online from the knowledge base and the fuzzy inference; this approach requires fewer resources and uses two rule base sets. In the proposed method, exact tuning of the membership functions is very important for achieving the desired level of robust performance. Thus, to reduce the design effort and find a better fuzzy system control, the membership functions are designed automatically by a modified genetic algorithm. Classical genetic algorithms are powerful search techniques for finding the global optimal area; however, the global optimum value is not guaranteed, and the speed of convergence can be greatly reduced. To overcome this drawback, a modified genetic algorithm is used to tune the membership functions of the proposed MSF controller. The effectiveness of the proposed method is demonstrated on a three-area restructured power system with possible contracted scenarios under large load demand and area disturbances, in comparison with multi-stage fuzzy and classical fuzzy PID controllers through the FD and ITAE performance indices. The evaluation of the results shows that the proposed control strategy achieves good robust performance for a wide range of system parameters and load changes in the presence of system nonlinearities and is superior to the other controllers. Moreover, this newly developed control strategy has a simple structure, does not require an accurate model of the plant and is fairly easy to implement, which can be useful for real-world complex power systems. (author)

  14. Many-objective robust decision making for water allocation under climate change.

    Science.gov (United States)

    Yan, Dan; Ludwig, Fulco; Huang, He Qing; Werners, Saskia E

    2017-12-31

    Water allocation is facing profound challenges due to climate change uncertainties. To identify adaptive water allocation strategies that are robust to climate change uncertainties, a model framework combining many-objective robust decision making and biophysical modeling is developed for large rivers. The framework was applied to the Pearl River basin (PRB), China where sufficient flow to the delta is required to reduce saltwater intrusion in the dry season. Before identifying and assessing robust water allocation plans for the future, the performance of ten state-of-the-art MOEAs (multi-objective evolutionary algorithms) is evaluated for the water allocation problem in the PRB. The Borg multi-objective evolutionary algorithm (Borg MOEA), which is a self-adaptive optimization algorithm, has the best performance during the historical periods. Therefore it is selected to generate new water allocation plans for the future (2079-2099). This study shows that robust decision making using carefully selected MOEAs can help limit saltwater intrusion in the Pearl River Delta. However, the framework could perform poorly due to larger than expected climate change impacts on water availability. Results also show that subjective design choices from the researchers and/or water managers could potentially affect the ability of the model framework, and cause the most robust water allocation plans to fail under future climate change. Developing robust allocation plans in a river basin suffering from increasing water shortage requires the researchers and water managers to well characterize future climate change of the study regions and vulnerabilities of their tools. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Effects of a random noisy oracle on search algorithm complexity

    International Nuclear Information System (INIS)

    Shenvi, Neil; Brown, Kenneth R.; Whaley, K. Birgitta

    2003-01-01

    Grover's algorithm provides a quadratic speed-up over classical algorithms for unstructured database or library searches. This paper examines the robustness of Grover's search algorithm to a random phase error in the oracle and analyzes the complexity of the search process as a function of the scaling of the oracle error with database or library size. Both the discrete- and continuous-time implementations of the search algorithm are investigated. It is shown that unless the oracle phase error scales as O(N^{-1/4}), neither the discrete- nor the continuous-time implementation of Grover's algorithm is scalably robust to this error in the absence of error correction
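
    The effect being analyzed can be reproduced numerically with a small state-vector simulation, assuming the noisy oracle applies a phase of π plus a zero-mean Gaussian error on the marked item at each iteration; the parameter choices below are illustrative and the sketch is not the paper's analytical treatment.

    ```python
    import numpy as np

    def grover_success(n, marked=0, phase_error_std=0.0, rng=None):
        """Success probability of Grover search over n items when the oracle's
        intended pi phase flip carries a random Gaussian error each iteration."""
        if rng is None:
            rng = np.random.default_rng(1)
        state = np.ones(n, dtype=complex) / np.sqrt(n)
        for _ in range(int(np.pi / 4 * np.sqrt(n))):
            # Noisy oracle: phase pi + error applied to the marked item only
            state[marked] *= np.exp(1j * (np.pi + rng.normal(0.0, phase_error_std)))
            # Diffusion operator: inversion about the mean
            state = 2 * state.mean() - state
        return abs(state[marked]) ** 2

    for sigma in (0.0, 0.3, 1.0):
        print(sigma, grover_success(n=1024, phase_error_std=sigma))
    ```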

  16. Geometric covers, graph orientations, counter games

    DEFF Research Database (Denmark)

    Berglin, Edvin

    An example of a geometric cover problem is Line Cover, also known as Point-Line Cover, where a set of points in a geometric space are to be covered by placing a restricted number of lines. We present new FPT algorithms for the sub-family Curve Cover (which includes Line Cover), as well as for Hyperplane Cover restricted to R^3 (i... When the underlying graph is dynamic (can be altered by some outside actor), some orientations may need to be reversed in order to maintain the low out-degree. We present a new algorithm that is simpler than earlier work, yet matches or outperforms the efficiency of these results with very few exceptions. Counter games are a type of abstract game played over a set of counters holding values, and these values may be moved between counters according to some set of rules. Typically they are played between two players: the adversary, who tries to concentrate the greatest value possible in a single counter, and the benevolent...

  17. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    Science.gov (United States)

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, the Robust Selection Algorithm (RSA), that addresses these important problems in big-data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-values computed through intensive random resampling that takes into account any non-normality in the data, and by integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with miRNAs selected by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
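
    A simplified sketch of the two ingredients highlighted above: scanning candidate expression cutoffs for the best group separation, and attaching a resampling (permutation) p-value that re-optimizes the cutoff under the null, so no normality assumption is needed. A plain difference in mean outcome stands in for the survival statistic, and every name and number is an illustrative assumption rather than the RSA implementation.

    ```python
    import numpy as np

    def best_cutoff(expression, outcome):
        """Scan candidate cutoffs on an expression vector and return the one
        maximizing the separation in mean outcome between high and low groups."""
        cutoffs = np.unique(expression)[1:-1]          # avoid empty groups
        scores = [abs(outcome[expression >= c].mean() -
                      outcome[expression < c].mean()) for c in cutoffs]
        k = int(np.argmax(scores))
        return cutoffs[k], scores[k]

    def resampling_pvalue(expression, outcome, n_resamples=1000, rng=None):
        """Permutation p-value for the optimized separation; the cutoff is
        re-optimized under each permutation, so no normality is assumed."""
        if rng is None:
            rng = np.random.default_rng(0)
        _, observed = best_cutoff(expression, outcome)
        null = [best_cutoff(expression, rng.permutation(outcome))[1]
                for _ in range(n_resamples)]
        return (np.sum(np.asarray(null) >= observed) + 1) / (n_resamples + 1)

    rng = np.random.default_rng(2)
    expr = rng.lognormal(size=100)
    surv = 12 + 6 * (expr > np.median(expr)) + rng.normal(0, 2, size=100)
    print(resampling_pvalue(expr, surv, n_resamples=200))
    ```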

  18. Salt bridges: geometrically specific, designable interactions.

    Science.gov (United States)

    Donald, Jason E; Kulp, Daniel W; DeGrado, William F

    2011-03-01

    Salt bridges occur frequently in proteins, providing conformational specificity and contributing to molecular recognition and catalysis. We present a comprehensive analysis of these interactions in protein structures by surveying a large database of protein structures. Salt bridges between Asp or Glu and His, Arg, or Lys display extremely well-defined geometric preferences. Several previously observed preferences are confirmed, and others that were previously unrecognized are discovered. Salt bridges are explored for their preferences for different separations in sequence and in space, geometric preferences within proteins and at protein-protein interfaces, co-operativity in networked salt bridges, inclusion within metal-binding sites, preference for acidic electrons, apparent conformational side chain entropy reduction on formation, and degree of burial. Salt bridges occur far more frequently between residues at close than distant sequence separations, but, at close distances, there remain strong preferences for salt bridges at specific separations. Specific types of complex salt bridges, involving three or more members, are also discovered. As we observe a strong relationship between the propensity to form a salt bridge and the placement of salt-bridging residues in protein sequences, we discuss the role that salt bridges might play in kinetically influencing protein folding and thermodynamically stabilizing the native conformation. We also develop a quantitative method to select appropriate crystal structure resolution and B-factor cutoffs. Detailed knowledge of these geometric and sequence dependences should aid de novo design and prediction algorithms. Copyright © 2010 Wiley-Liss, Inc.

  19. Design optimization of a robust sleeve antenna for hepatic microwave ablation

    International Nuclear Information System (INIS)

    Prakash, Punit; Webster, John G; Deng Geng; Converse, Mark C; Mahvi, David M; Ferris, Michael C

    2008-01-01

    We describe the application of a Bayesian variable-number sample-path (VNSP) optimization algorithm to yield a robust design for a floating sleeve antenna for hepatic microwave ablation. Finite element models are used to generate the electromagnetic (EM) field and thermal distribution in the liver for a particular design. Dielectric properties of the tissue are assumed to vary within ±10% of the average properties to simulate the variation among individuals. The Bayesian VNSP algorithm yields an optimal design that is a 14.3% improvement over the original design and is more robust in terms of lesion size, shape and efficiency. Moreover, the Bayesian VNSP algorithm finds an optimal solution while saving 68.2% of the simulation evaluations compared to the standard sample-path optimization method

  20. An Experiment of Robust Parallel Algorithm for the Eigenvalue problem of a Multigroup Neutron Diffusion based on modified FETI-DP

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jonghwa [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Parallelization of Monte Carlo simulation is widely adopted. There are also several parallel algorithms developed for the SN transport theory using the parallel wave sweeping algorithm and for the CPM using parallel ray tracing. For practical reactor physics applications, the thermal feedback and burnup effects on the multigroup cross sections should be considered. In this respect, the domain decomposition method (DDM) is suitable for distributing the expensive cross section calculation work. A parallel transport code and a diffusion code based on the Raviart-Thomas mixed finite element method were developed. However, most of the developed methods rely on the heuristic convergence of flux and current at the domain interfaces, and convergence was not attained in some cases. The mechanical stress computation community has also worked on the DDM to solve the stress-strain equation using finite element methods. The most successful domain decomposition method in terms of robustness is FETI-DP. In this study, we have modified the original FETI-DP to solve the eigenvalue problem for the multigroup diffusion problem.

  1. Multi-scale graph-cut algorithm for efficient water-fat separation.

    Science.gov (United States)

    Berglund, Johan; Skorpil, Mikael

    2017-09-01

    To improve the accuracy and robustness to noise in water-fat separation by unifying the multiscale and graph cut based approaches to B0-correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) of the reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. Robust Pseudo-Hierarchical Support Vector Clustering

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Sjöstrand, Karl; Olafsdóttir, Hildur

    2007-01-01

    Support vector clustering (SVC) has proven an efficient algorithm for clustering of noisy and high-dimensional data sets, with applications within many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method...... for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial...

  3. Watermarking Algorithms for 3D NURBS Graphic Data

    Directory of Open Access Journals (Sweden)

    Jae Jun Lee

    2004-10-01

    Full Text Available Two watermarking algorithms for 3D nonuniform rational B-spline (NURBS) graphic data are proposed: one is appropriate for steganography, and the other for watermarking. Instead of directly embedding data into the parameters of the NURBS, the proposed algorithms embed data into the 2D virtual images extracted by parameter sampling of the 3D model. As a result, the proposed steganography algorithm can embed information into more places of the surface than the conventional algorithm, while preserving the data size of the model. Also, any existing 2D watermarking technique can be used for the watermarking of 3D NURBS surfaces. From the experiment, it is found that the algorithm for the watermarking is robust to the attacks on weights, control points, and knots. It is also found to be robust to the remodeling of NURBS models.

  4. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    Science.gov (United States)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  5. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available A representative example of a modular eLearning-platform application, ‘Logical diagrams’, is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of this domain: algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  6. Geometric manipulation of the quantum states of two-level atoms

    International Nuclear Information System (INIS)

    Tian, Mingzhen; Barber, Zeb W.; Fischer, Joe A.; Babbitt, Wm. Randall

    2004-01-01

    Manipulation of the quantum states of two-level atoms has been investigated using laser-controlled geometric phase change, which has the potential to build robust quantum logic gates for quantum computing. For a qubit based on two electronic transition levels of an atom, two basic quantum operations that can make any universal single-qubit gate have been designed employing resonant laser pulses. An operation equivalent to a phase gate has been demonstrated using Tm3+ doped in a yttrium aluminum garnet crystal.

  7. A Fast and Robust Feature-Based Scan-Matching Method in 3D SLAM and the Effect of Sampling Strategies

    Directory of Open Access Journals (Sweden)

    Cihan Ulas

    2013-11-01

    Full Text Available Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find the rigid body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method is based on the matching of certain geometric structures, such as plane segments, outliers and noise in the point cloud are considerably suppressed; therefore, the proposed scan-matching algorithm is more robust than conventional methods. Besides, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As a scan-matching framework, an improved version of the normal distribution transform (NDT) is used. The probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, and the feature extraction, based on stochastic plane detection, is applied only to the input scan. Using an experimental dataset from an outdoor environment, a university campus, we obtained satisfactory performance results. Moreover, the feature extraction part of the algorithm can be considered a special sampling strategy for scan-matching and is compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. Thus, this study also shows the effect of subsampling on the performance of the NDT.

  8. Robust Fringe Projection Profilometry via Sparse Representation.

    Science.gov (United States)

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail if the captured fringe images contain a complex scene, such as multiple or occluded objects. This introduces great difficulty into the phase unwrapping process of an FPP system and can result in serious distortion in the final reconstructed 3D model. The proposed algorithm encodes the period order information, which is essential to phase unwrapping, into texture patterns and embeds them in the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. This information is then used to assist the phase unwrapping process in dealing with the different artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well whether the fringe images have a simple or complex scene, or are affected by the ambient lighting of the working environment.

  9. Geometrical basis for the Standard Model

    Science.gov (United States)

    Potter, Franklin

    1994-02-01

    The robust character of the Standard Model is confirmed. Examination of its geometrical basis in three equivalent internal symmetry spaces (the unitary plane C^2, the quaternion space Q, and the real space R^4), as well as the real space R^3, uncovers mathematical properties that predict the physical properties of leptons and quarks. The finite rotational subgroups of the gauge group SU(2)_L × U(1)_Y generate exactly three lepton families and four quark families and reveal how quarks and leptons are related. Among the physical properties explained are the mass ratios of the six leptons and eight quarks, the origin of the left-handed preference by the weak interaction, the geometrical source of color symmetry, and the zero neutrino masses. The (u, d) and (c, s) quark families team together to satisfy the triangle anomaly cancellation with the electron family, while the other families pair one-to-one for cancellation. The spontaneously broken symmetry is discrete and needs no Higgs mechanism. Predictions include all massless neutrinos, the top quark at 160 GeV/c^2, the b' quark at 80 GeV/c^2, and the t' quark at 2600 GeV/c^2.

  10. Time Series Analysis Using Geometric Template Matching.

    Science.gov (United States)

    Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina

    2013-03-01

    We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data.

  11. The research on optimization of auto supply chain network robust model under macroeconomic fluctuations

    International Nuclear Information System (INIS)

    Guo, Chunxiang; Liu, Xiaoli; Jin, Maozhu; Lv, Zhihan

    2016-01-01

    Considering the uncertainty of the macroeconomic environment, a robust optimization method is studied for constructing and designing the automotive supply chain network, and, based on the definition of a robust solution, a robust optimization model is built for integrated supply chain network design consisting of a supplier selection problem and a facility location–distribution problem. A tabu search algorithm is proposed for supply chain node configuration; the influence of the level of uncertainty on the robust results is analyzed; and the performance of supply chain network designs obtained with the stochastic programming model and the robust optimization model is compared, on this basis determining the rational layout of the supply chain network under macroeconomic fluctuations. Finally, the comparative test results validate that the performance of the tabu search algorithm is outstanding in terms of convergence and computational time. Meanwhile, it is indicated that the robust optimization model can reduce investment risks effectively when applied to supply chain network design.

  12. Design of uav robust autopilot based on adaptive neuro-fuzzy inference system

    Directory of Open Access Journals (Sweden)

    Mohand Achour Touat

    2008-04-01

    Full Text Available This paper is devoted to the application of adaptive neuro-fuzzy inference systems to the robust control of UAV longitudinal motion. The adaptive neuro-fuzzy inference system model needs to be trained with input/output data. These data were obtained from the modeling of a "crisp" robust control system. The synthesis of this system is based on the separation theorem, which defines the structure and parameters of the LQG-optimal controller, followed by robust optimization of this controller based on a genetic algorithm. Such a design procedure can define the rule base and the parameters of the fuzzification and defuzzification algorithms of the adaptive neuro-fuzzy inference system controller, which ensure the robust properties of the control system. Simulation of the closed-loop control system of UAV longitudinal motion with the adaptive neuro-fuzzy inference system controller demonstrates the high efficiency of the proposed design procedure.

  13. Algorithms, architectures and information systems security

    CERN Document Server

    Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

    2008-01-01

    This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and in robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security.

  14. A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization

    Directory of Open Access Journals (Sweden)

    Soroor Sarafrazi

    2015-07-01

    Full Text Available It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called "Kepler", inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to provide much stronger specialization in intensification and/or diversification. The performance of GSA–Kepler is evaluated by applying it to 14 benchmark functions with 20–1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.

  15. COMPARISON OF METHODS FOR GEOMETRIC CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    J. Hieronymus

    2012-09-01

    Full Text Available Methods for the geometric calibration of cameras in close-range photogrammetry are established and well investigated. The most common one is based on test-fields with a well-known pattern, which are observed from different directions. The parameters of a distortion model are calculated using bundle-block-adjustment algorithms. This method works well for short focal lengths, but is considerably more problematic to use with large focal lengths, which would require very large test-fields and surrounding space. To overcome this problem, there is another common calibration method used in remote sensing. It employs measurements using a collimator and a goniometer. A third calibration method uses diffractive optical elements (DOE) to project holograms of well-known patterns. In this paper these three calibration methods are compared empirically, especially in terms of accuracy. A camera has been calibrated with the methods mentioned above. All methods provide a set of distortion correction parameters as used by the photogrammetric software Australis. The resulting parameter values are very similar for all investigated methods. The three sets of distortion parameters are cross-compared against all three calibration methods. This is achieved by inserting the obtained distortion parameters as fixed input into the calibration algorithms and adjusting only the exterior orientation. The RMS (root mean square) of the remaining image coordinate residuals is taken as a measure of distortion correction quality. There are differences resulting from the different calibration methods. Nevertheless the measure is small for every comparison, which means that all three calibration methods can be used for accurate geometric calibration.

  16. On the robustness of EC-PC spike detection method for online neural recording.

    Science.gov (United States)

    Zhou, Yin; Wu, Tong; Rastegarnia, Amir; Guan, Cuntai; Keefer, Edward; Yang, Zhi

    2014-09-30

    Online spike detection is an important step to compress neural data and perform real-time neural information decoding. An unsupervised, automatic, yet robust signal processing method is strongly desired, so that it can support a wide range of applications. We have developed a novel spike detection algorithm called "exponential component-polynomial component" (EC-PC) spike detection. We first evaluate the robustness of the EC-PC spike detector under different firing rates and SNRs. Second, we show that the detection precision can be quantitatively derived without requiring additional user input parameters. We have realized the algorithm (including training) in a 0.13 μm CMOS chip, where an unsupervised, nonparametric operation has been demonstrated. Both simulated data and real data are used to evaluate the method under different firing rates (FRs) and SNRs. The results show that the EC-PC spike detector is the most robust in comparison with some popular detectors. Moreover, the EC-PC detector can track changes in the background noise due to its ability to re-estimate the neural data distribution. Both real and synthesized data have been used for testing the proposed algorithm in comparison with other methods, including the absolute thresholding detector (AT), median absolute deviation detector (MAD), nonlinear energy operator detector (NEO), and continuous wavelet detector (CWD). Comparative testing results reveal that the EC-PC detection algorithm performs better than the other algorithms regardless of recording conditions. The EC-PC spike detector can be considered an unsupervised and robust online spike detection method. It is also suitable for hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology.

    Science.gov (United States)

    Woldegebriel, Michael; Gonsalves, John; van Asten, Arian; Vivó-Truyols, Gabriel

    2016-02-16

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically employed. For data analysis, almost all commonly applied algorithms are threshold-based (frequentist). These algorithms examine the value of a certain measurement (e.g., peak height) to decide whether a certain xenobiotic of interest (XOI) is present/absent, yielding a binary output. Frequentist methods pose a problem when several sources of information [e.g., shape of the chromatographic peak, isotopic distribution, estimated mass-to-charge ratio (m/z), adduct, etc.] need to be combined, requiring the approach to make arbitrary decisions at substep levels of data analysis. We hereby introduce a novel Bayesian probabilistic algorithm for toxicological screening. The method tackles the problem with a different strategy. It is not aimed at reaching a final conclusion regarding the presence of the XOI, but it estimates its probability. The algorithm effectively and efficiently combines all possible pieces of evidence from the chromatogram and calculates the posterior probability of the presence/absence of XOI features. This way, the model can accommodate more information by updating the probability if extra evidence is acquired. The final probabilistic result assists the end user to make a final decision with respect to the presence/absence of the xenobiotic. The Bayesian method was validated and found to perform better (in terms of false positives and false negatives) than the vendor-supplied software package.
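
    The core idea above, combining several pieces of chromatographic evidence into a posterior probability rather than thresholding each one, can be illustrated with a toy Bayesian update. The prior and likelihood-ratio values below are invented purely for illustration; the paper's model builds its likelihoods from peak shape, isotopic distribution and mass accuracy rather than taking them as given.

```python
# Toy Bayesian evidence combination for the presence/absence of a compound.
import numpy as np

def posterior_presence(prior, likelihood_ratios):
    """Update P(present) with independent evidence given as likelihood ratios
    L_i = p(evidence_i | present) / p(evidence_i | absent)."""
    odds = prior / (1.0 - prior) * np.prod(likelihood_ratios)
    return odds / (1.0 + odds)

if __name__ == "__main__":
    prior = 0.01                      # assumed screening prior for this compound
    # assumed evidence: peak shape match, isotope pattern match, m/z accuracy
    lrs = np.array([12.0, 6.0, 4.0])  # assumed likelihood ratios
    print(f"posterior = {posterior_presence(prior, lrs):.3f}")
```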

  18. Geometrical Bell inequalities for arbitrarily many qudits with different outcome strategies

    International Nuclear Information System (INIS)

    Wieśniak, Marcin; Dutta, Arijit; Ryu, Junghee

    2016-01-01

    Greenberger–Horne–Zeilinger (GHZ) states are intuitively known to be the most nonclassical ones. They lead to the most radically nonclassical behavior of three or more entangled quantum subsystems. In the case of two-dimensional systems, it has been shown that GHZ states lead to an exponentially higher robustness of Bell nonclassicality against the white noise for geometrical inequalities than in the case of Weinfurter–Werner–Wolf–Żukowski–Brukner ones. We introduce geometrical Bell inequalities for collections of arbitrarily many systems of any dimensionality. We show that the violation factor of these inequalities grows exponentially with the number of parties and study their behavior in terms of dimensionality of subsystems and number of local measurements. We also investigate various strategies of assigning mathematical objects to events in the experiment, each leading to different violation ratios. (paper)

  19. A voting-based star identification algorithm utilizing local and global distribution

    Science.gov (United States)

    Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua

    2018-03-01

    A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global distribution and local distribution of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm exhibits a 99.81% identification rate with 2-pixel standard deviations of positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust to noise, and its average identification time and required memory are lower. Furthermore, a real sky test shows that the proposed algorithm performs well on real star images.

  20. Gradient descent for robust kernel-based regression

    Science.gov (United States)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
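
    A minimal sketch of the setting studied above follows: functional gradient descent in an RKHS with a windowed robust loss (here the Welsch loss with scale sigma) and a fixed iteration budget acting as the early-stopping rule. The Gaussian kernel, the specific loss, the step size and the stopping rule are assumptions for the example, not the paper's exact choices or its error analysis.

```python
# Functional gradient descent in an RKHS with a robust (Welsch) loss.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=0.3):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def robust_kernel_gd(X, y, sigma=1.0, step=1.0, n_iters=300):
    """Estimate f(x) = sum_i alpha_i k(x_i, x) by gradient descent on the
    empirical Welsch risk; n_iters acts as the early-stopping rule."""
    K = gaussian_kernel(X, X)
    alpha = np.zeros(len(y))
    for _ in range(n_iters):
        residual = y - K @ alpha
        psi = residual * np.exp(-residual ** 2 / sigma ** 2)  # robust "score"
        alpha += step / len(y) * psi
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(80, 1))
    y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(80)
    y[::15] += 3.0                               # a few gross outliers
    alpha = robust_kernel_gd(X, y)
    X_test = np.linspace(0, 1, 5)[:, None]
    print(gaussian_kernel(X_test, X) @ alpha)    # fitted values at test points
```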

  1. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    Science.gov (United States)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

    Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result suffers from geometrical distortion and chordal error. Parts manufactured with such a file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique, and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. The wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
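
    As an illustration of the simplest of the three refinement schemes compared above, the sketch below performs one pass of general triangular midpoint subdivision: every triangle is split into four by inserting edge midpoints. Butterfly and Loop subdivision additionally reposition vertices, which this example does not attempt; all names are illustrative.

```python
# One pass of triangle midpoint subdivision over an STL-style mesh.
import numpy as np

def midpoint_subdivide(vertices: np.ndarray, faces: np.ndarray):
    """vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
    vertices = vertices.tolist()
    midpoint_cache = {}   # shared edge -> index of its midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = (np.asarray(vertices[i]) + np.asarray(vertices[j])) / 2.0
            midpoint_cache[key] = len(vertices)
            vertices.append(m.tolist())
        return midpoint_cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
    return np.asarray(vertices), np.asarray(new_faces)

if __name__ == "__main__":
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
    tris = np.array([[0, 1, 2]])
    v2, f2 = midpoint_subdivide(verts, tris)
    print(len(v2), "vertices,", len(f2), "triangles")   # 6 vertices, 4 triangles
```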

  2. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have been developed.

  3. A geometric view on learning Bayesian network structures

    Czech Academy of Sciences Publication Activity Database

    Studený, Milan; Vomlel, Jiří; Hemmecke, R.

    2010-01-01

    Roč. 51, č. 5 (2010), s. 578-586 ISSN 0888-613X. [PGM 2008] R&D Projects: GA AV ČR(CZ) IAA100750603; GA MŠk(CZ) 1M0572; GA ČR GA201/08/0539 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : learning Bayesian networks * standard imset * inclusion neighborhood * geometric neighborhood * GES algorithm Subject RIV: BA - General Mathematics Impact factor: 1.679, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/studeny-0342804.pdf

  4. Accurate and robust brain image alignment using boundary-based registration.

    Science.gov (United States)

    Greve, Douglas N; Fischl, Bruce

    2009-10-15

    The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms to register image pairs from the same subject exist, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.

  5. Research on geometric rectification of the Large FOV Linear Array Whiskbroom Image

    Science.gov (United States)

    Liu, Dia; Liu, Hui-tong; Dong, Hao; Liu, Xiao-bo

    2015-08-01

    To solve the geometric distortion problem of large FOV linear array whiskbroom images, a model based on a multi-center central projection collinearity equation was formulated, taking into account the whiskbroom and linear CCD imaging features, and the origin of the distortion was analyzed. Building on a rectification method using POS data, we introduced the angular position sensor data of the servo system and reconstructed the geometric imaging process exactly. An indirect rectification scheme for linear array imaging with a best-scanline search method was adopted, and the matrices for calculating the exterior orientation elements were redesigned. We improved two iterative algorithms for this device and compared and analyzed them. Rectification of images from an airborne imaging experiment showed good results.

  6. Improved remote gaze estimation using corneal reflection-adaptive geometric transforms

    Science.gov (United States)

    Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea

    2014-05-01

    Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.

  7. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Full Text Available Parameter optimization can be achieved by many methods such as Monte-Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In a GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a ‘genetic chromosome’. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory. This reproduction is established in terms of selection, crossover and mutation of reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution. Populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. Factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions. Allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to study a seven-factor non-linear equation for a Wheatstone bridge as the equation to be optimized. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.
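
    The following bare-bones real-coded genetic algorithm shows the selection-crossover-mutation cycle the record describes. The sphere objective in seven variables merely stands in for the paper's seven-factor Wheatstone bridge equation, and the population size, mutation rate and other settings are illustrative assumptions.

```python
# Minimal real-coded genetic algorithm: selection, crossover, mutation, elitism.
import numpy as np

def genetic_minimize(objective, n_vars=7, pop_size=60, generations=200,
                     mutation_rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(pop_size, n_vars))
    for _ in range(generations):
        fitness = np.apply_along_axis(objective, 1, pop)
        # tournament selection of parent indices
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fitness[a] < fitness[b], a, b)]
        # uniform crossover between consecutive parents
        mask = rng.random(pop.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation of a fraction of the genes
        mutate = rng.random(pop.shape) < mutation_rate
        children = children + mutate * rng.normal(0, 0.3, size=pop.shape)
        # elitism: carry over the best individual of this generation
        children[0] = pop[np.argmin(fitness)]
        pop = children
    fitness = np.apply_along_axis(objective, 1, pop)
    return pop[np.argmin(fitness)], fitness.min()

if __name__ == "__main__":
    best_x, best_f = genetic_minimize(lambda x: float(np.sum(x ** 2)))
    print(best_f)   # should be small (the initial pool starts around 50-60)
```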

  8. Robust Circle Detection Using Harmony Search

    Directory of Open Access Journals (Sweden)

    Jaco Fourie

    2017-01-01

    Full Text Available Automatic circle detection is an important element of many image processing algorithms. Traditionally the Hough transform has been used to find circular objects in images, but more modern approaches that make use of heuristic optimisation techniques have been developed. These are often used in large complex images where the presence of noise or limited computational resources make the Hough transform impractical. Previous research on the use of Harmony Search (HS) in circle detection showed that HS is an attractive alternative to many of the modern circle detectors based on heuristic optimisers like genetic algorithms and simulated annealing. We propose improvements to this work that enable our algorithm to robustly find multiple circles in larger data sets and still work on realistic images that are heavily corrupted by noisy edges.
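
    For context, the sketch below applies the basic harmony search loop to circle detection: each harmony is a candidate circle (cx, cy, r) scored by the number of edge points lying near it. The memory size, HMCR, PAR and bandwidth values are typical textbook defaults, not those of the improved detector described above, and the synthetic point set replaces a real edge map.

```python
# Basic harmony search applied to fitting a single circle to 2-D edge points.
import numpy as np

def circle_score(params, points, tol=0.02):
    cx, cy, r = params
    d = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
    return np.count_nonzero(d < tol)           # number of supporting edge points

def harmony_search_circle(points, iters=5000, hms=20, hmcr=0.9, par=0.3,
                          bw=0.05, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.0, 0.0, 0.05]), np.array([1.0, 1.0, 0.5])
    memory = rng.uniform(lo, hi, size=(hms, 3))
    scores = np.array([circle_score(h, points) for h in memory])
    for _ in range(iters):
        new = np.empty(3)
        for j in range(3):
            if rng.random() < hmcr:                  # pick value from memory
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:               # pitch adjustment
                    new[j] += rng.uniform(-bw, bw)
            else:                                    # random consideration
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        s = circle_score(new, points)
        worst = np.argmin(scores)
        if s > scores[worst]:                        # replace the worst harmony
            memory[worst], scores[worst] = new, s
    return memory[np.argmax(scores)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    angles = rng.uniform(0, 2 * np.pi, 300)
    pts = np.c_[0.5 + 0.2 * np.cos(angles), 0.4 + 0.2 * np.sin(angles)]
    pts += 0.005 * rng.standard_normal(pts.shape)
    pts = np.vstack([pts, rng.uniform(0, 1, (100, 2))])   # clutter points
    print(harmony_search_circle(pts))   # best harmony, ideally near [0.5, 0.4, 0.2]
```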

  9. Approximate truncation robust computed tomography—ATRACT

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Maier, Andreas

    2013-01-01

    We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm aims at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)

  10. Visualizing the Geometric Series.

    Science.gov (United States)

    Bennett, Albert B., Jr.

    1989-01-01

    Mathematical proofs often leave students unconvinced or without understanding of what has been proved, because they provide no visual-geometric representation. Presented are geometric models for the finite geometric series when r is a whole number, and the infinite geometric series when r is the reciprocal of a whole number. (MNS)
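
    For reference, the identities that the geometric models illustrate are the standard finite and infinite geometric series sums:

```latex
% Finite sum (any r != 1) and infinite sum (|r| < 1) of a geometric series.
\[
  \sum_{k=0}^{n-1} a r^{k} \;=\; a\,\frac{1 - r^{n}}{1 - r},
  \qquad
  \sum_{k=0}^{\infty} a r^{k} \;=\; \frac{a}{1 - r}.
\]
```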

  11. Geometric Rationalization for Freeform Architecture

    KAUST Repository

    Jiang, Caigui

    2016-06-20

    The emergence of freeform architecture provides interesting geometric challenges with regards to the design and manufacturing of large-scale structures. To design these architectural structures, we have to consider two types of constraints. First, aesthetic constraints are important because the buildings have to be visually impressive. Second, functional constraints are important for the performance of a building and its efficient construction. This thesis contributes to the area of architectural geometry. Specifically, we are interested in the geometric rationalization of freeform architecture with the goal of combining aesthetic and functional constraints and construction requirements. Aesthetic requirements typically come from designers and architects. To obtain visually pleasing structures, they favor smoothness of the building shape, but also smoothness of the visible patterns on the surface. Functional requirements typically come from the engineers involved in the construction process. For example, covering freeform structures using planar panels is much cheaper than using non-planar ones. Further, constructed buildings have to be stable and should not collapse. In this thesis, we explore the geometric rationalization of freeform architecture using four specific example problems inspired by real life applications. We achieve our results by developing optimization algorithms and a theoretical study of the underlying geometrical structure of the problems. The four example problems are the following: (1) The design of shading and lighting systems which are torsion-free structures with planar beams based on quad meshes. They satisfy the functionality requirements of preventing light from going inside a building as shading systems or reflecting light into a building as lighting systems. (2) The design of freeform honeycomb structures that are constructed based on hex-dominant meshes with a planar beam mounted along each edge. The beams intersect without

  12. Geometric analysis

    CERN Document Server

    Bray, Hubert L; Mazzeo, Rafe; Sesum, Natasa

    2015-01-01

    This volume includes expanded versions of the lectures delivered in the Graduate Minicourse portion of the 2013 Park City Mathematics Institute session on Geometric Analysis. The papers give excellent high-level introductions, suitable for graduate students wishing to enter the field and experienced researchers alike, to a range of the most important areas of geometric analysis. These include: the general issue of geometric evolution, with more detailed lectures on Ricci flow and Kähler-Ricci flow, new progress on the analytic aspects of the Willmore equation as well as an introduction to the recent proof of the Willmore conjecture and new directions in min-max theory for geometric variational problems, the current state of the art regarding minimal surfaces in R^3, the role of critical metrics in Riemannian geometry, and the modern perspective on the study of eigenfunctions and eigenvalues for Laplace-Beltrami operators.

  13. Adaptive Watermarking Algorithm in DCT Domain Based on Chaos

    Directory of Open Access Journals (Sweden)

    Wenhao Wang

    2013-05-01

    Full Text Available In order to improve the security, robustness and invisibility of digital watermarking, a new adaptive watermarking algorithm is proposed in this paper. First, the algorithm uses a chaotic sequence, produced by the logistic chaotic map, to encrypt the watermark image. The original image is then divided into sub-blocks, and each sub-block is transformed by the discrete cosine transform (DCT). The watermark information is embedded into the mid-frequency coefficients of the sub-blocks. With the features of the Human Visual System (HVS) and image texture taken into account during embedding, the embedding intensity of the watermark adaptively adjusts according to HVS and texture characteristics. Experimental results show that the proposed algorithm is robust against the attacks of common image processing methods, such as noise, cropping, filtering and JPEG compression, achieves a good tradeoff between invisibility and robustness, and provides better security.
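
    The two ingredients described above can be sketched as follows: a logistic-map key stream encrypts the watermark bits, and each bit is then added to one mid-band DCT coefficient of an 8x8 block. The block size, the chosen coefficient position (3, 4), the fixed embedding strength and the chaos seed are illustrative assumptions; the adaptive HVS/texture weighting of the actual algorithm is not reproduced.

```python
# Chaotic encryption of watermark bits plus additive embedding in block DCTs.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix, so the 2-D DCT of a block B is C @ B @ C.T."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def logistic_sequence(length, x0=0.7, mu=3.99):
    x, out = x0, np.empty(length)
    for t in range(length):
        x = mu * x * (1 - x)
        out[t] = x
    return (out > 0.5).astype(np.uint8)            # chaotic binary key stream

def embed_watermark(image, bits, strength=8.0, key_x0=0.7):
    """image: (H, W) float array with H, W multiples of 8; bits: flat 0/1 array."""
    C = dct_matrix()
    key = logistic_sequence(len(bits), x0=key_x0)
    encrypted = np.bitwise_xor(bits, key)           # chaos-encrypted watermark
    out, idx = image.copy(), 0
    for r in range(0, image.shape[0], 8):
        for c in range(0, image.shape[1], 8):
            if idx >= len(bits):
                return out
            block = C @ out[r:r + 8, c:c + 8] @ C.T                 # 2-D DCT
            block[3, 4] += strength * (1 if encrypted[idx] else -1)  # mid band
            out[r:r + 8, c:c + 8] = C.T @ block @ C                 # inverse DCT
            idx += 1
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0, 255, size=(64, 64))
    wm = rng.integers(0, 2, size=32, dtype=np.uint8)
    marked = embed_watermark(img, wm)
    print(np.max(np.abs(marked - img)))   # small perturbation (a few grey levels)
```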

  14. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  15. Feasibility study of the iterative x-ray phase retrieval algorithm

    International Nuclear Information System (INIS)

    Meng Fanbo; Liu Hong; Wu Xizeng

    2009-01-01

    An iterative phase retrieval algorithm was previously investigated for in-line x-ray phase imaging. Through detailed theoretical analysis and computer simulations, we now discuss the limitations, robustness, and efficiency of the algorithm. The iterative algorithm was proved robust against imaging noise but sensitive to the variations of several system parameters. It is also efficient in terms of calculation time. It was shown that the algorithm can be applied to phase retrieval based on one phase-contrast image and one attenuation image, or two phase-contrast images; in both cases, the two images can be obtained either by one detector in two exposures, or by two detectors in only one exposure as in the dual-detector scheme

  16. On bivariate geometric distribution

    Directory of Open Access Journals (Sweden)

    K. Jayakumar

    2013-05-01

    Full Text Available Characterizations of the bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with marginals given by the bivariate geometric distribution are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions, such as Marshall-Olkin's bivariate exponential, Downton's bivariate exponential and Hawkes' bivariate exponential, are presented.
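
    For orientation, the univariate building block behind these constructions is the geometric law; the bivariate versions in the record compound two such marginals while aiming to preserve its lack-of-memory property:

```latex
% Geometric distribution on {1, 2, ...} with success probability p.
\[
  P(X = k) = (1-p)^{k-1} p, \qquad P(X > k) = (1-p)^{k},
\]
% Lack of memory, the discrete analogue of the exponential property:
\[
  P(X > m + n \mid X > m) = P(X > n).
\]
```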

  17. Robust Learning Control Design for Quantum Unitary Transformations.

    Science.gov (United States)

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems, including robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementations of quantum unitary transformations.

  18. Seismic noise attenuation using an online subspace tracking algorithm

    Science.gov (United States)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations. The algorithm is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
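
    For comparison with the online tracker described above, the batch alternative it is measured against can be sketched in a few lines: project the data matrix onto its leading singular subspace (truncated SVD). The synthetic rank-3 matrix and the noise level below are assumptions; real use would first map windowed seismic data into such a matrix.

```python
# Rank-k truncated-SVD denoising as the batch baseline for low-rank filtering.
import numpy as np

def tsvd_denoise(D: np.ndarray, rank: int) -> np.ndarray:
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic "signal" matrix of rank 3 plus random noise
    signal = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 120))
    noisy = signal + 0.5 * rng.standard_normal(signal.shape)
    denoised = tsvd_denoise(noisy, rank=3)
    err_before = np.linalg.norm(noisy - signal) / np.linalg.norm(signal)
    err_after = np.linalg.norm(denoised - signal) / np.linalg.norm(signal)
    print(f"relative error: {err_before:.3f} -> {err_after:.3f}")
```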

  19. Distributionally robust hydro-thermal-wind economic dispatch

    International Nuclear Information System (INIS)

    Chen, Yue; Wei, Wei; Liu, Feng; Mei, Shengwei

    2016-01-01

    Highlights: • A two-stage distributionally robust hydro-thermal-wind model is proposed. • A semi-definite programming equivalent and its algorithm are developed. • Cases that demonstrate the effectiveness of the proposed model are included. - Abstract: With the penetration of wind energy increasing, uncertainty has become a major challenge in power system dispatch. Hydro power can change rapidly and is regarded as one promising complementary energy resource to mitigate wind power fluctuation. Joint scheduling of hydro, thermal, and wind energy is attracting more and more attention nowadays. This paper proposes a distributionally robust hydro-thermal-wind economic dispatch (DR-HTW-ED) method to enhance the flexibility and reliability of power system operation. In contrast to the traditional stochastic optimization (SO) and adjustable robust optimization (ARO) methods, the distributionally robust optimization (DRO) method describes the uncertain wind power output by all possible probability distribution functions (PDFs) with the same mean and variance recovered from the forecast data, and optimizes the expected operation cost under the worst-case distribution. Traditional DRO optimizes the random parameter over the entire space, which sometimes contradicts the actual situation. In this paper, we restrict the wind power uncertainty to a bounded set and derive an equivalent semi-definite program (SDP) for the DR-HTW-ED using the S-lemma. A delayed constraint generation algorithm is suggested to solve it in a tractable manner. The proposed DR-HTW-ED is compared with the existing ARO-based hydro-thermal-wind economic dispatch (AR-HTW-ED). Their respective features are shown from the perspective of computational efficiency and conservativeness of dispatch strategies.

  20. A robust regression based on weighted LSSVM and penalized trimmed squares

    International Nuclear Information System (INIS)

    Liu, Jianyong; Wang, Yong; Fu, Chengqun; Guo, Jie; Yu, Qin

    2016-01-01

    Least squares support vector machine (LS-SVM) for nonlinear regression is sensitive to outliers in the field of machine learning. Weighted LS-SVM (WLS-SVM) overcomes this drawback by adding a weight to each training sample. However, as the number of outliers increases, the accuracy of WLS-SVM may decrease. In order to improve the robustness of WLS-SVM, a new robust regression method based on WLS-SVM and penalized trimmed squares (WLSSVM–PTS) has been proposed. The algorithm comprises three main stages. First, the initial parameters are obtained by least trimmed squares. Then, the significant outliers are identified and eliminated by the Fast-PTS algorithm. Finally, the remaining samples, containing few outliers, are estimated by WLS-SVM. The statistical tests of experimental results carried out on numerical datasets and real-world datasets show that the proposed WLSSVM–PTS is significantly more robust than LS-SVM, WLS-SVM and LSSVM–LTS.

  1. A Robust Color Image Watermarking Scheme Using Entropy and QR Decomposition

    Directory of Open Access Journals (Sweden)

    L. Laur

    2015-12-01

    Full Text Available The Internet has affected our everyday life drastically. Vast volumes of information are exchanged over the Internet constantly, which causes numerous security concerns. Issues like content identification, document and image security, audience measurement, ownership, copyrights and others can be addressed by using digital watermarking. In this work, a robust and imperceptible non-blind color image watermarking algorithm is proposed, which benefits from the fact that the watermark can be hidden in different color channels, resulting in further robustness of the proposed technique to attacks. The method uses entropy, the discrete wavelet transform, the chirp z-transform, orthogonal-triangular (QR) decomposition and singular value decomposition in order to embed the watermark in a color image. Many experiments are performed using well-known signal processing attacks such as histogram equalization, noise addition and compression. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks.

  2. An algorithm for 3D target scatterer feature estimation from sparse SAR apertures

    Science.gov (United States)

    Jackson, Julie Ann; Moses, Randolph L.

    2009-05-01

    We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.

  3. Quantification of differences between nailfold capillaroscopy images with a scleroderma pattern and normal pattern using measures of geometric and algorithmic complexity.

    Science.gov (United States)

    Urwin, Samuel George; Griffiths, Bridget; Allen, John

    2017-02-01

    This study aimed to quantify and investigate differences in the geometric and algorithmic complexity of the microvasculature in nailfold capillaroscopy (NFC) images displaying a scleroderma pattern and those displaying a 'normal' pattern. 11 NFC images were qualitatively classified by a capillary specialist as indicative of 'clear microangiopathy' (CM), i.e. a scleroderma pattern, and 11 as 'not clear microangiopathy' (NCM), i.e. a 'normal' pattern. Pre-processing was performed, and fractal dimension (FD) and Kolmogorov complexity (KC) were calculated following image binarisation. FD and KC were compared between groups, and a k-means cluster analysis (n = 2) on all images was performed, without prior knowledge of the group assigned to them (i.e. CM or NCM), using FD and KC as inputs. CM images had significantly reduced FD and KC compared to NCM images, and the cluster analysis displayed promising results, suggesting that the quantitative classification of images into CM and NCM groups is possible using the mathematical measures of FD and KC. The analysis techniques used show promise for quantitative microvascular investigation in patients with systemic sclerosis.
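
    The fractal dimension used above is typically estimated by box counting on the binarised image; a minimal sketch follows. The set of box sizes and the synthetic test image (a thin ring, whose dimension should be close to 1) are illustrative choices, and the Kolmogorov complexity measure used in the study is not shown.

```python
# Box-counting estimate of fractal dimension for a binary image.
import numpy as np

def box_counting_dimension(binary_image: np.ndarray) -> float:
    """Estimate FD as the slope of log N(s) versus log(1/s) over box sizes s."""
    n = min(binary_image.shape)
    sizes = [s for s in (2, 4, 8, 16, 32, 64) if s <= n // 2]
    counts = []
    for s in sizes:
        h = (binary_image.shape[0] // s) * s
        w = (binary_image.shape[1] // s) * s
        blocks = binary_image[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    y, x = np.mgrid[0:256, 0:256]
    ring = np.abs(np.hypot(x - 128, y - 128) - 80) < 2   # thin circle: FD ~ 1
    print(f"estimated FD = {box_counting_dimension(ring):.2f}")
```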

  4. DOA Estimation of Cylindrical Conformal Array Based on Geometric Algebra

    Directory of Open Access Journals (Sweden)

    Minjie Wu

    2016-01-01

    Full Text Available Due to the variable curvature of the conformal carrier, the pattern of each element has a different direction. The traditional method of analyzing a conformal array is to use the Euler rotation angle and its matrix representation. However, it is computationally demanding, especially for irregular array structures. In this paper, we present a novel algorithm that combines geometric algebra with Multiple Signal Classification (MUSIC), termed GA-MUSIC, to solve the direction of arrival (DOA) for a cylindrical conformal array. On this basis, we derive the element pattern and the array manifold. Compared with existing algorithms, the proposed one avoids cumbersome matrix transformations and largely decreases the computational complexity. The simulation results verify the effectiveness of the proposed method.
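
    As background for the subspace step that GA-MUSIC generalises, the sketch below computes the classical MUSIC pseudospectrum for a plain uniform linear array. The array geometry, signal model and grid are the textbook ULA case, not the cylindrical conformal array or the geometric-algebra manifold derivation of the paper.

```python
# Classical MUSIC pseudospectrum for a uniform linear array (ULA).
import numpy as np

def music_spectrum(X, n_sources, n_grid=361, spacing=0.5):
    """X: (n_sensors, n_snapshots) complex data; returns (angles_deg, spectrum)."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
    En = eigvecs[:, : n_sensors - n_sources]        # noise subspace
    angles = np.linspace(-90, 90, n_grid)
    k = np.arange(n_sensors)[:, None]
    A = np.exp(2j * np.pi * spacing * k * np.sin(np.deg2rad(angles)))
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return angles, 1.0 / denom

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sensors, n_snap, true_doas = 8, 500, np.array([-20.0, 35.0])
    k = np.arange(n_sensors)[:, None]
    A = np.exp(2j * np.pi * 0.5 * k * np.sin(np.deg2rad(true_doas)))
    S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
    X = A @ S + 0.1 * (rng.standard_normal((n_sensors, n_snap))
                       + 1j * rng.standard_normal((n_sensors, n_snap)))
    ang, P = music_spectrum(X, n_sources=2)
    # pick the two strongest local maxima of the pseudospectrum
    ismax = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])
    peak_idx = np.where(ismax)[0] + 1
    top2 = peak_idx[np.argsort(P[peak_idx])[-2:]]
    print(np.sort(ang[top2]))                       # approx. [-20. 35.]
```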

  5. An efficient and robust algorithm for parallel groupwise registration of bone surfaces

    NARCIS (Netherlands)

    van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.

    2012-01-01

    In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm

  6. Industrial Computed Tomography using Proximal Algorithm

    KAUST Repository

    Zang, Guangming

    2016-04-14

    In this thesis, we present ProxiSART, a flexible proximal framework for robust 3D cone beam tomographic reconstruction based on the Simultaneous Algebraic Reconstruction Technique (SART). We derive the proximal operator for the SART algorithm and use it for minimizing the data term in a proximal algorithm. We show the flexibility of the framework by plugging in different powerful regularizers, and show its robustness in achieving better reconstruction results in the presence of noise and using fewer projections. We compare our framework to state-of-the-art methods and existing popular software tomography reconstruction packages, on both synthetic and real datasets, and show superior reconstruction quality, especially from noisy data and a small number of projections.

  7. Automatic Offline Formulation of Robust Model Predictive Control Based on Linear Matrix Inequalities Method

    Directory of Open Access Journals (Sweden)

    Longge Zhang

    2013-01-01

    Full Text Available Two automatic robust model predictive control strategies are presented for uncertain polytopic linear plants with input and output constraints. A sequence of nested, geometrically proportioned, asymptotically stable ellipsoids and corresponding controllers is first constructed offline. In the first strategy, the feedback controllers are then automatically selected online with the receding horizon. Finally, a modified automatic offline robust MPC approach is constructed to improve the closed-loop system's performance. The new proposed strategies not only reduce conservatism but also decrease the online computation. Numerical examples are given to illustrate their effectiveness.

  8. Mechanical Model of Geometric Cell and Topological Algorithm for Cell Dynamics from Single-Cell to Formation of Monolayered Tissues with Pattern

    KAUST Repository

    Kachalo, Sëma

    2015-05-14

    Geometric and mechanical properties of individual cells and interactions among neighboring cells are the basis of formation of tissue patterns. Understanding the complex interplay of cells is essential for gaining insight into embryogenesis, tissue development, and other emerging behavior. Here we describe a cell model and an efficient geometric algorithm for studying the dynamic process of tissue formation in 2D (e.g. epithelial tissues). Our approach improves upon previous methods by incorporating properties of individual cells as well as detailed description of the dynamic growth process, with all topological changes accounted for. Cell size, shape, and division plane orientation are modeled realistically. In addition, cell birth, cell growth, cell shrinkage, cell death, cell division, cell collision, and cell rearrangements are now fully accounted for. Different models of cell-cell interactions, such as lateral inhibition during the process of growth, can be studied in detail. Cellular pattern formation for monolayered tissues from arbitrary initial conditions, including that of a single cell, can also be studied in detail. Computational efficiency is achieved through the employment of a special data structure that ensures access to neighboring cells in constant time, without additional space requirement. We have successfully generated tissues consisting of more than 20,000 cells starting from 2 cells within 1 hour. We show that our model can be used to study embryogenesis, tissue fusion, and cell apoptosis. We give a detailed study of the classical developmental process of bristle formation on the epidermis of D. melanogaster and the fundamental problem of homeostatic size control in epithelial tissues. Simulation results reveal significant roles of solubility of secreted factors in both the bristle formation and the homeostatic control of tissue size. Our method can be used to study broad problems in monolayered tissue formation. Our software is publicly available.

  9. Geometrical parton

    Energy Technology Data Exchange (ETDEWEB)

    Ebata, T [Tohoku Univ., Sendai (Japan). Coll. of General Education]

    1976-06-01

    The geometrical distribution inferred from the inelastic cross section is assumed to be proportional to the partial waves. The precocious scaling and the Q^2-dependence of various quantities are treated from the geometrical point of view. It is shown that the approximate conservation of the orbital angular momentum may be a very practical rule to understand the helicity structure of various hadronic and electromagnetic reactions. The rule can be applied to inclusive reactions as well. The model is also applied to large angle processes. Through the discussion, it is suggested that many peculiar properties of the quark-parton can be ascribed to the geometrical effects.

  10. Asymmetric forecasting and commitment policy in a robust control problem

    OpenAIRE

    Taro Ikeda

    2013-01-01

    This paper provides results regarding asymmetric forecasting and commitment monetary policy with a robust control algorithm. Previous studies provide no clarification of the connection between asymmetric preference and robust commitment policy. Three results emerge from general equilibrium modeling with asymmetric preference: (i) the condition for system stability implies an average inflation bias with respect to asymmetry; (ii) the effect of asymmetry can be mitigated if policy mak...

  11. Geometric properties of nucleic acids with potential for autobuilding

    International Nuclear Information System (INIS)

    Gruene, Tim; Sheldrick, George M.

    2011-01-01

    Algorithms and geometrical properties are described for the automated building of nucleic acids in experimental electron density. Medium- to high-resolution X-ray structures of DNA and RNA molecules were investigated to find geometric properties useful for automated model building in crystallographic electron-density maps. We describe a simple method, starting from a list of electron-density ‘blobs’, for identifying backbone phosphates and nucleic acid bases based on properties of the local electron-density distribution. This knowledge should be useful for the automated building of nucleic acid models into electron-density maps. We show that the distances and angles involving C1′ and the P atoms, using the pseudo-torsion angles η' and θ' that describe the …P—C1′—P—C1′… chain, provide a promising basis for building the nucleic acid polymer. These quantities show reasonably narrow distributions with asymmetry that should allow the direction of the phosphate backbone to be established

  12. Geometrical Modification of Learning Vector Quantization Method for Solving Classification Problems

    Directory of Open Access Journals (Sweden)

    Korhan GÜNEL

    2016-09-01

    Full Text Available In this paper, a geometrical scheme is presented to show how to overcome a problem arising from the use of the generalized delta learning rule within a competitive learning model. A theoretical methodology is introduced for describing the quantization of data via rotating prototype vectors on hyper-spheres. The proposed learning algorithm is tested and verified on different multidimensional datasets, including a binary-class dataset and two multiclass datasets from the UCI repository, and a multiclass dataset constructed by us. The proposed method is compared with some baseline learning vector quantization variants from the literature for all domains. A large number of experiments verify the performance of the proposed algorithm, with acceptable accuracy and macro F1 scores.
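
    For context, the baseline update that the geometric variant above modifies is the classical LVQ1 rule: the best-matching prototype moves toward samples of its own class and away from samples of other classes. The sketch below implements only this plain Euclidean version; the rotation of prototypes on hyper-spheres described in the record is not reproduced, and all names and parameter values are illustrative.

```python
# Classical LVQ1 training and nearest-prototype prediction.
import numpy as np

def train_lvq1(X, y, n_protos_per_class=1, lr=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, proto_labels = [], []
    for c in classes:                     # initialise prototypes from the data
        idx = rng.choice(np.where(y == c)[0], n_protos_per_class, replace=False)
        protos.append(X[idx])
        proto_labels += [c] * n_protos_per_class
    protos = np.vstack(protos).astype(float)
    proto_labels = np.array(proto_labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            j = int(np.argmin(d))         # best matching prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, proto_labels

def predict(protos, proto_labels, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    protos, labels = train_lvq1(X, y)
    print((predict(protos, labels, X) == y).mean())   # high training accuracy
```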

  13. Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects

    Directory of Open Access Journals (Sweden)

    Ming Li

    2014-01-01

    Full Text Available Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoffs in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified time and cost with interval uncertainty. Based on a multiobjective robust optimization method, a robust optimization model for the time-cost tradeoff problem is developed. In order to illustrate the robust model, the nondominated sorting genetic algorithm-II (NSGA-II) is modified to solve the project example. The results show that, by adjusting the time and cost robustness coefficients, robust Pareto sets for the time-cost tradeoff can be obtained according to different acceptable risk levels, from which the decision maker can choose the preferred construction alternative.

  14. Fast algorithm of track reconstruction for the DELPHI TPC

    International Nuclear Information System (INIS)

    Maillard, J.

    1984-01-01

    We describe a simple geometrical method (polar inversion) to reconstruct tracks when the magnetic field is constant in magnitude and direction. This method uses geometrical properties of the trajectories. In the case of the DELPHI apparatus, the track reconstruction is done using TPC information. After explaining the algorithm, we give results on ''GEANT'' simulated events using the ''Lund'' generator. Today we get a computing time of the order of 1.2 milliseconds on a CDC 7600 and an efficiency of 98%.

  15. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-05-02

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSAs) for interferometry is well known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength (DW) PSAs is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis herein presented may be used for interferometric contouring of discontinuous industrial objects. DW-PSAs may also be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To this date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, their signal-to-noise ratio, or their detuning and harmonic robustness have been given. Here, for the first time, a fully general procedure for designing DW-PSAs (or triple-wavelength PSAs) with the desired spectrum, signal-to-noise ratio and detuning robustness is given. We finally generalize DW-PSAs to temporal PSAs with a higher number of wavelengths.
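
    As a baseline for the multi-wavelength synthesis described above, the classical single-wavelength four-step PSA can be written in closed form (shown here only for orientation; the FTF-designed DW-PSAs generalise this to several wavelengths and tunable robustness):

```latex
% Four intensity samples I_n = a + b*cos(phi + n*pi/2), n = 0,...,3,
% give the classical single-wavelength estimate of the phase:
\[
  \hat{\varphi} \;=\; \arctan\!\left(\frac{I_3 - I_1}{I_0 - I_2}\right).
\]
```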

  16. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    Science.gov (United States)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.

  17. Robust chaotic control of Lorenz system by backstepping design

    International Nuclear Information System (INIS)

    Peng, C.-C.; Chen, C.-L.

    2008-01-01

    This work presents a robust chaotic control strategy for the Lorenz chaos via backstepping design. The backstepping technique is a systematic tool for control law design that provides Lyapunov stability. The concept of an extended system is used such that a continuous sliding mode control (SMC) effort is generated using the backstepping scheme. In the proposed control algorithm, an adaptation law is applied to estimate the system parameter, and the SMC offers robustness to model uncertainties and external disturbances so that asymptotic convergence of the tracking error can be achieved. Within the SMC, an equivalent control algorithm is chosen based on the Lyapunov stability criterion during the backstepping design. The convergence rate of the error states is governed by the corresponding dynamics of the sliding surface. Numerical simulations demonstrate the advantages of the approach for a regulation problem and an orbit tracking problem of the Lorenz chaos.

  18. Experimental implementation of a robust damped-oscillation control algorithm on a full-sized, two-degree-of-freedom, AC induction motor-driven crane

    International Nuclear Information System (INIS)

    Kress, R.L.; Jansen, J.F.; Noakes, M.W.

    1994-01-01

    When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purpose of this paper is to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom, industrial crane; describe the experimental evaluation of the controller, including robustness to payload length changes; explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and provide a theoretical description of the controller

  19. Robust self-triggered MPC for constrained linear systems

    NARCIS (Netherlands)

    Brunner, F.D.; Heemels, W.P.M.H.; Allgöwer, F.

    2014-01-01

    In this paper we propose a robust self-triggered model predictive control algorithm for linear systems with additive bounded disturbances and hard constraints on the inputs and state. In self-triggered control, at every sampling instant the time until the next sampling instant is computed online

  20. Evaluation of geometric changes of parotid glands during head and neck cancer radiotherapy using daily MVCT and automatic deformable registration

    International Nuclear Information System (INIS)

    Lee, Choonik; Langen, Katja M.; Lu, Weiguo; Haimerl, Jason; Schnarr, Eric; Ruchala, Kenneth J.; Olivera, Gustavo H.; Meeks, Sanford L.; Kupelian, Patrick A.; Shellenberger, Thomas D.; Manon, Rafael R.

    2008-01-01

    Background and purpose: To assess and evaluate geometrical changes in parotid glands using deformable image registration and megavoltage CT (MVCT) images. Methods: A deformable registration algorithm was applied to 330 daily MVCT images (10 patients) to create deformed parotid contours. The accuracy and robustness of the algorithm was evaluated through visual review, comparison with manual contours, and precision analysis. Temporal changes in the parotid gland geometry were observed. Results: The deformed parotid contours were qualitatively judged to be acceptable. Compared with manual contours, the uncertainties of automatically deformed contours were similar with regard to geometry and dosimetric endpoint. The day-to-day variations (1 standard deviation of errors) in the center-of-mass distance and volume were 1.61 mm and 4.36%, respectively. The volumes tended to decrease with a median total loss of 21.3% (6.7-31.5%) and a median change rate of 0.7%/day (0.4-1.3%/day). Parotids migrated toward the patient center with a median total distance change of -5.26 mm (0.00 to -16.35 mm) and a median change rate of -0.22 mm/day (0.02 to -0.56 mm/day). Conclusion: The deformable image registration and daily MVCT images provide an efficient and reliable assessment of parotid changes over the course of a radiation therapy

  1. Bio-Inspired Microsystem for Robust Genetic Assay Recognition

    Directory of Open Access Journals (Sweden)

    Jaw-Chyng Lue

    2008-01-01

    Full Text Available A compact integrated system-on-chip (SoC) architecture solution for robust, real-time, and on-site genetic analysis has been proposed. This microsystem solution is noise-tolerant and suitable for analyzing the weak fluorescence patterns from a PCR-prepared dual-labeled DNA microchip assay. In the architecture, a preceding VLSI differential logarithm microchip is designed for effectively computing the logarithm of the normalized input fluorescence signals. A posterior VLSI artificial neural network (ANN) processor chip is used for analyzing the processed signals from the differential logarithm stage. A single-channel logarithmic circuit was fabricated and characterized. A prototype ANN chip with an unsupervised winner-take-all (WTA) function was designed, fabricated, and tested. An ANN learning algorithm using a novel sigmoid-logarithmic transfer function based on the supervised backpropagation (BP) algorithm is proposed for robustly recognizing low-intensity patterns. Our results show that the trained new ANN can recognize low-fluorescence patterns better than an ANN using the conventional sigmoid function.

  2. Optimization of Pressurizer Based on Genetic-Simplex Algorithm

    International Nuclear Information System (INIS)

    Wang, Cheng; Yan, Chang Qi; Wang, Jian Jun

    2014-01-01

    Pressurizer is one of the key components in a nuclear power system. It is important to control its dimensions in the design of the pressurizer through optimization techniques. In this work, a mathematical model of a vertical electric heating pressurizer was established. A new Genetic-Simplex Algorithm (GSA) that combines the genetic algorithm and the simplex algorithm was developed to enhance the searching ability, and a comparison between the modified and original algorithms was conducted by calculating benchmark functions. Furthermore, the optimization design of the pressurizer, taking minimization of volume and net weight as objectives, was carried out considering thermal-hydraulic and geometric constraints through GSA. The results indicate that the mathematical model is suitable for the pressurizer and that the new algorithm is more effective than the traditional genetic algorithm. The optimization design shows obvious validity and can provide guidance for real engineering design
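
    A hedged sketch of the general hybrid idea of polishing a genetic search with a simplex (Nelder-Mead) refinement; the GA operators, benchmark function, and parameters below are illustrative assumptions, not the paper's GSA:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        def objective(x):                  # benchmark stand-in (Rosenbrock)
            return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

        # Crude real-coded GA: truncation selection, blend crossover, Gaussian mutation.
        pop = rng.uniform(-2, 2, size=(40, 2))
        for _ in range(100):
            fitness = np.apply_along_axis(objective, 1, pop)
            parents = pop[np.argsort(fitness)[:20]]
            kids = 0.5 * (parents[rng.integers(0, 20, 40)] + parents[rng.integers(0, 20, 40)])
            pop = kids + rng.normal(0.0, 0.05, size=kids.shape)
        best = pop[np.argmin(np.apply_along_axis(objective, 1, pop))]

        # Simplex (Nelder-Mead) refinement of the GA's best candidate.
        result = minimize(objective, best, method="Nelder-Mead")
        print(best, result.x, result.fun)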

  3. Improved SURF Algorithm and Its Application in Seabed Relief Image Matching

    Directory of Open Access Journals (Sweden)

    Zhang Hong-Mei

    2017-01-01

    Full Text Available The matching of seabed relief images is widely used in underwater relief matching navigation and target recognition, etc. However, influenced by various factors, conventional matching algorithms find it difficult to obtain an ideal result when matching seabed relief images. The SURF (Speeded Up Robust Features) algorithm is based on pairs of feature points to achieve matching, and can obtain good results in seabed relief image matching. However, in practical applications, the traditional SURF algorithm easily produces false matches, especially when the area's features are similar or not obvious, where the problem is more serious. In order to improve the robustness of the algorithm, this paper proposes an improved matching algorithm, which combines the SURF and RANSAC (Random Sample Consensus) algorithms. The new algorithm integrates the advantages of the two algorithms: first, the SURF algorithm is applied to detect and extract the feature points and to pre-match; second, the RANSAC algorithm is utilized to eliminate mismatching points, and then the accurate matching is accomplished with the correct matching points. The experimental results show that the improved algorithm overcomes the mismatching problem effectively and has better precision and faster speed than the traditional SURF algorithm.
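
    The OpenCV sketch below illustrates the same two-stage idea (SURF pre-matching followed by RANSAC mismatch rejection); SURF_create is available only in opencv-contrib builds, and the image file names are hypothetical:

        import cv2
        import numpy as np

        img1 = cv2.imread("relief_ref.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
        img2 = cv2.imread("relief_obs.png", cv2.IMREAD_GRAYSCALE)

        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)    # needs opencv-contrib
        kp1, des1 = surf.detectAndCompute(img1, None)
        kp2, des2 = surf.detectAndCompute(img2, None)

        # Pre-matching with a nearest-neighbour ratio test.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        raw = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in raw if m.distance < 0.7 * n.distance]

        # RANSAC on a homography model rejects the remaining mismatches.
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        inliers = [m for m, keep in zip(good, mask.ravel()) if keep]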

  4. A fast sparse reconstruction algorithm for electrical tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Tan, Chao; Dong, Feng

    2014-01-01

    Electrical tomography (ET) has been widely investigated due to its advantages of being non-radiative, low-cost and high-speed. However, the image reconstruction of ET is a nonlinear and ill-posed inverse problem and the imaging results are easily affected by measurement noise. A sparse reconstruction algorithm based on L1 regularization is robust to noise and consequently provides a high quality of reconstructed images. In this paper, a sparse reconstruction by separable approximation algorithm (SpaRSA) is extended to solve the ET inverse problem. The algorithm is competitive with the fastest state-of-the-art algorithms in solving the standard L2-L1 problem. However, it is computationally expensive when the dimension of the matrix is large. To further improve the calculation speed of solving inverse problems, a projection method based on the Krylov subspace is employed and combined with the SpaRSA algorithm. The proposed algorithm is tested with image reconstruction of electrical resistance tomography (ERT). Both simulation and experimental results demonstrate that the proposed method can reduce the computational time and improve the noise robustness for the image reconstruction. (paper)
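
    For illustration of the underlying L2-L1 problem, a minimal iterative soft-thresholding (ISTA) sketch is shown below. ISTA is a simpler relative of SpaRSA (which additionally uses Barzilai-Borwein step selection and continuation), and the toy recovery problem is an assumption:

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(A, b, lam, n_iter=500):
            # Minimise 0.5*||A x - b||_2^2 + lam*||x||_1 by iterative soft-thresholding.
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
            return x

        # Toy sparse recovery problem.
        rng = np.random.default_rng(1)
        A = rng.normal(size=(60, 200))
        x_true = np.zeros(200)
        x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
        b = A @ x_true + 0.01 * rng.normal(size=60)
        x_hat = ista(A, b, lam=0.1)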

  5. Geometric Design Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: The mission of the Geometric Design Laboratory (GDL) is to support the Office of Safety Research and Development in research related to the geometric design...

  6. Off-Line Robust Constrained MPC for Linear Time-Varying Systems with Persistent Disturbances

    Directory of Open Access Journals (Sweden)

    P. Bumroongsri

    2014-01-01

    Full Text Available An off-line robust constrained model predictive control (MPC) algorithm for linear time-varying (LTV) systems is developed. A novel feature is the fact that both model uncertainty and bounded additive disturbance are explicitly taken into account in the off-line formulation of MPC. In order to reduce the on-line computational burden, a sequence of explicit control laws corresponding to a sequence of positively invariant sets is computed off-line. At each sampling time, the smallest positively invariant set containing the measured state is determined and the corresponding control law is implemented in the process. The proposed MPC algorithm can guarantee robust stability while ensuring the satisfaction of input and output constraints. The effectiveness of the proposed MPC algorithm is illustrated by two examples.

  7. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.

  8. Multi-objective optimization of a plate and frame heat exchanger via genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Hamidreza; Najafi, Behzad [K. N. Toosi University of Technology, Department of Mechanical Engineering, Tehran (Iran)

    2010-06-15

    In the present paper, a plate and frame heat exchanger is considered. Multi-objective optimization using a genetic algorithm is developed in order to obtain a set of geometric design parameters which lead to minimum pressure drop and maximum overall heat transfer coefficient. Clearly, the considered objective functions are conflicting and no single solution can satisfy both objectives simultaneously. The multi-objective optimization procedure yields a set of optimal solutions, called the Pareto front, each of which is a trade-off between the objectives and can be selected by the user, regarding the application and the project's limits. The presented work takes care of numerous geometric parameters in the presence of logical constraints. A sensitivity analysis is also carried out to study the effects of different geometric parameters on the considered objective functions. Modeling the system and implementing the multi-objective optimization via genetic algorithm has been performed in MATLAB. (orig.)
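
    As a hedged, simplified illustration of the Pareto-front concept used here (not the paper's genetic algorithm), non-dominated designs can be filtered directly from a set of candidate evaluations; the candidate numbers below are made up:

        def pareto_front(designs):
            # designs: list of (pressure_drop, heat_transfer_coeff) pairs.
            # A design is kept if no other design has lower drop AND higher U.
            front = []
            for i, (dp_i, u_i) in enumerate(designs):
                dominated = any(dp_j <= dp_i and u_j >= u_i and (dp_j, u_j) != (dp_i, u_i)
                                for j, (dp_j, u_j) in enumerate(designs) if j != i)
                if not dominated:
                    front.append((dp_i, u_i))
            return front

        candidates = [(120.0, 850.0), (150.0, 940.0), (180.0, 930.0), (95.0, 700.0)]
        print(pareto_front(candidates))   # (180, 930) is dominated by (150, 940)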

  9. Robust Distributed Kalman Filter for Wireless Sensor Networks with Uncertain Communication Channels

    Directory of Open Access Journals (Sweden)

    Du Yong Kim

    2012-01-01

    Full Text Available We address a state estimation problem over a large-scale sensor network with uncertain communication channels. A consensus protocol is usually used to adapt to a large-scale sensor network. However, when certain parts of the communication channels are broken down, the accuracy performance is seriously degraded. Specifically, outliers in the channel or temporal disconnections are avoided via the proposed method for the practical implementation of distributed estimation over large-scale sensor networks. We handle this practical challenge by using an adaptive channel status estimator and a robust L1-norm Kalman filter in the design of the processor of the individual sensor node. Then, they are incorporated into the consensus algorithm in order to achieve robust distributed state estimation. The robust property of the proposed algorithm enables the sensor network to selectively weight sensors in normal conditions so that the filter can be practically useful.

  10. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    Science.gov (United States)

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified using specialized equipment operated manually by optometrists. The proposed work aims to provide an efficient imaging solution which can help in automating the process of glaucoma diagnosis using computer vision techniques on digital fundus images. The proposed method segments the optic disc using a geometrical-feature-based strategic framework which improves the detection accuracy and makes the algorithm invariant to illumination and noise. Corner thresholding and point contour joining based novel methods are proposed to construct smooth contours of the Optic Disc. Based on a clinical approach as used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which vessels first bend from the optic disc boundary, and connects them to obtain the contours of the Optic Cup. The proposed method has been compared with the ground truth marked by medical experts, and the similarity parameters used to determine the performance of the proposed method have yielded a high segmentation similarity. The proposed method has achieved a macro-averaged f-score of 0.9485 and an accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant and can be used for glaucoma screening over a large population, working in real time. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Machine learning meliorates computing and robustness in discrete combinatorial optimization problems.

    Directory of Open Access Journals (Sweden)

    Fushing Hsieh

    2016-11-01

    Full Text Available Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problems. Can machine learning algorithms extract such information content and make combinatorial optimizing tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that the answers to both questions are positive. One key reason is permutation invariance. That is, the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations on the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that helps the optimizing task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, and then reveal the robustness pertaining to the original version of the optimal solution. Two simulated systems, the Assignment problem and the Traveling Salesman problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose brand new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real world settings.

  12. Robust Design Optimization of an Aerospace Vehicle Propulsion System

    Directory of Open Access Journals (Sweden)

    Muhammad Aamir Raza

    2011-01-01

    Full Text Available This paper proposes a robust design optimization methodology for an aerospace vehicle propulsion system under design uncertainties. The approach consists of 3D geometric design coupled with complex internal ballistics, hybrid optimization, worst-case deviation, and an efficient statistical approach. The uncertainties are propagated through worst-case deviation using first-order orthogonal design matrices. The robustness assessment is measured using the framework of the mean-variance and percentile difference approach. A parametric sensitivity analysis is carried out to analyze the effects of design variable variations on the performance parameters. A hybrid simulated annealing and pattern search approach is used as the optimizer. The results show that the objective of optimizing the mean performance and minimizing the variation of the performance parameters, in terms of thrust ratio and total impulse, could be achieved while adhering to the system constraints.

  13. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  14. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    Science.gov (United States)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers liable to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented for running in the GPU using Cuda. For the most memory consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.

  15. Robust power system frequency control

    CERN Document Server

    Bevrani, Hassan

    2014-01-01

    This updated edition of the industry standard reference on power system frequency control provides practical, systematic and flexible algorithms for regulating load frequency, offering new solutions to the technical challenges introduced by the escalating role of distributed generation and renewable energy sources in smart electric grids. The author emphasizes the physical constraints and practical engineering issues related to frequency in a deregulated environment, while fostering a conceptual understanding of frequency regulation and robust control techniques. The resulting control strategi

  16. An FMS Dynamic Production Scheduling Algorithm Considering Cutting Tool Failure and Cutting Tool Life

    International Nuclear Information System (INIS)

    Setiawan, A; Wangsaputra, R; Halim, A H; Martawirya, Y Y

    2016-01-01

    This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to the unavailability of cutting tools, caused either by cutting tool failure or by reaching the tool life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system, and it runs fully automatically. Each machine has the same cutting tool configuration, consisting of different geometrical cutting tool types in each tool magazine. A job usually takes two stages. Each stage has sequential operations allocated to machines considering the cutting tool life. In a real situation, a cutting tool can fail before the cutting tool life is reached. The objective of this paper is to develop a dynamic scheduling algorithm for when a cutting tool breaks during unmanned operation and rescheduling is needed. The algorithm consists of four steps. The first step is generating the initial schedule, the second step is determining the cutting tool failure time, the third step is determining the system status at the cutting tool failure time, and the fourth step is rescheduling the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules differ from the initial schedule in the starting and completion times of each operation. (paper)

  17. A Hybrid Algorithm for Optimizing Multi- Modal Functions

    Institute of Scientific and Technical Information of China (English)

    Li Qinghua; Yang Shida; Ruan Youlin

    2006-01-01

    A new genetic algorithm inspired by musical performance is presented. Its novelty is that it mimics the musical process of searching for a perfect state of harmony, which greatly increases its robustness and at the same time gives it a new interpretation. Combining the advantages of the new genetic algorithm, the simplex algorithm and tabu search, a hybrid algorithm is proposed. In order to verify the effectiveness of the hybrid algorithm, it is applied to solving some typical numerical function optimization problems which are poorly solved by traditional genetic algorithms. The experimental results show that the hybrid algorithm is fast and reliable.

  18. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates between identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  19. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao

    2016-05-01

    Full Text Available Landslides are one of the most destructive geo-hazards that can bring about great threats to both human lives and infrastructure. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experimentation is an effective tool in landslide research to obtain critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for the 3D geometric reconstruction. However, complex imaging conditions such as rainfall, mass movement, illumination, and ponding will reduce the texture quality of the stereo images, bringing about difficulties in the image matching process and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraints based robust image matching approach for poor-texture close-range images, particularly useful for monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images for generation of scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first feature-based matching step, the triangulation process was performed based on the SIFT matches filtered by the Fundamental Matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the non-matched SIFT-derived feature points inside each triangle. In the following area-based image-matching step, the corresponding points of the non-matched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric

  20. A new approach for handling longitudinal count data with zero-inflation and overdispersion: poisson geometric process model.

    Science.gov (United States)

    Wan, Wai-Yin; Chan, Jennifer S K

    2009-08-01

    For time series of count data, correlated measurements, clustering as well as excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using deviance information criterion (DIC).

  1. Innovative hyperchaotic encryption algorithm for compressed video

    Science.gov (United States)

    Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang

    2002-12-01

    It is accepted that a stream cryptosystem, which implements encryption by selecting a few parts of the block data and header information of the compressed video stream, can achieve good real-time performance and flexibility. A chaotic random number generator, for example the logistic map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the logistic map with a Z(2^32 - 1) field linear congruential algorithm to strengthen the security of mono-chaotic cryptography; meanwhile, the real-time performance and flexibility of the chaotic sequence cryptography are maintained. It also integrates with dissymmetrical public-key cryptography and implements encryption and identity authentication on control parameters at the initialization phase. In accordance with the importance of the data in the compressed video stream, encryption is performed in a layered scheme. In the innovative hyperchaotic cryptography, the value and the updating frequency of the control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. The innovative hyperchaotic cryptography shows robust security under cryptanalysis, good real-time performance and flexible implementation capability through arithmetic evaluation and testing.
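
    A hedged, much-simplified toy sketch of mixing a logistic-map sequence with a linear congruential generator to form a keystream is given below; it is illustrative only, is not the paper's Z(2^32 - 1) scheme, and is not a secure cipher:

        def keystream(seed_x=0.3141592, seed_lcg=12345, n=16):
            # Toy keystream mixing a logistic map with an LCG (illustrative only).
            x, s = seed_x, seed_lcg
            out = []
            for _ in range(n):
                x = 3.9999 * x * (1.0 - x)              # logistic map, chaotic regime
                s = (1103515245 * s + 12345) % 2**31    # classic LCG step
                out.append((int(x * 256) ^ (s & 0xFF)) & 0xFF)
            return bytes(out)

        plaintext = b"compressed video"
        ks = keystream(n=len(plaintext))
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
        recovered = bytes(c ^ k for c, k in zip(ciphertext, ks))   # == plaintext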

  2. Design of Robust Adaptive Array Processors for Non-Stationary Ocean Environments

    National Research Council Canada - National Science Library

    Wage, Kathleen E

    2009-01-01

    The overall goal of this project is to design adaptive array processing algorithms that have good transient performance, are robust to mismatch, work with low sample support, and incorporate waveguide...

  3. Quantification of parameter uncertainty for robust control of shape memory alloy bending actuators

    International Nuclear Information System (INIS)

    Crews, John H; McMahan, Jerry A; Smith, Ralph C; Hannen, Jennifer C

    2013-01-01

    In this paper, we employ Bayesian parameter estimation techniques to derive gains for robust control of smart materials. Specifically, we demonstrate the feasibility of utilizing parameter uncertainty estimation provided by Markov chain Monte Carlo (MCMC) methods to determine controller gains for a shape memory alloy bending actuator. We treat the parameters in the equations governing the actuator’s temperature dynamics as uncertain and use the MCMC method to construct the probability densities for these parameters. The densities are then used to derive parameter bounds for robust control algorithms. For illustrative purposes, we construct a sliding mode controller based on the homogenized energy model and experimentally compare its performance to a proportional-integral controller. While sliding mode control is used here, the techniques described in this paper provide a useful starting point for many robust control algorithms. (paper)
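
    A hedged, minimal illustration of the MCMC step described here: a random-walk Metropolis sampler for one uncertain parameter, with percentile bounds extracted for robust gain design. The cooling model, data, and tuning constants are assumptions, not the authors' homogenized energy model:

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic exponential cooling data with one uncertain rate parameter theta.
        theta_true, sigma = 0.8, 0.05
        t = np.linspace(0.0, 5.0, 40)
        y = np.exp(-theta_true * t) + sigma * rng.normal(size=t.size)

        def log_post(theta):
            if theta <= 0.0:
                return -np.inf                     # flat prior on theta > 0
            resid = y - np.exp(-theta * t)
            return -0.5 * np.sum(resid**2) / sigma**2

        # Random-walk Metropolis sampling of the posterior of theta.
        samples, theta, lp = [], 0.5, log_post(0.5)
        for _ in range(20000):
            prop = theta + 0.05 * rng.normal()
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)

        lo, hi = np.percentile(samples[5000:], [2.5, 97.5])   # bounds for robust gains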

  4. Gems of combinatorial optimization and graph algorithms

    CERN Document Server

    Skutella, Martin; Stiller, Sebastian; Wagner, Dorothea

    2015-01-01

    Are you looking for new lectures for your course on algorithms, combinatorial optimization, or algorithmic game theory?  Maybe you need a convenient source of relevant, current topics for a graduate student or advanced undergraduate student seminar?  Or perhaps you just want an enjoyable look at some beautiful mathematical and algorithmic results, ideas, proofs, concepts, and techniques in discrete mathematics and theoretical computer science?   Gems of Combinatorial Optimization and Graph Algorithms is a handpicked collection of up-to-date articles, carefully prepared by a select group of international experts, who have contributed some of their most mathematically or algorithmically elegant ideas.  Topics include longest tours and Steiner trees in geometric spaces, cartograms, resource buying games, congestion games, selfish routing, revenue equivalence and shortest paths, scheduling, linear structures in graphs, contraction hierarchies, budgeted matching problems, and motifs in networks.   This ...

  5. Robust linearized image reconstruction for multifrequency EIT of the breast.

    Science.gov (United States)

    Boverman, Gregory; Kao, Tzu-Jen; Kulkarni, Rujuta; Kim, Bong Seok; Isaacson, David; Saulnier, Gary J; Newell, Jonathan C

    2008-10-01

    Electrical impedance tomography (EIT) is a developing imaging modality that is beginning to show promise for detecting and characterizing tumors in the breast. At Rensselaer Polytechnic Institute, we have developed a combined EIT-tomosynthesis system that allows for the coregistered and simultaneous analysis of the breast using EIT and X-ray imaging. A significant challenge in EIT is the design of computationally efficient image reconstruction algorithms which are robust to various forms of model mismatch. Specifically, we have implemented a scaling procedure that is robust to the presence of a thin highly-resistive layer of skin at the boundary of the breast and we have developed an algorithm to detect and exclude from the image reconstruction electrodes that are in poor contact with the breast. In our initial clinical studies, it has been difficult to ensure that all electrodes make adequate contact with the breast, and thus procedures for the use of data sets containing poorly contacting electrodes are particularly important. We also present a novel, efficient method to compute the Jacobian matrix for our linearized image reconstruction algorithm by reducing the computation of the sensitivity for each voxel to a quadratic form. Initial clinical results are presented, showing the potential of our algorithms to detect and localize breast tumors.

  6. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    Science.gov (United States)

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified as the comparison of the two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to achieve the recognition result as quickly as possible. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition accuracy and robustness by the proposed algorithm are better than those by the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.

  7. Robust transceiver design for reciprocal M × N interference channel based on statistical linearization approximation

    Science.gov (United States)

    Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad

    2017-12-01

    This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC), under imperfect channel state information (CSI). In this paper, two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas and IC operates in a time division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of signal-to-interference-plus-noise ratio (SINR). On the other hand, the second algorithm tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. Taylor expansion is exploited to approximate the effect of CSI imperfection on mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate sum rate performance of the proposed algorithms and the advantage of incorporating variation minimization into the transceiver design.

  8. Exact computation of the Voronoi Diagram of spheres in 3D, its topology and its geometric invariants

    DEFF Research Database (Denmark)

    Anton, François; Mioc, Darka; Santos, Marcelo

    2011-01-01

    In this paper, we are addressing the exact computation of the Delaunay graph (or quasi-triangulation) and the Voronoi diagram of spheres using Wu's algorithm. Our main contribution is first a methodology for automated derivation of invariants of the Delaunay empty circumcircle predicate for spheres and the Voronoi vertex of four spheres, then the application of this methodology to get all geometrical invariants that intervene in this problem and the exact computation of the Delaunay graph and the Voronoi diagram of spheres. To the best of our knowledge, there does not exist a comprehensive treatment of the exact computation with geometrical invariants of the Delaunay graph and the Voronoi diagram of spheres. Starting from the system of equations defining the zero-dimensional algebraic set of the problem, we are following Wu's algorithm to transform the initial system into an equivalent Wu characteristic

  9. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-09-14

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, the image and these basic questions as input and then outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  10. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong; Alfadly, Modar; Ghanem, Bernard

    2017-01-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, the image and these basic questions as input and then outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  11. Architecture for the Secret-Key BC3 Cryptography Algorithm

    Directory of Open Access Journals (Sweden)

    Arif Sasongko

    2011-08-01

    Full Text Available Cryptography is a very important aspect of data security. The focus of research in this field is shifting from the security aspect alone to also considering the implementation aspect. This paper aims to introduce the BC3 algorithm with a focus on its hardware implementation. It proposes an architecture for the hardware implementation of this algorithm. The BC3 algorithm is a secret-key cryptography algorithm developed with two considerations: robustness and implementation efficiency. This algorithm has been implemented in software and has good performance compared to the AES algorithm. BC3 is an improvement of the BC2 and AE cryptographic algorithms and is expected to have the same level of robustness and to gain competitive advantages in the implementation aspect. The development of the architecture pays much attention to (1) resource sharing and (2) having a single clock for each round. It exploits the regularity of the algorithm. This architecture is then implemented on an FPGA. This implementation has three times smaller area than AES, but is about five times faster. Furthermore, this BC3 hardware implementation has better performance than the BC3 software, both in the key expansion stage and in the randomizing stage. In the future, the security of this implementation must be reviewed, especially against side channel attacks.

  12. Automated robust registration of grossly misregistered whole-slide images with varying stains

    Science.gov (United States)

    Litjens, G.; Safferling, K.; Grabe, N.

    2016-03-01

    Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combination of multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slides to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow for fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a less stain-color-dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost-function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. Median landmark registration error was around 180 microns, which indicates that performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.

  13. Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties

    Science.gov (United States)

    Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing

    This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.

  14. Field-Split Preconditioned Inexact Newton Algorithms

    KAUST Repository

    Liu, Lulu

    2015-06-02

    The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm is presented as a complement to additive Schwarz preconditioned inexact Newton (ASPIN). At an algebraic level, ASPIN and MSPIN are variants of the same strategy to improve the convergence of systems with unbalanced nonlinearities; however, they have natural complementarity in practice. MSPIN is naturally based on partitioning of degrees of freedom in a nonlinear PDE system by field type rather than by subdomain, where a modest factor of concurrency can be sacrificed for physically motivated convergence robustness. ASPIN, originally introduced for decompositions into subdomains, is natural for high concurrency and reduction of global synchronization. We consider both types of inexact Newton algorithms in the field-split context, and we augment the classical convergence theory of ASPIN for the multiplicative case. Numerical experiments show that MSPIN can be significantly more robust than Newton methods based on global linearizations, and that MSPIN can be more robust than ASPIN and maintain fast convergence even for challenging problems, such as high Reynolds number Navier--Stokes equations.

  15. Field-Split Preconditioned Inexact Newton Algorithms

    KAUST Repository

    Liu, Lulu; Keyes, David E.

    2015-01-01

    The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm is presented as a complement to additive Schwarz preconditioned inexact Newton (ASPIN). At an algebraic level, ASPIN and MSPIN are variants of the same strategy to improve the convergence of systems with unbalanced nonlinearities; however, they have natural complementarity in practice. MSPIN is naturally based on partitioning of degrees of freedom in a nonlinear PDE system by field type rather than by subdomain, where a modest factor of concurrency can be sacrificed for physically motivated convergence robustness. ASPIN, originally introduced for decompositions into subdomains, is natural for high concurrency and reduction of global synchronization. We consider both types of inexact Newton algorithms in the field-split context, and we augment the classical convergence theory of ASPIN for the multiplicative case. Numerical experiments show that MSPIN can be significantly more robust than Newton methods based on global linearizations, and that MSPIN can be more robust than ASPIN and maintain fast convergence even for challenging problems, such as high Reynolds number Navier--Stokes equations.

  16. Incremental Activation Detection for Real-Time fMRI Series Using Robust Kalman Filter

    Directory of Open Access Journals (Sweden)

    Liang Li

    2014-01-01

    Full Text Available Real-time functional magnetic resonance imaging (rt-fMRI) is a technique that enables us to observe human brain activations in real time. However, unexpected noise that emerges during fMRI data collection, such as sudden swallowing, head movement and human manipulation, causes considerable confusion and a lack of robustness in the activation analysis. In this paper, a new activation detection method for rt-fMRI data is proposed based on a robust Kalman filter. The idea is to add a sparse noise term to the measurement update step of the extended Kalman filter in order to handle the additional sparse measurement noise. Hence, the robust Kalman filter is designed to improve robustness to outliers and can be computed separately for each voxel. The algorithm can compute activation maps for each scan within a repetition time, which meets the requirement for real-time analysis. Experimental results show that this new algorithm achieves high robustness and good real-time activation detection performance.
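
    As a hedged illustration of the general idea (not the authors' exact filter), a scalar Kalman filter whose measurement update clips large innovations can be run voxel-by-voxel to limit the influence of sparse outliers such as motion spikes:

        import numpy as np

        def robust_kalman_1d(z, q=1e-4, r=0.05, clip=3.0):
            # Scalar random-walk Kalman filter with a clipped (Huber-style) innovation.
            # z: one voxel's measurement time course.
            x, p = z[0], 1.0
            out = np.empty_like(z)
            for k, zk in enumerate(z):
                p = p + q                            # predict
                s = p + r                            # innovation variance
                nu = zk - x                          # innovation
                nu = np.clip(nu, -clip * np.sqrt(s), clip * np.sqrt(s))   # limit outliers
                gain = p / s
                x = x + gain * nu                    # update
                p = (1.0 - gain) * p
                out[k] = x
            return out

        rng = np.random.default_rng(3)
        series = np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.normal(size=200)
        series[70] += 5.0                            # a spike such as a motion artifact
        filtered = robust_kalman_1d(series)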

  17. The effect of photometric and geometric context on photometric and geometric lightness effects.

    Science.gov (United States)

    Lee, Thomas Y; Brainard, David H

    2014-01-24

    We measured the lightness of probe tabs embedded at different orientations in various contextual images presented on a computer-controlled stereo display. Two background context planes met along a horizontal roof-like ridge. Each plane was a graphic rendering of a set of achromatic surfaces with the simulated illumination for each plane controlled independently. Photometric context was varied by changing the difference in simulated illumination intensity between the two background planes. Geometric context was varied by changing the angle between them. We parsed the data into separate photometric effects and geometric effects. For fixed geometry, varying photometric context led to linear changes in both the photometric and geometric effects. Varying geometric context did not produce a statistically reliable change in either the photometric or geometric effects.

  18. Noncyclic geometric changes of quantum states

    International Nuclear Information System (INIS)

    Kult, David; Sjoeqvist, Erik; Aaberg, Johan

    2006-01-01

    Non-Abelian quantum holonomies, i.e., unitary state changes solely induced by geometric properties of a quantum system, have been much under focus in the physics community as generalizations of the Abelian Berry phase. Apart from being a general phenomenon displayed in various subfields of quantum physics, the use of holonomies has lately been suggested as a robust technique to obtain quantum gates; the building blocks of quantum computers. Non-Abelian holonomies are usually associated with cyclic changes of quantum systems, but here we consider a generalization to noncyclic evolutions. We argue that this open-path holonomy can be used to construct quantum gates. We also show that a structure of partially defined holonomies emerges from the open-path holonomy. This structure has no counterpart in the Abelian setting. We illustrate the general ideas using an example that may be accessible to tests in various physical systems

  19. A guidance and control algorithm for scent tracking micro-robotic vehicle swarms

    International Nuclear Information System (INIS)

    Dohner, J.L.

    1998-03-01

    Cooperative micro-robotic scent tracking vehicles are designed to collectively sniff out locations of high scent concentrations in unknown, geometrically complex environments. These vehicles are programmed with guidance and control algorithms that allow cooperation among vehicles. In this paper a cooperative guidance and control algorithm for scent tracking micro-robotic vehicles is presented. This algorithm is comprised of a sensory compensation sub-algorithm using point source cancellation, a guidance sub-algorithm using gradient descent tracking, and a control sub-algorithm using proportional feedback. The concepts of social rank and point source cancellation are new concepts introduced within. Simulation results for cooperative vehicle swarms are given. Limitations are discussed

  20. A guidance and control algorithm for scent tracking micro-robotic vehicle swarms

    Energy Technology Data Exchange (ETDEWEB)

    Dohner, J.L. [Sandia National Labs., Albuquerque, NM (United States). Structural Dynamics Dept.

    1998-03-01

    Cooperative micro-robotic scent tracking vehicles are designed to collectively sniff out locations of high scent concentrations in unknown, geometrically complex environments. These vehicles are programmed with guidance and control algorithms that allow cooperation among vehicles. In this paper a cooperative guidance and control algorithm for scent tracking micro-robotic vehicles is presented. This algorithm is comprised of a sensory compensation sub-algorithm using point source cancellation, a guidance sub-algorithm using gradient descent tracking, and a control sub-algorithm using proportional feedback. The concepts of social rank and point source cancellation are new concepts introduced within. Simulation results for cooperative vehicle swarms are given. Limitations are discussed.

  1. Implementation of the geometrical problem in CNC metal cutting machine

    Directory of Open Access Journals (Sweden)

    Erokhin V.V.

    2017-06-01

    Full Text Available The article deals with the tasks of managing the production process (technological process and technological equipment), with the most detailed analysis devoted to the implementation of the geometric problem in CNC machines. The influence of the solution of the geometric CNC problem on the accuracy of workpiece machining is analyzed through the implementation of a particular interpolation algorithm and the values of the discreteness of the movements of the working bodies of the CNC machine. A technique is given for forming a given trajectory of motion of the machine's executive organ, by means of which it is required to ensure the coordinated movement of the shaping coordinates according to a certain law, depending on the specified trajectory of the cutting edge of the tool. The advantages and disadvantages of implementing interpolation in CNC systems by various methods are considered, and particular attention is paid to combined methods of realizing interpolation.
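
    A hedged, simplified sketch of the geometric (interpolation) task discussed here: a programmed straight move is broken into axis positions quantised to the drive resolution, so the contour error is bounded by the discreteness of the movements. The function and numbers are illustrative, not the article's interpolator:

        import math

        def interpolate_line(start, end, resolution=0.01):
            # Approximate a straight move by axis positions quantised to the drive
            # resolution; the contour error is bounded by this discreteness.
            dx, dy = end[0] - start[0], end[1] - start[1]
            n = max(1, int(math.hypot(dx, dy) / resolution))
            points = []
            for i in range(n + 1):
                x = start[0] + dx * i / n
                y = start[1] + dy * i / n
                points.append((round(x / resolution) * resolution,
                               round(y / resolution) * resolution))
            return points

        path = interpolate_line((0.0, 0.0), (10.0, 4.0), resolution=0.01)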

  2. Computational Contact Mechanics Geometrically Exact Theory for Arbitrary Shaped Bodies

    CERN Document Server

    Konyukhov, Alexander

    2013-01-01

    This book contains a systematic analysis of geometrical situations leading to contact pairs -- point-to-surface, surface-to-surface, point-to-curve, curve-to-curve and curve-to-surface. Each contact pair is endowed with a special coordinate system based on its geometrical properties, such as a Gaussian surface coordinate system or a Serret-Frenet curve coordinate system. The formulation in covariant form allows one, in a straightforward fashion, to consider various constitutive relations for a certain pair, such as anisotropy for both the frictional and structural parts. Then standard methods well known in computational contact mechanics, such as penalty, Lagrange multiplier methods, combinations of both and others, are formulated in these coordinate systems. Such formulations then require the powerful apparatus of differential geometry of surfaces and curves as well as of convex analysis. The final goals of such transformations are then ready-for-implementation numerical algorithms within the finite e...

  3. Geometric model of pseudo-distance measurement in satellite location systems

    Science.gov (United States)

    Panchuk, K. L.; Lyashkov, A. A.; Lyubchinov, E. V.

    2018-04-01

    The existing mathematical model of pseudo-distance measurement in satellite location systems does not provide a precise solution of the problem, but rather an approximate one. This inaccuracy, together with bias in the measurement of distance from satellite to receiver, results in errors of several meters. The relevance of refining the current mathematical model is therefore obvious. The solution of the system of quadratic equations used in the current mathematical model is based on linearization. The objective of the paper is refinement of the current mathematical model and derivation of an analytical solution of the system of equations on its basis. In order to attain the objective, a geometric analysis is performed and a geometric interpretation of the equations is given. As a result, an equivalent system of equations that admits an analytical solution is derived. An example of the analytical solution's implementation is presented. Application of the analytical solution algorithm to the problem of pseudo-distance measurement in satellite location systems makes it possible to improve the accuracy of such measurements.
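
    For contrast with the analytical solution proposed in the paper, the conventional linearized (iterative least-squares) treatment of the pseudo-distance equations looks roughly like the sketch below; the satellite coordinates, ranges, and receiver state are synthetic values used only for illustration.

    ```python
    import numpy as np

    # Conventional linearised solution of the pseudorange equations
    #   rho_i = ||s_i - r|| + b,   b = receiver clock bias expressed in metres.
    # Shown only to illustrate the linearisation the paper seeks to avoid;
    # all numbers below are synthetic.

    def solve_pseudoranges(sats, rho, iters=10):
        x = np.zeros(4)                                   # [rx, ry, rz, b]
        for _ in range(iters):
            d = np.linalg.norm(sats - x[:3], axis=1)
            H = np.hstack([(x[:3] - sats) / d[:, None], np.ones((len(sats), 1))])
            dx, *_ = np.linalg.lstsq(H, rho - (d + x[3]), rcond=None)
            x += dx
        return x

    sats = np.array([[15600e3,  7540e3, 20140e3],
                     [18760e3,  2750e3, 18610e3],
                     [17610e3, 14630e3, 13480e3],
                     [19170e3,   610e3, 18390e3]])
    truth = np.array([1113e3, 1390e3, 6247e3, 100.0])
    rho = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
    print(solve_pseudoranges(sats, rho))  # converges to approximately `truth`
    ```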

  4. Robust Trajectory Design in Highly Perturbed Environments Leveraging Continuation Methods, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Research is proposed to investigate continuation methods to improve the robustness of trajectory design algorithms for spacecraft in highly perturbed dynamical...

  5. Hyper-parameter tuning of a decision tree induction algorithm

    NARCIS (Netherlands)

    Mantovani, R.G.; Horváth, T.; Cerri, R.; Vanschoren, J.; de Carvalho, A.C.P.L.F.

    2017-01-01

    Supervised classification is the most studied task in Machine Learning. Among the many algorithms used in such task, Decision Tree algorithms are a popular choice, since they are robust and efficient to construct. Moreover, they have the advantage of producing comprehensible models and satisfactory

  6. Geometric entanglement in topologically ordered states

    International Nuclear Information System (INIS)

    Orús, Román; Wei, Tzu-Chieh; Buerschaper, Oliver; Nest, Maarten Van den

    2014-01-01

    Here we investigate the connection between topological order and the geometric entanglement, as measured by the logarithm of the overlap between a given state and its closest product state of blocks. We do this for a variety of topologically ordered systems such as the toric code, double semion, colour code and quantum double models. As happens for the entanglement entropy, we find that for sufficiently large block sizes the geometric entanglement is, up to possible sub-leading corrections, the sum of two contributions: a bulk contribution obeying a boundary law times the number of blocks and a contribution quantifying the underlying pattern of long-range entanglement of the topologically ordered state. This topological contribution is also present in the case of single-spin blocks in most cases, and constitutes an alternative characterization of topological order for these quantum states based on a multipartite entanglement measure. In particular, we see that the topological term for the two-dimensional colour code is twice as much as the one for the toric code, in accordance with recent renormalization group arguments (Bombin et al 2012 New J. Phys. 14 073048). Motivated by these results, we also derive a general formalism to obtain upper- and lower-bounds to the geometric entanglement of states with a non-Abelian group symmetry, and which we explicitly use to analyse quantum double models. Furthermore, we also provide an analysis of the robustness of the topological contribution in terms of renormalization and perturbation theory arguments, as well as a numerical estimation for small systems. Some of the results in this paper rely on the ability to disentangle single sites from the quantum state, which is always possible for the systems that we consider. Additionally we relate our results to the behaviour of the relative entropy of entanglement in topologically ordered systems, and discuss a number of numerical approaches based on tensor networks that could be
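
    For reference, the block-wise geometric entanglement used in such analyses is standardly defined via the maximal overlap with product states of blocks (standard definition, not quoted from the paper):

        E_G(|\psi\rangle) = -\log_2 \, \max_{|\phi\rangle = \bigotimes_b |\phi_b\rangle} |\langle \phi | \psi \rangle|^2 ,

    where the maximum is taken over normalized product states of the chosen blocks.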

  7. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm is introduced, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm. Our results show the trained new ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.

  8. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    Science.gov (United States)

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

    Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuation under the influence of non-line-of-sight (NLOS) conditions in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. We therefore focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixture model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixture model (GMM). A dissimilarity matrix is built to generate relative coordinates of nodes by a multidimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. The algorithm performs localization well without requiring prior knowledge. Experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
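
    A hedged sketch of a GMDS-style pipeline, under several assumptions of my own (a log-distance path-loss model, a two-component mixture, and a least-squares affine alignment to the anchors); the authors' actual processing steps may differ.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.manifold import MDS

    # Sketch: denoise RSSI with a Gaussian mixture, convert RSSI to distance with
    # an assumed path-loss model, embed the dissimilarity matrix with non-metric
    # MDS, then map relative coordinates to absolute ones using the anchors
    # (all parameters are illustrative).

    def denoise_rssi(samples):
        gmm = GaussianMixture(n_components=2).fit(np.asarray(samples).reshape(-1, 1))
        return float(gmm.means_[np.argmax(gmm.weights_)])   # dominant component mean

    def rssi_to_distance(rssi, rssi0=-40.0, n=2.5):
        return 10.0 ** ((rssi0 - rssi) / (10.0 * n))         # assumed log-distance model

    def localize(dissimilarity, anchor_idx, anchor_xy):
        rel = MDS(n_components=2, metric=False,
                  dissimilarity='precomputed').fit_transform(dissimilarity)
        A = np.hstack([rel[anchor_idx], np.ones((len(anchor_idx), 1))])
        T, *_ = np.linalg.lstsq(A, anchor_xy, rcond=None)    # affine alignment to anchors
        return np.hstack([rel, np.ones((len(rel), 1))]) @ T
    ```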

  9. An optimisation algorithm for determination of treatment margins around moving and deformable targets

    International Nuclear Information System (INIS)

    Redpath, Anthony Thomas; Muren, Ludvig Paul

    2005-01-01

    Purpose: Determining treatment margins for inter-fractional motion of moving and deformable clinical target volumes (CTVs) remains a major challenge. This paper describes and applies an optimisation algorithm designed to derive such margins. Material and methods: The algorithm works by expanding the CTV, as determined from a pre-treatment or planning scan, to enclose the CTV positions observed during treatment. CTV positions during treatment may be obtained using, for example, repeat CT scanning and/or repeat electronic portal imaging (EPI). The algorithm can be applied to both individual patients and to a set of patients. The margins derived will minimise the excess volume outside the envelope that encloses all observed CTV positions (the CTV envelope). Initially, margins are set such that the envelope is more than adequately covered when the planning CTV is expanded. The algorithm uses an iterative method where the margins are sampled randomly and are then either increased or decreased randomly. The algorithm is tested on a set of 19 bladder cancer patients that underwent weekly repeat CT scanning and EPI throughout their treatment course. Results: From repeated runs on individual patients, the algorithm produces margins within a range of ±2 mm that lie among the best results found with an exhaustive search approach, and that agree within 3 mm with margins determined by a manual approach on the same data. The algorithm could be used to determine margins to cover any specified geometrical uncertainty, and allows for the determination of reduced margins by relaxing the coverage criteria, for example disregarding extreme CTV positions, or an arbitrarily selected volume fraction of the CTV envelope, and/or patients with extreme geometrical uncertainties. Conclusion: An optimisation approach to margin determination is found to give reproducible results within the accuracy required. The major advantage with this algorithm is that it is completely empirical, and it is
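
    The iterative random search described above can be sketched as follows; the coverage test and excess-volume measure are placeholders for the real envelope geometry, and the step size and iteration count are illustrative.

    ```python
    import numpy as np

    # Sketch of the random-search margin optimisation: margins start generous,
    # one margin direction is perturbed at random each iteration, and a trial is
    # kept only if the CTV envelope stays covered and the excess volume does not
    # increase. `covers` and `excess_volume` stand in for the real geometry code.

    def optimise_margins(covers, excess_volume, m_init, iters=5000, step=1.0, seed=0):
        rng = np.random.default_rng(seed)
        m = np.array(m_init, dtype=float)
        for _ in range(iters):
            i = rng.integers(len(m))                    # pick one margin direction
            trial = m.copy()
            trial[i] += rng.choice([-step, step])       # random increase or decrease
            if trial[i] >= 0 and covers(trial) and excess_volume(trial) <= excess_volume(m):
                m = trial
        return m
    ```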

  10. Optimisation in the Design of Environmental Sensor Networks with Robustness Consideration

    Science.gov (United States)

    Budi, Setia; de Souza, Paulo; Timms, Greg; Malhotra, Vishv; Turner, Paul

    2015-01-01

    This work proposes the design of Environmental Sensor Networks (ESN) through balancing robustness and redundancy. An Evolutionary Algorithm (EA) is employed to find the optimal placement of sensor nodes in the Region of Interest (RoI). Data quality issues are introduced to simulate their impact on the performance of the ESN. Spatial Regression Test (SRT) is also utilised to promote robustness in data quality of the designed ESN. The proposed method provides high network representativeness (fit for purpose) with minimum sensor redundancy (cost), and ensures robustness by enabling the network to continue to achieve its objectives when some sensors fail. PMID:26633392
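
    A toy evolutionary search for sensor placement, to make the idea concrete; the fitness terms (coverage of the RoI minus a redundancy penalty), the RoI sampling, and the mutation scheme are my own simplifications, not the authors' EA or SRT procedure.

    ```python
    import numpy as np

    # Toy (1+1)-style evolutionary search: mutate the sensor layout and keep the
    # child if its fitness (coverage minus a redundancy penalty) does not
    # decrease. All parameters are illustrative.

    def fitness(sensors, roi_points, r=0.15, lam=0.5):
        d = np.linalg.norm(roi_points[:, None, :] - sensors[None, :, :], axis=2)
        coverage = np.mean(d.min(axis=1) < r)            # representativeness
        redundancy = np.mean((d < r).sum(axis=1) > 1)    # overlapping coverage (cost)
        return coverage - lam * redundancy

    def evolve(n_sensors=10, gens=2000, seed=0):
        rng = np.random.default_rng(seed)
        roi = rng.random((400, 2))                       # sampled region of interest
        best = rng.random((n_sensors, 2))
        for _ in range(gens):
            child = np.clip(best + rng.normal(0.0, 0.05, best.shape), 0.0, 1.0)
            if fitness(child, roi) >= fitness(best, roi):
                best = child
        return best

    print(evolve().round(2))
    ```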

  11. Random broadcast on random geometric graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Elsasser, Robert [UNIV OF PADERBORN; Friedrich, Tobias [ICSI/BERKELEY; Sauerwald, Tomas [ICSI/BERKELEY

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs in the regimes where, with high probability, (i) the RGG is connected, or (ii) the RGG contains a giant component. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
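
    A small simulation of the push protocol on an RGG, using networkx; the node count, radius, and seed are arbitrary choices, and the giant component is extracted so the run terminates even below the connectivity threshold.

    ```python
    import random
    import networkx as nx

    # Push broadcast on a random geometric graph: every informed node pushes the
    # message to one uniformly random neighbour per round. Parameters are
    # illustrative only.

    def push_broadcast_rounds(n=500, radius=0.1, seed=0):
        random.seed(seed)
        g = nx.random_geometric_graph(n, radius, seed=seed)
        g = g.subgraph(max(nx.connected_components(g), key=len)).copy()  # giant component
        informed = {next(iter(g.nodes))}
        rounds = 0
        while len(informed) < g.number_of_nodes():
            informed |= {random.choice(list(g.neighbors(u))) for u in informed}
            rounds += 1
        return rounds

    print(push_broadcast_rounds())
    ```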

  12. Compactness and robustness: Applications in the solution of integral equations for chemical kinetics and electromagnetic scattering

    Science.gov (United States)

    Zhou, Yajun

    This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of the light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment to electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics. Meanwhile, exploiting the properties of
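
    The inverse Laplace problem referred to above can be written in the standard form (standard notation, not quoted from the thesis): the measured decay signal is the Laplace transform of the rate-constant distribution,

        S(t) = \int_0^{\infty} p(k)\, e^{-kt}\, \mathrm{d}k ,

    and the reconstruction task is to recover p(k) from samples of S(t), a classically ill-posed inversion.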

  13. Geometric group theory

    CERN Document Server

    Druţu, Cornelia

    2018-01-01

    The key idea in geometric group theory is to study infinite groups by endowing them with a metric and treating them as geometric spaces. This applies to many groups naturally appearing in topology, geometry, and algebra, such as fundamental groups of manifolds, groups of matrices with integer coefficients, etc. The primary focus of this book is to cover the foundations of geometric group theory, including coarse topology, ultralimits and asymptotic cones, hyperbolic groups, isoperimetric inequalities, growth of groups, amenability, Kazhdan's Property (T) and the Haagerup property, as well as their characterizations in terms of group actions on median spaces and spaces with walls. The book contains proofs of several fundamental results of geometric group theory, such as Gromov's theorem on groups of polynomial growth, Tits's alternative, Stallings's theorem on ends of groups, Dunwoody's accessibility theorem, the Mostow Rigidity Theorem, and quasiisometric rigidity theorems of Tukia and Schwartz. This is the f...

  14. Quasipolynomial Approach to Simultaneous Robust Control of Time-Delay Systems

    Directory of Open Access Journals (Sweden)

    Nikolaj Semenič

    2014-01-01

    Full Text Available A control law for retarded time-delay systems is considered, concerning assignment of the infinite closed-loop spectrum. An algebraic method for spectrum assignment is presented, with a unique optimization algorithm for minimization of the spectral abscissa and effective shaping of the chains of infinitely many closed-loop poles. Uncertainty in plant delays of a certain structure is considered in the sense of robust simultaneous stabilization. Robust performance is achieved using mixed sensitivity design, which is incorporated into the addressed control law.

  15. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
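
    A generic two-stage form consistent with that description (my notation; the paper's exact formulation may differ) minimizes the first-stage cost plus the scenario-weighted expectation of worst-case second-stage costs:

        \min_{x \in X} \; c^{\top}x \;+\; \sum_{s} p_{s} \, \max_{u \in \mathcal{U}_{s}} \; \min_{y \in Y(x,\,u)} q^{\top} y ,

    where p_s are the scenario probabilities and \mathcal{U}_s is the uncertainty set attached to scenario s.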

  16. Evaluation of Geometrical Modulation Transfer Function in Optical Lens System

    Directory of Open Access Journals (Sweden)

    Cheng-Mu Tsai

    2015-01-01

    Full Text Available This paper presents ray tracing algorithms to evaluate the geometrical modulation transfer function (GMTF) of an optical lens system. Two kinds of ray tracing can be applied to help simulate the point spread function (PSF) in the image plane: paraxial ray tracing and real ray tracing. Paraxial ray tracing is used to calculate first-order properties such as the effective focal length (EFL) and the entrance pupil position at low computational cost. However, the PSF can carry a large error if only paraxial ray tracing is used for the simulation. Formulas for real ray tracing are therefore applied to the sagittal and tangential line spread functions (LSF). Algorithms are developed to demonstrate the simulation of the LSF. Finally, the GMTF is evaluated from the fast Fourier transform (FFT) of the LSF.
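
    The final step (LSF to GMTF via an FFT) can be sketched as below; the ray-intercept data, bin width, and normalisation are assumptions made for illustration, not the paper's procedure.

    ```python
    import numpy as np

    # Sketch: build a line spread function from traced ray intercepts and take the
    # geometrical MTF as the normalised magnitude of its Fourier transform.
    # The synthetic ray intercepts below stand in for a real ray trace.

    def gmtf_from_rays(ray_heights_mm, n_bins=256, width_mm=0.2):
        edges = np.linspace(-width_mm / 2, width_mm / 2, n_bins + 1)
        lsf, _ = np.histogram(ray_heights_mm, bins=edges)       # line spread function
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                                            # unity at zero frequency
        freq = np.fft.rfftfreq(n_bins, d=width_mm / n_bins)      # cycles per mm
        return freq, mtf

    rays = np.random.default_rng(1).normal(0.0, 0.01, 5000)     # synthetic intercepts (mm)
    freq, mtf = gmtf_from_rays(rays)
    print(freq[1], mtf[1])
    ```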

  17. Observation Quality Control with a Robust Ensemble Kalman Filter

    KAUST Repository

    Roh, Soojin

    2013-12-01

    Current ensemble-based Kalman filter (EnKF) algorithms are not robust to gross observation errors caused by technical or human errors during the data collection process. In this paper, the authors consider two types of gross observational errors, additive statistical outliers and innovation outliers, and introduce a method to make the EnKF robust to gross observation errors. Using both a one-dimensional linear system of dynamics and a 40-variable Lorenz model, the performance of the proposed robust ensemble Kalman filter (REnKF) was tested, and it was found that the new approach greatly improves the performance of the filter in the presence of gross observation errors and leads to only a modest loss of accuracy with clean, outlier-free observations.
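
    One common way to harden the EnKF analysis step against innovation outliers is Huber-type down-weighting of observations with large normalized innovations; the sketch below shows that generic idea (it is not necessarily the REnKF scheme of the paper), assuming a diagonal observation-error covariance.

    ```python
    import numpy as np

    # Generic robustified stochastic EnKF analysis step (illustrative, not the
    # paper's REnKF): observations whose normalised innovation exceeds the Huber
    # threshold c get an inflated error variance, bounding their influence.
    # Assumes a diagonal observation-error covariance R.

    def robust_enkf_update(X, y, H, R, c=1.345, seed=0):
        rng = np.random.default_rng(seed)
        m = X.shape[1]                                   # ensemble size
        P = np.cov(X)                                    # ensemble covariance (state space)
        innov = y - H @ X.mean(axis=1)
        sigma = np.sqrt(np.diag(H @ P @ H.T + R))
        z = np.abs(innov) / sigma                        # normalised innovations
        w = np.where(z <= c, 1.0, c / z)                 # Huber weights in (0, 1]
        R_rob = np.diag(np.diag(R) / np.maximum(w, 1e-6))
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R_rob)
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R_rob, m).T
        return X + K @ (Y - H @ X)
    ```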

  18. Software Tools for Robust Analysis of High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Valentin Todorov

    2014-06-01

    Full Text Available The present work discusses robust multivariate methods specifically designed for high dimensions. Their implementation in R is presented and their application is illustrated on examples. The first group are algorithms for outlier detection, already introduced elsewhere and implemented in other packages. The value added of the new package is that all methods follow the same design pattern and thus can use the same graphical and diagnostic tools. The next topic covered is sparse principal components, including an object-oriented interface to the standard method proposed by Zou, Hastie, and Tibshirani (2006) and the robust one proposed by Croux, Filzmoser, and Fritz (2013). Robust partial least squares (see Hubert and Vanden Branden 2003) as well as partial least squares for discriminant analysis conclude the scope of the new package.

  19. Observation Quality Control with a Robust Ensemble Kalman Filter

    KAUST Repository

    Roh, Soojin; Genton, Marc G.; Jun, Mikyoung; Szunyogh, Istvan; Hoteit, Ibrahim

    2013-01-01

    Current ensemble-based Kalman filter (EnKF) algorithms are not robust to gross observation errors caused by technical or human errors during the data collection process. In this paper, the authors consider two types of gross observational errors, additive statistical outliers and innovation outliers, and introduce a method to make the EnKF robust to gross observation errors. Using both a one-dimensional linear system of dynamics and a 40-variable Lorenz model, the performance of the proposed robust ensemble Kalman filter (REnKF) was tested, and it was found that the new approach greatly improves the performance of the filter in the presence of gross observation errors and leads to only a modest loss of accuracy with clean, outlier-free observations.

  20. An H(∞) control approach to robust learning of feedforward neural networks.

    Science.gov (United States)

    Jing, Xingjian

    2011-09-01

    A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can thereby be revealed, especially for the case in which the output data change rapidly with respect to the input or are corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞) learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method.
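
    The predefined attenuation level mentioned above is a bound of the usual H(∞) form (standard statement; the paper's exact inequality and weighting may differ): the learning law is chosen so that, for a prescribed γ > 0, the estimation error satisfies

        \sum_{k} \|e_k\|^2 \;\le\; \gamma^2 \sum_{k} \|v_k\|^2 + \beta(e_0),

    where v_k is the disturbance (e.g., measurement noise), e_k the estimation error, and β(e_0) a term depending on the initial error.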