WorldWideScience

Sample records for robust geometric algorithms

  1. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  2. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry.

  3. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  4. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  5. Robust Geometric Control of a Distillation Column

    DEFF Research Database (Denmark)

    Kymmel, Mogens; Andersen, Henrik Weisberg

    1987-01-01

    A frequency domain method, which makes it possible to adjust multivariable controllers with respect to both nominal performance and robustness, is presented. The basic idea in the approach is that the designer assigns objectives such as steady-state tracking, maximum resonance peaks, bandwidth, m...... is used to examine and improve geometric control of a binary distillation column....

  6. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem with a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted into a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
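
    As a point of reference for the record above, the sketch below computes the standard (non-robust) MVDR weight vector from a sample covariance matrix and a presumed steering vector; the paper's nonconvex robust formulation and its convex reformulation are not reproduced, and the array geometry, steering angle and snapshot data are illustrative assumptions.

        import numpy as np

        def mvdr_weights(R, a):
            """Standard MVDR beamformer: w = R^{-1} a / (a^H R^{-1} a)."""
            Ria = np.linalg.solve(R, a)          # R^{-1} a without explicit inversion
            return Ria / (a.conj() @ Ria)

        # Illustrative 8-element uniform linear array, half-wavelength spacing.
        M, snapshots = 8, 200
        theta = np.deg2rad(20.0)                 # presumed direction of arrival
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

        rng = np.random.default_rng(0)
        X = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
        R = X @ X.conj().T / snapshots           # sample covariance (noise-only here)

        w = mvdr_weights(R, a)
        print("distortionless response |w^H a| =", abs(w.conj() @ a))  # ~1 by construction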

  7. Robust topology optimization accounting for geometric imperfections

    DEFF Research Database (Denmark)

    Schevenels, M.; Jansen, M.; Lombaert, Geert

    2013-01-01

    performance. As a consequence, the actual structure may be far from optimal. In this paper, a robust approach to topology optimization is presented, taking into account two types of geometric imperfections: variations of (1) the crosssections and (2) the locations of structural elements. The first type...... is modeled by means of a scalar non-Gaussian random field, which is represented as a translation process. The underlying Gaussian field is simulated by means of the EOLE method. The second type of imperfections is modeled as a Gaussian vector-valued random field, which is simulated directly by means...... of the EOLE method. In each iteration of the optimization process, the relevant statistics of the structural response are evaluated by means of a Monte Carlo simulation. The proposed methodology is successfully applied to a test problem involving the design of a compliant mechanism (for the first type...

  8. A robust human face detection algorithm

    Science.gov (United States)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a combination of skin color histogram analysis, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.

  9. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  10. Fast decoding algorithms for geometric coded apertures

    International Nuclear Information System (INIS)

    Byard, Kevin

    2015-01-01

    Fast decoding algorithms are described for the class of coded aperture designs known as geometric coded apertures which were introduced by Gourlay and Stephen. When compared to the direct decoding method, the algorithms significantly reduce the number of calculations required when performing the decoding for these apertures and hence speed up the decoding process. Experimental tests confirm the efficacy of these fast algorithms, demonstrating a speed up of approximately two to three orders of magnitude over direct decoding.

  11. Geometrical framework for robust portfolio optimization

    OpenAIRE

    Bazovkin, Pavel

    2014-01-01

    We consider a vector-valued multivariate risk measure that depends on the user's profile given by the user's utility. It is constructed on the basis of weighted-mean trimmed regions and represents the solution of an optimization problem. The key feature of this measure is convexity. We apply the measure to the portfolio selection problem, employing different measures of performance as objective functions in a common geometrical framework.

  12. A Robust Geometric Model for Argument Classification

    Science.gov (United States)

    Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego

    Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have been recently shown to benefit from rich sets of syntactic features. However argument classification is also highly dependent on the semantics of the involved lexicals. Empirical studies have shown that domain dependence of lexical information causes large performance drops in outside domain tests. In this paper a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.

  13. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  14. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
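
    The sketch below is a toy illustration of the truncation idea behind the two records above, not the algebraic dynamics algorithm itself: for a linear system dx/dt = A x the higher derivatives are A^k x, so an Nth-order one-step Taylor update can be written down directly and compared against a classical fourth-order Runge-Kutta step. The test system and step size are made up.

        import numpy as np

        def taylor_step(A, x, h, order):
            """One step of the Nth-order Taylor method for dx/dt = A x."""
            term, new_x = x.copy(), x.copy()
            for k in range(1, order + 1):
                term = (h / k) * (A @ term)      # accumulates h^k A^k x / k!
                new_x = new_x + term
            return new_x

        def rk4_step(A, x, h):
            """Classical 4th-order Runge-Kutta step for dx/dt = A x."""
            k1 = A @ x
            k2 = A @ (x + 0.5 * h * k1)
            k3 = A @ (x + 0.5 * h * k2)
            k4 = A @ (x + h * k3)
            return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        # Harmonic oscillator written as a linear system (exact solution is a rotation).
        A = np.array([[0.0, 1.0], [-1.0, 0.0]])
        x_taylor = np.array([1.0, 0.0])
        x_rk4 = np.array([1.0, 0.0])
        h, steps = 0.1, 1000
        for _ in range(steps):
            x_taylor = taylor_step(A, x_taylor, h, order=8)
            x_rk4 = rk4_step(A, x_rk4, h)

        exact = np.array([np.cos(h * steps), -np.sin(h * steps)])
        print("Taylor-8 error:", np.linalg.norm(x_taylor - exact))
        print("RK4 error:     ", np.linalg.norm(x_rk4 - exact))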

  15. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable in processing with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA the new powerful algorithm which improves many geometric computations and makes th

  16. Research on Geometric Positioning Algorithm of License Plate in Multidimensional Parameter Space

    Directory of Open Access Journals (Sweden)

    Yinhua Huan

    2014-05-01

    Considering the features of commonly used vehicle license plate location methods, and in order to find a consistent location for reference images with license plate features in a multidimensional parameter space, a new geometric location algorithm is proposed. The geometric location algorithm mainly comprises model training and real-time search. It not only adapts to linear and non-linear gray-scale changes, but also supports changes of scale and angle. Compared with mainstream locating software, numerical results show that under the same test conditions the position deviation of the geometric positioning algorithm is less than 0.5 pixel when the multidimensional parameter space is not taken into account, and less than 1.0 pixel, with an angle deviation of less than 1.0 degree, when the multidimensional parameter space is taken into account. This algorithm is robust, simple, practical and better than the traditional method.

  17. Robust Algebraic Multilevel Methods and Algorithms

    CERN Document Server

    Kraus, Johannes

    2009-01-01

    This book deals with algorithms for the solution of linear systems of algebraic equations with large-scale sparse matrices, with a focus on problems that are obtained after discretization of partial differential equations using finite element methods. Provides a systematic presentation of the recent advances in robust algebraic multilevel methods. Can be used for advanced courses on the topic.

  18. Geometric Algorithms for Part Orienting and Probing

    NARCIS (Netherlands)

    Panahi, F.

    2015-01-01

    In this thesis, detailed solutions are presented to several problems dealing with geometric shape and orientation of an object in the field of robotics and automation. We first have considered a general model for shape variations that allows variation along the entire boundary of an object, both in

  19. TAO-robust backpropagation learning algorithm.

    Science.gov (United States)

    Pernía-Espinoza, Alpha V; Ordieres-Meré, Joaquín B; Martínez-de-Pisón, Francisco J; González-Marcos, Ana

    2005-03-01

    In several fields, such as industrial modelling, multilayer feedforward neural networks are often used as universal function approximators. These supervised neural networks are commonly trained by a traditional backpropagation learning format, which minimises the mean squared error (mse) of the training data. However, in the presence of corrupted data (outliers) this training scheme may produce wrong models. We combine the benefits of the non-linear regression model tau-estimates [introduced by Tabatabai, M. A., Argyros, I. K., Robust estimation and testing for general nonlinear regression models, Applied Mathematics and Computation 58 (1993) 85-101] with the backpropagation algorithm to produce the TAO-robust learning algorithm, in order to deal with the problems of modelling with outliers. The cost function of this approach has a bounded influence function given by the weighted average of two psi functions, one corresponding to a very robust estimate and the other to a highly efficient estimate. The advantages of the proposed algorithm are studied with an example.
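
    The exact tau-estimate psi functions live in the cited reference rather than in this abstract, so the sketch below only illustrates the general idea of replacing the mean squared error with a bounded-influence loss during gradient training, using the Huber loss as a stand-in on a single-layer (perceptron-style) regression with a few injected outliers; the data and learning rate are illustrative assumptions.

        import numpy as np

        def huber_grad(r, delta=1.0):
            """Gradient of the Huber loss w.r.t. the residual r (bounded influence)."""
            return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

        rng = np.random.default_rng(1)
        X = rng.uniform(-1, 1, size=(200, 3))
        w_true = np.array([2.0, -1.0, 0.5])
        y = X @ w_true + 0.05 * rng.standard_normal(200)
        y[:10] += 20.0                                        # gross outliers in the training data

        w_mse, w_rob, lr = np.zeros(3), np.zeros(3), 0.05
        for _ in range(2000):
            r_mse = X @ w_mse - y
            r_rob = X @ w_rob - y
            w_mse -= lr * (X.T @ r_mse) / len(y)              # plain MSE gradient step
            w_rob -= lr * (X.T @ huber_grad(r_rob)) / len(y)  # bounded-influence step

        print("MSE-trained weights:   ", np.round(w_mse, 2))
        print("Robust-trained weights:", np.round(w_rob, 2), "(true:", w_true, ")")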

  20. Robust MST-Based Clustering Algorithm.

    Science.gov (United States)

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. The grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix whose elements denote supernodes, each combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
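
    As a small illustration of the minimax (path-based) similarity that the letter builds on, the sketch below computes minimax distances from an MST, using the standard fact that the minimax distance between two points equals the largest edge weight on the unique MST path connecting them. The density-based coarsening and the greedy supernode partitioning described in the abstract are not reproduced, and the two-blob data set is made up.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(2)
        pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
        D = squareform(pdist(pts))

        mst = minimum_spanning_tree(D).toarray()   # MST of the complete Euclidean graph
        mst = np.maximum(mst, mst.T)               # make the adjacency symmetric

        def minimax_distance(mst, src):
            """Largest edge on the MST path from src to every other node."""
            n = len(mst)
            dist = np.full(n, np.inf)
            dist[src] = 0.0
            stack = [src]
            while stack:
                u = stack.pop()
                for v in np.nonzero(mst[u])[0]:
                    cand = max(dist[u], mst[u, v])
                    if cand < dist[v]:
                        dist[v] = cand
                        stack.append(v)
            return dist

        d = minimax_distance(mst, src=0)
        print("largest within-cluster minimax distance: ", round(d[1:20].max(), 2))
        print("smallest between-cluster minimax distance:", round(d[20:].min(), 2))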

  1. Symplectic Geometric Algorithms for Hamiltonian Systems

    CERN Document Server

    Feng, Kang

    2010-01-01

    "Symplectic Geometry Algorithms for Hamiltonian Systems" will be useful not only for numerical analysts, but also for those in theoretical physics, computational chemistry, celestial mechanics, etc. The book generalizes and develops the generating function and Hamilton-Jacobi equation theory from the perspective of the symplectic geometry and symplectic algebra. It will be a useful resource for engineers and scientists in the fields of quantum theory, astrophysics, atomic and molecular dynamics, climate prediction, oil exploration, etc. Therefore a systematic research and development

  2. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution...

  3. Optimizing clinical performance and geometrical robustness of a new electrode device for intracranial tumor electroporation

    DEFF Research Database (Denmark)

    Mahmood, Faisal; Gehl, Julie

    2011-01-01

    and genes to intracranial tumors in humans, and demonstrate a method to optimize the design (i.e. geometry) of the electrode device prototype to improve both clinical performance and geometrical tolerance (robustness). We have employed a semiempirical objective function based on constraints similar to those...... sensitive to random geometrical deviations. The method is readily applicable to other electrode configurations....

  4. Robust ray-tracing algorithms for interactive dose rate evaluation

    International Nuclear Information System (INIS)

    Perrotte, L.

    2011-01-01

    More than ever, it is essential today to develop simulation tools to rapidly evaluate the dose rate received by operators working on nuclear sites. In order to easily study numerous different scenarios of intervention, the computation times of available software have to be lowered. This mainly implies accelerating the geometrical computations needed for the dose rate evaluation. These computations consist in finding and sorting the whole list of intersections between a big 3D scene and multiple groups of 'radiative' rays meeting at the point where the dose has to be measured. In order to perform all these computations in less than a second, we first propose a GPU algorithm that enables the efficient management of one big group of coherent rays. Then we present a modification of this algorithm that guarantees the robustness of the ray-triangle intersection tests through the elimination of the precision issues due to floating-point arithmetic. This modification does not require the definition of scene-dependent coefficients ('epsilon' style) and only implies a small loss of performance (less than 10%). Finally we propose an efficient strategy to handle multiple ray groups (corresponding to multiple radiative objects) which uses the previous results. Thanks to these improvements, we are able to perform an interactive and robust dose rate evaluation on big 3D scenes: all of the intersections (more than 13 million) between 700 000 triangles and 12 groups of 100 000 rays each are found, sorted along each ray and transferred to the CPU in 470 milliseconds. (author) [fr
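
    The abstract does not give the authors' GPU kernel or its exact-arithmetic safeguards, so the sketch below only illustrates the basic geometric kernel involved: a standard Moeller-Trumbore ray-triangle intersection test in plain Python. The epsilon tolerance used here is exactly the kind of scene-dependent coefficient the paper's robust variant is designed to avoid.

        import numpy as np

        def ray_triangle_intersect(orig, direc, v0, v1, v2, eps=1e-9):
            """Standard Moeller-Trumbore test; returns the ray parameter t or None."""
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direc, e2)
            det = np.dot(e1, p)
            if abs(det) < eps:                 # ray (nearly) parallel to the triangle plane
                return None
            inv_det = 1.0 / det
            s = orig - v0
            u = np.dot(s, p) * inv_det
            if u < 0.0 or u > 1.0:
                return None
            q = np.cross(s, e1)
            v = np.dot(direc, q) * inv_det
            if v < 0.0 or u + v > 1.0:
                return None
            t = np.dot(e2, q) * inv_det
            return t if t > eps else None

        # One 'radiative' ray against one triangle of a hypothetical 3D scene.
        t = ray_triangle_intersect(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]),
                                   np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]),
                                   np.array([0.0, 1.0, 0.0]))
        print("intersection parameter t =", t)   # expected: 1.0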

  5. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. In the ATLAS track reconstruction algorithm, an artificial neural network is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The robustness of the neural network algorithm is presented, probing its sensitivity to uncertainties in the detector conditions. The robustness is studied by evaluating the stability of the algorithm's performance under a range of variations in the inputs to the neural networks. Within reasonable variation magnitudes, the neural networks prove to be robust to most variation types.

  6. A Novel Evolutionary Algorithm for Designing Robust Analog Filters

    Directory of Open Access Journals (Sweden)

    Shaobo Li

    2018-03-01

    Designing robust circuits that withstand environmental perturbation and device degradation is critical for many applications. Traditional robust circuit design is mainly done by tuning parameters to improve system robustness. However, the topological structure of a system may set a limit on the robustness achievable through parameter tuning. This paper proposes a new evolutionary algorithm for robust design that exploits the open-ended topological search capability of genetic programming (GP) coupled with bond graph modeling. We applied our GP-based robust design (GPRD) algorithm to evolve robust lowpass and highpass analog filters. Compared with a traditional robust design approach based on a state-of-the-art real-parameter genetic algorithm (GA), our GPRD algorithm with a fitness criterion rewarding robustness with respect to parameter perturbations can evolve more robust filters than those achieved through parameter tuning alone. We also find that inappropriate GA tuning may mislead the search process and that multiple-simulation and perturbed fitness evaluation methods for evolving robustness have complementary behaviors with no absolute advantage of one over the other.

  7. Finite Algorithms for Robust Linear Regression

    DEFF Research Database (Denmark)

    Madsen, Kaj; Nielsen, Hans Bruun

    1990-01-01

    The Huber M-estimator for robust linear regression is analyzed. Newton type methods for solution of the problem are defined and analyzed, and finite convergence is proved. Numerical experiments with a large number of test problems demonstrate efficiency and indicate that this kind of approach may...
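
    The record above analyzes Newton-type methods with finite convergence; the sketch below instead takes the more common iteratively reweighted least squares (IRLS) route to the same Huber M-estimate, purely to make the estimator concrete. The regression data and the outliers are invented for illustration.

        import numpy as np

        def huber_irls(X, y, delta=1.345, iters=50):
            """Huber M-estimator for linear regression via iteratively reweighted least squares."""
            beta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary least squares start
            for _ in range(iters):
                r = y - X @ beta
                absr = np.maximum(np.abs(r), 1e-12)
                w = np.where(absr <= delta, 1.0, delta / absr)   # Huber weights
                W = np.diag(w)
                beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            return beta

        rng = np.random.default_rng(3)
        X = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])
        y = X @ np.array([1.0, 2.0]) + rng.standard_normal(100)
        y[:5] += 50.0                                        # a few gross outliers

        print("OLS:  ", np.linalg.lstsq(X, y, rcond=None)[0])
        print("Huber:", huber_irls(X, y))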

  8. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm in which the reduction factors are allowed to vary with time is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
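
    A minimal sketch of the kind of Widrow-Hoff adaptation discussed above, with a time-varying reduction factor (decaying step size) so that the weights settle even when the desired responses carry bounded noise. The specific decay schedule and the synthetic data are illustrative assumptions, not the paper's analysis.

        import numpy as np

        rng = np.random.default_rng(4)
        w_true = np.array([1.5, -2.0, 0.75])
        w = np.zeros(3)

        for n in range(1, 5001):
            x = rng.uniform(-1, 1, 3)                  # input pattern for this adaptation cycle
            d = x @ w_true + 0.1 * rng.uniform(-1, 1)  # desired response with bounded noise
            e = d - x @ w                              # a-priori error
            mu = 0.5 / (1.0 + 0.01 * n)                # time-varying reduction factor
            w = w + mu * e * x                         # Widrow-Hoff (LMS) update

        print("estimated weights:", np.round(w, 3), "true:", w_true)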

  9. A generic algorithm for constructing hierarchical representations of geometric objects

    International Nuclear Information System (INIS)

    Xavier, P.G.

    1995-01-01

    For a number of years, robotics researchers have exploited hierarchical representations of geometrical objects and scenes in motion-planning, collision-avoidance, and simulation. However, few general techniques exist for automatically constructing them. We present a generic, bottom-up algorithm that uses a heuristic clustering technique to produce balanced, coherent hierarchies. Its worst-case running time is O(N^2 log N), but for non-pathological cases it is O(N log N), where N is the number of input primitives. We have completed a preliminary C++ implementation for input collections of 3D convex polygons and 3D convex polyhedra and conducted simple experiments with scenes of up to 12,000 polygons, which take only a few minutes to process. We present examples using spheres and convex hulls as hierarchy primitives
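
    A toy illustration of the bottom-up idea described above: greedily merging the pair of nodes whose combined axis-aligned bounding box is smallest until one root remains. The naive pair search used here is far slower than the paper's O(N log N) heuristic clustering and the primitives are just 2D boxes; it is meant only to show the shape of such a hierarchy.

        import itertools

        def union_box(a, b):
            """Axis-aligned bounding box of two boxes given as (xmin, ymin, xmax, ymax)."""
            return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

        def area(b):
            return (b[2] - b[0]) * (b[3] - b[1])

        def build_hierarchy(leaves):
            """Greedy bottom-up construction: each node is (box, children-or-None)."""
            nodes = [(box, None) for box in leaves]
            while len(nodes) > 1:
                i, j = min(itertools.combinations(range(len(nodes)), 2),
                           key=lambda ij: area(union_box(nodes[ij[0]][0], nodes[ij[1]][0])))
                merged = (union_box(nodes[i][0], nodes[j][0]), (nodes[i], nodes[j]))
                nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
            return nodes[0]

        leaves = [(0, 0, 1, 1), (1.2, 0, 2.2, 1), (5, 5, 6, 6), (6.1, 5, 7, 6)]
        root = build_hierarchy(leaves)
        print("root bounding box:", root[0])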

  10. DC Algorithm for Extended Robust Support Vector Machine.

    Science.gov (United States)

    Fujiwara, Shuhei; Takeda, Akiko; Kanamori, Takafumi

    2017-05-01

    Nonconvex variants of support vector machines (SVMs) have been developed for various purposes. For example, robust SVMs attain robustness to outliers by using a nonconvex loss function, while extended ν-SVM (Eν-SVM) extends the range of the hyperparameter by introducing a nonconvex constraint. Here, we consider an extended robust support vector machine (ER-SVM), a robust variant of Eν-SVM. ER-SVM combines two types of nonconvexity from robust SVMs and Eν-SVM. Because of the two nonconvexities, the existing algorithm we proposed needs to be divided into two parts depending on whether the hyperparameter value is in the extended range or not. The algorithm also heuristically solves the nonconvex problem in the extended range. In this letter, we propose a new, efficient algorithm for ER-SVM. The algorithm deals with two types of nonconvexity while never entailing more computations than either Eν-SVM or robust SVM, and it finds a critical point of ER-SVM. Furthermore, we show that ER-SVM includes the existing robust SVMs as special cases. Numerical experiments confirm the effectiveness of integrating the two nonconvexities.

  11. A HYBRID ALGORITHM FOR THE ROBUST GRAPH COLORING PROBLEM

    Directory of Open Access Journals (Sweden)

    Román Anselmo Mora Gutiérrez

    2016-08-01

    A hybrid algorithm which combines mathematical programming techniques (Kruskal's algorithm and the strategy of maintaining arc consistency to solve the constraint satisfaction problem, CSP) and heuristic methods (the musical composition method and DSATUR) to solve the robust graph coloring problem (RGCP) is proposed in this paper. Experimental results show that this algorithm is better than the other algorithms presented in the literature.

  12. Frequency-Domain Robust Performance Condition for Controller Uncertainty in SISO LTI Systems: A Geometric Approach

    Directory of Open Access Journals (Sweden)

    Vahid Raissi Dehkordi

    2009-01-01

    This paper deals with the robust performance problem of a linear time-invariant control system in the presence of robust controller uncertainty. Assuming that plant uncertainty is modeled as an additive perturbation, a geometrical approach is followed in order to find a necessary and sufficient condition for robust performance in the form of a bound on the magnitude of controller uncertainty. This frequency domain bound is derived by converting the problem into an optimization problem, whose solution is shown to be more time-efficient than a conventional structured singular value calculation. The bound on controller uncertainty can be used in controller order reduction and implementation problems.

  13. A robust firearm identification algorithm of forensic ballistics specimens

    Science.gov (United States)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    There are several inherent difficulties in existing firearm identification algorithms, including the need for physical interpretation and their time-consuming nature. Therefore, the aim of this study is to propose a robust algorithm for firearm identification based on extracting a set of informative features from the segmented region of interest (ROI) using simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least squares estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.

  14. Green cloud environment by using robust planning algorithm

    Directory of Open Access Journals (Sweden)

    Jyoti Thaman

    2017-11-01

    Cloud computing provides a framework for seamless access to resources through a network. Access to resources is quantified through SLAs between service providers and users. Service providers try to best exploit their resources and reduce the idle times of the resources. Growing energy concerns further complicate the task of service providers. Users' requests are served by allocating user tasks to resources in cloud and grid environments through scheduling and planning algorithms. With only a few planning algorithms in existence, planning and scheduling algorithms are rarely differentiated. This paper proposes a robust hybrid planning algorithm, Robust Heterogeneous-Earliest-Finish-Time (RHEFT), for binding tasks to VMs. The allocation of tasks to VMs is based on a novel task matching algorithm called Interior Scheduling. The performance of the proposed RHEFT algorithm is compared with Heterogeneous-Earliest-Finish-Time (HEFT) and Distributed HEFT (DHEFT) for various parameters like utilization ratio, makespan, speed-up and energy consumption. RHEFT's consistent performance against HEFT and DHEFT establishes the robustness of the hybrid planning algorithm through rigorous simulations.

  15. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    Science.gov (United States)

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  16. A Robust Parallel Algorithm for Combinatorial Compressed Sensing

    Science.gov (United States)

    Mendoza-Smith, Rodrigo; Tanner, Jared W.; Wechsung, Florian

    2018-04-01

    In previous work two of the authors have shown that a vector $x \in \mathbb{R}^n$ with at most $k < n$ nonzeros can be recovered from its sketch $Ax$ in $\mathcal{O}(\mathrm{nnz}(A)\log k)$ operations via the Parallel-$\ell_0$ decoding algorithm, where $\mathrm{nnz}(A)$ denotes the number of nonzero entries in $A \in \mathbb{R}^{m \times n}$. In this paper we present the Robust-$\ell_0$ decoding algorithm, which robustifies Parallel-$\ell_0$ when the sketch $Ax$ is corrupted by additive noise. This robustness is achieved by approximating the asymptotic posterior distribution of values in the sketch given its corrupted measurements. We provide analytic expressions that approximate these posteriors under the assumptions that the nonzero entries in the signal and the noise are drawn from continuous distributions. Numerical experiments presented show that Robust-$\ell_0$ is superior to existing greedy and combinatorial compressed sensing algorithms in the presence of small to moderate signal-to-noise ratios in the setting of Gaussian signals and Gaussian additive noise.

  17. Constant-work-space algorithms for geometric problems

    Directory of Open Access Journals (Sweden)

    Tetsuo Asano

    2011-07-01

    Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings by slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of an Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete (Jakoby and Tantau 2003), this constrained problem can be solved in quadratic time using only constant work-space.
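
    As a small, self-contained illustration of the constant-work-space model discussed above (not one of the paper's constructions), the sketch below enumerates convex-hull vertices by gift wrapping: each step scans the read-only input array in linear time and keeps only O(1) extra variables, so the whole hull costs O(nh) time and constant work-space. The point set is arbitrary sample data.

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def hull_vertices(points):
            """Yield convex hull vertices one by one using O(1) extra storage (gift wrapping)."""
            start = min(range(len(points)), key=lambda i: points[i])   # leftmost point
            current = start
            while True:
                yield points[current]
                candidate = (current + 1) % len(points)
                for i in range(len(points)):                           # linear scan per output vertex
                    if cross(points[current], points[candidate], points[i]) < 0:
                        candidate = i
                current = candidate
                if current == start:
                    break

        pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (0.5, 1.5)]
        print(list(hull_vertices(pts)))   # the four corners of the square, in counter-clockwise order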

  18. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1997-12-31

    The H∞ robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, the genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits the direct control of the power tracking performances. In addition, the actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm results in better performance under realistic constraints. Also, it is found that the genetic algorithm could be used as an effective tool in robust design. 4 refs., 6 figs. (Author)

  19. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1998-12-31

    The H∞ robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, the genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits the direct control of the power tracking performances. In addition, the actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm results in better performance under realistic constraints. Also, it is found that the genetic algorithm could be used as an effective tool in robust design. 4 refs., 6 figs. (Author)

  20. Robustness of the ATLAS pixel clustering neural network algorithm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00407780; The ATLAS collaboration

    2016-01-01

    Proton-proton collisions at the energy frontier put strong constraints on track reconstruction algorithms. The algorithms depend heavily on accurate estimation of the position of particles as they traverse the inner detector elements. An artificial neural network algorithm is utilised to identify and split clusters of neighbouring read-out elements in the ATLAS pixel detector created by multiple charged particles. The method recovers otherwise lost tracks in dense environments where particles are separated by distances comparable to the size of the detector read-out elements. Such environments are highly relevant for LHC run 2, e.g. in searches for heavy resonances. Within the scope of run 2 track reconstruction performance and upgrades, the robustness of the neural network algorithm will be presented. The robustness has been studied by evaluating the stability of the algorithm's performance under a range of variations in the pixel detector conditions.

  1. A Feed-forward Geometrical Compensation and Adaptive Feedback Control Algorithm for Hydraulic Robot Manipulators

    DEFF Research Database (Denmark)

    Conrad, Finn; Zhou, Jianjun; Gabacik, Andrzej

    1998-01-01

    Invited paper presents a new control algorithm based on a feed-forward geometrical compensation strategy combined with adaptive feedback control.

  2. Geomagnetic matching navigation algorithm based on robust estimation

    Science.gov (United States)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    The outliers in the geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and its principle of robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is then converted to the solution of nonlinear equations. Finally, Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is decreased to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° while the other two algorithms fail to match when the outlier is 400 nT.

  3. A robust embedded vision system feasible white balance algorithm

    Science.gov (United States)

    Wang, Yuan; Yu, Feihong

    2018-01-01

    White balance is a very important part of the color image processing pipeline. In order to meet the need for efficiency and accuracy in embedded machine vision processing systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm mainly has three parts. Firstly, in order to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the following iterative method. After that, the bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. In order to verify the proposed algorithm's performance on an embedded vision system, a smart camera based on IMX6 DualLite, IMX291 and XC6130 is designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
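
    The abstract does not spell out which classical estimators are combined, so the sketch below shows only the commonly used gray-world step that matches the description of an initial parameter computed from R, G and B statistics; the demosaicing stage and the adaptive step adjustment are omitted, and the image is synthetic.

        import numpy as np

        def gray_world_gains(rgb):
            """Gray-world white balance: scale each channel so the channel means match."""
            means = rgb.reshape(-1, 3).mean(axis=0)
            return means.mean() / means            # per-channel gains (R, G, B)

        rng = np.random.default_rng(5)
        scene = rng.uniform(0.2, 0.8, size=(120, 160, 3))      # synthetic neutral-ish scene
        cast = scene * np.array([1.3, 1.0, 0.7])               # simulate a warm color cast

        gains = gray_world_gains(cast)
        balanced = np.clip(cast * gains, 0.0, 1.0)
        print("gains:", np.round(gains, 3))
        print("channel means after balancing:", np.round(balanced.reshape(-1, 3).mean(axis=0), 3))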

  4. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  5. Ant Colony Algorithm and Simulation for Robust Airport Gate Assignment

    Directory of Open Access Journals (Sweden)

    Hui Zhao

    2014-01-01

    Airport gate assignment is a core task for airport ground operations. Because the departure and arrival times of flights may be influenced by many random factors, an airport gate assignment scheme may encounter gate conflicts and many other problems. This paper aims at finding a robust solution for the airport gate assignment problem. A mixed integer model is proposed to formulate the problem, and an ant colony algorithm is designed to solve this model. Simulation results show that, when robustness is taken into consideration, the anti-disturbance ability of the airport gate assignment scheme is much improved.

  6. A robust color image watermarking algorithm against rotation attacks

    Science.gov (United States)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.

  7. ROBUST ALGORITHMS OF PARAMETRIC ESTIMATION IN SOME STABILIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    A.A. Vedyakov

    2016-07-01

    Subject of Research. The tasks of keeping dynamic systems in a stable state by ensuring the stability of the trivial solution for various dynamic systems operating in the learning mode, with the aid of tuning their parameters, are considered. Method. The problems are solved by applying the ideology of constructing robust finitely convergent algorithms. Main Results. The concepts of parametric algorithmization of stability and steady asymptotic stability are introduced, and results on the synthesis of coarse gradient algorithms that solve the posed tasks in a finite number of iterations are presented. Practical Relevance. The results of the article may be used for solving practical stabilization tasks in the operation of various engineering constructions and devices.

  8. Robust Semi-Supervised Manifold Learning Algorithm for Classification

    Directory of Open Access Journals (Sweden)

    Mingxia Chen

    2018-01-01

    In recent years, manifold learning methods have been widely used in data classification to tackle the curse of dimensionality problem, since they can discover the potential intrinsic low-dimensional structures of the high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms are proposed to predict the labels of the unlabeled points, taking into account label information. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first predicted, and then a regularization term is constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.

  9. Comparison of two (geometric) algorithms for auto OMA

    DEFF Research Database (Denmark)

    Juul, Martin; Olsen, Peter; Balling, Ole

    2018-01-01

    parameters. The two algorithms are compared and illustrated on simulated data. Different choices of distance measures are discussed and evaluated. It is illustrated how a simple distance measure outperforms traditional distance measures from other Auto OMA algorithms. Traditional measures are unable...

  10. Geometric Semantic Genetic Programming Algorithm and Slump Prediction

    OpenAIRE

    Xu, Juncai; Shen, Zhenzhong; Ren, Qingwen; Xie, Xin; Yang, Zhengyu

    2017-01-01

    Research on the performance of recycled concrete as a building material is an important subject in the current world. Given the complex composition of recycled concrete, conventional methods for forecasting slump scarcely obtain satisfactory results. Based on the theory of nonlinear prediction methods, we propose a recycled concrete slump prediction model based on geometric semantic genetic programming (GSGP) and combine it with recycled concrete features. Tests show that the model can accurately p...

  11. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    Science.gov (United States)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for removal of random noise like muscle artifact from the ECG signal. A transform domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size has been achieved by using the Griffiths' gradient, which uses the cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses the symmetric property of the signal to represent the signal in the frequency domain with fewer frequency coefficients than the discrete Fourier transform (DFT). The algorithm is implemented as an adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform domain LMS (TLMS) algorithm, both in the presence of white and colored observation noise. The reduction in convergence error achieved by the new algorithm with desired signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than a traditional adaptive filter using the LMS algorithm in terms of retaining the geometrical characteristics of the ECG signal.
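
    To make the Griffiths-type update referred to above concrete: where plain LMS uses the instantaneous product of the desired signal and the input, Griffiths' gradient substitutes a cross-correlation vector p between the desired signal and the input, giving w <- w + mu (p - y x). The sketch below is a minimal time-domain version with a fixed step size; the DCT transform stage and the robust variable-step rule of the paper are not reproduced, and the signals are synthetic.

        import numpy as np

        rng = np.random.default_rng(6)
        N, taps, mu = 20000, 8, 0.01
        h_true = rng.standard_normal(taps) * 0.5        # unknown system to identify
        x = rng.standard_normal(N)                      # input signal
        d = np.convolve(x, h_true)[:N] + 0.2 * rng.standard_normal(N)   # noisy desired signal

        # Cross-correlation vector p[k] ~ E[d(n) x(n-k)], estimated once from the data.
        p = np.array([np.mean(d[k:] * x[:N - k]) for k in range(taps)])

        w = np.zeros(taps)
        for n in range(taps, N):
            xn = x[n - taps + 1:n + 1][::-1]            # current input vector [x(n), ..., x(n-taps+1)]
            y = w @ xn
            w = w + mu * (p - y * xn)                   # Griffiths' LMS update

        print("estimated taps:", np.round(w, 2))
        print("true taps:     ", np.round(h_true, 2))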

  12. Quantum algorithms for topological and geometric analysis of data

    Science.gov (United States)

    Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo

    2016-01-01

    Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491

  13. ROBUST CONTROL ALGORITHM FOR MULTIVARIABLE PLANTS WITH QUANTIZED OUTPUT

    Directory of Open Access Journals (Sweden)

    A. A. Margun

    2017-01-01

    The paper deals with a robust output control algorithm for multivariable plants under disturbances. A plant is described by a system of linear differential equations with known relative degrees. Plant parameters are unknown but belong to a known closed bounded set. The plant state vector is unmeasured. The plant output is measured only via a static quantizer. The control algorithm is based on the high-gain feedback method. The developed controller provides exponential convergence of the tracking error to a bounded area. The area bounds depend on the quantizer parameters and the value of the external disturbances. Experimental approbation of the proposed control algorithm is performed with the use of the Twin Rotor MIMO System laboratory bench. This bench is a helicopter-like model with two degrees of freedom (pitch and yaw). DC motors are used as actuators. The output signals are measured via optical encoders. A mathematical model of the laboratory bench is obtained. The proposed algorithm was compared with a proportional-integral-derivative controller under output quantization. The obtained results have confirmed the efficiency of the proposed controller.

  14. Structure-preserving geometric algorithms for plasma physics and beam physics

    Science.gov (United States)

    Qin, Hong

    2017-10-01

    Standard algorithms in the plasma physics and beam physics do not possess the long-term accuracy and fidelity required in the study of multi-scale dynamics, because they do not preserve the geometric structures of the physical systems, such as the local energy-momentum conservation, symplectic structure and gauge symmetry. As a result, numerical errors accumulate coherently with time and long-term simulation results are not reliable. To overcome this difficulty, since 2008 structure-preserving geometric algorithms have been developed. This new generation of algorithms utilizes advanced techniques, such as interpolating differential forms, canonical and non-canonical symplectic integrators, and finite element exterior calculus to guarantee gauge symmetry and charge conservation, and the conservation of energy-momentum and symplectic structure. It is our vision that future numerical capabilities in plasma physics and beam physics will be based on the structure-preserving geometric algorithms.

  15. Discrete geometric analysis of message passing algorithm on graphs

    Science.gov (United States)

    Watanabe, Yusuke

    2010-04-01

    We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computations of marginal distributions and the normalization constant are often required. However, they are computationally intractable in general because of their cost. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e. having no cycle, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and possibly exhibits oscillatory and non-convergent behaviors. The thematic question of this thesis is "How are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph?" The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for analysis of the LBP algorithm, connecting properties of the graph and of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems including (non) convexity of the Bethe free energy, the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of the series. Moreover, we show a partial connection between the loop series and the graph zeta function.
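
    A minimal sum-product LBP sketch on a small pairwise model, to make the message-update step in the thesis summary concrete; on the tree shown it converges to the exact marginals, while adding an edge that closes a cycle is exactly the situation the thesis analyzes. The potentials and graph are made up.

        import numpy as np

        # Pairwise binary model: unary potentials phi[i] and edge potentials psi[(i, j)].
        edges = [(0, 1), (1, 2), (1, 3)]                     # a small tree; add (2, 3) to create a loop
        phi = {i: np.array([1.0, 2.0]) if i == 0 else np.array([1.0, 1.0]) for i in range(4)}
        psi = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}   # smoothing potentials

        # Directed messages m[(i, j)] from i to j, initialised uniformly, updated synchronously.
        msgs = {(a, b): np.ones(2) for (i, j) in edges for (a, b) in ((i, j), (j, i))}
        neighbors = {i: [j for (a, j) in msgs if a == i] for i in range(4)}

        for _ in range(50):
            new = {}
            for (i, j) in msgs:
                pot = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T   # indexed [x_i, x_j]
                incoming = phi[i] * np.prod([msgs[(k, i)] for k in neighbors[i] if k != j], axis=0)
                m = incoming @ pot                           # sum over x_i
                new[(i, j)] = m / m.sum()                    # normalise for numerical stability
            msgs = new

        beliefs = {i: phi[i] * np.prod([msgs[(k, i)] for k in neighbors[i]], axis=0) for i in range(4)}
        beliefs = {i: b / b.sum() for i, b in beliefs.items()}
        print({i: np.round(b, 3) for i, b in beliefs.items()})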

  16. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    Science.gov (United States)

    Zhang, Pengcheng; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Coatrieux, Jean-Louis; Li, Baosheng; Shu, Huazhong

    2013-09-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements.
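
    The deconvolution itself is more involved than can be shown here; the fragment below only illustrates the Butterworth-filtering ingredient named in the abstract, smoothing a noisy 1D fluence profile and zeroing the fluence outside the field. SciPy is assumed to be available, and the filter order, cutoff and field width are made-up values.

        import numpy as np
        from scipy.signal import butter, filtfilt

        # Illustrative 1D fluence profile: a flat field with superimposed high-frequency noise.
        x = np.linspace(-10.0, 10.0, 401)
        fluence = (np.abs(x) < 5.0).astype(float) + 0.05 * np.random.randn(x.size)

        # Low-pass Butterworth filter; the cutoff is given as a fraction of the Nyquist frequency.
        b, a = butter(N=4, Wn=0.1, btype="low")
        smoothed = filtfilt(b, a, fluence)            # zero-phase filtering, no edge shift

        # Approximation mentioned in the abstract: set the fluence outside the field to zero.
        smoothed[np.abs(x) >= 5.0] = 0.0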

  17. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    International Nuclear Information System (INIS)

    Zhang Pengcheng; Coatrieux, Jean-Louis; Shu Huazhong; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Li Baosheng

    2013-01-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements. (paper)

  18. Parallel algorithms for geometric connected component labeling on a hypercube multiprocessor

    Science.gov (United States)

    Belkhale, K. P.; Banerjee, P.

    1992-01-01

    Different algorithms for the geometric connected component labeling (GCCL) problem are defined, each of which involves d stages of message passing for a d-dimensional hypercube. The major idea is that in each stage a hypercube multiprocessor increases its knowledge of the domain. The algorithms under consideration include the QUAD algorithm for a small number of processors and the Overlap Quad algorithm for a large number of processors, subject to the locality of the connected sets. These algorithms differ in their run time, memory requirements, and message complexity. They were implemented on an Intel iPSC2/D4/MX hypercube.

  19. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    Science.gov (United States)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or
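
    For orientation only, a toy version of the conventional GTM correction that both methods build on: the measured regional means are modelled as a mixing matrix, derived from the regional spread functions, applied to the true activities, and the correction amounts to solving that small linear system. The 3-region numbers below are invented for illustration.

        import numpy as np

        # gtm[i, j] = fraction of region j's true activity that is observed in region i,
        # obtained in practice by convolving each region mask with the scanner PSF.
        gtm = np.array([[0.85, 0.10, 0.05],
                        [0.12, 0.80, 0.08],
                        [0.06, 0.09, 0.85]])

        observed_means = np.array([12.3, 8.7, 4.1])      # uncorrected regional mean values

        # Partial-volume-corrected regional activities: solve observed = gtm @ true.
        true_activity = np.linalg.solve(gtm, observed_means)
        print(true_activity)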

  20. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Caldwell, Curtis; Kusano, Maggie; Poon, Ian

    2012-01-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or

  1. A robust star identification algorithm with star shortlisting

    Science.gov (United States)

    Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon

    2018-05-01

    A star tracker provides the most accurate attitude solution in terms of arc seconds compared to the other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space (LIS)" mode. Star pattern recognition, also known as star identification algorithm, forms the most crucial part of a star tracker in the LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. Firstly, the star IDs are shortlisted based on worst-case patch mismatch, and later stars are identified in the image by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images having magnitude uncertainty, noise stars, positional deviation, and varying size of the field of view. The proposed idea is also benchmarked with the state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on the 3104 real star images captured by a star tracker SST-20S currently mounted on a satellite. The proposed technique can achieve an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results depict that the proposed technique is highly robust and achieves a high speed of identification suitable for actual space applications.

  2. Designing an Iterative Learning Control Algorithm Based on Process History using limited post process geometrical information

    DEFF Research Database (Denmark)

    Endelt, Benny Ørtoft; Volk, Wolfram

    2013-01-01

    , there is a number of obstacles which need to be addressed before an industrial implementation is possible, e.g. the proposed control algorithms are often limited by the ability to sample process data with both sufficient accuracy and robustness - this lack of robust sampling technologies is one of the main barriers...

  3. Robust and accurate detection algorithm for multimode polymer optical FBG sensor system

    DEFF Research Database (Denmark)

    Ganziy, Denis; Jespersen, O.; Rose, B.

    2015-01-01

    We propose a novel dynamic gate algorithm (DGA) for robust and fast peak detection. The algorithm uses a threshold-determined detection window and a center-of-gravity algorithm with bias compensation. Our experiment demonstrates that the DGA method is fast and robust with better stability and accur...
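
    A rough sketch of the two ingredients named in the abstract, a threshold-determined detection window and a center-of-gravity estimate of the peak wavelength; the bias-compensation step of the actual DGA is not described in the abstract and is omitted here, and the spectrum and threshold are illustrative.

        import numpy as np

        def cog_peak(wavelength, intensity, rel_threshold=0.5):
            """Center-of-gravity peak estimate inside a threshold-determined window."""
            gate = intensity >= rel_threshold * intensity.max()   # dynamic detection window
            w, s = wavelength[gate], intensity[gate]
            return np.sum(w * s) / np.sum(s)                      # intensity-weighted centroid

        # Illustrative FBG reflection spectrum: a Gaussian peak near 850 nm plus noise.
        wl = np.linspace(845.0, 855.0, 1000)
        spectrum = np.exp(-((wl - 850.0) / 0.3) ** 2) + 0.02 * np.random.randn(wl.size)
        print(f"estimated peak wavelength: {cog_peak(wl, spectrum):.4f} nm")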

  4. Robust digital image inpainting algorithm in the wireless environment

    Science.gov (United States)

    Karapetyan, G.; Sarukhanyan, H. G.; Agaian, S. S.

    2014-05-01

    and implementation steps of the presented algorithm. Furthermore, the simulation results show that the presented method is among the state-of-the-art and compares favorably against many available methods in the wireless environment. Robustness in the wireless environment with respect to the shape of the manually selected "marked" region is also illustrated. Currently, we are working on the expansion of this work to video and 3-D data.

  5. Robust Layout Synthesis of a MEM Crab-Leg Resonator Using a Constrained Genetic Algorithm

    DEFF Research Database (Denmark)

    Fan, Zhun; Achiche, Sofiane

    2007-01-01

    The research work carried out in this paper introduces a robust design method for layout synthesis of MEM resonator subject to inherent geometric uncertainties such as the fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained...

  6. Numerical nonlinear complex geometrical optics algorithm for the 3D Calderón problem

    DEFF Research Database (Denmark)

    Delbary, Fabrice; Knudsen, Kim

    2014-01-01

    to the generalized Laplace equation. The 3D problem was solved in theory in late 1980s using complex geometrical optics solutions and a scattering transform. Several approximations to the reconstruction method have been suggested and implemented numerically in the literature, but here, for the first time, a complete...... computer implementation of the full nonlinear algorithm is given. First a boundary integral equation is solved by a Nystrom method for the traces of the complex geometrical optics solutions, second the scattering transform is computed and inverted using fast Fourier transform, and finally a boundary value...

  7. Comparing the Robustness of Evolutionary Algorithms on the Basis of Benchmark Functions

    Directory of Open Access Journals (Sweden)

    DENIZ ULKER, E.

    2013-05-01

    In real-world optimization problems, the robustness of a solution is an important aspect alongside its quality. This paper investigates how sensitive optimization algorithms are to variations of their control parameters and to the random initialization of the solution set for fixed control parameters. The comparison covers three well-known evolutionary algorithms: the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm and the Harmony Search (HS) algorithm. Various benchmark functions with different characteristics are used for the evaluation of these algorithms. The experimental results show that the solution quality of the algorithms is not directly related to their robustness: a highly robust algorithm can have low solution quality, and an algorithm with high solution quality can be quite sensitive to parameter variations.

  8. Area collapse algorithm computing new curve of 2D geometric objects

    Science.gov (United States)

    Buczek, Michał Mateusz

    2017-06-01

    The processing of cartographic data demands human involvement. Up-to-date algorithms try to automate a part of this process. The goal is to obtain a digital model, or additional information about the shape and topology of the input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents topological and geometrical characteristics of the input data. Its shape depends on the algorithms used, such as medial axis, skeletonization, erosion, thinning, area collapse and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects, for example a polygon with a polygonal chain, or a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon and forms a single chain without branches. The computational process is automatic and repeatable. The requirements on input data are discussed. The author analyzes results based on the method of computing the ends of the output polygonal chains. Additional methods to improve the results are explored. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing the hatching of embankments is described.

  9. Parallel Algorithm of Geometrical Hashing Based on NumPy Package and Processes Pool

    Directory of Open Access Journals (Sweden)

    Klyachin Vladimir Aleksandrovich

    2015-10-01

    The article considers the problem of multi-dimensional geometric hashing. The paper describes a mathematical model of geometric hashing and considers an example of its use in point localization problems. A method of constructing the corresponding hash matrix with a parallel algorithm is considered. The paper proposes an algorithm for parallel geometric hashing based on the «process pool» design pattern. The implementation is written in the Python programming language, with the NumPy package used for manipulating multidimensional data. To implement the process pool it is proposed to use the ProcessPoolExecutor class from the concurrent.futures module, which has been included in the standard Python distribution since version 3.2. All the solutions are presented in the paper with corresponding UML class diagrams. The designed GeomNash package includes the classes Data, Result, GeomHash and Job. The results of the developed program are presented in the corresponding graphs. The article also presents a theoretical justification for using a process pool in the implementation of parallel algorithms: the process pool is worthwhile when t2 > (p/(p-1))·t1, where t1 is the time to transmit one unit of data between processes, t2 is the time to process one unit of data on a single processor, and p is the number of worker processes.
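
    The abstract already names the building blocks (Python, NumPy and concurrent.futures); the sketch below only shows how a hash computation over a large point set might be split across a process pool in that style. The cell-index hash and all constants are stand-ins, not the GeomNash implementation.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        CELL = 0.25                                    # hash-cell size (illustrative)

        def hash_chunk(points):
            """Map a chunk of 2D points to integer hash-cell keys."""
            cells = np.floor(points / CELL).astype(np.int64)
            # Combine the two cell coordinates into one key per point.
            return (cells[:, 0] * 73856093) ^ (cells[:, 1] * 19349663)

        if __name__ == "__main__":                     # guard required when using process pools
            pts = np.random.default_rng(0).random((1_000_000, 2))
            chunks = np.array_split(pts, 8)            # one work item per chunk
            with ProcessPoolExecutor(max_workers=4) as pool:
                keys = np.concatenate(list(pool.map(hash_chunk, chunks)))
            print(keys[:5])

    Consistent with the t2 > (p/(p-1))·t1 condition above, the pool only pays off when hashing a chunk costs noticeably more than shipping it to a worker process.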

  10. TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization

    Science.gov (United States)

    2016-11-28

    methods is presented in the book chapter [CG16d]. 4.4 Robust Timetable Information Problems. Timetable information is the process of determining a...

  11. Robustness of Multiple Clustering Algorithms on Hyperspectral Images

    National Research Council Canada - National Science Library

    Williams, Jason P

    2007-01-01

    .... Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two dimensional dataset in order to discover potential problems with the algorithms...

  12. An improved geometric algorithm for calculating the topology of lattice gauge fields

    International Nuclear Information System (INIS)

    Pugh, D.J.R.; Teper, M.; Oxford Univ.

    1989-01-01

    We implement the algorithm of Phillips and Stone on a hypercubic, periodic lattice and show that at currently accessible couplings the SU(2) topological charge so calculated is dominated by short-distance fluctuations. We propose and test an improvement to rid the measure of such lattice artifacts. We find that the improved algorithm produces a topological susceptibility that is consistent with that obtained by the alternative cooling method, thus resolving the controversial discrepancy between geometric and cooling methods. We briefly discuss the reasons for this and point out that our improvement is likely to be particularly effective when applied to the case of SU(3). (orig.)

  13. Canonical symplectic structure and structure-preserving geometric algorithms for Schrödinger-Maxwell systems

    Science.gov (United States)

    Chen, Qiang; Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei

    2017-11-01

    An infinite dimensional canonical symplectic structure and structure-preserving geometric algorithms are developed for the photon-matter interactions described by the Schrödinger-Maxwell equations. The algorithms preserve the symplectic structure of the system and the unitary nature of the wavefunctions, and bound the energy error of the simulation for all time-steps. This new numerical capability enables us to carry out first-principle based simulation study of important photon-matter interactions, such as the high harmonic generation and stabilization of ionization, with long-term accuracy and fidelity.

  14. Markov chain algorithms: a template for building future robust low-power systems

    Science.gov (United States)

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

    Although computational systems are looking towards post CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs) as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
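
    Boolean satisfiability is one of the four examples in the paper; the sketch below is a generic WalkSAT-style stochastic local search, i.e. a Markov chain over assignments, chosen to show why an occasionally corrupted transition merely delays convergence instead of breaking the computation. It is an illustration under those assumptions, not the authors' fault-injection framework.

        import random

        def walksat(clauses, n_vars, p_noise=0.5, max_flips=100_000, seed=1):
            """Markov-chain search for a satisfying assignment of a CNF formula.
            clauses: list of clauses, each a list of nonzero ints (negative = negated)."""
            rng = random.Random(seed)
            assign = {v: rng.choice([False, True]) for v in range(1, n_vars + 1)}

            def satisfied(clause):
                return any((lit > 0) == assign[abs(lit)] for lit in clause)

            def breaks(v):                      # clauses left unsatisfied if v is flipped
                assign[v] = not assign[v]
                count = sum(not satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return count

            for _ in range(max_flips):
                unsat = [c for c in clauses if not satisfied(c)]
                if not unsat:
                    return assign               # a satisfying assignment is an absorbing state
                clause = rng.choice(unsat)
                # Transition: flip a random or greedily chosen variable of an unsatisfied
                # clause; an erroneous flip just moves the chain to a state it can leave again.
                if rng.random() < p_noise:
                    var = abs(rng.choice(clause))
                else:
                    var = min({abs(lit) for lit in clause}, key=breaks)
                assign[var] = not assign[var]
            return None

        print(walksat([[1, -2], [2, 3], [-1, -3]], n_vars=3))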

  15. Robust K-Median and K-Means Clustering Algorithms for Incomplete Data

    Directory of Open Access Journals (Sweden)

    Jinhua Li

    2016-01-01

    Incomplete data with missing feature values are prevalent in clustering problems. Traditional clustering methods first estimate the missing values by imputation and then apply classical clustering algorithms for complete data, such as K-median and K-means. In practice, however, it is often hard to obtain accurate estimates of the missing values, which deteriorates the clustering performance. To enhance the robustness of clustering algorithms, this paper represents the missing values by interval data and introduces the concept of a robust cluster objective function. A minimax robust optimization (RO) formulation is presented to provide clustering results that are insensitive to estimation errors. To solve the proposed RO problem, we propose robust K-median and K-means clustering algorithms with low time and space complexity. Comparisons and analysis of experimental results on both artificially generated and real-world incomplete data sets validate the robustness and effectiveness of the proposed algorithms.
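
    A heavily simplified sketch of the interval idea: a missing feature is replaced by an interval of plausible values, and cluster assignment uses the worst-case distance over that interval, so the result is insensitive to where in the interval the true value lies. The midpoint-based centre update and the toy data are assumptions for illustration, not the paper's exact minimax formulation.

        import numpy as np

        def robust_kmeans(lo, hi, k, iters=50, seed=0):
            """K-means for interval data. lo/hi are (n, d) lower/upper bounds per feature
            (equal where the feature is observed). Assignment minimises the worst-case
            squared distance over each interval; centres are updated from interval midpoints."""
            rng = np.random.default_rng(seed)
            mid = (lo + hi) / 2.0
            centres = mid[rng.choice(len(mid), size=k, replace=False)]
            for _ in range(iters):
                # worst-case per-dimension deviation of every interval from every centre
                dev = np.maximum(np.abs(centres[:, None, :] - lo[None]),
                                 np.abs(hi[None] - centres[:, None, :]))
                labels = np.argmin((dev ** 2).sum(axis=2), axis=0)
                centres = np.array([mid[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centres[j] for j in range(k)])
            return labels, centres

        # toy data: two clusters; the second feature of every 7th point is "missing"
        X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5.0])
        lo, hi = X.copy(), X.copy()
        lo[::7, 1], hi[::7, 1] = -2.0, 7.0            # interval covering the plausible range
        print(robust_kmeans(lo, hi, k=2)[1])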

  16. Creating geometrically robust designs for highly sensitive problems using topology optimization: Acoustic cavity design

    DEFF Research Database (Denmark)

    Christiansen, Rasmus E.; Lazarov, Boyan S.; Jensen, Jakob S.

    2015-01-01

    Resonance and wave-propagation problems are known to be highly sensitive towards parameter variations. This paper discusses topology optimization formulations for creating designs that perform robustly under spatial variations for acoustic cavity problems. For several structural problems, robust...... and limitations are discussed. In addition, a known explicit penalization approach is considered for comparison. For near-uniform spatial variations it is shown that highly robust designs can be obtained using the double filter approach. It is finally demonstrated that taking non-uniform variations into account...... further improves the robustness of the designs....

  17. A methodology for the geometric design of heat recovery steam generators applying genetic algorithms

    International Nuclear Information System (INIS)

    Durán, M. Dolores; Valdés, Manuel; Rovira, Antonio; Rincón, E.

    2013-01-01

    This paper shows how the geometric design of heat recovery steam generators (HRSG) can be achieved. The method calculates the product of the overall heat transfer coefficient (U) by the area of the heat exchange surface (A) as a function of certain thermodynamic design parameters of the HRSG. A genetic algorithm is then applied to determine the best set of geometric parameters which comply with the desired UA product and, at the same time, result in a small heat exchange area and low pressure losses in the HRSG. In order to test this method, the design was applied to the HRSG of an existing plant and the results obtained were compared with the real exchange area of the steam generator. The findings show that the methodology is sound and offers reliable results even for complex HRSG designs. -- Highlights: ► The paper shows a methodology for the geometric design of heat recovery steam generators. ► Calculates product of the overall heat transfer coefficient by heat exchange area as a function of certain HRSG thermodynamic design parameters. ► It is a complement for the thermoeconomic optimization method. ► Genetic algorithms are used for solving the optimization problem

  18. A Robust Level-Set Algorithm for Centerline Extraction

    NARCIS (Netherlands)

    Telea, Alexandru; Vilanova, Anna

    2003-01-01

    We present a robust method for extracting 3D centerlines from volumetric datasets. We start from a 2D skeletonization method to locate voxels centered with respect to three orthogonal slicing directions. Next, we introduce a new detection criterion to extract the centerline voxels from the above

  19. A Prefiltered Cuckoo Search Algorithm with Geometric Operators for Solving Sudoku Problems

    Directory of Open Access Journals (Sweden)

    Ricardo Soto

    2014-01-01

    Sudoku is a famous logic-placement game, originally popularized in Japan and today widely employed as a pastime and as a testbed for search algorithms. The classic Sudoku consists of filling a 9×9 grid, divided into nine 3×3 regions, so that each column, row, and region contains the digits from 1 to 9 without repetition. The game is known to be NP-complete, and various complete and incomplete search algorithms exist that can solve different instances of it. In this paper, we present a new cuckoo search algorithm for solving Sudoku puzzles that combines prefiltering phases and geometric operations. The geometric operators allow one to move correctly toward promising regions of the combinatorial space, while the prefiltering phases delete in advance from the domains those values that do not lead to any feasible solution. This integration leads to more efficient domain filtering and, as a consequence, to a faster solving process. We illustrate encouraging experimental results in which our approach competes favorably with the best approximate methods reported in the literature.

  20. Robust consensus algorithm for multi-agent systems with exogenous disturbances under convergence conditions

    Science.gov (United States)

    Jiang, Yulian; Liu, Jianchang; Tan, Shubin; Ming, Pingsong

    2014-09-01

    In this paper, a robust consensus algorithm is developed and sufficient conditions for convergence to consensus are proposed for a multi-agent system (MAS) with exogenous disturbances subject to partial information. By utilizing H∞ robust control, differential game theory and a design-based approach, the consensus problem of the MAS with bounded exogenous interference is resolved and the disturbances are restrained simultaneously. Attention is focused on designing an H∞ robust controller (the robust consensus algorithm) based on minimisation of our proposed rational and individual cost functions according to the goals of the MAS. Furthermore, sufficient conditions for convergence of the robust consensus algorithm are given. An example is employed to demonstrate that our results are effective and more capable of restraining exogenous disturbances than existing methods.

  1. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    Science.gov (United States)

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed that takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The coding scheme and the steps of the algorithm are described. The results of the example demonstrate that the model and algorithm are effective: compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.

  2. Robust Optimization Model and Algorithm for Railway Freight Center Location Problem in Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Xing-cai Liu

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the railway freight center location problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed that takes the expected cost and the deviation value of the scenarios as the objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented; it combines an adaptive clonal selection algorithm with the Cloud Model, which improves the convergence rate. The coding scheme and the steps of the algorithm are described. The results of the example demonstrate that the model and algorithm are effective: compared with the expected value cases, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, which shows that the result of the robust model is more reliable.

  3. Robust perception algorithms for road and track autonomous following

    Science.gov (United States)

    Marion, Vincent; Lecointe, Olivier; Lewandowski, Cecile; Morillon, Joel G.; Aufrere, Romuald; Marcotegui, Beatrix; Chapuis, Roland; Beucher, Serge

    2004-09-01

    The French Military Robotic Study Program (introduced in Aerosense 2003), sponsored by the French Defense Procurement Agency and managed by Thales Airborne Systems as the prime contractor, focuses on about 15 robotic themes which can provide an immediate "operational add-on value." The paper details the "road and track following" theme (named AUT2), whose main purpose was to develop a vision-based sub-system to automatically detect the roadsides of an extended range of roads and tracks suitable for military missions. To achieve this goal, efforts focused on three main areas: (1) improvement of image quality at the algorithms' inputs, thanks to the selection of adapted video cameras and the development of a THALES-patented algorithm that removes in real time most of the disturbing shadows in images taken in natural environments, enhances contrast and reduces reflection effects due to films of water; (2) selection and improvement of two complementary algorithms (one segment-oriented, the other region-based); (3) development of a fusion process between both algorithms, which feeds a road model in real time with the best available data. Each step has been developed so that the global perception process is reliable and safe: for example, the process continuously evaluates itself and outputs confidence criteria qualifying the roadside detection. The paper presents the processes in detail, along with the results obtained from the military acceptance tests that were passed, which trigger the next step: autonomous track following (named AUT3).

  4. Hybrid Robust Multi-Objective Evolutionary Optimization Algorithm

    Science.gov (United States)

    2009-03-10

    xfar by xint. Else, generate a new individual, using the Sobol pseudo-random sequence generator within the upper and lower bounds of the variables...

  5. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    Science.gov (United States)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). First, we discuss the combination of bundle adjustment and GAs that we have proposed in order to improve the efficiency and success of 3D reconstruction. We use GAs to improve the fitness of the initial values used in the optimization problem, which reliably increases the convergence rate. The extracted geometric constraints are used first to obtain an estimate of the focal length that helps in the initialization step. Matching of homologous points together with the constraints is then used to estimate the 3D model. Our new method offers several advantages: it reduces the number of parameters estimated in the optimization step, decreases the number of images required, saves time and stabilizes the quality of the 3D results. In the end, without any prior information about the 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the previously defined geometric constraints. Various data and examples are used to highlight the efficiency and competitiveness of the present approach.

  6. Robust Bayesian Algorithm for Targeted Compound Screening in Forensic Toxicology.

    Science.gov (United States)

    Woldegebriel, Michael; Gonsalves, John; van Asten, Arian; Vivó-Truyols, Gabriel

    2016-02-16

    As part of forensic toxicological investigation of cases involving unexpected death of an individual, targeted or untargeted xenobiotic screening of post-mortem samples is normally conducted. To this end, liquid chromatography (LC) coupled to high-resolution mass spectrometry (MS) is typically employed. For data analysis, almost all commonly applied algorithms are threshold-based (frequentist). These algorithms examine the value of a certain measurement (e.g., peak height) to decide whether a certain xenobiotic of interest (XOI) is present/absent, yielding a binary output. Frequentist methods pose a problem when several sources of information [e.g., shape of the chromatographic peak, isotopic distribution, estimated mass-to-charge ratio (m/z), adduct, etc.] need to be combined, requiring the approach to make arbitrary decisions at substep levels of data analysis. We hereby introduce a novel Bayesian probabilistic algorithm for toxicological screening. The method tackles the problem with a different strategy. It is not aimed at reaching a final conclusion regarding the presence of the XOI, but it estimates its probability. The algorithm effectively and efficiently combines all possible pieces of evidence from the chromatogram and calculates the posterior probability of the presence/absence of XOI features. This way, the model can accommodate more information by updating the probability if extra evidence is acquired. The final probabilistic result assists the end user to make a final decision with respect to the presence/absence of the xenobiotic. The Bayesian method was validated and found to perform better (in terms of false positives and false negatives) than the vendor-supplied software package.
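
    A toy illustration of the probabilistic strategy (not the published model): each piece of evidence is expressed as a likelihood ratio for "compound present" versus "absent" and combined with Bayes' rule into one posterior probability, instead of a chain of hard thresholds. The prior and the ratios below are invented.

        import numpy as np

        def posterior_presence(likelihood_ratios, prior=0.01):
            """Combine independent pieces of evidence, each given as the likelihood ratio
            P(evidence | present) / P(evidence | absent), into a posterior probability."""
            prior_odds = prior / (1.0 - prior)
            posterior_odds = prior_odds * np.prod(likelihood_ratios)
            return posterior_odds / (1.0 + posterior_odds)

        # Illustrative evidence: peak shape, isotope pattern, mass accuracy, adduct match.
        evidence = [8.0, 5.5, 12.0, 0.9]          # >1 supports presence, <1 supports absence
        print(f"P(compound present | evidence) = {posterior_presence(evidence):.3f}")

    Acquiring an extra piece of evidence simply multiplies in another ratio, which is the updating behaviour the abstract describes.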

  7. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System

    Directory of Open Access Journals (Sweden)

    Litian Duan

    2016-11-01

    In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation; after that, an artificial immune system (including immune cloning, immune mutation and immune suppression) quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that the readers operate fairly with a larger effective interrogation range and lower interference. The simulation results indicate that, compared with the state-of-the-art algorithm, GD-MRSOA-AIS efficiently schedules the multiple readers with a fairer resource allocation scheme and a larger effective interrogation range.

  8. ROBUST-HYBRID GENETIC ALGORITHM FOR A FLOW-SHOP SCHEDULING PROBLEM (A Case Study at PT FSCM Manufacturing Indonesia)

    Directory of Open Access Journals (Sweden)

    Johan Soewanda

    2007-01-01

    This paper discusses the application of a Robust Hybrid Genetic Algorithm to solve a flow-shop scheduling problem. The proposed algorithm attempts to minimize the makespan. The case of PT. FSCM Manufacturing Indonesia Plant 4 was used as a test case to evaluate the performance of the proposed algorithm, which was compared to Ant Colony, Genetic-Tabu, the Hybrid Genetic Algorithm, and the company's own algorithm. We found that the Robust Hybrid Genetic Algorithm produces statistically better results than the company's algorithm, and results equivalent to those of Ant Colony, Genetic-Tabu, and the Hybrid Genetic Algorithm. In addition, the Robust Hybrid Genetic Algorithm requires less computational time than the Hybrid Genetic Algorithm.

  9. Robust Cyclic MUSIC Algorithm for Finding Directions in Impulsive Noise Environment

    Directory of Open Access Journals (Sweden)

    Sen Li

    2017-01-01

    This paper addresses the issue of direction finding of a cyclostationary signal in impulsive noise environments modeled by an α-stable distribution. Since the α-stable distribution does not have finite second-order statistics, conventional cyclic correlation-based signal-selective direction finding algorithms do not work effectively. To resolve this problem, we define two robust cyclic correlation functions, derived from the robust statistics property of the correntropy and from a nonlinear transformation, respectively. The MUSIC algorithm with the robust cyclic correlation matrix of the array's received signals is then used to estimate the direction of the cyclostationary signal in the presence of impulsive noise. The computer simulation results demonstrate that the two proposed robust cyclic correlation-based algorithms outperform the conventional cyclic correlation and fractional lower-order cyclic correlation based methods.

  10. A Modified LQG Algorithm (MLQG) for Robust Control of Nonlinear Multivariable Systems

    Directory of Open Access Journals (Sweden)

    Jens G. Balchen

    1993-07-01

    The original LQG algorithm is often criticized for its lack of robustness. This is because in the design of the estimator (Kalman filter) the process disturbance is assumed to be white noise. If the estimator is to give good estimates, the Kalman gain must be increased, which means that the estimator fails to be robust. A solution to this problem is to replace the proportional Kalman gain matrix by a dynamic PI algorithm and the proportional LQ feedback gain matrix by a PI algorithm. A tuning method is developed which allows a modified LQG control system (MLQG) to be tuned with only two tuning parameters.

  11. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    Science.gov (United States)

    Kurien, Binoy George

    Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.

  12. APPLICATION OF GENETIC ALGORITHMS FOR ROBUST PARAMETER OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    N. Belavendram

    2010-12-01

    Parameter optimization can be achieved by many methods, such as Monte-Carlo, full, and fractional factorial designs. Genetic algorithms (GA) are fairly recent in this respect but afford a novel method of parameter optimization. In a GA, there is an initial pool of individuals, each with its own specific phenotypic trait expressed as a 'genetic chromosome'. Different genes enable individuals with different fitness levels to reproduce according to natural reproductive gene theory, and this reproduction is established in terms of selection, crossover and mutation of the reproducing genes. The resulting child generation of individuals has a better fitness level, akin to natural selection, namely evolution: populations evolve towards the fittest individuals. Such a mechanism has a parallel application in parameter optimization. The factors in a parameter design can be expressed as a genetic analogue in a pool of sub-optimal random solutions, and allowing this pool of sub-optimal solutions to evolve over several generations produces fitter generations converging to a pre-defined engineering optimum. In this paper, a genetic algorithm is used to optimize a seven-factor non-linear equation for a Wheatstone bridge. A comparison of the full factorial design against the GA method shows that the GA method is about 1200 times faster in finding a comparable solution.

  13. A multi-frame particle tracking algorithm robust against input noise

    International Nuclear Information System (INIS)

    Li, Dongning; Zhang, Yuanhui; Sun, Yigang; Yan, Wei

    2008-01-01

    The performance of a particle tracking algorithm, which detects particle trajectories from discretely recorded particle positions, can be substantially hindered by input noise. In this paper, a particle tracking algorithm is developed which is robust against input noise. The algorithm employs regression instead of the extrapolation usually employed by existing algorithms to predict future particle positions. If a trajectory cannot be linked to a particle at a frame, the algorithm can still proceed by trying to find a candidate at the next frame. The connectivity of the tracked trajectories is then inspected to remove false ones. The algorithm is validated with synthetic data, and the results show that it is superior to traditional algorithms at tracking long trajectories
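
    A short sketch of the regression-versus-extrapolation point: predicting the next position from a least-squares polynomial fit over the last few frames averages out positional noise, whereas extrapolating from the last two points amplifies it. The window length, polynomial order and synthetic track are illustrative assumptions.

        import numpy as np

        def predict_next(positions, order=2, window=5):
            """Predict the next position from a polynomial least-squares fit over the
            last `window` recorded positions (one fit per coordinate)."""
            pts = np.asarray(positions, dtype=float)[-window:]
            t = np.arange(len(pts))
            return np.array([np.polyval(np.polyfit(t, pts[:, d], order), len(pts))
                             for d in range(pts.shape[1])])

        # noisy positions along a curved track
        track = [(i, 0.1 * i**2 + np.random.normal(0.0, 0.05)) for i in range(8)]
        print("regression prediction:   ", predict_next(track))
        print("two-point extrapolation: ", 2 * np.asarray(track[-1]) - np.asarray(track[-2]))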

  14. Developing robust arsenic awareness prediction models using machine learning algorithms.

    Science.gov (United States)

    Singh, Sushant K; Taylor, Robert W; Rahman, Mohammad Mahmudur; Pradhan, Biswajeet

    2018-04-01

    Arsenic awareness plays a vital role in ensuring the sustainability of arsenic mitigation technologies. Thus far, however, few studies have dealt with the sustainability of such technologies and their associated socioeconomic dimensions. As a result, arsenic awareness prediction has not yet been fully conceptualized. Accordingly, this study evaluated arsenic awareness among arsenic-affected communities in rural India, using a structured questionnaire to record socioeconomic, demographic, and other sociobehavioral factors with an eye to assessing their association with and influence on arsenic awareness. First a logistic regression model was applied and its results compared with those produced by six state-of-the-art machine-learning algorithms (Support Vector Machine [SVM], Kernel-SVM, Decision Tree [DT], k-Nearest Neighbor [k-NN], Naïve Bayes [NB], and Random Forests [RF]) as measured by their accuracy at predicting arsenic awareness. Most (63%) of the surveyed population was found to be arsenic-aware. Significant arsenic awareness predictors were divided into three types: (1) socioeconomic factors: caste, education level, and occupation; (2) water and sanitation behavior factors: number of family members involved in water collection, distance traveled and time spent for water collection, places for defecation, and materials used for handwashing after defecation; and (3) social capital and trust factors: presence of anganwadi and people's trust in other community members, NGOs, and private agencies. Moreover, individuals with larger social networks contributed positively to arsenic awareness in the communities. Results indicated that both the SVM and the RF algorithms performed best at the overall prediction of arsenic awareness, a nonlinear classification problem. Lower-caste, less educated, and unemployed members of the population were found to be the most vulnerable, requiring immediate arsenic mitigation. To this end, local social institutions and NGOs could play a

  15. A Self-embedding Robust Digital Watermarking Algorithm with Blind Detection

    Directory of Open Access Journals (Sweden)

    Gong Yunfeng

    2014-08-01

    In order to achieve perfectly blind detection in a robust watermarking algorithm, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. First, the original image is divided into non-overlapping image blocks and decomposition coefficients are obtained by a lifting-based wavelet transform (LWT) in every image block. Second, the low-frequency coefficients of the image blocks are selected and approximately represented as the product of a base matrix and a coefficient matrix using NMF. The feature vector representing the original image is then obtained by quantizing the coefficient matrix, and finally the robust watermark is embedded by adaptive quantization of the low-frequency LWT coefficients. Experimental results show that the scheme is robust against common signal processing attacks, while perfectly blind detection is achieved.

  16. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

    One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause received signal strength (RSS) values to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either the fault-free scenario or RN-failure scenarios. The proposed fault-tolerant floor algorithm is based on the mean of the summation of the strongest RSS values obtained from the IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with that of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperforms the other floor algorithms and achieves the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm achieves greater than 95% correct floor determination under the scenario in which 40% of the RNs failed.
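
    A compact sketch of the decision rule as it is described in the abstract: for each floor, average the strongest RSS readings received from that floor's surviving reference nodes and pick the floor with the largest mean; failed nodes simply contribute no reading. The data layout and the choice of the three strongest readings are assumptions.

        import numpy as np

        def rmos_floor(rss_by_floor, k=3):
            """rss_by_floor: dict floor_id -> list of RSS values (dBm) received from that
            floor's reference nodes; failed nodes are simply absent. Returns the floor
            whose k strongest readings have the largest mean."""
            scores = {floor: np.mean(sorted(readings, reverse=True)[:k])
                      for floor, readings in rss_by_floor.items() if readings}
            return max(scores, key=scores.get)

        # Illustrative readings: floor 2 is closest, and one of its nodes has failed.
        rss = {1: [-82, -79, -90, -71], 2: [-55, -60, -58], 3: [-77, -85, -80, -88]}
        print("estimated floor:", rmos_floor(rss))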

  17. Assessing the Stability and Robustness of Semantic Web Services Recommendation Algorithms Under Profile Injection Attacks

    Directory of Open Access Journals (Sweden)

    GRANDIN, P. H.

    2014-06-01

    Recommendation systems based on collaborative filtering are open by nature, which makes them vulnerable to profile injection attacks that insert biased evaluations into the system database in order to manipulate recommendations. In this paper we evaluate the stability and robustness of collaborative filtering algorithms applied to semantic web services recommendation when subjected to random and segment profile injection attacks. We evaluated four algorithms: (1) IMEAN, which makes predictions using the average of the evaluations received by the target item; (2) UMEAN, which makes predictions using the average of the evaluations made by the target user; (3) an algorithm based on the k-nearest neighbor (k-NN) method; and (4) an algorithm based on the k-means clustering method. The experiments showed that the UMEAN algorithm is not affected by the attacks and that IMEAN is the most vulnerable of all the algorithms tested. Nevertheless, both UMEAN and IMEAN have little practical application due to the low precision of their predictions. Among the algorithms with intermediate tolerance to attacks but good prediction performance, the algorithm based on k-NN proved to be more robust and stable than the algorithm based on k-means.

  18. Robust PD Sway Control of a Lifted Load for a Crane Using a Genetic Algorithm

    Science.gov (United States)

    Kawada, Kazuo; Sogo, Hiroyuki; Yamamoto, Toru; Mada, Yasuhiro

    PID control schemes continue to be widely used for most industrial control systems, mainly because PID controllers have simple control structures and are simple to maintain and tune. However, it is difficult to find a set of suitable control parameters for time-varying and/or nonlinear systems, and robust controllers have been proposed for such problems. Although it is important to choose a suitable nominal model when designing a robust controller, this is usually not easy. In this paper, a new robust PD controller design scheme is proposed which utilizes a genetic algorithm.

  19. Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model

    Directory of Open Access Journals (Sweden)

    WANG Bin

    2015-06-01

    Based on the Newton-Gauss iterative algorithm for weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model utilizes the standardized residuals to construct the weight factor function, and a robust square root of the variance component estimator is obtained by introducing the median method. Therefore, robustness in both the observation and structure spaces can be achieved simultaneously. To obtain the standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression for the cofactor matrix of the WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiment indicates that the model proposed in this paper exhibits satisfactory robustness for the gross-error handling problem of WTLS; the obtained parameters show no significant difference from the results of WTLS without gross errors. Therefore, it is superior to a robust weighted total least squares model constructed directly from the residuals.

  20. ALGORITHM FOR THE AUTOMATIC ESTIMATION OF AGRICULTURAL TREE GEOMETRIC PARAMETERS USING AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    E. Hadaś

    2016-06-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired over orchard areas allow orchard structures and geometric parameters of individual trees to be determined. In this research we propose an automatic method for determining geometric parameters of individual olive trees from ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For the points located inside each polygon, we select the maximum and minimum heights and then estimate the tree height and the crown base height. We use the first two components of a Principal Component Analysis (PCA) as the estimators of the crown diameters. The α-shape algorithm requires the radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points m⁻². We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements: the RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.

  1. A Robust Formant Extraction Algorithm Combining Spectral Peak Picking and Root Polishing

    Directory of Open Access Journals (Sweden)

    Seo Kwang-deok

    2006-01-01

    We propose a robust formant extraction algorithm that combines spectral peak picking, examination of formant locations for peak-merger checking, and root extraction. The spectral peak picking method is employed to locate the formant candidates, and root extraction is used to resolve the peak-merger problem. The locations of and distances between the extracted formants are also utilized to efficiently identify suspected peak mergers. The proposed algorithm does not require much computation, and is shown to be superior to previous formant extraction algorithms through extensive tests using the TIMIT speech database.

  2. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...

  3. An outlook on robust model predictive control algorithms : Reflections on performance and computational aspects

    NARCIS (Netherlands)

    Saltik, M.B.; Özkan, L.; Ludlage, J.H.A.; Weiland, S.; Van den Hof, P.M.J.

    2018-01-01

    In this paper, we discuss the model predictive control algorithms that are tailored for uncertain systems. Robustness notions with respect to both deterministic (or set based) and stochastic uncertainties are discussed and contributions are reviewed in the model predictive control literature. We

  4. An algorithm for robust non-linear analysis of radioimmunoassays and other bioassays

    International Nuclear Information System (INIS)

    Normolle, D.P.

    1993-01-01

    The four-parameter logistic function is an appropriate model for many types of bioassays that have continuous response variables, such as radioimmunoassays. By modelling the variance of replicates in an assay, one can modify the usual parameter estimation techniques (for example, Gauss-Newton or Marquardt-Levenberg) to produce parameter estimates for the standard curve that are robust against outlying observations. This article describes the computation of robust (M-) estimates for the parameters of the four-parameter logistic function. It describes techniques for modelling the variance structure of the replicates, modifications to the usual iterative algorithms for parameter estimation in non-linear models, and a formula for inverse confidence intervals. To demonstrate the algorithm, the article presents examples where the robustly estimated four-parameter logistic model is compared with the logit-log and four-parameter logistic models with least-squares estimates. (author)
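
    One way to realise an M-estimation-style fit of the four-parameter logistic, sketched with SciPy's least_squares and a soft-L1 loss; the variance modelling and the exact iteration of the article's algorithm are not reproduced, and the starting values and data are invented.

        import numpy as np
        from scipy.optimize import least_squares

        def four_pl(x, a, b, c, d):
            """Four-parameter logistic: d + (a - d) / (1 + (x / c) ** b)."""
            return d + (a - d) / (1.0 + (x / c) ** b)

        def fit_4pl_robust(x, y, p0=(2.0, 1.0, 1.0, 0.0)):
            residuals = lambda p: four_pl(x, *p) - y
            # A soft-L1 loss down-weights outlying replicates, giving an M-estimate-like fit.
            return least_squares(residuals, p0, loss="soft_l1", f_scale=0.1).x

        # synthetic standard curve with one gross outlier
        x = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
        y = four_pl(x, 2.0, 1.2, 5.0, 0.1) + np.random.normal(0.0, 0.02, x.size)
        y[3] += 0.8
        print(fit_4pl_robust(x, y))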

  5. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    Science.gov (United States)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  6. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    Directory of Open Access Journals (Sweden)

    Yang Jian

    2007-01-01

    Full Text Available We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  7. A Robust Planning Algorithm for Groups of Entities in Discrete Spaces

    Directory of Open Access Journals (Sweden)

    Igor Wojnicki

    2015-07-01

    Full Text Available Automated planning is a well-established field of artificial intelligence (AI), with applications in route finding, robotics and operational research, among others. The task of developing a plan is often solved by finding a path in a graph representing the search domain; a robust plan consists of numerous paths that can be chosen if the execution of the best (optimal) one fails. While robust planning for a single entity is rather simple, development of a robust plan for multiple entities in a common environment can lead to combinatorial explosion. This paper proposes a novel hybrid approach, joining heuristic search and the wavefront algorithm to provide a plan featuring robustness in areas where it is needed, while maintaining a low level of computational complexity.
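
    The wavefront component of such a hybrid planner can be illustrated with a plain breadth-first expansion from the goal on an occupancy grid: every free cell ends up labelled with its distance to the goal, and any strictly descending neighbour sequence is a valid plan, which is what gives the approach its built-in alternatives. The heuristic-search part and the multi-entity coordination of the paper are not shown; this is a single-entity sketch assuming a 4-connected grid.

```python
from collections import deque

def wavefront(grid, goal):
    """Breadth-first wavefront expansion from the goal cell.
    grid: 2D list, 0 = free, 1 = obstacle. Returns a distance map in which
    each reachable free cell holds its distance to the goal."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
for row in wavefront(grid, (0, 3)):
    print(row)
```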

  8. A FAST AND ROBUST ALGORITHM FOR ROAD EDGES EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    K. Qiu

    2016-06-01

    Full Text Available Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. Extracting various road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads show an elevation difference near their edges, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane using the RANSAC algorithm, and then multiple refined planes that contain only pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, the rough and refined planes are often extracted poorly because of rough road surfaces and varying point cloud density. To eliminate the influence of rough roads, a technique similar to differencing a DSM (digital surface model) and a DTM (digital terrain model) is used, and we also propose a method that adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method on multiple datasets (e.g., urban roads, highways, and some rural roads). We use the same parameters throughout the experiments and our algorithm achieves real-time processing speeds.
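
    The rough-plane step can be illustrated with a textbook RANSAC plane fit on a point cloud; the refined-plane extraction, the DSM/DTM-style differencing and the density adjustment from the paper are omitted, and the threshold and iteration count below are arbitrary placeholders.

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, seed=None):
    """Fit a plane n.p + d = 0 with RANSAC. threshold is the maximum
    point-to-plane distance (cloud units) for a point to count as an inlier."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(pts @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# toy cloud: a noisy ground plane plus a slightly raised curb
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(0, 10, (300, 2)), rng.normal(0.00, 0.02, 300)]
curb = np.c_[rng.uniform(0, 10, (60, 2)), rng.normal(0.15, 0.02, 60)]
n, d, mask = ransac_plane(np.vstack([ground, curb]))
print(n, d, mask.sum())
```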

  9. Weighing Efficiency-Robustness in Supply Chain Disruption by Multi-Objective Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Tong Shu

    2016-03-01

    Full Text Available This paper investigates various supply chain disruptions in terms of scenario planning, including node disruption and chain disruption, namely disruptions in distribution centers and disruptions between manufacturing centers and distribution centers. It also considers simultaneous disruption of one or several nodes, simultaneous disruption of one or several chains, and the corresponding mathematical models and examples involving numerous manufacturing centers and diverse products. Robustness of the supply chain network design is examined by weighing efficiency against robustness during supply chain disruptions. Efficiency is represented by the operating cost, robustness is indicated by the expected disruption cost, and the trade-off is computed with the multi-objective firefly algorithm for consistency in the results. It is shown that the total cost achieved by the optimal objective function is lower than that at the point of maximum supply chain efficiency. In other words, the decrease in expected disruption cost obtained by improving robustness is greater than the increase in operating cost caused by reducing efficiency, thus leading to a cost advantage. Consequently, by approximating the Pareto front of the trade-off between efficiency and robustness, enterprises can choose appropriate levels of efficiency and robustness for their longer-term development.

  10. Small Body GN&C Research Report: A Robust Model Predictive Control Algorithm with Guaranteed Resolvability

    Science.gov (United States)

    Acikmese, Behcet A.; Carson, John M., III

    2005-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. The feed-forward control is obtained by online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives, and derivatives in polytopes. An illustrative numerical example is also provided.

  11. Robust video watermarking via optimization algorithm for quantization of pseudo-random semi-global statistics

    Science.gov (United States)

    Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam

    2005-03-01

    In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.

  12. A robust controller design method for feedback substitution schemes using genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Mirsha M; Hadjiloucas, Sillas; Becerra, Victor M, E-mail: s.hadjiloucas@reading.ac.uk [Cybernetics, School of Systems Engineering, University of Reading, RG6 6AY (United Kingdom)

    2011-08-17

    Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subjected to arbitrary noise power gain and robustness constraints, a Pareto-front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization for the controller weights (MOGA). A clonal selection algorithm is used to further provide a directed search of the GA towards the Pareto front. We demonstrate that with the proposed methodology, it is possible to design higher order controllers with superior performance in terms of response time, noise power gain and robustness.

  13. Ensemble of data-driven prognostic algorithms for robust prediction of remaining useful life

    International Nuclear Information System (INIS)

    Hu Chao; Youn, Byeng D.; Wang Pingfeng; Taek Yoon, Joung

    2012-01-01

    Prognostics aims at determining whether a failure of an engineered system (e.g., a nuclear power plant) is impending and estimating the remaining useful life (RUL) before the failure occurs. The traditional data-driven prognostic approach is to construct multiple candidate algorithms using a training data set, evaluate their respective performance using a testing data set, and select the one with the best performance while discarding all the others. This approach has three shortcomings: (i) the selected standalone algorithm may not be robust; (ii) it wastes the resources for constructing the algorithms that are discarded; (iii) it requires the testing data in addition to the training data. To overcome these drawbacks, this paper proposes an ensemble data-driven prognostic approach which combines multiple member algorithms with a weighted-sum formulation. Three weighting schemes, namely the accuracy-based weighting, diversity-based weighting and optimization-based weighting, are proposed to determine the weights of member algorithms. The k-fold cross validation (CV) is employed to estimate the prediction error required by the weighting schemes. The results obtained from three case studies suggest that the ensemble approach with any weighting scheme gives more accurate RUL predictions compared to any sole algorithm when member algorithms producing diverse RUL predictions have comparable prediction accuracy and that the optimization-based weighting scheme gives the best overall performance among the three weighting schemes.
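
    A minimal sketch of the accuracy-based weighting scheme, assuming each member algorithm's k-fold CV error is already available: weights are taken inversely proportional to the CV error and the ensemble RUL is the weighted sum of the members' predictions. The exact weighting formula in the paper may differ, and the diversity- and optimization-based schemes are not shown.

```python
import numpy as np

def accuracy_based_weights(cv_errors):
    """Members with smaller k-fold cross-validation error receive larger
    weights (inverse-error weighting, normalized to sum to 1)."""
    err = np.asarray(cv_errors, dtype=float)
    w = 1.0 / err
    return w / w.sum()

def ensemble_rul(member_predictions, weights):
    """Weighted-sum fusion of the members' RUL predictions."""
    return np.asarray(member_predictions, dtype=float) @ np.asarray(weights)

cv_rmse = [12.0, 8.0, 20.0]     # k-fold CV errors of three member algorithms
preds = [105.0, 98.0, 120.0]    # their RUL predictions for one unit
w = accuracy_based_weights(cv_rmse)
print(w, ensemble_rul(preds, w))
```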

  14. A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms

    International Nuclear Information System (INIS)

    Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats

    2008-01-01

    This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique.
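
    The preconditioning step itself is simple to illustrate: given an (approximate) Fisher information matrix, a diagonal scaling is chosen so that the scaled Fisher information has a unit diagonal. The sketch below shows only that scaling, not the FDTD solver or the conjugate gradient/quasi-Newton iteration.

```python
import numpy as np

def fisher_diagonal_scaling(fisher):
    """Build the diagonal scaling D such that D @ F @ D has a unit diagonal;
    applying D to the parameters acts as the preconditioner described above."""
    d = np.sqrt(np.diag(fisher))
    return np.diag(1.0 / d)

# toy Fisher information for two badly scaled parameters
F = np.array([[4.0, 0.2],
              [0.2, 1e-4]])
D = fisher_diagonal_scaling(F)
print(np.diag(D @ F @ D))   # -> [1. 1.]
```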

  15. A ROBUST METHOD FOR STEREO VISUAL ODOMETRY BASED ON MULTIPLE EUCLIDEAN DISTANCE CONSTRAINT AND RANSAC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Q. Zhou

    2017-07-01

    Full Text Available Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences the precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, EDC and RANSAC are performed iteratively. After these steps, a few mismatched points may still remain in some cases, for which RANSAC is applied a third time to eliminate the effect of those outliers in the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.

  16. An integer optimization algorithm for robust identification of non-linear gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Chemmangattuvalappil Nishanth

    2012-09-01

    Full Text Available Background: Reverse engineering gene networks and identifying regulatory interactions are integral to understanding cellular decision-making processes. Advances in high-throughput experimental techniques have initiated innovative data-driven analysis of gene regulatory networks. However, the inherent noise associated with biological systems requires numerous experimental replicates for reliable conclusions, and robust algorithms that directly exploit basic biological traits are few. Such algorithms are expected to be efficient in their performance and robust in their predictions. Results: We have developed a network identification algorithm to accurately infer both the topology and strength of regulatory interactions from time series gene expression data in the presence of significant experimental noise and non-linear behavior. In this novel formalism, we address data variability in biological systems by integrating network identification with the bootstrap resampling technique, hence predicting robust interactions from limited experimental replicates subjected to noise. Furthermore, we incorporate non-linearity in gene dynamics using the S-system formulation. The basic network identification formulation exploits the sparsity of biological interactions. Towards that, the identification algorithm is formulated as an integer-programming problem by introducing binary variables for each network component. The objective function minimizes the number of network connections subject to the constraint of maximal agreement between the experimental and predicted gene dynamics. The developed algorithm is validated using both in silico and experimental datasets. These studies show that the algorithm can accurately predict the topology and connection strength of the in silico networks, as quantified by high precision and recall, and a small discrepancy between the actual and predicted kinetic parameters

  17. Robust and unobtrusive algorithm based on position independence for step detection

    Science.gov (United States)

    Qiu, KeCheng; Li, MengYang; Luo, YiHan

    2018-04-01

    Running is becoming one of the most popular forms of exercise, and monitoring steps can help users better understand their running and improve exercise efficiency. In this paper, we design and implement a robust and unobtrusive step detection algorithm that is independent of device position and works under real-world conditions. It applies a Butterworth filter to suppress high-frequency interference and then employs a mathematical projection to transform the coordinate system, which addresses the unknown position of the smartphone. Finally, a sliding window is used to suppress false peaks. The algorithm was tested with eight participants on the Android 7.0 platform. In our experiments, the results show that the proposed algorithm achieves the desired effect regardless of device pose.
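
    A rough sketch of such a pipeline on accelerometer data: using the acceleration magnitude as an orientation/position-independent signal is my simplification of the projection step in the abstract, and the cutoff, height and spacing values are placeholders rather than the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_steps(acc_xyz, fs=50.0, cutoff=3.0, min_step_interval=0.3):
    """Count steps from tri-axial accelerometer samples (N x 3).
    The acceleration magnitude is low-pass filtered with a Butterworth
    filter, and peaks closer than min_step_interval seconds are rejected,
    mimicking the sliding-window false-peak suppression."""
    mag = np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)
    b, a = butter(4, cutoff, btype="low", fs=fs)
    smooth = filtfilt(b, a, mag - mag.mean())
    peaks, _ = find_peaks(smooth, height=0.5, distance=int(min_step_interval * fs))
    return len(peaks)

# synthetic walking signal: ~2 steps per second for 10 s
fs = 50.0
t = np.arange(0, 10, 1 / fs)
acc = np.c_[np.zeros_like(t), np.zeros_like(t), 9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t)]
acc += np.random.default_rng(0).normal(0, 0.3, acc.shape)
print(count_steps(acc, fs))   # expect roughly 20
```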

  18. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    Science.gov (United States)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of the Intelligent Transport System (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, a candidate-region-based license plate detection and segmentation method is developed. Secondly, a new feature extraction model is designed that combines three sets of features. Thirdly, the license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method increased to 95.7% and the processing time decreased to 51.4 ms.

  19. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    Directory of Open Access Journals (Sweden)

    Shashwat Pathak

    2016-09-01

    Full Text Available This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera; both are very expensive. The main motivation behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on texture features: uniformity, intensity and standard deviation. These features are first computed and mapped with the diagnostic opinion of an eye expert to define the basic threshold of the screening system, which is later tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using the proposed system has been suggested, which confirms the telemedicine application of the proposed system.

  20. A fast, robust algorithm for power line interference cancellation in neural recording

    Science.gov (United States)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed algorithm features a highly robust operation, fast adaptation to
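
    A much-simplified stand-in for the harmonic-by-harmonic structure of this approach: fixed notch filters at the nominal fundamental and its harmonics. The published algorithm instead tracks the fundamental adaptively and estimates each harmonic's amplitude and phase with a modified recursive least-squares scheme, which this sketch does not attempt.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_power_line(x, fs, f0=50.0, n_harmonics=3, q=30.0):
    """Cascade fixed notch filters at f0 and its first harmonics.
    This only conveys the structure (one estimator per harmonic); it is not
    the adaptive, reference-free tracker described in the abstract."""
    y = np.asarray(x, dtype=float)
    for k in range(1, n_harmonics + 1):
        b, a = iirnotch(k * f0, q, fs=fs)
        y = filtfilt(b, a, y)
    return y

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
neural = 0.1 * np.random.default_rng(0).normal(size=t.size)
interference = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
clean = remove_power_line(neural + interference, fs)
print(np.std(neural + interference), np.std(clean))
```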

  1. DETERMINATION ALGORITHM OF OPTIMAL GEOMETRICAL PARAMETERS FOR COMPONENTS OF FREIGHT CARS ON THE BASIS OF GENERALIZED MATHEMATICAL MODELS

    Directory of Open Access Journals (Sweden)

    O. V. Fomin

    2013-10-01

    Full Text Available Purpose. To present the features and an application example of the proposed algorithm for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models, implemented on a computer. Methodology. The developed approach to the search for optimal geometrical parameters can be described as the determination of the optimal decision from a selected set of possible variants. Findings. The presented application example of the proposed algorithm proved its operating capability and the efficiency of its use. Originality. The procedure for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models was formalized in the paper. Practical value. Practical introduction of the research results for universal open cars allows one to reduce the tare (empty weight) of their design and accordingly to increase the carrying capacity by almost 100 kg, with improved strength characteristics. Taking into account the size of the car fleet, this will provide a considerable economic effect in production and operation. The proposed approach is oriented towards widely distributed software packages (for example, Microsoft Excel), which are used by the technical services of most enterprises, and does not require additional capital investment (acquisition of specialized programs or additional technical staff training). This confirms the correctness of the research direction. The proposed algorithm can be used for the solution of other optimization tasks on the basis of generalized mathematical models.

  2. Robust

    DEFF Research Database (Denmark)

    2017-01-01

    ‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November 2017. Researchers and PhD Fellows, associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation), at The Royal Danish

  3. Collateral missing value imputation: a new robust missing value estimation algorithm for microarray data.

    Science.gov (United States)

    Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S

    2005-05-15

    Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate these values as accurately as possible before using these algorithms. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation capability of missing values compared with other methods for both series types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE

  4. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared with some of the other state-of-the-art anomaly detection methods, and is easy to implement

  5. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

    Full Text Available In order to improve the robustness and imperceptibility in practical application, a novel audio watermarking algorithm with strong robustness is proposed by exploring the multi-resolution characteristic of discrete wavelet transform (DWT) and the energy compaction capability of discrete cosine transform (DCT). The human auditory system is insensitive to the minor changes in the frequency components of the audio signal, so the watermarks can be embedded by slightly modifying the frequency components of the audio signal. The audio fragments segmented from the cover audio signal are decomposed by DWT to obtain several groups of wavelet coefficients with different frequency bands, and then the fourth level detail coefficient is selected to be divided into the former packet and the latter packet, which are executed for DCT to get two sets of transform domain coefficients (TDC) respectively. Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to the special embedding rule. The watermark extraction is blind without the carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness when resisting against various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.

  6. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.

  7. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noise in the filter window of the videos. The Improved Cross Diamond Search Motion Estimation technique uses Least Median Square as a cost function, which shows improved performance compared with other motion estimation techniques using existing cost functions. The results show that the proposed algorithm outperforms the other algorithms both visually and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  8. Robust and Low-Complexity Timing Synchronization Algorithm and its Architecture for ADSRC Applications

    Directory of Open Access Journals (Sweden)

    KIM, J.

    2009-10-01

    Full Text Available 5.9 GHz advanced dedicated short range communications (ADSRC) is a short-to-medium range communication standard that supports both public safety and private operations in roadside-to-vehicle and vehicle-to-vehicle communication environments. The core technology of the physical layer in ADSRC is orthogonal frequency division multiplexing (OFDM), which is sensitive to timing synchronization errors. In this paper, a robust and low-complexity timing synchronization algorithm suitable for the ADSRC system and its efficient hardware architecture are proposed. The proposed architecture is implemented on a Xilinx Virtex-II XC2V1000 Field Programmable Gate Array (FPGA). The proposed algorithm is based on a cross-correlation technique, which is employed to detect the starting point of the short training symbol and the guard interval of the long training symbol. Synchronization error rate (SER) evaluation results and post-layout simulation results show that the proposed algorithm is efficient in high-mobility environments. The post-layout implementation results demonstrate the robustness and low complexity of the proposed architecture.
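
    The cross-correlation detection of a known training symbol can be sketched directly in numpy; the actual training sequence, thresholding and guard-interval handling of the standard are replaced by placeholders here.

```python
import numpy as np

def detect_frame_start(rx, training_symbol):
    """Locate the start of the training symbol in the received baseband
    samples by normalized cross-correlation with the locally stored copy."""
    rx = np.asarray(rx, dtype=complex)
    s = np.asarray(training_symbol, dtype=complex)
    L = len(s)
    metric = np.empty(len(rx) - L + 1)
    for n in range(len(metric)):
        window = rx[n:n + L]
        num = np.abs(np.vdot(s, window))
        den = np.linalg.norm(window) * np.linalg.norm(s) + 1e-12
        metric[n] = num / den
    return int(np.argmax(metric)), metric

rng = np.random.default_rng(0)
sts = np.exp(2j * np.pi * rng.random(16))          # stand-in training symbol
rx = np.concatenate([rng.normal(0, 0.3, 100) + 0j, sts, rng.normal(0, 0.3, 100) + 0j])
start, _ = detect_frame_start(rx, sts)
print(start)   # expect 100
```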

  9. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

    This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman Filter, which can integrate other sensory information such as position and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show an accurate tracking of the runway edges during the landing phase under various lighting conditions. Also, it suggests that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using Hardware in the Loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.

  10. Robust state feedback controller design of STATCOM using chaotic optimization algorithm

    Directory of Open Access Journals (Sweden)

    Safari Amin

    2010-01-01

    Full Text Available In this paper, a new technique for the design of a robust state feedback controller for a static synchronous compensator (STATCOM) using a Chaotic Optimization Algorithm (COA) is presented. The design is formulated as an optimization problem which is solved by the COA. Since chaotic planning enjoys reliability, ergodicity and stochastic features, the proposed technique performs chaos mapping using Lozi map chaotic sequences, which increases its convergence rate. To ensure the robustness of the proposed damping controller, the design process takes into account a wide range of operating conditions and system configurations. The simulation results reveal that the proposed controller has an excellent capability in damping power system low frequency oscillations and greatly enhances the dynamic stability of the power system. Moreover, the system performance analysis under different operating conditions shows that the phase-based controller is superior compared to the magnitude-based controller.
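
    The Lozi map that drives such a chaotic search is easy to reproduce; how the resulting sequence is mapped onto controller gains is an implementation choice in the sketch below, not taken from the paper.

```python
import numpy as np

def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    """Generate n samples of the Lozi map
        x_{k+1} = 1 - a*|x_k| + y_k,   y_{k+1} = b*x_k,
    and rescale them to [0, 1] so they can be mapped onto the bounds of
    the controller parameters being searched."""
    xs = np.empty(n)
    x, y = x0, y0
    for k in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs[k] = x
    return (xs - xs.min()) / (xs.max() - xs.min())

print(lozi_sequence(10))
```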

  11. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    Science.gov (United States)

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
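
    As a hedged illustration of estimating a rotation matrix and translation vector from corresponding 3D points, the standard SVD (Kabsch) solution is sketched below; the paper's specific algorithm and its integration with the stereo cameras and translucent mirror are not reproduced.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R, t minimizing ||R @ src_i + t - dst_i|| over corresponding
    3D points (classic SVD/Kabsch solution)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# toy check: recover a known rotation about z and a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.random.default_rng(0).normal(size=(20, 3))
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = estimate_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))
```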

  12. Autopiquer - a Robust and Reliable Peak Detection Algorithm for Mass Spectrometry.

    Science.gov (United States)

    Kilgour, David P A; Hughes, Sam; Kilgour, Samantha L; Mackay, C Logan; Palmblad, Magnus; Tran, Bao Quoc; Goo, Young Ah; Ernst, Robert K; Clarke, David J; Goodlett, David R

    2017-02-01

    We present a simple algorithm for robust and unsupervised peak detection by determining a noise threshold in isotopically resolved mass spectrometry data. Solving this problem will greatly reduce the subjective and time-consuming manual picking of mass spectral peaks and so will prove beneficial in many research applications. The Autopiquer approach uses autocorrelation to test for the presence of (isotopic) structure in overlapping windows across the spectrum. Within each window, a noise threshold is optimized to remove the most unstructured data, whilst keeping as much of the (isotopic) structure as possible. This algorithm has been successfully demonstrated for both peak detection and spectral compression on data from many different classes of mass spectrometer and for different sample types, and this approach should also be extendible to other types of data that contain regularly spaced discrete peaks.
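
    The core autocorrelation test for regularly spaced isotopic structure can be sketched as follows; the per-window noise-threshold optimization that Autopiquer performs on top of this is omitted, and the expected peak spacing and correlation cutoff below are placeholders.

```python
import numpy as np

def has_isotopic_structure(window, spacing, min_corr=0.3):
    """Crude test for regularly spaced structure: correlate the window of
    intensities with itself shifted by the expected peak spacing (samples)."""
    w = np.asarray(window, dtype=float)
    w = w - w.mean()
    shifted = np.roll(w, spacing)
    corr = (w @ shifted) / (np.linalg.norm(w) * np.linalg.norm(shifted) + 1e-12)
    return corr > min_corr, corr

# synthetic window: peaks every 10 samples on top of noise, versus pure noise
rng = np.random.default_rng(0)
x = rng.normal(0, 0.1, 200)
x[::10] += 1.0
print(has_isotopic_structure(x, 10))
print(has_isotopic_structure(rng.normal(0, 0.1, 200), 10))
```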

  13. A Color Image Watermarking Scheme Resistant against Geometrical Attacks

    Directory of Open Access Journals (Sweden)

    Y. Xing

    2010-04-01

    Full Text Available The geometrical attacks are still a problem for many digital watermarking algorithms at present. In this paper, we propose a watermarking algorithm for color images resistant to geometrical distortions (rotation and scaling). The singular value decomposition is used for watermark embedding and extraction. The log-polar mapping (LPM) and phase correlation method are used to register the position of geometrical distortion suffered by the watermarked image. Experiments with different kinds of color images and watermarks demonstrate that the watermarking algorithm is robust to common image processing attacks, especially geometrical attacks.
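
    Phase correlation, the registration building block mentioned above, is compact to illustrate: the inverse FFT of the normalized cross-power spectrum peaks at the translation between two images. In a scheme like the one described it would be applied after a log-polar mapping, where rotation and scaling appear as translations; the log-polar resampling itself is not shown.

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the (row, col) shift between two images via phase correlation."""
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 5, axis=0), 12, axis=1)
print(phase_correlation(shifted, img))   # expect (5, 12)
```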

  14. Mechanical Model of Geometric Cell and Topological Algorithm for Cell Dynamics from Single-Cell to Formation of Monolayered Tissues with Pattern

    KAUST Repository

    Kachalo, Sëma; Naveed, Hammad; Cao, Youfang; Zhao, Jieling; Liang, Jie

    2015-01-01

    development, and other emerging behavior. Here we describe a cell model and an efficient geometric algorithm for studying the dynamic process of tissue formation in 2D (e.g. epithelial tissues). Our approach improves upon previous methods by incorporating

  15. A Robust Inversion Algorithm for Surface Leaf and Soil Temperatures Using the Vegetation Clumping Index

    Directory of Open Access Journals (Sweden)

    Zunjian Bian

    2017-07-01

    Full Text Available The inversion of land surface component temperatures is an essential source of information for mapping heat fluxes and the angular normalization of thermal infrared (TIR) observations. Leaf and soil temperatures can be retrieved using multiple-view-angle TIR observations. In a satellite-scale pixel, the clumping effect of vegetation is usually present, but it is not completely considered during the inversion process. Therefore, we introduced a simple inversion procedure that uses gap frequency with a clumping index (GCI) for leaf and soil temperatures over both crop and forest canopies. Simulated datasets corresponding to turbid vegetation, regularly planted crops and randomly distributed forest were generated using a radiosity model and were used to test the proposed inversion algorithm. The results indicated that the GCI algorithm performed well for both crop and forest canopies, with root mean squared errors of less than 1.0 °C against simulated values. The proposed inversion algorithm was also validated using measured datasets over orchard, maize and wheat canopies. Similar results were achieved, demonstrating that using the clumping index can improve inversion results. In all evaluations, we recommend using the GCI algorithm as a foundation for future satellite-based applications due to its straightforward form and robust performance for both crop and forest canopies using the vegetation clumping index.

  16. A Robust Dynamic Heart-Rate Detection Algorithm Framework During Intense Physical Activities Using Photoplethysmographic Signals

    Directory of Open Access Journals (Sweden)

    Jiajia Song

    2017-10-01

    Full Text Available Dynamic accurate heart-rate (HR) estimation using a photoplethysmogram (PPG) during intense physical activities is always challenging due to corruption by motion artifacts (MAs). It is difficult to reconstruct a clean signal and extract HR from contaminated PPG. This paper proposes a robust HR-estimation algorithm framework that uses one-channel PPG and tri-axis acceleration data to reconstruct the PPG and calculate the HR based on features of the PPG and spectral analysis. Firstly, the signal is judged by the presence of MAs. Then, the spectral peaks corresponding to acceleration data are filtered from the periodogram of the PPG when MAs exist. Different signal-processing methods are applied based on the amount of remaining PPG spectral peaks. The main MA-removal algorithm (NFEEMD) includes the repeated single-notch filter and ensemble empirical mode decomposition. Finally, HR calibration is designed to ensure the accuracy of HR tracking. The NFEEMD algorithm was performed on the 23 datasets from the 2015 IEEE Signal Processing Cup Database. The average estimation errors were 1.12 BPM (12 training datasets), 2.63 BPM (10 testing datasets) and 1.87 BPM (all 23 datasets), respectively. The Pearson correlation was 0.992. The experiment results illustrate that the proposed algorithm is not only suitable for HR estimation during continuous activities, like slow running (13 training datasets), but also for intense physical activities with acceleration, like arm exercise (10 testing datasets).

  17. A robust algorithm to solve the signal setting problem considering different traffic assignment approaches

    Directory of Open Access Journals (Sweden)

    Adacher Ludovica

    2017-12-01

    Full Text Available In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.

  18. Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.

    Science.gov (United States)

    Hu, Yi-Chung

    2014-01-01

    On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches are related to whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses multilayer perceptron to construct the robust nonlinear interval regression model using the genetic algorithm. Outliers beyond or beneath the data interval will impose slight effect on the determination of data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.

  19. Particle Filter-Based Target Tracking Algorithm for Magnetic Resonance-Guided Respiratory Compensation : Robustness and Accuracy Assessment

    NARCIS (Netherlands)

    Bourque, Alexandra E; Bedwani, Stéphane; Carrier, Jean-François; Ménard, Cynthia; Borman, Pim; Bos, Clemens; Raaymakers, Bas W; Mickevicius, Nikolai; Paulson, Eric; Tijssen, Rob H N

    PURPOSE: To assess overall robustness and accuracy of a modified particle filter-based tracking algorithm for magnetic resonance (MR)-guided radiation therapy treatments. METHODS AND MATERIALS: An improved particle filter-based tracking algorithm was implemented, which used a normalized

  20. Fast and robust ray casting algorithms for virtual X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Duvauchelle, P.; Letang, J.M.; Babot, D.

    2006-01-01

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation
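
    The path-length computation at the heart of such ray casting can be illustrated for the simplest primitive, an axis-aligned box, using the slab method; real simulations traverse triangle meshes or CSG models, but the attenuation along a ray only needs these per-object path lengths.

```python
import numpy as np

def path_length_in_box(origin, direction, box_min, box_max):
    """Length of the segment a ray travels inside an axis-aligned box
    (slab method). Returns 0.0 if the ray misses the box."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    inv = np.divide(1.0, d, out=np.full_like(d, np.inf), where=d != 0)
    t1 = (np.asarray(box_min, float) - o) * inv
    t2 = (np.asarray(box_max, float) - o) * inv
    t_near = max(np.max(np.minimum(t1, t2)), 0.0)   # ray starts at the origin
    t_far = np.min(np.maximum(t1, t2))
    return max(0.0, t_far - t_near)

# ray along +x through a 2-unit-thick slab of material
print(path_length_in_box([-5, 0.5, 0.5], [1, 0, 0], [0, 0, 0], [2, 1, 1]))  # -> 2.0
```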

  1. Fast and robust ray casting algorithms for virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: Nicolas.Freud@insa-lyon.fr; Duvauchelle, P. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Letang, J.M. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2006-07-15

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation.

  2. A hybrid multi-objective imperialist competitive algorithm and Monte Carlo method for robust safety design of a rail vehicle

    Science.gov (United States)

    Nejlaoui, Mohamed; Houidi, Ajmi; Affi, Zouhaier; Romdhane, Lotfi

    2017-10-01

    This paper deals with the robust safety design optimization of a rail vehicle system moving in short radius curved tracks. A combined multi-objective imperialist competitive algorithm and Monte Carlo method is developed and used for the robust multi-objective optimization of the rail vehicle system. This robust optimization of rail vehicle safety considers simultaneously the derailment angle and its standard deviation where the design parameters uncertainties are considered. The obtained results showed that the robust design reduces significantly the sensitivity of the rail vehicle safety to the design parameters uncertainties compared to the deterministic one and to the literature results.

  3. Robust total energy demand estimation with a hybrid Variable Neighborhood Search – Extreme Learning Machine algorithm

    International Nuclear Information System (INIS)

    Sánchez-Oro, J.; Duarte, A.; Salcedo-Sanz, S.

    2016-01-01

    Highlights: • The total energy demand in Spain is estimated with a Variable Neighborhood algorithm. • Socio-economic variables are used, and one year ahead prediction horizon is considered. • Improvement of the prediction with an Extreme Learning Machine network is considered. • Experiments are carried out in real data for the case of Spain. - Abstract: Energy demand prediction is an important problem whose solution is evaluated by policy makers in order to take key decisions affecting the economy of a country. A number of previous approaches to improve the quality of this estimation have been proposed in the last decade, the majority of them applying different machine learning techniques. In this paper, the performance of a robust hybrid approach, composed of a Variable Neighborhood Search algorithm and a new class of neural network called Extreme Learning Machine, is discussed. The Variable Neighborhood Search algorithm is focused on obtaining the most relevant features among the set of initial ones, by including an exponential prediction model. While previous approaches consider that the number of macroeconomic variables used for prediction is a parameter of the algorithm (i.e., it is fixed a priori), the proposed Variable Neighborhood Search method optimizes both the number of variables and which variables are selected. After this first step of feature selection, an Extreme Learning Machine network is applied to obtain the final energy demand prediction. Experiments in a real case of energy demand estimation in Spain show the excellent performance of the proposed approach. In particular, the whole method obtains an estimation of the energy demand with an error lower than 2%, even when considering the crisis years, which are a real challenge.
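
    A minimal Extreme Learning Machine regressor, the second stage of such a hybrid approach: random hidden-layer weights, a tanh activation and a closed-form (ridge) least-squares solve for the output weights. The Variable Neighborhood Search feature-selection stage and the exponential prediction model are not shown, and the toy data are purely illustrative.

```python
import numpy as np

class ExtremeLearningMachine:
    """Single-hidden-layer ELM regressor: random input weights, tanh
    activation, output weights solved in closed form by ridge least squares."""
    def __init__(self, n_hidden=50, reg=1e-3, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# toy demand-like regression: the target depends nonlinearly on two drivers
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)
model = ExtremeLearningMachine().fit(X[:150], y[:150])
print(np.mean((model.predict(X[150:]) - y[150:]) ** 2))   # held-out MSE
```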

  4. A general and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale

    Science.gov (United States)

    Decherchi, Sergio; Rocchia, Walter

    2013-01-01

    We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection of the highly symmetric structure of fullerene, which fails when attempted by MSMS and has problems in EDTSurf. In terms of timings, NanoShaper builds the Skin surface three times faster than the single threaded version in Lindow et al. on a 100,000 atoms protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver. Its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after Hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a middle-range workstation. PMID:23577073

  5. Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers

    Directory of Open Access Journals (Sweden)

    Fabrizio Cutolo

    2016-09-01

    Full Text Available In the context of surgical navigation systems based on augmented reality (AR), the key challenge is to ensure the highest degree of realism in merging computer-generated elements with live views of the surgical scene. This paper presents an algorithm suited for wearable stereoscopic augmented reality video see-through systems for use in a clinical scenario. A video-based tracking solution is proposed that relies on stereo localization of three monochromatic markers rigidly constrained to the scene. A PnP-based optimization step is introduced to separately refine the poses of the two cameras. Video-based tracking methods using monochromatic markers are robust to non-controllable and/or inconsistent lighting conditions. The two-stage camera pose estimation algorithm provides sub-pixel registration accuracy. From a technological and an ergonomic standpoint, the proposed approach represents an effective solution to the implementation of wearable AR-based surgical navigation systems wherever rigid anatomies are involved.
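
    The PnP-based refinement step can be illustrated, under assumptions, with OpenCV's generic pose solver. Everything below (marker layout, pixel coordinates, intrinsics) is a placeholder rather than the authors' actual configuration, and plain solvePnP stands in for their two-stage stereo refinement, which works from only three stereo-localized markers.

```python
import numpy as np
import cv2

# Hypothetical 3D fiducial coordinates in the scene frame (mm) and their detected
# 2D centroids in one camera image (pixels); all values are placeholders.
object_points = np.array([[0, 0, 0], [60, 0, 0], [0, 40, 0], [60, 40, 0]], dtype=np.float32)
image_points = np.array([[322.1, 241.8], [410.6, 239.2], [324.0, 181.5], [412.3, 179.0]],
                        dtype=np.float32)

# Placeholder pinhole intrinsics; in a real system these come from camera calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume negligible lens distortion

# Note: OpenCV's generic solver needs at least four correspondences, whereas the paper's
# own method starts from three stereo-localized markers before the PnP refinement.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # camera rotation w.r.t. the marker frame
    print("R =\n", R)
    print("t =", tvec.ravel())
```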

  6. A Robust and Efficient Algorithm for Tool Recognition and Localization for Space Station Robot

    Directory of Open Access Journals (Sweden)

    Lingbo Cheng

    2014-12-01

    Full Text Available This paper studies a robust target recognition and localization method for a maintenance robot in a space station. Its main goal is to handle the target affine transformation caused by microgravity and the strong reflection and refraction of sunlight and lamplight in the cabin, as well as occlusion by other objects. In this method, an Affine Scale Invariant Feature Transform (Affine-SIFT) algorithm is used to extract enough local feature points with full affine invariance, and stable matching points are obtained from these feature points for target recognition using the Random Sample Consensus (RANSAC) algorithm. Then, in order to localize the target, an effective and appropriate 3D grasping scope of the target is defined, and the grasping precision is determined and evaluated with the estimated affine transformation parameters presented in this paper. Finally, the RANSAC threshold is optimized to enhance the accuracy and efficiency of target recognition and localization, and the ranges of illumination, viewing distance and viewpoint angle over which the robot can obtain effective image data are evaluated using the Root-Mean-Square Error (RMSE). An experimental system simulating the illumination environment in a space station is established. Extensive experiments have been carried out, and the results show both the validity of the proposed definition of the grasping scope and the feasibility of the proposed recognition and localization method.
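
    A hedged sketch of the feature-matching-plus-RANSAC recognition step: Affine-SIFT is not bundled with OpenCV, so plain SIFT stands in for it here, and the image paths are placeholders. The RANSAC reprojection threshold plays the role of the threshold the authors tune for accuracy and efficiency.

```python
import cv2
import numpy as np

# Placeholder image paths: a stored template of the tool and a current cabin view.
template = cv2.imread("tool_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("cabin_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                 # plain SIFT stands in for Affine-SIFT in this sketch
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Ratio-test matching (Lowe's criterion) keeps only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects outlier matches caused by reflections, occlusion, etc.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    print("inliers:", int(inlier_mask.sum()), "of", len(good))
```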

  7. A robust algorithm for optimizing protein structures with NMR chemical shifts

    Energy Technology Data Exchange (ETDEWEB)

    Berjanskii, Mark; Arndt, David; Liang, Yongjie; Wishart, David S., E-mail: david.wishart@ualberta.ca [University of Alberta, Department of Computing Science (Canada)

    2015-11-15

    Over the past decade, a number of methods have been developed to determine the approximate structure of proteins using minimal NMR experimental information such as chemical shifts alone, sparse NOEs alone or a combination of comparative modeling data and chemical shifts. However, there have been relatively few methods that allow these approximate models to be substantively refined or improved using the available NMR chemical shift data. Here, we present a novel method, called Chemical Shift driven Genetic Algorithm for biased Molecular Dynamics (CS-GAMDy), for the robust optimization of protein structures using experimental NMR chemical shifts. The method incorporates knowledge-based scoring functions and structural information derived from NMR chemical shifts via a unique combination of multi-objective MD biasing, a genetic algorithm, and the widely used XPLOR molecular modelling language. Using this approach, we demonstrate that CS-GAMDy is able to refine and/or fold models that are as much as 10 Å (RMSD) away from the correct structure using only NMR chemical shift data. CS-GAMDy is also able to refine a wide range of approximate or mildly erroneous protein structures to more closely match the known/correct structure and the known/correct chemical shifts. We believe CS-GAMDy will allow protein models generated by sparse restraint or chemical-shift-only methods to achieve sufficiently high quality to be considered fully refined and “PDB worthy”. The CS-GAMDy algorithm is explained in detail and its performance is compared over a range of refinement scenarios with several commonly used protein structure refinement protocols. The program has been designed to be easily installed and easily used and is available at http://www.gamdy.ca.

  8. Geometrical features assessment of liver's tumor with application of artificial neural network evolved by imperialist competitive algorithm.

    Science.gov (United States)

    Keshavarz, M; Mojra, A

    2015-05-01

    Geometrical features of a cancerous tumor embedded in biological soft tissue, including tumor size and depth, are a necessity in the follow-up procedure and in making suitable therapeutic decisions. In this paper, a new socio-politically motivated global search strategy called the imperialist competitive algorithm (ICA) is implemented to train a feed-forward neural network (FFNN) to estimate the tumor's geometrical characteristics (FFNNICA). First, a viscoelastic model of liver tissue is constructed by using a series of in vitro uniaxial and relaxation test data. Then, 163 samples of the tissue including a tumor with different depths and diameters are generated by making use of Python programming to link ABAQUS and MATLAB. Next, the samples are divided into 123 samples as the training dataset and 40 samples as the testing dataset. Training inputs of the network are mechanical parameters extracted from palpation of the tissue through a developing noninvasive technology called artificial tactile sensing (ATS). Last, to evaluate the FFNNICA performance, outputs of the network, including the tumor's depth and diameter, are compared with desired values for both training and testing datasets. Deviations of the outputs from the desired values are calculated by a regression analysis. Statistical analysis is also performed by measuring the Root Mean Square Error (RMSE) and Efficiency (E). The RMSEs of the diameter and depth estimates are 0.50 mm and 1.49, respectively, for the testing dataset. Results affirm that the proposed optimization algorithm for training the neural network can be useful to characterize soft tissue tumors accurately by employing an artificial palpation approach. Copyright © 2015 John Wiley & Sons, Ltd.

  9. System optimization for HVAC energy management using the robust evolutionary algorithm

    International Nuclear Information System (INIS)

    Fong, K.F.; Hanby, V.I.; Chow, T.T.

    2009-01-01

    For an installed centralized heating, ventilating and air conditioning (HVAC) system, appropriate energy management measures would achieve energy conservation targets through optimal control and operation. The performance optimization of conventional HVAC systems may be handled by operating experience, but this may not cover the different optimization scenarios and parameters arising in response to a variety of load and weather conditions. In this regard, it is common to apply a suitable simulation-optimization technique to model the system and then determine the required operation parameters. The plant simulation models can be built either with available simulation programs or as a system of mathematical expressions. Handling these simulation models involves iterations within the numerical solution methods. Since gradient information is not easily available due to the complex nature of the equations, traditional gradient-based optimization methods are not applicable for this kind of system model. Heuristic optimization methods commonly require a continual search, with a system function call for each search step. The frequency of simulation function calls then becomes the time-determining step, and an efficient optimization method is crucial in order to find the solution within a reasonable number of function calls and a reasonable computational period. In this paper, the robust evolutionary algorithm (REA) is presented to tackle this nature of the HVAC simulation models. REA is based on one of the paradigms of evolutionary algorithms, the evolution strategy, which is a stochastic population-based searching technique with an emphasis on mutation. The REA, which incorporates Cauchy deterministic mutation, tournament selection and arithmetic recombination, provides a synergetic effect for the optimal search. The REA is effective in coping with complex simulation models, as well as those represented by explicit mathematical expressions of
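
    A minimal sketch of the ingredients named in the abstract (Cauchy mutation, tournament selection, arithmetic recombination in a population-based evolution strategy), applied to a placeholder objective standing in for the expensive HVAC simulation call. Population size, generations and mutation scale are arbitrary choices, not the paper's settings.

```python
import numpy as np
rng = np.random.default_rng(0)

def simulate_hvac(x):
    """Placeholder for the HVAC simulation function call; returns an energy cost to minimize."""
    return np.sum((x - 0.3) ** 2)                 # dummy objective

def rea(obj, dim, bounds=(0.0, 1.0), pop_size=20, generations=100, scale=0.1):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([obj(ind) for ind in pop])
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            i, j = rng.choice(pop_size, 2, replace=False)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]
            i, j = rng.choice(pop_size, 2, replace=False)
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            # Arithmetic recombination.
            alpha = rng.uniform()
            child = alpha * p1 + (1 - alpha) * p2
            # Cauchy mutation: heavy tails occasionally take large, basin-escaping steps.
            child += scale * rng.standard_cauchy(dim)
            children.append(np.clip(child, lo, hi))
        children = np.array(children)
        child_fit = np.array([obj(ind) for ind in children])
        # (mu + lambda)-style survivor selection keeps the best pop_size individuals.
        merged = np.vstack([pop, children])
        merged_fit = np.concatenate([fit, child_fit])
        best = np.argsort(merged_fit)[:pop_size]
        pop, fit = merged[best], merged_fit[best]
    return pop[np.argmin(fit)], fit.min()

x_best, f_best = rea(simulate_hvac, dim=4)
print(x_best, f_best)
```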

  10. Color-SIFT model: a robust and an accurate shot boundary detection algorithm

    Science.gov (United States)

    Sharmila Kumari, M.; Shekar, B. H.

    2010-02-01

    In this paper, a new technique called the color-SIFT model is devised for shot boundary detection. Unlike the scale invariant feature transform model, which uses only grayscale information and misses important visual information regarding color, here we adopt different color planes to extract keypoints which are subsequently used to detect shot boundaries. The basic SIFT model has four stages, namely scale-space peak selection, keypoint localization, orientation assignment and keypoint description, and all four stages were employed to extract key descriptors in each color plane. The proposed model works on three different color planes, and a fusion is made to take a decision on the number of keypoint matches for shot boundary identification; it is hence different from the color global scale invariant feature transform, which works on quantized images. In addition, the proposed algorithm possesses invariance to linear transformations and is robust to occlusion and noisy environments. Experiments have been conducted on the standard TRECVID video database to reveal the performance of the proposed model.

  11. A robust, efficient and accurate β-pdf integration algorithm in nonpremixed turbulent combustion

    International Nuclear Information System (INIS)

    Liu, H.; Lien, F.S.; Chui, E.

    2005-01-01

    Among many presumed-shape pdf approaches, the presumed β-function pdf is widely used in nonpremixed turbulent combustion models in the literature. However, singularity difficulties at Z = 0 and 1, Z being the mixture fraction, may be encountered in the numerical integration of the β-function pdf, and there are few publications addressing this issue to date. The present study proposes an efficient, robust and accurate algorithm to overcome these numerical difficulties. The present treatment of the β-pdf integration is first used in the Burke-Schumann solution in conjunction with the k-ε turbulence model in the case of CH4/H2 bluff-body jets and flames. Afterward it is extended to a more complex model, the laminar flamelet model, for the same flow. Numerical results obtained by using the proposed β-pdf integration method are compared to experimental values of the velocity field, temperature and constituent mass fractions to illustrate the efficiency and accuracy of the present method. (author)
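
    The singularity issue can be illustrated with a common workaround (not necessarily the authors' algorithm): treat thin end intervals near Z = 0 and Z = 1 analytically through the beta CDF, where the integrable singularities live, and integrate the smooth interior numerically. The flamelet-like profile T(Z) and the mean/variance values below are placeholders.

```python
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

def beta_pdf_mean(f, a, b, eps=1e-6):
    """
    Presumed-beta-pdf average of f(Z) over the mixture fraction Z. The end intervals
    [0, eps] and [1-eps, 1] are handled with the regularized incomplete beta function
    (the beta CDF), so singularities at Z = 0 or 1 when a < 1 or b < 1 cause no trouble.
    """
    w_left = beta.cdf(eps, a, b)                 # probability mass near Z = 0
    w_right = 1.0 - beta.cdf(1.0 - eps, a, b)    # probability mass near Z = 1
    end_contrib = f(0.0) * w_left + f(1.0) * w_right   # f taken constant on the tiny end intervals
    interior, _ = quad(lambda z: f(z) * beta.pdf(z, a, b), eps, 1.0 - eps, limit=200)
    return end_contrib + interior

# Example: beta parameters from a mean mixture fraction of 0.2 and its variance,
# averaged over a dummy flamelet-like temperature profile T(Z).
zmean, zvar = 0.2, 0.01
g = zmean * (1.0 - zmean) / zvar - 1.0
a, b = zmean * g, (1.0 - zmean) * g
T = lambda z: 300.0 + 1500.0 * 4.0 * z * (1.0 - z)
print(beta_pdf_mean(T, a, b))
```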

  12. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    Science.gov (United States)

    Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments on simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.

  13. Geometric optimization of thermoelectric coolers in a confined volume using genetic algorithms

    International Nuclear Information System (INIS)

    Cheng, Y.-H.; Lin, W.-K.

    2005-01-01

    The demand for thermoelectric coolers (TEC) has grown significantly because of the need for a steady, low-temperature operating environment for various electronic devices such as laser diodes, semiconductor equipment, infrared detectors and others. The cooling capacity and the coefficient of performance (COP) are both extremely important in considering applications. Optimizing the dimensions of the TEC legs provides the advantage of increasing the cooling capacity while simultaneously respecting a minimum COP. This study proposes a method of optimizing the dimensions of the TEC legs using genetic algorithms (GAs) to maximize the cooling capacity. A confined volume in which the TEC can be placed and the technological limitations in manufacturing a TEC leg were considered, and three parameters - leg length, leg area and the number of legs - were taken as the variables to be optimized. The constraints of minimum COP and maximum material cost were set, and a genetic search was performed to determine the optimal dimensions of the TEC legs. This work reveals that optimizing the dimensions of the TEC can increase its cooling capacity. The results also show that GAs can determine the optimal dimensions according to various input currents and various cold-side operating temperatures.
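
    A heavily hedged sketch of such a constrained search over leg length, leg area and leg count: the cooling-capacity and COP expressions are the textbook single-stage TEC leg relations with placeholder material properties and constraint limits, penalties enforce the COP and confined-volume constraints, and scipy's differential evolution stands in for the genetic algorithm used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Placeholder thermoelectric properties, operating point and design limits (all assumed).
alpha, rho, k = 2.0e-4, 1.0e-5, 1.5     # Seebeck (V/K), resistivity (ohm*m), conductivity (W/m/K)
Th, Tc, I = 300.0, 280.0, 3.0           # hot/cold side temperatures (K), drive current (A)
COP_MIN, MAX_LEG_VOLUME = 0.5, 1.0e-6   # assumed constraint values

def tec_performance(L, A, n_legs):
    """Textbook single-stage TEC leg model: cooling power Qc and COP for n_legs legs."""
    R, K = rho * L / A, k * A / L
    qc = n_legs * (alpha * I * Tc - 0.5 * I**2 * R - K * (Th - Tc))
    p_in = n_legs * (I**2 * R + alpha * I * (Th - Tc))
    return qc, qc / p_in

def objective(x):
    L, A, n_legs = x[0], x[1], int(round(x[2]))
    qc, cop = tec_performance(L, A, n_legs)
    penalty = 0.0
    if cop < COP_MIN:                                # minimum-COP constraint
        penalty += 1e3 * (COP_MIN - cop)
    if n_legs * L * A > MAX_LEG_VOLUME:              # confined-volume / material-cost constraint
        penalty += 1e6 * (n_legs * L * A - MAX_LEG_VOLUME)
    return -qc + penalty                             # maximize Qc -> minimize its negative

bounds = [(0.5e-3, 3e-3), (0.25e-6, 4e-6), (10, 200)]   # L (m), A (m^2), number of legs
res = differential_evolution(objective, bounds, seed=1)
print(res.x, -res.fun)
```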

  14. A Robust Registration Algorithm for Point Clouds from UAV Images for Change Detection

    Science.gov (United States)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs

  15. Robustness of SOC Estimation Algorithms for EV Lithium-Ion Batteries against Modeling Errors and Measurement Noise

    Directory of Open Access Journals (Sweden)

    Xue Li

    2015-01-01

    Full Text Available State of charge (SOC) is one of the most important parameters in a battery management system (BMS). There are numerous algorithms for SOC estimation, mostly of model-based observer/filter types such as Kalman filters, closed-loop observers, and robust observers. Modeling errors and measurement noises have a critical impact on the accuracy of SOC estimation in these algorithms. This paper is a comparative study of the robustness of SOC estimation algorithms against modeling errors and measurement noises. By using a typical battery platform for vehicle applications with sensor noise and battery aging characterization, three popular and representative SOC estimation methods (extended Kalman filter, PI-controlled observer, and H∞ observer) are compared on such robustness. The simulation and experimental results demonstrate the deterioration of SOC estimation accuracy under modeling errors resulting from aging and under larger measurement noise, which is quantitatively characterized. The findings of this paper provide useful information on the following aspects: (1) how SOC estimation accuracy depends on modeling reliability and voltage measurement accuracy; (2) the pros and cons of typical SOC estimators in their robustness and reliability; (3) guidelines for requirements on battery system identification and sensor selection.

  16. Robust Fault-Tolerant Control for Satellite Attitude Stabilization Based on Active Disturbance Rejection Approach with Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Fei Song

    2014-01-01

    Full Text Available This paper proposes a robust fault-tolerant control algorithm for satellite stabilization based on the active disturbance rejection approach with an artificial bee colony algorithm. The actuating mechanism of the attitude control system consists of three working reaction flywheels and one spare reaction flywheel. The speed measurement of the reaction flywheels is adopted for fault detection. If any reaction flywheel fault is detected, the corresponding faulty flywheel is isolated and the spare reaction flywheel is activated to counteract the fault effect and ensure that the satellite works safely and reliably. The active disturbance rejection approach is employed to design the controller, which handles input information with a tracking differentiator, estimates system uncertainties with an extended state observer, and generates control variables by state feedback and compensation. The designed active disturbance rejection controller is robust to both internal dynamics and external disturbances. The bandwidth parameter of the extended state observer is optimized by the artificial bee colony algorithm so as to improve the performance of the attitude control system. A series of simulation results demonstrates the superior performance of the proposed robust fault-tolerant control algorithm.
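
    A small sketch of the extended state observer at the heart of active disturbance rejection control, discretized with a simple Euler step and using the common bandwidth parameterization of the gains; the bandwidth is exactly the parameter the abstract says the artificial bee colony search tunes. The toy plant and all numbers are assumptions, not the satellite model.

```python
import numpy as np

def eso_step(z, y, u, b0, omega_o, dt):
    """
    One Euler step of a third-order linear extended state observer (ESO).
    z = [z1, z2, z3] estimates the output, its derivative and the 'total disturbance';
    gains follow the bandwidth parameterization [3*wo, 3*wo^2, wo^3].
    """
    z1, z2, z3 = z
    e = y - z1
    b1, b2, b3 = 3 * omega_o, 3 * omega_o**2, omega_o**3
    z1 += dt * (z2 + b1 * e)
    z2 += dt * (z3 + b2 * e + b0 * u)
    z3 += dt * (b3 * e)
    return np.array([z1, z2, z3])

# Toy double-integrator plant with an unknown constant disturbance, to watch z3 converge to it.
dt, omega_o, b0 = 0.001, 50.0, 1.0
x = np.zeros(2)            # true plant state [position, velocity]
z = np.zeros(3)            # observer state
disturbance = 0.7          # unknown to the observer
for _ in range(5000):
    u = 0.0                                          # open loop, just to exercise the observer
    x = x + dt * np.array([x[1], b0 * u + disturbance])
    z = eso_step(z, y=x[0], u=u, b0=b0, omega_o=omega_o, dt=dt)
print("estimated total disturbance:", z[2])          # approaches 0.7
```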

  17. Robust iris segmentation through parameterization of the Chan-Vese algorithm

    CSIR Research Space (South Africa)

    Mabuza-Hocquet, G

    2015-06-01

    Full Text Available The performance of an iris recognition system relies on automated processes from the segmentation stage to the matching stage. Each stage has traditional algorithms used successfully over the years. The drawback is that these algorithms assume...

  18. A Robust Algorithm to Determine the Topology of Space from the Cosmic Microwave Background Radiation

    OpenAIRE

    Weeks, Jeffrey R.

    2001-01-01

    Satellite measurements of the cosmic microwave background radiation will soon provide an opportunity to test whether the universe is multiply connected. This paper presents a new algorithm for deducing the topology of the universe from the microwave background data. Unlike an older algorithm, the new algorithm gives the curvature of space and the radius of the last scattering surface as outputs, rather than requiring them as inputs. The new algorithm is also more tolerant of erro...

  19. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    Energy Technology Data Exchange (ETDEWEB)

    Gongzhang, R.; Xiao, B.; Lardner, T.; Gachagan, A. [Centre for Ultrasonic Engineering, University of Strathclyde, Glasgow, G1 1XW (United Kingdom); Li, M. [School of Engineering, University of Glasgow, Glasgow, G12 8QQ (United Kingdom)

    2014-02-18

    This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on parameter selection, especially when the signal to noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band, in which a defect is deemed present when all frequency components have a uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied on the austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20 dB for both samples and, consequently, defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.

  20. A ROBUST REGISTRATION ALGORITHM FOR POINT CLOUDS FROM UAV IMAGES FOR CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    A. Al-Rawabdeh

    2016-06-01

    Full Text Available Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images

  1. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-01-01

    and attitude angle estimates to allow a more robust tracking of the runway under turbulence. We illustrate the performance of the proposed lane detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center

  2. An efficient and robust algorithm for parallel groupwise registration of bone surfaces

    NARCIS (Netherlands)

    van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.

    2012-01-01

    In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm

  3. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information, and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is complicated to implement and computationally intensive, and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
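
    For reference, the PoCA baseline discussed above reduces to a small piece of geometry: the midpoint of the shortest segment joining the incoming and outgoing straight-line tracks of a muon. The sketch below is that generic computation, not the paper's Angle Statistics Reconstruction; the track endpoints in the example are made up.

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """
    Point of Closest Approach between the incoming track (point p1, direction d1)
    and the outgoing track (point p2, direction d2); PoCA-style algorithms take the
    midpoint of the shortest connecting segment as the scattering vertex.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = np.dot(d1, d1), np.dot(d1, d2), np.dot(d2, d2)
    d, e = np.dot(d1, w0), np.dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:                  # near-parallel tracks: no well-defined vertex
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    closest_in = p1 + s * d1                # closest point on the incoming track
    closest_out = p2 + t * d2               # closest point on the outgoing track
    return 0.5 * (closest_in + closest_out)

# Example: an incoming vertical muon and an outgoing track deflected near the origin.
p_in, d_in = np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, -1.0])
p_out, d_out = np.array([0.1, 0.0, -10.0]), np.array([0.01, 0.0, -1.0])
print(poca(p_in, d_in, p_out, d_out))       # close to [0, 0, 0]
```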

  4. Get Your Atoms in Order--An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm.

    Science.gov (United States)

    Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A

    2015-10-26

    Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
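
    Since the abstract states that the algorithm is implemented in the open-source RDKit toolkit, a short illustration (assuming the rdkit package is installed) is to query the canonical atom ranks and check that the resulting canonical SMILES is independent of the input atom order; the toy molecule below is arbitrary.

```python
from rdkit import Chem

smi = "c1ccccc1O"                         # phenol, written with an arbitrary atom order
mol = Chem.MolFromSmiles(smi)

# Canonical ranks: entry i is the canonical index assigned to input atom i.
ranks = list(Chem.CanonicalRankAtoms(mol, breakTies=True))
print("canonical ranks:", ranks)

# Renumbering the atoms by those ranks and re-writing the SMILES gives the same
# canonical string no matter how the input atoms were ordered.
order = sorted(range(mol.GetNumAtoms()), key=lambda i: ranks[i])
renumbered = Chem.RenumberAtoms(mol, order)
print(Chem.MolToSmiles(renumbered, canonical=True))
print(Chem.MolToSmiles(Chem.MolFromSmiles("Oc1ccccc1"), canonical=True))  # identical output
```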

  5. Simple Algorithms to Calculate Asymptotic Null Distributions of Robust Tests in Case-Control Genetic Association Studies in R

    Directory of Open Access Journals (Sweden)

    Wing Kam Fung

    2010-02-01

    Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT) is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance) is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependent CATT. Two robust tests, MAX3 and the genetic model selection (GMS), were recently proposed. Their asymptotic null distributions are often obtained by Monte-Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data of the 17 most significant SNPs reported in four genome-wide association studies.

  6. Combining model based and data based techniques in a robust bridge health monitoring algorithm.

    Science.gov (United States)

    2014-09-01

    Structural Health Monitoring (SHM) aims to analyze civil, mechanical and aerospace systems in order to assess : incipient damage occurrence. In this project, we are concerned with the development of an algorithm within the : SHM paradigm for applicat...

  7. An effective, robust and parallel implementation of an interior point algorithm for limit state optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Frier, Christian

    2014-01-01

    A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium...

  8. An L1-TV algorithm for robust perspective photometric stereo with spatially-varying lightings

    DEFF Research Database (Denmark)

    Quéau, Yvain; Lauze, Francois Bernard; Durou, Jean-Denis

    2015-01-01

    We tackle the problem of perspective 3D-reconstruction of Lambertian surfaces through photometric stereo, in the presence of outliers to Lambert's law, depth discontinuities, and unknown spatially-varying lightings. To this purpose, we introduce a robust $L^1$-TV variational formulation of the re...

  9. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    Science.gov (United States)

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

    Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their advantages of convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuation under the influence of non-line-of-sight (NLOS) conditions in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in a mixed LOS/NLOS scenario and propose a novel localization algorithm: Gaussian mixed model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixed model (GMM). A dissimilarity matrix is built to generate relative coordinates of nodes using a multi-dimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via a coordinate transformation. Our algorithm can perform localization well without being provided with prior knowledge. The experimental verification shows that GMDS effectively reduces the NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when a single NLOS path is extended to multiple NLOS paths.
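
    A hedged sketch of the MDS-plus-coordinate-transformation part of the pipeline: classical MDS recovers relative node coordinates from a (noisy) dissimilarity matrix, and a Procrustes-style fit to the anchors maps them into the world frame. The GMM-based RSSI-to-distance estimation is not reproduced here; the toy network and noise level are placeholders.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: relative coordinates from a pairwise dissimilarity (distance) matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                       # double centering
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

def align_to_anchors(rel, anchor_idx, anchors_true):
    """Similarity transform (scale, rotation/reflection, translation) fitted on the anchors."""
    X, Y = rel[anchor_idx], anchors_true
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, s, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt
    scale = s.sum() / (Xc ** 2).sum()
    return lambda P: scale * (P - mx) @ R + my

# Toy network: 4 anchors with known positions plus 1 target, with noisy distance estimates.
rng = np.random.default_rng(3)
true = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [6.0, 4.0]], float)   # last row = target
D = np.linalg.norm(true[:, None] - true[None, :], axis=-1)
D += rng.normal(scale=0.2, size=D.shape)
D = np.abs((D + D.T) / 2)
np.fill_diagonal(D, 0)

rel = classical_mds(D)
to_world = align_to_anchors(rel, anchor_idx=[0, 1, 2, 3], anchors_true=true[:4])
print("estimated target position:", to_world(rel)[4])   # close to (6.0, 4.0)
```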

  10. A robust and accurate binning algorithm for metagenomic sequences with arbitrary species abundance ratio.

    Science.gov (United States)

    Leung, Henry C M; Yiu, S M; Yang, Bin; Peng, Yu; Wang, Yi; Liu, Zhihua; Chen, Jingchi; Qin, Junjie; Li, Ruiqiang; Chin, Francis Y L

    2011-06-01

    With the rapid development of next-generation sequencing techniques, metagenomics, also known as environmental genomics, has emerged as an exciting research area that enables us to analyze the microbial environment in which we live. An important step for metagenomic data analysis is the identification and taxonomic characterization of DNA fragments (reads or contigs) resulting from sequencing a sample of mixed species. This step is referred to as 'binning'. Binning algorithms that are based on sequence similarity and sequence composition markers rely heavily on the reference genomes of known microorganisms or phylogenetic markers. Due to the limited availability of reference genomes and the bias and low availability of markers, these algorithms may not be applicable in all cases. Unsupervised binning algorithms which can handle fragments from unknown species provide an alternative approach. However, existing unsupervised binning algorithms only work on datasets either with balanced species abundance ratios or rather different abundance ratios, but not both. In this article, we present MetaCluster 3.0, an integrated binning method based on the unsupervised top-down separation and bottom-up merging strategy, which can bin metagenomic fragments of species with very balanced abundance ratios (say 1:1) to very different abundance ratios (e.g. 1:24) with consistently higher accuracy than existing methods. MetaCluster 3.0 can be downloaded at http://i.cs.hku.hk/~alse/MetaCluster/.

  11. Robust low frequency current ripple elimination algorithm for grid-connected fuel cell systems with power balancing technique

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong-Soo; Choe, Gyu-Yeong; Lee, Byoung-Kuk [School of Information and Communication Engineering, Sungkyunkwan University, 300 Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do 440-746 (Korea, Republic of); Kang, Hyun-Soo [R and D Center, Advanced Drive Technology (ADT) Company, 689-26 Geumjeong-dong, Gunpo-si, Gyeonggi-do 435-862 (Korea, Republic of)

    2011-05-15

    The low frequency current ripple in grid-connected fuel cell systems is generated by dc-ac inverter operation, which produces a 60 Hz fundamental component and has harmful effects on the fuel cell stack itself, such as slowing cathode surface responses, increasing fuel consumption by more than 10%, creating oxygen starvation, reducing the operating lifetime, and incurring nuisance tripping such as in an overload situation. For these reasons, low frequency current ripple makes the fuel cell system unstable and shortens the lifetime of the fuel cell stack. This paper presents a fast and robust control algorithm to eliminate low frequency current ripple in grid-connected fuel cell systems. Compared with conventional methods, in the proposed control algorithm the dc link voltage controller is shifted from the dc-dc converter to the dc-ac inverter, so that the dc-ac inverter handles dc link voltage control and output current control simultaneously with the help of a power balancing technique. The results indicate that the proposed algorithm can not only completely eliminate the current ripple but also significantly reduce the overshoot or undershoot during transient states without any extra hardware. The validity of the proposed algorithm is verified by computer simulations and also by experiments with a 1 kW laboratory prototype. (author)

  12. Robust and rapid algorithms facilitate large-scale whole genome sequencing downstream analysis in an integrative framework.

    Science.gov (United States)

    Li, Miaoxin; Li, Jiang; Li, Mulin Jun; Pan, Zhicheng; Hsu, Jacob Shujui; Liu, Dajiang J; Zhan, Xiaowei; Wang, Junwen; Song, Youqiang; Sham, Pak Chung

    2017-05-19

    Whole genome sequencing (WGS) is a promising strategy to unravel variants or genes responsible for human diseases and traits. However, there is a lack of robust platforms for a comprehensive downstream analysis. In the present study, we first proposed three novel algorithms - sequence gap-filled gene feature annotation, bit-block encoded genotypes and sectional fast access to text lines - to address three fundamental problems. The three algorithms then formed the infrastructure of a robust parallel computing framework, KGGSeq, for integrating downstream analysis functions for whole genome sequencing data. KGGSeq has been equipped with a comprehensive set of analysis functions for quality control, filtration, annotation, pathogenic prediction and statistical tests. In tests with whole genome sequencing data from the 1000 Genomes Project, KGGSeq annotated several thousand more reliable non-synonymous variants than other widely used tools (e.g. ANNOVAR and SNPEff). It took only around half an hour on a small server with 10 CPUs to access genotypes of ∼60 million variants of 2504 subjects, while a popular alternative tool required around one day. KGGSeq's bit-block genotype format used 1.5% or less space to flexibly represent phased or unphased genotypes with multiple alleles and was over 1000 times faster at calculating genotypic correlation. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
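
    To make the bit-block idea concrete, the toy sketch below packs biallelic genotypes (0/1/2 alt-allele counts, 3 for missing) two bits each, four per byte, and unpacks them again. This is only a simplified illustration of the general principle, not KGGSeq's actual on-disk format, which also handles phased and multi-allelic genotypes.

```python
import numpy as np

def pack_genotypes(g):
    """Pack genotypes coded 0/1/2 (alt-allele counts) and 3 (missing) into 2 bits each."""
    g = np.asarray(g, dtype=np.uint8)
    pad = (-len(g)) % 4
    g = np.concatenate([g, np.zeros(pad, dtype=np.uint8)])
    quads = g.reshape(-1, 4)
    shifts = np.array([0, 2, 4, 6], dtype=np.uint8)
    packed = (quads << shifts).sum(axis=1).astype(np.uint8)   # four genotypes per byte
    return packed, len(g) - pad

def unpack_genotypes(packed, n):
    shifts = np.array([0, 2, 4, 6], dtype=np.uint8)
    quads = (packed[:, None] >> shifts) & 0b11
    return quads.reshape(-1)[:n]

genotypes = np.array([0, 1, 2, 0, 3, 2, 1, 0, 0, 2], dtype=np.uint8)  # 10 subjects, one variant
packed, n = pack_genotypes(genotypes)
print(packed.nbytes, "bytes instead of", genotypes.nbytes)            # 3 bytes instead of 10
assert np.array_equal(unpack_genotypes(packed, n), genotypes)
```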

  13. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Directory of Open Access Journals (Sweden)

    Prajakta Desai

    Full Text Available Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  14. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Science.gov (United States)

    Desai, Prajakta; Loke, Seng W; Desai, Aniruddha

    2017-01-01

    Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  15. Towards a robust algorithm to determine topological domains from colocalization data

    Directory of Open Access Journals (Sweden)

    Alexander P. Moscalets

    2015-09-01

    Full Text Available One of the most important tasks in understanding the complex spatial organization of the genome consists in extracting information about this spatial organization, the function and structure of chromatin topological domains from existing experimental data, in particular, from genome colocalization (Hi-C) matrices. Here we present an algorithm that allows one to reveal the underlying hierarchical domain structure of a polymer conformation by analyzing the modularity of colocalization matrices. We also test this algorithm on several model polymer structures: equilibrium globules, random fractal globules and regular fractal (Peano) conformations. We define what we call a spectrum of cluster borders, and show that these spectra behave strikingly differently for equilibrium and fractal conformations, allowing us to suggest an additional criterion to identify fractal polymer conformations.

  16. Meta-algorithmics patterns for robust, low cost, high quality systems

    CERN Document Server

    Simske, Steven J

    2013-01-01

    The confluence of cloud computing, parallelism and advanced machine intelligence approaches has created a world in which the optimum knowledge system will usually be architected from the combination of two or more knowledge-generating systems. There is a need, then, to provide a reusable, broadly-applicable set of design patterns to empower the intelligent system architect to take advantage of this opportunity. This book explains how to design and build intelligent systems that are optimized for changing system requirements (adaptability), optimized for changing system input (robustness), an

  17. Robust surface registration using salient anatomical features for image-guided liver surgery: Algorithm and validation

    OpenAIRE

    Clements, Logan W.; Chapman, William C.; Dawant, Benoit M.; Galloway, Robert L.; Miga, Michael I.

    2008-01-01

    A successful surface-based image-to-physical space registration in image-guided liver surgery (IGLS) is critical to provide reliable guidance information to surgeons and pertinent surface displacement data for use in deformation correction algorithms. The current protocol used to perform the image-to-physical space registration involves an initial pose estimation provided by a point based registration of anatomical landmarks identifiable in both the preoperative tomograms and the intraoperati...

  18. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    Ibn-Elhaj E

    2009-01-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques will fail to work because they will also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and the matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.

  19. A Robust Subpixel Motion Estimation Algorithm Using HOS in the Parametric Domain

    Directory of Open Access Journals (Sweden)

    E. M. Ismaili Aalaoui

    2009-02-01

    Full Text Available Motion estimation techniques are widely used in today's video processing systems. The most frequently used techniques are the optical flow method and the phase correlation method. The vast majority of these algorithms assume noise-free data. Thus, when the image sequences are severely corrupted by additive Gaussian (or perhaps non-Gaussian) noise of unknown covariance, the classical techniques will fail to work because they will also estimate the noise spatial correlation. In this paper, we have studied this topic from a different viewpoint to explore the fundamental limits in image motion estimation. Our scheme is based on a subpixel motion estimation algorithm using the bispectrum in the parametric domain. The motion vector of a moving object is estimated by solving linear equations involving the third-order hologram and the matrix containing the Dirac delta function. Simulation results are presented and compared to the optical flow and phase correlation algorithms; this approach provides more reliable displacement estimates, particularly for complex noisy image sequences. In our simulations, we used a database freely available on the web.
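
    For context, the phase-correlation baseline that both of the above records compare against can be sketched in a few lines: a phase-only cross-power spectrum gives a correlation peak at the displacement, and a parabolic fit around the peak yields subpixel precision. This is the classical baseline, not the authors' bispectrum-based method; the test frames are synthetic.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Classical phase-correlation shift estimate of frame a relative to frame b,
    with 1-D parabolic peak interpolation for subpixel refinement."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)              # keep phase only
    corr = np.real(np.fft.ifft2(R))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def subpixel(c, p, n):                          # parabolic fit around the peak
        c0, c1, c2 = c[(p - 1) % n], c[p], c[(p + 1) % n]
        denom = c0 - 2 * c1 + c2
        return p + (0.5 * (c0 - c2) / denom if abs(denom) > 1e-12 else 0.0)

    dy = subpixel(corr[:, px], py, corr.shape[0])
    dx = subpixel(corr[py, :], px, corr.shape[1])
    if dy > corr.shape[0] / 2: dy -= corr.shape[0]  # wrap to signed shifts
    if dx > corr.shape[1] / 2: dx -= corr.shape[1]
    return dy, dx

# Synthetic check: shift a random frame by a known amount and recover it.
rng = np.random.default_rng(0)
frame = rng.random((128, 128))
moved = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(moved, frame))        # approximately (3.0, -5.0)
```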

  20. An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS

    Directory of Open Access Journals (Sweden)

    Shou Yu-Wen

    2010-01-01

    Full Text Available We propose an efficient algorithm for removing shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a brand-new and complete structure for feature combination as well as analysis for orienting and labeling moving shadows, so as to extract the defined foreground objects more easily in each snapshot of the original video files, which are acquired in real traffic situations. Moreover, we make use of a Gaussian Mixture Model (GMM) for background removal and detection of moving shadows in our tested images, and define two indices for characterizing non-shadowed regions: one indicates the characteristics of lines, and the other is characterized by the gray-scale information of the images, which helps us to build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of our moving shadow algorithm, we apply it to a practical application of traffic flow detection in ITS (Intelligent Transportation System): vehicle counting. Our algorithm achieves a fast processing speed of 13.84 ms/frame and improves the accuracy rate by 4% to 10% for our three tested videos in the vehicle counting experiments.

  1. A novel robust and efficient algorithm for charge particle tracking in high background flux

    International Nuclear Information System (INIS)

    Fanelli, C; Cisbani, E; Dotto, A Del

    2015-01-01

    The high luminosity that will be reached in the new generation of High Energy Particle and Nuclear physics experiments implies high background rates and large tracker occupancy, therefore representing a new challenge for particle tracking algorithms. For instance, at Jefferson Laboratory (JLab) (VA, USA), one of the most demanding experiments in this respect, performed with a 12 GeV electron beam, is characterized by a luminosity of up to 10^39 cm^-2 s^-1. For this purpose, Gaseous Electron Multiplier (GEM) based trackers are under development for a new spectrometer that will operate at these high rates in Hall A of JLab. Within this context, we developed a new tracking algorithm based on a multistep approach: (i) all hardware information - time and charge - is exploited to minimize the number of hits to associate; (ii) a dedicated Neural Network (NN) has been designed for a fast and efficient association of the hits measured by the GEM detector; (iii) the measurements of the associated hits are further improved in resolution through the application of a Kalman filter and a Rauch-Tung-Striebel smoother. The algorithm is briefly presented along with a discussion of the promising first results. (paper)
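
    The final refinement stage named above (Kalman filter followed by a Rauch-Tung-Striebel smoother) can be sketched on a 1-D constant-velocity toy track; the model matrices, noise levels and data are placeholders, not the GEM-specific implementation.

```python
import numpy as np

def kalman_rts(measurements, dt=1.0, q=1e-3, r=0.05):
    """Constant-velocity Kalman filter plus Rauch-Tung-Striebel smoother on 1-D position hits."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition: [position, velocity]
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r**2]])

    n = len(measurements)
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = np.array([measurements[0], 0.0]), np.eye(2)
    for z in measurements:
        xp, Pp = F @ x, F @ P @ F.T + Q            # predict
        S = H @ Pp @ H.T + R                       # update
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs_p.append(xp); Ps_p.append(Pp); xs_f.append(x); Ps_f.append(P)

    # RTS backward pass refines every estimate using all later measurements.
    xs_s, Ps_s = [None] * n, [None] * n
    xs_s[-1], Ps_s[-1] = xs_f[-1], Ps_f[-1]
    for k in range(n - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs_s[k] = xs_f[k] + C @ (xs_s[k + 1] - xs_p[k + 1])
        Ps_s[k] = Ps_f[k] + C @ (Ps_s[k + 1] - Ps_p[k + 1]) @ C.T
    return np.array(xs_s)

# Noisy hits along a straight 1-D track.
rng = np.random.default_rng(1)
true_pos = 0.2 * np.arange(20)
hits = true_pos + rng.normal(scale=0.05, size=20)
print(kalman_rts(hits)[:, 0])                      # smoothed positions, closer to the true track
```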

  2. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images with, say, illumination changes, but may not work properly on another set of image variations, such as expression variations. This study is motivated by the fact that no single classifier can claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of each classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, which perform better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies: weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expression.

  3. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

    In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for noisy channels. For transmissions over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with the split vector quantizer. After that, we applied the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 kbit/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP, ensures better performance than the original MELP MSVQ at 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields a significant improvement in LSF encoding performance by ensuring reliable transmission over noisy channels.

  4. Application of a New Robust ECG T-Wave Delineation Algorithm for the Evaluation of the Autonomic Innervation of the Myocardium

    DEFF Research Database (Denmark)

    Cesari, Matteo; Mehlsen, Jesper; Mehlsen, Anne-Birgitte

    2016-01-01

    T-wave amplitude (TWA) is a well-known index of the autonomic innervation of the myocardium. However, until now it has been evaluated only manually or with simple and inefficient algorithms. In this paper, we developed a new robust single-lead electrocardiogram (ECG) T-wave delineation algorithm...

  5. A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures

    Science.gov (United States)

    2014-01-01

    Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in

  6. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    International Nuclear Information System (INIS)

    Woods, K; DiCostanzo, D; Gupta, N

    2016-01-01

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. The high Z geometric integrity analysis was performed using the CIRS phantom, while the artifact reduction analysis was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation were compared in a volume of interest (VOI) in the surrounding material (water and water-equivalent material, ∼0 HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality in the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly
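
    As a minimal illustration of the VOI statistic reported above (mean and standard deviation of HU values inside a region of surrounding material), the numpy sketch below compares a MAR and a non-MAR reconstruction. The arrays are synthetic stand-ins, not phantom data.

      import numpy as np

      def voi_stats(ct_volume, voi_mask):
          """Mean and standard deviation of HU values inside a volume of interest."""
          values = ct_volume[voi_mask]
          return float(values.mean()), float(values.std())

      # synthetic stand-in: a water-equivalent background (~0 HU) with artifact-like noise
      rng = np.random.default_rng(0)
      ct_no_mar = rng.normal(0.0, 80.0, size=(32, 64, 64))   # large spread, as with streak artifacts
      ct_mar = rng.normal(0.0, 15.0, size=(32, 64, 64))      # reduced spread after "MAR"
      water_voi = np.zeros((32, 64, 64), dtype=bool)
      water_voi[10:20, 20:40, 20:40] = True

      print(voi_stats(ct_no_mar, water_voi))   # mean near 0 HU, large std
      print(voi_stats(ct_mar, water_voi))      # mean near 0 HU, much smaller std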

  7. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Woods, K; DiCostanzo, D; Gupta, N [Ohio State University Columbus, OH (United States)]

    2016-06-15

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. The high Z geometric integrity analysis was performed using the CIRS phantom, while the artifact reduction analysis was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation were compared in a volume of interest (VOI) in the surrounding material (water and water-equivalent material, ∼0 HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality in the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  8. Robust Algorithms for Detecting a Change in a Stochastic Process with Infinite Memory

    Science.gov (United States)

    1988-03-01

    (The abstract text in this record is interleaved two-column OCR and cannot be fully reconstructed. The recoverable fragments concern robust change detection for processes with infinite memory: a mixing assumption on the nominal measures, the breakdown point and influence function used as robustness measures, i.i.d. Gaussian sequences of identical variance, and a quantity chosen to produce the worst-case, i.e. earliest, breakdown.)

  9. Robust Longitudinal Aircraft- Control Based on an Adaptive Fuzzy-Logic Algorithm

    Directory of Open Access Journals (Sweden)

    Abdel-Latif Elshafei

    2002-06-01

    Full Text Available To study the aircraft response to a fast pull-up manoeuvre, a short period approximation of the longitudinal model is considered. The model is highly nonlinear and includes parametric uncertainties. To cope with a wide range of command signals, a robust adaptive fuzzy logic controller is proposed. The proposed controller adopts a dynamic inversion approach. Since feedback linearization is practically imperfect, robustifying and adaptive components are included in the control law to compensate for modeling errors and achieve acceptable tracking errors. Two fuzzy systems are implemented. The first system models the nominal values of the system’s nonlinearity. The second system is an adaptive one that compensates for modeling errors. The derivation of the control law based on a dynamic game approach is given in detail. Stability of the closed-loop control system is also verified. Simulation results based on an F16-model illustrate a successful tracking performance of the proposed controller.

  10. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    International Nuclear Information System (INIS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-01-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
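
    The production pipeline itself is not reproduced in this record; purely as an illustration of thin-plate-spline warp modeling, the sketch below fits a 2D TPS mapping from detected comb fiducials to their ideal grid positions using scipy's RBFInterpolator, then applies it to correct distorted coordinates. The fiducial arrays and the synthetic distortion are assumptions.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def build_tps_warp(detected_pts, ideal_pts, smoothing=0.0):
          """Fit a 2D thin-plate-spline mapping from distorted to ideal coordinates.

          detected_pts : (N, 2) fiducial positions found in the streak-camera image
          ideal_pts    : (N, 2) corresponding positions on the undistorted grid
          Returns a callable mapping (M, 2) distorted coordinates to corrected ones.
          """
          return RBFInterpolator(detected_pts, ideal_pts,
                                 kernel='thin_plate_spline', smoothing=smoothing)

      # toy usage with synthetic fiducials under a gentle nonlinear distortion
      rng = np.random.default_rng(1)
      ideal = rng.uniform(0, 100, size=(50, 2))
      detected = ideal + 0.002 * ideal**2 + rng.normal(0, 0.05, size=ideal.shape)
      warp = build_tps_warp(detected, ideal)
      corrected = warp(detected)          # ~ideal, up to the fitted residual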

  11. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Labaria, George R. [Univ. of California, Santa Cruz, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Warrick, Abbie L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, Peter M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kalantar, Daniel H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.

  12. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    CERN Document Server

    Parasuraman, Ramviyas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide red...

  13. Square-Wave Voltage Injection Algorithm for PMSM Position Sensorless Control With High Robustness to Voltage Errors

    DEFF Research Database (Denmark)

    Ni, Ronggang; Xu, Dianguo; Blaabjerg, Frede

    2017-01-01

    Rotor position estimated with high-frequency (HF) voltage injection methods can be distorted by voltage errors due to inverter nonlinearities, motor resistance, and rotational voltage drops, etc. This paper proposes an improved HF square-wave voltage injection algorithm, which is robust to voltage errors without any compensation and has less fluctuation in the position estimation error. The average position estimation error is investigated based on the analysis of phase harmonic inductances, and is deduced in the form of the phase shift of the second-order harmonic inductances to derive its relationship with the magnetic field distortion. Position estimation errors caused by higher-order harmonic inductances and voltage harmonics generated by the SVPWM are also discussed. Both simulations and experiments are carried out based on a commercial PMSM to verify the superiority of the proposed method.

  14. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Science.gov (United States)

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734

  15. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    Directory of Open Access Journals (Sweden)

    Ramviyas Parasuraman

    2014-12-01

    Full Text Available The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.

  16. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    Science.gov (United States)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm⁻³) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm⁻³), normal lung (ρ = 0.20 g cm⁻³) and cortical bone tissue (ρ = 1.80 g cm⁻³). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm²) and elongated rectangular (2.8 × 13 cm²) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm⁻³), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
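
    The γ-index evaluation mentioned above reduces, in one dimension, to a small computation; the sketch below implements a simplified global 1D γ for two depth-dose curves on a common grid. The 3 mm / 3% criteria and the toy curves are placeholders, and clinical tools normally evaluate γ on 2D/3D dose grids with sub-voxel search.

      import numpy as np

      def gamma_1d(ref_dose, eval_dose, positions, dta_mm=3.0, dose_crit=0.03):
          """Simplified global 1D gamma index against a reference curve.

          ref_dose, eval_dose : dose values sampled at `positions` (mm)
          dose_crit is a fraction of the reference maximum (global normalization).
          """
          dd_norm = dose_crit * ref_dose.max()
          gamma = np.empty_like(ref_dose, dtype=float)
          for i, (x, d_ref) in enumerate(zip(positions, ref_dose)):
              dist2 = ((positions - x) / dta_mm) ** 2
              dose2 = ((eval_dose - d_ref) / dd_norm) ** 2
              gamma[i] = np.sqrt(np.min(dist2 + dose2))
          return gamma                      # gamma <= 1 is a pass at each point

      # toy example: a slightly shifted and rescaled depth-dose curve
      z = np.arange(0.0, 200.0, 1.0)                    # depth in mm
      ref = np.exp(-z / 120.0)
      ev = 1.02 * np.exp(-(z - 1.0) / 120.0)
      pass_rate = np.mean(gamma_1d(ref, ev, z) <= 1.0)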

  17. Geometrical determinations of IMRT photon pencil-beam path in radiotherapy wedges and limit divergence angle with the Anisotropic Analytic Algorithm (AAA)

    Directory of Open Access Journals (Sweden)

    Francisco Casesnoves

    2014-08-01

    Full Text Available Purpose: Static wedge filters (WF) are commonly used in radiation therapy, in forward and/or inverse planning. We calculated the exact 2D/3D geometrical pathway of the photon beam through the usual alloy WF, in order to better relate the dose to the beam intensity attenuation factor(s) after the beam has passed through the WF. The objective was to provide a general formulation in the Anisotropic Analytical Algorithm (AAA) model coordinate system (depending on collimator/wedge angles) that can also be applied to other models. Additionally, a second purpose of this study was to develop an integral formulation for the 3D wedge exponential factor with statistical approximations, with an introduction of the limit angle/conformal wedge. Methods: The radiotherapy model used to develop this mathematical task is the classical superposition-convolution algorithm, AAA (developed by Ulmer and Harder). We worked with optimal geometrical approximations to make the computational IMRT calculations quicker and reduce the planning-system time. Analytic geometry and computational techniques to carry out simulations (for standard wedges) are detailed and developed sharply. Integral developments and integral-statistical approximations are explained. The beam-divergence limit angle for optimal wedge filtration formulas is calculated and sketched, with geometrical approximations. Fundamental trigonometry is used for this purpose. Results: Extensive simulation tables for WF of 15°, 30°, 45°, and 60° are shown with errors. As a result, it is possible to determine the best individual treatment dose distribution for each patient. We present these basic simulations/numerical examples for standard manufactured WF with a straight sloping surface, to check the accuracy/errors of the calculations. Simulation results give low RMS/relative error values (formulated for WF of 15°, 30°, 45°, and 60°). Conclusion: We obtained a series of formulas of analytic geometry for WF that can be applied for any particular dose

  18. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Science.gov (United States)

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce the computational complexity, and improve the pitch/roll estimation accuracy of the low-cost attitude heading reference system (AHRS) under conditions of magnetic-distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm is the combination of two-step geometrically-intuitive correction (TGIC) and the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable for attitude estimation under various dynamic conditions.

  19. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    Science.gov (United States)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  20. Robust species taxonomy assignment algorithm for 16S rRNA NGS reads: application to oral carcinoma samples

    Directory of Open Access Journals (Sweden)

    Nezar Noor Al-Hebshi

    2015-09-01

    Full Text Available Background: Usefulness of next-generation sequencing (NGS) in assessing bacteria associated with oral squamous cell carcinoma (OSCC) has been undermined by inability to classify reads to the species level. Objective: The purpose of this study was to develop a robust algorithm for species-level classification of NGS reads from oral samples and to pilot test it for profiling bacteria within OSCC tissues. Methods: Bacterial 16S V1-V3 libraries were prepared from three OSCC DNA samples and sequenced using 454's FLX chemistry. High-quality, well-aligned, and non-chimeric reads ≥350 bp were classified using a novel, multi-stage algorithm that involves matching reads to reference sequences in revised versions of the Human Oral Microbiome Database (HOMD), HOMD extended (HOMDEXT), and Greengene Gold (GGG) at alignment coverage and percentage identity ≥98%, followed by assignment to species level based on top hit reference sequences. Priority was given to hits in HOMD, then HOMDEXT and finally GGG. Unmatched reads were subject to operational taxonomic unit analysis. Results: Nearly 92.8% of the reads were matched to updated-HOMD 13.2, 1.83% to trusted-HOMDEXT, and 1.36% to modified-GGG. Of all matched reads, 99.6% were classified to species level. A total of 228 species-level taxa were identified, representing 11 phyla; the most abundant were Proteobacteria, Bacteroidetes, Firmicutes, Fusobacteria, and Actinobacteria. Thirty-five species-level taxa were detected in all samples. On average, Prevotella oris, Neisseria flava, Neisseria flavescens/subflava, Fusobacterium nucleatum ss polymorphum, Aggregatibacter segnis, Streptococcus mitis, and Fusobacterium periodontium were the most abundant. Bacteroides fragilis, a species rarely isolated from the oral cavity, was detected in two samples. Conclusion: This multi-stage algorithm maximizes the fraction of reads classified to the species level while ensuring reliable classification by giving priority to the
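
    A schematic sketch of the priority logic described above (HOMD first, then HOMDEXT, then GGG, requiring at least 98% alignment coverage and identity before a species-level call) is given below; the hit-record format and the upstream aligner that produces it are hypothetical.

      def classify_read(hits_by_db, min_identity=98.0, min_coverage=98.0):
          """Assign a species-level taxon to one read using database priority.

          hits_by_db : dict mapping database name -> list of alignment hits, where each
                       hit is a dict with 'identity', 'coverage' (percent) and 'species'.
                       (The hit format and upstream aligner are illustrative assumptions.)
          Returns (database, species) for the best qualifying hit, or None if the read
          should instead go to OTU-based analysis.
          """
          for db in ('HOMD', 'HOMDEXT', 'GGG'):          # priority order from the study
              qualifying = [h for h in hits_by_db.get(db, [])
                            if h['identity'] >= min_identity and h['coverage'] >= min_coverage]
              if qualifying:
                  best = max(qualifying, key=lambda h: (h['identity'], h['coverage']))
                  return db, best['species']
          return None

      # toy usage
      hits = {'HOMD': [{'identity': 97.1, 'coverage': 99.0, 'species': 'Prevotella oris'}],
              'HOMDEXT': [{'identity': 98.6, 'coverage': 98.4, 'species': 'Prevotella oris'}]}
      print(classify_read(hits))   # ('HOMDEXT', 'Prevotella oris')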

  1. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.

  2. A robust Hough transform algorithm for determining the radiation centers of circular and rectangular fields with subpixel accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Du Weiliang; Yang, James [Department of Radiation Physics, University of Texas M D Anderson Cancer Center, 1515 Holcombe Blvd, Unit 94, Houston, TX 77030 (United States)], E-mail: wdu@mdanderson.org

    2009-02-07

    Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error was translated to 0.02 mm, which was much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation.
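
    The study's own subpixel HT implementation is not reproduced in this record; as a rough illustration of comparing a centre-of-mass estimate with a Hough-transform circle fit, the OpenCV sketch below runs both on a synthetic circular field. The HoughCircles parameters and the synthetic image are illustrative only.

      import numpy as np
      import cv2

      # synthetic circular radiation field on a 256x256 portal image
      yy, xx = np.mgrid[0:256, 0:256]
      true_center = (131.0, 122.0)
      field = ((xx - true_center[0])**2 + (yy - true_center[1])**2 < 60**2).astype(np.uint8) * 200
      field = cv2.GaussianBlur(field, (7, 7), 2)         # soft, penumbra-like edge

      # center of mass of the intensity distribution
      m = cv2.moments(field)
      com = (m['m10'] / m['m00'], m['m01'] / m['m00'])

      # Hough-transform circle detection (parameters are illustrative)
      circles = cv2.HoughCircles(field, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                                 param1=50, param2=20, minRadius=40, maxRadius=80)
      hough_center = tuple(circles[0, 0, :2]) if circles is not None else None
      print(com, hough_center)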

  3. Performance and robustness of optimal fractional fuzzy PID controllers for pitch control of a wind turbine using chaotic optimization algorithms.

    Science.gov (United States)

    Asgharnia, Amirhossein; Shahnazi, Reza; Jamali, Ali

    2018-05-11

    The most studied controller for pitch control of wind turbines is the proportional-integral-derivative (PID) controller. However, due to uncertainties in wind turbine modeling and wind speed profiles, the need for more effective controllers is inevitable. On the other hand, the parameters of the PID controller are usually unknown and should be selected by the designer, which is neither a straightforward task nor optimal. To cope with these drawbacks, in this paper, two advanced controllers called fuzzy PID (FPID) and fractional-order fuzzy PID (FOFPID) are proposed to improve the pitch control performance. Meanwhile, to find the parameters of the controllers, chaotic evolutionary optimization methods are used. Using evolutionary optimization methods not only gives us the unknown parameters of the controllers but also guarantees the optimality based on the chosen objective function. To improve the performance of the evolutionary algorithms, chaotic maps are used. All the optimization procedures are applied to the 2-mass model of the 5-MW wind turbine. The proposed optimal controllers are validated using the FAST simulator developed by NREL. Simulation results demonstrate that the FOFPID controller can reach better performance and robustness while guaranteeing fewer fatigue damages in different wind speeds in comparison to the FPID, fractional-order PID (FOPID) and gain-scheduling PID (GSPID) controllers. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Intrinsic Grassmann Averages for Online Linear and Robust Subspace Learning

    DEFF Research Database (Denmark)

    Chakraborty, Rudrasis; Hauberg, Søren; Vemuri, Baba C.

    2017-01-01

    Principal Component Analysis (PCA) is a fundamental method for estimating a linear subspace approximation to high-dimensional data. Many algorithms exist in the literature to achieve a statistically robust version of PCA called RPCA. In this paper, we present a geometric framework for computing the p...

  5. SU-E-J-94: Geometric and Dosimetric Evaluation of Deformation Image Registration Algorithms Using Virtual Phantoms Generated From Patients with Lung Cancer

    International Nuclear Information System (INIS)

    Shen, Z; Greskovich, J; Xia, P; Bzdusek, K

    2015-01-01

    Purpose: To generate virtual phantoms with clinically relevant deformation and use them to objectively evaluate geometric and dosimetric uncertainties of deformable image registration (DIR) algorithms. Methods: Ten lung cancer patients undergoing adaptive 3DCRT planning were selected. For each patient, a pair of planning CT (pCT) and replanning CT (rCT) were used as the basis for virtual phantom generation. Manually adjusted meshes were created for selected ROIs (e.g. PTV, lungs, spinal cord, esophagus, and heart) on pCT and rCT. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was used to deform pCT to generate a simulated replanning CT (srCT) that was closely matched to rCT. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten virtual phantoms. The images, ROIs, and doses were mapped from pCT to srCT using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.85 to 0.96 for Demons, from 0.86 to 0.97 for intensity-based, and from 0.76 to 0.95 for B-Spline. The average Hausdorff distances for selected ROIs were from 2.2 to 5.4 mm for Demons, from 2.3 to 6.8 mm for intensity-based, and from 2.4 to 11.4 mm for B-Spline. The average absolute dose errors for selected ROIs were from 0.2 to 0.6 Gy for Demons, from 0.1 to 0.5 Gy for intensity-based, and from 0.5 to 1.5 Gy for B-Spline. Conclusion: Virtual phantoms were modeled after patients with lung cancer and were clinically relevant for adaptive radiotherapy treatment replanning. Virtual phantoms with known DVFs serve as references and can provide a fair comparison when evaluating different DIRs. Demons and intensity-based DIRs were shown to have smaller geometric and dosimetric uncertainties than B-Spline. Z Shen: None; K Bzdusek: an employee of Philips Healthcare; J Greskovich: None; P Xia
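
    The two geometric metrics reported above are easy to state concretely; the sketch below computes a Dice coefficient on binary masks and a symmetric Hausdorff distance on voxel coordinates via scipy. The toy spheres stand in for mapped and reference ROIs; in practice the Hausdorff distance is usually evaluated on structure surfaces in millimetres.

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def dice_coefficient(mask_a, mask_b):
          """Dice overlap between two boolean masks of the same shape."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      def hausdorff_distance(points_a, points_b):
          """Symmetric Hausdorff distance between two (N, 3) point sets."""
          return max(directed_hausdorff(points_a, points_b)[0],
                     directed_hausdorff(points_b, points_a)[0])

      # toy usage with two slightly shifted spheres on a voxel grid
      zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
      ref = (xx - 20)**2 + (yy - 20)**2 + (zz - 20)**2 < 10**2
      warped = (xx - 22)**2 + (yy - 20)**2 + (zz - 20)**2 < 10**2
      print(dice_coefficient(ref, warped))
      print(hausdorff_distance(np.argwhere(ref), np.argwhere(warped)))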

  6. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  7. Mechanical Model of Geometric Cell and Topological Algorithm for Cell Dynamics from Single-Cell to Formation of Monolayered Tissues with Pattern

    KAUST Repository

    Kachalo, Sëma

    2015-05-14

    Geometric and mechanical properties of individual cells and interactions among neighboring cells are the basis of formation of tissue patterns. Understanding the complex interplay of cells is essential for gaining insight into embryogenesis, tissue development, and other emerging behavior. Here we describe a cell model and an efficient geometric algorithm for studying the dynamic process of tissue formation in 2D (e.g. epithelial tissues). Our approach improves upon previous methods by incorporating properties of individual cells as well as detailed description of the dynamic growth process, with all topological changes accounted for. Cell size, shape, and division plane orientation are modeled realistically. In addition, cell birth, cell growth, cell shrinkage, cell death, cell division, cell collision, and cell rearrangements are now fully accounted for. Different models of cell-cell interactions, such as lateral inhibition during the process of growth, can be studied in detail. Cellular pattern formation for monolayered tissues from arbitrary initial conditions, including that of a single cell, can also be studied in detail. Computational efficiency is achieved through the employment of a special data structure that ensures access to neighboring cells in constant time, without additional space requirement. We have successfully generated tissues consisting of more than 20,000 cells starting from 2 cells within 1 hour. We show that our model can be used to study embryogenesis, tissue fusion, and cell apoptosis. We give detailed study of the classical developmental process of bristle formation on the epidermis of D. melanogaster and the fundamental problem of homeostatic size control in epithelial tissues. Simulation results reveal significant roles of solubility of secreted factors in both the bristle formation and the homeostatic control of tissue size. Our method can be used to study broad problems in monolayered tissue formation. Our software is publicly

  8. Design of Robust Supertwisting Algorithm Based Second-Order Sliding Mode Controller for Nonlinear Systems with Both Matched and Unmatched Uncertainty

    Directory of Open Access Journals (Sweden)

    Marwa Jouini

    2017-01-01

    Full Text Available This paper proposes a robust supertwisting algorithm (STA) design for nonlinear systems where both matched and unmatched uncertainties are considered. The main contribution resides primarily in conceiving a novel structure of the STA, in order to ensure the desired performance of the uncertain nonlinear system. The modified algorithm is formed of double closed-loop feedback, in which two linear terms are added to the classical STA. In addition, an integral sliding mode switching surface is proposed to construct the attractiveness and reachability of the sliding mode. Sufficient conditions are derived to guarantee the exact differentiation stability in finite time based on Lyapunov function theory. Finally, a comparative study for a variable-length pendulum system illustrates the robustness and the effectiveness of the proposed approach compared to other STA schemes.
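
    The paper's double-loop structure is not spelled out in this record; purely as an illustration of augmenting the classical super-twisting terms with linear terms (a generalized super-twisting form), the sketch below simulates such a law on a scalar sliding variable with a bounded matched disturbance. The gains, the disturbance, and the Euler integration are placeholders, not the paper's design.

      import numpy as np

      def simulate_gsta(k1=1.5, k2=1.1, k3=1.0, k4=1.0, dt=1e-3, T=5.0):
          """Simulate a generalized super-twisting law on a scalar sliding variable.

          Dynamics: s_dot = u + d(t), with a bounded, Lipschitz disturbance d(t).
          Control:  u = -k1*|s|**0.5*sign(s) - k3*s + v,   v_dot = -k2*sign(s) - k4*s
          Gains and the toy plant are illustrative, not the paper's tuning.
          """
          n = int(T / dt)
          s, v = 1.0, 0.0
          history = np.empty(n)
          for i in range(n):
              t = i * dt
              d = 0.4 * np.sin(2.0 * t)                 # matched disturbance
              u = -k1 * np.sqrt(abs(s)) * np.sign(s) - k3 * s + v
              v += dt * (-k2 * np.sign(s) - k4 * s)
              s += dt * (u + d)
              history[i] = s
          return history

      s_traj = simulate_gsta()
      print(abs(s_traj[-500:]).max())    # residual |s| after convergence (small, chattering-limited)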

  9. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  10. Quantification of differences between nailfold capillaroscopy images with a scleroderma pattern and normal pattern using measures of geometric and algorithmic complexity.

    Science.gov (United States)

    Urwin, Samuel George; Griffiths, Bridget; Allen, John

    2017-02-01

    This study aimed to quantify and investigate differences in the geometric and algorithmic complexity of the microvasculature in nailfold capillaroscopy (NFC) images displaying a scleroderma pattern and those displaying a 'normal' pattern. 11 NFC images were qualitatively classified by a capillary specialist as indicative of 'clear microangiopathy' (CM), i.e. a scleroderma pattern, and 11 as 'not clear microangiopathy' (NCM), i.e. a 'normal' pattern. Pre-processing was performed, and fractal dimension (FD) and Kolmogorov complexity (KC) were calculated following image binarisation. FD and KC were compared between groups, and a k-means cluster analysis (n  =  2) on all images was performed, without prior knowledge of the group assigned to them (i.e. CM or NCM), using FD and KC as inputs. CM images had significantly reduced FD and KC compared to NCM images, and the cluster analysis displayed promising results that the quantitative classification of images into CM and NCM groups is possible using the mathematical measures of FD and KC. The analysis techniques used show promise for quantitative microvascular investigation in patients with systemic sclerosis.
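
    The paper's exact estimators are not given in this record; as an illustration, the sketch below computes a common box-counting estimate of fractal dimension on a binarised image together with a compression-ratio proxy that is sometimes used in place of Kolmogorov complexity (which is not computable exactly). The random test pattern merely stands in for a binarised NFC image.

      import numpy as np
      import zlib

      def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32)):
          """Box-counting fractal dimension estimate of a 2D boolean image."""
          counts = []
          h, w = binary_img.shape
          for s in box_sizes:
              # trim so the image tiles exactly into s x s boxes
              trimmed = binary_img[:h - h % s, :w - w % s]
              boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
              occupied = boxes.any(axis=(1, 3)).sum()
              counts.append(max(occupied, 1))
          slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
          return slope

      def compression_complexity(binary_img):
          """Crude Kolmogorov-complexity proxy: compressed size over raw size."""
          raw = np.packbits(binary_img.astype(np.uint8)).tobytes()
          return len(zlib.compress(raw, 9)) / len(raw)

      # toy usage on a random pattern standing in for a binarised capillaroscopy image
      rng = np.random.default_rng(2)
      img = rng.random((256, 256)) < 0.08
      print(box_counting_dimension(img), compression_complexity(img))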

  11. Are decisions using cost-utility analyses robust to choice of SF-36/SF-12 preference-based algorithm?

    Directory of Open Access Journals (Sweden)

    Walton Surrey M

    2005-03-01

    Full Text Available Abstract Background Cost utility analysis (CUA) using SF-36/SF-12 data has been facilitated by the development of several preference-based algorithms. The purpose of this study was to illustrate how decision-making could be affected by the choice of preference-based algorithms for the SF-36 and SF-12, and provide some guidance on selecting an appropriate algorithm. Methods Two sets of data were used: (1) a clinical trial of adult asthma patients; and (2) a longitudinal study of post-stroke patients. Incremental costs were assumed to be $2000 per year over standard treatment, and QALY gains realized over a 1-year period. Ten published algorithms were identified, denoted by first author: Brazier (SF-36), Brazier (SF-12), Shmueli, Fryback, Lundberg, Nichol, Franks (3 algorithms), and Lawrence. Incremental cost-utility ratios (ICURs) for each algorithm, stated in dollars per quality-adjusted life year ($/QALY), were ranked and compared between datasets. Results In the asthma patients, estimated ICURs ranged from Lawrence's SF-12 algorithm at $30,769/QALY (95% CI: 26,316 to 36,697) to Brazier's SF-36 algorithm at $63,492/QALY (95% CI: 48,780 to 83,333). ICURs for the stroke cohort varied slightly more dramatically. The MEPS-based algorithm by Franks et al. provided the lowest ICUR at $27,972/QALY (95% CI: 20,942 to 41,667). The Fryback and Shmueli algorithms provided ICURs that were greater than $50,000/QALY and did not have confidence intervals that overlapped with most of the other algorithms. The ICUR-based ranking of algorithms was strongly correlated between the asthma and stroke datasets (r = 0.60). Conclusion SF-36/SF-12 preference-based algorithms produced a wide range of ICURs that could potentially lead to different reimbursement decisions. Brazier's SF-36 and SF-12 algorithms have a strong methodological and theoretical basis and tended to generate relatively higher ICUR estimates, considerations that support a preference for these algorithms over the
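
    The ratio at the heart of these comparisons is simple arithmetic (incremental cost divided by the QALY gain implied by a given scoring algorithm); the tiny sketch below reproduces the two endpoint figures quoted above, with the 1-year utility gains back-calculated for illustration rather than taken from the study.

      def icur(incremental_cost, qaly_gain):
          """Incremental cost-utility ratio in $/QALY."""
          return incremental_cost / qaly_gain

      # hypothetical 1-year utility gains under two different SF-36/SF-12 scoring algorithms
      delta_cost = 2000.0      # $ per year over standard treatment, as in the study design
      for name, gain in [('algorithm A', 0.065), ('algorithm B', 0.0315)]:
          print(name, round(icur(delta_cost, gain)))   # ~30,769 vs ~63,492 $/QALY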

  12. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  13. Geometric Algorithms for Trajectory Analysis

    NARCIS (Netherlands)

    Staals, Frank

    2015-01-01

    Technology such as the Global Positing System (GPS) has made tracking moving entities easy and cheap. As a result there is a large amount of trajectory data available, and an increasing demand on tools and techniques to analyze such data. We consider several analysis tasks for trajectory data,

  14. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper, a false alarm aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and the performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.
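
    The fusion idea described above can be illustrated schematically: declare a detection only where two detectors with independent false-alarm sources agree. In the sketch below, a simple local-contrast map and a Laplacian-of-Gaussian response stand in for the paper's multi-scale AAGD and LoPSF operators; the thresholds, window sizes, and synthetic frame are placeholders.

      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_laplace

      def local_contrast_map(img, inner=3, outer=9):
          """Crude stand-in for an average-absolute-gray-difference style detector."""
          return np.abs(uniform_filter(img, inner) - uniform_filter(img, outer))

      def fused_detection(img, k=4.0):
          """Flag pixels only where both detector maps exceed mean + k*std."""
          maps = [local_contrast_map(img), -gaussian_laplace(img, sigma=1.5)]
          decisions = [(m > m.mean() + k * m.std()) for m in maps]
          return np.logical_and(*decisions)

      # toy frame: flat background with noise and one small bright point target
      rng = np.random.default_rng(3)
      frame = 20.0 + rng.normal(0, 2.0, size=(128, 128))
      frame[64, 64] += 40.0
      hits = np.argwhere(fused_detection(frame))
      print(hits)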

  15. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Directory of Open Access Journals (Sweden)

    Hengkai Guo

    Full Text Available Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper, a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is used in the rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to solve the non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm achieves an average registration error of less than 0.2 mm with no failure case, which is superior to state-of-the-art feature-based methods.

  16. Mobile Watermarking against Geometrical Distortions

    Directory of Open Access Journals (Sweden)

    Jing Zhang

    2015-08-01

    Full Text Available Mobile watermarking robust to geometrical distortions is still a great challenge. In mobile watermarking, efficient computation is necessary because mobile devices have very limited resources due to power consumption. In this paper, we propose a low-complexity geometrically resilient watermarking approach based on the optimal tradeoff circular harmonic function (OTCHF) correlation filter and the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter. By the rotation, translation and scale tolerance properties of the two kinds of filter, the proposed watermark detector can be robust to geometrical attacks. The embedded watermark is weighted by a perceptual mask which matches very well with the properties of the human visual system. Before correlation, a whitening process is utilized to improve watermark detection reliability. Experimental results demonstrate that the proposed watermarking approach is computationally efficient and robust to geometrical distortions.

  17. Solving advanced multi-objective robust designs by means of multiple objective evolutionary algorithms (MOEA): A reliability application

    Energy Technology Data Exchange (ETDEWEB)

    Salazar A, Daniel E. [Division de Computacion Evolutiva (CEANI), Instituto de Sistemas Inteligentes y Aplicaciones Numericas en Ingenieria (IUSIANI), Universidad de Las Palmas de Gran Canaria. Canary Islands (Spain)]. E-mail: danielsalazaraponte@gmail.com; Rocco S, Claudio M. [Universidad Central de Venezuela, Facultad de Ingenieria, Caracas (Venezuela)]. E-mail: crocco@reacciun.ve

    2007-06-15

    This paper extends the approach proposed by the second author in [Rocco et al. Robust design using a hybrid-cellular-evolutionary and interval-arithmetic approach: a reliability application. In: Tarantola S, Saltelli A, editors. SAMO 2001: Methodological advances and useful applications of sensitivity analysis. Reliab Eng Syst Saf 2003;79(2):149-59 [special issue

  18. Robust Selection Algorithm (RSA) for Multi-Omic Biomarker Discovery; Integration with Functional Network Analysis to Identify miRNA Regulated Pathways in Multiple Cancers.

    Science.gov (United States)

    Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T

    2015-01-01

    MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, Robust Selection Algorithm (RSA) that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-value computed through intensive random resampling taking into account any non-normality in the data and integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with selected miRNA identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.

  19. Geometrical parton

    Energy Technology Data Exchange (ETDEWEB)

    Ebata, T [Tohoku Univ., Sendai (Japan). Coll. of General Education

    1976-06-01

    The geometrical distribution inferred from the inelastic cross section is assumed to be proportional to the partial waves. The precocious scaling and the Q²-dependence of various quantities are treated from the geometrical point of view. It is shown that the approximate conservation of the orbital angular momentum may be a very practical rule to understand the helicity structure of various hadronic and electromagnetic reactions. The rule can be applied to inclusive reactions as well. The model is also applied to large angle processes. Through the discussion, it is suggested that many peculiar properties of the quark-parton can be ascribed to the geometrical effects.

  20. Robustness and precision of an automatic marker detection algorithm for online prostate daily targeting using a standard V-EPID.

    Science.gov (United States)

    Aubin, S; Beaulieu, L; Pouliot, S; Pouliot, J; Roy, R; Girouard, L M; Martel-Brisson, N; Vigneault, E; Laverdière, J

    2003-07-01

    An algorithm for the daily localization of the prostate using implanted markers and a standard video-based electronic portal imaging device (V-EPID) has been tested. Prior to planning, three gold markers were implanted in the prostate of seven patients. The clinical images were acquired with a BeamViewPlus 2.1 V-EPID for each field during the normal course of radiotherapy treatment and were used off-line to determine the ability of the automatic marker detection algorithm to adequately and consistently detect the markers. Clinical images were obtained with various dose levels ranging from 2.5 to 75 MU. The algorithm is based on marker attenuation characterization in the portal image and spatial distribution. A total of 1182 clinical images were taken. The results show an average efficiency of 93% for the markers detected individually and 85% for the group of markers. This algorithm accomplishes the detection and validation in 0.20-0.40 s. When the center of mass of the group of implanted markers is used, then all displacements can be corrected to within 1.0 mm in 84% of the cases and within 1.5 mm in 97% of cases. The standard video-based EPID tested provides excellent marker detection capability even with low dose levels. The V-EPID can be used successfully with radiopaque markers and the automatic detection algorithm to track and correct the daily setup deviations due to organ motions.

  1. Geometric analysis

    CERN Document Server

    Bray, Hubert L; Mazzeo, Rafe; Sesum, Natasa

    2015-01-01

    This volume includes expanded versions of the lectures delivered in the Graduate Minicourse portion of the 2013 Park City Mathematics Institute session on Geometric Analysis. The papers give excellent high-level introductions, suitable for graduate students wishing to enter the field and experienced researchers alike, to a range of the most important areas of geometric analysis. These include: the general issue of geometric evolution, with more detailed lectures on Ricci flow and Kähler-Ricci flow, new progress on the analytic aspects of the Willmore equation as well as an introduction to the recent proof of the Willmore conjecture and new directions in min-max theory for geometric variational problems, the current state of the art regarding minimal surfaces in R^3, the role of critical metrics in Riemannian geometry, and the modern perspective on the study of eigenfunctions and eigenvalues for Laplace-Beltrami operators.

  2. Robust point matching via vector field consensus.

    Science.gov (United States)

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint) we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
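
    One standard way to write the mixture-model MAP objective sketched in this abstract (notation assumed here, not taken from the paper): with putative matches (x_n, y_n), inlier ratio gamma, noise variance sigma^2 and a uniform outlier density 1/a, the likelihood and the regularized objective minimized in the M-step can be written in LaTeX as

      p(Y \mid X, f, \sigma^2, \gamma) = \prod_{n=1}^{N} \Big[ \gamma \, \mathcal{N}\!\big(y_n \mid f(x_n), \sigma^2 I\big) + \frac{1-\gamma}{a} \Big],
      \qquad
      \mathcal{E}(f) = \frac{1}{2\sigma^2} \sum_{n=1}^{N} p_n \, \lVert y_n - f(x_n) \rVert^2 + \frac{\lambda}{2} \, \lVert f \rVert_{\mathcal{H}}^2 ,

    where p_n is the EM responsibility that match n is an inlier and \lVert f \rVert_{\mathcal{H}} is the reproducing-kernel-Hilbert-space norm acting as the Tikhonov regularizer; EM alternates between updating p_n, gamma, sigma^2 and solving the regularized least-squares problem for the vector field f.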

  3. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  4. An O(n log n) Version of the Averbakh-Berman Algorithm for the Robust Median of a Tree

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Georgiadis, Loukas; Katriel, Irit

    2008-01-01

    We show that the minmax regret median of a tree can be found in O(n log n) time. This is obtained by a modification of Averbakh and Berman's O(n log² n)-time algorithm: We design a dynamic solution to their bottleneck subproblem of finding the middle of every root-leaf path in a tree.

  5. Trajectory Tracking of a Tri-Rotor Aerial Vehicle Using an MRAC-Based Robust Hybrid Control Algorithm

    Directory of Open Access Journals (Sweden)

    Zain Anwar Ali

    2017-01-01

    Full Text Available In this paper, a novel Model Reference Adaptive Control (MRAC)-based hybrid control algorithm is presented for the trajectory tracking of a tri-rotor Unmanned Aerial Vehicle (UAV). The mathematical model of the tri-rotor is based on the Newton–Euler formula, whereas the MRAC-based hybrid controller consists of Fuzzy Proportional Integral Derivative (F-PID) and Fuzzy Proportional Derivative (F-PD) controllers. MRAC is used as the main controller for the dynamics, while the parameters of the adaptive controller are fine-tuned by the F-PD controller for the altitude control subsystem and the F-PID controller for the attitude control subsystem of the UAV. The stability of the system is ensured and proven by Lyapunov stability analysis. The proposed control algorithm is tested and verified using computer simulations for the trajectory tracking of the desired path as an input. The effectiveness of our proposed algorithm is compared with F-PID and the Fuzzy Logic Controller (FLC). Our proposed controller exhibits much less steady-state error and quick error convergence in the presence of disturbances, noise, and model uncertainties.

  6. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-05-02

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength (DW) PSA is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis herein presented may be used for interferometric contouring of discontinuous industrial objects. Also, DW-PSA may be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To this date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, their signal-to-noise ratio, or their detuning and harmonic robustness have been given. Here, for the first time, a fully general procedure for designing DW-PSAs (or triple-wavelength PSAs) with the desired spectrum, signal-to-noise ratio and detuning robustness is given. We finally generalize the DW-PSA approach to temporal PSAs with a higher number of wavelengths.
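
    The FTF referred to above is, for an N-step PSA with complex coefficients c_n, simply H(ω) = Σ_n c_n e^{-inω}; the sketch below evaluates it for the classical four-step single-wavelength PSA (my own example, not one of the paper's dual-wavelength designs) to show how its zeros encode background suppression and conjugate-signal rejection.

```python
# Evaluate the frequency transfer function (FTF) H(w) = sum_n c_n * exp(-i n w)
# of a temporal phase-shifting algorithm. Illustration only; the coefficients
# below are the classical 4-step PSA, not the dual-wavelength designs of the paper.
import numpy as np

def ftf(coeffs, omega):
    n = np.arange(len(coeffs))
    return np.array([np.sum(coeffs * np.exp(-1j * n * w)) for w in omega])

# 4-step PSA with pi/2 phase shifts: c_n = exp(+i * n * pi/2)
c4 = np.exp(1j * np.arange(4) * np.pi / 2)

omega = np.linspace(-np.pi, np.pi, 9)
H = ftf(c4, omega)
for w, h in zip(omega, H):
    print(f"omega = {w:+.2f}  |H| = {abs(h):.3f}")
# |H| vanishes at omega = 0 (background suppression) and at omega = -pi/2
# (rejection of the conjugate signal), and peaks near omega = +pi/2.
```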

  7. An Enhanced Data Integrity Model In Mobile Cloud Environment Using Digital Signature Algorithm And Robust Reversible Watermarking

    Directory of Open Access Journals (Sweden)

    Boukari Souley

    2017-10-01

    Full Text Available The use of handheld devices such as smartphones to access multimedia content in the cloud is increasing with the rise and growth of information technology. Mobile cloud computing is increasingly used today because it allows users to access a variety of resources in the cloud, such as images, video, audio and software applications, with minimal usage of their built-in resources, such as storage and memory, by using those available in the cloud. The major challenge faced by mobile cloud computing is security. Watermarking and digital signatures are techniques used to provide security and authentication for user data in the cloud. Watermarking is a technique used to embed digital data within multimedia content such as an image, video or audio stream in order to prevent unauthorized access to that content by intruders, whereas a digital signature is used to identify and verify user data when it is accessed. In this work we implemented a digital signature and robust reversible image watermarking in order to enhance mobile cloud computing security and data integrity by providing a double authentication technique. The results obtained show the effectiveness of combining the two techniques, robust reversible watermarking and digital signature, which provide strong authentication, ensure data integrity, and allow the original content to be recovered from the watermarked image without changes.
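
    A toy sketch of the two ingredients named above, with loud simplifications: a SHA-256 digest stands in for a full digital-signature scheme, and plain LSB substitution stands in for robust reversible watermarking; neither is the scheme implemented in the paper.

```python
# Toy illustration of pairing an integrity digest with an LSB watermark.
# SHA-256 stands in for a full digital-signature scheme and plain LSB
# substitution stands in for robust reversible watermarking.
import hashlib
import numpy as np

def embed_lsb(image, bits):
    wm = image.copy()
    flat = wm.reshape(-1)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # overwrite least significant bits
    return wm

def extract_lsb(image, nbits):
    return image.reshape(-1)[:nbits] & 1

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)

# "Sign" the original content: a digest the owner could later verify.
digest = hashlib.sha256(cover.tobytes()).hexdigest()

payload = np.unpackbits(np.frombuffer(b"owner-id", dtype=np.uint8))
marked = embed_lsb(cover, payload)

recovered = np.packbits(extract_lsb(marked, payload.size)).tobytes()
print("watermark recovered:", recovered)                       # b'owner-id'
print("digest of marked image unchanged?",
      hashlib.sha256(marked.tobytes()).hexdigest() == digest)  # False: embedding alters pixels
```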

  8. Geometric metamorphosis.

    Science.gov (United States)

    Niethammer, Marc; Hart, Gabriel L; Pace, Danielle F; Vespa, Paul M; Irimia, Andrei; Van Horn, John D; Aylward, Stephen R

    2011-01-01

    Standard image registration methods do not account for changes in image appearance. Hence, metamorphosis approaches have been developed which jointly estimate a space deformation and a change in image appearance to construct a spatio-temporal trajectory smoothly transforming a source to a target image. For standard metamorphosis, geometric changes are not explicitly modeled. We propose a geometric metamorphosis formulation, which explains changes in image appearance by a global deformation, a deformation of a geometric model, and an image composition model. This work is motivated by the clinical challenge of predicting the long-term effects of traumatic brain injuries based on time-series images. This work is also applicable to the quantification of tumor progression (e.g., estimating its infiltrating and displacing components) and predicting chronic blood perfusion changes after stroke. We demonstrate the utility of the method using simulated data as well as scans from a clinical traumatic brain injury patient.

  9. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    Science.gov (United States)

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified using specialized equipment operated manually by optometrists. The proposed work aims to provide an efficient imaging solution which can help in automating the process of glaucoma diagnosis using computer vision techniques on digital fundus images. The proposed method segments the optic disc using a geometrical-feature-based strategic framework which improves the detection accuracy and makes the algorithm invariant to illumination and noise. Novel methods based on corner thresholding and point-contour joining are proposed to construct smooth contours of the optic disc. Based on the clinical approach used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which the vessels first bend from the optic disc boundary, and connects them to obtain the contours of the optic cup. The proposed method has been compared with the ground truth marked by medical experts, and the similarity parameters used to determine the performance of the proposed method have yielded a high similarity of segmentation. The proposed method has achieved a macro-averaged f-score of 0.9485 and accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant and can be used for glaucoma screening over a large population in real time. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  11. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  12. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    Science.gov (United States)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed with severe occlusions frequently occurring among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is therefore a challenging task in object tracking. We employ the global rink information to compensate for camera motion and obtain the global spatial information of the skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to each skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  13. Experimental implementation of a robust damped-oscillation control algorithm on a full-sized, two-degree-of-freedom, AC induction motor-driven crane

    International Nuclear Information System (INIS)

    Kress, R.L.; Jansen, J.F.; Noakes, M.W.

    1994-01-01

    When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purpose of this paper is to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom, industrial crane; describe the experimental evaluation of the controller, including robustness to payload length changes; explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and provide a theoretical description of the controller.
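
    For illustration of the general idea of damping payload oscillation by shaping the motion command, a minimal sketch of a generic zero-vibration input shaper follows; this is my own example, not the ORNL controller evaluated in the paper, and the cable length and time step are illustrative.

```python
# Generic zero-vibration (ZV) input shaper for a suspended payload: the motion
# command is split into two impulses half a damped period apart so the second
# impulse cancels the swing excited by the first. Illustration only.
import numpy as np

def zv_shaper(wn, zeta):
    """Impulse times/amplitudes for natural frequency wn (rad/s) and damping zeta."""
    wd = wn * np.sqrt(1.0 - zeta**2)                  # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    times = np.array([0.0, np.pi / wd])
    amps = np.array([1.0, K]) / (1.0 + K)
    return times, amps

def shape(command, dt, times, amps):
    """Convolve a sampled command signal with the shaper impulse train."""
    shaped = np.zeros(len(command) + int(round(times[-1] / dt)))
    for t, a in zip(times, amps):
        k = int(round(t / dt))
        shaped[k:k + len(command)] += a * command
    return shaped[:len(command)]

# payload on a 5 m cable: wn = sqrt(g / L), essentially undamped
g, L = 9.81, 5.0
times, amps = zv_shaper(np.sqrt(g / L), zeta=0.0)
dt = 0.01
step = np.ones(1000)                                  # unshaped trolley command
print("impulse times (s):", times, "amplitudes:", amps)
print("shaped command levels:", np.unique(shape(step, dt, times, amps)))  # rises in two steps
```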

  14. New robust algorithm for tracking cells in videos of Drosophila morphogenesis based on finding an ideal path in segmented spatio-temporal cellular structures.

    Science.gov (United States)

    Bellaïche, Yohanns; Bosveld, Floris; Graner, François; Mikula, Karol; Remesíková, Mariana; Smísek, Michal

    2011-01-01

    In this paper, we present a novel algorithm for tracking cells in a time-lapse confocal microscopy movie of a Drosophila epithelial tissue during pupal morphogenesis. We consider a 2D + time video as a 3D static image, where frames are stacked atop each other, and using a spatio-temporal segmentation algorithm we obtain information about spatio-temporal 3D tubes representing the evolution of cells. The main idea for tracking is the use of two distance functions: the first computed from the cells in the initial frame and the second from the segmented boundaries. We track the cells backwards in time. The first distance function attracts the subsequently constructed cell trajectories to the cells in the initial frame, and the second one forces them to be close to the centerlines of the segmented tubular structures. This makes our tracking algorithm robust against noise and missing spatio-temporal boundaries. This approach can be generalized to a 3D + time video analysis, where spatio-temporal tubes are 4D objects.

  15. A robust multi-frequency mixing algorithm for suppression of rivet signal in GMR inspection of riveted structures

    Science.gov (United States)

    Safdernejad, Morteza S.; Karpenko, Oleksii; Ye, Chaofeng; Udpa, Lalita; Udpa, Satish

    2016-02-01

    The advent of Giant Magneto-Resistive (GMR) technology permits development of novel highly sensitive array probes for Eddy Current (EC) inspection of multi-layer riveted structures. Multi-frequency GMR measurements with different EC penetration depths show promise for detection of bottom-layer notches at fastener sites. However, the distortion of the induced magnetic field due to flaws is dominated by the strong fastener signal, which makes defect detection and classification a challenging problem. This issue is more pronounced for ferromagnetic fasteners that concentrate most of the magnetic flux. In the present work, a novel multi-frequency mixing algorithm is proposed to suppress the rivet signal response and enhance the defect detection capability of the GMR array probe. The algorithm is baseline-free and does not require any assumptions about the sample geometry being inspected. Fastener signal suppression is based upon the random sample consensus (RANSAC) method, which iteratively estimates parameters of a mathematical model from a set of observed data with outliers. Bottom-layer defects at fastener sites are simulated as EDM notches of different lengths. Performance of the proposed multi-frequency mixing approach is evaluated on finite element data and experimental GMR measurements obtained with unidirectional planar current excitation. Initial results are promising, demonstrating the feasibility of the approach.
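
    A simplified numpy sketch of the mixing idea: fit a linear relation between two frequency channels with RANSAC, so the fit is driven by the rivet-dominated majority of samples, and keep the residual in which defect responses survive as outliers. Signal shapes and thresholds are synthetic, not the paper's data.

```python
# RANSAC-based two-frequency mixing sketch: fit y2 ~ a*y1 + b on the consensus
# (rivet-dominated) set and inspect the residual, where defects stand out.
import numpy as np

def ransac_line(x, y, n_iter=200, tol=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)   # refit on the consensus set
    return a, b

# toy data: channel 2 is a scaled copy of channel 1 (rivet signal) except where
# a small "defect" perturbs it
rng = np.random.default_rng(3)
y1 = np.sin(np.linspace(0, 6 * np.pi, 400)) + 0.01 * rng.normal(size=400)
y2 = 0.7 * y1 + 0.02 * rng.normal(size=400)
y2[180:200] += 0.4                                           # defect signature
a, b = ransac_line(y1, y2)
residual = y2 - (a * y1 + b)
print(f"mix coefficients a={a:.3f}, b={b:.3f}; peak residual at index",
      int(np.argmax(np.abs(residual))))
```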

  16. The Key Role of the Vector Optimization Algorithm and Robust Design Approach for the Design of Polygeneration Systems

    Directory of Open Access Journals (Sweden)

    Alfredo Gimelli

    2018-04-01

    Full Text Available In recent decades, growing concerns about global warming and climate change effects have led to specific directives, especially in Europe, promoting the use of primary energy-saving techniques and renewable energy systems. The increasingly stringent requirements for carbon dioxide reduction have led to a more widespread adoption of distributed energy systems. In particular, besides renewable energy systems for power generation, one of the most effective techniques used to face the energy-saving challenges has been the adoption of polygeneration plants for combined heating, cooling, and electricity generation. This technique offers the possibility to achieve a considerable enhancement in energy and cost savings as well as a simultaneous reduction of greenhouse gas emissions. However, the use of small-scale polygeneration systems does not ensure the achievement of mandatory, but sometimes conflicting, aims without the proper sizing and operation of the plant. This paper is focused on a methodology based on vector optimization algorithms and developed by the authors for the identification of optimal polygeneration plant solutions. To this aim, a specific calculation algorithm for the study of cogeneration systems has also been developed. This paper provides, after a detailed description of the proposed methodology, some specific applications to the study of combined heat and power (CHP) and organic Rankine cycle (ORC) plants, thus highlighting the potential of the proposed techniques and the main results achieved.
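
    The basic building block of such a vector (multi-objective) optimization is extracting the non-dominated (Pareto) set; the sketch below does this for synthetic plant designs scored on two conflicting objectives (the objective names and values are illustrative, not the paper's case studies).

```python
# Extract the Pareto (non-dominated) set from candidate plant designs evaluated
# on two objectives to be minimized, e.g. annual cost and CO2 emissions.
# The objective values below are synthetic, for illustration only.
import numpy as np

def pareto_front(objs):
    """objs: (N, M) array, all objectives minimized. Returns a boolean mask."""
    n = objs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue
        # design i is dominated if some other design is <= in every objective
        # and strictly < in at least one
        dominated = np.all(objs <= objs[i], axis=1) & np.any(objs < objs[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(7)
cost = rng.uniform(1.0, 3.0, 100)                      # synthetic annual cost
emissions = 2.5 / cost + 0.1 * rng.normal(size=100)    # conflicting objective
designs = np.column_stack([cost, emissions])
front = pareto_front(designs)
print(f"{front.sum()} non-dominated designs out of {len(designs)}")
```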

  17. A fast and robust bulk-loading algorithm for indexing very large digital elevation datasets II. Experimental results

    Science.gov (United States)

    Rodríguez, Félix R.; Barrena, Manuel

    2011-07-01

    The spatial indexing of eventually all the available topographic information of Earth is a highly valuable tool for different geoscientific application domains. The Shuttle Radar Topography Mission (SRTM) collected and made available to the public one of the world's largest digital elevation models (DEMs). With the aim of providing easier and faster access to these data and improving their further analysis and processing, we have indexed the SRTM DEM by means of a spatial index based on the kd-tree data structure, called the Q-tree. This paper is the second in a two-part series that includes a thorough performance analysis to validate the bulk-load algorithm efficiency of the Q-tree. We investigate performance by measuring elapsed time in different contexts, analyzing disk space usage, testing response time with typical queries, and validating the balance of the final index structure. In addition, the paper includes performance comparisons with Oracle 11g that help to understand the real cost of our proposal. Our tests show that the proposed algorithm outperforms Oracle 11g, using around 9% of the elapsed time, taking six times less storage with more than 96% page utilization, and getting faster response times to spatial queries issued on 4.5 million points. In addition to this, the behavior of the spatial index has been successfully tested on both an open GIS (VT Builder) and a visualizer tool derived from it.
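
    As a generic illustration of bulk loading a kd-tree-style index by recursive median splitting (the Q-tree's actual construction and disk-page layout are described in the paper and are not reproduced here):

```python
# Generic bulk loading of a 2D kd-tree by recursive median splits on
# alternating coordinates, plus a simple box-counting range query.
import numpy as np

def build_kdtree(points, depth=0, leaf_size=16):
    if len(points) <= leaf_size:
        return {"leaf": True, "points": points}
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis], kind="stable")]
    mid = len(points) // 2
    return {"leaf": False, "axis": axis, "split": points[mid, axis],
            "left": build_kdtree(points[:mid], depth + 1, leaf_size),
            "right": build_kdtree(points[mid:], depth + 1, leaf_size)}

def range_query(node, lo, hi):
    """Count points inside the axis-aligned box [lo, hi]."""
    if node["leaf"]:
        pts = node["points"]
        return int(np.all((pts >= lo) & (pts <= hi), axis=1).sum())
    total = 0
    if lo[node["axis"]] <= node["split"]:
        total += range_query(node["left"], lo, hi)
    if hi[node["axis"]] >= node["split"]:
        total += range_query(node["right"], lo, hi)
    return total

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, (100_000, 2))               # synthetic elevation sample points
tree = build_kdtree(pts)
lo, hi = np.array([100.0, 100.0]), np.array([120.0, 130.0])
print("points in box:", range_query(tree, lo, hi))
```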

  18. Geometrically Consistent Mesh Modification

    KAUST Repository

    Bonito, A.

    2010-01-01

    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  19. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    DEFF Research Database (Denmark)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk

    2007-01-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam... a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm⁻³), enhanced for the most energetic beam. For denser...

  20. Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    Directory of Open Access Journals (Sweden)

    M. Cedillo-Hernandez

    2015-04-01

    Full Text Available In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which is robust against several geometric and signal processing distortions. The trade-off between payload, robustness and imperceptibility is a very important aspect which has to be considered when a watermark algorithm is designed. In our proposed scheme, prior to being embedded into the image, the watermark signal is encoded using a convolutional encoder, which can perform forward error correction and achieve better robustness performance. Then, the embedding process is carried out in the discrete cosine transform (DCT) domain of an image, using an image normalization technique to accomplish robustness against geometric and signal processing distortions. The embedded watermark coded bits are extracted and decoded using the Viterbi algorithm. In order to determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequence. The quality of the watermarked image is measured using the well-known indices: Peak Signal to Noise Ratio (PSNR), Visual Information Fidelity (VIF) and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is obtained by using the Normalized Color Difference (NCD) measure. The experimental results show that the proposed method provides good performance in terms of imperceptibility and robustness. A comparison among the proposed and previously reported methods based on different techniques is also provided.

  1. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  2. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    International Nuclear Information System (INIS)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo; Martinet, Philippe

    2008-01-01

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to let the robot pass through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector, based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity of the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which used to require ad hoc searching in other works. As a result, the end-effector tracking error, which is inherited from damped least squares by introducing damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots.
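
    A minimal numpy sketch of the core idea, a separate Gaussian-profiled damping factor for each singular value of the Jacobian; the maximum damping factor and the Gaussian width below are illustrative values, whereas the paper tunes them with a genetic algorithm.

```python
# Damped-least-squares Jacobian inverse with a per-singular-value damping factor
# that follows a Gaussian profile in the singular value: damping is negligible
# for large sigma and approaches lambda_max as sigma -> 0.
import numpy as np

def dls_pinv(J, lambda_max=0.05, w=0.1):
    U, S, Vt = np.linalg.svd(J, full_matrices=False)
    lam = lambda_max * np.exp(-(S / w) ** 2)           # Gaussian damping per sigma_i
    S_damped = S / (S**2 + lam**2)                     # damped reciprocal singular values
    return (Vt.T * S_damped) @ U.T

# planar 2-link arm near a stretched-out (singular) configuration
def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 1e-3])                              # nearly singular: elbow straight
dx = np.array([0.0, 0.01])                             # small Cartesian step
print("plain pseudo-inverse step:", np.linalg.pinv(jacobian(q)) @ dx)
print("damped (bounded) step    :", dls_pinv(jacobian(q)) @ dx)
```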

  3. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    Energy Technology Data Exchange (ETDEWEB)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo [Sungkyunkwan University, Suwon (Korea, Republic of); Martinet, Philippe [Blaise Pascal University, Clermont-Ferrand Cedex (France)

    2008-07-15

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to let the robot pass through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector, based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity of the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which used to require ad hoc searching in other works. As a result, the end-effector tracking error, which is inherited from damped least squares by introducing damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots.

  4. Robust information encryption diffractive-imaging-based scheme with special phase retrieval algorithm for a customized data container

    Science.gov (United States)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong; Zhou, Nanrun

    2018-06-01

    The diffractive-imaging-based encryption (DIBE) scheme has aroused wide interest due to its compact architecture and low requirements on conditions. Nevertheless, the primary information can hardly be recovered exactly in real applications when the speckle noise and potential occlusion imposed on the ciphertext are considered. To deal with this issue, a customized data container (CDC) is introduced into DIBE and a new phase retrieval algorithm (PRA) for plaintext retrieval is proposed. The PRA, designed according to the peculiarity of the CDC, combines two key techniques from previous approaches, i.e., the input support constraint and median filtering. The proposed scheme can guarantee the reconstruction of the primary information despite heavy noise or occlusion, and its effectiveness and feasibility have been demonstrated with simulation results.
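
    As a rough illustration of the two ingredients named above (a support constraint and median filtering inside a phase retrieval loop), the sketch below runs a generic error-reduction iteration with a plain Fourier-magnitude model; it is not the paper's PRA or its diffractive-imaging optical setup, and all sizes are illustrative.

```python
# Generic error-reduction phase retrieval with a support constraint and an
# occasional median filter on the object estimate. Plain Fourier-magnitude
# model; illustration only.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
N = 64
support = np.zeros((N, N), dtype=bool)
support[16:48, 16:48] = True                      # known object support ("container" area)
truth = np.zeros((N, N))
truth[support] = rng.random(support.sum())
measured_mag = np.abs(np.fft.fft2(truth))         # intensity-only measurement

est = rng.random((N, N)) * support                # random start inside the support
for it in range(300):
    F = np.fft.fft2(est)
    F = measured_mag * np.exp(1j * np.angle(F))   # enforce the measured magnitude
    est = np.real(np.fft.ifft2(F))
    est *= support                                # enforce the support constraint
    est = np.clip(est, 0, None)                   # non-negativity
    if it % 50 == 49:
        est = median_filter(est, size=3) * support  # knock down speckle-like noise

err = np.linalg.norm(np.abs(np.fft.fft2(est)) - measured_mag) / np.linalg.norm(measured_mag)
print(f"relative Fourier-magnitude mismatch after {it + 1} iterations: {err:.3f}")
```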

  5. Towards a Robust Solution of the Non-Linear Kinematics for the General Stewart Platform with Estimation of Distribution Algorithms

    Directory of Open Access Journals (Sweden)

    Eusebio Eduardo Hernández Martinez

    2013-01-01

    Full Text Available In robotics, solving the direct kinematics problem (DKP) for parallel robots is very often more difficult and time consuming than for their serial counterparts. The problem is stated as follows: given the joint variables, the Cartesian variables should be computed, namely the pose of the mobile platform. Most of the time, the DKP requires solving a non-linear system of equations. In addition, given that the system could be non-convex, Newton or quasi-Newton (Dogleg) based solvers get trapped in local minima. The capacity of such solvers to find an adequate solution strongly depends on the starting point. A well-known problem is the selection of such a starting point, which requires a priori information about the neighbouring region of the solution. In order to circumvent this issue, this article proposes an efficient method to select and generate the starting point based on probabilistic learning. Experiments and discussion are presented to show the method's performance. The method successfully avoids getting trapped in local minima without the need for human intervention, which increases its robustness when compared with a single Dogleg approach. This proposal can be extended to other structures, to any non-linear system of equations, and of course, to non-linear optimization problems.
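
    A toy sketch of the starting-point idea: a simple Gaussian estimation-of-distribution loop learns a promising region and hands its best mean to a standard solver. The nonlinear system below is synthetic and stands in for the Stewart platform kinematics; population sizes and bounds are illustrative.

```python
# Gaussian estimation-of-distribution (EDA) loop used only to generate a good
# starting point for a Newton-type solver on a nonlinear system.
import numpy as np
from scipy.optimize import fsolve

def residual(p):
    x, y, z = p
    return [x**2 + y**2 + z**2 - 3.0,        # synthetic system with a root at (1, 1, 1)
            np.sin(x - 1.0) + y * z - 1.0,
            x + y**2 - z - 1.0]

def eda_start(residual, dim, pop=200, elite=20, gens=15, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    mean, cov = np.zeros(dim), 4.0 * np.eye(dim)
    for _ in range(gens):
        samples = rng.multivariate_normal(mean, cov, size=pop)
        scores = np.array([np.linalg.norm(residual(s)) for s in samples])
        best = samples[np.argsort(scores)[:elite]]     # keep the elite samples
        mean = best.mean(axis=0)                       # refit the Gaussian model
        cov = np.cov(best.T) + 1e-6 * np.eye(dim)
    return mean

x0 = eda_start(residual, dim=3)
sol = fsolve(residual, x0)
print("EDA starting point:", np.round(x0, 3))
print("solution:", np.round(sol, 4), "residual norm:", np.linalg.norm(residual(sol)))
```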

  6. On the Use of Evolutionary Algorithms to Improve the Robustness of Continuous Speech Recognition Systems in Adverse Conditions

    Directory of Open Access Journals (Sweden)

    Sid-Ahmed Selouani

    2003-07-01

    Full Text Available Limiting the decrease in performance due to acoustic environment changes remains a major challenge for continuous speech recognition (CSR) systems. We propose a novel approach which combines the Karhunen-Loève transform (KLT) in the mel-frequency domain with a genetic algorithm (GA) to enhance the data representing corrupted speech. The idea consists of projecting noisy speech parameters onto the space generated by the genetically optimized principal axes issued from the KLT. The enhanced parameters increase the recognition rate for highly interfering noise environments. The proposed hybrid technique, when included in the front-end of an HTK-based CSR system, outperforms the conventional recognition process in severe interfering car noise environments for a wide range of signal-to-noise ratios (SNRs) varying from 16 dB to −4 dB. We also show the effectiveness of the KLT-GA method in recognizing speech subject to telephone channel degradations.

  7. An Experiment of Robust Parallel Algorithm for the Eigenvalue problem of a Multigroup Neutron Diffusion based on modified FETI-DP : Part 2

    International Nuclear Information System (INIS)

    Chang, Jonghwa

    2014-01-01

    Today, we can use a computer cluster consisting of a few hundred CPUs with a reasonable budget. Such a computer system enables us to do detailed modeling of a reactor core. The detailed modeling will improve the safety and the economics of a nuclear reactor by eliminating unnecessary conservatism or missing considerations. To take advantage of such a cluster computer, efficient parallel algorithms must be developed. The mechanical structure analysis community has studied the domain decomposition method to solve the stress-strain equation using finite element methods. One of the most successful domain decomposition methods in terms of robustness is FETI-DP. We modified the original FETI-DP to solve the eigenvalue problem for the multi-group diffusion problem in a previous study. In this study, we report the result of a recent modification to handle the three-dimensional subdomain partitioning and the sub-domain multi-group problem. The modified FETI-DP algorithm has been successfully applied to the eigenvalue problem of the multi-group neutron diffusion equation. The overall CPU time decreases as the number of sub-domains (partitions) increases. However, there may be a limit to this decrease, because an increase in the number of primal points will increase the CPU time spent on the solution of the global equation. Even distribution of the computational load (criterion a) is important to achieve fast computation. The subdomain partition can be effectively performed using a suitable graph partitioning package such as MeTIS.

  8. An Experiment of Robust Parallel Algorithm for the Eigenvalue problem of a Multigroup Neutron Diffusion based on modified FETI-DP : Part 2

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jonghwa [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Today, we can use a computer cluster consisting of a few hundred CPUs with a reasonable budget. Such a computer system enables us to do detailed modeling of a reactor core. The detailed modeling will improve the safety and the economics of a nuclear reactor by eliminating unnecessary conservatism or missing considerations. To take advantage of such a cluster computer, efficient parallel algorithms must be developed. The mechanical structure analysis community has studied the domain decomposition method to solve the stress-strain equation using finite element methods. One of the most successful domain decomposition methods in terms of robustness is FETI-DP. We modified the original FETI-DP to solve the eigenvalue problem for the multi-group diffusion problem in a previous study. In this study, we report the result of a recent modification to handle the three-dimensional subdomain partitioning and the sub-domain multi-group problem. The modified FETI-DP algorithm has been successfully applied to the eigenvalue problem of the multi-group neutron diffusion equation. The overall CPU time decreases as the number of sub-domains (partitions) increases. However, there may be a limit to this decrease, because an increase in the number of primal points will increase the CPU time spent on the solution of the global equation. Even distribution of the computational load (criterion a) is important to achieve fast computation. The subdomain partition can be effectively performed using a suitable graph partitioning package such as MeTIS.

  9. An Experiment of Robust Parallel Algorithm for the Eigenvalue problem of a Multigroup Neutron Diffusion based on modified FETI-DP

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Jonghwa [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Parallelization of Monte Carlo simulation is widely adopted. There are also several parallel algorithms developed for SN transport theory using the parallel wave sweeping algorithm, and for the CPM using parallel ray tracing. For the practical purposes of reactor physics applications, the thermal feedback and burnup effects on the multigroup cross sections should be considered. In this respect, the domain decomposition method (DDM) is suitable for distributing the expensive cross section calculation work. A parallel transport code and a diffusion code based on the Raviart-Thomas mixed finite element method were developed. However, most of the developed methods rely on the heuristic convergence of flux and current at the domain interfaces, and convergence was not attained in some cases. The mechanical stress computation community has also worked on the DDM to solve the stress-strain equation using finite element methods. The most successful domain decomposition method in terms of robustness is FETI-DP. We have modified the original FETI-DP to solve the eigenvalue problem for the multigroup diffusion problem in this study.

  10. An Evolutionary Approach for Robust Layout Synthesis of MEMS

    DEFF Research Database (Denmark)

    Fan, Zhun; Wang, Jiachuan; Goodman, Erik

    2005-01-01

    The paper introduces a robust design method for layout synthesis of MEM resonators subject to inherent geometric uncertainties such as the fabrication error on the sidewall of the structure. The robust design problem is formulated as a multi-objective constrained optimisation problem after certain... assumptions and treated with a multiobjective genetic algorithm (MOGA), a special type of evolutionary computing approach. A case study based on layout synthesis of a comb-driven MEM resonator shows that the approach proposed in this paper can lead to design results that meet the target performance and are less...

  11. Impact reduction of the uncertain geometrical parameters on magnetic material identification of an EI electromagnetic inductor using an adaptive inverse algorithm

    International Nuclear Information System (INIS)

    Abdallh, A.; Crevecoeur, G.; Dupré, L.

    2012-01-01

    The magnetic characteristics of the electromagnetic devices' core materials can be recovered by solving an inverse problem, where sets of measurements need to be properly interpreted using a forward numerical model of the device. However, the uncertainties of the geometrical parameter values in the forward model lead to appreciable errors in the recovered values of the material parameters. In this paper, we propose an effective inverse approach technique, in which the influences of the uncertainties in the geometrical model parameters are minimized. In this proposed approach, the cost function that needs to be minimized is adapted with respect to the uncertain geometrical model parameters. The proposed methodology is applied to the identification of the magnetizing B–H curve of the magnetic material of an EI core inductor. The numerical results show a significant reduction of the recovery errors in the identified magnetic material parameter values. Moreover, the proposed methodology is validated by solving an inverse problem starting from real magnetic measurements. - Highlights: ► A new method to minimize the influence of the uncertain parameters in inverse problems is proposed. ► The technique is based on adapting iteratively the objective function that needs to be minimized. ► The objective function is adapted by the model response sensitivity to the uncertain parameters. ► The proposed technique is applied for recovering the B–H curve of an EI core inductor material. ► The error in the inverse problem solution is dramatically reduced using the proposed methodology.

  12. A Recursive Decomposition Algorithm for 3D Assembly Geometric Constraint System with Closed-loops

    Institute of Scientific and Technical Information of China (English)

    黄学良; 李娜; 陈立平

    2013-01-01

    Numerical methods are always employed to solve 3D assembly geometric constraint systems with closed loops, which cannot be decomposed by the existing decomposition methods, but their inherent inefficiency and instability cannot be overcome. In this paper, based on an analysis of the structural constraint of a serial kinematic chain and the topological structure of the geometric constraint closed-loop graph, a recursive decomposition algorithm for 3D geometric constraint systems with closed loops is proposed. The basic idea of the proposed algorithm is to introduce equivalent geometric constraint combinations to substitute the structural constraint of a serial kinematic chain, and to separate geometric constraint subsystems which can be solved independently from the geometric constraint system with closed loops. The proposed method can decompose most 3D geometric constraint closed-loop systems, which previously had to be solved by numerical iteration as a whole, into a series of geometric constraint subsystems between two rigid bodies which can be solved by analytical or reasoning methods, so that the computational efficiency and stability are improved dramatically. Finally, a typical example is given to validate the correctness and effectiveness of the proposed method.

  13. Geometric reconstruction methods for electron tomography

    DEFF Research Database (Denmark)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts...... and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed...

  14. Artefacts in geometric phase analysis of compound materials.

    Science.gov (United States)

    Peters, Jonathan J P; Beanland, Richard; Alexe, Marin; Cockburn, John W; Revin, Dmitry G; Zhang, Shiyong Y; Sanchez, Ana M

    2015-10-01

    The geometric phase analysis (GPA) algorithm is known as a robust and straightforward technique that can be used to measure lattice strains in high resolution transmission electron microscope (TEM) images. It is also attractive for analysis of aberration-corrected scanning TEM (ac-STEM) images that resolve every atom column, since it uses Fourier transforms and does not require real-space peak detection and assignment to appropriate sublattices. Here it is demonstrated that, in ac-STEM images of compound materials with compositionally distinct atom columns, an additional geometric phase is present in the Fourier transform. If the structure changes from one area to another in the image (e.g. across an interface), the change in this additional phase will appear as a strain in conventional GPA, even if there is no lattice strain. Strategies to avoid this pitfall are outlined. Copyright © 2015 Elsevier B.V. All rights reserved.
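
    A minimal numpy illustration of the GPA principle discussed above: mask one reflection g in the Fourier transform of a lattice image, inverse-transform, and read the geometric phase relative to the reference lattice. The synthetic lattice, mask width and strain step are my own illustrative choices, not the cited implementation.

```python
# Geometric phase analysis sketch on a synthetic 1D-modulated lattice whose
# right half has its spatial frequency raised by 2% (i.e. is compressed).
import numpy as np

N = 256
yy, xx = np.mgrid[0:N, 0:N]
gx = 10.0 / N                                        # reference lattice frequency (cycles/pixel)
strain_step = 0.02 * (xx >= N // 2)
local_freq = gx * (1 + strain_step)
lattice = np.cos(2 * np.pi * np.cumsum(local_freq, axis=1))   # continuous lattice phase

F = np.fft.fftshift(np.fft.fft2(lattice))
fy = np.fft.fftshift(np.fft.fftfreq(N))
fx = np.fft.fftshift(np.fft.fftfreq(N))
FY, FX = np.meshgrid(fy, fx, indexing="ij")
mask = np.exp(-((FX - gx) ** 2 + FY ** 2) / (2 * (2.0 / N) ** 2))   # Gaussian mask around +g

Hg = np.fft.ifft2(np.fft.ifftshift(F * mask))        # complex image of the selected reflection
phase = np.angle(Hg * np.exp(-2j * np.pi * gx * xx)) # geometric phase relative to the reference
dphase_dx = np.diff(np.unwrap(phase, axis=1), axis=1)
exx = -dphase_dx / (2 * np.pi * gx)                  # strain estimate (negative = compression)
print("mean e_xx, left half :", round(float(exx[:, 40:108].mean()), 4))
print("mean e_xx, right half:", round(float(exx[:, 148:216].mean()), 4))
```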

  15. Geometric recursion

    DEFF Research Database (Denmark)

    Andersen, Jørgen Ellegaard; Borot, Gaëtan; Orantin, Nicolas

    We propose a general theory whose main components are functorial assignments ∑→Ω∑ ∈ E (∑), for a large class of functors E from a certain category of bordered surfaces (∑'s) to a suitable target category of topological vector spaces. The construction is done by summing appropriate compositions... as Poisson structures on the moduli space of flat connections. The theory has a wider scope than that and one expects that many functorial objects in low-dimensional geometry and topology should have a GR construction. The geometric recursion has various projections to topological recursion (TR) and we... in particular show it retrieves all previous variants and applications of TR. We also show that, for any initial data for topological recursion, one can construct initial data for GR with values in Frobenius algebra-valued continuous functions on Teichmueller space, such that the ωg,n of TR are obtained...

  16. A Robust Algorithm of Multiquadric Method Based on an Improved Huber Loss Function for Interpolating Remote-Sensing-Derived Elevation Data Sets

    Directory of Open Access Journals (Sweden)

    Chuanfa Chen

    2015-03-01

    Full Text Available Remote-sensing-derived elevation data sets often suffer from noise and outliers due to various reasons, such as the physical limitations of sensors, multiple reflectance, occlusions and low contrast of texture. Outliers generally have a seriously negative effect on DEM construction. Some interpolation methods like ordinary kriging (OK) are capable of smoothing noise inherent in sample points, but are sensitive to outliers. In this paper, a robust algorithm of the multiquadric method (MQ) based on an improved Huber loss function (MQ-IH) has been developed to decrease the impact of outliers on DEM construction. Theoretically, the improved Huber loss function is null for outliers, quadratic for small errors, and linear for others. Simulated data sets drawn from a mathematical surface with different error distributions were employed to analyze the robustness of MQ-IH. Results indicate that MQ-IH obtains a good balance between efficiency and robustness. Namely, the performance of MQ-IH is comparable to those of the classical MQ and MQ based on the classical Huber loss function (MQ-CH) when sample points follow a normal distribution, and the former outperforms the latter two when sample points are subject to outliers. For example, for the Cauchy error distribution with a location parameter of 0 and scale parameter of 1, the root mean square errors (RMSEs) of MQ-CH and the classical MQ are 0.3916 and 1.4591, respectively, whereas that of MQ-IH is 0.3698. The performance of MQ-IH is further evaluated by qualitative and quantitative analysis through a real-world example of DEM construction with stereo-images-derived elevation points. Results demonstrate that compared with the classical interpolation methods, including natural neighbor (NN), OK and ANUDEM (a program that calculates regular grid digital elevation models (DEMs) with sensible shape and drainage structure from arbitrarily large topographic data sets), and two versions of MQ, including the
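
    A compact sketch of the general approach: multiquadric RBF surface fitting made robust by iteratively reweighted least squares with Huber-type weights, so outlying elevations are down-weighted. The classical Huber weight and a reduced set of centers are used here as stand-ins; the paper's improved loss function is not reproduced.

```python
# Robust multiquadric (MQ) surface fitting via iteratively reweighted least
# squares with classical Huber weights and a MAD-based robust scale.
import numpy as np

def mq_basis(pts, centers, c=0.5):
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + c**2)                          # multiquadric kernel sqrt(r^2 + c^2)

def robust_mq_fit(xy, z, centers, delta=1.5, iters=10, ridge=1e-8):
    Phi = mq_basis(xy, centers)
    w = np.ones(len(z))
    for _ in range(iters):
        A = Phi * w[:, None]                           # row-weighted design matrix
        coef = np.linalg.solve(Phi.T @ A + ridge * np.eye(Phi.shape[1]),
                               Phi.T @ (w * z))
        r = z - Phi @ coef
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / (delta * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)           # classical Huber weight
    return coef

rng = np.random.default_rng(5)
xy = rng.uniform(0, 10, (200, 2))
z = np.sin(xy[:, 0]) + 0.1 * xy[:, 1] + 0.05 * rng.normal(size=200)
z[:20] += rng.uniform(5, 10, 20)                       # gross outliers in the elevations
centers = xy[rng.choice(len(xy), 60, replace=False)]   # reduced center set -> regression
coef = robust_mq_fit(xy, z, centers)
test = np.array([[2.0, 3.0], [7.5, 1.0]])
print("predicted elevations:", np.round(mq_basis(test, centers) @ coef, 3))
```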

  17. Geometric Algebra Computing

    CERN Document Server

    Corrochano, Eduardo Bayro

    2010-01-01

    This book presents contributions from a global selection of experts in the field. This useful text offers new insights and solutions for the development of theorems, algorithms and advanced methods for real-time applications across a range of disciplines. Written in an accessible style, the discussion of all applications is enhanced by the inclusion of numerous examples, figures and experimental analysis. Features: provides a thorough discussion of several tasks for image processing, pattern recognition, computer vision, robotics and computer graphics using the geometric algebra framework; int

  18. A Robust Algorithm for Optimisation and Customisation of Fractal Dimensions of Time Series Modified by Nonlinearly Scaling Their Time Derivatives: Mathematical Theory and Practical Applications

    Directory of Open Access Journals (Sweden)

    Franz Konstantin Fuss

    2013-01-01

    Full Text Available Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal’s time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  19. A robust algorithm for optimisation and customisation of fractal dimensions of time series modified by nonlinearly scaling their time derivatives: mathematical theory and practical applications.

    Science.gov (United States)

    Fuss, Franz Konstantin

    2013-01-01

    Standard methods for computing the fractal dimensions of time series are usually tested with continuous nowhere differentiable functions, but not benchmarked with actual signals. Therefore they can produce opposite results in extreme signals. These methods also use different scaling methods, that is, different amplitude multipliers, which makes it difficult to compare fractal dimensions obtained from different methods. The purpose of this research was to develop an optimisation method that computes the fractal dimension of a normalised (dimensionless) and modified time series signal with a robust algorithm and a running average method, and that maximises the difference between two fractal dimensions, for example, a minimum and a maximum one. The signal is modified by transforming its amplitude by a multiplier, which has a non-linear effect on the signal's time derivative. The optimisation method identifies the optimal multiplier of the normalised amplitude for targeted decision making based on fractal dimensions. The optimisation method provides an additional filter effect and makes the fractal dimensions less noisy. The method is exemplified by, and explained with, different signals, such as human movement, EEG, and acoustic signals.

  20. Geometrical methods in learning theory

    International Nuclear Information System (INIS)

    Burdet, G.; Combe, Ph.; Nencka, H.

    2001-01-01

    The methods of information theory provide natural approaches to learning algorithms in the case of stochastic formal neural networks. Most of the classical techniques are based on some extremization principle. A geometrical interpretation of the associated algorithms provides a powerful tool for understanding the learning process and its stability and offers a framework for discussing possible new learning rules. An illustration is given using sequential and parallel learning in the Boltzmann machine

  1. Height and Tilt Geometric Texture

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Desbrun, Mathieu; Bærentzen, Jakob Andreas

    2009-01-01

    compromise between functionality and simplicity: it can efficiently handle and process geometric texture too complex to be represented as a height field, without having recourse to full blown mesh editing algorithms. The height-and-tilt representation proposed here is fully intrinsic to the mesh, making...

  2. A MATCHING METHOD TO REDUCE THE INFLUENCE OF SAR GEOMETRIC DEFORMATION

    Directory of Open Access Journals (Sweden)

    C. Gao

    2018-04-01

    Full Text Available There are large geometric deformations in SAR images, including foreshortening, layover and shadow, which lead to SAR image matching with low accuracy. Especially in complex terrain areas, control points are difficult to obtain and matching is difficult to achieve. Considering the impact of geometric distortions in SAR image pairs, a matching algorithm combining speeded-up robust features (SURF) and summed normalized cross-correlation (SNCC) is proposed, which can avoid the influence of SAR geometric deformation. Firstly, the SURF algorithm is utilized to predict the search area. Then the matching point pairs are selected based on the summed normalized cross-correlation. Finally, false match points are eliminated by a bidirectional consistency check. The SURF algorithm can control the range of the matching points, the matching points extracted from deformation areas are eliminated, and matching points with a stable and even distribution are obtained. The experimental results demonstrate that the proposed algorithm has high precision, can effectively avoid the effect of geometric distortion on SAR image matching, and meets the accuracy requirements of block adjustment with sparse control points.

  3. IDENTIFICATION OF A ROBUST LICHEN INDEX FOR THE DECONVOLUTION OF LICHEN AND ROCK MIXTURES USING PATTERN SEARCH ALGORITHM (CASE STUDY: GREENLAND

    Directory of Open Access Journals (Sweden)

    S. Salehi

    2016-06-01

    Full Text Available Lichens are the dominant autotrophs of polar and subpolar ecosystems and commonly encrust the rock outcrops. Spectral mixing of lichens and bare rock can shift diagnostic spectral features of materials of interest, thus leading to misinterpretation and false positives if mapping is done based on perfect spectral matching methodologies. Therefore, the ability to distinguish the lichen coverage from rock and to decompose a mixed pixel into a collection of pure reflectance spectra can improve the applicability of hyperspectral methods for mineral exploration. The objective of this study is to propose a robust lichen index that can be used to estimate lichen coverage, regardless of the mineral composition of the underlying rocks. The performance of three index structures, ratio, normalized ratio and subtraction, has been investigated using synthetic linear mixtures of pure rock and lichen spectra with prescribed mixing ratios. Laboratory spectroscopic data are obtained from lichen-covered samples collected from the Karrat, Liverpool Land, and Sisimiut regions in Greenland. The spectra are then resampled to Hyperspectral Mapper (HyMAP) resolution, in order to further investigate the functionality of the indices for the airborne platform. In both resolutions, a Pattern Search (PS) algorithm is used to identify the optimal band wavelengths and bandwidths for the lichen index. The results of our band optimization procedure revealed that the ratio between R894-1246 and R1110 explains most of the variability in the hyperspectral data at the original laboratory resolution (R2=0.769). However, the normalized index incorporating R1106-1121 and R904-1251 yields the best results for the HyMAP resolution (R2=0.765).

  4. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources (thus so-called “data gaps” can appear), and significant discontinuities and multiple high outliers can be present. As a result an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and also that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
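
    SciPy does not ship a Steffen spline, so in the sketch below PchipInterpolator (another local monotonicity-preserving cubic that avoids spurious oscillations) stands in for it; the point illustrated is the preliminary log-log transform before interpolation, on synthetic ELF-like data.

```python
# Interpolate heterogeneous, non-uniformly sampled data on a log-log scale with
# a local monotonicity-preserving cubic (PCHIP as a stand-in for a Steffen spline).
import numpy as np
from scipy.interpolate import PchipInterpolator

# synthetic "sampled ELF-like" data: dense at low energy, sparse at high energy
energy = np.unique(np.concatenate([np.linspace(1, 10, 40),
                                   np.geomspace(10, 1000, 15)]))
elf = 5.0 * energy / ((energy**2 - 15.0**2) ** 2 + 40.0 * energy**2)   # toy peak shape

# interpolate in log-log coordinates, then map back
log_interp = PchipInterpolator(np.log(energy), np.log(elf))

def elf_of(e):
    return np.exp(log_interp(np.log(e)))

print("interpolated values:", elf_of(np.array([2.0, 15.0, 120.0, 800.0])))
```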

  5. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique for interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources (thus so-called “data gaps” can appear), and significant discontinuities and multiple high outliers can be present. As a result an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and also that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  6. Color Image Secret Watermarking Erase and Write Algorithm Based on SIFT

    Science.gov (United States)

    Qu, Jubao

    Using the adaptive characteristics of SIFT image features, write and erase operations are implemented for embedding and extracting a hidden watermark in color images. The experimental results show that this algorithm has good imperceptibility and, at the same time, is robust against geometric attacks and common signal processing.

  7. [Consideration of algorithms to presume the lesion location by using X-ray images of the stomach--geometric analysis of four direction radiography for the U region].

    Science.gov (United States)

    Henmi, Shuichi

    2013-01-01

    The author considered algorithms to presume the lesion location from a series of X-ray images obtained by four-direction radiography without a blind area for the U region of the stomach. The objects of analysis were six cases in which protruding lesions were noticed in the U region. Firstly, from the length of the short axis and the measured lateral width of the U region projected on the film, we presumed the length of the longitudinal axis and the angle between the short axis and the film. Secondly, for every image we calculated the ratio of the distances to the stomach walls from the right side and the left side to the lateral width at the height passing through the center of the lesion. Using the lesion locations calculated from these values, we took the values that almost agreed between two images to be the lesion location. As a result of the analysis, there were some cases in which the lesion location could be presumed with certainty or with uncertainty, while in other cases the lesion location could not be presumed. Since the form of the U region can be distorted by a change of position, or the angle between the longitudinal axis and the sagittal plane can change, errors may have been introduced into the calculation, and so it was considered that in those cases the lesion location could not be presumed.

  8. Robust real-time pattern matching using bayesian sequential hypothesis testing.

    Science.gov (United States)

    Pele, Ofir; Werman, Michael

    2008-08-01

    This paper describes a method for robust real-time pattern matching. We first introduce a family of image distance measures, the "Image Hamming Distance Family". Members of this family are robust to occlusion, small geometrical transforms, light changes and non-rigid deformations. We then present a novel Bayesian framework for sequential hypothesis testing on finite populations. Based on this framework, we design an optimal rejection/acceptance sampling algorithm. This algorithm quickly determines whether two images are similar with respect to a member of the Image Hamming Distance Family. We also present a fast framework that designs a near-optimal sampling algorithm. Extensive experimental results show that the performance of the sequential sampling algorithm is excellent. Implemented on a Pentium 4 3 GHz processor, detection of a pattern with 2197 pixels in 640 x 480 pixel frames, where in each frame the pattern rotated and was highly occluded, takes only 0.022 seconds per frame.
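
    A minimal sketch of the underlying idea: estimate a thresholded (Hamming-style) pixel distance from a small random sample of pixels and stop as soon as the evidence is conclusive. For simplicity it uses a Hoeffding confidence-bound stopping rule rather than the paper's Bayesian sequential test, and the threshold, tolerance and sample budget are illustrative assumptions.

```python
import numpy as np

def sequential_match(pattern, window, tol=30, theta=0.25,
                     max_samples=256, delta=1e-3, rng=None):
    """Decide whether `window` matches `pattern` by sampling pixels sequentially.

    A pixel pair is a 'mismatch' if its absolute difference exceeds `tol`
    (a thresholded distance in the spirit of the Image Hamming Distance
    Family).  Sampling stops early once a Hoeffding confidence bound places
    the mismatch fraction clearly above or below `theta`.
    """
    rng = np.random.default_rng(rng)
    p, w = pattern.ravel().astype(float), window.ravel().astype(float)
    idx = rng.permutation(p.size)[:max_samples]
    mismatches = 0
    for n, i in enumerate(idx, start=1):
        mismatches += abs(p[i] - w[i]) > tol
        bound = np.sqrt(np.log(2.0 / delta) / (2.0 * n))
        frac = mismatches / n
        if frac - bound > theta:
            return False          # clearly dissimilar: reject early
        if frac + bound < theta:
            return True           # clearly similar: accept early
    return mismatches / len(idx) <= theta

# Example: a noisy copy should match, an unrelated window should not.
rng = np.random.default_rng(0)
pat = rng.integers(0, 256, size=(47, 47))
print(sequential_match(pat, pat + rng.normal(0, 5, pat.shape)))
print(sequential_match(pat, rng.integers(0, 256, size=(47, 47))))
```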

  9. On bivariate geometric distribution

    Directory of Open Access Journals (Sweden)

    K. Jayakumar

    2013-05-01

    Full Text Available Characterizations of the bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with bivariate geometric marginals are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions, such as Marshall-Olkin's bivariate exponential, Downton's bivariate exponential and Hawkes' bivariate exponential, are presented.

  10. Visualizing the Geometric Series.

    Science.gov (United States)

    Bennett, Albert B., Jr.

    1989-01-01

    Mathematical proofs often leave students unconvinced or without understanding of what has been proved, because they provide no visual-geometric representation. Presented are geometric models for the finite geometric series when r is a whole number, and the infinite geometric series when r is the reciprocal of a whole number. (MNS)
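
    For reference, the finite and infinite geometric series discussed above in standard notation:

```latex
% Finite geometric series (r a whole number, r != 1), and the infinite
% geometric series when r is the reciprocal of a whole number n > 1:
\sum_{k=0}^{m} r^{k} \;=\; \frac{r^{m+1}-1}{r-1},
\qquad
\sum_{k=1}^{\infty} \left(\frac{1}{n}\right)^{k} \;=\; \frac{1}{n-1}.
```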

  11. Robustness of holonomic quantum gates

    International Nuclear Information System (INIS)

    Solinas, P.; Zanardi, P.; Zanghi, N.

    2005-01-01

    Full text: If the driving field fluctuates during the quantum evolution, this produces errors in the applied operator. The holonomic (and geometrical) quantum gates are believed to be robust against some kinds of noise. Because of their geometrical dependence, the holonomic operators can be robust against this kind of noise; in fact, if the fluctuations are fast enough they cancel out, leaving the final operator unchanged. I present numerical studies of holonomic quantum gates subject to this parametric noise; the fidelity between the noisy and ideal evolutions is calculated for different noise correlation times. The holonomic quantum gates seem robust not only for fast-fluctuating fields but also for slowly fluctuating fields. These results can be explained by the geometrical character of the holonomic operator: for fast-fluctuating fields the fluctuations are canceled out, while for slowly fluctuating fields the fluctuations do not perturb the loop in parameter space. (author)

  12. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with a changing time delay.

  13. Robust loss functions for boosting.

    Science.gov (United States)

    Kanamori, Takafumi; Takenouchi, Takashi; Eguchi, Shinto; Murata, Noboru

    2007-08-01

    Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, Adaboost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied. Based on the concept of robust statistics, we propose a transformation of loss functions that makes boosting algorithms robust against extreme outliers. Next, the truncation of loss functions is applied to contamination models that describe the occurrence of mislabels near decision boundaries. Numerical experiments illustrate that the proposed loss functions derived from the contamination models are useful for handling highly noisy data in comparison with other loss functions.
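
    A minimal sketch contrasting AdaBoost's exponential loss with one possible robustified transformation (continued linearly beyond a cut-off so that extreme outliers have bounded influence). The specific transformation and the cut-off value c are illustrative choices, not necessarily those proposed in the letter.

```python
import numpy as np

def exp_loss(margin):
    """AdaBoost's exponential loss: unbounded, so a single extreme outlier
    (large negative margin) can dominate the example weights."""
    return np.exp(-margin)

def robust_exp_loss(margin, c=2.0):
    """A robustified transformation of the exponential loss (illustrative):
    identical to exp(-margin) down to margin = -c, then continued linearly,
    so extreme outliers have bounded influence on the gradient."""
    raw = np.exp(-margin)
    cap = np.exp(c)
    return np.where(margin >= -c, raw, cap * (1.0 + np.log(raw / cap)))

margins = np.array([2.0, 0.5, -0.5, -2.0, -10.0])   # -10.0 is an extreme outlier
print(exp_loss(margins))         # outlier contributes ~22026 and swamps the rest
print(robust_exp_loss(margins))  # outlier contributes ~66: grows only linearly
```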

  14. Geometric algorithms for delineating geographic regions

    NARCIS (Netherlands)

    Reinbacher, I.

    2006-01-01

    Everyone of us is used to geographical regions like the south of Utrecht, the dutch Randstad, or the mountainous areas of Austria. Some of these regions have crisp, fixed boundaries like Utrecht or Austria. Others, like the dutch Randstad and the Austrian mountains, have no such boundaries and are

  15. Image understanding using geometric context

    Science.gov (United States)

    Zhang, Xiaochun; Liu, Chuancai

    2017-07-01

    A Gibbs-sampler-based topic model for image annotation, which takes into account the interaction between visual geometric context and the related topic, is presented. Most existing topic models for scene annotation use segmentation-based algorithms. However, topic models using a segmentation algorithm alone can sometimes produce erroneous results when used to annotate real-life scene pictures. Therefore, our algorithm makes use of peaks of the image surface instead of segmentation regions. Existing approaches use the SIFT algorithm and treat the peaks as round blob features. In this paper, the peaks are treated as anisotropic blob features, which model low-level visual elements more precisely. In order to better utilize visual features, our model not only takes visual codewords into consideration, but also considers the influence of visual properties such as orientation, width, length and color on topic formation. The basic idea rests on the assumption that different topics produce distinct visual appearances, and that different visual appearances help to distinguish topics. During the learning stage, each topic is associated with a set of distributions of visual properties, which depicts the appearance of the topic. Considering more geometric properties reduces topic uncertainty and lets the model learn the images better. Tested on the Corel5K, SAIAPR-TC12 and Espgame100k datasets, our method performs moderately better than some state-of-the-art methods.

  16. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Science.gov (United States)

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, robustly reconstructing the complex 3D geometries of automobile castings is a challenging task. 3D scanning data are usually corrupted by noise and the scanning resolution is low; these effects normally lead to incomplete matching and drift. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of the casting. The proposed network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around each key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into 3D space to learn the geometric feature representation. Finally, training labels are generated automatically for deep learning from an existing RGB-D reconstruction algorithm, which accesses the same global key matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at 95% closed loop. For sparse geometric castings with initial matching failure, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.

  17. Geometric Design Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: The mission of the Geometric Design Laboratory (GDL) is to support the Office of Safety Research and Development in research related to the geometric design...

  18. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  19. Stock price prediction using geometric Brownian motion

    Science.gov (United States)

    Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM

    2018-03-01

    Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before the prediction itself, the expected stock price formulation and the 95% confidence level are determined. In stock price prediction using the geometric Brownian motion model, the algorithm starts by calculating the return values, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price, and calculating the 95% confidence level. The analysis of the results shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, as demonstrated by a forecast MAPE value ≤ 20%.
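
    A minimal sketch of the forecasting recipe outlined above (log-returns, drift and volatility estimates, simulated GBM paths, a 95% band and a MAPE accuracy check), assuming a series of daily closing prices; the function names, path count and sample prices are illustrative.

```python
import numpy as np

def gbm_forecast(prices, horizon, n_paths=1000, seed=0):
    """Forecast future prices with geometric Brownian motion.

    Steps follow the abstract: compute log-returns, estimate drift and
    volatility, then simulate S_t = S_0 * exp((mu - 0.5*sigma^2)*t + sigma*W_t).
    Returns the mean forecast and a 95% band over the simulated paths.
    """
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(np.log(prices))            # daily log-returns
    mu, sigma = returns.mean(), returns.std(ddof=1)
    s0 = prices[-1]

    rng = np.random.default_rng(seed)
    t = np.arange(1, horizon + 1)
    # Brownian paths: cumulative sums of N(0,1) increments, one row per path.
    w = rng.standard_normal((n_paths, horizon)).cumsum(axis=1)
    paths = s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * w)

    mean = paths.mean(axis=0)
    lo, hi = np.percentile(paths, [2.5, 97.5], axis=0)
    return mean, (lo, hi)

def mape(actual, forecast):
    """Mean absolute percentage error; a value <= 20% is taken as accurate."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Usage with illustrative closing prices:
hist = [100, 101, 99, 102, 104, 103, 105, 107, 106, 108]
mean, (lo, hi) = gbm_forecast(hist, horizon=5)
print(mean, lo, hi)
```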

  20. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  1. A Robust and Self-Paced BCI System Based on a Four Class SSVEP Paradigm: Algorithms and Protocols for a High-Transfer-Rate Direct Brain Communication

    Directory of Open Access Journals (Sweden)

    Sergio Parini

    2009-01-01

    Full Text Available In this paper, we present, with particular focus on the adopted processing and identification chain and protocol-related solutions, a whole self-paced brain-computer interface system based on a 4-class steady-state visual evoked potentials (SSVEPs paradigm. The proposed system incorporates an automated spatial filtering technique centred on the common spatial patterns (CSPs method, an autoscaled and effective signal features extraction which is used for providing an unsupervised biofeedback, and a robust self-paced classifier based on the discriminant analysis theory. The adopted operating protocol is structured in a screening, training, and testing phase aimed at collecting user-specific information regarding best stimulation frequencies, optimal sources identification, and overall system processing chain calibration in only a few minutes. The system, validated on 11 healthy/pathologic subjects, has proven to be reliable in terms of achievable communication speed (up to 70 bit/min and very robust to false positive identifications.

  2. A Robust Obstacle Avoidance for Service Robot Using Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2011-03-01

    Full Text Available The objective of this paper is to propose a robust obstacle avoidance method for a service robot in indoor environments. The method uses information about static obstacles on the landmark, obtained by edge detection. The speed and direction of people walking as moving obstacles are obtained by a single camera using a tracking and recognition system, and distance is measured with three ultrasonic sensors. A new geometrical model and maneuvering method for moving-obstacle avoidance is introduced and combined with a Bayesian approach for state estimation. The obstacle avoidance problem is formulated using decision theory, with prior and posterior distributions and a loss function, to determine an optimal response based on inaccurate sensor data. Algorithms for the moving-obstacle avoidance method are proposed, and experimental results on the service robot are presented. Various experiments show that the proposed method is fast, robust, and was successfully implemented on the service robot Srikandi II, equipped with a 4-DOF arm, developed in our laboratory.
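
    A minimal sketch of the decision-theoretic core described above: fuse noisy ultrasonic range readings into a posterior over obstacle distance and choose the maneuver with the smallest expected loss. The Gaussian noise model, the discretization, the action set and the loss table are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Discretized obstacle distance (metres) and a uniform prior.
distances = np.linspace(0.2, 3.0, 29)
prior = np.full(distances.size, 1.0 / distances.size)

def posterior(readings, sigma=0.15):
    """Posterior over obstacle distance from noisy ultrasonic readings,
    assuming independent Gaussian measurement noise (illustrative model)."""
    post = prior.copy()
    for z in readings:
        post *= np.exp(-0.5 * ((z - distances) / sigma) ** 2)
    return post / post.sum()

def loss(action, d):
    """Loss of each maneuver as a function of true distance (illustrative):
    going straight is cheap when the obstacle is far, very costly when near."""
    if action == "straight":
        return np.where(d < 0.8, 10.0, 0.0)
    if action == "swerve":
        return np.full_like(d, 1.0)
    return np.full_like(d, 3.0)        # "stop"

def best_action(readings):
    post = posterior(readings)
    actions = ["straight", "swerve", "stop"]
    expected = {a: float(np.sum(loss(a, distances) * post)) for a in actions}
    return min(expected, key=expected.get), expected

print(best_action([0.70, 0.75, 0.72]))   # near obstacle -> swerve
print(best_action([2.40, 2.50, 2.45]))   # far obstacle  -> straight
```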

  3. Robust Image Hashing Using Radon Transform and Invariant Features

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2016-09-01

    Full Text Available A robust image hashing method based on the Radon transform and invariant features is proposed for image authentication, image retrieval, and image detection. Specifically, an input image is first converted into a counterpart with a normalized size. Then the invariant centroid algorithm is applied to obtain the invariant feature point and the surrounding circular area, and the Radon transform is employed to acquire the mapping coefficient matrix of the area. Finally, the hashing sequence is generated by combining the feature vectors and the invariant moments calculated from the coefficient matrix. Experimental results show that this method can resist not only normal image processing operations but also some geometric distortions. Comparisons of receiver operating characteristic (ROC) curves indicate that the proposed method outperforms some existing methods in the trade-off between perceptual robustness and discrimination.

  4. Reconstruction of an InAs nanowire using geometric tomography

    DEFF Research Database (Denmark)

    Pennington, Robert S.; König, Stefan; Alpers, Andreas

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality....... When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize....

  5. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme...... for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed....

  6. A Robust Motion Artifact Detection Algorithm for Accurate Detection of Heart Rates From Photoplethysmographic Signals Using Time-Frequency Spectral Features.

    Science.gov (United States)

    Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H

    2017-09-01

    Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach

  7. Robust surface registration using N-points approximate congruent sets

    Directory of Open Access Journals (Sweden)

    Yao Jian

    2011-01-01

    Full Text Available Scans acquired by 3D sensors are typically represented in a local coordinate system. When multiple scans, taken from different locations, represent the same scene, they must be registered to a common reference frame. We propose a fast and robust registration approach to automatically align two scans by finding two sets of N points that are approximately congruent under a rigid transformation and lead to a good estimate of the transformation between their corresponding point clouds. Given two scans, our algorithm randomly searches for the best sets of congruent groups of points using a RANSAC-based approach. To successfully and reliably align two scans when there is only a small overlap, we improve the basic RANSAC random selection step by employing a weight function that approximates the probability of each pair of points in one scan matching a pair in the other. The search time to find pairs of congruent sets of N points is greatly reduced by employing a fast search codebook based on both binary and multi-dimensional lookup tables. Moreover, we introduce a novel indicator of the quality of the overlapping region, which is used to verify the estimated rigid transformation and to improve the alignment robustness. Our framework is general enough to incorporate and efficiently combine different point descriptors derived from geometric and texture-based feature points or scene geometrical characteristics. We also present a method to improve the matching effectiveness of texture feature descriptors by extracting them from an atlas of rectified images recovered from the scan reflectance image. Our algorithm is robust with respect to different sampling densities and is also resilient to noise and outliers. We demonstrate its robustness and efficiency on several challenging scan datasets with varying degrees of noise, outliers and extent of overlap, acquired from indoor and outdoor scenarios.
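
    A minimal sketch of RANSAC-style rigid alignment using random 3-point samples and the Kabsch/SVD estimate. Unlike the method above, it assumes known (if partly corrupted) point correspondences and omits the congruent-set search, codebook and weighting; the tolerance and iteration budget are illustrative.

```python
import numpy as np

def rigid_from_correspondences(p, q):
    """Least-squares rotation R and translation t with R @ p_i + t ~= q_i (Kabsch/SVD)."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    h = (p - cp).T @ (q - cq)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cq - r @ cp

def ransac_align(src, dst, iters=500, tol=0.05, rng=None):
    """RANSAC-style alignment: fit a rigid transform from random 3-point
    samples and keep the one explaining the most correspondences."""
    rng = np.random.default_rng(rng)
    best_inliers, best_rt = 0, None
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        r, t = rigid_from_correspondences(src[idx], dst[idx])
        err = np.linalg.norm((src @ r.T + t) - dst, axis=1)
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_rt = inliers, (r, t)
    return best_rt, best_inliers

# Synthetic test: rotate/translate a cloud, corrupt 20% of correspondences.
rng = np.random.default_rng(1)
src = rng.uniform(-1, 1, size=(200, 3))
angle = 0.4
rot = np.array([[np.cos(angle), -np.sin(angle), 0],
                [np.sin(angle),  np.cos(angle), 0],
                [0, 0, 1]])
dst = src @ rot.T + np.array([0.3, -0.1, 0.5])
dst[:40] = rng.uniform(-1, 1, size=(40, 3))        # outlier correspondences
(r, t), n_in = ransac_align(src, dst)
print(n_in, np.allclose(r, rot, atol=1e-6))
```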

  8. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique, definitively best parameter vector. The parameters of fitted hydrological models depend upon the input data, and the quality of the input data cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to the observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on the parameters was analysed. It was found that the measurement errors have a significant effect on the best-performing parameter vector; the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
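
    A minimal sketch of the depth-based selection idea: approximate Tukey's half-space depth with random projections and keep the deepest (most central) vectors among the well-performing ones. The random-direction approximation, the placeholder performance values and the thresholds are illustrative, not the study's exact procedure.

```python
import numpy as np

def halfspace_depth(points, n_dirs=500, rng=None):
    """Approximate Tukey half-space depth of each row of `points`.

    For each random direction, a point's depth is bounded by the fraction of
    points on its smaller side of the hyperplane through it; the approximate
    depth is the minimum of this fraction over all sampled directions.
    """
    rng = np.random.default_rng(rng)
    n, dim = points.shape
    depth = np.full(n, 1.0)
    for _ in range(n_dirs):
        u = rng.standard_normal(dim)
        proj = points @ u
        ranks = np.argsort(np.argsort(proj))             # 0 .. n-1
        frac = np.minimum(ranks + 1, n - ranks) / n       # smaller-side fraction
        depth = np.minimum(depth, frac)
    return depth

# Illustrative use: among parameter vectors whose model performance (e.g. a
# Nash-Sutcliffe efficiency) exceeds a threshold, keep the deepest vectors
# as the robust parameter set.
rng = np.random.default_rng(0)
params = rng.normal(size=(300, 4))                  # candidate parameter vectors
performance = rng.uniform(0.4, 0.9, size=300)       # placeholder NSE values
good = params[performance > 0.7]
depths = halfspace_depth(good)
robust_set = good[depths >= np.quantile(depths, 0.9)]
print(robust_set.shape)
```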

  9. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    Science.gov (United States)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. The proposed method introduces negligible visual distortion in the host model. Initially, a preprocessing step is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the ability to retrieve the information after attacks, block watermarks are encoded with a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing and quantization attacks. It is also resilient to geometric transformation and vertex and face reordering attacks. Moreover, the proposed algorithm is designed to be robust against the cropping attack. Simulation results confirm that the watermarked models suffer very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method has good performance against mesh smoothing attacks.

  10. Manipulation Robustness of Collaborative Filtering

    OpenAIRE

    Benjamin Van Roy; Xiang Yan

    2010-01-01

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions and hence have become targets of manipulation by unscrupulous vendors. We demonstrate that nearest neighbors algorithms, which are widely used in commercial systems, are highly susceptible to manipulation and introduce new collaborative filtering algorithms that are relatively robust.

  11. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function

    OpenAIRE

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-01

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF-theory to dual and multi-wavelength PSA-synthesis when several simultaneous laser-colors are present. The FTF-based synthesis for dual-wavelength PSA (DW-PSA) is optimized for high signal-to-noise ratio and minimum number of temporal phase-shifted interferograms. The DW-PSA synthesi...

  12. Robust factorization

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Fisker, Rune; Åström, Kalle

    2002-01-01

    Factorization algorithms for recovering structure and motion from an image stream have many advantages, but they usually require a set of well-tracked features. Such a set is in generally not available in practical applications. There is thus a need for making factorization algorithms deal effect...

  13. Artefacts in geometric phase analysis of compound materials

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Jonathan J.P., E-mail: j.j.p.peters@warwick.ac.uk [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom); Beanland, Richard; Alexe, Marin [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom); Cockburn, John W.; Revin, Dmitry G.; Zhang, Shiyong Y. [Department of Physics and Astronomy, University of Sheffield, Hounsfield Road, Sheffield S3 7RH (United Kingdom); Sanchez, Ana M., E-mail: a.m.sanchez@warwick.ac.uk [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom)

    2015-10-15

    The geometric phase analysis (GPA) algorithm is known as a robust and straightforward technique that can be used to measure lattice strains in high resolution transmission electron microscope (TEM) images. It is also attractive for analysis of aberration-corrected scanning TEM (ac-STEM) images that resolve every atom column, since it uses Fourier transforms and does not require real-space peak detection and assignment to appropriate sublattices. Here it is demonstrated that, in ac-STEM images of compound materials with compositionally distinct atom columns, an additional geometric phase is present in the Fourier transform. If the structure changes from one area to another in the image (e.g. across an interface), the change in this additional phase will appear as a strain in conventional GPA, even if there is no lattice strain. Strategies to avoid this pitfall are outlined. - Highlights: • GPA is shown to produce incorrect strains when applied to images of compound materials. • A mathematical description is laid out for why GPA can produce artefacts. • The artefact is demonstrated using experimental and simulated data. • A ‘rule’ is set to avoid this artefact in GPA.

  14. Geometrical methods for power network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bellucci, Stefano; Tiwari, Bhupendra Nath [Istituto Nazioneale di Fisica Nucleare, Frascati, Rome (Italy). Lab. Nazionali di Frascati; Gupta, Neeraj [Indian Institute of Technology, Kanpur (India). Dept. of Electrical Engineering

    2013-02-01

    Uses advanced geometrical methods to analyse power networks. Provides a self-contained and tutorial introduction. Includes a fully worked-out example for the IEEE 5 bus system. This book is a short introduction to power system planning and operation using advanced geometrical methods. The approach is based on well-known insights and techniques developed in theoretical physics in the context of Riemannian manifolds. The proof of principle and robustness of this approach is examined in the context of the IEEE 5 bus system. This work addresses applied mathematicians, theoretical physicists and power engineers interested in novel mathematical approaches to power network theory.

  15. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    ...knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political...

  16. Computationally Efficient Robust Color Image Watermarking Using Fast Walsh Hadamard Transform

    Directory of Open Access Journals (Sweden)

    Suja Kalarikkal Pullayikodi

    2017-10-01

    Full Text Available A watermark is a copy-deterrence mechanism embedded in a multimedia signal that is to be protected from hacking and piracy, in such a way that it can later be extracted from the watermarked signal by a decoder. Watermarking can be used in various applications such as authentication, video indexing, copyright protection and access control. In this paper a new CDMA (Code Division Multiple Access) based robust watermarking algorithm using a customized 8 × 8 Walsh-Hadamard transform is proposed for color images, and a detailed performance and robustness analysis has been performed. The paper studies in detail the effect of the spreading code length, the number of spreading codes and the type of spreading codes on the performance of the watermarking system. Compared to existing techniques the proposed scheme is computationally more efficient and consumes much less execution time. Furthermore, the proposed scheme is robust and survives most of the common signal processing and geometric attacks.
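
    A minimal sketch of the fast Walsh-Hadamard transform applied to an 8 × 8 block, the transform underlying the scheme above; the CDMA spreading and embedding steps of the watermarking algorithm itself are not reproduced here.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform of a 1-D array whose length is a power
    of two (natural / Hadamard ordering), O(n log n); returns a new array."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

def fwht2(block):
    """2-D Walsh-Hadamard transform of an 8x8 block (rows, then columns)."""
    rows = np.array([fwht(r) for r in block])
    return np.array([fwht(c) for c in rows.T]).T

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = fwht2(block)
# The unnormalized transform is its own inverse up to a factor of 8*8 = 64.
print(np.allclose(fwht2(coeffs) / 64.0, block))
```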

  17. Geometric group theory

    CERN Document Server

    Druţu, Cornelia

    2018-01-01

    The key idea in geometric group theory is to study infinite groups by endowing them with a metric and treating them as geometric spaces. This applies to many groups naturally appearing in topology, geometry, and algebra, such as fundamental groups of manifolds, groups of matrices with integer coefficients, etc. The primary focus of this book is to cover the foundations of geometric group theory, including coarse topology, ultralimits and asymptotic cones, hyperbolic groups, isoperimetric inequalities, growth of groups, amenability, Kazhdan's Property (T) and the Haagerup property, as well as their characterizations in terms of group actions on median spaces and spaces with walls. The book contains proofs of several fundamental results of geometric group theory, such as Gromov's theorem on groups of polynomial growth, Tits's alternative, Stallings's theorem on ends of groups, Dunwoody's accessibility theorem, the Mostow Rigidity Theorem, and quasiisometric rigidity theorems of Tukia and Schwartz. This is the f...

  18. Geometric and engineering drawing

    CERN Document Server

    Morling, K

    2010-01-01

    The new edition of this successful text describes all the geometric instructions and engineering drawing information that are likely to be needed by anyone preparing or interpreting drawings or designs with plenty of exercises to practice these principles.

  19. Differential geometric structures

    CERN Document Server

    Poor, Walter A

    2007-01-01

    This introductory text defines geometric structure by specifying parallel transport in an appropriate fiber bundle and focusing on simplest cases of linear parallel transport in a vector bundle. 1981 edition.

  20. Geometric ghosts and unitarity

    International Nuclear Information System (INIS)

    Ne'eman, Y.

    1980-09-01

    A review is given of the geometrical identification of the renormalization ghosts and the resulting derivation of Unitarity equations (BRST) for various gauges: Yang-Mills, Kalb-Ramond, and Soft-Group-Manifold

  1. Asymptotic and geometrical quantization

    International Nuclear Information System (INIS)

    Karasev, M.V.; Maslov, V.P.

    1984-01-01

    The main ideas of geometric-, deformation- and asymptotic quantizations are compared. It is shown that, on the one hand, the asymptotic approach is a direct generalization of exact geometric quantization, on the other hand, it generates deformation in multiplication of symbols and Poisson brackets. Besides investigating the general quantization diagram, its applications to the calculation of asymptotics of a series of eigenvalues of operators possessing symmetry groups are considered

  2. Geometrical method of decoupling

    Directory of Open Access Journals (Sweden)

    C. Baumgarten

    2012-12-01

    Full Text Available The computation of tunes and matched beam distributions are essential steps in the analysis of circular accelerators. If certain symmetries—like midplane symmetry—are present, then it is possible to treat the betatron motion in the horizontal plane, the vertical plane, and (under certain circumstances) the longitudinal motion separately using the well-known Courant-Snyder theory, or to apply transformations that have been described previously as, for instance, the method of Teng and Edwards. In a preceding paper, it has been shown that this method requires a modification for the treatment of isochronous cyclotrons with non-negligible space charge forces. Unfortunately, the modification was numerically not as stable as desired and it was still unclear if the extension would work for all conceivable cases. Hence, a systematic derivation of a more general treatment seemed advisable. In a second paper, the author suggested the use of real Dirac matrices as basic tools for coupled linear optics and gave a straightforward recipe to decouple positive definite Hamiltonians with imaginary eigenvalues. In this article this method is generalized and simplified in order to formulate a straightforward method to decouple Hamiltonian matrices with eigenvalues on the real and the imaginary axis. The decoupling of symplectic matrices which are exponentials of such Hamiltonian matrices can be deduced from this in a few steps. It is shown that this algebraic decoupling is closely related to a geometric "decoupling" by the orthogonalization of the vectors E, B, and P, which were introduced with the so-called "electromechanical equivalence." A mathematical analysis of the problem can be traced down to the task of finding a structure-preserving block diagonalization of symplectic or Hamiltonian matrices. Structure preservation means in this context that the (sequence of) transformations must be symplectic and hence canonical. When

  3. Geometrical superresolved imaging using nonperiodic spatial masking.

    Science.gov (United States)

    Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram

    2009-03-01

    The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by a lack of spatial sampling points as well as by the shape of every sampling pixel, which generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrically superresolved imaging regardless of the actual shape of the pixels.

  4. On geometrized gravitation theories

    International Nuclear Information System (INIS)

    Logunov, A.A.; Folomeshkin, V.N.

    1977-01-01

    General properties of the geometrized gravitation theories have been considered. Geometrization of the theory is realized only to the extent that necessarily follows from experiment (geometrization of the density of the matter Lagrangian only). For the general case the gravitation field equations and the equations of motion for matter are formulated in different Riemann spaces. A covariant formulation of the energy-momentum conservation laws is given in an arbitrary geometrized theory. The noncovariant notion of ''pseudotensor'' is not required in formulating the conservation laws. It is shown that in the general case (i.e., when there is an explicit dependence of the matter Lagrangian density on the covariant derivatives) the symmetric energy-momentum tensor of the matter depends explicitly on the curvature tensor. Different geometrized theories that describe the known set of experimental facts are listed. The properties of one version of the quasilinear geometrized theory that describes the experimental facts are considered. In such a theory the fundamental static spherically symmetric solution has a singularity only at the coordinate origin. The theory permits the construction of a satisfactory model of the homogeneous nonstationary Universe.

  5. Almagest, a new trackless ring finding algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lamanna, G., E-mail: gianluca.lamanna@cern.ch

    2014-12-01

    A fast ring finding algorithm is a crucial ingredient for using a RICH in on-line trigger selection. The present algorithms are either too slow (with respect to the incoming data rate) or need information from a tracking system. Digital image techniques that assume limited computing power (such as the Hough transform) are not perfectly robust with respect to noise immunity. We present a novel technique based on Ptolemy's theorem for multi-ring pattern recognition. Starting from purely geometrical considerations, this algorithm (also known as "Almagest") allows fast and trackless ring reconstruction, with spatial resolution comparable to other offline techniques. Almagest is particularly suitable for parallel implementation on multi-core machines. Preliminary tests on GPUs (multi-core video card processors) show that, thanks to an execution time smaller than 10 μs per event, this algorithm could be employed for on-line selection in trigger systems. The use case of the NA62 RICH trigger, based on GPUs, will be discussed. - Highlights: • A new algorithm for fast multiple ring searching in RICH detectors is presented. • The Almagest algorithm exploits the computing power of graphics processors (GPUs). • A preliminary implementation for on-line triggering in the NA62 experiment shows encouraging results.
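
    A minimal sketch of the geometric test at the heart of the approach: by Ptolemy's theorem, four points taken in convex cyclic order are concyclic exactly when the product of the diagonals equals the sum of the products of opposite sides. The brute-force quad enumeration and the tolerance below are illustrative and bear no relation to the parallel NA62 implementation.

```python
import numpy as np
from itertools import combinations

def ptolemy_residual(p, q, r, s):
    """Ptolemy residual |AC*BD - (AB*CD + BC*AD)| for points taken in the cyclic
    order p, q, r, s; it vanishes exactly when the four points are concyclic."""
    d = lambda a, b: np.hypot(*(np.asarray(a) - np.asarray(b)))
    return abs(d(p, r) * d(q, s) - (d(p, q) * d(r, s) + d(q, r) * d(p, s)))

def count_ring_quads(hits, tol=1e-6):
    """Brute-force count of 4-hit subsets consistent with a common ring
    (for illustration only; real data would need a looser tolerance)."""
    n_ok = 0
    for quad in combinations(hits, 4):
        quad = np.array(quad)
        # Order the four hits by angle around their centroid so that the
        # Ptolemy condition is applied to a convex cyclic ordering.
        c = quad.mean(axis=0)
        angles = np.arctan2(quad[:, 1] - c[1], quad[:, 0] - c[0])
        a, b, e, f = quad[np.argsort(angles)]
        n_ok += ptolemy_residual(a, b, e, f) < tol
    return n_ok

# Seven hits on a unit ring plus two spurious hits inside it:
ang = np.linspace(0, 2 * np.pi, 7, endpoint=False)
hits = np.vstack([np.c_[np.cos(ang), np.sin(ang)], [[0.3, 0.1], [-0.2, 0.4]]])
print(count_ring_quads(hits))   # 35 = C(7,4): only the pure ring quads pass
```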

  6. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.

  7. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.

  8. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems

    Directory of Open Access Journals (Sweden)

    Shuqiang Huang

    2017-01-01

    Full Text Available Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively avoid the population becoming trapped in a local optimum. With the improvement of an adaptive opposition-based search and its ability to dynamically adjust parameters, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or Kmedoids algorithms.

  9. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.

    Science.gov (United States)

    Huang, Shuqiang; Tao, Ming

    2017-01-22

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively avoid the population becoming trapped in a local optimum. With the improvement of an adaptive opposition-based search and its ability to dynamically adjust parameters, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or Kmedoids algorithms.

  10. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems

    Science.gov (United States)

    Huang, Shuqiang; Tao, Ming

    2017-01-01

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively avoid the population becoming trapped in a local optimum. With the improvement of an adaptive opposition-based search and its ability to dynamically adjust parameters, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has a good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or Kmedoids algorithms. PMID:28117735
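
    A minimal sketch of a competitive swarm optimizer applied to the simplest (K = 1) case of the geometric center problem: place one gateway so that the maximum distance to any sensor node, i.e. the network coverage radius, is minimized. The swarm size, social factor phi and iteration budget are illustrative choices.

```python
import numpy as np

def coverage_radius(gateway, nodes):
    """Objective: maximum gateway-to-node distance (to be minimized)."""
    return np.linalg.norm(nodes - gateway, axis=1).max()

def cso_1center(nodes, swarm=40, iters=200, phi=0.1, rng=None):
    """Competitive swarm optimizer for the geometric 1-center problem.

    In each generation particles are paired at random; the loser of each
    pairwise comparison learns from the winner and from the swarm mean,
    while the winner passes to the next generation unchanged.
    """
    rng = np.random.default_rng(rng)
    lo, hi = nodes.min(axis=0), nodes.max(axis=0)
    x = rng.uniform(lo, hi, size=(swarm, nodes.shape[1]))
    v = np.zeros_like(x)
    for _ in range(iters):
        fit = np.array([coverage_radius(p, nodes) for p in x])
        mean = x.mean(axis=0)
        for a, b in rng.permutation(swarm).reshape(-1, 2):
            w, l = (a, b) if fit[a] < fit[b] else (b, a)   # winner, loser
            r1, r2, r3 = rng.random((3, x.shape[1]))
            v[l] = r1 * v[l] + r2 * (x[w] - x[l]) + phi * r3 * (mean - x[l])
            x[l] = x[l] + v[l]
    fit = np.array([coverage_radius(p, nodes) for p in x])
    return x[fit.argmin()], fit.min()

# Usage with illustrative sensor node positions:
rng = np.random.default_rng(3)
nodes = rng.uniform(0, 100, size=(60, 2))
gateway, radius = cso_1center(nodes)
print(gateway, radius)
```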

  11. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application.The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance of measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameter, large sample properties, and goodness-of-fit tests. It...

  12. Geometrical optical illusionists.

    Science.gov (United States)

    Wade, Nicholas J

    2014-01-01

    Geometrical optical illusions were given this title by Oppel in 1855. Variants on such small distortions of visual space were illustrated thereafter, many of which bear the names of those who first described them. Some original forms of the geometrical optical illusions are shown together with 'perceptual portraits' of those who described them. These include: Roget, Chevreul, Fick, Zöllner, Poggendorff, Hering, Kundt, Delboeuf, Mach, Helmholtz, Hermann, von Bezold, Müller-Lyer, Lipps, Thiéry, Wundt, Münsterberg, Ebbinghaus, Titchener, Ponzo, Luckiesh, Sander, Ehrenstein, Gregory, Heard, White, Shepard, and Lingelbach. The illusions are grouped under the headings of orientation, size, the combination of size and orientation, and contrast. Early theories of illusions, before geometrical optical illusions were so named, are mentioned briefly.

  13. A mixture model for robust registration in Kinect sensor

    Science.gov (United States)

    Peng, Li; Zhou, Huabing; Zhu, Shengguo

    2018-03-01

    The Microsoft Kinect sensor has been widely used in many applications, but it suffers from low registration precision between the color image and the depth image. In this paper, we present a robust method to improve the registration precision using a mixture model that can handle multiple images with a nonparametric model. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). The estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model, is able to obtain good estimates. We illustrate the proposed method on a publicly available dataset. The experimental results show that our approach outperforms the baseline methods.

  14. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between the image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned from such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that correspond poorly between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates between identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  15. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results of the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  16. Acuros XB: dosimetric impact of low-density heterogeneous geometric artefacts; Acuros XB: impacto dosimetrico de artefactos geometricos heterogeneos de baja densidad

    Energy Technology Data Exchange (ETDEWEB)

    Jurado Bruggeman, D.; Munoz Montplet, C.; Agramunt Chaler, S.; Bueno Vizcarra, M.; Duch Guillen, M. A.

    2013-07-01

    AXB represents a step forward in terms of calculation algorithms for treatment planning. In some situations, the dose distributions it provides are significantly different from those obtained with other commercially available algorithms, which entails new challenges for the evaluation and optimization of treatments. (Author)

  17. Geometric Rationalization for Freeform Architecture

    KAUST Repository

    Jiang, Caigui

    2016-06-20

    The emergence of freeform architecture provides interesting geometric challenges with regards to the design and manufacturing of large-scale structures. To design these architectural structures, we have to consider two types of constraints. First, aesthetic constraints are important because the buildings have to be visually impressive. Second, functional constraints are important for the performance of a building and its efficient construction. This thesis contributes to the area of architectural geometry. Specifically, we are interested in the geometric rationalization of freeform architecture with the goal of combining aesthetic and functional constraints and construction requirements. Aesthetic requirements typically come from designers and architects. To obtain visually pleasing structures, they favor smoothness of the building shape, but also smoothness of the visible patterns on the surface. Functional requirements typically come from the engineers involved in the construction process. For example, covering freeform structures using planar panels is much cheaper than using non-planar ones. Further, constructed buildings have to be stable and should not collapse. In this thesis, we explore the geometric rationalization of freeform architecture using four specific example problems inspired by real life applications. We achieve our results by developing optimization algorithms and a theoretical study of the underlying geometrical structure of the problems. The four example problems are the following: (1) The design of shading and lighting systems which are torsion-free structures with planar beams based on quad meshes. They satisfy the functionality requirements of preventing light from going inside a building as shading systems or reflecting light into a building as lighting systems. (2) The design of freeform honeycomb structures that are constructed based on hex-dominant meshes with a planar beam mounted along each edge. The beams intersect without

  18. Robust Return Algorithm for Anisotropic Plasticity Models

    DEFF Research Database (Denmark)

    Tidemann, L.; Krenk, Steen

    2017-01-01

    Plasticity models can be defined by an energy potential, a plastic flow potential and a yield surface. The energy potential defines the relation between the observable elastic strains ϒe and the energy conjugate stresses Τe and between the non-observable internal strains i and the energy conjugat...

  19. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advancements in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, i.e., the intersections of each pair of orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements; beforehand, a training dataset is established that contains a large number of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but is also strongly robust to surface color and complex textures.

  20. Universal geometrical module for MARS program

    International Nuclear Information System (INIS)

    Talanov, V.V.

    1992-01-01

    A geometrical program module for modeling hadron and electromagnetic cascades, which matches the physical coordinates of the particle's current state to one of the auxiliary cells, is described. The whole medium in which the particles are tracked is divided into a certain number of auxiliary cells. The algorithm for identifying the cell through which the particle trajectory passes is considered in detail. The described cell identification algorithm was developed for the MARS program and realized as a set of subprograms written in the FORTRAN language. 4 refs., 1 tab

  1. Geometric Liouville gravity

    International Nuclear Information System (INIS)

    La, H.

    1992-01-01

    A new geometric formulation of Liouville gravity based on the area preserving diffeo-morphism is given and a possible alternative to reinterpret Liouville gravity is suggested, namely, a scalar field coupled to two-dimensional gravity with a curvature constraint

  2. A Geometric Dissection Problem

    Indian Academy of Sciences (India)

    A Geometric Dissection Problem. M N Deshpande. Resonance – Journal of Science Education, Volume 7, Issue 7 (Think It Over), July 2002, p. 91. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/07/0091-0091

  3. Geometric statistical inference

    International Nuclear Information System (INIS)

    Periwal, Vipul

    1999-01-01

    A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined

  4. Geometric Series via Probability

    Science.gov (United States)

    Tesman, Barry

    2012-01-01

    Infinite series is a challenging topic in the undergraduate mathematics curriculum for many students. In fact, there is a vast literature in mathematics education research on convergence issues. One of the most important types of infinite series is the geometric series. Their beauty lies in the fact that they can be evaluated explicitly and that…
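    Since the abstract stresses that geometric series can be evaluated explicitly, it may help to record the standard closed form (a textbook identity, not part of the cited article):

\[
S_n=\sum_{k=0}^{n} ar^k = a\,\frac{1-r^{n+1}}{1-r}\quad(r\neq 1),
\qquad
\sum_{k=0}^{\infty} ar^k=\lim_{n\to\infty}S_n=\frac{a}{1-r}\quad(|r|<1),
\]

    which follows from subtracting \(rS_n\) from \(S_n\), so that \(S_n(1-r)=a\,(1-r^{n+1})\).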

  5. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    © 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  6. Geometric reconstruction methods for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)

    2013-05-15

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.

  7. Geometric reconstruction methods for electron tomography

    International Nuclear Information System (INIS)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand

  8. Pragmatic geometric model evaluation

    Science.gov (United States)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding as there are many different sources of uncertainty and some of the factors can be assessed merely in a subjective way. For many practical applications in industry or risk assessment (e. g. geothermal drilling) a quantitative estimation of possible geometric variations in depth unit is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e. g. fault network). Within the context of the trans-European project "GeoMol" uncertainty analysis has to be very pragmatic also because of different data rights, data policies and modelling software between the project partners. In a case study a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space and as a consequence areas of geometric complexity are identified. These areas are usually very data sensitive hence geometric variability in between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters as it is done by Monte-Carlo-simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to

  9. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    Science.gov (United States)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using statistics of the neighboring points within a radius r. Then, the algorithm estimates the curvature of the point cloud data using a paraboloid fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are taken as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds with high precision while preserving features, and that it is robust to different noise models.
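    A minimal sketch of only the first stage described above, the removal of large-scale outliers by neighbour statistics within a radius r, assuming SciPy is available; the paraboloid curvature estimation and the curvature-weighted fuzzy c-means stage are not reproduced here, and the radius and threshold values are arbitrary.

```python
# Sketch of radius-based outlier removal for a scattered point cloud (SciPy assumed).
import numpy as np
from scipy.spatial import cKDTree

def radius_outlier_removal(points, r, min_neighbors=5):
    """Keep points that have at least `min_neighbors` other points within radius r."""
    tree = cKDTree(points)
    counts = np.array([len(idx) - 1 for idx in tree.query_ball_point(points, r)])
    return points[counts >= min_neighbors]

# Toy usage: a noisy planar patch plus a few far-away outliers.
rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(0, 1, (2000, 2)), 0.01 * rng.standard_normal(2000)]
outliers = rng.uniform(5, 6, (20, 3))
cloud = np.vstack([plane, outliers])
clean = radius_outlier_removal(cloud, r=0.05, min_neighbors=5)
print(cloud.shape, "->", clean.shape)
```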

  10. A general modeling method for I-V characteristics of geometrically and electrically configured photovoltaic arrays

    International Nuclear Information System (INIS)

    Liu Guangyu; Nguang, Sing Kiong; Partridge, Ashton

    2011-01-01

    Highlights: → A novel and general method is proposed for modeling PV arrays or modules. → A robust algorithm is used for the first time to improve the convergence to solution. → Auxiliary functions in other general methods are not compulsory in our method. → It is novel that geometric configuration is also incorporated. → A case study is performed to show the approach's advantages and unique features. - Abstract: A general method for modeling typical photovoltaic (PV) arrays and modules is proposed to find the exact current and voltage relationship of PV arrays or modules of geometrically and electrically different configurations. Nonlinear characteristic equations of electrical devices in solar array or module systems are numerically constructed without adding any virtual electrical components. Then, a robust damped Newton method is used to find exact I-V relationship of these general nonlinear equations, where the convergence is guaranteed. The model can deal with different mismatch effects such as different configurations of bypass diodes, and partial shading. Geometry coordinates of PV components are also considered to facilitate the modeling of the actual physical configuration. Simulation of a PV array with 48 modules, partially shaded by a concrete structure, is performed to verify the effectiveness and advantages of the proposed method.
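    To make the damped Newton idea concrete, here is a hedged single-cell sketch: it solves the implicit single-diode equation for the current at a given voltage, halving the Newton step until the residual decreases. The parameter values (Iph, I0, Rs, Rsh, n, Vt) are invented for illustration; the paper's method assembles such equations for whole, geometrically configured arrays with bypass diodes and partial shading, which is not reproduced here.

```python
# Illustrative damped Newton solve of the implicit single-diode equation
#   f(I) = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh - I = 0
# for one cell at a given voltage V (all parameter values are made up).
import math

def cell_current(V, Iph=8.0, I0=1e-9, Rs=0.01, Rsh=100.0, n=1.3, Vt=0.02585,
                 tol=1e-10, max_iter=100):
    def f(I):
        e = math.exp((V + I * Rs) / (n * Vt))
        return Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I

    def fprime(I):
        e = math.exp((V + I * Rs) / (n * Vt))
        return -I0 * e * Rs / (n * Vt) - Rs / Rsh - 1.0

    I = 0.9 * Iph                      # initial guess near the photocurrent
    for _ in range(max_iter):
        r = f(I)
        if abs(r) < tol:
            break
        step = -r / fprime(I)
        # Damping: halve the step until the residual actually decreases.
        lam = 1.0
        while abs(f(I + lam * step)) > abs(r) and lam > 1e-8:
            lam *= 0.5
        I += lam * step
    return I

for V in (0.0, 0.4, 0.55):
    print(f"V = {V:4.2f} V  ->  I ~ {cell_current(V):.4f} A")
```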

  11. Can the same edge-detection algorithm be applied to on-line and off-line analysis systems? Validation of a new cinefilm-based geometric coronary measurement software

    NARCIS (Netherlands)

    J. Haase (Jürgen); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); M.M.J.M. van der Linden (Mark); D.P. Foley (David); W.J. van der Giessen (Wim)

    1993-01-01

    textabstractIn the Cardiovascular Measurement System (CMS) the edge-detection algorithm, which was primarily designed for the Philips digital cardiac imaging system (DCI), is applied to cinefilms. Comparative validation of CMS and DCI was performed in vitro and in vivo with intracoronary insertion

  12. Robust and Efficient Parametric Face Alignment

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient

  13. Robust glint detection through homography normalization

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Roholm, Lars; García Ferreiros, Iván

    2014-01-01

    A novel normalization principle for robust glint detection is presented. The method is based on geometric properties of corneal reflections and allows for simple and effective detection of glints even in the presence of several spurious and identically appearing reflections. The method is tested...

  14. Dynamics in geometrical confinement

    CERN Document Server

    Kremer, Friedrich

    2014-01-01

    This book describes the dynamics of low molecular weight and polymeric molecules when they are constrained under conditions of geometrical confinement. It covers geometrical confinement in different dimensionalities: (i) in nanometer thin layers or self supporting films (1-dimensional confinement) (ii) in pores or tubes with nanometric diameters (2-dimensional confinement) (iii) as micelles embedded in matrices (3-dimensional) or as nanodroplets.The dynamics under such conditions have been a much discussed and central topic in the focus of intense worldwide research activities within the last two decades. The present book discusses how the resulting molecular mobility is influenced by the subtle counterbalance between surface effects (typically slowing down molecular dynamics through attractive guest/host interactions) and confinement effects (typically increasing the mobility). It also explains how these influences can be modified and tuned, e.g. through appropriate surface coatings, film thicknesses or pore...

  15. Geometric group theory

    CERN Document Server

    Bestvina, Mladen; Vogtmann, Karen

    2014-01-01

    Geometric group theory refers to the study of discrete groups using tools from topology, geometry, dynamics and analysis. The field is evolving very rapidly and the present volume provides an introduction to and overview of various topics which have played critical roles in this evolution. The book contains lecture notes from courses given at the Park City Math Institute on Geometric Group Theory. The institute consists of a set of intensive short courses offered by leaders in the field, designed to introduce students to exciting, current research in mathematics. These lectures do not duplicate standard courses available elsewhere. The courses begin at an introductory level suitable for graduate students and lead up to currently active topics of research. The articles in this volume include introductions to CAT(0) cube complexes and groups, to modern small cancellation theory, to isometry groups of general CAT(0) spaces, and a discussion of nilpotent genus in the context of mapping class groups and CAT(0) gro...

  16. Lectures in geometric combinatorics

    CERN Document Server

    Thomas, Rekha R

    2006-01-01

    This book presents a course in the geometry of convex polytopes in arbitrary dimension, suitable for an advanced undergraduate or beginning graduate student. The book starts with the basics of polytope theory. Schlegel and Gale diagrams are introduced as geometric tools to visualize polytopes in high dimension and to unearth bizarre phenomena in polytopes. The heart of the book is a treatment of the secondary polytope of a point configuration and its connections to the state polytope of the toric ideal defined by the configuration. These polytopes are relatively recent constructs with numerous connections to discrete geometry, classical algebraic geometry, symplectic geometry, and combinatorics. The connections rely on Gröbner bases of toric ideals and other methods from commutative algebra. The book is self-contained and does not require any background beyond basic linear algebra. With numerous figures and exercises, it can be used as a textbook for courses on geometric, combinatorial, and computational as...

  17. Geometric information provider platform

    Directory of Open Access Journals (Sweden)

    Meisam Yousefzadeh

    2015-07-01

    Full Text Available Renovation of existing buildings is an essential step in reducing energy loss. A considerable part of the renovation process depends on the geometric reconstruction of the building based on semantic parameters. Following many research projects focused on parameterizing energy usage, various energy modelling methods were developed during the last decade. At the same time, with the development of accurate measuring tools such as laser scanners, interest in accurate 3D building models is growing rapidly. However, the automated generation of 3D buildings from laser point clouds, and the detection of specific objects within them, is still a challenge. The goal is to design a platform through which the required geometric information can be efficiently produced to support energy simulation software. The main purpose of the project is to develop a reliable procedure that extracts the required information from measured data and delivers it to a standard energy modelling system.

  18. Gravity, a geometrical course

    CERN Document Server

    Frè, Pietro Giuseppe

    2013-01-01

    ‘Gravity, a Geometrical Course’ presents general relativity (GR) in a systematic and exhaustive way, covering three aspects that are homogenized into a single texture: i) the mathematical, geometrical foundations, exposed in a self consistent contemporary formalism, ii) the main physical, astrophysical and cosmological applications,  updated to the issues of contemporary research and observations, with glimpses on supergravity and superstring theory, iii) the historical development of scientific ideas underlying both the birth of general relativity and its subsequent evolution. The book is divided in two volumes.   Volume One is dedicated to the development of the theory and basic physical applications. It guides the reader from the foundation of special relativity to Einstein field equations, illustrating some basic applications in astrophysics. A detailed  account  of the historical and conceptual development of the theory is combined with the presentation of its mathematical foundations.  Differe...

  19. Efficient Geometric Sound Propagation Using Visibility Culling

    Science.gov (United States)

    Chandak, Anish

    2011-07-01

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying
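    A minimal sketch of the image-source idea mentioned above: the source is mirrored across a single infinite plane, and the straight-line distance from the image to the listener gives the specular reflection path length and delay. The visibility culling (FastV, AD-Frustum) that is the actual contribution of the thesis is not shown; positions and the speed of sound are illustrative assumptions.

```python
# Minimal image-source sketch for one specular reflection off an infinite plane
# wall: mirror the source across the plane, then the straight segment from the
# image to the listener gives the reflection path length (and hence delay).
import numpy as np

def mirror_across_plane(point, plane_point, plane_normal):
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = np.dot(np.asarray(point, dtype=float) - plane_point, n)
    return point - 2.0 * d * n

source = np.array([1.0, 2.0, 1.5])
listener = np.array([4.0, 1.0, 1.2])
wall_point, wall_normal = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])  # floor z=0

image = mirror_across_plane(source, wall_point, wall_normal)
path_length = np.linalg.norm(listener - image)        # direct distance to the image source
delay = path_length / 343.0                           # speed of sound ~343 m/s
print(f"reflection path {path_length:.3f} m, delay {delay*1000:.2f} ms")
```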

  20. Geometric homology revisited

    OpenAIRE

    Ruffino, Fabio Ferrari

    2013-01-01

    Given a cohomology theory, there is a well-known abstract way to define the dual homology theory using the theory of spectra. In [4] the author provides a more geometric construction of the homology theory, using a generalization of the bordism groups. Such a generalization involves in its definition the vector bundle modification, which is a particular case of the Gysin map. In this paper we provide a more natural variant of that construction, which replaces the vector bundle modification wi...

  1. Geometric measure theory

    CERN Document Server

    Waerden, B

    1996-01-01

    From the reviews: "... Federer's timely and beautiful book indeed fills the need for a comprehensive treatise on geometric measure theory, and his detailed exposition leads from the foundations of the theory to the most recent discoveries. ... The author writes with a distinctive style which is both natural and powerfully economical in treating a complicated subject. This book is a major treatise in mathematics and is essential in the working library of the modern analyst." Bulletin of the London Mathematical Society.

  2. Developing geometrical reasoning

    OpenAIRE

    Brown, Margaret; Jones, Keith; Taylor, Ron; Hirst, Ann

    2004-01-01

    This paper summarises a report (Brown, Jones & Taylor, 2003) to the UK Qualifications and Curriculum Authority of the work of one geometry group. The group was charged with developing and reporting on teaching ideas that focus on the development of geometrical reasoning at the secondary school level. The group was encouraged to explore what is possible both within and beyond the current requirements of the UK National Curriculum and the Key Stage 3 strategy, and to consider the whole atta...

  3. Geometric theory of information

    CERN Document Server

    2014-01-01

    This book brings together geometric tools and their applications for information analysis. It collects current and emerging uses of Information Geometry Manifolds in the interdisciplinary fields of Advanced Signal, Image & Video Processing, Complex Data Modeling and Analysis, Information Ranking and Retrieval, Coding, Cognitive Systems, Optimal Control, Statistics on Manifolds, Machine Learning, Speech/sound recognition, and natural language treatment, which are also substantially relevant for industry.

  4. Geometric leaf placement strategies

    International Nuclear Information System (INIS)

    Fenwick, J D; Temple, S W P; Clements, R W; Lawrence, G P; Mayles, H M O; Mayles, W P M

    2004-01-01

    Geometric leaf placement strategies for multileaf collimators (MLCs) typically involve the expansion of the beam's-eye-view contour of a target by a uniform MLC margin, followed by movement of the leaves until some point on each leaf end touches the expanded contour. Film-based dose-distribution measurements have been made to determine appropriate MLC margins (characterized through an index d90) for multileaves set using one particular strategy to straight lines lying at various angles to the direction of leaf travel. Simple trigonometric relationships exist between different geometric leaf placement strategies and are used to generalize the results of the film work into d90 values for several different strategies. Measured d90 values vary both with angle and leaf placement strategy. A model has been derived that explains and describes quite well the observed variations of d90 with angle. The d90 angular variations of the strategies studied differ substantially, and geometric and dosimetric reasoning suggests that the best strategy is the one with the least angular variation. Using this criterion, the best straightforwardly implementable strategy studied is a 'touch circle' approach for which semicircles are imagined to be inscribed within leaf ends, the leaves being moved until the semicircles just touch the expanded target outline

  5. Studies in geometric quantization

    International Nuclear Information System (INIS)

    Tuynman, G.M.

    1988-01-01

    This thesis contains five chapters, of which the first, entitled 'What is prequantization, and what is geometric quantization?', is meant as an introduction to geometric quantization for the non-specialist. The second chapter, entitled 'Central extensions and physics' deals with the notion of central extensions of manifolds and elaborates and proves the statements made in the first chapter. Central extensions of manifolds occur in physics as the freedom of a phase factor in the quantum mechanical state vector, as the phase factor in the prequantization process of classical mechanics and it appears in mathematics when studying central extension of Lie groups. In this chapter the connection between these central extensions is investigated and a remarkable similarity between classical and quantum mechanics is shown. In chapter three a classical model is given for the hydrogen atom including spin-orbit and spin-spin interaction. The method of geometric quantization is applied to this model and the results are discussed. In the final chapters (4 and 5) an explicit method to calculate the operators corresponding to classical observables is given when the phase space is a Kaehler manifold. The obtained formula are then used to quantise symplectic manifolds which are irreducible hermitian symmetric spaces and the results are compared with other quantization procedures applied to these manifolds (in particular to Berezin's quantization). 91 refs.; 3 tabs

  6. A mixture model for robust point matching under multi-layer motion.

    Directory of Open Access Journals (Sweden)

    Jiayi Ma

    Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
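    A stripped-down sketch of the inlier/outlier mixture idea, assuming residuals of putative matches are modelled as an isotropic Gaussian (inliers) plus a uniform distribution over the image area (outliers); EM alternates inlier posteriors with variance and mixing-weight updates. The non-parametric RKHS transformation and the sparse approximation of the actual method are omitted, and all sizes and noise levels are invented.

```python
# EM for a Gaussian-inlier / uniform-outlier residual mixture: estimates the
# posterior probability that each putative match is an inlier.
import numpy as np

def em_inlier_posteriors(residuals, area, n_iter=50):
    """residuals: (N, 2) match residuals; area: area of the uniform outlier domain."""
    r2 = np.sum(residuals ** 2, axis=1)
    sigma2, gamma = np.mean(r2) / 2.0, 0.9          # initial variance and inlier ratio
    for _ in range(n_iter):
        # E-step: posterior probability that each match is an inlier.
        p_in = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2)
        p_out = (1.0 - gamma) / area
        post = p_in / (p_in + p_out)
        # M-step: update the Gaussian variance and the mixing weight.
        sigma2 = np.sum(post * r2) / (2.0 * np.sum(post) + 1e-12)
        gamma = np.mean(post)
    return post, sigma2, gamma

# Toy usage: 80 small inlier residuals plus 20 outliers spread over a 500x500 image.
rng = np.random.default_rng(2)
res = np.vstack([2.0 * rng.standard_normal((80, 2)),
                 rng.uniform(-250, 250, (20, 2))])
post, sigma2, gamma = em_inlier_posteriors(res, area=500.0 * 500.0)
print(f"estimated inlier ratio {gamma:.2f}, sigma {np.sqrt(sigma2):.2f} px")
print("matches kept:", int(np.sum(post > 0.5)))
```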

  7. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  8. Geometrical model of multiple production

    International Nuclear Information System (INIS)

    Chikovani, Z.E.; Jenkovszky, L.L.; Kvaratshelia, T.M.; Struminskij, B.V.

    1988-01-01

    The relation between geometrical and KNO-scaling and their violation is studied in a geometrical model of multiple production of hadrons. Predictions concerning the behaviour of correlation coefficients at future accelerators are given

  9. Geometric Computing for Freeform Architecture

    KAUST Repository

    Wallner, J.; Pottmann, Helmut

    2011-01-01

    Geometric computing has recently found a new field of applications, namely the various geometric problems which lie at the heart of rationalization and construction-aware design processes of freeform architecture. We report on our work in this area

  10. Algorithms and file structures for computational geometry

    International Nuclear Information System (INIS)

    Hinrichs, K.; Nievergelt, J.

    1983-01-01

    Algorithms for solving geometric problems and file structures for storing large amounts of geometric data are of increasing importance in computer graphics and computer-aided design. As examples of recent progress in computational geometry, we explain plane-sweep algorithms, which solve various topological and geometric problems efficiently; and we present the grid file, an adaptable, symmetric multi-key file structure that provides efficient access to multi-dimensional data along any space dimension. (orig.)
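    The sweep idea is easiest to see in one dimension; the sketch below computes the maximum number of mutually overlapping intervals by sweeping a sorted list of endpoint events. It is only a toy analogue of the plane-sweep algorithms surveyed in the paper, and the tie-breaking rule is an assumption.

```python
# One-dimensional sweep: maximum number of mutually overlapping intervals,
# computed by processing sorted endpoint events with a running counter.
def max_overlap(intervals):
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # interval starts
        events.append((hi, -1))   # interval ends
    # Process ends before starts at equal coordinates so touching intervals
    # do not count as overlapping.
    events.sort(key=lambda e: (e[0], e[1]))
    best = active = 0
    for _, delta in events:
        active += delta
        best = max(best, active)
    return best

print(max_overlap([(0, 3), (1, 4), (2, 6), (5, 7)]))   # -> 3
```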

  11. Geometric Constructions with the Computer.

    Science.gov (United States)

    Chuan, Jen-chung

    The computer can be used as a tool to represent and communicate geometric knowledge. With the appropriate software, a geometric diagram can be manipulated through a series of animation that offers more than one particular snapshot as shown in a traditional mathematical text. Geometric constructions with the computer enable the learner to see and…

  12. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    Science.gov (United States)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to an approximate 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
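    A hedged illustration of the downhill-simplex/template-matching combination: SciPy's Nelder-Mead fits a circular template (center and radius) to noisy 2D profile points by minimizing radial residuals. The circle is only a stand-in for the wire-profile templates of the actual system; data, initial guess and tolerances are invented.

```python
# Fit a circular template (cx, cy, r) to noisy 2D profile points with the
# downhill simplex (Nelder-Mead) method from SciPy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 500)
true_c, true_r = np.array([1.2, -0.7]), 2.5
pts = true_c + true_r * np.c_[np.cos(theta), np.sin(theta)]
pts += 0.02 * rng.standard_normal(pts.shape)           # measurement noise

def template_misfit(params, points):
    cx, cy, r = params
    d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
    return np.sum((d - r) ** 2)                         # radial residuals

x0 = np.array([0.0, 0.0, 1.0])                          # rough initial guess
res = minimize(template_misfit, x0, args=(pts,), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
print("fitted center/radius:", np.round(res.x, 3))
```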

  13. Implementation and efficiency of two geometric stiffening approaches

    International Nuclear Information System (INIS)

    Lugris, Urbano; Naya, Miguel A.; Perez, Jose A.; Cuadrado, Javier

    2008-01-01

    When the modeling of flexible bodies is required in multibody systems, the floating frame of reference formulations are probably the most efficient methods available. In the case of beams undergoing high speed rotations, the geometric stiffening effect can appear due to geometric nonlinearities, and it is often not captured by the aforementioned methods, since it is common to linearize the elastic forces assuming small deformations. The present work discusses the implementation of different existing methods developed to consider such geometric nonlinearities within a floating frame of reference formulation in natural coordinates, making emphasis on the relation between efficiency and accuracy of the resulting algorithms, seeking to provide practical criteria of use

  14. Algorithm for generating formulas for the geometric characteristics of plane sections bounded by polygons, and its implementation in DERIVE

    Directory of Open Access Journals (Sweden)

    Rolando Rivero-Galán

    2010-01-01

    Full Text Available In the design of structures and machine elements, the calculation of certain geometric characteristics or properties of plane sections arises with some frequency: the area of a cross-section, the center of gravity, a moment of inertia or, more generally, some geometric characteristic defined by a double integral extended over the region of the plane occupied by the section, piece or element. The objective of the present work is the development of a computer program, using the mathematical assistant DERIVE, for determining the geometric characteristics of plane sections whose contour is made up of straight-line segments. Key words: Algorithm, plane section, DERIVE, moments, center of gravity
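    A Python sketch (rather than DERIVE) of the same idea, assuming a simple closed polygonal contour: Green's theorem reduces the double integrals for area and centroid to sums over the straight edges (the shoelace formulas).

```python
# Area and centroid of a polygon bounded by straight segments, via the
# shoelace formulas (Green's theorem applied to the boundary edges).
def polygon_area_centroid(vertices):
    """vertices: list of (x, y) in order (counter-clockwise gives positive area)."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return a, (cx / (6.0 * a), cy / (6.0 * a))

# Example: a 4 x 2 rectangle -> area 8, centroid (2, 1).
print(polygon_area_centroid([(0, 0), (4, 0), (4, 2), (0, 2)]))
```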

  15. Ins-Robust Primitive Words

    OpenAIRE

    Srivastava, Amit Kumar; Kapoor, Kalpesh

    2017-01-01

    Let Q be the set of primitive words over a finite alphabet with at least two symbols. We characterize a class of primitive words, Q_I, referred to as ins-robust primitive words, which remain primitive on insertion of any letter from the alphabet, and present some properties that characterize words in the set Q_I. It is shown that the language Q_I is dense. We prove that the language of primitive words that are not ins-robust is not context-free. We also present a linear time algorithm to reco...
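    A brute-force sketch of the definitions, assuming a small alphabet: a word is primitive iff it is not a proper power of a shorter word (the standard doubling check), and ins-robust iff every single-letter insertion again yields a primitive word. This naive check is only illustrative and is not the linear-time algorithm referred to in the abstract.

```python
# Naive checks of primitivity and ins-robustness for words over a small alphabet.
def is_primitive(w):
    # w is primitive iff w occurs in ww only at the trivial positions 0 and len(w).
    return (w + w).find(w, 1) == len(w)

def is_ins_robust(w, alphabet):
    return is_primitive(w) and all(
        is_primitive(w[:i] + c + w[i:])
        for i in range(len(w) + 1)
        for c in alphabet
    )

print(is_primitive("abab"), is_primitive("abaab"))        # False True
print(is_ins_robust("ab", "ab"), is_ins_robust("abaab", "ab"))   # True False
```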

  16. Non-Markovian effect on the geometric phase of a dissipative qubit

    International Nuclear Information System (INIS)

    Chen Juanjuan; Tong Qingjun; An Junhong; Luo Honggang; Oh, C. H.

    2010-01-01

    We study the geometric phase of a two-level atom coupled to an environment with a Lorentzian spectral density. The non-Markovian effect on the geometric phase is explored analytically and numerically. In the weak coupling limit, the lowest-order correction to the geometric phase is derived analytically, and the general case is calculated numerically. It is found that the correction to the geometric phase is significantly large if the spectral width is small, and in this case the non-Markovian dynamics has a significant impact on the geometric phase. When the spectral width increases, the correction to the geometric phase becomes negligible, which shows the robustness of the geometric phase to environmental white noise. The result is significant for quantum information processing based on the geometric phase.

  17. Geometric multipartite entanglement measures

    International Nuclear Information System (INIS)

    Paz-Silva, Gerardo A.; Reina, John H.

    2007-01-01

    Within the framework of constructions for quantifying entanglement, we build a natural scenario for the assembly of multipartite entanglement measures based on Hopf bundle-like mappings obtained through Clifford algebra representations. Then, given the non-factorizability of an arbitrary two-qubit density matrix, we give an alternate quantity that allows the construction of two types of entanglement measures based on their arithmetical and geometrical averages over all pairs of qubits in a register of size N, and thus fully characterize its degree and type of entanglement. We find that such an arithmetical average is both additive and strongly super additive

  18. Geometric correlations and multifractals

    International Nuclear Information System (INIS)

    Amritkar, R.E.

    1991-07-01

    There are many situations where the usual statistical methods are not adequate to characterize correlations in the system. To characterize such situations we introduce mutual correlation dimensions which describe geometric correlations in the system. These dimensions allow us to distinguish between variables which are perfectly correlated with or without a phase lag, variables which are uncorrelated and variables which are partially correlated. We demonstrate the utility of our formalism by considering two examples from dynamical systems. The first example is about the loss of memory in chaotic signals and describes auto-correlations while the second example is about synchronization of chaotic signals and describes cross-correlations. (author). 19 refs, 6 figs

  19. Simulating geometrically complex blast scenarios

    Directory of Open Access Journals (Sweden)

    Ian G. Cullis

    2016-04-01

    Full Text Available The effects of blast waves generated by energetic and non-energetic sources are of continuing interest to the ballistics research community. Modern conflicts are increasingly characterised by asymmetric urban warfare, with improvised explosive devices (IEDs often playing a dominant role on the one hand and an armed forces requirement for minimal collateral effects from their weapons on the other. These problems are characterised by disparate length- and time-scales and may also be governed by complex physics. There is thus an increasing need to be able to rapidly assess and accurately predict the effects of energetic blast in topologically complex scenarios. To this end, this paper presents a new QinetiQ-developed advanced computational package called EAGLE-Blast, which is capable of accurately resolving the generation, propagation and interaction of blast waves around geometrically complex shapes such as vehicles and buildings. After a brief description of the numerical methodology, various blast scenario simulations are described and the results compared with experimental data to demonstrate the validation of the scheme and its ability to describe these complex scenarios accurately and efficiently. The paper concludes with a brief discussion on the use of the code in supporting the development of algorithms for fast running engineering models.

  20. Geometric description of images as topographic maps

    CERN Document Server

    Caselles, Vicent

    2010-01-01

    This volume discusses the basic geometric contents of an image and presents a tree data structure to handle those contents efficiently. The nodes of the tree are derived from connected components of level sets of the intensity, while the edges represent inclusion information. Grain filters, morphological operators simplifying these geometric contents, are analyzed and several applications to image comparison and registration, and to edge and corner detection, are presented. The mathematically inclined reader may be most interested in Chapters 2 to 6, which generalize the topological Morse description to continuous or semicontinuous functions, while mathematical morphologists may more closely consider grain filters in Chapter 3. Computer scientists will find algorithmic considerations in Chapters 6 and 7, the full justification of which may be found in Chapters 2 and 4 respectively. Lastly, all readers can learn more about the motivation for this work in the image processing applications presented in Chapter 8...

  1. Reconstruction of an InAs nanowire using geometric and algebraic tomography

    International Nuclear Information System (INIS)

    Pennington, R S; Boothroyd, C B; König, S; Alpers, A; Dunin-Borkowski, R E

    2011-01-01

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality. When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize.

  2. Reconstruction of an InAs nanowire using geometric and algebraic tomography

    DEFF Research Database (Denmark)

    Pennington, Robert S.; König, S.; Alpers, A.

    2011-01-01

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality....... When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize....

  3. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so phenotype stability of biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental

  4. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
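    A standard de Casteljau evaluation makes the operator view concrete: each pass replaces the control polygon by pairwise linear interpolations at parameter t, and the last remaining point lies on the curve. This is the textbook algorithm, shown here only as a reference sketch.

```python
# de Casteljau evaluation of a Bezier curve: repeated pairwise linear
# interpolation of the control points at parameter t.
def de_casteljau(control_points, t):
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier with control points (0,0), (1,2), (2,0); at t=0.5 this gives (1.0, 1.0).
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))
```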

  5. Automated sulcal depth measurement on cortical surface reflecting geometrical properties of sulci.

    Directory of Open Access Journals (Sweden)

    Hyuk Jin Yun

    Full Text Available Sulcal depth, one of the quantitative measures of the cerebral cortex, has been widely used as an important marker in brain morphological studies. Several studies have employed Euclidean (EUD) or geodesic (GED) algorithms to measure sulcal depth, but these have limitations: they ignore sulcal geometry in highly convoluted regions and can under- or overestimate depth. In this study, we proposed an automated measurement of sulcal depth on the cortical surface that reflects the geometrical properties of sulci, named the adaptive distance transform (ADT). We first defined the volume region of cerebrospinal fluid between the 3D convex hull and the cortical surface, and constructed local coordinates for that restricted region. Dijkstra's algorithm was then used to compute the shortest paths from the convex hull to the vertices of the cortical surface based on the local coordinates, which may be the most appropriate approach for defining sulcal depth. We applied our algorithm to both a clinical dataset, including patients with mild Alzheimer's disease (AD) and 25 normal controls, and a simulated dataset whose shape was similar to a single sulcus. The mean sulcal depth in the mild AD group was significantly lower than in controls (p = 0.007; normal [mean±SD]: 7.29±0.23 mm, AD: 7.11±0.29 mm), and the area under the receiver operating characteristic curve was relatively high, at 0.818. The results from the clinical dataset, consistent with former studies using EUD or GED, demonstrated that ADT is sensitive to cortical atrophy. The robustness of ADT against inter-individual variability was highlighted by the simulated dataset: ADT showed a low and constant normalized difference between the depth of the simulated data and the calculated depth, whereas EUD and GED had high and variable differences. We suggest that ADT is more robust than EUD or GED and might be a useful alternative algorithm for measuring sulcal depth.
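    At the core of the ADT computation is a shortest-path solve from many sources; the sketch below is a generic multi-source Dijkstra on a weighted graph, with the hull nodes as sources. The construction of the CSF-region graph and its local coordinates is not reproduced, and the toy graph is invented.

```python
# Generic multi-source Dijkstra: shortest path lengths from a set of source
# ("hull") nodes to every reachable node of a weighted graph.
import heapq

def multi_source_dijkstra(graph, sources):
    """graph: {node: [(neighbor, weight), ...]}; sources: iterable of start nodes."""
    dist = {s: 0.0 for s in sources}
    heap = [(0.0, s) for s in sources]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny example: depths of surface nodes 'c' and 'd' measured from hull nodes {'a', 'b'}.
g = {"a": [("c", 1.0)], "b": [("c", 2.5), ("d", 4.0)], "c": [("d", 1.5)], "d": []}
print(multi_source_dijkstra(g, {"a", "b"}))   # {'a': 0.0, 'b': 0.0, 'c': 1.0, 'd': 2.5}
```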

  6. A scalable geometric multigrid solver for nonsymmetric elliptic systems with application to variable-density flows

    Science.gov (United States)

    Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.

    2018-03-01

    A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations are determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O (N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 104, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 104 to 105 unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
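    For readers unfamiliar with the multigrid skeleton, the sketch below is a textbook V-cycle for the 1D constant-coefficient Poisson problem with weighted-Jacobi smoothing, full-weighting restriction and linear-interpolation prolongation. It only illustrates the recursive structure; the paper's nonsymmetric, variable-density solver, its symmetry-preserving restriction and its dynamic choice of transfer operators are not reproduced.

```python
# Textbook geometric multigrid V-cycle for -u'' = f on (0,1) with homogeneous
# Dirichlet boundaries: weighted-Jacobi smoothing, full-weighting restriction
# and linear-interpolation prolongation.
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps=3, omega=2/3):
    for _ in range(sweeps):
        u[1:-1] += omega * (h**2 * f[1:-1] + u[:-2] + u[2:] - 2 * u[1:-1]) / 2.0
    return u

def restrict(r):                       # full weighting onto the coarse grid
    return np.concatenate(([0.0], 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2]), [0.0]))

def prolong(e_coarse, n_fine):         # linear interpolation back to the fine grid
    e = np.zeros(n_fine)
    e[2:-1:2] = e_coarse[1:-1]
    e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
    return e

def v_cycle(u, f, h):
    n = u.size
    if n <= 3:                         # coarsest grid: one interior point, solve exactly
        u[1] = 0.5 * h**2 * f[1]
        return u
    u = jacobi(u, f, h)                # pre-smoothing
    r_c = restrict(residual(u, f, h))
    e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h)
    u += prolong(e_c, n)               # coarse-grid correction
    return jacobi(u, f, h)             # post-smoothing

n = 2**7 + 1                           # grid points including both boundaries
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)       # exact solution is sin(pi*x)
u = np.zeros(n)
for k in range(10):
    u = v_cycle(u, f, h)
    print(f"cycle {k+1}: residual norm {np.linalg.norm(residual(u, f, h)):.2e}")
```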

  7. Probabilistic active recognition of multiple objects using Hough-based geometric matching features

    CSIR Research Space (South Africa)

    Govender, N

    2015-01-01

    Full Text Available be recognized simultaneously, and occlusion and clutter (through distracter objects) is common. We propose a representation for object viewpoints using Hough transform based geometric matching features, which are robust in such circumstances. We show how...

  8. A GEOMETRICAL HEIGHT SCALE FOR SUNSPOT PENUMBRAE

    International Nuclear Information System (INIS)

    Puschmann, K. G.; Ruiz Cobo, B.; Martínez Pillet, V.

    2010-01-01

    Inversions of spectropolarimetric observations of penumbral filaments deliver the stratification of different physical quantities in an optical depth scale. However, without establishing a geometrical height scale, their three-dimensional geometrical structure cannot be derived. This is crucial in understanding the correct spatial variation of physical properties in the penumbral atmosphere and to provide insights into the mechanism capable of explaining the observed penumbral brightness. The aim of this work is to determine a global geometrical height scale in the penumbra by minimizing the divergence of the magnetic field vector and the deviations from static equilibrium as imposed by a force balance equation that includes pressure gradients, gravity, and the Lorentz force. Optical depth models are derived from the inversion of spectropolarimetric data of an active region observed with the Solar Optical Telescope on board the Hinode satellite. We use a genetic algorithm to determine the boundary condition for the inference of geometrical heights. The retrieved geometrical height scale permits the evaluation of the Wilson depression at each pixel and the correlation of physical quantities at each height. Our results fit into the uncombed penumbral scenario, i.e., a penumbra composed of flux tubes with channeled mass flow and with a weaker and more horizontal magnetic field as compared with the background field. The ascending material is hotter and denser than their surroundings. We do not find evidence of overturning convection or field-free regions in the inner penumbral area analyzed. The penumbral brightness can be explained by the energy transfer of the ascending mass carried by the Evershed flow, if the physical quantities below z = -75 km are extrapolated from the results of the inversion.

  9. 2D Vector Field Simplification Based on Robustness

    KAUST Repository

    Skraba, Primoz

    2014-03-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. These geometric metrics do not consider the flow magnitude, an important physical property of the flow. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness, which provides a complementary view on flow structure compared to the traditional topological-skeleton-based approaches. Robustness enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory, has fewer boundary restrictions, and so can handle more general cases. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. © 2014 IEEE.

  10. Robust power system frequency control

    CERN Document Server

    Bevrani, Hassan

    2014-01-01

    This updated edition of the industry standard reference on power system frequency control provides practical, systematic and flexible algorithms for regulating load frequency, offering new solutions to the technical challenges introduced by the escalating role of distributed generation and renewable energy sources in smart electric grids. The author emphasizes the physical constraints and practical engineering issues related to frequency in a deregulated environment, while fostering a conceptual understanding of frequency regulation and robust control techniques. The resulting control strategi

  11. Geometrization of quantum physics

    International Nuclear Information System (INIS)

    Ol'khov, O.A.

    2009-01-01

    It is shown that the Dirac equation for a free particle can be considered as a description of a specific distortion of the space Euclidean geometry (a space topological defect). This approach is based on the possibility of interpreting the wave function as a vector realizing a representation of the fundamental group of the closed topological space-time 4-manifold. Mass and spin appear to be topological invariants. Such a concept explains all the so-called 'strange' properties of the quantum formalism: probabilities, wave-particle duality, nonlocal instantaneous correlation between noninteracting particles (the EPR paradox) and so on. Acceptance of the suggested geometrical concept means rejection of the atomistic concept where all matter is considered as consisting of ever smaller elementary particles. There are no particles a priori, before measurement: the notion of particles appears as a result of the classical interpretation of the contact of a region of curved space with a device.

  12. Geometrization of quantum physics

    Science.gov (United States)

    Ol'Khov, O. A.

    2009-12-01

    It is shown that the Dirac equation for a free particle can be considered as a description of a specific distortion of the space Euclidean geometry (a space topological defect). This approach is based on the possibility of interpreting the wave function as a vector realizing a representation of the fundamental group of the closed topological space-time 4-manifold. Mass and spin appear to be topological invariants. Such a concept explains all the so-called “strange” properties of the quantum formalism: probabilities, wave-particle duality, nonlocal instantaneous correlation between noninteracting particles (the EPR paradox) and so on. Acceptance of the suggested geometrical concept means rejection of the atomistic concept where all matter is considered as consisting of ever smaller elementary particles. There are no particles a priori, before measurement: the notions of particles appear as a result of the classical interpretation of the contact of a region of curved space with a device.

  13. Geometrical Image Transforms

    OpenAIRE

    Havelka, Jan

    2008-01-01

    This master's thesis deals with the acceleration of geometric image transformations using the GPU and the NVIDIA(R) CUDA(TM) architecture. Time-critical parts of the code are moved to the GPU and executed in parallel. One of the results is a demonstration application for comparing the performance of the two architectures: CPU alone, and GPU in combination with CPU. Highly optimized algorithms from Intel's OpenCV library are used as the reference implementation.

  14. Geometric covers, graph orientations, counter games

    DEFF Research Database (Denmark)

    Berglin, Edvin

    ... example is Line Cover, also known as Point-Line Cover, where a set of points in a geometric space are to be covered by placing a restricted number of lines. We present new FPT algorithms for the sub-family Curve Cover (which includes Line Cover), as well as for Hyperplane Cover restricted to R^3 (i... ...-directed graph is dynamic (can be altered by some outside actor), some orientations may need to be reversed in order to maintain the low out-degree. We present a new algorithm that is simpler than earlier work, yet matches or outperforms the efficiency of these results with very few exceptions. ... Counter games are a type of abstract game played over a set of counters holding values, and these values may be moved between counters according to some set of rules. Typically they are played between two players: the adversary who tries to concentrate the greatest value possible in a single counter, and the benevolent...

  15. Time Series Analysis Using Geometric Template Matching.

    Science.gov (United States)

    Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina

    2013-03-01

    We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data.

  16. Robust optimization based upon statistical theory.

    Science.gov (United States)

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose
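
    The core idea, optimizing the mean and variance of an outcome distribution sampled over geometry instances instead of a single nominal value, can be illustrated with a toy example. The motion model, dose-metric surrogate, and penalty weights below are all illustrative assumptions, not the authors' clinical models:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy motion model: geometry instances are random shifts of the target (assumption).
shifts = rng.normal(loc=0.0, scale=3.0, size=500)   # mm

def dose_metric(plan_margin, shift):
    # Hypothetical "target coverage" surrogate: full coverage while the shift
    # stays inside the margin, exponential fall-off outside it.
    excess = np.maximum(np.abs(shift) - plan_margin, 0.0)
    return np.exp(-excess)

def outcome_distribution(plan_margin):
    # Evaluate the metric on every sampled geometry instance.
    return dose_metric(plan_margin, shifts)

def objective(x, lam=2.0, margin_cost=0.02):
    d = outcome_distribution(x[0])
    # Optimize the shape of the distribution: high mean, low variance,
    # plus a penalty for overly large margins (normal-tissue surrogate).
    return -(d.mean() - lam * d.std()) + margin_cost * x[0] ** 2

res = minimize(objective, x0=[1.0], method="Nelder-Mead")
best = res.x[0]
d = outcome_distribution(best)
print(f"margin={best:.2f} mm, mean={d.mean():.3f}, std={d.std():.3f}")
```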

  17. Salt bridges: geometrically specific, designable interactions.

    Science.gov (United States)

    Donald, Jason E; Kulp, Daniel W; DeGrado, William F

    2011-03-01

    Salt bridges occur frequently in proteins, providing conformational specificity and contributing to molecular recognition and catalysis. We present a comprehensive analysis of these interactions in protein structures by surveying a large database of protein structures. Salt bridges between Asp or Glu and His, Arg, or Lys display extremely well-defined geometric preferences. Several previously observed preferences are confirmed, and others that were previously unrecognized are discovered. Salt bridges are explored for their preferences for different separations in sequence and in space, geometric preferences within proteins and at protein-protein interfaces, co-operativity in networked salt bridges, inclusion within metal-binding sites, preference for acidic electrons, apparent conformational side chain entropy reduction on formation, and degree of burial. Salt bridges occur far more frequently between residues at close than distant sequence separations, but, at close distances, there remain strong preferences for salt bridges at specific separations. Specific types of complex salt bridges, involving three or more members, are also discovered. As we observe a strong relationship between the propensity to form a salt bridge and the placement of salt-bridging residues in protein sequences, we discuss the role that salt bridges might play in kinetically influencing protein folding and thermodynamically stabilizing the native conformation. We also develop a quantitative method to select appropriate crystal structure resolution and B-factor cutoffs. Detailed knowledge of these geometric and sequence dependences should aid de novo design and prediction algorithms. Copyright © 2010 Wiley-Liss, Inc.

  18. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    Science.gov (United States)

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied for the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series on the rate between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated for different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
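
    As a rough illustration of surface-based moment computation (not the paper's exact algorithm or its series expansion), the sketch below decomposes a closed, consistently oriented triangle mesh into signed tetrahedra against the origin; the signed volumes give the zeroth- and first-order moments exactly, while evaluating monomials at tetrahedron centroids only approximates second order and above:

```python
import numpy as np

def geometric_moments(vertices, triangles, max_order=2):
    """Approximate volume moments m_pqr of a closed, consistently oriented
    triangle mesh by signed-tetrahedron decomposition against the origin.
    Orders 0 and 1 are exact; order >= 2 uses the tetrahedron centroid
    and is only a rough approximation."""
    moments = {}
    v = vertices[triangles]                      # (T, 3, 3) triangle corners
    a, b, c = v[:, 0], v[:, 1], v[:, 2]
    signed_vol = np.einsum('ij,ij->i', a, np.cross(b, c)) / 6.0
    centroid = (a + b + c) / 4.0                 # centroid of tetra (origin, a, b, c)
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            for r in range(max_order + 1 - p - q):
                mono = centroid[:, 0]**p * centroid[:, 1]**q * centroid[:, 2]**r
                moments[(p, q, r)] = np.sum(signed_vol * mono)
    return moments

# Example: a single tetrahedron with outward-oriented faces.
verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
m = geometric_moments(verts, tris)
print(m[(0, 0, 0)])   # exact volume: 1/6
```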

  19. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...

  20. Correcting geometric and photometric distortion of document images on a smartphone

    Science.gov (United States)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
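
    The geometric part of such a pipeline ultimately reduces to warping the detected document quadrilateral to a fronto-parallel rectangle. A minimal OpenCV sketch of that final step, assuming the four corner positions have already been estimated (for instance from the vanishing-point analysis described above); the corner coordinates and file names are placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("document.jpg")                       # placeholder input image

# Hypothetical corner estimates (top-left, top-right, bottom-right, bottom-left).
src = np.float32([[120, 80], [980, 60], [1010, 1400], [90, 1430]])

w, h = 850, 1100                                       # target page size in pixels
dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])

# Homography that sends the skewed, perspective-distorted page to a rectangle.
M = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, M, (w, h))

cv2.imwrite("document_rectified.jpg", rectified)
```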

  1. Harmonic and geometric analysis

    CERN Document Server

    Citti, Giovanna; Pérez, Carlos; Sarti, Alessandro; Zhong, Xiao

    2015-01-01

    This book presents an expanded version of four series of lectures delivered by the authors at the CRM. Harmonic analysis, understood in a broad sense, has a very wide interplay with partial differential equations and in particular with the theory of quasiconformal mappings and its applications. Some areas in which real analysis has been extremely influential are PDE's and geometric analysis. Their foundations and subsequent developments made extensive use of the Calderón–Zygmund theory, especially the Lp inequalities for Calderón–Zygmund operators (Beurling transform and Riesz transform, among others) and the theory of Muckenhoupt weights.  The first chapter is an application of harmonic analysis and the Heisenberg group to understanding human vision, while the second and third chapters cover some of the main topics on linear and multilinear harmonic analysis. The last serves as a comprehensive introduction to a deep result from De Giorgi, Moser and Nash on the regularity of elliptic partial differen...

  2. Robust image authentication in the presence of noise

    CERN Document Server

    2015-01-01

    This book addresses the problems that hinder image authentication in the presence of noise. It considers the advantages and disadvantages of existing algorithms for image authentication and shows new approaches and solutions for robust image authentication. The state-of-the-art algorithms are compared and, furthermore, innovative approaches and algorithms are introduced. The introduced algorithms are applied to improve image authentication, watermarking and biometry. Aside from presenting new directions and algorithms for robust image authentication in the presence of noise, as well as image correction, this book also: Provides an overview of the state-of-the-art algorithms for image authentication in the presence of noise and modifications, as well as a comparison of these algorithms, Presents novel algorithms for robust image authentication, whereby an attempt is made to correct and authenticate the image, Examines different views for the solution of problems connected to image authentication in the pre...

  3. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example, in the context of designing a food product, is finding the best composition

  4. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility to increase robustness by using animal breeding without loss of production. At the start of the project, a robust

  5. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  6. Regular Polygons and Geometric Series.

    Science.gov (United States)

    Jarrett, Joscelyn A.

    1982-01-01

    Examples of some geometric illustrations of limits are presented. It is believed the limit concept is among the most important topics in mathematics, yet many students do not have good intuitive feelings for the concept, since it is often taught very abstractly. Geometric examples are suggested as meaningful tools. (MP)

  7. Geometric Invariants and Object Recognition.

    Science.gov (United States)

    1992-08-01

    University of Chicago Press. Maybank, S.J. [1992], "The Projection of Two Non-coplanar Conics", in Geometric Invariance in Machine Vision, eds. J.L. Mundy and A. Zisserman, MIT Press, Cambridge, MA. Mundy, J.L., Kapur, .., Maybank, S.J., and Quan, L. [1992a] "Geometric Interpretation of

  8. Transmuted Complementary Weibull Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Ahmed Z. Afify

    2014-12-01

    This paper provides a new generalization of the complementary Weibull geometric distribution introduced by Tojeiro et al. (2014), using the quadratic rank transmutation map studied by Shaw and Buckley (2007). The new distribution is referred to as the transmuted complementary Weibull geometric distribution (TCWGD). The TCWG distribution includes as special cases the complementary Weibull geometric distribution (CWGD), the complementary exponential geometric distribution (CEGD), the Weibull distribution (WD) and the exponential distribution (ED). Various structural properties of the new distribution, including moments, quantiles, the moment generating function and the Rényi entropy, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version versus the complementary Weibull geometric distribution.

  9. Geometric scalar theory of gravity beyond spherical symmetry

    Science.gov (United States)

    Moschella, U.; Novello, M.

    2017-04-01

    We construct several exact solutions for a recently proposed geometric scalar theory of gravity. We focus on a class of axisymmetric geometries and a big-bang-like geometry and discuss their Lorentzian character. The axisymmetric solutions are parametrized by an integer angular momentum l. The l = 0 (spherical) case gives rise to the Schwarzschild geometry. The other solutions have naked singular surfaces. While not a priori obvious, all the solutions that we present here are globally Lorentzian. The Lorentzian signature appears to be a robust property of the disformal geometries solving the vacuum geometric scalar theory of gravity equations.

  10. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  11. Robust gates for holonomic quantum computation

    International Nuclear Information System (INIS)

    Florio, Giuseppe; Pascazio, Saverio; Facchi, Paolo; Fazio, Rosario; Giovannetti, Vittorio

    2006-01-01

    Non-Abelian geometric phases are attracting increasing interest because of possible experimental application in quantum computation. We study the effects of the environment (modeled as an ensemble of harmonic oscillators) on a holonomic transformation and write the corresponding master equation. The solution is analytically and numerically investigated and the behavior of the fidelity analyzed: fidelity revivals are observed and an optimal finite operation time is determined at which the gate is most robust against noise

  12. Robust Object Tracking Using Valid Fragments Selection.

    Science.gov (United States)

    Zheng, Jin; Li, Bo; Tian, Peng; Luo, Gang

    Local features are widely used in visual tracking to improve robustness in cases of partial occlusion, deformation and rotation. This paper proposes a local fragment-based object tracking algorithm. Unlike many existing fragment-based algorithms that allocate the weights to each fragment, this method firstly defines discrimination and uniqueness for local fragment, and builds an automatic pre-selection of useful fragments for tracking. Then, a Harris-SIFT filter is used to choose the current valid fragments, excluding occluded or highly deformed fragments. Based on those valid fragments, fragment-based color histogram provides a structured and effective description for the object. Finally, the object is tracked using a valid fragment template combining the displacement constraint and similarity of each valid fragment. The object template is updated by fusing feature similarity and valid fragments, which is scale-adaptive and robust to partial occlusion. The experimental results show that the proposed algorithm is accurate and robust in challenging scenarios.

  13. Development and validation of a novel large field of view phantom and a software module for the quality assurance of geometric distortion in magnetic resonance imaging.

    Science.gov (United States)

    Torfeh, Tarraf; Hammoud, Rabih; McGarry, Maeve; Al-Hammadi, Noora; Perkins, Gregory

    2015-09-01

    To develop and validate a large field of view phantom and quality assurance software tool for the assessment and characterization of geometric distortion in MRI scanners commissioned for radiation therapy planning. A purpose-built phantom was developed consisting of 357 rods (6 mm in diameter) of polymethyl methacrylate separated by 20 mm intervals, providing a three-dimensional array of control points at known spatial locations covering a large field of view up to a diameter of 420 mm. An in-house software module was developed to allow automatic geometric distortion assessment. This software module was validated against a virtual dataset of the phantom that reproduced the exact geometry of the physical phantom, but with known translational and rotational displacements and warping. For validation experiments, clinical MRI sequences were acquired with and without the application of a commercial 3D distortion correction algorithm (Gradwarp™). The software module was used to characterize and assess system-related geometric distortion in the sequences relative to a benchmark CT dataset, and the efficacy of the vendor geometric distortion correction algorithms (GDC) was also assessed. Results issued from the validation of the software against virtual images demonstrate the algorithm's ability to accurately calculate geometric distortion with sub-pixel precision by the extraction of rods and quantization of displacements. Geometric distortion was assessed for the typical sequences used in radiotherapy applications and over a clinically relevant 420 mm field of view (FOV). As expected, distortion increased towards the edges of the FOV and with increasing FOV. For all assessed sequences, the vendor GDC was able to reduce the mean distortion to below 1 mm over fields of view of 5, 10, 15 and 20 cm radius, respectively. Results issued from the application of the developed phantoms and algorithms demonstrate a high level of precision. The results indicate that this

  14. A differential-geometric approach to generalized linear models with grouped predictors

    NARCIS (Netherlands)

    Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.

    We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important

  15. Fifth SIAM conference on geometric design 97: Final program and abstracts. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    The meeting was divided into the following sessions: (1) CAD/CAM; (2) Curve/Surface Design; (3) Geometric Algorithms; (4) Multiresolution Methods; (5) Robotics; (6) Solid Modeling; and (7) Visualization. This report contains the abstracts of papers presented at the meeting. Preceding the conference, there was a short course entitled "Wavelets for Geometric Modeling and Computer Graphics".

  16. Device-Free Localization via an Extreme Learning Machine with Parameterized Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2017-04-01

    Device-free localization (DFL) is becoming one of the new technologies in the wireless localization field, due to its advantage that the target to be localized does not need to be attached to any electronic device. In the radio-frequency (RF) DFL system, radio transmitters (RTs) and radio receivers (RXs) are used to sense the target collaboratively, and the location of the target can be estimated by fusing the changes of the received signal strength (RSS) measurements associated with the wireless links. In this paper, we propose an extreme learning machine (ELM) approach for DFL, to improve the efficiency and the accuracy of the localization algorithm. Different from the conventional machine learning approaches for wireless localization, in which the above differential RSS measurements are trivially used as the only input features, we introduce a parameterized geometrical representation for an affected link, which consists of its geometrical intercepts and differential RSS measurement. Parameterized geometrical feature extraction (PGFE) is performed for the affected links and the features are used as the inputs of the ELM. The proposed PGFE-ELM for DFL is trained in the offline phase and performed for real-time localization in the online phase, where the estimated location of the target is obtained through the created ELM. PGFE-ELM has the advantages that the affected links used by the ELM in the online phase can be different from those used for training in the offline phase, and it can be more robust in dealing with an uncertain combination of detectable wireless links. Experimental results show that the proposed PGFE-ELM can improve the localization accuracy and learning speed significantly compared with a number of existing machine learning and DFL approaches, including the weighted K-nearest neighbor (WKNN), support vector machine (SVM), back propagation neural network (BPNN), as well as the well-known radio tomographic imaging (RTI) DFL approach.
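
    The ELM itself is straightforward to sketch: a fixed random hidden layer followed by a closed-form (here ridge-regularized) least-squares fit of the output weights. The block below is a generic ELM regressor on placeholder feature vectors; it is not the PGFE pipeline, and the construction of features from link intercepts and differential RSS is assumed to happen upstream:

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, ridge=1e-3, seed=0):
        self.n_hidden, self.ridge = n_hidden, ridge
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)            # random projection + nonlinearity

    def fit(self, X, Y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights by regularized least squares (closed form, no backprop).
        A = H.T @ H + self.ridge * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ Y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Placeholder data: feature vectors (e.g., PGFE features) -> 2D target location.
X_train = np.random.rand(500, 10)
Y_train = np.random.rand(500, 2)
model = ELM().fit(X_train, Y_train)
print(model.predict(X_train[:3]))
```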

  17. Implementation of an algorithm for cylindrical object identification using range data

    Science.gov (United States)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
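
    The arc-location step described above (circles fitted through the normal equations of an over-determined linear system) can be sketched directly: writing the circle as x^2 + y^2 = 2ax + 2by + c turns the fit into a linear least-squares problem. This is an illustrative reconstruction, not the original implementation:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c,
    solved via the normal equations of the over-determined system."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)   # since c = r^2 - a^2 - b^2
    return (a, b), radius

# Noisy points on an arc of a circle centred at (3, -1) with radius 2.
t = np.linspace(0.2, 2.0, 60)
x = 3 + 2 * np.cos(t) + 0.01 * np.random.randn(t.size)
y = -1 + 2 * np.sin(t) + 0.01 * np.random.randn(t.size)
print(fit_circle(x, y))   # approximately ((3, -1), 2)
```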

  18. Geometric inequalities for black holes

    International Nuclear Information System (INIS)

    Dain, Sergio

    2013-01-01

    Full text: A geometric inequality in General Relativity relates quantities that have both a physical interpretation and a geometrical definition. It is well known that the parameters that characterize the Kerr-Newman black hole satisfy several important geometric inequalities. Remarkably enough, some of these inequalities also hold for dynamical black holes. These kinds of inequalities, which are valid in the dynamical and strong-field regime, play an important role in the characterization of gravitational collapse. They are closely related to the cosmic censorship conjecture. In this talk I will review recent results in this subject. (author)

  19. Geometric Computing for Freeform Architecture

    KAUST Repository

    Wallner, J.

    2011-06-03

    Geometric computing has recently found a new field of applications, namely the various geometric problems which lie at the heart of rationalization and construction-aware design processes of freeform architecture. We report on our work in this area, dealing with meshes with planar faces and meshes which allow multilayer constructions (which is related to discrete surfaces and their curvatures), triangle meshes with circle-packing properties (which is related to conformal uniformization), and with the paneling problem. We emphasize the combination of numerical optimization and geometric knowledge.

  20. Optical traps with geometric aberrations

    International Nuclear Information System (INIS)

    Roichman, Yael; Waldron, Alex; Gardel, Emily; Grier, David G.

    2006-01-01

    We assess the influence of geometric aberrations on the in-plane performance of optical traps by studying the dynamics of trapped colloidal spheres in deliberately distorted holographic optical tweezers. The lateral stiffness of the traps turns out to be insensitive to moderate amounts of coma, astigmatism, and spherical aberration. Moreover holographic aberration correction enables us to compensate inherent shortcomings in the optical train, thereby adaptively improving its performance. We also demonstrate the effects of geometric aberrations on the intensity profiles of optical vortices, whose readily measured deformations suggest a method for rapidly estimating and correcting geometric aberrations in holographic trapping systems

  1. Geometric inequalities for black holes

    Energy Technology Data Exchange (ETDEWEB)

    Dain, Sergio [Universidad Nacional de Cordoba (Argentina)]

    2013-07-01

    Full text: A geometric inequality in General Relativity relates quantities that have both a physical interpretation and a geometrical definition. It is well known that the parameters that characterize the Kerr-Newman black hole satisfy several important geometric inequalities. Remarkably enough, some of these inequalities also hold for dynamical black holes. These kinds of inequalities, which are valid in the dynamical and strong-field regime, play an important role in the characterization of gravitational collapse. They are closely related to the cosmic censorship conjecture. In this talk I will review recent results in this subject. (author)

  2. Genetic algorithm approach

    African Journals Online (AJOL)

    Structure / acentric factor relationship of alcohols and phenols: genetic ... descriptors of geometrical type selected by genetic algorithm, among more than 1600 ..... Practical handbook of genetic algorithms: Applications Volume I; CRC Press.

  3. Noncyclic geometric changes of quantum states

    International Nuclear Information System (INIS)

    Kult, David; Sjoeqvist, Erik; Aaberg, Johan

    2006-01-01

    Non-Abelian quantum holonomies, i.e., unitary state changes solely induced by geometric properties of a quantum system, have been much under focus in the physics community as generalizations of the Abelian Berry phase. Apart from being a general phenomenon displayed in various subfields of quantum physics, the use of holonomies has lately been suggested as a robust technique to obtain quantum gates; the building blocks of quantum computers. Non-Abelian holonomies are usually associated with cyclic changes of quantum systems, but here we consider a generalization to noncyclic evolutions. We argue that this open-path holonomy can be used to construct quantum gates. We also show that a structure of partially defined holonomies emerges from the open-path holonomy. This structure has no counterpart in the Abelian setting. We illustrate the general ideas using an example that may be accessible to tests in various physical systems

  4. Geometric differential evolution for combinatorial and programs spaces.

    Science.gov (United States)

    Moraglio, A; Togelius, J; Silva, S

    2013-01-01

    Geometric differential evolution (GDE) is a recently introduced formal generalization of traditional differential evolution (DE) that can be used to derive specific differential evolution algorithms for both continuous and combinatorial spaces retaining the same geometric interpretation of the dynamics of the DE search across representations. In this article, we first review the theory behind the GDE algorithm, then, we use this framework to formally derive specific GDE for search spaces associated with binary strings, permutations, vectors of permutations and genetic programs. The resulting algorithms are representation-specific differential evolution algorithms searching the target spaces by acting directly on their underlying representations. We present experimental results for each of the new algorithms on a number of well-known problems comprising NK-landscapes, TSP, and Sudoku, for binary strings, permutations, and vectors of permutations. We also present results for the regression, artificial ant, parity, and multiplexer problems within the genetic programming domain. Experiments show that overall the new DE algorithms are competitive with well-tuned standard search algorithms.
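
    For reference, the classical continuous-space DE that GDE formally generalizes takes only a few lines: differential mutation, binomial crossover, and greedy selection. Below is a minimal DE/rand/1/bin sketch on the sphere function; the population size, F, and CR values are illustrative settings:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    low, high = np.array(bounds).T
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)          # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                        # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])                # binomial crossover
            f_trial = f(trial)
            if f_trial <= fit[i]:                                  # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

sphere = lambda x: np.sum(x**2)
print(differential_evolution(sphere, [(-5, 5)] * 4))
```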

  5. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge are a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review, two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness, and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  6. Discrete geometric structures for architecture

    KAUST Repository

    Pottmann, Helmut

    2010-01-01

    ... The talk will provide an overview of recent progress in this field, with a particular focus on discrete geometric structures. Most of these result from practical requirements on segmenting a freeform shape into planar panels and on the physical realization

  7. Geometric Rationalization for Freeform Architecture

    KAUST Repository

    Jiang, Caigui

    2016-01-01

    The emergence of freeform architecture provides interesting geometric challenges with regards to the design and manufacturing of large-scale structures. To design these architectural structures, we have to consider two types of constraints. First

  8. Geometrical optics in general relativity

    OpenAIRE

    Loinger, A.

    2006-01-01

    General relativity includes geometrical optics. This basic fact has relevant consequences that concern the physical meaning of the discontinuity surfaces propagated in the gravitational field - as it was first emphasized by Levi-Civita.

  9. Uhlmann's geometric phase in presence of isotropic decoherence

    International Nuclear Information System (INIS)

    Tidstroem, Jonas; Sjoeqvist, Erik

    2003-01-01

    Uhlmann's mixed state geometric phase [Rep. Math. Phys. 24, 229 (1986)] is analyzed in the case of a qubit affected by isotropic decoherence treated in the Markovian approximation. It is demonstrated that this phase decreases rapidly with increasing decoherence rate and that it is most fragile to weak decoherence for pure or nearly pure initial states. In the unitary case, we compare Uhlmann's geometric phase for mixed states with that occurring in standard Mach-Zehnder interferometry [Phys. Rev. Lett. 85, 2845 (2000)] and show that the latter is more robust to reduction in the length of the Bloch vector. We also describe how Uhlmann's geometric phase in the present case could in principle be realized experimentally

  10. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
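
    For the simplest of the uncertainty sets discussed, the interval (box) set, the robust counterpart has a closed form: with each coefficient a_j lying in [a_nom_j - delta_j, a_nom_j + delta_j] and x >= 0, the worst case of a'x <= b is (a_nom + delta)'x <= b. A tiny numerical sketch of that reformulation on toy data (non-negative variables assumed so that no absolute values are needed):

```python
import numpy as np
from scipy.optimize import linprog

# maximize 3*x1 + 2*x2  subject to  a1*x1 + a2*x2 <= 10,  x >= 0,
# with uncertain a = a_nominal +/- delta (interval/box uncertainty).
c = np.array([-3.0, -2.0])            # linprog minimizes, so negate the objective
a_nominal = np.array([2.0, 1.0])
delta = np.array([0.5, 0.3])
b = np.array([10.0])

nominal = linprog(c, A_ub=[a_nominal], b_ub=b, bounds=[(0, None)] * 2)
# Robust counterpart for the box set with x >= 0: (a_nominal + delta)' x <= b.
robust = linprog(c, A_ub=[a_nominal + delta], b_ub=b, bounds=[(0, None)] * 2)

print("nominal x:", nominal.x, "objective:", -nominal.fun)
print("robust  x:", robust.x, "objective:", -robust.fun)
```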

  11. Geometrical basis for the Standard Model

    Science.gov (United States)

    Potter, Franklin

    1994-02-01

    The robust character of the Standard Model is confirmed. Examination of its geometrical basis in three equivalent internal symmetry spaces (the unitary plane C^2, the quaternion space Q, and the real space R^4) as well as the real space R^3 uncovers mathematical properties that predict the physical properties of leptons and quarks. The finite rotational subgroups of the gauge group SU(2)_L × U(1)_Y generate exactly three lepton families and four quark families and reveal how quarks and leptons are related. Among the physical properties explained are the mass ratios of the six leptons and eight quarks, the origin of the left-handed preference by the weak interaction, the geometrical source of color symmetry, and the zero neutrino masses. The (u, d) and (c, s) quark families team together to satisfy the triangle anomaly cancellation with the electron family, while the other families pair one-to-one for cancellation. The spontaneously broken symmetry is discrete and needs no Higgs mechanism. Predictions include all massless neutrinos, the top quark at 160 GeV/c^2, the b' quark at 80 GeV/c^2, and the t' quark at 2600 GeV/c^2.

  12. Geometric inequalities methods of proving

    CERN Document Server

    Sedrakyan, Hayk

    2017-01-01

    This unique collection of new and classical problems provides full coverage of geometric inequalities. Many of the 1,000 exercises are presented with detailed author-prepared-solutions, developing creativity and an arsenal of new approaches for solving mathematical problems. This book can serve teachers, high-school students, and mathematical competitors. It may also be used as supplemental reading, providing readers with new and classical methods for proving geometric inequalities. .

  13. Geometric integrator for simulations in the canonical ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Tapias, Diego, E-mail: diego.tapias@nucleares.unam.mx [Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico); Sanders, David P., E-mail: dpsanders@ciencias.unam.mx [Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico); Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Bravetti, Alessandro, E-mail: alessandro.bravetti@iimas.unam.mx [Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico)

    2016-08-28

    We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.
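
    The paper's integrator targets the canonical ensemble with thermostats; as a much simpler illustration of what "second-order, time-reversible" buys you, the sketch below applies the textbook velocity Verlet method to a 1-D harmonic oscillator and checks that the energy error stays bounded rather than drifting. This is a generic symplectic integrator, not the density-dynamics scheme of the paper:

```python
import numpy as np

def velocity_verlet(force, x0, v0, dt, steps, mass=1.0):
    """Second-order, time-reversible (symplectic) integrator."""
    x, v = x0, v0
    xs, vs = [x], [v]
    a = force(x) / mass
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2
        a_new = force(x) / mass
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
        xs.append(x)
        vs.append(v)
    return np.array(xs), np.array(vs)

# 1-D harmonic oscillator: F(x) = -k x; the energy shows no secular drift.
k = 1.0
xs, vs = velocity_verlet(lambda x: -k * x, x0=1.0, v0=0.0, dt=0.05, steps=20000)
energy = 0.5 * vs**2 + 0.5 * k * xs**2
print("max |E - E0| =", np.max(np.abs(energy - energy[0])))   # stays small and bounded
```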

  14. Geometric integrator for simulations in the canonical ensemble

    International Nuclear Information System (INIS)

    Tapias, Diego; Sanders, David P.; Bravetti, Alessandro

    2016-01-01

    We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.

  15. Geometrical-optics approximation of forward scattering by coated particles.

    Science.gov (United States)

    Xu, Feng; Cai, Xiaoshu; Ren, Kuanfang

    2004-03-20

    By means of geometrical optics we present an approximation algorithm with which to accelerate the computation of scattering intensity distribution within a forward angular range (0 degrees-60 degrees) for coated particles illuminated by a collimated incident beam. Phases of emerging rays are exactly calculated to improve the approximation precision. This method proves effective for transparent and tiny absorbent particles with size parameters larger than 75 but fails to give good approximation results at scattering angles at which refractive rays are absent. When the absorption coefficient of a particle is greater than 0.01, the geometrical optics approximation is effective only for forward small angles, typically less than 10 degrees or so.

  16. Geometrical analysis of woven fabric microstructure based on micron-resolution computed tomography data

    Science.gov (United States)

    Krieger, Helga; Seide, Gunnar; Gries, Thomas; Stapleton, Scott E.

    2018-04-01

    The global mechanical properties of textiles such as elasticity and strength, as well as transport properties such as permeability depend strongly on the microstructure of the textile. Textiles are heterogeneous structures with highly anisotropic material properties, including local fiber orientation and local fiber volume fraction. In this paper, an algorithm is presented to generate a virtual 3D-model of a woven fabric architecture with information about the local fiber orientation and the local fiber volume fraction. The geometric data of the woven fabric impregnated with resin was obtained by micron-resolution computed tomography (μCT). The volumetric μCT-scan was discretized into cells and the microstructure of each cell was analyzed and homogenized. Furthermore, the discretized data was used to calculate the local permeability tensors of each cell. An example application of the analyzed data is the simulation of the resin flow through a woven fabric based on the determined local permeability tensors and on Darcy's law. The presented algorithm is an automated and robust method of going from μCT-scans to structural or flow models.
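
    For the volume-fraction part of the homogenization step (dividing the μCT volume into cells and averaging each), a block-averaging sketch on a binary fiber mask is enough to convey the idea; the synthetic volume and the cell size below are assumptions, not the authors' data:

```python
import numpy as np

def local_fiber_volume_fraction(mask, cell):
    """Block-average a binary fiber mask (1 = fiber, 0 = matrix/resin) over
    cubic cells of `cell` voxels per side, cropping any partial border cells."""
    nz, ny, nx = (s // cell for s in mask.shape)
    m = mask[:nz * cell, :ny * cell, :nx * cell]
    m = m.reshape(nz, cell, ny, cell, nx, cell)
    return m.mean(axis=(1, 3, 5))

# Synthetic segmented muCT volume (placeholder for real scan data).
rng = np.random.default_rng(1)
volume = (rng.random((120, 120, 120)) < 0.55).astype(np.uint8)
vf = local_fiber_volume_fraction(volume, cell=20)
print(vf.shape, vf.min(), vf.max())        # (6, 6, 6) cells, fractions around 0.55
```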

  17. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    Science.gov (United States)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanning-imaging images caused by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually requires artificial markers. It is a rather challenging task to locate the accurate position of the points and get accurate homonymy point sets. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps: To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points which are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine invariant geometric constraint based on a triangulation constructed by the K-nearest neighbor algorithm. Based on the accurate homonymy point sets, the two images are registered using a TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two experiments are designed to evaluate the distribution of the point set and the correct matching rate on synthetic data and real data, respectively. The last experiment is performed on non-rigid deformation remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.

  18. Algorithmic and geometric topics around free groups and automorphisms

    CERN Document Server

    Lustig, Martin; Ventura, Enric

    2017-01-01

    This volume presents the lecture notes from the authors’ three summer courses offered during the program “Automorphisms of Free Groups: Geometry, Topology, and Dynamics” held at the Centre de Recerca Matemàtica (CRM) in Bellaterra, Spain. The first two chapters present the basic tools needed, from formal language theory (regular and context-free languages, automata, rewriting systems, transducers, etc) and emphasize their connections to group theory, mostly relating to free and virtually-free groups. The material covered is sufficient to present full proofs of many of the existing interesting characterizations of virtually-free groups. In turn, the last chapter comprehensively describes Bonahon’s construction of Thurston’s compactification of Teichmüller space in terms of geodesic currents on surfaces. It also includes several intriguing extensions of the notion of geodesic current to various other, more general settings.

  19. Scalability and efficiency of genetic algorithms for geometrical applications

    NARCIS (Netherlands)

    Dijk, van S.F.; Thierens, D.; Berg, de M.; Schoenauer, M.

    2000-01-01

    We study the scalability and efficiency of a GA that we developed earlier to solve the practical cartographic problem of labeling a map with point features. We argue that the special characteristics of our GA make it fit in well with theoretical models predicting the optimal population size

  20. A content-based digital image watermarking scheme resistant to local geometric distortions

    International Nuclear Information System (INIS)

    Yang, Hong-ying; Chen, Li-li; Wang, Xiang-yang

    2011-01-01

    Geometric distortion is known as one of the most difficult attacks to resist, as it can desynchronize the location of the watermark and hence cause incorrect watermark detection. Geometric distortion can be decomposed into two classes: global affine transforms and local geometric distortions. Most countermeasures proposed in the literature only address the problem of global affine transforms. It is a challenging problem to design a robust image watermarking scheme against local geometric distortions. In this paper, we propose a new content-based digital image watermarking scheme with good visual quality and reasonable resistance against local geometric distortions. Firstly, the robust feature points, which can survive various common image processing and global affine transforms, are extracted by using a multi-scale SIFT (scale invariant feature transform) detector. Then, the affine covariant local feature regions (LFRs) are constructed adaptively according to the feature scale and local invariant centroid. Finally, the digital watermark is embedded into the affine covariant LFRs by modulating the magnitudes of discrete Fourier transform (DFT) coefficients. By binding the watermark with the affine covariant LFRs, the watermark detection can be done without synchronization error. Experimental results show that the proposed image watermarking is not only invisible and robust against common image processing operations such as sharpening, noise addition, and JPEG compression, etc, but also robust against global affine transforms and local geometric distortions

  1. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have...... attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural...... systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...

  2. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  3. Geometric Mixing, Peristalsis, and the Geometric Phase of the Stomach.

    Science.gov (United States)

    Arrieta, Jorge; Cartwright, Julyan H E; Gouillart, Emmanuelle; Piro, Nicolas; Piro, Oreste; Tuval, Idan

    2015-01-01

    Mixing fluid in a container at low Reynolds number--in an inertialess environment--is not a trivial task. Reciprocating motions merely lead to cycles of mixing and unmixing, so continuous rotation, as used in many technological applications, would appear to be necessary. However, there is another solution: movement of the walls in a cyclical fashion to introduce a geometric phase. We show using journal-bearing flow as a model that such geometric mixing is a general tool for using deformable boundaries that return to the same position to mix fluid at low Reynolds number. We then simulate a biological example: we show that mixing in the stomach functions because of the "belly phase," peristaltic movement of the walls in a cyclical fashion introduces a geometric phase that avoids unmixing.

  4. Geometric Mixing, Peristalsis, and the Geometric Phase of the Stomach.

    Directory of Open Access Journals (Sweden)

    Jorge Arrieta

    Mixing fluid in a container at low Reynolds number--in an inertialess environment--is not a trivial task. Reciprocating motions merely lead to cycles of mixing and unmixing, so continuous rotation, as used in many technological applications, would appear to be necessary. However, there is another solution: movement of the walls in a cyclical fashion to introduce a geometric phase. We show using journal-bearing flow as a model that such geometric mixing is a general tool for using deformable boundaries that return to the same position to mix fluid at low Reynolds number. We then simulate a biological example: we show that mixing in the stomach functions because of the "belly phase," peristaltic movement of the walls in a cyclical fashion introduces a geometric phase that avoids unmixing.

  5. Parallel implementation of geometric transformations

    Energy Technology Data Exchange (ETDEWEB)

    Clarke, K A; Ip, H H.S.

    1982-10-01

    An implementation of digitized picture rotation and magnification based on Weiman's algorithm is presented. On a programmable array machine, routines that perform small transformations can be coded efficiently. The method illustrates the interpolative nature of the algorithm. 6 references.
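
    As an illustrative aside, the following Python/NumPy sketch shows the generic idea behind digitized picture rotation: each output pixel is mapped back to its source location (the inverse rotation) and filled by nearest-neighbour sampling. It is a plain serial illustration, not Weiman's shear-based algorithm or its programmable-array implementation.

        # Rotate a digitized picture about its centre by mapping each output
        # pixel back to its source location (inverse rotation) and sampling
        # the nearest neighbour.  Serial illustration only.
        import numpy as np

        def rotate_nearest(image, angle_deg):
            h, w = image.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            theta = np.deg2rad(angle_deg)
            cos_t, sin_t = np.cos(theta), np.sin(theta)
            ys, xs = np.mgrid[0:h, 0:w]
            src_y = cos_t * (ys - cy) + sin_t * (xs - cx) + cy
            src_x = -sin_t * (ys - cy) + cos_t * (xs - cx) + cx
            sy, sx = np.rint(src_y).astype(int), np.rint(src_x).astype(int)
            out = np.zeros_like(image)
            valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
            out[valid] = image[sy[valid], sx[valid]]
            return out

        rotated = rotate_nearest(np.arange(64.0).reshape(8, 8), 30.0)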

  6. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.
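
    As an illustrative aside, the sketch below is a bare-bones 2-D RRT showing the sample/nearest/steer loop that RRT* and variants such as TG-RRT* build on. The single circular obstacle, step size and goal bias are illustrative placeholders and have nothing to do with the paper's triangular sampling heuristic.

        # Bare-bones 2-D RRT: sample a point (with goal bias), find the
        # nearest tree node, steer a fixed step towards the sample, and keep
        # the new node if it is collision free.  One circular obstacle.
        import numpy as np

        rng = np.random.default_rng(0)
        start, goal = np.array([0.1, 0.1]), np.array([0.9, 0.9])
        step, goal_tol = 0.05, 0.05
        obs_c, obs_r = np.array([0.5, 0.5]), 0.2

        def collision_free(p):
            return np.linalg.norm(p - obs_c) > obs_r

        nodes, parents = [start], [-1]
        for _ in range(5000):
            sample = goal if rng.random() < 0.1 else rng.random(2)
            i_near = int(np.argmin([np.linalg.norm(sample - n) for n in nodes]))
            d = sample - nodes[i_near]
            new = nodes[i_near] + step * d / (np.linalg.norm(d) + 1e-12)
            if collision_free(new):
                nodes.append(new)
                parents.append(i_near)
                if np.linalg.norm(new - goal) < goal_tol:
                    break

        path, i = [], len(nodes) - 1      # walk parents back to recover the path
        while i != -1:
            path.append(nodes[i])
            i = parents[i]
        path.reverse()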

  7. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence with an algorithm based on minimizing the distance between two descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
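
    As an illustrative aside, the following sketch shows frame-to-frame matching of precomputed descriptors by minimizing descriptor distance, with a Lowe-style ratio test to reject ambiguous matches. The 128-dimensional descriptors are random stand-ins; real SIFT extraction is assumed to happen elsewhere, and this is not necessarily the paper's exact matching criterion.

        # Match precomputed descriptors between two frames: for each feature
        # in frame 0 keep the nearest descriptor in frame 1, accepted only if
        # it passes a Lowe-style ratio test.  Descriptors here are random
        # 128-D stand-ins for real SIFT vectors.
        import numpy as np

        def match_descriptors(desc_a, desc_b, ratio=0.8):
            matches = []
            for i, d in enumerate(desc_a):
                dists = np.linalg.norm(desc_b - d, axis=1)
                order = np.argsort(dists)
                best, second = order[0], order[1]
                if dists[best] < ratio * dists[second]:   # unambiguous match only
                    matches.append((i, int(best)))
            return matches

        rng = np.random.default_rng(1)
        frame0 = rng.random((50, 128))
        frame1 = frame0 + 0.01 * rng.standard_normal((50, 128))   # perturbed copies
        pairs = match_descriptors(frame0, frame1)
        print(len(pairs), "features tracked to the next frame")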

  8. A geometric framework for evaluating rare variant tests of association.

    Science.gov (United States)

    Liu, Keli; Fast, Shannon; Zawistowski, Matthew; Tintle, Nathan L

    2013-05-01

    The wave of next-generation sequencing data has arrived. However, many questions still remain about how to best analyze sequence data, particularly the contribution of rare genetic variants to human disease. Numerous statistical methods have been proposed to aggregate association signals across multiple rare variant sites in an effort to increase statistical power; however, the precise relation between the tests is often not well understood. We present a geometric representation for rare variant data in which rare allele counts in case and control samples are treated as vectors in Euclidean space. The geometric framework facilitates a rigorous classification of existing rare variant tests into two broad categories: tests for a difference in the lengths of the case and control vectors, and joint tests for a difference in either the lengths or angles of the two vectors. We demonstrate that genetic architecture of a trait, including the number and frequency of risk alleles, directly relates to the behavior of the length and joint tests. Hence, the geometric framework allows prediction of which tests will perform best under different disease models. Furthermore, the structure of the geometric framework immediately suggests additional classes and types of rare variant tests. We consider two general classes of tests which show robustness to noncausal and protective variants. The geometric framework introduces a novel and unique method to assess current rare variant methodology and provides guidelines for both applied and theoretical researchers. © 2013 Wiley Periodicals, Inc.
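
    As an illustrative aside, the toy sketch below computes the geometric quantities the framework is built on: the case and control rare-allele count vectors, their lengths, and the angle between them. The counts are invented, and a real test would still need a null distribution (e.g. by permutation).

        # Rare-allele counts in cases and controls as vectors in Euclidean
        # space; length tests compare the two norms, joint tests also use the
        # angle between the vectors.  The counts below are invented.
        import numpy as np

        case_counts = np.array([3, 0, 1, 5, 2, 0, 4])       # per variant site
        control_counts = np.array([1, 1, 0, 2, 2, 1, 1])

        len_case = np.linalg.norm(case_counts)
        len_control = np.linalg.norm(control_counts)
        cos_angle = case_counts @ control_counts / (len_case * len_control)
        angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

        print(f"lengths {len_case:.2f} vs {len_control:.2f}, angle {angle_deg:.1f} deg")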

  9. A geometric viewpoint on generalized hydrodynamics

    Directory of Open Access Journals (Sweden)

    Benjamin Doyon

    2018-01-01

    Full Text Available Generalized hydrodynamics (GHD) is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective (“dressed”) velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.

  10. Geometric Properties of Grassmannian Frames for R^2 and R^3

    Directory of Open Access Journals (Sweden)

    Benedetto John J

    2006-01-01

    Full Text Available Grassmannian frames are frames satisfying a min-max correlation criterion. We translate a geometrically intuitive approach for two- and three-dimensional Euclidean space (R^2 and R^3) into a new analytic method which is used to classify many Grassmannian frames in this setting. The method and associated algorithm decrease the maximum frame correlation, and hence give rise to the construction of specific examples of Grassmannian frames. Many of the results are known by other techniques, and even more generally, so that this paper can be viewed as a tutorial. However, our analytic method is presented with the goal of developing it to address unresolved problems in d-dimensional Hilbert spaces which serve as a setting for spherical codes, erasure channel modeling, and other aspects of communications theory.
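
    As an illustrative aside, the defining criterion of a Grassmannian frame is the maximum absolute correlation among its unit-norm vectors. The sketch below evaluates that criterion for the three-vector "Mercedes-Benz" frame in R^2, whose maximal correlation of 1/2 is the optimal value for three vectors in two dimensions.

        # Maximum frame correlation of a unit-norm frame: the largest
        # off-diagonal entry of the absolute Gram matrix.  For the
        # "Mercedes-Benz" frame in R^2 the value is 0.5.
        import numpy as np

        def max_frame_correlation(frame):
            gram = frame @ frame.T
            off_diag = np.abs(gram[~np.eye(len(frame), dtype=bool)])
            return off_diag.max()

        angles = np.deg2rad([90.0, 210.0, 330.0])
        mb_frame = np.column_stack([np.cos(angles), np.sin(angles)])
        print(max_frame_correlation(mb_frame))   # ~0.5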

  11. Random broadcast on random geometric graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Elsasser, Robert [UNIV OF PADERBORN; Friedrich, Tobias [ICSI/BERKELEY; Sauerwald, Tomas [ICSI/BERKELEY

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs when, with high probability, (i) the RGG is connected, or (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
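
    As an illustrative aside, the following sketch simulates the push model on a random geometric graph in the unit square: points within distance r are adjacent, and in each round every informed node pushes to one uniformly random neighbour. The values of n and r are arbitrary choices that make connectivity very likely; they are not tuned to the paper's regimes.

        # Push-model broadcast on a random geometric graph: n points in the
        # unit square, edges between points within distance r, and in each
        # round every currently informed node informs one uniformly random
        # neighbour.  A round cap guards against an unlikely disconnected draw.
        import numpy as np

        rng = np.random.default_rng(2)
        n, r = 300, 0.15
        pts = rng.random((n, 2))
        dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        adj = (dists <= r) & ~np.eye(n, dtype=bool)

        informed = np.zeros(n, dtype=bool)
        informed[0] = True
        rounds = 0
        while not informed.all() and rounds < 500:
            rounds += 1
            for u in np.flatnonzero(informed.copy()):   # snapshot: synchronous rounds
                nbrs = np.flatnonzero(adj[u])
                if nbrs.size:
                    informed[rng.choice(nbrs)] = True
        print(f"{informed.sum()} of {n} nodes informed after {rounds} rounds")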

  12. A new geometrical gravitational theory

    International Nuclear Information System (INIS)

    Obata, T.; Chiba, J.; Oshima, H.

    1981-01-01

    A geometrical gravitational theory is developed. The field equations are uniquely determined apart from one unknown dimensionless parameter ω². It is based on an extension of the Weyl geometry, and by the extension the gravitational coupling constant and the gravitational mass are made to be dynamical and geometrical. The fundamental geometrical objects in the theory are a metric g_μν and two gauge scalars phi and psi. The theory satisfies the weak equivalence principle, but breaks the strong one generally. u(phi, psi) = phi is found out on the assumption that the strong one keeps holding good at least for bosons of low spins. Thus there is the simple correspondence between the geometrical objects and the gravitational objects. Since the theory satisfies the weak one, the inertial mass is also dynamical and geometrical in the same way as is the gravitational mass. Moreover, the cosmological term in the theory is a coscalar of power -4 algebraically made of psi and u(phi, psi), so it is dynamical, too. Finally spherically symmetric exact solutions are given. The permissible range of the unknown parameter ω² is experimentally determined by applying the solutions to the solar system. (author)

  13. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    International Nuclear Information System (INIS)

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-01-01

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
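
    As an illustrative aside, the sketch below shows the general flavour of projection-based solvers: cyclic projection onto halfspaces a_i . x <= b_i, the kind of building block such feasibility-and-optimization schemes rest on. It is a generic sketch, not the authors' IMPT solver, and the random constraints are constructed to be consistent.

        # Cyclic projection onto the halfspaces a_i . x <= b_i: whenever a
        # constraint is violated, project x orthogonally onto that halfspace.
        # The random constraints are built so that a feasible point exists.
        import numpy as np

        def cyclic_projection(A, b, n_sweeps=200):
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for a_i, b_i in zip(A, b):
                    violation = a_i @ x - b_i
                    if violation > 0:
                        x = x - violation * a_i / (a_i @ a_i)
            return x

        rng = np.random.default_rng(3)
        A = rng.standard_normal((20, 5))
        b = A @ rng.random(5) + 0.1          # a feasible point exists by construction
        x_feas = cyclic_projection(A, b)
        print(np.max(A @ x_feas - b))        # about <= 0 once a feasible point is reached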

  14. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    Science.gov (United States)

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  15. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery

    Science.gov (United States)

    Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa

    2017-09-01

    A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of the kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects from big noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, the random kernel sinks for fast kernel matrix approximation and the two-stage algorithm for optimizing initial pure endmembers are utilized to improve its computational efficiency in realistic implementations of RKADA, respectively. The optimization equation of RKADA is solved by using the block coordinate descent scheme and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed to make comparisons with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all the six methods in terms of spectral angle distance (SAD) and root-mean-square-error (RMSE). Moreover, the RKADA has short computational times in offline
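
    As an illustrative aside, the sketch below defines the Huber loss that the abstract credits with damping the effect of large noise and outliers: quadratic for small residuals, linear beyond a threshold. The threshold value is a placeholder, and the RKADA optimization itself (kernels, block coordinate descent) is not reproduced.

        # Huber loss: quadratic for |r| <= delta, linear beyond, so large
        # residuals (outliers) contribute far less than under squared loss.
        # delta = 1.0 is a placeholder choice.
        import numpy as np

        def huber(residuals, delta=1.0):
            r = np.abs(residuals)
            return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

        res = np.array([-0.2, 0.5, 3.0, -8.0])
        print(huber(res))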

  16. Geometrical primitives reconstruction from image sequence in an interactive context

    International Nuclear Information System (INIS)

    Monchal, L.; Aubry, P.

    1995-01-01

    We propose a method to recover 3D geometrical shape from an image sequence, in a context of man-machine co-operation. The human operator has to point out the edges of an object in the first image and choose a corresponding geometrical model. The algorithm tracks each relevant 2D segment describing surface discontinuities or limbs in the images. Then, knowing the motion of the camera between images, the position and size of the virtual object are deduced by minimising a function. The function describes how well the virtual object is linked to the extracted segments of the sequence, its geometrical model and pieces of information given by the operator. (author). 13 refs., 7 figs., 8 tabs

  17. COMPARISON OF METHODS FOR GEOMETRIC CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    J. Hieronymus

    2012-09-01

    Full Text Available Methods for geometric calibration of cameras in close-range photogrammetry are established and well investigated. The most common one is based on test-fields with a well-known pattern, which are observed from different directions. The parameters of a distortion model are calculated using bundle-block-adjustment algorithms. This method works well for short focal lengths, but is substantially more problematic to use with large focal lengths. Those would require very large test-fields and surrounding space. To overcome this problem, there is another common method for calibration used in remote sensing. It employs measurements using a collimator and a goniometer. A third calibration method uses diffractive optical elements (DOE) to project holograms of a well-known pattern. In this paper these three calibration methods are compared empirically, especially in terms of accuracy. A camera has been calibrated with the methods mentioned above. All methods provide a set of distortion correction parameters as used by the photogrammetric software Australis. The resulting parameter values are very similar for all investigated methods. The three sets of distortion parameters are cross-compared against all three calibration methods. This is achieved by inserting the gained distortion parameters as fixed input into the calibration algorithms and only adjusting the exterior orientation. The RMS (root mean square) of the remaining image coordinate residuals is taken as a measure of distortion correction quality. There are differences resulting from the different calibration methods. Nevertheless the measure is small for every comparison, which means that all three calibration methods can be used for accurate geometric calibration.

  18. Robust Spacecraft Component Detection in Point Clouds

    Directory of Open Access Journals (Sweden)

    Quanmao Wei

    2018-03-01

    Full Text Available Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.
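
    As an illustrative aside, the following sketch runs a minimal RANSAC-style plane detection on a synthetic point cloud, to convey the primitive-fitting step such schemes rely on. It is not the paper's energy-based model fitting or its Hough transform stage, and the noise level and thresholds are invented.

        # Minimal RANSAC-style plane detection: sample three points, fit a
        # plane, count inliers within a distance threshold, keep the best.
        # Synthetic cloud: a noisy z ~ 0 plane plus uniform clutter.
        import numpy as np

        rng = np.random.default_rng(4)
        plane_pts = np.c_[rng.uniform(-1, 1, (400, 2)), 0.02 * rng.standard_normal(400)]
        clutter = rng.uniform(-1, 1, (100, 3))
        cloud = np.vstack([plane_pts, clutter])

        best_inliers, best_model = 0, None
        for _ in range(200):
            sample = cloud[rng.choice(len(cloud), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue                                 # degenerate (collinear) sample
            normal /= norm
            d = -normal @ sample[0]
            inliers = int((np.abs(cloud @ normal + d) < 0.05).sum())
            if inliers > best_inliers:
                best_inliers, best_model = inliers, (normal, d)

        print(best_inliers, "inlier points on the detected plane")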

  19. Robust Spacecraft Component Detection in Point Clouds.

    Science.gov (United States)

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  20. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  1. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    International Nuclear Information System (INIS)

    Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.

    2013-01-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 x 10^4) cores for topology map generation and excellent scaling on O(1 x 10^5) cores for the data transfer operation with meshes of O(1 x 10^9) elements. (authors)

  2. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    Energy Technology Data Exchange (ETDEWEB)

    Slattery, S. R.; Wilson, P. P. H. [Department of Engineering Physics, University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Pawlowski, R. P. [Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185 (United States)

    2013-07-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 x 10^4) cores for topology map generation and excellent scaling on O(1 x 10^5) cores for the data transfer operation with meshes of O(1 x 10^9) elements. (authors)

  3. Handling Occlusions for Robust Augmented Reality Systems

    Directory of Open Access Journals (Sweden)

    Maidi Madjid

    2010-01-01

    Full Text Available In Augmented Reality applications, the human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene and this requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach based on a direct method combined with the Kalman filter. An important factor for providing a robust Augmented Reality system is the correct handling of target occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method using an optical flow approach to track visible points and maintain the virtual graphics overlay when targets are not identified. Our proposed real-time algorithm is tested with different camera viewpoints under various image conditions and is shown to be accurate and robust.
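
    As an illustrative aside, the sketch below is a minimal constant-velocity Kalman filter on a 2-D point, showing the predict/update cycle such trackers rely on: during an occluded frame only the prediction step runs, bridging the gap until the target is re-identified. The matrices are generic textbook choices, not the paper's pose filter.

        # Constant-velocity Kalman filter on a 2-D point: state is
        # (x, y, vx, vy), measurements are noisy positions, and a None entry
        # stands in for an occluded frame where only the prediction runs.
        import numpy as np

        dt = 1.0
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)        # state transition
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)        # observe position only
        Qn = 1e-3 * np.eye(4)                            # process noise
        Rn = 1e-2 * np.eye(2)                            # measurement noise

        def predict(x, P):
            return F @ x, F @ P @ F.T + Qn

        def update(x, P, z):
            S = H @ P @ H.T + Rn
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

        x, P = np.zeros(4), np.eye(4)
        measurements = [np.array([0.0, 0.0]), np.array([1.0, 0.5]),
                        None, np.array([3.1, 1.4])]
        for z in measurements:
            x, P = predict(x, P)
            if z is not None:                            # skip update when occluded
                x, P = update(x, P, z)
        print(x[:2])                                      # estimated position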

  4. Geometric group theory an introduction

    CERN Document Server

    Löh, Clara

    2017-01-01

    Inspired by classical geometry, geometric group theory has in turn provided a variety of applications to geometry, topology, group theory, number theory and graph theory. This carefully written textbook provides a rigorous introduction to this rapidly evolving field whose methods have proven to be powerful tools in neighbouring fields such as geometric topology. Geometric group theory is the study of finitely generated groups via the geometry of their associated Cayley graphs. It turns out that the essence of the geometry of such groups is captured in the key notion of quasi-isometry, a large-scale version of isometry whose invariants include growth types, curvature conditions, boundary constructions, and amenability. This book covers the foundations of quasi-geometry of groups at an advanced undergraduate level. The subject is illustrated by many elementary examples, outlooks on applications, as well as an extensive collection of exercises.

  5. Geometric procedures for civil engineers

    CERN Document Server

    Tonias, Elias C

    2016-01-01

    This book provides a multitude of geometric constructions usually encountered in civil engineering and surveying practice.  A detailed geometric solution is provided to each construction as well as a step-by-step set of programming instructions for incorporation into a computing system. The volume is comprised of 12 chapters and appendices that may be grouped in three major parts: the first is intended for those who love geometry for its own sake and its evolution through the ages, in general, and, more specifically, with the introduction of the computer. The second section addresses geometric features used in the book and provides support procedures used by the constructions presented. The remaining chapters and the appendices contain the various constructions. The volume is ideal for engineering practitioners in civil and construction engineering and allied areas.

  6. Multiscale geometric modeling of macromolecules I: Cartesian representation

    Science.gov (United States)

    Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2014-01-01

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the
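
    As an illustrative aside, the sketch below computes the curvature descriptors named in the abstract (Gaussian and mean curvature, curvedness, shape index) from a pair of principal curvatures, using one common Koenderink-style convention. The sample values are arbitrary; on a real surface the principal curvatures would come from the fitted surface patch.

        # Curvature descriptors from principal curvatures k1 >= k2:
        # Gaussian K = k1*k2, mean H = (k1+k2)/2,
        # curvedness C = sqrt((k1^2 + k2^2)/2),
        # shape index S = (2/pi) * arctan((k1+k2)/(k1-k2)), S in [-1, 1]
        # (planar points k1 = k2 = 0 fall back to arctan2's convention).
        import numpy as np

        def curvature_descriptors(k1, k2):
            gaussian = k1 * k2
            mean = 0.5 * (k1 + k2)
            curvedness = np.sqrt(0.5 * (k1**2 + k2**2))
            shape_index = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
            return gaussian, mean, curvedness, shape_index

        print(curvature_descriptors(k1=0.8, k2=-0.3))    # a saddle-like patch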

  7. Multiscale geometric modeling of macromolecules I: Cartesian representation

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin [Department of Mathematics, Michigan State University, MI 48824 (United States); Feng, Xin [Department of Computer Science and Engineering, Michigan State University, MI 48824 (United States); Chen, Zhan [Department of Mathematics, Michigan State University, MI 48824 (United States); Tong, Yiying [Department of Computer Science and Engineering, Michigan State University, MI 48824 (United States); Wei, Guo-Wei, E-mail: wei@math.msu.edu [Department of Mathematics, Michigan State University, MI 48824 (United States); Department of Biochemistry and Molecular Biology, Michigan State University, MI 48824 (United States)

    2014-01-15

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace–Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the

  8. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely...... ‘COST TU0601: Robustness of Structures’ was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...... the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled...

  9. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  10. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  11. An introduction to geometrical physics

    CERN Document Server

    Aldrovandi, R

    1995-01-01

    This book stresses the unifying power of the geometrical framework in bringing together concepts from the different areas of physics. Common underpinnings of optics, elasticity, gravitation, relativistic fields, particle mechanics and other subjects are underlined. It attempts to extricate the notion of space currently in the physical literature from the metric connotation.The book's goal is to present mathematical ideas associated with geometrical physics in a rather introductory language. Included are many examples from elementary physics and also, for those wishing to reach a higher level o

  12. Geometric scaling as traveling waves

    International Nuclear Information System (INIS)

    Munier, S.; Peschanski, R.

    2003-01-01

    We show the relevance of the nonlinear Fisher and Kolmogorov-Petrovsky-Piscounov (KPP) equation to the problem of high energy evolution of the QCD amplitudes. We explain how the traveling wave solutions of this equation are related to geometric scaling, a phenomenon observed in deep-inelastic scattering experiments. Geometric scaling is for the first time shown to result from an exact solution of nonlinear QCD evolution equations. Using general results on the KPP equation, we compute the velocity of the wave front, which gives the full high energy dependence of the saturation scale

  13. Asymptotic geometric analysis, part I

    CERN Document Server

    Artstein-Avidan, Shiri

    2015-01-01

    The authors present the theory of asymptotic geometric analysis, a field which lies on the border between geometry and functional analysis. In this field, isometric problems that are typical for geometry in low dimensions are substituted by an "isomorphic" point of view, and an asymptotic approach (as dimension tends to infinity) is introduced. Geometry and analysis meet here in a non-trivial way. Basic examples of geometric inequalities in isomorphic form which are encountered in the book are the "isomorphic isoperimetric inequalities" which led to the discovery of the "concentration phenomen

  14. Geometric integration for particle accelerators

    International Nuclear Information System (INIS)

    Forest, Etienne

    2006-01-01

    This paper is a very personal view of the field of geometric integration in accelerator physics-a field where often work of the highest quality is buried in lost technical notes or even not published; one has only to think of Simon van der Meer Nobel prize work on stochastic cooling-unpublished in any refereed journal. So I reconstructed the relevant history of geometrical integration in accelerator physics as much as I could by talking to collaborators and using my own understanding of the field. The reader should not be too surprised if this account is somewhere between history, science and perhaps even fiction

  15. Geometrical spin symmetry and spin

    International Nuclear Information System (INIS)

    Pestov, I. B.

    2011-01-01

    Unification of General Theory of Relativity and Quantum Mechanics leads to General Quantum Mechanics which includes into itself spindynamics as a theory of spin phenomena. The key concepts of spindynamics are geometrical spin symmetry and the spin field (space of defining representation of spin symmetry). The essence of spin is the bipolar structure of geometrical spin symmetry induced by the gravitational potential. The bipolar structure provides a natural derivation of the equations of spindynamics. Spindynamics involves all phenomena connected with spin and provides new understanding of the strong interaction.

  16. Geometric integration for particle accelerators

    Science.gov (United States)

    Forest, Étienne

    2006-05-01

    This paper is a very personal view of the field of geometric integration in accelerator physics—a field where often work of the highest quality is buried in lost technical notes or even not published; one has only to think of Simon van der Meer Nobel prize work on stochastic cooling—unpublished in any refereed journal. So I reconstructed the relevant history of geometrical integration in accelerator physics as much as I could by talking to collaborators and using my own understanding of the field. The reader should not be too surprised if this account is somewhere between history, science and perhaps even fiction.

  17. Lattice degeneracies of geometric fermions

    International Nuclear Information System (INIS)

    Raszillier, H.

    1983-05-01

    We give the minimal numbers of degrees of freedom carried by geometric fermions on all lattices of maximal symmetries in d = 2, 3, and 4 dimensions. These numbers are lattice dependent, but in the (free) continuum limit, part of the degrees of freedom have to escape to infinity by a Wilson mechanism built in, and 2sup(d) survive for any lattice. On self-reciprocal lattices we compare the minimal numbers of degrees of freedom of geometric fermions with the minimal numbers of naive fermions on these lattices and argue that these numbers are equal. (orig.)

  18. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  19. In Defence of Geometrical Algebra

    NARCIS (Netherlands)

    Blasjo, V.N.E.

    The geometrical algebra hypothesis was once the received interpretation of Greek mathematics. In recent decades, however, it has become anathema to many. I give a critical review of all arguments against it and offer a consistent rebuttal case against the modern consensus. Consequently, I find that

  20. Geometrical interpretation of extended supergravity

    International Nuclear Information System (INIS)

    Townsend, P.K.; Nieuwenhuizen, P.van

    1977-01-01

    SO(2) extended supergravity is shown to be a geometrical theory, whose underlying gauge group is OSp(4,2). The couplings which gauge the SO(2) symmetry as well as the accompanying cosmological and masslike terms are directly obtained, and the usual SO(2) model is obtained after a Wigner-Inönü group contraction. (Auth.)

  1. Geometric scaling in exclusive processes

    International Nuclear Information System (INIS)

    Munier, S.; Wallon, S.

    2003-01-01

    We show that according to the present understanding of the energy evolution of the observables measured in deep-inelastic scattering, the photon-proton scattering amplitude has to exhibit geometric scaling at each impact parameter. We suggest a way to test this experimentally at HERA. A qualitative analysis based on published data is presented and discussed. (orig.)

  2. Geometric quantization and general relativity

    International Nuclear Information System (INIS)

    Souriau, J.-M.

    1977-01-01

    The purpose of geometric quantization is to give a rigorous mathematical content to the 'correspondence principle' between classical and quantum mechanics. The main tools are borrowed on one hand from differential geometry and topology (differential manifolds, differential forms, fiber bundles, homology and cohomology, homotopy), on the other hand from analysis (functions of positive type, infinite dimensional group representations, pseudo-differential operators). Some satisfactory results have been obtained in the study of dynamical systems, but some fundamental questions are still waiting for an answer. The 'geometric quantization of fields', where some further well known difficulties arise, is still in a preliminary stage. In particular, the geometric quantization on the gravitational field is still a mere project. The situation is even more uncertain due to the fact that there is no experimental evidence of any quantum gravitational effect which could give us a hint towards what we are supposed to look for. The first level of both Quantum Theory, and General Relativity describes passive matter: influence by the field without being a source of it (first quantization and equivalence principle respectively). In both cases this is only an approximation (matter is always a source). But this approximation turns out to be the least uncertain part of the description, because on one hand the first quantization avoids the problems of renormalization and on the other hand the equivalence principle does not imply any choice of field equations (it is known that one can modify Einstein equations at short distances without changing their geometrical properties). (Auth.)

  3. Geometric origin of central charges

    International Nuclear Information System (INIS)

    Lukierski, J.; Rytel, L.

    1981-05-01

    The complete set of N(N-1) central charge generators for the D=4 N-extended super-Poincaré algebra is obtained by a suitable contraction of the OSp(2N;4) superalgebra. The superspace realizations of the spinorial generators with central charges are derived. The conjugate set of N(N-1) additional bosonic superspace coordinates is introduced in a unique and geometric way. (author)

  4. Vergence, Vision, and Geometric Optics

    Science.gov (United States)

    Keating, Michael P.

    1975-01-01

    Provides a definition of vergence in terms of the curvature of the wave fronts, and gives examples to illustrate the advantages of this approach. The vergence treatment of geometrical optics provides both conceptual and algebraic advantages, particularly for the life science student, over the traditional object distance-image distance-focal length…

  5. Geometric phases and quantum computation

    International Nuclear Information System (INIS)

    Vedral, V.

    2005-01-01

    Full text: In my lectures I will talk about the notion of the geometric phase and explain its relevance for both fundamental quantum mechanics as well as quantum computation. The phase will be at first introduced via the idea of Pancharatnam which involves interference of three or more light beams. This notion will then be generalized to the evolving quantum systems. I will discuss both pure and mixed states as well as unitary and non-unitary evolutions. I will also show how the concept of the vacuum induced geometric phase arises in quantum optics. A simple measurement scheme involving a Mach Zehnder interferometer will be presented and will be used to illustrate all the concepts in the lecture. Finally, I will expose a simple generalization of the geometric phase to evolving degenerate states. This will be seen to lead to the possibility of universal quantum computation using geometric effects only. Moreover, this contains a promise of intrinsically fault tolerant quantum information processing, whose prospects will be outlined at the end of the lecture. (author)

  6. Cartan's geometrical structure of supergravity

    International Nuclear Information System (INIS)

    Baaklini, N.S.

    1977-06-01

    The geometrical partnership of the vierbein and the spin-3/2 field in the structure of the supergravity Lagrangian is emphasized. Both fields are introduced as component of the same matrix differential form. The only local symmetry of the theory is SL(2,C)

  7. Robust multi-objective calibration strategies – possibilities for improving flood forecasting

    Directory of Open Access Journals (Sweden)

    G. H. Schmitz

    2012-10-01

    Full Text Available Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods became popular in order to identify parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations one single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This is regardless of whether the objective is aggregated from several criteria that measure different (possibly opposite) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that understand the problem of model calibration just as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can result in different near-optimum parameter vectors that can lead to a very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration problems using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies confirmed the effectiveness of this method. However, all ROPE approaches published so far just identify
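
    As an illustrative aside, the sketch below filters the non-dominated (Pareto-optimal) rows of an objective matrix, assuming every objective is to be minimized, which is the Pareto step mentioned in the abstract. The data-depth-based ROPE procedure itself is not reproduced, and the two objectives are placeholders.

        # Keep the non-dominated (Pareto-optimal) rows of an objective matrix;
        # every objective is assumed to be minimized.  Rows are 200 candidate
        # parameter vectors scored on two placeholder objectives.
        import numpy as np

        def non_dominated(objectives):
            n = objectives.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                # i is dominated if some row is no worse everywhere and strictly better somewhere
                dominates_i = (objectives <= objectives[i]).all(axis=1) & \
                              (objectives < objectives[i]).any(axis=1)
                if dominates_i.any():
                    keep[i] = False
            return keep

        rng = np.random.default_rng(5)
        objs = rng.random((200, 2))
        pareto_front = objs[non_dominated(objs)]
        print(len(pareto_front), "non-dominated parameter sets")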

  8. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-mission (EEM) data was developed. The results show...

  9. Geometric Transformations in Engineering Geometry

    Directory of Open Access Journals (Sweden)

    I. F. Borovikov

    2015-01-01

    Full Text Available Recently, for business purposes, in view of current trends and world experience in training engineers, research and faculty staff, there has been a need to transform traditional courses of descriptive geometry into a course of engineering geometry in which geometrical transformations have to become its main section. On the basis of critical analysis the paper gives suggestions to improve a presentation technique of this section both in the classroom and in academic literature, to extend the application scope of geometrical transformations to solving position and metric tasks and to the simulation of surfaces, as well as to design complex engineering configurations which meet a number of pre-specified conditions. The article offers a number of considerable amendments to the terms and definitions used in the existing courses of descriptive geometry. It draws some conclusions and makes the appropriate proposals on the feasibility of coordination in teaching the movement transformation in the courses of analytical and descriptive geometry. This will provide interdisciplinary team teaching and allow students to be convinced that a combination of analytical and graphic ways to solve geometric tasks is useful and reasonable. The traditional sections of learning courses need to be supplemented with a theory of projective and bi-rational transformations. In terms of application simplicity and convenience it is enough to consider the central transformations when solving the applied tasks. These transformations contain a beam of sub-invariant (low-invariant) straight lines on which the invariant curve induces non-involution and involution projectivities. The expediency of nonlinear transformations is shown in the article by a specific example of geometric modeling of the interfacing surface "spar-blade". Implementation of these suggestions will contribute to a real transformation of a traditional course of descriptive geometry into the engineering geometry

  10. Geometric entanglement in topologically ordered states

    International Nuclear Information System (INIS)

    Orús, Román; Wei, Tzu-Chieh; Buerschaper, Oliver; Nest, Maarten Van den

    2014-01-01

    Here we investigate the connection between topological order and the geometric entanglement, as measured by the logarithm of the overlap between a given state and its closest product state of blocks. We do this for a variety of topologically ordered systems such as the toric code, double semion, colour code and quantum double models. As happens for the entanglement entropy, we find that for sufficiently large block sizes the geometric entanglement is, up to possible sub-leading corrections, the sum of two contributions: a bulk contribution obeying a boundary law times the number of blocks and a contribution quantifying the underlying pattern of long-range entanglement of the topologically ordered state. This topological contribution is also present in the case of single-spin blocks in most cases, and constitutes an alternative characterization of topological order for these quantum states based on a multipartite entanglement measure. In particular, we see that the topological term for the two-dimensional colour code is twice as much as the one for the toric code, in accordance with recent renormalization group arguments (Bombin et al 2012 New J. Phys. 14 073048). Motivated by these results, we also derive a general formalism to obtain upper- and lower-bounds to the geometric entanglement of states with a non-Abelian group symmetry, and which we explicitly use to analyse quantum double models. Furthermore, we also provide an analysis of the robustness of the topological contribution in terms of renormalization and perturbation theory arguments, as well as a numerical estimation for small systems. Some of the results in this paper rely on the ability to disentangle single sites from the quantum state, which is always possible for the systems that we consider. Additionally we relate our results to the behaviour of the relative entropy of entanglement in topologically ordered systems, and discuss a number of numerical approaches based on tensor networks that could be

  11. A high-accuracy image registration algorithm using phase-only correlation for dental radiographs

    International Nuclear Information System (INIS)

    Ito, Koichi; Nikaido, Akira; Aoki, Takafumi; Kosuge, Eiko; Kawamata, Ryota; Kashima, Isamu

    2008-01-01

    Dental radiographs have been used for the accurate assessment and treatment of dental diseases. The nonlinear deformation between two dental radiographs may be observed, even if they are taken from the same oral regions of the subject. For an accurate diagnosis, the complete geometric registration between radiographs is required. This paper presents an efficient dental radiograph registration algorithm using the Phase-Only Correlation (POC) function. The use of phase components in 2D (two-dimensional) discrete Fourier transforms of dental radiograph images makes it possible to achieve highly robust image registration and recognition. Experimental evaluation using a dental radiograph database indicates that the proposed algorithm exhibits efficient recognition performance even for distorted radiographs. (author)
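
    As an illustrative aside, the following sketch implements plain phase-only correlation between two images: the normalized cross-power spectrum of their 2-D DFTs yields a sharp peak at the integer translation between them. The paper's dental-specific pipeline (and any sub-pixel or local refinement) is not reproduced.

        # Phase-only correlation: normalize the cross-power spectrum of the
        # two DFTs and invert; the peak location gives the integer translation.
        import numpy as np

        def poc_shift(img_a, img_b, eps=1e-12):
            Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
            cross = Fa * np.conj(Fb)
            poc = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
            peak = np.unravel_index(np.argmax(poc), poc.shape)
            # indices past the midpoint correspond to negative shifts
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, poc.shape))

        rng = np.random.default_rng(6)
        base = rng.random((64, 64))
        shifted = np.roll(base, (5, -3), axis=(0, 1))
        print(poc_shift(shifted, base))   # expected (5, -3)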

  12. A geometrical perspective on localization

    NARCIS (Netherlands)

    Dulman, S.O.; Baggio, A.; Havinga, Paul J.M.; Langendoen, K.G.; Zhang, Ying; Ye, Yinyu

    2008-01-01

    A large number of localization algorithms for wireless sensor networks (WSNs) are evaluated against the Cramer-Rao Bound (CRB) as an indicator of how good the algorithm performs. The CRB defines the lower bound on the precision of an unbiased localization estimator. The CRB concept, borrowed from

  13. Robust Trajectory Design in Highly Perturbed Environments Leveraging Continuation Methods, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Research is proposed to investigate continuation methods to improve the robustness of trajectory design algorithms for spacecraft in highly perturbed dynamical...

  14. Development changes of geometric layout product, developed by means of computer aided design

    Directory of Open Access Journals (Sweden)

    С.Г. Кєворков

    2007-01-01

    Full Text Available This paper presents a methodology for forming modifications of a product geometrical mock-up created by means of a CAD system. The process of changing CAD data (assembly structures and parts) and its influence on the product structure is considered. The algorithm for creating assembly versions, which produces a product structure with a given serial number, is analysed. Algorithms for creating the CAD user environment, for restricting CAD objects, and for cancelling CAD objects are developed.

  15. Revival and robustness of Bures distance discord under decoherence channels

    International Nuclear Information System (INIS)

    Shi, Jia-dong; Wang, Dong; Ma, Yang-cheng; Ye, Liu

    2016-01-01

    In this paper, we demonstrate the revival and robustness of Bures distance discord in comparison with entanglement under local decoherent evolutions. The results show that in depolarizing channel Bures distance discord revives after a dark point of time, while entanglement will damp into death without revival. In addition, in hybrid channel the declining initial condition can enable Bures distance discord to decay more smoothly within a limited time, but speed up the death of entanglement. In this sense, Bures distance discord is typically more robust against decoherence than entanglement. Furthermore, we also provide a geometric interpretation concerning these phenomena. - Highlights: • Bures distance discord is more robust against decoherence than entanglement. • Bures distance discord revives after a dark point of time, while entanglement damps to death. • The initial condition enables Bures distance discord to damp smoothly, but it speeds up the death of entanglement. • A geometric interpretation concerning these phenomena has been provided.
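
    For reference, the distance underlying this discord measure is the standard Bures distance (a textbook definition, not specific to this paper; normalization conventions vary between authors, hence the proportionality), and the discord is the minimal such distance from the state to the set of classical-quantum (zero-discord) states:

      D_B(\rho,\sigma) = \sqrt{\,2\bigl(1 - \sqrt{F(\rho,\sigma)}\bigr)\,}, \qquad
      F(\rho,\sigma) = \Bigl[\operatorname{Tr}\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\Bigr]^{2}, \qquad
      D(\rho) \;\propto\; \min_{\chi \in \mathcal{CQ}} D_B(\rho,\chi).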

  16. Discrete Riccati equation solutions: Distributed algorithms

    Directory of Open Access Journals (Sweden)

    D. G. Lainiotis

    1996-01-01

    Full Text Available In this paper new distributed algorithms for the solution of the discrete Riccati equation are introduced. The algorithms are used to provide robust and computationally efficient solutions to the discrete Riccati equation. The proposed distributed algorithms are theoretically interesting and computationally attractive.
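
    The paper's distributed algorithms are not reproduced in the record, but the underlying object is the standard discrete Riccati recursion; a plain, non-distributed fixed-point sketch (NumPy, illustrative only, filtering form) is:

      import numpy as np

      def dare_iterate(A, C, Q, R, tol=1e-10, max_iter=10_000):
          """Iterate the discrete Riccati recursion
              P <- A P A^T - A P C^T (C P C^T + R)^{-1} C P A^T + Q
          until convergence (assumes the iteration converges; illustrative sketch)."""
          n = A.shape[0]
          P = np.eye(n)
          for _ in range(max_iter):
              S = C @ P @ C.T + R
              K = A @ P @ C.T @ np.linalg.inv(S)
              P_next = A @ P @ A.T - K @ C @ P @ A.T + Q
              if np.max(np.abs(P_next - P)) < tol:
                  return P_next
              P = P_next
          return P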

  17. On chromatic and geometrical calibration

    DEFF Research Database (Denmark)

    Folm-Hansen, Jørgen

    1999-01-01

    The main subject of the present thesis is different methods for the geometrical and chromatic calibration of cameras in various environments. For the monochromatic issues of the calibration we present the acquisition of monochrome images, the classic monochrome aberrations and the various sources...... the correct interpolation method is described. For the chromatic issues of calibration we present the acquisition of colour and multi-spectral images, the chromatic aberrations and the various lens/camera based non-uniformities of the illumination of the image plane. It is described how the monochromatic...... to design calibration targets for both geometrical and chromatic calibration are described. We present some possible systematic errors in the detection of the objects in the calibration targets, if viewed at a non-orthogonal angle, if the intensities are uneven or if the image blurring is uneven. Finally...

  18. Geometrical approach to tumor growth.

    Science.gov (United States)

    Escudero, Carlos

    2006-08-01

    Tumor growth has a number of features in common with a physical process known as molecular beam epitaxy. Both growth processes are characterized by the constraint of growth development to the body border, and surface diffusion of cells and particles at the growing edge. However, tumor growth implies an approximate spherical symmetry that makes necessary a geometrical treatment of the growth equations. The basic model was introduced in a former paper [C. Escudero, Phys. Rev. E 73, 020902(R) (2006)], and in the present work we extend our analysis and try to shed light on the possible geometrical principles that drive tumor growth. We present two-dimensional models that reproduce the experimental observations, and analyze the unexplored three-dimensional case, for which interesting conclusions on tumor growth are derived.

  19. Geometrical interpretation of optical absorption

    Energy Technology Data Exchange (ETDEWEB)

    Monzon, J. J.; Barriuso, A. G.; Sanchez-Soto, L. L. [Departamento de Optica, Facultad de Fisica, Universidad Complutense, E-28040 Madrid (Spain); Montesinos-Amilibia, J. M. [Departamento de Geometria y Topologia, Facultad de Matematicas, Universidad Complutense, E-28040 Madrid (Spain)

    2011-08-15

    We reinterpret the transfer matrix for an absorbing system in very simple geometrical terms. In appropriate variables, the system appears as performing a Lorentz transformation in a (1 + 3)-dimensional space. Using homogeneous coordinates, we map that action onto the unit sphere, which is the realm of the Klein model of hyperbolic geometry. The effects of absorption then appear as a loxodromic transformation, that is, a rhumb line crossing all the meridians at the same angle.

  20. Parametric FEM for geometric biomembranes

    Science.gov (United States)

    Bonito, Andrea; Nochetto, Ricardo H.; Sebastian Pauletti, M.

    2010-05-01

    We consider geometric biomembranes governed by an L2-gradient flow for bending energy subject to area and volume constraints (Helfrich model). We give a concise derivation of a novel vector formulation, based on shape differential calculus, and corresponding discretization via parametric FEM using quadratic isoparametric elements and a semi-implicit Euler method. We document the performance of the new parametric FEM with a number of simulations leading to dumbbell, red blood cell and toroidal equilibrium shapes while exhibiting large deformations.
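
    The Helfrich-type energy being driven to equilibrium here is, up to constants and the spontaneous-curvature term that some variants include (conventions differ on whether H denotes the total or the average of the principal curvatures), the bending energy of the membrane surface \Gamma subject to area and volume constraints:

      E(\Gamma) = \frac{\kappa}{2}\int_{\Gamma} H^2 \, dA, \qquad
      |\Gamma| = A_0, \qquad \operatorname{vol}(\Gamma) = V_0,

    with \kappa the bending rigidity; the constrained L2-gradient flow of E is what produces the dumbbell, red blood cell and toroidal equilibrium shapes mentioned above.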

  1. Geometrical approach to tumor growth

    OpenAIRE

    Escudero, Carlos

    2006-01-01

    Tumor growth has a number of features in common with a physical process known as molecular beam epitaxy. Both growth processes are characterized by the constraint of growth development to the body border, and surface diffusion of cells/particles at the growing edge. However, tumor growth implies an approximate spherical symmetry that makes necessary a geometrical treatment of the growth equations. The basic model was introduced in a former article [C. Escudero, Phys. Rev. E 73, 020902(R) (200...

  2. Advances on geometric flux optical design method

    Science.gov (United States)

    García-Botella, Ángel; Fernández-Balbuena, Antonio Álvarez; Vázquez, Daniel

    2017-09-01

    Nonimaging optics focuses on methods for designing concentrator or illuminator systems. It belongs to the area of photometry and radiometry and is governed by the laws of geometrical optics. The field vector method, which starts with the definition of the irradiance vector E, is one of the techniques used in nonimaging optics. Called the "geometrical flux vector", it has provided ideal designs. The main property of this model is its ability to estimate how radiant energy is transferred by the optical system, using the concepts of field line, flux tube and pseudopotential surface, thereby overcoming traditional ray-trace methods. Nevertheless, this model has so far been developed only at an academic level, where the characteristic optical parameters are ideal rather than real and the studied geometries are simple. The main objective of the present paper is the application of the vector field method to the analysis and design of real concentration and illumination systems. We propose the development of a calculation tool for optical simulations by vector field, using algorithms based on Fermat's principle, as an alternative to traditional ray-trace simulation tools based on the laws of reflection and refraction. This new tool provides, first, the traditional simulation results (efficiency, illuminance/irradiance calculations, angular distribution of light) with lower computation time: the photometrical information requires only a few tens of field lines, compared with the millions of rays needed today. On the other hand, the tool provides new information, such as the vector field maps produced by the system, composed of field lines and quasipotential surfaces. We show our first results with the vector field simulation tool.
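
    As an illustration of the kind of computation such a tool performs (a toy sketch with a made-up irradiance field, not the authors' Fermat-principle algorithm; the function names are hypothetical), tracing a field line of an irradiance vector field E(x) reduces to integrating dx/ds = E(x)/|E(x)|:

      import numpy as np

      def trace_field_line(E, x0, step=1e-3, n_steps=5000):
          """Trace one field line of the irradiance vector field E from seed x0.

          E: callable mapping a point (ndarray) to the irradiance vector there.
          Uses simple explicit Euler steps along the normalized field direction.
          Illustrative sketch only.
          """
          x = np.asarray(x0, dtype=float)
          line = [x.copy()]
          for _ in range(n_steps):
              e = np.asarray(E(x), dtype=float)
              norm = np.linalg.norm(e)
              if norm == 0.0:            # stagnation point: stop tracing
                  break
              x = x + step * e / norm
              line.append(x.copy())
          return np.array(line)

      # Example: field of an idealized point-like source at the origin (illustrative)
      point_source = lambda x: x / (np.linalg.norm(x) ** 3)
      line = trace_field_line(point_source, x0=[0.1, 0.0, 0.05])

    The quasipotential surfaces mentioned in the abstract are the surfaces everywhere orthogonal to such field lines.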

  3. Aplicación de algoritmos de control clásico, adaptable y robusto a sistemas dinámicos de parámetros variables; Classic, adaptable and robust control algorithm application, to variant parameter dynamic system.

    Directory of Open Access Journals (Sweden)

    Orlando Regalón Anias

    2012-11-01

    Full Text Available Many dynamic systems have first-order mathematical models with parameters that vary over time. In these cases, classical tools do not always yield a control system that is stable, has good dynamic performance and adequately rejects disturbances when the plant model deviates from the nominal one for which the design was carried out. In this work, the behaviour of three control strategies is evaluated in the presence of parameter variations: classical control, adaptive control and robust control. A comparative study of them is carried out regarding design complexity, computational cost of the implementation and sensitivity to parameter variations and/or the presence of disturbances. Conclusions are reached that provide criteria for the most suitable choice, depending on the dynamic requirements demanded by the application as well as on the technical means available.
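
    A minimal way to see the problem described above (illustrative only; the plant, gains and parameter drift are invented for the sketch, and only a fixed classical PI loop is shown, not the adaptive or robust designs compared in the paper): simulate a first-order plant whose gain and time constant drift over time and watch the closed-loop response degrade.

      import numpy as np

      def simulate_pi_on_varying_plant(kp=2.0, ki=4.0, t_end=20.0, dt=1e-3):
          """First-order plant  tau(t) * dy/dt + y = K(t) * u  under a fixed PI controller.

          K(t) and tau(t) drift with time, so a controller tuned for the nominal
          plant (K = 1, tau = 1) gradually loses performance. Illustrative sketch.
          """
          n = int(t_end / dt)
          t = np.arange(n) * dt
          y, integ = 0.0, 0.0
          out = np.empty(n)
          for i, ti in enumerate(t):
              K = 1.0 + 0.5 * np.sin(0.3 * ti)        # drifting gain
              tau = 1.0 + 0.8 * ti / t_end            # drifting time constant
              e = 1.0 - y                             # unit step reference
              integ += e * dt
              u = kp * e + ki * integ                 # classical PI law
              y += dt * (K * u - y) / tau             # explicit Euler plant update
              out[i] = y
          return t, out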

  4. Riemannian geometry and geometric analysis

    CERN Document Server

    Jost, Jürgen

    2017-01-01

    This established reference work continues to provide its readers with a gateway to some of the most interesting developments in contemporary geometry. It offers insight into a wide range of topics, including fundamental concepts of Riemannian geometry, such as geodesics, connections and curvature; the basic models and tools of geometric analysis, such as harmonic functions, forms, mappings, eigenvalues, the Dirac operator and the heat flow method; as well as the most important variational principles of theoretical physics, such as Yang-Mills, Ginzburg-Landau or the nonlinear sigma model of quantum field theory. The present volume connects all these topics in a systematic geometric framework. At the same time, it equips the reader with the working tools of the field and enables her or him to delve into geometric research.  The 7th edition has been systematically reorganized and updated. Almost no page has been left unchanged. It also includes new material, for instance on symplectic geometry, as well as the B...

  5. Geometric mean for subspace selection.

    Science.gov (United States)

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2009-02-01

    Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes that are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, the UCI Machine Learning Repository, and handwritten digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem compared with the linear dimensionality reduction step in FLDA and its several representative extensions.
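
    In symbols (notation mine; Gaussian classes with a shared covariance, as assumed above), the first criterion replaces the arithmetic mean of pairwise KL divergences maximized by FLDA's projection W with their geometric mean:

      W^{\star} = \arg\max_{W} \left( \prod_{i<j} D_{KL}\!\left(p_i^{W} \,\|\, p_j^{W}\right) \right)^{\frac{2}{c(c-1)}}
      \;=\; \arg\max_{W} \; \frac{2}{c(c-1)} \sum_{i<j} \log D_{KL}\!\left(p_i^{W} \,\|\, p_j^{W}\right),

    where p_i^W is class i after projection by W. The log-sum form makes clear why small inter-class divergences are penalized far more heavily than under the arithmetic mean, which is what mitigates the class-merging problem described above.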

  6. Robust matching for voice recognition

    Science.gov (United States)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  7. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  8. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More...... frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new structures essential....... According to Danish design rules robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review...

  9. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on 'Robustness of structures' started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  10. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...

  11. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure combined with increased requirements to efficiency in design and execution followed by increased risk of human errors has made the need of requirements to robustness of new struct...... of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines....

  12. Compactness and robustness: Applications in the solution of integral equations for chemical kinetics and electromagnetic scattering

    Science.gov (United States)

    Zhou, Yajun

    This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment of electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics. Meanwhile, exploiting the properties of
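
    The inverse Laplace problem referred to here is, in standard form (notation mine), the recovery of the rate-constant distribution g(k) from decay data:

      S(t) = \int_0^{\infty} g(k)\, e^{-k t}\, dk, \qquad g(k) \ge 0, \qquad \int_0^{\infty} g(k)\, dk = 1,

    i.e. a Fredholm integral equation of the first kind whose naive inversion is notoriously ill-posed; the "phase function approach" described above gains its numerical stability by working with continuous transformations between compact metric spaces rather than inverting this kernel directly.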

  13. A robust and fast generic voltage sag detection technique

    DEFF Research Database (Denmark)

    L. Dantas, Joacillo; Lima, Francisco Kleber A.; Branco, Carlos Gustavo C.

    2015-01-01

    In this paper, a fast and robust voltage sag detection algorithm, named VPS2D, is introduced. Using the DSOGI, the algorithm creates a virtual positive sequence voltage and monitors the fundamental voltage component of each phase. After calculating the aggregate value in the αβ-reference fram...

  14. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computers. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  15. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
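
    A minimal genetic algorithm of the kind described (a generic textbook sketch, not the software tool from the record; the fitness function and parameters are placeholders):

      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                            crossover_rate=0.9, mutation_rate=0.01):
          """Maximize `fitness` over fixed-length bit strings using tournament
          selection, single-point crossover and bit-flip mutation. Illustrative sketch."""
          pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          best = max(pop, key=fitness)
          for _ in range(generations):
              # tournament selection of parents (survival of the fittest)
              parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
              children = []
              for p1, p2 in zip(parents[::2], parents[1::2]):
                  c1, c2 = p1[:], p2[:]
                  if random.random() < crossover_rate:      # single-point crossover
                      cut = random.randint(1, n_bits - 1)
                      c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                  for c in (c1, c2):                        # bit-flip mutation
                      for i in range(n_bits):
                          if random.random() < mutation_rate:
                              c[i] = 1 - c[i]
                  children.extend([c1, c2])
              pop = children
              best = max(pop + [best], key=fitness)
          return best

      # Example: maximize the number of ones in the bit string (the "one-max" toy problem)
      solution = genetic_algorithm(fitness=sum)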

  16. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    Science.gov (United States)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration provides high-accuracy geometric coordinates for spaceborne SAR images by determining accurate geometric parameters of the Range-Doppler model from ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only used for single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve these problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image that contains GCPs. Secondly, a point tracking algorithm is used to obtain tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study the hybrid geometric calibration method, using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. GPS data extracted from a GNSS receiver are used to assess the plane accuracy after calibration. The results after geometric calibration with sparse GCPs show that the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  17. A robust multilevel simultaneous eigenvalue solver

    Science.gov (United States)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of the solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat these difficulties appropriately usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a backrotation technique. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine-level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger-type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
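
    The separation step mentioned above generalizes the Rayleigh-Ritz projection; the classical ingredient it builds on can be sketched as follows (NumPy, symmetric case, illustrative only, not the paper's ML variant):

      import numpy as np

      def rayleigh_ritz(A, V):
          """Approximate eigenpairs of a symmetric matrix A inside span(V).

          V: n x q matrix whose columns span the current approximation subspace.
          Returns Ritz values and Ritz vectors (columns); the small q x q problem
          replaces the fine-level n x n one. Illustrative sketch.
          """
          Q, _ = np.linalg.qr(V)            # orthonormal basis of the subspace
          H = Q.T @ A @ Q                   # projected (q x q) operator
          theta, Y = np.linalg.eigh(H)      # small dense eigenproblem
          X = Q @ Y                         # lift Ritz vectors back to the full space
          return theta, X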

  18. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents and its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite version and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  19. Aplicación de algoritmos de control clásico, adaptable y robusto a sistemas dinámicos de parámetros variables; Classic, adaptable and robust control algorithm application, to variant parameter dynamic system

    Directory of Open Access Journals (Sweden)

    Orlando Regalón Anias

    2012-11-01

    Full Text Available Many dynamic systems have first-order mathematical models with time-varying parameters. In these cases, classical tools do not always achieve control system stability, good dynamic performance and adequate disturbance rejection when the plant model differs from the nominal one for which the controller was designed. In this article, three control strategies are evaluated in the presence of parameter variations and disturbances: classical control, adaptive control and robust control. A comparative study is carried out, taking into account design complexity, computational cost and sensitivity. The conclusions obtained provide criteria for choosing the most adequate control strategy, according to the dynamics required by the application as well as the available technical means.

  20. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm is invented. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.