WorldWideScience

Sample records for legendre transform algorithm

  1. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were ...
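
    A minimal NumPy sketch (not the parallel Driscoll-Healy code of the paper) of the plain O(N^2) discrete Legendre transform that such fast algorithms accelerate: it recovers the Legendre coefficients of a function sampled at Gauss-Legendre nodes. The function names and the test function are illustrative only.

```python
import numpy as np

def discrete_legendre_transform(f, n):
    """Legendre coefficients a_0..a_{n-1} of f on [-1, 1] via Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)        # quadrature nodes and weights
    P = np.polynomial.legendre.legvander(x, n - 1)   # P[j, k] = P_k(x_j)
    # a_k = (2k + 1)/2 * sum_j w_j f(x_j) P_k(x_j)
    k = np.arange(n)
    return (2 * k + 1) / 2 * (P.T @ (w * f(x)))

# Example: coefficients of exp(x); a_0 should equal sinh(1) ~= 1.1752
print(discrete_legendre_transform(np.exp, 16)[:4])
```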

  2. On the efficient parallel computation of Legendre transforms

    NARCIS (Netherlands)

    Inda, M.A.; Bisseling, R.H.; Maslen, D.K.

    2001-01-01

    In this article, we discuss a parallel implementation of efficient algorithms for computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the

  3. On the efficient parallel computation of Legendre transforms

    NARCIS (Netherlands)

    Inda, M.A.; Bisseling, R.H.; Maslen, D.K.

    1999-01-01

    In this article we discuss a parallel implementation of efficient algorithms for computation of Legendre polynomial transforms and other orthogonal polynomial transforms. We develop an approach to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the

  4. A Fast, Simple, and Stable Chebyshev–Legendre Transform Using an Asymptotic Formula

    KAUST Repository

    Hale, Nicholas

    2014-02-06

    A fast, simple, and numerically stable transform for converting between Legendre and Chebyshev coefficients of a degree N polynomial in O(N(log N)^2 / log log N) operations is derived. The fundamental idea of the algorithm is to rewrite a well-known asymptotic formula for Legendre polynomials of large degree as a weighted linear combination of Chebyshev polynomials, which can then be evaluated by using the discrete cosine transform. Numerical results are provided to demonstrate the efficiency and numerical stability. Since the algorithm evaluates a Legendre expansion on an (N + 1)-point Chebyshev grid as an intermediate step, it also provides a fast transform between Legendre coefficients and values on a Chebyshev grid. © 2014 Society for Industrial and Applied Mathematics.
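
    For reference, the direct O(N^2) Legendre-to-Chebyshev coefficient conversion (the map that the fast transform above accelerates) is available in NumPy; a small hedged sketch with arbitrary example coefficients:

```python
import numpy as np

c_leg = np.random.rand(8)                        # Legendre coefficients of a degree-7 polynomial
c_cheb = np.polynomial.legendre.leg2cheb(c_leg)  # same polynomial in the Chebyshev basis (O(N^2))

# Both expansions represent the same function, e.g. at the Chebyshev points cos(j*pi/8):
x = np.cos(np.pi * np.arange(9) / 8)
print(np.allclose(np.polynomial.legendre.legval(x, c_leg),
                  np.polynomial.chebyshev.chebval(x, c_cheb)))   # True
```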

  5. The continuous Legendre transform, its inverse transform, and applications

    Directory of Open Access Journals (Sweden)

    P. L. Butzer

    1980-01-01

    This paper is concerned with the continuous Legendre transform, derived from the classical discrete Legendre transform by replacing the Legendre polynomial Pk(x) by the function Pλ(x) with λ real. Another approach to T.M. MacRobert's inversion formula is found; for this purpose an inverse Legendre transform, mapping L1(ℝ+) into L2(−1,1), is defined. Its inversion in turn is naturally achieved by the continuous Legendre transform. One application is devoted to the Shannon sampling theorem in the Legendre frame together with a new type of error estimate. The other deals with a new representation of Legendre functions giving information about their behaviour near the point x = −1.

  6. An Algorithm for the Convolution of Legendre Series

    KAUST Repository

    Hale, Nicholas; Townsend, Alex

    2014-01-01

    An O(N^2) algorithm for the convolution of compactly supported Legendre series is described. The algorithm is derived from the convolution theorem for Legendre polynomials and the recurrence relation satisfied by spherical Bessel functions. Combining with previous work yields an O(N^2) algorithm for the convolution of Chebyshev series. Numerical results are presented to demonstrate the improved efficiency over the existing algorithm. © 2014 Society for Industrial and Applied Mathematics.

  7. Fast track segment finding in the Monitored Drift Tubes of the ATLAS Muon Spectrometer using a Legendre transform algorithm

    CERN Document Server

    Ntekas, Konstantinos; The ATLAS collaboration

    2018-01-01

    The upgrade of the ATLAS first-level muon trigger for the High-Luminosity LHC foresees incorporating the precise tracking of the Monitored Drift Tubes into the current system based on Resistive Plate Chambers and Thin Gap Chambers, to improve the accuracy of the transverse momentum measurement and to control the single muon trigger rate by suppressing low-quality fake triggers. The core of the MDT trigger algorithm is the segment identification and reconstruction, which is performed per MDT chamber. The reconstructed segment positions and directions are then combined to extract the muon candidate's transverse momentum. A fast pattern recognition segment finding algorithm, called the Legendre transform, is proposed to be used for the MDT trigger, implemented in an FPGA housed on an ATCA blade.

  8. Quadratic Lagrangians and Legendre transformation

    International Nuclear Information System (INIS)

    Magnano, G.

    1988-01-01

    In recent years interest has grown in the so-called non-linear Lagrangians for gravitation. In particular, the quadratic Lagrangians are currently believed to play a fundamental role both for quantum gravity and for the super-gravity approach. The higher order and high degree of non-linearity of these theories make it very difficult to extract physical information out of them. The author discusses how the Legendre transformation can be applied to a wide class of non-linear theories: it corresponds to a conformal transformation whenever the Lagrangian depends only on the scalar curvature, while it has a more general form if the Lagrangian depends on the full Ricci tensor.

  9. Legendre transformations and Clairaut-type equations

    Energy Technology Data Exchange (ETDEWEB)

    Lavrov, Peter M., E-mail: lavrov@tspu.edu.ru [Tomsk State Pedagogical University, Kievskaya St. 60, 634061 Tomsk (Russian Federation); National Research Tomsk State University, Lenin Av. 36, 634050 Tomsk (Russian Federation); Merzlikin, Boris S., E-mail: merzlikin@tspu.edu.ru [National Research Tomsk Polytechnic University, Lenin Av. 30, 634050 Tomsk (Russian Federation)

    2016-05-10

    It is noted that the Legendre transformations in the standard formulation of quantum field theory have the form of functional Clairaut-type equations. It is shown that in the presence of composite fields the Clairaut-type form holds after loop corrections are taken into account. A new solution to the functional Clairaut-type equation appearing in field theories with composite fields is found.

  10. Transformation formulas for legendre coefficients of double-differential cross sections

    International Nuclear Information System (INIS)

    Shi Xiangjun; Zhang Jingshang

    1989-01-01

    Approximate analytical formulas have been derived for the transformation of Legendre coefficients of double-differential continuum cross sections of two-body nuclear reactions from the center-of-mass to the laboratory system. This transformation differs from that of elastic-scattering angular distribution coefficients in its accuracy, which depends not only upon the target mass but also upon the outgoing energies. A fast code has been written to transform Legendre coefficients of neutron inelastic scattering cross sections. The calculations have been carried out using a recently introduced numerical integration method for more complicated problems in which the energy spectrum is either an evaporation spectrum or a spectrum obtained from a (pre-)compound model. The results are quite satisfactory provided that the target mass or the outgoing energy is not too low

  11. A new representation for ground states and its Legendre transforms

    International Nuclear Information System (INIS)

    Cedillo, A.

    1994-01-01

    The ground-state energy of an electronic system is a functional of the number of electrons (N) and the external potential (v): E = E(N,v); this is the energy representation for ground states. In 1982, Nalewajski defined the Legendre transforms of this representation, taking advantage of the strict concavity of E with respect to its variables (concave with respect to v and convex with respect to N), and he also constructed a scheme for the reduction of derivatives of his representations. Unfortunately, N and the electronic density (ρ) were the independent variables of one of these representations, but ρ depends explicitly on N. In this work, this problem is avoided by using the energy per particle (ε) as one of the basic variables, and the Legendre transformations can be defined. A procedure for the reduction of derivatives is generated for the four new representations and, in contrast to Nalewajski's procedure, it only includes derivatives of the four representations. Finally, the reduction of derivatives is used to test some relationships between the hardness and softness kernels

  12. A Fast, Simple, and Stable Chebyshev–Legendre Transform Using an Asymptotic Formula

    KAUST Repository

    Hale, Nicholas; Townsend, Alex

    2014-01-01

    The fundamental idea of the algorithm is to rewrite a well-known asymptotic formula for Legendre polynomials of large degree as a weighted linear combination of Chebyshev polynomials, which can then be evaluated by using the discrete cosine transform. Numerical results are provided to demonstrate the efficiency

  13. Convexity Conditions and the Legendre-Fenchel Transform for the Product of Finitely Many Positive Definite Quadratic Forms

    International Nuclear Information System (INIS)

    Zhao Yunbin

    2010-01-01

    While the product of finitely many convex functions has been investigated in the field of global optimization, some fundamental issues such as the convexity condition and the Legendre-Fenchel transform for the product function remain unresolved. Focusing on quadratic forms, this paper is aimed at addressing the question: when is the product of finitely many positive definite quadratic forms convex, and what is the Legendre-Fenchel transform for it? First, we show that the convexity of the product is determined intrinsically by the condition number of the so-called 'scaled matrices' associated with the quadratic forms involved. The main result claims that if the condition numbers of these scaled matrices are bounded above by an explicit constant (which depends only on the number of quadratic forms involved), then the product function is convex. Second, we prove that the Legendre-Fenchel transform for the product of positive definite quadratic forms can be expressed, and that the computation of the transform amounts to finding the solution to a system of equations (or, equivalently, finding a Brouwer fixed point of a mapping) with a special structure. Thus, a broader question than the open 'Question 11' in Hiriart-Urruty (SIAM Rev. 49, 225-273, 2007) is addressed in this paper.

  14. Fast track segment finding in the Monitored Drift Tubes (MDT) of the ATLAS Muon Spectrometer using a Legendre transform algorithm

    CERN Document Server

    Ntekas, Konstantinos; The ATLAS collaboration

    2018-01-01

    Many of the physics goals of ATLAS in the High-Luminosity LHC era, including precision studies of the Higgs boson, require an unprescaled single muon trigger with a 20 GeV threshold. The selectivity of the current ATLAS first-level muon trigger is limited by the moderate spatial resolution of the muon trigger chambers. By incorporating the precise tracking of the MDT, the muon transverse momentum can be measured at the trigger level with an accuracy close to that of the offline reconstruction, sharpening the trigger turn-on curves and reducing the single muon trigger rate. A novel algorithm is proposed which reconstructs segments from MDT hits in an FPGA and finds tracks within the tight latency constraints of the ATLAS first-level muon trigger. The algorithm represents MDT drift circles as curves in the Legendre space and returns one or more segment lines tangent to the maximum possible number of drift circles. This algorithm is implemented without the need of resource- and time-consuming hit position calcul...
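
    A hedged toy illustration (not the ATLAS FPGA implementation) of the idea described above: each drift circle (x_i, y_i, r_i) votes in a discretized Legendre/Hough parameter space for the lines tangent to it, which satisfy r = x_i cos(theta) + y_i sin(theta) ± r_i; the most-voted (theta, r) bin gives the segment candidate. All parameter ranges and the example circles are made-up values.

```python
import numpy as np

def legendre_segment_finder(circles, n_theta=360, r_range=(-100.0, 100.0), n_r=400):
    """Return (theta, r, votes) of the line tangent to the most drift circles."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    r_lo, r_hi = r_range
    acc = np.zeros((n_theta, n_r), dtype=np.int32)           # accumulator in (theta, r)
    for x, y, r in circles:
        proj = x * np.cos(thetas) + y * np.sin(thetas)
        for sign in (+1.0, -1.0):                            # two tangent lines per theta
            idx = np.round((proj + sign * r - r_lo) / (r_hi - r_lo) * (n_r - 1)).astype(int)
            ok = (idx >= 0) & (idx < n_r)
            acc[np.arange(n_theta)[ok], idx[ok]] += 1
    it, ir = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[it], r_lo + ir * (r_hi - r_lo) / (n_r - 1), int(acc.max())

# Three tubes centred on the x-axis, all tangent to the line y = 1 (theta ~ pi/2, r ~ 1).
print(legendre_segment_finder([(0.0, 0.0, 1.0), (5.0, 0.0, 1.0), (10.0, 0.0, 1.0)]))
```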

  15. Prediction of Shanghai Index based on Additive Legendre Neural Network

    Directory of Open Access Journals (Sweden)

    Yang Bin

    2017-01-01

    In this paper, a novel Legendre neural network model is proposed, namely the additive Legendre neural network (ALNN). A new hybrid evolutionary method based on the binary particle swarm optimization (BPSO) algorithm and the firefly algorithm is proposed to optimize the structure and parameters of the ALNN model. The Shanghai Stock Exchange Composite Index is used to evaluate the performance of ALNN. Results reveal that ALNN performs better than the LNN model.

  16. Composite Gauss-Legendre Quadrature with Error Control

    Science.gov (United States)

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
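
    A hedged sketch of the general technique named in the title (not the article's specific error-control scheme): composite Gauss-Legendre quadrature over equal panels, with the approximation error estimated by comparing against a run with twice as many panels. The tolerance and rule order are arbitrary choices.

```python
import numpy as np

def composite_gauss_legendre(f, a, b, panels, n=4):
    """n-point Gauss-Legendre rule applied on `panels` equal subintervals of [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    edges = np.linspace(a, b, panels + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)         # affine map of nodes to [lo, hi]
        total += half * np.dot(w, f(mid + half * x))
    return total

def integrate_with_error(f, a, b, tol=1e-10, n=4):
    panels, coarse = 1, composite_gauss_legendre(f, a, b, 1, n)
    while True:
        fine = composite_gauss_legendre(f, a, b, 2 * panels, n)
        if abs(fine - coarse) < tol:
            return fine, abs(fine - coarse)                  # value and error estimate
        panels, coarse = 2 * panels, fine

print(integrate_with_error(np.sin, 0.0, np.pi))              # exact value is 2
```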

  17. Identification of chaotic memristor systems based on piecewise adaptive Legendre filters

    International Nuclear Information System (INIS)

    Zhao, Yibo; Zhang, Xiuzai; Xu, Jin; Guo, Yecai

    2015-01-01

    The memristor is a nonlinear device, which plays an important role in the design and implementation of chaotic systems. In order to understand in depth the complex nonlinear dynamic behaviors of chaotic memristor systems, modeling or identification of the nonlinear model is a very important premise. This paper presents a chaotic memristor system identification method based on piecewise adaptive Legendre filters. Threshold decomposition is carried out on the input vector, so that the input signal subintervals obtained by the decomposition satisfy the convergence condition of the adaptive Legendre filters. Then the adaptive Legendre filter structure and the adaptive weight update algorithm are derived. Finally, computer simulation results show the effectiveness as well as the fast convergence characteristics of the method.

  18. Legendre transform structure and extremal properties of the relative Fisher information

    Energy Technology Data Exchange (ETDEWEB)

    Venkatesan, R.C., E-mail: ravi@systemsresearchcorp.com [Systems Research Corporation, Aundh, Pune 411007 (India); Plastino, A., E-mail: plastino@fisica.unlp.edu.ar [IFLP, National University La Plata and National Research Council (CONICET) C.C., 727 1900 La Plata (Argentina)

    2014-04-01

    Variational extremization of the relative Fisher information (RFI, hereafter) is performed. Reciprocity relations, akin to those of thermodynamics, are derived, employing the extremal results of the RFI expressed in terms of probability amplitudes. A time independent Schrödinger-like equation (Schrödinger-like link) for the RFI is derived. The concomitant Legendre transform structure (LTS, hereafter) is developed by utilizing a generalized RFI-Euler theorem, which shows that the entire mathematical structure of thermodynamics translates into the RFI framework, both for equilibrium and non-equilibrium cases. The qualitatively distinct nature of the present results vis-à-vis those of prior studies utilizing the Shannon entropy and/or the Fisher information measure (FIM, hereafter) is discussed. A principled relationship between the RFI and the FIM frameworks is derived. The utility of this relationship is demonstrated by an example wherein the energy eigenvalues of the Schrödinger-like link for the RFI are inferred solely using the quantum mechanical virial theorem and the LTS of the RFI.

  19. Composite Gauss-Legendre Formulas for Solving Fuzzy Integration

    Directory of Open Access Journals (Sweden)

    Xiaobin Guo

    2014-01-01

    Two numerical integration rules based on the composition of Gauss-Legendre formulas for the integration of fuzzy-number-valued functions are investigated in this paper. The constructions of the methods are presented and the corresponding convergence theorems are shown in detail. Finally, two numerical examples are given to illustrate the proposed algorithms.

  20. Matrix form of Legendre polynomials for solving linear integro-differential equations of high order

    Science.gov (United States)

    Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.

    2017-04-01

    This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations. The latter is solved by the Gauss elimination method. The accuracy and validity of this method are discussed by solving two numerical examples and through comparisons with wavelet and other methods.

  1. On computation and use of Fourier coefficients for associated Legendre functions

    Science.gov (United States)

    Gruber, Christian; Abrykosov, Oleh

    2016-06-01

    The computation of spherical harmonic series in very high resolution is known to be delicate in terms of performance and numerical stability. A major problem is to keep results inside the numerical range of the data type used during calculations, as under-/overflow arises. Extended data types are currently not desirable since the arithmetic complexity will grow exponentially with higher resolution levels. If the associated Legendre functions are computed in the spectral domain, then regular grid transformations can be applied to be highly efficient and convenient for derived quantities as well. In this article, we compare three recursive computations of the associated Legendre functions as trigonometric series, thereby ensuring a defined numerical range for each constituent wave number, separately. The results to a high degree and order show the numerical strength of the proposed method. First, the evaluation of Fourier coefficients of the associated Legendre functions has been done with respect to the floating-point precision requirements. Secondly, the numerical accuracy in the cases of standard double and long double precision arithmetic is demonstrated. Following Bessel's inequality, the obtained accuracy estimates of the Fourier coefficients are directly transferable to the associated Legendre functions themselves and to derived functionals as well. Therefore, they can provide an essential insight to modern geodetic applications that depend on efficient spherical harmonic analysis and synthesis beyond 5 × 5 arcmin resolution.

  2. Development of pattern recognition algorithms for the central drift chamber of the Belle II detector

    Energy Technology Data Exchange (ETDEWEB)

    Trusov, Viktor

    2016-11-04

    In this thesis, the development of one of the pattern recognition algorithms for the Belle II experiment, based on conformal and Legendre transformations, is presented. In order to optimize the performance of the algorithm (CPU time and efficiency), specialized processing steps have been introduced. To demonstrate the achieved results, Monte-Carlo-based efficiency measurements of the tracking algorithms in the Central Drift Chamber (CDC) have been performed.

  3. Finger crease pattern recognition using Legendre moments and principal component analysis

    Science.gov (United States)

    Luo, Rongfang; Lin, Tusheng

    2007-03-01

    The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and can represent the principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to the moment feature matrix rather than the original image matrix to obtain the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% has been obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
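
    A hedged sketch of plain discrete Legendre moments of a 2-D image patch (the paper applies them to the ROI under a Radon transform, which is not reproduced here); the grid mapping to [-1, 1] x [-1, 1] and the normalization follow the standard definition, and the random image is a placeholder.

```python
import numpy as np

def legendre_moments(img, max_order):
    """Approximate Legendre moments lambda_qp of an image mapped to [-1, 1] x [-1, 1]."""
    M, N = img.shape
    y = -1.0 + 2.0 * np.arange(M) / (M - 1)                  # row coordinates
    x = -1.0 + 2.0 * np.arange(N) / (N - 1)                  # column coordinates
    Py = np.polynomial.legendre.legvander(y, max_order)      # Py[i, q] = P_q(y_i)
    Px = np.polynomial.legendre.legvander(x, max_order)      # Px[j, p] = P_p(x_j)
    q = np.arange(max_order + 1)
    norm = np.outer(2 * q + 1, 2 * q + 1) / 4.0
    dx, dy = 2.0 / (N - 1), 2.0 / (M - 1)
    # lambda_qp ~= norm_qp * sum_ij P_q(y_i) * img[i, j] * P_p(x_j) * dy * dx
    return norm * (Py.T @ img @ Px) * dy * dx

img = np.random.rand(64, 64)                                 # placeholder ROI
print(legendre_moments(img, 5).shape)                        # (6, 6) moment feature matrix
```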

  4. Application of Legendre spectral-collocation method to delay differential and stochastic delay differential equation

    Science.gov (United States)

    Khan, Sami Ullah; Ali, Ishtiaq

    2018-03-01

    Explicit solutions to delay differential equations (DDEs) and stochastic delay differential equations (SDDEs) can rarely be obtained; therefore, numerical methods are adopted to solve them. On the other hand, due to the unstable nature of both DDEs and SDDEs, their numerical solution is also not straightforward and requires more attention. In this study, we derive an efficient numerical scheme for DDEs and SDDEs based on the Legendre spectral-collocation method, which has proved to be a numerical method that can significantly speed up the computation. The method transforms the given differential equation into a matrix equation by means of the Legendre collocation points, which corresponds to a system of algebraic equations with unknown Legendre coefficients. The efficiency of the proposed method is confirmed by some numerical examples. We find that our numerical technique is in very good agreement with other methods, with less computational effort.

  5. Essential imposition of Neumann condition in Galerkin-Legendre elliptic solvers

    CERN Document Server

    Auteri, F; Quartapelle, L

    2003-01-01

    A new Galerkin-Legendre direct spectral solver for the Neumann problem associated with Laplace and Helmholtz operators in rectangular domains is presented. The algorithm differs from other Neumann spectral solvers by the high sparsity of the matrices, exploited in conjunction with the direct product structure of the problem. The homogeneous boundary condition is satisfied exactly by expanding the unknown variable into a polynomial basis of functions which are built upon the Legendre polynomials and have a zero slope at the interval extremes. A double diagonalization process is employed pivoting around the eigenstructure of the pentadiagonal mass matrices in both directions, instead of the full stiffness matrices encountered in the classical variational formulation of the problem with a weak natural imposition of the derivative boundary condition. Nonhomogeneous Neumann data are accounted for by means of a lifting. Numerical results are given to illustrate the performance of the proposed spectral elliptic solv...

  6. Fast and Accurate Computation of Gauss–Legendre and Gauss–Jacobi Quadrature Nodes and Weights

    KAUST Repository

    Hale, Nicholas; Townsend, Alex

    2013-01-01

    An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.

  7. Fast and Accurate Computation of Gauss–Legendre and Gauss–Jacobi Quadrature Nodes and Weights

    KAUST Repository

    Hale, Nicholas

    2013-03-06

    An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.
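
    A hedged sketch of the basic Newton-on-Legendre idea (not the authors' O(n) asymptotic-formula implementation): nodes are refined from classical cosine initial guesses, with P_n and P_n' evaluated by the three-term recurrence, and weights obtained from w_k = 2 / ((1 - x_k^2) P_n'(x_k)^2). The fixed iteration count is a simple choice.

```python
import numpy as np

def gauss_legendre(n, iters=12):
    k = np.arange(1, n + 1)
    x = np.cos(np.pi * (4 * k - 1) / (4 * n + 2))     # classical initial guesses
    for _ in range(iters):
        p0, p1 = np.ones_like(x), x                   # P_0(x), P_1(x)
        for m in range(2, n + 1):                     # three-term recurrence up to P_n
            p0, p1 = p1, ((2 * m - 1) * x * p1 - (m - 1) * p0) / m
        dp = n * (x * p1 - p0) / (x * x - 1)          # P_n'(x)
        x = x - p1 / dp                               # Newton step
    w = 2.0 / ((1 - x * x) * dp * dp)
    return np.sort(x), w[np.argsort(x)]

x, w = gauss_legendre(20)
print(np.allclose(x, np.polynomial.legendre.leggauss(20)[0]))   # True
```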

  8. Definite Integrals using Orthogonality and Integral Transforms

    Directory of Open Access Journals (Sweden)

    Howard S. Cohl

    2012-10-01

    We obtain definite integrals for products of associated Legendre functions with Bessel functions, associated Legendre functions, and Chebyshev polynomials of the first kind using orthogonality and integral transforms.

  9. Higher-Order Hierarchical Legendre Basis Functions in Applications

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Jørgensen, Erik; Meincke, Peter

    2007-01-01

    The higher-order hierarchical Legendre basis functions have been developed for effective solution of integral equations with the method of moments. They are derived from orthogonal Legendre polynomials modified to enforce normal continuity between neighboring mesh elements, while preserving a high...

  10. Superiority of Legendre polynomials to Chebyshev polynomial in ...

    African Journals Online (AJOL)

    In this paper, we prove the superiority of Legendre polynomials to Chebyshev polynomials in solving first-order ordinary differential equations with rational coefficients. We generated shifted Chebyshev, Legendre and canonical polynomials, which deal with solving differential equations by first choosing Chebyshev ...

  11. Can we use the known fast spherical Fourier transforms in numerical meteorology?

    OpenAIRE

    Sprengel, F.

    2001-01-01

    In numerical meteorology, there are many solvers using spectral methods. Most of the computing time is spent computing the discrete Legendre function transforms. The aim of this paper is to clarify whether the recently published fast Legendre function transforms can be used here.

  12. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.

  13. Fast numerical algorithm for the linear canonical transform.

    Science.gov (United States)

    Hennelly, Bryan M; Sheridan, John T

    2005-05-01

    The linear canonical transform (LCT) describes the effect of any quadratic phase system (QPS) on an input optical wave field. Special cases of the LCT include the fractional Fourier transform (FRT), the Fourier transform (FT), and the Fresnel transform (FST) describing free-space propagation. Currently there are numerous efficient algorithms used (for purposes of numerical simulation in the area of optical signal processing) to calculate the discrete FT, FRT, and FST. All of these algorithms are based on the use of the fast Fourier transform (FFT). In this paper we develop theory for the discrete linear canonical transform (DLCT), which is to the LCT what the discrete Fourier transform (DFT) is to the FT. We then derive the fast linear canonical transform (FLCT), an N log N algorithm for its numerical implementation by an approach similar to that used in deriving the FFT from the DFT. Our algorithm is significantly different from the FFT, is based purely on the properties of the LCT, and can be used for FFT, FRT, and FST calculations and, in the most general case, for the rapid calculation of the effect of any QPS.

  14. A fast butterfly algorithm for generalized Radon transforms

    KAUST Repository

    Hu, Jingwei

    2013-06-21

    Generalized Radon transforms, such as the hyperbolic Radon transform, cannot be implemented as efficiently in the frequency domain as convolutions, thus limiting their use in seismic data processing. We have devised a fast butterfly algorithm for the hyperbolic Radon transform. The basic idea is to reformulate the transform as an oscillatory integral operator and to construct a blockwise low-rank approximation of the kernel function. The overall structure follows the Fourier integral operator butterfly algorithm. For 2D data, the algorithm runs in complexity O(N^2 log N), where N depends on the maximum frequency and offset in the data set and the range of parameters (intercept time and slowness) in the model space. From a series of studies, we found that this algorithm can be significantly more efficient than the conventional time-domain integration. © 2013 Society of Exploration Geophysicists.

  15. On the analytic continuation of functions defined by Legendre series

    International Nuclear Information System (INIS)

    Grinstein, F.F.

    1981-07-01

    An infinite diagonal sequence of Punctual Pade Approximants is considered for the approximate analytical continuation of a function defined by a formal Legendre series. The technique is tested in the case of two series with exactly known analytical sum: the generating function for Legendre polynomials and the Coulombian scattering amplitude. (author)

  16. Computation of temperature-dependent Legendre moments of a double-differential elastic cross section

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Larson, N.M.; Leal, L.C.; Williams, M.L.; Becker, B.; Dagan, R.

    2011-01-01

    A general expression for temperature-dependent Legendre moments of a double-differential elastic scattering cross section was derived by Ouisloumen and Sanchez [Nucl. Sci. Eng. 107, 189-200 (1991)]. Attempts to compute this expression are hindered by the three-fold nested integral, limiting their practical application to just the zeroth Legendre moment of isotropic scattering. It is shown that the two innermost integrals can be evaluated analytically to all orders of Legendre moments, and for anisotropic scattering, by a recursive application of the integration by parts method. For this method to work, the anisotropic angular distribution in the center of mass is expressed as an expansion in Legendre polynomials. The first several Legendre moments of elastic scattering of neutrons on 238U are computed at T=1000 K at an incoming energy of 6.5 eV for isotropic scattering in the center-of-mass frame. Legendre moments of the anisotropic angular distribution given via Blatt-Biedenharn coefficients are computed at 1 keV. The results are in agreement with those computed by the Monte Carlo method. (author)

  17. Congruences concerning Legendre polynomials III

    OpenAIRE

    Sun, Zhi-Hong

    2010-01-01

    Let $p>3$ be a prime, and let $R_p$ be the set of rational numbers whose denominator is coprime to $p$. Let $\{P_n(x)\}$ be the Legendre polynomials. In this paper we mainly show that for $m,n,t\in R_p$ with $m ...

  18. Inversion algorithms for the spherical Radon and cosine transform

    International Nuclear Information System (INIS)

    Louis, A K; Riplinger, M; Spiess, M; Spodarev, E

    2011-01-01

    We consider two integral transforms which are frequently used in integral geometry and related fields, namely the spherical Radon and cosine transform. Fast algorithms are developed which invert the respective transforms in a numerically stable way. So far, only theoretical inversion formulae or algorithms for atomic measures have been derived, which are not so important for applications. We focus on two- and three-dimensional cases, where we also show that our method leads to a regularization. Numerical results are presented and show the validity of the resulting algorithms. First, we use synthetic data for the inversion of the Radon transform. Then we apply the algorithm for the inversion of the cosine transform to reconstruct the directional distribution of line processes from finitely many intersections of their lines with test lines (2D) or planes (3D), respectively. Finally we apply our method to analyse a series of microscopic two- and three-dimensional images of a fibre system

  19. Shifted Legendre method with residual error estimation for delay linear Fredholm integro-differential equations

    Directory of Open Access Journals (Sweden)

    Şuayip Yüzbaşı

    2017-03-01

    In this paper, we suggest a matrix method for obtaining the approximate solutions of the delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with the known results.

  20. The finite Fourier transform of classical polynomials

    OpenAIRE

    Dixit, Atul; Jiu, Lin; Moll, Victor H.; Vignat, Christophe

    2014-01-01

    The finite Fourier transform of a family of orthogonal polynomials $A_{n}(x)$ is the usual transform of the polynomial extended by $0$ outside its natural domain. Explicit expressions are given for the Legendre, Jacobi, Gegenbauer and Chebyshev families.

  1. Implementation of Period-Finding Algorithm by Means of Simulating Quantum Fourier Transform

    Directory of Open Access Journals (Sweden)

    Zohreh Moghareh Abed

    2010-01-01

    In this paper, we introduce the quantum Fourier transform as a key ingredient of many useful algorithms. These algorithms provide solutions to problems that are considered intractable on a classical computer. The quantum Fourier transform is propounded as the key to the quantum phase estimation algorithm. Our aim in this paper is the implementation of the period-finding algorithm, which a quantum computer solves exponentially faster than a classical one. Since the quantum phase estimation algorithm is the key to the period-finding problem, by means of simulating the quantum Fourier transform we are able to implement the period-finding algorithm. In this paper, the simulation of the quantum Fourier transform is carried out with Matlab software.
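
    A hedged sketch of the same simulation idea in NumPy rather than Matlab (not the authors' code): the QFT is built as an explicit unitary matrix and applied to a state with a hidden period r, and the output probability concentrates on multiples of 2^n / r, which is what the period-finding step reads off. The qubit count and period are arbitrary example values.

```python
import numpy as np

n = 6                                    # number of qubits, dimension N = 2^n = 64
N = 2 ** n
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
QFT = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)     # QFT as a dense unitary matrix

r = 8                                    # hidden period
state = np.zeros(N, dtype=complex)
state[::r] = 1.0                         # equal superposition over multiples of r
state /= np.linalg.norm(state)

probs = np.abs(QFT @ state) ** 2
print(np.nonzero(probs > 1e-9)[0])       # peaks at multiples of N/r: [0, 8, 16, ..., 56]
```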

  2. Extended-Maxima Transform Watershed Segmentation Algorithm for Touching Corn Kernels

    Directory of Open Access Journals (Sweden)

    Yibo Qin

    2013-01-01

    Touching corn kernels are usually oversegmented by the traditional watershed algorithm. This paper proposes a modified watershed segmentation algorithm based on the extended-maxima transform. Firstly, a distance-transformed image is processed by the extended-maxima transform in the range of the optimized threshold value. Secondly, the binary image obtained by the preceding process is run through the watershed segmentation algorithm, and the watershed ridge lines are superimposed on the original image, so that touching corn kernels are separated into segments. Fifty images, all containing 400 corn kernels, were tested. Experimental results showed that the segmentation achieved by the improved algorithm is satisfactory, and the accuracy of segmentation is as high as 99.87%.
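
    A hedged sketch of the described pipeline using SciPy and scikit-image (an assumption; the paper's own implementation and parameters are not given here): distance transform, extended-maxima (h-maxima) transform to obtain one marker per kernel, then marker-controlled watershed. The suppression depth h and the toy image are placeholders.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def split_touching_objects(binary, h=2.0):
    dist = ndi.distance_transform_edt(binary)         # distance to background
    maxima = h_maxima(dist, h)                        # suppress maxima shallower than h
    markers, _ = ndi.label(maxima)                    # one marker per object
    return watershed(-dist, markers, mask=binary)     # flood from markers inside the mask

# Toy example: two overlapping discs standing in for touching kernels.
yy, xx = np.mgrid[0:80, 0:120]
binary = ((xx - 40) ** 2 + (yy - 40) ** 2 < 30 ** 2) | ((xx - 80) ** 2 + (yy - 40) ** 2 < 30 ** 2)
print(split_touching_objects(binary).max())           # 2 separated segments
```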

  3. Fast algorithm for computing complex number-theoretic transforms

    Science.gov (United States)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix FFT algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.

  4. A fast butterfly algorithm for generalized Radon transforms

    KAUST Repository

    Hu, Jingwei; Fomel, Sergey; Demanet, Laurent; Ying, Lexing

    2013-01-01

    Generalized Radon transforms, such as the hyperbolic Radon transform, cannot be implemented as efficiently in the frequency domain as convolutions, thus limiting their use in seismic data processing. We have devised a fast butterfly algorithm

  5. Image Retrieval Algorithm Based on Discrete Fractional Transforms

    Science.gov (United States)

    Jindal, Neeru; Singh, Kulbir

    2013-06-01

    The discrete fractional transforms are a signal processing tool which suggests computational algorithms and solutions for various sophisticated applications. In this paper, a new technique to retrieve an encrypted and scrambled image based on discrete fractional transforms has been proposed. A two-dimensional image was encrypted using discrete fractional transforms with three fractional orders and two random phase masks placed in the two intermediate planes. The significant feature of discrete fractional transforms is the extra degree of freedom that is provided by their fractional orders. The security strength was enhanced (1024!)^4 times by scrambling the encrypted image. In the decryption process, image retrieval is sensitive to both the correct fractional-order keys and the scrambling algorithm. The proposed approach makes a brute force attack infeasible. Mean square error and relative error are the parameters used to verify the validity of the proposed method.

  6. Improved FHT Algorithms for Fast Computation of the Discrete Hartley Transform

    Directory of Open Access Journals (Sweden)

    M. T. Hamood

    2013-05-01

    In this paper, by using the symmetrical properties of the discrete Hartley transform (DHT), an improved radix-2 fast Hartley transform (FHT) algorithm with arithmetic complexity comparable to that of the real-valued fast Fourier transform (RFFT) is developed. It has a simple and regular butterfly structure and possesses the in-place computation property. Furthermore, using the same principles, the development can be extended to more efficient radix-based FHT algorithms. An example for the improved radix-4 FHT algorithm is given to show the validity of the presented method. The arithmetic complexity of the new algorithms is computed and then compared with that of the existing FHT algorithms. The results of these comparisons show that the developed algorithms reduce the number of multiplications and additions considerably.
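
    As a hedged aside (not the radix-2 FHT of the paper), the DHT of a real sequence can be checked against a complex FFT through the identity H[k] = Re(X[k]) − Im(X[k]), since the cas kernel is cos(2πnk/N) + sin(2πnk/N); a small NumPy verification against the direct O(N^2) definition:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT identity H[k] = Re(X[k]) - Im(X[k])."""
    X = np.fft.fft(x)
    return X.real - X.imag

x = np.random.rand(8)
N = len(x)
n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
H_direct = (np.cos(2 * np.pi * n * k / N) + np.sin(2 * np.pi * n * k / N)) @ x   # O(N^2) definition
print(np.allclose(dht(x), H_direct))   # True
```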

  7. Fast Algorithm for Computing the Discrete Hartley Transform of Type-II

    Directory of Open Access Journals (Sweden)

    Mounir Taha Hamood

    2016-06-01

    The generalized discrete Hartley transforms (GDHTs) have proved to be an efficient alternative to the generalized discrete Fourier transforms (GDFTs) for real-valued data applications. In this paper, the development of a direct computation of the radix-2 decimation-in-time (DIT) algorithm for the fast calculation of the GDHT of type-II (DHT-II) is presented. The mathematical analysis and the implementation of the developed algorithm are derived, showing that this algorithm possesses a regular structure and can be implemented in-place for efficient memory utilization. The performance of the proposed algorithm is analyzed and the computational complexity is calculated for different transform lengths. A comparison between this algorithm and existing DHT-II algorithms shows that it can be considered as a good compromise between the structural and computational complexities.

  8. Comment on 'Analytical results for a Bessel function times Legendre polynomials class integrals'

    International Nuclear Information System (INIS)

    Cregg, P J; Svedlindh, P

    2007-01-01

    A result is obtained, stemming from Gegenbauer, where the products of certain Bessel functions and exponentials are expressed in terms of an infinite series of spherical Bessel functions and products of associated Legendre functions. Closed form solutions for integrals involving Bessel functions times associated Legendre functions times exponentials, recently elucidated by Neves et al (J. Phys. A: Math. Gen. 39 L293), are then shown to result directly from the orthogonality properties of the associated Legendre functions. This result offers greater flexibility in the treatment of classical Heisenberg chains and may do so in other problems such as occur in electromagnetic diffraction theory. (comment)

  9. Transformation Algorithm of Dielectric Response in Time-Frequency Domain

    Directory of Open Access Journals (Sweden)

    Ji Liu

    2014-01-01

    A transformation algorithm for the dielectric response from the time domain to the frequency domain is presented. In order to shorten the measuring time of low or ultralow frequency dielectric response characteristics, the transformation algorithm is used in this paper to transform the time-domain relaxation current into the frequency-domain current for calculating the low-frequency dielectric dissipation factor. In addition, a comparison of the calculation results with actual test data shows that the two coincide over a wide range of low frequencies. Meanwhile, the time-domain test data of depolarization currents in dry and moist pressboards are converted into frequency-domain results on the basis of the transformation. The frequency-domain curves of the complex capacitance and the dielectric dissipation factor in the low frequency range are obtained. Test results of polarization and depolarization currents (PDC) in pressboards are also given at different voltages and polarization times. It is demonstrated from the experimental results that the polarization and depolarization currents are affected significantly by the moisture content of the test pressboards, and that the transformation algorithm is effective at ultralow frequencies down to 10^-3 Hz. Data analysis and interpretation of the test results conclude that time-frequency domain dielectric response analysis can be used for assessing the insulation system in power transformers.

  10. Fast parallel algorithms for the x-ray transform and its adjoint.

    Science.gov (United States)

    Gao, Hao

    2012-11-01

    Iterative reconstruction methods often offer better imaging quality and allow for reconstructions with lower imaging dose than classical methods in computed tomography. However, the computational speed is a major concern for these iterative methods, for which the x-ray transform and its adjoint are the two most time-consuming components. The speed issue becomes even more notable for 3D imaging such as cone beam or helical scans, since the x-ray transform and its adjoint are frequently computed, as there is usually not enough computer memory to save the corresponding system matrix. The purpose of this paper is to optimize the algorithm for computing the x-ray transform and its adjoint, and their parallel computation. Fast and highly parallelizable algorithms for the x-ray transform and its adjoint are proposed for the infinitely narrow beam in both 2D and 3D. The extension of these fast algorithms to the finite-size beam is proposed in 2D and discussed in 3D. The CPU and GPU codes are available at https://sites.google.com/site/fastxraytransform. The proposed algorithm is faster than Siddon's algorithm for computing the x-ray transform. In particular, the improvement for the parallel computation can be an order of magnitude. The authors have proposed fast and highly parallelizable algorithms for the x-ray transform and its adjoint, which are extendable to the finite-size beam. The proposed algorithms are suitable for parallel computing in the sense that the computational cost per parallel thread is O(1).

  11. Multi-stage phase retrieval algorithm based upon the gyrator transform.

    Science.gov (United States)

    Rodrigo, José A; Duadi, Hamootal; Alieva, Tatiana; Zalevsky, Zeev

    2010-01-18

    The gyrator transform is a useful tool for optical information processing applications. In this work we propose a multi-stage phase retrieval approach based on this operation as well as on the well-known Gerchberg-Saxton algorithm. It results in an iterative algorithm able to retrieve the phase information using several measurements of the gyrator transform power spectrum. The viability and performance of the proposed algorithm is demonstrated by means of several numerical simulations and experimental results.

  12. Multi-stage phase retrieval algorithm based upon the gyrator transform

    OpenAIRE

    Rodrigo Martín-Romo, José Augusto; Duadi, Hamootal; Alieva, Tatiana Krasheninnikova; Zalevsky, Zeev

    2010-01-01

    The gyrator transform is a useful tool for optical information processing applications. In this work we propose a multi-stage phase retrieval approach based on this operation as well as on the well-known Gerchberg-Saxton algorithm. It results in an iterative algorithm able to retrieve the phase information using several measurements of the gyrator transform power spectrum. The viability and performance of the proposed algorithm is demonstrated by means of several numerical simulations and exp...

  13. The Watershed Transform : Definitions, Algorithms and Parallelization Strategies

    NARCIS (Netherlands)

    Roerdink, Jos B.T.M.; Meijster, Arnold

    2000-01-01

    The watershed transform is the method of choice for image segmentation in the field of mathematical morphology. We present a critical review of several definitions of the watershed transform and the associated sequential algorithms, and discuss various issues which often cause confusion in the

  14. A new fast algorithm for computing a complex number: Theoretic transforms

    Science.gov (United States)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix fast Fourier transformation (FFT) algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.

  15. Chudnovsky-Ramanujan Type Formulae for the Legendre Family

    OpenAIRE

    Chen, Imin; Glebov, Gleb

    2017-01-01

    We apply the method established in our previous work to derive a Chudnovsky-Ramanujan type formula for the Legendre family of elliptic curves. As a result, we prove two identities for $1/\pi$ in terms of hypergeometric functions.

  16. A difference tracking algorithm based on discrete sine transform

    Science.gov (United States)

    Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun

    2018-04-01

    Target tracking is an important field of computer vision. Template matching tracking algorithms based on squared difference matching (SSD) and correlation coefficient matching (NCC) are very sensitive to changes in image gray level. When the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of changes in image gray level or brightness. The algorithm, which combines the discrete sine transform and a difference algorithm, maps the target image into a digital sequence. A Kalman filter predicts the target position. The Hamming distance determines the degree of similarity between the target and the template, and the window closest to the template is taken as the target to be tracked, which then updates the template. Target tracking is achieved on this basis. The algorithm is tested in this paper: compared with the SSD and NCC template matching algorithms, it tracks the target stably when the image gray level or brightness changes, and the tracking speed can meet the real-time requirement.

  17. On Parameter Differentiation for Integral Representations of Associated Legendre Functions

    Directory of Open Access Journals (Sweden)

    Howard S. Cohl

    2011-05-01

    For integral representations of associated Legendre functions in terms of modified Bessel functions, we establish justification for differentiation under the integral sign with respect to parameters. With this justification, derivatives for associated Legendre functions of the first and second kind with respect to the degree are evaluated at odd-half-integer degrees, for general complex orders, and derivatives with respect to the order are evaluated at integer orders, for general complex degrees. We also discuss the properties of the complex function f: C∖{−1,1} → C given by f(z) = z/((z+1)^{1/2}(z−1)^{1/2}).

  18. A Legendre Wavelet Spectral Collocation Method for Solving Oscillatory Initial Value Problems

    Directory of Open Access Journals (Sweden)

    A. Karimi Dizicheh

    2013-01-01

    wavelet suitable for large intervals, and then the Legendre-Gauss collocation points of the Legendre wavelet are derived. Using this strategy, the iterative spectral method converts the differential equation to a set of algebraic equations. Solving these algebraic equations yields an approximate solution for the differential equation. The proposed method is illustrated by some numerical examples, and the result is compared with the exponentially fitted Runge-Kutta method. Our proposed method is simple and highly accurate.

  19. Discrete fractional solutions of a Legendre equation

    Science.gov (United States)

    Yılmazer, Resat

    2018-01-01

    In recent times, fractional calculus theory has been one of the most popular research interests in science and engineering. Discrete fractional calculus also has an important position within fractional calculus. In this work, we acquire new discrete fractional solutions of the homogeneous and non-homogeneous Legendre differential equation by using the discrete fractional nabla operator.

  20. Tests of a numerical algorithm for the linear instability study of flows on a sphere

    Energy Technology Data Exchange (ETDEWEB)

    Perez Garcia, Ismael; Skiba, Yuri N [Univerisidad Nacional Autonoma de Mexico, Mexico, D.F. (Mexico)

    2001-04-01

    A numerical algorithm for the normal mode instability of a steady nondivergent flow on a rotating sphere is developed. The algorithm accuracy is tested with zonal solutions of the nonlinear barotropic vorticity equation (Legendre polynomials, zonal Rossby-Haurwitz waves and monopole modons). [Spanish] A numerical algorithm for studying the linear instability of a stationary nondivergent flow on a rotating sphere has been developed. The precision of the algorithm is tested with zonal solutions of the nonlinear barotropic vorticity equation (Legendre polynomials, zonal Rossby-Haurwitz waves and monopole modons).

  1. Mixed Legendre moments and discrete scattering cross sections for anisotropy representation

    International Nuclear Information System (INIS)

    Calloo, A.; Vidal, J. F.; Le Tellier, R.; Rimpault, G.

    2012-01-01

    This paper deals with the resolution of the integro-differential form of the Boltzmann transport equation for neutron transport in nuclear reactors. In multigroup theory, deterministic codes use transfer cross sections which are expanded on Legendre polynomials. This modelling leads to negative values of the transfer cross section for certain scattering angles, and hence, the multigroup scattering source term is wrongly computed. The first part compares the convergence of 'Legendre-expanded' cross sections with respect to the order used with the method of characteristics (MOC) for Pressurised Water Reactor (PWR) type cells. Furthermore, the cross section is developed using piecewise-constant functions, which better models the multigroup transfer cross section and prevents the occurrence of any negative value for it. The second part focuses on the method of solving the transport equation with the above-mentioned piecewise-constant cross sections for lattice calculations for PWR cells. This expansion thereby constitutes a 'reference' method to compare the conventional Legendre expansion to, and to determine its pertinence when applied to reactor physics calculations. (authors)

  2. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    Science.gov (United States)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and high-accuracy 3D GLQ integration based on the equivalence of the kernel matrix, adaptive discretization, and parallelization using OpenMP. The kernel-matrix-equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical elements of each kernel matrix only once. In this method, the adaptive discretization strategy is used to improve the accuracy. The numerical investigations show that the executing time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimized strategies. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement by N times compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. It makes large-scale gravity forward modeling and inversion with a fine discretization possible.
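
    A hedged sketch of the underlying building block only (not the authors' kernel-equivalence, adaptive, or OpenMP-parallel code): the gravitational potential of a single tesseroid at one external point, evaluated with a 3-D Gauss-Legendre product rule. The density, bounds and observation point are toy values.

```python
import numpy as np

G = 6.674e-11                                       # gravitational constant (SI)

def tesseroid_potential(rho, r1, r2, lat1, lat2, lon1, lon2, r, lat, lon, n=8):
    """V = G*rho * int r'^2 cos(lat') / L dr' dlat' dlon' over the tesseroid, via an n^3-point GLQ rule."""
    t, w = np.polynomial.legendre.leggauss(n)       # nodes/weights on [-1, 1]
    rp   = 0.5 * (r2 - r1) * t + 0.5 * (r2 + r1)    # map nodes to [r1, r2]
    latp = 0.5 * (lat2 - lat1) * t + 0.5 * (lat2 + lat1)
    lonp = 0.5 * (lon2 - lon1) * t + 0.5 * (lon2 + lon1)
    jac = 0.125 * (r2 - r1) * (lat2 - lat1) * (lon2 - lon1)
    V = 0.0
    for wi, ri in zip(w, rp):
        for wj, phij in zip(w, latp):
            for wk, lamk in zip(w, lonp):
                cospsi = (np.sin(lat) * np.sin(phij)
                          + np.cos(lat) * np.cos(phij) * np.cos(lon - lamk))
                L = np.sqrt(r * r + ri * ri - 2.0 * r * ri * cospsi)   # source-to-point distance
                V += wi * wj * wk * ri * ri * np.cos(phij) / L
    return G * rho * jac * V

# 1 deg x 1 deg x 10 km crustal tesseroid, observed 10 km above its top (toy numbers).
print(tesseroid_potential(2670.0, 6371e3, 6381e3,
                          np.radians(0.0), np.radians(1.0),
                          np.radians(0.0), np.radians(1.0),
                          6391e3, np.radians(0.5), np.radians(0.5)))
```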

  3. Using transformation algorithms to estimate (co)variance ...

    African Journals Online (AJOL)

    REML) procedures by a diagonalization approach is extended to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of ...

  4. Improved implementation algorithms of the two-dimensional nonseparable linear canonical transform.

    Science.gov (United States)

    Ding, Jian-Jiun; Pei, Soo-Chang; Liu, Chun-Lin

    2012-08-01

    The two-dimensional nonseparable linear canonical transform (2D NSLCT), which is a generalization of the fractional Fourier transform and the linear canonical transform, is useful for analyzing optical systems. However, since the 2D NSLCT has 16 parameters and is very complicated, it is a great challenge to implement it in an efficient way. In this paper, we improved the previous work and propose an efficient way to implement the 2D NSLCT. The proposed algorithm can minimize the numerical error arising from interpolation operations and requires fewer chirp multiplications. The simulation results show that, compared with the existing algorithm, the proposed algorithms can implement the 2D NSLCT more accurately and the required computation time is also less.

  5. Discrete Hadamard transformation algorithm's parallelism analysis and achievement

    Science.gov (United States)

    Hu, Hui

    2009-07-01

    Given the wide application of the discrete Hadamard transform (DHT) in real-time signal processing and the limited operation speed of DSPs, this article carries out research on parallelizing the DHT and analyzes its parallel performance. Based on the programming structure of the multiprocessor platform TMS320C80, two kinds of parallel DHT algorithms are implemented. Several experiments demonstrated the effectiveness of the proposed algorithms.

  6. Legendre Wavelet Operational Matrix Method for Solution of Riccati Differential Equation

    Directory of Open Access Journals (Sweden)

    S. Balaji

    2014-01-01

    A Legendre wavelet operational matrix method (LWM) is presented for the solution of nonlinear fractional-order Riccati differential equations, which have a variety of applications in quantum chemistry and quantum mechanics. The fractional-order Riccati differential equations are converted into a system of algebraic equations using the Legendre wavelet operational matrix. Solutions given by the proposed scheme are more accurate and reliable, and they are compared with recently developed numerical, analytical, and stochastic approaches. The comparison shows that the proposed LWM approach has greater performance and requires less computational effort for obtaining accurate solutions. Furthermore, the existence and uniqueness of the solution of the proposed problem are given, and moreover the condition of convergence is verified.

  7. Szegö Kernels and Asymptotic Expansions for Legendre Polynomials

    Directory of Open Access Journals (Sweden)

    Roberto Paoletti

    2017-01-01

    Full Text Available We present a geometric approach to the asymptotics of the Legendre polynomials Pk,n+1, based on the Szegö kernel of the Fermat quadric hypersurface, leading to complete asymptotic expansions holding on expanding subintervals of [-1,1].

  8. A linear-time algorithm for Euclidean feature transform sets

    NARCIS (Netherlands)

    Hesselink, Wim H.

    2007-01-01

    The Euclidean distance transform of a binary image is the function that assigns to every pixel the Euclidean distance to the background. The Euclidean feature transform is the function that assigns to every pixel the set of background pixels with this distance. We present an algorithm to compute the
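
    A related, readily available illustration (not the algorithm of the paper): SciPy's Euclidean distance transform can also return, for every pixel, the coordinates of one nearest background pixel, that is, a single-valued feature transform, whereas the paper computes the full set of nearest background pixels.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      image = np.array([[1, 1, 1, 1],
                        [1, 1, 1, 1],
                        [0, 1, 1, 1],
                        [0, 0, 1, 1]])   # 1 = foreground, 0 = background

      dist, indices = distance_transform_edt(image, return_indices=True)
      print(dist)              # Euclidean distance to the nearest background pixel
      print(indices[:, 0, 3])  # (row, col) of one nearest background pixel for pixel (0, 3)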

  9. Solution of two-dimensional neutron diffusion equation for triangular region by finite Fourier transformation

    International Nuclear Information System (INIS)

    Kobayashi, Keisuke; Ishibashi, Hideo

    1978-01-01

    A two-dimensional neutron diffusion equation for a triangular region is shown to be solved by the finite Fourier transformation. An application of the Fourier transformation to the diffusion equation for the triangular region yields equations whose unknowns are the expansion coefficients of the neutron flux and current, in Fourier series or Legendre polynomial expansions, only at the region boundary. Some numerical calculations have revealed that the present technique gives accurate results. It is also shown that the solution using the expansion in Legendre polynomials converges with relatively few terms even when the solution in Fourier series exhibits the Gibbs phenomenon. (auth.)

  10. The Mehler-Fock transform of general order and arbitrary index and its inversion

    Directory of Open Access Journals (Sweden)

    Cyril Nasim

    1984-01-01

    Full Text Available An integral transform involving the associated Legendre function of zero order, P_{-1/2+iτ}(x), x ∈ [1,∞), as the kernel (considered as a function of τ), is called the Mehler-Fock transform. Some generalizations, involving the function P^μ_{-1/2+iτ}(x), where the order μ is an arbitrary complex number, including the case μ = 0, 1, 2, …, have been known for some time. In this present note, we define a general Mehler-Fock transform involving, as the kernel, the Legendre function P^μ_{-1/2+t}(x), of general order μ and an arbitrary index -1/2 + t, t = σ + iτ, -∞ < τ < ∞. Then we develop symmetric inversion formulae for these transforms. Many well-known results are derived as special cases of this general form. These transforms are widely used for solving many axisymmetric potential problems.

  11. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    Science.gov (United States)

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  12. Solved problems in analysis as applied to gamma, beta, Legendre and Bessel functions

    CERN Document Server

    Farrell, Orin J

    2013-01-01

    Nearly 200 problems, each with a detailed, worked-out solution, deal with the properties and applications of the gamma and beta functions, Legendre polynomials, and Bessel functions. The first two chapters examine gamma and beta functions, including applications to certain geometrical and physical problems such as heat-flow in a straight wire. The following two chapters treat Legendre polynomials, addressing applications to specific series expansions, steady-state heat-flow temperature distribution, gravitational potential of a circular lamina, and application of Gauss's mechanical quadrature

  13. Fast and accurate algorithm for the computation of complex linear canonical transforms.

    Science.gov (United States)

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-09-01

    A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.

  14. Evaluate More General Integrals Involving Universal Associated Legendre Polynomials via Taylor's Theorem

    Institute of Scientific and Technical Information of China (English)

    G. Yáñez-Navarro; Guo-Hua Sun; Dong-Sheng Sun; Chang-Yuan Chen; Shi-Hai Dong

    2017-01-01

    A few important integrals involving the product of two universal associated Legendre polynomials P_{l'}^{m'}(x), P_{k'}^{n'}(x) and the factors x^{2a}(1-x^2)^{-p-1}, x^b(1±x)^{-p-1} and x^c(1-x^2)^{-p-1}(1±x) are evaluated using the operator form of Taylor's theorem and an integral over a single universal associated Legendre polynomial. These integrals are more general since the quantum numbers are unequal, i.e. l' ≠ k' and m' ≠ n'. Their selection rules are also given. We also verify the correctness of these integral formulas numerically.

  15. Simulating first order optical systems—algorithms for and composition of discrete linear canonical transforms

    Science.gov (United States)

    Healy, John J.

    2018-01-01

    The linear canonical transforms (LCTs) are a parameterised group of linear integral transforms. The LCTs encompass a number of well-known transformations as special cases, including the Fourier transform, fractional Fourier transform, and the Fresnel integral. They relate the scalar wave fields at the input and output of systems composed of thin lenses and free space, along with other quadratic phase systems. In this paper, we perform a systematic search of all algorithms based on up to five stages of magnification, chirp multiplication and Fourier transforms. Based on that search, we propose a novel algorithm, for which we present numerical results. We compare the sampling requirements of three algorithms. Finally, we discuss some issues surrounding the composition of discrete LCTs.

  16. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    Science.gov (United States)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform, an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. Then the compressed spectrum is encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are cut and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.

  17. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    Science.gov (United States)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary value problems. The proposed method is based on Legendre wavelets, in which Legendre polynomials are used. The mechanism of the method is to use collocation points that convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution as well as to those of other methods. The proposed method is computationally more effective and leads to more accurate results compared with other methods from the literature.

  18. Quantum algorithms on Walsh transform and Hamming distance for Boolean functions

    Science.gov (United States)

    Xie, Zhengwei; Qiu, Daowen; Cai, Guangya

    2018-06-01

    Walsh spectrum or Walsh transform is an alternative description of Boolean functions. In this paper, we explore quantum algorithms to approximate the absolute value of the Walsh transform W_f at a single point z_0 (i.e., |W_f(z_0)|) for n-variable Boolean functions with probability at least 8/π², using O(1/(|W_f(z_0)| ε)) queries, promised that the accuracy is ε, while the best known classical algorithm requires O(2^n) queries. The Hamming distance between Boolean functions is used to study linearity testing and other important problems. We take advantage of the Walsh transform to calculate the Hamming distance between two n-variable Boolean functions f and g using O(1) queries in some cases. Then, we exploit another quantum algorithm which converts computing the Hamming distance between two Boolean functions into quantum amplitude estimation (i.e., approximate counting). If Ham(f, g) = t ≠ 0, we can approximately compute Ham(f, g) with probability at least 2/3 by combining our algorithm and the Approx-Count(f, ε) algorithm, using an expected number of Θ(√(N/(⌊εt⌋+1)) + √(t(N−t))/(⌊εt⌋+1)) queries, promised that the accuracy is ε. Moreover, our algorithm is optimal, while the exact query complexity for the above problem is Θ(N) and the query complexity with accuracy ε is O((1/ε²)·N/(t+1)) in the classical setting, where N = 2^n. Finally, we present three exact quantum query algorithms for two promise problems on Hamming distance using O(1) queries, while any classical deterministic algorithm solving the problem uses Ω(2^n) queries.

  19. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    Science.gov (United States)

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
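
    For readers unfamiliar with the output transformation itself, a minimal Box-Cox sketch (via SciPy's maximum-likelihood estimate of the exponent) is shown below; the Gauss-Newton RBF identification and D-optimality-based regression of the letter are not reproduced, and the lognormal toy data are an assumption.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      y = rng.lognormal(mean=0.0, sigma=0.6, size=500)   # positive, skewed "system output"

      y_bc, lam = stats.boxcox(y)   # y_bc = (y**lam - 1)/lam for lam != 0, log(y) for lam == 0
      print("estimated lambda:", lam)
      print("skewness before/after:", stats.skew(y), stats.skew(y_bc))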

  20. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
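
    The generic error-reduction iteration at the core of such methods can be sketched as follows. The paper's magnitude-estimation scheme from similar known patches is not reproduced; here the Fourier magnitude is simply assumed known, and the toy patch and missing area are illustrative assumptions.

      # ER sketch: alternate between the Fourier-magnitude constraint and the known-pixel constraint.
      import numpy as np

      rng = np.random.default_rng(1)
      patch = rng.random((16, 16))             # "true" patch (stand-in for a texture patch)
      mask = np.zeros_like(patch, dtype=bool)
      mask[6:10, 6:10] = True                  # missing area

      magnitude = np.abs(np.fft.fft2(patch))   # assumed known / estimated magnitude
      estimate = np.where(mask, patch[~mask].mean(), patch)   # initial guess in the missing area

      for _ in range(200):
          spectrum = np.fft.fft2(estimate)
          spectrum = magnitude * np.exp(1j * np.angle(spectrum))  # impose magnitude, keep phase
          estimate = np.real(np.fft.ifft2(spectrum))
          estimate[~mask] = patch[~mask]                          # re-impose the known pixels

      print(np.abs(estimate - patch)[mask].mean())  # mean reconstruction error in the missing area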

  1. Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications

    National Research Council Canada - National Science Library

    Moore, Frank; Babb, Brendan; Becke, Steven; Koyuk, Heather; Lamson, Earl, III; Wedge, Christopher

    2005-01-01

    .... The primary goal of the research described in this final report was to establish a methodology for using genetic algorithms to evolve coefficient sets describing inverse transforms and matched...

  2. An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2015-01-01

    Full Text Available Digital images are often polluted by noise, which makes data post-processing difficult. To remove noise and preserve the detail of the image as much as possible, this paper proposes an image filter algorithm which combines the merits of the Shearlet transform and the particle swarm optimization (PSO) algorithm. Firstly, we use the classical Shearlet transform to decompose the noised image into many subwavelets at multiple scales and orientations. Secondly, we assign a weighting factor to each of the subwavelets obtained. Then, using the classical inverse Shearlet transform, we obtain a composite image composed of the weighted subwavelets. After that, we design a fast and rough evaluation method to evaluate the noise level of the new image; using this method as the fitness, we adopt PSO to find the optimal weighting factors; after many iterations, with the optimal factors and the inverse Shearlet transform, we get the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).

  3. N-Level Quantum Systems and Legendre Functions

    OpenAIRE

    Mazurenko, A. S.; Savva, V. A.

    2001-01-01

    An excitation dynamics of new quantum systems of N equidistant energy levels in a monochromatic field has been investigated. To obtain exact analytical solutions of dynamic equations an analytical method based on orthogonal functions of a real argument has been proposed. Using the orthogonal Legendre functions we have found an exact analytical expression for a population probability amplitude of the level n. Various initial conditions for the excitation of N-level quantum systems have been co...

  4. A general algorithm for computing distance transforms in linear time

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Hesselink, W.H.; Goutsias, J; Vincent, L; Bloomberg, DS

    2000-01-01

    A new general algorithm for computing distance transforms of digital images is presented. The algorithm consists of two phases. Both phases consist of two scans, a forward and a backward scan. The first phase scans the image column-wise, while the second phase scans the image row-wise. Since the

  5. COMPARATIVE ANALYSIS OF APPLICATION EFFICIENCY OF ORTHOGONAL TRANSFORMATIONS IN FREQUENCY ALGORITHMS FOR DIGITAL IMAGE WATERMARKING

    Directory of Open Access Journals (Sweden)

    Vladimir A. Batura

    2014-11-01

    Full Text Available The efficiency of applying orthogonal transformations in frequency-domain algorithms for digital watermarking of still images is examined. The discrete Hadamard transform, discrete cosine transform and discrete Haar transform are selected. Their effectiveness is determined by the invisibility of the watermark embedded in the digital image and by its resistance to the most common image processing operations: JPEG compression, noising, changes of brightness and image size, and histogram equalization. The digital watermarking algorithm and its embedding parameters remain unchanged across these orthogonal transformations. Imperceptibility of embedding is measured by the peak signal-to-noise ratio, and watermark stability by Pearson's correlation coefficient. Embedding is considered invisible if the peak signal-to-noise ratio is not less than 43 dB. An embedded watermark is considered resistant to a specific attack if the Pearson correlation coefficient is not less than 0.5. The Elham algorithm, based on image entropy, is chosen for the computing experiment. The computing experiment is carried out according to the following procedure: embedding of a digital watermark in the low-frequency area of the image (container) by the Elham algorithm, exposure of the protected image (cover image) to a harmful influence, and extraction of the digital watermark. These actions are followed by a quality assessment of the cover image and the watermark, on the basis of which the efficiency of each orthogonal transformation is determined. As a result of the computing experiment it was determined that the choice among the specified orthogonal transformations, with identical algorithm and embedding parameters, does not influence the degree of imperceptibility of the watermark. The efficiency of the discrete Hadamard transform and the discrete cosine transform with respect to the attacks chosen for the experiment was established based on the correlation indicators. Application of discrete Hadamard transform increases
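
    The two quality measures used in this experiment can be written down directly; a small sketch follows (the test image and watermark below are illustrative assumptions).

      import numpy as np

      def psnr(original, distorted, peak=255.0):
          mse = np.mean((original.astype(float) - distorted.astype(float)) ** 2)
          return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      def pearson(a, b):
          return np.corrcoef(np.ravel(a).astype(float), np.ravel(b).astype(float))[0, 1]

      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, (64, 64))
      marked = np.clip(img + rng.normal(0.0, 2.0, img.shape), 0, 255)
      print(psnr(img, marked))   # invisibility threshold used above: >= 43 dB
      wm = np.tile([0, 1], 64)
      print(pearson(wm, wm))     # robustness threshold used above: >= 0.5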

  6. Algorithms for Fast Computing of the 3D-DCT Transform

    Directory of Open Access Journals (Sweden)

    S. Hanus

    2003-04-01

    Full Text Available The algorithm for video compression based on the Three-Dimensional Discrete Cosine Transform (3D-DCT) is presented. The original algorithm of the 3D-DCT has high time complexity. We propose several enhancements to the original algorithm and make the calculation of the DCT algorithm feasible for future real-time video compression.
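
    A baseline (non-accelerated) 3D-DCT can be computed with SciPy's separable DCT-II applied along each axis; fast dedicated 3D-DCT algorithms such as the one above aim to beat this routine. The 8x8x8 block and the retained-coefficient count are illustrative assumptions.

      import numpy as np
      from scipy.fft import dctn, idctn

      cube = np.random.rand(8, 8, 8)               # an 8x8x8 block of video samples
      coeffs = dctn(cube, type=2, norm='ortho')    # forward 3D DCT-II
      print(np.max(np.abs(idctn(coeffs, type=2, norm='ortho') - cube)))   # ~1e-15

      # Crude compression: keep only the lowest-frequency 4x4x4 corner of coefficients
      kept = np.zeros_like(coeffs)
      kept[:4, :4, :4] = coeffs[:4, :4, :4]
      print(np.max(np.abs(idctn(kept, type=2, norm='ortho') - cube)))     # reconstruction error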

  7. The application of Legendre-tau approximation to parameter identification for delay and partial differential equations

    Science.gov (United States)

    Ito, K.

    1983-01-01

    Approximation schemes based on Legendre-tau approximation are developed for application to parameter identification problems for delay and partial differential equations. The tau method is based on representing the approximate solution as a truncated series of orthonormal functions. The characteristic feature of the Legendre-tau approach is that when the solution to a problem is infinitely differentiable, the rate of convergence is faster than any finite power of 1/N; higher accuracy is thus achieved, making the approach suitable for small N.

  8. Experimental analysis of shape deformation of evaporating droplet using Legendre polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Apratim [Department of Mechanical Engineering, Indian Institute of Science, Bangalore 560012 (India); Basu, Saptarshi, E-mail: sbasu@mecheng.iisc.ernet.in [Department of Mechanical Engineering, Indian Institute of Science, Bangalore 560012 (India); Kumar, Ranganathan [Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816 (United States)

    2014-01-24

    Experiments involving the heating of acoustically levitated liquid droplets reveal specific modes of oscillation. For a given radiation flux, certain fluid droplets undergo distortion leading to catastrophic bag-type breakup. The voltage of the acoustic levitator has been kept constant throughout the experiments to operate at a nominal acoustic pressure intensity. Thus the droplet shape instabilities are primarily a consequence of droplet heating through vapor pressure, surface tension and viscosity. A novel approach is used by employing Legendre polynomials for the mode shape approximation to describe the thermally induced instabilities. The two dominant Legendre modes essentially reflect (a) the droplet size reduction due to evaporation, and (b) the deformation around the equilibrium shape. Dissipation and inter-coupling of modal energy lead to a stable droplet shape, while accumulation of the same ultimately results in droplet breakup.
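
    The mode decomposition itself amounts to fitting the droplet contour r(theta) with a Legendre series in cos(theta). A hedged sketch on a synthetic contour (mean radius plus a small P2 deformation, an illustrative assumption) is given below.

      import numpy as np
      from numpy.polynomial import legendre as L

      theta = np.linspace(0.0, np.pi, 400)
      x = np.cos(theta)
      r = 1.0 + 0.05 * L.legval(x, [0.0, 0.0, 1.0])   # mean radius + a small P2 deformation

      coeffs = L.legfit(x, r, deg=4)    # coeffs[n] multiplies P_n(cos theta)
      print(np.round(coeffs, 3))        # ~ [1.0, 0, 0.05, 0, 0]: size mode and P2 deformation mode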

  9. Experimental analysis of shape deformation of evaporating droplet using Legendre polynomials

    International Nuclear Information System (INIS)

    Sanyal, Apratim; Basu, Saptarshi; Kumar, Ranganathan

    2014-01-01

    Experiments involving the heating of acoustically levitated liquid droplets reveal specific modes of oscillation. For a given radiation flux, certain fluid droplets undergo distortion leading to catastrophic bag-type breakup. The voltage of the acoustic levitator has been kept constant throughout the experiments to operate at a nominal acoustic pressure intensity. Thus the droplet shape instabilities are primarily a consequence of droplet heating through vapor pressure, surface tension and viscosity. A novel approach is used by employing Legendre polynomials for the mode shape approximation to describe the thermally induced instabilities. The two dominant Legendre modes essentially reflect (a) the droplet size reduction due to evaporation, and (b) the deformation around the equilibrium shape. Dissipation and inter-coupling of modal energy lead to a stable droplet shape, while accumulation of the same ultimately results in droplet breakup.

  10. On the derivative of the Legendre function of the first kind with respect to its degree

    International Nuclear Information System (INIS)

    Szmytkowski, Radoslaw

    2006-01-01

    We study the derivative of the Legendre function of the first kind, P_ν(z), with respect to its degree ν. At first, we provide two contour integral representations for ∂P_ν(z)/∂ν. Then, we proceed to investigate the case of [∂P_ν(z)/∂ν]_{ν=n}, with n being an integer; this case is met in some physical and engineering problems. Since it holds that [∂P_{ν'}(z)/∂ν']_{ν'=-ν-1} = -[∂P_{ν'}(z)/∂ν']_{ν'=ν}, we focus on the sub-case of n being a non-negative integer. We show that ∂P_ν(z)/∂ν|_{ν=n} = P_n(z) ln((z+1)/2) + R_n(z) (n ∈ N), where R_n(z) is a polynomial in z of degree n. We present alternative derivations of several known explicit expressions for R_n(z) and also add some new ones. A generating function for R_n(z) is also constructed. Properties of the polynomials V_n(z) = [R_n(z) + (-1)^n R_n(-z)]/2 and W_{n-1}(z) = -[R_n(z) - (-1)^n R_n(-z)]/2 are also investigated. It is found that W_{n-1}(z) is the Christoffel polynomial, well known from the theory of the Legendre function of the second kind, Q_n(z). As examples of applications of the results obtained, we present non-standard derivations of some representations of Q_n(z), sum some Legendre series to closed forms, evaluate some definite integrals involving Legendre polynomials and also derive an explicit representation of the indefinite integral of the Legendre polynomial squared
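
    The n = 0 instance of the formula quoted above (R_0 = 0, so the derivative with respect to the degree at ν = 0 equals ln((x+1)/2)) is easy to check numerically on the cut -1 < x < 1; a small sketch using SciPy's real-degree Legendre function follows.

      import numpy as np
      from scipy.special import lpmv   # lpmv(0, nu, x) = P_nu(x) for real degree nu, -1 < x < 1

      def dP_dnu_at0(x, h=1e-4):
          # second-order one-sided finite difference in the degree (avoids negative degrees)
          p0, p1, p2 = lpmv(0, 0.0, x), lpmv(0, h, x), lpmv(0, 2 * h, x)
          return (-3.0 * p0 + 4.0 * p1 - p2) / (2.0 * h)

      x = np.array([-0.5, 0.0, 0.3, 0.9])
      print(dP_dnu_at0(x))          # numerical derivative with respect to the degree at nu = 0
      print(np.log((x + 1) / 2))    # closed form P_0(x) ln((x+1)/2) + R_0(x), with R_0 = 0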

  11. ALGORITHMIZATION OF PROBLEMS FOR OPTIMAL LOCATION OF TRANSFORMERS IN SUBSTATIONS OF DISTRIBUTED NETWORKS

    Directory of Open Access Journals (Sweden)

    M. I. Fursanov

    2014-01-01

    Full Text Available This article reflects the algorithmization of search methods for the effective replacement of consumer transformers in distributed electrical networks. Like any electrical equipment in power systems, power transformers have a limited service life, which is determined by natural processes of material degradation and also by unexpected wear under different conditions of overload and overvoltage. According to the standards adopted in the Republic of Belarus, the rated service life of power transformers is 25 years. However, there can be situations in which it is economically efficient to replace transformers before this time. The possibility of such replacement is considered in order to increase the efficiency of electrical network operation in connection with its physical wear and aging. This article discusses the shortcomings of earlier mathematical models of transformer replacement, in which replaced transformers were not reused; in practice, a transformer removed from one substation can be successfully used at other substations, especially when financial resources are limited and the replacement requires a more detailed technical and economic basis. During the research the authors developed an efficient algorithm for determining the optimal location of transformers at substations of distributed electrical networks, based on a search for the best solution among all sets of displacements in an oriented graph. The suggested algorithm allows a considerable reduction of the design time for optimal transformer placement by using a set of simplifications. The result of the algorithm's work is a series of transformer displacements in the network, which yields a great economic effect in comparison with the replacement of a single transformer.

  12. Limitations on continuous variable quantum algorithms with Fourier transforms

    International Nuclear Information System (INIS)

    Adcock, Mark R A; Hoeyer, Peter; Sanders, Barry C

    2009-01-01

    We study quantum algorithms implemented within a single harmonic oscillator, or equivalently within a single mode of the electromagnetic field. Logical states correspond to functions of the canonical position, and the Fourier transform to canonical momentum serves as the analogue of the Hadamard transform for this implementation. This continuous variable version of quantum information processing has widespread appeal because of advanced quantum optics technology that can create, manipulate and read Gaussian states of light. We show that, contrary to a previous claim, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform, analogous to the famous time-bandwidth theorem of signal processing.

  13. Electric vehicle charging algorithms for coordination of the grid and distribution transformer levels

    International Nuclear Information System (INIS)

    Ramos Muñoz, Edgar; Razeghi, Ghazal; Zhang, Li; Jabbari, Faryar

    2016-01-01

    The need to reduce greenhouse gas emissions and fossil fuel consumption has increased the popularity of plug-in electric vehicles. However, a large penetration of plug-in electric vehicles can pose challenges at the grid and local distribution levels. Various charging strategies have been proposed to address such challenges, often separately. In this paper, it is shown that, with uncoordinated charging, distribution transformers and the grid can operate under highly undesirable conditions. Next, several strategies that require modest communication efforts are proposed to mitigate the burden created by high concentrations of plug-in electric vehicles, at the grid and local levels. Existing transformer and battery electric vehicle characteristics are used along with the National Household Travel Survey to simulate various charging strategies. It is shown through the analysis of hot spot temperature and equivalent aging factor that the coordinated strategies proposed here reduce the chances of transformer failure with the addition of plug-in electric vehicle loads, even for an under-designed transformer while uncontrolled and uncoordinated plug-in electric vehicle charging results in increased risk of transformer failure. - Highlights: • Charging algorithm for battery electric vehicles, for high penetration levels. • Algorithm reduces transformer overloading, for grid level valley filling. • Computation and communication requirements are minimal. • The distributed algorithm is implemented without large scale iterations. • Hot spot temperature and loss of life for transformers are evaluated.

  14. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    Science.gov (United States)

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.

  15. Random regression models to estimate genetic parameters for milk production of Guzerat cows using orthogonal Legendre polynomials

    Directory of Open Access Journals (Sweden)

    Maria Gabriela Campolina Diniz Peixoto

    2014-05-01

    Full Text Available The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY grouped into 10 monthly classes were analyzed for the additive genetic effect and for the permanent environmental and residual effects (random effects), whereas the contemporary group, calving age (linear and quadratic effects) and the mean lactation curve were analyzed as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect is adequate for comparison by the main criteria employed. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order polynomial for the permanent environmental effect could also be employed in these analyses.
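
    The Legendre covariates entering such a random regression model are easy to construct: the time classes are rescaled to [-1, 1] and evaluated with (here, normalized) Legendre polynomials. A sketch under the assumption of the sqrt((2j+1)/2) normalization, which is one common convention, follows.

      import numpy as np
      from numpy.polynomial.legendre import legval

      classes = np.arange(1, 11)    # the 10 monthly TDMY classes
      x = 2.0 * (classes - classes.min()) / (classes.max() - classes.min()) - 1.0

      def legendre_covariates(x, order):
          # column j holds sqrt((2j+1)/2) * P_j(x), j = 0..order
          cols = [np.sqrt((2 * j + 1) / 2.0) * legval(x, [0] * j + [1])
                  for j in range(order + 1)]
          return np.column_stack(cols)

      Phi_genetic = legendre_covariates(x, order=2)   # second order for the additive genetic effect
      Phi_pe = legendre_covariates(x, order=5)        # fifth order for the permanent environmental effect
      print(Phi_genetic.shape, Phi_pe.shape)          # (10, 3) (10, 6)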

  16. An Efficient Algorithm for the Discrete Gabor Transform using full length Windows

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2007-01-01

    This paper extends the efficient factorization of the Gabor frame operator developed by Strohmer in [1] to the Gabor analysis/synthesis operator. This provides a fast method for computing the discrete Gabor transform (DGT) and several algorithms associated with it. The algorithm is used...

  17. Specification of the Fast Fourier Transform algorithm as a term rewriting system

    NARCIS (Netherlands)

    Rodenburg, P.H.; Hoekzema, D.J.

    1987-01-01

    We specify an algorithm for multiplying polynomials with complex coefficients incorporating, the Fast Fourier Transform algorithm of Cooley and Tukey [CT]. The specification formalism we use is a variant of the formalism ASF described in. [BHK]. The difference with ASF is essentially a matter of

  18. Generalizations of an integral for Legendre polynomials by Persson and Strang

    NARCIS (Netherlands)

    Diekema, E.; Koornwinder, T.H.

    2012-01-01

    Persson and Strang (2003) evaluated the integral over [−1,1] of a squared odd-degree Legendre polynomial divided by x² as being equal to 2. We consider a similar integral for orthogonal polynomials with respect to a general even orthogonality measure, with Gegenbauer and Hermite polynomials as
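
    The identity quoted in the first sentence is easy to verify numerically; a quick sketch:

      from scipy.integrate import quad
      from scipy.special import eval_legendre

      # integral over [-1, 1] of P_n(x)^2 / x^2 for odd n; each value should be 2
      for n in (1, 3, 5, 7):
          val, _ = quad(lambda x: eval_legendre(n, x) ** 2 / x ** 2, -1, 1, points=[0.0])
          print(n, round(val, 10))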

  19. Migration of a Real-Time Optimal-Control Algorithm: From MATLAB (Trademark) to Field Programmable Gate Array (FPGA)

    National Research Council Canada - National Science Library

    Moon, II, Ron L

    2005-01-01

    ...) development environment into an FPGA-based embedded-platform development board. Research at the Naval Postgraduate School has produced a revolutionary time-optimal spacecraft control algorithm based upon the Legendre Pseudospectral method...

  20. Efficient Algorithms for the Discrete Gabor Transform with a Long Fir Window

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2012-01-01

    The Discrete Gabor Transform (DGT) is the most commonly used signal transform for signal analysis and synthesis using a linear frequency scale. The development of the Linear Time-Frequency Analysis Toolbox (LTFAT) has been based on a detailed study of many variants of the relevant algorithms. As ...

  1. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    Science.gov (United States)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality of ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed, for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for removal of random noise like muscle artifact from ECG signal. A transform domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size has been achieved by using the Griffiths' gradient which uses cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses symmetric property of the signal to represent the signal in frequency domain with lesser number of frequency coefficients when compared to that of discrete Fourier transform (DFT). The algorithm is implemented for adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment when compared to that of ordinary transform domain LMS (TLMS) algorithm, both in the presence of white/colored observation noise. The reduction in convergence error achieved by the new algorithm with desired signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than traditional adaptive filter using LMS algorithm in the aspects of retaining geometrical characteristics of ECG signal.

  2. Modified rational Legendre approach to laminar viscous flow over a semi-infinite flat plate

    International Nuclear Information System (INIS)

    Tajvidi, T.; Razzaghi, M.; Dehghan, M.

    2008-01-01

    A numerical method for solving the classical Blasius' equation is proposed. The Blasius' equation is a third-order nonlinear ordinary differential equation which arises in the problem of two-dimensional laminar viscous flow over a semi-infinite flat plate. The approach is based on a modified rational Legendre tau method. The operational matrices for the derivative and product of the modified rational Legendre functions are presented. These matrices together with the tau method are utilized to reduce the solution of Blasius' equation to the solution of a system of algebraic equations. A numerical evaluation is included to demonstrate the validity and applicability of the method, and a comparison is made with existing results
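
    For context, the reference solution such spectral methods are usually compared against can be obtained by a standard shooting approach. This is not the rational Legendre tau method of the paper; the normalization f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1 is one common convention and an assumption of this sketch.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def farfield_residual(fpp0, eta_max=10.0):
          # integrate f''' = -0.5 f f'' from the wall with guessed f''(0), check f'(eta_max) - 1
          rhs = lambda eta, y: [y[1], y[2], -0.5 * y[0] * y[2]]
          sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-9, atol=1e-10)
          return sol.y[1, -1] - 1.0

      fpp0 = brentq(farfield_residual, 0.1, 1.0)
      print(fpp0)   # ~0.332057, the classical wall-shear constant for this scaling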

  3. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    Science.gov (United States)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular P_l^m(x) and irregular Q_l^m(x) associated Legendre functions for all x ∈ (−1, +1) (on the cut) and |x| > 1 and integer degree (l) and order (m). The necessity to revise the code comes as a consequence of some comments of Prof. James Bremer of the UC Davis Mathematics Department, who discovered that there were errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.

  4. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in the reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
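
    The reconstruction step described above reduces, in its simplest form, to inverting a deterministic measurement matrix with the pseudo-inverse. A 1-D toy sketch is shown below; rows of the unitary DFT matrix stand in for the preset light fields of the paper, and the object and measurement counts are illustrative assumptions.

      import numpy as np

      n = 64
      obj = np.zeros(n)
      obj[20:28] = 1.0
      obj[40:44] = 0.5                      # toy 1-D "object"

      k = np.arange(n)
      F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)   # unitary DFT matrix

      for m in (n, n // 2):                        # full sampling, then half the measurements
          A = F[:m]                                # deterministic (non-random) measurement matrix
          y = A @ obj                              # simulated bucket-detector measurements
          recon = np.real(np.linalg.pinv(A) @ y)   # pseudo-inverse reconstruction
          err = np.linalg.norm(recon - obj) / np.linalg.norm(obj)
          print(m, round(err, 6))                  # essentially exact for m = n, degraded for m = n/2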

  5. Algorithmic transformation of multi-loop master integrals to a canonical basis with CANONICA

    Science.gov (United States)

    Meyer, Christoph

    2018-01-01

    The integration of differential equations of Feynman integrals can be greatly facilitated by using a canonical basis. This paper presents the Mathematica package CANONICA, which implements a recently developed algorithm to automatize the transformation to a canonical basis. This represents the first publicly available implementation suitable for differential equations depending on multiple scales. In addition to the presentation of the package, this paper extends the description of some aspects of the algorithm, including a proof of the uniqueness of canonical forms up to constant transformations.

  6. Study on Magneto-Hydro-Dynamics Disturbance Signal Feature Classification Using Improved S-Transform Algorithm and Radial Basis Function Neural Network

    Directory of Open Access Journals (Sweden)

    Nan YU

    2014-09-01

    Full Text Available The interference signal in magneto-hydro-dynamics (MHD) may be a disturbance from the power supply, from the equipment itself, or from electromagnetic radiation. Interference signals mixed into the normal signal bring difficulties to signal analysis and processing. The recently proposed S-Transform algorithm combines the advantages of the short-time Fourier transform and the wavelet transform. It uses a Fourier kernel and a wavelet-like Gaussian window whose width is inversely proportional to the frequency. Therefore, the S-Transform algorithm not only preserves the phase information of the signals but also has variable resolution like the wavelet transform. This paper proposes a new method to establish an MHD signal classifier using the S-Transform algorithm and a radial basis function neural network (RBFNN). Because the RBFNN centers ascertained by the k-means clustering algorithm are probably only a local optimum, this paper analyzes the characteristics of the k-means clustering algorithm and proposes an improved k-means clustering algorithm, called GCW (Group-Cluster-Weight k-means clustering algorithm), to improve the distribution of the centers. The experimental results show that the improvement greatly enhances the RBFNN performance.

  7. Algorithms evaluation for transformers differential protection; Avaliacao de algoritmos para protecao diferencial de transformadores

    Energy Technology Data Exchange (ETDEWEB)

    Piovesan, Luis Sergio

    1997-07-01

    The application of two algorithms is evaluated, one based on Fourier analysis and the other based on a rectangular transform technique over Fourier analysis, to be used in digital logic circuits (digital protection relays) for the differential protection of power transformers (ANSI 87T). The first chapter gives a brief introduction to electrical protection. The second chapter discusses the general problems of transformer protection, the development of digital technology and, in more detail, the differential protection associated with this technology. This chapter presents the particular aspects of transformer differential protection concerning sensitivity, inrush current situations and harmonic distortions caused by transformer core saturation, as well as the differential protection algorithms and their application in a specific relay design. In chapter three, a method for testing the protection performance is developed. This work applies digital simulations using EMTP to generate current signals of transformer operation and fault conditions, while a digital simulation in Matlab is used to simulate the protection. The EMTP-generated field signals are sent to the relay under test, furnishing data for normal operation and for internal and external faults. The relay logic simulator in Matlab processes these data, making it possible to verify and evaluate the algorithm behavior and performance. Chapter 4 shows the protection operation for simulations of several transformer operation and fault conditions. The last chapter presents conclusions about the protection performance, discussions of all the methods applied in this work and suggestions for further studies. (author)

  8. De la glosa a la publicidad. Notas para una lectura de Pierre Legendre

    Directory of Open Access Journals (Sweden)

    Bellido, José

    2008-12-01

    Full Text Available By emphasizing the singular experience of the act of reading, this paper presents the work of a French legal philosopher and psychoanalyst: Pierre Legendre. Although this attempt could be simultaneously an impossible and an irritating venture, its aim is to emphasize something that is not so often seen in legal theory: an inquiry into the nuances of the legal unconscious. In doing so, the paper opens with some references to his particular understanding of love and the spectacular as legal resources to dominate subjects. It goes on to explore several suggestive spaces in which Legendre encounters the binding force in the imaginary of the legal institution.

    Pointing to the singular experience of the act of reading, this paper presents the work of a French legal philosopher and psychoanalyst: Pierre Legendre. Although such a project could constitute an undertaking as impossible as it is irritating, its main purpose is to highlight an element that is not often observed in legal theory: a journey through the various nuances of the legal unconscious. The paper begins with some references to his particular conception of love and of spectacle as legal resources for dominating subjects. It continues with some suggestive spaces where Legendre finds the binding force in the imaginary of the legal institution.

  9. Optimization design for the stepped impedance transformer based on the genetic algorithm

    International Nuclear Information System (INIS)

    Zou Dehui; Lai Wanchang; Qiu Dong

    2007-01-01

    This paper introduces the basic principle and mathematical model of the stepped impedance transformer, then puts the emphasis on comparing two kinds of design methods for the stepped impedance transformer. The design results are simulated by EDA, which indicates that the genetic algorithm design is better than the Chebyshev integrated design in terms of the maximum modulus of the reflection coefficient. (authors)

  10. A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Nasreddine Taleb

    2010-09-01

    Full Text Available Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find in a huge search space of geometric transformations, an acceptable accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT. An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques fitness sharing and elitism. Two NSCT based methods are proposed for registration. A comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speeding up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise.

  11. A rigid image registration based on the nonsubsampled contourlet transform and genetic algorithms.

    Science.gov (United States)

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find in a huge search space of geometric transformations, an acceptable accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques fitness sharing and elitism. Two NSCT based methods are proposed for registration. A comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speeding up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise.

  12. Legendre Duality of Spherical and Gaussian Spin Glasses

    Energy Technology Data Exchange (ETDEWEB)

    Genovese, Giuseppe, E-mail: giuseppe.genovese@math.uzh.ch [Universität Zürich, Institut für Mathematik (Switzerland); Tantari, Daniele, E-mail: daniele.tantari@sns.it [Scuola Normale Superiore di Pisa, Centro Ennio de Giorgi (Italy)

    2015-12-15

    The classical result of concentration of the Gaussian measure on the sphere in the limit of large dimension induces a natural duality between Gaussian and spherical models of spin glass. We analyse the Legendre variational structure linking the free energies of these two systems, in the spirit of the equivalence of ensembles of statistical mechanics. Our analysis, combined with the previous work (Barra et al., J. Phys. A: Math. Theor. 47, 155002, 2014), shows that such models are replica symmetric. Lastly, we briefly discuss an application of our result to the study of the Gaussian Hopfield model.

  13. Legendre Duality of Spherical and Gaussian Spin Glasses

    International Nuclear Information System (INIS)

    Genovese, Giuseppe; Tantari, Daniele

    2015-01-01

    The classical result of concentration of the Gaussian measure on the sphere in the limit of large dimension induces a natural duality between Gaussian and spherical models of spin glass. We analyse the Legendre variational structure linking the free energies of these two systems, in the spirit of the equivalence of ensembles of statistical mechanics. Our analysis, combined with the previous work (Barra et al., J. Phys. A: Math. Theor. 47, 155002, 2014), shows that such models are replica symmetric. Lastly, we briefly discuss an application of our result to the study of the Gaussian Hopfield model

  14. Analysis of the Chirplet Transform-Based Algorithm for Radar Detection of Accelerated Targets

    Science.gov (United States)

    Galushko, V. G.; Vavriv, D. M.

    2017-06-01

    Purpose: Efficiency analysis of an optimal algorithm for chirp signal processing based on the chirplet transform, as applied to the detection of radar targets in uniformly accelerated motion. Design/methodology/approach: Standard methods of optimal filtering theory are used to investigate the ambiguity function of chirp signals. Findings: An analytical expression has been derived for the ambiguity function of chirp signals and analyzed with respect to the detection of radar targets moving at a constant acceleration. The sidelobe level and the characteristic width of the ambiguity function with respect to the coordinates of frequency and its rate of change have been estimated. The gain in the signal-to-noise ratio provided by the algorithm under consideration has been assessed in comparison with application of the standard Fourier transform to the detection of chirp signals against a white-noise background. It is shown that already with a comparatively small number of processing channels (elementary filters with respect to the frequency change rate) the gain in the signal-to-noise ratio exceeds 10 dB. A block diagram for implementation of the algorithm under consideration is suggested on the basis of a multichannel weighted Fourier transform. Recommendations for the selection of the detection algorithm parameters have been developed. Conclusions: The obtained results testify to the efficiency of applying the algorithm under consideration to the detection of radar targets moving at a constant acceleration. Nevertheless, it seems expedient to perform computer simulations of its operation, with account taken of the noise impact, along with trial measurements in real conditions.
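
    The multichannel detector analyzed above can be sketched as a bank of dechirp-then-FFT channels: each channel removes one candidate frequency change rate before the Fourier transform, so that a uniformly accelerating target is refocused into a single frequency bin in the best-matched channel. The signal parameters, channel spacing and noise level below are illustrative assumptions.

      import numpy as np

      fs, T = 1000.0, 1.0
      t = np.arange(0.0, T, 1.0 / fs)
      f0, rate = 120.0, 80.0                                   # start frequency (Hz), chirp rate (Hz/s)
      sig = np.exp(2j * np.pi * (f0 * t + 0.5 * rate * t**2))  # return from an accelerating target
      rng = np.random.default_rng(0)
      x = sig + (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

      candidate_rates = np.linspace(0.0, 150.0, 16)            # elementary filters in chirp rate
      spectra = np.array([np.abs(np.fft.fft(x * np.exp(-1j * np.pi * r * t**2)))
                          for r in candidate_rates])           # dechirp, then FFT, per channel

      ch, k = np.unravel_index(np.argmax(spectra), spectra.shape)
      freqs = np.fft.fftfreq(t.size, 1.0 / fs)
      print("estimated chirp rate:", candidate_rates[ch])      # near 80 Hz/s
      print("estimated start frequency:", freqs[k])            # near 120 Hz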

  15. Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features

    Directory of Open Access Journals (Sweden)

    Hui Huang

    2017-01-01

    Full Text Available According to the pros and cons of contourlet transform and multimodality medical imaging, here we propose a novel image fusion algorithm that combines nonlinear approximation of contourlet transform with image regional features. The most important coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are also elaborated to fuse medical images. The results strongly suggested that the proposed algorithm could improve the visual effects of medical image fusion and image quality, image denoising, and enhancement.

  16. Discrete cosine and sine transforms general properties, fast algorithms and integer approximations

    CERN Document Server

    Britanak, Vladimir; Rao, K R; Rao, K R

    2006-01-01

    The Discrete Cosine Transform (DCT) is used in many applications by the scientific, engineering and research communities and in data compression in particular. Fast algorithms and applications of the DCT Type II (DCT-II) have become the heart of many established international image/video coding standards. Since then other forms of the DCT and Discrete Sine Transform (DST) have been investigated in detail. This new edition presents the complete set of DCT and DST discrete trigonometric transforms, including their definitions, general mathematical properties, and relations to the optimal Karhune

  17. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov [Brookhaven National Laboratory, Bldg 510, Upton, NY, 11973 (United States)

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.
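
    The moment relation highlighted above is easy to sketch numerically: the n-th Legendre coefficient of a PDF on [-1, 1] is (2n+1)/2 times the expectation of P_n, which expands into the first n moments. The toy "flux" distribution below is an arbitrary assumption used only to illustrate the bookkeeping, not the Lyα analysis itself.

        import numpy as np
        from numpy.polynomial import legendre as L

        rng = np.random.default_rng(0)
        x = rng.beta(2.0, 5.0, 200_000) * 2.0 - 1.0   # toy "flux" samples mapped to [-1, 1]

        nmax = 6
        moments = np.array([np.mean(x**k) for k in range(nmax + 1)])

        # c_n = (2n + 1)/2 * E[P_n(x)]; writing P_n in powers of x expresses c_n as a
        # linear combination of the first n moments, as stated in the abstract.
        coeffs = []
        for n in range(nmax + 1):
            basis_n = np.zeros(n + 1)
            basis_n[n] = 1.0                          # the single Legendre term P_n
            a = L.leg2poly(basis_n)                   # P_n written in powers of x
            coeffs.append((2 * n + 1) / 2 * np.dot(a, moments[:n + 1]))

        # cross-check against a direct Monte Carlo estimate of E[P_n(x)]
        direct = [(2 * n + 1) / 2 * np.mean(L.legval(x, np.eye(nmax + 1)[n]))
                  for n in range(nmax + 1)]
        assert np.allclose(coeffs, direct)
        print(np.round(coeffs, 4))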

  18. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka M.; Slosar, Anže

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  19. Abdomen disease diagnosis in CT images using flexiscale curvelet transform and improved genetic algorithm.

    Science.gov (United States)

    Sethi, Gaurav; Saini, B S

    2015-12-01

    This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents will likely produce the healthiest offspring, which leads to the least fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the optimal solution search. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Thereby, all of the least fit parent chromosomes are combined with high-fitness parents to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images; 30 images each of normal subjects, cysts, tumors and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
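
    A minimal sketch of the pairing idea described above (each weak parent crossed with a strong one, with the strong half carried over unchanged as a simplified stand-in for the modified elitism) is shown below; the toy objective, population size and mutation rate are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        def fitness(pop):                      # toy objective: maximise -sum(x^2)
            return -np.sum(pop**2, axis=1)

        def evolve(pop, gens=200, mut=0.1):
            for _ in range(gens):
                order = np.argsort(fitness(pop))               # worst ... best
                half = len(pop) // 2
                weak, strong = pop[order[:half]], pop[order[half:]]
                # pair every weak parent with a strong one (uniform crossover), so
                # leftover weak chromosomes cannot drag down the next population
                mask = rng.random(weak.shape) < 0.5
                children = np.where(mask, weak, strong[rng.permutation(len(strong))])
                children += mut * rng.standard_normal(children.shape)
                pop = np.vstack([strong, children])            # strong half survives unchanged
            return pop[np.argmax(fitness(pop))]

        print(evolve(rng.standard_normal((40, 5))))            # converges towards the zero vector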

  20. The Kustaanheimo-Stiefel transformation and certain special functions

    International Nuclear Information System (INIS)

    Kibler, M.; Negadi, T.; Ronveaux, A.

    1984-10-01

    The Kustaanheimo-Stiefel transformation is briefly described in various frameworks. This transformation is used to convert the R³ harmonics into R⁴ harmonics. Then, the Schroedinger equation for a hydrogen-like atom is transformed into a coupled pair of Schroedinger equations for two R² isotropic harmonic oscillators and a coupled pair of constraint relations. This connection between two famous quantization cases is tackled in terms of both the eigenvalues and the eigenvectors corresponding to the discrete spectrum of the hydrogen atom. This leads to an integral involving Laguerre, Legendre, and Hermite polynomials. A program has been realized in the algebraic and symbolic programming system macsyma to cover the various computing aspects of this work.

  1. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform

    Science.gov (United States)

    Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.

    2017-12-01

    In an iris recognition system, the accuracy of the localization of the inner and outer edges of the iris directly affects the performance of the recognition system, so iris localization is an important research topic. Our iris data contain eyelids, eyelashes, light spots and other noise, and the grey-level variation of the images is not pronounced, so general iris-location methods are unable to locate the iris reliably. A method of iris location based on the Canny operator and the gradient Hough transform is proposed. Firstly, the images are pre-processed; then the gradient information of the images is calculated and the inner and outer edges of the iris are coarsely located using the Canny operator; finally, the gradient Hough transform is applied to achieve precise localization of the inner and outer edges of the iris. The experimental results show that our algorithm can localize the inner and outer edges of the iris well; the algorithm has strong anti-interference ability, greatly reduces the location time and has higher accuracy and stability.
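
    A rough sketch of this two-stage idea with OpenCV is given below: a Canny edge map for the coarse stage and OpenCV's gradient-based Hough circle transform (HOUGH_GRADIENT) for the precise circle fit. The thresholds, radius limits and file name are illustrative assumptions, not the parameters used in the paper.

        import cv2
        import numpy as np

        def locate_iris(gray):
            """Coarse edges with the Canny operator, then circle fitting with the
            gradient Hough transform; parameter values are placeholders that would
            need tuning for a real iris data set."""
            blurred = cv2.GaussianBlur(gray, (7, 7), 0)
            edges = cv2.Canny(blurred, 40, 120)                 # coarse inner/outer edge map
            circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                                       param1=120, param2=30, minRadius=20, maxRadius=150)
            return edges, None if circles is None else np.round(circles[0]).astype(int)

        # usage (assumes an 8-bit grayscale eye image on disk)
        # gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
        # edge_map, candidate_circles = locate_iris(gray)       # each circle is (x, y, r)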

  2. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Jeongho [Seoul National University, Seoul (Korea, Republic of); Hanyang University, Seoul (Korea, Republic of); Yoo, Seokwon [Hanyang University, Seoul (Korea, Republic of)

    2014-12-15

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the 'genetic parameter vector' of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem, the one-bit oracle decision problem, or the often-called Deutsch problem. By numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem by using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize the variant models of the original Deutsch's algorithm.

  3. Generalizing, optimizing, and inventing numerical algorithms for the fractional Fourier, Fresnel, and linear canonical transforms

    Science.gov (United States)

    Hennelly, Bryan M.; Sheridan, John T.

    2005-05-01

    By use of matrix-based techniques it is shown how the space-bandwidth product (SBP) of a signal, as indicated by the location of the signal energy in the Wigner distribution function, can be tracked through any quadratic-phase optical system whose operation is described by the linear canonical transform. Then, applying the regular uniform sampling criteria imposed by the SBP and linking the criteria explicitly to a decomposition of the optical matrix of the system, it is shown how numerical algorithms (employing interpolation and decimation), which exhibit both invertibility and additivity, can be implemented. Algorithms appearing in the literature for a variety of transforms (Fresnel, fractional Fourier) are shown to be special cases of our general approach. The method is shown to allow the existing algorithms to be optimized and is also shown to permit the invention of many new algorithms.

  4. An implementation for the algorithm of the Hirota bilinear Baecklund transformation of integrable hierarchies

    International Nuclear Information System (INIS)

    Yu Guofu; Duan Qihua

    2010-01-01

    In this paper, based on the Hirota bilinear method, a reliable algorithm for generating the bilinear Baecklund transformation (BT) of integrable hierarchies is described. With the help of Maple symbolic computation, the algorithm is very helpful and powerful for finding the bilinear BT of integrable systems, especially for high-order integrable hierarchies. The BTs of the bilinear Ramani hierarchy are deduced for the first time by using the algorithm.

  5. A method of optimized neural network by L-M algorithm to transformer winding hot spot temperature forecasting

    Science.gov (United States)

    Wei, B. G.; Wu, X. Y.; Yao, Z. F.; Huang, H.

    2017-11-01

    Transformers are essential devices of the power system. The accurate computation of the highest temperature (HST) of a transformer’s windings is very significant, since the HST is a fundamental parameter in controlling the load operation mode and influencing the life time of the insulation. Based on the analysis of the heat transfer processes and the thermal characteristics inside transformers, the influence of factors such as sunshine and external wind speed on oil-immersed transformers is taken into consideration. Experimental data and a neural network are used for modelling and forecasting of the HST, and furthermore, investigations are conducted on the optimization of the structure and algorithms of the neural network. A comparison is made between the measured values and the values calculated with the algorithm recommended by IEC60076 and with the neural network algorithm proposed by the authors; the comparison shows that the neural network algorithm approximates the measured values better than the IEC60076 algorithm.
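
    For orientation, the sketch below fits a tiny one-hidden-layer network to synthetic data with SciPy's Levenberg-Marquardt solver (the "L-M algorithm" of the title). The input features, network size and data are invented for illustration and are not the authors' HST model.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(200, 3))          # stand-ins for load, ambient temperature, wind speed
        y = 40 + 30 * X[:, 0] + 5 * np.sin(3 * X[:, 1]) - 4 * X[:, 2] + rng.normal(0, 0.5, 200)

        H = 6                                   # hidden units

        def unpack(p):
            W1 = p[:3 * H].reshape(H, 3)
            b1 = p[3 * H:4 * H]
            W2 = p[4 * H:5 * H]
            b2 = p[5 * H]
            return W1, b1, W2, b2

        def residuals(p):
            W1, b1, W2, b2 = unpack(p)
            hidden = np.tanh(X @ W1.T + b1)     # one hidden layer
            return hidden @ W2 + b2 - y         # residual vector minimised by L-M

        p0 = 0.1 * rng.standard_normal(5 * H + 1)
        fit = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt
        print("RMSE:", np.sqrt(np.mean(fit.fun**2)))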

  6. The discrete Fourier transform theory, algorithms and applications

    CERN Document Server

    Sundaraajan, D

    2001-01-01

    This authoritative book provides comprehensive coverage of practical Fourier analysis. It develops the concepts right from the basics and gradually guides the reader to the advanced topics. It presents the latest and practically efficient DFT algorithms, as well as the computation of discrete cosine and Walsh-Hadamard transforms. The large number of visual aids such as figures, flow graphs and flow charts makes the mathematical topic easy to understand. In addition, the numerous examples and the set of C-language programs (a supplement to the book) help greatly in understanding the theory and

  7. Nonlinear Legendre Spectral Finite Elements for Wind Turbine Blade Dynamics: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Q.; Sprague, M. A.; Jonkman, J.; Johnson, N.

    2014-01-01

    This paper presents a numerical implementation and examination of a new wind turbine blade finite element model based on Geometrically Exact Beam Theory (GEBT) and a high-order spectral finite element method. The displacement-based GEBT is presented, which includes the coupling effects that exist in composite structures and geometric nonlinearity. Legendre spectral finite elements (LSFEs) are high-order finite elements with nodes located at the Gauss-Legendre-Lobatto points. LSFEs can be an order of magnitude more efficient than low-order finite elements for a given accuracy level. Interpolation of the three-dimensional rotation, a major technical barrier in large-deformation simulation, is discussed in the context of LSFEs. It is shown, by numerical example, that the high-order LSFEs, where weak forms are evaluated with nodal quadrature, do not suffer from a drawback that exists in low-order finite elements where the tangent-stiffness matrix is calculated at the Gauss points. Finally, the new LSFE code is implemented in the new FAST Modularization Framework for dynamic simulation of highly flexible composite-material wind turbine blades. The framework allows for fully interactive simulations of turbine blades in operating conditions. Numerical examples showing validation and LSFE performance will be provided in the final paper.
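
    As a small illustration of the quadrature grid mentioned above, the following sketch computes Gauss-Legendre-Lobatto nodes and weights with NumPy (the endpoints plus the roots of P_N', with the standard GLL weight formula); it is a generic helper, not code from the FAST framework.

        import numpy as np
        from numpy.polynomial.legendre import Legendre

        def gauss_legendre_lobatto(N):
            """Degree-N GLL rule: nodes are -1, +1 and the roots of P_N'(x);
            weights are w_i = 2 / (N * (N + 1) * P_N(x_i)**2)."""
            PN = Legendre.basis(N)
            interior = np.sort(PN.deriv().roots().real)
            nodes = np.concatenate(([-1.0], interior, [1.0]))
            weights = 2.0 / (N * (N + 1) * PN(nodes) ** 2)
            return nodes, weights

        x, w = gauss_legendre_lobatto(6)
        print(np.round(x, 6))
        print("sum of weights:", w.sum())   # a constant is integrated exactly, so this is 2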

  8. Using Legendre Functions for Spatial Covariance Approximation and Investigation of Radial Nonisotrophy for NOGAPS Data

    National Research Council Canada - National Science Library

    Franke, Richard

    2001-01-01

    .... It was found that for all levels the approximation of the covariance data for pressure height innovations by Legendre functions led to positive coefficients for up to 25 terms except at some low and high levels...

  9. A Streaming Distance Transform Algorithm for Neighborhood-Sequence Distances

    Directory of Open Access Journals (Sweden)

    Nicolas Normand

    2014-09-01

    Full Text Available We describe an algorithm that computes a “translated” 2D Neighborhood-Sequence Distance Transform (DT) using a look-up table approach. It requires a single raster scan of the input image and produces one line of output for every line of input. The neighborhood sequence is specified either by providing one period of some integer periodic sequence or by providing the rate of appearance of neighborhoods. The full algorithm optionally derives the regular (centered) DT from the “translated” DT, providing the result image on-the-fly, with a minimal delay, before the input image is fully processed. Its efficiency can benefit all applications that use neighborhood-sequence distances, particularly when pipelined processing architectures are involved, or when the size of objects in the source image is limited.

  10. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    Science.gov (United States)

    2018-01-01

    ARL-TR-8270, January 2018, US Army Research Laboratory. Technical report (reporting period 1 October 2016 to 30 September 2017) describing an automated energy detection algorithm based on morphological filter processing with a modified watershed transform.

  11. Moving boundary - Oxygen diffusion. Two algorithms using Landau transformation

    International Nuclear Information System (INIS)

    Moyano, E.A.

    1991-01-01

    A description is made of two algorithms which solve a mathematical model intended for the study of one-dimensional problems with moving boundaries and implicit boundary conditions. The Landau transformation is used in both methods at each temporal level so as to work throughout with the same number of nodes. Thus, it is necessary to deal with a partial differential equation whose diffusive and convective terms are accompanied by variable coefficients. The partial differential equation is discretized implicitly, using the Laasonen scheme (which is always stable) instead of the Crank-Nicholson scheme, as performed by Ferris and Hill (5), in the fixed-time-step method. The second method employs the tridiagonal algorithm. The first algorithm uses a fixed time step and iterates with variable interface positions, that is to say, it varies δs until the boundary condition is satisfied. The mathematical model describes oxygen diffusion in live tissues. Its numerical solution is obtained by finite differences. An important application of this method could be the estimation of the radiation dose in cancerous tumor treatment. (Author) [es

  12. Characterizing the Lyman-alpha forest flux probability distribution function using Legendre polynomials

    Science.gov (United States)

    Cieplak, Agnieszka; Slosar, Anze

    2018-01-01

    The Lyman-alpha forest has become a powerful cosmological probe at intermediate redshift. It is a highly non-linear field with much information present beyond the power spectrum. The flux probability distribution function (PDF) in particular has been a successful probe of small-scale physics. However, it is also sensitive to pixel noise, spectrum resolution, and continuum fitting, all of which lead to possible biased estimators. Here we argue that measuring the coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. Since the n-th Legendre coefficient can be expressed as a linear combination of the first n moments of the field, this allows for the coefficients to be measured in the presence of noise and allows for a clear route towards marginalization over the mean flux. Additionally, in the presence of noise, a finite number of these coefficients are well measured with a very sharp transition into noise dominance. This compresses the information into a small amount of well-measured quantities. Finally, we find that measuring fewer quasars with high signal-to-noise produces a higher amount of recoverable information.

  13. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    Science.gov (United States)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is carried out on the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform of the input image. Secondly, the LiveWire shortest path is calculated based on a control-point-set direction search, utilizing the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance. An ordinary queue instead of a priority queue is taken as the storage pool of the points when optimizing their shortest path value, thus reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantage of the Haar wavelet transform and the advantage of the optimal path searching method based on the control-point-set direction search. The former has fast image decomposition and reconstruction and is more consistent with the texture features of the image, and the latter reduces the time complexity of the original algorithm. Thus the algorithm can improve the speed of interactive boundary extraction as well as reflect the boundary information of the image more comprehensively. All methods mentioned above play a big role in improving the execution efficiency and the robustness of the algorithm.

  14. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    Science.gov (United States)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2' . Finally, a host image was enlarged by extending one pixel into 2×2 pixels and each element in M1 and M2' was multiplied with a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying public-key encryption algorithm, the key distribution was facilitated, and also compared with the image hiding method based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.

  15. An algorithm for the basis of the finite Fourier transform

    Science.gov (United States)

    Santhanam, Thalanayar S.

    1995-01-01

    The Finite Fourier Transformation matrix (F.F.T.) plays a central role in the formulation of quantum mechanics in a finite dimensional space studied by the author over the past couple of decades. An outstanding problem which still remains open is to find a complete basis for the F.F.T. In this paper we suggest a simple algorithm to find the eigenvectors of the F.F.T.
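
    For concreteness, the sketch below builds the unitary finite Fourier transform matrix for N = 8 with NumPy, verifies that F^4 = I (so every eigenvalue is a fourth root of unity), and computes a numerical eigenbasis; a canonical analytic basis, which is what the paper is after, is of course not settled by this.

        import numpy as np

        N = 8
        j, k = np.meshgrid(np.arange(N), np.arange(N))
        F = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)      # unitary finite Fourier transform matrix

        # F is unitary with F^4 = I, so its eigenvalues lie in {1, -1, 1j, -1j}
        assert np.allclose(np.linalg.matrix_power(F, 4), np.eye(N), atol=1e-10)

        vals, vecs = np.linalg.eig(F)
        print(np.round(vals, 6))                              # fourth roots of unity
        print(np.allclose(F @ vecs, vecs * vals))             # columns are (numerical) eigenvectors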

  16. Representation of the Fokker-Planck collision term for Coulomb interaction as series of Legendre polynomials

    International Nuclear Information System (INIS)

    Almeida Ferreira, A.C. de.

    1984-01-01

    For problems with azimuthal symmetry in velocity space, the distribution function depends only on the speed and on the pitch angle. The angular dependence of the distribution function is expanded in Legendre polynomials, and the expansions of the collision integrals describing two-body Coulomb interactions in a plasma are determined through the use of the Rosenbluth potentials. The electron distribution function is written as a Maxwellian plus a deviation, and the representation in Legendre polynomials of the electron-electron collision term is given for both its linear and nonlinear part. To determine the representation of the electron-ion collision term it is assumed that the ion distribution is much narrower in velocity space than the electron distribution, and shifted from the origin by a flow velocity. The equations are presented in a form that is suitable for their use in a computer. (Author) [pt
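
    The projection onto Legendre polynomials that underlies this expansion can be sketched numerically as follows, using Gauss-Legendre quadrature in the pitch-angle cosine; the test distribution is an arbitrary illustrative function, not the collision operator itself.

        import numpy as np
        from numpy.polynomial.legendre import leggauss, legval

        def legendre_moments(f, lmax, nquad=64):
            """f_l = (2l + 1)/2 * integral over [-1, 1] of f(mu) P_l(mu) dmu,
            evaluated with Gauss-Legendre quadrature (mu is the pitch-angle cosine)."""
            mu, w = leggauss(nquad)
            P = np.array([legval(mu, np.eye(lmax + 1)[l]) for l in range(lmax + 1)])
            return (2 * np.arange(lmax + 1) + 1) / 2 * (P * f(mu) * w).sum(axis=1)

        # illustrative anisotropic distribution: isotropic part plus drift-like terms
        f = lambda mu: 1.0 + 0.3 * mu + 0.05 * (3 * mu**2 - 1) / 2
        print(np.round(legendre_moments(f, 4), 6))            # ~[1, 0.3, 0.05, 0, 0]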

  17. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    Energy Technology Data Exchange (ETDEWEB)

    He, Hongxing; Fang, Hengrui [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States); Miller, Mitchell D. [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Phillips, George N. Jr [Department of BioSciences, Rice University, Houston, Texas 77005 (United States); Department of Chemistry, Rice University, Houston, Texas 77005 (United States); Department of Biochemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Su, Wu-Pei, E-mail: wpsu@uh.edu [Department of Physics and Texas Center for Superconductivity, University of Houston, Houston, Texas 77204 (United States)

    2016-07-15

    An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.

  18. Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators

    Energy Technology Data Exchange (ETDEWEB)

    Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl

    2016-09-15

    We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum point search is one of the two key steps for finding the minimax approximation. If pre-tabulation of initial guesses is to be avoided, strategies for a sufficiently robust algorithm have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that three times fewer function evaluations are required overall when applying it to typical non-relativistic and relativistic quantum chemical systems.

  19. The parallel algorithm for the 2D discrete wavelet transform

    Science.gov (United States)

    Barina, David; Najman, Pavel; Kleparnik, Petr; Kula, Michal; Zemcik, Pavel

    2018-04-01

    The discrete wavelet transform can be found at the heart of many image-processing algorithms. Until now, the transform on general-purpose processors (CPUs) was mostly computed using a separable lifting scheme. As the lifting scheme consists of a small number of operations, it is preferred for processing using single-core CPUs. However, considering parallel processing using multi-core processors, this scheme is inappropriate due to a large number of steps. On such architectures, the number of steps corresponds to the number of points that represent the exchange of data. Consequently, these points often form a performance bottleneck. Our approach appropriately rearranges calculations inside the transform, and thereby reduces the number of steps. In other words, we propose a new scheme that is friendly to parallel environments. When evaluating on multi-core CPUs, our approach consistently outperforms the original lifting scheme. The evaluation was performed on 61-core Intel Xeon Phi and 8-core Intel Xeon processors.

  20. Validation of a numerical algorithm based on transformed equations

    International Nuclear Information System (INIS)

    Xu, H.; Barron, R.M.; Zhang, C.

    2003-01-01

    Generally, a typical equation governing a physical process, such as fluid flow or heat transfer, has three types of terms that involve partial derivatives, namely, the transient term, the convective terms and the diffusion terms. The major difficulty in obtaining numerical solutions of these partial differential equations is the discretization of the convective terms. The transient term is usually discretized using the first-order forward or backward differencing scheme. The diffusion terms are usually discretized using the central differencing scheme and no difficulty arises since these terms involve second-order spatial derivatives of the flow variables. The convective terms are non-linear and contain first-order spatial derivatives. The main difference between various numerical algorithms is the discretization of the convective terms. In the present study, an alternative approach to discretizing the governing equations is presented. In this algorithm, the governing equations are first transformed by introducing an exponential function to eliminate the convective terms in the equations. The proposed algorithm is applied to simulate some fluid flows with exact solutions to validate the proposed algorithm. The fluid flows used in this study are a self-designed quasi-fluid flow problem, stagnation in plane flow (Hiemenz flow), and flow between two concentric cylinders. The comparisons with the power-law scheme indicate that the proposed scheme exhibits better performance. (author)

  1. Pyramidal Watershed Segmentation Algorithm for High-Resolution Remote Sensing Images Using Discrete Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    K. Parvathi

    2009-01-01

    Full Text Available The watershed transformation is a useful morphological segmentation tool for a variety of grey-scale images. However, over-segmentation and under-segmentation have become the key problems for the conventional algorithm. In this paper, an efficient segmentation method for high-resolution remote sensing image analysis is presented. Wavelet analysis is one of the most popular techniques that can be used to detect local intensity variation and hence the wavelet transformation is used to analyze the image. The wavelet transform is applied to the image, producing detail (horizontal, vertical, and diagonal) and approximation coefficients. The image gradient with selective regional minima is estimated with grey-scale morphology for the approximation image at a suitable resolution, and then the watershed is applied to the gradient image to avoid over-segmentation. The segmented image is projected up to high resolutions using the inverse wavelet transform. The watershed segmentation is applied to a small subset-size image, demanding less computational time. We have applied our new approach to analyze remote sensing images. The algorithm was implemented in MATLAB. Experimental results demonstrated the method to be effective.

  2. New fundamental equations of thermodynamics for systems in chemical equilibrium at a specified partial pressure of a reactant and the standard transformed formation properties of reactants

    International Nuclear Information System (INIS)

    Alberty, R.A.; Oppenheim, I.

    1993-01-01

    When temperature, pressure, and the partial pressure of a reactant are fixed, the criterion of chemical equilibrium can be expressed in terms of the transformed Gibbs energy G' that is obtained by using a Legendre transform involving the chemical potential of the reactant that is fixed. For reactions of ideal gases, the most natural variables to use in the fundamental equation are T, P', and P_B, where P' is the partial pressure of the reactants other than the one that is fixed and P_B is the partial pressure of the reactant that is fixed. The fundamental equation for G' yields the expression for the transformed entropy S', and a transformed enthalpy can be defined by the additional Legendre transform H' = G' + TS'. This leads to an additional form of the fundamental equation. The calculation of transformed thermodynamic properties and equilibrium compositions is discussed for a simple system and for a general multireaction system. The change, in a reaction, of the binding of the reactant that is at a specified pressure can be calculated using one of the six Maxwell equations of the fundamental equation in G'
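
    Written out in generic notation (an illustrative restatement rather than a quotation from the paper, with B denoting the reactant held at a specified partial pressure), the transform and the derived quantities read

        G' = G - n(\mathrm{B})\,\mu(\mathrm{B}), \qquad
        S' = -\left(\frac{\partial G'}{\partial T}\right)_{P',\,P_{\mathrm{B}}}, \qquad
        H' = G' + T S'.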

  3. Solution of volume-surface integral equations using higher-order hierarchical Legendre basis functions

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Meincke, Peter; Breinbjerg, Olav

    2007-01-01

    The problem of electromagnetic scattering by composite metallic and dielectric objects is solved using the coupled volume-surface integral equation (VSIE). The method of moments (MoM) based on higher-order hierarchical Legendre basis functions and higher-order curvilinear geometrical elements...... with the analytical Mie series solution. Scattering by more complex metal-dielectric objects are also considered to compare the presented technique with other numerical methods....

  4. Local structure information by EXAFS analysis using two algorithms for Fourier transform calculation

    International Nuclear Information System (INIS)

    Aldea, N; Pintea, S; Rednic, V; Matei, F; Hu Tiandou; Xie Yaning

    2009-01-01

    The present work is a comparison study between different algorithms of Fourier transform for obtaining very accurate local structure results using Extended X-ray Absorption Fine Structure technique. In this paper we focus on the local structural characteristics of supported nickel catalysts and Fe3O4 core-shell nanocomposites. The radial distribution function could be efficiently calculated by the fast Fourier transform when the coordination shells are well separated while the Filon quadrature gave remarkable results for close-shell coordination.

  5. Application of affinity propagation algorithm based on manifold distance for transformer PD pattern recognition

    Science.gov (United States)

    Wei, B. G.; Huo, K. X.; Yao, Z. F.; Lou, J.; Li, X. Y.

    2018-03-01

    Recognizing partial discharge (PD) patterns is one of the difficult problems encountered in research on condition maintenance technology for transformers. According to the main physical characteristics of PD, three models of oil-paper insulation defects were set up in the laboratory to study the PD of transformers, and phase resolved partial discharge (PRPD) patterns were constructed. Using the least squares method, grey-scale images of the PRPD patterns were constructed, and the features of each grey-scale image were 28 box dimensions and 28 information dimensions. An affinity propagation algorithm based on manifold distance (AP-MD) for transformer PD pattern recognition was established, and the box dimension and information dimension data were clustered with AP-MD. The study shows that the clustering result of AP-MD is better than the results of the affinity propagation (AP), k-means and fuzzy c-means (FCM) algorithms. By choosing different k values for the k-nearest neighbors, we find that the clustering accuracy of AP-MD falls when k is too large or too small, and that the optimal k value depends on the sample size.

  6. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    International Nuclear Information System (INIS)

    Chouakri, S A; Djaafri, O; Taleb-Ahmed, A

    2013-01-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed to its transmission via telecommunication channel. Basically, the proposed ECG compression algorithm is articulated on the use of wavelet transform, leading to low/high frequency components separation, high order statistics based thresholding, using level adjusted kurtosis value, to denoise the ECG signal, and next a linear predictive coding filter is applied to the wavelet coefficients producing a lower variance signal. This latter one will be coded using the Huffman encoding yielding an optimal coding length in terms of average value of bits per sample. At the receiver end point, with the assumption of an ideal communication channel, the inverse processes are carried out namely the Huffman decoding, inverse linear predictive coding filter and inverse discrete wavelet transform leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested upon a set of ECG records extracted from the MIT-BIH Arrhythmia Data Base including different cardiac anomalies as well as the normal ECG signal. The obtained results are evaluated in terms of compression ratio and mean square error which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, the visual perception demonstrates the high quality of ECG signal restitution where the different ECG waves are recovered correctly

  7. Scheduling Two-Sided Transformations Using Tile Algorithms on Multicore Architectures

    Directory of Open Access Journals (Sweden)

    Hatem Ltaief

    2010-01-01

    Full Text Available The objective of this paper is to describe, in the context of multicore architectures, three different scheduler implementations for the two-sided linear algebra transformations, in particular the Hessenberg and Bidiagonal reductions which are the first steps for the standard eigenvalue problems and the singular value decompositions respectively. State-of-the-art dense linear algebra software, such as the LAPACK and ScaLAPACK libraries, suffers performance losses on multicore processors due to its inability to fully exploit thread-level parallelism. At the same time the fine-grain dataflow model gains popularity as a paradigm for programming multicore architectures. Buttari et al. (Parallel Comput. Syst. Appl. 35 (2009), 38–53) introduced the concept of tile algorithms in which parallelism is no longer hidden inside Basic Linear Algebra Subprograms but is brought to the fore to yield much better performance. Along with efficient scheduling mechanisms for data-driven execution, these tile two-sided reductions achieve high performance computing by reaching up to 75% of the DGEMM peak on a 12000×12000 matrix with 16 Intel Tigerton 2.4 GHz processors. The main drawback of the tile algorithms approach for two-sided transformations is that the full reduction cannot be obtained in one stage. Other methods have to be considered to further reduce the band matrices to the required forms.

  8. Transformations and algorithms in a computerized brain atlas

    International Nuclear Information System (INIS)

    Thurfjell, L.; Bohm, C.; Eriksson, L.; Karolinska Institute/Hospital, Stockholm

    1993-01-01

    The computerized brain atlas constructed at the Karolinska Hospital, Stockholm, Sweden, has been further developed. This atlas was designed to be employed in different fields of neuroimaging such as positron emission tomography (PET), single photon emission tomography (SPECT), computerized tomography (CT) and magnetic resonance imaging (MR). The main objectives of the atlas are to aid the interpretation of functional images by introducing anatomical information, to serve as a tool in the merging of data from different imaging modalities and to facilitate comparisons of data from different individuals by allowing for anatomical standardization of individual data. The purpose of this paper is to describe the algorithms and transformations used in the implementation of the atlas software

  9. TRANSFORMATION ALGORITHM FOR IMAGES OBTAINED BY OMNIDIRECTIONAL CAMERAS

    Directory of Open Access Journals (Sweden)

    V. P. Lazarenko

    2015-01-01

    Full Text Available Omnidirectional optoelectronic systems find their application in areas where a wide viewing angle is critical. However, omnidirectional optoelectronic systems have a large distortion that makes their application more difficult. The paper compares the projection functions of traditional perspective lenses and omnidirectional wide-angle fish-eye lenses with a viewing angle not less than 180°. This comparison proves that the distortion models of omnidirectional cameras cannot be described as a deviation from the classic pinhole camera model. To solve this problem, an algorithm for transforming omnidirectional images has been developed. The paper provides a brief comparison of the four calibration methods available in open source toolkits for omnidirectional optoelectronic systems. The geometrical projection model used for calibration of the omnidirectional optical system is given. The algorithm consists of three basic steps. At the first step, we calculate the field of view of a virtual pinhole PTZ camera. This field of view is characterized by an array of 3D points in the object space. At the second step the array of corresponding pixels for these three-dimensional points is calculated. Then the projection function, which expresses the relation between a given 3D point in the object space and the corresponding pixel, is calculated. In this paper we use a calibration procedure that provides the projection function for the calibrated instance of the camera. At the last step the final image is formed pixel by pixel from the original omnidirectional image using the calculated array of 3D points and the projection function. The developed algorithm makes it possible to obtain an image for a part of the field of view of an omnidirectional optoelectronic system, with the distortion corrected, from the original omnidirectional image. The algorithm is designed for operation with omnidirectional optoelectronic systems with both catadioptric and fish-eye lenses

  10. A novel algorithm for discrimination between inrush current and internal faults in power transformer differential protection based on discrete wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Eldin, A.A. Hossam; Refaey, M.A. [Electrical Engineering Department, Alexandria University, Alexandria (Egypt)

    2011-01-15

    This paper proposes a novel methodology for transformer differential protection, based on wave-shape recognition of a discriminating criterion extracted from the instantaneous differential currents. The discrete wavelet transform is applied to the differential currents due to internal faults and inrush currents. The diagnosis criterion is based on the median absolute deviation (MAD) of wavelet coefficients over a specified frequency band. The proposed algorithm is examined using various simulated inrush and internal fault current cases on a power transformer that has been modeled using the electromagnetic transients program EMTDC. Results of the evaluation study show that the proposed wavelet-based differential protection scheme can discriminate internal faults from inrush currents. (author)
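
    The MAD criterion itself is easy to sketch with PyWavelets on synthetic waveforms; the wavelet, decomposition level, band choice, sampling rate and test signals below are illustrative assumptions, not the EMTDC cases used in the paper.

        import numpy as np
        import pywt

        def mad_criterion(i_diff, wavelet='db4', level=3):
            """Median absolute deviation of the detail coefficients in one chosen
            frequency band of the differential current (band choice illustrative)."""
            coeffs = pywt.wavedec(i_diff, wavelet, level=level)
            d = coeffs[1]                       # coarsest detail band retained here
            return np.median(np.abs(d - np.median(d)))

        fs = 5000.0
        t = np.arange(0, 0.2, 1 / fs)
        internal_fault = 50 * np.sin(2 * np.pi * 50 * t) * (t > 0.05)          # near-sinusoidal
        inrush = 30 * np.clip(np.sin(2 * np.pi * 50 * t), 0, None) ** 3        # distorted, gappy wave
        print("fault MAD :", mad_criterion(internal_fault))
        print("inrush MAD:", mad_criterion(inrush))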

  11. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    Science.gov (United States)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
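
    For reference, a compact sketch of the classical Givens-rotation QR factorization (real case only) is given below; this is the standard scheme that the heap-transform approach reorganizes, not the method of the paper itself.

        import numpy as np

        def givens_qr(A):
            """QR decomposition of a real matrix by zeroing subdiagonal entries one
            at a time with 2x2 Givens rotations."""
            R = A.astype(float)
            m, n = R.shape
            Q = np.eye(m)
            for j in range(n):
                for i in range(m - 1, j, -1):
                    a, b = R[i - 1, j], R[i, j]
                    r = np.hypot(a, b)
                    if r == 0.0:
                        continue
                    c, s = a / r, b / r
                    G = np.array([[c, s], [-s, c]])            # rotation acting on rows i-1, i
                    R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]  # zeroes R[i, j]
                    Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
            return Q, R

        A = np.random.rand(5, 3)
        Q, R = givens_qr(A)
        print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)), np.allclose(np.tril(R, -1), 0))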

  12. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    Science.gov (United States)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field-of-view, leading to the reappearing-target detection problem in target tracking applications. Since the target goes out of the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, DCCF, called the clutter rejection module; once a candidate is confirmed, the target coordinates are determined and the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.

  13. A Novel Radiation Transport Algorithm for Radiography Simulations

    International Nuclear Information System (INIS)

    Inanc, Feyzi

    2004-01-01

    The simulations used in the NDE community are becoming more realistic with the introduction of more physics. In this work, we have developed a new algorithm that is capable of representing photon and charged-particle fluxes through spherical harmonic expansions in a manner similar to the well-known discrete ordinates method, with the exception that the Boltzmann operator is treated through exact integration rather than conventional Legendre expansions. This approach provides a means to include radiation interactions for higher energy regimes where there are additional physical mechanisms for photons and charged particles

  14. Multi-resolution inversion algorithm for the attenuated radon transform

    KAUST Repository

    Barbano, Paolo Emilio

    2011-09-01

    We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed by combining a memory-efficient implementation of the analytical inversion formula (AIF [1], [2]) with a wavelet-based version of a recently discovered regularization technique [3]. The paper introduces all the main aspects of the new AIF, as well as numerical experiments on real and simulated data. These display a substantial improvement in reconstruction quality when compared to linear or iterative algorithms. © 2011 IEEE.

  15. Discrimination of unitary transformations in the Deutsch-Jozsa algorithm: Implications for thermal-equilibrium-ensemble implementations

    International Nuclear Information System (INIS)

    Collins, David

    2010-01-01

    A general framework for regarding oracle-assisted quantum algorithms as tools for discriminating among unitary transformations is described. This framework is applied to the Deutsch-Jozsa problem and all possible quantum algorithms which solve the problem with certainty using oracle unitaries in a particular form are derived. It is also used to show that any quantum algorithm that solves the Deutsch-Jozsa problem starting with a quantum system in a particular class of initial, thermal equilibrium-based states of the type encountered in solution-state NMR can only succeed with greater probability than a classical algorithm when the problem size n exceeds ∼10⁵.

  16. Hough transform used on the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Chia, Chou-Min; Huang, Kuang-Yuh; Chang, Elmer

    2016-01-01

    An approach to the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor (SHWS) is presented. The SHWS has a common problem, in that while measuring high-order wavefront distortion, the spots may exceed each of the subapertures, which are used to restrict the displacement of spots. This artificial restriction may limit the dynamic range of the SHWS. When using the SHWS to measure adaptive optics or aspheric lenses, the accuracy of the traditional spot-centroiding algorithm may be uncertain because the spots leave or cross the confined area of the subapertures. The proposed algorithm combines the Hough transform with an artificial neural network, which requires no confined subapertures, to increase the dynamic range of the SHWS. This algorithm is then explored in comprehensive simulations and the results are compared with those of the existing algorithm.

  17. Stable reduced-order models of generalized dynamical systems using coordinate-transformed Arnoldi algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, L.M.; Kamon, M.; Elfadel, I.; White, J. [Massachusetts Inst. of Technology, Cambridge, MA (United States)

    1996-12-31

    Model order reduction based on Krylov subspace iterative methods has recently emerged as a major tool for compressing the number of states in linear models used for simulating very large physical systems (VLSI circuits, electromagnetic interactions). There are currently two main methods for accomplishing such a compression: one is based on the nonsymmetric look-ahead Lanczos algorithm that gives a numerically stable procedure for finding Padé approximations, while the other is based on a less well characterized Arnoldi algorithm. In this paper, we show that for certain classes of generalized state-space systems, the reduced-order models produced by a coordinate-transformed Arnoldi algorithm inherit the stability of the original system. Complete proofs of our results will be given in the final paper.

  18. Fitting of two and three variant polynomials from experimental data through the least squares method. (Using of the codes AJUS-2D, AJUS-3D and LEGENDRE-2D); Ajuste de polinomios en dos y tres variables independientes por el metodo de minimos cuadrados. (Desarrollo de los codigos AJUS-2D, AJUS-3D y LEGENDRE-2D)

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Miro, J J; Sanz Martin, J C

    1994-07-01

    Obtaining polynomial fittings from observational data in two and three dimensions is an interesting and practical task. Such an arduous problem suggests the development of an automatic code. The main novelty we provide lies in the generalization of the classical least squares method in three FORTRAN 77 programs usable in any sampling problem. Furthermore, we introduce the orthogonal 2D-Legendre functions in the fitting process. These FORTRAN 77 programs are equipped with options to calculate the standard approximation-quality indicators, generalized to two and three dimensions (nonlinear correlation factor, confidence intervals, quadratic mean error, and so on). The aim of this paper is to rectify the absence of fitting algorithms for more than one independent variable in mathematical libraries. (Author) 10 refs.
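
    A present-day analogue of such a 2D Legendre least-squares fit can be sketched in a few lines with NumPy; the degree, test surface and noise level are arbitrary illustrative choices, and the snippet is unrelated to the AJUS/LEGENDRE-2D codes themselves.

        import numpy as np
        from numpy.polynomial import legendre as L

        rng = np.random.default_rng(42)

        # synthetic observations z(x, y) on [-1, 1]^2 with noise
        x, y = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
        z = 1.0 + 0.5 * x - 0.8 * x * y + 0.3 * (3 * y**2 - 1) / 2 + rng.normal(0, 0.05, 500)

        deg = (3, 3)                                   # maximum Legendre degree in x and in y
        V = L.legvander2d(x, y, deg)                   # design matrix of products P_i(x) P_j(y)
        c, *_ = np.linalg.lstsq(V, z, rcond=None)      # least-squares coefficients
        C = c.reshape(deg[0] + 1, deg[1] + 1)

        z_fit = L.legval2d(x, y, C)
        rms = np.sqrt(np.mean((z_fit - z) ** 2))       # quadratic mean error of the fit
        print("RMS residual:", round(rms, 4))
        print(np.round(C, 3))                          # dominant terms at (0,0), (1,0), (1,1), (0,2)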

  19. The Roadmaker's algorithm for the discrete pulse transform.

    Science.gov (United States)

    Laurie, Dirk P

    2011-02-01

    The discrete pulse transform (DPT) is a decomposition of an observed signal into a sum of pulses, i.e., signals that are constant on a connected set and zero elsewhere. Originally developed for 1-D signal processing, the DPT has recently been generalized to more dimensions. Applications in image processing are currently being investigated. The time required to compute the DPT as originally defined via the successive application of LULU operators (members of a class of minimax filters studied by Rohwer) has been a severe drawback to its applicability. This paper introduces a fast method for obtaining such a decomposition, called the Roadmaker's algorithm because it involves filling pits and razing bumps. It acts selectively only on those features actually present in the signal, flattening them in order of increasing size by subtracting an appropriate positive or negative pulse, which is then appended to the decomposition. The implementation described here covers 1-D signal processing as well as 2-D and 3-D image processing in a single framework. This is achieved by considering the signal or image as a function defined on a graph, with the geometry specified by the edges of the graph. Whenever a feature is flattened, nodes in the graph are merged, until eventually only one node remains. At that stage, a new set of edges for the same nodes as the graph, forming a tree structure, defines the obtained decomposition. The Roadmaker's algorithm is shown to be equivalent to the DPT in the sense of obtaining the same decomposition. However, its simpler operators are not in general equivalent to the LULU operators in situations where those operators are not applied successively. A by-product of the Roadmaker's algorithm is that it yields a proof of the so-called Highlight Conjecture, stated as an open problem in 2006. We pay particular attention to algorithmic details and complexity, including a demonstration that in the 1-D case, and also in the case of a complete graph, the Roadmaker

  20. Modified Legendre Wavelets Technique for Fractional Oscillation Equations

    Directory of Open Access Journals (Sweden)

    Syed Tauseef Mohyud-Din

    2015-10-01

    Full Text Available Physical phenomena located around us are primarily nonlinear in nature and their solutions are of the highest significance for scientists and engineers. In order to have a better representation of these physical models, fractional calculus is used. Fractional order oscillation equations are among these nonlinear phenomena. To tackle the nonlinearity arising in these phenomena, we recommend a new method. In the proposed method, Picard's iteration is used to convert the nonlinear fractional order oscillation equation into a fractional order recurrence relation, and then the Legendre wavelets method is applied to the converted problem. In order to check the efficiency and accuracy of the suggested modification, we have considered three problems, namely: the fractional order force-free Duffing–van der Pol oscillator, the forced Duffing–van der Pol oscillator and higher order fractional Duffing equations. The obtained results are compared with the results obtained via other techniques.

  1. Searching for continuous gravitational wave signals. The hierarchical Hough transform algorithm

    International Nuclear Information System (INIS)

    Papa, M.; Schutz, B.F.; Sintes, A.M.

    2001-01-01

    It is well known that matched filtering techniques cannot be applied for searching extensive parameter space volumes for continuous gravitational wave signals. This is the reason why alternative strategies are being pursued. Hierarchical strategies are best at investigating a large parameter space when there exist computational power constraints. Algorithms of this kind are being implemented by all the groups that are developing software for analyzing the data of the gravitational wave detectors that will come online in the next years. In this talk I will report about the hierarchical Hough transform method that the GEO 600 data analysis team at the Albert Einstein Institute is developing. The three step hierarchical algorithm has been described elsewhere [8]. In this talk I will focus on some of the implementational aspects we are currently concerned with. (author)

  2. Implementation in an FPGA circuit of Edge detection algorithm based on the Discrete Wavelet Transforms

    Science.gov (United States)

    Bouganssa, Issam; Sbihi, Mohamed; Zaim, Mounia

    2017-07-01

    The 2D Discrete Wavelet Transform (DWT) is a computationally intensive task that is usually implemented on specific architectures in many imaging systems in real time. In this paper, a high-throughput edge or contour detection algorithm is proposed based on the discrete wavelet transform. A technique for applying the filters on the three directions (horizontal, vertical and diagonal) of the image is used to present the maximum of the existing contours. The proposed architectures were designed in VHDL and mapped to a Xilinx Spartan-6 FPGA. The results of the synthesis show that the proposed architecture has a low area cost and can operate at up to 100 MHz, performing 2D wavelet analysis for a sequence of images while maintaining the flexibility of the system to support an adaptive algorithm.

  3. Legendre-tau approximation for functional differential equations. II - The linear quadratic optimal control problem

    Science.gov (United States)

    Ito, Kazufumi; Teglas, Russell

    1987-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  4. The Mehler-Fock Transform in Signal Processing

    Directory of Open Access Journals (Sweden)

    Reiner Lenz

    2017-06-01

    Full Text Available Many signals can be described as functions on the unit disk (ball). In the framework of group representations it is well-known how to construct Hilbert-spaces containing these functions that have the groups SU(1, N) as their symmetry groups. One illustration of this construction is three-dimensional color spaces in which chroma properties are described by points on the unit disk. A combination of principal component analysis and the Perron-Frobenius theorem can be used to show that perspective projections map positive signals (i.e., functions with positive values) to a product of the positive half-axis and the unit ball. The representation theory (harmonic analysis) of the group SU(1,1) leads to an integral transform, the Mehler-Fock-transform (MFT), that decomposes functions, depending on the radial coordinate only, into combinations of associated Legendre functions. This transformation is applied to kernel density estimators of probability distributions on the unit disk. It is shown that the transform separates the influence of the data and the measured data. The application of the transform is illustrated by studying the statistical distribution of RGB vectors obtained from a common set of object points under different illuminants.

  5. Solution of two-dimensional diffusion equation for hexagonal cells by the finite Fourier transformation

    International Nuclear Information System (INIS)

    Kobayashi, Keisuke

    1975-01-01

    A method of solution is presented for a monoenergetic diffusion equation in two-dimensional hexagonal cells by a finite Fourier transformation. Up to the present, solution by the finite Fourier transformation has been developed for x-y, r-z and x-y-z geometries, and the flux and current at the boundary are obtained in terms of Fourier series. It is shown here that the method can be applied to hexagonal cells and that expanding the boundary values in Legendre polynomials gives numerically higher accuracy than a Fourier series. (orig.)

  6. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

    Full Text Available In order to improve the robustness and imperceptibility in practical applications, a novel audio watermarking algorithm with strong robustness is proposed by exploring the multi-resolution characteristic of the discrete wavelet transform (DWT) and the energy compaction capability of the discrete cosine transform (DCT). The human auditory system is insensitive to minor changes in the frequency components of the audio signal, so the watermarks can be embedded by slightly modifying these frequency components. The audio fragments segmented from the cover audio signal are decomposed by DWT to obtain several groups of wavelet coefficients with different frequency bands, and then the fourth-level detail coefficient is selected and divided into a former packet and a latter packet, which are each processed by DCT to get two sets of transform domain coefficients (TDC), respectively. Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to a special embedding rule. The watermark extraction is blind, without the carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity and strong robustness when resisting various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition and noise corruption.
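    The embedding step described above can be prototyped in a few lines of Python. The sketch below only illustrates the idea (level-4 detail coefficients, two DCT packets, average amplitudes adjusted to encode one bit); the wavelet, the strength factor and the embedding rule are assumptions and do not reproduce the paper's exact rule or the extraction procedure.

```python
# Illustrative embedding sketch: DWT level-4 detail -> two DCT packets -> adjust the
# average amplitudes to encode a single watermark bit. Parameters are assumptions.
import numpy as np
import pywt
from scipy.fft import dct, idct

def embed_bit(segment, bit, strength=1.1):
    coeffs = pywt.wavedec(segment, 'db4', level=4)        # [cA4, cD4, cD3, cD2, cD1]
    d4 = coeffs[1]                                        # fourth-level detail band
    half = len(d4) // 2
    A = dct(d4[:half], norm='ortho')                      # former packet
    B = dct(d4[half:], norm='ortho')                      # latter packet
    mA, mB = np.mean(np.abs(A)), np.mean(np.abs(B))
    # Encode the bit in the ordering of the two average amplitudes.
    if bit == 1 and mA <= strength * mB:
        A *= strength * mB / max(mA, 1e-12)
    elif bit == 0 and mB <= strength * mA:
        B *= strength * mA / max(mB, 1e-12)
    coeffs[1] = np.concatenate([idct(A, norm='ortho'), idct(B, norm='ortho')])
    return pywt.waverec(coeffs, 'db4')[:len(segment)]

marked = embed_bit(np.random.randn(4096), 1)              # placeholder audio fragment
```

Extraction would repeat the decomposition and compare the two average amplitudes, which is what makes the scheme blind.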

  7. Study of a method to solve the one speed, three dimensional transport equation using the finite element method and the associated Legendre function

    International Nuclear Information System (INIS)

    Fernandes, A.

    1991-01-01

    A method to solve the three-dimensional neutron transport equation is presented, based on the original work suggested by J.K. Fletcher (42, 43). The angular dependence of the flux is approximated by associated Legendre functions and the finite element method is applied to the spatial components. When the angular flux, the scattering cross section and the neutron source are expanded in associated Legendre functions, the first-order neutron transport equation is reduced to a coupled set of second-order diffusion-like equations. These equations for the moments are solved iteratively by the finite element method. (author)

  8. Legendre condition and the stabilization problem for classical soliton solutions in generalized Skyrme models

    International Nuclear Information System (INIS)

    Kiknadze, N.A.; Khelashvili, A.A.

    1990-01-01

    The problem of the stability of classical soliton solutions is studied from a single point of view: the Legendre condition, a necessary condition for the existence of a weak local minimum of the energy functional, is used (the term soliton is used here in the wide sense). Limits on the parameters of the model Lagrangians are obtained; it is shown that in some of them there is no soliton stabilization despite the phenomenological achievements. The Jacobi sufficient condition is also discussed.

  9. Bound-preserving Legendre-WENO finite volume schemes using nonlinear mapping

    Science.gov (United States)

    Smith, Timothy; Pantano, Carlos

    2017-11-01

    We present a new method to enforce field bounds in high-order Legendre-WENO finite volume schemes. The strategy consists of reconstructing each field through an intermediate mapping, which by design satisfies realizability constraints. Determination of the coefficients of the polynomial reconstruction involves nonlinear equations that are solved using Newton's method. The selection between the original or mapped reconstruction is implemented dynamically to minimize computational cost. The method has also been generalized to fields that exhibit interdependencies, requiring multi-dimensional mappings. Further, the method does not depend on the existence of a numerical flux function. We will discuss details of the proposed scheme and show results for systems in conservation and non-conservation form. This work was funded by the NSF under Grant DMS 1318161.

  10. Discrete fourier transform (DFT) analysis for applications using iterative transform methods

    Science.gov (United States)

    Dean, Bruce H. (Inventor)

    2012-01-01

    According to various embodiments, a method is provided for determining aberration data for an optical system. The method comprises collecting a data signal, and generating a pre-transformation algorithm. The data is pre-transformed by multiplying the data with the pre-transformation algorithm. A discrete Fourier transform of the pre-transformed data is performed in an iterative loop. The method further comprises back-transforming the data to generate aberration data.

  11. Multiple Harmonics Fitting Algorithms Applied to Periodic Signals Based on Hilbert-Huang Transform

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2013-01-01

    Full Text Available A new generation of multipurpose measurement equipment is transforming the role of computers in instrumentation. The new features involve mixed devices, such as various kinds of sensors, analog-to-digital and digital-to-analog converters, and digital signal processing techniques, that are able to substitute for typical discrete instruments like multimeters and analyzers. Signal-processing applications frequently use least-squares (LS) sine-fitting algorithms. Periodic signals may be interpreted as a sum of sine waves with multiple frequencies: the Fourier series. This paper describes a new sine-fitting algorithm that is able to fit a multiharmonic acquired periodic signal. By means of a “sinusoidal wave” whose amplitude and phase are both transient, the “triangular wave” can be reconstructed on the basis of the Hilbert-Huang transform (HHT). This method can be used to test the effective number of bits (ENOB) of an analog-to-digital converter (ADC), avoiding the trouble of selecting initial values of the parameters and solving the nonlinear equations. The simulation results show that the algorithm is precise and efficient. With enough sampling points, even for a low-resolution signal with harmonic distortion, the root mean square (RMS) error between the sampled data of the original “triangular wave” and the corresponding points of the fitted “sinusoidal wave” is remarkably small. This suggests that, for any periodic signal, the ENOB of a high-resolution ADC can be tested accurately.
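    The classical least-squares ingredient of the method, fitting a truncated Fourier series to a periodic record, is easy to sketch. The snippet below is a generic multi-harmonic LS fit in Python/NumPy, not the HHT-based reconstruction itself, and it assumes the fundamental frequency is known.

```python
# Generic multi-harmonic least-squares fit of a periodic signal (Fourier-series model).
# This shows only the LS fitting ingredient discussed above; f0 is assumed known.
import numpy as np

def multiharmonic_fit(t, x, f0, n_harmonics):
    # Design matrix: DC term plus cos/sin pairs for each harmonic k*f0.
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * f0 * t))
        cols.append(np.sin(2 * np.pi * k * f0 * t))
    A = np.column_stack(cols)
    params, *_ = np.linalg.lstsq(A, x, rcond=None)
    return params, A @ params                    # coefficients and fitted signal

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sign(np.sin(2 * np.pi * 5 * t)) + 0.05 * np.random.randn(t.size)
params, fit = multiharmonic_fit(t, x, 5.0, 7)
rms_error = np.sqrt(np.mean((x - fit) ** 2))     # residual used, e.g., for ENOB-style tests
```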

  12. Partial discharge localization in power transformers based on the sequential quadratic programming-genetic algorithm adopting acoustic emission techniques

    Science.gov (United States)

    Liu, Hua-Long; Liu, Hua-Dong

    2014-10-01

    Partial discharge (PD) in power transformers is one of the prime causes of insulation degradation and power faults. Hence, it is of great importance to study techniques for the detection and localization of PD in theory and practice. The detection and localization of PD employing acoustic emission (AE) techniques, a kind of non-destructive testing, has received more and more attention owing to its powerful locating capability and high precision. The localization algorithm is the key factor deciding the localization accuracy in AE localization of PD. Many kinds of localization algorithms exist for PD source localization adopting AE techniques, including intelligent and non-intelligent algorithms. However, the existing algorithms have defects such as premature convergence, poor local optimization ability and unsuitability for field applications. To overcome the poor local optimization ability and the easily induced premature convergence of the fundamental genetic algorithm (GA), an improved GA, the sequential quadratic programming-genetic algorithm (SQP-GA), is proposed. In this hybrid optimization algorithm, the sequential quadratic programming (SQP) algorithm is integrated into the fundamental GA as a basic operator, so the local searching ability of the fundamental GA is improved effectively and the premature convergence phenomenon is overcome. Experimental results from numerical simulations of benchmark functions show that the hybrid SQP-GA is better than the fundamental GA in convergence speed and optimization precision, and that the proposed algorithm has an outstanding optimization effect. The SQP-GA is then applied to the ultrasonic localization problem of PD in transformers, and an ultrasonic localization method of PD in transformers based on the SQP-GA is proposed.
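    The hybrid idea, a genetic algorithm whose best individuals are polished by a sequential quadratic programming step, can be sketched independently of the acoustic localization model. The following Python sketch couples a plain GA with SciPy's SLSQP solver on the Rosenbrock benchmark; all GA parameters are assumptions and the PD time-of-arrival objective is not reproduced.

```python
# Sketch of an SQP-GA hybrid: a simple GA refines its best individual with an SQP
# (SLSQP) local search each generation. Shown on a benchmark, not on the acoustic
# PD localization objective; population size, rates and bounds are assumptions.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def sqp_ga(obj, dim=2, pop=40, gens=50, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, (pop, dim))
    for _ in range(gens):
        fitness = np.array([obj(p) for p in P])
        P = P[np.argsort(fitness)]                       # sort: best individual first
        # SQP refinement of the current best individual (the hybrid step).
        P[0] = minimize(obj, P[0], method='SLSQP', bounds=[(lo, hi)] * dim).x
        parents = P[: pop // 2]
        # Arithmetic crossover plus Gaussian mutation for the other half.
        kids = 0.5 * (parents + parents[rng.permutation(len(parents))])
        kids += 0.1 * rng.standard_normal(kids.shape)
        P = np.vstack([parents, np.clip(kids, lo, hi)])
    return P[0], obj(P[0])

best, value = sqp_ga(rosenbrock)
```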

  13. Induced Voltages Ratio-Based Algorithm for Fault Detection, and Faulted Phase and Winding Identification of a Three-Winding Power Transformer

    Directory of Open Access Journals (Sweden)

    Byung Eun Lee

    2014-09-01

    Full Text Available This paper proposes an algorithm for fault detection, faulted phase and winding identification of a three-winding power transformer based on the induced voltages in the electrical power system. The ratio of the induced voltages of the primary-secondary, primary-tertiary and secondary-tertiary windings is the same as the corresponding turns ratio during normal operating conditions, magnetic inrush, and over-excitation. It differs from the turns ratio during an internal fault. For a single-phase and a three-phase power transformer with wye-connected windings, the induced voltages of each pair of windings are estimated. For a three-phase power transformer with delta-connected windings, the induced voltage differences are estimated using the line currents, because the delta winding currents are practically unavailable. Six detectors are suggested for fault detection. An additional three detectors and a rule for faulted phase and winding identification are presented as well. The proposed algorithm can not only detect an internal fault, but also identify the faulted phase and winding of a three-winding power transformer. The various test results with Electromagnetic Transients Program (EMTP)-generated data show that the proposed algorithm successfully discriminates internal faults from normal operating conditions including magnetic inrush and over-excitation. This paper concludes by implementing the algorithm into a prototype relay based on a digital signal processor.

  14. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    Science.gov (United States)

    Ou, Linjun; Cao, Jian

    2014-09-01

    In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving-average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was achieved by using multiple iterations of the moving average of the signal curves and square wave curves to determine the optimal values of the normalized peak identification parameters, combined with the absolute peak retention times and peak windows. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise, chromatographic peak width or peak shape changes. It has strong adaptability and meets the on-site requirements of online monitoring devices for dissolved gases in transformer oil.

  15. Legendre-tau approximation for functional differential equations. Part 2: The linear quadratic optimal control problem

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1984-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  16. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    Science.gov (United States)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve the slow processing speed of the classical image encryption algorithms and enhance the security of the private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by the Chen's hyper-chaotic system are scrambled and diffused with three components of the original color image. Sequentially, the quantum Fourier transform is exploited to fulfill the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses large key space to resist illegal attacks, sensitive dependence on initial keys, uniform distribution of gray values for the encrypted image and weak correlation between two adjacent pixels in the cipher-image.

  17. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    Science.gov (United States)

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With the improvement of remote sensing image acquisition, featuring hyperspectral, high spatial resolution and high temporal resolution data, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and research hotspot of current image processing technology. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. CUFFT is an FFT function library based on the GPU, while FFTW is an FFT algorithm library developed for the CPU on the PC platform and is currently the fastest CPU-based FFT function library. However, both methods share a common problem: once the available video memory or main memory is smaller than the image, an out-of-memory error or memory overflow occurs when using them to compute the image FFT. To address this problem, a GPU and partitioning technology based Huge Remote Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT function library, the problem of out-of-memory errors and memory overflow is solved. Moreover, this method is validated by experiments with CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the effect of the image processing and speeds it up, which saves computation time and achieves sound results.
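    The partitioning idea rests on the separability of the 2D FFT: transform the rows in blocks, then the columns in blocks, so that only one block needs to be resident at a time. The sketch below shows the ordering with an in-memory NumPy array; the HRFFT algorithm itself runs the per-block transforms with CUFFT on out-of-core tiles, which is not reproduced here.

```python
# Partitioned 2D FFT sketch: exploit separability and transform rows, then columns,
# block by block. Here everything is in memory purely to demonstrate the ordering.
import numpy as np

def chunked_fft2(image, chunk=256):
    out = image.astype(np.complex128)
    # Pass 1: FFT along rows, processed in row blocks.
    for r in range(0, out.shape[0], chunk):
        out[r:r + chunk, :] = np.fft.fft(out[r:r + chunk, :], axis=1)
    # Pass 2: FFT along columns, processed in column blocks.
    for c in range(0, out.shape[1], chunk):
        out[:, c:c + chunk] = np.fft.fft(out[:, c:c + chunk], axis=0)
    return out

img = np.random.rand(1024, 1024)
assert np.allclose(chunked_fft2(img), np.fft.fft2(img))   # same result as a one-shot FFT
```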

  18. NDT applications of the 3D radon transform algorithm for cone beam reconstruction

    International Nuclear Information System (INIS)

    Sire, P.; Grangeat, P.; Lemasson, P.; Molennec, P.; Rizo, P.

    1990-01-01

    The paper describes the authors' 3D X-ray CT algorithm RADON, which uses attenuation measurements acquired with a bidimensional detector. The inversion scheme uses the first derivative of the Radon transform: its synthesis, then its inversion. The potential of this new method, particularly for large apertures, prompted the authors to develop optimized software offering convenience and high performance on a modern scientific computer. After a brief recall of the basic principles of X-ray image processing, the authors introduce the theoretical developments resulting in the present inversion scheme. A general algorithm structure is proposed afterwards. As a conclusion, the authors present the performance and the results obtained with ceramic rotor examinations.

  19. Control algorithms based on the active and non-active currents for a UPQC without series transformers

    OpenAIRE

    Correa Monteiro, Luis Fernando; Aredes, Mauricio; Pinto, J. G.; Exposto, Bruno; Afonso, João L.

    2016-01-01

    This study presents control algorithms for a new unified power quality conditioner (UPQC) without the series transformers that are frequently used to make the insertion of the series converter of the UPQC between the power supply and the load. The behaviour of the proposed UPQC is evaluated in presence of voltage imbalances, as well as under non-sinusoidal voltage-and current conditions. The presented algorithms derive from the concepts involving the active and non-active currents, together w...

  20. Fitting of two and three variant polynomials from experimental data through the least squares method. (Using of the codes AJUS-2D, AJUS-3D and LEGENDRE-2D)

    International Nuclear Information System (INIS)

    Sanchez Miro, J. J.; Sanz Martin, J. C.

    1994-01-01

    Obtaining polynomial fittings from observational data in two and three dimensions is an interesting and practical task. Such an arduous problem suggests the development of an automatic code. The main novelty we provide lies in the generalization of the classical least squares method in three FORTRAN 77 programs usable in any sampling problem. Furthermore, we introduce the orthogonal 2D-Legendre functions in the fitting process. These FORTRAN 77 programs are equipped with options to calculate the standard indicators of approximation quality, generalized to two and three dimensions (nonlinear correlation factor, confidence intervals, quadratic mean error, and so on). The aim of this paper is to remedy the absence of fitting algorithms for more than one independent variable in mathematical libraries. (Author) 10 refs

  1. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among which the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps. (1) Transform an image by a two-level DWT followed by a DCT to produce two matrices: the DC-Matrix and AC-Matrix, i.e. low- and high-frequency matrices, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients into one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.

  2. Algorithm for three dimension reconstruction of magnetic resonance tomographs and X-ray images based on Fast Fourier Transform

    International Nuclear Information System (INIS)

    Bueno, Josiane M.; Traina, Agma Juci M.; Cruvinel, Paulo E.

    1995-01-01

    This work presents an algorithm for three-dimensional digital image reconstruction. The algorithm is based on the combination of a Fast Fourier Transform method with a Hamming window and the use of a tri-linear interpolation function. The algorithm allows not only the generation of three-dimensional spatial spin distribution maps for Magnetic Resonance Tomography data but also X- and γ-ray linear attenuation coefficient maps for CT scanners. The results demonstrate the usefulness of the algorithm in three-dimensional image reconstruction by first performing two-dimensional reconstructions and then interpolating. The algorithm was developed in the C++ language, and two versions are available: one for the DOS environment and the other for the UNIX/Sun environment. (author)

  3. A quantum search algorithm of two entangled registers to realize quantum discrete Fourier transform of signal processing

    International Nuclear Information System (INIS)

    Pang Chaoyang; Hu Benqiong

    2008-01-01

    The discrete Fourier transform (DFT) is the basis of modern signal processing. The 1-dimensional fast Fourier transform (1D FFT) and the 2D FFT have time complexity O(N log N) and O(N^2 log N), respectively. Since 1965, there has been no further essential breakthrough in the design of fast DFT algorithms. The DFT has two properties. One property is that the DFT is an energy-conserving transform. The other is that many DFT coefficients are close to zero. The basic idea of this paper is that the generalized Grover iteration can perform the computation of the DFT acting on entangled states to search for the big DFT coefficients until these big coefficients contain nearly all the energy. One-dimensional quantum DFT (1D QDFT) and two-dimensional quantum DFT (2D QDFT) are presented in this paper. A quantum algorithm for convolution estimation is also presented. Compared with the FFT, 1D and 2D QDFT have time complexity O(√N) and O(N), respectively. QDFT and quantum convolution demonstrate that quantum computation for processing classical signals is possible. (general)

  4. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.

    Science.gov (United States)

    Gregoire, John M; Dale, Darren; van Dover, R Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition-spread thin film. These datasets have different types of background signals, which are effectively removed in the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
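    SciPy ships a continuous-wavelet-transform peak finder in the same spirit, which gives a quick way to reproduce the flavour of the approach on synthetic diffraction data. The snippet below is not the authors' implementation; the width range is an assumption that should roughly match the expected Bragg-peak widths in samples.

```python
# CWT-based peak detection on a synthetic powder pattern using SciPy's built-in
# find_peaks_cwt (same spirit as the method above, not the authors' code).
import numpy as np
from scipy.signal import find_peaks_cwt

two_theta = np.linspace(10, 80, 3500)
pattern = np.random.poisson(50, two_theta.size).astype(float)      # noisy background
for center in (22.0, 38.5, 44.7):                                  # synthetic Bragg peaks
    pattern += 400 * np.exp(-0.5 * ((two_theta - center) / 0.15) ** 2)

peak_idx = find_peaks_cwt(pattern, widths=np.arange(3, 30))
print(two_theta[peak_idx])                                         # detected peak positions
```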

  5. Adaption of optical Fresnel transform to optical Wigner transform

    International Nuclear Information System (INIS)

    Lv Cuihong; Fan Hongyi

    2010-01-01

    Enlightened by the algorithmic isomorphism between the rotation of the Wigner distribution function (WDF) and the αth fractional Fourier transform, we show that the optical Fresnel transform performed on the input through an ABCD system makes the output naturally adapted to the associated Wigner transform, i.e. there exists an algorithmic isomorphism between the ABCD transformation of the WDF and the optical Fresnel transform. We prove this adaption in the context of operator language. Both the single-mode and the two-mode Fresnel operators, as the image of the classical Fresnel transform, are introduced in our discussion, while the two-mode Wigner operator in the entangled state representation is introduced for fitting the two-mode Fresnel operator.

  6. Partial fingerprint identification algorithm based on the modified generalized Hough transform on mobile device

    Science.gov (United States)

    Qin, Jin; Tang, Siqi; Han, Congying; Guo, Tiande

    2018-04-01

    Partial fingerprint identification technology, which is mainly used in devices with a small sensor area such as cellphones, USB drives and computers, has attracted more attention in recent years owing to its unique advantages. However, owing to the lack of sufficient minutiae points, conventional methods do not perform well in this situation. We propose a new fingerprint matching technique which utilizes ridges as features to deal with partial fingerprint images and combines the modified generalized Hough transform with a scoring strategy based on machine learning. The algorithm can effectively meet the real-time and space-saving requirements of resource-constrained devices. Experiments on an in-house database indicate that the proposed algorithm has excellent performance.

  7. Adaptive discrete cosine transform coding algorithm for digital mammography

    Science.gov (United States)

    Baskurt, Atilla M.; Magnin, Isabelle E.; Goutte, Robert

    1992-09-01

    The need for storage, transmission, and archiving of medical images has led researchers to develop adaptive and efficient data compression techniques. Among medical images, x-ray radiographs of the breast are especially difficult to process because of their particularly low contrast and very fine structures. A block adaptive coding algorithm based on the discrete cosine transform to compress digitized mammograms is described. A homogeneous repartition of the degradation in the decoded images is obtained using a spatially adaptive threshold. This threshold depends on the coding error associated with each block of the image. The proposed method is tested on a limited number of pathological mammograms including opacities and microcalcifications. A comparative visual analysis is performed between the original and the decoded images. Finally, it is shown that data compression with rather high compression rates (11 to 26) is possible in the mammography field.

  8. Transformer Protection Using the Wavelet Transform

    OpenAIRE

    ÖZGÖNENEL, Okan; ÖNBİLGİN, Güven; KOCAMAN, Çağrı

    2014-01-01

    This paper introduces a novel approach for power transformer protection algorithm. Power system signals such as current and voltage have traditionally been analysed by the Fast Fourier Transform. This paper aims to prove that the Wavelet Transform is a reliable and computationally efficient tool for distinguishing between the inrush currents and fault currents. The simulated results presented clearly show that the proposed technique for power transformer protection facilitates the a...

  9. A Fast Mellin and Scale Transform

    Directory of Open Access Journals (Sweden)

    Davide Rocchesso

    2007-01-01

    Full Text Available A fast algorithm for the discrete-scale (and β-Mellin) transform is proposed. It performs a discrete-time discrete-scale approximation of the continuous-time transform, with subquadratic asymptotic complexity. The algorithm is based on a well-known relation between the Mellin and Fourier transforms, and it is practical and accurate. The paper gives some theoretical background on the Mellin, β-Mellin, and scale transforms. Then the algorithm is presented and analyzed in terms of computational complexity and precision. The effects of different interpolation procedures used in the algorithm are discussed.
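    The Mellin-Fourier relation mentioned above can be demonstrated directly: resampling the signal on an exponential time lattice, weighting it, and applying an FFT yields samples of the scale transform. The sketch below illustrates that relation in Python/NumPy; it is not the fast algorithm of the paper, and the lattice size is an assumption.

```python
# Scale transform (Mellin transform restricted to the line Re(s) = 1/2) via the
# Mellin-Fourier relation: exponential resampling + weighting + FFT. This illustrates
# the relation the fast algorithm is built on; it is not the paper's algorithm itself.
import numpy as np

def scale_transform(f, t, n=1024):
    # Exponential lattice t = exp(tau) over the positive support of the signal.
    tau = np.linspace(np.log(t[1]), np.log(t[-1]), n)
    dtau = tau[1] - tau[0]
    g = np.interp(np.exp(tau), t, f) * np.exp(tau / 2)     # f(e^tau) * e^(tau/2)
    c = 2 * np.pi * np.fft.fftfreq(n, d=dtau)              # scale variable c
    D = np.fft.fft(g) * dtau * np.exp(-1j * c * tau[0]) / np.sqrt(2 * np.pi)
    return c, D

t = np.linspace(0, 10, 4000)
f = np.exp(-t) * np.sin(5 * t)
c, D = scale_transform(f, t)       # |D| is invariant under time scaling of f
```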

  11. A summation formula over the zeros of a combination of the associated Legendre functions with a physical application

    International Nuclear Information System (INIS)

    Saharian, A A

    2009-01-01

    By using the generalized Abel-Plana formula, we derive a summation formula for the series over the zeros of a combination of the associated Legendre functions with respect to the degree. The summation formula for the series over the zeros of the combination of the Bessel functions, previously discussed in the literature, is obtained as a limiting case. As an application we evaluate the Wightman function for a scalar field with a general curvature coupling parameter in the region between concentric spherical shells on a background of constant negative curvature space. For the Dirichlet boundary conditions the corresponding mode-sum contains the series over the zeros of the combination of the associated Legendre functions. The application of the summation formula allows us to present the Wightman function in the form of the sum of two integrals. The first one corresponds to the Wightman function for the geometry of a single spherical shell and the second one is induced by the presence of the second shell. The boundary-induced part in the vacuum expectation value of the field squared is investigated. For points away from the boundaries the corresponding renormalization procedure is reduced to that for the boundary-free part.

  12. Deconvolution, differentiation and Fourier transformation algorithms for noise-containing data based on splines and global approximation

    NARCIS (Netherlands)

    Wormeester, Herbert; Sasse, A.G.B.M.; van Silfhout, Arend

    1988-01-01

    One of the main problems in the analysis of measured spectra is how to reduce the influence of noise in data processing. We show a deconvolution, a differentiation and a Fourier Transform algorithm that can be run on a small computer (64 K RAM) and suffer less from noise than commonly used routines.

  13. Numerical study of nonlinear singular fractional differential equations arising in biology by operational matrix of shifted Legendre polynomials

    Directory of Open Access Journals (Sweden)

    D. Jabari Sabeg

    2016-10-01

    Full Text Available In this paper, we present a new computational method for solving nonlinear singular boundary value problems of fractional order arising in biology. To this end, we apply the operational matrices of derivatives of shifted Legendre polynomials to reduce such problems to a system of nonlinear algebraic equations. To demonstrate the validity and applicability of the presented method, we present some numerical examples.

  14. Method of moments solution of volume integral equations using higher-order hierarchical Legendre basis functions

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Jørgensen, Erik; Meincke, Peter

    2004-01-01

    An efficient higher-order method of moments (MoM) solution of volume integral equations is presented. The higher-order MoM solution is based on higher-order hierarchical Legendre basis functions and higher-order geometry modeling. An unstructured mesh composed of 8-node trilinear and/or curved 27...... of magnitude in comparison to existing higher-order hierarchical basis functions. Consequently, an iterative solver can be applied even for high expansion orders. Numerical results demonstrate excellent agreement with the analytical Mie series solution for a dielectric sphere as well as with results obtained...

  15. Spherical space Bessel-Legendre-Fourier mode solver for Maxwell's wave equations

    Science.gov (United States)

    Alzahrani, Mohammed A.; Gauthier, Robert C.

    2015-02-01

    For spherically symmetric dielectric structures, a basis set composed of Bessel, Legendre and Fourier functions, BLF, is used to cast Maxwell's wave equations into an eigenvalue problem from which the localized modes can be determined. The steps leading to the eigenmatrix are reviewed, and techniques used to reduce the order of the matrix and tune the computations for particular mode types are detailed. The BLF basis functions are used to expand the electric and magnetic fields as well as the inverse relative dielectric profile. Similar to the common plane wave expansion technique, the BLF matrix returns the eigenfrequencies and eigenvectors, but in BLF only steady, non-propagating states are obtained. The technique is first applied to an air-filled spherical structure with a perfectly conducting outer surface and then to a spherical microsphere located in air. Results are compared with published values where possible.

  16. Efficient Algorithm and Architecture of Critical-Band Transform for Low-Power Speech Applications

    Directory of Open Access Journals (Sweden)

    Gan Woon-Seng

    2007-01-01

    Full Text Available An efficient algorithm and its corresponding VLSI architecture for the critical-band transform (CBT) are developed to approximate the critical-band filtering of the human ear. The CBT consists of a constant-bandwidth transform in the lower frequency range and a Brown constant-Q transform (CQT) in the higher frequency range. The corresponding VLSI architecture is proposed to achieve significant power efficiency by reducing the computational complexity, using pipeline and parallel processing, and applying the supply voltage scaling technique. A 21-band Bark-scale CBT processor with a sampling rate of 16 kHz is designed and simulated. Simulation results verify its suitability for performing short-time spectral analysis on speech. It provides a better fit to the human ear critical-band analysis, requires significantly fewer computations, and is therefore more energy-efficient than other methods. With a 0.35 μm CMOS technology, it processes a 160-point speech frame in 4.99 milliseconds at 234 kHz. The power dissipation is 15.6 μW at 1.1 V. It achieves 82.1% power reduction as compared to a benchmark 256-point FFT processor.

  17. Algorithm, applications and evaluation for protein comparison by Ramanujan Fourier transform.

    Science.gov (United States)

    Zhao, Jian; Wang, Jiasong; Hua, Wei; Ouyang, Pingkai

    2015-12-01

    The amino acid sequence of a protein determines its chemical properties, chain conformation and biological functions. Protein sequence comparison is of great importance to identify similarities of protein structures and infer their functions. Many properties of a protein correspond to the low-frequency signals within the sequence. Low frequency modes in protein sequences are linked to the secondary structures, membrane protein types, and sub-cellular localizations of the proteins. In this paper, we present Ramanujan Fourier transform (RFT) with a fast algorithm to analyze the low-frequency signals of protein sequences. The RFT method is applied to similarity analysis of protein sequences with the Resonant Recognition Model (RRM). The results show that the proposed fast RFT method on protein comparison is more efficient than commonly used discrete Fourier transform (DFT). RFT can detect common frequencies as significant feature for specific protein families, and the RFT spectrum heat-map of protein sequences demonstrates the information conservation in the sequence comparison. The proposed method offers a new tool for pattern recognition, feature extraction and structural analysis on protein sequences. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted from the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Points (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
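    The SVD ingredient, fitting a line to a window of scan points in the total-least-squares sense, is compact enough to sketch on its own. The snippet below is only that ingredient; the hopping-points/reverse-hop logic and the Hough accumulation of the full algorithm are not reproduced.

```python
# Total-least-squares line fit to 2D laser-range points via SVD (the core ingredient
# of the line extractor above; hopping and Hough steps omitted).
import numpy as np

def svd_line_fit(points):
    centroid = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[1]                        # line normal (vt[0] is the line direction)
    d = normal @ centroid                 # Hesse normal form: normal . x = d
    return normal, d, s[1]                # s[1] measures the perpendicular scatter

pts = np.column_stack([np.linspace(0, 1, 50), 2 * np.linspace(0, 1, 50) + 0.5])
pts += 0.01 * np.random.randn(*pts.shape)
normal, d, scatter = svd_line_fit(pts)
```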

  19. The fractional Fourier transform and applications

    Science.gov (United States)

    Bailey, David H.; Swarztrauber, Paul N.

    1991-01-01

    This paper describes the 'fractional Fourier transform', which admits computation by an algorithm that has complexity proportional to the fast Fourier transform algorithm. Whereas the discrete Fourier transform (DFT) is based on integral roots of unity e^(-2πi/n), the fractional Fourier transform is based on fractional roots of unity e^(-2πiα), where α is arbitrary. The fractional Fourier transform and the corresponding fast algorithm are useful for such applications as computing DFTs of sequences with prime lengths, computing DFTs of sparse sequences, analyzing sequences with noninteger periodicities, performing high-resolution trigonometric interpolation, detecting lines in noisy images, and detecting signals with linearly drifting frequencies. In many cases, the resulting algorithms are faster by arbitrarily large factors than conventional techniques.
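    The quantity being computed is simply G_k = Σ_j g_j e^(-2πi j k α); the paper's contribution is evaluating it in FFT-like time via a chirp (Bluestein-style) factorization. The direct O(N^2) evaluation below pins down the definition and reduces to the ordinary DFT when α = 1/N; it is a reference sketch, not the fast algorithm.

```python
# Direct O(N^2) evaluation of the fractional Fourier transform in this sense:
# G[k] = sum_j g[j] * exp(-2*pi*i*j*k*alpha). Reference definition only; the fast
# algorithm reaches FFT-like cost via a Bluestein-style chirp factorization.
import numpy as np

def frft(g, alpha):
    n = len(g)
    j = np.arange(n)
    kernel = np.exp(-2j * np.pi * alpha * np.outer(j, j))
    return kernel @ g

g = np.random.randn(64)
assert np.allclose(frft(g, 1.0 / 64), np.fft.fft(g))   # alpha = 1/N gives the usual DFT
```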

  20. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    Science.gov (United States)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. Firstly, under the condition of a ball integral light source, a perfect glass bottle mouth image is acquired by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain the binary image of the glass bottle mouth. In order to efficiently suppress noise, a moving-average filter is employed to smooth the histogram of the original glass bottle mouth image. Then a continuous wavelet transform is performed to accurately determine the segmentation threshold. Mathematical morphology operations are used to get a normal binary bottle mouth mask. A glass bottle to be inspected is moved to the detection zone by a conveyor belt. Both the bottle mouth image and the binary image are obtained by the above method. The binary image is multiplied with the normal bottle mask and a region of interest is obtained. Four parameters (number of connected regions, coordinates of the centroid position, diameter of the inner circle, and area of the annular region) can be computed from the region of interest. Glass bottle mouth detection rules are designed from these four parameters so as to accurately detect and identify the defect conditions of the glass bottle. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles and has 98% detection accuracy.

  1. Reconstruction of convex bodies from moments

    DEFF Research Database (Denmark)

    Hörrmann, Julia; Kousholt, Astrid

    We investigate how much information about a convex body can be retrieved from a finite number of its geometric moments. We give a sufficient condition for a convex body to be uniquely determined by a finite number of its geometric moments, and we show that among all convex bodies, those which ... algorithm that approximates a convex body using a finite number of its Legendre moments. The consistency of the algorithm is established using the stability result for Legendre moments. When only noisy measurements of Legendre moments are available, the consistency of the algorithm is established under...

  2. Group-invariant finite Fourier transforms

    International Nuclear Information System (INIS)

    Shenefelt, M.H.

    1988-01-01

    The computation of the finite Fourier transform of functions is one of the most used computations in crystallography. Since the Fourier transform involved is 3-dimensional, the size of the computation becomes very large even for relatively few sample points along each edge. In this thesis, a family of algorithms is presented that reduces the computation of the Fourier transform of functions respecting the symmetries. Some properties of these algorithms are: (1) The algorithms make full use of the group of symmetries of a crystal. (2) The algorithms can be factored and combined according to the prime factorization of the number of points in the sample space. (3) The algorithms are organized into a family using the group structure of the crystallographic groups to make iterative procedures possible

  3. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    Science.gov (United States)

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in the process of estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to prevent it from falling into a local optimum. Legendre polynomials were used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimization algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that of the improved entropy minimization algorithm. This algorithm can be applied to the correction of MR image bias fields.
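    The adaptive-inertia idea is easy to sketch outside the MR setting: monitor an indicator of premature convergence each iteration and raise the inertia weight when the swarm starts to collapse. In the sketch below the indicator (normalized spread of particle fitness) and all constants are assumptions, and a toy objective stands in for the Legendre-polynomial bias-field model.

```python
# Generic PSO with an adaptively adjusted inertia weight. The convergence indicator
# and constants are assumptions; the paper applies the idea to Legendre-polynomial
# bias-field coefficients rather than this toy objective.
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

def adaptive_pso(obj, dim=10, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), obj(x)
    for _ in range(iters):
        f = obj(x)
        # Premature-convergence indicator: small fitness spread -> raise the inertia.
        spread = (f.max() - f.min()) / (abs(f.max()) + 1e-12)
        w = 0.4 + 0.5 * np.exp(-5.0 * spread)            # adaptive inertia in [0.4, 0.9]
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
        x = x + v
    return gbest, obj(gbest)

best, value = adaptive_pso(sphere)
```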

  4. A coordinate transform method for one-speed neutron transport in composite slabs

    International Nuclear Information System (INIS)

    Haidar, N.H.S.

    1988-01-01

    The optical path transformation is applied to reduce the one-speed neutron transport equation for a class of composite subcritical slabs to single-region problems. The class idealises, within the uncertainty of the one-speed model, a variety of practical situations such as U-D2O-C-Zr-Pb or Pu-U-Na-Fe symmetric reactor assemblies, which may possibly contain a symmetrically anisotropic neutron source. A closed-form double series solution, which turns out to be quite convenient for design and optimisation purposes, has been obtained in terms of discontinuous functions for the multi-regional angular flux by application of a double finite Legendre transform. Disadvantage factor evaluations for a U-C lattice cell resulting from a low-order P0P1 approximation of this method are found to be in full agreement with hybrid diffusion-transport estimates. (author)

  5. An Algorithm of Building Extraction in Urban Area Based on Improved Top-hat Transformations and LBP Elevation Texture

    Directory of Open Access Journals (Sweden)

    HE Manyun

    2017-09-01

    Full Text Available Buildings and vegetation are difficult to classify using LiDAR data alone, and vegetation in shadows cannot be eliminated using aerial images alone. Improved top-hat transformations and local binary pattern (LBP) elevation texture analysis for building extraction are therefore proposed based on the fusion of aerial images and LiDAR data. Firstly, the LiDAR data are reorganized into grid cells and the algorithm removes ground points through a top-hat transform. Then, the vegetation points are extracted by the normalized difference vegetation index (NDVI). Thirdly, according to the elevation information of the LiDAR points, LBP elevation texture is calculated, achieving precise elimination of vegetation in shadows or surrounding the buildings. At last, morphological operations are used to fill the holes in building roofs, and region growing is applied to complete the building edges. The simulation is based on the complex urban area in the Vaihingen benchmark provided by ISPRS, and the results show that the algorithm affords higher classification accuracy.
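    Two of the filtering ingredients, the top-hat transform on a rasterized elevation grid and the NDVI mask, can be prototyped directly with SciPy and NumPy. The sketch below uses synthetic rasters, an assumed structuring-element size and assumed thresholds; the LBP elevation-texture step and the morphological post-processing are not reproduced.

```python
# Sketch of two ingredients of the pipeline above: a white top-hat on a rasterized
# elevation grid to isolate above-ground objects, and NDVI to mask vegetation.
# Window size and thresholds are assumptions; the LBP texture step is omitted.
import numpy as np
from scipy import ndimage

dsm = np.random.rand(200, 200) * 2.0          # placeholder elevation grid (metres)
dsm[80:120, 80:140] += 10.0                   # a synthetic building block
nir = np.random.rand(200, 200)                # placeholder near-infrared band
red = np.random.rand(200, 200)                # placeholder red band

# White top-hat: elevation minus its grayscale opening, i.e. objects narrower than the
# structuring element (31x31 cells here) that stand above the surrounding ground.
above_ground = ndimage.white_tophat(dsm, size=(31, 31)) > 2.0

# NDVI = (NIR - Red) / (NIR + Red); large values indicate vegetation.
ndvi = (nir - red) / (nir + red + 1e-12)
vegetation = ndvi > 0.3

building_candidates = above_ground & ~vegetation
```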

  6. Monte Carlo Calculation of Sensitivities to Secondaries' Angular Distributions

    International Nuclear Information System (INIS)

    Perel, R.L.

    2003-01-01

    An algorithm for Monte Carlo calculation of sensitivities of responses to secondaries' angular distributions (SAD) is developed, based on the differential operator approach. The algorithm was formulated for the sensitivity to Legendre coefficients of the SAD and is valid even in cases where the actual representation of SAD is not in the form of a Legendre series. The algorithm was implemented, for point- or ring-detectors, in a local version of the code MCNP. Numerical tests were performed to validate the algorithm and its implementation. In addition, an algorithm specific for the Kalbach-Mann representation of SAD is presented

  7. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for Circular Current Loops in Cylindrical Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-24

    A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for circular current loops is described. An algorithm for computing the vector potential is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations (especially for the field derivatives) are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. Since cel can evaluate complete elliptic integrals of a fairly general type, in some cases the elliptic integrals can be evaluated without first reducing them to standard Legendre forms. The algorithms avoid the numerical difficulties that many of the textbook solutions have for points near the axis because of explicit factors of 1/r or 1/r^2 in some of the expressions.
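    For orientation, the field components themselves have well-known closed forms in terms of the complete elliptic integrals K and E. The sketch below evaluates those textbook expressions with SciPy's Legendre-form routines rather than the Bulirsch cel routine used in the report, and it does not include the derivative formulas; points exactly on the axis (r = 0) must be handled separately because of the 1/r factor in Br.

```python
# Textbook closed-form field of a circular current loop (radius a, current I) in
# cylindrical coordinates, using Legendre-form complete elliptic integrals from SciPy
# (the report uses the Bulirsch cel routine instead). Off-axis points only.
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi

def loop_field(a, I, r, z):
    m = 4 * a * r / ((a + r) ** 2 + z ** 2)          # parameter m = k^2
    K, E = ellipk(m), ellipe(m)
    pref = MU0 * I / (2 * np.pi * np.sqrt((a + r) ** 2 + z ** 2))
    Bz = pref * (K + (a ** 2 - r ** 2 - z ** 2) / ((a - r) ** 2 + z ** 2) * E)
    Br = pref * (z / r) * (-K + (a ** 2 + r ** 2 + z ** 2) / ((a - r) ** 2 + z ** 2) * E)
    return Br, Bz

# Cross-check against the on-axis limit Bz = mu0*I*a^2 / (2*(a^2+z^2)^(3/2)),
# approached with a small but nonzero r.
Br, Bz = loop_field(a=1.0, I=1.0, r=1e-9, z=0.5)
Bz_axis = MU0 / (2 * (1.0 + 0.5 ** 2) ** 1.5)
assert abs(Bz - Bz_axis) / Bz_axis < 1e-6
```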

  8. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
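    A useful baseline for comparison is the fully decoupled variant: fit and subtract a low-order polynomial from each spectrum, then calibrate with ordinary PLS. The sketch below does exactly that on synthetic data standing in for the ATR-FTIR spectra; BCC-PLS itself embeds the baseline constraint inside the PLS weight selection, which is not reproduced here.

```python
# Decoupled baseline-correction + PLS calibration (a reference point for BCC-PLS).
# Synthetic spectra with a drifting linear baseline stand in for ATR-FTIR data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavenumber = np.linspace(0, 1, 500)
concentration = rng.uniform(0, 1, 80)                      # reference values (e.g. sugar %)
peak = np.exp(-0.5 * ((wavenumber - 0.5) / 0.02) ** 2)     # narrow analyte band
spectra = (concentration[:, None] * peak
           + rng.uniform(-1, 1, (80, 1)) * wavenumber      # drifting linear baseline
           + 0.01 * rng.standard_normal((80, 500)))        # noise

def remove_baseline(X, degree=2):
    # Fit a low-order polynomial to each spectrum and subtract it.
    return np.vstack([x - np.polyval(np.polyfit(wavenumber, x, degree), wavenumber)
                      for x in X])

Xc = remove_baseline(spectra)
pls = PLSRegression(n_components=3).fit(Xc[:60], concentration[:60])
rmsep = np.sqrt(np.mean((pls.predict(Xc[60:]).ravel() - concentration[60:]) ** 2))
```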

  9. Parallel algorithms for quantum chemistry. I. Integral transformations on a hypercube multiprocessor

    International Nuclear Information System (INIS)

    Whiteside, R.A.; Binkley, J.S.; Colvin, M.E.; Schaefer, H.F. III

    1987-01-01

    For many years it has been recognized that fundamental physical constraints such as the speed of light will limit the ultimate speed of single processor computers to less than about three billion floating point operations per second (3 GFLOPS). This limitation is becoming increasingly restrictive as commercially available machines are now within an order of magnitude of this asymptotic limit. A natural way to avoid this limit is to harness together many processors to work on a single computational problem. In principle, these parallel processing computers have speeds limited only by the number of processors one chooses to acquire. The usefulness of potentially unlimited processing speed to a computationally intensive field such as quantum chemistry is obvious. If these methods are to be applied to significantly larger chemical systems, parallel schemes will have to be employed. For this reason we have developed distributed-memory algorithms for a number of standard quantum chemical methods. We are currently implementing these on a 32 processor Intel hypercube. In this paper we present our algorithm and benchmark results for one of the bottleneck steps in quantum chemical calculations: the four index integral transformation

  10. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    Science.gov (United States)

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has taken a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. The proposed fusion scheme contains different fusion methods for high- and low-frequency contents, based on the coefficient characteristics of the wrapping second-generation curvelet transform and a novel content selection strategy. The proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In the proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high-frequency coefficients. For low-frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. The proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of the proposed fusion algorithm in comparison with common and basic fusion algorithms.

  11. Evaluation of integrals involving powers of (1-x2) and two associated Legendre functions or Gegenbauer polynomials

    International Nuclear Information System (INIS)

    Rashid, M.A.

    1984-08-01

    Integrals involving powers of (1-x^2) and two associated Legendre functions or two Gegenbauer polynomials are evaluated as finite sums which can be expressed in terms of the terminating hypergeometric function 4F3. The integrals which are evaluated are ∫_{-1}^{1} [P_l^m(x) P_k^n(x)] / (1-x^2)^{p+1} dx and ∫_{-1}^{1} C_l^α(x) C_k^β(x) (1-x^2)^{(α+β-3)/2 - p} dx

  12. Space-bandwidth ratio as a means of choosing between Fresnel and other linear canonical transform algorithms.

    Science.gov (United States)

    Healy, John J; Sheridan, John T

    2011-05-01

    The product of the spatial and spatial frequency extents of a wave field has proven useful in the analysis of the sampling requirements of numerical simulations. We propose that the ratio of these quantities is also illuminating. We have shown that the distance at which the so-called "direct method" becomes more efficient than the so-called "spectral method" for simulations of Fresnel transforms may be written in terms of this space-bandwidth ratio. We have proposed generalizations of these algorithms for numerical simulations of general ABCD systems and derived expressions for the "transition space-bandwidth ratio," above which the generalization of the spectral method is the more efficient algorithm and below which the generalization of the direct method is preferable.

  13. Numerical Solution of the Fractional Partial Differential Equations by the Two-Dimensional Fractional-Order Legendre Functions

    Directory of Open Access Journals (Sweden)

    Fukang Yin

    2013-01-01

    Full Text Available A numerical method is presented to obtain the approximate solutions of fractional partial differential equations (FPDEs). The basic idea of this method is to achieve the approximate solutions in a generalized expansion form of two-dimensional fractional-order Legendre functions (2D-FLFs). The operational matrices of integration and derivative for 2D-FLFs are first derived. Then, by these matrices, a system of algebraic equations is obtained from the FPDEs. Hence, by solving this system, the unknown 2D-FLF coefficients can be computed. Three examples are discussed to demonstrate the validity and applicability of the proposed method.

  14. Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets

    International Nuclear Information System (INIS)

    Stanek, Jan; Kozminski, Wiktor

    2010-01-01

    Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated in simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in the indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.
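    The artifact-suppression loop can be illustrated in one dimension: Fourier transform the zero-filled, randomly sampled signal, accept only the components that rise above a threshold, and iterate while re-imposing the measured samples. The sketch below is a CLEAN-like toy version of that idea; the threshold, iteration count and 1D setting are assumptions, and the statistical peak recognition of the actual algorithm is not reproduced.

```python
# 1D toy version of iterative artifact suppression for randomly sampled data:
# FFT the residual, keep the strong components, re-impose the measured points, repeat.
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
signal = np.exp(2j * np.pi * 112 * t / n) + 0.2 * np.exp(2j * np.pi * 317 * t / n)
mask = rng.random(n) < 0.25                    # only ~25% of points are measured

estimate = np.zeros(n, dtype=complex)
for _ in range(50):
    residual = (signal - estimate) * mask      # mismatch on measured samples only
    spectrum = np.fft.fft(residual)
    strong = np.abs(spectrum) > 0.5 * np.abs(spectrum).max()
    estimate += np.fft.ifft(spectrum * strong) # accept the strong spectral components
recovered_spectrum = np.fft.fft(estimate)      # artifact-reduced spectrum
```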

  15. Inverse transformation algorithm of transient electromagnetic field and its high-resolution continuous imaging interpretation method

    International Nuclear Information System (INIS)

    Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua

    2015-01-01

    We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. The continuous imaging of TEM can be accomplished using the imaging methods of seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on the imaging of a fictitious wave field could be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses some wave field features, making it possible to apply a wave field interpretation method in TEM to improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of a fictitious wave field. The equation in the wave field transformation is a Fredholm integral equation of the first kind, which is a typical ill-posed equation. Additionally, TEM has a large dynamic time range, which further aggravates this ill-posedness. The wave field transformation is implemented by using a pre-conditioned regularized conjugate gradient method. The continuous imaging of a fictitious wave field is implemented by using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data by the method proposed in this paper and obtained satisfactory interpretation results. (paper)
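
    To make the regularization step above concrete, the following is a minimal sketch (not the authors' implementation) of solving a discretized first-kind Fredholm equation A f = b through the Tikhonov-regularized normal equations with SciPy's conjugate gradient solver; the kernel, the noise level, and the regularization parameter lam are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def regularized_cg(A, b, lam, maxiter=500):
    """Solve (A^T A + lam*I) x = A^T b with conjugate gradients."""
    n = A.shape[1]
    op = LinearOperator((n, n), matvec=lambda x: A.T @ (A @ x) + lam * x)
    x, info = cg(op, A.T @ b, maxiter=maxiter)   # info == 0 signals convergence
    return x, info

# Illustrative smoothing kernel: a severely ill-posed first-kind operator.
t = np.linspace(0.0, 1.0, 200)
A = np.exp(-200.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(t.size)

x_est, info = regularized_cg(A, b, lam=1e-3)
```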

  16. Homotopy Algorithm for Optimal Control Problems with a Second-order State Constraint

    International Nuclear Information System (INIS)

    Hermant, Audrey

    2010-01-01

    This paper deals with optimal control problems with a regular second-order state constraint and a scalar control, satisfying the strengthened Legendre-Clebsch condition. We study the stability of the structure of stationary points. It is shown that under a uniform strict complementarity assumption, boundary arcs are stable under sufficiently smooth perturbations of the data. On the contrary, nonreducible touch points are not stable under perturbations. We show that under some reasonable conditions, either a boundary arc or a second touch point may appear. Those results allow us to design a homotopy algorithm which automatically detects the structure of the trajectory and initializes the shooting parameters associated with boundary arcs and touch points.

  17. Approximating the Analytic Fourier Transform with the Discrete Fourier Transform

    OpenAIRE

    Axelrod, Jeremy

    2015-01-01

    The Fourier transform is approximated over a finite domain using a Riemann sum. This Riemann sum is then expressed in terms of the discrete Fourier transform, which allows the sum to be computed with a fast Fourier transform algorithm more rapidly than via a direct matrix multiplication. Advantages and limitations of using this method to approximate the Fourier transform are discussed, and prototypical MATLAB codes implementing the method are presented.
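
    A minimal sketch of the idea described above, assuming the convention F(nu) = integral of f(x) exp(-2*pi*i*nu*x) dx: the Riemann sum over a uniform grid is rearranged into a DFT and evaluated with NumPy's FFT (the prototypical MATLAB codes of the paper are not reproduced here); the grid parameters are illustrative.

```python
import numpy as np

def approx_fourier_transform(f_samples, x0, dx):
    """Approximate F(nu) = integral f(x)*exp(-2j*pi*nu*x) dx by a Riemann sum,
    evaluated with an FFT.  Returns (frequencies, F_approx)."""
    n = f_samples.size
    nu = np.fft.fftfreq(n, d=dx)               # frequencies of the DFT bins
    # Riemann sum: dx * sum_k f(x0 + k*dx) * exp(-2j*pi*nu*(x0 + k*dx))
    F = dx * np.exp(-2j * np.pi * nu * x0) * np.fft.fft(f_samples)
    return np.fft.fftshift(nu), np.fft.fftshift(F)

# Gaussian test: the transform of exp(-pi*x^2) is exp(-pi*nu^2).
x0, dx, n = -8.0, 1.0 / 64, 1024
x = x0 + dx * np.arange(n)
nu, F = approx_fourier_transform(np.exp(-np.pi * x ** 2), x0, dx)
print(np.max(np.abs(F - np.exp(-np.pi * nu ** 2))))   # small approximation error
```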

  18. Supersymmetric two-particle equations

    International Nuclear Information System (INIS)

    Sissakyan, A.N.; Skachkov, N.B.; Shevchenko, O.Yu.

    1986-01-01

    In the framework of the scalar superfield model, a particular case of which is the well-known Wess-Zumino model, the supersymmetric Schwinger equations are found. On their basis, with the use of the second Legendre transformation, the two-particle supersymmetric Edwards and Bethe-Salpeter equations are derived. A connection of the kernels and inhomogeneous terms of these equations with the generating functional of the second Legendre transformation is found

  19. BOX-COX transformation and random regression models for fecal egg count data

    Directory of Open Access Journals (Sweden)

    Marcos Vinicius Silva

    2012-01-01

    Full Text Available Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6,375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained in the period between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated.

  20. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    Science.gov (United States)

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. Original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) adjusted FEC data best. Results indicated that the transformation of FEC data utilizing the Box-Cox transformation family was effective in reducing the skewness and kurtosis, and dramatically increased estimates of heritability, and measurements of FEC obtained in the period between 12 and 26 weeks in a 26-week experimental challenge period are genetically correlated.
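
    A minimal sketch of the Box-Cox step described in the two records above, using SciPy's maximum-likelihood estimate of the transformation parameter. Since FEC data commonly contain zeros and Box-Cox requires positive values, a small offset is added here as a stand-in for the extended transformation used by the authors; the simulated counts are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fec = rng.negative_binomial(n=2, p=0.05, size=500).astype(float)  # skewed counts

# Box-Cox requires strictly positive data; FEC often contains zeros, so a
# small offset is added (a stand-in for the extended transformation).
offset = 1.0
transformed, lam = stats.boxcox(fec + offset)   # lambda estimated by maximum likelihood

print(f"estimated lambda = {lam:.3f}")
print(f"skewness before = {stats.skew(fec):.2f}, after = {stats.skew(transformed):.2f}")
```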

  1. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    Science.gov (United States)

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examine the effect of setting different parameter values in the proposed algorithms and evaluate the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition applications.

  2. A General Approach for Orthogonal 4-Tap Integer Multiwavelet Transforms

    Directory of Open Access Journals (Sweden)

    Mingli Jing

    2010-01-01

    Full Text Available An algorithm for orthogonal 4-tap integer multiwavelet transforms is proposed. We compute the singular value decomposition (SVD) of the block recursive matrices of the transform matrix, so that the transform matrix can be rewritten as a product of two block diagonal matrices and a permutation matrix. Furthermore, we factorize the block matrices of the block diagonal matrices into triangular elementary reversible matrices (TERMs), which map integers to integers by rounding arithmetic. The cost of factorizing a block matrix into TERMs does not increase with the dimension of the transform matrix, and the proposed algorithm computes in place without allocating auxiliary memory. Examples of integer multiwavelet transforms using DGHM and CL are given, which verify that the proposed algorithm is practical and outperforms the existing algorithm for orthogonal 4-tap integer multiwavelet transforms.

  3. Short-Term Load Forecasting Based on Wavelet Transform and Least Squares Support Vector Machine Optimized by Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2015-01-01

    Full Text Available Electric power is a kind of unstorable energy concerning the national welfare and the people's livelihood, and its stability is attracting more and more attention. Because the short-term power load is always disturbed by various external factors, with characteristics such as high volatility and instability, a single model is not suitable for short-term load forecasting due to low accuracy. In order to solve this problem, this paper proposes a new model for short-term load forecasting based on the wavelet transform and the least squares support vector machine (LSSVM), which is optimized by the fruit fly optimization algorithm (FOA). The wavelet transform is used to remove error points and enhance the stability of the data. The fruit fly algorithm is applied to optimize the parameters of the LSSVM, avoiding randomness and inaccuracy in parameter setting. The results of short-term load forecasting demonstrate that the hybrid model can be used in short-term forecasting of the power system.

  4. Study of the influence of semiconductor material parameters on acoustic wave propagation modes in GaSb/AlSb bi-layered structures by Legendre polynomial method

    Energy Technology Data Exchange (ETDEWEB)

    Othmani, Cherif, E-mail: othmanicheriffss@gmail.com; Takali, Farid; Njeh, Anouar; Ben Ghozlen, Mohamed Hédi

    2016-09-01

    The propagation of Rayleigh–Lamb waves in bi-layered structures is studied. For this purpose, an extension of the Legendre polynomial (LP) method is proposed to formulate the acoustic wave equation in bi-layered structures composed of a thin Gallium Antimonide (GaSb) film on an Aluminum Antimonide (AlSb) substrate of moderate thickness. Acoustic modes propagating along a bi-layer plate are shown to be quite different from classical Lamb modes, contrary to most multilayered structures. The validation of the LP method is illustrated by a comparison between the associated numerical results and those obtained using the ordinary differential equation (ODE) method. The convergence of the LP method is discussed through a numerical example. Moreover, the influence of the thin-film GaSb parameters on the characteristics of Rayleigh–Lamb wave propagation has been studied in detail. Finally, the advantages of the Legendre polynomial (LP) method for analyzing multilayered structures are described. All the developments performed in this work were implemented in Matlab software.

  5. Study of the influence of semiconductor material parameters on acoustic wave propagation modes in GaSb/AlSb bi-layered structures by Legendre polynomial method

    International Nuclear Information System (INIS)

    Othmani, Cherif; Takali, Farid; Njeh, Anouar; Ben Ghozlen, Mohamed Hédi

    2016-01-01

    The propagation of Rayleigh–Lamb waves in bi-layered structures is studied. For this purpose, an extension of the Legendre polynomial (LP) method is proposed to formulate the acoustic wave equation in bi-layered structures composed of a thin Gallium Antimonide (GaSb) film on an Aluminum Antimonide (AlSb) substrate of moderate thickness. Acoustic modes propagating along a bi-layer plate are shown to be quite different from classical Lamb modes, contrary to most multilayered structures. The validation of the LP method is illustrated by a comparison between the associated numerical results and those obtained using the ordinary differential equation (ODE) method. The convergence of the LP method is discussed through a numerical example. Moreover, the influence of the thin-film GaSb parameters on the characteristics of Rayleigh–Lamb wave propagation has been studied in detail. Finally, the advantages of the Legendre polynomial (LP) method for analyzing multilayered structures are described. All the developments performed in this work were implemented in Matlab software.

  6. Designing an Algorithm for Cancerous Tissue Segmentation Using Adaptive K-means Clustering and Discrete Wavelet Transform.

    Science.gov (United States)

    Rezaee, Kh; Haddadnia, J

    2013-09-01

    Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and this has always posed a major challenge to radiologists and physicians. This paper proposes a new algorithm which draws on the discrete wavelet transform and adaptive K-means techniques to process the medical images, perform tumor estimation and detect breast cancer tumors in mammograms in early stages. It also allows rapid processing of the input data. In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximate coefficients of the scaling components are constructed. Then, the different parts of the image are classified in a continuous spectrum. In the next step, by using the adaptive K-means algorithm for initialization and a smart choice of the number of clusters, the appropriate threshold is selected. Finally, the suspicious cancerous mass is separated by implementing image processing techniques. We received 120 mammographic images in LJPEG format, which had been scanned in gray scale at 50 micron resolution, with 3% noise and 20% INU, from clinical data taken from two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level with an average accuracy of 92.32% and a sensitivity of 90.24%. Also, the Kappa coefficient was approximately 0.85, which proved the suitable reliability of the system performance. The exact positioning of the cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The low PPV and high NPV of the system are a warranty of its performance, and both clinical specialists and patients can trust its output.

  7. Algorithm for removing the noise from γ energy spectrum by analyzing the evolution of the wavelet transform maxima across scales

    International Nuclear Information System (INIS)

    Li Tianduo; Xiao Gang; Di Yuming; Han Feng; Qiu Xiaoling

    1999-01-01

    The γ energy spectrum is expanded in a joint energy-frequency space. Based on the different evolution across scales of the wavelet transform modulus maxima of the energy spectrum and of the noise, an algorithm for removing the noise from the γ energy spectrum by analyzing the evolution of the wavelet transform maxima across scales is presented. The results show that, in contrast to methods working in energy space or in frequency space, this method has the advantages that the peaks of the energy spectrum can be located accurately and the energy spectrum can be reconstructed with a good approximation

  8. [A new peak detection algorithm of Raman spectra].

    Science.gov (United States)

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors proposed a new Raman peak recognition method named bi-scale correlation algorithm. The algorithm uses the combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method through MATLAB, and then tested the algorithm with real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the algorithm, while it is 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy with the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and the standard deviations of the peak position identification error of the algorithm are both less than that of the continuous wavelet transform method. Simulation analysis and experimental verification prove that the new algorithm possesses the following advantages: no needs of human intervention, no needs of de-noising and background removal operation, higher recognition speed and higher recognition accuracy. The proposed algorithm is operable in Raman peak identification.

  9. Road Detection by Using a Generalized Hough Transform

    Directory of Open Access Journals (Sweden)

    Weifeng Liu

    2017-06-01

    Full Text Available Road detection plays a key role in remote sensing image analytics. The Hough transform (HT) is one very typical method for road detection, especially for straight-line road detection. Although many variants of the Hough transform have been reported, it is still a great challenge to develop a low-complexity, time-saving Hough transform algorithm. In this paper, we propose a generalized Hough transform (i.e., Radon transform) implementation for road detection in remote sensing images. Specifically, we present a dictionary learning method to approximate the Radon transform. The proposed approximation method treats a Radon transform as a linear transform, which then facilitates parallel implementation of the Radon transform for multiple images. To evaluate the proposed algorithm, we conduct extensive experiments on the popular RSSCN7 database for straight road detection. The experimental results demonstrate that our method is superior to the traditional algorithms in terms of accuracy and computational complexity.

  10. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    Science.gov (United States)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique for learning the kernel parameters of a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be utilized with any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.

  11. Some Connections between the Spherical and Parabolic Bases on the Cone Expressed in terms of the Macdonald Function

    OpenAIRE

    Shilin, I. A.; Choi, Junesang

    2014-01-01

    In computing the matrix elements of the linear operator which transforms the spherical basis of the $SO(3,1)$-representation space into the hyperbolic basis, Shilin and Choi (2013) recently presented an integral formula involving the product of two Legendre functions of the first kind expressed in terms of the ${}_{4}F_{3}$ hypergeometric function and, using the general Mehler-Fock transform, another integral formula for the Legendre function of the first kind. In the sequel, we investigate ...

  12. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  13. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  14. Hybrid algorithm of ensemble transform and importance sampling for assimilation of non-Gaussian observations

    Directory of Open Access Journals (Sweden)

    Shin'ya Nakano

    2014-05-01

    Full Text Available A hybrid algorithm that combines the ensemble transform Kalman filter (ETKF) and the importance sampling approach is proposed. Since the ETKF assumes a linear Gaussian observation model, the estimate obtained by the ETKF can be biased in cases with nonlinear or non-Gaussian observations. The particle filter (PF) is based on the importance sampling technique and is applicable to problems with nonlinear or non-Gaussian observations. However, the PF usually requires an unrealistically large sample size in order to achieve a good estimate, and thus it is computationally prohibitive. In the proposed hybrid algorithm, we obtain a proposal distribution similar to the posterior distribution by using the ETKF. A large number of samples are then drawn from the proposal distribution, and these samples are weighted to approximate the posterior distribution according to the importance sampling principle. Since importance sampling provides an estimate of the probability density function (PDF) without assuming linearity or Gaussianity, we can resolve the bias due to the nonlinear or non-Gaussian observations. Finally, in the next forecast step, we reduce the sample size to achieve computational efficiency based on the Gaussian assumption, while we use a relatively large number of samples in the importance sampling in order to capture the non-Gaussian features of the posterior PDF. The use of the ETKF is also beneficial in terms of the computational simplicity of generating a number of random samples from the proposal distribution and of weighting each of the samples. The proposed algorithm is not necessarily effective in cases where the ensemble is located far from the true state. However, monitoring the effective sample size and tuning the factor for covariance inflation can resolve this problem. In this paper, the proposed hybrid algorithm is introduced and its performance is evaluated through experiments with non-Gaussian observations.
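
    A minimal sketch of the weighting and effective-sample-size monitoring step described above, assuming for illustration a Gaussian observation error and an identity observation operator; the ETKF proposal itself is not reproduced, and the ensemble here is synthetic.

```python
import numpy as np

def importance_weights(ensemble, obs, obs_error_std, h=lambda x: x):
    """Weight proposal samples by their observation likelihood and
    report the effective sample size (ESS)."""
    innovations = obs - np.array([h(member) for member in ensemble])
    log_w = -0.5 * np.sum((innovations / obs_error_std) ** 2, axis=1)
    log_w -= log_w.max()                  # stabilise before exponentiating
    w = np.exp(log_w)
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)            # low ESS signals weight degeneracy
    return w, ess

rng = np.random.default_rng(0)
ensemble = rng.normal(0.0, 1.0, size=(1000, 3))   # samples from a proposal
obs = np.array([0.5, -0.2, 0.1])
w, ess = importance_weights(ensemble, obs, obs_error_std=0.5)
posterior_mean = w @ ensemble
print(ess, posterior_mean)
```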

  15. Partner symmetries of the complex Monge-Ampere equation yield hyper-Kaehler metrics without continuous symmetries

    International Nuclear Information System (INIS)

    Malykh, A A; Nutku, Y; Sheftel, M B

    2003-01-01

    We extend the Mason-Newman Lax pair for the elliptic complex Monge-Ampere equation so that this equation itself emerges as an algebraic consequence. We regard the function in the extended Lax equations as a complex potential. Their differential compatibility condition coincides with the determining equation for the symmetries of the complex Monge-Ampere equation. We shall identify the real and imaginary parts of the potential, which we call partner symmetries, with the translational and dilatational symmetry characteristics, respectively. Then we choose the dilatational symmetry characteristic as the new unknown replacing the Kaehler potential. This directly leads to a Legendre transformation. Studying the integrability conditions of the Legendre-transformed system, we arrive at a set of linear equations satisfied by a single real potential. This enables us to construct non-invariant solutions of the Legendre transform of the complex Monge-Ampere equation. Using these solutions we obtained explicit Legendre-transformed hyper-Kaehler metrics with an anti-self-dual Riemann curvature 2-form that admit no Killing vectors. They satisfy the Einstein field equations with Euclidean signature. We give the detailed derivation of the solution announced earlier and present a new solution with an added parameter. We compare our method of partner symmetries for finding non-invariant solutions to that of Dunajski and Mason, who use 'hidden' symmetries for the same purpose

  16. Dependency Parsing with Transformed Feature

    Directory of Open Access Journals (Sweden)

    Fuxiang Wu

    2017-01-01

    Full Text Available Dependency parsing is an important subtask of natural language processing. In this paper, we propose an embedding feature transforming method for graph-based parsing, transform-based parsing, which directly utilizes the inner similarity of the features to extract information from all feature strings, including the un-indexed strings, and alleviates the feature-sparsity problem. The model transforms the extracted features into transformed features by applying a feature weight matrix, which consists of similarities between the feature strings. Since the matrix is usually rank-deficient because of similar feature strings, it would influence the strength of the constraints. However, it is proven that duplicate transformed features do not degrade the optimization algorithm: the margin infused relaxed algorithm. Moreover, this problem can be alleviated by reducing the number of nearest transformed features of a feature. In addition, to further improve the parsing accuracy, a fusion parser is introduced to integrate transformed and original features. Our experiments verify that both the transform-based and the fusion parser improve the parsing accuracy compared to the corresponding feature-based parser.

  17. Maser-like transformations using the lie transform

    International Nuclear Information System (INIS)

    Michelotti, L.

    1985-01-01

    The Deprit-Hori-Kamel recursive algorithm is presented for carrying out canonical transformations that eliminate non-secular terms of a Hamiltonian. The method is illustrated in the context of accelerator theory by application to three sample problems. (author)

  18. A Direct Search Algorithm for Global Optimization

    Directory of Open Access Journals (Sweden)

    Enrique Baeyens

    2016-06-01

    Full Text Available A direct search algorithm is proposed for minimizing an arbitrary real valued function. The algorithm uses a new function transformation and three simplex-based operations. The function transformation provides global exploration features, while the simplex-based operations guarantee the termination of the algorithm and provide global convergence to a stationary point if the cost function is differentiable and its gradient is Lipschitz continuous. The algorithm's performance has been extensively tested using benchmark functions and compared to some well-known global optimization algorithms. The results of the computational study show that the algorithm combines both simplicity and efficiency and is competitive with the heuristics-based strategies presently used for global optimization.

  19. Feature Extraction Using the Hough Transform

    OpenAIRE

    Ferguson, Tara; Baker, Doran

    2002-01-01

    This paper contains a brief literature survey of applications and improvements of the Hough transform, a description of the Hough transform and a few of its algorithms, and simulation examples of line and curve detection using the Hough transform.

  20. Ring-Shaped Potential and a Class of Relevant Integrals Involved Universal Associated Legendre Polynomials with Complicated Arguments

    Directory of Open Access Journals (Sweden)

    Wei Li

    2017-01-01

    Full Text Available We find that the solution of the polar angular differential equation can be written as the universal associated Legendre polynomials. Its generating function is applied to obtain an analytical result for a class of interesting integrals involving a complicated argument, namely $\int_{-1}^{1}P_{l'}^{m'}\!\left(\frac{xt-1}{\sqrt{1+t^{2}-2xt}}\right)\frac{P_{k'}^{m'}(x)}{(1+t^{2}-2tx)^{(l'+1)/2}}\,dx$, where $t\in(0,1)$. The present method can in principle be generalized to integrals involving other special functions. As an illustration we also study a typical Bessel integral with a complicated argument, $\int_{0}^{\infty}\frac{J_{n}\left(\alpha\sqrt{x^{2}+z^{2}}\right)}{\left(\sqrt{x^{2}+z^{2}}\right)^{n}}\,x^{2m+1}\,dx$.

  1. Wavelet-LMS algorithm-based echo cancellers

    Science.gov (United States)

    Seetharaman, Lalith K.; Rao, Sathyanarayana S.

    2002-12-01

    This paper presents echo cancellers based on the wavelet-LMS algorithm. The performance of the least mean square algorithm in the wavelet transform domain is observed and its application in echo cancellation is analyzed. The Widrow-Hoff least mean square algorithm is the most widely used algorithm for adaptive filters that function as echo cancellers. Present-day communication signals are widely non-stationary in nature, and some errors crop up when the least mean square algorithm is used for echo cancellers handling such signals. The analysis of non-stationary signals often involves a compromise between how well transitions or discontinuities can be located in time and how finely the frequency content can be resolved. The multi-scale or multi-resolution signal analysis, which is the essence of the wavelet transform, makes wavelets popular in non-stationary signal analysis. In this paper, we present a wavelet-LMS algorithm wherein the wavelet coefficients of a signal are modified adaptively using the least mean square algorithm and then reconstructed to give an echo-free signal. The echo canceller based on this algorithm is found to have better convergence and a comparatively lower MSE (mean square error).
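
    A minimal sketch of the core Widrow-Hoff LMS update underlying such an echo canceller, shown here in the time domain for brevity (the wavelet decomposition and reconstruction stages of the paper are omitted); the echo path, step size, and filter length are illustrative.

```python
import numpy as np

def lms_echo_canceller(far_end, mic, filter_len=64, mu=0.01):
    """Classic Widrow-Hoff LMS: adapt a filter so that its output
    matches the echo component of the microphone signal."""
    w = np.zeros(filter_len)
    out = np.zeros(mic.size)
    for n in range(filter_len, mic.size):
        x = far_end[n - filter_len + 1:n + 1][::-1]   # most recent samples first
        y = w @ x                                     # estimated echo
        e = mic[n] - y                                # error = echo-free residual
        w += mu * e * x                               # LMS coefficient update
        out[n] = e
    return out, w

rng = np.random.default_rng(0)
far_end = rng.standard_normal(4000)
echo_path = np.array([0.5, 0.3, -0.2, 0.1])
mic = np.convolve(far_end, echo_path, mode="full")[:4000] + 0.01 * rng.standard_normal(4000)
residual, w = lms_echo_canceller(far_end, mic)
print(np.mean(residual[-500:] ** 2))   # residual echo power after convergence
```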

  2. Optimized nonorthogonal transforms for image compression.

    Science.gov (United States)

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.

  3. Dynamics of one-dimensional self-gravitating systems using Hermite-Legendre polynomials

    Science.gov (United States)

    Barnes, Eric I.; Ragan, Robert J.

    2014-01-01

    The current paradigm for understanding galaxy formation in the Universe depends on the existence of self-gravitating collisionless dark matter. Modelling such dark matter systems has been a major focus of astrophysicists, with much of that effort directed at computational techniques. Not surprisingly, a comprehensive understanding of the evolution of these self-gravitating systems still eludes us, since it involves the collective non-linear dynamics of many particle systems interacting via long-range forces described by the Vlasov equation. As a step towards developing a clearer picture of collisionless self-gravitating relaxation, we analyse the linearized dynamics of isolated one-dimensional systems near thermal equilibrium by expanding their phase-space distribution functions f(x, v) in terms of Hermite functions in the velocity variable, and Legendre functions involving the position variable. This approach produces a picture of phase-space evolution in terms of expansion coefficients, rather than spatial and velocity variables. We obtain equations of motion for the expansion coefficients for both test-particle distributions and self-gravitating linear perturbations of thermal equilibrium. N-body simulations of perturbed equilibria are performed and found to be in excellent agreement with the expansion coefficient approach over a time duration that depends on the size of the expansion series used.

  4. Recursive Pyramid Algorithm-Based Discrete Wavelet Transform for Reactive Power Measurement in Smart Meters

    Directory of Open Access Journals (Sweden)

    Mahin K. Atiq

    2013-09-01

    Full Text Available Measurement of the active, reactive, and apparent power is one of the most fundamental tasks of smart meters in energy systems. Recently, a number of studies have employed the discrete wavelet transform (DWT) for power measurement in smart meters. The most common way to implement the DWT is the pyramid algorithm; however, this is not feasible for practical DWT computation because it requires either a cascade of log N filters or O(N) words of memory storage for an input signal of N points. Both solutions are too expensive for practical applications of smart meters. It is proposed that the recursive pyramid algorithm is more suitable for smart meter implementation because it requires only L × log(N − L) words of storage, where L is the length of the filter. We also investigated the effect of varying different system parameters, such as the sampling rate, dc offset, phase offset, linearity error in current and voltage sensors, analog to digital converter resolution, and number of harmonics in a non-sinusoidal system, on reactive energy measurement using the DWT. The error analysis is depicted in the form of the absolute difference between the measured and the true value of the reactive energy.

  5. Parallel implementation of geometric transformations

    Energy Technology Data Exchange (ETDEWEB)

    Clarke, K A; Ip, H H.S.

    1982-10-01

    An implementation of digitized picture rotation and magnification based on Weiman's algorithm is presented. On a programmable array machine, routines to perform small transformations can be coded efficiently. The method illustrates the interpolative nature of the algorithm. 6 references.

  6. Adaptive Filtering in the Wavelet Transform Domain via Genetic Algorithms

    Science.gov (United States)

    2004-08-06

    wavelet transforms, whereas the term "evolved" pertains only to the altered wavelet coefficients used during the inverse transform process. In other words, the inverse transform produces the original signal x(t) from the wavelet and scaling coefficients, $x(t)=\sum_{k}\sum_{n}d_{k,n}\,\psi_{k,n}(t)$ ... reconstruct the original signal as accurately as possible. The inverse transform reconstructs an approximation of the original signal (Burrus

  7. On the Cooley-Turkey Fast Fourier algorithm for arbitrary factors ...

    African Journals Online (AJOL)

    Atonuje and Okonta in [1] developed the Cooley–Tukey Fast Fourier transform algorithm and its application to the Fourier transform of discretely sampled data points N, expressed in terms of a power y of 2. In this paper, we extend the formalism of the Cooley–Tukey Fast Fourier transform algorithm of [1]. The method is developed ...
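
    For reference, a minimal recursive radix-2 sketch of the classic Cooley–Tukey algorithm for N a power of 2 follows (the arbitrary-factor extension discussed in the record above is not reproduced).

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    x = np.asarray(x, dtype=complex)
    n = x.size
    if n == 1:
        return x
    even = fft_radix2(x[0::2])            # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])             # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(0).standard_normal(256)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))   # True
```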

  8. A study of Hough Transform-based fingerprint alignment algorithms

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-10-01

    Full Text Available the implementation of each algorithm. The comparison is performed by considering the alignment results computed using each group of algorithms when varying the number of minutiae points, rotation angle, and translation. In addition, the memory usage, computing time...

  9. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    Science.gov (United States)

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.

  10. A discrete Fourier transform for virtual memory machines

    Science.gov (United States)

    Galant, David C.

    1992-01-01

    An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence of numbers to be transformed is a power of two.

  11. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  12. Behavioural modelling using the MOESP algorithm, dynamic neural networks and the Bartels-Stewart algorithm

    NARCIS (Netherlands)

    Schilders, W.H.A.; Meijer, P.B.L.; Ciggaar, E.

    2008-01-01

    In this paper we discuss the use of the state-space modelling MOESP algorithm to generate precise information about the number of neurons and hidden layers in dynamic neural networks developed for the behavioural modelling of electronic circuits. The Bartels–Stewart algorithm is used to transform

  13. A Novel Parallel Algorithm for Edit Distance Computation

    Directory of Open Access Journals (Sweden)

    Muhammad Murtaza Yousaf

    2018-01-01

    Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations that are required to transform one string into the other. The weighted transformation operations are insert, remove, and substitute. A dynamic programming solution to find the edit distance exists, but it becomes computationally intensive when the lengths of the strings become very large. This work presents a novel parallel algorithm to solve the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m, n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m, n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations. The algorithm is also capable of exploiting spatial locality in its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems having shared memory. An implementation of the algorithm in OpenMP shows linear speedup and better execution time as compared to the state-of-the-art parallel approach. The efficiency of the algorithm is also proven better in comparison to its competitor.
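
    A minimal serial sketch of the unit-cost dynamic-programming recurrence that the parallel algorithm above builds on; the dependence of each cell on its left, upper, and upper-left neighbours (which the paper resolves in order to process rows in parallel) is visible in the inner loop.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance with unit costs for insert, remove, substitute."""
    prev = list(range(len(b) + 1))          # row for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]                          # cost of deleting the first i chars of a
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # remove ca
                curr[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))   # 3
```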

  14. Watermarking on 3D mesh based on spherical wavelet transform.

    Science.gov (United States)

    Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng

    2004-03-01

    In this paper we propose a robust watermarking algorithm for 3D mesh. The algorithm is based on spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, embedding watermark, spherical wavelet inverse transform, and at last resampling the mesh watermarked to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.

  15. Parallel Monte Carlo Search for Hough Transform

    Science.gov (United States)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.

    2017-10-01

    We investigate the problem of line detection in digital image processing and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of a Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection, for example, into one of optimizing the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, it can have a reduced effectiveness in detection in the presence of noise. Our first contribution consists of an evaluation of the use of a variation of the Radon Transform as a way of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
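
    A minimal NumPy sketch of the vote-counting scheme described above, using the normal parameterization rho = x cos(theta) + y sin(theta) and a synthetic diagonal line; the angular resolution and image size are illustrative.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate votes in (rho, theta) space for a binary edge image and
    return the accumulator together with its axes."""
    h, w = edges.shape
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((rhos.size, thetas.size), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta) for every candidate angle
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[r, np.arange(thetas.size)] += 1
    return acc, rhos, thetas

# Synthetic image with one diagonal line.
img = np.zeros((64, 64), dtype=bool)
idx = np.arange(64)
img[idx, idx] = True
acc, rhos, thetas = hough_lines(img)
r_i, t_i = np.unravel_index(np.argmax(acc), acc.shape)
print(rhos[r_i], np.degrees(thetas[t_i]))   # strongest line: rho ~ 0, theta ~ -45 deg
```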

  16. Numerical Solution of a Fractional Order Model of HIV Infection of CD4+T Cells Using Müntz-Legendre Polynomials

    Directory of Open Access Journals (Sweden)

    Mojtaba Rasouli Gandomani

    2016-06-01

    Full Text Available In this paper, the model of HIV infection of CD4+ T cells is considered as a system of fractional differential equations. Then, a numerical method using a collocation approach based on the Müntz-Legendre polynomials to approximate the solution of the model is presented. The application of the proposed numerical method converts the system of fractional differential equations into a system of algebraic equations. The new system can be solved by one of the existing methods. Finally, we compare the result of this numerical method with the results of methods that have already been presented in the literature.

  17. APPLICATION OF NATURAL TRANSFORM IN CRYPTOGRAPHY

    OpenAIRE

    Chindhe, Anil Dhondiram; Kiwne, Sakharam

    2017-01-01

    Abstract: The newly defined integral transform, the "Natural transform", has many applications in the fields of science and engineering. In this paper we describe the application of the Natural transform to cryptography. This provides an algorithm for cryptography in which we use the Natural transform of the exponential function for encryption of the plain text and the corresponding inverse Natural transform for decryption

  18. Fast parallel approach for 2-D DHT-based real-valued discrete Gabor transform.

    Science.gov (United States)

    Tao, Liang; Kwan, Hon Keung

    2009-12-01

    Two-dimensional fast Gabor transform algorithms are useful for real-time applications due to the high computational complexity of the traditional 2-D complex-valued discrete Gabor transform (CDGT). This paper presents two block time-recursive algorithms for 2-D DHT-based real-valued discrete Gabor transform (RDGT) and its inverse transform and develops a fast parallel approach for the implementation of the two algorithms. The computational complexity of the proposed parallel approach is analyzed and compared with that of the existing 2-D CDGT algorithms. The results indicate that the proposed parallel approach is attractive for real time image processing.

  19. Cluster algorithms with emphasis on quantum spin systems

    International Nuclear Information System (INIS)

    Gubernatis, J.E.; Kawashima, Naoki

    1995-01-01

    The purpose of this lecture is to discuss in detail the generalized approach of Kawashima and Gubernatis for the construction of cluster algorithms. We first present a brief refresher on the Monte Carlo method, describe the Swendsen-Wang algorithm, show how this algorithm follows from the Fortuin–Kasteleyn transformation, and re-interpret this transformation in a form which is the basis of the generalized approach. We then derive the essential equations of the generalized approach. This derivation is remarkably simple if done from the viewpoint of probability theory, and the essential assumptions will be clearly stated. These assumptions are implicit in all useful cluster algorithms of which we are aware. They lead to a quite different perspective on cluster algorithms than found in the seminal works and in Ising model applications. Next, we illustrate how the generalized approach leads to a cluster algorithm for world-line quantum Monte Carlo simulations of Heisenberg models with S = 1/2. More succinctly, we also discuss the generalization of the Fortuin–Kasteleyn transformation to higher spin models and illustrate the essential steps for a S = 1 Heisenberg model. Finally, we summarize how to go beyond S = 1 to a general spin, XYZ model

  20. A High-Resolution Demodulation Algorithm for FBG-FP Static-Strain Sensors Based on the Hilbert Transform and Cross Third-Order Cumulant

    Directory of Open Access Journals (Sweden)

    Wenzhu Huang

    2015-04-01

    Full Text Available Static strain can be detected by measuring the cross-correlation of the reflection spectra from two fiber Bragg gratings (FBGs). However, the static-strain measurement resolution is limited by the dominant Gaussian noise source when using this traditional method. This paper presents a novel static-strain demodulation algorithm for FBG-based Fabry-Perot interferometers (FBG-FPs). The Hilbert transform is proposed for changing the Gaussian distribution of the two FBG-FPs' reflection spectra, and a cross third-order cumulant is applied to the results of the Hilbert transform to obtain a group of noise-vanished signals which can be used to accurately calculate the wavelength difference of the two FBG-FPs. The benefit of these processes is that Gaussian noise in the spectra can be suppressed completely in theory and a higher resolution can be reached. In order to verify the precision and flexibility of this algorithm, a detailed theoretical model and a simulation analysis are given, and an experiment is implemented. As a result, a static-strain resolution of 0.9 nε under laboratory environment conditions is achieved, showing a higher resolution than the traditional cross-correlation method.
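
    For context, a minimal NumPy sketch of the traditional cross-correlation baseline mentioned above: the wavelength difference between two reflection spectra sampled on the same grid is estimated from the peak of their cross-correlation (the Hilbert-transform and third-order-cumulant stages of the paper are not reproduced); the synthetic spectra and sampling step are illustrative.

```python
import numpy as np

def spectral_shift(spec_a, spec_b, d_lambda):
    """Estimate the wavelength shift between two spectra (sampled on the same
    grid with step d_lambda) from the peak of their cross-correlation."""
    a = spec_a - spec_a.mean()
    b = spec_b - spec_b.mean()
    xcorr = np.correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (b.size - 1)    # lag in samples
    return lag * d_lambda

# Two synthetic FBG-like reflection peaks, the second shifted by 4 samples.
wl = np.linspace(1549.5, 1550.5, 2001)       # nm, 0.5 pm step
d_lambda = wl[1] - wl[0]
peak = lambda c: np.exp(-0.5 * ((wl - c) / 0.02) ** 2)
shift = spectral_shift(peak(1550.0 + 4 * d_lambda), peak(1550.0), d_lambda)
print(f"estimated shift = {shift * 1e3:.2f} pm")
```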

  1. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...

  2. A high-order perturbation of surfaces method for scattering of linear waves by periodic multiply layered gratings in two and three dimensions

    Science.gov (United States)

    Hong, Youngjoon; Nicholls, David P.

    2017-09-01

    The capability to rapidly and robustly simulate the scattering of linear waves by periodic, multiply layered media in two and three dimensions is crucial in many engineering applications. In this regard, we present a High-Order Perturbation of Surfaces method for linear wave scattering in a multiply layered periodic medium to find an accurate numerical solution of the governing Helmholtz equations. For this we truncate the bi-infinite computational domain to a finite one with artificial boundaries, above and below the structure, and enforce transparent boundary conditions there via Dirichlet-Neumann Operators. This is followed by a Transformed Field Expansion resulting in a Fourier collocation, Legendre-Galerkin, Taylor series method for solving the problem in a transformed set of coordinates. Assorted numerical simulations display the spectral convergence of the proposed algorithm.

  3. Sum of top-hat transform based algorithm for vessel enhancement in MRA images

    Science.gov (United States)

    Ouazaa, Hibet-Allah; Jlassi, Hajer; Hamrouni, Kamel

    2018-04-01

    Magnetic Resonance Angiography (MRA) images are rich in information. However, they suffer from poor contrast, uneven illumination and noise. Thus, it is required to enhance the images, but significant information can be lost if improper techniques are applied. Therefore, in this paper, we propose a new enhancement method. We first apply the CLAHE method to increase the contrast of the image. Then, we apply the sum of top-hat transforms to increase the brightness of the vessels; it is performed with a structuring element oriented at different angles. The methodology is tested and evaluated on the publicly available BRAINIX database, and we use the MSE (mean square error), PSNR (peak signal-to-noise ratio) and SNR (signal-to-noise ratio) measures for the evaluation. The results demonstrate that the proposed method can efficiently enhance the image details and is comparable with state-of-the-art algorithms. Hence, the proposed method could be broadly used in various applications.
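
    A hedged sketch of the enhancement pipeline described above using OpenCV: CLAHE followed by the sum of white top-hat transforms computed with line-shaped structuring elements at several orientations. The element length, the set of angles, and the CLAHE settings are illustrative, and the input file name is hypothetical.

```python
import cv2
import numpy as np

def line_kernel(length, angle_deg):
    """Binary line-shaped structuring element of the given length and orientation."""
    k = np.zeros((length, length), dtype=np.uint8)
    c = length // 2
    t = np.deg2rad(angle_deg)
    for r in range(-c, c + 1):
        k[c + int(round(r * np.sin(t))), c + int(round(r * np.cos(t)))] = 1
    return k

def enhance_vessels(gray, length=15, angles=range(0, 180, 15)):
    """CLAHE followed by the sum of white top-hat transforms computed with
    line structuring elements at several orientations (illustrative settings)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(gray)
    acc = np.zeros(gray.shape, dtype=np.float32)
    for a in angles:
        acc += cv2.morphologyEx(eq, cv2.MORPH_TOPHAT, line_kernel(length, a)).astype(np.float32)
    return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# gray = cv2.imread("mra_slice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
# enhanced = enhance_vessels(gray)
```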

  4. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    Full Text Available Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; a comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  5. Tensor Fukunaga-Koontz transform for small target detection in infrared images

    Science.gov (United States)

    Liu, Ruiming; Wang, Jingzhuo; Yang, Huizhen; Gong, Chenglong; Zhou, Yuanshen; Liu, Lipeng; Zhang, Zhen; Shen, Shuli

    2016-09-01

    Infrared small target detection plays a crucial role in warning and tracking systems. Some novel methods based on pattern recognition technology have attracted much attention from researchers. However, those classical methods must reshape images into vectors of high dimensionality, and vectorizing breaks the natural structure and correlations in the image data. Image representation based on tensors treats images as matrices and can retain the natural structure and correlation information, so tensor algorithms have better classification performance than vector algorithms. The Fukunaga-Koontz transform is a classification algorithm, but in its original form it is a vector method and shares the disadvantage of all vector algorithms. In this paper, we first extend the Fukunaga-Koontz transform to its tensor version, the tensor Fukunaga-Koontz transform. We then design a target detection method based on the tensor Fukunaga-Koontz transform and use it to detect small targets in infrared images. The experimental results, compared through signal-to-clutter ratio, signal-to-clutter gain and background suppression factor, validate the advantage of target detection based on the tensor Fukunaga-Koontz transform over that based on the Fukunaga-Koontz transform.
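
    For reference, a short sketch of the classical (vector) Fukunaga-Koontz transform that the record generalizes; the tensor version itself is not reproduced here and the function name is illustrative.

```python
import numpy as np

def fukunaga_koontz_basis(X1, X2, eps=1e-10):
    """Classical (vector) Fukunaga-Koontz transform.

    X1, X2: (n_samples, n_features) matrices of vectorized patches for the two
    classes (e.g. target vs. background). Returns the shared eigenbasis and the
    class-1 eigenvalues; the class-2 eigenvalues are 1 minus these, so directions
    that represent one class well represent the other poorly.
    """
    R1 = X1.T @ X1 / X1.shape[0]               # class autocorrelation matrices
    R2 = X2.T @ X2 / X2.shape[0]
    evals, evecs = np.linalg.eigh(R1 + R2)     # eigendecomposition of the sum
    keep = evals > eps
    P = evecs[:, keep] / np.sqrt(evals[keep])  # whitening transform for R1 + R2
    lam1, Q = np.linalg.eigh(P.T @ R1 @ P)     # shared eigenvectors of both classes
    return P @ Q, lam1
```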

  6. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, something unreadable and meaningless that cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, a monoalphabetic algorithm and an XOR algorithm are combined to form a super-encryption. The monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the XOR logic operation. Since the monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so the data integrity is still ensured.
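
    A toy sketch of the described super-encryption; the substitution alphabet and the XOR key below are illustrative placeholders, not values from the paper.

```python
import string

SUBSTITUTION_KEY = "QWERTYUIOPASDFGHJKLZXCVBNM"   # hypothetical permutation of A-Z
XOR_KEY = b"secret"                               # hypothetical XOR key

MONO = str.maketrans(string.ascii_uppercase, SUBSTITUTION_KEY)
MONO_INV = str.maketrans(SUBSTITUTION_KEY, string.ascii_uppercase)

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key, repeating the key as needed.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt(plaintext: str) -> bytes:
    stage1 = plaintext.upper().translate(MONO)          # monoalphabetic layer
    return xor_bytes(stage1.encode("ascii"), XOR_KEY)   # XOR layer

def decrypt(ciphertext: bytes) -> str:
    stage1 = xor_bytes(ciphertext, XOR_KEY).decode("ascii")
    return stage1.translate(MONO_INV)

assert decrypt(encrypt("SUPER ENCRYPTION")) == "SUPER ENCRYPTION"
```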

  7. General entanglement-assisted transformation for bipartite pure quantum states

    Science.gov (United States)

    Song, Wei; Huang, Yan; Liu, Nai-Le; Chen, Zeng-Bing

    2007-01-01

    We introduce the general catalysts for pure entanglement transformations under local operations and classical communications in such a way that we disregard the profit and loss of entanglement of the catalysts per se. As such, the possibilities of pure entanglement transformations are greatly expanded. We also design an efficient algorithm to detect whether a k × k general catalyst exists for a given entanglement transformation. This algorithm can also be exploited to witness the existence of standard catalysts.
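
    By Nielsen's theorem, the existence of a k × k catalyst reduces to a majorization test on the tensor product of Schmidt coefficients; the sketch below checks one given candidate catalyst (the algorithm in the paper would search over candidates), using the classic Jonathan-Plenio example.

```python
import numpy as np

def majorized(x, y):
    """True if the probability vector x is majorized by y."""
    return np.all(np.sort(x)[::-1].cumsum() <= np.sort(y)[::-1].cumsum() + 1e-12)

def catalyst_works(psi, phi, cat):
    """Can |psi> -> |phi> with catalyst 'cat'?  (All arguments are vectors of
    squared Schmidt coefficients summing to one.)"""
    return majorized(np.outer(psi, cat).ravel(), np.outer(phi, cat).ravel())

# This transformation is impossible on its own but allowed with a 2-level catalyst.
psi = np.array([0.4, 0.4, 0.1, 0.1])
phi = np.array([0.5, 0.25, 0.25, 0.0])
cat = np.array([0.6, 0.4])
print(majorized(psi, phi), catalyst_works(psi, phi, cat))   # False True
```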

  8. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.

    2013-12-01

    This paper is devoted to the consideration of a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.

  9. An algorithm for reduct cardinality minimization

    KAUST Repository

    AbouEisha, Hassan M.; Al Farhan, Mohammed; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This paper is devoted to the consideration of a new algorithm for reduct cardinality minimization. The algorithm transforms the initial table into a decision table of a special kind, simplifies this table, and uses a dynamic programming algorithm to finish the construction of an optimal reduct. Results of computer experiments with decision tables from the UCI ML Repository are discussed. © 2013 IEEE.

  10. Cyclic transformation of orbital angular momentum modes

    International Nuclear Information System (INIS)

    Schlederer, Florian; Krenn, Mario; Fickler, Robert; Malik, Mehul; Zeilinger, Anton

    2016-01-01

    The spatial modes of photons are one realization of a QuDit, a quantum system that is described in a D-dimensional Hilbert space. In order to perform quantum information tasks with QuDits, a general class of D-dimensional unitary transformations is needed. Among these, cyclic transformations are an important special case required in many high-dimensional quantum communication protocols. In this paper, we experimentally demonstrate a cyclic transformation in the high-dimensional space of photonic orbital angular momentum (OAM). Using simple linear optical components, we show a successful four-fold cyclic transformation of OAM modes. Interestingly, our experimental setup was found by a computer algorithm. In addition to the four-cyclic transformation, the algorithm also found extensions to higher-dimensional cycles in a hybrid space of OAM and polarization. Besides being useful for quantum cryptography with QuDits, cyclic transformations are key for the experimental production of high-dimensional maximally entangled Bell-states. (paper)

  11. One improved LSB steganography algorithm

    Science.gov (United States)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the plain LSB algorithm is easily detected with high accuracy by X2 and RS steganalysis. We improved the LSB algorithm by selecting the embedding locations and modifying the embedding method, combined with a sub-affine transformation and a matrix coding method, and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist X2 and RS steganalysis effectively.
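
    For context, a sketch of the plain LSB baseline that X2 and RS steganalysis detect; the improvements described above (embedding-location selection, sub-affine transformation, matrix coding) are not reproduced here.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Plain LSB embedding: write one message bit into the least significant
    bit of each of the first len(bits) pixels, in row-major order."""
    stego = cover.copy().ravel()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bits: int) -> np.ndarray:
    return stego.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
message = rng.integers(0, 2, size=256, dtype=np.uint8)
assert np.array_equal(extract_lsb(embed_lsb(cover, message), message.size), message)
```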

  12. Program Transformation to Identify List-Based Parallel Skeletons

    Directory of Open Access Journals (Sweden)

    Venkatesh Kannan

    2016-07-01

    Full Text Available Algorithmic skeletons are used as building-blocks to ease the task of parallel programming by abstracting the details of parallel implementation from the developer. Most existing libraries provide implementations of skeletons that are defined over flat data types such as lists or arrays. However, skeleton-based parallel programming is still very challenging as it requires intricate analysis of the underlying algorithm and often uses inefficient intermediate data structures. Further, the algorithmic structure of a given program may not match those of list-based skeletons. In this paper, we present a method to automatically transform any given program to one that is defined over a list and is more likely to contain instances of list-based skeletons. This facilitates the parallel execution of a transformed program using existing implementations of list-based parallel skeletons. Further, by using an existing transformation called distillation in conjunction with our method, we produce transformed programs that contain fewer inefficient intermediate data structures.

  13. Fourier transform NMR

    International Nuclear Information System (INIS)

    Hallenga, K.

    1991-01-01

    This paper discusses the concept of Fourier transformation, one of the many precious legacies of the French mathematician Jean Baptiste Joseph Fourier, essential for understanding the link between continuous-wave (CW) and Fourier transform (FT) NMR. Although in modern FT NMR the methods used to obtain a frequency spectrum from the time-domain signal may vary greatly, from the efficient Cooley-Tukey algorithm to very elaborate iterative least-squares methods based on the maximum entropy method or on linear prediction, the principles of Fourier transformation are unchanged and give invaluable insight into the interconnection of many pairs of physical entities called Fourier pairs

  14. Algorithm for the classification of multi-modulating signals on the electrocardiogram.

    Science.gov (United States)

    Mita, Mitsuo

    2007-03-01

    This article discusses an algorithm to measure the electrocardiogram (ECG) and respiration simultaneously and its diagnostic potential for sleep apnoea from ECG recordings. The algorithm combines three particular scale transforms, a_j(t), u_j(t) and o_j(a_j), with the statistical Fourier transform (SFT). The time and magnitude scale transforms a_j(t), u_j(t) change the source into a periodic signal, and tau_j = o_j(a_j) confines its harmonics to a few instantaneous components at tau_j, a common instant on the two scales t and tau_j. As a result, the multi-modulating source is decomposed by the SFT and is reconstructed into ECG, respiration and the other signals by the inverse transform. The algorithm is expected to obtain the partial ventilation and the heart rate variability from the scale transforms among a_j(t), a_(j+1)(t) and u_(j+1)(t) joined with each modulation. The algorithm has high potential as a clinical checkup for the diagnosis of sleep apnoea from ECG recordings.

  15. Fourier transform and particle swarm optimization based modified LQR algorithm for mitigation of vibrations using magnetorheological dampers

    Science.gov (United States)

    Kumar, Gaurav; Kumar, Ashok

    2017-11-01

    Structural control has gained significant attention in recent times. The standalone issue of power requirement during an earthquake has already been solved up to a large extent by designing semi-active control systems using conventional linear quadratic control theory, and many other intelligent control algorithms such as fuzzy controllers, artificial neural networks, etc. In conventional linear-quadratic regulator (LQR) theory, it is customary to note that the values of the design parameters are decided at the time of designing the controller and cannot be subsequently altered. During an earthquake event, the response of the structure may increase or decrease, depending on the quasi-resonance occurring between the structure and the earthquake. In this case, it is essential to modify the value of the design parameters of the conventional LQR controller to obtain the optimum control force to mitigate the vibrations due to the earthquake. A few studies have been done to sort out this issue but in all these studies it was necessary to maintain a database of the earthquake. To solve this problem and to find the optimized design parameters of the LQR controller in real time, a fast Fourier transform and particle swarm optimization based modified linear quadratic regulator method is presented here. This method comprises four different algorithms: particle swarm optimization (PSO), the fast Fourier transform (FFT), the clipped control algorithm and the LQR. The FFT helps to obtain the dominant frequency for every time window. PSO finds the optimum gain matrix through the real-time update of the weighting matrix R, thereby dispensing with experimentation. The clipped control law is employed to match the magnetorheological (MR) damper force with the desired force given by the controller. The modified Bouc-Wen phenomenological model is taken to recognize the nonlinearities in the MR damper. The assessment of the proposed method is done by simulation of a three-story structure
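
    A sketch of the FFT step only: estimating the dominant frequency of each time window of the ground acceleration, which is the quantity the method feeds into the PSO update of the weighting matrix R (signal and variable names are illustrative).

```python
import numpy as np

def dominant_frequency(window: np.ndarray, fs: float) -> float:
    """Dominant frequency (Hz) of one time window, from the FFT magnitude."""
    window = window - window.mean()             # remove the DC offset
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin

# Example: a 2 Hz excitation sampled at 100 Hz, analysed in 1-second windows.
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
accel = np.sin(2.0 * np.pi * 2.0 * t)
dominant = [dominant_frequency(accel[i:i + int(fs)], fs)
            for i in range(0, accel.size, int(fs))]
print(dominant)   # approximately 2.0 Hz for every window
```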

  16. Quantitative Comparison of Tolerance-Based Feature Transforms

    OpenAIRE

    Reniers, Dennie; Telea, Alexandru

    2006-01-01

    Tolerance-based feature transforms (TFTs) assign to each pixel in an image not only the nearest feature pixels on the boundary (origins), but all origins from the minimum distance up to a user-defined tolerance. In this paper, we compare four simple-to-implement methods for computing TFTs for binary images. Of these, two are novel methods and two extend existing distance transform algorithms. We quantitatively and qualitatively compare all algorithms on speed and accuracy of both distance and...

  17. Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

    Directory of Open Access Journals (Sweden)

    H.X. Lin

    2004-01-01

    Full Text Available Algorithms are often parallelized based on data dependence analysis manually or by means of parallel compilers. Some vector/matrix computations such as matrix-vector products with simple data dependence structures (data parallelism) can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms specifically designed for parallel computation have to be devised. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism will be discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (the Alternating Cyclic Elimination and Reduction algorithm).

  18. Algorithmic Verification of Linearizability for Ordinary Differential Equations

    KAUST Repository

    Lyakhov, Dmitry A.

    2017-07-19

    For a nonlinear ordinary differential equation solved with respect to the highest order derivative and rational in the other derivatives and in the independent variable, we devise two algorithms to check if the equation can be reduced to a linear one by a point transformation of the dependent and independent variables. The first algorithm is based on a construction of the Lie point symmetry algebra and on the computation of its derived algebra. The second algorithm exploits the differential Thomas decomposition and allows not only to test the linearizability, but also to generate a system of nonlinear partial differential equations that determines the point transformation and the coefficients of the linearized equation. The implementation of both algorithms is discussed and their application is illustrated using several examples.

  19. Cryptanalysis of Application of Laplace Transform for Cryptography

    OpenAIRE

    Gençoğlu Muharrem Tuncay

    2017-01-01

    Although the Laplace Transform is a good application field in the design of cryptosystems, many cryptographic algorithm proposals become unsatisfactory for secure communication. In this cryptanalysis study, one of the significant disadvantages of the proposed algorithm is that its security analysis is performed with only a statistical test. In this study, explaining what should be considered when performing security analysis of Laplace Transform based encryption systems and using basic mathematical rules, p...

  20. Efficient algorithms of multidimensional γ-ray spectra compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2006-01-01

    Efficient algorithms to compress multidimensional γ-ray events are presented. Two alternative kinds of compression algorithms, based on the adaptive orthogonal and randomizing transforms respectively, are proposed. In both algorithms we employ the reduction of data volume due to the symmetry of the γ-ray spectra

  1. A Motion Estimation Algorithm Using DTCWT and ARPS

    Directory of Open Access Journals (Sweden)

    Unan Y. Oktiawati

    2013-09-01

    Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as the reference input and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. The motion compensation is then carried out on each inverse-transformed frame n and motion vector. The results show that the PSNR can be improved for mobile devices without degrading visual quality. The proposed algorithm also uses less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system used in Section 6.

  2. The optimal digital filters of sine and cosine transforms for geophysical transient electromagnetic method

    Science.gov (United States)

    Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo

    2018-03-01

    The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining the filter coefficients, which are computed in the sample domain from a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm for the sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2 order Bessel functions of the first kind. The results show that the selection of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal selection of the parameter, it is found that an optimal sampling interval s also exists that achieves the best precision of the digital filter algorithm. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients of different lengths, which may help to develop the digital filter algorithm of the sine and cosine transforms and promote its application.
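
    The relationship mentioned above is sin(x) = sqrt(pi*x/2) * J_{+1/2}(x) and cos(x) = sqrt(pi*x/2) * J_{-1/2}(x), which is what lets sine and cosine transforms reuse digital filters designed for the Hankel transform; a quick numerical check:

```python
import numpy as np
from scipy.special import jv

x = np.linspace(0.1, 20.0, 200)
factor = np.sqrt(np.pi * x / 2.0)
assert np.allclose(np.sin(x), factor * jv(0.5, x))    # +1/2-order Bessel function
assert np.allclose(np.cos(x), factor * jv(-0.5, x))   # -1/2-order Bessel function
```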

  3. Cryptanalysis of Application of Laplace Transform for Cryptography

    Directory of Open Access Journals (Sweden)

    Gençoğlu Muharrem Tuncay

    2017-01-01

    Full Text Available Although the Laplace Transform is a good application field in the design of cryptosystems, many cryptographic algorithm proposals become unsatisfactory for secure communication. In this cryptanalysis study, one of the significant disadvantages of the proposed algorithm is that its security analysis is performed with only a statistical test. In this study, we explain what should be considered when performing a security analysis of Laplace Transform based encryption systems and, using basic mathematical rules, break the password without knowing the secret key. In essence, this study is a refutation of the article titled Application of Laplace Transform for Cryptography written by Hiwerakar [3].

  4. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  5. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    Science.gov (United States)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial frequency aberrations. The present phase-diverse iterative transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein
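
    As a rough illustration of the iterative-transform family only, a basic Gerchberg-Saxton-style loop is sketched below; the defocus diversity function and the adaptive-diversity inner/outer loop structure described above are not reproduced.

```python
import numpy as np

def iterative_transform_phase_retrieval(pupil_amp, focal_amp, n_iter=200, seed=0):
    """Alternate between pupil and focal planes via the FFT, enforcing the
    known amplitude in each plane while keeping the current phase estimate."""
    rng = np.random.default_rng(seed)
    field = pupil_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, pupil_amp.shape))
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))   # focal-plane constraint
        field = np.fft.ifft2(focal)
        field = pupil_amp * np.exp(1j * np.angle(field))   # pupil-plane constraint
    return np.angle(field)   # recovered pupil-phase estimate
```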

  6. A New Formula for the Inverse Wavelet Transform

    OpenAIRE

    Sun, Wenchang

    2010-01-01

    Finding a computationally efficient algorithm for the inverse continuous wavelet transform is a fundamental topic in applications. In this paper, we show the convergence of the inverse wavelet transform.

  7. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    Science.gov (United States)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y) = (X − E_X)·Ξ that includes an observation matrix Y, another observation matrix X, and matrices of randomly distributed errors E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix Y is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
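
    A sketch of the ‘closed form’ route via the singular-value decomposition (the standard construction for X·Ξ ≈ Y, assuming the relevant sub-block of V is invertible); the iterative Euler-Lagrange algorithm analyzed in the paper is not reproduced.

```python
import numpy as np

def multivariate_tls(X, Y):
    """Multivariate total least-squares estimate of Xi in (Y - E_Y) = (X - E_X) Xi,
    from the SVD of the augmented data matrix [X Y]."""
    m, d = X.shape[1], Y.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
    V = Vt.T
    return -V[:m, m:m + d] @ np.linalg.inv(V[m:m + d, m:m + d])

# Synthetic check: both coordinate sets perturbed by noise, as in the MTLS model.
rng = np.random.default_rng(1)
Xi_true = np.array([[1.2, -0.3], [0.4, 0.9]])
X_true = rng.normal(size=(500, 2))
X = X_true + 0.01 * rng.normal(size=X_true.shape)
Y = X_true @ Xi_true + 0.01 * rng.normal(size=X_true.shape)
print(multivariate_tls(X, Y))   # close to Xi_true
```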

  8. A Fast Algorithm of Generalized Radon-Fourier Transform for Weak Maneuvering Target Detection

    Directory of Open Access Journals (Sweden)

    Weijie Xia

    2016-01-01

    Full Text Available The generalized Radon-Fourier transform (GRFT) has been proposed to detect radar weak maneuvering targets by realizing coherent integration via jointly searching in motion parameter space. Two main drawbacks of GRFT are the heavy computational burden and the blind speed side lobes (BSSL), which will cause serious false alarms. The BSSL learning-based particle swarm optimization (BPSO) has been proposed before to reduce the computational burden of GRFT and solve the BSSL problem simultaneously. However, the BPSO suffers from an apparent loss in detection performance compared with GRFT. In this paper, a fast implementation algorithm of GRFT using the BSSL learning-based modified wind-driven optimization (BMWDO) is proposed. In the BMWDO, the BSSL learning procedure is also used to deal with the BSSL phenomenon. Besides, the MWDO adjusts the coefficients in WDO with Levy distribution and uniform distribution, and it outperforms PSO in a noisy environment. Compared with BPSO, the proposed method can achieve better detection performance with a similar computational cost. Several numerical experiments are also provided to demonstrate the effectiveness of the proposed method.

  9. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  10. Discrete computational mechanics for stiff phenomena

    KAUST Repository

    Michels, Dominik L.

    2016-11-28

    Many natural phenomena which occur in the realm of visual computing and computational physics, like the dynamics of cloth, fibers, fluids, and solids as well as collision scenarios are described by stiff Hamiltonian equations of motion, i.e. differential equations whose solution spectra simultaneously contain extremely high and low frequencies. This usually impedes the development of physically accurate and at the same time efficient integration algorithms. We present a straightforward computationally oriented introduction to advanced concepts from classical mechanics. We provide an easy to understand step-by-step introduction from variational principles over the Euler-Lagrange formalism and the Legendre transformation to Hamiltonian mechanics. Based on such solid theoretical foundations, we study the underlying geometric structure of Hamiltonian systems as well as their discrete counterparts in order to develop sophisticated structure preserving integration algorithms to efficiently perform high fidelity simulations.
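
    The Legendre transformation step from the Lagrangian to the Hamiltonian picture can be made concrete with a small symbolic example (a harmonic oscillator, chosen purely for illustration):

```python
import sympy as sp

t = sp.symbols("t")
m, k = sp.symbols("m k", positive=True)
q = sp.Function("q")(t)
qdot, p = sp.symbols("qdot p")

# Lagrangian L(q, qdot) of a harmonic oscillator.
L = sp.Rational(1, 2) * m * qdot**2 - sp.Rational(1, 2) * k * q**2

# Legendre transformation: p = dL/dqdot, then H(q, p) = p*qdot - L with qdot(p).
p_def = sp.diff(L, qdot)                        # p = m*qdot
qdot_of_p = sp.solve(sp.Eq(p, p_def), qdot)[0]  # qdot = p/m
H = sp.simplify((p * qdot - L).subs(qdot, qdot_of_p))
print(H)   # the oscillator Hamiltonian p**2/(2*m) + k*q**2/2
```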

  11. Global track finder for Belle II experiment

    Energy Technology Data Exchange (ETDEWEB)

    Trusov, Viktor; Feindt, Michael; Heck, Martin; Kuhr, Thomas; Goldenzweig, Pablo [Karlsruhe Institute of Technology, IEKP (Germany); Collaboration: Belle II-Collaboration

    2015-07-01

    We present an implementation of a method based on the Legendre transformation for reconstructing charged particle tracks in the central drift chamber of the Belle II experiment. The method is designed for fast track finding and restores circular patterns of track hits in the transverse plane by searching for common tangents to the drift circles of hits in the conformal space. With the transverse trajectories known, the longitudinal momentum is estimated by assigning stereo hits, followed by determination of the track parameters. The method includes algorithms responsible for track quality estimation and reduction of the fake rate. The work targets increasing the efficiency and reducing the execution time because the computing power available to the experiment is limited. The algorithm is developed within the Belle II software environment, using Monte-Carlo simulation to probe its efficiency.
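
    A simplified 2-D sketch of the underlying idea rather than the Belle II implementation: hit positions are mapped conformally, and each drift circle then contributes two sinusoids in the Legendre (theta, rho) space, so a line tangent to many circles appears as a peak in the accumulator.

```python
import numpy as np

def conformal_map(x, y):
    """(x, y) -> (2x, 2y)/(x^2 + y^2); circles through the origin become lines."""
    d = x**2 + y**2
    return 2.0 * x / d, 2.0 * y / d

def legendre_vote(circles, n_theta=360, n_rho=200, rho_max=2.0):
    """Vote for lines tangent to the given drift circles (cx, cy, r):
    each circle satisfies rho = cx*cos(theta) + cy*sin(theta) +/- r."""
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for cx, cy, r in circles:
        base = cx * np.cos(theta) + cy * np.sin(theta)
        for rho in (base + r, base - r):
            idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
            ok = (idx >= 0) & (idx < n_rho)
            acc[np.arange(n_theta)[ok], idx[ok]] += 1
    it, ir = np.unravel_index(np.argmax(acc), acc.shape)
    return theta[it], ir * 2 * rho_max / (n_rho - 1) - rho_max
```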

  12. Human Motion Capture Data Tailored Transform Coding.

    Science.gov (United States)

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
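
    A minimal sketch of the general idea, assuming a clip stored as a frames-by-coordinates matrix: the SVD supplies a data-dependent orthogonal basis and the transform coefficients are uniformly quantized (the entropy-coding stage and the authors' exact construction are omitted).

```python
import numpy as np

def encode_clip(clip, k, step=0.02):
    """Transform-code one mocap clip: data-dependent basis from the SVD,
    keep the first k basis vectors, uniformly quantize the coefficients."""
    mean = clip.mean(axis=0)
    _, _, Vt = np.linalg.svd(clip - mean, full_matrices=False)
    basis = Vt[:k]                                  # k orthonormal basis vectors
    coeffs = (clip - mean) @ basis.T
    return np.round(coeffs / step).astype(np.int32), basis, mean, step

def decode_clip(q, basis, mean, step):
    return (q * step) @ basis + mean

rng = np.random.default_rng(0)
clip = rng.normal(size=(120, 60))                   # 120 frames, 60 DoF (synthetic)
q, basis, mean, step = encode_clip(clip, k=20)
recon = decode_clip(q, basis, mean, step)           # lossy reconstruction
```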

  13. Fast Unitary Transforms - Benefits and Restrictions.

    Science.gov (United States)

    1980-04-01

    The forward transform is given by T(u) = Σ_{x=0}^{N-1} f(x) g(x,u), where g(x,u) is the forward transformation kernel and u assumes values in the range 0, 1, ..., N-1. Similarly, the inverse transform is given by the relation f(x) = Σ_{u=0}^{N-1} T(u) h(x,u), where h(x,u) is the inverse transformation kernel. For two-dimensional data the one-dimensional transform is applied successively along each dimension of the function to obtain T(u,v); similar comments hold for the inverse transform if h(x,y,u,v) is separable. If the kernel g(x,y,u,v) is separable and symmetric, the forward transform can be used directly to obtain the inverse transform simply by multiplying the result of the algorithm by N.

  14. GPU-Vote: A Framework for Accelerating Voting Algorithms on GPU.

    NARCIS (Netherlands)

    Braak, van den G.J.W.; Nugteren, C.; Mesman, B.; Corporaal, H.; Kaklamanis, C.; Papatheodorou, T.; Spirakis, P.G.

    2012-01-01

    Voting algorithms, such as histogram and Hough transforms, are frequently used algorithms in various domains, such as statistics and image processing. Algorithms in these domains may be accelerated using GPUs. Implementing voting algorithms efficiently on a GPU however is far from trivial due to

  15. Algorithmic Complexity and Reprogrammability of Chemical Structure Networks

    KAUST Repository

    Zenil, Hector; Kiani, Narsis A.; Shang, Ming-mei; Tegner, Jesper

    2018-01-01

    Here we address the challenge of profiling causal properties and tracking the transformation of chemical compounds from an algorithmic perspective. We explore the potential of applying a computational interventional calculus based on the principles of algorithmic probability to chemical structure networks. We profile the sensitivity of the elements and covalent bonds in a chemical structure network algorithmically, asking whether reprogrammability affords information about thermodynamic and chemical processes involved in the transformation of different compound classes. We arrive at numerical results suggesting a correspondence between some physical, structural and functional properties. Our methods are capable of separating chemical classes that reflect functional and natural differences without considering any information about atomic and molecular properties. We conclude that these methods, with their links to chemoinformatics via algorithmic probability, hold promise for future research.

  16. Algorithmic Complexity and Reprogrammability of Chemical Structure Networks

    KAUST Repository

    Zenil, Hector

    2018-02-16

    Here we address the challenge of profiling causal properties and tracking the transformation of chemical compounds from an algorithmic perspective. We explore the potential of applying a computational interventional calculus based on the principles of algorithmic probability to chemical structure networks. We profile the sensitivity of the elements and covalent bonds in a chemical structure network algorithmically, asking whether reprogrammability affords information about thermodynamic and chemical processes involved in the transformation of different compound classes. We arrive at numerical results suggesting a correspondence between some physical, structural and functional properties. Our methods are capable of separating chemical classes that reflect functional and natural differences without considering any information about atomic and molecular properties. We conclude that these methods, with their links to chemoinformatics via algorithmic probability, hold promise for future research.

  17. Algorithmic Complexity and Reprogrammability of Chemical Structure Networks

    KAUST Repository

    Zenil, Hector

    2018-04-02

    Here we address the challenge of profiling causal properties and tracking the transformation of chemical compounds from an algorithmic perspective. We explore the potential of applying a computational interventional calculus based on the principles of algorithmic probability to chemical structure networks. We profile the sensitivity of the elements and covalent bonds in a chemical structure network algorithmically, asking whether reprogrammability affords information about thermodynamic and chemical processes involved in the transformation of different compound classes. We arrive at numerical results suggesting a correspondence between some physical, structural and functional properties. Our methods are capable of separating chemical classes that reflect functional and natural differences without considering any information about atomic and molecular properties. We conclude that these methods, with their links to chemoinformatics via algorithmic probability, hold promise for future research.

  18. Medical image compression with fast Hartley transform

    International Nuclear Information System (INIS)

    Paik, C.H.; Fox, M.D.

    1988-01-01

    The purpose of data compression is storage and transmission of images with minimization of memory for storage and bandwidth for transmission, while maintaining robustness in the presence of transmission noise or storage medium errors. Here, the fast Hartley transform (FHT) is used for transformation and a new thresholding method is devised. The FHT is used instead of the fast Fourier transform (FFT), thus providing calculation at least as fast as that of the fastest algorithm of FFT. This real numbered transform requires only half the memory array space for saving of transform coefficients and allows for easy implementation on very large-scale integrated circuits because of the use of the same formula for both forward and inverse transformation and the conceptually straightforward algorithm. Threshold values were adaptively selected according to the correlation factor of each block of equally divided blocks of the image. Therefore, this approach provided a coding scheme that included maximum information with minimum image bandwidth. Overall, the results suggested that the Hartley transform adaptive thresholding approach results in improved fidelity, shorter decoding time, and greater robustness in the presence of noise than previous approaches
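
    The discrete Hartley transform itself is easy to obtain from the FFT, H = Re(F) - Im(F), and it is (up to a factor 1/N) its own inverse, which is why the same routine serves both forward and inverse transformation; a small sketch:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via the FFT (cas kernel = cos + sin)."""
    F = np.fft.fft(x)
    return F.real - F.imag

def idht(H):
    """The DHT is its own inverse up to the factor 1/N."""
    return dht(H) / H.size

x = np.random.default_rng(0).normal(size=256)
assert np.allclose(idht(dht(x)), x)
```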

  19. LOW COMPLEXITY HYBRID LOSSY TO LOSSLESS IMAGE CODER WITH COMBINED ORTHOGONAL POLYNOMIALS TRANSFORM AND INTEGER WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    R. Krishnamoorthy

    2012-05-01

    Full Text Available In this paper, a new lossy to lossless image coding scheme combining an Orthogonal Polynomials Transform and an Integer Wavelet Transform is proposed. The Lifting Scheme based Integer Wavelet Transform (LS-IWT) is first applied to the image in order to reduce the blocking artifact and memory demand. The Embedded Zero tree Wavelet (EZW) subband coding algorithm is used in this proposed work for progressive image coding, which achieves efficient bit rate reduction. The computational complexity of lower subband coding of the EZW algorithm is reduced in this proposed work with a new integer based Orthogonal Polynomials transform coding. The normalization and mapping are done on the subband of the image for exploiting the subjective redundancy, and the zero tree structure is obtained for EZW coding, so the computational complexity is greatly reduced in this proposed work. The experimental results of the proposed technique also show that efficient bit rate reduction is achieved for both lossy and lossless compression when compared with existing techniques.

  20. Algorithms for mapping high-throughput DNA sequences

    DEFF Research Database (Denmark)

    Frellsen, Jes; Menzel, Peter; Krogh, Anders

    2014-01-01

    of data generation, new bioinformatics approaches have been developed to cope with the large amount of sequencing reads obtained in these experiments. In this chapter, we first introduce HTS technologies and their usage in molecular biology and discuss the problem of mapping sequencing reads...... to their genomic origin. We then in detail describe two approaches that offer very fast heuristics to solve the mapping problem in a feasible runtime. In particular, we describe the BLAT algorithm, and we give an introduction to the Burrows-Wheeler Transform and the mapping algorithms based on this transformation....
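
    A toy construction of the Burrows-Wheeler Transform via sorted rotations, for illustration only; production read mappers build it from a suffix array or FM-index instead of materializing all rotations.

```python
def bwt(text: str, sentinel: str = "$") -> str:
    """Burrows-Wheeler transform: last column of the sorted rotation matrix."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last: str, sentinel: str = "$") -> str:
    """Naive inversion by repeatedly sorting prepended columns."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith(sentinel)).rstrip(sentinel)

assert inverse_bwt(bwt("banana")) == "banana"
```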

  1. Accelerating the Non-equispaced Fast Fourier Transform on Commodity Graphics Hardware

    DEFF Research Database (Denmark)

    Sørensen, Thomas Sangild; Schaeffter, Tobias; Noe, Karsten Østergaard

    2008-01-01

    We present a fast parallel algorithm to compute the Non-equispaced fast Fourier transform on commodity graphics hardware (the GPU). We focus particularly on a novel implementation of the convolution step in the transform, which was previously its most time consuming part. We describe...

  2. KAM Tori Construction Algorithms

    Science.gov (United States)

    Wiesel, W.

    In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.

  3. Fast Exact Euclidean Distance (FEED): A new class of adaptable distance transforms

    NARCIS (Netherlands)

    Schouten, Theo E.; van den Broek, Egon

    2014-01-01

    A new unique class of foldable distance transforms of digital images (DT) is introduced, baptized: Fast Exact Euclidean Distance (FEED) transforms. FEED class algorithms calculate the DT starting directly from the definition or rather its inverse. The principle of FEED class algorithms is

  4. Fast Exact Euclidean Distance (FEED) : A new class of adaptable distance transforms

    NARCIS (Netherlands)

    Schouten, Theo E.; van den Broek, Egon L.

    2014-01-01

    A new unique class of foldable distance transforms of digital images (DT) is introduced, baptized: Fast Exact Euclidean Distance (FEED) transforms. FEED class algorithms calculate the DT starting directly from the definition or rather its inverse. The principle of FEED class algorithms is introduced,

  5. Analytic image reconstruction in PVI using the 3D radon transform

    International Nuclear Information System (INIS)

    Staxyk, M.W.; Rogers, J.G.

    1992-01-01

    This paper reports that algorithms have been derived for three dimensional image reconstruction in positron volume imaging (PVI) using the inversion of the three dimensional Radon Transform (RT). The RT is formed by histogramming events into the planes in which they lie rather than along lines as in the X-ray Transform (XT). The authors show the transformation between the RT and the XT and using this relationship they describe a fast backprojection method for the RT in which the computation time is found to be up to 20 times faster with the new algorithm. Monte Carlo simulations show that statistical noise levels in images reconstructed from complete projections with the new RT algorithm are comparable to those obtained using the Fourier Transform (FT) inversion of the XT

  6. A Sweepline Algorithm for Generalized Delaunay Triangulations

    DEFF Research Database (Denmark)

    Skyum, Sven

    We give a deterministic O(n log n) sweepline algorithm to construct the generalized Voronoi diagram for n points in the plane or rather its dual the generalized Delaunay triangulation. The algorithm uses no transformations and it is developed solely from the sweepline paradigm together...

  7. Application of the one-dimensional Fourier transform for tracking moving objects in noisy environments

    Science.gov (United States)

    Rajala, S. A.; Riddle, A. N.; Snyder, W. E.

    1983-01-01

    In Riddle and Rajala (1981), an algorithm was presented which operates on an image sequence to identify all sets of pixels having the same velocity. The algorithm operates by performing a transformation in which all pixels with the same two-dimensional velocity map to a peak in a transform space. The transform can be decomposed into applications of the one-dimensional Fourier transform and can therefore benefit from the computational advantages of the FFT. This paper is concerned with the fundamental limitations of that algorithm, particularly its sensitivity to image-disturbing factors such as noise, jitter, and clutter. A modification to the algorithm is then proposed which increases its robustness in the presence of these disturbances.

  8. Integral transformations applied to image encryption

    International Nuclear Information System (INIS)

    Vilardy, Juan M.; Torres, Cesar O.; Perez, Ronal

    2017-01-01

    In this paper we consider the application of integral transformations to image encryption through optical systems; a mathematical algorithm using the fractional Fourier transform (FrFT) and Random Phase Masks (RPM) for digital image encryption is implemented on the Matlab platform. The FrFT can be related to other integral transforms, such as: the Fourier transform, Sine and Cosine transforms, Radial Hilbert transform, fractional Sine transform, fractional Cosine transform, fractional Hartley transform, fractional Wavelet transform and Gyrator transform, among other transforms. The encryption scheme is based on the use of the FrFT, the joint transform correlator and two RPMs, which provide security and robustness to the implemented security system. One of the RPMs used during encryption-decryption and the fractional order of the FrFT are the keys that improve security and make the system more resistant against security attacks. (paper)
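
    A sketch of the double random phase encoding idea with two RPMs, using the ordinary FFT as a stand-in for the fractional Fourier transform of the optical system (array sizes and keys below are illustrative):

```python
import numpy as np

def drpe_encrypt(img, key1, key2):
    """Input-plane mask exp(2*pi*i*key1), Fourier-plane mask exp(2*pi*i*key2)."""
    rpm1 = np.exp(2j * np.pi * key1)
    rpm2 = np.exp(2j * np.pi * key2)
    return np.fft.ifft2(np.fft.fft2(img * rpm1) * rpm2)

def drpe_decrypt(cipher, key1, key2):
    """Undo the masks with their complex conjugates, in reverse order."""
    out = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * key2))
    return out * np.exp(-2j * np.pi * key1)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
k1, k2 = rng.random((64, 64)), rng.random((64, 64))
assert np.allclose(drpe_decrypt(drpe_encrypt(img, k1, k2), k1, k2).real, img)
```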

  9. Automatic gender determination from 3D digital maxillary tooth plaster models based on the random forest algorithm and discrete cosine transform.

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet; Kök, Hatice

    2017-05-01

    One of the first stages in the identification of an individual is gender determination. Through gender determination, the search spectrum can be reduced. In disasters such as accidents or fires, which can render identification somewhat difficult, durable teeth are an important source for identification. This study proposes a smart system that can automatically determine gender using 3D digital maxillary tooth plaster models. The study group was composed of 40 Turkish individuals (20 female, 20 male) between the ages of 21 and 24. Using the iterative closest point (ICP) algorithm, tooth models were aligned, and after the segmentation process, models were transformed into depth images. The local discrete cosine transform (DCT) was used in the process of feature extraction, and the random forest (RF) algorithm was used for the process of classification. Classification was performed using 30 different seeds for random generator values and 10-fold cross-validation. A value of 85.166% was obtained for average classification accuracy (CA) and a value of 91.75% for the area under the ROC curve (AUC). A multi-disciplinary study is performed here that includes computer sciences, medicine and dentistry. A smart system is proposed for the determination of gender from 3D digital models of maxillary tooth plaster models. This study has the capacity to extend the field of gender determination from teeth. Copyright © 2017 Elsevier B.V. All rights reserved.
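
    A rough sketch of the feature-extraction and classification stages, keeping the low-frequency corner of a 2-D DCT as the feature vector and classifying with a random forest under 10-fold cross-validation; the depth images and labels below are synthetic placeholders (not the study's data), and the ICP alignment and segmentation steps are omitted.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def dct_features(depth_image, block=8):
    """Low-frequency block x block corner of the 2-D DCT as a compact feature vector."""
    coeffs = dctn(depth_image, type=2, norm="ortho")
    return coeffs[:block, :block].ravel()

rng = np.random.default_rng(0)
depth_images = rng.random((40, 64, 64))   # placeholder "depth images"
labels = np.repeat([0, 1], 20)            # placeholder gender labels

X = np.array([dct_features(img) for img in depth_images])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, labels, cv=10).mean())   # 10-fold cross-validation
```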

  10. Recent developments for the pattern recognition in the central drift chamber of the Belle II detector

    Energy Technology Data Exchange (ETDEWEB)

    Trusov, Viktor; Feindt, Michael; Heck, Martin; Hauth, Thomas; Goldenzweig, Pablo [Karlsruhe Institute of Technology (Germany); Collaboration: Belle II-Collaboration

    2016-07-01

    The Belle II experiment is designed to perform more precise measurements (e.g. CP-violation measurements, New Physics phenomena, rare decays, etc.) than its predecessor, the Belle experiment. To achieve this goal, the luminosity of the experiment will be increased by a factor of 40 and, as a result, many times more data will be collected. Because of this, faster reconstruction algorithms for the data processing need to be developed while retaining accurate physics results. One important part of the data processing chain is the track reconstruction. We present the development of one of the pattern recognition algorithms for the Belle II experiment based on conformal and Legendre transformations. In order to optimize the performance of the algorithm (CPU time and efficiency) we have introduced specialized processing steps. To demonstrate the improvements we present efficiency measurements of the tracking algorithms in the Central Drift Chamber (CDC), obtained using Monte-Carlo simulation of e+e- collisions followed by a full simulation of the Belle II detector.

  11. A Poisson-Fault Model for Testing Power Transformers in Service

    Directory of Open Access Journals (Sweden)

    Dengfu Zhao

    2014-01-01

    Full Text Available This paper presents a method for assessing the instant failure rate of a power transformer under different working conditions. The method can be applied to a dataset of a power transformer under periodic inspections and maintenance. We use a Poisson-fault model to describe the failures of a power transformer. When investigating a Bayes estimate of the instant failure rate under this model, we find that the complexities of a classical method and of a Monte Carlo simulation are unacceptable. By establishing a new filtered estimate of Poisson process observations, we propose a quick algorithm for the Bayes estimate of the instant failure rate. The proposed algorithm is tested on simulated datasets of a power transformer. For these datasets, the proposed estimators of the model parameters perform better than other estimators. The simulation results reveal that the suggested algorithm is the quickest among the three candidates.

  12. Quantum entanglement and quantum computational algorithms

    Indian Academy of Sciences (India)

    We demonstrate that the one- and the two-bit Deutsch-Jozsa algorithm does not require entanglement and can be mapped onto a classical optical scheme. It is only for three and more input bits that the DJ algorithm requires the implementation of entangling transformations and in these cases it is impossible to implement ...

  13. A Fast DCT Algorithm for Watermarking in Digital Signal Processor

    Directory of Open Access Journals (Sweden)

    S. E. Tsai

    2017-01-01

    Full Text Available Discrete cosine transform (DCT) has been an international standard in the Joint Photographic Experts Group (JPEG) format to reduce the blocking effect in digital image compression. This paper proposes a fast discrete cosine transform (FDCT) algorithm that utilizes the energy compactness and matrix sparseness properties in the frequency domain to achieve higher computational performance. For a JPEG image of 8×8 block size in the spatial domain, the algorithm decomposes the two-dimensional (2D) DCT into one pair of one-dimensional (1D) DCTs with transform computation in only 24 multiplications. The 2D spatial data is a linear combination of the base images obtained by the outer product of the column and row vectors of cosine functions, so that the inverse DCT is equally efficient. Implementation of the FDCT algorithm shows that embedding a watermark image of 32 × 32 block pixel size in a 256 × 256 digital image can be completed in only 0.24 seconds and the extraction of the watermark by inverse transform takes less than 0.21 seconds. The proposed FDCT algorithm is shown to be more computationally efficient than many previous works.

  14. The fast decoding of Reed-Solomon codes using high-radix fermat theoretic transforms

    Science.gov (United States)

    Liu, K. Y.; Reed, I. S.; Truong, T. K.

    1976-01-01

    Fourier-like transforms over GF(F sub n), where F sub n = 2(2n) + 1 is a Fermat prime, are applied in decoding Reed-Solomon codes. It is shown that such transforms can be computed using high-radix fast Fourier transform (FFT) algorithms requiring considerably fewer multiplications than the more usual radix 2 FFT algorithm. A special 256-symbol, 16-symbol-error-correcting, Reed-Solomon (RS) code for space communication-link applications can be encoded and decoded using this high-radix FFT algorithm over GF(F sub 3).
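
    A direct O(N^2) number-theoretic transform over GF(F_3), with F_3 = 2^8 + 1 = 257, illustrates the arithmetic involved; the point of the record is that such transforms admit high-radix FFT factorizations, which this sketch does not reproduce. (3 is a primitive root mod 257, so 3^(256/N) is a primitive N-th root of unity for any N dividing 256.)

```python
P = 257   # the Fermat prime F_3 = 2**8 + 1

def ntt(a, inverse=False):
    """Direct transform over GF(257); the length must divide 256."""
    n = len(a)
    w = pow(3, 256 // n, P)
    if inverse:
        w = pow(w, P - 2, P)                    # modular inverse of the root
    out = [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P for i in range(n)]
    if inverse:
        n_inv = pow(n, P - 2, P)
        out = [x * n_inv % P for x in out]
    return out

a = [1, 2, 3, 4, 0, 0, 0, 0]
assert ntt(ntt(a), inverse=True) == a
```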

  15. Bäcklund transformations and divisor doubling

    Science.gov (United States)

    Tsiganov, A. V.

    2018-03-01

    In classical mechanics well-known cryptographic algorithms and protocols can be very useful for construction of canonical transformations preserving form of Hamiltonians. We consider application of a standard generic divisor doubling for construction of new auto Bäcklund transformations for the Lagrange top and Hénon-Heiles system separable in parabolic coordinates.

  16. Algorithmic Verification of Linearizability for Ordinary Differential Equations

    KAUST Repository

    Lyakhov, Dmitry A.; Gerdt, Vladimir P.; Michels, Dominik L.

    2017-01-01

    one by a point transformation of the dependent and independent variables. The first algorithm is based on a construction of the Lie point symmetry algebra and on the computation of its derived algebra. The second algorithm exploits the differential

  17. Thinning an object boundary on digital image using pipelined algorithm

    International Nuclear Information System (INIS)

    Dewanto, S.; Aliyanta, B.

    1997-01-01

    In digital image processing, thinning of an object boundary is required to analyze the image structure by measuring parameters such as the area and circumference of the image object. The process needs a sufficiently large memory and is time consuming if all the image pixels are stored in memory and the subsequent processing is done only after all the pixels have been transformed. A pipelined algorithm can reduce the time used in the process. This algorithm uses a buffer memory whose size can be adjusted, and the next thinning step does not need to wait for the transformation of all pixels. This paper describes the pipelined algorithm with some results on its use for thinning object boundaries in digital images.

  18. A Novel Medical Image Watermarking in Three-dimensional Fourier Compressed Domain

    Directory of Open Access Journals (Sweden)

    Baoru Han

    2015-09-01

    Full Text Available Digital watermarking is a research hotspot in the field of image security that protects digital image copyright. In order to ensure medical image information security, a novel medical image digital watermarking algorithm in the three-dimensional Fourier compressed domain is proposed. The algorithm takes advantage of three-dimensional Fourier compressed domain characteristics, Legendre chaotic neural network encryption features and the robust characteristics of difference hashing, and is a robust zero-watermarking algorithm. On one hand, the original watermark image is encrypted in order to enhance security; this is implemented with a Legendre chaotic neural network. On the other hand, the construction of the zero-watermark adopts difference hashing in the three-dimensional Fourier compressed domain. The novel watermarking algorithm does not need to select a region of interest and can cope with the problem of the medical image content being affected. The specific implementation of the algorithm and the experimental results are given in the paper. The simulation results testify that the novel algorithm possesses desirable robustness to common and geometric attacks.

  19. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Full Text Available Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  20. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.

  1. Studying entanglement-assisted entanglement transformation

    International Nuclear Information System (INIS)

    Hsu Liyi

    2004-01-01

    In this paper, we study catalysis of entanglement transformations for n-level pure entangled states. We propose an algorithm for finding the required catalytic entanglement. We introduce several examples by way of demonstration. We evaluate the lower and upper bounds of the required inequalities for deciding whether there are appropriate m-level catalyst states for entanglement transformations between two n-level pure entangled states

  2. Mobile robot motion estimation using Hough transform

    Science.gov (United States)

    Aldoshkin, D. N.; Yamskikh, T. N.; Tsarev, R. Yu

    2018-05-01

    This paper proposes an algorithm for estimating mobile robot motion. The geometry of the surrounding space is described with range scans (samples of distance measurements) taken by the mobile robot’s range sensors. A similar sample of the space geometry at any arbitrary preceding moment of time, or the environment map, can be used as a reference. The suggested algorithm is invariant to isotropic scaling of the samples or map, which allows using samples measured in different units and maps made at different scales. The algorithm is based on the Hough transform: it maps from the measurement space to a straight-line parameter space. In the straight-line parameter space, the problems of estimating rotation, scaling and translation are solved separately, breaking down the problem of estimating mobile robot localization into three smaller independent problems. The specific feature of the presented algorithm is its robustness to noise and outliers, inherited from the Hough transform. A prototype of the mobile robot orientation system is described.
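
    As a rough illustration of the mapping described above (not the authors' implementation), the sketch below accumulates range-scan points into a (theta, rho) Hough space using the normal-form line equation rho = x*cos(theta) + y*sin(theta); peaks in the accumulator correspond to straight wall segments whose parameters can then be compared between scans. Function names, the grid resolution and the toy data are illustrative assumptions.

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=0.05):
    """Accumulate 2D points (in metres) into a (theta, rho) Hough space.

    points : (N, 2) array of x, y coordinates from a range scan.
    Returns the accumulator, the theta bins (rad) and the rho bins (m).
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(points[:, 0], points[:, 1]).max()
    rhos = np.arange(-max_rho, max_rho + rho_res, rho_res)
    acc = np.zeros((n_theta, len(rhos)), dtype=int)

    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        rho = x * cos_t + y * sin_t              # normal-form line equation
        idx = np.digitize(rho, rhos) - 1         # rho bin for every theta
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas, rhos

# Toy usage: a noisy wall segment roughly along y = 1 m.
pts = np.column_stack([np.linspace(-2, 2, 200),
                       1.0 + 0.02 * np.random.randn(200)])
acc, thetas, rhos = hough_lines(pts)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print("dominant line: theta=%.2f rad, rho=%.2f m" % (thetas[i], rhos[j]))
```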

  3. Dirichlet Characters, Gauss Sums, and Inverse Z Transform

    OpenAIRE

    Gao, Jing; Liu, Huaning

    2012-01-01

    A generalized Möbius transform is presented. It is based on Dirichlet characters. A general algorithm is developed to compute the inverse $Z$ transform on the unit circle, and an error estimate is given for the truncated series representation.

  4. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Inversion of the star transform

    International Nuclear Information System (INIS)

    Zhao, Fan; Schotland, John C; Markel, Vadim A

    2014-01-01

    We define the star transform as a generalization of the broken ray transform introduced by us in previous work. The advantages of using the star transform include the possibility to reconstruct the absorption and the scattering coefficients of the medium separately and simultaneously (from the same data) and the possibility to utilize scattered radiation which, in the case of conventional x-ray tomography, is discarded. In this paper, we derive the star transform from physical principles, discuss its mathematical properties and analyze the numerical stability of inversion. In particular, it is shown that stable inversion of the star transform can be obtained only for configurations involving an odd number of rays. Several computationally efficient inversion algorithms are derived and tested numerically. (paper)

  6. Target Transformation Constrained Sparse Unmixing (TTCSU) Algorithm for Retrieving Hydrous Minerals on Mars: Application to Southwest Melas Chasma

    Science.gov (United States)

    Lin, H.; Zhang, X.; Wu, X.; Tarnas, J. D.; Mustard, J. F.

    2018-04-01

    Quantitative analysis of hydrated minerals from hyperspectral remote sensing data is fundamental for understanding Martian geologic processes. Because of the difficulty of selecting endmembers from hyperspectral images, a sparse unmixing algorithm has been proposed for application to CRISM data on Mars. However, this becomes challenging when the endmember library grows dramatically. Here, we propose a new methodology termed Target Transformation Constrained Sparse Unmixing (TTCSU) to accurately detect hydrous minerals on Mars. A new version of the target transformation technique proposed in our recent work was used to obtain potential detections from CRISM data. Sparse unmixing constrained with these detections as prior information was applied to CRISM single-scattering albedo images, which were calculated using a Hapke radiative transfer model. This methodology increases the success rate of the automatic endmember selection of sparse unmixing and yields more accurate abundances. CRISM images of Southwest Melas Chasma, an area that has been well analyzed previously, were used to validate our methodology in this study. The sulfate jarosite was detected in Southwest Melas Chasma; its distribution is consistent with previous work and its abundance is comparable. More validations will be done in our future work.

  7. The generalized effective potential and its equations of motion

    International Nuclear Information System (INIS)

    Ananikyan, N.S.; Savvidy, G.K.

    1980-01-01

    By means of Legendre transformations a functional Γ(Φ, G, S) is constructed which depends on Φ, a possible expectation value of the quantum field; G, a possible expectation value of the 2-point connected Green function; and S, a possible expectation value of the classical action. The equations of motion for the functional Γ are derived on the example of the gΦ³ theory and an iteration technique is suggested to solve them. The basic equation for Γ which is solved by means of the iteration technique is an ordinary equation and not a variational one, as is the case for the usual Legendre transformations. The developed formalism can be easily generalized to other theories

  8. A complex guided spectral transform Lanczos method for studying quantum resonance states

    International Nuclear Information System (INIS)

    Yu, Hua-Gen

    2014-01-01

    A complex guided spectral transform Lanczos (cGSTL) algorithm is proposed to compute both bound and resonance states, including energies, widths and wavefunctions. The algorithm comprises two layers of complex-symmetric Lanczos iterations. A short inner layer iteration produces a set of complex formally orthogonal Lanczos (cFOL) polynomials. They are used to span the guided spectral transform function determined by a retarded Green operator. An outer layer iteration is then carried out with the transform function to compute the eigen-pairs of the system. The guided spectral transform function is designed to have the same wavefunctions as the eigenstates of the original Hamiltonian in the spectral range of interest. Therefore the energies and/or widths of bound or resonance states can be easily computed with their wavefunctions or by using a root-searching method from the guided spectral transform surface. The new cGSTL algorithm is applied to bound and resonance states of HO, and compared to previous calculations

  9. Adaptive geodesic transform for segmentation of vertebrae on CT images

    Science.gov (United States)

    Gaonkar, Bilwaj; Shu, Liao; Hermosillo, Gerardo; Zhan, Yiqiang

    2014-03-01

    Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone. Thus simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms such as level sets may be used for segmentation, any algorithm that is clinically deployable has to work in under a few seconds. To address these dual challenges, we present here a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this we extend the theory of geodesic distance transforms proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be automatically generated by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept which can be applied to segmentation of other organs as well.
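
    A minimal sketch of one common formulation of a gradient-weighted geodesic distance transform (an assumption for illustration, not the code from the paper): the distance is computed with Dijkstra's algorithm on the pixel grid, where the cost of stepping between neighbouring pixels grows with the local intensity difference, so the distance grows quickly across strong edges such as bone boundaries. The seed points and the weighting constant `lam` are illustrative.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=10.0):
    """Gradient-weighted geodesic distance from a set of seed pixels.

    image : 2D float array; seeds : iterable of (row, col) tuples.
    Edge cost between neighbours = spatial step + lam * |intensity difference|.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))

    steps = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
             (-1, -1, 2 ** 0.5), (-1, 1, 2 ** 0.5),
             (1, -1, 2 ** 0.5), (1, 1, 2 ** 0.5)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc, step in steps:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                nd = d + step + lam * abs(image[rr, cc] - image[r, c])
                if nd < dist[rr, cc]:
                    dist[rr, cc] = nd
                    heapq.heappush(heap, (nd, rr, cc))
    return dist
```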

  10. Volkov transform generalized projection algorithm for attosecond pulse characterization

    International Nuclear Information System (INIS)

    Keathley, P D; Bhardwaj, S; Moses, J; Laurent, G; Kärtner, F X

    2016-01-01

    An algorithm for characterizing attosecond extreme ultraviolet pulses that is not bandwidth-limited, requires no interpolation of the experimental data, and makes no approximations beyond the strong-field approximation is introduced. This approach fully incorporates the dipole transition matrix element into the retrieval process. Unlike attosecond retrieval methods such as phase retrieval by omega oscillation filtering (PROOF), or improved PROOF, it simultaneously retrieves both the attosecond and infrared (IR) pulses, without placing fundamental restrictions on the IR pulse duration, intensity or bandwidth. The new algorithm is validated both numerically and experimentally, and is also found to have practical advantages. These include an increased robustness to noise, and relaxed requirements for the size of the experimental dataset and the intensity of the streaking pulse. (paper)

  11. An Image Matching Method Based on Fourier and LOG-Polar Transform

    Directory of Open Access Journals (Sweden)

    Zhijia Zhang

    2014-04-01

    Full Text Available Traditional template matching methods are not appropriate for situations with a large rotation angle between two images in online detection for industrial production. Aiming at this problem, a Fourier transform algorithm is introduced to correct the image rotation angle, based on its rotational invariance in the time-frequency domain, orienting the image under test in the same direction as the reference image; the images are then matched using a matching algorithm based on the log-polar transform. Compared with current matching algorithms, experimental results show that the proposed algorithm can not only match two images under rotation by an arbitrary angle, but also possesses high matching accuracy and applicability. In addition, the validity and reliability of the algorithm were verified by a simulated matching experiment targeting circular images.
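
    As a hedged sketch of the general principle only (rotating an image rotates the magnitude of its Fourier spectrum by the same angle), the code below estimates the rotation between two images by resampling their spectral magnitudes onto a polar grid and finding the circular shift that best aligns the angular profiles. It is a simplified stand-in for the Fourier/log-polar pipeline of the paper; the function names and grid sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def angular_profile(img, n_theta=360, n_r=128):
    """Sum the centred FFT magnitude over radius for each angle."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = (np.array(mag.shape) - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t)                        # (n_theta, n_r)
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    polar = map_coordinates(mag, coords, order=1)
    return polar.sum(axis=1)

def estimate_rotation(img_a, img_b, n_theta=360):
    """Estimated rotation (degrees) between img_a and img_b, modulo 180.

    The sign convention depends on the image axis orientation; the 180-degree
    ambiguity comes from the symmetry of the magnitude spectrum.
    """
    pa = angular_profile(img_a, n_theta)
    pb = angular_profile(img_b, n_theta)
    # circular cross-correlation of the angular profiles via FFT
    corr = np.fft.ifft(np.fft.fft(pb) * np.conj(np.fft.fft(pa))).real
    shift = int(np.argmax(corr))
    return (360.0 / n_theta) * shift % 180.0
```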

  12. Fourier transform and controlling of flux in scalar hysteresis measurement

    International Nuclear Information System (INIS)

    Kuczmann, Miklos

    2008-01-01

    The paper deals with a possible realization of eliminating the effect of noise in scalar hysteresis measurements. The measured signals have been transformed into the frequency domain, and, after applying a digital filter, the spectra of the filtered signals have been transformed back to the time domain. The proposed technique results in an accurate noise-removal algorithm. The paper illustrates a fast controlling algorithm applying the inverse of the actually measured hysteresis loop, and another, proportional one to measure distorted flux patterns. By developing the mentioned algorithms, it aims at controlling a more complicated phenomenon, i.e. measuring the vector hysteresis characteristics

  13. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    Science.gov (United States)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

    An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step, due to its dependence on the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
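
    To make the Δt ∝ Δx² restriction concrete, the minimal sketch below advances the 1D heat equation u_t = D u_xx with a plain explicit update and the standard stability bound Δt ≤ Δx²/(2D); it illustrates why explicit parabolic terms become expensive at high resolution and hence why accelerated schemes such as STS/RKL are attractive. This is only an illustration of the constraint, not an implementation of the RKL scheme itself, and all names and values are assumed.

```python
import numpy as np

def explicit_diffusion(u, D, dx, t_end):
    """March u_t = D u_xx with forward Euler, obeying dt <= dx^2 / (2 D)."""
    dt = 0.5 * dx ** 2 / D            # parabolic stability limit in 1D
    n_steps = int(np.ceil(t_end / dt))
    dt = t_end / n_steps
    for _ in range(n_steps):
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2   # periodic BCs
        u = u + dt * D * lap
    return u, n_steps

# Halving dx roughly quadruples the number of required steps:
x = np.linspace(0, 1, 200, endpoint=False)
u0 = np.sin(2 * np.pi * x)
_, n = explicit_diffusion(u0, D=1.0, dx=x[1] - x[0], t_end=0.01)
print("steps needed:", n)
```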

  14. Harmonic Domain Modelling of Transformer Core Nonlinearities Using the DIgSILENT PowerFactory Software

    DEFF Research Database (Denmark)

    Bak, Claus Leth; Bak-Jensen, Birgitte; Wiechowski, Wojciech

    2008-01-01

    This paper demonstrates the results of implementation and verification of an already existing algorithm that allows for calculating saturation characteristics of single-phase power transformers. The algorithm was described for the first time in 1993. Now this algorithm has been implemented using the DIgSILENT Programming Language (DPL) as an external script in the harmonic domain calculations of the power system analysis tool PowerFactory [10]. The algorithm is verified by harmonic measurements on a single-phase power transformer. A theoretical analysis of the core nonlinearity phenomena in single and three-phase transformers is also presented. This analysis leads to the conclusion that the method can be applied for modelling nonlinearities of three-phase autotransformers.

  15. REMOTELY SENSED IMAGE COMPRESSION BASED ON WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    Heung K. Lee

    1996-06-01

    Full Text Available In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on a KITSAT-1 image as well as on LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by the peak signal to noise ratio (PSNR) and classification capability.

  16. Computation of watersheds based on parallel graph algorithms

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Maragos, P; Schafer, RW; Butt, MA

    1996-01-01

    In this paper the implementation of a parallel watershed algorithm is described. The algorithm has been implemented on a Cray J932, which is a shared memory architecture with 32 processors. The watershed transform has generally been considered to be inherently sequential, but recently a few research

  17. On the effect of response transformations in sequential parameter optimization.

    Science.gov (United States)

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicates that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.

  18. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable with quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.

  19. Complementing SRCNN by Transformed Self-Exemplars

    DEFF Research Database (Denmark)

    Aakerberg, Andreas; Rasmussen, Christoffer Bøgelund; Nasrollahi, Kamal

    2017-01-01

    Super-resolution algorithms are used to improve the quality and resolution of low-resolution images. These algorithms can be divided into two classes of hallucination- and reconstruction-based ones. The improvement factors of these algorithms are limited; however, previous research [10], [9] has shown..., namely the one found in Single Image Super-Resolution from Transformed Self-Exemplars [7] and the Super-Resolution Convolutional Neural Network from [4]. The combination of these two, through an alpha-blending, has resulted in a system that outperforms state-of-the-art super-resolution algorithms on public...

  20. Detection of Blood Vessels in Color Fundus Images using a Local Radon Transform

    Directory of Open Access Journals (Sweden)

    Reza Pourreza

    2010-09-01

    Full Text Available Introduction: This paper addresses a method for automatic detection of blood vessels in color fundus images which utilizes two main tools: image partitioning and a local Radon transform. Material and Methods: The input images are first divided into overlapping windows and then the Radon transform is applied to each. The maximum of the Radon transform in each window corresponds to a probable sub-vessel. To verify the detected sub-vessel, the maximum is compared with a predefined threshold. The verified sub-vessels are reconstructed using the Radon transform information. All detected and reconstructed sub-vessels are finally combined to make the final vessel tree. Results: The algorithm’s performance was evaluated numerically by applying it to 40 images of the DRIVE database, a standard retinal image database. The vessels were extracted manually by two physicians. This database was used to test and compare the available and proposed algorithms for vessel detection in color fundus images. By comparing the output of the algorithm with the manual results, the two parameters TPR and FPR were calculated for each image, and the averages of the TPRs and FPRs were used to plot the ROC curve. Discussion and Conclusion: Comparison of the ROC curve of this algorithm with those of other algorithms demonstrates the high accuracy achieved. Besides the high accuracy, the Radon transform, which is integral-based, makes the algorithm robust against noise.
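
    A hedged sketch of the window-wise idea using scikit-image's `skimage.transform.radon`: each window is projected along a set of angles, and the maximum of the local Radon transform indicates whether a line-like sub-vessel is present and along which direction. The window size, overlap and threshold below are assumptions, not the values from the paper.

```python
import numpy as np
from skimage.transform import radon

def detect_subvessels(image, win=32, step=16, threshold=5.0):
    """Scan a (preprocessed, vessel-enhanced) gray image with overlapping
    windows and report windows whose local Radon maximum exceeds a threshold."""
    angles = np.linspace(0.0, 180.0, 36, endpoint=False)
    hits = []
    for r in range(0, image.shape[0] - win + 1, step):
        for c in range(0, image.shape[1] - win + 1, step):
            patch = image[r:r + win, c:c + win].astype(float)
            patch = patch - patch.mean()              # remove background level
            sinogram = radon(patch, theta=angles, circle=False)
            peak = sinogram.max()
            if peak > threshold:
                rho_idx, ang_idx = np.unravel_index(sinogram.argmax(),
                                                    sinogram.shape)
                hits.append((r, c, angles[ang_idx], peak))
    return hits   # window origin, dominant orientation (deg), Radon peak
```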

  1. Transform Decoding of Reed-Solomon Codes. Volume I. Algorithm and Signal Processing Structure

    Science.gov (United States)

    1982-11-01

    systematic channel co.’e. 1. lake the inverse transform of the r- ceived se, - nee. 2. Isolate the error syndrome from the inverse transform and use... inverse transform is identic l with interpolation of the polynomial a(z) from its n values. In order to generate a Reed-Solomon (n,k) cooce, we let the set...in accordance with the transform of equation (4). If we were to apply the inverse transform of equa- tion (6) to the coefficient sequence of A(z), we

  2. Entanglement-continuous unitary transformations

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Serkan; Orus, Roman [Institute of Physics, Johannes Gutenberg University, 55099 Mainz (Germany)

    2016-07-01

    In this talk we present a new algorithm for quantum many-body systems using continuous unitary transformations (CUT) and tensor networks (TNs). With TNs we are able to approximate the solution to the flow equations that lie at the heart of continuous unitary transformations. We call this method Entanglement-Continuous Unitary Transformations (eCUT). It allows us to compute expectation values of local observables as well as tensor network representations of ground states and low-energy excited states. An implementation of the method is shown for 1d systems using matrix product operators. We show preliminary results for the 1d transverse-field Ising model to demonstrate the feasibility of the method.

  3. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
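
    As a hedged sketch of the wavelet-based fusion idea (not the exact LDRD implementation), the code below uses PyWavelets to decompose two co-registered images with a 2D DWT, averages the approximation coefficients, keeps the detail coefficients with the larger magnitude, and inverts the transform. The wavelet choice and fusion rules are common defaults assumed here for illustration.

```python
import numpy as np
import pywt

def fuse_dwt(img_a, img_b, wavelet="db2", level=2):
    """Fuse two co-registered gray images with a max-abs detail rule."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)

    fused = [0.5 * (ca[0] + cb[0])]               # average the approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```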

  4. Image Registration Using Redundant Wavelet Transforms

    National Research Council Canada - National Science Library

    Brown, Richard

    2001-01-01

    .... In our research, we present a fundamentally new wavelet-based registration algorithm utilizing redundant transforms and a masking process to suppress the adverse effects of noise and improve processing efficiency...

  5. Exact fan-beam and 4π-acquisition cone-beam SPECT algorithms with uniform attenuation correction

    International Nuclear Information System (INIS)

    Tang Qiulin; Zeng, Gengsheng L.; Wu Jiansheng; Gullberg, Grant T.

    2005-01-01

    This paper presents analytical fan-beam and cone-beam reconstruction algorithms that compensate for uniform attenuation in single photon emission computed tomography. First, a fan-beam algorithm is developed by obtaining a relationship between the two-dimensional (2D) Fourier transform of parallel-beam projections and fan-beam projections. Using this relationship, 2D Fourier transforms of equivalent parallel-beam projection data are obtained from the fan-beam projection data. Then a quasioptimal analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan, is used to reconstruct the image. A cone-beam algorithm is developed by extending the fan-beam algorithm to 4π solid angle geometry. The cone-beam algorithm is also an exact algorithm

  6. Multi-resolution inversion algorithm for the attenuated radon transform

    KAUST Repository

    Barbano, Paolo Emilio; Fokas, Athanasios S.

    2011-01-01

    We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed

  7. A robust color image watermarking algorithm against rotation attacks

    Science.gov (United States)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on quaternion wavelet transform (QWT) and discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by QWT, and then the coefficients of four low-frequency subbands are transformed by DCT. An original binary watermark scrambled by Arnold map and iterated sine chaotic system is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extracting. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
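
    The watermark scrambling mentioned above is commonly done with the Arnold cat map; the sketch below shows one standard formulation for an N×N watermark, (x, y) -> ((x + y) mod N, (x + 2y) mod N), iterated a secret number of times. This is a generic illustration, not the exact scrambling used in the paper.

```python
import numpy as np

def arnold_scramble(w, iterations):
    """Scramble a square (N x N) binary watermark with the Arnold cat map."""
    n = w.shape[0]
    assert w.shape[0] == w.shape[1], "Arnold map needs a square image"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = w.copy()
    for _ in range(iterations):
        nx, ny = (x + y) % n, (x + 2 * y) % n     # Arnold cat map, a bijection mod n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out

# The map is periodic, so descrambling can apply the remaining iterations of the
# period, or the inverse map (x, y) -> ((2x - y) mod N, (-x + y) mod N).
```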

  8. Unconventional Algorithms: Complementarity of Axiomatics and Construction

    Directory of Open Access Journals (Sweden)

    Gordana Dodig Crnkovic

    2012-10-01

    Full Text Available In this paper, we analyze axiomatic and constructive issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms and unconventional computations change the algorithmic universe, making it open and allowing increased flexibility and expressive power that augment creativity. At the same time, the greater power of new types of algorithms also results in the greater complexity of the algorithmic universe, transforming it into the algorithmic multiverse and demanding new tools for its study. That is why we analyze new powerful tools brought forth by local mathematics, local logics, logical varieties and the axiomatic theory of algorithms, automata and computation. We demonstrate how these new tools allow efficient navigation in the algorithmic multiverse. Further work includes study of natural computation by unconventional algorithms and constructive approaches.

  9. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log–polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it possible to reduce the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some effort is made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)

  10. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations.

    Directory of Open Access Journals (Sweden)

    Miha Amon

    Full Text Available Differentiation between ischaemic and non-ischaemic transient ST segment events in long-term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measurement is not a sufficiently precise technique, due to the single point of measurement and the severe noise which is often present. We developed a robust, noise-resistant, orthogonal-transformation based delineation method, which allows tracing the shape of transient ST segment morphology changes over the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre Polynomials based Transformation (LPT) of the ST segment. Its basis functions have shapes similar to typical transient changes of the ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time domain morphology changes through the LPT feature-vector space. We also generated new Karhunen–Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB, which is freely available on the PhysioNet website, and were contributed to the LTST DB.
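
    A minimal sketch of the Legendre-polynomial feature idea, assuming a plain least-squares fit with NumPy's `numpy.polynomial.legendre` module rather than the exact LPT definition from the paper: an ST-segment sample vector is mapped onto the interval [-1, 1] and projected onto low-order Legendre polynomials, whose coefficients then act as level/slope/scooping-style shape features.

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_features(st_segment, order=3):
    """Fit the first `order`+1 Legendre polynomials to an ST-segment vector
    and return the coefficients as a compact morphology descriptor."""
    y = np.asarray(st_segment, dtype=float)
    x = np.linspace(-1.0, 1.0, len(y))          # map the samples onto [-1, 1]
    coeffs = L.legfit(x, y, deg=order)          # c0 ~ level, c1 ~ slope, c2 ~ scooping
    return coeffs

# Reconstruction from the features, e.g. for visual inspection:
# approx = L.legval(np.linspace(-1, 1, len(st_segment)), coeffs)
```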

  11. Forest FIRE and FIRE wood : tools for tree automata and tree algorithms

    NARCIS (Netherlands)

    Cleophas, L.G.W.A.; Piskorski, J.; Watson, B.W.; Yli-Jyrä, A.

    2009-01-01

    Pattern matching, acceptance, and parsing algorithms on node-labeled, ordered, ranked trees ('tree algorithms') are important for applications such as instruction selection and tree transformation/term rewriting. Many such algorithms have been developed. They often are based on results from such

  12. Hough transform search for continuous gravitational waves

    International Nuclear Information System (INIS)

    Krishnan, Badri; Papa, Maria Alessandra; Sintes, Alicia M.; Schutz, Bernard F.; Frasca, Sergio; Palomba, Cristiano

    2004-01-01

    This paper describes an incoherent method to search for continuous gravitational waves based on the Hough transform, a well-known technique used for detecting patterns in digital images. We apply the Hough transform to detect patterns in the time-frequency plane of the data produced by an earth-based gravitational wave detector. Two different flavors of searches will be considered, depending on the type of input to the Hough transform: either Fourier transforms of the detector data or the output of a coherent matched-filtering type search. We present the technical details for implementing the Hough transform algorithm for both kinds of searches, their statistical properties, and their sensitivities

  13. Transformation between surface spherical harmonic expansion of arbitrary high degree and order and double Fourier series on sphere

    Science.gov (United States)

    Fukushima, Toshio

    2018-02-01

    In order to accelerate the spherical harmonic synthesis and/or analysis of an arbitrary function on the unit sphere, we developed a pair of procedures to transform between a truncated spherical harmonic expansion and the corresponding two-dimensional Fourier series. First, we obtained an analytic expression of the sine/cosine series coefficients of the 4π fully normalized associated Legendre function in terms of the rectangle values of the Wigner d function. Then, we elaborated the existing method to transform the coefficients of the surface spherical harmonic expansion to those of the double Fourier series so as to be applicable to arbitrarily high degree and order. Next, we created a new method to inversely transform a given double Fourier series to the corresponding surface spherical harmonic expansion. The key of the new method is a couple of new recurrence formulas to compute the inverse transformation coefficients: a decreasing-order, fixed-degree, and fixed-wavenumber three-term formula for general terms, and an increasing-degree-and-order and fixed-wavenumber two-term formula for diagonal terms. Meanwhile, the two seed values are analytically prepared. Both the forward and inverse transformation procedures are confirmed to be sufficiently accurate and applicable to an extremely high degree/order/wavenumber such as 2^30 ≈ 10^9. The developed procedures will be useful not only in the synthesis and analysis of spherical harmonic expansions of arbitrarily high degree and order, but also in the evaluation of the derivatives and integrals of the spherical harmonic expansion.

  14. Behaviors study of image registration algorithms in image guided radiation therapy

    International Nuclear Information System (INIS)

    Zou Lian; Hou Qing

    2008-01-01

    Objective: To study the behaviors of image registration algorithms and analyze the elements which influence the performance of image registration. Methods: Pre-known corresponding coordinates were appointed for the reference image and the moving image, and then the influence of region of interest (ROI) selection, transformation function initial parameters and coupled parameter spaces on the registration results was studied with a software platform developed in-house. Results: Region of interest selection had a manifest influence on registration performance. An improperly chosen ROI resulted in a bad registration. Selection of the transformation function initial parameters based on pre-known information could improve the accuracy of image registration. Coupled parameter spaces would enhance the dependence of the image registration algorithm on ROI selection. Conclusions: It is necessary for clinical IGRT to obtain a ROI selection strategy (depending on the specific commercial software) correlated to tumor sites. Three suggestions for image registration technique developers are: automatic selection of the initial parameters of the transformation function based on pre-known information, developing specific image registration algorithms for specific image features, and assembling real-time image registration algorithms according to tumor sites selected by the software user. (authors)

  15. WAVELET TRANSFORM AND LIP MODEL

    Directory of Open Access Journals (Sweden)

    Guy Courbebaisse

    2011-05-01

    Full Text Available The Fourier transform is well suited to the study of stationary functions. Yet, it is superseded by the Wavelet transform for the powerful characterization of function features such as singularities. On the other hand, the LIP (Logarithmic Image Processing) model is a mathematical framework developed by Jourlin and Pinoli, dedicated to the representation and processing of gray tone images, called hereafter logarithmic images. This mathematically well defined model, comprising a Fourier Transform "of its own", provides an effective tool for the representation of images obtained by transmitted light, such as microscope images. This paper presents a Wavelet transform within the LIP framework, with preservation of the classical Wavelet Transform properties. We show that the fast computation algorithm due to Mallat can be easily used. An application is given for the detection of crests.

  16. Fourier-transforming with quantum annealers

    Directory of Open Access Journals (Sweden)

    Itay eHen

    2014-07-01

    Full Text Available We introduce a set of quantum adiabatic evolutions that we argue may be used as 'building blocks', or subroutines, in the construction of an adiabatic algorithm that executes the Quantum Fourier Transform (QFT) with the same complexity and resources as its gate-model counterpart. One implication of the above construction is the theoretical feasibility of implementing Shor's algorithm for integer factorization in an optimal manner, and any other algorithm that makes use of the QFT, on quantum annealing devices. We discuss the possible advantages and limitations of the proposed approach, as well as its relation to traditional adiabatic quantum computation.

  17. Firefly algorithm based solution to minimize the real power loss in a power system

    Directory of Open Access Journals (Sweden)

    P. Balachennaiah

    2018-03-01

    Full Text Available This paper proposes a method to minimize the real power loss (RPL) of a power system transmission network using a new meta-heuristic algorithm known as the firefly algorithm (FA) by optimizing control variables such as transformer taps, UPFC location and UPFC series injected voltage magnitude and phase angle. A software program is developed in the MATLAB environment for the FA to minimize the RPL by optimizing (i) only the transformer tap values, (ii) only the UPFC location and its variables with optimized tap values and (iii) the UPFC location and its variables along with the transformer tap setting values simultaneously. The interior point successive linear programming (IPSLP) technique and a real coded genetic algorithm (RCGA) are considered here to compare the results and to show the efficiency and superiority of the proposed FA for the optimization of the RPL. Also in this paper, the bacteria foraging algorithm (BFA) is adopted to validate the results of the proposed algorithm.

  18. A practical Hadamard transform spectrometer for astronomical application

    Science.gov (United States)

    Tai, M. H.

    1977-01-01

    The mathematical properties of Hadamard matrices and their application to spectroscopy are discussed. A comparison is made between Fourier and Hadamard transform encoding in spectrometry. The spectrometer is described and its laboratory performance evaluated. The algorithm and programming of inverse transform are given. A minicomputer is used to recover the spectrum.

  19. WAVELET-BASED ALGORITHM FOR DETECTION OF BEARING FAULTS IN A GAS TURBINE ENGINE

    Directory of Open Access Journals (Sweden)

    Sergiy Enchev

    2014-07-01

    Full Text Available Presented is a gas turbine engine bearing diagnostic system that integrates information from various advanced vibration analysis techniques to achieve robust bearing health state awareness. This paper presents a computational algorithm for identifying power frequency variations and integer harmonics by using a wavelet-based transform. The continuous wavelet transform with the complex Morlet wavelet is adopted to detect the harmonics present in a power signal. An algorithm based on the discrete stationary wavelet transform is adopted to denoise the wavelet ridges.
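
    A hedged sketch of the harmonic-detection step using PyWavelets' continuous wavelet transform with a complex Morlet wavelet (the `cmor` family in `pywt.cwt`); the bandwidth/centre-frequency parameters and the scale grid below are assumptions chosen for illustration, not the paper's settings.

```python
import numpy as np
import pywt

def cwt_harmonics(signal, fs, freqs=None, wavelet="cmor1.5-1.0"):
    """Complex-Morlet CWT of a power/vibration signal.

    Returns the magnitude scalogram and the frequency (Hz) of each row.
    """
    if freqs is None:
        freqs = np.arange(10.0, 500.0, 10.0)      # candidate harmonic frequencies
    fc = pywt.central_frequency(wavelet)          # wavelet centre frequency
    scales = fc * fs / freqs                      # scale <-> frequency mapping
    coeffs, cwt_freqs = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    return np.abs(coeffs), cwt_freqs

# Usage sketch: 50 Hz tone plus its 3rd harmonic.
fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
mag, f = cwt_harmonics(x, fs)
print("strongest row at %.0f Hz" % f[mag.mean(axis=1).argmax()])
```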

  20. Architecture for time or transform domain decoding of reed-solomon codes

    Science.gov (United States)

    Shao, Howard M. (Inventor); Truong, Trieu-Kie (Inventor); Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor)

    1989-01-01

    Two pipeline (255,233) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps, prior to adding the received RS coded message to produce a decoded output message.

  1. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  2. Complex nonlinear Fourier transform and its inverse

    International Nuclear Information System (INIS)

    Saksida, Pavle

    2015-01-01

    We study the nonlinear Fourier transform associated to the integrable systems of AKNS-ZS type. Two versions of this transform appear in connection with the AKNS-ZS systems. These two versions can be considered as two real forms of a single complex transform F_c. We construct an explicit algorithm for the calculation of the inverse transform (F_c)^(-1)(h) for an arbitrary argument h. The result is given in the form of a convergent series of functions in the domain space and the terms of this series can be computed explicitly by means of finitely many integrations. (paper)

  3. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient method is presented to obtain the fused image. The simulations demonstrate that using the combined sparsifying transforms achieves better results, in terms of both the subjective visual effect and the objective evaluation indexes, than using only a single sparsifying transform for compressive image fusion.

  4. MANAGER PRINCIPLES AS BASIS OF MANAGEMENT STYLE TRANSFORMATION

    OpenAIRE

    R. A. Kopytov

    2011-01-01

    The paper considers an approach based on non-conventional mechanisms of management style formation. The preset level of sustainable management is maintained by a self-organized environment created in the process of transforming the management style into efficient management principles. Their efficiency is checked within an adaptive algorithm. The algorithm is developed on the basis of a combination of evaluative tools and a base of operational proofs. The operating algorithm capability is te...

  5. Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer

    Science.gov (United States)

    Sreewirote, Bancha; Ngaopitakkul, Atthapol

    2018-03-01

    The protection system for a transformer plays a significant role in avoiding severe damage to equipment when disturbances occur and in ensuring overall system reliability. One of the methodologies widely used in protection schemes and algorithms is the discrete wavelet transform. However, the characteristics of the coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore proposes a study and analysis of the wavelet coefficient characteristics when a fault occurs in a transformer, in both the high- and low-frequency components of the discrete wavelet transform. The effect of internal and external faults on the wavelet coefficients of both the faulted and normal phases has been taken into consideration. The fault signals have been simulated using a transmission line connected to a laboratory-scale transformer experimental setup modelled after an actual system. The results, in terms of wavelet coefficients, show a clear differentiation between the wavelet characteristics in the high- and low-frequency components, which can be used to further design and improve detection and classification algorithms based on the discrete wavelet transform methodology in the future.
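
    As a rough illustration only (not the authors' detection scheme), the sketch below uses PyWavelets' multilevel DWT to split a current signal into high-frequency detail bands and a low-frequency approximation, and compares the detail energies before and after a simulated fault instant; the wavelet, level count, toy signal and energy criterion are assumptions.

```python
import numpy as np
import pywt

def detail_energy(signal, wavelet="db4", level=4):
    """Multilevel DWT; return the energy of each detail band (last entry = d1, highest frequency)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [aL, dL, ..., d1]
    return [float(np.sum(d ** 2)) for d in coeffs[1:]]

# Toy example: 50 Hz waveform with a high-frequency burst after t = 0.5 s.
fs = 10000.0
t = np.arange(0, 1.0, 1.0 / fs)
i = np.sin(2 * np.pi * 50 * t)
i[t > 0.5] += 0.2 * np.random.randn(int((t > 0.5).sum()))  # fault-like transient

pre, post = i[t <= 0.5], i[t > 0.5]
print("detail-band energies before fault:", detail_energy(pre))
print("detail-band energies after  fault:", detail_energy(post))
```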

  6. Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform

    Science.gov (United States)

    Zheng, Yang; Chen, Xihao; Zhu, Rui

    2017-07-01

    Frequency hopping (FH) signals are widely adopted in military communications as a kind of low probability of interception signal. Therefore, it is very important to research FH signal detection algorithms. The existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time, due to the influence of the window function. In order to solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) was proposed. The proposed algorithm removes the noise of the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm takes into account both the time resolution and the frequency resolution. Correspondingly, the accuracy of FH signal detection can be improved.

  7. Automated System for Teaching Computational Complexity of Algorithms Course

    Directory of Open Access Journals (Sweden)

    Vadim S. Roublev

    2017-01-01

    Full Text Available This article describes problems of designing an automated teaching system for the “Computational complexity of algorithms” course. This system should provide students with means to familiarize themselves with the complex mathematical apparatus and improve their mathematical thinking in the respective area. The article introduces the technique of an algorithm symbol scroll table that allows estimating lower and upper bounds of computational complexity. Further, we introduce a set of theorems that facilitate the analysis in cases when integer rounding of algorithm parameters is involved and when analyzing the complexity of a sum. At the end, the article introduces a normal system of symbol transformations that both allows one to perform any symbol transformations and simplifies the automated validation of such transformations. The article is published in the authors’ wording.

  8. Exponential x-ray transform

    International Nuclear Information System (INIS)

    Hazou, I.A.

    1986-01-01

    In emission computed tomography one wants to determine the location and intensity of radiation emitted by sources in the presence of an attenuating medium. If the attenuation is known everywhere and equals a constant α in a convex neighborhood of the support of f, then the problem reduces to that of inverting the exponential x-ray transform P_α. The exponential x-ray transform P_μ, with the attenuation μ variable, is of interest mathematically. For the exponential x-ray transform in two dimensions, it is shown that for a large class of approximate δ functions E, convolution kernels K exist for use in the convolution backprojection algorithm. For the case where the attenuation is constant, exact formulas are derived for calculating the convolution kernels from radial point spread functions. From these an exact inversion formula for the constantly attenuated transform is obtained

  9. Vector Radix 2 × 2 Sliding Fast Fourier Transform

    Directory of Open Access Journals (Sweden)

    Keun-Yung Byun

    2016-01-01

    Full Text Available The two-dimensional (2D) discrete Fourier transform (DFT) in the sliding window scenario has been successfully used for numerous applications requiring consecutive spectrum analysis of input signals. However, the results of conventional sliding DFT algorithms are potentially unstable because of the accumulated numerical errors caused by the recursive strategy. In this letter, a stable 2D sliding fast Fourier transform (FFT) algorithm based on the vector radix (VR) 2 × 2 FFT is presented. In the VR-2 × 2 FFT algorithm, each 2D DFT bin is hierarchically decomposed into four sub-DFT bins until the size of the sub-DFT bins is reduced to 2 × 2; the output DFT bins are calculated using a linear combination of the sub-DFT bins. Because the sub-DFT bins for the overlapped input signals between the previous and current window are the same, the proposed algorithm reduces the computational complexity of the VR-2 × 2 FFT algorithm by reusing previously calculated sub-DFT bins in the sliding window scenario. Moreover, because the resultant DFT bins are identical to those of the VR-2 × 2 FFT algorithm, numerical errors do not arise; therefore, unconditional stability is guaranteed. Theoretical analysis shows that the proposed algorithm has the lowest computational requirements among the existing stable sliding DFT algorithms.
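
    For context, the conventional recursive sliding DFT whose error accumulation the paper addresses can be written, in 1D, as X_k(n+1) = e^{j2πk/N} (X_k(n) + x(n+N) - x(n)). The sketch below implements that classical recurrence and compares it against a direct FFT of the final window; it is shown only to illustrate the recursive strategy and is not the proposed VR-2×2 algorithm.

```python
import numpy as np

def sliding_dft(x, N):
    """Recursive sliding DFT over all length-N windows of x (classical 1D form)."""
    twiddle = np.exp(2j * np.pi * np.arange(N) / N)
    X = np.fft.fft(x[:N])                     # DFT of the first window
    windows = [X.copy()]
    for n in range(len(x) - N):
        X = twiddle * (X + x[n + N] - x[n])   # slide the window by one sample
        windows.append(X.copy())
    return np.array(windows)

x = np.random.randn(64)
S = sliding_dft(x, N=16)
# Small residual against a direct FFT, but it grows with the number of slides:
print(np.max(np.abs(S[-1] - np.fft.fft(x[-16:]))))
```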

  10. Aerobic activated sludge transformation of vincristine and identification of the transformation products.

    Science.gov (United States)

    Kosjek, Tina; Negreira, Noelia; Heath, Ester; López de Alda, Miren; Barceló, Damià

    2018-01-01

    This study aims to identify (bio)transformation products of vincristine, a plant alkaloid chemotherapy drug. A batch biotransformation experiment was set-up using activated sludge at two concentration levels with and without the addition of a carbon source. Sample analysis was performed on an ultra-high performance liquid chromatograph coupled to a high-resolution hybrid quadrupole-Orbitrap tandem mass spectrometer. To identify molecular ions of vincristine transformation products and to propose molecular and chemical structures, we performed data-dependent acquisition experiments combining full-scan mass spectrometry data with product ion spectra. In addition, the use of non-commercial detection and prediction algorithms such as MZmine 2 and EAWAG-BBD Pathway Prediction System, was proven to be proficient for screening for transformation products in complex wastewater matrix total ion chromatograms. In this study eleven vincristine transformation products were detected, nine of which were tentatively identified. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.

  12. Pipeline Analyzer using the Fractional Fourier Transform for Engine Control and Satellites Data

    Directory of Open Access Journals (Sweden)

    Darian M. Onchiș

    2011-09-01

    Full Text Available The aim of this paper is to present an algorithm for computing the fractional Fourier transform integrated into the pipeline of processing multi-variate and distributed data recorded by the engine control unit (ECU of a car and its satellites. The role of this transform is vital in establishing a time-variant filter and therefore it must be computed in a fast way. But for large scale time series, the application of the discrete fractional Fourier transform involves the computations of a large number of Hermite polynomials of increasingly order. The parallel algorithm presented will optimally compute the discrete Fourier-type transform for any given angle.

  13. Baecklund transformations for discrete Painleve equations: Discrete PII-PV

    International Nuclear Information System (INIS)

    Sakka, A.; Mugan, U.

    2006-01-01

    Transformation properties of discrete Painleve equations are investigated by using an algorithmic method. This method yields explicit transformations which relate the solutions of the discrete Painleve equations, discrete PII-PV, with different values of the parameters. The particular solutions which are expressible in terms of the discrete analogue of the classical special functions of the discrete Painleve equations can also be obtained from these transformations

  14. Normalization and Implementation of Three Gravitational Acceleration Models

    Science.gov (United States)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.

    2016-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.
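
    A hedged sketch of the conventional (non-singularity-free) building block that the paper normalizes: 4π fully normalized associated Legendre functions, obtained here from SciPy's `scipy.special.lpmn` with the Condon-Shortley phase removed and the standard geodesy normalization factor sqrt((2 - δ_{m0})(2n + 1)(n - m)!/(n + m)!). This is a generic illustration of the normalization, not the Pines, Lear, or Gottlieb algorithms themselves; the helper name and example values are assumptions.

```python
import numpy as np
from scipy.special import lpmn, gammaln

def fully_normalized_alf(n_max, x):
    """4*pi fully normalized associated Legendre functions Pbar_nm(x).

    Returns an (n_max+1, n_max+1) array indexed as [n, m], for scalar x in [-1, 1].
    """
    P, _ = lpmn(n_max, n_max, x)   # P[m, n]; includes the (-1)^m Condon-Shortley phase
    Pbar = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        for m in range(n + 1):
            # log of (2 - delta_m0)(2n + 1)(n - m)! / (n + m)!  for numerical safety
            log_norm = (np.log(2.0 - (m == 0)) + np.log(2.0 * n + 1.0)
                        + gammaln(n - m + 1) - gammaln(n + m + 1))
            Pbar[n, m] = ((-1.0) ** m) * P[m, n] * np.exp(0.5 * log_norm)
    return Pbar

# e.g. Pbar_20 evaluated at the cosine of a 45-degree colatitude:
print(fully_normalized_alf(4, np.cos(np.radians(45.0)))[2, 0])
```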

  15. Cancer cell detection and classification using transformation invariant template learning methods

    International Nuclear Information System (INIS)

    Talware, Rajendra; Abhyankar, Aditya

    2011-01-01

    In traditional cancer cell detection, pathologists examine biopsies to make diagnostic assessments, largely based on cell morphology and tissue distribution. The process of image acquisition is highly subjective, and the pattern undergoes unknown or random transformations during data acquisition (e.g. variation in illumination, orientation, translation and perspective), resulting in a high degree of variability. Transformed Component Analysis (TCA) incorporates a discrete, hidden variable that accounts for transformations and uses the Expectation Maximization (EM) algorithm to jointly extract components and normalize for transformations. Further, the TEMPLAR framework developed takes advantage of hierarchical pattern models and adds probabilistic modeling for local transformations. Pattern classification is based on the Expectation Maximization algorithm and Generalized Likelihood Ratio Tests (GLRT). The performance of TEMPLAR is certainly improved by defining the area of interest on the slide a priori. Performance can be further enhanced by making the kernel function adaptive during learning. (author)

  16. Quantum image encryption based on generalized affine transform and logistic map

    Science.gov (United States)

    Liang, Hao-Ran; Tao, Xiang-Yang; Zhou, Nan-Run

    2016-07-01

    Quantum circuits of the generalized affine transform are devised based on the novel enhanced quantum representation of digital images. A novel quantum image encryption algorithm combining the generalized affine transform with logistic map is suggested. The gray-level information of the quantum image is encrypted by the XOR operation with a key generator controlled by the logistic map, while the position information of the quantum image is encoded by the generalized affine transform. The encryption keys include the independent control parameters used in the generalized affine transform and the logistic map. Thus, the key space is large enough to frustrate the possible brute-force attack. Numerical simulations and analyses indicate that the proposed algorithm is realizable, robust and has a better performance than its classical counterpart in terms of computational complexity.
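
    A classical (non-quantum) analogue of the two encryption stages can clarify the structure: the gray levels are XORed with a logistic-map keystream, and the pixel positions are permuted by a modular affine map. The map parameters, the seed, and the square-image assumption below are illustrative, not taken from the paper.

```python
import numpy as np

def logistic_keystream(n, x0=0.3456, r=3.99):
    """Byte keystream from the logistic map x -> r*x*(1-x); seed and r are illustrative."""
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256
    return ks

def affine_scramble(img, a=1, b=1, c=1, d=2):
    """Permute pixel positions with (x', y') = (a*x + b*y, c*x + d*y) mod N on a
    square N x N image; with det = a*d - b*c = 1 the map is a bijection mod any N."""
    N = img.shape[0]
    xs, ys = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    out = np.zeros_like(img)
    out[(a * xs + b * ys) % N, (c * xs + d * ys) % N] = img
    return out

img = (np.arange(64, dtype=np.uint8).reshape(8, 8)) * 3     # toy 8x8 gray image
cipher = affine_scramble(img) ^ logistic_keystream(img.size).reshape(img.shape)
```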

  17. A Fourier reconstruction algorithm with constant attenuation compensation using 180° acquisition data for SPECT

    International Nuclear Information System (INIS)

    Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T

    2007-01-01

    In this paper, we develop an approximate analytical reconstruction algorithm that compensates for uniform attenuation in 2D parallel-beam SPECT with a 180° acquisition. This new algorithm is in the form of a direct Fourier reconstruction. The complex variable central slice theorem is used to derive this algorithm. The image is reconstructed with the following steps: first, the attenuated projection data acquired over 180° are extended to 360° and the value for the uniform attenuator is changed to a negative value. The Fourier transform (FT) of the image in polar coordinates is obtained from the Fourier transform of an analytic function interpolated from an extension of the projection data according to the complex central slice theorem. Finally, the image is obtained by performing a 2D inverse Fourier transform. Computer simulations and comparison studies with a 360° full-scan algorithm are provided.

  18. Algorithm Indicating Moment of P-Wave Arrival Based on Second-Moment Characteristic

    Directory of Open Access Journals (Sweden)

    Jakub Sokolowski

    2016-01-01

    Full Text Available The moment of P-wave arrival can provide much information about the nature of a seismic event. Without adequate knowledge of the onset moment, many properties of the event related to location, polarization of the P-wave, and so forth are impossible to recover. In order to save the time required to pick the P-wave arrival moment manually, one can benefit from automatic picking algorithms. In this paper two algorithms based on a method for finding a regime switch point are applied to seismic event data in order to find the P-wave arrival time. The algorithms operate on signals transformed via a basic transform rather than on raw recordings. They involve partitioning the transformed signal into two separate series and fitting a logarithmic function to the first subset (which corresponds to pure noise and is therefore considered stationary), an exponential or power function to the second subset (which corresponds to the nonstationary seismic event), and finding the point at which these functions best fit the statistic in terms of the sum of squared errors. The effectiveness of the algorithms is tested on seismic data acquired from the O/ZG “Rudna” underground copper ore mine, with moments of P-wave arrival initially picked by the broadly known STA/LTA algorithm and then corrected by seismic station specialists. The results of the proposed algorithms are compared to those obtained using STA/LTA.
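
    A compact sketch of the regime-switch search described above: for each candidate split point, a logarithmic trend is fitted to the first segment and a power-law trend to the second (both by least squares in transformed coordinates), and the split with the smallest total sum of squared errors is returned. It assumes a positive-valued characteristic statistic and uses synthetic data; it is not the authors' exact estimator.

```python
import numpy as np

def split_point(stat):
    """Find the sample index that best separates a positive-valued second-moment
    statistic into a noise part fitted by a*log(t)+b and an event part fitted by
    c*t**d, minimizing the total sum of squared errors (a simplified sketch)."""
    n = len(stat)
    t = np.arange(1, n + 1, dtype=float)
    best_k, best_sse = None, np.inf
    for k in range(10, n - 10):                 # keep a few samples in each segment
        a, b = np.polyfit(np.log(t[:k]), stat[:k], 1)
        sse1 = np.sum((a * np.log(t[:k]) + b - stat[:k]) ** 2)
        d, logc = np.polyfit(np.log(t[k:]), np.log(stat[k:]), 1)
        sse2 = np.sum((np.exp(logc) * t[k:] ** d - stat[k:]) ** 2)
        if sse1 + sse2 < best_sse:
            best_k, best_sse = k, sse1 + sse2
    return best_k

rng = np.random.default_rng(0)
stat = np.concatenate([1 + 0.05 * rng.random(200), np.linspace(1, 9, 100) ** 1.5])
print(split_point(stat))    # should land near sample 200
```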

  19. An ultrafast line-by-line algorithm for calculating spectral transmittance and radiance

    International Nuclear Information System (INIS)

    Tan, X.

    2013-01-01

    An ultrafast line-by-line algorithm for calculating the spectral transmittance and radiance of gases is presented. The algorithm is based on fast convolution of the Voigt line profile using the Fourier transform and a binning technique. The algorithm breaks a radiative transfer calculation into two steps: a one-time pre-computation step in which a set of pressure-independent coefficients is computed using the spectral line information; and a normal calculation step in which the Fourier transform coefficients of the optical depth are calculated using the line-of-sight information and the coefficients pre-computed in the first step, the optical depth is then calculated using an inverse Fourier transform, and the spectral transmittance and radiance are calculated. The algorithm is faster than line-by-line algorithms that do not employ special speedup techniques by a factor of 10^3–10^6. A case study of the 2.7 μm band of H2O vapor is presented. -- Highlights: •An ultrafast line-by-line model based on FFT and a binning technique is presented. •Computationally expensive calculations are factored out into a pre-computation step. •It is 10^3–10^8 times faster than LBL algorithms that do not employ speedup techniques. •Good agreement with experimental data for the 2.7 μm band of H2O
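
    The speedup rests on replacing per-line Voigt evaluations with one FFT-based convolution of a binned "stick" spectrum against a line-shape kernel. The sketch below illustrates only that convolution step with SciPy's voigt_profile and fftconvolve; the grid, line positions, strengths, and widths are illustrative, and the paper's pressure-independent pre-computation is not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import voigt_profile

# Wavenumber grid and a stick spectrum: delta-like line strengths binned onto the grid
dnu = 0.01                                     # cm^-1, illustrative resolution
nu = np.arange(0.0, 50.0, dnu)
sticks = np.zeros_like(nu)
for center, strength in [(12.3, 1.0), (25.7, 0.6), (31.1, 0.3)]:
    sticks[int(round(center / dnu))] += strength

# Voigt kernel (Gaussian sigma and Lorentzian gamma are illustrative values)
x = np.arange(-5.0, 5.0, dnu)
kernel = voigt_profile(x, 0.05, 0.08)

# One FFT-based convolution spreads every line at once
optical_depth = fftconvolve(sticks, kernel, mode="same") * dnu
transmittance = np.exp(-optical_depth)
```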

  20. Genetic algorithms: Theory and applications in the safety domain

    International Nuclear Information System (INIS)

    Marseguerra, M.; Zio, E.

    2001-01-01

    This work illustrates the fundamentals underlying optimization by genetic algorithms. All the steps of the procedure are sketched in detail for both the traditional breeding algorithm and more sophisticated breeding procedures. The necessity of affine transforming the fitness function, the object of the optimization, is discussed in detail, together with the transformation itself. Procedures for the inducement of species and niches are also presented. The theoretical aspects of the work are corroborated by a demonstration of the potential of genetic algorithm optimization procedures on three different case studies. The first case study deals with the design of the pressure stages of a natural gas pipeline system; the second treats a reliability allocation problem in system configuration design; the last regards the selection of maintenance and repair strategies for the logistic management of a risky plant. (author)
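
    The affine transformation of the fitness function mentioned above is commonly realized as linear scaling f' = a f + b, chosen to preserve the population's average fitness while capping the expected number of copies of the best individual. The sketch below implements that classic scheme under assumed parameters; it is one plausible instance, not necessarily the exact transformation used by the authors.

```python
import numpy as np

def linear_fitness_scaling(f, c_mult=2.0):
    """Affine scaling f' = a*f + b chosen so that mean(f') == mean(f) and
    max(f') == c_mult * mean(f), adjusting a and b when that would drive the
    minimum below zero (classic linear scaling; a sketch, not the paper's exact rule)."""
    f = np.asarray(f, dtype=float)
    f_avg, f_max, f_min = f.mean(), f.max(), f.min()
    if f_max > f_avg:
        a = (c_mult - 1.0) * f_avg / (f_max - f_avg)
        b = f_avg * (1.0 - a)
        if a * f_min + b < 0.0 and f_avg > f_min:   # would produce negative fitness
            a = f_avg / (f_avg - f_min)
            b = -a * f_min
    else:                                           # all fitnesses equal
        a, b = 1.0, 0.0
    return a * f + b

scaled = linear_fitness_scaling([1.0, 2.0, 3.0, 10.0])
print(scaled, scaled.mean())   # mean preserved (when no adjustment), best capped at ~2x mean
```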

  1. Multidimensional Scaling Localization Algorithm in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhang Dongyang

    2014-02-01

    Full Text Available Because localization algorithms for large-scale wireless sensor networks fall short of traditional localization algorithms in both positioning accuracy and time complexity, this paper presents a fast multidimensional scaling (MDS) localization algorithm. The fast MDS positioning algorithm obtains schematic node coordinates through fast mapping initialization, fast mapping and coordinate transformation; these initialize the MDS algorithm, which yields an accurate estimate of the node coordinates, and a Procrustes analysis is then used to align the coordinates and produce the final node positions. There are four steps, and the thesis gives the specific implementation of each. Finally, the algorithm is applied to specific examples and compared experimentally with stochastic algorithms and the classical MDS algorithm. Experimental results show that the proposed fast multidimensional scaling localization algorithm maintains positioning accuracy under certain circumstances while greatly improving the speed of operation.
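
    The core MDS step, recovering relative (schematic) node coordinates from pairwise distances, can be sketched with classical multidimensional scaling via double centering; the result is then typically aligned to anchor nodes with a Procrustes analysis. The plain algorithm below is for illustration only and does not include the fast-mapping accelerations of the thesis.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: double-center the squared distance matrix and keep the top
    eigenvectors (the plain algorithm, not the thesis's fast variant)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Relative (schematic) coordinates from pairwise distances of 5 sensor nodes
true = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
D = np.linalg.norm(true[:, None, :] - true[None, :, :], axis=-1)
rel = classical_mds(D)          # recovered up to rotation/reflection/translation
```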

  2. Digital Sound Encryption with Logistic Map and Number Theoretic Transform

    Science.gov (United States)

    Satria, Yudi; Gabe Rizky, P. H.; Suryadi, MT

    2018-03-01

    Digital sound security has limits when encrypting in the frequency domain. A Number Theoretic Transform based on the field GF(2^521 - 1) improves on and solves that problem. The algorithm for this sound encryption is based on a combination of a chaos function and the Number Theoretic Transform. The chaos function used in this paper is the logistic map. The trials and simulations are conducted using 5 different digital sound files in WAV format as test data, each simulated at least 100 times. The resulting key stream is random, as verified by 15 NIST randomness tests. The key space formed is very large, more than 10^469. The processing speed of the encryption algorithm is slightly affected by the Number Theoretic Transform.

  3. Moment-based method for computing the two-dimensional discrete Hartley transform

    Science.gov (United States)

    Dong, Zhifang; Wu, Jiasong; Shu, Huazhong

    2009-10-01

    In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT). By using kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments. This enables us to use the fast algorithms developed for computing the 2-D moments to efficiently calculate the 2-D DHT. The proposed method achieves a simple computational structure and is suitable to deal with any sequence lengths.

  4. The derivation of distributed termination detection algorithms from garbage collection schemes

    NARCIS (Netherlands)

    Tel, G.; Mattern, F.

    1990-01-01

    It is shown that the termination detection problem for distributed computations can be modelled as an instance of the garbage collection problem. Consequently, algorithms for the termination detection problem are obtained by applying transformations to garbage collection algorithms. The

  5. Iterative-Transform Phase Diversity: An Object and Wavefront Recovery Algorithm

    Science.gov (United States)

    Smith, J. Scott

    2011-01-01

    Presented is a solution for recovering the wavefront and an extended object. It builds upon the VSM architecture and deconvolution algorithms. Simulations are shown for recovering the wavefront and extended object from noisy data.

  6. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    Science.gov (United States)

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is as follows: first, rectangular feature templates are constructed, centered on the Harris corners extracted in the mask, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the specific affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts in the images are removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
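
    Once template matching yields point correspondences (Harris-corner centers and their motion vectors), the six affine parameters follow from a linear least-squares fit; NumPy's lstsq solves this via an SVD, in the spirit of the SVD-based step above. The correspondences below are synthetic, and the sketch omits the template-matching and interpolation stages.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src -> dst points (N x 2 arrays).
    Solves [x y 1] @ M = [x' y'] for the 3x2 parameter matrix M via SVD-based lstsq."""
    src = np.asarray(src, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, np.asarray(dst, dtype=float), rcond=None)
    return M                                  # rows: 2x2 linear part, then translation

src = np.array([[10, 12], [80, 15], [40, 70], [90, 90]])
dst = src @ np.array([[1.01, 0.02], [-0.02, 0.99]]) + np.array([3.0, -2.0])
M = estimate_affine(src, dst)
warped = np.hstack([src, np.ones((4, 1))]) @ M   # apply the transform to mask coordinates
```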

  7. Stereo matching using epipolar distance transform.

    Science.gov (United States)

    Yang, Qingxiong; Ahuja, Narendra

    2012-10-01

    In this paper, we propose a simple but effective image transform, called the epipolar distance transform, for matching low-texture regions. It converts image intensity values to a relative location inside a planar segment along the epipolar line, such that pixels in the low-texture regions become distinguishable. We theoretically prove that the transform is affine invariant; thus the transformed images can be directly used for stereo matching. Any existing stereo algorithm can be used directly with the transformed images to improve reconstruction accuracy for low-texture regions. Results on real indoor and outdoor images demonstrate the effectiveness of the proposed transform for matching low-texture regions, keypoint detection, and description for low-texture scenes. Our experimental results on Middlebury images also demonstrate the robustness of our transform for highly textured scenes. The proposed transform has a great advantage: its low computational complexity. It was tested on a MacBook Air laptop computer with a 1.8 GHz Core i7 processor, running at about 9 frames per second for a video graphics array (VGA)-sized image.

  8. Implementation of the 2-D Wavelet Transform into FPGA for Image

    Energy Technology Data Exchange (ETDEWEB)

    Leon, M; Barba, L; Vargas, L; Torres, C O, E-mail: madeleineleon@unicesar.edu.co [Laboratorio de Optica e Informatica, Universidad Popular del Cesar, Sede balneario Hurtado, Valledupar, Cesar (Colombia)

    2011-01-01

    This paper presents a hardware implementation of the two-dimensional discrete wavelet transform algorithm for FPGA, using the Daubechies filter family of order 2 (db2). The decomposition algorithm of this transform is designed and simulated with the hardware description language VHDL and is implemented in a programmable logic device (FPGA), reference XC3S1200E of the Xilinx Spartan-3E family, taking advantage of the parallelism this device offers and the processing speeds it can reach. The architecture is evaluated using input images of different sizes. This implementation is carried out with the aim of developing a future image-encryption hardware system based on the wavelet transform for information security.
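
    A software reference for the decomposition that the VHDL design computes can be written in a few lines with PyWavelets (assumed available); this is only a functional model of the one-level 2-D db2 transform, not the FPGA implementation itself.

```python
import numpy as np
import pywt

# One level of the 2-D discrete wavelet transform with Daubechies-2 (db2),
# the same decomposition the FPGA architecture computes in hardware.
image = np.random.default_rng(1).integers(0, 256, size=(256, 256)).astype(float)
cA, (cH, cV, cD) = pywt.dwt2(image, 'db2')   # approximation + 3 detail subbands
print(cA.shape)                               # roughly half-size subbands

# The inverse transform reconstructs the image (up to numerical precision)
rec = pywt.idwt2((cA, (cH, cV, cD)), 'db2')
print(np.allclose(rec[:image.shape[0], :image.shape[1]], image))
```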

  9. TMS320C25 Digital Signal Processor For 2-Dimensional Fast Fourier Transform Computation

    International Nuclear Information System (INIS)

    Ardisasmita, M. Syamsa

    1996-01-01

    The Fourier transform is one of the most important mathematical tools in signal processing and analysis, converting information from the time/spatial domain into the frequency domain. Even with the implementation of Fast Fourier Transform algorithms on imaging data, execution of the discrete Fourier transform consumes a lot of time. Digital signal processors are designed specifically to perform computation-intensive digital signal processing algorithms. By taking advantage of the advanced architecture, parallel processing, and dedicated digital signal processing (DSP) instruction sets, this device can execute millions of DSP operations per second. The device architecture, the characteristics and features suitable for fast Fourier transform applications, and the resulting speed-up are discussed.

  10. An Early Fire Detection Algorithm Using IP Cameras

    Directory of Open Access Journals (Sweden)

    Hector Perez-Meana

    2012-05-01

    Full Text Available The presence of smoke is the first symptom of fire; therefore to achieve early fire detection, accurate and quick estimation of the presence of smoke is very important. In this paper we propose an algorithm to detect the presence of smoke using video sequences captured by Internet Protocol (IP cameras, in which important features of smoke, such as color, motion and growth properties are employed. For an efficient smoke detection in the IP camera platform, a detection algorithm must operate directly in the Discrete Cosine Transform (DCT domain to reduce computational cost, avoiding a complete decoding process required for algorithms that operate in spatial domain. In the proposed algorithm the DCT Inter-transformation technique is used to increase the detection accuracy without inverse DCT operation. In the proposed scheme, firstly the candidate smoke regions are estimated using motion and color smoke properties; next using morphological operations the noise is reduced. Finally the growth properties of the candidate smoke regions are furthermore analyzed through time using the connected component labeling technique. Evaluation results show that a feasible smoke detection method with false negative and false positive error rates approximately equal to 4% and 2%, respectively, is obtained.

  11. Novel Polynomial Basis with Fast Fourier Transform and Its Application to Reed-Solomon Erasure Codes

    KAUST Repository

    Lin, Sian-Jheng

    2016-09-13

    In this paper, we present a fast Fourier transform (FFT) algorithm over extension binary fields, where the polynomial is represented in a non-standard basis. The proposed Fourier-like transform requires O(h lg(h)) field operations, where h is the number of evaluation points. Based on the proposed Fourier-like algorithm, we then develop the encoding/decoding algorithms for (n = 2^m, k) Reed-Solomon erasure codes. The proposed encoding/erasure decoding algorithm requires O(n lg(n)), in both additive and multiplicative complexities. As the complexity leading factor is small, the proposed algorithms are advantageous in practical applications. Finally, the approaches to convert the basis between the monomial basis and the new basis are proposed.

  12. Direct numerical reconstruction of conductivities in three dimensions using scattering transforms

    DEFF Research Database (Denmark)

    Bikowski, Jutta; Knudsen, Kim; Mueller, Jennifer L

    2011-01-01

    A direct three-dimensional EIT reconstruction algorithm based on complex geometrical optics solutions and a nonlinear scattering transform is presented and implemented for spherically symmetric conductivity distributions. The scattering transform is computed both with a Born approximation and from...

  13. Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    KRSTIVOJEVIC, J. P.

    2015-08-01

    Full Text Available The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the current transformer performance, the uncertain input data for the CT model were obtained by applying the MC method. In this way, different levels of remanent flux in the CT core are taken into consideration. Using the generated CT secondary currents, the algorithm for REF protection based on phase comparison in the time domain is tested. On the basis of the obtained results, a method of adjusting the triggering threshold in order to ensure safe operation during transients, and thereby improve the algorithm's security, has been proposed. The obtained results indicate that power transformer REF protection would be enhanced by using the proposed adjustment of the triggering threshold in the algorithm based on phase comparison in the time domain.

  14. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    Full Text Available This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. Additional preprocessing of data with constant and linear sections, and a weighted overlap of signals transformed step-by-step into the spectral domain, improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  15. A pyramid algorithm for the Haar discrete wavelet packet transform

    African Journals Online (AJOL)

    PROF EKWUEME

    computer-aided signal processing of non-stationary signals, this paper develops a pyramid algorithm for the discrete wavelet packet ...
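
    Since the abstract is truncated, the following is only a generic sketch of a full Haar wavelet packet pyramid, in which every node at each level is split into scaled sums and differences of adjacent samples; it is not necessarily the specific algorithm developed in the paper.

```python
import numpy as np

def haar_packet_pyramid(x, levels):
    """Full Haar wavelet packet decomposition: at every level each node is split
    into averages (a+b)/sqrt(2) and differences (a-b)/sqrt(2) of adjacent samples.
    Returns a list of levels, each a list of 2**level coefficient arrays.
    Assumes len(x) is divisible by 2**levels."""
    pyramid = [[np.asarray(x, dtype=float)]]
    for _ in range(levels):
        next_level = []
        for node in pyramid[-1]:
            a, b = node[0::2], node[1::2]
            next_level.append((a + b) / np.sqrt(2))   # low-pass child
            next_level.append((a - b) / np.sqrt(2))   # high-pass child
        pyramid.append(next_level)
    return pyramid

sig = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1 * np.random.default_rng(2).standard_normal(64)
tree = haar_packet_pyramid(sig, 3)
print([len(level) for level in tree])   # [1, 2, 4, 8] nodes per level
```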

  16. An improved ASIFT algorithm for indoor panorama image matching

    Science.gov (United States)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in a single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for implementing these functions. Compared with the SIFT algorithm, more feature points can be generated and the matching accuracy of the ASIFT algorithm is higher, even for panoramic images with obvious distortions. However, the algorithm is very time-consuming because of its complex operations, and it does not perform well for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from an affine transformation of both tilt and rotation of the images to a tilt-only affine transformation. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.

  17. Solution of the weighted symmetric similarity transformations based on quaternions

    Science.gov (United States)

    Mercan, H.; Akyilmaz, O.; Aydin, C.

    2017-12-01

    A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all the variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters (scale, translations, and the quaternion, and hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero (0) values assigned to the z-components of both target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed and the conventional weighted LS (WLS) methods are also presented.
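
    A key ingredient above is that a unit quaternion represents a 3D rotation matrix uniquely (up to sign). The sketch below shows the standard quaternion-to-rotation-matrix formula and its use in a forward similarity transform; the iterative Gauss-Helmert/EIV estimation of the parameters themselves is not reproduced, and the numbers are illustrative.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def apply_similarity(points, scale, q, t):
    """3D similarity transform: scaled rotation plus translation."""
    return scale * points @ quat_to_rot(q).T + t

pts = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
out = apply_similarity(pts, 1.001, np.array([0.99, 0.01, 0.02, 0.03]), np.array([10.0, -5.0, 2.0]))
```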

  18. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi

    2015-01-02

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
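
    The paper estimates the transformation inside the PCA model by maximum profile likelihood; as a simpler stand-in that illustrates why transforming skewed data first matters, the sketch below applies a per-variable Box-Cox transform and then PCA via an SVD. The Box-Cox choice and the synthetic log-normal data are assumptions, not the authors' method.

```python
import numpy as np
from scipy.stats import boxcox

def pca(X, k=2):
    """PCA via SVD of the column-centered data matrix; returns scores and loadings."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(3)
X = np.exp(rng.standard_normal((200, 4)))          # skewed (log-normal) data

# Per-variable Box-Cox transform (maximum-likelihood lambda), then PCA
Xt = np.column_stack([boxcox(X[:, j])[0] for j in range(X.shape[1])])
scores, loadings = pca(Xt, k=2)
```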

  19. High-radix transforms for Reed-Solomon codes over Fermat primes

    Science.gov (United States)

    Liu, K. Y.; Reed, I. S.; Truong, T. K.

    1977-01-01

    A method is proposed to streamline the transform decoding algorithm for Reed-Solomon (RS) codes of length equal to 2 raised to the power 2n. It is shown that a high-radix fast Fourier transform (FFT) type algorithm with generator equal to 3 on GF(F sub n), where F sub n is a Fermat prime, can be used to decode RS codes of this length. For a 256-symbol RS code, a radix 4 and radix 16 FFT over GF(F sub 3) require, respectively, 30 and 70% fewer modulo F sub n multiplications than the usual radix 2 FFT.

  20. An Alternative Algorithm for Computing Watersheds on Shared Memory Parallel Computers

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.

    1995-01-01

    In this paper a parallel implementation of a watershed algorithm is proposed. The algorithm can easily be implemented on shared memory parallel computers. The watershed transform is generally considered to be inherently sequential since the discrete watershed of an image is defined using recursion.

  1. Fast Transform Decoding Of Nonsystematic Reed-Solomon Codes

    Science.gov (United States)

    Truong, Trieu-Kie; Cheung, Kar-Ming; Shiozaki, A.; Reed, Irving S.

    1992-01-01

    Fast, efficient Fermat number transform used to compute F'(x) analogous to computation of syndrome in conventional decoding scheme. Eliminates polynomial multiplications and reduces number of multiplications in reconstruction of F'(x) to n log (n). Euclidean algorithm used to evaluate F(x) directly, without going through intermediate steps of solving error-locator and error-evaluator polynomials. Algorithm suitable for implementation in very-large-scale integrated circuits.

  2. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    Science.gov (United States)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.

  3. Wavelet Based Hilbert Transform with Digital Design and Application to QCM-SS Watermarking

    Directory of Open Access Journals (Sweden)

    S. P. Maity

    2008-04-01

    Full Text Available In recent times, wavelet transforms have been used extensively for efficient storage, transmission and representation of multimedia signals. Hilbert transform pairs of wavelets are the basic unit of many wavelet theories such as complex filter banks, complex wavelets and phaselets. Moreover, the Hilbert transform finds various applications in communications and signal processing, such as generation of single sideband (SSB) modulation, quadrature carrier multiplexing (QCM) and bandpass representation of a signal. Thus wavelet-based discrete Hilbert transform design has drawn much attention from researchers for a couple of years. This paper proposes (i) an algorithm for the generation of low-computation-cost Hilbert transform pairs of symmetric filter coefficients using biorthogonal wavelets, (ii) an approximation to its rational-coefficient form for efficient hardware realization without much loss in signal representation, and finally (iii) the development of a QCM-SS (spread spectrum) image watermarking scheme for doubling the payload capacity. Simulation results show the novelty of the proposed Hilbert transform design and its application to watermarking compared to existing algorithms.

  4. Hubbard-Stratonovich-like Transformations for Few-Body Interactions

    Directory of Open Access Journals (Sweden)

    Körber Christopher

    2018-01-01

    Full Text Available Through the development of many-body methodology and algorithms, it has become possible to describe quantum systems composed of a large number of particles with great accuracy. Essential to all these methods is the application of auxiliary fields via the Hubbard-Stratonovich transformation. This transformation effectively reduces two-body interactions to interactions of one particle with the auxiliary field, thereby improving the computational scaling of the respective algorithms. The relevance of collective phenomena and interactions grows with the number of particles. For many theories, e.g. Chiral Perturbation Theory, the inclusion of three-body forces has become essential in order to further increase the accuracy on the many-body level. In this proceeding, the analytical framework for establishing a Hubbard-Stratonovich-like transformation, which allows for the systematic and controlled inclusion of contact three- and more-body interactions, is presented.

  5. Compression of seismic data: filter banks and extended transforms, synthesis and adaptation; Compression de donnees sismiques: bancs de filtres et transformees etendues, synthese et adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Duval, L.

    2000-11-01

    Wavelet and wavelet packet transforms are the most commonly used algorithms for seismic data compression. Wavelet coefficients are generally quantized and encoded by classical entropy coding techniques. We first propose in this work a compression algorithm based on the wavelet transform. The wavelet transform is used together with a zero-tree type coding, applied here to seismic data for the first time. Classical wavelet transforms nevertheless yield a quite rigid approach, since it is often desirable to adapt the transform stage to the properties of each type of signal. We thus propose a second algorithm using, instead of wavelets, a set of so called 'extended transforms'. These transforms, originating from filter bank theory, are parameterized. Classical examples are Malvar's Lapped Orthogonal Transforms (LOT) or the Generalized Lapped Orthogonal Transforms (GenLOT) of de Queiroz et al. We propose several optimization criteria to build 'extended transforms' which are adapted to the properties of seismic signals. We further show that these transforms can be used with the same zero-tree type coding technique as used with wavelets. Both proposed algorithms provide exact compression rate choice, block-wise compression (in the case of extended transforms) and partial decompression for quality control or visualization. Performances are tested on a set of actual seismic data. They are evaluated for several quality measures. We also compare them to other seismic compression algorithms. (author)

  6. Intelligent Models Performance Improvement Based on Wavelet Algorithm and Logarithmic Transformations in Suspended Sediment Estimation

    Directory of Open Access Journals (Sweden)

    R. Hajiabadi

    2016-10-01

    Full Text Available Introduction One reason for the complexity of predicting hydrological phenomena, especially time series, is the existence of features such as trend, noise and high-frequency oscillations. These complex features, especially noise, can be detected or removed by preprocessing. Appropriate preprocessing makes the estimation of these phenomena easier. Preprocessing in data-driven models such as artificial neural networks, gene expression programming and support vector machines is particularly effective because the quality of the data in these models is important. The present study, by considering denoising and data transformation as two different preprocessing steps, tries to improve the results of intelligent models. In this study two different intelligent models, an Artificial Neural Network and Gene Expression Programming, are applied to the estimation of daily suspended sediment load. Wavelet transforms and logarithmic transformation are used for denoising and data transformation, respectively. Finally, the impacts of preprocessing on the results of the intelligent models are evaluated. Materials and Methods In this study, Gene Expression Programming and an Artificial Neural Network are used as intelligent models for suspended sediment load estimation; the impacts of the denoising and logarithmic transformation approaches as data preprocessors are then evaluated and compared with respect to the resulting improvement. Two different logarithmic transforms are considered in this research, LN and LOG. Wavelet transformation is used for time series denoising: first, the time series is decomposed at one level (approximation part and detail part), and second, the high-frequency part (detail) is removed as noise. Given the ability of gene expression programming and artificial neural networks to analyze nonlinear systems, daily values of the suspended sediment load of the Skunk River in the USA, during a 5-year period, are investigated and then estimated. 4 years of
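
    The denoising step described (one-level decomposition, then discarding the high-frequency detail) and the LN transform can be sketched as follows with PyWavelets; the wavelet choice (db4) and the synthetic series are assumptions for illustration only.

```python
import numpy as np
import pywt

def wavelet_denoise_one_level(series, wavelet="db4"):
    """One-level wavelet denoising as described: decompose into approximation and
    detail, drop the high-frequency detail, and reconstruct (wavelet choice assumed)."""
    cA, cD = pywt.dwt(series, wavelet)
    return pywt.idwt(cA, np.zeros_like(cD), wavelet)[: len(series)]

rng = np.random.default_rng(4)
sediment = np.abs(np.cumsum(rng.standard_normal(365))) + 1.0   # synthetic daily load
denoised = wavelet_denoise_one_level(sediment)
log_input = np.log(denoised)          # LN transform fed to the ANN / GEP model
```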

  7. Invariance algorithms for processing NDE signals

    Science.gov (United States)

    Mandayam, Shreekanth; Udpa, Lalita; Udpa, Satish S.; Lord, William

    1996-11-01

    Signals that are obtained in a variety of nondestructive evaluation (NDE) processes capture information not only about the characteristics of the flaw, but also reflect variations in the specimen's material properties. Such signal changes may be viewed as anomalies that could obscure defect related information. An example of this situation occurs during in-line inspection of gas transmission pipelines. The magnetic flux leakage (MFL) method is used to conduct noninvasive measurements of the integrity of the pipe-wall. The MFL signals contain information both about the permeability of the pipe-wall and the dimensions of the flaw. Similar operational effects can be found in other NDE processes. This paper presents algorithms to render NDE signals invariant to selected test parameters, while retaining defect related information. Wavelet transform based neural network techniques are employed to develop the invariance algorithms. The invariance transformation is shown to be a necessary pre-processing step for subsequent defect characterization and visualization schemes. Results demonstrating the successful application of the method are presented.

  8. Speckle imaging algorithms for planetary imaging

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, E. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.

  9. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    Science.gov (United States)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as the 3GPP physical layer for ultra mobile broadband permutation matrix design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.

  10. Numerical computation of the discrete Fourier transform and its applications in the statistic processing of experimental data

    International Nuclear Information System (INIS)

    Marinescu, D.C.; Radulescu, T.G.

    1977-06-01

    The Integral Fourier Transform has a large range of applications in areas such as communication theory, circuit theory, physics, etc. In order to perform the discrete Fourier Transform, the Finite Fourier Transform is defined; it operates upon N samples of a uniformly sampled continuous function. All the properties known in the continuous case can be found in the discrete case as well. The first part of the paper presents the relationship between the Finite Fourier Transform and the Integral one. The computation of a Finite Fourier Transform is a problem in itself, since in order to transform a set of N data we have to perform N^2 'operations' if the transformation relations are used directly. An algorithm known as the Fast Fourier Transform (FFT) reduces this figure from N^2 to a more reasonable N log2 N, when N is a power of two. The original Cooley and Tukey algorithm for the FFT can be further improved when higher bases are used. The price to be paid in this case is the increased complexity of such algorithms. The recurrence relations and a comparison among such algorithms are presented. The key point in understanding the application of the FFT resides in the convolution theorem, which states that the convolution (an N^2-type procedure) of the primitive functions is equivalent to the ordinary multiplication of their transforms. Since filtering is actually a convolution process, we present several procedures to perform digital filtering by means of the FFT. The best is the one using the segmentation of records and the transformation of pairs of records. In the digital processing of signals, besides digital filtering, special attention is paid to the estimation of various statistical characteristics of a signal, such as autocorrelation and correlation functions, periodograms, power density spectrum, etc. We give several algorithms for the consistent and unbiased estimation of such functions by means of the FFT. (author)
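
    The filtering procedure based on segmentation of records is the overlap-add method: each segment is convolved with the filter through multiplication of FFTs, and the overlapping tails are summed. A compact sketch with an illustrative block size is given below.

```python
import numpy as np

def overlap_add_filter(x, h, block=256):
    """FIR filtering via the convolution theorem: process x in segments, multiply
    FFTs, and overlap-add the tails (equivalent to np.convolve(x, h))."""
    L, M = block, len(h)
    nfft = 1 << int(np.ceil(np.log2(L + M - 1)))
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), L):
        seg = x[start:start + L]
        conv = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)[: len(seg) + M - 1]
        y[start:start + len(conv)] += conv
    return y

rng = np.random.default_rng(5)
x = rng.standard_normal(2000)
h = np.ones(32) / 32.0                          # simple moving-average filter
assert np.allclose(overlap_add_filter(x, h), np.convolve(x, h))
```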

  11. A novel method to design S-box based on chaotic map and genetic algorithm

    International Nuclear Information System (INIS)

    Wang, Yong; Wong, Kwok-Wo; Li, Changbing; Li, Yang

    2012-01-01

    The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing S-box is transformed to a Traveling Salesman Problem and a method for designing S-box based on chaos and genetic algorithm is proposed. Since the proposed method makes full use of the traits of chaotic map and evolution process, stronger S-box is obtained. The results of performance test show that the presented S-box has good cryptographic properties, which justify that the proposed algorithm is effective in generating strong S-boxes. -- Highlights: ► The problem of constructing S-box is transformed to a Traveling Salesman Problem. ► We present a new method for designing S-box based on chaos and genetic algorithm. ► The proposed algorithm is effective in generating strong S-boxes.

  12. A novel method to design S-box based on chaotic map and genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yong, E-mail: wangyong_cqupt@163.com [State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China); Key Laboratory of Electronic Commerce and Logistics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Wong, Kwok-Wo [Department of Electronic Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong (Hong Kong); Li, Changbing [Key Laboratory of Electronic Commerce and Logistics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Li, Yang [Department of Automatic Control and Systems Engineering, The University of Sheffield, Mapping Street, S1 3DJ (United Kingdom)

    2012-01-30

    The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing S-box is transformed to a Traveling Salesman Problem and a method for designing S-box based on chaos and genetic algorithm is proposed. Since the proposed method makes full use of the traits of chaotic map and evolution process, stronger S-box is obtained. The results of performance test show that the presented S-box has good cryptographic properties, which justify that the proposed algorithm is effective in generating strong S-boxes. -- Highlights: ► The problem of constructing S-box is transformed to a Traveling Salesman Problem. ► We present a new method for designing S-box based on chaos and genetic algorithm. ► The proposed algorithm is effective in generating strong S-boxes.

  13. Automatic Transformation of MPI Programs to Asynchronous, Graph-Driven Form

    Energy Technology Data Exchange (ETDEWEB)

    Baden, Scott B [University of California, San Diego; Weare, John H [University of California, San Diego; Bylaska, Eric J [Pacific Northwest National Laboratory

    2013-04-30

    The goals of this project are to develop new, scalable, high-fidelity algorithms for atomic-level simulations and program transformations that automatically restructure existing applications, enabling them to scale forward to Petascale systems and beyond. The techniques enable legacy MPI application code to exploit greater parallelism though increased latency hiding and improved workload assignment. The techniques were successfully demonstrated on high-end scalable systems located at DOE laboratories. Besides the automatic MPI program transformations efforts, the project also developed several new scalable algorithms for ab-initio molecular dynamics, including new massively parallel algorithms for hybrid DFT and new parallel in time algorithms for molecular dynamics and ab-initio molecular dynamics. These algorithms were shown to scale to very large number of cores, and they were designed to work in the latency hiding framework developed in this project. The effectiveness of the developments was enhanced by the direct application to real grand challenge simulation problems covering a wide range of technologically important applications, time scales and accuracies. These included the simulation of the electronic structure of mineral/fluid interfaces, the very accurate simulation of chemical reactions in microsolvated environments, and the simulation of chemical behavior in very large enzyme reactions.

  14. Method for Optimal Sensor Deployment on 3D Terrains Utilizing a Steady State Genetic Algorithm with a Guided Walk Mutation Operator Based on the Wavelet Transform

    Science.gov (United States)

    Unaldi, Numan; Temel, Samil; Asari, Vijayan K.

    2012-01-01

    One of the most critical issues of Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors in order to achieve maximum coverage on a terrain. The optimal sensor deployment which enables one to minimize the consumed energy, communication time and manpower for the maintenance of the network has attracted interest with the increased number of studies conducted on the subject in the last decade. Most of the studies in the literature today are proposed for two dimensional (2D) surfaces; however, real world sensor deployments often arise on three dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains, in which the sensor movements are carried out within the mutation phase of the genetic algorithms (GAs) is proposed. The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN via deploying a limited number of sensors on a 3D surface by utilizing a probabilistic sensing model and the Bresenham's line of sight (LOS) algorithm. In addition, the method followed in this paper is novel to the literature and the performance of the proposed algorithm is compared with the Delaunay Triangulation (DT) method as well as a standard genetic algorithm based method and the results reveal that the proposed method is a more powerful and more successful method for sensor deployment on 3D terrains. PMID:22666078

  15. An effective detection algorithm for region duplication forgery in digital images

    Science.gov (United States)

    Yavuz, Fatih; Bal, Abdullah; Cukur, Huseyin

    2016-04-01

    Powerful image editing tools are very common and easy to use these days. This situation may cause some forgeries by adding or removing some information on the digital images. In order to detect these types of forgeries such as region duplication, we present an effective algorithm based on fixed-size block computation and discrete wavelet transform (DWT). In this approach, the original image is divided into fixed-size blocks, and then wavelet transform is applied for dimension reduction. Each block is processed by Fourier Transform and represented by circle regions. Four features are extracted from each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks are detected according to comparison metric results. The experimental results show that the proposed algorithm presents computational efficiency due to fixed-size circle block architecture.
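
    A stripped-down version of the pipeline described above is sketched below: fixed-size blocks, DWT-based reduction, lexicographic sorting of feature vectors, and comparison of neighbouring entries. The block statistics used as the four features, the step size, and the threshold are illustrative assumptions; the paper's exact feature definition is not reproduced here.

```python
import numpy as np
import pywt

def detect_duplicates(image, block=16, threshold=1e-3):
    """Sketch of block-based duplicate detection: slide fixed-size blocks, reduce each
    with a one-level Haar DWT, take simple sub-band statistics as the feature vector,
    sort the vectors lexicographically, and flag near-identical neighbours."""
    feats, positions = [], []
    H, W = image.shape
    for i in range(0, H - block + 1, 4):             # step 4 to keep the demo small
        for j in range(0, W - block + 1, 4):
            cA, (cH, cV, cD) = pywt.dwt2(image[i:i + block, j:j + block], "haar")
            feats.append([cA.mean(), np.abs(cH).mean(), np.abs(cV).mean(), np.abs(cD).mean()])
            positions.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])                # lexicographic sort of feature rows
    pairs = []
    for a, b in zip(order[:-1], order[1:]):
        if np.linalg.norm(feats[a] - feats[b]) < threshold:
            pairs.append((positions[a], positions[b]))
    return pairs

img = np.random.default_rng(6).random((64, 64))
img[40:56, 40:56] = img[8:24, 8:24]                  # forge a duplicated region
print(detect_duplicates(img)[:3])
```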

  16. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    Science.gov (United States)

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, apply three-level discrete wavelet transformation to luminance component Y and generate four different frequency sub-bands. After that, perform singular value decomposition on these sub-bands. In the watermark embedding process, apply discrete wavelet transformation to a watermark image after the scrambling encryption processing. Our new algorithm uses differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm has a better performance in terms of invisibility and robustness.

  17. An optical Fourier transform coprocessor with direct phase determination.

    Science.gov (United States)

    Macfaden, Alexander J; Gordon, George S D; Wilkinson, Timothy D

    2017-10-20

    The Fourier transform is a ubiquitous mathematical operation which arises naturally in optics. We propose and demonstrate a practical method to optically evaluate a complex-to-complex discrete Fourier transform. By implementing the Fourier transform optically we can overcome the limiting O(nlogn) complexity of fast Fourier transform algorithms. Efficiently extracting the phase from the well-known optical Fourier transform is challenging. By appropriately decomposing the input and exploiting symmetries of the Fourier transform we are able to determine the phase directly from straightforward intensity measurements, creating an optical Fourier transform with O(n) apparent complexity. Performing larger optical Fourier transforms requires higher resolution spatial light modulators, but the execution time remains unchanged. This method could unlock the potential of the optical Fourier transform to permit 2D complex-to-complex discrete Fourier transforms with a performance that is currently untenable, with applications across information processing and computational physics.

  18. Theoretical Provision of Tax Transformation

    Directory of Open Access Journals (Sweden)

    Feofanova Iryna V.

    2016-05-01

    Full Text Available The article is aimed at defining the questions whose answers are necessary for a scientific substantiation of the tax transformation in Ukraine. The article analyzes the structural-logical relationships of the theories that substantiate tax systems and their transformation. Various views on the level of the tax burden and on the distribution of the tax burden between big and small business have been systematized. The issues that require theoretical substantiation when choosing a model of the tax system have been identified. It is determined that the shares of both indirect and direct taxes and their rates can be substantiated by calculations on the basis of statistical data. The results of the presented research can be used to develop an algorithm for the theoretical substantiation of tax transformation.

  19. Modeling of austenite to ferrite transformation

    Indian Academy of Sciences (India)

    Mohsen Kazeminezhad (Department of Materials Science and Engineering, Sharif University of Technology, Azadi Avenue, Tehran, Iran), 2011. Abstract: In this research, an algorithm ...

  20. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan; Lau, Cheryl; Mü ller, Pascal; Wonka, Peter; Pauly, Mark

    2017-01-01

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  1. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan

    2017-05-24

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  2. Inverse kinematics algorithm for a six-link manipulator using a polynomial expression

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-01-01

    This report is concerned with the forward and inverse kinematics problem for a six-link robot manipulator. In order to derive the kinematic relationships between links, the vector rotation operator was applied instead of the conventional homogeneous transformation. An exact algorithm for solving the inverse problem was obtained by transforming the kinematics equations into a polynomial. As shown in test calculations, the accuracy of the numerical solutions obtained by means of the present approach is quite high. The proposed algorithm makes it possible to find all feasible solutions of the given inverse problem. (author)

  3. 2-D DOA Estimation of LFM Signals Based on Dechirping Algorithm and Uniform Circle Array

    Directory of Open Access Journals (Sweden)

    K. B. Cui

    2017-04-01

    Full Text Available Based on the dechirping algorithm and a uniform circular array (UCA), a new 2-D direction-of-arrival (DOA) estimation algorithm for linear frequency modulation (LFM) signals is proposed in this paper. The algorithm applies the idea of dechirping: the signal received by the reference sensor is regarded as the reference signal, and difference-frequency processing is carried out with the signal received by each sensor, so that the signal to be estimated becomes a single-frequency signal at each sensor. We then transform the single-frequency signal into an isolated impulse through the Fourier transform (FFT) and construct a new array data model based on the prominent parts of the impulse. Finally, we use the multiple signal classification (MUSIC) algorithm and the rotational invariance technique (ESPRIT) algorithm, respectively, to realize 2-D DOA estimation of LFM signals. The simulation results verify the effectiveness of the proposed algorithm.

  4. Linear canonical transforms theory and applications

    CERN Document Server

    Kutay, M; Ozaktas, Haldun; Sheridan, John

    2016-01-01

    This book provides a clear and accessible introduction to the essential mathematical foundations of linear canonical transforms from a signals and systems perspective. Substantial attention is devoted to how these transforms relate to optical systems and wave propagation. There is extensive coverage of sampling theory and fast algorithms for numerically approximating the family of transforms. Chapters on topics ranging from digital holography to speckle metrology provide a window on the wide range of applications. This volume will serve as a reference for researchers in the fields of image and signal processing, wave propagation, optical information processing and holography, optical system design and modeling, and quantum optics. It will be of use to graduate students in physics and engineering, as well as for scientists in other areas seeking to learn more about this important yet relatively unfamiliar class of integral transformations.

  5. Novel Simplex Unscented Transform and Filter

    Institute of Scientific and Technical Information of China (English)

    Wan-Chun Li; Ping Wei; Xian-Ci Xiao

    2008-01-01

    In this paper, a new simplex unscented transform (UT) based on the Schmidt orthogonalization algorithm and a new filter method based on this transform are proposed. This filter has lower computational cost than the UKF (unscented Kalman filter), SUKF (simplex unscented Kalman filter) and EKF (extended Kalman filter). Computer simulation shows that this filter has the same performance as the UKF and SUKF, and according to the analysis of the computational requirements of the EKF, UKF and SUKF, this filter has considerable practical value. Finally, the appendix shows the efficiency of this UT.

  6. The fast decoding of Reed-Solomon codes using Fermat theoretic transforms and continued fractions

    Science.gov (United States)

    Reed, I. S.; Scholtz, R. A.; Welch, L. R.; Truong, T. K.

    1978-01-01

    It is shown that Reed-Solomon (RS) codes can be decoded by using a fast Fourier transform (FFT) algorithm over the finite fields GF(F sub n), where F sub n is a Fermat prime, together with continued fractions. This new transform decoding method is simpler than the standard method for RS codes, and its software computing time can be shorter than that of the standard decoding method.
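
    For orientation, the sketch below implements a transform of the kind such decoders rely on: a number-theoretic Fourier transform over the Fermat prime 257, together with its convolution property. It is a direct O(N^2) evaluation for illustration only, not the fast algorithm of the paper; the choice of 3 as a primitive root of 257 is standard.

      P = 257                                    # Fermat prime 2^8 + 1

      def ntt(a, inverse=False):
          n = len(a)
          assert 256 % n == 0, "length must divide 256"
          w = pow(3, 256 // n, P)                # n-th root of unity in GF(257)
          if inverse:
              w = pow(w, P - 2, P)               # modular inverse (Fermat's little theorem)
          out = [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P for i in range(n)]
          if inverse:
              n_inv = pow(n, P - 2, P)
              out = [(x * n_inv) % P for x in out]
          return out

      # Convolution property: pointwise products in the transform domain.
      x = [1, 2, 3, 4, 0, 0, 0, 0]
      y = [5, 6, 7, 0, 0, 0, 0, 0]
      X, Y = ntt(x), ntt(y)
      print(ntt([(a * b) % P for a, b in zip(X, Y)], inverse=True))
      # -> [5, 16, 34, 52, 45, 28, 0, 0], the cyclic convolution of x and y mod 257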

  7. Analysis of Usefulness of a Fuzzy Transform for Industrial Data Compression

    International Nuclear Information System (INIS)

    Sztyber, Anna

    2014-01-01

    This paper presents the first part of ongoing work on a detailed analysis of compression algorithms and the development of an algorithm for implementation in a real industrial data processing system. Fuzzy transforms give promising results in image compression. The main aim of this paper is to test whether fuzzy transforms can be applied to industrial data compression. Tests are carried out on data from the DAMADICS benchmark. A comparison is provided with piecewise linear compression, which is currently the industry standard. The last section discusses the results obtained and plans for future work.
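
    A minimal one-dimensional sketch of the direct and inverse fuzzy (F-)transform behind the compression idea, under the assumption of a uniform triangular fuzzy partition; the test signal and the number of components are arbitrary demonstration choices, not the DAMADICS settings.

      import numpy as np

      def triangular_partition(x, n_nodes):
          """Membership matrix A[k, i] of a uniform triangular fuzzy partition."""
          nodes = np.linspace(x[0], x[-1], n_nodes)
          h = nodes[1] - nodes[0]
          return np.clip(1.0 - np.abs(x[None, :] - nodes[:, None]) / h, 0.0, None)

      def f_transform(f, A):
          """Direct F-transform: one weighted mean per basis function."""
          return (A @ f) / A.sum(axis=1)

      def inverse_f_transform(F, A):
          """Inverse F-transform: weighted recombination of the components."""
          return (F @ A) / A.sum(axis=0)

      x = np.linspace(0.0, 10.0, 1000)
      signal = np.sin(x) + 0.1 * np.random.default_rng(0).standard_normal(x.size)

      A = triangular_partition(x, 25)            # 1000 samples -> 25 components
      F = f_transform(signal, A)
      reconstructed = inverse_f_transform(F, A)
      print("compression ratio       :", signal.size / F.size)
      print("RMS reconstruction error:", np.sqrt(np.mean((signal - reconstructed) ** 2)))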

  8. Impact of data transformation and preprocessing in supervised ...

    African Journals Online (AJOL)

    Impact of data transformation and preprocessing in supervised learning ... Nowadays, the ideas of integrating machine learning techniques in power system has ... The proposed algorithm used Python-based split train and k-fold model ...

  9. An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.

    Science.gov (United States)

    Kim, Jinkwon; Min, Se Dong; Lee, Myoungho

    2011-06-27

    Numerous studies have been conducted on heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to achieve robust performance, as biosignals vary greatly among individuals. Various methods have been proposed to reduce the differences arising from personal characteristics, but these tend to amplify the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation by using wavelets dedicated to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. Principal component analysis and linear discriminant analysis were used to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians.

  10. An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects

    Directory of Open Access Journals (Sweden)

    Min Se Dong

    2011-06-01

    Full Text Available Abstract Background Numerous studies have been conducted on heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to achieve robust performance, as biosignals vary greatly among individuals. Various methods have been proposed to reduce the differences arising from personal characteristics, but these tend to amplify the differences caused by arrhythmia. Methods In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation by using wavelets dedicated to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. Principal component analysis and linear discriminant analysis were used to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier. Results A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. Conclusions The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians.

  11. Reliability Analysis of Differential Relay as Main Protection Transformer Using Fuzzy Logic Algorithm

    Science.gov (United States)

    Mulyadi, Y.; Sucita, T.; Sumarto; Alpani, M.

    2018-02-01

    Electricity demand increases every year, which requires PT. PLN (Persero) to provide optimal service and customer satisfaction. Optimal service depends on the performance of the power-system equipment, especially the transformers. A power transformer is an electrical device that transforms electricity from high voltage to low voltage or vice versa. The electrical power system, including the transformer, is not free of disturbances, but these disturbances can be minimized by the protection system. The main protection of a transformer is the differential relay, which works on Kirchhoff's law: the inflowing currents equal the outflowing currents, and the relay operates when an excessive differential current appears. The relay itself, however, can also suffer degraded performance. This final project therefore analyzes the reliability of the differential relays on transformers in three different substations. According to the standard applied by the transmission-line protection officer, a differential relay should have a slope characteristic of 30% on the first slope and 80% on the second slope when two slopes are used (80% when a single slope is used), with an instantaneous operating time and the appropriate ratio. The results show that the Siemens differential relay has a reliable slope characteristic, with a value of 30 in the fuzzy logic system, whereas the ABB differential relay is only 80% reliable because two experiments were not reliable. For operating time, all the differential relays are instantaneous, with a value of 0.06 in the fuzzy logic system. For the ratio, the ABB differential relays have a better value than the other brands, with a value of 151 in the fuzzy logic system.
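
    The dual-slope percentage-differential characteristic mentioned above (30% first slope, 80% second slope) can be sketched as follows; the pickup current and the breakpoint between the slopes are assumed values, not the settings used in the study.

      def differential_trip(i_diff, i_restraint,
                            pickup=0.3, slope1=0.30, slope2=0.80, break_point=2.0):
          """True if the operating point lies in the trip region.

          All currents are in per-unit of the transformer rated current.
          """
          if i_restraint <= break_point:
              threshold = max(pickup, slope1 * i_restraint)
          else:
              # the second slope continues from the end of the first one
              threshold = slope1 * break_point + slope2 * (i_restraint - break_point)
          return i_diff > threshold

      print(differential_trip(0.2, 1.0))   # small unbalance      -> False (no trip)
      print(differential_trip(2.5, 1.5))   # heavy internal fault -> True  (trip)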

  12. Efficient Implementation of Nested-Loop Multimedia Algorithms

    Directory of Open Access Journals (Sweden)

    Kittitornkun Surin

    2001-01-01

    Full Text Available A novel dependence graph representation called the multiple-order dependence graph is proposed for nested-loop formulated multimedia signal processing algorithms. It allows a concise representation of an entire family of dependence graphs. This powerful representation facilitates the development of innovative implementation approaches for nested-loop formulated multimedia algorithms such as motion estimation, matrix-matrix product, 2D linear transforms, and others. In particular, an algebraic linear mapping (assignment and scheduling) methodology can be applied to implement such algorithms on an array of simple processing elements. The feasibility of this new approach is demonstrated for three major target architectures: application-specific integrated circuits (ASIC), field programmable gate arrays (FPGA), and a programmable clustered VLIW processor.

  13. Basic Theoretical Principles Pertaining to Thermal Protection of Oil Transformer

    Directory of Open Access Journals (Sweden)

    O. G. Shirokov

    2008-01-01

    Full Text Available The paper formulates the basic theoretical principles of thermal protection of an oil transformer in accordance with the classical theory of relay protection and the theory of diagnostics, with the aim of unifying the terminological and analytical information currently available on this problem. A classification of abnormal thermal modes of an oil transformer is given, and algorithms and methods for the operation of diagnostic thermal protection of a transformer are proposed.

  14. Experiences with serial and parallel algorithms for channel routing using simulated annealing

    Science.gov (United States)

    Brouwer, Randall Jay

    1988-01-01

    Two algorithms for channel routing using simulated annealing are presented. Simulated annealing is an optimization methodology which allows the solution process to back out of local minima that may be entered through inappropriate selections. By properly controlling the annealing process, it is very likely that the optimal solution to an NP-complete problem such as channel routing may be found. The algorithm presented here imposes very relaxed restrictions on the types of allowable transformations, including overlapping nets. By relaxing that restriction and controlling overlap situations with an appropriate cost function, the algorithm becomes very flexible and can be applied to many extensions of channel routing. The selection of transformations utilizes a number of heuristics while still retaining the pseudorandom nature of simulated annealing. The algorithm was implemented as a serial program for a workstation and as a parallel program designed for a hypercube computer. The details of the serial implementation are presented, including many of the heuristics used and some of the resulting solutions.
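
    The core annealing loop the abstract refers to, with the Metropolis acceptance rule that lets the search climb out of local minima, can be sketched generically as below; the cost function, move generator and cooling schedule are placeholders, and a channel router would substitute routing-specific versions.

      import math
      import random

      def simulated_annealing(initial_state, cost, random_move,
                              t_start=100.0, t_end=0.01, alpha=0.95, moves_per_t=200):
          state = best = initial_state
          t = t_start
          while t > t_end:
              for _ in range(moves_per_t):
                  candidate = random_move(state)
                  delta = cost(candidate) - cost(state)
                  # accept improvements always, uphill moves with probability exp(-delta/T)
                  if delta <= 0 or random.random() < math.exp(-delta / t):
                      state = candidate
                      if cost(state) < cost(best):
                          best = state
              t *= alpha                       # geometric cooling schedule
          return best

      # Toy usage: minimise a 1-D cost with several local minima.
      result = simulated_annealing(
          initial_state=0.0,
          cost=lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x),
          random_move=lambda x: x + random.uniform(-0.5, 0.5),
      )
      print(round(result, 2))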

  15. Image reconstruction from pairs of Fourier-transform magnitude

    International Nuclear Information System (INIS)

    Hunt, B.R.; Overman, T.L.; Gough, P.

    1998-01-01

    The retrieval of phase information from only the magnitude of the Fourier transform of a signal remains an important problem for many applications. We present an algorithm for phase retrieval when there exist two related sets of Fourier-transform magnitude data. The data are assumed to come from a single object observed in two different polarizations through a distorting medium, so the phase component of the Fourier transform of the object is corrupted. Phase retrieval is accomplished by minimization of a suitable criterion function, which can take three different forms. copyright 1998 Optical Society of America

  16. Multispectral image pansharpening based on the contourlet transform

    Energy Technology Data Exchange (ETDEWEB)

    Amro, Israa; Mateos, Javier, E-mail: iamro@correo.ugr.e, E-mail: jmd@decsai.ugr.e [Departamento de Ciencias de la Computacion e I.A., Universidad de Granada, 18071 Granada (Spain)

    2010-02-01

    Pansharpening is a technique that fuses the information of a low resolution multispectral image (MS) and a high resolution panchromatic image (PAN), usually remote sensing images, to provide a high resolution multispectral image. In the literature, this task has been addressed from different points of view, wavelet-based algorithms being among the most popular. Recently, the contourlet transform has been proposed; it combines the advantages of the wavelet transform with a more efficient representation of directional information. In this paper we propose a new pansharpening method based on contourlets, compare it with its wavelet counterpart, and assess its performance numerically and visually.

  17. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  18. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-01-01

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented
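
    A hedged sketch of the constrained least-squares idea: moments of order two and higher are perturbed as little as possible while the reconstructed scattering kernel is forced to be non-negative at a set of discrete cosines, with the zeroth and first moments held fixed. The sample moments and cosine grid below are made up for illustration and do not come from PARTISN.

      import numpy as np
      from numpy.polynomial.legendre import legval
      from scipy.optimize import minimize

      sigma = np.array([1.0, 0.7, 0.55, 0.42, 0.30, 0.18])   # assumed P5 moments
      mu = np.linspace(-1.0, 1.0, 33)                        # evaluation cosines

      def kernel(moments, mu):
          # f(mu) = sum_l (2l+1)/2 * sigma_l * P_l(mu)
          l = np.arange(len(moments))
          return legval(mu, (2 * l + 1) / 2.0 * moments)

      def objective(free):                                   # free = moments of order >= 2
          return np.sum((free - sigma[2:]) ** 2)

      def positivity(free):
          moments = np.concatenate((sigma[:2], free))        # orders 0 and 1 stay fixed
          return kernel(moments, mu)                         # must be >= 0 elementwise

      result = minimize(objective, x0=sigma[2:], method="SLSQP",
                        constraints=[{"type": "ineq", "fun": positivity}])
      new_moments = np.concatenate((sigma[:2], result.x))
      print("modified moments:", np.round(new_moments, 4))
      print("kernel minimum  :", kernel(new_moments, mu).min())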

  19. GCA-w Algorithms for Traffic Simulation

    International Nuclear Information System (INIS)

    Hoffmann, R.

    2011-01-01

    The GCA-w model (Global Cellular Automata with write access) is an extension of the GCA (Global Cellular Automata) model, which is based on the cellular automata (CA) model. Whereas the CA model uses static links to local neighbors, the GCA model uses dynamic links to potentially global neighbors. The GCA-w model is a further extension that allows modifying the neighbors' states, so that neighbors can be dynamically activated or deactivated. Algorithms can be described more concisely and may execute more efficiently because redundant computations can be avoided. Modeling traffic flow is a good example of the usefulness of the GCA-w model. The Nagel-Schreckenberg algorithm for traffic simulation is first described as a CA and a GCA, and then transformed into the GCA-w model. This algorithm is "exclusive-write", meaning that no write conflicts have to be resolved. Furthermore, the algorithm is extended to deactivate and reactivate cars stuck in a traffic jam in order to save computation time and energy. (author)
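
    For reference, the classical Nagel-Schreckenberg update rule that the GCA-w formulation starts from can be sketched as follows (acceleration, braking to the gap, random dawdling, movement on a ring road); road length, car density and dawdling probability are arbitrary demonstration values.

      import numpy as np

      def nasch_step(pos, vel, road_length, v_max=5, p_dawdle=0.3,
                     rng=np.random.default_rng(0)):
          order = np.argsort(pos)
          pos, vel = pos[order], vel[order]
          gap = (np.roll(pos, -1) - pos - 1) % road_length   # free cells to the car ahead
          vel = np.minimum(vel + 1, v_max)                   # 1. accelerate
          vel = np.minimum(vel, gap)                         # 2. brake to avoid collision
          vel = np.maximum(vel - (rng.random(vel.size) < p_dawdle), 0)   # 3. random dawdling
          pos = (pos + vel) % road_length                    # 4. move on the ring road
          return pos, vel

      road_length, n_cars = 100, 20
      rng = np.random.default_rng(1)
      pos = np.sort(rng.choice(road_length, n_cars, replace=False))
      vel = np.zeros(n_cars, dtype=int)
      for _ in range(50):
          pos, vel = nasch_step(pos, vel, road_length)
      print("mean speed after 50 steps:", vel.mean())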

  20. Analysis of computational complexity for HT-based fingerprint alignment algorithms on java card environment

    CSIR Research Space (South Africa)

    Mlambo, CS

    2015-01-01

    Full Text Available In this paper, implementations of three Hough Transform based fingerprint alignment algorithms are analyzed with respect to time complexity on Java Card environment. Three algorithms are: Local Match Based Approach (LMBA), Discretized Rotation Based...

  1. A hash-based image encryption algorithm

    Science.gov (United States)

    Cheddad, Abbas; Condell, Joan; Curran, Kevin; McKevitt, Paul

    2010-03-01

    There exist several algorithms that deal with text encryption. However, little research has been carried out to date on encrypting digital images or video files. This paper describes a novel way of encrypting digital images with password protection using a 1D SHA-2 algorithm coupled with a compound forward transform. A spatial mask is generated from the frequency domain by taking advantage of the conjugate symmetry of the complex imaginary part of the Fourier transform. This mask is then XORed with the bit stream of the original image. Exclusive OR (XOR) is a logical symmetric operation that yields 0 if both binary pixels are zeros or if both are ones, and 1 otherwise; it can be computed simply as (pixel1 + pixel2) mod 2. Finally, confusion is applied based on the displacement of the cipher's pixels in accordance with a reference mask. Both the security and the performance of the proposed method are analyzed, and the analysis shows that the method is efficient and secure from a cryptographic point of view. One of the merits of such an algorithm is that it forces a continuous-tone payload, a steganographic term, to map onto a balanced bit-distribution sequence. This bit balance is needed in certain applications, such as steganography and watermarking, since it is likely to have a balanced perceptibility effect on the cover image when embedding.
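
    A much-simplified sketch of the core masking step: a keystream expanded from the SHA-256 digest of a password is XORed with the image bytes. The published scheme additionally shapes its mask in the Fourier domain and applies a confusion stage, both of which are omitted here.

      import hashlib
      import numpy as np

      def keystream(password, n_bytes):
          """Expand a password into n_bytes of pseudo-random bytes via iterated SHA-256."""
          out, counter = bytearray(), 0
          while len(out) < n_bytes:
              out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
              counter += 1
          return np.frombuffer(bytes(out[:n_bytes]), dtype=np.uint8)

      def xor_image(image, password):
          """Encrypt or decrypt an 8-bit image; XOR is its own inverse."""
          mask = keystream(password, image.size).reshape(image.shape)
          return image ^ mask

      image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
      cipher = xor_image(image, "secret")
      print(np.array_equal(xor_image(cipher, "secret"), image))   # True: round trip recovers the image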

  2. Nonrigid synthetic aperture radar and optical image coregistration by combining local rigid transformations using a Kohonen network.

    Science.gov (United States)

    Salehpour, Mehdi; Behrad, Alireza

    2017-10-01

    This study proposes a new algorithm for nonrigid coregistration of synthetic aperture radar (SAR) and optical images. The proposed algorithm employs point features extracted by the binary robust invariant scalable keypoints algorithm and a new method called weighted bidirectional matching for initial correspondence. To refine false matches, we assume that the transformation between SAR and optical images is locally rigid. This property is used to refine false matches by assigning scores to matched pairs and clustering local rigid transformations using a two-layer Kohonen network. Finally, the thin plate spline algorithm and mutual information are used for nonrigid coregistration of SAR and optical images.

  3. Distinctive Features of Faults for Use in Power Transformer Differential Protection

    Directory of Open Access Journals (Sweden)

    Glazyrin V.E.

    2017-04-01

    Full Text Available The aim of this work is to study the change in the instantaneous values of the differential current in power transformer differential protection circuits under magnetizing inrush conditions, when an unloaded transformer is energized, and under fault conditions within the protection zone. Saturation of the measuring current transformers during the transient process distorts the signals in their secondary windings, which can cause a long delay in disconnecting the protected object and lead to the development of an accident in the power system if traditional protection algorithms are used. Taking the peculiarities of the change in the instantaneous values of the differential current into account when developing the protection algorithm makes it possible to recognize faults with maximum speed, before the first saturation of the electromagnetic current transformers, and thus avoid a delay in the operation of the protection. For quick and correct recognition of a fault within the protection zone, the authors propose monitoring the maximum value of the derivative of the differential current and the duration of its monotonic change from the onset of the transient process, because these parameters differ significantly between emergency and normal operation of the power transformer. Applying traditional protection algorithms together with the proposed methods increases the speed of the differential protection response in different operating modes of the power system. Magnetizing inrush and short circuits within the protection zone have been studied by mathematical simulation.

  4. Improved transformer protection using probabilistic neural network ...

    African Journals Online (AJOL)

    This article presents a novel technique to distinguish between magnetizing inrush current and internal fault current of power transformer. An algorithm has been developed around the theme of the conventional differential protection method in which parallel combination of Probabilistic Neural Network (PNN) and Power ...

  5. Parallel algorithms for computation of the manipulator inertia matrix

    Science.gov (United States)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm, which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations required to compute the diagonal elements of the matrix, resulting in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2N) algorithm is presented in both equation and graphic forms, which clearly show the parallelism inherent in the algorithm.

  6. Guaranteed convergence of the Hough transform

    Science.gov (United States)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as is computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations; convergence to the global maximum can be guaranteed only if sufficient a priori knowledge about the smoothness of the objective function is available. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed, and theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, the convergence guarantees are probabilistic.
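
    For concreteness, a minimal straight-line Hough transform with the normal (rho, theta) parameterization can be sketched as below; the quantization steps chosen here are arbitrary, and deciding how fine they must be is exactly the convergence question the abstract addresses.

      import numpy as np

      def hough_lines(points, rho_max, n_theta=180, n_rho=200):
          thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
          rhos = np.linspace(-rho_max, rho_max, n_rho)
          acc = np.zeros((n_rho, n_theta), dtype=int)
          cos_t, sin_t = np.cos(thetas), np.sin(thetas)
          for x, y in points:
              rho = x * cos_t + y * sin_t                    # one rho per theta
              idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
              acc[idx, np.arange(n_theta)] += 1              # one vote per (theta, rho) cell
          return acc, thetas, rhos

      # Noisy edge points roughly on the line y = 2x + 5.
      rng = np.random.default_rng(0)
      xs = rng.uniform(0, 50, 100)
      points = np.column_stack((xs, 2 * xs + 5 + rng.normal(0, 0.3, xs.size)))
      acc, thetas, rhos = hough_lines(points, rho_max=150.0)
      r_i, t_i = np.unravel_index(np.argmax(acc), acc.shape)
      print("theta ~", np.degrees(thetas[t_i]), "deg, rho ~", rhos[r_i])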

  7. z-transform DFT filters and FFT's

    DEFF Research Database (Denmark)

    Bruun, G.

    1978-01-01

    The paper shows how discrete Fourier transformation can be implemented as a filter bank in a way which reduces the number of filter coefficients. A particular implementation of such a filter bank is directly related to the normal complex FFT algorithm. The principle, developed further, leads to types of DFT filter banks which utilize a minimum of complex coefficients. These implementations lead to new forms of FFT's, among which is a cos/sin FFT for a real signal which only employs real coefficients. The new FFT algorithms use only half as many real multiplications as does the classical FFT.

  8. On constructing optimistic simulation algorithms for the discrete event system specification

    International Nuclear Information System (INIS)

    Nutaro, James J.

    2008-01-01

    This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models

  9. Pyramid algorithms as models of human cognition

    Science.gov (United States)

    Pizlo, Zygmunt; Li, Zheng

    2003-06-01

    There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of these mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.

  10. Some generalizations of the nonlocal transformations approach

    Directory of Open Access Journals (Sweden)

    V. A. Tychynin

    2015-02-01

    Full Text Available Some generalizations of the method of nonlocal transformations are proposed: a connection of given equations via prolonged nonlocal transformations and the finding of a solution adjoint to the solutions of the initial equation are considered. A concept of nonlocal transformation with additional variables is introduced, developed and used for searching for symmetries of differential equations. The problem of inverting a nonlocal transformation with additional variables is investigated and in some cases solved. Several examples are presented. The derived technique is applied to construct algorithms and formulae for the generation of solutions, and the formulae derived are used to construct exact solutions of some nonlinear equations.

  11. Polynomial Phase Estimation Based on Adaptive Short-Time Fourier Transform.

    Science.gov (United States)

    Jing, Fulong; Zhang, Chunjie; Si, Weijian; Wang, Yu; Jiao, Shuhong

    2018-02-13

    Polynomial phase signals (PPSs) have numerous applications in many fields including radar, sonar, geophysics, and radio communication systems. Therefore, estimation of PPS coefficients is very important. In this paper, a novel approach for PPS parameter estimation based on the adaptive short-time Fourier transform (ASTFT), called the PPS-ASTFT estimator, is proposed. With the PPS-ASTFT estimator, the one-dimensional and multi-dimensional searches and the error propagation problems that are widespread in the PPS field are avoided. In the proposed algorithm, the instantaneous frequency (IF) is estimated by the S-transform (ST), which preserves information on the signal phase and provides a variable resolution similar to the wavelet transform (WT). The width of the ASTFT analysis window is equal to the local stationary length, which is measured by the instantaneous frequency gradient (IFG). The IFG is calculated by principal component analysis (PCA), which is robust to noise. Moreover, to improve estimation accuracy, a refinement strategy is presented to estimate the signal parameters. Since the PPS-ASTFT avoids parameter search, the proposed algorithm can be computed in a reasonable amount of time. The estimation performance, computational cost, and implementation of the PPS-ASTFT are also analyzed. The conducted numerical simulations support our theoretical results and demonstrate an excellent statistical performance of the proposed algorithm.

  12. Vanishing points detection using combination of fast Hough transform and deep learning

    Science.gov (United States)

    Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry

    2018-04-01

    In this paper we propose a novel method for vanishing point detection based on a convolutional neural network (CNN) approach and the fast Hough transform algorithm. We show how to define a fast Hough transform neural network layer and how to use it to increase the usability of the neural network approach for the vanishing point detection task. Our algorithm consists of a CNN with a sequence of convolutional and fast Hough transform layers. We build an estimator for the distribution of possible vanishing points in the image; this distribution can be used to find vanishing point candidates. We provide experimental results from tests of the suggested method using images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise. The described approach can be efficiently implemented for mobile GPUs and CPUs.

  13. A novel iris localization algorithm using correlation filtering

    Science.gov (United States)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.

  14. An iris recognition algorithm based on DCT and GLCM

    Science.gov (United States)

    Feng, G.; Wu, Ye-qing

    2008-04-01

    As the range of human activity expands, reliable personal identification is becoming more and more important, and many different techniques for identity verification have been proposed for practical use. Conventional methods such as passwords and identification cards are not always reliable, so a wide variety of biometrics has been developed to address this challenge. Among biometric characteristics, the iris pattern has gained increasing attention for its stability, reliability, uniqueness, noninvasiveness and resistance to counterfeiting. These distinct merits of the iris lead to its high reliability for personal identification, and iris identification has become a hot research topic in the past several years. This paper presents an efficient algorithm for iris recognition using the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT). To obtain more representative iris features, features from both the spatial domain and the DCT transform domain are extracted: GLCM and DCT are both applied to the iris image to form the feature sequence, and their combination makes the iris features more distinctive. The extracted feature vector thus reflects properties of both the spatial and the frequency domains. Experimental results show that the algorithm is effective and feasible for iris recognition.
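
    A hedged sketch of the kind of combined feature vector described above, pairing GLCM texture statistics with low-frequency 2-D DCT coefficients; it assumes the scikit-image and SciPy packages, operates on an arbitrary pre-segmented 8-bit iris region, and the particular GLCM properties and DCT block size are illustrative choices.

      import numpy as np
      from scipy.fft import dctn
      from skimage.feature import graycomatrix, graycoprops

      def glcm_dct_features(region, dct_block=8):
          # GLCM texture descriptors (spatial-domain part of the feature vector)
          glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          texture = np.concatenate([graycoprops(glcm, p).ravel()
                                    for p in ("contrast", "homogeneity", "energy", "correlation")])
          # Low-frequency 2-D DCT coefficients (transform-domain part)
          coeffs = dctn(region.astype(float), norm="ortho")
          return np.concatenate([texture, coeffs[:dct_block, :dct_block].ravel()])

      iris_region = np.random.default_rng(0).integers(0, 256, (64, 256), dtype=np.uint8)
      features = glcm_dct_features(iris_region)
      print(features.shape)   # 4 properties x 2 angles + 64 DCT coefficients = (72,)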

  15. Eliminating the zero spectrum in Fourier transform profilometry using empirical mode decomposition.

    Science.gov (United States)

    Li, Sikun; Su, Xianyu; Chen, Wenjing; Xiang, Liqun

    2009-05-01

    Empirical mode decomposition is introduced into Fourier transform profilometry to extract the zero spectrum included in the deformed fringe pattern without the need for capturing two fringe patterns with pi phase difference. The fringe pattern is subsequently demodulated using a standard Fourier transform profilometry algorithm. With this method, the deformed fringe pattern is adaptively decomposed into a finite number of intrinsic mode functions that vary from high frequency to low frequency by means of an algorithm referred to as a sifting process. Then the zero spectrum is separated from the high-frequency components effectively. Experiments validate the feasibility of this method.

  16. A new algorithm for optimum voltage and reactive power control for minimizing transmission lines losses

    International Nuclear Information System (INIS)

    Ghoudjehbaklou, H.; Danai, B.

    2001-01-01

    Reactive power dispatch for voltage profile modification has long been of interest to power utilities. Local bus voltages can usually be altered by changing generator voltages, reactive shunts, ULTC transformers and SVCs. Determining optimum values for the control parameters, however, is not simple for modern power system networks, so heuristic, more intelligent algorithms have to be sought. In this paper a new algorithm is proposed that is based on a variant of a genetic algorithm combined with simulated annealing updates. In this algorithm a fuzzy multi-objective approach is used for the fitness function of the genetic algorithm. This fuzzy multi-objective function can efficiently modify the voltage profile in order to minimize transmission line losses, thus reducing operating costs. The reason for such a combination is to utilize the best characteristics of each method and overcome their deficiencies. The proposed algorithm is much faster than the classical genetic algorithm and can be easily integrated into existing power utility software. The proposed algorithm is tested on an actual system model with 1284 buses, 799 lines, 1175 fixed and ULTC transformers, 86 generators, 181 controllable shunts and 425 loads.

  17. Optical simulation of quantum algorithms using programmable liquid-crystal displays

    International Nuclear Information System (INIS)

    Puentes, Graciana; La Mela, Cecilia; Ledesma, Silvia; Iemmi, Claudio; Paz, Juan Pablo; Saraceno, Marcos

    2004-01-01

    We present a scheme to perform an all optical simulation of quantum algorithms and maps. The main components are lenses to efficiently implement the Fourier transform and programmable liquid-crystal displays to introduce space dependent phase changes on a classical optical beam. We show how to simulate Deutsch-Jozsa and Grover's quantum algorithms using essentially the same optical array programmed in two different ways

  18. A New Approach to High-accuracy Road Orthophoto Mapping Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2011-12-01

    Full Text Available Existing orthophoto maps based on satellite and aerial photography are not precise enough for road marking. This paper proposes a new approach to high-accuracy orthophoto mapping. The approach uses an inverse perspective transformation to process the image information and generate orthophoto fragments. An offline interpolation algorithm processes the location information: it combines the dead-reckoning and EKF localization data and uses the result to transform the fragments into the global coordinate system. Finally, a wavelet transform divides the image into two frequency bands, which are processed separately with a weighted median algorithm. Experimental results show that maps produced with this method have high accuracy.

  19. A COMPREHENSIVE MODEL FOR THE POWER TRANSFORMER DIGITAL DIFFERENTIAL PROTECTION FUNCTIONING RESEARCH

    Directory of Open Access Journals (Sweden)

    Yu. V. Rumiantsev

    2016-01-01

    Full Text Available This article presents a comprehensive model for studying the operation of the digital differential protection of a two-winding power transformer. The model is developed in the MatLab-Simulink dynamic simulation environment with the SimPowerSystems component library and includes the following elements: a power supply, a three-phase power transformer, wye-connected current transformers, and the digital differential protection model of the two-winding power transformer. Each element of the model is described in enough detail for its implementation in the dynamic simulation environment. Particular attention is paid to the digital signal processing principles and to the way the differential and restraining currents of the main element of the model, the power transformer digital differential protection, are formed. Using this model, the operation of the digital differential protection was studied during internal and external faults: an internal short circuit, and an external short circuit with and without current transformer saturation on the low-voltage side of the power transformer. Each experiment is illustrated with the differential and restraining current waveforms of the digital differential protection under study. Particular attention was paid to the analysis of the protection during abnormal power transformer modes, namely overexcitation and inrush current conditions: typical current waveforms for these modes are shown, their harmonic content is investigated, and their causes are analyzed in detail. Digital differential protection blocking algorithms based on the harmonic content are considered, their drawbacks are noted, and the need for their further technical improvement is pointed out.
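
    The harmonic-content blocking idea mentioned at the end can be sketched as a second-harmonic restraint check: the differential current over one power-frequency cycle is decomposed with an FFT, and tripping is blocked when the second harmonic is a large fraction of the fundamental. The 15% threshold and sampling parameters below are common textbook values, not those of the article.

      import numpy as np

      F_NOMINAL = 50.0                    # Hz (assumed system frequency)
      SAMPLES_PER_CYCLE = 32

      def harmonic_block(i_diff_cycle, ratio_threshold=0.15):
          """i_diff_cycle: samples of the differential current over one cycle."""
          spectrum = np.abs(np.fft.rfft(i_diff_cycle))
          fundamental, second = spectrum[1], spectrum[2]     # 50 Hz and 100 Hz bins
          return second > ratio_threshold * fundamental      # True -> block tripping

      t = np.arange(SAMPLES_PER_CYCLE) / (SAMPLES_PER_CYCLE * F_NOMINAL)
      fault_like = 5.0 * np.sin(2 * np.pi * F_NOMINAL * t)
      inrush_like = (2.0 * np.sin(2 * np.pi * F_NOMINAL * t)
                     + 0.8 * np.sin(2 * np.pi * 2 * F_NOMINAL * t))

      print(harmonic_block(fault_like))    # False: tripping is allowed
      print(harmonic_block(inrush_like))   # True: blocked as magnetizing inrush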

  20. Fourier transform resampling: Theory and application

    International Nuclear Information System (INIS)

    Hawkins, W.G.

    1996-01-01

    One of the most challenging problems in medical imaging is the development of reconstruction algorithms for nonstandard geometries. This work focuses on the application of Fourier analysis to the problem of resampling or rebinning. Conventional resampling methods utilizing some form of interpolation almost always result in a loss of resolution in the tomographic image. Fourier Transform Resampling (FTRS) offers potential improvement because the Modulation Transfer Function (MTF) of the process behaves like an ideal low pass filter. The MTF, however, is nonstationary if the coordinate transformation is nonlinear. FTRS may be viewed as a generalization of the linear coordinate transformations of standard Fourier analysis. Simulated MTF's were obtained by projecting point sources at different transverse positions in the flat fan beam detector geometry. These MTF's were compared to the closed form expression for FTRS, and excellent agreement was obtained for frequencies at or below the estimated cutoff frequency. The resulting FTRS algorithm is applied to simulations with symmetric fan beam geometry, an elliptical orbit and uniform attenuation, with a normalized root mean square error (NRMSE) of 0.036. Also, a Tc-99m point source study (1 cm dia., placed in air 10 cm from the COR) for a circular fan beam acquisition was reconstructed with a hybrid resampling method. The FWHM of the hybrid resampling method was 11.28 mm, which compares favorably with a direct reconstruction (FWHM: 11.03 mm)

  1. Canonical algorithms for numerical integration of charged particle motion equations

    Science.gov (United States)

    Efimov, I. N.; Morozov, E. A.; Morozova, A. R.

    2017-02-01

    A technique for numerically integrating the equation of charged particle motion in a magnetic field is considered. It is based on the canonical transformations of the phase space in Hamiltonian mechanics. The canonical transformations make the integration process stable against counting error accumulation. The integration algorithms contain a minimum possible amount of arithmetics and can be used to design accelerators and devices of electron and ion optics.

  2. Algorithm for three dimension reconstruction of magnetic resonance tomographs and X-ray images based on Fast Fourier Transform; Algoritmo para reconstrucao tridimensional de imagens de tomografos de ressonancia magnetica e de raio-X baseado no uso de Transformada Rapida de Fourier

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Josiane M.; Traina, Agma Juci M. [Sao Paulo Univ., Sao Carlos, SP (Brazil). Inst. de Ciencias Matematicas; Cruvinel, Paulo E. [EMBRAPA, Sao Carlos, SP (Brazil). CNPDIA

    1995-12-31

    This work presents an algorithm for three-dimensional digital image reconstruction. The algorithm is based on the combination of a Fast Fourier Transform method with a Hamming window and the use of a tri-linear interpolation function. It allows the generation not only of three-dimensional spatial spin distribution maps for Magnetic Resonance Tomography data but also of linear attenuation coefficient maps for X-ray CT scanners. The results demonstrate the usefulness of the algorithm for three-dimensional image reconstruction by first performing two-dimensional reconstruction and only afterwards interpolating. The algorithm was developed in the C++ language, and two versions are available: one for the DOS environment and the other for the UNIX/Sun environment. (author) 10 refs., 5 figs.

  3. A Global Optimization Algorithm for Sum of Linear Ratios Problem

    Directory of Open Access Journals (Sweden)

    Yuelin Gao

    2013-01-01

    Full Text Available We equivalently transform the sum-of-linear-ratios programming problem into a bilinear programming problem; then, using the linear characteristics of the convex and concave envelopes of the two-variable product function, a linear relaxation program of the bilinear problem is derived, which determines a lower bound on the optimal value of the original problem. On this basis, a branch and bound algorithm for solving the sum-of-linear-ratios programming problem is put forward, and the convergence of the algorithm is proved. Numerical experiments are reported to show the effectiveness of the proposed algorithm.

  4. The fuzzy Hough Transform-feature extraction in medical images

    International Nuclear Information System (INIS)

    Philip, K.P.; Dove, E.L.; Stanford, W.; Chandran, K.B.; McPherson, D.D.; Gotteiner, N.L.

    1994-01-01

    Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of the boundaries of an internal organ and, from this estimate, to determine a region of interest around the organ. Based on this rough estimate of the border location and the derived region of interest, the final estimate of the true borders is found with other image processing techniques. The authors present results demonstrating that the algorithm was successfully used to estimate the approximate location of the chest wall in humans and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough Transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications.

  5. A Fuzzy Homomorphic Algorithm for Image Enhancement | Nnolim ...

    African Journals Online (AJOL)

    The implementation and analysis of a novel fuzzy homomorphic image enhancement technique are presented. The technique combines the logarithmic transform with fuzzy membership functions to deliver an intuitive method of image enhancement. This algorithm reduces the computational complexity by eliminating the ...

  6. A Monte Carlo algorithm for the Vavilov distribution

    International Nuclear Information System (INIS)

    Yi, Chul-Young; Han, Hyon-Soo

    1999-01-01

    Using the convolution property of the inverse Laplace transform, an improved Monte Carlo algorithm for the Vavilov energy-loss straggling distribution of a charged particle is developed. The algorithm is relatively simple and gives sufficient accuracy to be used for most Monte Carlo applications.

  7. Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification

    Institute of Scientific and Technical Information of China (English)

    Xia; JING; Yan; BAO

    2015-01-01

    Each fusion algorithm has its own advantages and limitations, so it is very difficult to rank the fusion algorithms in a simple way; which algorithm should be used to fuse a given set of images also depends on the sensor type and the specific research purpose. Firstly, five fusion methods, i.e. IHS, Brovey, PCA, SFIM and Gram-Schmidt, are briefly described in the paper. Then visual judgment and quantitative statistical parameters are used to assess the five algorithms. Finally, in order to determine which is the most suitable fusion method for land cover classification of IKONOS imagery, maximum likelihood classification (MLC) was applied to the five fused images. The results showed that the SFIM and Gram-Schmidt transforms performed better than the other three fusion methods in improving spatial detail and preserving spectral information, with the Gram-Schmidt technique superior to SFIM in expressing image details. The classification accuracy of the images fused with Gram-Schmidt and SFIM was higher than that of the other three methods, with an overall accuracy greater than 98%. The IHS-fused image gave the lowest classification accuracy, with an overall accuracy and kappa coefficient of 83.14% and 0.76, respectively. Thus the IKONOS fusion images obtained with Gram-Schmidt and SFIM are better for improving land cover classification accuracy.
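
    Of the five methods compared, the Brovey transform is the simplest to state, so a hedged sketch is given below: each upsampled multispectral band is scaled by the ratio of the panchromatic band to the sum of the multispectral bands. The inputs are assumed to be co-registered float arrays of identical size.

      import numpy as np

      def brovey_fusion(ms, pan, eps=1e-6):
          """ms: (bands, H, W) multispectral cube resampled to the PAN grid;
          pan: (H, W) panchromatic image.  Returns the fused (bands, H, W) cube."""
          intensity = ms.sum(axis=0) + eps
          return ms * (pan / intensity)[None, :, :]

      rng = np.random.default_rng(0)
      ms = rng.uniform(0.0, 1.0, (4, 128, 128))    # 4-band MS, already upsampled
      pan = rng.uniform(0.0, 1.0, (128, 128))      # high-resolution PAN
      print(brovey_fusion(ms, pan).shape)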

  8. Single image super resolution algorithm based on edge interpolation in NSCT domain

    Science.gov (United States)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low resolution image is transformed by the NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high frequency sub-band coefficients are enlarged to the desired resolution by an interpolation method based on the edge direction. For high frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate a threshold; coefficients below the threshold are classified as noise or signal according to the correlation among the sub-bands of the same scale, and the noise is removed. An anisotropic diffusion filter is used to effectively enhance weak targets in low-contrast regions of target and background. Finally, the low frequency sub-band is enlarged to the desired resolution by bilinear interpolation and combined with the high frequency sub-band coefficients after de-noising and small-target enhancement, and the inverse NSCT is used to obtain the image at the desired resolution. To verify the effectiveness of the proposed algorithm, it was compared with several common image reconstruction methods on synthetic, motion-blurred and hyperspectral images. The experimental results show that, compared with traditional single-image algorithms, the proposed algorithm obtains smooth edges and good texture features, preserves the image structure well, and suppresses noise to some extent.

  9. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  10. Ship detection in satellite imagery using rank-order greyscale hit-or-miss transforms

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, Neal R [Los Alamos National Laboratory; Porter, Reid B [Los Alamos National Laboratory; Theiler, James [Los Alamos National Laboratory

    2010-01-01

    Ship detection from satellite imagery is something that has great utility in various communities. Knowing where ships are and their types provides useful intelligence information. However, detecting and recognizing ships is a difficult problem. Existing techniques suffer from too many false alarms. We describe approaches we have taken in trying to build ship detection algorithms with fewer false alarms. Our approach uses a version of the grayscale morphological Hit-or-Miss transform; while the standard form is well known and widely used, our version replaces the standard maximum and minimum operators in the dilation and erosion parts of the transform with rank-order selection. This provides some slack in the fitting that the algorithm employs and gives a way of tuning the algorithm's performance for particular detection problems. We describe our algorithms, show the effect of the rank-order parameter on performance, and illustrate the use of this approach on real ship detection problems with panchromatic satellite imagery.

  11. Transforming differential equations of multi-loop Feynman integrals into canonical form

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Christoph [Institut für Physik, Humboldt-Universität zu Berlin,12489 Berlin (Germany)

    2017-04-03

    The method of differential equations has been proven to be a powerful tool for the computation of multi-loop Feynman integrals appearing in quantum field theory. It has been observed that in many instances a canonical basis can be chosen, which drastically simplifies the solution of the differential equation. In this paper, an algorithm is presented that computes the transformation to a canonical basis, starting from some basis that is, for instance, obtained by the usual integration-by-parts reduction techniques. The algorithm requires the existence of a rational transformation to a canonical basis, but is otherwise completely agnostic about the differential equation. In particular, it is applicable to problems involving multiple scales and allows for a rational dependence on the dimensional regulator. It is demonstrated that the algorithm is suitable for current multi-loop calculations by presenting its successful application to a number of non-trivial examples.

  12. Transforming differential equations of multi-loop Feynman integrals into canonical form

    Science.gov (United States)

    Meyer, Christoph

    2017-04-01

    The method of differential equations has been proven to be a powerful tool for the computation of multi-loop Feynman integrals appearing in quantum field theory. It has been observed that in many instances a canonical basis can be chosen, which drastically simplifies the solution of the differential equation. In this paper, an algorithm is presented that computes the transformation to a canonical basis, starting from some basis that is, for instance, obtained by the usual integration-by-parts reduction techniques. The algorithm requires the existence of a rational transformation to a canonical basis, but is otherwise completely agnostic about the differential equation. In particular, it is applicable to problems involving multiple scales and allows for a rational dependence on the dimensional regulator. It is demonstrated that the algorithm is suitable for current multi-loop calculations by presenting its successful application to a number of non-trivial examples.

  13. Transforming differential equations of multi-loop Feynman integrals into canonical form

    International Nuclear Information System (INIS)

    Meyer, Christoph

    2017-01-01

    The method of differential equations has been proven to be a powerful tool for the computation of multi-loop Feynman integrals appearing in quantum field theory. It has been observed that in many instances a canonical basis can be chosen, which drastically simplifies the solution of the differential equation. In this paper, an algorithm is presented that computes the transformation to a canonical basis, starting from some basis that is, for instance, obtained by the usual integration-by-parts reduction techniques. The algorithm requires the existence of a rational transformation to a canonical basis, but is otherwise completely agnostic about the differential equation. In particular, it is applicable to problems involving multiple scales and allows for a rational dependence on the dimensional regulator. It is demonstrated that the algorithm is suitable for current multi-loop calculations by presenting its successful application to a number of non-trivial examples.

  14. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs like the line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; quantum walks on generic graphs, describing methods to calculate the limiting d...

  15. The Generalized Legendre transform and its applications to inverse spectral problems

    OpenAIRE

    Guillemin, Victor; Wang, Zuoqin

    2015-01-01

    Let $M$ be a Riemannian manifold, $\tau: G \times M \to M$ an isometric action on $M$ of an $n$-torus $G$ and $V: M \to \mathbb{R}$ a bounded $G$-invariant smooth function. By $G$-invariance the Schrödinger operator, $P=-\hbar^2 \Delta_M+V$, restricts to a self-adjoint operator on $L^2(M)_{\alpha/\hbar}$, $\alpha$ being a weight of $G$ and $1/\hbar$ a large positive integer. Let $[c_\alpha, \infty)$ be the asymptotic support of the spectrum of this operator. We will show that $c_\alpha$ exte...

  16. Efficient production of Aschersonia placenta protoplasts for transformation using optimization algorithms.

    Science.gov (United States)

    Wei, Xiuyan; Song, Xinyue; Dong, Dong; Keyhani, Nemat O; Yao, Lindan; Zang, Xiangyun; Dong, Lili; Gu, Zijian; Fu, Delai; Liu, Xingzhong; Qiu, Junzhi; Guan, Xiong

    2016-07-01

    The insect pathogenic fungus Aschersonia placenta is a highly effective pathogen of whiteflies and scale insects. However, few genetic tools are currently available for studying this organism. Here we report on the conditions for the production of transformable A. placenta protoplasts using an optimized protocol based on the response surface method (RSM). Critical parameters for protoplast production were modelled by using a Box-Behnken design (BBD) involving 3 levels of 3 variables that was subsequently tested to verify its ability to predict protoplast production (R^2 = 0.9465). The optimized conditions resulted in the highest yield of protoplasts ((4.41 ± 0.02) × 10^7 cells/mL of culture, mean ± SE) when fungal cells were treated with 26.1 mg/mL of lywallzyme for 4 h of digestion, and subsequently allowed to recover for 64.6 h in 0.7 mol/L NaCl-Tris buffer. The latter was used as an osmotic stabilizer. The yield of protoplasts was approximately 10-fold higher than that of the nonoptimized conditions. Generated protoplasts were transformed with vector PbarGPE containing the bar gene as the selection marker. Transformation efficiency was 300 colonies/(μg DNA·10^7 protoplasts), and integration of the vector DNA was confirmed by PCR. The results show that rational design strategies (RSM and BBD methods) are useful to increase the production of fungal protoplasts for a variety of downstream applications.

  17. AN EFFICIENT, BOX SHAPE INDEPENDENT NONBONDED FORCE AND VIRIAL ALGORITHM FOR MOLECULAR-DYNAMICS

    NARCIS (Netherlands)

    Bekker, H.; Dijkstra, E.J; Renardus, M.K.R.; Berendsen, H.J.C.

    1995-01-01

    A notation is introduced and used to transform a conventional specification of the non-bonded force and virial algorithm in the case of periodic boundary conditions into an alternative specification. The implementation of the transformed specification is simpler and typically a factor of 1.5 faster

  18. Discrete quantum Fourier transform in coupled semiconductor double quantum dot molecules

    International Nuclear Information System (INIS)

    Dong Ping; Yang Ming; Cao Zhuoliang

    2008-01-01

    In this Letter, we present a physical scheme for implementing the discrete quantum Fourier transform in a coupled semiconductor double quantum dot system. The main controlled-R gate operation can be decomposed into many simple and feasible unitary transformations. The current scheme would be a useful step towards the realization of complex quantum algorithms in the quantum dot system
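
    As a small numerical aside (not the quantum-dot implementation of the Letter), the discrete quantum Fourier transform acting on n qubits is the N x N unitary built below with NumPy; the sketch checks unitarity and agreement with the orthonormal inverse DFT, which fixes the sign convention of the exponent.

        import numpy as np

        def qft_matrix(n_qubits):
            """N x N quantum Fourier transform matrix, N = 2**n_qubits."""
            N = 2 ** n_qubits
            j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
            return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

        n = 3
        F = qft_matrix(n)
        # Unitarity check.
        assert np.allclose(F.conj().T @ F, np.eye(2 ** n))
        # Acting on an arbitrary state vector, the QFT agrees with the
        # orthonormal inverse DFT (same sign convention in the exponent).
        psi = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
        psi /= np.linalg.norm(psi)
        assert np.allclose(F @ psi, np.fft.ifft(psi, norm="ortho"))
        print("QFT matrix built and verified for", n, "qubits")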

  19. Application of wavelet transform in seismic signal processing

    International Nuclear Information System (INIS)

    Ghasemi, M. R.; Mohammadzadeh, A.; Salajeghe, E.

    2005-01-01

    The wavelet transform is a new tool for signal analysis which can perform simultaneous time and frequency representations of a signal. Under Multi Resolution Analysis, one can quickly determine details of signals and their properties using Fast Wavelet Transform algorithms. In this paper, for a better physical understanding of a signal and its basic algorithms, Multi Resolution Analysis together with wavelet transforms in the form of Digital Signal Processing will be discussed. For Seismic Signal Processing, sets of orthonormal Daubechies wavelets are suggested. When dealing with the application of wavelets in SSP, one may consider denoising of the signal and compression of the data contained in the signal, which is important in seismic signal data processing. Using these techniques, the EL-Centro and Nagan signals were remodeled with 25% of the total points, giving satisfactory results with an acceptable error drift. Thus a total of 1559 and 2500 points for the EL-Centro and Nagan seismic curves were reduced to 389 and 625 points respectively, with a very reasonable error drift, details of which are recorded in the paper. Finally, future progress in signal processing based on wavelet theory will be pointed out
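
    The kind of compression experiment described above (keeping a fraction of the wavelet coefficients and reconstructing) can be reproduced with the PyWavelets package, assumed installed as pywt. The synthetic signal, the Daubechies order and the 25% retention ratio below are illustrative assumptions, not the EL-Centro or Nagan records used by the authors.

        import numpy as np
        import pywt  # PyWavelets, assumed installed

        # Synthetic stand-in for a seismic record (the paper uses real accelerograms).
        t = np.linspace(0.0, 10.0, 2048)
        signal = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t) + 0.1 * np.random.randn(t.size)

        # Orthonormal Daubechies decomposition, as suggested in the abstract.
        coeffs = pywt.wavedec(signal, "db4", level=5)
        flat, slices = pywt.coeffs_to_array(coeffs)

        # Keep only the largest 25% of coefficients (by magnitude), zero the rest.
        keep = int(0.25 * flat.size)
        threshold = np.sort(np.abs(flat))[-keep]
        flat_compressed = np.where(np.abs(flat) >= threshold, flat, 0.0)

        reconstructed = pywt.waverec(
            pywt.array_to_coeffs(flat_compressed, slices, output_format="wavedec"), "db4")
        error = np.linalg.norm(signal - reconstructed[:signal.size]) / np.linalg.norm(signal)
        print(f"relative reconstruction error with 25% of coefficients: {error:.3%}")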

  20. Analysis and removing noise from speech using wavelet transform

    Science.gov (United States)

    Tomala, Karel; Voznak, Miroslav; Partila, Pavol; Rezac, Filip; Safarik, Jakub

    2013-05-01

    The paper discusses the use of the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT) in removing noise from voice samples and evaluates the impact on speech quality. One significant part of Quality of Service (QoS) in communication technology is speech quality assessment. However, this part is seriously overlooked as telecommunication providers often focus on increasing network capacity, expansion of services offered and their enforcement in the market. Among the fundamental factors affecting the transmission properties of the communication chain is noise, either at the transmitter or the receiver side. The wavelet transform (WT) is a modern tool for signal processing. One of the most significant areas in which wavelet transforms are used is applications designed to suppress noise in signals. To remove noise from the voice sample in our experiment, we used a reference segment of the voice which was distorted by Gaussian white noise. An evaluation of the impact on speech quality was carried out by the intrusive objective algorithm Perceptual Evaluation of Speech Quality (PESQ). DWT and SWT transformations were applied to voice samples that were degraded by Gaussian white noise. Afterwards, we determined the effectiveness of DWT and SWT by means of the objective algorithm PESQ. The decisive criterion for determining the quality of a voice sample once the noise had been removed was the Mean Opinion Score (MOS), which we obtained from PESQ. The contribution of this work lies in the evaluation of the efficiency of wavelet transformations in suppressing noise in voice samples.

  1. Computer Generation of Fourier Transform Libraries for Distributed Memory Architectures

    Science.gov (United States)

    2010-12-01

    tractions used in quantum chemistry. It too performs algebraic transformations to minimize the operations count, and then optimizes code based on ... existing parallel DFT algorithms, including their strengths and weaknesses. Four-step FFT. The four-step algorithm [Hegland, 1994; Norton and Silberger, 1987] ... Sadayappan, and Alexander Sibiryakov. Synthesis of high-performance parallel programs for a class of ab initio quantum chemistry models. Proc. of

  2. Improved algorithms for circuit fault diagnosis based on wavelet packet and neural network

    International Nuclear Information System (INIS)

    Zhang, W-Q; Xu, C

    2008-01-01

    In this paper, two improved BP neural network algorithms for fault diagnosis of analog circuits are presented, using the optimal wavelet packet transform (OWPT) or the incomplete wavelet packet transform (IWPT) as a preprocessor. The purpose of the preprocessing is to reduce the number of nodes in the input layer and hidden layer of the BP neural network, so that the neural network gains faster training and convergence speed. At first, we apply OWPT or IWPT to the response signal of the circuit under test (CUT), and then calculate the normalized energy of each frequency band. The normalized energy is used to train the BP neural network to diagnose faulty components in the analog circuit. These two algorithms need only a small network size, while having faster learning and convergence speed. Finally, simulation results illustrate that the two algorithms are effective for fault diagnosis
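
    The band-energy preprocessing step can be sketched with PyWavelets' wavelet packet objects; the test signal, the wavelet and the decomposition depth in this example are assumptions for illustration rather than the circuit responses used in the paper.

        import numpy as np
        import pywt  # PyWavelets, assumed installed

        def normalized_band_energies(signal, wavelet="db3", level=3):
            """Normalized energy of each frequency band of a wavelet packet tree."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                    mode="symmetric", maxlevel=level)
            # Terminal nodes of the tree, ordered by frequency band.
            bands = [node.data for node in wp.get_level(level, order="freq")]
            energies = np.array([np.sum(band ** 2) for band in bands])
            return energies / energies.sum()   # feature vector for the BP network

        # Stand-in for a circuit-under-test response signal.
        t = np.linspace(0.0, 1.0, 1024)
        response = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
        features = normalized_band_energies(response)
        print(features.round(4))   # 2**level = 8 normalized band energies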

  3. Phase-unwrapping algorithm by a rounding-least-squares approach

    Science.gov (United States)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method with a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and user-free, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
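
    For context, a standard global least-squares unwrapper of the family the authors benchmark against (a DCT-based Poisson solver in the spirit of the FFT method) can be sketched as follows; this is not the proposed rounding-least-squares algorithm, and the helper names and test phase are invented for the example.

        import numpy as np
        from scipy.fft import dctn, idctn

        def wrap(a):
            """Wrap values into (-pi, pi]."""
            return (a + np.pi) % (2.0 * np.pi) - np.pi

        def unwrap_least_squares(psi):
            """Unweighted least-squares phase unwrapping via a DCT Poisson solver."""
            M, N = psi.shape
            # Wrapped forward differences (zero beyond the last row/column).
            dx = np.zeros((M, N)); dy = np.zeros((M, N))
            dx[:, :-1] = wrap(psi[:, 1:] - psi[:, :-1])
            dy[:-1, :] = wrap(psi[1:, :] - psi[:-1, :])
            # Divergence of the wrapped gradient: right-hand side of the Poisson equation.
            rho = dx.copy(); rho[:, 1:] -= dx[:, :-1]
            rho += dy;       rho[1:, :] -= dy[:-1, :]
            # Solve Laplacian(phi) = rho with Neumann boundaries using the 2D DCT.
            i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
            denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
            phi_hat = dctn(rho, norm="ortho")
            phi_hat[denom != 0] /= denom[denom != 0]
            phi_hat[0, 0] = 0.0            # the constant offset is unrecoverable
            return idctn(phi_hat, norm="ortho")

        # Smooth test phase, wrapped and then unwrapped again.
        y, x = np.mgrid[0:128, 0:128]
        true_phase = 0.002 * ((x - 64.0) ** 2 + (y - 64.0) ** 2)
        recovered = unwrap_least_squares(wrap(true_phase))
        residual = recovered - true_phase
        print(np.std(residual - residual.mean()))   # close to zero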

  4. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT

    Directory of Open Access Journals (Sweden)

    Cunsuo Pang

    2016-09-01

    Full Text Available This paper proposes a time-frequency algorithm based on short-time fractional order Fourier transformation (STFRFT) for the identification of targets with complicated movement. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computation load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT’s performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to an LFM (Linear frequency modulated) pulse radar, SAR (Synthetic aperture radar), or ISAR (Inverse synthetic aperture radar), for improving the probability of target recognition.

  5. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT.

    Science.gov (United States)

    Pang, Cunsuo; Han, Yan; Hou, Huiling; Liu, Shengheng; Zhang, Nan

    2016-09-24

    This paper proposes a time-frequency algorithm based on short-time fractional order Fourier transformation (STFRFT) for the identification of targets with complicated movement. This algorithm, consisting of an STFRFT order-changing and quick selection method, is effective in reducing the computation load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to an LFM (Linear frequency modulated) pulse radar, SAR (Synthetic aperture radar), or ISAR (Inverse synthetic aperture radar), for improving the probability of target recognition.

  6. A non-linear discrete transform for pattern recognition of discrete chaotic systems

    International Nuclear Information System (INIS)

    Karanikas, C.; Proios, G.

    2003-01-01

    It is shown, by an invertible non-linear discrete transform, that any finite sequence or any collection of strings of any length can be presented as a random walk on trees. These transforms create the mathematical background for coding any information and for exploring its local variability and diversity. With the underlying computational algorithms, and with several examples and applications, we propose that these transforms can be used for pattern recognition of immune type. In other words, we propose a mathematical platform for detecting self and non-self strings of any alphabet, based on negative selection algorithms, for scouting data's periodicity and self-similarity and for measuring the diversity of chaotic strings with fractal dimension methods. In particular we estimate successfully the entropy and the ratio of chaotic data with self-similarity. Moreover we give some applications of a non-linear denoising filter

  7. A non-linear discrete transform for pattern recognition of discrete chaotic systems

    CERN Document Server

    Karanikas, C

    2003-01-01

    It is shown, by an invertible non-linear discrete transform, that any finite sequence or any collection of strings of any length can be presented as a random walk on trees. These transforms create the mathematical background for coding any information and for exploring its local variability and diversity. With the underlying computational algorithms, and with several examples and applications, we propose that these transforms can be used for pattern recognition of immune type. In other words, we propose a mathematical platform for detecting self and non-self strings of any alphabet, based on negative selection algorithms, for scouting data's periodicity and self-similarity and for measuring the diversity of chaotic strings with fractal dimension methods. In particular we estimate successfully the entropy and the ratio of chaotic data with self-similarity. Moreover we give some applications of a non-linear denoising filter.

  8. Evaluation of segmentation algorithms for generation of patient models in radiofrequency hyperthermia

    International Nuclear Information System (INIS)

    Wust, P.; Gellermann, J.; Beier, J.; Tilly, W.; Troeger, J.; Felix, R.; Wegner, S.; Oswald, H.; Stalling, D.; Hege, H.C.; Deuflhard, P.

    1998-01-01

    Time-efficient and easy-to-use segmentation algorithms (contour generation) are a precondition for various applications in radiation oncology, especially for planning purposes in hyperthermia. We have developed the three following algorithms for contour generation and implemented them in an editor of the HyperPlan hyperthermia planning system. Firstly, a manual contour input with numerous correction and editing options. Secondly, a volume growing algorithm with adjustable threshold range and minimal region size. Thirdly, a watershed transformation in two and three dimensions. In addition, the region input function of the Helax commercial radiation therapy planning system was available for comparison. All four approaches were applied under routine conditions to two-dimensional computed tomographic slices of the superior thoracic aperture, mid-chest, upper abdomen, mid-abdomen, pelvis and thigh; they were also applied to a 3D CT sequence of 72 slices using the three-dimensional extension of the algorithms. Time to generate the contours and their quality with respect to a reference model were determined. Manual input for a complete patient model required approximately 5 to 6 h for 72 CT slices (4.5 min/slice). If slight irregularities at object boundaries are accepted, this time can be reduced to 3.5 min/slice using the volume growing algorithm. However, generating a tetrahedron mesh from such a contour sequence for hyperthermia planning (the basis for finite-element algorithms) requires a significant amount of postediting. With the watershed algorithm extended to three dimensions, processing time can be further reduced to 3 min/slice while achieving satisfactory contour quality. Therefore, this method is currently regarded as offering some potential for efficient automated model generation in hyperthermia. In summary, the 3D volume growing algorithm and watershed transformation are both suitable for segmentation of even low-contrast objects. However, they are not
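
    A minimal marker-based watershed segmentation of the kind evaluated above can be sketched with SciPy and a recent scikit-image (an illustrative toy on synthetic data, not the HyperPlan implementation; the object sizes and the 0.7 threshold are arbitrary choices):

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.segmentation import watershed  # scikit-image, assumed installed

        # Synthetic binary "CT slice": two overlapping circular objects.
        yy, xx = np.mgrid[0:200, 0:200]
        binary = (((xx - 70) ** 2 + (yy - 100) ** 2 < 40 ** 2) |
                  ((xx - 130) ** 2 + (yy - 100) ** 2 < 40 ** 2))

        # Distance transform: bright ridges at the object centres.
        distance = ndi.distance_transform_edt(binary)

        # Markers: one connected region per distance maximum (coarse threshold).
        markers, n_markers = ndi.label(distance > 0.7 * distance.max())

        # Watershed on the inverted distance map, restricted to the foreground mask.
        labels = watershed(-distance, markers, mask=binary)
        print(n_markers, "markers ->", labels.max(), "segmented objects")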

  9. Improved peak detection in mass spectrum by incorporating continuous wavelet transform-based pattern matching.

    Science.gov (United States)

    Du, Pan; Kibbe, Warren A; Lin, Simon M

    2006-09-01

    A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and the aggressiveness of the baseline correction, which contribute to making peak detection results inconsistent between runs, instrumentation and analysis methods. Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and a shape-matching function that yields a 'goodness of fit' coefficient should provide a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. By transforming the spectrum into wavelet space, the pattern-matching problem is simplified and, in addition, a powerful technique is provided for identifying and separating the signal from spike noise and colored noise. This transformation, with the additional information provided by the 2D CWT coefficients, can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, and this improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated with SELDI-TOF spectra with known polypeptide positions. Comparisons with two other popular algorithms were performed. The results show the CWT-based algorithm can identify both strong and weak peaks while keeping the false positive rate low. The algorithm is implemented in R and will be
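
    SciPy ships a CWT-based ridge-detection peak finder in the same spirit as the approach described here; the toy usage below (synthetic spectrum, arbitrary width range) is only an approximation of the idea, since the published algorithm itself is implemented in R.

        import numpy as np
        from scipy.signal import find_peaks_cwt

        # Synthetic "mass spectrum": three Gaussian peaks plus noise.
        x = np.arange(2000)
        spectrum = (100 * np.exp(-0.5 * ((x - 400) / 8) ** 2)
                    + 60 * np.exp(-0.5 * ((x - 900) / 15) ** 2)
                    + 30 * np.exp(-0.5 * ((x - 1500) / 25) ** 2)
                    + np.random.normal(scale=3.0, size=x.size))

        # Ridge detection across CWT scales; widths span the expected peak widths.
        peak_indices = find_peaks_cwt(spectrum, widths=np.arange(5, 40), min_snr=2)
        print(peak_indices)   # expected near 400, 900 and 1500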

  10. Harmonic Domain Modelling of Transformer Core Nonlinearities Using the DIgSILENT PowerFactory Software

    OpenAIRE

    Bak, Claus Leth; Bak-Jensen, Birgitte; Wiechowski, Wojciech

    2008-01-01

    This paper demonstrates the results of the implementation and verification of an already existing algorithm that allows for calculating the saturation characteristics of single-phase power transformers. The algorithm was described for the first time in 1993. Now this algorithm has been implemented using the DIgSILENT Programming Language (DPL) as an external script in the harmonic domain calculations of the power system analysis tool PowerFactory [10]. The algorithm is verified by harmonic measurements ...

  11. A comparative study of image low level feature extraction algorithms

    Directory of Open Access Journals (Sweden)

    M.M. El-gayar

    2013-07-01

    Full Text Available Feature extraction and matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods for assessing the performance of popular image matching algorithms are presented and rely on costly descriptors for detection and matching. Specifically, the method assesses the type of images under which each of the algorithms reviewed herein performs at its maximum or highest efficiency. The efficiency is measured in terms of the number of matches found by the algorithm and the number of type I and type II errors encountered when the algorithm is tested against a specific pair of images. Current comparative studies assess the performance of the algorithms based on the results obtained under different criteria such as speed, sensitivity, occlusion, and others. This study addresses the limitations of the existing comparative tools and delivers a generalized criterion to determine beforehand the level of efficiency expected from a matching algorithm given the type of images evaluated. The algorithms and the respective images used within this work are divided into two groups: feature-based and texture-based. From this broad classification, only the most widely used algorithms are assessed: color histogram, FAST (Features from Accelerated Segment Test), SIFT (Scale Invariant Feature Transform), PCA-SIFT (Principal Component Analysis-SIFT), F-SIFT (fast-SIFT) and SURF (speeded up robust features). The performance of the fast-SIFT (F-SIFT) feature detection method is compared for scale changes, rotation, blur, illumination changes and affine transformations. All the experiments use repeatability measurements and the number of correct matches for the evaluation. SIFT shows its stability in most situations although it is slow. F-SIFT is the fastest one with good performance, much the same as SURF; SIFT and PCA-SIFT show their advantages under rotation and illumination changes.
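
    A typical feature-based matching pipeline of the kind compared above can be sketched with the opencv-python bindings (cv2 with SIFT available, assumed version 4.4 or later); the synthetic test image, rotation angle and ratio-test threshold are made-up choices for the example.

        import cv2
        import numpy as np

        # Synthetic test image with some structure, and a rotated copy of it.
        img1 = np.zeros((256, 256), dtype=np.uint8)
        cv2.rectangle(img1, (60, 60), (140, 140), 255, -1)
        cv2.circle(img1, (190, 190), 30, 180, -1)
        M = cv2.getRotationMatrix2D((128, 128), 30, 1.0)
        img2 = cv2.warpAffine(img1, M, (256, 256))

        # SIFT keypoints and descriptors for both images.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Brute-force matching with Lowe's ratio test to discard ambiguous matches.
        good = []
        if des1 is not None and des2 is not None:
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            for pair in matcher.knnMatch(des1, des2, k=2):
                if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                    good.append(pair[0])
        print(len(kp1), "and", len(kp2), "keypoints,", len(good), "good matches")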

  12. Spectrums Transform Operators in Bases of Fourier and Walsh Functions

    Directory of Open Access Journals (Sweden)

    V. V. Syuzev

    2017-01-01

    Full Text Available The problem of synthesizing efficient algorithms for the digital processing of discrete signals requires transforming signal spectra from one basis system into another. A rational solution to this problem is to construct the Fourier kernel, which is the spectrum of one system's basis functions expressed in the functions of the other basis. However, the properties of the Fourier kernel are not equally well studied and described for all basis systems of practical importance. The article sets the task and presents an original way to solve the problem of the mutual transformation of a trigonometric Fourier spectrum into Walsh spectra of different basis systems. The relevance of this theoretical and applied problem is explained, on the one hand, by the prevalence of the trigonometric Fourier basis for the harmonic representation of digital signals, and, on the other hand, by the fact that Walsh basis systems allow efficient algorithms for simulating signals. The problem is solved by building a Fourier kernel of a special structure that allows us to establish independent groups of Fourier and Walsh spectrum coefficients, further reducing the computational complexity of the transform algorithms. The article analyzes the properties of the system of trigonometric Fourier functions and shows its completeness. It considers the Walsh function basis systems in three versions, namely those of Hadamard, Paley, and Harmuth, giving different orderings and analytical descriptions of the functions that make up the basis, and proves the completeness of these systems. Sequentially, for each of the three Walsh systems, the analytical curves for the Fourier kernel components are obtained, and the Fourier kernels themselves are built with a binary-rational number of samples of the basis functions. The kernels are presented in matrix form and, as an example, recorded for a particular value of the discrete interval N equal to 8. The analysis of the spectral coefficients of the Fourier kernel
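
    The idea of a Fourier kernel carrying one spectrum into the other can be demonstrated numerically. The NumPy/SciPy sketch below builds, for N = 8, the matrix that maps an (unnormalized) DFT spectrum to the Hadamard-ordered Walsh spectrum and checks it on a random signal; the normalizations are choices made for this example and may differ from the article's conventions.

        import numpy as np
        from scipy.linalg import hadamard, dft

        N = 8
        H = hadamard(N)     # Hadamard-ordered Walsh functions (+/-1 matrix)
        F = dft(N)          # unnormalized DFT matrix, F @ x == np.fft.fft(x)

        # Kernel mapping the Fourier spectrum X = F x to the Walsh spectrum w = (1/N) H x:
        #   x = (1/N) conj(F) X  (since F conj(F) = N I), hence w = (1/N**2) H conj(F) X.
        K = (H @ F.conj()) / N ** 2

        x = np.random.randn(N)
        X = np.fft.fft(x)                   # trigonometric Fourier spectrum
        w_direct = (H @ x) / N              # Walsh spectrum from the samples
        w_via_kernel = K @ X                # Walsh spectrum from the Fourier spectrum
        assert np.allclose(w_direct, w_via_kernel.real)
        print(np.round(w_direct, 4))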

  13. Still Image Compression Algorithm Based on Directional Filter Banks

    OpenAIRE

    Chunling Yang; Duanwu Cao; Li Ma

    2010-01-01

    Hybrid wavelet and directional filter banks (HWD) is an effective multi-scale geometrical analysis method. Compared to the wavelet transform, it can better capture the directional information of images. But the ringing artifact, which is caused by the coefficient quantization in the transform domain, is the biggest drawback of image compression algorithms in the HWD domain. In this paper, by researching the relationship between directional decomposition and the ringing artifact, an improved decomposition ...

  14. The Simulation of the Traction Drive with Middle-Frequency Transformer

    Directory of Open Access Journals (Sweden)

    Pavel Drabek

    2008-01-01

    Full Text Available This paper presents research motivated by industrial demand for a special traction drive topology devoted to minimizing the traction transformer weight compared with a topology using a classical 50 Hz traction transformer. The special traction drive topology for AC power systems, which consists of an input high-voltage trolley converter (a single-phase matrix converter), a middle-frequency transformer, an output converter and a traction motor, has been described. The main attention has been given to the control algorithm of the traction topology (insertion of the NULL vector of the matrix converter and two-value control of the secondary active rectifier).

  15. Analytical fan-beam and cone-beam reconstruction algorithms with uniform attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T

    2005-01-01

    In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections

  16. Optimized Fast Walsh–Hadamard Transform on GPUs for non-binary LDPC decoding

    OpenAIRE

    Andrade, Joao; Falcao, Gabriel; Silva, Vitor

    2014-01-01

    The Fourier Transform Sum-Product Algorithm (FT-SPA) used in non-binary Low-Density Parity-Check (LDPC) decoding makes extensive use of the Walsh–Hadamard Transform (WHT). We have developed a massively parallel Fast Walsh–Hadamard Transform (FWHT) which exploits the Graphics Processing Unit (GPU) pipeline and memory hierarchy, thereby minimizing the level of memory bank conflicts and maximizing the number of returned instructions per clock cycle for different generations of graphics processor...
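
    For readers wanting a CPU reference to check a GPU kernel against, an in-place fast Walsh–Hadamard transform takes only a few lines; the NumPy sketch below (Hadamard/natural ordering, no normalization) is a generic textbook version, not the CUDA code of the paper.

        import numpy as np
        from scipy.linalg import hadamard

        def fwht(a):
            """Fast Walsh-Hadamard transform (Hadamard ordering), O(N log N)."""
            a = np.asarray(a, dtype=float).copy()
            n = a.size                      # must be a power of two
            h = 1
            while h < n:
                for i in range(0, n, 2 * h):
                    x = a[i:i + h].copy()
                    y = a[i + h:i + 2 * h].copy()
                    a[i:i + h] = x + y          # butterfly: sum
                    a[i + h:i + 2 * h] = x - y  # butterfly: difference
                h *= 2
            return a

        v = np.random.randn(16)
        assert np.allclose(fwht(v), hadamard(16) @ v)   # matches the dense transform
        print(fwht(v)[:4])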

  17. Optimization and experimental realization of the quantum permutation algorithm

    Science.gov (United States)

    Yalçınkaya, I.; Gedik, Z.

    2017-12-01

    The quantum permutation algorithm provides computational speed-up over classical algorithms for determining the parity of a given cyclic permutation. For its n-qubit implementations, the number of required quantum gates scales quadratically with n due to the quantum Fourier transforms included. We show here for the n-qubit case that the algorithm can be simplified so that it requires only O(n) quantum gates, which theoretically reduces the complexity of the implementation. To test our results experimentally, we utilize IBM's 5-qubit quantum processor to realize the algorithm by using the original and simplified recipes for the 2-qubit case. It turns out that the latter results in a significantly higher success probability, which allows us to verify the algorithm more precisely than the previous experimental realizations. We also verify the algorithm for the first time for the 3-qubit case with a considerable success probability by taking advantage of our simplified scheme.

  18. Realization of Deutsch-like algorithm using ensemble computing

    International Nuclear Information System (INIS)

    Wei Daxiu; Luo Jun; Sun Xianping; Zeng Xizhi

    2003-01-01

    The Deutsch-like algorithm [Phys. Rev. A. 63 (2001) 034101] distinguishes between even and odd query functions using fewer function calls than its possible classical counterpart in a two-qubit system. But the similar method cannot be applied to a multi-qubit system. We propose a new approach for solving the Deutsch-like problem using ensemble computing. The proposed algorithm needs an ancillary qubit and can be easily extended to a multi-qubit system with one query. Our ensemble algorithm, beginning with an easily prepared initial state, has three main steps. The classifications of the functions can be obtained directly from the spectra of the ancilla qubit. We also demonstrate the new algorithm in a four-qubit molecular system using nuclear magnetic resonance (NMR). One hydrogen and three carbons are selected as the four qubits, and one of the carbons is the ancilla qubit. We chose two unitary transformations, corresponding to two functions (one odd function and one even function), to validate the ensemble algorithm. The results show that our experiment is successful and that our ensemble algorithm for solving the Deutsch-like problem is valid

  19. The CEV Model and Its Application in a Study of Optimal Investment Strategy

    Directory of Open Access Journals (Sweden)

    Aiyin Wang

    2014-01-01

    Full Text Available The constant elasticity of variance (CEV) model is used to describe the price of the risky asset. Maximizing the expected utility via the Hamilton-Jacobi-Bellman (HJB) equation, which describes the optimal investment strategies, we obtain a partial differential equation. Applying the Legendre transform, we transform the equation into a dual problem and obtain an approximate solution and the optimal investment strategy for the exponential utility function.
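
    Since the dual problem hinges on the Legendre transform, a tiny numerical illustration may help: the NumPy sketch below computes the discrete Legendre-Fenchel conjugate f*(p) = max_x [p*x - f(x)] on a grid and checks it for f(x) = x^2/2, whose conjugate is p^2/2. The grids and the test function are illustrative choices unrelated to the CEV utility problem itself.

        import numpy as np

        def legendre_conjugate(f_vals, x_grid, p_grid):
            """Discrete Legendre-Fenchel transform: f*(p) = max_x [p*x - f(x)]."""
            # Outer product p*x gives a (len(p), len(x)) table; maximise over x.
            return np.max(np.outer(p_grid, x_grid) - f_vals[None, :], axis=1)

        x = np.linspace(-5.0, 5.0, 2001)
        p = np.linspace(-2.0, 2.0, 9)
        f = 0.5 * x ** 2                               # convex test function
        f_star = legendre_conjugate(f, x, p)
        print(np.max(np.abs(f_star - 0.5 * p ** 2)))   # small discretisation error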

  20. A stationary wavelet transform and a time-frequency based spike detection algorithm for extracellular recorded data.

    Science.gov (United States)

    Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane

    2017-06-01

    Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. The decision whether a specific part of the signal is a spike or not is important for all subsequent preprocessing steps, like spike sorting or burst detection, in order to reduce the number of erroneously identified spikes. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms have poor performance. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second is based on the time-frequency representation of spikes. Both algorithms are more reliable than the most commonly used methods. The performance of the algorithms is confirmed by using simulated data resembling original data recorded from cortical neurons with multielectrode arrays. In order to demonstrate that the performance of the algorithms is not restricted to only one specific set of data, we also verify the performance using a simulated, publicly available data set. We show that both proposed algorithms have the best performance of all tested methods, regardless of the signal-to-noise ratio, in both data sets. This contribution will redound to the benefit of electrophysiological investigations of human cells. In particular, the spatial and temporal analysis of neural network communication is improved by using the proposed spike detection algorithms.
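
    A bare-bones version of the stationary-wavelet idea can be sketched with PyWavelets: take a single-level SWT, apply a nonlinear energy operator to the detail coefficients and threshold the result. The wavelet choice, the Teager-style operator and the threshold rule below are assumptions for illustration and are not claimed to match the authors' detector.

        import numpy as np
        import pywt  # PyWavelets, assumed installed

        def detect_spikes(signal, wavelet="sym5", k=5.0):
            """Toy spike detector: SWT detail band -> nonlinear energy -> threshold."""
            (cA, cD), = pywt.swt(signal, wavelet, level=1)   # single-level SWT
            # Teager-style nonlinear energy operator emphasises sharp transients.
            energy = cD[1:-1] ** 2 - cD[:-2] * cD[2:]
            threshold = k * np.median(np.abs(energy)) / 0.6745
            return np.flatnonzero(energy > threshold) + 1    # candidate sample indices

        # Synthetic extracellular trace: noise with a few injected spikes.
        rng = np.random.default_rng(0)
        trace = rng.normal(scale=1.0, size=4096)
        for pos in (500, 1800, 3300):
            trace[pos:pos + 5] += np.array([2.0, 6.0, 8.0, 4.0, 1.0])

        candidates = detect_spikes(trace)
        print(candidates[:20])   # clusters should appear near 500, 1800 and 3300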