WorldWideScience

Sample records for toeplitz factorization algorithms

  1. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    Directory of Open Access Journals (Sweden)

    Qing Wang

    2013-01-01

Full Text Available In this paper, a novel direction of arrival (DOA) estimation algorithm, the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm, is proposed by combining a fast MUSIC-like algorithm, the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. The proposed algorithm removes the redundant information in the cumulants. Besides, the computational complexity is reduced because the dimension of the fourth-order cumulants (FOC) matrix is decreased to the number of virtual array elements; that is, the effective array aperture of the physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix, and thus the DOA estimation capability degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, matching the ideal matrix whose Toeplitz structure yields optimal estimates. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. The simulations show that, in comparison with the MFOC-MUSIC algorithm, the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments.
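
The Toeplitz-approximation step in this record is easy to illustrate. A minimal sketch (function name and toy data are ours, not the paper's): project a noisy covariance-type matrix onto the Toeplitz matrices by averaging along each diagonal, which gives the closest Toeplitz matrix in Frobenius norm.

```python
import numpy as np

def toeplitz_approximation(R):
    """Project R onto the Toeplitz matrices by replacing each diagonal
    with its mean (the Frobenius-norm-closest Toeplitz matrix)."""
    n = R.shape[0]
    T = np.zeros_like(R)
    for k in range(-(n - 1), n):
        m = np.diagonal(R, offset=k).mean()      # mean of the k-th diagonal
        np.fill_diagonal(T[max(0, -k):, max(0, k):], m)
    return T

# toy check: a Toeplitz matrix perturbed by noise is pulled back toward it
rng = np.random.default_rng(0)
c = rng.standard_normal(4)
R = np.array([[c[abs(i - j)] for j in range(4)] for i in range(4)])
print(np.allclose(toeplitz_approximation(R + 0.01 * rng.standard_normal((4, 4))), R, atol=0.05))
```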

  2. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    Science.gov (United States)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the systems resulting from compact schemes are tridiagonal and are difficult to solve efficiently on parallel computers. Exploiting the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. The results show that the simple parallel prefix algorithm is a good algorithm for symmetric and almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.
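
For context, a minimal sequential baseline for such systems, assuming the classic (1, 4, 1) compact-scheme matrix, which is symmetric Toeplitz and diagonally dominant. SciPy's banded solver plays the role of the conventional LU solve that SPP is measured against; this sketch is not the SPP algorithm itself.

```python
import numpy as np
from scipy.linalg import solve_banded

n = 8
ab = np.zeros((3, n))          # banded storage: super-, main, sub-diagonal
ab[0, 1:] = 1.0                # superdiagonal
ab[1, :] = 4.0                 # main diagonal (diagonally dominant)
ab[2, :-1] = 1.0               # subdiagonal

b = np.ones(n)
x = solve_banded((1, 1), ab, b)
print(np.max(np.abs(np.convolve(x, [1.0, 4.0, 1.0])[1:-1] - b)))  # residual check
```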

  3. Procrustes Problems for General, Triangular, and Symmetric Toeplitz Matrices

    Directory of Open Access Journals (Sweden)

    Juan Yang

    2013-01-01

Full Text Available The Toeplitz Procrustes problems are the least squares problems for the matrix equation AX=B over certain sets of Toeplitz matrices. In this paper, necessary and sufficient conditions are obtained for the existence and uniqueness of solutions of the Toeplitz Procrustes problems when the unknown matrices are constrained to be general, triangular, or symmetric Toeplitz matrices, respectively. Algorithms are designed, and numerical examples show that these algorithms are feasible.
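
The general (unconstrained-A) case of this problem admits a direct least-squares sketch, since a Toeplitz X is linear in its 2n−1 defining diagonal values. A minimal illustration (function name is ours, not from the paper):

```python
import numpy as np

def toeplitz_procrustes(A, B):
    """Minimize ||A X - B||_F over Toeplitz X by solving an ordinary
    least-squares problem in the 2n-1 diagonal values of X."""
    n = A.shape[1]
    offsets = range(-(n - 1), n)
    cols = []
    for k in offsets:                       # one basis matrix per diagonal of X
        E = np.zeros((n, n))
        np.fill_diagonal(E[max(0, -k):, max(0, k):], 1.0)
        cols.append((A @ E).ravel())
    t, *_ = np.linalg.lstsq(np.column_stack(cols), B.ravel(), rcond=None)
    X = np.zeros((n, n))
    for k, tk in zip(offsets, t):
        np.fill_diagonal(X[max(0, -k):, max(0, k):], tk)
    return X

A = np.random.default_rng(1).standard_normal((6, 4))
print(np.round(toeplitz_procrustes(A, A.copy()), 6))   # recovers X = I (Toeplitz)
```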

  4. Toeplitz operators and group representations

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2007-01-01

    Roč. 13, č. 3 (2007), s. 243-265 ISSN 1069-5869 R&D Projects: GA ČR GA201/03/0041; GA AV ČR IAA1019304 Institutional research plan: CEZ:AV0Z10190503 Keywords : Toeplitz operator * group representation * symbol calculus Subject RIV: BA - General Mathematics Impact factor: 1.125, year: 2007

  5. Analytic continuation of Toeplitz operators

    Czech Academy of Sciences Publication Activity Database

    Bommier-Hato, H.; Engliš, Miroslav; Youssfi, E.-H.

    2015-01-01

    Roč. 25, č. 4 (2015), s. 2323-2359 ISSN 1050-6926 R&D Projects: GA MŠk(CZ) MEB021108 Institutional support: RVO:67985840 Keywords : Toeplitz operator * Bergman space * strictly pseudoconvex domain Subject RIV: BA - General Mathematics Impact factor: 1.109, year: 2015 http://link.springer.com/article/10.1007%2Fs12220-014-9515-0

  6. Berezin-Toeplitz quantization and invariant symbolic calculi

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2003-01-01

    Roč. 65, č. 1 (2003), s. 59-74 ISSN 0377-9017 R&D Projects: GA ČR GA201/03/0041; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1019905 Keywords : Toeplitz operator * Schwartz space * Borel theory Subject RIV: BA - General Mathematics Impact factor: 0.709, year: 2003

  7. Toeplitz operators on higher Cauchy-Riemann spaces

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav; Zhang, G.

    2017-01-01

    Roč. 22, č. 22 (2017), s. 1081-1116 ISSN 1431-0643 Institutional support: RVO:67985840 Keywords : Toeplitz operator * Hankel operator * Cauchy-Riemann operators Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.800, year: 2016 https://www.math.uni-bielefeld.de/documenta/vol-22/32.html

  8. Algebraic Properties of Toeplitz Operators on the Polydisk

    Directory of Open Access Journals (Sweden)

    Bo Zhang

    2011-01-01

Full Text Available We discuss some algebraic properties of Toeplitz operators on the Bergman space of the polydisk D^n. Firstly, we introduce Toeplitz operators with quasihomogeneous symbols and property (P). Secondly, we study the commutativity of certain quasihomogeneous Toeplitz operators and the commutators of diagonal Toeplitz operators. Thirdly, we discuss finite rank semicommutators and commutators of Toeplitz operators with quasihomogeneous symbols. Finally, we solve the finite rank product problem for Toeplitz operators on the polydisk.

  9. Toeplitz quantization and asymptotic expansions for real bounded symmetric domains

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav; Upmeier, H.

    2011-01-01

    Roč. 268, 3-4 (2011), s. 931-967 ISSN 0025-5874 R&D Projects: GA ČR(CZ) GA201/06/0128 Institutional research plan: CEZ:AV0Z10190503 Keywords : bounded symmetric domain * Toeplitz operator * star product Subject RIV: BA - General Mathematics Impact factor: 0.749, year: 2011 http://rd.springer.com/article/10.1007/s00209-010-0702-9

  10. Toeplitz quantization and asymptotic expansions : Peter Weyl decomposition

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav; Upmeier, H.

    2010-01-01

    Roč. 68, č. 3 (2010), s. 427-449 ISSN 0378-620X R&D Projects: GA ČR GA201/09/0473 Institutional research plan: CEZ:AV0Z10190503 Keywords : bounded symmetric domain * real symmetric domain * star product * Toeplitz operator * Peter-Weyl decomposition Subject RIV: BA - General Mathematics Impact factor: 0.521, year: 2010 http://link.springer.com/article/10.1007%2Fs00020-010-1808-5

  11. Affine coherent states and Toeplitz operators

    Science.gov (United States)

    Hutníková, Mária; Hutník, Ondrej

    2012-06-01

We study a parameterized family of Toeplitz operators in the context of affine coherent states based on the Calderón reproducing formula (= resolution of unity on L₂(ℝ)) and the specific admissible wavelets (= affine coherent states in L₂(ℝ)) related to Laguerre functions. Symbols of such Calderón-Toeplitz operators as individual coordinates of the affine group (= upper half-plane with the hyperbolic geometry) are considered. In this case, a certain class of pseudo-differential operators, their properties and their operator algebras are investigated. As a result of this study, the Fredholm symbol algebras of the Calderón-Toeplitz operator algebras for these particular cases of symbols are described. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Coherent states: mathematical and physical aspects’.

  12. Deconvolution and Regularization with Toeplitz Matrices

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    2002-01-01

    of these discretized deconvolution problems, with emphasis on methods that take the special structure of the matrix into account. Wherever possible, analogies to classical DFT-based deconvolution problems are drawn. Among other things, we present direct methods for regularization with Toeplitz matrices, and we show...
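
A minimal sketch of the kind of problem this survey treats, assuming a Gaussian blur kernel and Tikhonov regularization with an illustrative parameter:

```python
import numpy as np
from scipy.linalg import toeplitz

n = 64
t = np.exp(-0.1 * np.arange(n) ** 2)       # first column of a Gaussian blur
T = toeplitz(t)                             # symmetric Toeplitz blur matrix
x_true = (np.arange(n) > n // 2).astype(float)
b = T @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

lam = 1e-2                                  # regularization parameter (illustrative)
# Tikhonov normal equations: (T^T T + lam^2 I) x = T^T b
x_reg = np.linalg.solve(T.T @ T + lam**2 * np.eye(n), T.T @ b)
print(np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```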

  13. Toeplitz and Hankel operators and Dixmier traces on the unit ball of C^n

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav; Guo, K.; Zhang, G.

    2009-01-01

Roč. 137, č. 11 (2009), s. 3669-3678 ISSN 0002-9939 R&D Projects: GA ČR(CZ) GA201/06/0128 Institutional research plan: CEZ:AV0Z10190503 Keywords : Schatten-von Neumann classes * Macaev classes * trace * Dixmier trace * Toeplitz operators * Hankel operators Subject RIV: BA - General Mathematics Impact factor: 0.640, year: 2009

  14. Berezin-Toeplitz quantization on the Schwartz space of bounded symmetric domains

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2005-01-01

    Roč. 15, č. 1 (2005), s. 27-50 ISSN 0949-5932 R&D Projects: GA AV ČR(CZ) IAA1019304 Institutional research plan: CEZ:AV0Z10190503 Keywords : Berezin-Toeplitz quantization * bounded symmetric domain * Schwartz space Subject RIV: BA - General Mathematics Impact factor: 0.319, year: 2005

  15. Noncommutative coherent states and related aspects of Berezin-Toeplitz quantization

    Czech Academy of Sciences Publication Activity Database

    Chowdhury, S. H. H.; Ali, S. T.; Engliš, Miroslav

    2017-01-01

    Roč. 50, č. 19 (2017), č. článku 195203. ISSN 1751-8113 Institutional support: RVO:67985840 Keywords : Berezin-Toeplitz quantization Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.857, year: 2016 http://iopscience.iop.org/article/10.1088/1751-8121/aa66a6/meta

  16. Generalized locally Toeplitz sequences theory and applications

    CERN Document Server

    Garoni, Carlo

    2017-01-01

    Based on their research experience, the authors propose a reference textbook in two volumes on the theory of generalized locally Toeplitz sequences and their applications. This first volume focuses on the univariate version of the theory and the related applications in the unidimensional setting, while the second volume, which addresses the multivariate case, is mainly devoted to concrete PDE applications. This book systematically develops the theory of generalized locally Toeplitz (GLT) sequences and presents some of its main applications, with a particular focus on the numerical discretization of differential equations (DEs). It is the first book to address the relatively new field of GLT sequences, which occur in numerous scientific applications and are especially dominant in the context of DE discretizations. Written for applied mathematicians, engineers, physicists, and scientists who (perhaps unknowingly) encounter GLT sequences in their research, it is also of interest to those working in the fields of...

  17. On some Toeplitz matrices and their inversions

    Directory of Open Access Journals (Sweden)

    S. Dutta

    2014-10-01

Full Text Available In this article, using the difference operator B(a[m]), we introduce a lower triangular Toeplitz matrix T which includes several difference matrices such as Δ(1), Δ(m), B(r,s), B(r,s,t), and B(r̃,s̃,t̃,ũ) as special cases. For any x ∈ w and m ∈ N₀ = {0,1,2,…}, the difference operator B(a[m]) is defined by (B(a[m])x)_k = a_k^(0) x_k + a_{k-1}^(1) x_{k-1} + a_{k-2}^(2) x_{k-2} + ⋯ + a_{k-m}^(m) x_{k-m} (k ∈ N₀), where a[m] = {a^(0), a^(1), …, a^(m)} and a^(i) = (a_k^(i)) for 0 ⩽ i ⩽ m are convergent sequences of real numbers. We use the convention that any term with a negative subscript is equal to zero. The main results of this article relate to the determination and applications of the inverse of the Toeplitz matrix T.
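
For concreteness, a small sketch of the banded lower-triangular Toeplitz matrix B(r,s,t) mentioned above, specialized to constant coefficient sequences (in the article the a^(i) may be arbitrary convergent sequences):

```python
import numpy as np
from scipy.linalg import toeplitz

def B(n, r, s, t):
    """n x n banded lower-triangular Toeplitz matrix with r on the main
    diagonal and s, t on the first two subdiagonals."""
    col = np.zeros(n)
    col[:3] = [r, s, t]
    return toeplitz(col, np.r_[col[0], np.zeros(n - 1)])

print(B(5, 1.0, -2.0, 1.0))
print(np.round(np.linalg.inv(B(5, 1.0, -2.0, 1.0)), 3))  # inverse is again lower triangular Toeplitz
```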

  18. FFT-based preconditioners for Toeplitz-Block least square problems

    Energy Technology Data Exchange (ETDEWEB)

Chan, R.H. (Univ. of Hong Kong (Hong Kong). Dept. of Mathematics); Nagy, J.G.; Plemmons, R.J. (Univ. of Minnesota, Minneapolis, MN (United States). Inst. for Mathematics and its Applications)

    1993-12-01

Discretized two-dimensional deconvolution problems arising, e.g., in image restoration and seismic tomography, can be formulated as least squares computations, min ‖b − Tx‖₂, where T is often a large-scale rectangular Toeplitz-block matrix. The authors consider solving such block least squares problems by the preconditioned conjugate gradient algorithm using square nonsingular circulant-block and related preconditioners, constructed from the blocks of the rectangular matrix T. Preconditioning with such matrices allows efficient implementation using the one-dimensional or two-dimensional fast Fourier transform (FFT). Two block preconditioners, related to those proposed by T. Chan and J. Olkin for square nonsingular Toeplitz-block systems, are derived and analyzed. It is shown that, for important classes of T, the singular values of the preconditioned matrix are clustered around one. This extends the authors' earlier work on preconditioners for Toeplitz least squares iterations for one-dimensional problems. It is well known that the resolution of ill-posed deconvolution problems can be substantially improved by regularization to compensate for their ill-posed nature. It is shown that regularization can easily be incorporated into the preconditioners, and a report is given on numerical experiments on a Cray Y-MP. The experiments illustrate good convergence properties of these FFT-based preconditioned iterations.
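
A minimal sketch of one such circulant preconditioner, using T. Chan's optimal circulant approximation of a single symmetric Toeplitz block (the symbol below is illustrative, not from the paper); in practice C⁻¹v is applied via FFTs rather than formed densely:

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

def chan_circulant(t):
    """First column of T. Chan's optimal circulant approximation of the
    symmetric Toeplitz matrix with first column t (diagonal averaging)."""
    n = len(t)
    k = np.arange(n)
    return ((n - k) * t + k * np.r_[0.0, t[:0:-1]]) / n

t = (1.0 + np.arange(32)) ** -1.1          # illustrative decaying symbol
T = toeplitz(t)
C = circulant(chan_circulant(t))
print(np.linalg.cond(T), np.linalg.cond(np.linalg.solve(C, T)))
# eigenvalues of C^{-1} T cluster near 1; applying C^{-1} costs two FFTs
```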

  19. A new kind of Hankel-Toeplitz type operator

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav; Hille, S. C.; Peetre, J.; Rosengren, H.; Zhang, G.

    2000-01-01

Roč. 6, č. 1 (2000), s. 49-80 ISSN 1319-5166 Institutional research plan: CEZ:AV0Z1019905 Keywords : Hankel-Toeplitz type * Schatten-von Neumann class membership Subject RIV: BA - General Mathematics

  20. On commutators and semicommutators of Toeplitz operators

    International Nuclear Information System (INIS)

    Berkani, M.

    1994-04-01

Let φ and ψ be two essentially bounded functions on the unit circle T. This paper is devoted to the study of the semicommutator T_{φψ} − T_φ T_ψ of the Toeplitz operators T_φ and T_ψ. We prove that if φ is inner and P₊(ψφ̄) is an element of B_p^{1/p}, then the semicommutator T_{φψ} − T_φ T_ψ is in the Schatten-von Neumann class σ_p, p > 0. Here B_p^{1/p} denotes the Besov class, and P₊ is the orthogonal projection from L² onto the Hardy space H². Moreover, if φ is also continuous, then T_{φψ} − T_φ T_ψ is of finite rank. An example based on the Hardy-Littlewood series shows that the result fails if we suppose φ only continuous. We also give some sufficient conditions on the Fourier coefficients of the symbols φ and ψ which imply that the commutator [T_φ, T_ψ] belongs to σ_p, p > 0. (author). 14 refs

  1. On commutators and semicommutators of Toeplitz operators

    International Nuclear Information System (INIS)

    Berkani, M.

    1996-08-01

Let φ and ψ be two essentially bounded functions on the unit circle T. This paper is devoted to the study of the semicommutator T_{φψ} − T_φ T_ψ of the Toeplitz operators T_φ and T_ψ. We prove that if φ is inner and P₊(ψφ̄) is an element of B_p^{1/p}, then the semicommutator T_{φψ} − T_φ T_ψ is in the Schatten-von Neumann class σ_p, p > 0. Here B_p^{1/p} denotes the Besov class, and P₊ is the orthogonal projection from L² onto the Hardy space H². Moreover, if φ is also continuous, then T_{φψ} − T_φ T_ψ is of finite rank. An example based on the Hardy-Littlewood series shows that the result fails if we suppose φ only continuous. We also give some sufficient conditions on the Fourier coefficients of the symbols φ and ψ which imply that the commutator [T_φ, T_ψ] belongs to σ_p, p > 0. (author). 14 refs

  2. Exact solution of corner-modified banded block-Toeplitz eigensystems

    Science.gov (United States)

    Cobanera, Emilio; Alase, Abhijeet; Ortiz, Gerardo; Viola, Lorenza

    2017-05-01

    Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev.

  3. Berezin and Berezin-Toeplitz quantizations for general function spaces

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2006-01-01

    Roč. 19, č. 2 (2006), s. 385-430 ISSN 1139-1138 R&D Projects: GA AV ČR(CZ) IAA1019301 Institutional research plan: CEZ:AV0Z10190503 Keywords : Berezin quantization * Berezin-Toeplitz quantization * star product Subject RIV: BA - General Mathematics

  4. Algebraic Properties of Quasihomogeneous and Separately Quasihomogeneous Toeplitz Operators on the Pluriharmonic Bergman Space

    Directory of Open Access Journals (Sweden)

    Hongyan Guan

    2013-01-01

Full Text Available We study some algebraic properties of Toeplitz operators with quasihomogeneous or separately quasihomogeneous symbols on the pluriharmonic Bergman space of the unit ball in ℂ^n. We determine when the product of two Toeplitz operators with certain separately quasihomogeneous symbols is a Toeplitz operator. Next, we discuss the zero-product problem for several Toeplitz operators, one of whose symbols is separately quasihomogeneous and the others are quasihomogeneous functions, and show that the zero-product problem for two Toeplitz operators has only a trivial solution if one of the symbols is separately quasihomogeneous and the other is arbitrary. Finally, we also characterize the commutativity of certain quasihomogeneous or separately quasihomogeneous Toeplitz operators.

  5. Berezin-Toeplitz Quantization for Compact Kähler Manifolds. A Review of Results

    Directory of Open Access Journals (Sweden)

    Martin Schlichenmaier

    2010-01-01

Full Text Available This article is a review on Berezin-Toeplitz operator and Berezin-Toeplitz deformation quantization for compact quantizable Kähler manifolds. The basic objects, concepts, and results are given. This concerns the correct semiclassical limit behaviour of the operator quantization, the unique Berezin-Toeplitz deformation quantization (star product), covariant and contravariant Berezin symbols, and the Berezin transform. Other related objects and constructions are also discussed.

  6. KMS states on Nica-Toeplitz algebras of product systems

    DEFF Research Database (Denmark)

    Hong, Jeong Hee; Larsen, Nadia S.; Szymanski, Wojciech

    2012-01-01

We investigate KMS states of Fowler's Nica-Toeplitz algebra NT(X) associated to a compactly aligned product system X over a semigroup P of Hilbert bimodules. This analysis relies on restrictions of these states to the core algebra which satisfy appropriate scaling conditions. The concept of product system of finite type is introduced. If (G, P) is a lattice ordered group and X is a product system of finite type over P satisfying certain coherence properties, we construct KMS_beta states of NT(X) associated to a scalar dynamics from traces on the coefficient algebra of the product system. Our results were motivated by, and generalize some of the results of Laca and Raeburn obtained for the Toeplitz algebra of the affine semigroup over the natural numbers.

  7. Iterative methods for symmetric ill-conditioned Toeplitz matrices

    Energy Technology Data Exchange (ETDEWEB)

Huckle, T. [Institut fuer Informatik, Muenchen (Germany)]

    1996-12-31

We consider ill-conditioned symmetric positive definite Toeplitz systems T_n x = b. If we want to solve such a system iteratively with the conjugate gradient method, we can use band-Toeplitz preconditioners or Sine-Transform preconditioners M = S_n Λ S_n, where S_n is the Sine-Transform matrix and Λ a diagonal matrix. A Toeplitz matrix T_n = (t_{i−j})_{i,j=1}^n is often related to an underlying function f defined by the coefficients t_j, j = −∞, …, −1, 0, 1, …, ∞. There are four cases for which we want to determine a preconditioner M: T_n is related to an underlying function which is given explicitly; T_n is related to an underlying function that is given by its Fourier coefficients; T_n is related to an underlying function that is unknown; T_n is not related to an underlying function. Especially for the first three cases we show how positive definite and effective preconditioners based on the Sine-Transform can be defined for a general nonnegative underlying function f. To define M, we evaluate or estimate the values of f at certain positions and build a Sine-Transform matrix with these values as eigenvalues. Then the spectrum of the preconditioned system is bounded from above and away from zero.
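
The M = S_n Λ S_n construction is short to write down. A minimal sketch, assuming the generating function f is known explicitly; with f(θ) = 2 − 2cos θ the preconditioner reproduces the tridiagonal Toeplitz matrix [-1, 2, -1] exactly:

```python
import numpy as np

n = 16
j = np.arange(1, n + 1)
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(j, j) * np.pi / (n + 1))  # DST-I: S = S^T = S^{-1}
f = 2.0 - 2.0 * np.cos(j * np.pi / (n + 1))   # values of f at the grid points
M = S @ np.diag(f) @ S                         # the preconditioner M = S_n Λ S_n

T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # Toeplitz [-1, 2, -1]
print(np.linalg.norm(M - T))                   # ~1e-14: M reproduces T exactly here
```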

  8. More on Estimation of Banded and Banded Toeplitz Covariance Matrices

    OpenAIRE

    Berntsson, Fredrik; Ohlson, Martin

    2017-01-01

In this paper we consider two different linear covariance structures, banded and banded Toeplitz, and how to estimate them using different methods, e.g., by minimizing different norms. One way to estimate the parameters in a linear covariance structure is to use tapering, which has been shown to be the solution to a universal least squares problem. We know that tapering does not always guarantee the positive definite constraints on the estimated covariance matrix and may not be a suitable me...

  9. On the Worst-Case Convergence of MR and CG for Symmetric Positive Definite Tridiagonal Toeplitz Matrices

    Czech Academy of Sciences Publication Activity Database

    Liesen, J.; Tichý, Petr

    2005-01-01

    Roč. 20, - (2005), s. 180-197 ISSN 1068-9613 R&D Projects: GA AV ČR(CZ) KJB1030306 Institutional research plan: CEZ:AV0Z10300504 Keywords : Krylov subspace methods * conjugate gradient method * minimal residual method * convergence analysis * tridiagonal Toeplitz matrices * Poisson equation Subject RIV: BA - General Mathematics Impact factor: 0.608, year: 2005 http://etna.mcs.kent.edu/volumes/2001-2010/vol20/abstract.php?vol=20&pages=180-197

  10. Toeplitz Operators, Pseudo-Homogeneous Symbols, and Moment Maps on the Complex Projective Space

    Directory of Open Access Journals (Sweden)

    Miguel Antonio Morales-Ramos

    2017-01-01

    Full Text Available Following previous works for the unit ball due to Nikolai Vasilevski, we define quasi-radial pseudo-homogeneous symbols on the projective space and obtain the corresponding commutativity results for Toeplitz operators. A geometric interpretation of these symbols in terms of moment maps is developed. This leads us to the introduction of a new family of symbols, extended pseudo-homogeneous, that provide larger commutative Banach algebras generated by Toeplitz operators. This family of symbols provides new commutative Banach algebras generated by Toeplitz operators on the unit ball.

  11. The Upper Bound for GMRES on Normal Tridiagonal Toeplitz Linear System

    Directory of Open Access Journals (Sweden)

R. Doostaki

    2015-09-01

Full Text Available The Generalized Minimal Residual method (GMRES) is often used to solve a large and sparse system Ax = b. This paper establishes error bounds for the residuals of GMRES applied to an N × N normal tridiagonal Toeplitz linear system. This problem has been studied previously by Li [R.-C. Li, Convergence of CG and GMRES on a tridiagonal Toeplitz linear system, BIT 47 (3) (2007) 577-599] for two special right-hand sides b = e₁, e_N. Also, Li and Zhang [R.-C. Li, W. Zhang, The rate of convergence of GMRES on a tridiagonal Toeplitz linear system, Numer. Math. 112 (2009) 267-293] presented upper bounds for GMRES residuals for non-symmetric matrices A. In this paper we establish the upper bound on normal tridiagonal Toeplitz linear systems for the special right-hand sides b = b(l)e_l, 1 ≤ l ≤ N.
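
A quick numerical companion to this setting, solving a normal tridiagonal Toeplitz system with the special right-hand side b = e₁ via SciPy's GMRES (matrix entries are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

N = 200
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csr")
b = np.zeros(N)
b[0] = 1.0                                  # the special right-hand side e_1

x, info = gmres(A, b)                       # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))      # small residual
```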

  12. Block Toeplitz operators with frequency-modulated semi-almost periodic symbols

    Directory of Open Access Journals (Sweden)

    A. Böttcher

    2003-01-01

We establish conditions on an orientation-preserving homeomorphism α of the real line that ensure the following: if b belongs to a certain class of oscillating matrix functions (periodic, almost periodic, or semi-almost periodic matrix functions) and the Toeplitz operator generated by the matrix function b(x) is semi-Fredholm, then the Toeplitz operator with the matrix symbol b(α(x)) is also semi-Fredholm.

  13. ON LU FACTORIZATION ALGORITHM WITH MULTIPLIERS

    African Journals Online (AJOL)

Various algorithms, such as Doolittle's, Crout's, and Cholesky's, have been proposed to factor a square matrix into a product of L and U matrices, that is, to find L and U such that A = LU, where L and U are lower and upper triangular matrices respectively. These methods are derived by writing the general forms of L and U and the ...

  14. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance-Structure Models to Block-Toeplitz Representing Single-Subject Multivariate Time-Series

    NARCIS (Netherlands)

    Molenaar, P.C.M.; Nesselroade, J.R.

    1998-01-01

    The study of intraindividual variability pervades empirical inquiry in virtually all subdisciplines of psychology. The statistical analysis of multivariate time-series data - a central product of intraindividual investigations - requires special modeling techniques. The dynamic factor model (DFM),

  15. Toeplitz Quantization for Non-commutating Symbol Spaces such as SUq(2)

    Directory of Open Access Journals (Sweden)

    Sontz Stephen Bruce

    2016-08-01

Full Text Available Toeplitz quantization is defined in a general setting in which the symbols are the elements of a possibly non-commutative algebra with a conjugation and a possibly degenerate inner product. We show that the quantum group SUq(2) is such an algebra. Unlike many quantization schemes, this Toeplitz quantization does not require a measure. The theory is based on the mathematical structures defined and studied in several recent papers of the author; those papers dealt with some specific examples of this new Toeplitz quantization. Annihilation and creation operators are defined as densely defined Toeplitz operators acting in a quantum Hilbert space, and their commutation relations are discussed. At this point Planck’s constant is introduced into the theory. Due to the possibility of non-commuting symbols, there are now two definitions for anti-Wick quantization; these two definitions are equivalent in the commutative case. The Toeplitz quantization introduced here satisfies one of these definitions, but not necessarily the other. This theory should be considered as a second quantization, since it quantizes non-commutative (that is, already quantum objects. The quantization theory presented here has two essential features of a physically useful quantization: Planck’s constant and a Hilbert space where natural, densely defined operators act.

  16. Advanced incomplete factorization algorithms for Stieltjes matrices

    Energy Technology Data Exchange (ETDEWEB)

Il'in, V.P. [Siberian Division RAS, Novosibirsk (Russian Federation)]

    1996-12-31

The modern numerical methods for solving the linear algebraic systems Au = f with high order sparse matrices A, which arise in grid approximations of multidimensional boundary value problems, are based mainly on accelerated iterative processes with easily invertible preconditioning matrices presented in the form of approximate (incomplete) factorization of the original matrix A. We consider some recent algorithmic approaches, theoretical foundations, experimental data and open questions for incomplete factorization of Stieltjes matrices, which are "the best" ones in the sense that they have the most advanced results. Special attention is given to solving elliptic differential equations with strongly variable coefficients, singularly perturbed diffusion-convection equations, and parabolic equations.

  17. Problems with EM Algorithms for ML Factor Analysis.

    Science.gov (United States)

    Bentler, P. M.; Tanaka, Jeffrey S.

    1983-01-01

    Rubin and Thayer recently presented equations to implement maximum likelihood estimation in factor analysis via the EM algorithm. It is argued here that the advantages of using the EM algorithm remain to be demonstrated. (Author/JKS)

  18. The quadratic assignment problem is easy for Robinsonian matrices with Toeplitz structure

    NARCIS (Netherlands)

    M. Laurent (Monique); M. Seminaroti (Matteo)

    2014-01-01

We present a new polynomially solvable case of the Quadratic Assignment Problem in Koopmans-Beckmann form QAP(A,B), by showing that the identity permutation is optimal when A and B are respectively a Robinson similarity and dissimilarity matrix and one of A or B is a Toeplitz matrix.

  19. Shor's quantum factoring algorithm on a photonic chip.

    Science.gov (United States)

    Politi, Alberto; Matthews, Jonathan C F; O'Brien, Jeremy L

    2009-09-04

    Shor's quantum factoring algorithm finds the prime factors of a large number exponentially faster than any other known method, a task that lies at the heart of modern information security, particularly on the Internet. This algorithm requires a quantum computer, a device that harnesses the massive parallelism afforded by quantum superposition and entanglement of quantum bits (or qubits). We report the demonstration of a compiled version of Shor's algorithm on an integrated waveguide silica-on-silicon chip that guides four single-photon qubits through the computation to factor 15.
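
The chip runs a compiled instance of the algorithm for N = 15; the classical post-processing that turns a measured period into factors is easy to sketch. Below, the order finding is done classically for illustration, and that is exactly the step the quantum hardware replaces:

```python
from math import gcd

def factor_from_order(N, a, r):
    """Shor's classical post-processing: factors of N from the order r of a mod N."""
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                           # unlucky base a; choose another
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

N, a = 15, 7
r = next(k for k in range(1, N) if pow(a, k, N) == 1)  # r = 4; the quantum step
print(r, factor_from_order(N, a, r))                   # 4 (3, 5)
```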

  20. Algorithms for unweighted least-squares factor analysis

    NARCIS (Netherlands)

    Krijnen, WP

Estimation of the factor model by unweighted least squares (ULS) is distribution free, yields consistent estimates, and is computationally fast if the Minimum Residuals (MinRes) algorithm is employed. MinRes algorithms produce a converging sequence of monotonically decreasing ULS function values.

  1. A state space algorithm for the spectral factorization

    NARCIS (Netherlands)

    Kraffer, F.; Kraffer, F.; Kwakernaak, H.

    1997-01-01

This paper presents an algorithm for the spectral factorization of a para-Hermitian polynomial matrix. The algorithm is based on conversions from polynomial matrix to state space form and vice versa, and avoids elementary polynomial operations in computations; it relies on well-proven methods of numerical

  2. Quantum computation and Shor's factoring algorithm

    International Nuclear Information System (INIS)

    Ekert, A.; Jozsa, R.

    1996-01-01

Current technology is beginning to allow us to manipulate rather than just observe individual quantum phenomena. This opens up the possibility of exploiting quantum effects to perform computations beyond the scope of any classical computer. Recently Peter Shor discovered an efficient algorithm for factoring whole numbers, which uses characteristically quantum effects. The algorithm illustrates the potential power of quantum computation, as there is no known efficient classical method for solving this problem. The authors give an exposition of Shor's algorithm together with an introduction to quantum computation and complexity theory. They discuss experiments that may contribute to its practical implementation. copyright 1996 The American Physical Society

  3. Weighted norm inequalities for Toeplitz type operators associated to generalized Calderón-Zygmund operators.

    Science.gov (United States)

    Tang, Yongli; Ban, Tao

    2016-01-01

Let T1 be a generalized Calderón-Zygmund operator or ±I (the identity operator), let T2 and T4 be linear operators, and let T3 = ±I. Denote the Toeplitz type operator by Tb = T1MbIαT2 + T3IαMbT4, where Mbf = bf and Iα is the fractional integral operator. In this paper, we establish sharp maximal function estimates for Tb when b belongs to a weighted Lipschitz function space, and weighted norm inequalities for Tb on weighted Lebesgue spaces are obtained.

  4. Toeplitz Type Operators Associated with Generalized Calderón-Zygmund Operator on Weighted Morrey Spaces

    Directory of Open Access Journals (Sweden)

    Bijun Ren

    2016-01-01

Full Text Available Let T1 be a generalized Calderón-Zygmund operator or ±I (the identity operator), let T2 and T4 be the linear operators, and let T3 = ±I. Denote the Toeplitz type operator by Tb = T1MbIαT2 + T3IαMbT4, where Mbf = bf and Iα is the fractional integral operator. In this paper, we investigate the boundedness of the operator Tb on weighted Morrey spaces when b belongs to the weighted BMO spaces.

  5. Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.

    Science.gov (United States)

    He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej

    2011-12-01

Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms using level 3 basic linear algebra subprograms directly are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose another two fast parallel methods: the α-SNMF and β-SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
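
A minimal sketch of the SNMF model A ≈ WWᵀ with a damped multiplicative update, the standard scheme that the paper's α-SNMF/β-SNMF variants refine; the damping constant and iteration count here are illustrative:

```python
import numpy as np

def snmf(A, k, beta=0.5, iters=500, seed=0):
    """Symmetric NMF A ~ W W^T via a damped multiplicative update."""
    W = np.random.default_rng(seed).random((A.shape[0], k))
    for _ in range(iters):
        W *= 1.0 - beta + beta * (A @ W) / (W @ (W.T @ W) + 1e-12)
    return W

A = np.kron(np.eye(2), np.ones((3, 3)))       # two obvious clusters
W = snmf(A, k=2)
print(np.round(W @ W.T, 2))                    # approximately recovers A
```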

  6. On LU Factorization Algorithm with Multipliers

    African Journals Online (AJOL)

Various algorithms, such as Doolittle's, Crout's, and Cholesky's, have been proposed to factor a square matrix into a product of L and U matrices, that is, to find L and U such that A = LU, where L and U are lower and upper triangular matrices respectively. These methods are derived by writing the general forms of L and U and the ...

  7. Factorization algorithm based on the periodicity measurement of a CTES

    International Nuclear Information System (INIS)

    Tamma, Vincenzo; Zhang Heyi; He Xuehua; Yanhua Shih; Garuccio, Augusto

    2010-01-01

We introduce a new factorization algorithm, similar to Shor's algorithm, based on measuring the periodicity of a given function. In particular, such a function is given by a generalized continuous truncated exponential sum (CTES). The CTES interference pattern satisfies a remarkable scaling property, which allows one to plot the interferogram as a function of a suitable continuous variable depending on the number to factorize. This allows one, in principle, to factorize arbitrary numbers with a single interferogram. In particular, information about the factors is encoded in the location of the interference maxima, which repeat periodically in the interferogram. A possible analogue computer for the implementation of such an algorithm can be realized using multi-path optical interferometers, with polychromatic light sources and a high-resolution spectrometer. The experimental accuracy in the realization of the CTES interferogram and the bandwidth of the polychromatic sources determine the largest factorable number N_max. Once the CTES interferogram is recorded, all numbers up to N_max can be factored without performing any further measurement.

  8. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin

    2012-12-01

Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommending, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit exactly for the problem, and the tests justify that our constraints improve the performance for both weighted and unweighted NMF algorithms under several different metrics. In particular, on the Movielens data with 94% of items, the Constrained NMF improves the recall rate by 3% compared to SVD50 and 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.

  9. A fast marching algorithm for the factored eikonal equation

    International Nuclear Information System (INIS)

    Treister, Eran; Haber, Eldad

    2016-01-01

The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency with which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.

  10. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler’s algorithm, Kraitchik’s, and variants of Pollard’s algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard’s rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, the factorization speed is much slower than that of Pollard’s rho algorithm.
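
Pollard's rho, the baseline in this comparison, fits in a few lines (Floyd cycle detection with the iteration f(x) = x² + c mod n):

```python
from math import gcd

def pollard_rho(n, c=1):
    """Pollard's rho with Floyd cycle detection and f(x) = x^2 + c mod n."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n                   # tortoise: one step
        y = (y * y + c) % n
        y = (y * y + c) % n                   # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else None              # None: retry with another c

print(pollard_rho(8051))                       # 8051 = 83 * 97
```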

  11. Determination of the Köthe-Toeplitz Duals over the Non-Newtonian Complex Field

    Directory of Open Access Journals (Sweden)

    Uğur Kadak

    2014-01-01

Full Text Available The important point to note is that the non-Newtonian calculus is a self-contained system independent of any other system of calculus. Therefore the reader may be surprised to learn that there is a uniform relationship between the corresponding operators of this calculus and the classical calculus. Several basic concepts based on non-Newtonian calculus are presented by Grossman (1983), Grossman and Katz (1978), and Grossman (1979). Following Grossman and Katz, in the present paper, we introduce the sets of bounded, convergent, and null series and the set of sequences of p-bounded variation over the complex field C*, and prove that these are complete. We propose a quite concrete approach based on the notion of Köthe-Toeplitz duals with respect to the non-Newtonian calculus. Finally, we derive some inclusion relationships between Köthe spaces and solidness.

  12. Data Reduction Algorithm Using Nonnegative Matrix Factorization with Nonlinear Constraints

    Science.gov (United States)

    Sembiring, Pasukat

    2017-12-01

Processing of data with very large dimensions has been a hot topic in recent decades. Various techniques have been proposed in order to extract the desired information or structure. Non-Negative Matrix Factorization (NMF), based on non-negative data, has become one of the popular methods for shrinking dimensions. The main strength of this method is non-negativity: it models an object by an additive combination of non-negative basic parts, so as to provide a physical interpretation of the object's construction. NMF is a dimension reduction method that has been used widely for numerous applications, including computer vision, text mining, pattern recognition, and bioinformatics. The mathematical formulation of NMF is not a convex optimization problem, and various types of algorithms have been proposed to solve it. The Alternating Nonnegative Least Squares (ANLS) framework is a block coordinate descent approach that has been proven theoretically reliable and empirically efficient. This paper proposes a new algorithm to solve the NMF problem based on the ANLS framework. The algorithm inherits the convergence property of the ANLS framework and applies to NMF formulations with nonlinear constraints.

  13. Algorithms

    Indian Academy of Sciences (India)

have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  14. A Fast parallel tridiagonal algorithm for a class of CFD applications

    Science.gov (United States)

    Moitra, Stuti; Sun, Xian-He

    1996-01-01

The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents a variation of the PDD algorithm, the reduced PDD algorithm, for study. The new algorithm maintains the minimum communication provided by the PDD algorithm, but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.

  15. Fast alternating projected gradient descent algorithms for recovering spectrally sparse signals

    KAUST Repository

    Cho, Myung

    2016-06-24

We propose fast algorithms that speed up or improve the performance of recovering spectrally sparse signals from underdetermined measurements. Our algorithms are based on a non-convex approach of using alternating projected gradient descent for structured matrix recovery. We apply this approach to two formulations of structured matrix recovery: Hankel and Toeplitz mosaic structured matrix, and Hankel structured matrix. Our methods provide better recovery performance, and faster signal recovery than existing algorithms, including atomic norm minimization.
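
The structured-matrix ingredient above can be illustrated in isolation. The sketch below does Cadzow-style alternating projection between rank-r matrices and Hankel structure to denoise a spectrally sparse signal; it is a simpler relative of, not the same as, the paper's alternating projected gradient descent:

```python
import numpy as np
from scipy.linalg import hankel

def hankel_project(M):
    """Average along antidiagonals: the nearest Hankel matrix in Frobenius norm."""
    m, n = M.shape
    s = np.zeros(m + n - 1)
    cnt = np.zeros(m + n - 1)
    for i in range(m):
        for j in range(n):
            s[i + j] += M[i, j]
            cnt[i + j] += 1
    v = s / cnt
    return hankel(v[:m], v[m - 1:])

def cadzow(x, r, iters=20):
    """Alternate projections onto rank-r matrices and Hankel structure."""
    m = len(x) // 2 + 1
    H = hankel(x[:m], x[m - 1:])
    for _ in range(iters):
        U, S, Vt = np.linalg.svd(H, full_matrices=False)
        H = hankel_project(U[:, :r] * S[:r] @ Vt[:r])   # rank-r truncation, then Hankel
    return np.r_[H[:, 0], H[-1, 1:]]

t = np.arange(64)
x = np.cos(0.3 * t) + 0.5 * np.cos(1.1 * t)    # spectrally sparse: rank-4 Hankel
noisy = x + 0.1 * np.random.default_rng(2).standard_normal(64)
print(np.linalg.norm(cadzow(noisy, r=4) - x), np.linalg.norm(noisy - x))
```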

  16. Algorithms

    Indian Academy of Sciences (India)

algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language ... [Figure 2: symbols used in the flowchart language to represent Assignment, Read, and Print.]

  17. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  18. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    Science.gov (United States)

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  19. A New GCD Algorithm for Quadratic Number Rings with Unique Factorization

    DEFF Research Database (Denmark)

    Agarwal, Saurabh; Frandsen, Gudmund Skovbjerg

    2006-01-01

We present an algorithm to compute a greatest common divisor of two integers in a quadratic number ring that is a unique factorization domain. The algorithm uses bit operations in a ring of discriminant Δ. This appears to be the first gcd algorithm of complexity o(n²) for any fixed non-Euclidean...

  20. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    Science.gov (United States)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms can adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses in terms of mean and mean square performance for the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. Besides, the simulation results also demonstrate a good match for our proposed analytical expressions.
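
A single-node sketch of RLS with a variable forgetting factor driven by the a priori error, the mechanism the diffusion variants build on; the adaptation rule and constants below are illustrative, not the paper's exact VFF-DRLS update:

```python
import numpy as np

def vff_rls(X, d, lam_min=0.9, lam_max=0.9999, alpha=0.1):
    n = X.shape[1]
    w = np.zeros(n)
    P = 1e3 * np.eye(n)
    for x, dk in zip(X, d):
        e = dk - x @ w                                   # a priori error
        lam = np.clip(lam_max - alpha * e * e, lam_min, lam_max)  # illustrative VFF rule
        k = P @ x / (lam + x @ P @ x)                    # gain vector
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam               # inverse-correlation update
    return w

rng = np.random.default_rng(3)
w_true = np.array([1.0, -0.5, 0.25])
X = rng.standard_normal((2000, 3))
d = X @ w_true + 0.01 * rng.standard_normal(2000)
print(vff_rls(X, d))                                     # close to w_true
```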

  1. Algorithms

    Indian Academy of Sciences (India)

In the program shown in Figure 1, we have repeated the algorithm M times, and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  2. Algorithms

    Indian Academy of Sciences (India)

algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  3. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    Science.gov (United States)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.

  4. A Hybrid Algorithm for Non-negative Matrix Factorization Based on Symmetric Information Divergence.

    Science.gov (United States)

    Devarajan, Karthik; Ebrahimi, Nader; Soofi, Ehsan

    2015-11-01

The objective of this paper is to provide a hybrid algorithm for non-negative matrix factorization based on a symmetric version of Kullback-Leibler divergence, known as intrinsic information. The convergence of the proposed algorithm is shown for several members of the exponential family such as the Gaussian, Poisson, gamma and inverse Gaussian models. The speed of this algorithm is examined and its usefulness is illustrated through some applied problems.

  5. Determination of the Main Influencing Factors on Road Fatalities Using an Integrated Neuro-Fuzzy Algorithm

    Directory of Open Access Journals (Sweden)

    Amir Masoud Rahimi

Full Text Available This paper proposes an integrated algorithm of neuro-fuzzy techniques to examine the complex impact of socio-technical influencing factors on road fatalities. The proposed algorithm can handle complexity, non-linearity, and fuzziness in the modeling environment. The neuro-fuzzy algorithm for determining the potential influencing factors on road fatalities consists of two phases. In the first phase, intelligent techniques are compared for their accuracy in predicting the fatality rate with respect to some socio-technical influencing factors. In the second phase, sensitivity analysis is performed to calculate the pure effect of the potential influencing factors on the fatality rate. The applicability and usefulness of the proposed algorithm is illustrated using data from Iran's provincial road transportation systems in the period 2012-2014. Results show that road design improvement, number of trips, and number of passengers are the most influential factors on the provincial road fatality rate.

  6. Asymmetry in some common assignment algorithms: the dispersion factor solution

    OpenAIRE

    T de la Barra; B Pérez

    1986-01-01

Many common assignment algorithms are based on Dial's original design to determine the paths that trip makers will follow from a given origin to destination centroids. The purpose of this paper is to show that the rules that have to be applied result in two unwanted properties. The first is that trips assigned from an origin centroid i to a destination j can be dramatically different from those resulting from centroid j to centroid i, even if the number of trips is the same and the network is ...

  7. Matrix completion via a low rank factorization model and an Augmented Lagrangean Succesive Overrelaxation Algorithm

    Directory of Open Access Journals (Sweden)

    Hugo Lara

    2014-12-01

Full Text Available The matrix completion problem (MC) has been approximated by using the nuclear norm relaxation. Some algorithms based on this strategy require the computationally expensive singular value decomposition (SVD) at each iteration. One way to avoid SVD calculations is to use alternating methods, which pursue the completion through matrix factorization with a low rank condition. In this work an augmented Lagrangean-type alternating algorithm is proposed. The new algorithm uses duality information to define the iterations, in contrast to the solely primal LMaFit algorithm, which employs a Successive Over-Relaxation scheme. The convergence result is studied. Some numerical experiments are given to compare the numerical performance of both proposals.
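
The factorization model itself is easy to sketch: fix the observed entries, alternate least-squares solves for the factors U and V, and re-impose the data. This is the primal factorization idea (as in LMaFit), not the paper's augmented-Lagrangean SOR iteration; all names and sizes below are illustrative.

```python
import numpy as np

def als_complete(M, mask, r, iters=100, seed=0):
    """Complete M ~ U V from the entries where mask is True, by alternating
    least squares with re-imposition of the observed data."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    Z = np.where(mask, M, U @ V)            # current completed matrix
    for _ in range(iters):
        U = Z @ np.linalg.pinv(V)           # min ||U V - Z||_F over U
        V = np.linalg.pinv(U) @ Z           # ... then over V
        Z = np.where(mask, M, U @ V)        # keep the known entries fixed
    return U @ V

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30))   # rank 4
mask = rng.random((30, 30)) < 0.5                                  # 50% observed
err = np.linalg.norm(als_complete(A, mask, 4) - A) / np.linalg.norm(A)
print(err)                                                         # small relative error
```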

  8. Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor

    Directory of Open Access Journals (Sweden)

    Baofu Fang

    2014-01-01

Full Text Available Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, which gives a new way to measure the individual cooperative willingness of self-interested robots in the problem of multirobot task allocation. An emotional cooperation factor is introduced into the self-interested robot and is updated based on emotional attenuation and external stimuli. Then a multirobot pursuit task allocation algorithm is proposed, which is based on the emotional cooperation factor. Combined with a two-step auction algorithm, it recruits team leaders and team collaborators, sets up pursuit teams, and finally uses certain strategies to complete the pursuit task. In order to verify the effectiveness of this algorithm, comparative experiments have been done with the instantaneous greedy optimal auction algorithm; the results show that the total pursuit time and total team revenue can be optimized by using this algorithm.

  9. Two Expectation-Maximization Algorithms for Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2014-01-01

    Roč. 130, 23 April (2014), s. 83-97 ISSN 0925-2312 R&D Projects: GA ČR GAP202/10/0262 Grant - others:GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Program:ED Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean Factor analysis * Binary Matrix factorization * Neural networks * Binary data model * Dimension reduction * Bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014

  10. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    Energy Technology Data Exchange (ETDEWEB)

    2016-08-22

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for $\\WW$ and $\\HH$. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementation, our algorithm is also flexible: It performs well for both dense and sparse matrices, and allows the user to choose any one of the multiple algorithms for solving the updates to low rank factors $\\WW$ and $\\HH$ within the alternating iterations.
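
    A serial, single-node sketch of the alternating NLS iteration at the core of such methods (illustrative only; the paper's contribution, namely the MPI data distribution and communication-avoiding schedule, is omitted; SciPy's nnls solver is assumed):

        import numpy as np
        from scipy.optimize import nnls

        def anls_nmf(A, rank=4, iters=30):
            # Alternate exact nonnegative least-squares solves for H and W.
            m, n = A.shape
            rng = np.random.default_rng(0)
            W = np.abs(rng.standard_normal((m, rank)))
            for _ in range(iters):
                H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
                W = np.column_stack([nnls(H.T, A[i])[0] for i in range(m)]).T
            return W, H

        A = np.abs(np.random.default_rng(1).standard_normal((20, 12)))
        W, H = anls_nmf(A)
        print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))   # relative fit error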

  11. Approximation Algorithms for k-Connected Graph Factors

    NARCIS (Netherlands)

    Manthey, Bodo; Waanders, Marten; Sanita, Laura; Skutella, Martin

    2016-01-01

    Finding low-cost spanning subgraphs with given degree and connectivity requirements is a fundamental problem in the area of network design. We consider the problem of finding d-regular spanning subgraphs (or d-factors) of minimum weight with connectivity requirements. For the case of

  12. Conditions for bound states in a periodic linear chain, and the spectra of a class of Toeplitz operators in terms of polylogarithm functions

    International Nuclear Information System (INIS)

    Prunele, E de

    2003-01-01

    Conditions for bound states for a periodic linear chain are given within the framework of an exactly solvable non-relativistic quantum-mechanical model in three-dimensional space. These conditions express the strength parameter in terms of the distance between two consecutive centres of the chain, and of the range interaction parameter. This expression can be formulated in terms of polylogarithm functions, and, in some particular cases, in terms of the Riemann zeta function. An interesting mathematical result is that these expressions also correspond to the spectra of Toeplitz complex symmetric operators. The non-trivial zeros of the Riemann zeta function are interpreted as multiple points, at the origin, of the spectra of these Toeplitz operators

  13. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance systems. A parallel coded aperture compressive imaging system is proposed to reduce the needed high-resolution coded mask requirements and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion target detection and tracking algorithms directly using the compressive sampling images are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image and for foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask can yield better detection results. However, using the random Gaussian and Toeplitz phase masks can achieve higher resolution reconstructed images. Our tracking algorithm can achieve a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.
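
    The sparse-coding step can be illustrated with a small iterative soft-thresholding (ISTA) solver for the l1 problem min_a 0.5*||Phi a - y||^2 + lam*||a||_1, where the columns of Phi would hold target and noise templates. This is a generic sketch under those assumptions, not the authors' tracker; the dictionary below is a toy.

        import numpy as np

        def ista(Phi, y, lam=0.05, iters=300):
            # Proximal gradient (soft-thresholding) iteration for the lasso.
            L = np.linalg.norm(Phi, 2) ** 2    # step size from the Lipschitz bound
            a = np.zeros(Phi.shape[1])
            for _ in range(iters):
                z = a - Phi.T @ (Phi @ a - y) / L
                a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            return a

        rng = np.random.default_rng(0)
        Phi = rng.standard_normal((30, 60))    # template dictionary (toy)
        a_true = np.zeros(60)
        a_true[[3, 17]] = [1.0, -0.7]
        y = Phi @ a_true + 0.01 * rng.standard_normal(30)
        print(np.nonzero(np.abs(ista(Phi, y)) > 0.1)[0])   # recovers indices 3, 17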

  14. Investigating the enhanced Best Performance Algorithm for Annual Crop Planning problem based on economic factors.

    Science.gov (United States)

    Adewumi, Aderemi Oluyinka; Chetty, Sivashan

    2017-01-01

    The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on this problem by presenting a new mathematical formulation, which is based on market economic factors. To determine solutions, a new local search metaheuristic algorithm called the enhanced Best Performance Algorithm (eBPA) is investigated. eBPA's results are compared against those of two well-known local search metaheuristic algorithms, Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems.

  15. A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time...... as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...

  16. An efficient algorithm to compute row and column counts for sparse Cholesky factorization

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, J.R. (Xerox Palo Alto Research Center, CA (United States)); Ng, E.G.; Peyton, B.W. (Oak Ridge National Lab., TN (United States))

    1992-09-01

    Let an undirected graph G be given, along with a specified depth-first spanning tree T. We give almost-linear-time algorithms to solve the following two problems: First, for every vertex v, compute the number of descendants w of v for which some descendant of w is adjacent (in G) to v. Second, for every vertex v, compute the number of ancestors of v that are adjacent (in G) to at least one descendant of v. These problems arise in Cholesky and QR factorizations of sparse matrices. Our algorithms can be used to determine the number of nonzero entries in each row and column of the triangular factor of a matrix from the zero/nonzero structure of the matrix. Such a prediction makes storage allocation for sparse matrix factorizations more efficient. Our algorithms run in time linear in the size of the input times a slowly-growing inverse of Ackermann's function. The best previously known algorithms for these problems ran in time linear in the sum of the nonzero counts, which is usually much larger. We give experimental results demonstrating the practical efficiency of the new algorithms.
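
    Row and column counts of this kind are computed by traversals of the elimination tree of the matrix graph. A compact sketch of Liu's elimination-tree computation, with the ancestor-compression trick that gives the near-linear, inverse-Ackermann-type behavior (dense 0/1 pattern for simplicity; this is a related primitive, not the counting algorithm of the paper itself):

        import numpy as np

        def elimination_tree(A):
            # parent[i] is the parent of vertex i in the elimination tree,
            # or -1 for a root; A is a symmetric 0/1 sparsity pattern.
            n = A.shape[0]
            parent = np.full(n, -1)
            ancestor = np.full(n, -1)      # compressed ancestor pointers
            for k in range(n):
                for i in np.flatnonzero(A[:k, k]):    # entries i < k in column k
                    while i != -1 and i < k:
                        nxt = ancestor[i]
                        ancestor[i] = k               # path compression
                        if nxt == -1:
                            parent[i] = k
                        i = nxt
            return parent

        # arrow pattern: every vertex couples to the last one
        A = np.eye(5, dtype=int)
        A[:, -1] = A[-1, :] = 1
        print(elimination_tree(A))   # -> [ 4  4  4  4 -1]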

  18. An effective method to identify various factors for denoising wrist pulse signal using wavelet denoising algorithm.

    Science.gov (United States)

    Garg, Nidhi; Ryait, Hardeep S; Kumar, Amod; Bisht, Amandeep

    2018-01-01

    Wrist pulse signal (WPS) analysis is a non-invasive method to investigate human health. During signal acquisition, noise is recorded along with the WPS. A clean WPS with a high peak signal-to-noise ratio is a prerequisite for use in disease diagnosis. The wavelet transform is a commonly used method in the filtration process. Despite its extensive use, the appropriate factors for the wavelet denoising algorithm are not yet clear in WPS applications. The presented work gives an effective approach to select the various factors for the wavelet denoising algorithm. With the appropriate selection of wavelet and factors, it is possible to reduce noise in the WPS. In this work, all the factors of wavelet denoising are varied successively. Evaluation parameters such as MSE, PSNR, PRD and fit coefficient are used to assess the performance of the wavelet denoising algorithm at each step. The results obtained from computerized WPS illustrate that the presented approach can successfully select the mother wavelet and other factors for the wavelet denoising algorithm. The selection of db9 as the mother wavelet with the SURE threshold function and single rescaling function using the UWT proved the best option for our database. The empirical results prove that the methodology discussed here can be effective in denoising WPS of any morphological pattern.
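
    A compact sketch of the denoising pipeline whose factors (mother wavelet, threshold rule, rescaling) the paper tunes, assuming the PyWavelets package. For brevity it uses the decimated DWT and a universal threshold, whereas the paper favours db9 with a SURE threshold on the undecimated transform; the test signal is synthetic.

        import numpy as np
        import pywt

        def wavelet_denoise(x, wavelet="db9", mode="soft"):
            coeffs = pywt.wavedec(x, wavelet)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
            thr = sigma * np.sqrt(2 * np.log(len(x)))           # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(x)]

        t = np.linspace(0, 1, 1024)
        pulse = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
        noisy = pulse + 0.2 * np.random.default_rng(0).standard_normal(t.size)
        print(np.std(wavelet_denoise(noisy) - pulse))   # below the 0.2 noise level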

  19. An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space

    Science.gov (United States)

    Kwan, Trevor Hocksun; Wu, Xiaofeng

    2017-03-01

    Maximum power point tracking (MPPT) techniques are popularly used for maximizing the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depends both on the panel temperature and the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control and neural networks. This paper presents an algorithm which improves the MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect oscillation by checking the sign (i.e. direction) of the duty cycle perturbation between the current and previous time steps. If there is a difference in the signs, then it is clear an oscillation is present, and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady-state oscillations become negligibly small, which subsequently allows for a smooth steady-state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels that are typically encountered in outer space is conducted. Simulation and experimental results prove that the proposed algorithm is fast and stable in comparison not only to conventional fixed step size counterparts, but also to previous variable step size algorithms.
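
    A toy sketch of the core rule, namely scaling the duty-cycle perturbation down by a constant factor whenever the P&O direction flips; the function names and the P-V curve below are illustrative, not the paper's simulation model.

        def adaptive_po(power, state, shrink=0.5):
            # state = (previous power, previous direction, current step size)
            prev_p, prev_dir, step = state
            direction = prev_dir if power >= prev_p else -prev_dir   # P&O rule
            if direction != prev_dir:     # sign change: oscillating around the MPP
                step *= shrink            # scale the perturbation down
            return direction * step, (power, direction, step)

        P = lambda d: 100.0 - 400.0 * (d - 0.6) ** 2   # toy P-V curve, MPP at d = 0.6
        d, state = 0.3, (P(0.3), +1, 0.05)
        for _ in range(40):
            delta, state = adaptive_po(P(d), state)
            d += delta
        print(round(d, 4), round(state[2], 6))   # duty near 0.6, step shrunk small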

  20. Metropolis-Hastings Robbins-Monro Algorithm for Confirmatory Item Factor Analysis

    Science.gov (United States)

    Cai, Li

    2010-01-01

    Item factor analysis (IFA), already well established in educational measurement, is increasingly applied to psychological measurement in research settings. However, high-dimensional confirmatory IFA remains a numerical challenge. The current research extends the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, initially proposed for…

  1. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng

    2014-01-01

    Eukaryotic transcriptional regulation is known to be highly connected through networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. Recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on different rationales, each possessed its own merits and claimed to outperform the others. However, the claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms, and based on these indices we conducted a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted or proposed, the cooperativity of each PCTFP was measured, and for each performance index a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs may be strong by some measures but weak by others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted or proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to measure the performance of new algorithms.

  2. Mining association rules between stroke risk factors based on the Apriori algorithm.

    Science.gov (United States)

    Li, Qin; Zhang, Yiyan; Kang, Hongyu; Xin, Yi; Shi, Caicheng

    2017-07-20

    Stroke is a frequently occurring disease and a severe threat to human health. We aimed to explore the associations between stroke risk factors. Subjects aged 40 or above were asked to complete surveys with a unified questionnaire and to undergo laboratory examinations. The Apriori algorithm was applied to find meaningful association rules. The selected association rules were divided into 8 groups by the number of antecedent items, and the rules with a higher confidence degree in each group were regarded as the meaningful rules. The training set used in the association analysis consists of a total of 985,325 samples, with 15,835 stroke patients (1.65%) and 941,490 without stroke (98.35%). Based on the threshold we set for the Apriori algorithm, eight meaningful association rules were obtained between stroke and its high-risk factors, while between the high-risk factors there are 25 meaningful association rules. Based on the Apriori algorithm, meaningful association rules between the high-risk factors of stroke were found, providing a feasible way to reduce the risk of stroke with early intervention.
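
    A self-contained toy Apriori pass (level-wise frequent-itemset mining; the confidence of a rule X -> Y is support(X union Y) / support(X)). The risk-factor transactions below are invented for illustration and do not reflect the study's data.

        def apriori(transactions, min_support=0.5):
            # Level-wise frequent-itemset mining; returns {itemset: support}.
            n = len(transactions)
            support = lambda s: sum(s <= t for t in transactions) / n
            level = {frozenset([i]) for t in transactions for i in t}
            level = {s for s in level if support(s) >= min_support}
            freq = {}
            while level:
                freq.update({s: support(s) for s in level})
                k = len(next(iter(level))) + 1
                cand = {a | b for a in level for b in level if len(a | b) == k}
                level = {c for c in cand if support(c) >= min_support}
            return freq

        tx = [frozenset(t) for t in (
            {"hypertension", "diabetes", "stroke"},
            {"hypertension", "smoking", "stroke"},
            {"hypertension", "diabetes"},
            {"diabetes", "stroke"})]
        for s, sup in sorted(apriori(tx).items(), key=lambda kv: -kv[1]):
            print(set(s), sup)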

  3. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    Science.gov (United States)

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…

  4. Algorithm for calculating an availability factor for the inhalation of radioactive and chemical materials

    International Nuclear Information System (INIS)

    1984-02-01

    This report presents a method of calculating the availability of buried radioactive and nonradioactive materials via an inhalation pathway. Availability is the relationship between the concentration of a substance in the soil and the dose rate to a human receptor. Algorithms presented for calculating the availability of elemental inorganic substances are based on atmospheric enrichment factors; those presented for calculating the availability of organic substances are based on vapor pressures. The basis, use, and limitations of the developed equations are discussed. 32 references, 5 tables

  5. The evaluation of the individual impact factor of researchers and research centers using the RC algorithm.

    Science.gov (United States)

    Cordero-Villafáfila, Amelia; Ramos-Brieva, Jesus A

    2015-01-01

    The RC algorithm quantitatively evaluates the personal impact factor of the scientific production of isolated researchers. The authors propose an adaptation of RC to evaluate the personal impact factor of research centers, hospitals and other research groups. Thus, these could be classified according to the accredited impact of the results of their scientific work between researchers of the same scientific area. This could be useful for channelling budgets and grants for research. Copyright © 2013 SEP y SEPB. Published by Elsevier España. All rights reserved.

  6. Genetic Algorithm and Graph Theory Based Matrix Factorization Method for Online Friend Recommendation

    Directory of Open Access Journals (Sweden)

    Qu Li

    2014-01-01

    Full Text Available Online friend recommendation is a fast developing topic in web mining. In this paper, we used SVD matrix factorization to model user and item feature vectors and used stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold start problem and data sparsity, we used a KNN model to influence the user feature vectors. At the same time, we used graph theory to partition communities with fairly low time and space complexity. What is more, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
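
    A minimal sketch of the stochastic-gradient update for such a latent-factor model (the paper's KNN cold-start smoothing and graph-based community partitioning are omitted; the ratings below are toy values):

        import numpy as np

        def sgd_mf(ratings, n_users, n_items, rank=4, lr=0.02, lam=0.05, epochs=50):
            # Factorize sparse ratings [(user, item, rating), ...] as P @ Q.T.
            rng = np.random.default_rng(0)
            P = 0.1 * rng.standard_normal((n_users, rank))
            Q = 0.1 * rng.standard_normal((n_items, rank))
            for _ in range(epochs):
                for u, i, r in ratings:
                    err = r - P[u] @ Q[i]
                    P[u], Q[i] = (P[u] + lr * (err * Q[i] - lam * P[u]),
                                  Q[i] + lr * (err * P[u] - lam * Q[i]))
            return P, Q

        ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 2, 2.0), (2, 1, 1.0)]
        P, Q = sgd_mf(ratings, n_users=3, n_items=3)
        print(round(float(P[2] @ Q[0]), 2))   # predicted score for an unseen pair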

  7. Asymptotic expansions for Toeplitz operators on symmetric spaces of general type

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav; Upmeier, H.

    2015-01-01

    Roč. 367, č. 1 (2015), s. 423-476 ISSN 0002-9947 R&D Projects: GA ČR GA201/09/0473 Institutional support: RVO:67985840 Keywords : symmetric space * symmetric domain * Berezin quantization Subject RIV: BA - General Mathematics Impact factor: 1.196, year: 2015 http://www.ams.org/journals/tran/2015-367-01/S0002-9947-2014-06130-8/

  8. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  9. Automated morphological classification of galaxies based on projection gradient nonnegative matrix factorization algorithm

    Science.gov (United States)

    Selim, I. M.; Abd El Aziz, Mohamed

    2017-04-01

    The development of automated morphological classification schemes can successfully distinguish between morphological types of galaxies and can be used for studies of the formation and subsequent evolution of galaxies in our universe. In this paper, we present a new automated, machine supervised learning astronomical classification scheme based on the Nonnegative Matrix Factorization algorithm. The scheme makes distinctions between all types roughly corresponding to Hubble types, such as elliptical, lenticular, spiral, and irregular galaxies. The proposed algorithm is evaluated on two datasets with different numbers of images (a small dataset containing 110 images and a large dataset containing 700 images). The experimental results show that galaxy images from the EFIGI catalog can be classified automatically with an accuracy of ~93% for the small dataset and ~92% for the large one. These results are in good agreement with the visual classifications.

  10. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    Science.gov (United States)

    Chang, Weng-Long

    2012-03-01

    Assume that n is a positive integer. If there is an integer M such that M^2 ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence does not have a solution, then C is said to be a quadratic noncongruence (mod n). The task of solving this problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruence and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift and comparison operations, namely bitwise and full addition, subtraction, left shifting and comparison, can be performed using strands of DNA.
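
    For contrast, on a conventional machine the quadratic-congruence test is immediate when the modulus is an odd prime, via Euler's criterion; the DNA algorithm targets general n and integer factoring, which this snippet does not address.

        def is_quadratic_congruence(c, p):
            # Euler's criterion, valid for an odd prime p with gcd(c, p) = 1:
            # c is a square mod p iff c^((p-1)/2) == 1 (mod p).
            return pow(c % p, (p - 1) // 2, p) == 1

        print([c for c in range(1, 11) if is_quadratic_congruence(c, 11)])
        # -> [1, 3, 4, 5, 9], exactly the nonzero squares modulo 11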

  11. Design of optimal input–output scaling factors based fuzzy PSS using bat algorithm

    Directory of Open Access Journals (Sweden)

    D.K. Sambariya

    2016-06-01

    Full Text Available In this article, a fuzzy logic based power system stabilizer (FPSS) is designed by tuning its input–output scaling factors. Two input signals to the FPSS are considered, change of speed and change in power, and the output signal is a correcting voltage signal. The normalizing factors of these signals are treated as the optimization problem, with minimization of the integral of square error in single-machine and multi-machine power systems. These factors are optimally determined with the bat algorithm (BA) and used as the scaling factors of the FPSS. The performance of a power system with such a designed BA-based FPSS (BA-FPSS) is compared to that of the response with FPSS, Harmony Search Algorithm based FPSS (HSA-FPSS) and Particle Swarm Optimization based FPSS (PSO-FPSS). The systems considered are single-machine connected to infinite-bus, two-area 4-machine 10-bus and IEEE New England 10-machine 39-bus power systems, for evaluating the performance of the BA-FPSS. The comparison is carried out in terms of the integral of time-weighted absolute error (ITAE), integral of absolute error (IAE) and integral of square error (ISE) of the speed response for systems with FPSS, HSA-FPSS and BA-FPSS. The superior performance of systems with the BA-FPSS is established considering eight plant conditions of each system, which represent a wide range of operating conditions.
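
    The three comparison indices are simple integrals of the speed-deviation error; a sketch of their evaluation from a sampled response (the signal below is synthetic, not a power-system simulation):

        import numpy as np

        def _integral(y, t):
            # trapezoidal rule
            return float(np.dot(np.diff(t), (y[1:] + y[:-1]) * 0.5))

        def error_indices(t, e):
            return {"ITAE": _integral(t * np.abs(e), t),
                    "IAE":  _integral(np.abs(e), t),
                    "ISE":  _integral(e ** 2, t)}

        t = np.linspace(0.0, 10.0, 2001)
        e = np.exp(-0.8 * t) * np.sin(6.0 * t)   # a decaying speed deviation
        print({k: round(v, 4) for k, v in error_indices(t, e).items()})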

  12. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.

  13. A Framework for Batched and GPU-Resident Factorization Algorithms Applied to Block Householder Transformations

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Tingzing Tim [University of Tennessee (UT); Tomov, Stanimire Z [ORNL; Luszczek, Piotr R [ORNL; Dongarra, Jack J [ORNL

    2015-01-01

    As modern hardware keeps evolving, an increasingly effective approach to developing energy efficient and high-performance solvers is to design them to work on many small-size and independent problems. Many applications already need this functionality, especially for GPUs, which are currently known to be about four to five times more energy efficient than multicore CPUs. We describe the development of one-sided factorizations that work for a set of small dense matrices in parallel, and we illustrate our techniques on the QR factorization based on Householder transformations. We refer to this mode of operation as a batched factorization. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-only execution. This is in contrast to the hybrid CPU-GPU algorithms that rely heavily on using the multicore CPU for specific parts of the workload. But for a system to benefit fully from the GPU's significantly higher energy efficiency, avoiding the use of the multicore CPU must be a primary design goal, so the system can rely more heavily on the more efficient GPU. Additionally, this will result in the removal of the costly CPU-to-GPU communication. Furthermore, we do not use a single symmetric multiprocessor (on the GPU) to factorize a single problem at a time. We illustrate how our performance analysis, and the use of profiling and tracing tools, guided the development and optimization of our batched factorization to achieve up to a 2-fold speedup and a 3-fold energy efficiency improvement compared to our highly optimized batched CPU implementations based on the MKL library (when using two sockets of Intel Sandy Bridge CPUs). Compared to a batched QR factorization featured in the CUBLAS library for GPUs, we achieved up to a 5x speedup on the K40 GPU.
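
    Batched mode amounts to applying one small QR kernel to many independent matrices; a NumPy sketch of the unblocked Householder kernel and the batched loop (serial here, GPU-parallel in the paper; this is an illustration, not the paper's implementation):

        import numpy as np

        def householder_qr(A):
            # Unblocked Householder QR: returns Q, R with A = Q @ R.
            A = np.array(A, dtype=float)
            m, n = A.shape
            Q = np.eye(m)
            for k in range(min(m, n)):
                x = A[k:, k]
                s = np.linalg.norm(x)
                if s == 0.0:
                    continue
                v = x.copy()
                v[0] += s if x[0] >= 0 else -s                  # avoid cancellation
                v /= np.linalg.norm(v)
                A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])   # reflect trailing block
                Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)     # accumulate Q
            return Q, np.triu(A)

        rng = np.random.default_rng(0)
        batch = rng.standard_normal((8, 16, 12))        # 8 small independent problems
        results = [householder_qr(Ai) for Ai in batch]  # the "batched" loop
        Q, R = results[0]
        print(np.allclose(Q @ R, batch[0]))             # True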

  14. A computational method using the random walk with restart algorithm for identifying novel epigenetic factors.

    Science.gov (United States)

    Li, JiaRui; Chen, Lei; Wang, ShaoPeng; Zhang, YuHang; Kong, XiangYin; Huang, Tao; Cai, Yu-Dong

    2018-02-01

    Epigenetic regulation has long been recognized as a significant factor in various biological processes, such as development, transcriptional regulation, spermatogenesis, and chromosome stabilization. Epigenetic alterations lead to many human diseases, including cancer, depression, autism, and immune system defects. Although efforts have been made to identify epigenetic regulators, it remains a challenge to systematically uncover all the components of epigenetic regulation at the genome level using experimental approaches. Advances in constructing protein-protein interaction (PPI) networks provide an excellent opportunity to identify novel epigenetic factors computationally at the genome level. In this study, we identified potential epigenetic factors by using a computational method that applies the random walk with restart (RWR) algorithm on a PPI network, with reported epigenetic factors as seed nodes. False positives were identified by their specific roles in the PPI network or by a low-confidence interaction and a weak functional relationship with epigenetic regulators. After filtering out the false positives, 26 candidate epigenetic factors were finally obtained. According to previous studies, 22 of these are thought to be involved in epigenetic regulation, suggesting the robustness of our method. Our study provides a novel computational approach that successfully identified 26 potential epigenetic factors, paving the way to deepening our understanding of the epigenetic mechanism.
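
    A small dense sketch of the RWR iteration on an undirected network, with known regulators as restart seeds. The adjacency matrix is a toy; real PPI networks are sparse and confidence-weighted, and the snippet assumes no isolated nodes.

        import numpy as np

        def rwr(W, seeds, restart=0.3, tol=1e-10):
            # Iterate r <- (1 - c) P r + c e to the stationary affinity vector.
            P = W / W.sum(axis=0, keepdims=True)   # column-stochastic transitions
            e = np.zeros(W.shape[0])
            e[seeds] = 1.0 / len(seeds)
            r = e.copy()
            while True:
                r_next = (1.0 - restart) * P @ r + restart * e
                if np.abs(r_next - r).sum() < tol:
                    return r_next
                r = r_next

        W = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 0, 0],
                      [1, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)
        print(rwr(W, seeds=[0, 1]).round(3))   # rank candidates by affinity to seeds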

  15. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    Science.gov (United States)

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose-volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human intervention
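
    The three steps follow the standard PSO template; a generic minimizer sketch in which each particle would be a weighting-factor vector and f() would invoke the plan optimizer plus evaluation function (here f is a toy quadratic stand-in):

        import numpy as np

        def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n, dim))      # random initial swarm of weights
            v = np.zeros((n, dim))
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            gbest = pbest[pval.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)         # move the particles
                val = np.array([f(p) for p in x])  # evaluate each particle's score
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                gbest = pbest[pval.argmin()].copy()
            return gbest, pval.min()

        weights, score = pso(lambda p: float(np.sum((p - 0.25) ** 2)), dim=3)
        print(weights.round(3), round(score, 6))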

  16. The emission factor of volatile isoprenoids: caveats, model algorithms, response shapes and scaling

    Science.gov (United States)

    Niinemets, Ü.; Monson, R. K.; Arneth, A.; Ciccioli, P.; Kesselmeier, J.; Kuhn, U.; Noe, S. M.; Peñuelas, J.; Staudt, M.

    2010-02-01

    In models of plant volatile isoprenoid emissions, the instantaneous compound emission rate typically scales with the plant's emission capacity under specified environmental conditions, also defined as the emission factor, ES. In the most widely employed plant isoprenoid emission models, the algorithms developed by Guenther and colleagues (1991, 1993), instantaneous variation of the steady-state emission rate is described as the product of ES and light and temperature response functions. When these models are employed in the atmospheric chemistry modeling community, species-specific ES values and parameter values defining the instantaneous response curves are typically considered as constant. In the current review, we argue that ES is largely a modeling concept, importantly depending on our understanding of which environmental factors affect isoprenoid emissions, and consequently needing standardization during ES determination. In particular, there is now increasing consensus that variations in atmospheric CO2 concentration, in addition to variations in light and temperature, need to be included in the emission models. Furthermore, we demonstrate that for less volatile isoprenoids, mono- and sesquiterpenes, the emissions are often jointly controlled by the compound synthesis and volatility, and because of these combined biochemical and physico-chemical properties, specification of ES as a constant value is incapable of describing instantaneous emissions within the sole assumptions of fluctuating light and temperature, as are used in the standard algorithms. The definition of ES also varies depending on the degree of aggregation of ES values in different parameterization schemes (leaf- vs. canopy- or region-level, species vs. plant functional type level), and various aggregated ES schemes are not compatible for different integration models. The summarized information collectively emphasizes the need to update model algorithms by including missing environmental and
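
    The instantaneous algorithm under discussion has the familiar form emission = ES * C_L(Q) * C_T(T); a sketch with the commonly quoted Guenther et al. (1993) response shapes. The constants are the usual published values, reproduced here for illustration only and not taken from this review.

        import numpy as np

        R = 8.314                          # J mol-1 K-1
        ALPHA, C_L1 = 0.0027, 1.066        # light response constants
        C_T1, C_T2 = 95000.0, 230000.0     # J mol-1
        T_S, T_M = 303.0, 314.0            # standard / optimum temperature, K

        def c_light(Q):                    # Q: PPFD in umol m-2 s-1
            return ALPHA * C_L1 * Q / np.sqrt(1.0 + ALPHA ** 2 * Q ** 2)

        def c_temp(T):                     # T: leaf temperature in K
            x = R * T_S * T
            return np.exp(C_T1 * (T - T_S) / x) / (1.0 + np.exp(C_T2 * (T - T_M) / x))

        def isoprene_emission(ES, Q, T):
            return ES * c_light(Q) * c_temp(T)

        print(round(isoprene_emission(ES=1.0, Q=1000.0, T=303.0), 3))  # ~1 at standard conditions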

  17. The leaf-level emission factor of volatile isoprenoids: caveats, model algorithms, response shapes and scaling

    Science.gov (United States)

    Niinemets, Ü.; Monson, R. K.; Arneth, A.; Ciccioli, P.; Kesselmeier, J.; Kuhn, U.; Noe, S. M.; Peñuelas, J.; Staudt, M.

    2010-06-01

    In models of plant volatile isoprenoid emissions, the instantaneous compound emission rate typically scales with the plant's emission potential under specified environmental conditions, also called the emission factor, ES. In the most widely employed plant isoprenoid emission models, the algorithms developed by Guenther and colleagues (1991, 1993), instantaneous variation of the steady-state emission rate is described as the product of ES and light and temperature response functions. When these models are employed in the atmospheric chemistry modeling community, species-specific ES values and parameter values defining the instantaneous response curves are often taken as initially defined. In the current review, we argue that ES as a characteristic used in the models importantly depends on our understanding of which environmental factors affect isoprenoid emissions, and consequently needs standardization during experimental ES determinations. In particular, there is now increasing consensus that in addition to variations in light and temperature, alterations in atmospheric and/or within-leaf CO2 concentrations may need to be included in the emission models. Furthermore, we demonstrate that for less volatile isoprenoids, mono- and sesquiterpenes, the emissions are often jointly controlled by the compound synthesis and volatility. Because of these combined biochemical and physico-chemical drivers, specification of ES as a constant value is incapable of describing instantaneous emissions within the sole assumptions of fluctuating light and temperature as used in the standard algorithms. The definition of ES also varies depending on the degree of aggregation of ES values in different parameterization schemes (leaf- vs. canopy- or region-scale, species vs. plant functional type levels) and various aggregated ES schemes are not compatible for different integration models. The summarized information collectively emphasizes the need to update model algorithms by including

  18. A Method for Correcting the Calibration Factor Used in the TLD Dose Calculation Algorithm

    International Nuclear Information System (INIS)

    Shin, S.; Jin, H.; Son, J.; Song, M.

    1999-01-01

    The method is described for estimating calibration factors used in the TLD neutron dose calculation algorithm in order to assess the personal neutron dose equivalent to radiation workers in a nuclear power plant in accordance with ICRP 60 recommendations. Neutron spectra were measured at several locations inside the reactor containment building of Youngkwang Unit 4 in Korea by using a Bonner multisphere spectrometer (BMS) system. Based on the fractional distribution of measured neutron fluence, four locations were selected for in situ TLD calibration. TL responses for the four selected locations were calculated from the measured spectra and the reported fit response function of TLD-600. TL responses were also measured with Harshaw type 8806 albedo dosemeters mounted on the water phantom, and compared with the calculated TL responses. From the responses measured with Harshaw 8806 TLDs, thermal neutron fluence was evaluated, and used to adjust the neutron spectrum obtained with the BMS. TL responses calculated for the adjusted neutron spectra showed an excellent consistency with the measured TL responses within 15% difference. Neutron calibration factors were calculated for the measured neutron spectra and the D2O-moderated 252Cf spectrum, and used to calculate correction factors, which ranged from 2.38 to 11.18. The correction factor estimated in this way for the known neutron spectrum at an area can be conveniently used to calculate the personal dose equivalent at the area from the calibration factor obtained for a calibration neutron spectrum. (author)

  19. Angioedema of the upper aerodigestive tract: risk factors associated with airway intervention and management algorithm.

    Science.gov (United States)

    Brook, Christopher D; Devaiah, Anand K; Davis, Elizabeth M

    2014-03-01

    Angioedema of the upper aerodigestive tract can lead to significant airway obstruction. To date, no articles have delineated risk factors for progression after initial evaluation. This article presents the results of a retrospective study of patients with angioedema at a single institution. Patients included were consecutive otolaryngology consultations for angioedema in the emergency department (ED) from 1999 to 2003. All patients were evaluated by an otolaryngologist and underwent fiber-optic laryngoscopy, which was repeated serially depending on findings. Data were collected on demographics, comorbidities, intubation, disposition, and progression of angioedema. A total of 177 patients were included in the study: 32 (18%) patients required intubation; 25 (14%) on initial presentation and 7 (4%) who progressed from an initially stable airway to requiring intervention after reevaluation. Analysis of variance (ANOVA) demonstrated a statistically significant association between the location of edema and the rate of intubation, with higher rates in the pharynx and larynx vs the lip and face. Patients who required intubation after progression between serial evaluations were statistically more likely to have edema involving deeper portions of the aerodigestive tract. Patients who required intubation were statistically more likely to be older (average age 61.8 vs 55.1 years, p = 0.03). In this large series of patients managed for aerodigestive angioedema, we demonstrate risk factors associated with airway intervention, and risk factors associated with clinical progression on serial examination to airway intervention. In addition, we demonstrate a successful management algorithm for patients with aerodigestive angioedema. © 2014 ARS-AAOA, LLC.

  20. Factorization of J-unitary matrix polynomials on the line and a Schur algorithm for generalized Nevanlinna functions

    NARCIS (Netherlands)

    Alpay, D.; Dijksma, A.; Langer, H.

    2004-01-01

    We prove that a 2 × 2 matrix polynomial which is J-unitary on the real line can be written as a product of normalized elementary J-unitary factors and a J-unitary constant. In the second part we give an algorithm for this factorization using an analog of the Schur transformation.

  1. Algorithm for predicting CHD death risk in Turkish adults: conventional factors contribute only moderately in women.

    Science.gov (United States)

    Onat, Altan; Can, Günay; Kaya, Ayşem; Keskin, Muhammed; Hayıroğlu, Mert I; Yüksel, Hüsniye

    2017-06-01

    To assist the management strategy for individuals, we determined an algorithm for predicting the risk of coronary heart disease (CHD) death in Turkish adults, a population with a high prevalence of metabolic syndrome (MetS). The risk of CHD death was estimated in 3054 middle-aged adults, followed over 9.08±4.2 years. Cox proportional hazard regression was used to predict risk. Discrimination was assessed using C-statistics. CHD death was identified in 233 subjects. In multivariable analysis, the serum high-density lipoprotein-cholesterol (HDL-C) level was not predictive in men and the non-HDL-C level was not predictive in women. Age, presence of diabetes, systolic blood pressure ≥160 mm Hg, smoking habit, and low physical activity were predictors in both sexes. The exclusion of coronary disease at baseline did not change the risk estimates materially. Using an algorithm of the 7 stated variables, individuals in the highest category of risk score showed a 19- to 50-fold higher spread in the absolute risk of death from CHD than those in the second lowest category. The C-index of the model using age alone was as high as 0.774 in men and 0.836 in women; the conventional risk factors beyond age added an increment in the C-index of only 0.058 in males and 0.042 in females. In a middle-aged population with prevalent MetS, men disclosed the anticipated risk parameters (except for high HDL-C levels) as determinants of the risk of CHD death. On the other hand, serum non-HDL-C levels and moderate systolic hypertension were not relevant in women. The moderate contribution of conventional risk factors (beyond age) to the estimation of the risk of CHD death in women is consistent with the operation of autoimmune activation.

  2. An acenocoumarol dosing algorithm exploiting clinical and genetic factors in South Indian (Dravidian) population.

    Science.gov (United States)

    Krishna Kumar, Dhakchinamoorthi; Shewade, Deepak Gopal; Loriot, Marie-Anne; Beaune, Philippe; Sai Chandran, B V; Balachander, Jayaraman; Adithan, Chandrasekaran

    2015-02-01

    The objective of this study was to determine the influence of CYP2C9, VKORC1, CYP4F2, and GGCX genetic polymorphisms on the mean daily dose of acenocoumarol in South Indian patients and to develop a new pharmacogenetic algorithm based on clinical and genetic factors. Patients receiving acenocoumarol maintenance therapy (n = 230) were included in the study. Single nucleotide polymorphisms (SNPs) of CYP2C9, VKORC1, CYP4F2, and GGCX were genotyped by the real-time polymerase chain reaction (RT-PCR) method. The mean daily acenocoumarol maintenance dose was found to be 3.7 ± 2.3 (SD) mg/day. The CYP2C9 *1*2, CYP2C9 *1*3, and CYP2C9 *2*3 variant genotypes significantly reduced the dose, by 56.7 % (to 2.0 mg), 67.6 % (to 1.6 mg), and 70.3 % (to 1.5 mg) respectively, relative to wild-type carriers (4.1 mg). Genetic variants of CYP2C9 and GGCX (rs11676382) were found to be associated with a lower acenocoumarol dose, whereas CYP4F2 (rs2108622) was associated with higher doses. Age, body mass index (BMI), and variation of CYP2C9, VKORC1, CYP4F2, and GGCX were the major determinants of the acenocoumarol maintenance dose, accounting for 61.8 % of its variability (adjusted r^2 = 0.615). A new algorithm was established to determine the acenocoumarol dose in the South Indian population.

  3. An advanced algorithm for construction of Integral Transport Matrix Method operators using accumulation of single cell coupling factors

    International Nuclear Information System (INIS)

    Powell, B. P.; Azmy, Y. Y.

    2013-01-01

    The Integral Transport Matrix Method (ITMM) has been shown to be an effective method for solving the neutron transport equation in large domains on massively parallel architectures. In the limit of a very large number of processors, the speed of the algorithm, and its suitability for unstructured meshes, i.e. meshes other than an ordered Cartesian grid, are limited by the construction of four matrix operators required for obtaining the solution in each sub-domain. The existing algorithm used for construction of these matrix operators, termed the differential mesh sweep, is computationally expensive and was developed for a structured grid. This work proposes a new algorithm for construction of these operators based on the construction of a single, fundamental matrix representing the transport of a particle along every possible path throughout the sub-domain mesh. Each of the operators is constructed by multiplying an element of this fundamental matrix by two factors dependent only upon the operator being constructed and on the properties of the emitting and incident cells. The ITMM matrix operator construction time for the new algorithm is demonstrated to be shorter than that of the existing algorithm in all tested cases, with both isotropic and anisotropic scattering considered. While also being a more efficient algorithm on a structured Cartesian grid, the new algorithm is promising in its geometric robustness and potential for being applied to an unstructured mesh, with the ultimate goal of application to an unstructured tetrahedral mesh on a massively parallel architecture. (authors)

  4. Adaptive Multiview Nonnegative Matrix Factorization Algorithm for Integration of Multimodal Biomedical Data

    Directory of Open Access Journals (Sweden)

    Bisakha Ray

    2017-08-01

    Full Text Available The amounts and types of available multimodal tumor data are rapidly increasing, and their integration is critical for fully understanding the underlying cancer biology and personalizing treatment. However, the development of methods for effectively integrating multimodal data in a principled manner is lagging behind our ability to generate the data. In this article, we introduce an extension to a multiview nonnegative matrix factorization algorithm (NNMF) for dimensionality reduction and integration of heterogeneous data types, and compare the predictive modeling performance of the method on unimodal and multimodal data. We also present a comparative evaluation of our novel multiview approach and current data integration methods. Our work provides an efficient method to extend an existing dimensionality reduction method. We report rigorous evaluation of the method on large-scale quantitative protein and phosphoprotein tumor data from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), acquired using state-of-the-art liquid chromatography mass spectrometry. Exome sequencing and RNA-Seq data were also available from The Cancer Genome Atlas for the same tumors. For unimodal data, in the case of breast cancer, transcript levels were most predictive of estrogen and progesterone receptor status, and copy number variation was most predictive of human epidermal growth factor receptor 2 status. For ovarian and colon cancers, phosphoprotein and protein levels were most predictive of tumor grade and stage and residual tumor, respectively. When multiview NNMF was applied to multimodal data to predict outcomes, the improvement in performance was not overall statistically significant beyond unimodal data, suggesting that proteomics data may contain more predictive information regarding tumor phenotypes than transcript levels, probably due to the fact that proteins are the functional gene products and therefore a more direct measurement of the functional state of the tumor. Here, we

  5. Algorithms for polynomial spectral factorization and bounded-real balanced state space representations

    NARCIS (Netherlands)

    Rapisarda, P.; Trentelman, H.L.; Minh, H.B.

    We illustrate an algorithm that, starting from the image representation of a strictly bounded-real system, computes a minimal balanced state variable, from which a minimal balanced state realization is readily obtained. The algorithm stems from an iterative procedure to compute a storage function,

  6. On the Cooley-Tukey Fast Fourier algorithm for arbitrary factors ...

    African Journals Online (AJOL)

    Atonuje and Okonta in [1] developed the Cooley-Tukey Fast Fourier transform algorithm and its application to the Fourier transform of discretely sampled data points N, expressed in terms of a power y of 2. In this paper, we extend the formalism of the Cooley-Tukey Fast Fourier transform algorithm of [1]. The method is developed ...
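
    The power-of-2 case extended in [1] is the classic recursion below; a minimal radix-2 sketch (the arbitrary-factor generalization splits N into any factorization rather than into halves):

        import cmath

        def fft(x):
            # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2.
            n = len(x)
            if n == 1:
                return list(x)
            even, odd = fft(x[0::2]), fft(x[1::2])
            tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
            return ([even[k] + tw[k] for k in range(n // 2)] +
                    [even[k] - tw[k] for k in range(n // 2)])

        print([round(abs(c), 6) for c in fft([1, 1, 1, 1, 0, 0, 0, 0])])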

  7. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Full Text Available Two-dimensional (2D) Direction-of-Arrival (DOA) estimation of elevation and azimuth angles for noncoherent, mixed coherent and noncoherent, and coherent sources using extended three parallel uniform linear arrays (ULAs) is proposed. Most existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources, as follows: use of a large number of snapshots, estimation failure for elevation and azimuth angles in the range typical of mobile communication, and estimation of coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.
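
    Contribution (2) rests on rearranging a snapshot into a Toeplitz matrix; an illustrative single-snapshot construction for a hypothetical 7-element ULA (not the paper's exact three-array geometry), using SciPy's toeplitz helper:

        import numpy as np
        from scipy.linalg import toeplitz

        rng = np.random.default_rng(0)
        m, theta = 7, np.deg2rad(20.0)
        a = np.exp(-1j * np.pi * np.arange(m) * np.sin(theta))   # ULA steering vector
        x = a + 0.05 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

        # Toeplitz data matrix from the single snapshot: entry (i, j) = x[c + i - j]
        c = m // 2
        T = toeplitz(x[c:], x[c::-1])
        print(T.shape)   # (4, 4): structure that restores rank for coherent sources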

  8. Finite difference methods for option pricing under Lévy processes: Wiener-Hopf factorization approach.

    Science.gov (United States)

    Kudryavtsev, Oleg

    2013-01-01

    In the paper, we consider the problem of pricing options under wide classes of Lévy processes. We propose a general approach to numerical methods based on a finite difference approximation for the generalized Black-Scholes equation. The goal of the paper is to incorporate the Wiener-Hopf factorization into finite difference methods for pricing options in Lévy models with jumps. The method is applicable for pricing barrier and American options. The pricing problem is reduced to a sequence of linear algebraic systems with a dense Toeplitz matrix; then the Wiener-Hopf factorization method is applied. We give an important probabilistic interpretation, based on the theory of infinitely divisible distributions, to the Laurent operators in the corresponding factorization identity. Notice that our algorithm has the same complexity as the ones which use the explicit-implicit scheme with a tridiagonal matrix. However, our method is more accurate. We support the advantage of the new method in terms of accuracy and convergence by using numerical experiments.
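
    The dense Toeplitz systems that arise here can be solved without a general LU factorization; a SciPy illustration of a Levinson-type O(n^2) Toeplitz solve (a generic diagonally dominant example, not the paper's pricing matrices):

        import numpy as np
        from scipy.linalg import solve_toeplitz, toeplitz

        n = 8
        col = 2.0 ** -np.arange(n)            # first column of T
        row = 3.0 ** -np.arange(n)            # first row of T
        col[0] = row[0] = 4.0                 # boost the diagonal for dominance
        b = np.ones(n)
        x = solve_toeplitz((col, row), b)     # Levinson recursion, O(n^2)
        print(np.allclose(toeplitz(col, row) @ x, b))   # True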

  9. Constraint factor in optimization of truss structures via flower pollination algorithm

    Science.gov (United States)

    Bekdaş, Gebrail; Nigdeli, Sinan Melih; Sayin, Baris

    2017-07-01

    The aim of this paper is to investigate the optimum design of truss structures by considering different stress and displacement constraints. For that reason, a flower pollination algorithm based methodology was applied for sizing optimization of space truss structures. The flower pollination algorithm is a metaheuristic algorithm inspired by the pollination process of flowering plants. By imitation of the cross-pollination and self-pollination processes, the sizes of truss members are randomly generated in two ways, and these two types of optimization are controlled with a switch probability. In the study, a 72-bar space truss structure was optimized by using five different cases of the constraint limits. According to the results, a linear relationship between the optimum structure weight and the constraint limits was observed.

  10. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    Science.gov (United States)

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  11. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    Science.gov (United States)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^-1 = C - B^*A^-1B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^-1. For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix lambda and its inverse lambda^-1 are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
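
    The block identity underlying such factorizations can be checked numerically: for a symmetric block matrix [[A, B], [B^T, C]], the bottom-right block of the inverse equals the inverse of the Schur complement C - B^T A^-1 B. This is a generic check of the identity, not the multibody matrices themselves.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4
        A = 3.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
        A = 0.5 * (A + A.T)                       # symmetric, well conditioned
        B = 0.1 * rng.standard_normal((n, n))
        C = 2.0 * np.eye(n)
        M = np.block([[A, B], [B.T, C]])
        schur = C - B.T @ np.linalg.solve(A, B)   # Schur complement of A in M
        print(np.allclose(np.linalg.inv(M)[n:, n:], np.linalg.inv(schur)))   # True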

  12. Upper and lower bounds for disadvantage factors as a test of an algorithm used in a synthesis method

    International Nuclear Information System (INIS)

    Ackroyd, R.T.; Nanneh, M.M.

    1988-01-01

    A lower bound for the disadvantage factor of a lattice cell of arbitrary configuration is obtained using a finite element method which is based on a variational principle for the even-parity angular flux. An upper bound for the disadvantage factor is given by a finite element method using the complementary variational principle for the odd-parity angular flux. These theoretical results are illustrated by calculations for uranium/graphite and uranium/water lattices. As the approximations are refined the fluxes obtained by the first method tend towards the actual flux from below in the moderator, and from above in the fuel. These trends are reversed for the second method. This derivation of benchmarks for disadvantage factors has been undertaken primarily as a test of an important algorithm used by the authors in a method of synthesising transport solutions starting with a diffusion theory approximation. The algorithm is used to convert odd-parity approximations for the angular flux into even-parity approximations and vice versa. (author)

  13. Upper and lower bounds for disadvantage factors as a test of algorithm used in a synthesis method

    International Nuclear Information System (INIS)

    Nanneh, M.M.; Ackroyd, R.T.

    1991-01-01

    A lower bound for the disadvantage factor of a lattice cell of arbitrary configuration is obtained using a finite element method which is based on a variational principle for the even-parity angular flux. An upper bound for the disadvantage factor is given by a finite element method using the complementary variational principle for the odd-parity angular flux. These theoretical results are illustrated by calculations for uranium/graphite and uranium/water lattices. As the approximations are refined the fluxes obtained by the first method tend towards the actual flux from below in the moderator, and from above in the fuel. These trends are reversed for the second method. This derivation of benchmarks for disadvantage factors has been undertaken primarily as a test of an important algorithm used by the authors in a method of synthesising transport solutions starting with a diffusion theory approximation. The algorithm is used to convert odd-parity approximations for the angular flux into even-parity approximations and vice versa. (author). 15 refs., 8 tabs., 9 figs

  14. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    Science.gov (United States)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to eventually reach a plateau. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor within the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
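    The interleaving idea can be sketched with a toy focus model. Everything below is an illustrative assumption (segment count, group count, GA settings, and a fitness function in which the ideal correction is known), not the authors' experimental setup:

    import numpy as np

    rng = np.random.default_rng(1)
    n_seg, n_groups = 64, 4                    # hypothetical SLM segment/group counts
    aberr = rng.uniform(0, 2 * np.pi, n_seg)   # unknown medium-induced phases

    def focus_intensity(phase):
        # toy metric: intensity of the coherent sum over all segments
        return abs(np.exp(1j * (phase - aberr)).sum()) ** 2

    def ga_optimize_group(phase, idx, pop=20, gens=40, mut=0.3):
        # tiny GA over the phases of one interleaved group of segments
        def fitness(c):
            trial = phase.copy()
            trial[idx] = c
            return focus_intensity(trial)
        cand = rng.uniform(0, 2 * np.pi, (pop, idx.size))
        for _ in range(gens):
            fit = np.array([fitness(c) for c in cand])
            elite = cand[np.argsort(fit)[::-1][:pop // 2]]   # keep the better half
            n_child = pop - elite.shape[0]
            pa = rng.integers(0, elite.shape[0], n_child)
            pb = rng.integers(0, elite.shape[0], n_child)
            mask = rng.random((n_child, idx.size)) < 0.5     # uniform crossover
            child = np.where(mask, elite[pa], elite[pb])
            m = rng.random(child.shape) < mut                # random-reset mutation
            child[m] = rng.uniform(0, 2 * np.pi, int(m.sum()))
            cand = np.vstack([elite, child])
        fit = np.array([fitness(c) for c in cand])
        return cand[int(np.argmax(fit))]

    # ISC: group g holds segments g, g + n_groups, g + 2*n_groups, ...
    phase = np.zeros(n_seg)
    for g in range(n_groups):
        idx = np.arange(g, n_seg, n_groups)
        phase[idx] = ga_optimize_group(phase, idx)

    print("improvement factor:",
          focus_intensity(phase) / focus_intensity(np.zeros(n_seg)))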

  15. J(l)-unitary factorization and the Schur algorithm for Nevanlinna functions in an indefinite setting

    NARCIS (Netherlands)

    Alpay, D.; Dijksma, A.; Langer, H.

    2006-01-01

    We introduce a Schur transformation for generalized Nevanlinna functions and show that it can be used in obtaining the unique minimal factorization of a class of rational J(l)-unitary 2 x 2 matrix functions into elementary factors from the same class. (c) 2006 Elsevier Inc. All rights reserved.

  16. Hierarchical Genetic Algorithm and Fuzzy Radial Basis Function Networks for Factors Influencing Hospital Length of Stay Outliers.

    Science.gov (United States)

    Belderrar, Ahmed; Hazzab, Abdeldjebar

    2017-07-01

    Controlling hospital high length of stay outliers can provide significant benefits to hospital management resources and lead to cost reduction. The strongest predictive factors influencing high length of stay outliers should be identified to build a high-performance prediction model for hospital outliers. We highlight the application of the hierarchical genetic algorithm to provide the main predictive factors and to define the optimal structure of the prediction model fuzzy radial basis function neural network. To establish the prediction model, we used a data set of 26,897 admissions from five different intensive care units with discharges between 2001 and 2012. We selected and analyzed the high length of stay outliers using the trimming method geometric mean plus two standard deviations. A total of 28 predictive factors were extracted from the collected data set and investigated. High length of stay outliers comprised 5.07% of the collected data set. The results indicate that the prediction model can provide effective forecasting. We found 10 common predictive factors within the studied intensive care units. The obtained main predictive factors include patient demographic characteristics, hospital characteristics, medical events, and comorbidities. The main initial predictive factors available at the time of admission are useful in evaluating high length of stay outliers. The proposed approach can provide a practical tool for healthcare providers, and its application can be extended to other hospital predictions, such as readmissions and cost.
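    The trimming rule named above is easy to state in code. A minimal sketch, assuming the usual log-scale reading of "geometric mean plus two standard deviations" (toy length-of-stay values, not the study's data):

    import numpy as np

    los_days = np.array([2, 3, 3, 4, 5, 5, 6, 8, 9, 12, 30, 45])  # toy data

    log_los = np.log(los_days)
    threshold = np.exp(log_los.mean() + 2 * log_los.std(ddof=1))
    outliers = los_days[los_days > threshold]
    print(f"threshold = {threshold:.1f} days, outliers = {outliers}")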

  17. Fast sweeping algorithm for accurate solution of the TTI eikonal equation using factorization

    KAUST Repository

    bin Waheed, Umair

    2017-06-10

    Traveltime computation is essential for many seismic data processing applications and velocity analysis tools. High-resolution seismic imaging requires eikonal solvers to account for anisotropy whenever it significantly affects the seismic wave kinematics. Moreover, computation of auxiliary quantities, such as amplitude and take-off angle, rely on highly accurate traveltime solutions. However, the finite-difference based eikonal solution for a point-source initial condition has an upwind source-singularity at the source position, since the wavefront curvature is large near the source point. Therefore, all finite-difference solvers, even the high-order ones, show inaccuracies since the errors due to source-singularity spread from the source point to the whole computational domain. We address the source-singularity problem for tilted transversely isotropic (TTI) eikonal solvers using factorization. We solve a sequence of factored tilted elliptically anisotropic (TEA) eikonal equations iteratively, each time by updating the right hand side function. At each iteration, we factor the unknown TEA traveltime into two factors. One of the factors is specified analytically, such that the other factor is smooth in the source neighborhood. Therefore, through the iterative procedure we obtain accurate solution to the TTI eikonal equation. Numerical tests show significant improvement in accuracy due to factorization. The idea can be easily extended to compute accurate traveltimes for models with lower anisotropic symmetries, such as orthorhombic, monoclinic or even triclinic media.

  18. Shor's Factoring Algorithm and Modern Cryptography. An Illustration of the Capabilities Inherent in Quantum Computers

    OpenAIRE

    Gerjuoy, Edward

    2004-01-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (i.e., conventional) computers. In 1994, however, Peter Shor showed that for sufficiently large N a quantum computer would be expected to perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the non-expert readers of this journ...
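    The number-theoretic core that Shor's algorithm accelerates can be sketched classically. Here the multiplicative order r is found by brute force, which is exactly the step the quantum period-finding subroutine replaces; the choice of N and a is illustrative:

    from math import gcd

    def factor_via_order(N, a):
        # If a already shares a factor with N, we are done.
        if gcd(a, N) > 1:
            return gcd(a, N), N // gcd(a, N)
        # Find the order r of a modulo N (exponential-time classical search).
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        if r % 2:
            return None                  # odd order: retry with another a
        y = pow(a, r // 2, N)
        for f in (gcd(y - 1, N), gcd(y + 1, N)):
            if 1 < f < N:
                return f, N // f
        return None                      # trivial factors: retry with another a

    print(factor_via_order(15, 7))       # order of 7 mod 15 is 4, giving (3, 5)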

  19. Nonnegative Matrix Factorization of time frequency representation of vibration signal for local damage detection - comparison of algorithms

    Science.gov (United States)

    Wodecki, Jacek

    2018-01-01

    Local damage detection in rotating machine elements is a very important problem, widely researched in the literature. One of the most common approaches is vibration signal analysis. Since time domain processing is often insufficient, other representations are frequently favored. One of the most common is the time-frequency representation; hence the authors propose to separate the internal processes occurring in the vibration signal by spectrogram matrix factorization. In order to achieve this, it is proposed to use the approach of Nonnegative Matrix Factorization (NMF). In this paper three NMF algorithms are tested using real and simulated data describing a single-channel vibration signal acquired on a damaged rolling bearing operating in the drive pulley of a belt conveyor driving station. Results are compared with filtration using Spectral Kurtosis, which is currently recognized as a classical method for impulsive information extraction, to verify the validity of the presented methodology.
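    A hedged sketch of the general pipeline with SciPy/scikit-learn as stand-in implementations. The simulated signal below (a resonance band gated by periodic impacts) is a crude toy, not the paper's bearing data, and scikit-learn's NMF stands in for the three algorithms compared:

    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.decomposition import NMF

    fs = 8192
    t = np.arange(0, 2.0, 1 / fs)
    impacts = (np.sin(2 * np.pi * 12 * t) > 0.99).astype(float)   # periodic impulses
    signal = (0.3 * np.sin(2 * np.pi * 50 * t)                    # smooth shaft component
              + impacts * np.sin(2 * np.pi * 3000 * t)            # excited resonance band
              + 0.1 * np.random.default_rng(0).standard_normal(t.size))

    f, tt, S = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
    model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(S)    # columns: nonnegative spectral signatures
    H = model.components_         # rows: time activations of each signature
    # The activation row tied to the resonance band carries the impulsive
    # (damage-related) process, separated from the smooth component.
    print(W.shape, H.shape)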

  20. Application of classification algorithms for analysis of road safety risk factor dependencies.

    Science.gov (United States)

    Kwon, Oh Hoon; Rhee, Wonjong; Yoon, Yoonjin

    2015-02-01

    Transportation continues to be an integral part of modern life, and the importance of road traffic safety cannot be overstated. Consequently, recent road traffic safety studies have focused on analysis of risk factors that impact fatality and injury level (severity) of traffic accidents. While some of the risk factors, such as drug use and drinking, are widely known to affect severity, an accurate modeling of their influences is still an open research topic. Furthermore, there are innumerable risk factors that are waiting to be discovered or analyzed. A promising approach is to investigate historical traffic accident data that have been collected in the past decades. This study inspects traffic accident reports that have been accumulated by the California Highway Patrol (CHP) since 1973 for which each accident report contains around 100 data fields. Among them, we investigate 25 fields between 2004 and 2010 that are most relevant to car accidents. Using two classification methods, the Naive Bayes classifier and the decision tree classifier, the relative importance of the data fields, i.e., risk factors, is revealed with respect to the resulting severity level. Performances of the classifiers are compared to each other and a binary logistic regression model is used as the basis for the comparisons. Some of the high-ranking risk factors are found to be strongly dependent on each other, and their incremental gains on estimating or modeling severity level are evaluated quantitatively. The analysis shows that only a handful of the risk factors in the data dominate the severity level and that dependency among the top risk factors is an imperative trait to consider for an accurate analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
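    A minimal sketch of this comparison setup on synthetic stand-in data (the CHP records are not reproduced here, and the classifier settings are illustrative):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    # 25 "risk factor" features, a binary "severity" target
    X, y = make_classification(n_samples=2000, n_features=25, n_informative=8,
                               random_state=0)

    models = [("naive Bayes", GaussianNB()),
              ("decision tree", DecisionTreeClassifier(max_depth=6, random_state=0)),
              ("logistic regression (baseline)", LogisticRegression(max_iter=1000))]
    for name, clf in models:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name:32s} accuracy = {acc:.3f}")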

  1. CIME Summer Course on Exploiting Hidden Structure in Matrix Computations : Algorithms and Applications

    CERN Document Server

    Simoncini, Valeria

    2016-01-01

    Focusing on special matrices and matrices which are in some sense "near" to structured matrices, this volume covers a broad range of topics of current interest in numerical linear algebra. Exploitation of these less obvious structural properties can be of great importance in the design of efficient numerical methods, for example algorithms for matrices with low-rank block structure, matrices with decay, and structured tensor computations. Applications range from quantum chemistry to queuing theory. Structured matrices arise frequently in applications. Examples include banded and sparse matrices, Toeplitz-type matrices, and matrices with semi-separable or quasi-separable structure, as well as Hamiltonian and symplectic matrices. The associated literature is enormous, and many efficient algorithms have been developed for solving problems involving such matrices. The text arose from a C.I.M.E. course held in Cetraro (Italy) in June 2015 which aimed to present this fast growing field to young researchers, exploit...

  2. Level-3 Cholesky Factorization Routines Improve Performance of Many Cholesky Algorithms

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Wasniewski, Jerzy; Dongarra, Jack J.

    2013-01-01

    Four routines called DPOTF3i, i = a,b,c,d, are presented. DPOTF3i are a novel type of level-3 BLAS for use by BPF (Blocked Packed Format) Cholesky factorization and the LAPACK routine DPOTRF. Performance of the DPOTF3i routines is still increasing when the performance of the Level-2 routine DPOTF2 of LAPACK...
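    The level-3 pattern these routines serve can be sketched as a blocked right-looking Cholesky: factor a small diagonal block (the DPOTF2/DPOTF3i role), solve a triangular panel (the DTRSM role), then apply a rank-nb update to the trailing matrix (the DSYRK role). A minimal NumPy/SciPy sketch with illustrative sizes, not the LAPACK implementation:

    import numpy as np
    from scipy.linalg import solve_triangular

    def blocked_cholesky(A, nb=64):
        # Returns lower-triangular L with A = L @ L.T.
        A = A.copy()
        n = A.shape[0]
        for j in range(0, n, nb):
            e = min(j + nb, n)
            A[j:e, j:e] = np.linalg.cholesky(A[j:e, j:e])    # diagonal block factor
            if e < n:
                A[e:, j:e] = solve_triangular(A[j:e, j:e], A[e:, j:e].T,
                                              lower=True).T  # triangular panel solve
                A[e:, e:] -= A[e:, j:e] @ A[e:, j:e].T       # trailing update
        return np.tril(A)

    rng = np.random.default_rng(0)
    M = rng.standard_normal((300, 300))
    M = M @ M.T + 300 * np.eye(300)       # symmetric positive definite test matrix
    L = blocked_cholesky(M)
    assert np.allclose(L @ L.T, M)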

  3. Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations

    Czech Academy of Sciences Publication Activity Database

    Phan, A. H.; Tichavský, Petr; Cichocki, A.

    2013-01-01

    Roč. 61, č. 19 (2013), s. 4834-4846 ISSN 1053-587X R&D Projects: GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : Canonical polyadic decomposition * tensor decomposition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.198, year: 2013 http://library.utia.cas.cz/separaty/2013/SI/tichavsky-0396774.pdf
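    A plain ALS baseline for the 3-way CP model makes the alternating structure concrete. This is the basic scheme the cited paper accelerates, not its fast algorithm; sizes and rank are illustrative:

    import numpy as np

    def unfold(X, mode):
        return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

    def khatri_rao(U, V):
        # column-wise Kronecker product, rows ordered with V's index fastest
        return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

    def cp_als(X, rank, n_iter=200, seed=0):
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((s, rank)) for s in X.shape)
        for _ in range(n_iter):
            A = unfold(X, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = unfold(X, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = unfold(X, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # sanity check on a synthetic rank-3 tensor
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((s, 3)) for s in (8, 9, 10))
    X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(X, rank=3)
    Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
    print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))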

  4. Evaluation of Effective Factors on Travel Time in Optimization of Bus Stops Placement Using Genetic Algorithm

    Science.gov (United States)

    Bargegol, Iraj; Ghorbanzadeh, Mahyar; Ghasedi, Meisam; Rastbod, Mohammad

    2017-10-01

    In congested cities, locating and properly designing bus stops according to the unequal distribution of passengers is a crucial issue both economically and functionally, since this subject plays an important role in the use of the bus system by passengers. The location of bus stops is a complicated subject; by reducing distances between stops, walking time decreases, but the total travel time may increase. In this paper, a specified corridor in the city of Rasht in northern Iran is studied. Firstly, a new formula is presented to calculate the travel time, by which the number of stops and, consequently, the travel time can be optimized. An intended corridor with a specified number of stops and distances between them is addressed, the related travel-time formulas are derived, and its travel time is calculated. Then the corridor is modelled using a meta-heuristic method so that the placement and the optimal distances of bus stops are determined. It was found that alighting and boarding time, along with bus capacity, are the most effective factors affecting travel time. Consequently, it is better to concentrate on the indicated factors to improve the efficiency of the bus system.

  5. A COMPARATIVE STUDY ON THE PERCEIVED APPLICABILITY OF HONEY BEE MATING OPTIMIZATION ALGORITHM (HBMO) AND PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHM BY APPLYING THREE FACTOR THEORY AMONG RESEARCHERS IN TAMIL NADU

    Directory of Open Access Journals (Sweden)

    K. Kalyani

    2015-04-01

    Full Text Available The perceived applicability of honey bee mating optimization (HBMO) and particle swarm optimization (PSO) among research scholars in Tamil Nadu is understudied. The purpose of the present study is to address this dearth in the literature in three ways: (i) providing descriptive data related to the applicability of these algorithms in their area of study; (ii) applying three-factor theory to assess the perceived range of applicability of the two said algorithms and to develop a theoretically-based model that predicts the applicability and robustness of the algorithms on a comparative basis, grounded on the perceptual data collected from research scholars from all over Tamil Nadu; and (iii) attempting to compare the strength and form of correlation between the factors of influence and the perceived applicability of the algorithms in the research process by the researchers. Self-report data were collected from researchers in Tamil Nadu (n = 869), assessing the levels of individual personal belief factors in influencing the scholars' perception of the applicability of the algorithms for a range of issues, and perception based on the results produced by the application of the algorithms. Perceptions formed in conformity with a group of researchers were analyzed through statistical tools. From the analysis of the findings, it is evident that personal belief level and perception based on conformity with peer-group perceptions have significant influence in predicting the applicability of the algorithms. However, the study results suggest that the empirical result is based on the specified context and level of investigation, and it can produce similar or varied outcomes when the study is conducted on a larger domain of subjects.

  6. An improved algorithm for activated protein C resistance and factor V Leiden screening.

    Science.gov (United States)

    Herskovits, Adrianna Z; Morgan, Elizabeth A; Lemire, Susan J; Lindeman, Neal I; Dorfman, David M

    2013-09-01

    To evaluate the performance of a Russell viper venom-based activated protein C resistance (APCR) screening test relative to DNA analysis for the factor V Leiden mutation. We evaluated the concordance between Pefakit APCR screening results and DNA analysis for 435 patients homozygous (n = 11), heterozygous (n = 310), or wild-type (n = 114) for the G1691A allele. Using receiver operating characteristic analysis, we found that a cutoff of 1.89 for the APCR ratio yields a sensitivity and specificity of 99.1%. In patients with discrepant genotype-phenotype correlation, their APCR may provide a more clinically relevant result. We compared several strategies for employing reflex testing and found that performing initial APCR screening followed by confirmatory molecular analysis on a subset of cases in the borderline regions between the diagnostic groups can reduce unnecessary testing by approximately 80% without compromising diagnostic accuracy.

  7. Rectangular Full Packed Format for Cholesky's Algorithm: Factorization, Solution, and Inversion

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Wasniewski, Jerzy; Dongarra, Jack J

    2010-01-01

    of the storage space but provide high performance via the use of Level 3 BLAS. Standard packed format arrays fully utilize storage (array space) but provide low performance as there is no Level 3 packed BLAS. We combine the good features of packed and full storage using RFPF to obtain high performance via using...... Level 3 BLAS as RFPF is a standard full-format representation. Also, RFPF requires exactly the same minimal storage as packed the format. Each LAPACK full and/or packed triangular, symmetric, and Hermitian routine becomes a single new RFPF routine based on eight possible data layouts of RFPF. This new...... RFPF routine usually consists of two calls to the corresponding LAPACK full-format routine and two calls to Level 3 BLAS routines. This means no new software is required. As examples, we present LAPACK routines for Cholesky factorization, Cholesky solution, and Cholesky inverse computation in RFPF...

  8. Fast matrix factorization algorithm for DOSY based on the eigenvalue decomposition and the difference approximation focusing on the size of observed matrix

    International Nuclear Information System (INIS)

    Tanaka, Yuho; Uruma, Kazunori; Furukawa, Toshihiro; Nakao, Tomoki; Izumi, Kenya; Utsumi, Hiroaki

    2017-01-01

    This paper deals with an analysis problem for diffusion-ordered NMR spectroscopy (DOSY). DOSY is formulated as a matrix factorization problem of a given observed matrix. In order to solve this problem, a direct exponential curve resolution algorithm (DECRA) is well known. DECRA is based on singular value decomposition; the advantage of this algorithm is that the initial value is not required. However, DECRA requires a long calculating time, depending on the size of the given observed matrix due to the singular value decomposition, and this is a serious problem in practical use. Thus, this paper proposes a new analysis algorithm for DOSY to achieve a short calculating time. In order to solve matrix factorization for DOSY without using singular value decomposition, this paper focuses on the size of the given observed matrix. The observed matrix in DOSY is a rectangular matrix with more columns than rows, due to the limitation of the measuring time; thus, the proposed algorithm transforms the given observed matrix into a small observed matrix. The proposed algorithm applies the eigenvalue decomposition and the difference approximation to the small observed matrix, and the matrix factorization problem for DOSY is solved. The simulation and a data analysis show that the proposed algorithm achieves a lower calculating time than DECRA as well as similar analysis results to DECRA. (author)
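    The shift-invariance property that DECRA-type methods exploit can be sketched with a matrix pencil on a toy decay curve. This is neither DECRA nor the proposed algorithm, just the underlying idea that eigenvalues of a pencil of shifted data matrices recover exponential decay rates; sampling and rates are illustrative:

    import numpy as np

    dt = 0.05
    t = np.arange(0, 4, dt)
    y = 2 * np.exp(-1.0 * t) + np.exp(-5.0 * t)       # two decaying exponentials

    L = t.size // 2
    Y = np.array([y[i:i + L] for i in range(t.size - L)])   # Hankel-structured data
    Y0, Y1 = Y[:-1], Y[1:]                            # row-shifted by one sample
    P = np.linalg.pinv(Y0, rcond=1e-10) @ Y1          # rank-truncated pencil operator
    vals = np.linalg.eigvals(P)
    vals = vals[np.abs(vals) > 1e-8]                  # keep the signal eigenvalues
    rates = np.sort(-np.log(np.abs(vals)) / dt)
    print(rates)                                      # approximately [1.0, 5.0]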

  9. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  10. New Factorization Techniques and Parallel (log N) Algorithms for Forward Dynamics Solution of Single Closed-Chain Robot Manipulators

    Science.gov (United States)

    Fijany, Amir

    1993-01-01

    In this paper parallel O(log N) algorithms for dynamic simulation of a single closed-chain rigid multibody system, as specialized to the case of a robot manipulator in contact with the environment, are developed.

  11. Impact of genetic and clinical factors on dose requirements and quality of anticoagulation therapy in Polish patients receiving acenocoumarol: dosing calculation algorithm.

    Science.gov (United States)

    Wolkanin-Bartnik, Jolanta; Pogorzelska, Hanna; Szperl, Małgorzata; Bartnik, Aleksandra; Koziarek, Jacek; Bilinska, Zofia T

    2013-11-01

    Despite the recent emergence of new oral anticoagulants, vitamin K antagonists remain the primary therapy in patients with atrial fibrillation and the only therapy licensed for use in patients with artificial heart valves. The aim of this study was (a) to assess the impact of clinical and genetic factors on acenocoumarol (AC) dose requirements and the percentage of time in therapeutic range (%TTR) and (b) to develop a pharmacogenetic-guided AC dose calculation algorithm. We included 235 outpatients of the Institute of Cardiology (Warsaw), mean age 69.3, 46.9% women, receiving AC for artificial heart valves and/or atrial fibrillation. A multiple linear-regression analysis was performed using log-transformed effective AC dose as the dependent variable, and combining CYP2C9 and VKORC1 genotyping with other clinical factors as independent predictors. We identified factors that influenced the AC dose: CYP2C9 polymorphisms (P=0.004) and VKORC1 polymorphisms; together, the clinical and genetic factors explained 49.0% of AC dose variability. We developed a dosing calculation algorithm that is, to the best of our knowledge, the first one to assess the effect of such clinical factors as creatinine clearance and dietary vitamin K intake on the AC dose. The clinical usefulness of the algorithm was assessed on a separate validation group (n=50) with 70% accuracy. Dietary vitamin K intake higher than 200 mcg/day improved international normalized ratio control (%TTR 73.3±17 vs. 67.7±18, respectively, P=0.04). Inclusion of a variety of genetic and clinical factors in the dosing calculation algorithm allows for precise AC dose estimation in most patients and thus improves the efficacy and safety of the therapy.

  12. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA, while NMF, which suppresses negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  14. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Science.gov (United States)

    Desai, Prajakta; Loke, Seng W; Desai, Aniruddha

    2017-01-01

    Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  15. Cooperative vehicles for robust traffic congestion reduction: An analysis based on algorithmic, environmental and agent behavioral factors.

    Directory of Open Access Journals (Sweden)

    Prajakta Desai

    Full Text Available Traffic congestion continues to be a persistent problem throughout the world. As vehicle-to-vehicle communication develops, there is an opportunity of using cooperation among close proximity vehicles to tackle the congestion problem. The intuition is that if vehicles could cooperate opportunistically when they come close enough to each other, they could, in effect, spread themselves out among alternative routes so that vehicles do not all jam up on the same roads. Our previous work proposed a decentralized multiagent based vehicular congestion management algorithm entitled Congestion Avoidance and Route Allocation using Virtual Agent Negotiation (CARAVAN), wherein the vehicles acting as intelligent agents perform cooperative route allocation using inter-vehicular communication. This paper focuses on evaluating the practical applicability of this approach by testing its robustness and performance (in terms of travel time reduction), across variations in: (a) environmental parameters such as road network topology and configuration; (b) algorithmic parameters such as vehicle agent preferences and route cost/preference multipliers; and (c) agent-related parameters such as equipped/non-equipped vehicles and compliant/non-compliant agents. Overall, the results demonstrate the adaptability and robustness of the decentralized cooperative vehicles approach to providing global travel time reduction using simple local coordination strategies.

  16. Identifying the association rules between clinicopathologic factors and higher survival performance in operation-centric oral cancer patients using the Apriori algorithm.

    Science.gov (United States)

    Tang, Jen-Yang; Chuang, Li-Yeh; Hsi, Edward; Lin, Yu-Da; Yang, Cheng-Hong; Chang, Hsueh-Wei

    2013-01-01

    This study computationally determines the contribution of clinicopathologic factors correlated with 5-year survival in oral squamous cell carcinoma (OSCC) patients primarily treated by surgical operation (OP) followed by other treatments. From 2004 to 2010, the program enrolled 493 OSCC patients at the Kaohsiung Medical Hospital University. The clinicopathologic records were retrospectively reviewed and compared for survival analysis. The Apriori algorithm was applied to mine the association rules between these factors and improved survival. Univariate analysis of demographic data showed that grade/differentiation, clinical tumor size, pathology tumor size, and OP grouping were associated with survival longer than 36 months. Using the Apriori algorithm, multivariate correlation analysis identified the factors that coexistently provide good survival rates with higher lift values, such as grade/differentiation = 2, clinical stage group = early, primary site = tongue, and group = OP. Without the OP, the lift values are lower. In conclusion, this hospital-based analysis suggests that early OP and other treatments starting from OP are the key to improving the survival of OSCC patients, especially for early stage tongue cancer with moderate differentiation, having a better survival (>36 months) with varied OP approaches.
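    A minimal sketch of the mining step using mlxtend's Apriori implementation on a hypothetical one-hot table of clinicopathologic flags (the column names and values are invented for illustration, not the study's records):

    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules

    records = pd.DataFrame({
        "grade_moderate":  [1, 1, 0, 1, 1, 0, 1, 1],
        "stage_early":     [1, 1, 0, 1, 0, 0, 1, 1],
        "site_tongue":     [1, 0, 0, 1, 0, 1, 1, 1],
        "treated_with_OP": [1, 1, 0, 1, 1, 0, 1, 1],
        "survival_gt_36m": [1, 1, 0, 1, 1, 0, 1, 1],
    }, dtype=bool)

    frequent = apriori(records, min_support=0.4, use_colnames=True)
    rules = association_rules(frequent, metric="lift", min_threshold=1.0)
    # keep rules that predict long survival, ranked by lift as in the study
    target = frozenset({"survival_gt_36m"})
    rules = rules[rules["consequents"].apply(lambda c: c == target)]
    print(rules.sort_values("lift", ascending=False)
               [["antecedents", "support", "confidence", "lift"]].head())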

  17. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  18. Using a combination of weighting factor method and imperialist competitive algorithm to improve speed and enhance process of reloading pattern optimization of VVER-1000 reactors in transient cycles

    Energy Technology Data Exchange (ETDEWEB)

    Rahmani, Yashar, E-mail: yashar.rahmani@gmail.com [Department of Physics, Faculty of Engineering, Islamic Azad University, Sari Branch, Sari (Iran, Islamic Republic of)]; Shahvari, Yaser [Department of Computer Engineering, Payame Noor University (PNU), P.O. Box 19395-3697, Tehran (Iran, Islamic Republic of)]; Kia, Faezeh [Golestan Institute of Higher Education, Gorgan 49139-83635 (Iran, Islamic Republic of)]

    2017-03-15

    Highlights: • This article was an attempt to optimize reloading pattern of Bushehr VVER-1000 reactor. • A combination of weighting factor method and the imperialist competitive algorithm was used. • The speed of optimization and desirability of the proposed pattern increased considerably. • To evaluate arrangements, a coupling of WIMSD5-B, CITATION-LDI2 and WERL codes was used. • Results reflected the considerable superiority of the proposed method over direct optimization. - Abstract: In this research, an innovative solution is described which can be used with a combination of the new imperialist competitive algorithm and the weighting factor method to improve speed and increase globality of search in reloading pattern optimization of VVER-1000 reactors in transient cycles and even obtain more desirable results than conventional direct method. In this regard, to reduce the scope of the assumed searchable arrangements, first using the weighting factor method and based on values of these coefficients in each of the 16 types of loadable fuel assemblies in the second cycle, the fuel assemblies were classified in more limited groups. In consequence, the types of fuel assemblies were reduced from 16 to 6 and consequently the number of possible arrangements was reduced considerably. Afterwards, in the first phase of optimization the imperialist competitive algorithm was used to propose an optimum reloading pattern with 6 groups. In the second phase, the algorithm was reused for finding desirable placement of the subset assemblies of each group in the optimum arrangement obtained from the previous phase, and thus the retransformation of the optimum arrangement takes place from the virtual 6-group mode to the real mode with 16 fuel types. In this research, the optimization process was conducted in two states. In the first state, it was tried to obtain an arrangement with the maximum effective multiplication factor and the smallest maximum power peaking factor. In

  19. Using a combination of weighting factor method and imperialist competitive algorithm to improve speed and enhance process of reloading pattern optimization of VVER-1000 reactors in transient cycles

    International Nuclear Information System (INIS)

    Rahmani, Yashar; Shahvari, Yaser; Kia, Faezeh

    2017-01-01

    Highlights: • This article was an attempt to optimize reloading pattern of Bushehr VVER-1000 reactor. • A combination of weighting factor method and the imperialist competitive algorithm was used. • The speed of optimization and desirability of the proposed pattern increased considerably. • To evaluate arrangements, a coupling of WIMSD5-B, CITATION-LDI2 and WERL codes was used. • Results reflected the considerable superiority of the proposed method over direct optimization. - Abstract: In this research, an innovative solution is described which can be used with a combination of the new imperialist competitive algorithm and the weighting factor method to improve speed and increase globality of search in reloading pattern optimization of VVER-1000 reactors in transient cycles and even obtain more desirable results than conventional direct method. In this regard, to reduce the scope of the assumed searchable arrangements, first using the weighting factor method and based on values of these coefficients in each of the 16 types of loadable fuel assemblies in the second cycle, the fuel assemblies were classified in more limited groups. In consequence, the types of fuel assemblies were reduced from 16 to 6 and consequently the number of possible arrangements was reduced considerably. Afterwards, in the first phase of optimization the imperialist competitive algorithm was used to propose an optimum reloading pattern with 6 groups. In the second phase, the algorithm was reused for finding desirable placement of the subset assemblies of each group in the optimum arrangement obtained from the previous phase, and thus the retransformation of the optimum arrangement takes place from the virtual 6-group mode to the real mode with 16 fuel types. In this research, the optimization process was conducted in two states. In the first state, it was tried to obtain an arrangement with the maximum effective multiplication factor and the smallest maximum power peaking factor. In

  20. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  1. Significant factors selection in the chemical and enzymatic hydrolysis of lignocellulosic residues by a genetic algorithm analysis and comparison with the standard Plackett-Burman methodology.

    Science.gov (United States)

    Giordano, Pablo C; Beccaria, Alejandro J; Goicoechea, Héctor C

    2011-11-01

    A comparison between the classic Plackett-Burman design (PB) ANOVA analysis and a genetic algorithm (GA) approach to identify significant factors has been carried out. This comparison was made by applying both analyses to the experimental data obtained when optimizing both chemical and enzymatic hydrolysis of three lignocellulosic feedstocks (corn and wheat bran, and pine sawdust) with a PB experimental design. Depending on the kind of biomass and the hydrolysis being considered, different results were obtained. Interestingly, some interactions were found to be significant by the GA approach, which allowed the identification of significant factors that otherwise, based only on the classic PB analysis, would not have been taken into account in a further optimization step. Improvements in the fit of ca. 80% were obtained when comparing the coefficient of determination (R2) computed for both methods. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Form factors of the finite quantum XY-chain

    International Nuclear Information System (INIS)

    Iorgov, Nikolai

    2011-01-01

    Explicit factorized formulas for the matrix elements (form factors) of the spin operators σ^x and σ^y between the eigenvectors of the Hamiltonian of the finite quantum periodic XY-chain in a transverse field were derived. The derivation is based on the relations between three models: the model of the quantum XY-chain, the Ising model on a 2D lattice, and the N = 2 Baxter-Bazhanov-Stroganov τ^(2)-model. Due to these relations we transfer the formulas for the form factors of the latter model, recently obtained by the use of the separation of variables method, to the model of the quantum XY-chain. Hopefully, the formulas for the form factors will help in the analysis of multipoint dynamic correlation functions at a finite temperature. As an example, we re-derive the asymptotics of the two-point correlation function in the disordered phase without the use of the Toeplitz determinants and the Wiener-Hopf factorization method.

  3. CONREAL: conserved regulatory elements anchored alignment algorithm for identification of transcription factor binding sites by phylogenetic footprinting

    NARCIS (Netherlands)

    Berezikov, E.; Guryev, V.; Plasterk, R.; Cuppen, E.

    2004-01-01

    Prediction of transcription-factor target sites in promoters remains difficult due to the short length and degeneracy of the target sequences. Although the use of orthologous sequences and phylogenetic footprinting approaches may help in the recognition of conserved and potentially functional

  4. Appropriate Algorithms for Estimating Frequency-Selective Rician Fading MIMO Channels and Channel Rice Factor: Substantial Benefits of Rician Model and Estimator Tradeoffs

    Directory of Open Access Journals (Sweden)

    Shirvani Moghaddam Shahriar

    2010-01-01

    Full Text Available The training-based channel estimation (TBCE) scheme in multiple-input multiple-output (MIMO) frequency-selective Rician fading channels is investigated. We propose the new technique of shifted scaled least squares (SSLS) and the minimum mean square error (MMSE) estimator that are suitable for estimating the above-mentioned channel model. Analytical results show that the proposed estimators achieve much better minimum possible Bayesian Cramér-Rao lower bounds (CRLBs) in frequency-selective Rician MIMO channels compared with those of the Rayleigh one. It is seen that the SSLS channel estimator requires less knowledge about the channel and/or has better performance than the conventional least squares (LS) and MMSE estimators. Simulation results confirm the superiority of the proposed channel estimators. Finally, to estimate the channel Rice factor, an algorithm is proposed, and its efficiency is verified using the results of the SSLS and MMSE channel estimators.
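    The plain training-based LS baseline such estimators start from fits in a few lines (the SSLS shift/scale refinement toward the Rician mean is not reproduced; the dimensions, pilot design, and noise level are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    nt, nr, npil = 4, 4, 16          # tx antennas, rx antennas, pilot symbols
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    X = rng.standard_normal((nt, npil)) + 1j * rng.standard_normal((nt, npil))  # pilots
    N = 0.1 * (rng.standard_normal((nr, npil)) + 1j * rng.standard_normal((nr, npil)))
    Y = H @ X + N                    # received training block

    H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)   # least-squares estimate
    print("relative error:", np.linalg.norm(H - H_ls) / np.linalg.norm(H))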

  5. Linear, Non-Linear and Alternative Algorithms in the Correlation of IEQ Factors with Global Comfort: A Case Study

    Directory of Open Access Journals (Sweden)

    Francesco Fassio

    2014-11-01

    Full Text Available Indoor environmental quality (IEQ) factors usually considered in engineering studies, i.e., thermal, acoustical, and visual comfort and indoor air quality, are individually associated with the occupant satisfaction level on the basis of well-established relationships. On the other hand, the full understanding of how single IEQ factors contribute and interact to determine the overall occupant satisfaction (global comfort) is currently an open field of research. The lack of a shared approach in treating the subject depends on many aspects: the absence of established protocols for the collection of subjective and objective measurements, the number of variables to consider, and in general the complexity of the technical issues involved. This case study aims to perform a comparison between some of the available models, studying the results of a survey conducted with objective and subjective methods in a classroom within the University of Roma TRE premises. Different models are fitted on the same measured values, allowing comparison between the different weighting schemes of IEQ categories obtained with different methods. The critical issues identified during this small-scale comfort assessment study, such as differences in the weighting schemes obtained with different IEQ models and the variability of the weighting scheme with respect to the time of exposure of the users in the building, provide the basis for a survey activity on a larger scale and for the development of an improved IEQ assessment method.

  6. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].

  7. OCT-based deep learning algorithm for the evaluation of treatment indication with anti-vascular endothelial growth factor medications.

    Science.gov (United States)

    Prahs, Philipp; Radeck, Viola; Mayer, Christian; Cvetkov, Yordan; Cvetkova, Nadezhda; Helbig, Horst; Märker, David

    2018-01-01

    Intravitreal injections with anti-vascular endothelial growth factor (anti-VEGF) medications have become the standard of care for their respective indications. Optical coherence tomography (OCT) scans of the central retina provide detailed anatomical data and are widely used by clinicians in the decision-making process of anti-VEGF indication. In recent years, significant progress has been made in artificial intelligence and computer vision research. We trained a deep convolutional artificial neural network to predict treatment indication based on central retinal OCT scans without human intervention. A total of 183,402 retinal OCT B-scans acquired between 2008 and 2016 were exported from the institutional image archive of a university hospital. OCT images were cross-referenced with the electronic institutional intravitreal injection records. OCT images followed by an intravitreal injection within 21 days of image acquisition were assigned to the 'injection' group, while the same number of random OCT images without intravitreal injections was labeled 'no injection'. After image preprocessing, OCT images were split in a 9:1 ratio into training and test datasets. We trained a GoogLeNet inception deep convolutional neural network and assessed its performance on the validation dataset. We calculated prediction accuracy, sensitivity, specificity, and receiver operating characteristics. The deep convolutional neural network was successfully trained on the extracted clinical data. The trained neural network classifier reached a prediction accuracy of 95.5% on the images in the validation dataset. For single retinal B-scans in the validation dataset, a sensitivity of 90.1% and a specificity of 96.2% were achieved. The area under the receiver operating characteristic curve was 0.968 on a per B-scan image basis, and 0.988 by averaging over six B-scans per examination on the validation dataset. Deep artificial neural networks show impressive performance on

  8. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    Science.gov (United States)

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives: In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods: Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods including conventional LASSO, Bolasso, stepwise and stability selection models were evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results: The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. In total, the two newly proposed procedures were stable with respect to various scenarios of simulation, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In empirical analysis, the proposed procedures yielding a sparse set of hepatitis B infection-relevant factors gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination, family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions: The newly proposed procedures improve the identification of
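    A hedged sketch of a bootstrap-ranking loop in the spirit of the proposed procedures, with scikit-learn's LASSO on synthetic data (the penalty, resample count, and selection-frequency cutoff are illustrative, not the paper's settings):

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso
    from sklearn.utils import resample

    X, y, coef = make_regression(n_samples=300, n_features=30, n_informative=5,
                                 coef=True, noise=5.0, random_state=0)

    n_boot = 200
    counts = np.zeros(X.shape[1])
    for b in range(n_boot):
        Xb, yb = resample(X, y, random_state=b)          # bootstrap resample
        counts += Lasso(alpha=1.0).fit(Xb, yb).coef_ != 0

    freq = counts / n_boot                               # selection frequencies
    print("stably selected predictors:", np.flatnonzero(freq > 0.8))
    print("truly informative predictors:", np.flatnonzero(coef != 0))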

  9. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  10. Optimization of vitamin K antagonist drug dose finding by replacement of the international normalized ratio by a bidirectional factor : validation of a new algorithm

    NARCIS (Netherlands)

    Beinema, M J; van der Meer, F J M; Brouwers, J R B J; Rosendaal, F R

    2016-01-01

    Essentials: We developed a new algorithm to optimize vitamin K antagonist dose finding. Validation was by comparing actual dosing to algorithm predictions. Predicted and actual dosing of well-performing centers were highly associated. The method is promising and should be tested in a

  11. Genetic Algorithms in Noisy Environments

    OpenAIRE

    THEN, T. W.; CHONG, EDWIN K. P.

    1993-01-01

    Genetic Algorithms (GA) have been widely used in the areas of searching, function optimization, and machine learning. In many of these applications, the effect of noise is a critical factor in the performance of the genetic algorithms. While it has been shown in previous studies that genetic algorithms are still able to perform effectively in the presence of noise, the problem of locating the global optimal solution at the end of the search has never been effectively addressed. Furthermore,...

  12. Effective transcription factor binding site prediction using a combination of optimization, a genetic algorithm and discriminant analysis to capture distant interactions.

    Science.gov (United States)

    Levitsky, Victor G; Ignatieva, Elena V; Ananko, Elena A; Turnaev, Igor I; Merkulova, Tatyana I; Kolchanov, Nikolay A; Hodgman, T C

    2007-12-19

    Reliable transcription factor binding site (TFBS) prediction methods are essential for computer annotation of large amounts of genome sequence data. However, current methods to predict TFBSs are hampered by the high false-positive rates that occur when only sequence conservation at the core binding-sites is considered. To improve this situation, we have quantified the performance of several Position Weight Matrix (PWM) algorithms, using exhaustive approaches to find their optimal length and position. We applied these approaches to bio-medically important TFBSs involved in the regulation of cell growth and proliferation as well as in inflammatory, immune, and antiviral responses (NF-kappaB, ISGF3, IRF1, STAT1), obesity and lipid metabolism (PPAR, SREBP, HNF4), and regulation of steroidogenic (SF-1) and cell cycle (E2F) gene expression. We have also gained extra specificity using a method, entitled SiteGA, which takes into account structural interactions within TFBS core and flanking regions, using a genetic algorithm (GA) with a discriminant function of locally positioned dinucleotide (LPD) frequencies. To ensure a higher confidence in our approach, we applied resampling-jackknife and bootstrap tests for the comparison; it appears that the optimized PWMs and SiteGA show similar recognition performance. Then we applied SiteGA and optimized PWMs (both separately and together) to sequences in the Eukaryotic Promoter Database (EPD). The resulting SiteGA recognition models can now be used to search sequences for BSs using the web tool, SiteGA. Analysis of dependencies between close and distant LPDs revealed by SiteGA models has shown that the most significant correlations are between close LPDs, and are generally located in the core (footprint) region. A greater number of less significant correlations are mainly between distant LPDs, which spanned both core and flanking regions. When SiteGA and optimized PWM models were applied together, this substantially reduced
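    The PWM side of such comparisons can be sketched in a few lines: estimate a log-odds matrix from aligned sites and scan a sequence with it (the sites, pseudocounts, and uniform background are toy choices, and the SiteGA discriminant itself is not reproduced):

    import numpy as np

    sites = ["TGACTCA", "TGAGTCA", "TGACTCA", "TTACTCA", "TGACGCA"]
    alphabet = "ACGT"
    L = len(sites[0])

    counts = np.ones((4, L))                            # +1 pseudocount per base
    for s in sites:
        for j, ch in enumerate(s):
            counts[alphabet.index(ch), j] += 1
    pwm = np.log2(counts / counts.sum(axis=0) / 0.25)   # log-odds vs uniform background

    def best_hit(seq):
        # score every window of length L, return (position, score) of the best
        scores = [(i, sum(pwm[alphabet.index(seq[i + j]), j] for j in range(L)))
                  for i in range(len(seq) - L + 1)]
        return max(scores, key=lambda t: t[1])

    print(best_hit("CCCTGACTCAGGG"))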

  13. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...

  14. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  15. Genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Grefenstette, J.J.

    1994-12-31

    Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.

  16. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL]; Vatsavai, Raju [ORNL]

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
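    The Toeplitz trick is easy to sketch: on a regular time grid a stationary kernel yields a symmetric Toeplitz covariance, so the cubic solve in the GP posterior mean can be replaced by a Levinson-type O(t^2) solve. A minimal sketch with an illustrative kernel, noise level, and grid (not the paper's full change detection method):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    rng = np.random.default_rng(0)
    t = np.arange(200) * 16.0                     # regular 16-day sampling grid
    y = np.sin(2 * np.pi * t / 365.0) + 0.1 * rng.standard_normal(t.size)

    def kernel(tau, ell=60.0):
        # stationary (exponential) kernel: depends only on the lag tau
        return np.exp(-np.abs(tau) / ell)

    sigma2 = 0.1 ** 2
    c = kernel(t - t[0])                          # first column of K
    c[0] += sigma2                                # first column of K + sigma2*I
    alpha = solve_toeplitz(c, y)                  # Levinson solve in O(t^2)
    K = kernel(t[:, None] - t[None, :])
    post_mean = K @ alpha                         # GP posterior mean at the data

    # agrees with the dense O(t^3) route
    dense = K @ np.linalg.solve(K + sigma2 * np.eye(t.size), y)
    print("max deviation:", np.max(np.abs(post_mean - dense)))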

  17. Improved Coarray Interpolation Algorithms with Additional Orthogonal Constraint for Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    Jinyang Song

    2018-01-01

    Full Text Available Many modulated signals exhibit a cyclostationarity property, which can be exploited in direction-of-arrival (DOA) estimation to effectively eliminate interference and noise. In this paper, our aim is to integrate cyclostationarity with the spatial domain and enable the algorithm to estimate more sources than sensors. However, DOA estimation with a sparse array is performed in the coarray domain, and the holes within the coarray limit the usage of the complete coarray information. In order to use the complete coarray information to increase the degrees of freedom (DOFs), sparsity-aware methods and difference coarray interpolation methods have been proposed. In this paper, the coarray interpolation technique is further explored for cyclostationary signals. Besides the difference coarray model and its corresponding Toeplitz completion formulation, we build up a sum coarray model and formulate a Hankel completion problem. In order to further improve the performance of the structured matrix completion, we define spatial spectrum sampling operations and the derivative (conjugate) correlation subspaces, which can be exploited to construct orthogonal constraints for the autocorrelation vectors in the coarray interpolation problem. Prior knowledge of the source interval can also be incorporated into the problem. Simulation results demonstrate that the additional constraints contribute to a remarkable performance improvement.
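
    A common convex surrogate for this kind of structured matrix completion, shown here only as a generic formulation and not necessarily the authors' exact program, is nuclear-norm minimization over the Toeplitz (or, for the sum coarray, Hankel) matrix built from the autocorrelation vector u:

```latex
\min_{\mathbf{u}} \; \big\| \mathcal{T}(\mathbf{u}) \big\|_{*}
\quad \text{subject to} \quad
\big\| P_{\Omega}\big( \mathcal{T}(\mathbf{u}) - \mathbf{T}_{\mathrm{obs}} \big) \big\|_{F} \le \epsilon ,
```

    where 𝒯(u) maps the autocorrelation vector to a Toeplitz matrix, Ω indexes the lags actually observed by the physical array, and the paper's additional orthogonal (correlation-subspace) constraints would enter as extra linear constraints on u.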

  18. Identifying Risk Factors for Recent HIV Infection in Kenya Using a Recent Infection Testing Algorithm: Results from a Nationally Representative Population-Based Survey.

    Directory of Open Access Journals (Sweden)

    Andrea A Kim

    Full Text Available A recent infection testing algorithm (RITA) that can distinguish recent from long-standing HIV infection can be applied to nationally representative population-based surveys to characterize and identify risk factors for recent infection in a country. We applied a RITA using the Limiting Antigen Avidity Enzyme Immunoassay (LAg) on stored HIV-positive samples from the 2007 Kenya AIDS Indicator Survey. The case definition for recent infection included testing recent on LAg and having no evidence of antiretroviral therapy use. Multivariate analysis was conducted to determine factors associated with recent and long-standing infection compared to HIV-uninfected persons. All estimates were weighted to adjust for sampling probability and nonresponse. Of 1,025 HIV-antibody-positive specimens, 64 (6.2%) met the case definition for recent infection and 961 (93.8%) met the case definition for long-standing infection. Compared to HIV-uninfected individuals, factors associated with higher adjusted odds of recent infection were living in Nairobi (adjusted odds ratio [AOR] 11.37; confidence interval [CI] 2.64-48.87) and Nyanza (AOR 4.55; CI 1.39-14.89) provinces compared to Western province; being widowed (AOR 8.04; CI 1.42-45.50) or currently married (AOR 6.42; CI 1.55-26.58) compared to being never married; having had ≥ 2 sexual partners in the last year (AOR 2.86; CI 1.51-5.41); not using a condom at last sex in the past year (AOR 1.61; CI 1.34-1.93); reporting a sexually transmitted infection (STI) diagnosis or symptoms of STI in the past year (AOR 1.97; CI 1.05-8.37); and being aged <30 years with: 1) HSV-2 infection (AOR 8.84; CI 2.62-29.85), 2) male genital ulcer disease (AOR 8.70; CI 2.36-32.08), or 3) lack of male circumcision (AOR 17.83; CI 2.19-144.90). Compared to HIV-uninfected persons, factors associated with higher adjusted odds of long-standing infection included living in Coast (AOR 1.55; CI 1.04-2.32) and Nyanza (AOR 2.33; CI 1.67-3.25) provinces compared to

  19. Model order reduction using eigen algorithm | Singh | International ...

    African Journals Online (AJOL)

    -scale dynamic systems, where the denominator polynomial is determined through the Eigen algorithm and the numerator polynomial via the factor division algorithm. In the Eigen algorithm, the most dominant Eigen value of both original and reduced order ...

  20. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  1. Searching Algorithms Implemented on Probabilistic Systolic Arrays

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    1996-01-01

    Roč. 25, č. 1 (1996), s. 7-45 ISSN 0308-1079 R&D Projects: GA ČR GA201/93/0781 Keywords : searching algorithms * probabilistic algorithms * systolic arrays * parallel algorithms Impact factor: 0.214, year: 1996

  2. Large truncated Toeplitz matrices, Toeplitz operators, and related topics the Albrecht Böttcher anniversary volume

    CERN Document Server

    Ehrhardt, Torsten; Karlovich, Alexei; Spitkovsky, Ilya

    2017-01-01

    This book presents a collection of expository and research papers on various topics in matrix and operator theory, contributed by several experts on the occasion of Albrecht Böttcher’s 60th birthday. Albrecht Böttcher himself has made substantial contributions to the subject in the past. The book also includes a biographical essay, a complete bibliography of Albrecht Böttcher’s work and brief informal notes on personal encounters with him. The book is of interest to graduate and advanced undergraduate students majoring in mathematics, researchers in matrix and operator theory as well as engineers and applied mathematicians.

  3. Algorithms on ensemble quantum computers.

    Science.gov (United States)

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σ_z^(1/4), as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.

  4. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory and the principles of quantum computation are discussed, along with the possibility of building on this basis a device unique in its computational power and operating principle: the quantum computer. The main building blocks of quantum logic and schemes for implementing quantum computations are presented, as well as some effective quantum algorithms known today that are intended to realize the advantages of quantum computation over classical computation. A special place among them is occupied by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on the stability of a quantum computer, and methods of quantum error correction are described

  5. ALGORITHM OF OBJECT RECOGNITION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available The second important problem to be resolved by the algorithm and its software, which performs the automatic design of a complex closed-circuit television system, is object recognition in the image transmitted by the video camera. Since the imaging of almost any object depends on many factors, including its orientation with respect to the camera, lighting conditions, parameters of the registering system, and the static and dynamic parameters of the object itself, it is quite difficult to formalize the image and represent it in the form of a certain mathematical model. Therefore, methods of computer-aided visualization depend substantially on the problems to be solved, and they can rarely be generalized. The majority of these methods are non-linear; therefore, there is a need to increase the computing power and the complexity of the algorithms that process the image. This paper covers the research of visual object recognition and the implementation of the algorithm as a software application that operates in real time

  6. Adaptive Maneuvering Target Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Chunling Wu

    2014-07-01

    Full Text Available Based on the current statistical model, a new adaptive maneuvering target tracking algorithm, CS-MSTF, is presented. The new algorithm keeps the merits of high tracking precision that the current statistical model and the strong tracking filter (STF) have in tracking a maneuvering target, and makes the following modifications: first, STF has the defect that it achieves perfect performance in the maneuvering segment at the cost of precision in the non-maneuvering segment, so the new algorithm modifies the prediction error covariance matrix and the fading factor to improve the tracking precision in both the maneuvering and the non-maneuvering segments; second, the estimation error covariance matrix is calculated using the Joseph form, which is numerically more stable and robust. Monte Carlo simulations show that the CS-MSTF algorithm has a more excellent performance than CS-STF and can estimate efficiently.
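
    The Joseph-form covariance update mentioned above is a standard device in Kalman filtering; it is reproduced here as a small sketch (matrix names follow the usual Kalman conventions, and the surrounding filter is not shown):

```python
import numpy as np

def joseph_update(P, K, H, R):
    """Joseph-form measurement update of the estimation error covariance.

    P : prior covariance, K : Kalman gain, H : measurement matrix,
    R : measurement noise covariance. The symmetric product keeps P
    positive semidefinite under roundoff, unlike the short form (I - KH)P.
    """
    I = np.eye(P.shape[0])
    A = I - K @ H
    return A @ P @ A.T + K @ R @ K.T
```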

  7. An improved affine projection algorithm for active noise cancellation

    Science.gov (United States)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm, and it has a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it can achieve a smaller steady-state error and a faster convergence speed. Simulation results prove that its performance is superior to the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm achieves very good results.
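
    For reference, a minimal fixed-step affine projection update is sketched below; the paper's variable-step rule is not reproduced, and the regularization constant, projection length, and step size are illustrative assumptions.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4):
    """One affine projection step (fixed step size).

    w : current filter weights, shape (N,)
    X : matrix of the last P input vectors, shape (N, P)  (signal reuse)
    d : last P desired samples, shape (P,)
    """
    e = d - X.T @ w                                   # a-priori error vector
    P = X.shape[1]
    # Regularized projection onto the span of the last P input vectors.
    w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w, e
```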

  8. Algorithm Improvement Program Nuclide Identification Algorithm Scoring Criteria And Scoring Application - DNDO.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  9. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  10. Resolving of challenging gas chromatography-mass spectrometry peak clusters in fragrance samples using multicomponent factorization approaches based on polygon inflation algorithm.

    Science.gov (United States)

    Ghaheri, Salehe; Masoum, Saeed; Gholami, Ali

    2016-01-15

    Analysis of fragrance composition is very important for both fragrance producers and consumers. Unraveling a fragrance formulation is necessary for quality control, competitor analysis and trace analysis. Gas chromatography-mass spectrometry (GC-MS) has been introduced as the most appropriate analytical technique for this type of analysis, based on the Kovats index and MS databases. The most straightforward method to analyze a GC-MS dataset is to integrate those peaks that can be recognized by their mass profiles. However, because of common problems of chromatographic data, such as spectral background, baseline offset and especially overlapped peaks, accurate quantitative and qualitative analysis can fail. Chemometric modeling techniques such as bilinear multivariate curve resolution (MCR) methods have been introduced to overcome these problems and obtain well-resolved chromatographic profiles. The main drawback of these methods is rotational ambiguity, or non-unique solutions, represented as the area of feasible solutions (AFS). The polygon inflation algorithm (PIA) is an automatic and simple-to-use algorithm for the numerical computation of the AFS. In this study, the extent of rotational ambiguity in curve resolution methods is calculated by the MCR-BAND toolbox and the PIA. The ability of the PIA to resolve GC-MS data sets is evaluated on simulated GC-MS data in comparison with other popular curve resolution methods, such as multivariate curve resolution alternating least squares (MCR-ALS), multivariate curve resolution objective function minimization (MCR-FMIN) with different initial estimation methods, and independent component analysis (ICA). In addition, two typical challenging areas of the total ion chromatogram (TIC) of commercial fragrances with overlapped peaks were analyzed by the PIA to investigate the possibility of peak deconvolution analysis. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. The algorithm design manual

    CERN Document Server

    Skiena, Steven S

    2008-01-01

    Explaining the design of algorithms and the analysis of their efficacy and efficiency, this book covers combinatorial algorithms technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations and a bibliography

  12. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops....... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power....

  13. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  14. New algorithms for binary wavefront optimization

    Science.gov (United States)

    Zhang, Xiaolong; Kner, Peter

    2015-03-01

    Binary amplitude modulation promises to allow rapid focusing through strongly scattering media with a large number of segments due to the faster update rates of digital micromirror devices (DMDs) compared to spatial light modulators (SLMs). While binary amplitude modulation has a lower theoretical enhancement than phase modulation, the faster update rate should more than compensate for the difference - a factor of π²/2. Here we present two new algorithms, a genetic algorithm and a transmission matrix algorithm, for optimizing the focus with binary amplitude modulation that achieve enhancements close to the theoretical maximum. Genetic algorithms have been shown to work well in noisy environments and we show that the genetic algorithm performs better than a stepwise algorithm. Transmission matrix algorithms allow complete characterization and control of the medium but require phase control either at the input or output. Here we introduce a transmission matrix algorithm that works with only binary amplitude control and intensity measurements. We apply these algorithms to binary amplitude modulation using a Texas Instruments Digital Micromirror Device. Here we report an enhancement of 152 with 1536 segments (9.90%×N) using a genetic algorithm with binary amplitude modulation and an enhancement of 136 with 1536 segments (8.9%×N) using an intensity-only transmission matrix algorithm.

  15. Genetic Algorithms For the Linear Ordering Problem

    Czech Academy of Sciences Publication Activity Database

    Krömer, P.; Snášel, V.; Platoš, J.; Húsek, Dušan

    2009-01-01

    Roč. 19, č. 1 (2009), s. 65-80 ISSN 1210-0552 Institutional research plan: CEZ:AV0Z10300504 Keywords : evolutionary algorithms * genetic algorithms * linear ordering problem * combinatorial optimization Subject RIV: IN - Informatics, Computer Science Impact factor: 0.475, year: 2009

  16. Model order reduction using eigen algorithm

    African Journals Online (AJOL)

    DR OKE

    to use either for design or analysis. Hence, it is ... directly from the Eigen algorithm, while the zeros are determined through the factor division algorithm to obtain the reduced order system. ..... V. Singh, Chandra and H. Kar, "Improved Routh Pade approximations: A computer aided approach", IEEE Transactions on Automat ...

  17. Computer-assisted assessment of the Human Epidermal Growth Factor Receptor 2 immunohistochemical assay in imaged histologic sections using a membrane isolation algorithm and quantitative analysis of positive controls

    Directory of Open Access Journals (Sweden)

    Ianosi-Irimie Monica

    2008-06-01

    Full Text Available Abstract Background Breast cancers that overexpress the human epidermal growth factor receptor 2 (HER2) are eligible for effective biologically targeted therapies, such as trastuzumab. However, accurately determining HER2 overexpression, especially in immunohistochemically equivocal cases, remains a challenge. Manual analysis of HER2 expression is dependent on the assessment of membrane staining as well as comparisons with positive controls. In spite of the strides that have been made to standardize the assessment process, intra- and inter-observer discrepancies in scoring are not uncommon. In this manuscript we describe a pathologist-assisted, computer-based continuous scoring approach for increasing the precision and reproducibility of assessing imaged breast tissue specimens. Methods Computer-assisted analysis of HER2 IHC is compared with manual scoring and fluorescence in situ hybridization results on a test set of 99 digitally imaged breast cancer cases enriched with equivocally scored (2+) cases. Image features are generated based on the staining profile of the positive control tissue and pixels delineated by a newly developed Membrane Isolation Algorithm. Evaluation of results was performed using Receiver Operator Characteristic (ROC) analysis. Results A computer-aided diagnostic approach has been developed using a membrane isolation algorithm and quantitative use of positive immunostaining controls. By incorporating internal positive controls into feature analysis, a greater Area Under the Curve (AUC) in ROC analysis was achieved than with feature analysis without positive controls. Evaluation of HER2 immunostaining that utilized membrane pixels, controls, and percent area stained showed significantly greater AUC than manual scoring, and a significantly lower false positive rate when used to evaluate immunohistochemically equivocal cases. Conclusion It has been shown that by incorporating both a membrane isolation algorithm and analysis of known

  18. Computer-assisted assessment of the Human Epidermal Growth Factor Receptor 2 immunohistochemical assay in imaged histologic sections using a membrane isolation algorithm and quantitative analysis of positive controls

    International Nuclear Information System (INIS)

    Hall, Bonnie H; Ianosi-Irimie, Monica; Javidian, Parisa; Chen, Wenjin; Ganesan, Shridar; Foran, David J

    2008-01-01

    Breast cancers that overexpress the human epidermal growth factor receptor 2 (HER2) are eligible for effective biologically targeted therapies, such as trastuzumab. However, accurately determining HER2 overexpression, especially in immunohistochemically equivocal cases, remains a challenge. Manual analysis of HER2 expression is dependent on the assessment of membrane staining as well as comparisons with positive controls. In spite of the strides that have been made to standardize the assessment process, intra- and inter-observer discrepancies in scoring are not uncommon. In this manuscript we describe a pathologist-assisted, computer-based continuous scoring approach for increasing the precision and reproducibility of assessing imaged breast tissue specimens. Computer-assisted analysis of HER2 IHC is compared with manual scoring and fluorescence in situ hybridization results on a test set of 99 digitally imaged breast cancer cases enriched with equivocally scored (2+) cases. Image features are generated based on the staining profile of the positive control tissue and pixels delineated by a newly developed Membrane Isolation Algorithm. Evaluation of results was performed using Receiver Operator Characteristic (ROC) analysis. A computer-aided diagnostic approach has been developed using a membrane isolation algorithm and quantitative use of positive immunostaining controls. By incorporating internal positive controls into feature analysis, a greater Area Under the Curve (AUC) in ROC analysis was achieved than with feature analysis without positive controls. Evaluation of HER2 immunostaining that utilized membrane pixels, controls, and percent area stained showed significantly greater AUC than manual scoring, and a significantly lower false positive rate when used to evaluate immunohistochemically equivocal cases. It has been shown that by incorporating both a membrane isolation algorithm and analysis of known positive controls a computer-assisted diagnostic algorithm was

  19. Mathematical algorithms for approximate reasoning

    Science.gov (United States)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
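
    The dependence conditions listed above fix how the certainties of two assertions combine; the sketch below contrasts a few of them for the disjunction of assertions A and B (a generic illustration of the standard combination rules, not the authors' algorithms):

```python
def or_independent(a, b):
    # Statistically independent assertions: P(A or B) = a + b - a*b
    return a + b - a * b

def or_mutually_exclusive(a, b):
    # Mutually exclusive assertions: probabilities simply add
    return min(a + b, 1.0)

def or_fuzzy(a, b):
    # Maximum overlap within the state space (fuzzy logic): take the max
    return max(a, b)

def or_no_knowledge(a, b):
    # No knowledge of dependency: only the Frechet bounds can be stated,
    # i.e. (pessimistic / worst case, optimistic / best case)
    return max(a, b), min(a + b, 1.0)

print(or_independent(0.6, 0.5))         # 0.8
print(or_mutually_exclusive(0.6, 0.5))  # 1.0 (capped)
print(or_fuzzy(0.6, 0.5))               # 0.6
```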

  20. A Bregman-proximal point algorithm for robust non-negative matrix factorization with possible missing values and outliers - application to gene expression analysis.

    Science.gov (United States)

    Chrétien, Stéphane; Guyeux, Christophe; Conesa, Bastien; Delage-Mouroux, Régis; Jouvenot, Michèle; Huetz, Philippe; Descôtes, Françoise

    2016-08-31

    Non-negative matrix factorization has become an essential tool for feature extraction in a wide spectrum of applications. In the present work, our objective is to extend the applicability of the method to the case of missing and/or corrupted data due to outliers. An essential property for missing data imputation and detection of outliers is that the uncorrupted data matrix is low rank, i.e. has only a small number of degrees of freedom. We devise a new version of the Bregman proximal idea which preserves nonnegativity and mix it with the Augmented Lagrangian approach for simultaneous reconstruction of the features of interest and detection of the outliers using a sparsity-promoting ℓ1 penalty. An application to the analysis of gene expression data of patients with bladder cancer is finally proposed.
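
    As background for the missing-data setting, a common baseline, and explicitly not the paper's Bregman-proximal scheme, is weighted multiplicative-update NMF, where a binary mask confines the fit to observed entries:

```python
import numpy as np

def masked_nmf(V, M, rank=5, iters=200, eps=1e-9):
    """Weighted multiplicative updates: minimize ||M * (V - W @ H)||_F^2
    over nonnegative W, H, where mask M is 1 on observed entries and 0 on
    missing ones. A simple baseline, not the Bregman-proximal method."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        W *= ((M * V) @ H.T) / (((M * (W @ H)) @ H.T) + eps)
        H *= (W.T @ (M * V)) / ((W.T @ (M * (W @ H))) + eps)
    return W, H
```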

  1. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  2. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  3. Guest editorial: Adaptive and natural computing algorithms

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2012-01-01

    Roč. 96, - (2012), s. 1-1 ISSN 0925-2312 Institutional support: RVO:67985807 Keywords : soft-computing * adaptive algorithms * neural networks Subject RIV: IN - Informatics, Computer Science Impact factor: 1.634, year: 2012

  4. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  5. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  6. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)

  7. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as those for linear arrays, mesh-connected computers, and cube-connected computers. Another example where algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  8. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
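
    The iteration described in the thesis is the standard power method on the link graph; a compact sketch follows (the damping factor 0.85 and the tolerance are the usual illustrative defaults, and dangling pages without outlinks are not handled):

```python
def pagerank(links, d=0.85, tol=1e-8):
    """links: dict page -> list of pages it links to (each page must have outlinks)."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    while True:
        new = {}
        for p in pages:
            incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * incoming
        # Repeat until the PageRank values stop changing appreciably.
        if max(abs(new[p] - pr[p]) for p in pages) < tol:
            return new
        pr = new

ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```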

  9. Digital Arithmetic: Division Algorithms

    DEFF Research Database (Denmark)

    Montuschi, Paolo; Nannarelli, Alberto

    2017-01-01

    implement it in hardware to not compromise the overall computation performances. This entry explains the basic algorithms, suitable for hardware and software, to implement division in computer systems. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.......g., Newton–Raphson) algorithms. The first class of algorithms, the digit-recurrence type, is particularly suitable for hardware implementation as it requires modest resources and provides good performance on contemporary technology. The second class of algorithms, the multiplicative type, requires...
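
    To make the multiplicative class concrete, here is the classic Newton-Raphson reciprocal iteration used for division (a generic textbook sketch; the seed polynomial and iteration count are illustrative):

```python
def nr_divide(n, d, iterations=5):
    """Compute n/d via the multiplicative (Newton-Raphson) method.

    The iteration x_{k+1} = x_k * (2 - d * x_k) converges quadratically
    to 1/d; hardware implementations scale d into [0.5, 1) first and
    seed x_0 from a small lookup table or a linear approximation.
    """
    assert d != 0
    sign = -1 if d < 0 else 1
    d = abs(d)
    while d >= 1.0:        # crude scaling of d into [0.5, 1)
        d /= 2.0
        n /= 2.0
    while d < 0.5:
        d *= 2.0
        n *= 2.0
    x = 48.0 / 17.0 - (32.0 / 17.0) * d   # standard linear seed on [0.5, 1)
    for _ in range(iterations):
        x = x * (2.0 - d * x)             # roughly doubles correct digits per step
    return sign * n * x

print(nr_divide(1.0, 3.0))  # ~0.333333...
```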

  10. Protein Structure Prediction with Evolutionary Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hart, W.E.; Krasnogor, N.; Pelta, D.A.; Smith, J.

    1999-02-08

    Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs as well as other heuristic methods for solving PSP on the HP model.

  11. Identification of the significant factors in food safety using global sensitivity analysis and the accept-and-reject algorithm: application to the cold chain of ham.

    Science.gov (United States)

    Duret, Steven; Guillier, Laurent; Hoang, Hong-Minh; Flick, Denis; Laguerre, Onrawee

    2014-06-16

    Deterministic models describing heat transfer and microbial growth in the cold chain are widely studied. However, it is difficult to apply them in practice because of several variable parameters in the logistic supply chain (e.g., ambient temperature varying due to season and product residence time in refrigeration equipment), the product's characteristics (e.g., pH and water activity) and the microbial characteristics (e.g., initial microbial load and lag time). This variability can lead to different bacterial growth rates in food products and has to be considered to properly predict the consumer's exposure and identify the key parameters of the cold chain. This study proposes a new approach that combines deterministic (heat transfer) and stochastic (Monte Carlo) modeling to account for the variability in the logistic supply chain and the product's characteristics. Contrary to existing approaches that directly use a time-temperature profile, the proposed model predicts the product temperature evolution from the thermostat setting and the ambient temperature, generating a realistic time-temperature product history. The developed methodology was applied to the cold chain of cooked ham, including the display cabinet, transport by the consumer and the domestic refrigerator, to predict the evolution of state variables, such as the temperature and the growth of Listeria monocytogenes. The impacts of the input factors were calculated and ranked. It was found that the product's time-temperature history and the initial contamination level are the main causes of consumers' exposure. Then, a refined analysis was applied, revealing the importance of consumer behaviors on Listeria monocytogenes exposure. Copyright © 2014. Published by Elsevier B.V.

  12. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
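
    A sketch of the underlying idea follows, with a three-level quantizer applied to the input vector inside the weight update; the threshold value and step size are illustrative, and the exact MCLMS rules are in the paper:

```python
import numpy as np

def quantize3(x, threshold):
    # Three-level quantization of the input: -1, 0, +1
    return np.where(np.abs(x) < threshold, 0.0, np.sign(x))

def clipped_lms_step(w, x, d, mu=0.01, threshold=0.5):
    """x: current input vector, d: desired sample.
    The error uses the raw input; only the update direction is quantized,
    which removes the multiplications by x from the weight update."""
    e = d - w @ x
    w = w + mu * e * quantize3(x, threshold)
    return w, e
```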

  13. Cloud Model Bat Algorithm

    OpenAIRE

    Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformati...

  14. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.

  15. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  16. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  17. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  18. Particle swarm genetic algorithm and its application

    International Nuclear Information System (INIS)

    Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai

    2012-01-01

    To solve the problems of slow convergence speed and the tendency to fall into local optima of standard particle swarm optimization when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the difficulty of selecting a punishment factor in the penalty function method; it generates the initial feasible group randomly, which accelerates the convergence speed of the particle swarm; and it introduces the crossover and mutation strategies of the genetic algorithm to avoid the particle swarm falling into local optima. Optimization calculations on typical test functions show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied in nuclear power plant optimization, and the optimization results are significant. (authors)

  19. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivi...

  20. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
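
    One of the simplest algorithms in this family is greedy vertex colouring, which in sequential form uses at most Δ+1 colours on a graph of maximum degree Δ; the sketch below is a generic illustration, not code taken from the chapter.

```python
def greedy_colouring(adj):
    """adj: dict vertex -> iterable of neighbours.
    Assigns each vertex the smallest colour not used by its already-coloured
    neighbours; never needs more than (max degree + 1) colours."""
    colour = {}
    for v in adj:                        # any vertex order preserves the bound
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# Triangle plus a pendant vertex: needs 3 colours.
print(greedy_colouring({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))
```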

  1. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Computing connectivities between all pairs of vertices: a good algorithm w.r.t. both space and time to compute the exact solution. Computing all-pairs distances: a good algorithm w.r.t. both space and time, but only approximate solutions can be found. Optimal bipartite matchings: an optimal matching need not always exist.

  2. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  3. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 8. Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article Volume 2 ... Author Affiliations. R K Shyamasundar1. Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...

  4. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 8. Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article Volume 2 Issue 8 August 1997 pp 6-17. Fulltext. Click here to view fulltext PDF. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/08/0006-0017 ...

  5. Introduction to Algorithms -14 ...

    Indian Academy of Sciences (India)

    As elaborated in the earlier articles, algorithms must be written in an unambiguous formal way. Algorithms intended for automatic execution by computers are called programs and the formal notations used to write programs are called programming languages. The concept of a programming language has been around ...

  6. Novel algorithm for management of acute epididymitis.

    Science.gov (United States)

    Hongo, Hiroshi; Kikuchi, Eiji; Matsumoto, Kazuhiro; Yazawa, Satoshi; Kanao, Kent; Kosaka, Takeo; Mizuno, Ryuichi; Miyajima, Akira; Saito, Shiro; Oya, Mototsugu

    2017-01-01

    To identify predictive factors for the severity of epididymitis and to develop an algorithm guiding decisions on how to manage patients with this disease. A retrospective study was carried out on 160 epididymitis patients at Keio University Hospital. We classified cases into severe and non-severe groups, and compared clinical findings at the first visit. Based on statistical analyses, we developed an algorithm for predicting severe cases. We validated the algorithm by applying it to an external cohort of 96 patients at Tokyo Medical Center. The efficacy of the algorithm was investigated by a decision curve analysis. A total of 19 patients (11.9%) had severe epididymitis. Patient characteristics including older age, previous history of diabetes mellitus and fever, as well as laboratory data including a higher white blood cell count, C-reactive protein level and blood urea nitrogen level were independently associated with severity. A predictive algorithm was created with the ability to classify epididymitis cases into three risk groups. In the Keio University Hospital cohort, 100%, 23.5%, and 3.4% of cases in the high-, intermediate-, and low-risk groups, respectively, became severe. The specificity of the algorithm for predicting severe epididymitis proved to be 100% in the Keio University Hospital cohort and 98.8% in the Tokyo Medical Center cohort. The decision curve analysis also showed the high efficacy of the algorithm. This algorithm might aid in decision-making for the clinical management of acute epididymitis. © 2016 The Japanese Urological Association.

  7. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  8. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  9. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  10. Antenna design by means of the fruit fly optimization algorithm

    OpenAIRE

    Polo-López, Lucas; Córcoles, Juan; Ruiz-Cruz, Jorge A.

    2018-01-01

    In this work a heuristic optimization algorithm known as the Fruit fly Optimization Algorithm is applied to antenna design problems. The original formulation of the algorithm is presented and it is adapted to array factor and horn antenna optimization problems. Specifically, it is applied to the array factor synthesis of uniformly-fed, non-equispaced arrays and to the profile optimization of multimode horn antennas. Several numerical examples are presented and the obtained results are compare...

  11. Antenna Design by Means of the Fruit Fly Optimization Algorithm

    OpenAIRE

    Lucas Polo-López; Juan Córcoles; Jorge A. Ruiz-Cruz

    2018-01-01

    In this work a heuristic optimization algorithm known as the Fruit fly Optimization Algorithm is applied to antenna design problems. The original formulation of the algorithm is presented and it is adapted to array factor and horn antenna optimization problems. Specifically, it is applied to the array factor synthesis of uniformly-fed, non-equispaced arrays and to the profile optimization of multimode horn antennas. Several numerical examples are presented and the obtained results are compare...

  12. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and since then has been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA), and its performance is then compared with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  13. Algorithms for optimizing drug therapy

    Directory of Open Access Journals (Sweden)

    Martin Lene

    2004-07-01

    Full Text Available Abstract Background Drug therapy has become increasingly efficient, with more drugs available for treatment of an ever-growing number of conditions. Yet, drug use is reported to be suboptimal in several aspects, such as dosage, patient adherence and outcome of therapy. The aim of the current study was to investigate the possibility of optimizing drug therapy using computer programs available on the Internet. Methods One hundred and ten officially endorsed text documents, published between 1996 and 2004, containing guidelines for drug therapy in 246 disorders, were analyzed with regard to information about patient-, disease- and drug-related factors and the relationships between these factors. This information was used to construct algorithms for identifying the optimum treatment in each of the studied disorders. These algorithms were categorized in order to define as few models as possible that could still accommodate the identified factors and the relationships between them. The resulting program prototypes were implemented in HTML (user interface) and JavaScript (program logic). Results Three types of algorithms were sufficient for the intended purpose. The simplest type is a list of factors, each of which implies that the particular patient should or should not receive treatment. This is adequate in situations where only one treatment exists. The second type, a more elaborate model, is required when treatment can be provided using drugs from different pharmacological classes and the selection of drug class depends on patient characteristics. An easily implemented set of if-then statements was able to manage the identified information in such instances. The third type was needed in the few situations where the selection and dosage of drugs depended on the degree to which one or more patient-specific factors were present. In these cases the implementation of an established decision model based on fuzzy sets was required. Computer programs
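
    The second algorithm type described above, drug-class selection via if-then statements over patient characteristics, can be illustrated with a small sketch; the condition names, classes, and rules here are invented placeholders, not taken from the guidelines and not clinical guidance.

```python
def select_drug_class(patient):
    """Toy if-then selection of a drug class from patient factors.
    All rules and names below are illustrative placeholders only."""
    if patient.get("contraindication_class_a"):
        return "class B"
    if patient.get("age", 0) >= 65 and patient.get("renal_impairment"):
        return "class C (reduced dose)"
    return "class A"

print(select_drug_class({"age": 70, "renal_impairment": True}))
```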

  14. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  15. War-Algorithm Accountability

    OpenAIRE

    Lewis, Dustin A.; Blum, Gabriella; Modirzadeh, Naz K.

    2016-01-01

    In this briefing report, we introduce a new concept — war algorithms — that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed co...

  16. Selected event reconstruction algorithms for the CBM experiment at FAIR

    International Nuclear Information System (INIS)

    Lebedev, Semen; Höhne, Claudia; Lebedev, Andrey; Ososkov, Gennady

    2014-01-01

    Development of fast and efficient event reconstruction algorithms is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR facility. The event reconstruction algorithms have to process terabytes of input data produced in particle collisions. In this contribution, several event reconstruction algorithms are presented. Optimization of the algorithms in the following CBM detectors is discussed: the Ring Imaging Cherenkov (RICH) detector, the Transition Radiation Detectors (TRD) and the Muon Chamber (MUCH). The ring reconstruction algorithm in the RICH is discussed. In the TRD and MUCH, track reconstruction algorithms are based on track following and Kalman filter methods. All algorithms were significantly optimized to achieve maximum speed-up and minimum memory consumption. The obtained results show that a significant speed-up factor was achieved for all algorithms and the reconstruction efficiency stays at a high level.

  17. Adaptive symbiotic organisms search (SOS) algorithm for structural design optimization

    Directory of Open Access Journals (Sweden)

    Ghanshyam G. Tejani

    2016-07-01

    Full Text Available The symbiotic organisms search (SOS) algorithm is an effective metaheuristic developed in 2014, which mimics the symbiotic relationships among living beings, such as mutualism, commensalism, and parasitism, that help them survive in an ecosystem. In this study, three modified versions of the SOS algorithm are proposed by introducing adaptive benefit factors into the basic SOS algorithm to improve its efficiency. The basic SOS algorithm only considers benefit factors, whereas the proposed variants consider effective combinations of adaptive benefit factors and benefit factors, to study their ability to strike a good balance between exploration and exploitation of the search space. The proposed algorithms are tested on engineering structures subjected to dynamic excitation, which may lead to undesirable vibrations. Structural optimization problems become more challenging if the shape and size variables are taken into account along with the frequency. To check the feasibility and effectiveness of the proposed algorithms, six different planar and space trusses are subjected to experimental analysis. The results obtained using the proposed methods are compared with those obtained using other optimization methods well established in the literature. The results reveal that the adaptive SOS algorithm is more reliable and efficient than the basic SOS algorithm and other state-of-the-art algorithms.
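
    For orientation, here is a minimal sketch of the SOS mutualism phase on a toy minimization problem. The adaptive benefit factor shown is one plausible illustration of the idea (shrinking the factor as an organism approaches the best solution); the paper's exact adaptation rules may differ, and the sphere function stands in for the truss problems.

        import numpy as np

        def sphere(x):
            return np.sum(x**2)

        rng = np.random.default_rng(0)
        pop = rng.uniform(-5, 5, size=(20, 4))        # ecosystem of 20 organisms
        fit = np.array([sphere(x) for x in pop])

        for _ in range(200):
            best = pop[np.argmin(fit)]
            i, j = rng.choice(len(pop), size=2, replace=False)
            mutual = (pop[i] + pop[j]) / 2.0          # mutual vector
            # illustrative adaptive benefit factor in (1, 2]
            bf_i = 1 + fit[i] / (fit[i] + fit.min() + 1e-12)
            bf_j = 1 + fit[j] / (fit[j] + fit.min() + 1e-12)
            for k, bf in ((i, bf_i), (j, bf_j)):
                cand = pop[k] + rng.random(4) * (best - mutual * bf)
                if sphere(cand) < fit[k]:             # greedy acceptance
                    pop[k], fit[k] = cand, sphere(cand)

        print(fit.min())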

  18. Cloud model bat algorithm.

    Science.gov (United States)

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on function optimization.
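
    For reference, this is the core bat-algorithm update (frequency tuning plus a move relative to the best bat) in its standard textbook form; the cloud-model and Lévy-flight refinements of the paper are omitted, and the sphere function is a stand-in objective.

        import numpy as np

        def sphere(x):
            return np.sum(x**2)

        rng = np.random.default_rng(1)
        n, dim = 15, 3
        x = rng.uniform(-5, 5, (n, dim))       # bat positions
        v = np.zeros((n, dim))                 # bat velocities
        f_min, f_max = 0.0, 2.0                # echolocation frequency range
        fit = np.array([sphere(b) for b in x])

        for _ in range(300):
            best = x[np.argmin(fit)]
            freq = f_min + (f_max - f_min) * rng.random((n, 1))
            v = v + (x - best) * freq          # standard BA velocity update
            cand = x + v
            for i in range(n):
                if sphere(cand[i]) < fit[i]:   # keep only improving moves
                    x[i], fit[i] = cand[i], sphere(cand[i])

        print(fit.min())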

  20. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  1. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  2. An efficient planar inverse acoustic method based on Toeplitz matrices

    NARCIS (Netherlands)

    Wind, Jelmer; de Boer, Andries; Ellenbroek, Marcellinus Hermannus Maria

    2011-01-01

    This article proposes a new, fast method to solve inverse acoustic problems for planar sources. This problem is often encountered in practice and methods such as planar nearfield acoustic holography (PNAH) and statistically optimised nearfield acoustic holography (SONAH) are widely used to solve it.
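
    The speed of Toeplitz-based methods of this kind ultimately rests on a generic building block: an n-by-n Toeplitz matrix can be applied to a vector in O(n log n) by embedding it in a 2n-by-2n circulant matrix, which the FFT diagonalizes. A minimal sketch of that trick follows; it is background, not the authors' specific PNAH/SONAH formulation.

        import numpy as np

        def toeplitz_matvec(c, r, x):
            """c: first column, r: first row (c[0] == r[0]), x: vector."""
            n = len(c)
            v = np.concatenate([c, [0.0], r[:0:-1]])   # circulant first column
            xp = np.concatenate([x, np.zeros(n)])      # zero-pad the vector
            y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(xp))
            return y[:n].real

        # check against the dense product
        c = np.array([4.0, 1.0, 0.5])                  # first column
        r = np.array([4.0, 2.0, 0.3])                  # first row
        T = np.array([[4.0, 2.0, 0.3],
                      [1.0, 4.0, 2.0],
                      [0.5, 1.0, 4.0]])
        x = np.array([1.0, 2.0, 3.0])
        print(toeplitz_matvec(c, r, x), T @ x)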

  3. Hitchin's connection, Toeplitz operators, and symmetry invariant deformation quantization

    DEFF Research Database (Denmark)

    Andersen, Jørgen Ellegaard

    2012-01-01

    We introduce the notion of a rigid family of Kähler structures on a symplectic manifold. We then prove that a Hitchin connection exists for any rigid holomorphic family of Kähler structures on any compact pre-quantizable symplectic manifold which satisfies certain simple topological constraints...

  4. On the Construction of Jointly Superregular Lower Triangular Toeplitz Matrices

    DEFF Research Database (Denmark)

    Hansen, Jonas; Østergaard, Jan; Kudahl, Johnny

    2016-01-01

    superregular and product preserving jointly superregular matrices, and extend our explicit constructions of superregular matrices to these cases. Jointly superregular matrices are necessary to achieve optimal decoding capabilities for the case of codes with a rate lower than 1/2, and the product preserving...

  5. Static Analysis Numerical Algorithms

    Science.gov (United States)

    2016-04-01

    Static Analysis of Numerical Algorithms. Kestrel Technology, LLC, April 2016. Final technical report covering November 2013 – November 2015 (contract FA8750-14-C-...); approved for public release, distribution unlimited. The project worked with Honeywell Aerospace Advanced Technology to combine model-based development of complex avionics control software with static analysis of the...

  6. Improved Chaff Solution Algorithm

    Science.gov (United States)

    2009-03-01

    [Translated from French] As part of the Technology Demonstration Program (TDP) on the integration of onboard sensors and weapon systems (SISWS), an algorithm was developed to automatically determine...

  7. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the correct solution method for your optimization problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, and the method of feasible directions...

  8. Image Segmentation Algorithms Overview

    OpenAIRE

    Yuheng, Song; Hao, Yan

    2017-01-01

    The technology of image segmentation is widely used in medical image processing, face recognition, pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNNs, etc. This paper analyzes and summarizes these image segmentation algorithms, and compares the advantages and disadvantages of the different algorithms. Finally, we make a predi...

  9. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  10. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to new and complex situations. In recent years, RFID technology has found a wide range of applications in daily life and production, such as logistics tracking, car alarms and security. Using RFID technology for positioning is a new research direction for research institutions and scholars. RFID positioning systems are stable, have small errors and are low-cost, so their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; second, higher-accuracy network-based location methods; finally, the LANDMARC algorithm is described (see the sketch below). This shows that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies pointed out, and requirements for follow-up study put forward, with a vision of better future RFID positioning technology.
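
    A minimal sketch of the LANDMARC idea: locate a tracking tag from the similarity between its received signal strengths (RSSI) across readers and those of reference tags at known positions, taking a weighted centroid of the k nearest reference tags in signal space. The RSSI values and geometry below are made up for illustration.

        import numpy as np

        def landmarc(rssi_tag, rssi_refs, ref_pos, k=3):
            # Euclidean distance in signal space to every reference tag
            E = np.linalg.norm(rssi_refs - rssi_tag, axis=1)
            nearest = np.argsort(E)[:k]                 # k nearest reference tags
            w = 1.0 / (E[nearest] ** 2 + 1e-9)          # closer tags weigh more
            w /= w.sum()                                # normalized weights
            return w @ ref_pos[nearest]                 # weighted centroid

        rssi_tag = np.array([-40.0, -55.0, -60.0])      # readings at 3 readers
        rssi_refs = np.array([[-42.0, -54.0, -61.0],
                              [-50.0, -45.0, -66.0],
                              [-39.0, -58.0, -57.0],
                              [-65.0, -60.0, -40.0]])
        ref_pos = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]])
        print(landmarc(rssi_tag, rssi_refs, ref_pos))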

  11. Blind distributed estimation algorithms for adaptive networks

    Science.gov (United States)

    Bin Saeed, Muhammad O.; Zerguine, Azzedine; Zummo, Salam A.

    2014-12-01

    In recent years, much work has been done to develop algorithms that utilize the distributed structure of an ad hoc wireless sensor network to estimate a parameter of interest. However, these algorithms assume that the input regressor data is available to the sensors, which is not always the case. In such cases, blind estimation of the required parameter is needed. This work formulates two newly developed blind block-recursive algorithms based on singular value decomposition (SVD) and Cholesky factorization techniques. These adaptive algorithms are then used for blind estimation in a wireless sensor network using diffusion of data among cooperating sensors. Simulation results show that the performance greatly improves over the case where no cooperation among sensors is involved.

  12. Antenna Design by Means of the Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Lucas Polo-López

    2018-01-01

    Full Text Available In this work a heuristic optimization algorithm known as the Fruit fly Optimization Algorithm is applied to antenna design problems. The original formulation of the algorithm is presented and it is adapted to array factor and horn antenna optimization problems. Specifically, it is applied to the array factor synthesis of uniformly-fed, non-equispaced arrays and to the profile optimization of multimode horn antennas. Several numerical examples are presented and the obtained results are compared with those provided by a deterministic optimization based on a simplex method and another well-known heuristic approach, the Genetic Algorithm.
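
    For reference, this is the Fruit fly Optimization Algorithm in its original single-swarm form: flies scatter randomly around the swarm location, the candidate solution is the reciprocal of each fly's distance to the origin, and the swarm moves to the best-smelling fly. The smell function below is a toy stand-in; applying FOA to array-factor or horn-profile optimization would only change f().

        import numpy as np

        def f(s):
            return (s - 0.25) ** 2          # toy smell function, minimum at s = 0.25

        rng = np.random.default_rng(2)
        x_axis, y_axis = rng.random(2)      # initial swarm location
        for _ in range(100):
            X = x_axis + rng.uniform(-1, 1, 30)   # 30 flies per generation
            Y = y_axis + rng.uniform(-1, 1, 30)
            D = np.sqrt(X**2 + Y**2)              # distance to origin
            S = 1.0 / D                           # smell concentration judgment
            smell = f(S)
            b = np.argmin(smell)
            x_axis, y_axis = X[b], Y[b]           # swarm follows the best fly

        print(1.0 / np.hypot(x_axis, y_axis))     # best solution found, near 0.25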

  13. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of an integral transform (equation not reproduced here) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (equation not reproduced here) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  14. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms has been established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration and it is indicated by this trajectory. 

  15. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  16. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
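
    One plausible reading of a quote-volatility-style measure (the paper's exact definition may differ) is the fraction of best-bid updates within a short window that reverse the direction of the previous update; rapidly oscillating quotes drive this toward 1. A small sketch on synthetic quotes:

        import numpy as np

        def quote_oscillation_ratio(best_bid):
            moves = np.diff(best_bid)
            moves = moves[moves != 0]               # keep actual quote changes
            if len(moves) < 2:
                return 0.0
            reversals = np.sum(np.sign(moves[1:]) != np.sign(moves[:-1]))
            return reversals / (len(moves) - 1)

        bids = np.array([100.01, 100.02, 100.01, 100.02, 100.01, 100.03])
        print(quote_oscillation_ratio(bids))        # 1.0: every move is a reversal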

  17. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  18. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. "Handbook of Memetic Algorithms" organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  19. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity found in text and linking it with an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph and machine learning approaches is proposed, in accordance with the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. A solution based on machine learning algorithms alone cannot be built, due to the small volume of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally: a test dataset was independently generated, and on its basis the performance of a model using the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which makes this direction promising. The main directions of development are proposed in order to increase the accuracy and productivity of the system.

  20. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
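
    A minimal sketch of the MCL iteration on a small graph: alternate expansion (matrix squaring) and inflation (entrywise power followed by column normalization) on the column-stochastic matrix of the graph until the process converges; the self-loops and the interpretation of the result follow common MCL practice.

        import numpy as np

        def mcl(adj, inflation=2.0, iters=50):
            M = adj + np.eye(len(adj))              # add self-loops
            M = M / M.sum(axis=0)                   # make columns stochastic
            for _ in range(iters):
                M = M @ M                           # expansion
                M = M ** inflation                  # inflation
                M = M / M.sum(axis=0)
            return M                                # nonzero rows define clusters

        # two triangles joined by a single edge
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
        M = mcl(A)
        print(np.where(M > 0.5))                    # attractors and their clusters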

  1. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  2. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks that emulate the way neurons in the human brain process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are almost the most important factor influencing neural network performance. Recently many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we test them on classifying four opposite binary XOR clusters and on classifying continuous real data sets such as Iris and Ecoli.

  3. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.

  4. Wireless communications algorithmic techniques

    CERN Document Server

    Vitetta, Giorgio; Colavolpe, Giulio; Pancaldi, Fabrizio; Martin, Philippa A

    2013-01-01

    This book introduces the theoretical elements at the basis of various classes of algorithms commonly employed in the physical layer (and, in part, in the MAC layer) of wireless communications systems. It focuses on single-user systems, so ignoring multiple access techniques. Moreover, emphasis is put on single-input single-output (SISO) systems, although some relevant topics about multiple-input multiple-output (MIMO) systems are also illustrated. A comprehensive, wireless-specific guide to algorithmic techniques, it provides a detailed analysis of channel equalization and channel coding for wi...

  5. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  6. Information filtering via weighted heat conduction algorithm

    Science.gov (United States)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account effects of the user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure the object similarity. The numerical results indicate that both the accuracy and diversity could be improved greatly compared with the standard HC algorithm and the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity could reach 0.9587 and 0.9317 when the recommendation list equals to 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight are changed to the Poisson form, which may be the reason why HC algorithm performance could be improved. This work highlights the effect of edge weight on a personalized recommendation study, which maybe an important factor affecting personalized recommendation performance.
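
    For background, here is a sketch of the plain heat-conduction (HC) scoring step on a user-object bipartite network: the target user's collected objects are twice averaged over the network to score all objects. The weighted variant of the paper would scale the adjacency entries by edge weights before averaging; the tiny matrix below is illustrative.

        import numpy as np

        def hc_scores(A, user):
            """A[u, o] = 1 if user u collected object o; score objects for `user`."""
            k_user = A.sum(axis=1)                   # user degrees
            k_obj = A.sum(axis=0)                    # object degrees
            f0 = A[user].astype(float)               # initial resource on objects
            h = (A @ f0) / k_user                    # average onto users
            return (A.T @ h) / k_obj                 # average back onto objects

        A = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1]])
        print(hc_scores(A, user=0))                  # rank unseen objects by score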

  7. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  8. Accelerated Gossip Algorithms for Distributed Computation

    NARCIS (Netherlands)

    Cao, M.; Spielman, D.A.; Yeh, E.M.

    2006-01-01

    We introduce a technique for accelerating the gossip algorithm of Boyd et al. (INFOCOM 2005) for distributed averaging in a network. By employing memory in the form of a small shift-register in the computation at each node, we can speed up the algorithm's convergence by a factor of 10. Our...
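
    The baseline being accelerated is plain randomized gossip: at each step a random edge (i, j) is activated and both endpoints replace their values by the pairwise mean, so all nodes converge to the network average. This sketch shows only that baseline; the accelerated variant adds the small shift register at each node.

        import numpy as np

        rng = np.random.default_rng(3)
        edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # ring of four nodes
        x = np.array([4.0, 8.0, 0.0, 2.0])          # initial node values

        for _ in range(200):
            i, j = edges[rng.integers(len(edges))]
            x[i] = x[j] = (x[i] + x[j]) / 2.0       # pairwise averaging step

        print(x, x.mean())                          # all values approach 3.5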

  9. From Story to Algorithm.

    Science.gov (United States)

    Ball, Stanley

    1986-01-01

    Presents a developmental taxonomy which promotes sequencing activities to enhance the potential of matching these activities with learner needs and readiness, suggesting that the order commonly found in the classroom needs to be inverted. The proposed taxonomy (story, skill, and algorithm) involves problem-solving emphasis in the classroom. (JN)

  10. The Design of Algorithms.

    Science.gov (United States)

    Ferguson, David L.; Henderson, Peter B.

    1987-01-01

    Designed initially for use in college computer science courses, the model and computer-aided instructional environment (CAIE) described helps students develop algorithmic problem solving skills. Cognitive skills required are discussed, and implications for developing computer-based design environments in other disciplines are suggested by…

  11. Improved Approximation Algorithm for

    NARCIS (Netherlands)

    Byrka, Jaroslaw; Li, S.; Rybicki, Bartosz

    2014-01-01

    We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that for any constant k, in polynomial time, delivers solutions of

  12. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom, such as proteins and nucleic acids, there exists an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in these local-minimum states. Enhanced conformational sampling techniques are thus in great demand. A simulation in a generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, the multicanonical algorithm, simulated tempering, and the replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
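
    The heart of the replica-exchange method is the textbook swap criterion: two replicas at inverse temperatures beta_i and beta_j with energies E_i and E_j exchange configurations with probability min(1, exp((beta_i - beta_j)(E_i - E_j))). A minimal sketch:

        import numpy as np

        def try_swap(beta_i, beta_j, E_i, E_j, rng):
            # Metropolis-style acceptance of a replica exchange
            delta = (beta_i - beta_j) * (E_i - E_j)
            return np.log(rng.random()) < min(0.0, delta)

        rng = np.random.default_rng(4)
        print(try_swap(1.0, 0.5, -10.0, -8.0, rng))  # swap attempt between replicas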

  13. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  14. Algorithmic information theory

    NARCIS (Netherlands)

    Grünwald, P.D.; Vitányi, P.M.B.; Adriaans, P.; van Benthem, J.

    2008-01-01

    We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are

  16. Introduction to Algorithms

    Indian Academy of Sciences (India)

    Introduction to Algorithms: Turtle Graphics. R K Shyamasundar. Series article, Resonance – Journal of Science Education, Volume 1, Issue 9. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  17. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object-oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...

  18. Algorithms for SCC Decomposition

    NARCIS (Netherlands)

    J. Barnat; J. Chaloupka (Jakub); J.C. van de Pol (Jaco)

    2008-01-01

    We study and improve the OBF technique [Barnat, J. and P. Moravec, Parallel algorithms for finding SCCs in implicitly given graphs, in: Proceedings of the 5th International Workshop on Parallel and Distributed Methods in Verification (PDMC 2006), LNCS (2007)], which was used in

  19. Fractal Landscape Algorithms for Environmental Simulations

    Science.gov (United States)

    Mao, H.; Moran, S.

    2014-12-01

    Natural science and geographical research are now able to take advantage of environmental simulations that more accurately test experimental hypotheses, resulting in deeper understanding. Experiments affected by the natural environment can benefit from 3D landscape simulations capable of representing a variety of terrains and environmental phenomena. Such simulations can employ random terrain generation algorithms that dynamically simulate environments to test specific models against a variety of factors. Through the use of noise functions such as Perlin noise, Simplex noise, and the diamond-square algorithm, computers can generate simulations that model a variety of landscapes and ecosystems. This study shows how these algorithms work together to create realistic landscapes. By seeding values into the diamond-square algorithm, one can control the shape of the landscape. Perlin noise and Simplex noise are also used to simulate moisture and temperature. The smooth gradient created by coherent noise allows more realistic landscapes to be simulated. Terrain generation algorithms can be used in environmental studies and physics simulations. Potential studies that would benefit from simulations include the geophysical impact of flash floods or drought on a particular region and regional impacts on low-lying areas due to global warming and rising sea levels. Furthermore, terrain generation algorithms also serve as aesthetic tools to display landscapes (Google Earth) and simulate planetary landscapes. Hence, they can be used as tools to assist science education. The algorithms used to generate these natural phenomena provide scientists a different approach to analyzing our world. The random algorithms used in terrain generation not only generate the terrains themselves but are also capable of simulating weather patterns.
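
    A compact diamond-square sketch on a (2^n + 1)-square grid: the diamond step sets each square's centre from its corners, the square step sets edge midpoints, and the random perturbation halves at each scale, which produces the fractal roughness. Seeding the four corners controls the large-scale shape of the terrain.

        import numpy as np

        def diamond_square(n, roughness=1.0, seed=0):
            size = 2**n + 1
            rng = np.random.default_rng(seed)
            h = np.zeros((size, size))
            h[0, 0], h[0, -1], h[-1, 0], h[-1, -1] = rng.uniform(-1, 1, 4)  # seeds
            step, scale = size - 1, roughness
            while step > 1:
                half = step // 2
                # diamond step: centres of squares from their four corners
                for y in range(half, size, step):
                    for x in range(half, size, step):
                        avg = (h[y-half, x-half] + h[y-half, x+half] +
                               h[y+half, x-half] + h[y+half, x+half]) / 4
                        h[y, x] = avg + rng.uniform(-scale, scale)
                # square step: edge midpoints from their in-grid neighbours
                for y in range(0, size, half):
                    for x in range((y + half) % step, size, step):
                        nbrs = [h[y+dy, x+dx]
                                for dy, dx in ((-half, 0), (half, 0),
                                               (0, -half), (0, half))
                                if 0 <= y+dy < size and 0 <= x+dx < size]
                        h[y, x] = sum(nbrs) / len(nbrs) + rng.uniform(-scale, scale)
                step, scale = half, scale / 2          # halve scale each level
            return h

        print(diamond_square(4).shape)                 # (17, 17) height map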

  20. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  1. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    Science.gov (United States)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons of the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.
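
    To illustrate the role of an adaptive momentum factor, here is a generic PSO velocity/position update whose inertia weight decays over the iterations; this is an illustrative sketch of the trade-off AVURPSO regulates, not the exact VURPSO/AVURPSO update rules, and the sphere function stands in for the PID tuning objective.

        import numpy as np

        def sphere(x):
            return np.sum(x**2)

        rng = np.random.default_rng(5)
        n, dim, iters = 20, 3, 200
        x = rng.uniform(-5, 5, (n, dim))
        v = np.zeros((n, dim))
        pbest, pfit = x.copy(), np.array([sphere(p) for p in x])
        gbest = pbest[np.argmin(pfit)]

        for t in range(iters):
            w = 0.9 - 0.5 * t / iters            # adaptive momentum factor
            r1, r2 = rng.random((2, n, dim))
            v = w*v + 2.0*r1*(pbest - x) + 2.0*r2*(gbest - x)
            x = x + v
            fit = np.array([sphere(p) for p in x])
            better = fit < pfit
            pbest[better], pfit[better] = x[better], fit[better]
            gbest = pbest[np.argmin(pfit)]

        print(sphere(gbest))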

  2. Fast autodidactic adaptive equalization algorithms

    Science.gov (United States)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic-gradient Bussgang-type algorithm, is given to derive two low-computation-cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the initial and normalized Godard algorithms. Simulations of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms; the improvement in residual error was much smaller. This performance comes close to making autodidactic equalization usable in mobile radio systems.

  3. A new modified fast fractal image compression algorithm

    DEFF Research Database (Denmark)

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein

    2013-01-01

    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with the use of innovative predefined values for contrast scaling factor, S, instead of searching...

  4. A Cache-Optimal Alternative to the Unidirectional Hierarchization Algorithm

    DEFF Research Database (Denmark)

    Hupp, Philipp; Jacob, Riko

    2016-01-01

    of the cache misses by a factor of d compared to the unidirectional algorithm which is the common standard up to now. The new algorithm is also optimal in the sense that the leading term of the cache misses is reduced to scanning complexity, i.e., every degree of freedom has to be touched once. We also present...

  6. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang, as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  7. A MEDLINE categorization algorithm

    Science.gov (United States)

    Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit

    2006-01-01

    Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms

  8. The PARAFAC-MUSIC Algorithm for DOA Estimation with Doppler Frequency in a MIMO Radar System

    Directory of Open Access Journals (Sweden)

    Nan Wang

    2014-01-01

    Full Text Available The PARAFAC-MUSIC algorithm is proposed in this paper to estimate the direction-of-arrival (DOA) of targets with Doppler frequency in a monostatic MIMO radar system. To estimate the Doppler frequency, the PARAFAC (parallel factor) algorithm is first utilized in the proposed algorithm; after compensation of the Doppler frequency, the MUSIC (multiple signal classification) algorithm is applied to estimate the DOA. By these two steps, the DOA of moving targets can be estimated successfully. Simulation results show that the proposed PARAFAC-MUSIC algorithm has a higher accuracy than the PARAFAC algorithm and the MUSIC algorithm in DOA estimation.
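
    The second stage of the method is the textbook MUSIC pseudospectrum, shown below for a uniform linear array with half-wavelength spacing: project array steering vectors onto the noise subspace of the sample covariance matrix and look for peaks. The simulated scenario is illustrative; the Doppler compensation and PARAFAC stage of the paper are omitted.

        import numpy as np

        def music_spectrum(R, n_sources, angles_deg, n_elem):
            _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
            En = vecs[:, :n_elem - n_sources]        # noise subspace
            p = []
            for th in np.deg2rad(angles_deg):
                a = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(th))
                p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return np.array(p)

        # simulate one source at +20 degrees plus noise
        rng = np.random.default_rng(6)
        n_elem, snaps = 8, 200
        a = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(np.deg2rad(20)))
        s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
        X = np.outer(a, s) + 0.1 * (rng.standard_normal((n_elem, snaps))
                                    + 1j * rng.standard_normal((n_elem, snaps)))
        R = X @ X.conj().T / snaps                   # sample covariance matrix
        angles = np.arange(-90, 91)
        print(angles[np.argmax(music_spectrum(R, 1, angles, n_elem))])  # near 20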

  9. Genetic Algorithms and Local Search

    Science.gov (United States)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.

  10. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    Science.gov (United States)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithms (GAs), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - use an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that: it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor, both for quality (error) and for cost (complexity) of the produced solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straightaway through a GA without using knowledge of the character of the system, we can do a consciously much better job of producing a solution by using the information generated in this very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  11. Algorithms for Global Positioning

    DEFF Research Database (Denmark)

    Borre, Kai; Strang, Gilbert

    The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology, and replaces the authors' previous work, Linear Algebra, Geodesy, and GPS (1997). An initial discussion of the basic concepts, characteristics and technical aspects of different satellite systems is followed by the necessary mathematical content, which is presented in a detailed and self-contained fashion. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers...

  12. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  13. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with a Data Access Library (DAL) allowing C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. The algorithms are available in C++ and have been partially reimplemented in Java. The goal of the projec...

  14. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
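
    The prototypical partitional method covered by such surveys is k-means via Lloyd's algorithm: alternate assigning points to the nearest centroid and moving each centroid to the mean of its points. A minimal sketch on synthetic two-cluster data:

        import numpy as np

        def kmeans(X, k, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), k, replace=False)]   # init from data
            for _ in range(iters):
                d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
                labels = d.argmin(axis=1)                 # assignment step
                new = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centroids[j]
                                for j in range(k)])       # update step
                if np.allclose(new, centroids):
                    break
                centroids = new
            return labels, centroids

        X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 2))
                       for m in (0.0, 5.0)])              # two synthetic clusters
        labels, C = kmeans(X, 2)
        print(C)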

  15. Fatigue Evaluation Algorithms: Review

    DEFF Research Database (Denmark)

    Passipoularidis, Vaggelis; Brøndsted, Povl

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck...... series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor...... blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects...

  16. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
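
    The flavor of boosting sketched above can be illustrated with a compact AdaBoost toy on one-dimensional threshold "rules of thumb" (decision stumps). The data set and round count are invented for illustration; this is not the book's code.

        import math

        # AdaBoost sketch: repeatedly fit the best weak stump on reweighted
        # data, then upweight the examples the ensemble still gets wrong.
        def stumps(xs):
            for t in sorted(set(xs)):
                for s in (1, -1):
                    yield lambda x, t=t, s=s: s if x <= t else -s

        def weighted_error(h, w, xs, ys):
            return sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)

        def adaboost(xs, ys, rounds=10):
            w = [1.0 / len(xs)] * len(xs)
            ensemble = []
            for _ in range(rounds):
                h = min(stumps(xs), key=lambda h: weighted_error(h, w, xs, ys))
                err = weighted_error(h, w, xs, ys)
                if err == 0 or err >= 0.5:
                    break
                alpha = 0.5 * math.log((1 - err) / err)  # vote of this weak rule
                ensemble.append((alpha, h))
                w = [wi * math.exp(-alpha * y * h(x))
                     for wi, x, y in zip(w, xs, ys)]
                total = sum(w)
                w = [wi / total for wi in w]  # renormalize the distribution
            return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

        classify = adaboost([1, 2, 3, 4, 5, 6], [1, 1, -1, -1, 1, 1])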

  17. Likelihood Inflating Sampling Algorithm

    OpenAIRE

    Entezari, Reihaneh; Craiu, Radu V.; Rosenthal, Jeffrey S.

    2016-01-01

    Markov Chain Monte Carlo (MCMC) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. In this paper, we introduce a new communication-free parallel method, the Likelihood Inflating Sampling Algorithm (LISA), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running MCMC methods independently ...

  18. Constrained Minimization Algorithms

    Science.gov (United States)

    Lantéri, H.; Theys, C.; Richard, C.

    2013-03-01

    In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically, we deal with transformations described by a linear model linking the unknown signal to a noiseless version of the data. The measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of inverse problems are briefly mentioned and shown on a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the (here linear) model of such data. Section 4 deals with likelihood maximization and its links with divergence minimization. The physical constraints on the solution are indicated and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the lower bound of the solution is introduced first; the positivity constraint is a particular case of such a constraint. We show how to obtain, in a strict sense, the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6 we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.
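
    The SGM derivation is not reproduced here, but the multiplicative, positivity-preserving updates it leads to can be illustrated by the classic multiplicative scheme for nonnegative least squares, in which the gradient is split into positive and negative parts whose ratio rescales the iterate. The matrix H, the data and the iteration count below are invented for illustration.

        import numpy as np

        # Multiplicative update for nonnegative least squares: split the
        # gradient of ||y - Hx||^2 into its positive part H^T H x and negative
        # part H^T y; their ratio rescales x, so positivity is preserved.
        rng = np.random.default_rng(0)
        H = rng.random((20, 10))        # invented nonnegative model matrix
        y = H @ rng.random(10)          # invented consistent data
        x = np.ones(10)                 # strictly positive start
        for _ in range(500):
            x *= (H.T @ y) / (H.T @ (H @ x) + 1e-12)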

  19. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  20. NEUTRON ALGORITHM VERIFICATION TESTING

    Energy Technology Data Exchange (ETDEWEB)

    COWGILL,M.; MOSBY,W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-07-19

    Active well coincidence counter assays have been performed on uranium metal highly enriched in {sup 235}U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the {sup 235}U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the {sup 235}U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.

  1. The Hip Restoration Algorithm

    Science.gov (United States)

    Stubbs, Allston Julius; Atilla, Halis Atil

    2016-01-01

    Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. Because the hip is a deep joint, localization of hip symptoms is more difficult than for the shoulder and knee joints. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic and non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the possible etiologies of pain are varied. An algorithmic approach to hip restoration from diagnosis to rehabilitation is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734

  2. An efficient algorithm for function optimization: modified stem cells algorithm

    Science.gov (United States)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA successfully avoids the local optima problem. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results demonstrate the superiority of the Modified Stem Cells Algorithm (MSCA).

  3. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  4. A Hybrid Parallel Preconditioning Algorithm For CFD

    Science.gov (United States)

    Barth, Timothy J.; Tang, Wei-Pai; Kwak, Dochan (Technical Monitor)

    1995-01-01

    A new hybrid preconditioning algorithm will be presented which combines the favorable attributes of incomplete lower-upper (ILU) factorization with the favorable attributes of the approximate inverse method recently advocated by numerous researchers. The quality of the preconditioner is adjustable and can be increased at the cost of additional computation while at the same time the storage required is roughly constant and approximately equal to the storage required for the original matrix. In addition, the preconditioning algorithm suggests an efficient and natural parallel implementation with reduced communication. Sample calculations will be presented for the numerical solution of multi-dimensional advection-diffusion equations. The matrix solver has also been embedded into a Newton algorithm for solving the nonlinear Euler and Navier-Stokes equations governing compressible flow. The full paper will show numerous examples in CFD to demonstrate the efficiency and robustness of the method.

  5. Flocking algorithm for autonomous flying robots.

    Science.gov (United States)

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
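
    A minimal sketch of the viscous friction-like alignment term described above: each agent accelerates toward the mean velocity of its neighbours within a communication radius. The gain and radius are illustrative assumptions, and the paper's model additionally treats delay, sensor noise and inertia.

        import numpy as np

        # Viscous friction-like alignment: nudge each agent's velocity toward
        # the average velocity of nearby agents (local, limited communication).
        def alignment_term(pos, vel, radius=10.0, gain=0.5):
            accel = np.zeros_like(vel)
            for i in range(len(pos)):
                d = np.linalg.norm(pos - pos[i], axis=1)
                nbr = (d < radius) & (d > 0)
                if nbr.any():
                    accel[i] = gain * (vel[nbr].mean(axis=0) - vel[i])
            return accel

        rng = np.random.default_rng(0)
        pos = rng.uniform(0, 50, size=(20, 2))
        vel = rng.standard_normal((20, 2))
        vel += 0.1 * alignment_term(pos, vel)   # one Euler step of alignment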

  6. Iterative Algorithms for Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Yao Yonghong

    2008-01-01

    Full Text Available Abstract We suggest and analyze two new iterative algorithms for a nonexpansive mapping in Banach spaces. We prove that the proposed iterative algorithms converge strongly to a fixed point of the mapping.
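
    The record's mathematical symbols were lost in extraction; as a hedged illustration of the kind of scheme analyzed, the classical Krasnoselskii-Mann iteration for a nonexpansive mapping T is sketched below. The particular map and step sizes are toy assumptions, not the paper's algorithms.

        # Krasnoselskii-Mann iteration: x_{n+1} = (1 - a_n) x_n + a_n T(x_n).
        # T below is a toy map (nonexpansive on [sqrt(2), inf)) with fixed
        # point sqrt(2); the a_n are a standard divergent-sum step choice.
        T = lambda x: 0.5 * (x + 2.0 / x)

        x = 5.0
        for n in range(1, 200):
            a = 1.0 / (n + 1)
            x = (1 - a) * x + a * T(x)
        print(x)  # slowly approaches the fixed point sqrt(2) = 1.4142...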

  7. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
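
    Two of the sampling ingredients mentioned above admit short sketches: inverse-transform sampling of an exponential free path, and a cutpoint-style table that turns sequential search of a discrete CDF into a near-constant-time lookup. The distribution and table size are illustrative assumptions.

        import math
        import random

        def sample_free_path(mu):
            # inverse-transform sampling of an exponential free path length
            return -math.log(1.0 - random.random()) / mu

        def make_cutpoints(probs, m):
            # CDF plus a table of starting indices, one per equal-width
            # stratum of [0, 1), in the spirit of the cutpoint method
            cdf, c = [], 0.0
            for p in probs:
                c += p
                cdf.append(c)
            cuts, j = [], 0
            for k in range(m):
                while cdf[j] < k / m:
                    j += 1
                cuts.append(j)
            return cdf, cuts

        def sample_discrete(cdf, cuts):
            m = len(cuts)
            u = random.random()
            j = cuts[min(int(u * m), m - 1)]   # jump near the answer
            while j < len(cdf) - 1 and cdf[j] < u:
                j += 1                          # short sequential finish
            return j

        cdf, cuts = make_cutpoints([0.5, 0.2, 0.2, 0.1], m=8)
        samples = [sample_discrete(cdf, cuts) for _ in range(10)]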

  8. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  9. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to both specialists in Bioinspired Algorithms, Parallel and Distributed Computing, as well as computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  10. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  11. Efficient Approximation Algorithms for Weighted $b$-Matching

    Energy Technology Data Exchange (ETDEWEB)

    Khan, Arif; Pothen, Alex; Mostofa Ali Patwary, Md.; Satish, Nadathur Rajagopalan; Sundaram, Narayanan; Manne, Fredrik; Halappanavar, Mahantesh; Dubey, Pradeep

    2016-01-01

    We describe a half-approximation algorithm, b-Suitor, for computing a b-Matching of maximum weight in a graph with weights on the edges. b-Matching is a generalization of the well-known Matching problem in graphs, where the objective is to choose a subset M of edges in the graph such that at most a specified number b(v) of edges in M are incident on each vertex v. Subject to this restriction we maximize the sum of the weights of the edges in M. We prove that the b-Suitor algorithm computes the same b-Matching as the one obtained by the greedy algorithm for the problem. We implement the algorithm on serial and shared-memory parallel processors, and compare its performance against a collection of approximation algorithms that have been proposed for the Matching problem. Our results show that the b-Suitor algorithm outperforms the Greedy and Locally Dominant edge algorithms by one to two orders of magnitude on a serial processor. The b-Suitor algorithm has a high degree of concurrency, and it scales well up to 240 threads on a shared memory multiprocessor. The b-Suitor algorithm outperforms the Locally Dominant edge algorithm by a factor of fourteen on 16 cores of an Intel Xeon multiprocessor.
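
    A sketch of the greedy b-Matching that the paper uses as its quality reference: scan edges in order of decreasing weight and keep an edge while both endpoints have remaining capacity. b-Suitor provably computes the same matching, only faster and in parallel; this simple serial form is for illustration, with an invented toy graph.

        # Greedy half-approximation for maximum-weight b-Matching.
        def greedy_b_matching(edges, b):
            """edges: list of (weight, u, v); b: dict vertex -> capacity."""
            remaining = dict(b)
            matching = []
            for w, u, v in sorted(edges, reverse=True):  # heaviest first
                if remaining[u] > 0 and remaining[v] > 0:
                    matching.append((u, v, w))
                    remaining[u] -= 1
                    remaining[v] -= 1
            return matching

        m = greedy_b_matching([(5, "a", "b"), (4, "b", "c"), (3, "a", "c")],
                              b={"a": 1, "b": 2, "c": 1})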

  12. Block Least Mean Squares Algorithm over Distributed Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    T. Panigrahi

    2012-01-01

    Full Text Available In a distributed parameter estimation problem, during each sampling instant, a typical sensor node communicates its estimate either by the diffusion algorithm or by the incremental algorithm. Both these conventional distributed algorithms involve significant communication overheads and, consequently, defeat the basic purpose of wireless sensor networks. In the present paper, we therefore propose two new distributed algorithms, namely, block diffusion least mean square (BDLMS) and block incremental least mean square (BILMS), by extending the concept of block adaptive filtering techniques to the distributed adaptation scenario. The performance analysis of the proposed BDLMS and BILMS algorithms has been carried out and found to have similar performances to those offered by conventional diffusion LMS and incremental LMS algorithms, respectively. The convergence analyses of the proposed algorithms obtained from the simulation study are also found to be in agreement with the theoretical analysis. The remarkable and interesting aspect of the proposed block-based algorithms is that their communication overheads per node and latencies are less than those of the conventional algorithms by a factor as high as the block size used in the algorithms.
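
    The node-local block update at the heart of such schemes can be sketched as follows: the taps are updated once per block of L samples rather than per sample, which is what reduces each node's communication per update. The distributed diffusion/incremental wiring between nodes is not reproduced, and the signal model is an invented assumption.

        import numpy as np

        def block_lms(x_block, d_block, w, mu):
            """One block update: x_block is L-by-M, d_block has L samples."""
            e = d_block - x_block @ w                  # errors over the block
            return w + mu * x_block.T @ e / len(d_block)  # one averaged step

        rng = np.random.default_rng(1)
        w_true = np.array([0.5, -0.3, 0.1])
        w = np.zeros(3)
        for _ in range(200):
            X = rng.standard_normal((8, 3))    # one block of L = 8 samples
            d = X @ w_true                     # noiseless desired signal
            w = block_lms(X, d, w, mu=0.1)     # w converges toward w_true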

  13. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  14. Recent results on Howard's algorithm

    DEFF Research Database (Denmark)

    Miltersen, P.B.

    2012-01-01

    Howard’s algorithm is a fifty-year-old, generally applicable algorithm for sequential decision making in the face of uncertainty. It is routinely used in practice in numerous application areas that are so important that they usually go by their acronyms, e.g., OR, AI, and CAV. While Howard’s algorithm...
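
    Howard's algorithm is policy iteration: evaluate the current policy exactly, then improve it greedily, and stop when it no longer changes. The two-state, two-action Markov decision process below is an invented toy, not from the paper.

        import numpy as np

        # P[a] is the transition matrix under action a; R[a] the reward vector.
        P = {0: np.array([[0.9, 0.1], [0.4, 0.6]]),
             1: np.array([[0.2, 0.8], [0.5, 0.5]])}
        R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
        gamma, n = 0.9, 2

        policy = np.zeros(n, dtype=int)
        while True:
            Pp = np.array([P[policy[s]][s] for s in range(n)])
            Rp = np.array([R[policy[s]][s] for s in range(n)])
            v = np.linalg.solve(np.eye(n) - gamma * Pp, Rp)  # exact evaluation
            q = np.array([[R[a][s] + gamma * P[a][s] @ v for a in (0, 1)]
                          for s in range(n)])
            new_policy = q.argmax(axis=1)                    # greedy improvement
            if np.array_equal(new_policy, policy):
                break                                        # policy is stable
            policy = new_policy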

  15. Multisensor estimation: New distributed algorithms

    Directory of Open Access Journals (Sweden)

    Plataniotis K. N.

    1997-01-01

    Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.

  16. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. This paper reviews the Selfish Gene Algorithm (SFGA), one of the latest EAs, inspired by the selfish gene theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of the algorithm and the steps involved in it are discussed, and its different applications are evaluated together with an analysis of these applications.

  17. Set-Membership Proportionate Affine Projection Algorithms

    Directory of Open Access Journals (Sweden)

    Stefan Werner

    2007-01-01

    Full Text Available Proportionate adaptive filters can improve the convergence speed for the identification of sparse systems as compared to their conventional counterparts. In this paper, the idea of proportionate adaptation is combined with the framework of set-membership filtering (SMF) in an attempt to derive novel computationally efficient algorithms. The resulting algorithms attain attractively faster convergence for both sparse and dispersive channels while decreasing the average computational complexity due to the data-discerning feature of the SMF approach. In addition, we propose a rule that allows us to automatically adjust the number of past data pairs employed in the update. This leads to a set-membership proportionate affine projection algorithm (SM-PAPA) having a variable data-reuse factor, allowing a significant reduction in overall complexity when compared with a fixed data-reuse factor. Reduced-complexity implementations of the proposed algorithms are also considered that reduce the dimensions of the matrix inversions involved in the update. Simulations show good results in terms of reduced number of updates, speed of convergence, and final mean-squared error.

  18. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design...... of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content....

  19. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed......Filtering every global constraint of a CSP to arc consistency at every search step can be costly and solvers often compromise on either the level of consistency or the frequency at which arc consistency is enforced. In this paper we propose two randomized filtering schemes for dense instances

  20. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  1. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is theoretically equivalent to the morphological opening/closing. The algorithm depends on Delaunay triangulation, with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and area filters. Examples are presented to demonstrate the validity and the efficiency advantage of this algorithm over the naive algorithm.

  2. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One such method is cryptography. Cryptography secures a file by encoding it as hidden code covering the original file, so that anyone without the key cannot decrypt the code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem: a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The results of this research show that when the TEA algorithm encrypts a file, the ciphertext consists of ASCII (American Standard Code for Information Interchange) characters rendered as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters of plaintext.
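
    The LUC key-encryption step and the paper's key handling are not reproduced here; a minimal sketch of the standard TEA block operation (32 rounds on a 64-bit block under a 128-bit key) illustrates the symmetric half of the hybrid scheme. The key below is an arbitrary example.

        # Standard TEA: two 32-bit halves mixed over 32 rounds.
        DELTA, MASK = 0x9E3779B9, 0xFFFFFFFF

        def tea_encrypt_block(v, key):
            v0, v1 = v
            k0, k1, k2, k3 = key
            s = 0
            for _ in range(32):
                s = (s + DELTA) & MASK
                v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
                v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
            return v0, v1

        def tea_decrypt_block(v, key):
            v0, v1 = v
            k0, k1, k2, k3 = key
            s = (DELTA * 32) & MASK
            for _ in range(32):                 # run the rounds backwards
                v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
                v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
                s = (s - DELTA) & MASK
            return v0, v1

        key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # example only
        assert tea_decrypt_block(tea_encrypt_block((1, 2), key), key) == (1, 2)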

  3. Contour Error Map Algorithm

    Science.gov (United States)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  4. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
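
    Since the introduced quantities are incomputable, the paper approximates them with data compression; the widely used normalized compression distance illustrates this compression-based approach (a stand-in for illustration, not necessarily the paper's exact estimator).

        import zlib

        def compressed_size(s: bytes) -> int:
            # compressed length as a computable stand-in for Kolmogorov complexity
            return len(zlib.compress(s, 9))

        def compression_divergence(x: bytes, y: bytes) -> float:
            # normalized compression distance: a practical analogue of the
            # incomputable algorithmic divergence between two strings
            cx, cy = compressed_size(x), compressed_size(y)
            cxy = compressed_size(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        s1 = b"the quick brown fox jumps over the lazy dog" * 10
        s2 = b"the quick brown fox jumps over the lazy cat" * 10
        print(compression_divergence(s1, s2))  # small for similar strings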

  5. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  6. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.

  7. Efficient algorithms for conditional independence inference

    Czech Academy of Sciences Publication Activity Database

    Bouckaert, R.; Hemmecke, R.; Lindner, S.; Studený, Milan

    2010-01-01

    Roč. 11, č. 1 (2010), s. 3453-3479 ISSN 1532-4435 R&D Projects: GA ČR GA201/08/0539; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : conditional independence inference * linear programming approach Subject RIV: BA - General Mathematics Impact factor: 2.949, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/studeny-efficient algorithms for conditional independence inference.pdf

  8. Racing algorithms for conditional independence inference

    Czech Academy of Sciences Publication Activity Database

    Bouckaert, R. R.; Studený, Milan

    2007-01-01

    Roč. 45, č. 2 (2007), s. 386-401 ISSN 0888-613X R&D Projects: GA ČR GA201/04/0393 Institutional research plan: CEZ:AV0Z10750506 Keywords: conditional independence * inference * imset * algorithm Subject RIV: BA - General Mathematics Impact factor: 1.220, year: 2007 http://library.utia.cas.cz/separaty/2007/mtr/studeny-0083472.pdf

  9. Applications of algorithmic differentiation to phase retrieval algorithms.

    Science.gov (United States)

    Jurling, Alden S; Fienup, James R

    2014-07-01

    In this paper, we generalize the techniques of reverse-mode algorithmic differentiation to include elementary operations on multidimensional arrays of complex numbers. We explore the application of the algorithmic differentiation to phase retrieval error metrics and show that reverse-mode algorithmic differentiation provides a framework for straightforward calculation of gradients of complicated error metrics without resorting to finite differences or laborious symbolic differentiation.

  10. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not grow appreciably with the size of the maze. These findings suggest that a systematic effort to harvest the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  11. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    Full Text Available An automatic cDNA microarray image processing method using an improved fuzzy clustering algorithm is presented in this paper. The proposed spot segmentation algorithm uses the gridding technique developed earlier by the authors to find the coordinates of each spot in an image, and spots are automatically cropped from the microarray image using these coordinates. The present paper proposes an improved fuzzy clustering algorithm, possibility fuzzy local information c-means (PFLICM), to segment the spot foreground (FG) from the background (BG). PFLICM improves the fuzzy local information c-means (FLICM) algorithm by incorporating the typicality of a pixel along with gray level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images with different levels of added AWGN noise. The strength of the algorithm is tested by computing parameters such as the segmentation matching factor (SMF), probability of error (pe), discrepancy distance (D) and normalized mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows an improvement of 0.9 % and 0.7 % for high-noise and low-noise microarray images, respectively, compared to the FLICM algorithm. The PFLICM algorithm is also applied to real microarray images and gene expression values are computed.

  12. Algorithms and their others: Algorithmic culture in context

    Directory of Open Access Journals (Sweden)

    Paul Dourish

    2016-08-01

    Full Text Available Algorithms, once obscure objects of technical art, have lately been subject to considerable popular and scholarly scrutiny. What does it mean to adopt the algorithm as an object of analytic attention? What is in view, and out of view, when we focus on the algorithm? Using Niklaus Wirth's 1975 formulation that “algorithms + data structures = programs” as a launching-off point, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture.

  13. Fighting Censorship with Algorithms

    Science.gov (United States)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  14. Algorithmic Reflections on Choreography

    Directory of Open Access Journals (Sweden)

    Pablo Ventura

    2016-11-01

    Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

  15. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    BACKGROUND: Crowding in the emergency department (ED) is a well-known problem resulting in an increased risk of adverse outcomes. Effective triage might counteract this problem by identifying the sickest patients and ensuring early treatment. In the last two decades, systematic triage has become...... the standard in ED's worldwide. However, triage models are also time consuming, supported by limited evidence and could potentially be of more harm than benefit. The aim of this study is to develop a quicker triage model using data from a large cohort of unselected ED patients and evaluate if this new model...... is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  16. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model-building evolutionary algorithm that constructs at each generation a mixture of factorized probability

  17. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case

  18. World Competitive Contests (WCC) algorithm: A novel intelligent optimization algorithm for biological and non-biological problems

    Directory of Open Access Journals (Sweden)

    Yosef Masoudi-Sobhanzadeh

    Full Text Available Since many problems across the sciences cannot be solved in a reasonable amount of time, new methods and algorithms are needed to obtain acceptable answers quickly. In the present study, a novel intelligent optimization algorithm, known as WCC (World Competitive Contests), is proposed and applied to the discovery of transcription factor binding sites (TFBS) and to eight benchmark functions. An intelligent optimization algorithm is needed here because TFBS discovery is a biological, NP-hard problem. Although some intelligent algorithms exist for the above-mentioned problems, an optimization algorithm with good, acceptable performance that is based on real parameters is essential. Like other optimization algorithms, the proposed algorithm starts with an initial population of teams. After the teams are put into different groups, they compete against their rival teams. The highly qualified teams ascend to the elimination stage and play each other in the next rounds, while the other teams wait for a new season to start. In this paper, we implement the proposed algorithm and compare it with five well-known optimization algorithms in terms of the obtained results, stability, convergence, standard deviation and elapsed time, applied to real and randomly created datasets with different motif sizes. According to the results obtained, in many cases the WCC's performance is better than that of the other algorithms. Keywords: Motif discovery, Transcription factor binding sites, Optimization algorithms, World Competitive Contests

  19. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. on a monitor or in print) and these media have different colour ranges (gamuts), it is necessary to have a method for mapping between them. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered to be in some sense universal. In terms of preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at by way of an evolutionary gamut mapping development strategy for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated whereby an important characteristic of this evaluation was that it also considered the performance of algorithms for individual colour regions within the test images used. New algorithms were then developed on their basis, subsequently evaluated and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors - in particular gamut difference. In addition to looking at accuracy, the pleasantness of reproductions obtained

  20. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms in the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of typical routing algorithm are analyzed, namely clustering routing algorithms and non-clustering routing algorithms, and the advantages, disadvantages and applicability of each kind are discussed.

  1. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  2. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  3. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new proposed algorithm is data-driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  4. Fuzzy HRRN CPU Scheduling Algorithm

    OpenAIRE

    Bashir Alam; R. Biswas; M. Alam

    2011-01-01

    There are several CPU scheduling algorithms, such as FCFS, SRTN, RR, and priority scheduling. Scheduling decisions of these algorithms are based on parameters that are assumed to be crisp. However, in many circumstances these parameters are vague, which suggests that the scheduler should use a fuzzy technique in scheduling jobs. In this paper we propose a novel CPU scheduling algorithm, Fuzzy HRRN, that incorporates fuzziness into basic HRRN using the fuzzy inference system (FIS) technique.
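
    For reference, crisp HRRN (the baseline that the paper fuzzifies) picks the ready job with the highest response ratio; the fuzzy inference layer is not reproduced here, and the example jobs are invented.

        # Crisp HRRN: response ratio = (waiting time + service time) / service time.
        def hrrn_pick(jobs, now):
            """jobs: list of (name, arrival_time, service_time)."""
            def response_ratio(job):
                _, arrival, service = job
                return ((now - arrival) + service) / service
            return max(jobs, key=response_ratio)

        # The longer a short job waits, the faster its ratio climbs:
        print(hrrn_pick([("a", 0, 10), ("b", 2, 3)], now=6))  # picks job "b"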

  5. An efficient control algorithm for nonlinear systems

    International Nuclear Information System (INIS)

    Sinha, S.

    1990-12-01

    We suggest a scheme to improve the efficiency of a recently proposed adaptive control algorithm that is remarkably effective for regulating nonlinear systems. The technique involves monitoring the "stiffness of control" to obtain maximum gain while maintaining a predetermined accuracy. The success of the procedure is demonstrated for the logistic map, where we show that the improvement in performance is often a factor of tens and, for small control stiffness, even a factor of hundreds. (author). 4 refs, 1 fig., 1 tab

  6. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  7. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  8. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few...... algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...

  9. Quantum Algorithms and Protocols

    National Research Council Canada - National Science Library

    Huntsman, Steve

    2001-01-01

    .... Foremost among the efforts in this vein is quantum information, which, largely on the basis of startling results on quantum teleportation and polynomial-time factoring, has evolved into a major scientific initiative...

  10. Lorentz covariant canonical symplectic algorithms for dynamics of charged particles

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong

    2016-12-01

    In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.

  11. Faster Algorithms for Computing Longest Common Increasing Subsequences

    DEFF Research Database (Denmark)

    Kutz, Martin; Brodal, Gerth Stølting; Kaligosi, Kanela

    2011-01-01

    of the alphabet, and Sort is the time to sort each input sequence. For k⩾3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures......We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths n and m, where m⩾n, we present an algorithm with an output-dependent expected running time of and O(m) space, where ℓ is the length of an LCIS, σ is the size....... Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an -time algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small...

  12. Two Strategies to Speed up Connected Component Labeling Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow; Suzuki, Kenji

    2005-11-13

    This paper presents two new strategies to speed up connected component labeling algorithms. The first strategy employs a decision tree to minimize the work performed in the scanning phase of connected component labeling algorithms. The second strategy uses a simplified union-find data structure to represent the equivalence information among the labels. For 8-connected components in a two-dimensional (2D) image, the first strategy reduces the number of neighboring pixels visited from 4 to 7/3 on average. In various tests, using a decision tree decreases the scanning time by a factor of about 2. The second strategy uses a compact representation of the union-find data structure. This strategy significantly speeds up the labeling algorithms. We prove analytically that a labeling algorithm with our simplified union-find structure has the same optimal theoretical time complexity as do the best labeling algorithms. By extensive experimental measurements, we confirm the expected performance characteristics of the new labeling algorithms and demonstrate that they are faster than other optimal labeling algorithms.
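
    A minimal array-based union-find with path compression, of the kind such labeling algorithms build on (a generic sketch, not the paper's simplified variant):

        class UnionFind:
            def __init__(self, n):
                self.parent = list(range(n))  # parent[i] == i means i is a root label

            def find(self, i):
                # Follow parents to the root, compressing the path on the way back.
                root = i
                while self.parent[root] != root:
                    root = self.parent[root]
                while self.parent[i] != root:
                    self.parent[i], i = root, self.parent[i]
                return root

            def union(self, i, j):
                # Record that labels i and j mark the same connected component.
                ri, rj = self.find(i), self.find(j)
                if ri != rj:
                    self.parent[max(ri, rj)] = min(ri, rj)  # keep the smallest label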

  13. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
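
    For reference, a plain Widrow-Hoff (LMS) update with a time-varying reduction factor, in the spirit of the modification described above; the 1/(1+t) decay schedule is an assumption for illustration, not the schedule analyzed in the paper.

        import numpy as np

        def widrow_hoff_step(x, d, w, t, mu0=0.1):
            """One Widrow-Hoff (LMS) step: w <- w + mu_t * (d - w.x) * x."""
            mu_t = mu0 / (1.0 + t)          # hypothetical decaying reduction factor
            error = d - np.dot(w, x)        # error against the desired response d
            return w + mu_t * error * x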

  14. Research on data auto-analysis algorithms in the explosive detection system

    International Nuclear Information System (INIS)

    Wang Haidong; Li Yuanjing; Yang Yigang; Li Tiezhu; Chen Boxian; Cheng Jianping

    2006-01-01

    This paper mainly describes some auto-analysis algorithms in an explosive detection system based on the TNA method. These include an auto-calibration algorithm for operation when disturbed by other factors, an MCA auto-calibration algorithm based on a calibrated spectrum, and the auto-fitting and integration of hydrogen and nitrogen element data. With these numerical algorithms, the authors can automatically and precisely analyze the gamma spectra and ultimately achieve automatic explosive detection. (authors)

  15. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available As the pixel information of a depth image is derived from distance information, there can be mismatched pairs in the palm area when implementing the SURF algorithm with a KINECT sensor for static sign language recognition. This paper proposes a feature point selection algorithm: by filtering the SURF feature points step by step, based on the number of feature points within an adaptive radius r and the distance between two points, it not only greatly improves the recognition rate but also ensures robustness to environmental factors such as skin color, illumination intensity, complex backgrounds, and angle and scale changes. The experimental results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.

  16. Backtrack Orbit Search Algorithm

    Science.gov (United States)

    Knowles, K.; Swick, R.

    2002-12-01

    A Mathematical Solution to a Mathematical Problem. With the dramatic increase in satellite-borne sensor resolution, traditional methods of spatially searching for orbital data have become inadequate. As data volumes increase, end-users of the data have become increasingly intolerant of false positives. And, as computing power rapidly increases, end-users have come to expect equally rapid search speeds. Meanwhile, data archives have an interest in delivering the minimum amount of data that meets users' needs. This keeps their costs down and allows them to serve more users in a more timely manner. Many methods of spatial search for orbital data have been tried in the past and found wanting. The ever popular lat/lon bounding box on a flat Earth is highly inaccurate. Spatial search based on nominal "orbits" is somewhat more accurate at much higher implementation cost and slower performance. Spatial search of orbital data based on predict orbit models is very accurate at a much higher maintenance cost and slower performance. This poster describes the Backtrack Orbit Search Algorithm--an alternative spatial search method for orbital data. Backtrack has a degree of accuracy that rivals predict methods while being faster, less costly to implement, and less costly to maintain than other methods.

  17. Diagnostic algorithm for syncope.

    Science.gov (United States)

    Mereu, Roberto; Sau, Arunashis; Lim, Phang Boon

    2014-09-01

    Syncope is a common symptom with many causes. Affecting a large proportion of the population, both young and old, it represents a significant healthcare burden. The diagnostic approach to syncope should be focused on the initial evaluation, which includes a detailed clinical history, physical examination and 12-lead electrocardiogram. Following the initial evaluation, patients should be risk-stratified into high or low-risk groups in order to guide further investigations and management. Patients with high-risk features should be investigated further to exclude significant structural heart disease or arrhythmia. The ideal currently-available investigation should allow ECG recording during a spontaneous episode of syncope, and when this is not possible, an implantable loop recorder may be considered. In the emergency room setting, acute causes of syncope must also be considered including severe cardiovascular compromise due to pulmonary, cardiac or vascular pathology. While not all patients will receive a conclusive diagnosis, risk-stratification in patients to guide appropriate investigations in the context of a diagnostic algorithm should allow a benign prognosis to be maintained. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Toward an Algorithmic Pedagogy

    Directory of Open Access Journals (Sweden)

    Holly Willis

    2007-01-01

    Full Text Available The demand for an expanded definition of literacy to accommodate visual and aural media is not particularly new, but it gains urgency as college students transform, becoming producers of media in many of their everyday social activities. The response among those who grapple with these issues as instructors has been to advocate for new definitions of literacy and, particularly, an understanding of visual literacy. These efforts are exemplary, and promote a much needed rethinking of literacy and models of pedagogy. However, in what is more akin to a manifesto than a polished argument, this essay argues that we need to push further: What if we moved beyond visual rhetoric, as well as a game-based pedagogy and the adoption of a broad range of media tools on campus, toward a pedagogy grounded fundamentally in a media ecology? Framing this investigation in terms of a media ecology allows us to take account of the multiply determining relationships wrought not just by individual media, but by the interrelationships, dependencies and symbioses that take place within the dynamic system that is today's high-tech university. An ecological approach allows us to examine what happens when new media practices collide with computational models, providing a glimpse of possible transformations not only in ways of being but in ways of teaching and learning. How, then, may pedagogical practices be transformed computationally or algorithmically, and to what ends?

  19. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...

  20. Echo Cancellation I: Algorithms Simulation

    Directory of Open Access Journals (Sweden)

    P. Sovka

    2000-04-01

    Full Text Available An echo cancellation system used in mobile communications is analyzed. The convergence behavior and misadjustment of several LMS algorithms are compared; the misadjustment means errors in filter weight estimation. The resulting echo suppression of the discussed algorithms with simulated as well as real speech signals is evaluated, and the optimal echo cancellation configuration is suggested.
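
    A minimal NLMS-style adaptive echo canceller sketch (generic textbook form; the filter length, step size, and regularization constant are illustrative assumptions):

        import numpy as np

        def nlms_echo_cancel(far_end, mic, taps=128, mu=0.5, eps=1e-8):
            """Suppress the far-end echo in the microphone signal with NLMS."""
            w = np.zeros(taps)           # adaptive FIR estimate of the echo path
            out = np.zeros(len(mic))     # echo-suppressed (error) signal
            for n in range(taps, len(mic)):
                x = far_end[n - taps:n][::-1]            # most recent far-end samples
                e = mic[n] - np.dot(w, x)                # residual after cancellation
                w += mu * e * x / (np.dot(x, x) + eps)   # normalized LMS update
                out[n] = e
            return out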

  1. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  2. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dot plot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1).
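
    A compact Needleman-Wunsch score computation in Python (a generic textbook version with illustrative match/mismatch/gap scores, not the authors' code):

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            """Global alignment score of sequences a and b (Needleman-Wunsch)."""
            rows, cols = len(a) + 1, len(b) + 1
            f = [[0] * cols for _ in range(rows)]
            for i in range(1, rows):
                f[i][0] = i * gap                       # align prefix of a to gaps
            for j in range(1, cols):
                f[0][j] = j * gap                       # align prefix of b to gaps
            for i in range(1, rows):
                for j in range(1, cols):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    f[i][j] = max(f[i - 1][j - 1] + s,  # (mis)match
                                  f[i - 1][j] + gap,    # gap in b
                                  f[i][j - 1] + gap)    # gap in a
            return f[-1][-1]

        print(needleman_wunsch("GATTACA", "GCATGCU"))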

  3. Recovery Rate of Clustering Algorithms

    NARCIS (Netherlands)

    Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S

    2009-01-01

    This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old

  4. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorit...

  5. Quantum algorithms and learning theory

    NARCIS (Netherlands)

    Arunachalam, S.

    2018-01-01

    This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) consider a search space of N elements. One of these elements is "marked" and our goal is to find this. We describe a quantum algorithm to solve this problem

  6. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  7. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  8. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  9. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the

  10. On exact algorithms for treewidth

    NARCIS (Netherlands)

    Bodlaender, H.L.; Fomin, F.V.; Koster, A.M.C.A.; Kratsch, D.; Thilikos, D.M.

    2006-01-01

    We give experimental and theoretical results on the problem of computing the treewidth of a graph by exact exponential time algorithms using exponential space or using only polynomial space. We first report on an implementation of a dynamic programming algorithm for computing the treewidth of a

  11. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  12. Heuristic Scheduling Algorithm Oriented Dynamic Tasks for Imaging Satellites

    Directory of Open Access Journals (Sweden)

    Maocai Wang

    2014-01-01

    Full Text Available Imaging satellite scheduling is an NP-hard problem with many complex constraints. This paper researches the scheduling problem for dynamic tasks oriented to emergency cases. After the dynamic properties of satellite scheduling are analyzed, an optimization model is proposed. Based on the model, two heuristic algorithms are proposed to solve the problem. The first heuristic algorithm, named IDI, arranges new tasks by inserting or deleting them, then inserting them repeatedly according to priority from low to high. The second, called ISDR, adopts four steps: insert directly, insert by shifting, insert by deleting, and reinsert the deleted tasks. Moreover, two heuristic factors, the congestion degree of a time window and the overlapping degree of a task, are employed to improve the algorithms' performance. Finally, a case is given to test the algorithms. The results show that the IDI algorithm is better than ISDR in terms of running time, while the ISDR algorithm with heuristic factors is more effective with regard to overall algorithm performance. Moreover, the results also show that our method performs well for larger sets of dynamic tasks in comparison with the other two methods.

  13. Research and Applications of Shop Scheduling Based on Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Hang ZHAO

    Full Text Available Shop scheduling is an important factor affecting the efficiency of production; efficient scheduling methods and research on optimization technology play an important role in helping manufacturing enterprises improve production efficiency, reduce production costs, and many other aspects. Existing studies have shown that improved genetic algorithms overcome limitations of the basic genetic algorithm, that the objective function is able to meet customers' needs for shop scheduling, and that future research should focus on combining genetic algorithms with other optimization algorithms. In this paper, in order to overcome the early-convergence shortcoming of the genetic algorithm and to resolve the local-minimum problem in the search process, an improved cyclic search genetic algorithm for the mixed flow shop scheduling problem is put forward, and a chromosome coding method and the corresponding operations are given. The operations inherit the optimal individual of the previous generation and avoid the emergence of local minima, while cyclic crossover and mutation operations enhance the diversity of the population and quickly yield the optimal individual; the effectiveness of the algorithm is validated. Experimental results show that the improved algorithm can avoid the emergence of local minima and converges rapidly.

  14. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Full Text Available Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth when they are implemented with traditional algorithms. The numerical solution of the wave equation is considered, using a finite difference scheme with a cross stencil and a high order of approximation. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU's (general purpose graphics processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved, which exceeds the result of the best traditional algorithm by a factor of five.
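
    For context, the baseline stepwise scheme that DiamondTorre reorders: a second-order cross-stencil update for the 1D wave equation (a generic finite-difference sketch, not the DiamondTorre traversal itself; all parameter values are illustrative).

        import numpy as np

        def wave_step(u_prev, u_curr, c=1.0, dt=0.01, dx=0.02):
            """One explicit time step of u_tt = c^2 u_xx with a cross stencil."""
            r2 = (c * dt / dx) ** 2        # Courant number squared (stability: r2 <= 1)
            u_next = np.empty_like(u_curr)
            u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                            + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
            u_next[0] = u_next[-1] = 0.0   # fixed (Dirichlet) boundaries
            return u_next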

  15. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
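
    As a sketch of the first algorithm's backbone, here is an alpha-trimmed mean filter used for unsharp-style sharpening; the window size, alpha, and gain are illustrative assumptions, not the paper's settings.

        import numpy as np
        from scipy.ndimage import generic_filter

        def alpha_trimmed_mean(window, alpha=0.4):
            # Discard the alpha/2 smallest and alpha/2 largest samples, average the rest.
            v = np.sort(window)
            k = int(alpha * v.size / 2)
            return v[k:v.size - k].mean()

        def sharpen(image, size=5, gain=1.0):
            # Unsharp-style sharpening: add back the detail the smoother removed.
            smoothed = generic_filter(image.astype(float), alpha_trimmed_mean, size=size)
            return image + gain * (image - smoothed)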

  16. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  17. Plastic surgery for breast cancer: essentials, classification, performance algorithm

    Directory of Open Access Journals (Sweden)

    A. Kh. Ismagilov

    2014-01-01

    Full Text Available The choice of plastic surgical techniques for cancer is influenced by two factors: the resection volume/baseline breast volume ratio and the tumor site. Based on these factors, the authors propose a two-level classification and an algorithm for performing the optimal plastic operation on the breast for its cancer.

  18. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data, with the ability to augment or modify the data stream (e.g., injecting simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  19. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc...

  20. Complex networks an algorithmic perspective

    CERN Document Server

    Erciyes, Kayhan

    2014-01-01

    Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks. Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every r

  1. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of a schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
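
    A minimal bitstring genetic algorithm in the spirit of the simple search example mentioned above (one-max fitness, tournament selection, single-point crossover; all parameter values are illustrative):

        import random

        def evolve(bits=20, pop_size=30, generations=50, p_mut=0.02):
            """Maximize the number of 1-bits with a tiny genetic algorithm."""
            pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
            fitness = lambda ind: sum(ind)  # one-max: count the 1s
            for _ in range(generations):
                nxt = []
                for _ in range(pop_size):
                    # Tournament selection of two parents.
                    p1 = max(random.sample(pop, 3), key=fitness)
                    p2 = max(random.sample(pop, 3), key=fitness)
                    cut = random.randrange(1, bits)           # single-point crossover
                    child = p1[:cut] + p2[cut:]
                    child = [b ^ (random.random() < p_mut) for b in child]  # mutation
                    nxt.append(child)
                pop = nxt
            return max(pop, key=fitness)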

  2. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  3. Algorithms Design Techniques and Analysis

    CERN Document Server

    Alsuwaiyel, M H

    1999-01-01

    Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) solution of the formulated problem. One can solve a problem on its own using ad hoc techniques or follow those techniques that have produced efficient solutions to similar problems. This requires the understanding of various algorithm design techniques, how and when to use them to formulate solutions and the context appropriate for each of them. This book advocates the study of algorithm design techniques by presenting most of the useful algorithm desi

  4. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm...... that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  5. A New DG Multiobjective Optimization Method Based on an Improved Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Wanxing Sheng

    2013-01-01

    Full Text Available A distributed generation (DG) multiobjective optimization method based on an improved Pareto evolutionary algorithm is investigated in this paper. The improved Pareto evolutionary algorithm, which introduces a penalty factor in the objective function constraints, uses an adaptive crossover and mutation operator in the evolutionary process and combines a simulated annealing iterative process. The proposed algorithm is utilized to optimize the DG injection models so as to maximize DG utilization while minimizing system loss and environmental pollution. A revised IEEE 33-bus system with multiple DG units was used to test the multiobjective optimization algorithm in a distribution power system. The proposed algorithm was implemented and compared with the strength Pareto evolutionary algorithm 2 (SPEA2), a particle swarm optimization (PSO) algorithm, and the nondominated sorting genetic algorithm II (NSGA-II). The comparison of the results demonstrates the validity and practicality of utilizing DG units in terms of economic dispatch and optimal operation in a distribution power system.

  6. General Quantum Meet-in-the-Middle Search Algorithm Based on Target Solution of Fixed Weight

    Science.gov (United States)

    Fu, Xiang-Qun; Bao, Wan-Su; Wang, Xiang; Shi, Jian-Hong

    2016-10-01

    Similar to the classical meet-in-the-middle algorithm, the storage and computation complexity are the key factors that decide the efficiency of the quantum meet-in-the-middle algorithm. Aiming at a target vector of fixed weight, and based on the quantum meet-in-the-middle algorithm, an algorithm for searching all n-product vectors with the same weight is presented, whose complexity is better than that of the exhaustive search algorithm; the algorithm also reduces the storage complexity of the quantum meet-in-the-middle search algorithm. Then, based on this algorithm and the knapsack vector of fixed weight d in the Chor-Rivest public-key cryptosystem, we present a general quantum meet-in-the-middle search algorithm for a target solution of fixed weight, whose computational complexity is $\sum_{j=0}^{d}\left(O\left(\sqrt{C_{n-k+1}^{\,d-j}}\right)+O\left(C_{k}^{j}\log C_{k}^{j}\right)\right)$ with $\sum_{i=0}^{d} C_{k}^{i}$ memory cost, and the optimal value of k is given. Compared to the quantum meet-in-the-middle search algorithm for the knapsack problem and the quantum algorithm for searching a target solution of fixed weight, the computational complexity of the algorithm is lower, and its storage complexity is smaller than that of the quantum meet-in-the-middle algorithm. Supported by the National Basic Research Program of China under Grant No. 2013CB338002 and the National Natural Science Foundation of China under Grant No. 61502526

  7. An introduction to quantum computing algorithms

    CERN Document Server

    Pittenger, Arthur O

    2000-01-01

    In 1994 Peter Shor [65] published a factoring algorithm for a quantum computer that finds the prime factors of a composite integer N more efficiently than is possible with the known algorithms for a classical computer. Since the difficulty of the factoring problem is crucial for the security of a public key encryption system, interest (and funding) in quantum computing and quantum computation suddenly blossomed. Quantum computing had arrived. The study of the role of quantum mechanics in the theory of computation seems to have begun in the early 1980s with the publications of Paul Benioff [6], [7] who considered a quantum mechanical model of computers and the computation process. A related question was discussed shortly thereafter by Richard Feynman [35] who began from a different perspective by asking what kind of computer should be used to simulate physics. His analysis led him to the belief that with a suitable class of "quantum machines" one could imitate any quantum system.

  8. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations and facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.

  9. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available A representative example of a modular eLearning-platform application, 'Logical diagrams', is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application tries to solve concerns young programmers who forget the fundamentals of the domain, algorithmics. Logical diagrams are a graphic representation of an algorithm which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings; these are called blocks and are connected to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  10. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  11. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  12. Efficient Algorithms for Subgraph Listing

    Directory of Open Access Journals (Sweden)

    Niklas Zechner

    2014-05-01

    Full Text Available Subgraph isomorphism is a fundamental problem in graph theory. In this paper we focus on listing subgraphs isomorphic to a given pattern graph. First, we look at the algorithm due to Chiba and Nishizeki for listing complete subgraphs of fixed size, and show that it cannot be extended to general subgraphs of fixed size. Then, we consider the algorithm due to Gąsieniec et al. for finding multiple witnesses of a Boolean matrix product, and use it to design a new output-sensitive algorithm for listing all triangles in a graph. As a corollary, we obtain an output-sensitive algorithm for listing subgraphs and induced subgraphs isomorphic to an arbitrary fixed pattern graph.
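
    As background for the triangle-listing result, the standard edge-iterator baseline that enumerates each triangle once (a textbook method, not the output-sensitive algorithm of the paper):

        def list_triangles(adj):
            """Yield each triangle (u, v, w), u < v < w, of an undirected graph.

            adj: dict mapping vertex -> set of neighbors.
            """
            for u in adj:
                for v in adj[u]:
                    if u < v:
                        # Common neighbors w > v close each triangle exactly once.
                        for w in adj[u] & adj[v]:
                            if v < w:
                                yield (u, v, w)

        graph = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
        print(list(list_triangles(graph)))  # [(0, 1, 2), (1, 2, 3)]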

  13. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
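
    For contrast, the usual predictive stochastic simulation (Gillespie) step that the retrodictive algorithm complements; this is a generic sketch, with the propensity functions and state encoding as assumptions.

        import math, random
        import numpy as np

        def gillespie_step(state, reactions, t):
            """One predictive SSA step; state is a NumPy vector of populations.

            reactions: list of (propensity_fn, state_change_vector) pairs.
            Returns (new_state, new_time); time becomes math.inf if nothing can fire.
            """
            rates = [prop(state) for prop, _ in reactions]
            total = sum(rates)
            if total == 0.0:
                return state, math.inf
            t += -math.log(1.0 - random.random()) / total     # exponential waiting time
            r, pick = random.uniform(0.0, total), 0
            while pick < len(rates) - 1 and r > rates[pick]:  # choose reaction by rate
                r -= rates[pick]
                pick += 1
            return state + reactions[pick][1], t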

  14. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty, Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean-field approach known as "deterministic annealing", and is reminiscent of the "deterministic Boltzmann machine". The algorithm is less time-consuming than its simulated annealing alternative. We apply the theory to several architectures and compare their performances.

  15. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives, we use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
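
    A bare Landweber iteration for a linear inverse problem Ax = y, the building block behind the Landweber-Kaczmarz scheme mentioned above (a generic sketch; the step size and iteration count are illustrative, and the MRI problem itself is non-linear):

        import numpy as np

        def landweber(A, y, steps=200, omega=None):
            """Iterate x <- x + omega * A^T (y - A x) for the linear model A x = y."""
            if omega is None:
                # Convergence needs 0 < omega < 2 / ||A||^2 (spectral norm).
                omega = 1.0 / np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            for _ in range(steps):
                x = x + omega * A.T @ (y - A @ x)
            return x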

  16. When the greedy algorithm fails

    OpenAIRE

    Bang-Jensen, Jørgen; Gutin, Gregory; Yeo, Anders

    2004-01-01

    We provide a characterization of the cases when the greedy algorithm may produce the unique worst possible solution for the problem of finding a minimum weight base in an independence system when the weights are taken from a finite range. We apply this theorem to TSP and the minimum bisection problem. The practical message of this paper is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting solution...

  17. A* Algorithm for Graphics Processors

    OpenAIRE

    Inam, Rafia; Cederman, Daniel; Tsigas, Philippas

    2010-01-01

    Today's computer games have thousands of agents moving at the same time in areas inhabited by a large number of obstacles. In such an environment it is important to be able to calculate multiple shortest paths concurrently in an efficient manner. The highly parallel nature of the graphics processor suits this scenario perfectly. We have implemented a graphics processor based version of the A* path finding algorithm together with three algorithmic improvements that allow it to work faster and ...
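
    For reference, a sequential A* on a 4-connected grid with a Manhattan heuristic (a compact CPU baseline, not the GPU variant described above):

        import heapq

        def astar(grid, start, goal):
            """Shortest path length on a 0/1 grid (1 = obstacle), or None."""
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
            open_heap = [(h(start), 0, start)]          # entries are (f = g + h, g, node)
            best_g = {start: 0}
            while open_heap:
                f, g, node = heapq.heappop(open_heap)
                if node == goal:
                    return g
                if g > best_g.get(node, float("inf")):
                    continue                             # stale queue entry
                r, c = node
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                        ng = g + 1
                        if ng < best_g.get((nr, nc), float("inf")):
                            best_g[(nr, nc)] = ng
                            heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
            return None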

  18. Algorithm for programming function generators

    International Nuclear Information System (INIS)

    Bozoki, E.

    1981-01-01

    The present paper deals with a mathematical problem encountered when driving a fully programmable μ-processor-controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional (hardware-imposed) restrictions are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described

  19. Novel Algorithms for Astronomical Plate Analyses

    Czech Academy of Sciences Publication Activity Database

    Hudec, René; Hudec, L.

    2011-01-01

    Roč. 32, 1-2 (2011), s. 121-123 ISSN 0250-6335. [Conference on Multiwavelength Variability of Blazars. Guangzhou, 22.09.2010-24.09.2010] R&D Projects: GA ČR GA205/08/1207 Grant - others: GA ČR(CZ) GA102/09/0997; MŠMT(CZ) ME09027 Institutional research plan: CEZ:AV0Z10030501 Keywords: astronomical plates * plate archives * astronomical algorithms Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 0.400, year: 2011

  20. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  1. Rotational Invariant Dimensionality Reduction Algorithms.

    Science.gov (United States)

    Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David

    2017-11-01

    A common intrinsic limitation of the traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the ℓ2-norm as the metric. In this paper, a series of methods based on the ℓ2,1-norm are proposed for linear dimensionality reduction. Since the ℓ2,1-norm based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper indicates that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with the previous ℓ2-norm based subspace learning algorithms.

  2. Artificial Flora (AF Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Long Cheng

    2018-02-01

    Full Text Available Inspired by the migration and reproduction of flora, this paper proposes a novel artificial flora (AF) algorithm, which can be used to solve complex, non-linear, discrete optimization problems. Although a plant cannot move, it can spread seeds within a certain range to let its offspring find the most suitable environment. The stochastic process is easy to copy and the spreading space is vast, so it is suitable for use in an intelligent optimization algorithm. First, the algorithm randomly generates the original plant, including its position and propagation distance. Then, the position and propagation distance of the original plant are substituted as parameters into the propagation function to generate offspring plants. Finally, the optimal offspring is selected as a new original plant through the selection function, and the previous original plant becomes the former plant. The iteration continues until the optimal solution is found. In this paper, six classical evaluation functions are used as benchmark functions. The simulation results show that the proposed algorithm has high accuracy and stability compared with the classical particle swarm optimization and artificial bee colony algorithms.

  3. A Metropolis algorithm combined with Nelder-Mead Simplex applied to nuclear reactor core design

    Energy Technology Data Exchange (ETDEWEB)

    Sacco, Wagner F. [Depto. de Modelagem Computacional, Instituto Politecnico, Universidade do Estado do Rio de Janeiro, R. Alberto Rangel, s/n, P.O. Box 972285, Nova Friburgo, RJ 28601-970 (Brazil)], E-mail: wfsacco@iprj.uerj.br; Filho, Hermes Alves; Henderson, Nelio [Depto. de Modelagem Computacional, Instituto Politecnico, Universidade do Estado do Rio de Janeiro, R. Alberto Rangel, s/n, P.O. Box 972285, Nova Friburgo, RJ 28601-970 (Brazil); Oliveira, Cassiano R.E. de [Nuclear and Radiological Engineering Program, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405 (United States)

    2008-05-15

    A hybridization of the recently introduced Particle Collision Algorithm (PCA) and the Nelder-Mead Simplex algorithm is introduced and applied to a core design optimization problem which was previously attacked by other metaheuristics. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. The new metaheuristic performs better than the genetic algorithm, particle swarm optimization, and the Metropolis algorithms PCA and the Great Deluge Algorithm, thus demonstrating its potential for other applications.

  4. A Metropolis algorithm combined with Nelder-Mead Simplex applied to nuclear reactor core design

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Filho, Hermes Alves; Henderson, Nelio; Oliveira, Cassiano R.E. de

    2008-01-01

    A hybridization of the recently introduced Particle Collision Algorithm (PCA) and the Nelder-Mead Simplex algorithm is introduced and applied to a core design optimization problem which was previously attacked by other metaheuristics. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. The new metaheuristic performs better than the genetic algorithm, particle swarm optimization, and the Metropolis algorithms PCA and the Great Deluge Algorithm, thus demonstrating its potential for other applications

  5. A Constrained Algorithm Based NMFα for Image Representation

    Directory of Open Access Journals (Sweden)

    Chenxue Yang

    2014-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a useful tool in learning a basic representation of image data. However, its performance and applicability in real scenarios are limited because of the lack of image information. In this paper, we propose a constrained matrix decomposition algorithm for image representation which contains parameters associated with the characteristics of image data sets. In particular, we impose label information as additional hard constraints on the α-divergence-NMF unsupervised learning algorithm. The resulting algorithm is derived by using the Karush-Kuhn-Tucker (KKT) conditions as well as the projected gradient, and its monotonic local convergence is proved by using auxiliary functions. In addition, we provide a method to select the parameters of our semisupervised matrix decomposition algorithm in the experiments. Compared with the state-of-the-art approaches, our method with these parameters has the best classification accuracy on three image data sets.
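
    For orientation, the classic multiplicative updates for unconstrained NMF under the Frobenius objective (the Lee-Seung baseline; the record's algorithm instead uses the α-divergence with label constraints):

        import numpy as np

        def nmf(V, rank, iters=200, eps=1e-9):
            """Factor V ≈ W @ H with W, H >= 0 (Frobenius-norm multiplicative updates)."""
            n, m = V.shape
            rng = np.random.default_rng(0)
            W = rng.random((n, rank))
            H = rng.random((rank, m))
            for _ in range(iters):
                H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, stays nonnegative
                W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, stays nonnegative
            return W, H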

  6. The ATLAS Trigger Algorithms for General Purpose Graphics Processor Units

    CERN Document Server

    Tavares Delgado, Ademar; The ATLAS collaboration

    2016-01-01

    We present the ATLAS trigger algorithms developed to exploit general-purpose graphics processor units. ATLAS is a particle physics experiment located at the LHC collider at CERN. The ATLAS trigger system has two levels: the hardware-based Level 1 and the High Level Trigger, implemented in software running on a farm of commodity CPUs. Performing the trigger event selection within the available farm resources presents a significant challenge that will increase with future LHC upgrades. GPUs are being evaluated as a potential solution for trigger algorithm acceleration. Key factors determining the potential benefit of this new technology are the relative execution speedup, the number of GPUs required, and the relative financial cost of the selected GPU. We have developed a trigger demonstrator which includes algorithms for reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeters...

  7. Fixed-point error analysis of Winograd Fourier transform algorithms

    Science.gov (United States)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  8. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  9. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine function from trigonometry. In the algorithm, random individuals are created, as many as the number of search agents, with a uniform distribution over each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section so that only the regions expected to give good results are scanned, instead of the whole solution space. In the tests performed, Gold-SA obtains better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and converges faster, increasing the importance of this new method.

  10. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently; each method has advantages and disadvantages compared with the others. One notion is that the advantages of different fusion methods can be effectively combined, so a multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index-vector-based feature similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of the various features and the infrared intensity images are used as the initial weights for nonnegative matrix factorization (NMF). This avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods.

  11. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. This is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it appears as a subroutine in many important algorithms. The quantum database search of Grover achieves the task of finding the target element in an unsorted database in time quadratically faster than a classical computer. We review Grover's quantum search algorithms for a single and for multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
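
    A tiny state-vector simulation of Grover iterations (oracle sign flip plus inversion about the mean), useful for checking the quadratic speedup numerically; this is purely illustrative and not taken from the review.

        import math
        import numpy as np

        def grover(n_items, target):
            """Simulate Grover search over n_items basis states; return success prob."""
            amps = np.full(n_items, 1.0 / math.sqrt(n_items))   # uniform superposition
            iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
            for _ in range(iterations):
                amps[target] *= -1.0                  # oracle: flip the marked amplitude
                amps = 2.0 * amps.mean() - amps       # diffusion: invert about the mean
            return amps[target] ** 2                  # probability of measuring target

        print(grover(1024, target=3))  # close to 1.0 after ~25 iterations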

  12. A cluster analysis on road traffic accidents using genetic algorithms

    Science.gov (United States)

    Saharan, Sabariah; Baragona, Roberto

    2017-04-01

    The analysis of road traffic accidents is increasingly important because of the cost of accidents and public road safety. The availability of large data sets makes the study of factors that affect the frequency and severity of accidents viable. However, the data are often highly unbalanced and overlapping. We deal with the data set of road traffic accidents recorded in Christchurch, New Zealand, from 2000-2009, with a total of 26440 accidents. The data are binary, and there are 50 road traffic accident factors with four levels of severity. We used a genetic algorithm for the analysis because, in the presence of a large unbalanced data set, a standard clustering method like the k-means algorithm may not be suitable for the task. The genetic-algorithm-based clustering for unknown K (GCUK) has been used to identify the factors associated with accidents of different levels of severity. The results provide an interesting insight into the relationship between factors and accident severity level and suggest that the two main factors that contribute to fatal accidents are "Speed greater than 60 km/h" and "Did not see other people until it was too late". A comparison with the k-means algorithm and independent component analysis is performed to validate the results.

  13. Algorithms, complexity, and the sciences.

    Science.gov (United States)

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.

  14. SDR Input Power Estimation Algorithms

    Science.gov (United States)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. The first is a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. The second is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  15. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  16. Factorized Graph Matching.

    Science.gov (United States)

    Zhou, Feng; de la Torre, Fernando

    2015-11-19

    Graph matching (GM) is a fundamental problem in computer science, and it plays a central role in solving correspondence problems in computer vision. GM problems that incorporate pairwise constraints can be formulated as a quadratic assignment problem (QAP). Although widely used, solving the correspondence problem through GM has two main limitations: (1) the QAP is NP-hard and difficult to approximate; (2) GM algorithms do not incorporate geometric constraints between nodes that are natural in computer vision problems. To address the aforementioned problems, this paper proposes factorized graph matching (FGM). FGM factorizes the large pairwise affinity matrix into smaller matrices that encode the local structure of each graph and the pairwise affinity between edges. Four benefits follow from this factorization: (1) there is no need to compute the costly (in space and time) pairwise affinity matrix; (2) the factorization allows the use of a path-following optimization algorithm, which leads to improved optimization strategies and matching performance; (3) given the factorization, it becomes straightforward to incorporate geometric transformations (rigid and non-rigid) into the GM problem; (4) using a matrix formulation for the GM problem and the factorization, it is easy to reveal commonalities and differences between different GM methods. The factorization also provides a clean connection with other matching algorithms such as iterative closest point. Experimental results on synthetic and real databases illustrate how FGM outperforms state-of-the-art algorithms for GM. The code is available at http://humansensing.cs.cmu.edu/fgm.

  17. A new hybrid evolutionary algorithm based on new fuzzy adaptive PSO and NM algorithms for Distribution Feeder Reconfiguration

    International Nuclear Information System (INIS)

    Niknam, Taher; Azadfarsani, Ehsan; Jabbari, Masoud

    2012-01-01

    Highlights: ► Network reconfiguration is a very important way to save electrical energy. ► This paper proposes a new algorithm to solve the DFR problem. ► The algorithm combines NFAPSO with NM. ► The proposed algorithm is tested on two distribution test feeders. - Abstract: Network reconfiguration for loss reduction in a distribution system is a very important way to save electrical energy. This paper proposes a new hybrid evolutionary algorithm to solve the Distribution Feeder Reconfiguration (DFR) problem. The algorithm is based on a combination of a New Fuzzy Adaptive Particle Swarm Optimization (NFAPSO) and the Nelder–Mead simplex search method (NM), called NFAPSO–NM. The proposed new fuzzy adaptive particle swarm optimization consists of two parts. The first part is Fuzzy Adaptive Binary Particle Swarm Optimization (FABPSO), which determines the status of the tie switches (open or closed), and the second part is Fuzzy Adaptive Discrete Particle Swarm Optimization (FADPSO), which determines the sectionalizing switch numbers. Furthermore, because the results of binary PSO (BPSO) and discrete PSO (DPSO) algorithms depend highly on the values of their parameters, such as the inertia weight and learning factors, a fuzzy system is employed to adaptively adjust the parameters during the search process. Moreover, the Nelder–Mead simplex search method is combined with the NFAPSO algorithm to improve its performance. Finally, the proposed algorithm is tested on two distribution test feeders. The simulation results show that the proposed method is very powerful and guarantees to obtain the global optimum.
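    The building block behind FABPSO is binary particle swarm optimization with the sigmoid position-update rule; a bare-bones sketch with fixed inertia weight and learning factors follows (the paper's contribution, adapting these parameters with a fuzzy system, is deliberately omitted).

        import numpy as np

        rng = np.random.default_rng(1)

        def binary_pso(fitness, n_bits, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            # Minimize fitness(bits) over {0,1}^n_bits with binary PSO.
            x = rng.integers(0, 2, (n_particles, n_bits))
            v = np.zeros((n_particles, n_bits))
            pbest = x.copy()
            pbest_f = np.array([fitness(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                # sigmoid rule: each bit is 1 with probability sigmoid(velocity)
                x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
                f = np.array([fitness(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()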

  18. Predicting the onset of major depression in primary care : international validation of a risk prediction algorithm from Spain

    NARCIS (Netherlands)

    Bellon, J. A.; Luna, J. de Dios; King, M.; Moreno-Kuestner, B.; Nazareth, I.; Monton-Franco, C.; GildeGomez-Barragan, M. J.; Sanchez-Celaya, M.; Diaz-Barreiros, M. A.; Vicens, C.; Cervilla, J. A.; Svab, I.; Maaroos, H. -I.; Xavier, M.; Geerlings, M. I.; Saldivia, S.; Gutierrez, B.; Motrico, E.; Martinez-Canavate, M. T.; Olivan-Blazquez, B.; Sanchez-Artiaga, M. S.; March, S.; Munoz-Garcia, M. del Mar; Vazquez-Medrano, A.; Moreno-Peral, P.; Torres-Gonzalez, F.

    2011-01-01

    Background. The different incidence rates of, and risk factors for, depression in different countries argue for the need to have a specific risk algorithm for each country or a supranational risk algorithm. We aimed to develop and validate a predictD-Spain risk algorithm (PSRA) for the onset of

  19. Algorithms for Sparse Non-negative Tucker Decompositions

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai

    2008-01-01

    for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of regular factor analysis to data of more than two modalities. Non-negative matrix factorization (NMF) in conjunction with sparse coding has lately been given much attention due to its...... part-based and easily interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are non-negative, it may well be relevant to consider purely additive (i.e., non-negative Tucker...... decompositions. To reduce the ambiguities of this type of decomposition we develop updates that can impose sparseness in any combination of modalities, hence the proposed algorithms for sparse non-negative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms...
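    For orientation, the classical multiplicative updates for two-way NMF, which SN-TUCKER generalizes to the Tucker model, look as follows (Lee-Seung updates for the Euclidean cost; the paper's sparsity terms and tensor modes are not shown).

        import numpy as np

        def nmf(V, k, iters=200, eps=1e-9):
            # Factor V (m x n, non-negative) as W @ H with W, H >= 0.
            m, n = V.shape
            rng = np.random.default_rng(0)
            W = rng.random((m, k)) + eps
            H = rng.random((k, n)) + eps
            for _ in range(iters):
                H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update keeps H >= 0
                W *= (V @ H.T) / (W @ H @ H.T + eps)
            return W, H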

  20. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

    A time-sharing algorithm is proposed for a wide class of one- and multiprocessor computer configurations. The dynamic priority is a piecewise constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrent formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data gathered during the time-sharing process. The algorithm includes an optimal eviction procedure for job replacement in memory. Sharing of system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). Fast response is guaranteed for interactive jobs that use little time and memory. External priority control is left to the high-level scheduler. Experience with the implementation of the algorithm on the BESM-6 computer at JINR is discussed

  1. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  2. Algorithms and Public Service Media

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Hutchinson, Jonathon

    2018-01-01

    When Public Service Media (PSM) organisations introduce algorithmic recommender systems to suggest media content to users, fundamental values of PSM are challenged. Beyond being confronted with ubiquitous computer ethics problems of causality and transparency, the identity of PSM as curator...... and agenda-setter is also challenged. The algorithms represent rules for which content to present to whom, and in this sense they may discriminate and bias the exposure of diversity. Furthermore, on a practical level, the introduction of the systems shifts power within the organisations and changes...... the regulatory conditions. In this chapter we analyse two cases - the EBU members' introduction of recommender systems and the Australian broadcaster ABC's experiences with the use of chatbots. We use these cases to exemplify the challenges that algorithmic systems pose to PSM organisations....

  3. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator Analytical solutions of quantum walks on important graphs like line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms Quantum walks on generic graphs, describing methods to calculate the limiting d...

  4. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such an algorithm may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest in [35] proved the NP-hardness of the DT problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and uniform probability distribution. Cox et al. in [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is an NP-hard problem. © Springer-Verlag Berlin Heidelberg 2011.

  5. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  6. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective of extending the range of application of, improving the efficiency of, and conducting simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  7. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary based faulty memory RAM by Finocchi and Italiano....... However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower...... bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  8. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  9. A new cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    1998-01-01

    A new cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The graphs may be both weighted (with nonnegative weights) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a
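    The flow simulation at the heart of MCL alternates matrix expansion and inflation. A compact dense-matrix sketch is given below; real implementations work with sparse matrices and use more careful pruning and cluster extraction.

        import numpy as np

        def mcl(adj, e=2, r=2.0, iters=50):
            # Markov Cluster algorithm on a nonnegative adjacency matrix.
            A = adj + np.eye(len(adj))            # self-loops stabilize the iteration
            M = A / A.sum(axis=0, keepdims=True)  # column-stochastic transition matrix
            for _ in range(iters):
                M = np.linalg.matrix_power(M, e)  # expansion: flow spreads out
                M = M ** r                        # inflation: strong flows strengthened
                M /= M.sum(axis=0, keepdims=True)
                M[M < 1e-12] = 0.0                # prune numerical dust
            # attractor rows with remaining mass define the clusters;
            # duplicate clusters may appear for tied attractors in this sketch
            return [np.nonzero(row)[0].tolist() for row in M if row.sum() > 0]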

  10. Seamless Merging of Hypertext and Algorithm Animation

    Science.gov (United States)

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  11. Deterministic algorithms for multi-criteria TSP

    NARCIS (Netherlands)

    Manthey, Bodo; Ogihara, Mitsunori; Tarui, Jun

    2011-01-01

    We present deterministic approximation algorithms for the multi-criteria traveling salesman problem (TSP). Our algorithms are faster and simpler than the existing randomized algorithms. First, we devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  12. Using Alternative Multiplication Algorithms to "Offload" Cognition

    Science.gov (United States)

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  13. AN ALGORITHM FOR THE DESIGN ...

    African Journals Online (AJOL)

    eobe

    focuses on the development of an algorithm for designing an axial flow compressor for a power generation gas turbine and attempts to bring to the public domain some parameters regarded as.

  14. Big Data Mining: Tools & Algorithms

    Directory of Open Access Journals (Sweden)

    Adeel Shiraz Hashmi

    2016-03-01

    Full Text Available We are now in Big Data era, and there is a growing demand for tools which can process and analyze it. Big data analytics deals with extracting valuable information from that complex data which can’t be handled by traditional data mining tools. This paper surveys the available tools which can handle large volumes of data as well as evolving data streams. The data mining tools and algorithms which can handle big data have also been summarized, and one of the tools has been used for mining of large datasets using distributed algorithms.

  15. CATEGORIES OF COMPUTER SYSTEMS ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. V. Poltavskiy

    2015-01-01

    Full Text Available Philosophy, as a frame of reference on the world around us and as the first science, is a fundamental basis, the "roots" (R. Descartes), for all branches of the scientific knowledge accumulated and applied in all fields of activity of a human being. The theory of algorithms, as one of the fundamental sections of mathematics, is also based on research in gnoseology conducting cognition of a true picture of the world of the human being. From the positions of gnoseology and ontology, as fundamental sections of philosophy, modern innovative projects are inconceivable without the development of programs and algorithms.

  16. Industrial Applications of Evolutionary Algorithms

    CERN Document Server

    Sanchez, Ernesto; Tonda, Alberto

    2012-01-01

    This book is intended as a reference both for experienced users of evolutionary algorithms and for researchers that are beginning to approach these fascinating optimization techniques. Experienced users will find interesting details of real-world problems, and advice on solving issues related to fitness computation, modeling and setting appropriate parameters to reach optimal solutions. Beginners will find a thorough introduction to evolutionary computation, and a complete presentation of all evolutionary algorithms exploited to solve different problems. The book could fill the gap between the

  17. Wavelets theory, algorithms, and applications

    CERN Document Server

    Montefusco, Laura

    2014-01-01

    Wavelets: Theory, Algorithms, and Applications is the fifth volume in the highly respected series, WAVELET ANALYSIS AND ITS APPLICATIONS. This volume shows why wavelet analysis has become a tool of choice in fields ranging from image compression, to signal detection and analysis in electrical engineering and geophysics, to analysis of turbulent or intermittent processes. The 28 papers comprising this volume are organized into seven subject areas: multiresolution analysis, wavelet transforms, tools for time-frequency analysis, wavelets and fractals, numerical methods and algorithms, and applicat

  18. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  19. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and final edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient and exact algorithms for solving problems. In this it stands apart from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for studen...

  20. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    We here study some problems concerned with the computational analysis of finite partially ordered sets. We begin (in § 1) by showing that the matrix representation of a binary relation R may always be taken in triangular form if R is a partial ordering. We consider (in § 2) the chain structure...... in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi...

  1. Deceptiveness and genetic algorithm dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Liepins, G.E. (Oak Ridge National Lab., TN (USA)); Vose, M.D. (Tennessee Univ., Knoxville, TN (USA))

    1990-01-01

    We address deceptiveness, one of at least four reasons genetic algorithms can fail to converge to function optima. We construct fully deceptive functions and other functions of intermediate deceptiveness. For the fully deceptive functions of our construction, we generate linear transformations that induce changes of representation to render the functions fully easy. We further model genetic algorithm selection recombination as the interleaving of linear and quadratic operators. Spectral analysis of the underlying matrices allows us to draw preliminary conclusions about fixed points and their stability. We also obtain an explicit formula relating the nonuniform Walsh transform to the dynamics of genetic search. 21 refs.

  2. A Distributed Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...... as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E+3NlogN. The maximal message size is loglogN+log(maxid)+3, where maxid is the maximal processor identity....

  3. A distributed spanning tree algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge

    1988-01-01

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...... as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E+3NlogN. The maximal message size is loglogN+log(maxid)+3, where maxid is the maximal processor identity....

  4. Performance Evaluation of A* Algorithms

    OpenAIRE

    Martell, Victor; Sandberg, Aron

    2016-01-01

    Context. There has been a lot of progress in the field of pathfinding. One of the most used algorithms is A*, which over the years has seen many variations. A number of papers have been written about the variations of A* and the ways in which they specifically improve A*. However, few papers compare A* with several different variations of A*. Objectives. The objective of this thesis is to find out how Dijkstra's algorithm, IDA*, Theta* and HPA* compare against A* bas...

  5. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Shi-hua Zhan

    2016-01-01

    Full Text Available The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust on a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
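    A compact rendering of the list-based cooling idea: keep a list of temperatures, use the list maximum in the Metropolis test, and refresh list entries from observed uphill moves. The 2-opt neighbour move and the exact list-update rule below are simplified assumptions, not the paper's precise scheme.

        import math, random

        def lbsa_tsp(dist, list_len=100, iters=20000):
            # List-based simulated annealing for TSP; dist is an n x n matrix.
            n = len(dist)
            tour = list(range(n))
            random.shuffle(tour)
            cost = lambda t: sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
            c = cost(tour)
            # initialize the temperature list from random 2-opt move magnitudes
            temps = []
            for _ in range(list_len):
                i, j = sorted(random.sample(range(n), 2))
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                temps.append(abs(cost(cand) - c) + 1e-9)
            for _ in range(iters):
                t_max = max(temps)
                i, j = sorted(random.sample(range(n), 2))
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]   # 2-opt reversal
                d = cost(cand) - c
                if d < 0 or random.random() < math.exp(-d / t_max):
                    if d > 0:  # accepted uphill move: adapt the list to the landscape
                        temps.remove(t_max)
                        temps.append(-d / math.log(random.random() + 1e-12))
                    tour, c = cand, c + d
            return tour, c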

  6. Estimation of distribution algorithms for nuclear reactor fuel management optimisation

    International Nuclear Information System (INIS)

    Jiang, S.; Ziver, A.K.; Carter, J.N.; Pain, C.C.; Goddard, A.J.H.; Franklin, S.; Phillips, H.J.

    2006-01-01

    In this paper, estimation of distribution algorithms (EDAs) are used to solve nuclear reactor fuel management optimisation (NRFMO) problems. Similar to typical population-based optimisation algorithms, e.g. genetic algorithms (GAs), EDAs maintain a population of solutions and evolve them during the optimisation process. Unlike GAs, new solutions are suggested by sampling the distribution estimated from all the solutions evaluated so far. We have developed new algorithms based on the EDA approach, which are applied to maximize the effective multiplication factor (Keff) of the CONSORT research reactor of Imperial College London. In the new algorithms, a new 'elite-guided' strategy and the 'stand-alone' Keff with fuel coupling are used as heuristic information to improve the optimisation. A detailed comparison study between the EDAs and GAs with previously published crossover operators is presented. A trained three-layer feed-forward artificial neural network (ANN) was used as a fast approximate model to replace the three-dimensional finite element reactor simulation code EVENT in predicting Keff. Results from the numerical experiments have shown that the EDAs used provide accurate, efficient and robust algorithms for the test case studied here. This encourages further investigation of the performance of EDAs on more realistic problems
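    The defining EDA step, estimating a distribution from the best solutions found so far and sampling new ones from it, is easiest to see in the univariate marginal case. The sketch below is a generic binary UMDA, not the paper's elite-guided variant or its reactor encoding.

        import numpy as np

        def umda(fitness, n_bits, pop=100, top=30, gens=50):
            # Univariate marginal distribution algorithm, maximizing fitness.
            rng = np.random.default_rng(0)
            p = np.full(n_bits, 0.5)                     # independent bit-wise marginals
            for _ in range(gens):
                X = (rng.random((pop, n_bits)) < p).astype(int)
                f = np.array([fitness(x) for x in X])
                elite = X[np.argsort(f)[-top:]]          # keep the best `top` solutions
                p = elite.mean(axis=0).clip(0.05, 0.95)  # re-estimate, avoid fixation
            return (p > 0.5).astype(int)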

  7. Improved algorithms for approximate string matching (extended abstract

    Directory of Open Access Journals (Sweden)

    Papamichail Georgios

    2009-01-01

    Full Text Available Abstract Background The problem of approximate string matching is important in many different areas such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string or calculation of the longest common subsequence that two strings share. Results We designed an output sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s - |n - m|)·min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings with both theoretical and practical implications. Source code of our algorithm is available online.
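    For reference, the classical quadratic-time dynamic program that output-sensitive algorithms improve upon (the authors' algorithm itself is more involved and is not reproduced here):

        def edit_distance(a: str, b: str) -> int:
            # Classical Wagner-Fischer DP, O(len(a)*len(b)) time, O(len(b)) space.
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                prev = curr
            return prev[len(b)]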

  8. Adaptation of evidence-based surgical wound care algorithm.

    Science.gov (United States)

    Han, Jung Yeon; Choi-Kwon, Smi

    2011-12-01

    This study was designed to adapt a surgical wound care algorithm that is used to provide evidence-based surgical wound care in a critical care unit. This study used the 'ADAPTE process', an international clinical practice guideline development method. The 'Bonnie Sue wound care algorithm' was used as a draft for the new algorithm. A content validity index (CVI) test targeting 135 critical care nurses was conducted. A 5-point Likert scale was applied to the CVI test using a statistical criterion of .75. The surgical wound care algorithm comprises 9 components: wound assessment, infection control, necrotic tissue management, wound classification by exudates and depth, dressing selection, consideration of systemic factors, expected wound outcome, re-evaluation of non-healing wounds, and special treatment for non-healing wounds. All of the CVI tests were ≥.75. Compared to existing wound care guidelines, the new wound care algorithm provides precise wound assessment and reliable wound care, expands the applicability of wound care to critically ill patients, and provides evidence and strength of recommendations. The new surgical wound care algorithm will contribute to the advancement of evidence-based nursing care, and its use is expected as a nursing intervention in critical care.

  9. Analysis and Improvement of Fireworks Algorithm

    OpenAIRE

    Xi-Guang Li; Shou-Fei Han; Chang-Qing Gong

    2017-01-01

    The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm pursues the goal of further boosting performance and achieving global optimization mainly through the following strategies. Firstly, using the opp...

  10. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.

  11. Optimization of Catalysts Using Specific, Description-Based Genetic Algorithms

    Czech Academy of Sciences Publication Activity Database

    Holeňa, Martin; Čukić, T.; Rodemerck, U.; Linke, D.

    2008-01-01

    Roč. 48, č. 2 (2008), s. 274-282 ISSN 1549-9596 R&D Projects: GA ČR GA201/08/1744 Institutional research plan: CEZ:AV0Z10300504 Keywords : optimization of catalytic materials * genetic algorithms * mixed optimization * constrained optimization Subject RIV: IN - Informatics, Computer Science Impact factor: 3.643, year: 2008

  12. An improved exponential-time algorithm for k-SAT

    Czech Academy of Sciences Publication Activity Database

    Pudlák, Pavel

    2005-01-01

    Roč. 52, č. 3 (2005), s. 337-364 ISSN 0004-5411 R&D Projects: GA AV ČR(CZ) IAA1019901 Institutional research plan: CEZ:AV0Z10190503 Keywords : CNF satisfiability * randomized algorithms Subject RIV: BA - General Mathematics Impact factor: 2.197, year: 2005

  13. High-Quality 800-b/s Voice Processing Algorithm.

    Science.gov (United States)

    1991-02-25

    filter. The feedback gain of the low-pass filter is a critical factor. We recommend a feedback gain somewhere between 0.990 and 0.995, which is large...algorithm discriminated the following word pairs more successfully than the 2400-b/s LPC: ZEE - THEE, JILT - GILT, JEST - GUEST, CHEEP - KEEP, SING - THING

  14. Computational Experiments with ABS Algorithms for KKT Linear Systems

    Czech Academy of Sciences Publication Activity Database

    Bodon, E.; Del Popolo, A.; Lukšan, Ladislav; Spedicato, E.

    2001-01-01

    Roč. 16, č. 1-4 (2001), s. 85-99 ISSN 1055-6788 R&D Projects: GA ČR GA201/00/0080 Institutional research plan: AV0Z1030915 Keywords : ABS algorithms * KKT systems Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.623, year: 2001

  15. Some Practical Payments Clearance Algorithms

    Science.gov (United States)

    Kumlander, Deniss

    The globalisation of corporations' operations has produced a huge volume of inter-company invoices. Optimisation of those transfers, known as payment clearance, can produce a significant saving in the costs associated with the transfers and their handling. The paper reviews some common, practical approaches to the payment clearance problem and proposes some novel algorithms based on graph theory and heuristic distribution of totals.

  16. Algorithmic Issues in Modeling Motion

    DEFF Research Database (Denmark)

    Agarwal, P. K; Guibas, L. J; Edelsbrunner, H.

    2003-01-01

    This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory...

  17. Hill climbing algorithms and trivium

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian

    2011-01-01

    This paper proposes a new method to solve certain classes of systems of multivariate equations over the binary field and its cryptanalytical applications. We show how heuristic optimization methods such as hill climbing algorithms can be relevant to solving systems of multivariate equations...

  18. Understanding Algorithms in Different Presentations

    Science.gov (United States)

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  19. Template Generation and Selection Algorithms

    NARCIS (Netherlands)

    Guo, Y.; Smit, Gerardus Johannes Maria; Broersma, Haitze J.; Heysters, P.M.; Badaway, W.; Ismail, Y.

    The availability of high-level design entry tooling is crucial for the viability of any reconfigurable SoC architecture. This paper presents a template generation method to extract functional equivalent structures, i.e. templates, from a control data flow graph. By inspecting the graph the algorithm

  20. Document Organization Using Kohonen's Algorithm.

    Science.gov (United States)

    Guerrero Bote, Vicente P.; Moya Anegon, Felix de; Herrero Solana, Victor

    2002-01-01

    Discussion of the classification of documents from bibliographic databases focuses on a method of vectorizing reference documents from LISA (Library and Information Science Abstracts) which permits their topological organization using Kohonen's algorithm. Analyzes possibilities of this type of neural network with respect to the development of…

  1. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  2. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    2012-11-15

    Nov 15, 2012 ... from electrons, muons and hadronic jets. These algorithms enable extended reach for the searches for MSSM Higgs, Z and other exotic particles. Keywords. CMS; tau; LHC; ECAL; HCAL. PACS No. 13.35.Dx. 1. Introduction. Tau is the heaviest known lepton (Mτ = 1.78 GeV) which decays into lighter leptons.

  3. Privacy preserving randomized gossip algorithms

    KAUST Repository

    Hanzely, Filip

    2017-06-23

    In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.
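    Plain randomized pairwise gossip for average consensus, the baseline that the three private variants build on, fits in a few lines; the masking steps that protect the initial values are not shown.

        import random

        def gossip_average(values, edges, iters=10000):
            # Randomized pairwise gossip: repeatedly average across a random edge.
            # values: initial private node values; edges: list of (i, j) pairs.
            # On a connected graph all node values converge to the global average.
            x = list(values)
            for _ in range(iters):
                i, j = random.choice(edges)
                avg = (x[i] + x[j]) / 2.0
                x[i] = x[j] = avg
            return x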

  4. Associative Algorithms for Computational Creativity

    Science.gov (United States)

    Varshney, Lav R.; Wang, Jun; Varshney, Kush R.

    2016-01-01

    Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…

  5. Parallel Algorithms for Model Checking

    NARCIS (Netherlands)

    van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri

    2017-01-01

    Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph

  6. Algorithms and Public Service Media

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Hutchinson, Jonathon

    2018-01-01

    the regulatory conditions. In this chapter we analyse two cases - the EBU members' introduction of recommender systems and the Australian broadcaster ABC's experiences with the use of chatbots. We use these cases to exemplify the challenges that algorithmic systems pose to PSM organisations....

  7. A speedup technique for (l, d)-motif finding algorithms

    Directory of Open Access Journals (Sweden)

    Dinh Hieu

    2011-03-01

    Full Text Available Abstract Background The discovery of patterns in DNA, RNA, and protein sequences has led to the solution of many vital biological problems. For instance, the identification of patterns in nucleic acid sequences has resulted in the determination of open reading frames, identification of promoter elements of genes, identification of intron/exon splicing sites, identification of SH RNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have proven to be extremely helpful in domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, etc. Motifs are important patterns that are helpful in finding transcriptional regulatory elements, transcription factor binding sites, functional genomics, drug design, etc. As a result, numerous papers have been written to solve the motif search problem. Results Three versions of the motif search problem have been proposed in the literature: Simple Motif Search (SMS), (l, d)-motif search (or Planted Motif Search, PMS), and Edit-distance-based Motif Search (EMS). In this paper we focus on PMS. Two kinds of algorithms can be found in the literature for solving the PMS problem: exact and approximate. An exact algorithm identifies the motifs always and an approximate algorithm may fail to identify some or all of the motifs. The exact version of the PMS problem has been shown to be NP-hard. Exact algorithms proposed in the literature for PMS take time that is exponential in some of the underlying parameters. In this paper we propose a generic technique that can be used to speed up PMS algorithms. Conclusions We present a speedup technique that can be used on any PMS algorithm. We have tested our speedup technique on a number of algorithms. These experimental results show that our speedup technique is indeed very
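    To make the problem statement concrete: an (l, d) motif is an l-mer that occurs in every input string with at most d mismatches. The neighbourhood-enumeration solver below is a common exact baseline for small instances, not the speedup technique of the paper.

        from itertools import combinations, product

        def neighbors(kmer, d):
            # All strings within Hamming distance d of kmer (DNA alphabet).
            out = set()
            for pos in combinations(range(len(kmer)), d):
                for subs in product("ACGT", repeat=d):
                    s = list(kmer)
                    for p, c in zip(pos, subs):
                        s[p] = c
                    out.add("".join(s))
            return out

        def planted_motif_search(seqs, l, d):
            # Exact (l, d)-PMS: motifs occurring in every sequence with <= d mismatches.
            def occurs(m, s):
                return any(sum(a != b for a, b in zip(m, s[i:i + l])) <= d
                           for i in range(len(s) - l + 1))
            candidates = set()
            s0 = seqs[0]
            for i in range(len(s0) - l + 1):   # neighborhoods of the first sequence's l-mers
                candidates |= neighbors(s0[i:i + l], d)
            return sorted(m for m in candidates if all(occurs(m, s) for s in seqs))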

  8. An enhanced search algorithm for the charged fuel enrichment in equilibrium cycle analysis of REBUS-3

    International Nuclear Information System (INIS)

    Park, Tongkyu; Yang, Won Sik; Kim, Sang-Ji

    2017-01-01

    Highlights: • An enhanced search algorithm for charged fuel enrichment was developed for equilibrium cycle analysis with REBUS-3. • The new search algorithm is not sensitive to the user-specified initial guesses. • The new algorithm reduces the computational time by a factor of 2–3. - Abstract: This paper presents an enhanced search algorithm for the charged fuel enrichment in equilibrium cycle analysis of REBUS-3. The current enrichment search algorithm of REBUS-3 takes a large number of iterations to yield a converged solution, or even terminates without one, when the user-specified initial guesses are far from the solution. To resolve the convergence problem and to reduce the computational time, an enhanced search algorithm was developed. The enhanced algorithm is based on the idea of minimizing the number of enrichment estimates by allowing drastic enrichment changes and by optimizing the current search algorithm of REBUS-3. Three equilibrium cycle problems with recycling, without recycling and of high discharge burnup were defined and a series of sensitivity analyses were performed with a wide range of user-specified initial guesses. Test results showed that the enhanced search algorithm is able to produce a converged solution regardless of the initial guesses. In addition, it was able to reduce the number of flux calculations by a factor of 2.9, 1.8, and 1.7 for equilibrium cycle problems with recycling, without recycling, and of high discharge burnup, respectively, compared to the current search algorithm.

  9. Fuzzy logic-based diagnostic algorithm for implantable cardioverter defibrillators.

    Science.gov (United States)

    Bárdossy, András; Blinowska, Aleksandra; Kuzmicz, Wieslaw; Ollitrault, Jacky; Lewandowski, Michał; Przybylski, Andrzej; Jaworski, Zbigniew

    2014-02-01

    The paper presents a diagnostic algorithm for classifying cardiac tachyarrhythmias for implantable cardioverter defibrillators (ICDs). The main aim was to develop an algorithm that could reduce the rate of occurrence of inappropriate therapies, which are often observed in existing ICDs. To achieve low energy consumption, which is a critical factor for implantable medical devices, very low computational complexity of the algorithm was crucial. The study describes and validates such an algorithm and estimates its clinical value. The algorithm was based on heart rate variability (HRV) analysis. The input data for our algorithm were: the RR-interval (I), extracted from the raw intracardiac electrogram (EGM), and two other features of HRV called here onset (ONS) and instability (INST). Six diagnostic categories were considered: ventricular fibrillation (VF), ventricular tachycardia (VT), sinus tachycardia (ST), detection artifacts and irregularities (including extrasystoles) (DAI), atrial tachyarrhythmias (ATF) and no tachycardia (i.e. normal sinus rhythm) (NT). The initial set of fuzzy rules, based on the distributions of I, ONS and INST in the six categories, was optimized by means of a software tool for automatic rule assessment using simulated annealing. A training data set with 74 EGM recordings was used during optimization, and the algorithm was validated with a validation data set with 58 EGM recordings. Real-life recordings stored in defibrillator memories were used. Additionally, the algorithm was tested on two sets of recordings from the PhysioBank databases: the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database. A custom CMOS integrated circuit implementing the diagnostic algorithm was designed in order to estimate the power consumption. A dedicated Web site, which provides public online access to the algorithm, has been created and is available for testing it. The total number of events in our training and validation sets was 132. In

  10. The TROPOMI surface UV algorithm

    Science.gov (United States)

    Lindfors, Anders V.; Kujanpää, Jukka; Kalakoski, Niilo; Heikkilä, Anu; Lakkala, Kaisa; Mielonen, Tero; Sneep, Maarten; Krotkov, Nickolay A.; Arola, Antti; Tamminen, Johanna

    2018-02-01

    The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these are available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.

  11. The TROPOMI surface UV algorithm

    Directory of Open Access Journals (Sweden)

    A. V. Lindfors

    2018-02-01

    Full Text Available The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these are available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.

  12. ALGORITHM FOR ESTIMATING OF COMPETITIVENESS A REGION

    Directory of Open Access Journals (Sweden)

    Friedman Yu. A.

    2014-12-01

    Full Text Available The present stage of management of territorial development, characterized as «strategic openness», has actualized the need to create sustainable sources and mechanisms for forming the competitive advantages of regions, and has renewed interest in evaluating the competitiveness of a regional economy. In the present study, firstly, we formulate the authors' definition of regional competitiveness, based on the concept of a region's «attractiveness» for business development and people's lives; secondly, we suggest an approach and an algorithm for the measurement and comparative analysis of the competitiveness of a regional economy, based on quantitative evaluation of the various aspects of «attractiveness»; thirdly, we develop the basic elements of the modeling and methodical support for quantitatively assessing the competitiveness of regional economies, including statistically reliable indicators of five competitively important factors (the level of economic potential of a region and the efficiency of its use, the attractiveness of a region for the population and for business, and the level of innovation in the economy of a region) that allow «attractiveness» to be measured qualitatively and quantitatively; fourthly, we carry out a comparative analysis of the results of a quantitative evaluation of competitiveness on these factors in five regions of the Siberian Federal District for the period 2000-2012 (Kemerovo oblast, Novosibirsk oblast, Tomsk oblast, Krasnoyarsk krai, Altai krai). The research has shown that the developed algorithm for assessing regional competitiveness can not only fix competitive advantages and ratings at a specific time, but can also monitor the dynamics of competitively important factors in the region. The obtained evaluations of competitiveness are the basis for recommendations about changing the drivers of growth in the region and reconstructing models of its regional development.

  13. Comparative analysis of distributed power control algorithms in CDMA

    OpenAIRE

    Abdulhamid, Mohanad F.

    2017-01-01

    This paper presents a comparative analysis of various distributed power control algorithms used in Code Division Multiple Access (CDMA) systems. These algorithms include the Distributed Balancing power control algorithm (DB), the Modified Distributed Balancing power control algorithm (MDB), the Fully Distributed Power Control algorithm (FDPC), the Distributed Power Control algorithm (DPC), the Distributed Constrained Power Control algorithm (DCPC), the Unconstrained Second-Order Power Control algorithm (USOPC), Con...

  14. The Great Deluge Algorithm applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Oliveira, Cassiano R.E. de

    2005-01-01

    The Great Deluge Algorithm (GDA) is a local search algorithm introduced by Dueck. It is an analogy with a flood: the 'water level' rises continuously and the proposed solution must lie above the 'surface' in order to survive. The crucial parameter is the 'rain speed', which controls convergence of the algorithm similarly to Simulated Annealing's annealing schedule. This algorithm is applied to the reactor core design optimization problem, which consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a 3-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. This problem was previously attacked by the canonical genetic algorithm (GA) and by a Niching Genetic Algorithm (NGA). NGAs were designed to force the genetic algorithm to maintain a heterogeneous population throughout the evolutionary process, avoiding the phenomenon known as genetic drift, where all the individuals converge to a single solution. The results obtained by the Great Deluge Algorithm are compared to those obtained by both algorithms mentioned above. The three algorithms are submitted to the same computational effort and GDA reaches the best results, showing its potential for other applications in the nuclear engineering field as, for instance, the nuclear core reload optimization problem. One of the great advantages of this algorithm over the GA is that it does not require special operators for discrete optimization. (author)
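    The acceptance rule of the Great Deluge Algorithm is simple enough to state in full. A generic minimization sketch follows; for minimization the 'water level' becomes an upper bound on the accepted cost that is lowered at the rain speed, mirroring the rising-water picture used for maximization.

        import random

        def great_deluge(cost, neighbor, x0, rain_speed, iters=100000):
            # Great Deluge local search for minimization.
            # cost: objective function; neighbor: random-move generator;
            # rain_speed: how fast the tolerated cost level drops per iteration.
            x = x0
            level = cost(x0)                 # initial water level at the starting cost
            best, best_c = x0, cost(x0)
            for _ in range(iters):
                y = neighbor(x)
                cy = cost(y)
                if cy <= level:              # accept anything not "under water"
                    x = y
                    if cy < best_c:
                        best, best_c = y, cy
                level -= rain_speed          # the bound tightens every iteration
            return best, best_c

    Note that the rain speed plays the role Simulated Annealing's schedule plays, and nothing else needs tuning, which is the appeal of the method for discrete problems such as core design.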

  15. Accelerating scientific computations with mixed precision algorithms

    Science.gov (United States)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    A dense linear system Ax = b is commonly solved through LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is generally used to improve numerical stability, resulting in a factorization PA = LU, where P is a permutation matrix. The solution of the system is then obtained by first solving Ly = Pb (forward substitution) and then solving Ux = y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. To improve the computed solution, an iterative process can be applied that produces a correction at each step; this yields the method commonly known as iterative refinement. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision. Running time: seconds/minutes
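
    A compact numpy/scipy sketch of the refinement loop just described: the factorization is done once in float32 and reused for each correction, while residuals are computed in float64 (the tolerance and iteration cap are illustrative choices):

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
            # Factor once in low precision (the expensive O(n^3) step) ...
            lu, piv = lu_factor(A.astype(np.float32))
            x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
            for _ in range(max_iter):
                r = b - A @ x                    # residual in working precision
                if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                    break
                # ... and reuse the cheap factorization for each correction.
                d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
                x += d
            return x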

  16. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm inspired by observing fireworks explosions. The adaptive fireworks algorithm (AFWA) adds adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are used to test the opposition-based adaptive fireworks algorithm (OAFWA). The results show that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
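
    A small numpy sketch of the generic opposition-based learning step that OAFWA layers onto AFWA (this is the textbook OBL operator; the paper's exact integration with firework amplitudes is not reproduced here):

        import numpy as np

        def obl_step(pop, fitness, lo, hi):
            # Reflect every candidate through the midpoint of the box
            # [lo, hi] and keep the better half of the combined set.
            opposite = lo + hi - pop
            combined = np.vstack([pop, opposite])
            scores = np.apply_along_axis(fitness, 1, combined)
            keep = np.argsort(scores)[: len(pop)]      # minimization
            return combined[keep]

        # Example: one opposition step on the sphere function in [-5, 5]^3.
        rng = np.random.default_rng(1)
        pop = rng.uniform(-5, 5, (10, 3))
        pop = obl_step(pop, lambda x: float(np.sum(x * x)), lo=-5.0, hi=5.0)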

  17. Fused Entropy Algorithm in Optical Computed Tomography

    Directory of Open Access Journals (Sweden)

    Xiong Wan

    2014-02-01

    Full Text Available In most applications of optical computed tomography (OpCT), limited-view problems are often encountered, which can be solved to a certain extent with typical OpCT reconstructive algorithms. The concept of entropy, which first emerged in information theory, has been introduced into OpCT algorithms, such as maximum entropy (ME) algorithms and cross entropy (CE) algorithms; these have demonstrated their superiority over traditional OpCT algorithms, yet have their own limitations. A fused entropy (FE) algorithm, which follows an optimized criterion combining ME with CE self-adaptively, is proposed and investigated by comparison with ME, CE and some traditional OpCT algorithms. Reconstructed results of several physical models show that the FE algorithm converges well and can achieve better precision than the other algorithms, which verifies the feasibility of FE as an approach to optimizing computation, not only for OpCT, but also for other image processing applications.

  18. Linear Bregman algorithm implemented in parallel GPU

    Science.gov (United States)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a widely used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplications and a thresholding operation, and is therefore simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm, as well as with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time, and is thus more convenient for real-time object reconstruction, which matters given the fast-growing demands of information technology.
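
    A numpy sketch of the linearized Bregman iteration for min ||u||_1 subject to Au = b, which makes the GPU suitability plain: each step is two matrix-vector products plus a componentwise soft-threshold. The shrinkage parameter mu, the iteration count, and the conservative step size are illustrative choices, not the paper's settings:

        import numpy as np

        def shrink(v, mu):
            # Componentwise soft-thresholding.
            return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

        def linearized_bregman(A, b, mu=5.0, iters=3000):
            # Step size chosen conservatively from the spectral norm
            # so the iteration stays stable.
            delta = 1.0 / np.linalg.norm(A, 2) ** 2
            u = np.zeros(A.shape[1])
            v = np.zeros(A.shape[1])
            for _ in range(iters):
                v += A.T @ (b - A @ u)       # two matrix-vector products ...
                u = delta * shrink(v, mu)    # ... and one thresholding pass
            return u

    On a GPU the matrix-vector products map to library GEMV calls and the threshold to a single elementwise kernel, which is why this algorithm parallelizes so cleanly.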

  19. Algorithms

    Indian Academy of Sciences (India)

    immediate successor as well as the immediate predecessor explicitly. Such a list is referred to as a doubly linked list. A typical doubly linked list is shown in Figure 3f. The ability to get to either the successor or predecessor not only makes access easy but also enables one to backtrack in a search. Two Dimensional Arrays: It ...

  20. Algorithms

    Indian Academy of Sciences (India)

    Table 2: Merging two sorted arrays. procedure MERGE_TWO_ARRAYS(A[1,p], B[1,q], C[1,p+q]: integer); (* A[1,p], B[1,q] are the sorted arrays to be merged and placed in array C. *) (* Note that array C will be of length p+q; in the program we use parameters *) (* p and q explicitly *) var i, j, k: integer; begin.

  1. Algorithms

    Indian Academy of Sciences (India)

    like programming language. Recursion. One of the usual techniques of problem solving is to break the problem into smaller problems. From the solution of these smaller problems, one obtains a solution for the original problem. Consider the procedural abstraction described above. It is possible to visualize the given ...

  2. Algorithms

    Indian Academy of Sciences (India)

    guesses for the technique discussed above. The method described above for computing the approximate square root is referred to as Newton's method for finding √a, after the famous English mathematician Isaac Newton. In Table 5, we have essentially solved the nonlinear equation.

  3. Algorithms

    Indian Academy of Sciences (India)

    In the previous article of this series, we looked at simple data types and their representation in computer memory. The notion of a simple data type can be extended to denote a set of elements corresponding to one data item at a higher level. The process of structuring or grouping of the basic data elements is often referred ...

  4. Algorithms

    Indian Academy of Sciences (India)

    var A: array [1..N, 1..M] of integer; The above declaration denotes that A is an array having N rows and M columns. Applications for arrays are innumerable; the simplest being the classical multiplication table. A table can also be used to store hostel room numbers and codes of the persons staying in the respective rooms.

  5. Algorithms

    Indian Academy of Sciences (India)

    1 It must be noted that if the input assertion is not satisfied at this point, then any output assertion holds due to the classical implication operator. ..... on our intuitive knowledge about the underlying theory. The above processes can be formalised in a logical framework without relying on the intuitive deductions we have used.

  6. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in developing better sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of their algorithmic complexity and efficiency. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential techniques of algorithm design are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm, and the results are promising.

  7. A New Binarization Algorithm for Historical Documents

    Directory of Open Access Journals (Sweden)

    Marcos Almeida

    2018-01-01

    Full Text Available Monochromatic documents require far less network bandwidth and storage space than their color or even grayscale equivalents. The binarization of historical documents is far more complex than that of recent ones, as paper aging, color, texture, translucency, stains, back-to-front interference, the kind and color of ink used in handwriting, the printing process, and the digitization process are some of the factors that affect binarization. This article presents a new binarization algorithm for historical documents. The new global filter proposed is performed in four steps: filtering the image using a bilateral filter; splitting the image into its RGB components; decision-making for each RGB channel based on an adaptive binarization method inspired by Otsu's method, with a choice of the threshold level; and classification of the binarized images to decide which of the RGB components best preserved the document information in the foreground. The quantitative and qualitative assessment made against 23 binarization algorithms on three sets of “real world” documents showed very good results.
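
    A rough OpenCV sketch of the four-step pipeline described above. The filter parameters and the final channel-selection rule (here: pick the binarization least changed by a median filter) are illustrative stand-ins for the paper's classifier, not its actual decision procedure:

        import cv2
        import numpy as np

        def binarize_historical(img_bgr):
            # Step 1: edge-preserving smoothing with a bilateral filter.
            smooth = cv2.bilateralFilter(img_bgr, 9, 75, 75)
            # Step 2: split into the color components (OpenCV order is BGR).
            channels = cv2.split(smooth)
            # Step 3: Otsu-style global threshold per channel.
            candidates = [cv2.threshold(c, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
                          for c in channels]
            # Step 4: keep the channel that best preserves the foreground;
            # speckle count under a median filter is used as a crude proxy.
            speckle = lambda bw: np.count_nonzero(bw != cv2.medianBlur(bw, 3))
            return min(candidates, key=speckle)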

  8. Field-Split Preconditioned Inexact Newton Algorithms

    KAUST Repository

    Liu, Lulu

    2015-06-02

    The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm is presented as a complement to additive Schwarz preconditioned inexact Newton (ASPIN). At an algebraic level, ASPIN and MSPIN are variants of the same strategy to improve the convergence of systems with unbalanced nonlinearities; however, they have natural complementarity in practice. MSPIN is naturally based on partitioning of degrees of freedom in a nonlinear PDE system by field type rather than by subdomain, where a modest factor of concurrency can be sacrificed for physically motivated convergence robustness. ASPIN, originally introduced for decompositions into subdomains, is natural for high concurrency and reduction of global synchronization. We consider both types of inexact Newton algorithms in the field-split context, and we augment the classical convergence theory of ASPIN for the multiplicative case. Numerical experiments show that MSPIN can be significantly more robust than Newton methods based on global linearizations, and that MSPIN can be more robust than ASPIN and maintain fast convergence even for challenging problems, such as high Reynolds number Navier--Stokes equations.

  9. Web page sorting algorithm based on query keyword distance relation

    Science.gov (United States)

    Yang, Han; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In order to optimize page sorting, a query-keyword clustering idea is proposed based on the distance relationships between the search keywords within a web page, and it is converted into a degree of aggregation of the search keywords in the page. Based on the PageRank algorithm, a clustering-degree factor for the query keywords is added so that it can participate in the quantitative calculation. This paper thus proposes an improved PageRank algorithm based on the distance relation between search keywords. The experimental results show the feasibility and effectiveness of the method.
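
    The paper's exact combination rule is not given in the abstract; as a sketch, one common way to let a per-page keyword-aggregation score participate in PageRank is to bias the teleport vector. The numpy code below is that generic variant, with illustrative names:

        import numpy as np

        def biased_pagerank(M, keyword_score, d=0.85, iters=100):
            # M must be column-stochastic: M[i, j] is the probability of
            # following a link from page j to page i.
            w = keyword_score / keyword_score.sum()   # aggregation-degree bias
            r = np.full(M.shape[0], 1.0 / M.shape[0])
            for _ in range(iters):
                r = (1 - d) * w + d * (M @ r)         # power iteration
            return r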

  10. An Efficient Algorithm for the Discrete Gabor Transform using full length Windows

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2009-01-01

    This paper extends the efficient factorization of the Gabor frame operator developed by Strohmer (1998) to the Gabor analysis/synthesis operator. This provides a fast method for computing the discrete Gabor transform (DGT) and several algorithms associated with it. The algorithm is used … for the case when the involved window and signal have the same length.

  11. An Efficient Algorithm for the Discrete Gabor Transform using full length Windows

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2007-01-01

    This paper extends the efficient factorization of the Gabor frame operator developed by Strohmer in [1] to the Gabor analysis/synthesis operator. This provides a fast method for computing the discrete Gabor transform (DGT) and several algorithms associated with it. The algorithm is used…

  12. The PHMC algorithm for simulations of dynamical fermions; 1, description and properties

    CERN Document Server

    Frezzotti, R

    1999-01-01

    We give a detailed description of the so-called Polynomial Hybrid Monte Carlo (PHMC) algorithm. The effects of the correction factor, which is introduced to render the algorithm exact, are discussed, stressing their relevance for the statistical fluctuations and (almost) zero mode contributions to physical observables. We also investigate rounding-error effects and propose several ways to reduce memory requirements.

  13. Design Genetic Algorithm Optimization Education Software Based Fuzzy Controller for a Tricopter Fly Path Planning

    Science.gov (United States)

    Tran, Huu-Khoa; Chiou, Juing -Shian; Peng, Shou-Tao

    2016-01-01

    In this paper, the feasibility of Genetic Algorithm Optimization (GAO) education software based on a Fuzzy Logic Controller (GAO-FLC) for simulating the flight motion control of Unmanned Aerial Vehicles (UAVs) is studied. The generated flight trajectories integrate Scaling Factor (SF) fuzzy controller gains optimized by the GAO algorithm. The…

  14. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at real-time application of the collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbouring items. In addition, to solve the cold-start problem of the content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) combining CCF and CBUI is proposed and implemented on the Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.

  15. Evolutionary algorithms for the selection of single nucleotide polymorphisms.

    Science.gov (United States)

    Hubley, Robert M; Zitzler, Eckart; Roach, Jared C

    2003-07-23

    Large databases of single nucleotide polymorphisms (SNPs) are available for use in genomics studies. Typically, investigators must choose a subset of SNPs from these databases to employ in their studies. The choice of subset is influenced by many factors, including estimated or known reliability of the SNP, biochemical factors, intellectual property, cost, and effectiveness of the subset for mapping genes or identifying disease loci. We present an evolutionary algorithm for multiobjective SNP selection. We implemented a modified version of the Strength-Pareto Evolutionary Algorithm (SPEA2) in Java. Our implementation, Multiobjective Analyzer for Genetic Marker Acquisition (MAGMA), approximates the set of optimal trade-off solutions for large problems in minutes. This set is very useful for the design of large studies, including those oriented towards disease identification, genetic mapping, population studies, and haplotype-block elucidation. Evolutionary algorithms are particularly suited for optimization problems that involve multiple objectives and a complex search space on which exact methods such as exhaustive enumeration cannot be applied. They provide flexibility with respect to the problem formulation if a problem description evolves or changes. Results are produced as a trade-off front, allowing the user to make informed decisions when prioritizing factors. MAGMA is open source and available at http://snp-magma.sourceforge.net. Evolutionary algorithms are well suited for many other applications in genomics.

  16. Evolutionary algorithms for the selection of single nucleotide polymorphisms

    Directory of Open Access Journals (Sweden)

    Zitzler Eckart

    2003-07-01

    Full Text Available Abstract Background Large databases of single nucleotide polymorphisms (SNPs are available for use in genomics studies. Typically, investigators must choose a subset of SNPs from these databases to employ in their studies. The choice of subset is influenced by many factors, including estimated or known reliability of the SNP, biochemical factors, intellectual property, cost, and effectiveness of the subset for mapping genes or identifying disease loci. We present an evolutionary algorithm for multiobjective SNP selection. Results We implemented a modified version of the Strength-Pareto Evolutionary Algorithm (SPEA2 in Java. Our implementation, Multiobjective Analyzer for Genetic Marker Acquisition (MAGMA, approximates the set of optimal trade-off solutions for large problems in minutes. This set is very useful for the design of large studies, including those oriented towards disease identification, genetic mapping, population studies, and haplotype-block elucidation. Conclusion Evolutionary algorithms are particularly suited for optimization problems that involve multiple objectives and a complex search space on which exact methods such as exhaustive enumeration cannot be applied. They provide flexibility with respect to the problem formulation if a problem description evolves or changes. Results are produced as a trade-off front, allowing the user to make informed decisions when prioritizing factors. MAGMA is open source and available at http://snp-magma.sourceforge.net. Evolutionary algorithms are well suited for many other applications in genomics.

  17. MUSIC algorithms for rebar detection

    International Nuclear Information System (INIS)

    Solimene, Raffaele; Leone, Giovanni; Dell’Aversano, Angela

    2013-01-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios. (paper)
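
    For reference, here is a numpy sketch of the classical one-stage MUSIC pseudospectrum, written in its textbook uniform-linear-array DOA form; the paper applies the same subspace idea to scatterer localization, and its two-stage variant essentially runs such a scan twice, with the strong scatterers removed before the second pass:

        import numpy as np

        def music_spectrum(X, n_sources, d=0.5):
            # X: sensors x snapshots; d: element spacing in wavelengths.
            m, snapshots = X.shape
            R = X @ X.conj().T / snapshots           # sample covariance
            w, V = np.linalg.eigh(R)                 # eigenvalues ascending
            En = V[:, : m - n_sources]               # noise subspace
            angles = np.linspace(-90.0, 90.0, 361)
            p = np.empty_like(angles)
            for k, theta in enumerate(angles):
                a = np.exp(-2j * np.pi * d * np.arange(m)
                           * np.sin(np.deg2rad(theta)))
                p[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
            return angles, p                         # peaks mark source angles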

  18. A fast meteor detection algorithm

    Science.gov (United States)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade-offs made for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.

  19. An NOy* Algorithm for SOLVE

    Science.gov (United States)

    Loewenstein, M.; Greenblatt. B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve; hide

    2000-01-01

    De-nitrification and excess re-nitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analysis of these events requires knowledge of the initial, or pre-vortex, state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we attempt to establish the current unperturbed NOy:N2O relationship (the NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from observations in the ASHOE/MAESA campaign.

  20. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both algorithms and technologies of interactive videos, so that businesses in IT and data managements, scientists and software engineers in video processing and computer vision, coaches and instructors that use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring, filtering of video stream by extracting of highlights, events, and meaningf...

  1. Combinatorial optimization theory and algorithms

    CERN Document Server

    Korte, Bernhard

    2018-01-01

    This comprehensive textbook on combinatorial optimization places special emphasis on theoretical results and algorithms with provably good performance, in contrast to heuristics. It is based on numerous courses on combinatorial optimization and specialized topics, mostly at graduate level. This book reviews the fundamentals, covers the classical topics (paths, flows, matching, matroids, NP-completeness, approximation algorithms) in detail, and proceeds to advanced and recent topics, some of which have not appeared in a textbook before. Throughout, it contains complete but concise proofs, and also provides numerous exercises and references. This sixth edition has again been updated, revised, and significantly extended. Among other additions, there are new sections on shallow-light trees, submodular function maximization, smoothed analysis of the knapsack problem, the (ln 4+ɛ)-approximation for Steiner trees, and the VPN theorem. Thus, this book continues to represent the state of the art of combinatorial opti...

  2. Algorithms for Lightweight Key Exchange.

    Science.gov (United States)

    Alvarez, Rafael; Caballero-Gil, Cándido; Santonja, Juan; Zamora, Antonio

    2017-06-27

    Public-key cryptography is too slow for general purpose encryption, with most applications limiting its use as much as possible. Some secure protocols, especially those that enable forward secrecy, make much heavier use of public-key cryptography, increasing the demand for lightweight cryptosystems that can be implemented in low-powered or mobile devices. These performance requirements are even more significant in critical infrastructure and emergency scenarios, where peer-to-peer networks are deployed for increased availability and resiliency. We benchmark several public-key key-exchange algorithms, determine those that are best suited to the requirements of critical infrastructure and emergency applications, propose a security framework based on these algorithms, and study its application to decentralized node and sensor networks.
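
    As a concrete instance of the kind of lightweight, forward-secret primitive such a benchmark covers, here is an ephemeral Diffie-Hellman exchange over Curve25519 using the Python cryptography package (the library choice is ours for illustration; the paper does not prescribe it):

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        # Each peer generates an ephemeral key pair (forward secrecy).
        alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

        # Exchanging public keys lets both sides derive the same secret.
        secret_a = alice.exchange(bob.public_key())
        secret_b = bob.exchange(alice.public_key())
        assert secret_a == secret_b

        # Derive a session key from the raw shared secret.
        key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"handshake").derive(secret_a)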

  3. Innovations in Lattice QCD Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today.

  4. MUSIC algorithms for rebar detection

    Science.gov (United States)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.

  5. Genetic Algorithms for Case Adaptation

    International Nuclear Information System (INIS)

    Salem, A.M.; Mohamed, A.H.

    2008-01-01

    The case based reasoning (CBR) paradigm has been widely used to provide computer support for recalling and adapting known cases to novel situations. Case adaptation algorithms generally rely on knowledge bases and heuristics in order to change past solutions to solve new problems. However, case adaptation has always been a difficult process for engineers within the CBR cycle; its difficulties stem from its domain dependency and computational cost. In an effort to solve this problem, this research explores a general-purpose method that applies a genetic algorithm (GA) to CBR adaptation, which can decrease the computational complexity of the search in problems that depend heavily on domain knowledge. The proposed model can be used to perform a variety of design tasks on a broad set of application domains; here it has been implemented for tablet formulation as the application domain. The proposed system improves the performance of CBR design systems
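
    A bare-bones real-coded GA loop of the kind such a system plugs into the adaptation step. The representation, operators, and fitness are domain-specific; everything here is an illustrative default for a minimization problem:

        import random

        def genetic_search(fitness, dim, pop_size=40, gens=200,
                           cx_rate=0.8, mut_rate=0.05):
            pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness)                    # minimization
                parents = pop[: pop_size // 2]           # truncation selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, dim) if dim > 1 else 0
                    child = a[:cut] + b[cut:] if random.random() < cx_rate else a[:]
                    child = [random.random() if random.random() < mut_rate else g
                             for g in child]             # uniform mutation
                    children.append(child)
                pop = parents + children
            return min(pop, key=fitness)

        # Toy usage: drive the sphere function toward its minimum.
        best = genetic_search(lambda x: sum(g * g for g in x), dim=5)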

  6. Algorithms for Protein Structure Prediction

    DEFF Research Database (Denmark)

    Paluszewski, Martin

    The problem of predicting the three-dimensional structure of a protein given its amino acid sequence is one of the most important open problems in bioinformatics. One of the carbon atoms in amino acids is the Cα-atom, and the overall structure of a protein is often represented by a so-called Cα-trace. We consider reconstruction of Cα-traces from half-sphere exposure (HSE) and contact number (CN) measures only. We show that the HSE measure is much more information-rich than CN; nevertheless, HSE does not appear to provide enough information to reconstruct the Cα-traces of real-sized proteins. Our experiments also show that tabu search (with our novel tabu definition) is competitive in quality and speed with other state-of-the-art decoy generation algorithms. Our third Cα-trace reconstruction approach is based on bee-colony optimization [24]. We demonstrate why this algorithm has some important properties that make it suitable for protein structure prediction.

  7. Computed laminography and reconstruction algorithm

    International Nuclear Information System (INIS)

    Que Jiemin; Cao Daquan; Zhao Wei; Tang Xiao

    2012-01-01

    Computed laminography (CL) is an alternative to computed tomography when large objects are to be inspected with high resolution; this is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with various weighting functions by computer simulation with a digital phantom. The results show that the ART algorithm is a good choice for the CL system. (authors)
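
    A minimal numpy sketch of additive ART (Kaczmarz sweeps) for the linear system Ax = b that CL reconstruction reduces to; the weighting variants compared in the paper amount, roughly, to different row weights and relaxation factors (the values here are illustrative):

        import numpy as np

        def art(A, b, sweeps=10, relax=1.0):
            # One sweep projects x onto the hyperplane of each ray equation.
            x = np.zeros(A.shape[1])
            row_norm2 = np.einsum("ij,ij->i", A, A)
            for _ in range(sweeps):
                for i in range(A.shape[0]):
                    if row_norm2[i] > 0.0:
                        x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
            return x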

  8. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

    In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl

  9. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest common ancestors … an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  10. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw

  11. Solving Hub Network Problem Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mursyid Hasan Basri

    2012-01-01

    Full Text Available This paper addresses a network problem described as follows. There are n ports that interact, and p of them will be designated as hubs. All hubs are fully interconnected. Each spoke will be allocated to only one of the available hubs. A direct connection between two spokes is allowed only if they are allocated to the same hub. The latter is a distinct characteristic that distinguishes this system from a pure hub-and-spoke system, in which direct connections between spokes are not allowed. The problem is where to locate the hub ports and to which hub each spoke should be allocated so that the total transportation cost is minimized. In the first model, some additional aspects are taken into consideration in order to achieve a better representation of the problem: first, weekly service should be accomplished; second, various vessel types should be considered; and last, a concept of an inter-hub discount factor is introduced. The last aspect represents a cost-reduction factor at hub ports due to economies of scale; in practice, it is common for the cost rate for inter-hub movement to be less than the cost rate for movement between a hub and an origin/destination. In this first model, the inter-hub discount factor is assumed to be independent of the amount of flow on inter-hub links (denoted as the flow-independent discount policy). The results indicated that the patterns of enlargement of container ship size are, to some degree, similar to those in the Kurokawa study. However, with regard to hub locations, the results do not reflect real practice. The proposed model addresses this unsatisfactory result on hub locations. One aspect that could be improved to find better hub locations is the inter-hub discount factor, which is then assumed to depend on the amount of inter-hub flow (denoted as the flow-dependent discount policy). There are two discount functions examined in this paper. Both functions are characterized by

  12. Boosting Learning Algorithm for Stock Price Forecasting

    Science.gov (United States)

    Wang, Chengzhang; Bai, Xiaoming

    2018-03-01

    To tackle the complexity and uncertainty of stock market behavior, many studies have introduced machine learning algorithms to forecast stock prices. The ANN (artificial neural network) is one of the most successful and promising applications. We propose a boosting-ANN model in this paper to predict the stock close price. On the basis of boosting theory, multiple weak predicting machines, i.e. ANNs, are assembled to build a stronger predictor, i.e. the boosting-ANN model. New error criteria for the weak learning machines and new weight-update rules are adopted in this study. We select technical factors from financial markets as forecasting input variables. The final results demonstrate that the boosting-ANN model works better than the alternatives for stock price forecasting.

  13. Faster algorithms for RNA-folding using the Four-Russians method.

    Science.gov (United States)

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n³) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n³/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once, prior to the algorithm computation; we call this the two-vector method. We also show variants where, instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n²) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n²/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas used to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds, while the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source-code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
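
    For context, here is the O(n³) Nussinov recurrence that both the Four-Russians and two-vector methods accelerate, in plain Python (allowing GU wobble pairs and a one-base minimum hairpin loop are illustrative choices, not the paper's exact conventions):

        def nussinov(seq, min_loop=1):
            # D[i][j]: max non-crossing pairs in seq[i..j]; either base i is
            # unpaired, or i pairs with some k and splits the interval.
            pair = {("A", "U"), ("U", "A"), ("G", "C"),
                    ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            D = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = D[i + 1][j]                    # base i left unpaired
                    for k in range(i + min_loop + 1, j + 1):
                        if (seq[i], seq[k]) in pair:      # base i pairs with k
                            right = D[k + 1][j] if k < j else 0
                            best = max(best, 1 + D[i + 1][k - 1] + right)
                    D[i][j] = best
            return D[0][n - 1]

        print(nussinov("GGGAAAUCC"))   # small sanity check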

  14. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    Science.gov (United States)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  15. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  16. Comparison of turbulence mitigation algorithms

    Science.gov (United States)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  17. New Level-3 BLAS Kernels for Cholesky Factorization

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Wasniewski, Jerzy; Herrero, José R.

    2012-01-01

    Some Linear Algebra Libraries use Level-2 routines during the factorization part of any Level-3 block factorization algorithm. We discuss four Level-3 routines called DPOTF3, a new type of BLAS, for the factorization part of a block Cholesky factorization algorithm for use by LAPACK routine DPOTRF...
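
    For orientation, a numpy sketch of the right-looking block Cholesky structure inside DPOTRF; the paper's DPOTF3 kernels replace the small diagonal-block factorization (delegated here to np.linalg.cholesky) with Level-3 code:

        import numpy as np

        def blocked_cholesky(A, nb=64):
            # Lower-triangular Cholesky: factor nb x nb diagonal blocks,
            # then update the trailing matrix with Level-3 operations.
            A = A.copy()
            n = A.shape[0]
            for k in range(0, n, nb):
                e = min(k + nb, n)
                A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])   # diagonal block
                if e < n:
                    L = A[k:e, k:e]
                    # Panel solve (TRSM-like): A21 <- A21 * L^{-T}
                    A[e:, k:e] = np.linalg.solve(L, A[e:, k:e].T).T
                    # Trailing update (SYRK-like): A22 <- A22 - A21 A21^T
                    A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T
            return np.tril(A)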

  18. Policy-aware algorithms for proxy placement in the Internet

    Science.gov (United States)

    Kamath, Krishnanand M.; Bassali, Harpal S.; Hosamani, Rajendraprasad B.; Gao, Lixin

    2001-07-01

    The Internet has grown explosively over the past few years and has matured into an important commercial infrastructure. The explosive growth of traffic has contributed to degradation of user-perceived response times in today's Internet. Caching at proxy servers has emerged as an effective way of reducing the overall latency. The effectiveness of a proxy server is primarily determined by its locality, which is affected by factors such as the Internet topology and routing policies. In this paper, we present heuristic algorithms for placing proxies in the Internet by considering both Internet topology and routing policies. In particular, we make use of the logical topology inferred from Autonomous System (AS) relationships to derive the path between a proxy and a client. We present heuristic algorithms for placing proxies and evaluate these algorithms on the Internet logical topology over three years. To the best of our knowledge, this is the first work on placing proxy servers in the Internet that considers logical topology.

  19. SINS/CNS Nonlinear Integrated Navigation Algorithm for Hypersonic Vehicle

    Directory of Open Access Journals (Sweden)

    Yong-jun Yu

    2015-01-01

    Full Text Available The Celestial Navigation System (CNS) has the characteristics of accurate orientation and strong autonomy, and has been widely used in hypersonic vehicles. Since CNS location and orientation mainly depend upon an inertial reference that contains errors caused by gyro drifts and other error factors, the traditional Strap-down Inertial Navigation System (SINS)/CNS positioning algorithm, which sets the position error between SINS and CNS as the measurement, is not effective. A model of altitude azimuth, platform error angles, and horizontal position is designed, and a tightly integrated SINS/CNS algorithm is developed, in which the CNS altitude azimuth is set as the measurement information. A Gaussian particle filter (GPF) is introduced to solve the problem of nonlinear filtering. The simulation results show that the precision of the SINS/CNS algorithm, which reaches 130 m using three stars, is improved effectively.

  20. An Evolutionary Algorithm to Mine High-Utility Itemsets

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2015-01-01

    Full Text Available High-utility itemset mining (HUIM) has become a critical issue in recent years, since it can reveal profitable products by considering both quantity and profit factors, in contrast to frequent itemset mining (FIM) of association rules (ARs). In this paper, an evolutionary algorithm is presented to efficiently mine high-utility itemsets (HUIs) based on binary particle swarm optimization. A maximal pattern (MP)-tree structure is further designed to solve the combinatorial problem in the evolution process. Substantial experiments on real-life datasets show that the proposed binary PSO-based algorithm achieves better results than the state-of-the-art GA-based algorithm.
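
    A generic sigmoid binary PSO loop of the kind the paper builds on; for HUIM, each bit would flag one item's membership in a candidate itemset and fitness would be that itemset's utility in the transaction database. Everything below is an illustrative default, and the paper's MP-tree pruning is omitted:

        import numpy as np

        def binary_pso(fitness, n_bits, swarm=30, iters=100,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.integers(0, 2, (swarm, n_bits))
            V = np.zeros((swarm, n_bits))
            pbest, pval = X.copy(), np.array([fitness(x) for x in X])
            gbest = pbest[pval.argmax()].copy()          # maximize utility
            for _ in range(iters):
                r1, r2 = rng.random(X.shape), rng.random(X.shape)
                V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
                V = np.clip(V, -6.0, 6.0)                # keep the sigmoid well-scaled
                X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
                vals = np.array([fitness(x) for x in X])
                improved = vals > pval
                pbest[improved], pval[improved] = X[improved], vals[improved]
                gbest = pbest[pval.argmax()].copy()
            return gbest, pval.max()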