WorldWideScience

Sample records for deutsch-jozsa algorithm implemented

  1. A Cavity QED Implementation of Deutsch-Jozsa Algorithm

    OpenAIRE

    Guerra, E. S.

    2004-01-01

    The Deutsch-Jozsa algorithm is a generalization of the Deutsch algorithm, which was the first quantum algorithm ever proposed. We present schemes to implement the Deutsch algorithm and the Deutsch-Jozsa algorithm via cavity QED.
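    A quick way to see the algorithm at work is a direct state-vector simulation. The sketch below is our own illustration, not from the paper; the function name and example oracles are hypothetical:

    ```python
    import numpy as np

    def deutsch_jozsa(f, n):
        """Decide whether f: {0,...,2^n - 1} -> {0, 1} is constant or balanced
        using a single (phase-)oracle application, simulated with a state vector."""
        N = 2 ** n
        # Hadamards on |0...0> give the uniform superposition over all inputs
        state = np.full(N, 1 / np.sqrt(N))
        # One oracle call: |x> -> (-1)^{f(x)} |x>
        state *= np.array([(-1) ** f(x) for x in range(N)])
        # After the final Hadamards, the amplitude of |0...0> is the mean phase
        amp0 = state.sum() / np.sqrt(N)
        # |amp0|^2 is 1 for constant f and 0 for balanced f
        return "constant" if abs(amp0) ** 2 > 0.5 else "balanced"

    print(deutsch_jozsa(lambda x: 1, 3))      # constant oracle -> "constant"
    print(deutsch_jozsa(lambda x: x & 1, 3))  # balanced oracle -> "balanced"
    ```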

  2. Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables

    International Nuclear Information System (INIS)

    Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter

    2010-01-01

    We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.

  3. Realization of seven-qubit Deutsch-Jozsa algorithm on NMR quantum computer

    International Nuclear Information System (INIS)

    Wei Daxiu; Yang Xiaodong; Luo Jun; Sun Xianping; Zeng Xizhi; Liu Maili; Ding Shangwu

    2002-01-01

    In recent years, remarkable progress has been made in the experimental realization of quantum information processing, especially based on nuclear magnetic resonance (NMR). Among quantum algorithms, the Deutsch-Jozsa algorithm has been widely studied. It can be realized on an NMR quantum computer and can also be simplified by using Cirac's scheme. The principle of the Deutsch-Jozsa quantum algorithm is analyzed first; the authors then implement the seven-qubit Deutsch-Jozsa algorithm on an NMR quantum computer

  4. Implementing Deutsch-Jozsa algorithm using light shifts and atomic ensembles

    International Nuclear Information System (INIS)

    Dasgupta, Shubhrangshu; Biswas, Asoka; Agarwal, G.S.

    2005-01-01

    We present an optical scheme to implement the Deutsch-Jozsa algorithm using ac Stark shifts. The scheme uses an atomic ensemble consisting of four-level atoms interacting dispersively with a field. This leads to a Hamiltonian in the atom-field basis which is quite suitable for quantum computation. We show how one can implement the algorithm by performing proper one- and two-qubit operations. We emphasize that in our model decoherence is expected to be minimal because we use atomic ground states and freely propagating photons

  5. Discrimination of unitary transformations in the Deutsch-Jozsa algorithm: Implications for thermal-equilibrium-ensemble implementations

    International Nuclear Information System (INIS)

    Collins, David

    2010-01-01

    A general framework for regarding oracle-assisted quantum algorithms as tools for discriminating among unitary transformations is described. This framework is applied to the Deutsch-Jozsa problem and all possible quantum algorithms which solve the problem with certainty using oracle unitaries in a particular form are derived. It is also used to show that any quantum algorithm that solves the Deutsch-Jozsa problem starting with a quantum system in a particular class of initial, thermal equilibrium-based states of the type encountered in solution-state NMR can only succeed with greater probability than a classical algorithm when the problem size n exceeds ∼10⁵.

  6. Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms

    Science.gov (United States)

    Johansson, Niklas; Larsson, Jan-Åke

    2017-09-01

    A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problem, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable in a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problem do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.

  7. Implementation schemes in NMR of quantum processors and the Deutsch-Jozsa algorithm by using virtual spin representation

    International Nuclear Information System (INIS)

    Kessel, Alexander R.; Yakovleva, Natalia M.

    2002-01-01

    Schemes for the experimental realization of the main two-qubit processors for quantum computers and of the Deutsch-Jozsa algorithm are derived in the virtual spin representation. The results are applicable to any four quantum states possessing the properties required for quantum processor implementation, provided the virtual spin representation is used for qubit encoding. A four-dimensional Hilbert space of nuclear spin 3/2 is considered in detail for this aim

  8. Implementation of a three-qubit refined Deutsch-Jozsa algorithm using SFG quantum logic gates

    International Nuclear Information System (INIS)

    Duce, A Del; Savory, S; Bayvel, P

    2006-01-01

    In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another

  9. Implementation of a three-qubit refined Deutsch-Jozsa algorithm using SFG quantum logic gates

    Energy Technology Data Exchange (ETDEWEB)

    Duce, A Del; Savory, S; Bayvel, P [Department of Electronic and Electrical Engineering, University College London, Torrington Place, London WC1E 7JE (United Kingdom)

    2006-05-31

    In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another.

  10. Implementation of a three-qubit refined Deutsch Jozsa algorithm using SFG quantum logic gates

    Science.gov (United States)

    DelDuce, A.; Savory, S.; Bayvel, P.

    2006-05-01

    In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another.

  11. Quantum Cryptography Based on the Deutsch-Jozsa Algorithm

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao; Farouk, Ahmed

    2017-09-01

    Recently, secure quantum key distribution based on Deutsch's algorithm using the Bell state was reported (Nagata and Nakamura, Int. J. Theor. Phys. doi: 10.1007/s10773-017-3352-4, 2017). Our aim is to extend this result to a multipartite system. In this paper, we propose a highly speedy key distribution protocol. We present secure quantum key distribution based on a special Deutsch-Jozsa algorithm using Greenberger-Horne-Zeilinger states. Bob has promised to use a function f which is one of two kinds: either the value of f(x) is constant for all values of x, or else the value of f(x) is balanced, that is, equal to 1 for exactly half of the possible x and 0 for the other half. Here, we introduce an additional condition on the function when it is balanced. Our quantum key distribution outperforms its classical counterpart by a factor of O(2^N).
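    The O(2^N) factor quoted above is the classical worst-case query cost of the constant-versus-balanced promise. As a hedged point of comparison (our own sketch, not part of the paper's protocol), a deterministic classical test needs up to 2^(N-1) + 1 oracle queries, whereas the quantum algorithm needs one:

    ```python
    def classify_classically(f, n):
        """Deterministic classical test: query until a differing value is seen
        or the worst-case budget of 2^(n-1) + 1 queries is exhausted."""
        limit = 2 ** (n - 1) + 1
        first = f(0)
        for x in range(1, limit):
            if f(x) != first:
                return "balanced", x + 1   # queries used so far
        return "constant", limit

    print(classify_classically(lambda x: 0, 4))       # ('constant', 9)
    print(classify_classically(lambda x: x >> 3, 4))  # ('balanced', 9)
    ```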

  12. Non-Markovianity-assisted high-fidelity Deutsch-Jozsa algorithm in diamond

    Science.gov (United States)

    Dong, Yang; Zheng, Yu; Li, Shen; Li, Cong-Cong; Chen, Xiang-Dong; Guo, Guang-Can; Sun, Fang-Wen

    2018-01-01

    The memory effects in non-Markovian quantum dynamics can induce the revival of quantum coherence, which is believed to provide important physical resources for quantum information processing (QIP). However, no real quantum algorithms have been demonstrated with the help of such memory effects. Here, we experimentally implemented a non-Markovianity-assisted high-fidelity refined Deutsch-Jozsa algorithm (RDJA) with a solid spin in diamond. The memory effects can induce pronounced non-monotonic variations in the RDJA results, which were confirmed to follow a non-Markovian quantum process by measuring the non-Markovianity of the spin system. By applying the memory effects as physical resources with the assistance of dynamical decoupling, the probability of success of the RDJA was elevated above 97% in the open quantum system. This study not only demonstrates that non-Markovianity is an important physical resource but also presents a feasible way to employ it. It will stimulate the application of memory effects in non-Markovian quantum dynamics to improve the performance of practical QIP.

  13. Quantum computation with classical light: Implementation of the Deutsch–Jozsa algorithm

    International Nuclear Information System (INIS)

    Perez-Garcia, Benjamin; McLaren, Melanie; Goyal, Sandeep K.; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas

    2016-01-01

    Highlights: • An implementation of the Deutsch–Jozsa algorithm using classical optics is proposed. • Constant and certain balanced functions can be encoded and distinguished efficiently. • The encoding and detection processes do not require access to single-path qubits. • While the scheme might be scalable in principle, it might not be in practice. • We suggest a generalisation of the Deutsch–Jozsa algorithm and its implementation. - Abstract: We propose an optical implementation of the Deutsch–Jozsa algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier-transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.
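    Numerically, the lens readout amounts to checking the zero spatial frequency of the field (-1)^f(x): all intensity lands there when f is constant and none when f is balanced. A toy model of that readout (our own, not the authors' code):

    ```python
    import numpy as np

    def lens_readout(f, n):
        """Fourier-transform the phase pattern (-1)^f(x); the zero-order
        intensity is 1 for constant f and 0 for balanced f."""
        field = np.array([(-1) ** f(x) for x in range(2 ** n)], dtype=float)
        spectrum = np.fft.fft(field) / len(field)
        return "constant" if abs(spectrum[0]) ** 2 > 0.5 else "balanced"

    print(lens_readout(lambda x: 0, 4))                      # constant
    print(lens_readout(lambda x: bin(x).count("1") & 1, 4))  # balanced (parity)
    ```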

  14. Quantum computation with classical light: Implementation of the Deutsch–Jozsa algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Garcia, Benjamin [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); McLaren, Melanie [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Goyal, Sandeep K. [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); Institute of Quantum Science and Technology, University of Calgary, Alberta T2N 1N4 (Canada); Hernandez-Aranda, Raul I. [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); Forbes, Andrew [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Konrad, Thomas, E-mail: konradt@ukzn.ac.za [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); National Institute of Theoretical Physics, Durban Node, Private Bag X54001, Durban 4000 (South Africa)

    2016-05-20

    Highlights: • An implementation of the Deutsch–Jozsa algorithm using classical optics is proposed. • Constant and certain balanced functions can be encoded and distinguished efficiently. • The encoding and detection processes do not require access to single-path qubits. • While the scheme might be scalable in principle, it might not be in practice. • We suggest a generalisation of the Deutsch–Jozsa algorithm and its implementation. - Abstract: We propose an optical implementation of the Deutsch–Jozsa algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier-transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.

  15. Initialization-free generalized Deutsch-Jozsa algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Dong Pyo [School of Mathematical Sciences, Seoul National University, Seoul (Korea, Republic of)]. E-mail: dpchi@math.snu.ac.kr; Kim, Jinsoo [School of Electrical Engineering and Computer Science, Seoul National University, Seoul (Korea)]. E-mail: jkim@ee.snu.ac.kr; Lee, Soojoon [School of Mathematical Sciences, Seoul National University, Seoul (Korea)]. E-mail: level@math.snu.ac.kr

    2001-06-29

    We generalize the Deutsch-Jozsa algorithm by exploiting summations of the roots of unity. The generalized algorithm distinguishes a wider class of functions promised to be either constant or many-to-one and onto an evenly spaced range. As before, the generalized quantum algorithm solves this problem using a single functional evaluation. We also consider the problem of distinguishing constant and evenly balanced functions and present a quantum algorithm for this problem that does not require any initialization of the auxiliary register involved in the functional evaluation and, after solving the problem, recovers the initial state of that register. (author)
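    The exponential sum behind the generalization is easy to state: for f mapping Z_N into Z_M, promised constant or evenly balanced, S = sum_x exp(2*pi*i*f(x)/M) has |S| = N in the constant case and S = 0 when every value occurs equally often. A toy classical rendering of that test (ours, not the authors' quantum circuit):

    ```python
    import numpy as np

    def classify(f, N, M):
        """Roots-of-unity test: |S| = N for constant f, S = 0 for an f that
        takes each value 0..M-1 equally often."""
        S = sum(np.exp(2j * np.pi * f(x) / M) for x in range(N))
        return "constant" if abs(S) > N / 2 else "evenly balanced"

    print(classify(lambda x: 2, N=12, M=4))      # constant
    print(classify(lambda x: x % 4, N=12, M=4))  # evenly balanced
    ```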

  16. The continuous-variable Deutsch–Jozsa algorithm using realistic quantum systems

    International Nuclear Information System (INIS)

    Wagner, Rob C; Kendon, Viv M

    2012-01-01

    This paper is a study of the continuous-variable Deutsch–Jozsa algorithm. First, we review an existing version of the algorithm for qunat states (Pati and Braunstein 2002 arXiv:quant-ph/0207108v1), and then we present a realistic version of the Deutsch–Jozsa algorithm for continuous variables, which can be implemented in a physical quantum system given the appropriate oracle. Under these conditions, we have a probabilistic algorithm that decides the function with a very high success rate using a single call to the oracle. Finally, we look at the effects of errors in both of these continuous-variable algorithms and how they affect the chances of success. We find that the algorithm is generally robust to errors in initialization and the oracle, but less so to errors in the measurement apparatus and the Fourier transform. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Coherent states: mathematical and physical aspects’. (paper)

  17. Quantum entanglement and quantum computational algorithms

    Indian Academy of Sciences (India)

    We demonstrate that the one- and two-bit Deutsch-Jozsa algorithms do not require entanglement and can be mapped onto a classical optical scheme. It is only for three and more input bits that the DJ algorithm requires the implementation of entangling transformations and in these cases it is impossible to implement ...

  18. Deterministic implementations of single-photon multi-qubit Deutsch–Jozsa algorithms with linear optics

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Hai-Rui, E-mail: hrwei@ustb.edu.cn; Liu, Ji-Zhen

    2017-02-15

    It is very important to seek efficient and robust quantum algorithms demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch–Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely from linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the number of necessary photons is reduced from three to one. Our linear-optical schemes work in a deterministic way, and they are feasible with current experimental technology.

  19. Optical simulation of quantum algorithms using programmable liquid-crystal displays

    International Nuclear Information System (INIS)

    Puentes, Graciana; La Mela, Cecilia; Ledesma, Silvia; Iemmi, Claudio; Paz, Juan Pablo; Saraceno, Marcos

    2004-01-01

    We present a scheme to perform an all-optical simulation of quantum algorithms and maps. The main components are lenses to efficiently implement the Fourier transform and programmable liquid-crystal displays to introduce space-dependent phase changes on a classical optical beam. We show how to simulate the Deutsch-Jozsa and Grover quantum algorithms using essentially the same optical array programmed in two different ways

  20. Quantum entanglement and quantum computational algorithms

    Indian Academy of Sciences (India)

    Abstract. The existence of entangled quantum states gives extra power to quantum computers over their classical counterparts. Quantum entanglement shows up qualitatively at the level of two qubits. We demonstrate that the one- and two-bit Deutsch-Jozsa algorithms do not require entanglement and can be mapped ...

  1. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    Science.gov (United States)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms, we assume that it is possible to perfectly generate superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions over input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and for evenly-balanced functions, work with the one-sided error property. For Simon's algorithm, the success probability of the generalized algorithm with equiprobable superpositions over input sets of arbitrary cardinality is the same as that of the original, since the key property (that, for a 2-to-1 function, every measured string has dot product zero with the string we search for) is not lost.

  2. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superpositions of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.

  3. Quantum computation with classical light: The Deutsch Algorithm

    International Nuclear Information System (INIS)

    Perez-Garcia, Benjamin; Francis, Jason; McLaren, Melanie; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas

    2015-01-01

    We present an implementation of the Deutsch algorithm using linear optical elements and laser light. We encoded two quantum bits in the form of superpositions of electromagnetic fields in two degrees of freedom of the beam: its polarisation and orbital angular momentum. Our approach, based on a Sagnac interferometer, offers outstanding stability and demonstrates that optical quantum computation is possible using classical states of light. - Highlights: • We implement the Deutsch algorithm using linear optical elements and classical light. • Our qubits are encoded in the polarisation and orbital angular momentum of the beam. • We show that it is possible to achieve quantum computation with two qubits in the classical domain of light

  4. Quantum computation with classical light: The Deutsch Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Garcia, Benjamin [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Francis, Jason [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); McLaren, Melanie [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Hernandez-Aranda, Raul I. [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); Forbes, Andrew [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Konrad, Thomas, E-mail: konradt@ukzn.ac.za [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); National Institute of Theoretical Physics, Durban Node, Private Bag X54001, Durban 4000 (South Africa)

    2015-08-28

    We present an implementation of the Deutsch algorithm using linear optical elements and laser light. We encoded two quantum bits in the form of superpositions of electromagnetic fields in two degrees of freedom of the beam: its polarisation and orbital angular momentum. Our approach, based on a Sagnac interferometer, offers outstanding stability and demonstrates that optical quantum computation is possible using classical states of light. - Highlights: • We implement the Deutsch algorithm using linear optical elements and classical light. • Our qubits are encoded in the polarisation and orbital angular momentum of the beam. • We show that it is possible to achieve quantum computation with two qubits in the classical domain of light.

  5. Realization of Deutsch-like algorithm using ensemble computing

    International Nuclear Information System (INIS)

    Wei Daxiu; Luo Jun; Sun Xianping; Zeng Xizhi

    2003-01-01

    The Deutsch-like algorithm [Phys. Rev. A 63 (2001) 034101] distinguishes between even and odd query functions using fewer function calls than its classical counterpart in a two-qubit system, but the same method cannot be applied to a multi-qubit system. We propose a new approach for solving the Deutsch-like problem using ensemble computing. The proposed algorithm needs an ancillary qubit and can easily be extended to multi-qubit systems with one query. Our ensemble algorithm, beginning with an easily prepared initial state, has three main steps. The classification of the functions can be obtained directly from the spectra of the ancilla qubit. We also demonstrate the new algorithm in a four-qubit molecular system using nuclear magnetic resonance (NMR). One hydrogen and three carbons are selected as the four qubits, with one of the carbons as the ancilla qubit. We chose two unitary transformations, corresponding to two functions (one odd and one even), to validate the ensemble algorithm. The results show that the experiment was successful and that our ensemble algorithm for solving the Deutsch-like problem is feasible

  6. Interfacing external quantum devices to a universal quantum computer.

    Directory of Open Access Journals (Sweden)

    Antonio A Lagana

    We present a scheme for using external quantum devices with the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well-known oracle-based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and Grover algorithms, using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer.

  7. A strategy for quantum algorithm design assisted by machine learning

    Science.gov (United States)

    Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung

    2014-07-01

    We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.

  8. Read-only-memory-based quantum computation: Experimental explorations using nuclear magnetic resonance and future prospects

    International Nuclear Information System (INIS)

    Sypher, D.R.; Brereton, I.M.; Wiseman, H.M.; Hollis, B.L.; Travaglione, B.C.

    2002-01-01

    Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less 'magical', and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature 'real-world' problem relating to the lengths of paths in a network

  9. Demonstration of two-qubit algorithms with a superconducting quantum processor.

    Science.gov (United States)

    DiCarlo, L; Chow, J M; Gambetta, J M; Bishop, Lev S; Johnson, B R; Schuster, D I; Majer, J; Blais, A; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2009-07-09

    Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact-such as factoring large numbers and searching databases. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to meet simultaneously requirements that are in conflict: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance, cold ion trap and optical systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch-Jozsa quantum algorithms. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.
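    For reference, the "concurrence up to 94 per cent" quoted above is Wootters' two-qubit entanglement measure; it can be computed from a density matrix in a few lines (a standard formula, independent of the paper):

    ```python
    import numpy as np

    def concurrence(rho):
        """Wootters' concurrence C = max(0, l1 - l2 - l3 - l4), where the l_i
        are the decreasingly sorted square roots of the eigenvalues of
        rho * (sy x sy) rho* (sy x sy)."""
        sy = np.array([[0, -1j], [1j, 0]])
        flip = np.kron(sy, sy)
        rho_tilde = flip @ rho.conj() @ flip
        lam = np.sqrt(np.maximum(np.linalg.eigvals(rho @ rho_tilde).real, 0))
        lam = np.sort(lam)[::-1]
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    # Bell state (|00> + |11>)/sqrt(2) has concurrence 1
    bell = np.zeros((4, 4))
    bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
    print(concurrence(bell))   # ~1.0
    ```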

  10. Cartoon computation: quantum-like computing without quantum mechanics

    International Nuclear Information System (INIS)

    Aerts, Diederik; Czachor, Marek

    2007-01-01

    We present a computational framework based on geometric structures. No quantum mechanics is involved, and yet the algorithms perform tasks analogous to quantum computation. Tensor products and entangled states are not needed; they are replaced by sets of basic shapes. To test the formalism we solve in geometric terms the Deutsch-Jozsa problem, historically the first example that demonstrated the potential power of quantum computation. Each step of the algorithm has a clear geometric interpretation and allows for a cartoon representation. (fast track communication)

  11. Demonstration of essentiality of entanglement in a Deutsch-like quantum algorithm

    Science.gov (United States)

    Huang, He-Liang; Goswami, Ashutosh K.; Bao, Wan-Su; Panigrahi, Prasanta K.

    2018-06-01

    Quantum algorithms can be used to efficiently solve certain classically intractable problems by exploiting quantum parallelism. However, the effectiveness of quantum entanglement in quantum computing remains a question of debate. This study presents a new quantum algorithm that shows entanglement could provide advantages over both classical algorithms and quantum algorithms without entanglement. Experiments are implemented to demonstrate the proposed algorithm using superconducting qubits. Results show the viability of the algorithm and suggest that entanglement is essential in obtaining quantum speedup for certain problems in quantum computing. The study provides reliable and clear guidance for developing useful quantum algorithms.

  12. Introduction to quantum information science

    CERN Document Server

    Hayashi, Masahito; Kawachi, Akinori; Kimura, Gen; Ogawa, Tomohiro

    2015-01-01

    This book presents the basics of quantum information, e.g., the foundation of quantum theory, quantum algorithms, quantum entanglement, quantum entropies, quantum coding, quantum error correction and quantum cryptography. The required knowledge is only elementary calculus and linear algebra, so the book can be understood by undergraduate students. In order to study quantum information, one usually has to study the foundation of quantum theory. This book describes it from a more operational viewpoint, which is suitable for quantum information and which traditional textbooks of quantum theory lack. The book treats Shor's algorithm, Grover's algorithm and the Deutsch-Jozsa algorithm as basic algorithms. To treat several topics in quantum information, this book covers several kinds of information quantities in quantum systems, including the von Neumann entropy. The limits of several kinds of quantum information processing are given. As important quantum protocols, this book covers quantum teleportation...

  13. Deutsche Sprache - schwere Sprache?

    OpenAIRE

    Wegener, Heide

    1991-01-01

    Deutsche Sprache - schwere Sprache? [German language - difficult language?]: problems of (not only) Japanese students with noun inflection. - In: Informationen Deutsch als Fremdsprache 18 (1991), pp. 420-437. - Also published in: Waseda-Universität: Jahrbuch, 1991.

  14. Research progress on quantum informatics and quantum computation

    Science.gov (United States)

    Zhao, Yusheng

    2018-03-01

    Quantum informatics is an emerging interdisciplinary subject that developed from the combination of quantum mechanics, information science, and computer science in the 1980s. The birth and development of quantum information science has far-reaching significance in science and technology. At present, applying quantum information technology is the direction of many efforts. The preparation, storage, purification and regulation, transmission, and coding and decoding of quantum states have become hotspots for scientists and engineers, with a profound impact on the national economy, people's livelihood, and defense technology. This paper first summarizes the background of quantum information science and quantum computers and the current state of research at home and abroad, and then introduces the basic knowledge and concepts of quantum computing. Finally, several quantum algorithms are introduced in detail, including the quantum Fourier transform, the Deutsch-Jozsa algorithm, Shor's quantum algorithm, and quantum phase estimation.
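    Among the algorithms listed, the quantum Fourier transform has the most compact mathematical description: on n qubits it is just the unitary DFT matrix F[j,k] = omega^(jk)/sqrt(N) with omega = exp(2*pi*i/N). A short numpy sketch (our illustration, not from the entry):

    ```python
    import numpy as np

    def qft_matrix(n):
        """Unitary of the quantum Fourier transform on n qubits:
        F[j, k] = exp(2*pi*i*j*k/N) / sqrt(N) with N = 2^n."""
        N = 2 ** n
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

    F = qft_matrix(2)
    print(np.allclose(F @ F.conj().T, np.eye(4)))   # True: F is unitary
    ```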

  15. Demonstration of quantum advantage in machine learning

    Science.gov (United States)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.

  16. Health Information in German (Deutsch)

    Science.gov (United States)

    Health information in German (Deutsch) from MedlinePlus: https://medlineplus.gov/languages/german.html

  17. Deutsch-dänische Kulturbrille

    DEFF Research Database (Denmark)

    Müller, Katarina Le; Hallsteinsdóttir, Erla

    ...project Nationale Stereotype und Marketingstrategien in der deutsch-dänischen interkulturellen Kommunikation (SMiK). With this guide, we make the knowledge about German-Danish stereotypical patterns of action, developed and scientifically validated in the SMiK project, available to small and medium-sized...

  18. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1).
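    For concreteness, here is a compact version of the Needleman-Wunsch dynamic program described above (our own sketch in Python; the scoring values are illustrative assumptions, not the paper's):

    ```python
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
        """Global alignment score via the Needleman-Wunsch dynamic program."""
        n, m = len(a), len(b)
        # score[i][j] = best score aligning a[:i] with b[:j]
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        return score[n][m]

    print(needleman_wunsch("GATTACA", "GCATGCU"))   # optimal global score
    ```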

  19. Powering FITL for Deutsche Telekom

    Energy Technology Data Exchange (ETDEWEB)

    Schulz, W. [T-Nova Deutsche Telekom Innovationsgesellschaft mbH Technologiezentrum (Germany)

    2000-07-01

    Deutsche Telekom introduced the OPAL (optical access line) project 8 years ago. The development of new applications for FITL (FTTH, FTTB, FTTC) leads to new challenges for power systems. For the establishment of an optical fibre infrastructure at the subscriber line level, a wide variety of optical fibre network topologies can be imagined. Different powering architectures must be developed, including remote or local powering. This paper presents results and performance figures for the powering configurations used to feed the optical network units (ONUs) and remote OLTs. Compared with conventional powering, centralised powering with a remote feeding supply was implemented to power the ONUs in the field economically. (orig.)

  20. Das Deutsch-Swahili Wörterbuch

    OpenAIRE

    Mdee, James S.

    2012-01-01

    The Deutsch-Swahili Wörterbuch (DSW) is a bilingual German-Swahili dictionary compiled by Karsten Legère and first published in 1990. It is aimed at the German student of Swahili and, to a lesser degree, at Swahili speakers who are advanced learners of German. The former use the dictionary for encoding Swahili and for translating German texts into Swahili; the latter use it to decode German.

  1. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  2. Morton Deutsch (1920-2017).

    Science.gov (United States)

    Coleman, Peter T

    2018-01-01

    Presents an obituary for Morton Deutsch, who died March 13, 2017, at 97 years old. Deutsch believed in the power of ideas to rectify serious social problems, and in the role of science to refine our understanding of those ideas. Ranked among the 100 most eminent psychologists of the 20th century, he was a distinguished theorist and pioneer in the study of cooperation, conflict resolution and social justice, as well as a remarkably warm, wise and respectful mentor. Deutsch held numerous leadership positions, including faculty positions at Teachers College, Columbia University and New York University and various presidencies, and accumulated dozens of awards, including eight lifetime achievement awards and the creation of four awards in his name. He also trained as a psychoanalyst and had a private practice for many years. In 1986, he founded the International Center for Cooperation and Conflict Resolution at Columbia, where he continued to work and welcome students well into his 90s. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. A strategy for quantum algorithm design assisted by machine learning

    International Nuclear Information System (INIS)

    Bang, Jeongho; Lee, Jinhyoung; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin

    2014-01-01

    We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum–classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch–Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method. (paper)

  4. Neues aus dem Forschungsfeld Deutsch als Zweitsprache. Sammelrezension

    Directory of Open Access Journals (Sweden)

    Claus Altmayer

    2015-03-01

    New work from the research field of German as a second language: a collective review (part 2) of Bernt Ahrenholz (ed.) (2009), Empirische Befunde zu DaZ-Erwerb und Sprachförderung. Beiträge aus dem 3. 'Workshop Kinder mit Migrationshintergrund'; Karen Schramm & Christoph Schröder (eds.) (2009), Empirische Zugänge zu Spracherwerb und Sprachförderung in Deutsch als Zweitsprache; Stefan Jeuk (2010), Deutsch als Zweitsprache in der Schule. Grundlagen - Diagnose - Förderung

  5. An implementation of the Heaviside algorithm

    International Nuclear Information System (INIS)

    Dimovski, I.H.; Spiridonova, M.N.

    2011-01-01

    The so-called Heaviside algorithm based on the operational calculus approach is intended for solving initial value problems for linear ordinary differential equations with constant coefficients. We use it in the framework of Mikusinski's operational calculus. A description and implementation of the Heaviside algorithm using a computer algebra system are considered. Special attention is paid to the features making this implementation efficient. Illustrative examples are included
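    As a point of comparison (not the authors' Mikusinski-based implementation), the same class of initial value problems can be solved in a modern computer algebra system; a hedged sympy sketch for a typical constant-coefficient IVP:

    ```python
    import sympy as sp

    t = sp.symbols("t")
    y = sp.Function("y")
    # A typical Heaviside-class problem: y'' + y = t, y(0) = 0, y'(0) = 1
    ode = sp.Eq(y(t).diff(t, 2) + y(t), t)
    sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 1})
    print(sol)   # Eq(y(t), t)
    ```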

  6. Re-Purposing an OER for the Online Language Course: A Case Study of "Deutsch Interaktiv" by the Deutsche Welle

    Science.gov (United States)

    Dixon, Edward M.; Hondo, Junko

    2014-01-01

    This paper will describe pedagogical approaches for re-purposing an open educational resource (OER) designed and produced by the Deutsche Welle. This free online program, "Deutsch Interaktiv," consists of authentic digital videos, slideshows and audio texts and gives a contemporary overview of the culture and language in Germany, Austria…

  7. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...
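    As a flavour of the book's subject matter, here is a minimal LMS adaptive filter used for system identification (our own sketch; the names and parameters are illustrative, not taken from the text):

    ```python
    import numpy as np

    def lms(x, d, taps=4, mu=0.05):
        """Least-mean-squares adaptive filter: adapt w so that the filtered
        input x tracks the desired signal d."""
        w = np.zeros(taps)
        for n in range(taps - 1, len(x)):
            u = x[n - taps + 1 : n + 1][::-1]   # most recent sample first
            e = d[n] - w @ u                    # instantaneous error
            w += 2 * mu * e * u                 # stochastic-gradient update
        return w

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    h = np.array([0.6, -0.3, 0.1, 0.05])        # unknown system to identify
    d = np.convolve(x, h)[: len(x)]
    print(np.round(lms(x, d), 2))               # converges close to h
    ```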

  8. Categorizing Variations of Student-Implemented Sorting Algorithms

    Science.gov (United States)

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

    In this study, we examined freshmen students' sorting algorithm implementations in a data structures and algorithms course in two phases: at the beginning of the course, before the students received any instruction on sorting algorithms, and after a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  9. Object-Oriented Implementation of Adaptive Mesh Refinement Algorithms

    Directory of Open Access Journals (Sweden)

    William Y. Crutchfield

    1993-01-01

    We describe C++ classes that simplify the development of adaptive mesh refinement (AMR) algorithms. The classes divide into two groups: generic classes that are broadly useful in adaptive algorithms, and application-specific classes that are the basis for our AMR algorithm. We employ two languages, with C++ responsible for the high-level data structures and Fortran responsible for the low-level numerics. The C++ implementation is as fast as the original Fortran implementation. Use of inheritance has allowed us to extend the original AMR algorithm to other problems with greatly reduced development time.

  10. Importance and use of an environment management system implementation and innovative optimization approaches using the example of the RAG Deutsche Steinkohle AG; Bedeutung und Nutzung der Implementierung eines Umweltmanagementsystems und innovative Ansaetze zur Optimierung am Beispiel der RAG Deutsche Steinkohle AG

    Energy Technology Data Exchange (ETDEWEB)

    Polysos, Julia

    2014-07-01

    Environmental protection is an important business objective at RAG Deutsche Steinkohle AG (DSK). The company's management intends a company-wide certification according to DIN ISO 14001. The law on hard coal financing provides for a continuous and socially compatible staff reduction. The company aims to manage the hard coal phase-out process with a sustainable handling of long-term contamination and with public acceptance. The optimization potential includes the implementation and continuation of an area-wide environmental management system, the realization of commissioning management, and environmental evaluation.

  11. [Bernt Ahrenholz : Verweise mit Demonstrativa im Gesprochenen Deutsch...] / Klaus Geyer

    Index Scriptorium Estoniae

    Geyer, Klaus

    2008-01-01

    Review of: Ahrenholz, Bernt. Verweise mit Demonstrativa im gesprochenen Deutsch : Grammatik, Zweitspracherwerb und Deutsch als Fremdsprache. Berlin ; New York : de Gruyter, 2007. (Linguistik - Impulse & Tendenzen ; 17)

  12. Deutsches Atomforum turns fifty; 50 Jahre Deutsches Atomforum

    Energy Technology Data Exchange (ETDEWEB)

    Geisler, Maja [Deutsches Atomforum e.V., Berlin (Germany). Bereich Oeffentlichkeitsarbeit, Informationskreis KernEnergie

    2009-07-15

    Fifty years ago, the Deutsches Atomforum e. V. was founded to promote the peaceful uses of nuclear power in Germany. On July 1, 2009, the organization celebrated its fiftieth birthday in Berlin. The anniversary was celebrated in the Berlin electricity plant, Germany's oldest existing building for commercial electricity generation. DAtF President Dr. Walter Hohlefelder welcomed some 200 high-ranking guests from politics, industry, and from the nuclear community, above all, the Chancellor of the Federal Republic of Germany, Dr. Angela Merkel, and, as keynote speaker, Professor Dr. Arnulf Baring. (orig.)

  13. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    Science.gov (United States)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication which is necessary to correctly implement AES [1]. This method can be applied on processors with word length 32 or above, on FPGAs, and elsewhere, and can correspondingly be implemented in VHDL, Verilog, VB and other languages.
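    The GF(2^8) multiplication mentioned above is exactly what the lookup tables precompute. A reference implementation of that operation (a standard construction, not the paper's table scheme):

    ```python
    def gf256_mul(a, b):
        """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1
        (0x11B); this is the operation the lookup tables precompute."""
        result = 0
        for _ in range(8):
            if b & 1:
                result ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B        # reduce modulo the AES polynomial
            b >>= 1
        return result

    print(hex(gf256_mul(0x57, 0x83)))   # 0xc1, the worked example in FIPS-197
    ```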

  14. Efficient Implementation Algorithms for Homogenized Energy Models

    National Research Council Canada - National Science Library

    Braun, Thomas R; Smith, Ralph C

    2005-01-01

    ... for real-time control implementation. In this paper, we develop algorithms employing lookup tables which permit the high speed implementation of formulations which incorporate relaxation mechanisms and electromechanical coupling...

  15. Efficient Implementation of Nested-Loop Multimedia Algorithms

    Directory of Open Access Journals (Sweden)

    Kittitornkun Surin

    2001-01-01

    A novel dependence graph representation called the multiple-order dependence graph for nested-loop-formulated multimedia signal processing algorithms is proposed. It allows a concise representation of an entire family of dependence graphs. This powerful representation facilitates the development of an innovative implementation approach for nested-loop-formulated multimedia algorithms such as motion estimation, matrix-matrix product, 2D linear transforms, and others. In particular, algebraic linear mapping (assignment and scheduling) methodology can be applied to implement such algorithms on an array of simple processing elements. The feasibility of this new approach is demonstrated for three major target architectures: application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and a programmable clustered VLIW processor.

  16. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

    It is shown that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method...
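    As a reminder of what ART does, here is the core Kaczmarz sweep for a generic dense system Ax = b (our own toy version, not the package's optimized code):

    ```python
    import numpy as np

    def art(A, b, sweeps=50, relax=1.0):
        """ART (Kaczmarz) iteration: project the estimate onto one row
        equation at a time."""
        x = np.zeros(A.shape[1])
        for _ in range(sweeps):
            for i in range(A.shape[0]):
                ai = A[i]
                x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(art(A, b))   # converges to the exact solution [0.8, 1.4]
    ```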

  17. Comparison of tracking algorithms implemented in OpenCV

    Directory of Open Access Journals (Sweden)

    Janku Peter

    2016-01-01

    Computer vision is a very progressive and modern part of computer science. From a scientific point of view, theoretical aspects of computer vision algorithms prevail in many papers and publications. The underlying theory is really important, but on the other hand, the final implementation of an algorithm significantly affects its performance and robustness. For this reason, this paper compares real implementations of tracking algorithms (one part of the computer vision problem) which can be found in the very popular OpenCV library. Moreover, the possibilities of optimization are discussed.
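    For orientation, typical OpenCV tracker usage looks like the sketch below. This is a hedged example: the constructor's location varies across OpenCV versions (cv2.TrackerKCF_create in opencv-contrib 3.x and 4.5+, cv2.legacy.TrackerKCF_create in some 4.x builds), and the video path is hypothetical:

    ```python
    import cv2

    cap = cv2.VideoCapture("video.avi")        # hypothetical input file
    ok, frame = cap.read()
    bbox = cv2.selectROI("select", frame)      # draw the initial object box
    tracker = cv2.TrackerKCF_create()          # may be cv2.legacy.* instead
    tracker.init(frame, bbox)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)     # per-frame tracking step
        if found:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:        # Esc quits
            break
    ```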

  18. Lernpunkt Deutsch--Stage 1.

    Science.gov (United States)

    Theil, Elvira

    1997-01-01

    Evaluates the first stage of "Lernpunkt Deutsch," a new three-stage German course designed for upper elementary and early secondary school. Describes the publisher's package of materials and the appropriateness of the course, utility of the different package elements, format of the materials, and assesses whether the course provides pedagogically…

  19. Mehrsprachigkeit und Schulerfolg – die europäische (deutsche ...

    African Journals Online (AJOL)

    ...primary school pupils speak only German, but almost 30% already state that they speak another language at home in addition to German. The variety of languages represented is also astonishing. Turkish is the most frequently represented, in Essen as well as in Hamburg; alongside it, Polish, Russian, ...

  20. Complex segregation analysis of craniomandibular osteopathy in Deutsch Drahthaar dogs.

    Science.gov (United States)

    Vagt, J; Distl, O

    2018-01-01

    This study investigated familial relationships among Deutsch Drahthaar dogs with craniomandibular osteopathy and examined the most likely mode of inheritance. Sixteen Deutsch Drahthaar dogs with craniomandibular osteopathy were diagnosed using clinical findings, radiography or computed tomography. All 16 dogs with craniomandibular osteopathy had one common ancestor. Complex segregation analyses rejected models explaining the segregation of craniomandibular osteopathy through random environmental variation, monogenic inheritance or an additive sex effect. Polygenic and mixed major gene models sufficiently explained the segregation of craniomandibular osteopathy in the pedigree analysis and offered the most likely hypotheses. The SLC37A2:c.1332C>T variant was not found in a sample of Deutsch Drahthaar dogs with craniomandibular osteopathy, nor in healthy controls. Craniomandibular osteopathy is an inherited condition in Deutsch Drahthaar dogs and the inheritance seems to be more complex than a simple Mendelian model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. AES ALGORITHM IMPLEMENTATION IN PROGRAMMING LANGUAGES

    Directory of Open Access Journals (Sweden)

    Luminiţa DEFTA

    2010-12-01

    Information encryption is the use of an algorithm to convert a plaintext message into an encrypted one. It is used to protect data against unauthorized access. Protected data can be stored on a media device or transmitted through the network. In this paper we describe a concrete implementation of the AES algorithm in the Java programming language (available from the Java Development Kit 6 libraries) and in C (using the OpenSSL library). AES (Advanced Encryption Standard) is a symmetric-key encryption algorithm formally adopted by the U.S. government; it was selected after a long standardization process.

  2. [Variation im heutigen Deutsch...] / Laura Tidrike

    Index Scriptorium Estoniae

    Tidrike, Laura

    2008-01-01

    Review of: Variation im heutigen Deutsch : Perspektiven für den Sprachunterricht / ed. by Eva Neuland. Frankfurt am Main : Lang, 2006. (Sprache - Kommunikation - Kultur. Soziolinguistische Beiträge ; Vol. 4)

  3. An analytic parton shower. Algorithms, implementation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Sebastian

    2012-06-15

    The realistic simulation of particle collisions is an indispensable tool for interpreting the data measured at high-energy colliders, for example the now-running Large Hadron Collider at CERN. Collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final-state partons, along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to make it possible to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We give a detailed description of the algorithms, their implementation, and the interfaces to the event generator WHIZARD. Moreover, we discuss the implementation of an MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions of our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  5. Prototype Implementation of Two Efficient Low-Complexity Digital Predistortion Algorithms

    Directory of Open Access Journals (Sweden)

    Timo I. Laakso

    2008-01-01

    Full Text Available Predistortion (PD) linearisers for microwave power amplifiers (PAs) are an important topic of research. With the ever larger bandwidths appearing today in modern WiMax standards as well as in multichannel base stations for 3GPP standards, the relatively simple nonlinear effect of a PA becomes a complex memory-including function, severely distorting the output signal. In this contribution, two digital PD algorithms are investigated for the linearisation of microwave PAs in mobile communications. The first one is an efficient and low-complexity algorithm based on a memoryless model, called the simplicial canonical piecewise linear (SCPWL) function, that describes the static nonlinear characteristic of the PA. The second algorithm is more general, approximating the pre-inverse filter of a nonlinear PA iteratively using a Volterra model. The first, simpler algorithm is suitable for compensation of amplitude compression and amplitude-to-phase conversion, for example, in mobile units with relatively small bandwidths. The second algorithm can be used to linearise PAs operating with larger bandwidths, thus exhibiting memory effects, for example, in multichannel base stations. A measurement testbed which includes a transmitter-receiver chain with a microwave PA is built for testing and prototyping of the proposed PD algorithms. In the testing phase, the PD algorithms are implemented using MATLAB (floating-point representation) and tested in record-and-playback mode. The iterative PD algorithm is then implemented on a Field Programmable Gate Array (FPGA) using fixed-point representation. The FPGA implementation allows the pre-inverse filter to be tested in a real-time mode. Measurement results show excellent linearisation capabilities of both the proposed algorithms in terms of adjacent channel power suppression. It is also shown that the fixed-point FPGA implementation of the iterative algorithm performs as well as the floating-point implementation.

  6. EV Charging Algorithm Implementation with User Price Preference

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bin; Hu, Boyang; Qiu, Charlie; Chu, Peter; Gadh, Rajit

    2015-02-17

    In this paper, we propose and implement a smart Electric Vehicle (EV) charging algorithm to control EV charging infrastructure according to users' price preferences. EVSEs (Electric Vehicle Supply Equipment), equipped with bidirectional communication devices and smart meters, can be remotely monitored by the proposed charging algorithm applied in the EV control center and a mobile app. On the server side, an ARIMA model is utilized to fit historical charging load data and perform day-ahead prediction. A pricing strategy with an energy bidding policy is proposed and implemented to generate a charging price list that is broadcast to EV users through the mobile app. On the user side, EV drivers can submit their price preferences and daily travel schedules to negotiate with the control center to consume the expected energy and minimize charging cost simultaneously. The proposed algorithm is tested and validated through experimental implementations in UCLA parking lots.
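
    As a sketch of the server-side forecasting step described above (assuming statsmodels and a synthetic hourly load series; the ARIMA order and the pricing rule are illustrative, not the paper's fitted model):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        hours = np.arange(24 * 14)                    # two weeks of hourly load history
        load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

        model = ARIMA(load, order=(2, 1, 2)).fit()    # fit historical charging load
        day_ahead = model.forecast(steps=24)          # day-ahead load prediction

        # Toy price list: higher predicted load -> higher price, broadcast to users
        price = 0.10 + 0.002 * (day_ahead - day_ahead.min())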

  7. FPGA Implementation of Computer Vision Algorithm

    OpenAIRE

    Zhou, Zhonghua

    2014-01-01

    Computer vision algorithms, which play a significant role in vision processing, are widely applied in many areas such as geological survey, traffic management and medical care. Most situations require the processing to be real-time, in other words, as fast as possible. Field Programmable Gate Arrays (FPGAs) have the advantage of a parallel programmable fabric, compared to the serial execution of CPUs, which makes the FPGA a perfect platform for implementing vision algorithms. The...

  8. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, which is unreadable and meaningless so that it cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, a monoalphabetic algorithm and an XOR algorithm are combined to form a super-encryption. The monoalphabetic algorithm works by changing a particular letter into a new letter based on existing keywords, while the XOR algorithm works by using the logic operation XOR. Since the monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern cryptographic algorithm, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so data integrity is still ensured.
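
    A minimal Python sketch of the combined scheme (the shuffled substitution alphabet and the XOR key are illustrative stand-ins for the keyword-based table in the paper):

        import random
        import string

        random.seed(42)
        LETTERS = string.ascii_uppercase
        SUB = dict(zip(LETTERS, random.sample(LETTERS, len(LETTERS))))
        INV = {v: k for k, v in SUB.items()}

        def super_encrypt(text, xor_key):
            substituted = "".join(SUB.get(c, c) for c in text.upper())
            return bytes(ord(c) ^ xor_key[i % len(xor_key)]
                         for i, c in enumerate(substituted))

        def super_decrypt(blob, xor_key):
            substituted = "".join(chr(b ^ xor_key[i % len(xor_key)])
                                  for i, b in enumerate(blob))
            return "".join(INV.get(c, c) for c in substituted)

        cipher = super_encrypt("HELLO WORLD", b"KEY")
        assert super_decrypt(cipher, b"KEY") == "HELLO WORLD"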

  9. Deutsches Atomforum e.V. annual report 1984

    International Nuclear Information System (INIS)

    1985-01-01

    A report on the various activities and tasks of Deutsches Atomforum (German Forum of Atomics) in 1984, including analyses, information and statements on the peaceful utilization of nuclear energy. It contains the most important data on nuclear energy in 1984; the organs of, and events organized by, Deutsches Atomforum; and operational results of power plants (total electricity generation in German power plants in 1984 rose by 40% compared to the previous year). Key points in the sectors engineering and industry, public relations and press, legal affairs and administration, economy and industry, and international cooperation are discussed. Information on the activities of the financing society for nuclear reactors is supplied as well. (HSCH) [de

  10. Remembering Albert deutsch, an advocate for mental health.

    Science.gov (United States)

    Weiss, Kenneth J

    2011-12-01

    Albert Deutsch, journalist, advocate for the mentally ill, and honorary APA Fellow, died 50 years ago. Author of The Mentally Ill in America and The Shame of the States, he believed in the obligation of individuals and institutions to advocate for patients. In 1961, he was in the midst of a vast project to assess the state of the art in psychiatric research. This article recalls aspects of Deutsch's life and work and places him in the historical context of individuals who have shown great compassion for disabled persons.

  11. Ich spreche Deutsch: A User's Report

    Science.gov (United States)

    Glassar, Sheila

    1969-01-01

    The textbook under discussion, "Ich spreche Deutsch" by Heinz Griesbach and Dora Schulz (London-Harlow: Longmans-Hueber, 1966), is intended to be a one-year introduction to German, particularly for less academic pupils and students. (FWB)

  12. A GPU-paralleled implementation of an enhanced face recognition algorithm

    Science.gov (United States)

    Chen, Hao; Liu, Xiyang; Shao, Shuai; Zan, Jiguo

    2013-03-01

    Face recognition based on compressed sensing and sparse representation has been hotly debated in recent years. This scheme increases the recognition rate as well as the anti-noise capability. However, the computational cost is expensive and has become a main restricting factor for real-world applications. In this paper, we introduce a GPU-accelerated hybrid variant of the algorithm, named the parallel face recognition algorithm (pFRA). We describe how to carry out the parallel optimization design to take full advantage of the many-core structure of a GPU. The pFRA is tested and compared with several other implementations under different data sample sizes. Finally, our pFRA, implemented with an NVIDIA GPU and the Compute Unified Device Architecture (CUDA) programming model, achieves a significant speedup over the traditional CPU implementations.

  13. Implementation of fuzzy logic control algorithm in embedded ...

    African Journals Online (AJOL)

    A fuzzy logic control algorithm solves problems that are difficult to address with traditional control techniques. This paper describes an implementation of a fuzzy logic control algorithm using inexpensive hardware, as well as how to use fuzzy logic to tackle a specific control problem without any special software tools. As a case ...

  14. Study of hardware implementations of fast tracking algorithms

    International Nuclear Information System (INIS)

    Song, Z.; Huang, G.; Wang, D.; Lentdecker, G. De; Dong, J.; Léonard, A.; Robert, F.; Yang, Y.

    2017-01-01

    Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern recognition and track fitting, artificial retina and Hough transformation methods have been introduced in the field, which have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation of the retina algorithm based on a floating-point core. Detailed measurements with this algorithm are presented. Retina performance and capabilities of the FPGA are discussed along with perspectives for further optimization and applications.
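
    For reference, the Hough-transform pattern recognition mentioned above can be sketched in a few lines of NumPy (a software illustration, not the FPGA firmware; hits vote in a (theta, rho) accumulator, and peaks correspond to track candidates):

        import numpy as np

        def hough_accumulator(hits, n_theta=180, n_rho=128, rho_max=10.0):
            # hits: array of (x, y) detector coordinates
            thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
            acc = np.zeros((n_theta, n_rho), dtype=np.int32)
            for x, y in hits:
                rho = x * np.cos(thetas) + y * np.sin(thetas)
                bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
                ok = (bins >= 0) & (bins < n_rho)
                acc[np.nonzero(ok)[0], bins[ok]] += 1   # one vote per theta per hit
            return acc, thetas

        # Points on the straight track y = 0.5 * x + 1 produce a single sharp peak
        hits = np.array([(x, 0.5 * x + 1.0) for x in np.linspace(0.0, 5.0, 20)])
        acc, thetas = hough_accumulator(hits)
        t_idx, r_idx = np.unravel_index(acc.argmax(), acc.shape)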

  15. Implementation of an Algorithm for Prosthetic Joint Infection: Deviations and Problems.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Kanz, Karl-Georg; Pohlig, Florian; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; von Eisenhart-Rothe, Ruediger; Schauwecker, Johannes

    The outcome of revision surgery in arthroplasty is based on a precise diagnosis. In addition, the treatment varies based on whether the prosthetic failure is caused by aseptic or septic loosening. Algorithms can help to identify periprosthetic joint infections (PJI) and standardize diagnostic steps, however, algorithms tend to oversimplify the treatment of complex cases. We conducted a process analysis during the implementation of a PJI algorithm to determine problems and deviations associated with the implementation of this algorithm. Fifty patients who were treated after implementing a standardized algorithm were monitored retrospectively. Their treatment plans and diagnostic cascades were analyzed for deviations from the implemented algorithm. Each diagnostic procedure was recorded, compared with the algorithm, and evaluated statistically. We detected 52 deviations while treating 50 patients. In 25 cases, no discrepancy was observed. Synovial fluid aspiration was not performed in 31.8% of patients (95% confidence interval [CI], 18.1%-45.6%), while white blood cell counts (WBCs) and neutrophil differentiation were assessed in 54.5% of patients (95% CI, 39.8%-69.3%). We also observed that the prolonged incubation of cultures was not requested in 13.6% of patients (95% CI, 3.5%-23.8%). In seven of 13 cases (63.6%; 95% CI, 35.2%-92.1%), arthroscopic biopsy was performed; 6 arthroscopies were performed in discordance with the algorithm (12%; 95% CI, 3%-21%). Self-critical analysis of diagnostic processes and monitoring of deviations using algorithms are important and could increase the quality of treatment by revealing recurring faults.

  16. "Deutsche Kultur" und Werbung

    OpenAIRE

    Schug, Alexander

    2010-01-01

    This study presents the history of modern commercial advertising in the first half of the 20th century and shows that, despite cultural barriers, advertising colonized the everyday worlds of Germans and exerted an influence on "German culture". It shows that the construct of "German culture" was not defined exclusively by bourgeois high culture, but was increasingly shaped by influences of consumer culture as well. The imagery of advertising shaped...

  17. Implementation of a partitioned algorithm for simulation of large CSI problems

    Science.gov (United States)

    Alvin, Kenneth F.; Park, K. C.

    1991-01-01

    The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.

  18. Implementations of PI-line based FBP and BPF algorithms on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Le [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Xing, Yuxiang [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Ministry of Education, Beijing (China). Key Lab. of Particle and Radiation Imaging

    2011-07-01

    Exact reconstruction is under the spotlight in cone beam CT. Katsevich put forward the first exact inversion formula for helical cone beam CT, which is of the FBP type. Also, Pan Xiaochuan's group proposed another PI-line based exact reconstruction algorithm, of the BPF type. These two exact reconstruction algorithms and their derivative forms have been widely studied. In this paper, we present a different way of selecting PI-line segments appropriate for both Katsevich's FBP and Pan Xiaochuan's BPF algorithms. As 3D reconstruction involves massive computation and takes a long time, efforts have been made to speed up the algorithms with the help of multi-core CPUs and the GPGPU (General Purpose Graphics Processing Unit). In this paper, we also present implementations of these two algorithms on a GPGPU using this innovative way of selecting PI-line segments. Acceleration techniques and implementations are addressed in detail. The methods are tested on the Shepp-Logan phantom. Compared with our CPU implementations, the accelerated algorithms on the GPGPU are tens to hundreds of times faster. (orig.)

  19. Lernen Wir Deutsch!: Part 2, German.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    Instructional objectives of the Dade County Public Schools Quinmester Program in German for use with "Lernen Wir Deutsch: Part 2" focus on development of basic skills through the use of short dialogues and structured exercises. The grammar of the course includes the study of nouns, pronouns, and verbs. Possessive determiners are…

  20. Hardware Implementation of Diamond Search Algorithm for Motion Estimation and Object Tracking

    International Nuclear Information System (INIS)

    Hashimaa, S.M.; Mahmoud, I.I.; Elazm, A.A.

    2009-01-01

    Object tracking is a very important task in computer vision. Fast search algorithms have emerged as an important technique to achieve real-time tracking results. To enhance the performance of these algorithms, we advocate their hardware implementation. Diamond search block matching motion estimation has been proposed recently to reduce the complexity of motion estimation. In this paper we selected the diamond search (DS) algorithm for implementation using an FPGA, due to its fundamental role in all fast search patterns. The proposed architecture is simulated and synthesized using the Xilinx and ModelSim software. The results agree with the algorithm's implementation in the Matlab environment.

  1. FPGA Implementation of a Frame Synchronization Algorithm for Powerline Communications

    Directory of Open Access Journals (Sweden)

    S. Tsakiris

    2009-09-01

    Full Text Available This paper presents an FPGA implementation of a pilot-based time synchronization scheme employing orthogonal frequency division multiplexing for powerline communication channels. The functionality of the algorithm is analyzed and tested over a real residential powerline network. For this purpose, an appropriate transmitter circuit, implemented on an FPGA, and suitable coupling circuits are constructed. The system has been developed in the VHDL language on Nallatech XtremeDSP development kits. The communication system operates in the baseband up to 30 MHz. Measurements of the algorithm's good performance, in terms of the number of detected frames and the timing offset error, are taken and compared to simulations of existing algorithms.

  2. Deutsche Shell AG. Annual report 1997

    International Nuclear Information System (INIS)

    1998-01-01

    This annual report of Deutsche Shell AG reflects its activities in the sectors of natural gas, mineral oil, chemicals and renewable energies. Environmental protection, safety at work, and the position of the group in society are further subjects. Financial data for 1997 are presented (balance sheet, profit-and-loss account, etc.). (orig./RHM) [de

  3. Lernen Wir Deutsch: Part I, German.

    Science.gov (United States)

    Dade County Public Schools, Miami, FL.

    Instructional objectives of the Dade County Public Schools Quinmester Program in German for use with "Lernen Wir Deutsch: Part 1" focus on the development of basic skills through the use of short dialogues and structured exercises. The contents of this guide focus on: (1) course description, (2) broad goals and performance objectives,…

  4. Parallel GPU implementation of iterative PCA algorithms.

    Science.gov (United States)

    Andrecut, M

    2009-11-01

    Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets, the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).
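
    A minimal NumPy sketch of the idea (power iteration with deflation plus an explicit Gram-Schmidt re-orthogonalization of each new loading against the previous ones; a serial illustration, not the authors' CUBLAS code):

        import numpy as np

        def gs_pca(X, k, n_iter=200, seed=0):
            X = X - X.mean(axis=0)                  # center the data
            rng = np.random.default_rng(seed)
            R = X.copy()
            loadings = []
            for _ in range(k):
                v = rng.standard_normal(X.shape[1])
                for _ in range(n_iter):
                    v = R.T @ (R @ v)               # power iteration on R'R
                    for p in loadings:              # Gram-Schmidt step: keep v
                        v -= (p @ v) * p            # orthogonal to earlier loadings
                    v /= np.linalg.norm(v)
                loadings.append(v)
                R = R - np.outer(R @ v, v)          # deflate the explained direction
            return np.array(loadings)

        X = np.random.default_rng(1).standard_normal((100, 10))
        P = gs_pca(X, 3)
        print(np.round(P @ P.T, 6))                 # ~identity: orthonormal loadings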

  5. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High performance encryption algorithms have been developed and implemented in software and hardware. Many methods to attack the ciphertext have also been developed. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyses the possibility of using the genetic algorithm as a multiple key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of a large amount of data.

  6. Implementation of Period-Finding Algorithm by Means of Simulating Quantum Fourier Transform

    Directory of Open Access Journals (Sweden)

    Zohreh Moghareh Abed

    2010-01-01

    Full Text Available In this paper, we introduce the quantum Fourier transform as a key ingredient for many useful algorithms. These algorithms solve problems which are considered intractable on a classical computer. The quantum Fourier transform is a key component of the quantum phase estimation algorithm. In this paper our aim is the implementation of the period-finding algorithm. A quantum computer solves this problem exponentially faster than a classical one. The quantum phase estimation algorithm is the key to the period-finding problem. Therefore, by means of simulating the quantum Fourier transform, we are able to implement the period-finding algorithm. In this paper, the simulation of the quantum Fourier transform is carried out in Matlab.
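
    The same pipeline can be sketched classically, with the FFT standing in for the simulated quantum Fourier transform (NumPy rather than Matlab; the values a = 7, N = 15 are illustrative):

        import numpy as np
        from fractions import Fraction

        a, N, n = 7, 15, 8                  # find the period r of f(x) = a^x mod N
        Q = 2 ** n                          # size of the simulated register
        f = np.array([pow(a, x, N) for x in range(Q)])

        # "Measuring" the second register leaves a periodic comb in the first
        comb = (f == f[0]).astype(float)
        comb /= np.linalg.norm(comb)

        spectrum = np.abs(np.fft.fft(comb)) ** 2    # stands in for the QFT
        peak = int(np.argmax(spectrum[1:])) + 1     # skip the DC component
        r = Fraction(peak, Q).limit_denominator(N).denominator
        print(r)                                    # 4, and indeed 7**4 % 15 == 1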

  7. 20 years of Deutsche Kernreaktor-Versicherungsgemeinschaft

    International Nuclear Information System (INIS)

    Hertel, G.

    1977-01-01

    Survey of the origins and present state of the 'pool' (DKVG, Deutsche Kernreaktor-Versicherungsgemeinschaft, the German nuclear reactor cooperative insurance company), a joint risk venture, organized on a national level, of all insurance enterprises offering financial security on a proportional basis, and of its cooperation with insurance pools of other western countries. (HP) [de

  8. On a new implementation of the Lanczos algorithm

    International Nuclear Information System (INIS)

    Caurier, E.; Zuker, A.P.; Poves, A.

    1991-01-01

    The new implementation proposed is based on a block labelling scheme, which is described in detail. Time reversal, f-projection, sum rule pivots and strength functions are discussed with the aid of the new implementation of the Lanczos algorithm. The energetics and magnetic dipole behaviour of 48Ti are studied as examples illustrating applications of the method. (G.P.) 9 refs.; 4 figs.; 1 tab

  9. Deutsches Atomforum turns fifty

    International Nuclear Information System (INIS)

    Geisler, Maja

    2009-01-01

    Fifty years ago, the Deutsches Atomforum e. V. was founded to promote the peaceful uses of nuclear power in Germany. On July 1, 2009, the organization celebrated its fiftieth birthday in Berlin. The anniversary was celebrated in the Berlin electricity plant, Germany's oldest existing building for commercial electricity generation. DAtF President Dr. Walter Hohlefelder welcomed some 200 high-ranking guests from politics, industry, and from the nuclear community, above all, the Chancellor of the Federal Republic of Germany, Dr. Angela Merkel, and, as keynote speaker, Professor Dr. Arnulf Baring. (orig.)

  10. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    Science.gov (United States)

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
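
    As one representative member of this family of techniques (not the paper's full six-algorithm comparison), a piecewise-linear lookup table with interpolation, modeled in Python; the calibration points for the optical distance sensor are illustrative:

        # Calibration table: (ADC code, distance in cm), hypothetical values
        TABLE = [(80, 150.0), (120, 90.0), (200, 50.0), (330, 30.0), (540, 15.0)]

        def linearize(adc):
            if adc <= TABLE[0][0]:                  # clamp below calibrated range
                return TABLE[0][1]
            if adc >= TABLE[-1][0]:                 # clamp above calibrated range
                return TABLE[-1][1]
            for (x0, y0), (x1, y1) in zip(TABLE, TABLE[1:]):
                if x0 <= adc <= x1:                 # bracketing segment found
                    return y0 + (y1 - y0) * (adc - x0) / (x1 - x0)

        print(linearize(160))                       # between 90 cm and 50 cm

    On an integer microcontroller the same table would typically live in flash, with the division replaced by a precomputed per-segment slope in fixed-point form.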

  11. Improved implementation algorithms of the two-dimensional nonseparable linear canonical transform.

    Science.gov (United States)

    Ding, Jian-Jiun; Pei, Soo-Chang; Liu, Chun-Lin

    2012-08-01

    The two-dimensional nonseparable linear canonical transform (2D NSLCT), which is a generalization of the fractional Fourier transform and the linear canonical transform, is useful for analyzing optical systems. However, since the 2D NSLCT has 16 parameters and is very complicated, it is a great challenge to implement it in an efficient way. In this paper, we improved the previous work and propose an efficient way to implement the 2D NSLCT. The proposed algorithm can minimize the numerical error arising from interpolation operations and requires fewer chirp multiplications. The simulation results show that, compared with the existing algorithm, the proposed algorithms can implement the 2D NSLCT more accurately and the required computation time is also less.

  12. Sprachvermittlung und Spracherwerb in Afrika. Deutsch nach ...

    African Journals Online (AJOL)

    ... to speak the foreign language being learned correctly and at a high level. Teachers should make every effort to prevent learners from remaining stuck in this simplification phase and letting their language fossilize. Raising awareness of the similarities between Zulu and German can contribute to this, by ...

  13. The implementation of the Talmud property allocation algorithm based on the graphic point-segment way

    Science.gov (United States)

    Cen, Haifeng

    2017-04-01

    Guided by the theory of the Talmud allocation scheme, this paper analyzes the algorithm's implementation process from the perspective of the graphic point-segment way, and designs a point-segment Talmud property allocation algorithm. It then implements the core of the allocation algorithm in the Java language, using Android programming to build a visual interface.
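
    For context, a compact sketch of the Talmud (Aumann-Maschler) allocation rule itself, in Python for brevity rather than the paper's Java; the point-segment construction is a graphical route to the same division:

        def talmud(estate, claims):
            half = [c / 2 for c in claims]

            def cea(amount, caps):
                # Constrained equal awards: each party gets min(cap, lam),
                # with lam chosen by bisection so the awards sum to amount
                lo, hi = 0.0, max(caps)
                for _ in range(100):
                    lam = (lo + hi) / 2
                    if sum(min(cap, lam) for cap in caps) < amount:
                        lo = lam
                    else:
                        hi = lam
                return [min(cap, lam) for cap in caps]

            if estate <= sum(half):
                return cea(estate, half)              # split gains over half-claims
            losses = cea(sum(claims) - estate, half)  # split losses over half-claims
            return [c - l for c, l in zip(claims, losses)]

        # Classic example: claims 100/200/300 against an estate of 200 -> 50, 75, 75
        print([round(x, 2) for x in talmud(200, [100, 200, 300])])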

  14. Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs

    Directory of Open Access Journals (Sweden)

    Gene Frantz

    2007-01-01

    Full Text Available Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using a C/C++ floating-point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed-point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed-point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating-point to fixed-point arithmetic, and compared real-time requirements and performance between the fixed-point DSP and floating-point DSP algorithm implementations. We also introduce advanced code optimization and an implementation via DSP-specific, fixed-point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating-point emulation on fixed-point hardware.
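
    As a toy illustration of the conversion issue the paper addresses, a Python model of Q15 fixed-point arithmetic (1 sign bit, 15 fractional bits), a format commonly used on such DSPs:

        Q = 15                                    # Q15: 15 fractional bits

        def to_q15(x):
            v = int(round(x * (1 << Q)))
            return max(-(1 << Q), min((1 << Q) - 1, v))   # saturate to the format

        def q15_mul(a, b):
            return (a * b) >> Q                   # rescale the 30-fraction-bit product

        a, b = 0.5, 0.25
        approx = q15_mul(to_q15(a), to_q15(b)) / (1 << Q)
        print(approx, a * b)                      # 0.125, up to quantization error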

  15. Research and Implementation of the Practical Texture Synthesis Algorithms

    Institute of Scientific and Technical Information of China (English)

    孙家广; 周毅

    1991-01-01

    How to generate pictures of realistic and esthetic objects is an important subject of computer graphics. Techniques for mapping textures onto the surfaces of an object in 3D space are efficient approaches for this purpose. We developed and implemented algorithms for generating objects with the appearance of stone, wood grain, ice lattice, brick, and doors and windows on Apollo workstations. All the algorithms have been incorporated into the 3D geometry modelling system (GEMS) developed by the CAD Center of Tsinghua University. This paper emphasizes the wood grain and ice lattice algorithms.

  16. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of tasks. The project objective is to develop the principal elements of the algorithm for recognition of a moving object to be detected by several cameras. The images obtained by different cameras will be processed. Parameters of motion are to be identified to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of a camera placement algorithm designated for the identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing the sectors covered by neighbouring cameras. The project also contemplates the identification of potential problems in the course of development of a physical security and monitoring system at the stage of project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The

  17. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  18. Implementation of anomaly detection algorithms for detecting transmission control protocol synchronized flooding attacks

    CSIR Research Space (South Africa)

    Mkuzangwe, NNP

    2015-08-01

    Full Text Available This work implements two anomaly detection algorithms for detecting Transmission Control Protocol Synchronized (TCP SYN) flooding attacks. The two algorithms are an adaptive threshold algorithm and a cumulative sum (CUSUM) based algorithm...
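
    A minimal Python sketch of the CUSUM side of such a detector (monitoring a per-interval SYN packet count; the baseline, drift and threshold constants are illustrative tuning parameters):

        def cusum_detect(syn_counts, baseline, drift=2.0, threshold=15.0):
            # Accumulate only positive deviations of the SYN count from the
            # baseline; an alarm fires when the accumulated excess is too large
            s, alarms = 0.0, []
            for t, x in enumerate(syn_counts):
                s = max(0.0, s + (x - baseline - drift))
                if s > threshold:
                    alarms.append(t)
                    s = 0.0                        # reset after an alarm
            return alarms

        traffic = [10, 12, 9, 11, 10, 30, 42, 55, 48, 12, 10]  # synthetic counts
        print(cusum_detect(traffic, baseline=10))              # flags the burst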

  19. Small-scale quantum information processing with linear optics

    International Nuclear Information System (INIS)

    Bergou, J.A.; Steinberg, A.M.; Mohseni, M.

    2005-01-01

    Full text: Photons are the ideal systems for carrying quantum information. Although performing large-scale quantum computation on optical systems is extremely demanding, non-scalable linear-optics quantum information processing may prove essential as part of quantum communication networks. In addition, the efficient (scalable) linear-optical quantum computation proposal relies on the same optical elements. Here, by constructing multirail optical networks, we experimentally study two central problems in quantum information science, namely optimal discrimination between nonorthogonal quantum states, and controlling decoherence in quantum systems. Quantum mechanics forbids deterministic discrimination between nonorthogonal states. This is one of the central features of quantum cryptography, which leads to secure communications. Quantum state discrimination is an important primitive in quantum information processing, since it determines the limitations of a potential eavesdropper, and it has applications in quantum cloning and entanglement concentration. In this work, we experimentally implement generalized measurements in an optical system and demonstrate the first optimal unambiguous discrimination between three non-orthogonal states, with a success rate of 55%, to be compared with the 25% maximum achievable using projective measurements. Furthermore, we present the first realization of unambiguous discrimination between a pure state and a nonorthogonal mixed state. In a separate experiment, we demonstrate how decoherence-free subspaces (DFSs) may be incorporated into a prototype optical quantum algorithm. Specifically, we present an optical realization of the two-qubit Deutsch-Jozsa algorithm in the presence of random noise. By introducing localized turbulent airflow we produce a collective optical dephasing, leading to large error rates, and demonstrate that using DFS encoding, the error rate in the presence of decoherence can be reduced from 35% to essentially its pre
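
    For reference, the two-qubit Deutsch-Jozsa logic realized in this experiment can be written down as a short state-vector simulation in NumPy (the gate-level logic only, not the optical multirail encoding or the DFS protection):

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        I2 = np.eye(2)

        def oracle(f):
            # |x, y> -> |x, y XOR f(x)> for a one-bit function f
            U = np.zeros((4, 4))
            for x in (0, 1):
                for y in (0, 1):
                    U[2 * x + (y ^ f(x)), 2 * x + y] = 1
            return U

        def deutsch_jozsa(f):
            state = np.kron([1, 0], [0, 1])         # |0>|1>
            state = np.kron(H, H) @ state
            state = oracle(f) @ state
            state = np.kron(H, I2) @ state
            p0 = np.sum(np.abs(state[:2]) ** 2)     # P(first qubit = |0>)
            return "constant" if p0 > 0.5 else "balanced"

        print(deutsch_jozsa(lambda x: 0))           # constant
        print(deutsch_jozsa(lambda x: x))           # balanced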

  20. An implementation of Kovacic's algorithm for solving ordinary differential equations in FORMAC

    International Nuclear Information System (INIS)

    Zharkov, A.Yu.

    1987-01-01

    An implementation of Kovacic's algorithm for finding Liouvillian solutions of the differential equations y'' + a(x)y' + b(x)y = 0 with rational coefficients a(x) and b(x) in the Computer Algebra System FORMAC is described. The algorithm description is presented in such a way that one can easily implement it in a suitable Computer Algebra System

  1. German energy policy in deregulated Europe; Deutsche Energiepolitik im liberalisierten Europa

    Energy Technology Data Exchange (ETDEWEB)

    Kuhnt, D. [RWE Energie AG, Essen (Germany)

    2000-07-01

    The author argues in favor of a more fact-oriented German energy policy: Firstly, German energy policy must accept the new European framework of a market economy. This means that German utilities must no longer be burdened with the implementation of political objectives. The German power industry needs a level playing field for competition on a European scale. Consequently, also the European partner countries should not limit themselves to the minimum conditions of the Single Market Directive in opening their markets. Secondly, German energy policy must develop new forms of cooperation with the power industry so as to maintain domestic employment and the addition of value despite considerably stronger competitive pressure. Also the conflicting targets of sustainability, continuity of supply, and economic viability must not only be discussed, but must be turned into productive approaches. Thirdly, this means that there must be no inadmissible solution in matters nuclear. If the German power industry is to remain strong, in the interest of domestic jobs and opportunities for the future, it must not lose any more domestic market share to other European companies. Fourthly, we need a new energy policy which takes cognizance of the results of market development in a more rational, less emotional way. In this respect, it should be limited henceforth to supporting renewable energies and technologies so as to enhance energy efficiency in line with market requirements. Fifthly, German energy policy must not commit the mistake of enforcing deregulation and, at the same time, exempting large segments of the market from competition. Thus, the planned expansion of renewable energies, and the increase in cogeneration to more than thirty percent of the German electricity generation, by way of quotas and revenues for electricity from these sources fed into the public grid, are incompatible with competition in Europe. The electricity tax within the framework of the eco tax, the

  2. Fulltext PDF

    Indian Academy of Sciences (India)

    Author index fragment, including: Arvind, "Quantum entanglement and quantum computational algorithms" (p. 357) and "Quantum entanglement in the NMR implementation of the Deutsch–Jozsa algorithm" (p. L705); Atkinson, David, "Bell's inequalities and Kolmogorov's axioms" (p. 139).

  3. THE DEUTSCH MODEL--INSTITUTE FOR DEVELOPMENTAL STUDIES.

    Science.gov (United States)

    New York Univ., NY. Inst. for Developmental Studies.

    The Deutsch intervention model is based on the theory that environment plays a major role in the development of cognitive skills and of the functional use of intellectual capabilities. Disadvantaged children have intellectual deficits which may be overcome by use of matched remedial measures. Language skills and motivation can be improved by teaching…

  4. An algorithm, implementation and execution ontology design pattern

    NARCIS (Netherlands)

    Lawrynowicz, A.; Esteves, D.; Panov, P.; Soru, T.; Dzeroski, S.; Vanschoren, J.

    2016-01-01

    This paper describes an ontology design pattern for modeling algorithms, their implementations and executions. This pattern is derived from the research results on data mining/machine learning ontologies, but is more generic. We argue that the proposed pattern will foster the development of

  5. AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.

    Science.gov (United States)

    Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld

    2016-08-01

    There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code, under the GPL license, is available at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. The selection and implementation of hidden line algorithms

    International Nuclear Information System (INIS)

    Schneider, A.

    1983-06-01

    One of the most challenging problems in the field of computer graphics is the elimination of hidden lines in images of nontransparent bodies. In the real world, nontransparent material prevents light rays coming from hidden regions from reaching the observer. In the computer-based image formation process there is no automatic visibility regulation of this kind, so many lines are created that result in a poor quality of the spatial representation. Therefore a three-dimensional representation on the screen is only meaningful if the hidden lines are eliminated. Many algorithms have been developed for this purpose in the past. A common feature of these codes is the large amount of computer time needed. In the first generation of algorithms, which are commonly used today, the bodies are modeled by plane polygons. More recently, however, algorithms have come into use which are able to treat curved surfaces without discretization into plane surfaces. In this paper the first group of algorithms is reviewed, and the most important codes are described. The experience obtained during the implementation of two algorithms is presented. (orig.) [de

  7. Caliko: An Inverse Kinematics Software Library Implementation of the FABRIK Algorithm

    OpenAIRE

    Lansley, Alastair; Vamplew, Peter; Smith, Philip; Foale, Cameron

    2016-01-01

    The Caliko library is an implementation of the FABRIK (Forward And Backward Reaching Inverse Kinematics) algorithm written in Java. The inverse kinematics (IK) algorithm is implemented in both 2D and 3D, and incorporates a variety of joint constraints as well as the ability to connect multiple IK chains together in a hierarchy. The library allows for the simple creation and solving of multiple IK chains as well as visualisation of these solutions. It is licensed under the MIT software license...

  8. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    A. Alexandre Trindade

    2003-02-01

    Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm, and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use is illustrated.

  9. IMPLEMENTATION OF OBJECT TRACKING ALGORITHMS ON THE BASIS OF CUDA TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    B. A. Zalesky

    2014-01-01

    Full Text Available A fast version of a correlation algorithm to track objects in video sequences made by a non-stabilized camcorder is presented. The algorithm is based on comparison of local correlations of the object image and regions of video frames. The algorithm is implemented in the CUDA programming technology. The application of CUDA made real-time execution of the algorithm attainable. To improve its precision and stability, a robust version of the Kalman filter has been incorporated into the flowchart. Tests showed the applicability of the algorithm to practical object tracking.

  10. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.

  11. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Jeongho [Seoul National University, Seoul (Korea, Republic of); Hanyang University, Seoul (Korea, Republic of); Yoo, Seokwon [Hanyang University, Seoul (Korea, Republic of)

    2014-12-15

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the 'genetic parameter vector' of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem, the one-bit oracle decision problem, often called the Deutsch problem. By numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize the variant models of Deutsch's original algorithm.
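
    A toy version of the approach is easy to reproduce: the sketch below (with illustrative hyperparameters, not the authors' setup) evolves ZYZ Euler-angle parameter vectors until a single-qubit unitary matches a target gate up to a global phase:

        import numpy as np

        rng = np.random.default_rng(7)

        def unitary(p):
            a, b, c = p                              # ZYZ Euler angles
            rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
            ry = np.array([[np.cos(b / 2), -np.sin(b / 2)],
                           [np.sin(b / 2),  np.cos(b / 2)]])
            return rz(a) @ ry @ rz(c)

        TARGET = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard, for example

        def fitness(p):
            # Gate fidelity, invariant under a global phase
            return abs(np.trace(TARGET.conj().T @ unitary(p))) / 2

        pop = rng.uniform(0, 2 * np.pi, (60, 3))     # "genetic parameter vectors"
        for _ in range(300):
            scores = np.array([fitness(p) for p in pop])
            elite = pop[np.argsort(scores)[-12:]]    # selection
            children = elite[rng.integers(0, 12, 48)] + rng.normal(0, 0.05, (48, 3))
            pop = np.vstack([elite, children])       # mutation-based reproduction
        print(round(max(fitness(p) for p in pop), 4))   # approaches 1.0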

  12. Error tolerance in an NMR implementation of Grover's fixed-point quantum search algorithm

    International Nuclear Information System (INIS)

    Xiao Li; Jones, Jonathan A.

    2005-01-01

    We describe an implementation of Grover's fixed-point quantum search algorithm on a nuclear magnetic resonance quantum computer, searching for either one or two matching items in an unsorted database of four items. In this algorithm the target state (an equally weighted superposition of the matching states) is a fixed point of the recursive search operator, so that the algorithm always moves towards the desired state. The effects of systematic errors in the implementation are briefly explored

  13. Continuous-Variable Quantum Computation of Oracle Decision Problems

    Science.gov (United States)

    Adcock, Mark R. A.

    Quantum information processing is appealing due to its ability to solve certain problems quantitatively faster than classical information processing. Most quantum algorithms have been studied in discretely parameterized systems, but many quantum systems are continuously parameterized. The field of quantum optics in particular has sophisticated techniques for manipulating continuously parameterized quantum states of light, but the lack of a code-state formalism has hindered the study of quantum algorithms in these systems. To address this situation, a code-state formalism for the solution of oracle decision problems in continuously-parameterized quantum systems is developed. In the infinite-dimensional case, we study continuous-variable quantum algorithms for the solution of the Deutsch-Jozsa oracle decision problem implemented within a single harmonic oscillator. Orthogonal states are used as the computational bases, and we show that, contrary to a previous claim in the literature, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform. We further demonstrate that orthogonal encoding bases are not unique, and using the coherent states of the harmonic oscillator as the computational bases, our formalism enables quantifying

  14. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    Science.gov (United States)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit-serial processors; the Flex/32, an MIMD machine with 20 processors; and the Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  15. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using the sparse deconvolution (MM algorithm) and the Smoothed-One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values for the real-data application, such as the wavelet coefficients and the reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it is necessary to implement it on post-stack or pre-stack seismic data of complex structure regions.
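
    Schematically, the SOOT penalty promotes sparsity through a smoothed l1/l2 ratio; up to the smoothing constants of the original formulation, the cost being minimized has the form

        J(x) = \frac{1}{2}\,\lVert y - h \ast x \rVert_2^2
               + \lambda \,\log\!\left(\frac{\lVert x \rVert_1 + \alpha}{\lVert x \rVert_2 + \beta}\right)

    where y is the recorded trace, h the wavelet, x the reflectivity series, and alpha, beta > 0 keep the ratio smooth and well defined.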

  16. Modified SURF Algorithm Implementation on FPGA For Real-Time Object Tracking

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes an FPGA-based implementation of the modified speeded-up robust features (SURF) algorithm. An FPGA was selected for the parallel process implementation, using VHDL to ensure feature extraction in real time. A sliding window of size 84×84 was used to store integral pixels and accelerate the Hessian determinant calculation, orientation assignment and descriptor estimation. Local extremum searching was used to find points of interest at 8 scales. The simplified descriptor and orientation vector were calculated in parallel at 6 scales. The algorithm was investigated by tracking a marker and drawing a plane or cube. All parts of the algorithm worked on a 25 MHz clock. The video stream was generated using a 60 fps, 640×480 pixel camera. Article in Lithuanian.

  17. FPGA-Based Implementation of Lithuanian Isolated Word Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes an FPGA-based implementation of a Lithuanian isolated word recognition algorithm. An FPGA was selected for the parallel process implementation, using VHDL to ensure fast signal processing at a low-rate clock signal. Cepstrum analysis was applied to feature extraction from voice. The dynamic time warping (DTW) algorithm was used to compare the vectors of cepstrum coefficients. A library of features for 100 words was created and stored in the internal FPGA BRAM memory. Experimental testing with speaker-dependent recordings demonstrated a recognition rate of 94%. A recognition rate of 58% was achieved for speaker-independent recordings. Calculation of the cepstrum coefficients lasted 8.52 ms at a 50 MHz clock, while 100 DTWs took 66.56 ms at a 25 MHz clock. Article in Lithuanian.
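
    The DTW comparison at the core of the recognizer is compact enough to state directly; a reference Python version follows (a software illustration; the paper's FPGA realizes the same recurrence in hardware):

        import numpy as np

        def dtw_distance(a, b):
            # a, b: (frames x coefficients) arrays of cepstral feature vectors
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
                    D[i, j] = cost + min(D[i - 1, j],            # insertion
                                         D[i, j - 1],            # deletion
                                         D[i - 1, j - 1])        # match
            return D[n, m]

        def recognize(features, library):
            # library: {word: reference feature array}; nearest template wins
            return min(library, key=lambda w: dtw_distance(features, library[w]))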

  18. Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms

    Directory of Open Access Journals (Sweden)

    Dominik Zurek

    2013-01-01

    Full Text Available Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recently, hardware platforms have enabled the creation of wide parallel algorithms. We have standard processors consisting of multiple cores and hardware accelerators like GPUs. Graphics cards, with their parallel architecture, give new possibilities for speeding up many algorithms. In this paper we describe the results of implementing a few different sorting algorithms on GPU cards and multicore processors. Then a hybrid algorithm is presented, consisting of parts executed on both platforms, a standard CPU and a GPU.
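
    The hybrid idea can be modeled in a few lines of Python, with CPU worker processes standing in for the GPU stage (sort runs in parallel, then a single k-way merge):

        import heapq
        import random
        from multiprocessing import Pool

        def hybrid_sort(data, workers=4):
            size = (len(data) + workers - 1) // workers
            chunks = [data[i:i + size] for i in range(0, len(data), size)]
            with Pool(workers) as pool:
                runs = pool.map(sorted, chunks)     # parallel partial sorts
            return list(heapq.merge(*runs))         # sequential k-way merge

        if __name__ == "__main__":
            data = [random.random() for _ in range(10 ** 6)]
            assert hybrid_sort(data) == sorted(data)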

  19. Simulation of ingot casting processes at Deutsche Edelstahlwerke GmbH®

    International Nuclear Information System (INIS)

    Hartmann, L; Ernst, C; Klung, J-S

    2012-01-01

    To enhance the quality of tool steels it is necessary to analyse all stages of the production process. During the ingot or continuous casting processes and the subsequent solidification, material- and geometry-dependent reactions cause defects such as macrosegregation or porosity. In former times the trial-and-error approach, together with the experience and creativity of the steelworks engineers, was used to improve the as-cast quality, with a large number of test procedures and a high demand on research time and costs. Further development in software and algorithms has allowed modern simulation techniques to find their way into industrial steel production, and casting simulations are widely used to achieve an accurate prediction of ingot quality. To improve the as-cast quality, several ingot casting processes for tool steels were studied at the R and D department of Deutsche Edelstahlwerke GmbH using the numerical casting simulation software MAGMASOFT®. In this paper some results extracted from the simulation software are shown and compared to experimental investigations.

  20. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    Science.gov (United States)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for the numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. Iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of grid-model decomposition using fictitious cells. We discuss the specific features of the storage of distributed matrices and the implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of tuning the multigrid SLAE (system of linear algebraic equations) solver on the general efficiency of the algorithm; the tuning involves the types of cycles used (V, W, and F), the number of iterations of the smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of evaluating the parallelization efficiency of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by both approaches. It is shown that the proposed parallel implementation enables efficient computations for such problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
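
    To make the fictitious-cell exchange concrete, here is a minimal serial sketch (our illustration; it mimics what message passing would do across processors) of two 1D subdomains trading ghost values before each smoothing sweep:

        import numpy as np

        # Two 1D subdomains, each padded with one fictitious (ghost) cell per side.
        left = np.zeros(6); right = np.zeros(6)
        left[1:-1] = np.linspace(0.0, 1.0, 4)    # interior cells
        right[1:-1] = np.linspace(1.0, 2.0, 4)

        def exchange_ghosts(a, b):
            """Copy boundary interior cells into the neighbour's ghost cells
            (the role of interprocessor send/recv in the parallel code)."""
            a[-1] = b[1]   # a's right ghost <- b's leftmost interior cell
            b[0] = a[-2]   # b's left ghost  <- a's rightmost interior cell

        for _ in range(50):                      # Jacobi smoothing sweeps
            exchange_ghosts(left, right)
            left[1:-1] = 0.5 * (left[:-2] + left[2:])
            right[1:-1] = 0.5 * (right[:-2] + right[2:])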

  1. Modification of MSDR algorithm and its implementation on graph clustering

    Science.gov (United States)

    Prastiwi, D.; Sugeng, K. A.; Siswantining, T.

    2017-07-01

    Maximum Standard Deviation Reduction (MSDR) is a graph clustering algorithm that minimizes the distance variation within a cluster. In this paper we propose a modified MSDR, replacing one technical step of MSDR that uses polynomial regression with a new and simpler step. This leads to our new algorithm, called Modified MSDR (MMSDR). We apply the new algorithm to separate the domestic flight network of an Indonesian airline into two large clusters. Further analysis allows us to discover a weak link in the network, which should be improved by adding more flights.

  2. Kinder Lernen Deutsch. Materials Project Part I. Revised.

    Science.gov (United States)

    American Association of Teachers of German.

    The Kinder Lernen Deutsch (KLD) materials evaluation project identifies materials appropriate for elementary school German classrooms in grades K-8. This guide consists of an annotated bibliography, with ratings, of these materials. The guiding principles by which the materials were assessed were: use of the communicative approach; integration…

  3. A high performance hardware implementation image encryption with AES algorithm

    Science.gov (United States)

    Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab

    2011-06-01

    This paper describes the implementation of a high-speed, high-throughput encryption algorithm for image encryption. We select the highly secure symmetric-key encryption algorithm AES (Advanced Encryption Standard) and increase its speed and throughput using a four-stage pipeline, a control unit based on logic gates, an optimized design of the multiplier blocks in the MixColumns phase, and simultaneous generation of keys and rounds. This procedure makes AES suitable for fast image encryption. A 128-bit AES was implemented on an Altera FPGA with the following results: a throughput of 6 Gbps at 471 MHz. The time to encrypt a test image of size 32×32 is 1.15 ms.
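
    As a software point of comparison for the hardware pipeline, image encryption with AES takes only a few lines. The sketch below is ours and assumes the PyCryptodome package; the key and image sizes are arbitrary:

        import numpy as np
        from Crypto.Cipher import AES          # PyCryptodome
        from Crypto.Random import get_random_bytes

        key = get_random_bytes(16)             # AES-128 key
        img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)

        cipher = AES.new(key, AES.MODE_CTR)    # CTR mode: no padding needed
        ct = cipher.encrypt(img.tobytes())

        # Decryption must reuse the same key and nonce
        plain = AES.new(key, AES.MODE_CTR, nonce=cipher.nonce).decrypt(ct)
        assert plain == img.tobytes()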

  4. Pipeline Implementation of Polyphase PSO for Adaptive Beamforming Algorithm

    Directory of Open Access Journals (Sweden)

    Shaobing Huang

    2017-01-01

    Adaptive beamforming is a powerful technique for anti-interference, where searching for and tracking optimal solutions is a great challenge. In this paper, a partial Particle Swarm Optimization (PSO) algorithm is proposed to track the optimal solution of an adaptive beamformer, owing to its strong global search character. Because of PSO's naturally parallel search capability, a novel Field Programmable Gate Array (FPGA) pipeline architecture using a polyphase filter bank structure is designed. In order to perform computations with large dynamic range and high precision, the implementation uses an efficient user-defined floating-point arithmetic. In addition, a polyphase architecture is proposed to achieve a fully pipelined implementation. For PSO with a large population, the polyphase architecture can significantly save hardware resources while achieving high performance. Finally, simulation results are presented via cosimulation with ModelSim and SIMULINK.
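
    The PSO update rule being pipelined is compact enough to state directly. A minimal generic sketch (ours; the beamformer cost function is replaced by a toy objective):

        import numpy as np

        def pso(objective, dim=4, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
            """Vanilla particle swarm optimization (minimization)."""
            rng = np.random.default_rng(0)
            x = rng.uniform(-5, 5, (n_particles, dim))    # positions
            v = np.zeros_like(x)                          # velocities
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        print(pso(lambda p: np.sum(p ** 2)))   # toy stand-in for a beamformer cost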

  5. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs), which feature a many-core, fine-grained parallel architecture, provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design is based on fine-grained parallelism: we distribute the texture data of the AAM, pixel by pixel, to thousands of parallel GPU threads for processing, which makes the algorithm fit the GPU architecture better. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different texture dimensionalities. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even with very high-dimensional textures.

  6. Caliko: An Inverse Kinematics Software Library Implementation of the FABRIK Algorithm

    Directory of Open Access Journals (Sweden)

    Alastair Lansley

    2016-09-01

    The Caliko library is an implementation of the FABRIK (Forward And Backward Reaching Inverse Kinematics) algorithm written in Java. The inverse kinematics (IK) algorithm is implemented in both 2D and 3D, and incorporates a variety of joint constraints as well as the ability to connect multiple IK chains together in a hierarchy. The library allows for the simple creation and solving of multiple IK chains, as well as visualisation of these solutions. It is licensed under the MIT software license and the source code is freely available for use and modification at: https://github.com/feduni/caliko
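
    FABRIK itself is small: each iteration sweeps the chain backward from the target and then forward from the root, restoring bone lengths as it goes. A minimal unconstrained 2D sketch (ours, not code from Caliko):

        import numpy as np

        def fabrik(joints, target, tol=1e-4, max_iter=50):
            """Unconstrained 2D FABRIK. joints: (n, 2) array, root = joints[0]."""
            joints = joints.astype(float).copy()
            lengths = np.linalg.norm(np.diff(joints, axis=0), axis=1)
            root = joints[0].copy()
            for _ in range(max_iter):
                # Backward pass: put the end effector on the target, work to root
                joints[-1] = target
                for i in range(len(joints) - 2, -1, -1):
                    d = joints[i] - joints[i + 1]
                    joints[i] = joints[i + 1] + d / np.linalg.norm(d) * lengths[i]
                # Forward pass: re-pin the root, work back to the end effector
                joints[0] = root
                for i in range(1, len(joints)):
                    d = joints[i] - joints[i - 1]
                    joints[i] = joints[i - 1] + d / np.linalg.norm(d) * lengths[i - 1]
                if np.linalg.norm(joints[-1] - target) < tol:
                    break
            return joints

        chain = np.array([[0, 0], [1, 0], [2, 0], [3, 0]])
        print(fabrik(chain, np.array([2.0, 1.5])))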

  7. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    International Nuclear Information System (INIS)

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-01-01

    Filtered-backprojection algorithms are the most widely used approaches for the reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The Incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (position and values) of adjacent pixels, the Incremental algorithm requires only O(N) and O(N²) multiplications per view, in contrast to O(N²) and O(N³) multiplications for the Shepp and Logan algorithm, for two-dimensional (2-D) and three-dimensional (3-D) backprojections respectively, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the Incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is that the searching flow scheme originally developed for the Incremental algorithm inevitably visits pixels outside the beam. To optimize the implementation of the Incremental algorithm, an efficient scheme, namely the coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead by using the coded searching flow as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45-2.0 times faster than the original searching flow scheme for most cases tested.

  8. 50. Annual meeting of the Deutsche Gesellschaft fuer Neuroradiologie. Abstracts

    International Nuclear Information System (INIS)

    2015-01-01

    The volume on the 50th annual meeting of the Deutsche Gesellschaft fuer Neuroradiologie includes the abstracts concerning the following issues: infectious central nervous system diseases, neurodegenerations, infarction, petrosal bone pathology, neurointerventions.

  9. 49. Annual meeting of the Deutsche Gesellschaft fuer Neuroradiologie. Abstracts

    International Nuclear Information System (INIS)

    2014-01-01

    The conference proceedings of the 49. Annual meeting of the Deutsche Gesellschaft fuer Neuroradiologie contain abstracts on the following issues: neuro-oncological imaging, multimodal imaging concepts, subcranial imaging, the spinal cord, interventional neuroradiology.

  10. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for the hardware realization of space vector modulation (SVM) of state function switching in a matrix converter (MC), oriented towards implementation in a single field programmable gate array (FPGA). In an MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. Traditional computation algorithms usually involve digital signal processors (DSPs), which consume a large number of power transistors (18 transistors and 18 independent PWM outputs) and "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular since computed operations may be executed much faster and more efficiently due to the nature of digital devices (especially concurrency). In this paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, the arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters and sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described in the Verilog hardware description language. Preliminary results of the logic implementation oriented towards Xilinx FPGAs (in particular, a low-cost device from the Xilinx Artix-7 family) are also presented.

  11. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detection and tracking methods and has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs), which feature a many-core, fine-grained parallel architecture, provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design is based on fine-grained parallelism: we distribute the texture data of the AAM, pixel by pixel, to thousands of parallel GPU threads for processing, which makes the algorithm fit the GPU architecture better. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different texture dimensionalities. The experimental results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even with very high-dimensional textures.

  12. German financial media's responsiveness to Deutsche Bank's cultural change

    NARCIS (Netherlands)

    Strauß, N.

    2015-01-01

    Based on first-order and second-order agenda building theory, this study analyzes the responsiveness of German financial media to frames of the "cultural change" proclaimed in the banking industry, exemplified by Deutsche Bank. Findings suggest a difference between the two major German financial

  13. PCIU: Hardware Implementations of an Efficient Packet Classification Algorithm with an Incremental Update Capability

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2011-01-01

    Packet classification plays a crucial role in a number of network services such as policy-based routing, firewalls, and traffic billing, to name a few. However, classification can be a bottleneck in the above-mentioned applications if not implemented properly and efficiently. In this paper, we propose PCIU, a novel classification algorithm which improves upon previously published work. PCIU provides lower preprocessing time, lower memory consumption, ease of incremental rule update, and reasonable classification time compared to state-of-the-art algorithms. The proposed algorithm was evaluated and compared to RFC and HiCut using several benchmarks. The results obtained indicate that PCIU outperforms these algorithms in terms of speed, memory usage, incremental update capability, and preprocessing time. The algorithm, furthermore, was improved and made more accessible for a variety of applications through implementation in hardware. Two such implementations are detailed and discussed in this paper. The results indicate that a hardware/software codesign approach results in a slower PCIU solution, but one that is easier to optimize and improve within time constraints. A hardware accelerator based on an ESL approach using Handel-C, on the other hand, resulted in a 31x speed-up over a pure software implementation running on a state-of-the-art Xeon processor.

  14. Implementation of an evolutionary algorithm in planning investment in a power distribution system

    Directory of Open Access Journals (Sweden)

    Carlos Andrés García Montoya

    2011-06-01

    The definition of an investment plan to implement in a power distribution system is a task that utilities constantly face. This work presents a methodology for determining a short-term investment plan for a power distribution system, using as criteria for evaluating investment projects their associated costs and the benefit to customers of their implementation. Given the number of projects carried out annually on the system, the definition of an investment plan requires the use of computational tools to select, from a set of possibilities, the plan that best suits the needs of the present system and gives the best results. For this reason, we implement a multi-objective evolutionary algorithm, SPEA (Strength Pareto Evolutionary Algorithm), which, based on the principles of Pareto optimality, delivers to the planning expert the best solutions found in the optimization process. The performance of the algorithm is tested using a set of projects to determine the best among the possible plans. We also analyze the effect of the operators on the performance of the evolutionary algorithm and on the results.
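
    SPEA's selection is built on Pareto dominance. As a hedged illustration of that core notion (our sketch, not the paper's implementation), the following filters candidate plans, scored here as (cost, -benefit), down to the non-dominated front:

        def dominates(a, b):
            """True if objective vector a is no worse than b everywhere
            and strictly better somewhere (minimization)."""
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def pareto_front(points):
            return [p for p in points if not any(dominates(q, p) for q in points)]

        # Toy plans scored as (cost, -customer_benefit); both minimized
        plans = [(10, -5), (8, -4), (12, -9), (8, -6), (9, -6)]
        print(pareto_front(plans))   # -> the non-dominated investment plans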

  15. Deutsches Atomforum. Annual report 1995

    International Nuclear Information System (INIS)

    Petroll, M.; Philipp, L.

    1996-01-01

    With its 1995 annual report, the Deutsches Atomforum renders account of its activities to its members and the public. As the report demonstrates, 1995 brought both light and shadow. It is noted with satisfaction that the German nuclear power stations have for another year fulfilled their task of supplying power reliably and safely; with their share of one third of public power generation, they represent an important pillar of supply. A cause for concern is the fact that a basic energy-policy consensus between the major political forces on the long-term role of nuclear power generation in the power supply is still lacking. (orig./RHM) [de]

  16. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    Science.gov (United States)

    Zhang, Yanqin

    2014-01-01

    As part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is developed. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For a more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, showing the effectiveness of the algorithm in complicated information systems.

  17. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    Science.gov (United States)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive identification (ID) and control algorithms. The second part concentrates on the validation of the adaptive algorithms by applying them to a vibrating-beam testbed. Extensions to flow control problems are discussed.

  18. Implementation of Genetic Algorithm in Control Structure of Induction Motor A.C. Drive

    Directory of Open Access Journals (Sweden)

    BRANDSTETTER, P.

    2014-11-01

    Modern control systems with digital signal processors allow the implementation of time-consuming control algorithms in real time, for example soft-computing methods. The paper deals with the design and technical implementation of a genetic algorithm for setting the proportional and integral gains of the speed controller of an A.C. drive with a vector-controlled induction motor. Simulations and experimental measurements have been carried out that confirm the correctness of the proposed speed controller tuned by the genetic algorithm and the quality of the speed response of the A.C. drive under changing parameters and disturbance variables, such as changes in load torque.
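
    As a hedged toy version of the approach (ours; the plant model, gain ranges and GA settings are invented for illustration), a genetic algorithm can search for PI gains that minimize the squared speed error of a simple first-order plant:

        import numpy as np

        rng = np.random.default_rng(1)

        def cost(kp, ki, steps=200, dt=0.01):
            """Integral of squared error for a PI loop around a toy plant."""
            y, integ, ise = 0.0, 0.0, 0.0
            for _ in range(steps):
                e = 1.0 - y                 # unit speed set-point
                integ += e * dt
                u = kp * e + ki * integ     # PI control law
                y += dt * (-y + u)          # plant: dy/dt = -y + u
                ise += e * e * dt
            return ise

        pop = rng.uniform(0.1, 20.0, (30, 2))        # population of (Kp, Ki)
        for _ in range(40):
            fitness = np.array([cost(kp, ki) for kp, ki in pop])
            parents = pop[np.argsort(fitness)[:10]]  # keep the 10 best
            idx = rng.integers(0, 10, (20, 2))       # pick a parent per gene
            children = np.stack([parents[idx[:, 0], 0],
                                 parents[idx[:, 1], 1]], axis=1)  # crossover
            children += rng.normal(0.0, 0.5, children.shape)      # mutation
            pop = np.vstack([parents, np.clip(children, 0.01, 50.0)])

        best = pop[np.argmin([cost(kp, ki) for kp, ki in pop])]
        print("best (Kp, Ki):", best)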

  19. Deutsche Bahn. Small hydropower station Bad Abbach directly feeds electrical power into the overhead wire system; Deutsche Bahn. Kleinwasserkraftwerk Bad Abbach speist elektrische Energie unmittelbar in die Oberleitung

    Energy Technology Data Exchange (ETDEWEB)

    Hamerak, Kurt

    2009-07-01

    Even though the installed electrical power of the Bad Abbach hydropower plant (Federal Republic of Germany) of Deutsche Bahn AG is, at only 4,500 kVA, quite modest, a significant planning effort was necessary due to numerous boundary conditions. The construction of this unusual hydropower plant was a very demanding and interesting technical challenge for all concerned. The already existing damming of the river Danube meant that very little intervention in the environment was required, so the plant also satisfied all requirements in environmental respects. The combination of a Kaplan shaft turbine with a single-phase AC generator supplying power to Deutsche Bahn AG, and the direct feed of electrical energy into the railway's overhead wire system, make the new Bad Abbach hydropower plant unique. With Deutsche Bahn AG as a consumer of energy from hydropower plants on the river Danube, among others, a partnership between Rhein Main Donau AG (Munich, Federal Republic of Germany) and E.ON Wasserkraft GmbH (Landshut, Federal Republic of Germany) in the field of renewable energies was continued.

  20. Spiral-CT-angiography of acute pulmonary embolism: factors that influence the implementation into standard diagnostic algorithms

    International Nuclear Information System (INIS)

    Bankier, A.; Herold, C.J.; Fleischmann, D.; Janata-Schwatczek, K.

    1998-01-01

    Purpose: The debate about the potential implementation of spiral CT in diagnostic algorithms for pulmonary embolism is often focused on sensitivity and specificity in the context of comparative methodological studies. We investigate whether additional factors might influence this debate. Results: The factors availability, acceptance, patient outcome, and cost-effectiveness studies do have a substantial influence on the implementation of spiral CT in diagnostic algorithms for pulmonary embolism. Incorporating these factors into the discussion might lead to more flexible and more patient-oriented algorithms for the diagnosis of pulmonary embolism. Conclusion: The availability of equipment, acceptance among clinicians, patient outcome, and cost-effectiveness evaluations should be included in the debate about the potential implementation of spiral CT in routine diagnostic imaging algorithms for pulmonary embolism. (orig./AJ) [de]

  1. An Implementable First-Order Primal-Dual Algorithm for Structured Convex Optimization

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Many application problems of practical interest can be posed as structured convex optimization models. In this paper, we study a new first-order primal-dual algorithm. The method is easily implementable, provided that the resolvent operators of the component objective functions are simple to evaluate. We show that the proposed method can be interpreted as a proximal point algorithm with a customized metric proximal parameter. The convergence property is established under the analytic contraction framework. Finally, we verify the efficiency of the algorithm by solving the stable principal component pursuit problem.

  2. Implementation of several mathematical algorithms to breast tissue density classification

    International Nuclear Information System (INIS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-01-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, since dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on calculations of intrinsic properties and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross-correlation and index Q) as categorization parameters. The algorithms were evaluated on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina - Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with the expert medical diagnoses, showing good performance. The implemented algorithms revealed a high potential to classify breasts into tissue density categories. - Highlights: • Breast density classification can be obtained by suitable mathematical algorithms. • Mathematical processing helps radiologists to obtain the BI-RADS classification. • The entropy and joint entropy show high performance for density classification.
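
    Several of the categorization parameters are only a few lines of numpy each. As a hedged sketch (ours, not the authors' code), joint entropy and mutual information can be computed from a joint histogram of the image and a comparison image:

        import numpy as np

        def joint_entropy_and_mi(a, b, bins=64):
            """Histogram-based joint entropy H(A,B) and mutual information I(A;B)."""
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = hist / hist.sum()
            nz = p > 0
            h_joint = -np.sum(p[nz] * np.log2(p[nz]))
            pa, pb = p.sum(axis=1), p.sum(axis=0)
            h_a = -np.sum(pa[pa > 0] * np.log2(pa[pa > 0]))
            h_b = -np.sum(pb[pb > 0] * np.log2(pb[pb > 0]))
            return h_joint, h_a + h_b - h_joint

        img = np.random.rand(256, 256)               # stand-in for a mammogram
        ref = img + 0.05 * np.random.rand(*img.shape)  # comparison image
        print(joint_entropy_and_mi(img, ref))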

  3. FPGA implementation of image dehazing algorithm for real time applications

    Science.gov (United States)

    Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.

    2017-09-01

    Weather degradation such as haze, fog, and mist severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem non-trivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, and intelligent transportation systems; however, these applications require low latency of the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, and the transmission map and intensity restoration are computed in the following stages. The algorithm is implemented using Xilinx Vivado software and validated on a Xilinx ZC702 development board, which contains an Artix-7-equivalent Field Programmable Gate Array (FPGA) and a dual-core ARM Cortex-A9 processor. Additionally, a high-definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second at an image resolution of 1920×1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAMs, 97 DSP_48s, 6508 FFs and 8159 LUTs.
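
    The first stage is easy to express in software. A hedged numpy sketch of the dark channel and a simple airlight estimate (ours; the patch size and the 0.1% rule are illustrative choices, not necessarily those of the paper):

        import numpy as np

        def dark_channel(img, patch=15):
            """Per-pixel min over RGB, then a min filter over a patch x patch window."""
            mins = img.min(axis=2)
            pad = patch // 2
            padded = np.pad(mins, pad, mode="edge")
            out = np.empty_like(mins)
            h, w = mins.shape
            for y in range(h):
                for x in range(w):
                    out[y, x] = padded[y:y + patch, x:x + patch].min()
            return out

        img = np.random.rand(120, 160, 3)          # stand-in for a hazy frame
        dc = dark_channel(img)
        # Airlight: average colour of the brightest 0.1% dark-channel pixels
        k = max(1, dc.size // 1000)
        idx = np.unravel_index(np.argsort(dc, axis=None)[-k:], dc.shape)
        airlight = img[idx].mean(axis=0)
        print(airlight)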

  4. Implementation Aspects of a Flexible Frequency Spectrum Usage Algorithm for Cognitive OFDM Systems

    DEFF Research Database (Denmark)

    Sacchi, Claudio; Tonelli, Oscar; Cattoni, Andrea Fabio

    2011-01-01

    … time on a shared spectrum chunk emphasizes the role of resource allocation as a critical system design issue. This work analyzes the practical issues related to the Software Defined Radio (SDR)-based implementation of a dynamic spectrum allocation algorithm designed for OFDM … on a Xilinx ML506 development board. The main novelty proposed in this paper is the SDR-based implementation of a computationally sustainable resource allocation algorithm for FSU (flexible frequency spectrum usage) on low-cost commercial FPGA platforms. The proposed implementation is competitive with respect to other ones … on a Virtex 5 FPGA. Experimental results illustrate that the selected core functionalities are effectively implementable with around 3% or less of the total FPGA computing resources.

  5. An implementation of a data-transmission pipelining algorithm on Imote2 platforms

    Science.gov (United States)

    Li, Xu; Dorvash, Siavash; Cheng, Liang; Pakzad, Shamim

    2011-04-01

    Over the past several years, wireless network systems and sensing technologies have developed significantly. This has resulted in the broad application of wireless sensor networks (WSNs) in many engineering fields, and in particular structural health monitoring (SHM). The movement of traditional SHM toward the new generation of SHM, which utilizes WSNs, relies on the advantages of this new approach, such as relatively low cost, ease of implementation, and the capability of onboard data processing and management. In the particular case of long-span bridge monitoring, a WSN should be capable of transmitting commands and measurement data over long network geometries in a reliable manner. While single-hop data transmission in such geometries requires a long radio range and consequently a high level of power supply, multi-hop communication may offer an effective and reliable way to transmit data across the network: the network relays data from a remote node to the base station via intermediary nodes. We have proposed a data-transmission pipelining algorithm to make effective use of the available bandwidth and to minimize the energy consumption and delay of the multi-hop communication protocol. This paper focuses on the implementation aspects of the pipelining algorithm on Imote2 platforms for SHM applications, describes its interaction with the underlying routing protocols, and presents solutions to various implementation issues of the proposed pipelining algorithm. Finally, the performance of the algorithm is evaluated based on the results of an experimental implementation.

  6. VIRTEX-5 Fpga Implementation of Advanced Encryption Standard Algorithm

    Science.gov (United States)

    Rais, Muhammad H.; Qasim, Syed M.

    2010-06-01

    In this paper, we present an implementation of the Advanced Encryption Standard (AES) cryptographic algorithm using a state-of-the-art Virtex-5 Field Programmable Gate Array (FPGA). The design is coded in the Very High Speed Integrated Circuit Hardware Description Language (VHDL). Timing simulation is performed to verify the functionality of the designed circuit. Performance is also evaluated in terms of throughput and area. The design, implemented on a Virtex-5 (XC5VLX50FFG676-3) FPGA, achieves a maximum throughput of 4.34 Gbps utilizing a total of 399 slices.

  7. Implementation of several mathematical algorithms to breast tissue density classification

    Science.gov (United States)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, since dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on calculations of intrinsic properties and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross-correlation and index Q) as categorization parameters. The algorithms were evaluated on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina - Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with the expert medical diagnoses, showing good performance. The implemented algorithms revealed a high potential to classify breasts into tissue density categories.

  8. Results of selected research projects of information and communication technology at RAG Deutsche Steinkohle; Ergebnisse ausgewaehlter Forschungsprojekte der Informations- und Kommunikationstechnik bei der RAG Deutsche Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Ostermann, Dirk [RAG Mining Solutions GmbH, Herne (Germany); Zentralbereich IT-Strategie, RAG Aktiengesellschaft, Herne (Germany); Skirde, Juergen; Bramsiepe, Heribert [RAG Deutsche Steinkohle AG, Herne (Germany). IT-Infrastruktur

    2009-10-01

    RAG Deutsche Steinkohle participated in two important research and development projects between 2006 and 2009. Important developments, such as RFID, a mobile WLAN camera and underground mobile radio, are described in this contribution. (orig.)

  9. Deutsches Krebsforschungszentrum Heidelberg. Research report 1997/1998

    International Nuclear Information System (INIS)

    1999-01-01

    The Deutsches Krebsforschungszentrum Heidelberg (DKFZ, German Cancer Research Center) publishes in alternating years the 'Research Report' and the 'Wissenschaftlicher Ergebnisbericht' (in German). Both volumes report on the present state of research activities of the DKFZ, as a National Research Center, to the funding federal and state authorities (Federal Republic of Germany, Land (state) Baden-Wuerttemberg). Furthermore, they are intended to inform colleagues and the scientifically interested public. Both reports are structured according to the center's eight research programs. (orig.) [de]

  10. Implementation of trigonometric function using CORDIC algorithms

    Science.gov (United States)

    Mokhtar, A. S. N.; Ayub, M. I.; Ismail, N.; Daud, N. G. Nik

    2018-02-01

    In 1959, Jack E. Volder presented a brand-new formula for the real-time solution of the equations arising in navigation systems. This new algorithm was the most beneficial replacement of the analog navigation system by a digital one. The CORDIC (COordinate Rotation DIgital Computer) algorithm is used for the rapid calculation of elementary functions such as trigonometric functions, multiplication, division and logarithms, as well as for various conversions, such as the conversion from rectangular to polar coordinates and conversions between binary-coded formats. Today the CORDIC algorithm has many applications in the fields of communication, signal processing, 3-D graphics, and others. This paper presents a trigonometric function implementation using the CORDIC algorithm in rotation mode for the circular coordinate system. The CORDIC technique is used to generate output angles in the range 0° to 90°, and an error analysis is given. The results show that the average percentage error is about 0.042% for angles between 0° and 90°, but rises to 45% at 90° and above; the method is therefore very accurate in the first quadrant. The mirror-properties method is used to find angles in the 2nd, 3rd and 4th quadrants.
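
    For reference, the rotation-mode iteration can be sketched in floating point (our illustration; a hardware version would use fixed-point shifts and a precomputed gain):

        import math

        def cordic_sin_cos(theta, iters=32):
            """Rotation-mode CORDIC in the circular coordinate system.
            theta in radians; intended for first-quadrant use."""
            angles = [math.atan(2.0 ** -i) for i in range(iters)]
            k = 1.0
            for i in range(iters):                 # accumulated gain correction
                k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = 1.0, 0.0, theta
            for i in range(iters):
                d = 1.0 if z >= 0 else -1.0        # rotate towards residual angle z
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return y * k, x * k                    # (sin, cos)

        s, c = cordic_sin_cos(math.radians(30))
        print(s, c)   # ~0.5, ~0.866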

  11. Implementation of the Grover search algorithm with Josephson charge qubits

    International Nuclear Information System (INIS)

    Zheng Xiaohu; Dong Ping; Xue Zhengyuan; Cao Zhuoliang

    2007-01-01

    A scheme for implementing the Grover search algorithm based on Josephson charge qubits is proposed, which would be a key step towards scaling more complex quantum algorithms and is very important for constructing a real quantum computer from Josephson charge qubits. The present scheme is simple but fairly efficient, and easily manipulated, because any two charge qubits can be selectively and effectively coupled by a common inductance. More manipulations can be carried out before decoherence sets in. Our scheme can be realized with current technology.
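
    Independently of the physical platform, Grover's algorithm is a short unitary iteration, so a state-vector simulation makes a useful sanity check. A hedged numpy sketch (ours) for n qubits and a single marked item:

        import numpy as np

        def grover(n_qubits, marked, iters=None):
            """State-vector simulation of Grover search for one marked index."""
            n = 2 ** n_qubits
            if iters is None:
                iters = int(np.floor(np.pi / 4 * np.sqrt(n)))
            psi = np.full(n, 1 / np.sqrt(n))       # uniform superposition
            for _ in range(iters):
                psi[marked] *= -1                  # oracle: phase-flip marked item
                psi = 2 * psi.mean() - psi         # diffusion: inversion about mean
            return psi

        psi = grover(n_qubits=4, marked=11)
        print(np.argmax(np.abs(psi) ** 2), np.abs(psi[11]) ** 2)  # 11, prob ~0.96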

  12. Infinitely oscillating wavelets and a efficient implementation algorithm based the FFT

    Directory of Open Access Journals (Sweden)

    Marcela Fabio

    2015-01-01

    In this work we present the design of an orthogonal wavelet that is infinitely oscillating, localized in time with decay 1/|t|^n, and band-limited. Its application leads to the decomposition of a signal into waves of well-defined instantaneous frequency. We also present the implementation algorithm for analysis and synthesis, based on the Fast Fourier Transform (FFT), with the same complexity as Mallat's algorithm.

  13. A multithreaded parallel implementation of a dynamic programming algorithm for sequence comparison.

    Science.gov (United States)

    Martins, W S; Del Cuvillo, J B; Useche, F J; Theobald, K B; Gao, G R

    2001-01-01

    This paper discusses the issues involved in implementing a dynamic programming algorithm for biological sequence comparison on a general-purpose parallel computing platform based on a fine-grain, event-driven, multithreaded program execution model. Fine-grain multithreading permits efficient exploitation of parallelism in this application, both by taking advantage of asynchronous point-to-point synchronization and communication with low overheads and by effectively tolerating latency through the overlapping of computation and communication. We have implemented our scheme on EARTH, a fine-grain event-driven multithreaded execution and architecture model which has been ported to a number of parallel machines with off-the-shelf processors. Our experimental results show that the dynamic programming algorithm can be efficiently implemented on EARTH systems with high performance (e.g., a speedup of 90 on 120 nodes), good programmability and reasonable cost.
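
    The dynamic programming kernel being parallelized is the classic alignment recurrence. A hedged serial sketch (ours; global alignment with a toy scoring scheme) shows the dependency pattern that makes anti-diagonals parallelizable:

        import numpy as np

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            """Global alignment score via the standard DP recurrence."""
            n, m = len(a), len(b)
            D = np.zeros((n + 1, m + 1))
            D[:, 0] = gap * np.arange(n + 1)
            D[0, :] = gap * np.arange(m + 1)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    # Each cell depends on its west, north and north-west
                    # neighbours; anti-diagonals are therefore independent.
                    D[i, j] = max(D[i - 1, j - 1] + s,
                                  D[i - 1, j] + gap,
                                  D[i, j - 1] + gap)
            return D[n, m]

        print(needleman_wunsch("GATTACA", "GCATGCU"))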

  14. Concurrent applicative implementations of nondeterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Salter, R

    1983-01-01

    The author introduces a methodology for utilizing concurrency in place of backtracking in the implementation of nondeterministic algorithms. This is achieved in an applicative setting through the use of the Friedman-Wise multiprogramming primitive frons, and a paradigm which views the action of nondeterministic algorithms as one of data-structure construction. The element-by-element nondeterminism arising from a linearized search is replaced by a control structure oriented towards constructing sets of partial computations. This point of view is facilitated by the use of suspensions, which allow control disciplines to be embodied as conceptual data structures that in reality manifest themselves only for purposes of control. He applies this methodology to the class of problems usually solved through simple backtracking (e.g. 'eight queens'), and to a problem presented by Lindstrom (1979) to illustrate the use of coroutine-controlled backtracking, producing backtrack-free solutions. The solution to the latter illustrates the coroutine capability of suspended structures, but also demonstrates a need for further investigation into resolving problems of process communication in applicative languages. 14 references.
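
    Python generators play a role loosely similar to suspensions here, letting partial solutions be extended on demand instead of undone by backtracking. A hedged sketch (ours, not from the paper) of eight queens as construction of sets of partial computations:

        def extend(partial, n):
            """Yield all safe one-queen extensions of a partial placement."""
            row = len(partial)
            for col in range(n):
                if all(col != c and abs(col - c) != row - r
                       for r, c in enumerate(partial)):
                    yield partial + (col,)

        def queens(n=8):
            """Grow sets of partial computations instead of backtracking."""
            layer = [()]                   # the set of partial placements
            for _ in range(n):
                layer = [q for p in layer for q in extend(p, n)]
            return layer

        print(len(queens(8)))   # 92 solutions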

  15. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are (1) detector fabrication inaccuracies, (2) non-linearity and variations in the read-out electronics, and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because the non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; moreover, due to the complicated calculation process and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based solely on an FPGA, has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe and ripple non-uniformity.
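
    As a hedged toy version of the temporal high-pass principle (ours; the paper's grayscale-mapping step is omitted), a per-pixel running mean can estimate the fixed-pattern offset, and subtracting it keeps the moving scene content:

        import numpy as np

        def thp_nuc(frames, alpha=0.05):
            """Temporal high-pass NUC sketch: a slow per-pixel low-pass tracks
            the fixed-pattern offset; the high-pass residue is the scene."""
            offset = np.zeros_like(frames[0], dtype=float)
            corrected = []
            for f in frames:
                offset = (1 - alpha) * offset + alpha * f  # per-pixel low-pass
                corrected.append(f - offset)               # high-pass output
            return corrected

        frames = [np.random.rand(64, 64) + 0.3 for _ in range(100)]  # biased sensor
        out = thp_nuc(frames)
        print(abs(out[-1].mean()))   # the residual offset shrinks over time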

  16. Decoding the Brain’s Algorithm for Categorization from its Neural Implementation

    Science.gov (United States)

    Mack, Michael L.; Preston, Alison R.; Love, Bradley C.

    2013-01-01

    Acts of cognition can be described at different levels of analysis: what behavior should characterize the act, what algorithms and representations underlie the behavior, and how the algorithms are physically realized in neural activity [1]. Theories that bridge levels of analysis offer more complete explanations by leveraging the constraints present at each level [2-4]. Despite the great potential for theoretical advances, few studies of cognition bridge levels of analysis. For example, formal cognitive models of category decisions accurately predict human decision making [5, 6], but whether the model algorithms and representations supporting category decisions are consistent with the underlying neural implementation remains unknown. This uncertainty is largely due to the hurdle of forging links between theory and brain [7-9]. Here, we tackle this critical problem by using brain responses to characterize the nature of the mental computations that support category decisions, evaluating two dominant, and opposing, models of categorization. We found that brain states during category decisions were significantly more consistent with latent model representations from exemplar theory [5] than from prototype theory [10, 11]. Representations of individual experiences, not abstractions of experiences, are critical for category decision making. Holding models accountable for both behavior and neural implementation provides a means of advancing more complete descriptions of the algorithms of cognition. PMID:24094852

  17. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Up to recent days the algorithms for numerical-analytical boundary elements method had been implemented with programs written in MATLAB environment language. Each program had a local character, i.e. used to solve a particular problem: calculation of beam, frame, arch, etc. Constructing matrices in these programs was carried out “manually” therefore being time-consuming. The research was purposed onto a reasoned choice of programming language for new CAD development, allows to implement algorithm of numerical analytical boundary elements method and to create visualization tools for initial objects and calculation results. Research conducted shows that among wide variety of programming languages the most efficient one for CAD development, employing the numerical analytical boundary elements method algorithm, is the Java language. This language provides tools not only for development of calculating CAD part, but also to build the graphic interface for geometrical models construction and calculated results interpretation.

  18. [Ulrike Plath. Esten und Deutsche in den baltischen Provinzen Russlands] / Olaf Mertelsmann

    Index Scriptorium Estoniae

    Mertelsmann, Olaf, 1969-

    2013-01-01

    Review of: Plath, Ulrike. Esten und Deutsche in den baltischen Provinzen Russlands: Fremdheitskonstruktionen, Lebenswelten, Kolonialphantasien 1750-1810 (Veröffentlichungen des Nordost-Instituts, Bd. 11). Harrassowitz, Wiesbaden 2011.

  19. Kinder Lernen Deutsch Materials Evaluation Project: Grades K-8.

    Science.gov (United States)

    American Association of Teachers of German.

    The Kinder Lernen Deutsch (Children Learn German) project, begun in 1987, is designed to promote German as a second language in grades K-8. The project is premised on the idea that the German program will contribute to the total development of the child and the child's personality. Included in this guide are a selection of recommended core…

  20. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Guliyev, E. [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands); Kavatsyuk, M., E-mail: m.kavatsyuk@rug.nl [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands); Lemmens, P.J.J.; Tambave, G.; Loehner, H. [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands)

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16-bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable to other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA makes it possible to construct an almost dead-time-free data acquisition system, which is successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggers, the timing performance, and event correlations.

  1. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    International Nuclear Information System (INIS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P.J.J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16-bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable to other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA makes it possible to construct an almost dead-time-free data acquisition system, which is successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggers, the timing performance, and event correlations.

  2. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate-energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification, and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory, with range-straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing-down approximation, with simple correction factors applied to the beam penumbra region, and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and of range-modifying device thickness and position is implicit in both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects of scattering in heterogeneous media. (author)

  3. Multi–GPU Implementation of Machine Learning Algorithm using CUDA and OpenCL

    Directory of Open Access Journals (Sweden)

    Jan Masek

    2016-06-01

    Using modern Graphics Processing Units (GPUs) has become very useful for computing complex and time-consuming processes, as GPUs provide high-performance computation capabilities at a good price. This paper deals with multi-GPU OpenCL and CUDA implementations of the k-Nearest Neighbor (k-NN) algorithm. The work compares the performance of the OpenCL and CUDA implementations, each of which is suitable for a different number of attributes. The proposed CUDA algorithm achieves an acceleration of up to 880x in comparison with a single-threaded CPU version. The common k-NN was modified to be faster when a lower number of k neighbors is set. The performance of the algorithm was verified with two dual-core NVIDIA GeForce GTX 690 GPUs and an Intel Core i7 3770 CPU at 4.1 GHz. Speed-ups were measured for one, two, three and four GPUs. We performed several tests with data sets containing up to 4 million elements with various numbers of attributes.
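
    The core of brute-force k-NN is one large distance computation, which is exactly the part that maps onto GPUs. A hedged CPU sketch in numpy (ours, not the paper's OpenCL/CUDA code):

        import numpy as np

        def knn_predict(train_x, train_y, query, k=5):
            """Brute-force k-NN: one distance matrix, top-k vote per query row."""
            # Squared Euclidean distances, shape (n_query, n_train)
            d2 = ((query[:, None, :] - train_x[None, :, :]) ** 2).sum(axis=2)
            nearest = np.argsort(d2, axis=1)[:, :k]   # k smallest per row
            votes = train_y[nearest]                  # labels of the neighbours
            return np.array([np.bincount(v).argmax() for v in votes])

        x = np.random.rand(1000, 8)                   # 8 attributes
        y = np.random.randint(0, 3, 1000)             # 3 classes
        print(knn_predict(x, y, np.random.rand(10, 8)))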

  4. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability of and need for processing SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. It is therefore important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid version of a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a Gauss-Seidel-type iteration. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternated minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel in the same iteration since they are mutually independent, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures: multicore CPU, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, our parallel algorithm performs better than the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
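
    The chessboard idea can be shown on a generic 5-point stencil. In the hedged sketch below (ours; a stand-in Poisson system, not the accumulation-of-residual-maps cost function), all red cells update together, then all black cells:

        import numpy as np

        def red_black_sweep(u, f, h=1.0):
            """One chessboard-ordered update for a 5-point Laplacian system.
            All 'red' cells are independent and update together; then 'black'."""
            y, x = np.mgrid[0:u.shape[0], 0:u.shape[1]]
            for colour in (0, 1):                    # 0 = red, 1 = black
                mask = ((x + y) % 2 == colour)
                mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                              + np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u[mask] = avg[mask] - 0.25 * h * h * f[mask]
            return u

        u = np.zeros((32, 32)); f = np.ones((32, 32))
        for _ in range(200):
            u = red_black_sweep(u, f)
        print(u[16, 16])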

  5. Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm

    Science.gov (United States)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.

    2011-01-01

    An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on time-series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at a resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces owing to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Owing to its generic nature and developed angular correction, MAIAC also performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.

  6. Review: Larissa Förster, Postkoloniale Erinnerungslandschaften. Wie Deutsche und Herero in Namibia des Kriegs von 1904 gedenken (2010)

    Directory of Open Access Journals (Sweden)

    Reinhart Kößler

    2010-01-01

    Review of the monograph: Larissa Förster (2010), Postkoloniale Erinnerungslandschaften. Wie Deutsche und Herero in Namibia des Kriegs von 1904 gedenken, Frankfurt am Main & New York: Campus, ISBN 978-3-593-39160-1, 391 pages.

  7. ifo Konjunkturprognose 2015/2016: Deutsche Wirtschaft im Aufschwung

    OpenAIRE

    Wollmershäuser, Timo; Nierhaus, Wolfgang; Berg, Tim Oliver; Breuer, Christian; Garnitz, Johanna; Grimme, Christian; Henzel, Steffen; Hristov, Atanas; Hristov, Nikolay; Meister, Wolfgang; Schröter, Felix; Steiner, Andreas; Wieland, Elisabeth; Wohlrabe, Klaus; Wolf, Anna

    2015-01-01

    The German economy is currently in a strong upswing. Real gross domestic product is expected to expand by 1.9% this year and by 1.8% next year. Private consumption remains the mainstay of the upswing, since the income prospects of private households are good owing to the continuing improvement of the labour market. However, the purchasing-power gains from the fall in oil prices are gradually fading, so that consumption momentum over the forecast period…

  8. Real-time recursive hyperspectral sample and band processing algorithm architecture and implementation

    CERN Document Server

    Chang, Chein-I

    2017-01-01

    This book explores recursive architectures in the design of progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.

  9. 'Vorwort' Themenheft Deutsche Sprache: 'Modifikation im Deutschen: Kontrastive Untersuchungen zur Nominalphrase'

    DEFF Research Database (Denmark)

    Gunkel, Lutz; Rijkhoff, Jan

    2010-01-01

    This thematic issue of the journal Deutsche Sprache brings together four contributions on a central topic of German grammar and text linguistics: the form and function of attribution structures in the noun phrase. Common to all contributions is the contrastive and/or functional…

  10. An Applied Methodology for the Use of "Deutsch, Erstes Buch."

    Science.gov (United States)

    Dimler, G. Richard

    Discussion of teaching methods used with the text, "Deutsch, Erstes Buch" by Hugo Mueller, focuses on practical approaches to the problem of teaching culture through the spoken language and the use of pattern practice. While concentrating on Chapter Eight, "In der Sommerfrische," discussion is presented in subdivisions characteristic of every…

  11. An Implementation and Detailed Analysis of the K-SVD Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2012-05-01

    K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of its atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
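
    For orientation, a minimal sketch of the two alternating K-SVD stages described above — sparse coding followed by an SVD-based atom update — is given below in Python/NumPy (an illustrative toy version, not the implementation analyzed in the paper):

        import numpy as np

        def omp(D, y, k):
            """Orthogonal matching pursuit: k-sparse code of y over dictionary D."""
            residual, idx = y.copy(), []
            for _ in range(k):
                idx.append(int(np.argmax(np.abs(D.T @ residual))))
                coeffs, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
                residual = y - D[:, idx] @ coeffs
            x = np.zeros(D.shape[1])
            x[idx] = coeffs
            return x

        def ksvd_step(D, Y, k):
            """One K-SVD iteration: sparse-code every signal (columns of Y),
            then update each atom and its coefficients from the rank-1 SVD of
            the residual restricted to the signals that use that atom."""
            X = np.column_stack([omp(D, y, k) for y in Y.T])
            for j in range(D.shape[1]):
                users = np.nonzero(X[j])[0]
                if users.size == 0:
                    continue
                E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
                U, s, Vt = np.linalg.svd(E, full_matrices=False)
                D[:, j], X[j, users] = U[:, 0], s[0] * Vt[0]
            return D, X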

  12. A complete implementation of the conjugate gradient algorithm on a reconfigurable supercomputer

    International Nuclear Information System (INIS)

    Dubois, David H.; Dubois, Andrew J.; Connor, Carolyn M.; Boorman, Thomas M.; Poole, Stephen W.

    2008-01-01

    The conjugate gradient is a prominent iterative method for solving systems of sparse linear equations. Large-scale scientific applications often utilize a conjugate gradient solver at their computational core. In this paper we present a field programmable gate array (FPGA) based implementation of a double precision, non-preconditioned, conjugate gradient solver for finite-element or finite-difference methods. Our work utilizes the SRC Computers, Inc. MAPStation hardware platform along with the 'Carte' software programming environment to ease the programming workload when working with the hybrid (CPU/FPGA) environment. The implementation is designed to handle large sparse matrices of up to order N x N where N <= 116,394, with up to 7 non-zero, 64-bit elements per sparse row. This implementation utilizes an optimized sparse matrix-vector multiply operation which is critical for obtaining high performance. Direct parallel implementations of loop unrolling and loop fusion are utilized to extract performance from the various vector/matrix operations. Rather than utilize the FPGA devices as function off-load accelerators, our implementation uses the FPGAs to implement the core conjugate gradient algorithm. Measured run-time performance data is presented comparing the FPGA implementation to a software-only version, showing that the FPGA can outperform processors running at up to 30x the clock rate. In conclusion we take a look at the new SRC-7 system and estimate the performance of this algorithm on that architecture.
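
    For reference, the textbook non-preconditioned conjugate gradient iteration that such a design maps into hardware looks as follows (a NumPy sketch, not the SRC MAPStation code):

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
            """Non-preconditioned CG for a symmetric positive-definite A; the
            dominant cost per iteration is the (sparse) product A @ p, which
            is the operation the FPGA design optimizes."""
            x = np.zeros(len(b))
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter or len(b)):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x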

  13. Hardware Implementation of a Modified Delay-Coordinate Mapping-Based QRS Complex Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Andrej Zemva

    2007-01-01

    We present a modified delay-coordinate mapping-based QRS complex detection algorithm, suitable for hardware implementation. In the original algorithm, the phase-space portrait of an electrocardiogram signal is reconstructed in a two-dimensional plane using the method of delays. Geometrical properties of the obtained phase-space portrait are exploited for QRS complex detection. In our solution, a bandpass filter is used for ECG signal prefiltering and an improved method for detection threshold-level calculation is utilized. We developed the algorithm on the MIT-BIH Arrhythmia Database (sensitivity of 99.82% and positive predictivity of 99.82%) and tested it on the long-term ST database (sensitivity of 99.72% and positive predictivity of 99.37%). Our algorithm outperforms several well-known QRS complex detection algorithms, including the original algorithm.
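
    The method of delays mentioned above is straightforward to express in software; the following Python sketch (illustrative only, using an assumed distance-from-diagonal feature rather than the paper's exact geometric measure) reconstructs the two-dimensional portrait and derives a simple detection feature:

        import numpy as np

        def delay_portrait(x, tau):
            """Two-dimensional phase-space portrait by the method of delays:
            the points (x[n], x[n + tau])."""
            return np.column_stack((x[:-tau], x[tau:]))

        def qrs_feature(x, tau=4, win=20):
            """Sliding-window feature: QRS complexes trace wide loops in the
            portrait, so the distance of the points from the diagonal rises
            sharply; thresholding this feature marks candidate beats."""
            pts = delay_portrait(x, tau)
            d = np.abs(pts[:, 1] - pts[:, 0])
            return np.array([d[max(0, i - win):i + 1].max() for i in range(len(d))])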

  14. FPGA implementation of ICA algorithm for blind signal separation and adaptive noise canceling.

    Science.gov (United States)

    Kim, Chang-Min; Park, Hyung-Min; Kim, Taesu; Choi, Yoon-Kyung; Lee, Soo-Young

    2003-01-01

    A field programmable gate array (FPGA) implementation of an independent component analysis (ICA) algorithm is reported for blind signal separation (BSS) and adaptive noise canceling (ANC) in real time. In order to provide enormous computing power for ICA-based algorithms with multipath reverberation, a special digital processor is designed and implemented in FPGA. The chip design fully utilizes a modular concept, and several chips may be put together for complex applications with a large number of noise sources. Experimental results with a fabricated test board are reported for ANC only, BSS only, and simultaneous ANC/BSS, which demonstrates successful speech enhancement in real environments in real time.
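
    As a point of reference for the kind of computation the FPGA must provide, a common ICA update rule (natural-gradient ICA with a tanh score; the record does not specify that this exact rule was used) can be sketched in a few lines of NumPy:

        import numpy as np

        def ica_natural_gradient(X, lr=0.01, iters=200):
            """Natural-gradient ICA with a tanh score (a common rule for
            super-Gaussian sources such as speech). X: (n_sources, n_samples),
            assumed zero-mean and roughly whitened."""
            n, m = X.shape
            W = np.eye(n)
            for _ in range(iters):
                Y = W @ X
                score = np.tanh(Y)
                W += lr * (np.eye(n) - (score @ Y.T) / m) @ W
            return W @ X, W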

  15. Purgatorio - A new implementation of the Inferno algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, B; Sonnad, V; Sterne, P; Isaacs, W

    2005-03-29

    For astrophysical applications, as well as modeling laser-produced plasmas, there is a continual need for equation-of-state data over a wide domain of physical conditions. This paper presents algorithmic aspects for computing the Helmholtz free energy of plasma electrons for temperatures spanning from a few Kelvin to several keV, and densities ranging from essentially isolated ion conditions to such large compressions that most bound orbitals become delocalized. The objective is high-precision results in order to compute pressure and other thermodynamic quantities by numerical differentiation. This approach has the advantage that internal thermodynamic self-consistency is ensured, regardless of the specific physical model, but at the cost of very stringent numerical tolerances for each operation. The computational aspects we address in this paper are faced by any model that relies on input from the quantum mechanical spectrum of a spherically symmetric Hamiltonian operator. The particular physical model we employ is that of INFERNO: a spherically averaged ion embedded in jellium. An overview of PURGATORIO, a new implementation of the INFERNO equation of state model, is presented. The new algorithm emphasizes a novel decimation scheme for automatically resolving the structure of the continuum density of states, circumventing limitations of the pseudo-R matrix algorithm previously utilized.

  16. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demand of large-scale problems. The memory demand of MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. Tally data dominates the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a 'divide and rule' strategy: problems are divided into sub-domains to be dealt with separately, and rules are established to make sure the overall results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced.

  17. On Implementing a Homogeneous Interior-Point Algorithm for Nonsymmetric Conic Optimization

    DEFF Research Database (Denmark)

    Skajaa, Anders; Jørgensen, John Bagterp; Hansen, Per Christian

    Based on earlier work by Nesterov, an implementation of a homogeneous infeasible-start interior-point algorithm for solving nonsymmetric conic optimization problems is presented. Starting each iteration from (the vicinity of) the central path, the method computes (nearly) primal-dual symmetric approximate tangent directions, followed by a purely primal centering procedure to locate the next central primal-dual point. Features of the algorithm include that it makes use only of the primal barrier function, that it is able to detect infeasibilities in the problem, and that no phase-I method is needed…

  18. Social Media Reifegradmodell für die deutsche Versicherungswirtschaft

    OpenAIRE

    Füllgraf, Nicola; Völler, Michaele

    2012-01-01

    Social media are by now also used by many German insurers for communicating with their customers and prospects. The intensity and the success, however, differ significantly. This article presents a maturity model for the German insurance industry that rates the social media maturity of an insurance company on the basis of robust key performance indicators. Furthermore, a first assessment of the maturity…

  19. Deutsches Krebsforschungszentrum Heidelberg. Report on scientific results 2002-2003

    International Nuclear Information System (INIS)

    2004-01-01

    The Deutsches Krebsforschungszentrum Heidelberg (DKFZ, German Cancer Research Center) publishes in alternating years the "Wissenschaftlicher Ergebnisbericht" (in German) and the "Research Report" (in English). Both volumes report on the present state of research activities of the DKFZ, as a National Research Center, to the funding federal and state authorities [Federal Republic of Germany and the Land (state) of Baden-Wuerttemberg]. The report is structured according to the center's six research programs.

  20. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    International Nuclear Information System (INIS)

    Li Yupeng; Deutsch, Clayton V.

    2012-01-01

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate, using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
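
    The core IPF iteration — alternately rescaling a probability table to match each imposed marginal — can be sketched as follows (a minimal bivariate Python example; the article's implementation additionally uses sparse-matrix bookkeeping for higher-order tables):

        import numpy as np

        def ipf(p, row_marginal, col_marginal, iters=100):
            """Iterative proportional fitting of a bivariate probability table
            p to imposed row and column marginals: alternately rescale rows
            and columns until both constraint sets are (approximately) met."""
            p = p / p.sum()
            for _ in range(iters):
                p *= (row_marginal / p.sum(axis=1))[:, None]
                p *= (col_marginal / p.sum(axis=0))[None, :]
            return p

        # Example: fit a uniform 2x2 table to marginals (0.7, 0.3) and (0.6, 0.4)
        p0 = np.full((2, 2), 0.25)
        print(ipf(p0, np.array([0.7, 0.3]), np.array([0.6, 0.4])))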

  1. VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.

    2010-01-01

    The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with the running on-line

  2. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source-code is available as an

  3. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  5. Implementation of an algorithm for cylindrical object identification using range data

    Science.gov (United States)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
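
    The Hough transform step referred to above admits a compact reference implementation; the sketch below (Python/NumPy, with illustrative parameter choices) accumulates votes in (rho, theta) space for a set of edge points:

        import numpy as np

        def hough_lines(edge_points, rho_res=1.0, theta_bins=180):
            """Vote accumulation in (rho, theta) space for edge points (x, y);
            peaks in the accumulator correspond to straight line segments."""
            pts = np.asarray(edge_points, dtype=float)
            thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
            rho_max = np.hypot(pts[:, 0].max(), pts[:, 1].max())
            n_rho = int(np.ceil(2.0 * rho_max / rho_res)) + 1
            acc = np.zeros((n_rho, theta_bins), dtype=int)
            for x, y in pts:
                rhos = x * np.cos(thetas) + y * np.sin(thetas)
                bins = np.round((rhos + rho_max) / rho_res).astype(int)
                acc[bins, np.arange(theta_bins)] += 1
            return acc, thetas, rho_max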

  6. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  7. GPGPU Implementation of a Genetic Algorithm for Stereo Refinement

    Directory of Open Access Journals (Sweden)

    Álvaro Arranz

    2015-03-01

    During the last decade, general-purpose computing on graphics processing units (GPGPU) has turned out to be a useful tool for speeding up many scientific calculations. Computer vision is known to be one of the fields with greater penetration of these new techniques. This paper explores the advantages of using a GPGPU implementation to speed up a genetic algorithm used for stereo refinement. The main contribution of this paper is analyzing which genetic operators take advantage of a parallel approach and the description of an efficient state-of-the-art implementation for each one. As a result, speed-ups close to x80 can be achieved, demonstrating this to be the only way of achieving close to real-time performance.

  8. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution back projection algorithms. However, PET image reconstruction based on the EM algorithm is computationally burdensome for today's single-processor systems. In addition, a large memory is required for the storage of the image, projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed performance of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable with a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance.
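
    The EM (MLEM) update that is being parallelized has a simple closed form — a forward projection, a measured-to-estimated ratio, and a normalized back projection — sketched here in NumPy (illustrative; the paper distributes these products across the DSP array):

        import numpy as np

        def mlem(A, y, iters=50):
            """MLEM update for emission tomography: A is the system
            (probability) matrix, y the measured projections. Each iteration
            is one forward projection, a measured/estimated ratio, and a
            normalized back projection."""
            x = np.ones(A.shape[1])
            sens = np.maximum(A.sum(axis=0), 1e-12)   # per-voxel sensitivity
            for _ in range(iters):
                ratio = y / np.maximum(A @ x, 1e-12)  # avoid division by zero
                x *= (A.T @ ratio) / sens
            return x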

  9. An Algorithm of an X-ray Hit Allocation to a Single Pixel in a Cluster and Its Test-Circuit Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Deptuch, G. W. [AGH-UST, Cracow; Fahim, F. [Fermilab; Grybos, P. [AGH-UST, Cracow; Hoff, J. [Fermilab; Maj, P. [AGH-UST, Cracow; Siddons, D. P. [Brookhaven; Kmon, P. [AGH-UST, Cracow; Trimpl, M. [Fermilab; Zimmerman, T. [Fermilab

    2017-05-06

    An on-chip implementable algorithm for allocating an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparisons of peak amplitudes of pulses within an active neighborhood and, finally, latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels into one virtual pixel that recovers composite signals, and event-driven strobes that control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals that are exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32×32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests are given in the paper, assessing the physical implementation of the algorithm.

  10. Two Thematic Units for the Middle School Curriculum: An Initiative by the "Kinder lernen Deutsch" Steering Committee's Writing Team

    Science.gov (United States)

    Busch, Iris; Freimann-Cavanaugh, Corinna; Eichler, Ester

    2009-01-01

    The Kinder lernen Deutsch Committee (KLD) is a standing committee of the AATG that has existed since 1987 and that was originally charged to support the advocacy of German in grades K-8. With generous funding by the Ständige Arbeitsgruppe Deutsch als Fremdsprache (StADaF) from the German government and the Goethe-Institut, the Kinder lernen…

  11. Interkulturelle kommunikative Kompetenz – ein Versuch der Operationalisierung aus dem Fach Deutsch an der dänischen Lehrerausbildung

    DEFF Research Database (Denmark)

    Bjerre, Kirsten; Daryai-Hansen, Petra

    2017-01-01

    With this article we attempt to develop the concept of 'intercultural communicative competence' theoretically. We first pursue the question of why the subject German as a foreign language should work with intercultural communicative competence and which challenges arise in doing so. We then present our model of intercultural communicative competence, which we developed for the subject of German in Danish teacher education on the basis of a model by Michael Byram from 1997. Byram's model of intercultural… …first the concepts of 'competence', 'culture', 'communicative competence' and 'intercultural competence'. We then make the dimensions of the model concrete using a teaching example for the subject of German in the Danish comprehensive school ("Folkeskole"). Finally, we sketch how our…

  12. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    International Nuclear Information System (INIS)

    Tian, Zhen; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B.; Peng, Fei

    2015-01-01

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is

  14. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG) algorithm. The new method replaces the arctangent with a slope computation and the classical interpolation-based magnitude allocation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs), by considerably reducing the area (thus increasing the level of parallelism) while maintaining classification accuracy very close to that of the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
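
    The arctangent-free binning idea can be illustrated as follows: with the gradient folded so that gx >= 0, angle order equals slope order, so the bin is found by comparing gy against gx times precomputed tangent boundaries. A scalar Python sketch (illustrative only, not the paper's FPGA datapath):

        import math

        def orientation_bin(gx, gy, n_bins=9):
            """Unsigned-gradient orientation bin found without an arctangent:
            precomputed tangent boundaries turn the bin search into
            multiplications and comparisons only."""
            if gx < 0:                     # unsigned gradient: fold by 180 degrees
                gx, gy = -gx, -gy
            if gx == 0:                    # vertical edge: place in the last bin
                return n_bins - 1
            bin_id = 0
            for k in range(1, n_bins):     # boundaries at (-90 + 180*k/n) degrees
                t = math.tan(math.radians(180.0 * k / n_bins - 90.0))
                if gy >= gx * t:           # gx > 0 preserves the inequality
                    bin_id = k
            return bin_id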

  15. Power Analysis of Energy Efficient DES Algorithm and Implementation on 28nm FPGA

    DEFF Research Database (Denmark)

    Thind, Vandana; Pandey, Bishwajeet; Hussain, Dil muhammed Akbar

    2016-01-01

    In this work, we have carried out a power analysis of the Data Encryption Standard (DES) algorithm using the Xilinx ISE software development kit. We have analyzed the amount of power utilized by selected components on the board, i.e., the FPGA Artix-7, on which the DES algorithm is implemented. The components taken into consideration are clock power, logic power, signals power, IOs power, leakage power and supply power (dynamic and quiescent). We have used four different WLAN frequencies (2.4 GHz, 3.6 GHz, 4.9 GHz, and 5.9 GHz) and four different IO standards (HSTL-I, HSTL-II, HSTL-II-18, HSTL-I-18) for the power analysis. We have achieved a 13-47% saving in power at the different frequencies with the different energy-efficient HSTL IO standards. We calculated the percentage change in the IO power with respect to the mean values of IO power at the four frequencies, and noted a minimum of -37.5% and a maximum of +35…

  16. Implementation of a parallel algorithm for spherical SN calculations on the IBM 3090

    International Nuclear Information System (INIS)

    Haghighat, A.; Lawrence, R.D.

    1989-01-01

    Parallel SN algorithms based on domain decomposition in angle are straightforward to develop in Cartesian geometry because the computation of the angular fluxes for a specific discrete ordinate can be performed independently of all other angles. This is not the case for curvilinear geometries, where the angular redistribution component of the discretized streaming operator results in coupling between angular fluxes along adjacent discrete ordinates. Previously, the authors developed a parallel algorithm for SN calculations in spherical geometry and examined its iterative convergence for criticality and detector problems with differing scattering/absorption ratios. In this paper, the authors describe the implementation of the algorithm on an IBM 3090 Model 400 (four processors) and present computational results illustrating the efficiency of the algorithm relative to serial execution.

  17. An improved non-uniformity correction algorithm and its GPU parallel implementation

    Science.gov (United States)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which often leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed, built around a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained by minimizing the local Gaussian curvature and the mean curvature of the image surface, respectively. Then, a guided filter is utilized to combine these two parts into an estimate of the spatial low-frequency component. Finally, this SLP component is fed into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm can reduce the non-uniformity without losing detail. A GPU-based parallel implementation that runs 150 times faster than the CPU version is also presented, showing that the proposed algorithm has great potential for real-time application.

  18. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are called finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics: weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, calculated using techniques such as pair-database creation, normalization (with maximum-likelihood normalization) and parameter optimization (with Expectation-Maximization, EM). These techniques are intrinsically costly to compute, all the more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm for learning conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were carried out with the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable: execution times are reduced considerably when the data size parameter is increased. A further experiment varied the precision parameter; here, too, the parallel algorithm yields smaller execution times. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied; the speedup increases considerably as more threads are used, but it converges for thread counts equal to or greater than 16.

  19. A parallel implementation of a maximum entropy reconstruction algorithm for PET images in a visual language

    International Nuclear Information System (INIS)

    Bastiens, K.; Lemahieu, I.

    1994-01-01

    The application of a maximum entropy reconstruction algorithm to PET images requires a lot of computing resources. A parallel implementation could seriously reduce the execution time. However, programming a parallel application is still a non-trivial task, requiring specialized expertise. In this paper a programming environment based on a visual programming language is used for a parallel implementation of the reconstruction algorithm. This programming environment allows less experienced programmers to exploit the performance of multiprocessor systems. (authors)

  20. A GPU implementation of a track-repeating algorithm for proton radiotherapy dose calculations

    International Nuclear Information System (INIS)

    Yepes, Pablo P; Mirkovic, Dragan; Taddei, Phillip J

    2010-01-01

    An essential component in proton radiotherapy is the algorithm to calculate the radiation dose to be delivered to the patient. The most common dose algorithms are fast but they are approximate analytical approaches. However their level of accuracy is not always satisfactory, especially for heterogeneous anatomical areas, like the thorax. Monte Carlo techniques provide superior accuracy; however, they often require large computation resources, which render them impractical for routine clinical use. Track-repeating algorithms, for example the fast dose calculator, have shown promise for achieving the accuracy of Monte Carlo simulations for proton radiotherapy dose calculations in a fraction of the computation time. We report on the implementation of the fast dose calculator for proton radiotherapy on a card equipped with graphics processor units (GPUs) rather than on a central processing unit architecture. This implementation reproduces the full Monte Carlo and CPU-based track-repeating dose calculations within 2%, while achieving a statistical uncertainty of 2% in less than 1 min utilizing one single GPU card, which should allow real-time accurate dose calculations.

  1. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.

    Science.gov (United States)

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J

    2010-03-01

    PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.

  2. Low-Cost Super-Resolution Algorithms Implementation Over a HW/SW Video Compression Platform

    Directory of Open Access Journals (Sweden)

    Llopis Rafael Peset

    2006-01-01

    Two approaches are presented in this paper to improve the quality of digital images beyond the sensor resolution using super-resolution techniques: iterative super-resolution (ISR) and noniterative super-resolution (NISR) algorithms. The results show important improvements in image quality, assuming that sufficient sample data and a reasonable amount of aliasing are available in the input images. These super-resolution algorithms have been implemented on a codesign video compression platform developed by Philips Research, with minimal changes to the overall hardware architecture. In this way, a novel and feasible low-cost implementation has been obtained by using the resources found in a generic hybrid video encoder. Although a specific video codec platform has been used, the methodology presented in this paper is easily extendable to other video encoder architectures. Finally, a comparison in terms of memory, computational load, and image quality for both algorithms, as well as some general statements about the final impact of the sampling process on the quality of the super-resolved (SR) image, are also presented.

  3. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications.

    Science.gov (United States)

    D'Angelo, Gianni; Rampone, Salvatore

    2014-01-01

    The huge quantity of data produced in biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears to be a magic bullet in this challenge. However, several hard-to-solve parallelization and load-balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general-purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed iteratively by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; these conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), forming a probability distribution that guides the selection of the term literals. The great versatility that characterizes it makes U-BRAIN applicable in many of the fields in which there are data to be analyzed. However, the memory and the execution time required are of order O(n^3) and O(n^5), respectively, and so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions that lead us towards an implementation of the U-BRAIN algorithm on parallel computers. First we give a dynamic programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use mass memory, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of

  4. Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations.

    Science.gov (United States)

    Chen, Fangyue; Chen, Guanrong Ron; He, Guolong; Xu, Xiubin; He, Qinbin

    2009-10-01

    Universal perceptron (UP), a generalization of Rosenblatt's perceptron, is considered in this paper, which is capable of implementing all Boolean functions (BFs). In the classification of BFs, there are: 1) linearly separable Boolean function (LSBF) class, 2) parity Boolean function (PBF) class, and 3) non-LSBF and non-PBF class. To implement these functions, UP takes different kinds of simple topological structures in which each contains at most one hidden layer along with the smallest possible number of hidden neurons. Inspired by the concept of DNA sequences in biological systems, a novel learning algorithm named DNA-like learning is developed, which is able to quickly train a network with any prescribed BF. The focus is on performing LSBF and PBF by a single-layer perceptron (SLP) with the new algorithm. Two criteria for LSBF and PBF are proposed, respectively, and a new measure for a BF, named nonlinearly separable degree (NLSD), is introduced. In the sense of this measure, the PBF is the most complex one. The new algorithm has many advantages including, in particular, fast running speed, good robustness, and no need of considering the convergence property. For example, the number of iterations and computations in implementing the basic 2-bit logic operations such as AND, OR, and XOR by using the new algorithm is far smaller than the ones needed by using other existing algorithms such as error-correction (EC) and backpropagation (BP) algorithms. Moreover, the synaptic weights and threshold values derived from UP can be directly used in designing of the template of cellular neural networks (CNNs), which has been considered as a new spatial-temporal sensory computing paradigm.

  5. Maximum entropy algorithm and its implementation for the neutral beam profile measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A tomography algorithm to maximize the entropy of image using Lagrangian multiplier technique and conjugate gradient method has been designed for the measurement of 2D spatial distribution of intense neutral beams of KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation has been implemented to test the reconstruction quality of given beam profiles. This algorithm has the good applicability for sparse projection data and thus, can be used for the neutral beam tomography. 8 refs., 3 figs. (Author)
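
    A minimal version of the constrained entropy maximization can be written down directly from the stationarity condition of the Lagrangian, f = exp(-1 - Aᵀλ); the sketch below (Python/NumPy, illustrative only) adjusts the multipliers by plain gradient steps on the constraint residual, where the authors use a conjugate gradient search instead:

        import numpy as np

        def maxent_reconstruct(A, g, iters=2000, lr=0.05):
            """Maximum-entropy image from projections: maximize -sum(f*ln f)
            subject to A @ f = g. The Lagrangian is stationary at
            f = exp(-1 - A.T @ lam); lam is driven to shrink the residual
            between computed and measured projections."""
            lam = np.zeros(A.shape[0])
            for _ in range(iters):
                f = np.exp(-1.0 - A.T @ lam)
                lam += lr * (A @ f - g)
            return np.exp(-1.0 - A.T @ lam)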

  7. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the …

  8. 2008 Winter meeting of the Deutsches Atomforum: opening address

    International Nuclear Information System (INIS)

    Hohlefelder, W.

    2008-01-01

    It has always been a tradition at the Winter Meeting of the Deutsches Atomforum to review the status of nuclear power in the world, in Europe and, of course, in Germany. On the global and European scenes, nuclear power is experiencing an upswing, while it continues to be blocked in Germany. Given the pressing issues of climate protection, continuity of energy supply, and the prices of energy resources, the future of nuclear power can well be seen in an optimistic light. The EU Commission recognizes the potential of nuclear power for a sustainable energy mix; the mood of the German public is shifting; and even media known for their critical attitude to nuclear power are now clamoring for an unbiased discussion of the issue. The ideological ban on thinking is waning. There will be a reassessment of nuclear power also in Germany because of the realities to be faced. If you really want to protect the climate, you cannot exclude the nuclear power option. After all, this is not a matter of confrontation designed to divide society; such times are past and gone for nuclear power. There is need for a factual dialog. We extend a sincere invitation to join in this dialog, and we want to contribute to it. After all, this is the true purpose of the Deutsches Atomforum, to which all of us feel committed. (orig.)

  10. Chernobyl reactor accident. A documentation submitted by the Deutsche Welle radio station. Der Fall Tschernobyl. Eine Dokumentation der Deutschen Welle

    Energy Technology Data Exchange (ETDEWEB)

    1986-01-01

    The documentation abstracted here contains a complete survey of the broadcasts transmitted by the Russian-language service of the Deutsche Welle radio station between April 28 and May 15, 1986, on the occasion of the Chernobyl reactor accident. Extracts of the notable Eastern and Western reactions to the broadcasts of the Deutsche Welle are also provided.

  11. Experimental implementation of a quantum random-walk search algorithm using strongly dipolar coupled spins

    International Nuclear Information System (INIS)

    Lu Dawei; Peng Xinhua; Du Jiangfeng; Zhu Jing; Zou Ping; Yu Yihua; Zhang Shanmin; Chen Qun

    2010-01-01

    An important quantum search algorithm based on the quantum random walk performs an oracle search on a database of N items with O(√N) calls, yielding a speedup similar to the Grover quantum search algorithm. The algorithm was implemented on a quantum information processor of three-qubit liquid-crystal nuclear magnetic resonance (NMR) for the case of finding 1 out of 4, and tomography of the diagonal elements of all the final density matrices was completed with comprehensible one-dimensional NMR spectra. The experimental results agree well with the theoretical predictions.

  12. High-frequency asymptotics of the local vertex function. Algorithmic implementations

    Energy Technology Data Exchange (ETDEWEB)

    Tagliavini, Agnese; Wentzell, Nils [Institut fuer Theoretische Physik, Eberhard Karls Universitaet, 72076 Tuebingen (Germany); Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Li, Gang; Rohringer, Georg; Held, Karsten; Toschi, Alessandro [Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Taranto, Ciro [Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Max Planck Institute for Solid State Research, D-70569 Stuttgart (Germany); Andergassen, Sabine [Institut fuer Theoretische Physik, Eberhard Karls Universitaet, 72076 Tuebingen (Germany)

    2016-07-01

    Local vertex functions are a crucial ingredient of several forefront many-body algorithms in condensed matter physics. However, the full treatment of their frequency dependence poses a huge limitation to the numerical performance. A significant advancement requires an efficient treatment of the high-frequency asymptotic behavior of the vertex functions. We here provide a detailed diagrammatic analysis of the high-frequency asymptotic structures and their physical interpretation. Based on these insights, we propose a frequency parametrization, which captures the whole high-frequency asymptotics for arbitrary values of the local Coulomb interaction and electronic density. We present its algorithmic implementation in many-body solvers based on parquet-equations as well as functional renormalization group schemes and assess its validity by comparing our results for the single impurity Anderson model with exact diagonalization calculations.

  13. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

    International Nuclear Information System (INIS)

    Sheikh, N.M.; Usman, S.R.; Fatima, S.

    2002-01-01

    Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research in bit-rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, though complex, techniques for bit-rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit-rate reduction of 1:3 is achieved for better than toll quality speech, while a reduction of 1.16 is possible for the speech quality required in military applications. (author)
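
    The frame-level computation at the heart of such a coder — autocorrelation followed by the Levinson-Durbin recursion for the predictor coefficients — is sketched below in NumPy (illustrative only; on a DSP such as the DSP32C this reduces to tight multiply-accumulate loops):

        import numpy as np

        def lpc(frame, order=10):
            """LPC coefficients by the autocorrelation method with the
            Levinson-Durbin recursion, the per-frame core of an LPC coder."""
            n = len(frame)
            r = np.correlate(frame, frame, mode='full')[n - 1:n + order]
            a = np.zeros(order + 1)
            a[0], err = 1.0, r[0]
            for i in range(1, order + 1):
                acc = r[i] + a[1:i] @ r[i - 1:0:-1]
                k = -acc / err                 # reflection coefficient
                a[1:i] += k * a[i - 1:0:-1]    # symmetric update of a[1..i-1]
                a[i] = k
                err *= 1.0 - k * k             # prediction-error update
            return a, err

        # Example: a 30 ms frame of a 440 Hz tone sampled at 8192 Hz
        t = np.arange(246) / 8192.0
        coeffs, err = lpc(np.sin(2 * np.pi * 440 * t) * np.hamming(246))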

  14. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    Science.gov (United States)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    An efficient algorithm for blind image deconvolution and its high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations are modularization of the structure for implementation feasibility, reduction of the data computation and of the dependency on the 2D-FFT/IFFT, and acceleration of the power operation by a segmented look-up table. On this basis, Fast SeDDaRA is proposed, specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and that the data throughput of the image restoration system exceeds 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time applications.

  15. GPU implementations of online track finding algorithms at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Herten, Andreas; Stockmanns, Tobias; Ritman, James [Institut fuer Kernphysik, Forschungszentrum Juelich GmbH (Germany); Adinetz, Andrew; Pleiter, Dirk [Juelich Supercomputing Centre, Forschungszentrum Juelich GmbH (Germany); Kraus, Jiri [NVIDIA GmbH (Germany); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA experiment is a hadron physics experiment that will investigate antiproton annihilation in the charm quark mass region. The experiment is now being constructed as one of the main parts of the FAIR facility. At an event rate of 2 × 10^7/s, a data rate of 200 GB/s is expected. A reduction of three orders of magnitude is required in order to save the data for further offline analysis. Since signal and background processes at PANDA have similar signatures, no hardware-level trigger is foreseen for the experiment. Instead, a fast online event filter takes its place. We investigate the possibility of using graphics processing units (GPUs) for the online tracking part of this task. The algorithms investigated are a Hough transform, a track finder based on Riemann surfaces, and the novel, PANDA-specific Triplet Finder. This talk shows selected advances in the implementations as well as performance evaluations of the GPU tracking algorithms to be used at the PANDA experiment.
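
    To illustrate the first of the algorithms listed above, a minimal straight-line Hough transform over hit coordinates is sketched below in Python; the binning and the toy hits are illustrative assumptions, and the actual PANDA track finders handle curved tracks on GPUs.

      import numpy as np

      def hough_lines(hits, n_theta=180, n_rho=200, rho_max=50.0):
          """Vote in (theta, rho) space: rho = x*cos(theta) + y*sin(theta)."""
          thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
          acc = np.zeros((n_theta, n_rho), dtype=np.int32)
          for x, y in hits:
              rho = x * np.cos(thetas) + y * np.sin(thetas)
              bins = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
              ok = (bins >= 0) & (bins < n_rho)
              acc[np.arange(n_theta)[ok], bins[ok]] += 1   # one vote per theta
          return acc, thetas

      hits = [(1.0, 2.1), (2.0, 4.0), (3.0, 6.1), (4.0, 7.9)]  # roughly y = 2x
      acc, thetas = hough_lines(hits)
      i, j = np.unravel_index(acc.argmax(), acc.shape)   # strongest line candidate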

  16. Kodiak: An Implementation Framework for Branch and Bound Algorithms

    Science.gov (United States)

    Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas

    2015-01-01

    Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
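
    The generic branch-and-bound pattern with interval bounds can be sketched in a few lines; the Python below hand-rolls an interval enclosure for a single toy function and is only a stand-in for Kodiak's formally verified enclosure methods.

      def f_interval(lo, hi):
          """Interval enclosure of f(x) = x*x - 2*x on [lo, hi]."""
          sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
          sq_hi = max(lo * lo, hi * hi)
          return sq_lo - 2 * hi, sq_hi - 2 * lo    # interval subtraction

      def branch_and_bound(lo, hi, tol=1e-6):
          """Bound the global minimum of f on [lo, hi] by bisection and pruning."""
          best_hi = float("inf")         # proven upper bound on the minimum
          boxes = [(lo, hi)]
          while boxes:
              a, b = boxes.pop()
              f_lo, f_hi = f_interval(a, b)
              if f_lo > best_hi:         # box cannot contain the minimum: prune
                  continue
              best_hi = min(best_hi, f_hi)
              if b - a > tol:            # branch: bisect the box
                  m = 0.5 * (a + b)
                  boxes += [(a, m), (m, b)]
          return best_hi

      print(branch_and_bound(-2.0, 3.0))   # f(x) = x^2 - 2x: minimum -1 at x = 1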

  17. Schreiben auf Deutsch in Japan : Abstufung zwischen akademisch und wissenschaftlich

    OpenAIRE

    Schmidt, Maria Gabriela

    2013-01-01

    For didactic reasons, this contribution first proposes a distinction between academic and scholarly writing. For the teaching of German as a foreign language in Japan, the influence of text-pattern conventions from the learners' own culture must additionally be taken into account. The term "akademisches Schreiben" follows the English "academic writing". The German-language literature more commonly uses "wissenschaftliches Schreiben" (scholarly writing), by which a high linguistic level...

  18. InterCity tilting e.m.u. for Deutsche Bahn; InterCity-Triebzuege mit Neigetechnik fuer die Deutsche Bahn

    Energy Technology Data Exchange (ETDEWEB)

    Behmann, U.

    1999-07-01

    With the timetable change at the end of May 1999, Deutsche Bahn (DB) will bring a new generation of electric InterCity trainsets for 1 AC 15 kV 16 2/3 Hz into commercial service. Five class 415 tilting trainsets (ICT) will operate the Stuttgart - Singen - Schaffhausen - Zurich route. For this service, the five trainsets have additionally been equipped with Swiss pantograph heads and Swiss train protection equipment. (orig./GL)

  19. A Novel Method to Implement the Matrix Pencil Super Resolution Algorithm for Indoor Positioning

    Directory of Open Access Journals (Sweden)

    Tariq Jamil Saifullah Khanzada

    2011-10-01

    This article presents estimation results for algorithms implemented to estimate delays and distances for an indoor positioning system. Data sets for the transmitted and received signals were captured in typical outdoor and indoor areas, and super-resolution estimation algorithms were applied. Different state-of-the-art and super-resolution techniques were used to obtain optimal estimates of the delays and distances between the transmitted and received signals, and a novel method for the Matrix Pencil algorithm was devised. The algorithms perform differently across transmitter and receiver position scenarios. Two scenarios were examined: in the single-antenna scenario, super-resolution techniques such as ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) and the Matrix Pencil algorithm give optimal performance compared to the conventional techniques. In the two-antenna scenario, Root-MUSIC and the Matrix Pencil algorithm performed better than the other algorithms for distance estimation; however, the accuracy of all the algorithms is worse than in the single-antenna scenario. In all cases, the devised Matrix Pencil algorithm achieved the best estimation results.
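
    For orientation, the classical Matrix Pencil method for extracting the complex exponentials whose phases encode the delays is sketched below in Python; the pencil parameter and test signal are illustrative assumptions, and the article's novel variant is not reproduced.

      import numpy as np

      def matrix_pencil(x, n_modes, pencil=None):
          """Estimate the poles z_k of x[n] = sum_k a_k * z_k**n."""
          n = len(x)
          L = n // 2 if pencil is None else pencil
          Y0 = np.array([x[i:i + L] for i in range(n - L)])          # Hankel matrix
          Y1 = np.array([x[i + 1:i + L + 1] for i in range(n - L)])  # shifted copy
          U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
          U, s, Vh = U[:, :n_modes], s[:n_modes], Vh[:n_modes]       # rank reduction
          A = np.diag(1.0 / s) @ U.conj().T @ Y1 @ Vh.conj().T
          return np.linalg.eigvals(A)              # eigenvalues of the pencil

      n = np.arange(64)
      x = np.exp(1j * 0.3 * n) + 0.5 * np.exp(1j * 1.2 * n)
      print(np.angle(matrix_pencil(x, n_modes=2)))   # ~[0.3, 1.2] in some order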

  20. Implementation and statistical analysis of Metropolis algorithm for SU(3)

    International Nuclear Information System (INIS)

    Katznelson, E.; Nobile, A.

    1984-12-01

    In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results follow a normal distribution. We demonstrate that in this case error analysis can be carried out in a simple way, and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)
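
    The accept/reject step whose statistics the paper analyzes can be shown on a generic scalar target; the Python sketch below samples a one-dimensional Gaussian rather than SU(3) link variables, which is an illustrative simplification.

      import math, random

      def metropolis(log_prob, x0, steps, step_size=1.0):
          """Generic Metropolis sampler: propose, accept with min(1, p'/p)."""
          x, lp = x0, log_prob(x0)
          chain = []
          for _ in range(steps):
              x_new = x + random.uniform(-step_size, step_size)
              lp_new = log_prob(x_new)
              if math.log(random.random()) < lp_new - lp:
                  x, lp = x_new, lp_new           # accept the proposal
              chain.append(x)                     # otherwise keep the old state
          return chain

      chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=10000)
      # Any error analysis of such means must account for autocorrelation,
      # which is the point stressed in the record above.
      print(sum(chain) / len(chain))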

  1. Implementation of Human Trafficking Education and Treatment Algorithm in the Emergency Department.

    Science.gov (United States)

    Egyud, Amber; Stephens, Kimberly; Swanson-Bierman, Brenda; DiCuccio, Marge; Whiteman, Kimberly

    2017-11-01

    Health care professionals have not been successful in recognizing or rescuing victims of human trafficking. The purpose of this project was to implement a screening system and treatment algorithm in the emergency department to improve the identification and rescue of victims of human trafficking. The lack of recognition by health care professionals is related to inadequate education and training tools and to confusion with other forms of violence such as trauma and sexual assault. A multidisciplinary team was formed to assess the evidence related to human trafficking and make recommendations for practice. After receiving education, staff completed a survey about knowledge gained from the training. An algorithm for the identification and treatment of sex trafficking victims was implemented, with a 2-pronged identification approach: (1) medical red flags created by a risk-assessment tool embedded in the electronic health record and (2) a silent notification process. Outcome measures were the number of victims who were identified by either the medical red flags or the silent notification and who were offered and accepted intervention. Survey results indicated that 75% of participants reported that the education improved their competence level. One patient was identified as an actual victim of human trafficking; the remaining patients reported other forms of abuse. Education and a treatment algorithm were thus effective strategies to improve the recognition and rescue of human trafficking victims and to increase identification of other forms of abuse.

  2. Derivation and implementation of a cone-beam reconstruction algorithm for nonplanar orbits

    International Nuclear Information System (INIS)

    Kudo, Hiroyuki; Saito, Tsuneo

    1994-01-01

    Smith and Grangeat each derived a cone-beam inversion formula that can be applied when a nonplanar orbit satisfying the completeness condition is used. Although Grangeat's inversion formula is mathematically different from Smith's, the two have similar overall structures. The contribution of this paper is two-fold. First, based on the derivation of Smith, the authors point out that Grangeat's and Smith's inversion formulas can be conveniently described by a single formula (the Smith-Grangeat inversion formula) that takes the form of space-variant filtering followed by cone-beam backprojection. Furthermore, the resulting formula is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm. Second, the authors make two significant modifications to the new algorithm to reduce artifacts and numerical errors encountered in its direct implementation. As for the exactness of the new algorithm, the following can be stated: the algorithm based on Grangeat's intermediate function is exact for any complete orbit, whereas that based on Smith's intermediate function should be considered an approximate inverse, except in the special case where almost every plane in 3-D space meets the orbit. The validity of the new algorithm is demonstrated by simulation studies.

  3. Implementation of Robert's Coping with Labor Algorithm© in a large tertiary care facility.

    Science.gov (United States)

    Fairchild, Esther; Roberts, Leissa; Zelman, Karen; Michelli, Shelley; Hastings-Tolsma, Marie

    2017-07-01

    The aim was to implement use of Roberts' Coping with Labor Algorithm© (CWLA) with laboring women in a large tertiary care facility. This was a quality improvement project to implement an alternate approach to pain assessment during labor. It included a system assessment for change readiness, implementation of the algorithm across a 6-week period, evaluation of usefulness by nursing staff, and determination of sustained change at one month. Stakeholder Theory (Friedman and Miles, 2002) and Deming's (1982) Plan-Do-Check-Act Cycle, as adapted by Roberts et al (2010), provided the framework for project implementation. The project was undertaken on a labor and delivery (L&D) unit of a large tertiary care facility in a southwestern state in the USA. The unit had 19 suites with close to 6000 laboring patients each year. Participants were full, part-time, and per diem Registered Nurse (RN) staff (N=80), including a subset (n=18) who served as the pilot group and champions for implementing the change. A majority of RNs held a positive attitude toward use of the CWLA to assess laboring women's coping with the pain of labor as compared to a Numeric Rating Scale (NRS). RNs reported usefulness in using the CWLA with patients from a wide variety of ethnicities. A pre-existing, well-developed team which advocated for evidence-based practice on the unit proved to be a significant strength which promoted rapid change in practice. This work provides important knowledge supporting use of the CWLA in a large tertiary care facility and an approach for effectively implementing that change. Strengths identified in this project contributed to rapid implementation and could be emulated in other facilities. Participant reports support usefulness of the CWLA with patients of varied ethnicity. Assessment of change sustainability at 1 and 6 months demonstrated widespread use of the algorithm, although long-term assessment is still needed.

  4. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    Science.gov (United States)

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of this kind of system, such as its wide dynamic range of numeric values, make fixed-point algorithms inadequate. Because the generic chips available for processing floating-point data are, in general, not qualified to operate in space environments, and because using an IP module in a space-qualified FPGA/ASIC is not viable due to the low number of logic cells available in these devices, it is necessary to find an alternative. For these reasons, this paper presents a VHDL floating-point module. This proposal allows the design and execution of floating-point algorithms with an occupancy low enough to be implemented in FPGAs/ASICs qualified for space environments.

  5. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta

    Science.gov (United States)

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.

    2010-01-01

    Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions, including those for accessing and manipulating protein structure, calculating energies, and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using IPython, and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners, while script mode is best suited for algorithm development. PyRosetta has computational performance similar to Rosetta, can easily be scaled up for cluster applications, and has been used to implement algorithms demonstrating protein docking, protein folding, loop modeling, and design. Availability: PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license, which is free for academic and non-profit users. A tutorial, user's manual, and sample scripts demonstrating usage are also available on the web site. Contact: pyrosetta@graylab.jhu.edu PMID:20061306

  6. Extended Adaptive Biasing Force Algorithm. An On-the-Fly Implementation for Accurate Free-Energy Calculations.

    Science.gov (United States)

    Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2016-08-09

    Proper use of the adaptive biasing force (ABF) algorithm in free-energy calculations needs certain prerequisites to be met, namely, that the Jacobian for the metric transformation and its first derivative be available and the coarse variables be independent and fully decoupled from any holonomic constraint or geometric restraint, thereby limiting singularly the field of application of the approach. The extended ABF (eABF) algorithm circumvents these intrinsic limitations by applying the time-dependent bias onto a fictitious particle coupled to the coarse variable of interest by means of a stiff spring. However, with the current implementation of eABF in the popular molecular dynamics engine NAMD, a trajectory-based post-treatment is necessary to derive the underlying free-energy change. Usually, such a posthoc analysis leads to a decrease in the reliability of the free-energy estimates due to the inevitable loss of information, as well as to a drop in efficiency, which stems from substantial read-write accesses to file systems. We have developed a user-friendly, on-the-fly code for performing eABF simulations within NAMD. In the present contribution, this code is probed in eight illustrative examples. The performance of the algorithm is compared with traditional ABF, on the one hand, and the original eABF implementation combined with a posthoc analysis, on the other hand. Our results indicate that the on-the-fly eABF algorithm (i) supplies the correct free-energy landscape in those critical cases where the coarse variables at play are coupled to either each other or to geometric restraints or holonomic constraints, (ii) greatly improves the reliability of the free-energy change, compared to the outcome of a posthoc analysis, and (iii) represents a negligible additional computational effort compared to regular ABF. Moreover, in the proposed implementation, guidelines for choosing two parameters of the eABF algorithm, namely the stiffness of the spring and the mass

  7. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. Chapter 5

    Science.gov (United States)

    Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)

    2008-01-01

    The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region growing segmentation in which the optimal image segmentation is found at N_R regions, given a segmentation at N_(R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.

  8. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple graphics processing units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the simplified unsymmetrical multi-hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
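
    A minimal version of the SAD full-search block matching described above is sketched below in Python; the block size, search radius, and synthetic frames are illustrative assumptions, and the paper's CUDA kernel structure is not reproduced.

      import numpy as np

      def full_search_bma(ref, cur, bx, by, block=16, radius=8):
          """Find the displacement of one block by exhaustive SAD search."""
          target = cur[by:by + block, bx:bx + block].astype(np.int32)
          best, best_dxdy = None, (0, 0)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  y, x = by + dy, bx + dx
                  if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                      continue                     # candidate outside the frame
                  cand = ref[y:y + block, x:x + block].astype(np.int32)
                  sad = np.abs(target - cand).sum()   # summed absolute difference
                  if best is None or sad < best:
                      best, best_dxdy = sad, (dx, dy)
          return best_dxdy, best

      ref = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
      cur = np.roll(ref, (2, 3), axis=(0, 1))      # synthetic motion
      print(full_search_bma(ref, cur, bx=64, by=64))   # expect ((-3, -2), 0)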

  9. Algorithm of parallel: hierarchical transformation and its implementation on FPGA

    Science.gov (United States)

    Timchenko, Leonid I.; Petrovskiy, Mykola S.; Kokryatskay, Natalia I.; Barylo, Alexander S.; Dembitska, Sofia V.; Stepanikuk, Dmytro S.; Suleimenov, Batyrbek; Zyska, Tomasz; Uvaysova, Svetlana; Shedreyeva, Indira

    2017-08-01

    This paper considers an algorithm for classifying laser beam spot images in atmospheric optical transmission systems. It discusses the need to filter the images using adaptive methods, for example parallel-hierarchical networks, and highlights the need to create high-speed memory devices for such networks. Implementation and simulation results of the developed method based on a PLD are demonstrated, showing that the presented method gives 15-20% better prediction results than similar methods.

  10. FPGA Based Low Power DES Algorithm Design And Implementation using HTML Technology

    DEFF Research Database (Denmark)

    Thind, Vandana; Pandey, Bishwajeet; Kalia, Kartik

    2016-01-01

    In this particular work, we have done a power analysis of the DES algorithm implemented on a 28 nm FPGA using HTML (H-HSUL, T-TTL, M-MOBILE_DDR, L-LVCMOS) technology. In this research, we have used the high-performance software Xilinx ISE, where we have selected four different IO standards, i.e. MOBILE_DDR, HSUL...

  11. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train multi-layer perceptrons via the back-propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided for three different machines with different numbers of processors, for two example networks. A sample source code is given.
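
    For reference, a minimal backpropagation loop for a one-hidden-layer perceptron is sketched below in Python/numpy; the layer sizes, learning rate, and toy data are illustrative assumptions unrelated to the Quadrics SIMD implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((64, 4))                  # 64 samples, 4 features
      y = (X.sum(axis=1, keepdims=True) > 0) * 1.0      # toy binary target

      W1, b1 = 0.1 * rng.standard_normal((4, 8)), np.zeros(8)
      W2, b2 = 0.1 * rng.standard_normal((8, 1)), np.zeros(1)
      lr = 0.1

      for epoch in range(500):
          h = np.tanh(X @ W1 + b1)                      # forward pass
          p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid output
          d_out = (p - y) / len(X)                      # grad of cross-entropy loss
          dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
          d_h = (d_out @ W2.T) * (1.0 - h * h)          # tanh derivative
          dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
          W1 -= lr * dW1; b1 -= lr * db1                # gradient descent step
          W2 -= lr * dW2; b2 -= lr * db2

      print(((p > 0.5) == (y > 0.5)).mean())            # training accuracy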

  12. A study and implementation of algorithm for automatic ECT result comparison

    International Nuclear Information System (INIS)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog

    2012-01-01

    An automatic ECT result comparison algorithm was developed and implemented in software to eliminate human error in the manual comparison of large amounts of data. The file structures of two ECT programs (Eddy net and ECT IDS), each with its own unique format, were analyzed so that their files could be opened and the data loaded into PC memory. The comparison algorithm was specified graphically for easy conversion into a PC programming language. The automatic comparison program was written in the C language, which is suitable for future code maintenance, supports an object-oriented program structure, and allows fast development. The program can export results to MS Excel files, which is useful for further analysis with external software, and provides intuitive, user-friendly result visualization with color mapping that supports efficient analysis.

  13. A study and implementation of algorithm for automatic ECT result comparison

    Energy Technology Data Exchange (ETDEWEB)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog [Central Research Institute, Daejeon (Korea, Republic of)

    2012-10-15

    An automatic ECT result comparison algorithm was developed and implemented in software to eliminate human error in the manual comparison of large amounts of data. The file structures of two ECT programs (Eddy net and ECT IDS), each with its own unique format, were analyzed so that their files could be opened and the data loaded into PC memory. The comparison algorithm was specified graphically for easy conversion into a PC programming language. The automatic comparison program was written in the C language, which is suitable for future code maintenance, supports an object-oriented program structure, and allows fast development. The program can export results to MS Excel files, which is useful for further analysis with external software, and provides intuitive, user-friendly result visualization with color mapping that supports efficient analysis.

  14. GillespieSSA: Implementing the Gillespie Stochastic Simulation Algorithm in R

    Directory of Open Access Journals (Sweden)

    Mario Pineda-Krch

    2008-02-01

    The deterministic dynamics of populations in continuous time are traditionally described using coupled, first-order ordinary differential equations. While this approach is accurate for large systems, it is often inadequate for small systems where key species may be present in small numbers or where key reactions occur at a low rate. The Gillespie stochastic simulation algorithm (SSA) is a procedure for generating time-evolution trajectories of finite populations in continuous time and has become the standard algorithm for these types of stochastic models. This article presents a simple-to-use and flexible framework for implementing the SSA using the high-level statistical computing language R and the package GillespieSSA. Using three ecological models as examples (logistic growth, the Rosenzweig-MacArthur predator-prey model, and the Kermack-McKendrick SIRS metapopulation model), this paper shows how a deterministic model can be formulated as a finite-population stochastic model within the framework of SSA theory and how it can be implemented in R. Simulations of the stochastic models are performed using four different SSA Monte Carlo methods: one exact method (Gillespie's direct method) and three approximate methods (explicit, binomial, and optimized tau-leap methods). Comparison of simulation results confirms that while the time-evolution trajectories obtained from the different SSA methods are indistinguishable, the approximate methods are up to four orders of magnitude faster than the exact methods.
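
    Although the package itself is written in R, the core of Gillespie's direct method is compact enough to sketch below in Python; the birth-death formulation of logistic growth and its rate constants are illustrative assumptions.

      import math, random

      def gillespie_direct(x, rates, stoich, t_end):
          """Direct method: sample the waiting time, then the reaction index."""
          t, traj = 0.0, [(0.0, x)]
          while t < t_end:
              a = [r(x) for r in rates]              # propensities at state x
              a0 = sum(a)
              if a0 == 0.0:
                  break                              # no reaction can fire
              t += -math.log(random.random()) / a0   # exponential waiting time
              u, acc, j = random.random() * a0, 0.0, 0
              while acc + a[j] < u:                  # choose j with prob a_j/a0
                  acc += a[j]; j += 1
              x += stoich[j]
              traj.append((t, x))
          return traj

      # Logistic growth as birth (X -> 2X) and death (X -> 0) reactions.
      b, d, K = 2.0, 1.0, 1000.0
      rates = [lambda x: max(b * x * (1.0 - x / K), 0.0), lambda x: d * x]
      print(gillespie_direct(50, rates, stoich=[+1, -1], t_end=10.0)[-1])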

  15. Neural network fusion capabilities for efficient implementation of tracking algorithms

    Science.gov (United States)

    Sundareshan, Malur K.; Amoozegar, Farid

    1997-03-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While the development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and the implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target-tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.
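
    As a reference point for the tracking-filter side of the architecture, a minimal constant-velocity Kalman filter is sketched below in Python; the noise covariances and measurements are illustrative assumptions, and the neural fusion stage is not modeled.

      import numpy as np

      dt = 1.0
      F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
      H = np.array([[1.0, 0.0]])               # position-only measurement
      Q = 0.01 * np.eye(2)                     # process noise (assumed)
      R = np.array([[1.0]])                    # measurement noise (assumed)

      x = np.zeros((2, 1))                     # state: [position, velocity]
      P = 10.0 * np.eye(2)                     # initial covariance

      for z in [1.1, 2.0, 2.9, 4.2, 5.1]:      # noisy position measurements
          x = F @ x                            # predict
          P = F @ P @ F.T + Q
          y = np.array([[z]]) - H @ x          # innovation
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
          x = x + K @ y                        # update
          P = (np.eye(2) - K @ H) @ P

      print(x.ravel())                         # estimated position and velocity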

  16. VLSI implementation of MIMO detection for 802.11n using a novel adaptive tree search algorithm

    International Nuclear Information System (INIS)

    Yao Heng; Jian Haifang; Zhou Liguo; Shi Yin

    2013-01-01

    A 4×4 64-QAM multiple-input multiple-output (MIMO) detector is presented for application in an IEEE 802.11n wireless local area network. The detector is an implementation of a novel adaptive tree search (ATS) algorithm, and multiple ATS cores must be instantiated to meet the wideband requirement of the 802.11n standard. Both the ATS algorithm and the architectural considerations are explained. The latency of the detector is 0.75 μs, and the detector has a gate count of 848 k with a total of 19 parallel ATS cores. Each ATS core runs at 67 MHz. Measurement results show that, compared with the floating-point ATS algorithm, the fixed-point implementation incurs a loss of 0.9 dB at a BER of 10^-3.

  17. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    Science.gov (United States)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    DOA (direction of arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. Here, QR decomposition is implemented with the COordinate Rotation DIgital Computer (CORDIC) algorithm. QRD requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (eigenvalue decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using Xilinx System Generator (XSG), the design is implemented, and the estimated logic device resource values are presented for different matrix sizes.
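
    A minimal sketch of the CORDIC rotation mode, the shift-and-add primitive underlying the QR decomposition above, is shown below in Python; the iteration count is an illustrative assumption and the fixed-point hardware datapath is not modeled.

      import math

      def cordic_rotate(x, y, angle, iterations=24):
          """Rotate (x, y) by `angle` using only shift-and-add style updates."""
          atans = [math.atan(2.0 ** -i) for i in range(iterations)]
          gain = 1.0
          for i in range(iterations):
              gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated CORDIC gain
          z = angle
          for i in range(iterations):
              d = 1.0 if z >= 0.0 else -1.0              # rotate toward z = 0
              x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
              z -= d * atans[i]
          return x * gain, y * gain                      # undo the gain

      print(cordic_rotate(1.0, 0.0, math.pi / 3))        # ~(0.5, 0.866)

    In a QR-decomposition systolic array, the same primitive run in vectoring mode computes the Givens rotation angles that zero the subdiagonal entries.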

  18. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    Science.gov (United States)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA in order to compensate for the insufficient hardware resources of one FPGA, and to increase the parallel processing ability and scalability of the system.
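
    For context, one translational Lucas-Kanade update, the baseline iteration that the paper's kernel-warping algorithm reduces, is sketched below in Python; grayscale float images and pure translation are simplifying assumptions.

      import numpy as np

      def lk_translation_step(ref, cur):
          """One LK step: solve the 2x2 normal equations for a global shift."""
          Iy, Ix = np.gradient(ref)                  # spatial gradients
          It = cur - ref                             # temporal difference
          A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                        [(Ix * Iy).sum(), (Iy * Iy).sum()]])
          b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
          return np.linalg.solve(A, b)               # estimated (dx, dy)

      ref = np.random.rand(128, 128)
      for _ in range(5):                             # smooth so linearization holds
          ref = (ref + np.roll(ref, 1, 0) + np.roll(ref, -1, 0)
                 + np.roll(ref, 1, 1) + np.roll(ref, -1, 1)) / 5.0
      cur = np.roll(ref, 1, axis=1)                  # content shifted by +1 in x
      print(lk_translation_step(ref, cur))           # roughly (1, 0)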

  19. Application of the DMRG in two dimensions: a parallel tempering algorithm

    Science.gov (United States)

    Hu, Shijie; Zhao, Jize; Zhang, Xuefeng; Eggert, Sebastian

    The Density Matrix Renormalization Group (DMRG) is known to be a powerful algorithm for treating one-dimensional systems. When the DMRG is applied in two dimensions, however, the convergence becomes much less reliable, and "metastable states" typically appear, which are unfortunately quite robust even when a very high number of DMRG states is kept. To overcome this problem we have now successfully developed a parallel tempering DMRG algorithm. Similar to parallel tempering in quantum Monte Carlo, this algorithm allows the systematic swapping of DMRG states between different model parameters, which is very efficient for solving convergence problems. Using this method we have determined the phase diagram of the XXZ model on the anisotropic triangular lattice, which can be realized by hardcore bosons in optical lattices. Supported by SFB Transregio 49 of the Deutsche Forschungsgemeinschaft (DFG) and the Allianz fuer Hochleistungsrechnen Rheinland-Pfalz (AHRP).

  20. Implementation of ternary Shor’s algorithm based on vibrational states of an ion in anharmonic potential

    Science.gov (United States)

    Liu, Wei; Chen, Shu-Ming; Zhang, Jian; Wu, Chun-Wang; Wu, Wei; Chen, Ping-Xing

    2015-03-01

    It is widely believed that Shor’s factoring algorithm provides a driving force to boost quantum computing research. However, a serious obstacle to its binary implementation is the large number of quantum gates required. Non-binary quantum computing is an efficient way to reduce the required number of elementary gates. Here, we propose optimization schemes for the implementation of Shor’s algorithm and take a ternary version for factorizing 21 as an example. The optimized factorization is achieved by a two-qutrit quantum circuit, which consists of only two single-qutrit gates and one ternary controlled-NOT gate. This two-qutrit quantum circuit is then encoded into the nine lower vibrational states of an ion trapped in a weakly anharmonic potential. Optimal control theory (OCT) is employed to derive the manipulating electric field for transferring the encoded states. The ternary Shor’s algorithm can thus be implemented in one single step. Numerical simulation results show that the accuracy of the state transformations is about 0.9919. Project supported by the National Natural Science Foundation of China (Grant No. 61205108) and the High Performance Computing (HPC) Foundation of National University of Defense Technology, China.

  1. FPGA-based implementation for steganalysis: a JPEG-compatibility algorithm

    Science.gov (United States)

    Gutierrez-Fernandez, E.; Portela-García, M.; Lopez-Ongil, C.; Garcia-Valderas, M.

    2013-05-01

    Steganalysis is the process of detecting hidden data in cover documents such as digital images, videos, and audio files. It is the inverse of steganography, the method used to hide secret messages. The widespread use of computers and network technologies makes digital files an easy-to-use means of storing secret data or transmitting secret messages over the Internet. Depending on the cover medium used to embed the data, there are different steganalysis methods. In the case of images, many steganalysis and steganographic methods focus on the JPEG image format, since JPEG is one of the most common formats. One of the main handicaps of steganalysis methods is processing speed, since it is usually necessary to process huge amounts of data, possibly including ongoing Internet traffic in real time. In this paper, a JPEG steganalysis system is implemented on an FPGA in order to speed up the detection process with respect to software-based implementations and to increase the throughput. In particular, the implemented method is the JPEG-compatibility detection algorithm, which is based on the fact that when a JPEG image is modified, the resulting image is incompatible with the JPEG compression process.

  2. IMPLEMENTATION OF IMAGE PROCESSING ALGORITHMS AND GLVQ TO TRACK AN OBJECT USING AR.DRONE CAMERA

    Directory of Open Access Journals (Sweden)

    Muhammad Nanda Kurniawan

    2014-08-01

    In this research, a Parrot AR.Drone unmanned aerial vehicle (UAV) was used to track an object from above. Development of this system utilized functions from the OpenCV library and the Robot Operating System (ROS). The techniques implemented in the system are an image processing algorithm (Centroid-Contour Distance, CCD), a feature extraction algorithm (Principal Component Analysis, PCA), and an artificial neural network algorithm (Generalized Learning Vector Quantization, GLVQ). The final result of this research is a program that enables the AR.Drone to track a moving object on the floor with a fast response time of under 1 second.

  3. Prospective implementation of an algorithm for bedside intravascular ultrasound-guided filter placement in critically ill patients.

    Science.gov (United States)

    Killingsworth, Christopher D; Taylor, Steven M; Patterson, Mark A; Weinberg, Jordan A; McGwin, Gerald; Melton, Sherry M; Reiff, Donald A; Kerby, Jeffrey D; Rue, Loring W; Jordan, William D; Passman, Marc A

    2010-05-01

    Although contrast venography is the standard imaging method for inferior vena cava (IVC) filter insertion, intravascular ultrasound (IVUS) imaging is a safe and effective option that allows for bedside filter placement and is especially advantageous for immobilized critically ill patients by limiting resource use, risk of transportation, and cost. This study reviewed the effectiveness of a prospectively implemented algorithm for IVUS-guided IVC filter placement in this high-risk population. Current evidence-based guidelines were used to create a clinical decision algorithm for IVUS-guided IVC filter placement in critically ill patients. After a defined lead-in phase to allow dissemination of techniques, the algorithm was prospectively implemented on January 1, 2008. Data were collected for 1 year using accepted reporting standards, and a quality assurance review was performed based on intent-to-treat at 6, 12, and 18 months. As defined in the prospectively implemented algorithm, 109 patients met criteria for IVUS-directed bedside IVC filter placement. Technical feasibility was 98.1%. Only 2 patients had inadequate IVUS visualization for bedside filter placement and required subsequent placement in the endovascular suite. Technical success, defined as proper deployment in an infrarenal position, was achieved in 104 of the remaining 107 patients (97.2%). The filter was permanent in 21 (19.6%) and retrievable in 86 (80.3%). The single-puncture technique was used in 101 (94.4%), with additional dual access required in 6 (5.6%). Periprocedural complications were rare but included malpositioning requiring retrieval and repositioning in three patients, filter tilt ≥15 degrees in two, and arteriovenous fistula in one. The 30-day mortality rate for the bedside group was 5.5%, with no filter-related deaths. Successful placement of IVC filters using IVUS-guided imaging at the bedside in critically ill patients can be established through an evidence-based prospectively

  4. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission, such as transferring an image, the confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of computing discrete logarithms modulo a large prime. ElGamal belongs to the class of asymmetric-key algorithms and enlarges the file size, so data compression is required. Elias delta coding is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias delta code algorithm, then the result of the compression was encrypted using the ElGamal algorithm. Primality testing was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of the data, with MSE and PSNR values of 0 and infinity, respectively. The Elias delta code method achieved an average compression ratio of 62.49% and average space savings of 37.51%.
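
    A minimal sketch of Elias delta encoding and decoding for positive integers is given below in Python; it illustrates the code itself, not the paper's image pipeline or ElGamal stage.

      def elias_delta_encode(n):
          """Gamma-code the bit length of n, then append n without its top bit."""
          assert n >= 1
          bits = n.bit_length()
          prefix = "0" * (bits.bit_length() - 1)        # gamma prefix for the length
          return prefix + bin(bits)[2:] + bin(n)[3:]    # drop n's leading 1 bit

      def elias_delta_decode(code):
          """Inverse of the encoder for a single codeword."""
          zeros = 0
          while code[zeros] == "0":
              zeros += 1
          bits = int(code[zeros:2 * zeros + 1], 2)      # recover the bit length
          payload = code[2 * zeros + 1:2 * zeros + bits]
          return int("1" + payload, 2) if payload else 1

      for n in (1, 2, 17, 255):
          cw = elias_delta_encode(n)
          print(n, cw, elias_delta_decode(cw))          # round-trips every n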

  5. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    Science.gov (United States)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison to several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.

  6. Implementation of the k -Neighbors Technique in a recommender algorithm for a purchasing system using NFC and Android

    Directory of Open Access Journals (Sweden)

    Oscar Arley Riveros

    2017-01-01

    Introduction: This paper presents the design of a mobile application involving NFC technology and a collaborative recommendation algorithm based on the k-neighbors technique, which provides personalized suggestions for each client. Objective: To design and develop a mobile application for a purchasing system using NFC technology and the k-neighbors technique in a recommendation algorithm. Methodology: The process followed for the design and development of the application focuses on: • a review of the state of the art in mobile shopping systems • a review of the state of the art in the use of NFC technology and AI techniques for recommender systems focused on k-neighbors algorithms • design of the proposed system • parameterization and implementation of the k-neighbors technique and integration of NFC technology • implementation and testing of the proposed system. Results: The results obtained include: • a mobile application that integrates Android, NFC technology, and a recommendation algorithm • parameterization of the k-neighbors technique to be used within the recommendation algorithm • implementation of functional requirements that allow the generation of personalized purchase recommendations and user ratings. Conclusions: The k-neighbors technique in a recommendation algorithm provides the client with a series of recommendations with a level of confidence, since the algorithm performs calculations taking multiple parameters into account and contrasts the results obtained for other users, finding the articles with the greatest degree of similarity to the customer profile. The algorithm starts from a sample of similar, complementary, and unrelated products; applying its respective formulation, the recommendation is made only with the complementary products that obtained the highest ratings, making a big difference with respect to most recommender systems on the market, which are limited to
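
    A minimal sketch of the k-neighbors idea for collaborative recommendation is given below in Python; the toy ratings matrix, cosine similarity, and k value are illustrative assumptions rather than the application's actual parameterization.

      import numpy as np

      def knn_recommend(ratings, user, k=2, top_n=3):
          """Score unrated items for `user` from the k most similar users."""
          R = ratings.astype(float)
          norms = np.linalg.norm(R, axis=1, keepdims=True)
          sims = (R @ R.T) / (norms @ norms.T + 1e-12)   # cosine similarity
          sims[user, user] = -1.0                        # exclude the user itself
          neighbors = np.argsort(sims[user])[::-1][:k]   # k nearest neighbors
          w = sims[user, neighbors]
          scores = w @ R[neighbors] / (w.sum() + 1e-12)  # weighted average rating
          scores[ratings[user] > 0] = -np.inf            # keep only unrated items
          return np.argsort(scores)[::-1][:top_n]

      ratings = np.array([[5, 4, 0, 1, 0],               # rows: users, cols: items
                          [4, 5, 1, 0, 0],               # 0 means "not rated"
                          [1, 0, 5, 4, 5],
                          [0, 1, 4, 5, 4]])
      print(knn_recommend(ratings, user=0))              # suggestions for user 0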

  7. An implementation of differential evolution algorithm for inversion of geoelectrical data

    Science.gov (United States)

    Balkaya, Çağlayan

    2013-11-01

    Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, mutation, crossover and selection, similar to the genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets. Of these, strategy 1 was found to be the most effective for parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic SP cases were quite consistent with those of particle swarm optimization (PSO), which is a population-based optimization algorithm more widely used in geophysics than DE. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm, based on simulated annealing (SA) without cooling, to clarify the uncertainties in the solutions. The comparison to the M-H algorithm shows that DE performs fast approximate posterior sampling for low-dimensional inverse geophysical problems.
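
    A minimal sketch of the DE/best/1 mutation strategy with binomial crossover (strategy 1 above) is given below in Python; the toy objective and the control parameters F and CR are illustrative assumptions, not the geophysical misfit used in the paper.

      import numpy as np

      def de_best_1(obj, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
          """DE/best/1/bin: mutate around the best member, then crossover."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          dim = len(bounds)
          pop = rng.uniform(lo, hi, (pop_size, dim))
          cost = np.array([obj(p) for p in pop])
          for _ in range(gens):
              best = pop[cost.argmin()]
              for i in range(pop_size):
                  r1, r2 = rng.choice([j for j in range(pop_size) if j != i],
                                      size=2, replace=False)
                  mutant = best + F * (pop[r1] - pop[r2])     # DE/best/1 mutation
                  cross = rng.random(dim) < CR                # binomial crossover
                  cross[rng.integers(dim)] = True             # force one gene over
                  trial = np.where(cross, mutant, pop[i])
                  c = obj(trial)
                  if c < cost[i]:                             # greedy selection
                      pop[i], cost[i] = trial, c
          return pop[cost.argmin()], cost.min()

      obj = lambda p: (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2   # toy 2-parameter misfit
      print(de_best_1(obj, bounds=[(-5, 5), (-5, 5)]))        # ~((1.5, -0.5), 0)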

  8. Design Approach and Implementation of Application Specific Instruction Set Processor for SHA-3 BLAKE Algorithm

    Science.gov (United States)

    Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang

    This paper presents an application-specific instruction-set processor (ASIP) for the SHA-3 BLAKE algorithm family, obtained by instruction-set extensions (ISE) of a RISC (reduced instruction set computer) processor. Through a design-space exploration of this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit speed up the calculation of the key section of the algorithm, namely the G-functions. Also, relaxing the time constraint of the special function unit decreases its hardware cost, while keeping the high data throughput of the processor. Evaluation results reveal that the ASIP achieves 335 Mbps and 176 Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06 k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and the low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.

  9. [Sven Jüngerkes. Deutsche Besatzungsverwaltung in Lettland 1941-1945. Eine Kommunikations- und Kulturgeschichte nationalsozialistischer Organisationen] / Toomas Hiio

    Index Scriptorium Estoniae

    Hiio, Toomas, 1965-

    2012-01-01

    Review of: Jüngerkes, Sven. Deutsche Besatzungsverwaltung in Lettland 1941-1945. Eine Kommunikations- und Kulturgeschichte nationalsozialistischer Organisationen (Historische Kulturwissenschaft, 15). (Konstanz: UVK Verlagsgesellschaft mbH, 2010)

  10. Implementation of a Wavefront-Sensing Algorithm

    Science.gov (United States)

    Smith, Jeffrey S.; Dean, Bruce; Aronstein, David

    2013-01-01

    A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.
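
    A minimal sketch of the classic two-plane Gerchberg-Saxton iteration, the simplest member of the iterative-transform phase-retrieval family referenced above, is given below in Python; the square aperture, single focal-plane constraint, and synthetic data are illustrative simplifications of the multi-plane adaptive-diversity method.

      import numpy as np

      def gerchberg_saxton(pupil_amp, focal_amp, iters=200):
          """Recover pupil-plane phase from amplitudes measured in two planes."""
          field = pupil_amp.astype(complex)
          for _ in range(iters):
              focal = np.fft.fft2(field)
              focal = focal_amp * np.exp(1j * np.angle(focal))  # focal constraint
              field = np.fft.ifft2(focal)
              field = pupil_amp * np.exp(1j * np.angle(field))  # pupil constraint
          return np.angle(field)

      n = 64
      yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
      pupil_amp = ((np.abs(xx) < 0.5) & (np.abs(yy) < 0.5)) * 1.0  # square aperture
      true_phase = 0.5 * (xx ** 2 - yy ** 2)
      focal_amp = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * true_phase)))
      # The estimate matches true_phase inside the aperture up to trivial
      # ambiguities (piston offset, conjugation).
      est = gerchberg_saxton(pupil_amp, focal_amp)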

  11. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel

    2009-02-15

    We present a new lock-free parallel algorithm for computing the betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million, respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to the analysis of massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.

  12. Parallel implementation of DNA sequences matching algorithms using PWM on GPU architecture.

    Science.gov (United States)

    Sharma, Rahul; Gupta, Nitin; Narang, Vipin; Mittal, Ankush

    2011-01-01

    Positional weight matrices (PWMs) are widely used in the representation and detection of transcription factor binding sites (TFBSs) on DNA. We implement an online PWM search algorithm on a parallel architecture. Large PWM data sets can be processed in parallel on graphics processing unit (GPU) systems, which can help match sequences at a faster rate. Our method makes extensive use of the highly multithreaded architecture and shared memory of the multi-core GPU. Efficient use of shared memory is required to optimize parallel reduction in CUDA. Our optimized method achieves a speedup of 230-280x over the linear implementation, running on a GeForce GTX 280 GPU.
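
    A minimal sketch of the underlying PWM scan, scoring every sequence window against a position weight matrix, is given below in Python; the toy matrix and sequence are illustrative assumptions, and the paper's CUDA shared-memory reduction is not reproduced.

      import numpy as np

      def pwm_scan(seq, pwm):
          """Score every window of `seq` against a PWM (rows: A, C, G, T)."""
          idx = np.array(["ACGT".index(c) for c in seq])
          w = pwm.shape[1]
          return np.array([pwm[idx[i:i + w], np.arange(w)].sum()
                           for i in range(len(idx) - w + 1)])

      pwm = np.array([[ 1.0, -1.0, -1.0,  0.5],     # toy 4-position log-odds PWM
                      [-1.0,  1.2, -0.5, -1.0],
                      [-0.5, -1.0,  1.5, -1.0],
                      [-1.0, -0.5, -1.0,  1.0]])
      print(pwm_scan("ACGTACGGT", pwm).round(2))    # peak where the motif matches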

  13. Reflections of Practical Implementation of the academic course Analysis and Design of Algorithms taught in the Universities of Pakistan

    Directory of Open Access Journals (Sweden)

    Faryal Shamsi

    2017-12-01

    Analysis and Design of Algorithms is considered a compulsory course in the field of computer science. It increases the logical and problem-solving skills of students and makes their solutions efficient in terms of time and space. These objectives can only be achieved if a student practically implements what he or she has studied throughout the course. But if the contents of this course are merely studied and rarely practiced, the actual goals of the course are not fulfilled. This article explores the extent of practical implementation of the analysis and design of algorithms course. Problems faced by the computer science community and major barriers in the field are also enumerated. Finally, some recommendations are made to overcome the obstacles in the practical implementation of analysis and design of algorithms.

  14. Implementation of the CA-CFAR algorithm for pulsed-doppler radar on a GPU architecture

    CSIR Research Space (South Africa)

    Venter, CJ

    2011-12-01

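    The cell-averaging CFAR detector named in the title is standard; a minimal one-dimensional sketch is given below in Python, with the training/guard window sizes and the scale factor derived from the desired false-alarm probability as illustrative assumptions, not the CSIR GPU implementation.

      import numpy as np

      def ca_cfar(power, n_train=8, n_guard=2, pfa=1e-3):
          """Cell-averaging CFAR: compare each cell to scaled local noise."""
          n_cells = 2 * n_train
          alpha = n_cells * (pfa ** (-1.0 / n_cells) - 1.0)   # CA-CFAR scale factor
          hits = []
          for i in range(n_train + n_guard, len(power) - n_train - n_guard):
              lead = power[i - n_guard - n_train:i - n_guard]
              lag = power[i + n_guard + 1:i + n_guard + n_train + 1]
              noise = (lead.sum() + lag.sum()) / n_cells      # noise level estimate
              if power[i] > alpha * noise:
                  hits.append(i)
          return hits

      rng = np.random.default_rng(1)
      power = rng.exponential(1.0, 200)    # square-law detected noise
      power[60] += 30.0                    # inject a strong target
      print(ca_cfar(power))                # expect [60], rare false alarms aside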

  15. Design and implementation of three-dimension texture mapping algorithm for panoramic system based on smart platform

    Science.gov (United States)

    Liu, Zhi; Zhou, Baotong; Zhang, Changnian

    2017-03-01

    Vehicle-mounted panoramic systems are important safety-assistance equipment for driving. However, traditional systems render only a fixed top-down perspective view of a limited field of view, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm renders a good 3D panorama and has the ability to change the viewpoint freely.

  16. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark-gluon plasma, theorized to have existed in the very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze, and also simulate, them at rates similar to the data collection ones impose enormous computational demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine- and coarse-grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single-instruction and multiple-instruction computers is also made, and possible applications of the single-instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  17. Searching Algorithms Implemented on Probabilistic Systolic Arrays

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    1996-01-01

    Roč. 25, č. 1 (1996), s. 7-45 ISSN 0308-1079 R&D Projects: GA ČR GA201/93/0781 Keywords: searching algorithms * probabilistic algorithms * systolic arrays * parallel algorithms Impact factor: 0.214, year: 1996

  18. [Ulrike Plath. Esten und Deutsche in den baltischen Provinzen Russlands. Fremdheitskonstruktionen, Lebenswelten, Kolonialphantasien 1750-1850] / Lea Leppik

    Index Scriptorium Estoniae

    Leppik, Lea, 1962-

    2014-01-01

    Review of: Plath, Ulrike. Esten und Deutsche in den baltischen Provinzen Russlands. Fremdheitskonstruktionen, Lebenswelten, Kolonialphantasien 1750-1850 (Veröffentlichungen des Nordost-Institut, 11). Harrassowitz. Wiesbaden 2011.

  19. Introduction to quantum information science

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Masahito [Nagoya Univ. (Japan). Graduate School of Mathematics; Ishizaka, Satoshi [Hiroshima Univ., Higashi-Hiroshima (Japan). Graduate School of Integrated Arts and Sciences; Kawachi, Akinori [Tokyo Institute of Technology (Japan). Dept. of Mathematical and Computing Sciences; Kimura, Gen [Shibaura Institute of Technology, Saitama (Japan). College of Systems Engineering and Science; Ogawa, Tomohiro [Univ. of Electro-Communications, Tokyo (Japan). Graduate School of Information Systems

    2015-04-01

    Presents the mathematical foundation for quantum information in a very didactic way. Summarizes all required mathematical knowledge in linear algebra. Supports teaching and learning with more than 100 exercises with solutions. Includes brief descriptions of recent results with references. This book presents the basics of quantum information, e.g., the foundation of quantum theory, quantum algorithms, quantum entanglement, quantum entropies, quantum coding, quantum error correction and quantum cryptography. The only required knowledge is elementary calculus and linear algebra, so the book can be understood by undergraduate students. In order to study quantum information, one usually has to study the foundation of quantum theory. This book describes it from a more operational viewpoint, which is suitable for quantum information, whereas traditional textbooks on quantum theory lack this viewpoint. The book takes Shor's algorithm, Grover's algorithm, and the Deutsch-Jozsa algorithm as its basic algorithms. To treat several topics in quantum information, this book covers several kinds of information quantities in quantum systems, including the von Neumann entropy. The limits of several kinds of quantum information processing are given. As important quantum protocols, this book covers quantum teleportation, quantum dense coding, and quantum data compression. In particular, the conversion theory of entanglement via local operations and classical communication is treated too. This theory provides the quantification of entanglement, which coincides with the von Neumann entropy. The next part treats quantum hypothesis testing. The decision problem between two candidates for the unknown state is given. The asymptotic performance of this problem is characterized by information quantities. Using this result, the optimal performance of classical information transmission via a noisy quantum channel is derived. Quantum information transmission via noisy quantum channel by quantum error

  20. Introduction to quantum information science

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Ishizaka, Satoshi; Kawachi, Akinori; Kimura, Gen; Ogawa, Tomohiro

    2015-01-01

    Presents the mathematical foundation for quantum information in a very didactic way. Summarizes all required mathematical knowledge in linear algebra. Supports teaching and learning with more than 100 exercises with solutions. Includes brief descriptions of recent results with references. This book presents the basics of quantum information, e.g., the foundation of quantum theory, quantum algorithms, quantum entanglement, quantum entropies, quantum coding, quantum error correction and quantum cryptography. The only required knowledge is elementary calculus and linear algebra, so the book can be understood by undergraduate students. In order to study quantum information, one usually has to study the foundation of quantum theory. This book describes it from a more operational viewpoint, which is suitable for quantum information, whereas traditional textbooks on quantum theory lack this viewpoint. The book takes Shor's algorithm, Grover's algorithm, and the Deutsch-Jozsa algorithm as its basic algorithms. To treat several topics in quantum information, this book covers several kinds of information quantities in quantum systems, including the von Neumann entropy. The limits of several kinds of quantum information processing are given. As important quantum protocols, this book covers quantum teleportation, quantum dense coding, and quantum data compression. In particular, the conversion theory of entanglement via local operations and classical communication is treated too. This theory provides the quantification of entanglement, which coincides with the von Neumann entropy. The next part treats quantum hypothesis testing. The decision problem between two candidates for the unknown state is given. The asymptotic performance of this problem is characterized by information quantities. Using this result, the optimal performance of classical information transmission via a noisy quantum channel is derived. Quantum information transmission via noisy quantum channel by quantum error correction are

  1. Implementation of the U.S. Environmental Protection Agency's Waste Reduction (WAR) Algorithm in Cape-Open Based Process Simulators

    Science.gov (United States)

    The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...

  2. Get Your Atoms in Order--An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm.

    Science.gov (United States)

    Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A

    2015-10-26

    Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings, as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software package provides its own version of a canonical ordering, most based on unpublished algorithms, which further complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios, such as random renumbering of input atoms or SMILES round-tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated into any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
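
    As a pointer for readers, the canonicalization described here ships with RDKit's Python API; a minimal sketch (the molecule is chosen arbitrarily, and function names are as in current RDKit releases):

```python
# Minimal sketch: canonical atom ranks and the canonical SMILES built on them.
from rdkit import Chem

mol = Chem.MolFromSmiles("OCC1OC(O)C(O)C(O)C1O")   # a sugar-like molecule

# Canonical ranks: the same molecule yields the same ordering regardless of
# how the input atoms were numbered.
print(list(Chem.CanonicalRankAtoms(mol)))

# The canonical SMILES derived from that ordering serves as a unique
# string identifier for the molecule.
print(Chem.MolToSmiles(mol))
```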

  3. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
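
    For illustration, two of the fundamental algorithms the book presents, sketched here in Python rather than the book's C++:

```python
# Sieve of Eratosthenes: all primes below n.
def sieve(n):
    is_prime = [True] * n
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

# Euclidean algorithm: greatest common divisor.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

print(sieve(30))      # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(gcd(252, 198))  # 18
```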

  4. OpenCL Implementation of a Parallel Universal Kriging Algorithm for Massive Spatial Data Interpolation on Heterogeneous Systems

    Directory of Open Access Journals (Sweden)

    Fang Huang

    2016-06-01

    Full Text Available In some digital Earth engineering applications, spatial interpolation algorithms are required to process and analyze large amounts of data. Due to its powerful computing capacity, heterogeneous computing has been used for data processing in many fields. In this study, we explore the design and implementation of a parallel universal kriging spatial interpolation algorithm using the OpenCL programming model on heterogeneous computing platforms for massive geospatial data processing. This study focuses primarily on transforming the hotspot of the serial algorithm, i.e., the universal kriging interpolation function, into a corresponding kernel function in OpenCL. We also employ parallelization and optimization techniques in our implementation to improve code performance. Finally, based on the results of experiments performed on two different high-performance heterogeneous platforms, i.e., an NVIDIA graphics processing unit system and an Intel Xeon Phi (MIC) system, we show that the parallel universal kriging algorithm achieves a speedup of up to 40× with a single computing device and up to 80× with multiple devices.
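
    For readers unfamiliar with the kernel being ported, a minimal serial numpy sketch of the kriging core follows; it shows ordinary kriging with an assumed exponential covariance, whereas the paper's universal kriging additionally fits a drift term:

```python
import numpy as np

def ordinary_kriging(xy, z, query, sill=1.0, rng=10.0):
    """Predict the value at `query` from samples (xy, z)."""
    def cov(d):
        return sill * np.exp(-d / rng)          # assumed exponential covariance

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Augmented system enforcing weights that sum to one (Lagrange multiplier).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy - query, axis=-1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(ordinary_kriging(pts, vals, np.array([0.5, 0.5])))   # 2.5 by symmetry
```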

  5. A Sparse Self-Consistent Field Algorithm and Its Parallel Implementation: Application to Density-Functional-Based Tight Binding.

    Science.gov (United States)

    Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias

    2014-06-10

    We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to exploit efficiently modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion.

  6. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview of selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
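
    As a concrete illustration of the kind of learning rule such chips implement, a minimal sketch of pair-based spike-timing-dependent plasticity (STDP) follows; the parameters are illustrative and not taken from any particular chip:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt > 0:    # pre fires before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fires before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

for dt in (-40, -10, 10, 40):
    print(dt, round(stdp_dw(dt), 5))
```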

  7. Continuous-time quantum algorithms for unstructured problems

    International Nuclear Information System (INIS)

    Hen, Itay

    2014-01-01

    We consider a family of unstructured optimization problems, for which we propose a method for constructing analogue, continuous-time (not necessarily adiabatic) quantum algorithms that are faster than their classical counterparts. In this family of problems, which we refer to as ‘scrambled input’ problems, one has to find a minimum-cost configuration of a given integer-valued n-bit black-box function whose input values have been scrambled in some unknown way. Special cases within this set of problems are Grover’s search problem of finding a marked item in an unstructured database, certain random energy models, and the functions of the Deutsch–Jozsa problem. We consider a couple of examples in detail. In the first, we provide an O(1) deterministic analogue quantum algorithm to solve the seminal problem of Deutsch and Jozsa, in which one has to determine whether an n-bit boolean function is constant (gives 0 on all inputs or 1 on all inputs) or balanced (returns 0 on half the input states and 1 on the other half). We also study one variant of the random energy model, and show that, as one might expect, its minimum energy configuration can be found quadratically faster with a quantum adiabatic algorithm than with classical algorithms. (paper)
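
    For reference, the discrete gate-model version of the Deutsch-Jozsa test (not the paper's analogue, continuous-time construction) can be checked with a few lines of statevector arithmetic in phase-oracle form: after Hadamards on every qubit, a phase oracle (-1)^f(x), and Hadamards again, the amplitude left on |00...0> is the mean of the signs, so its magnitude is 1 for a constant f and exactly 0 for a balanced f.

```python
import numpy as np

def is_constant(f, n):
    xs = np.arange(2 ** n)
    signs = np.array([(-1) ** f(int(x)) for x in xs], dtype=float)
    amp0 = signs.mean()          # <0...0| H^n U_f H^n |0...0>
    return np.isclose(abs(amp0), 1.0)

n = 4
constant_f = lambda x: 1                      # constant function
balanced_f = lambda x: bin(x).count("1") % 2  # parity is balanced
print(is_constant(constant_f, n))   # True
print(is_constant(balanced_f, n))   # False
```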

  8. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    Science.gov (United States)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) method and bilinear interpolation is presented in this paper, for processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which matches human vision much more closely. The algorithm is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity, and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption. The proposed algorithm is shown to be valid.
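
    A software model of the unwrapping step may help: the sketch below (numpy, not the paper's VHDL/CORDIC datapath) maps each output pixel back onto the annulus and samples with bilinear interpolation; the center and radii are assumed known from calibration.

```python
import numpy as np

def unwrap(img, cx, cy, r_in, r_out, out_w=720):
    """Assumes the annulus (r_in..r_out around cx, cy) lies fully inside img."""
    out_h = int(r_out - r_in)
    out = np.zeros((out_h, out_w), dtype=float)
    for row in range(out_h):
        r = r_out - row                       # top row of output = outer ring
        for col in range(out_w):
            theta = 2 * np.pi * col / out_w
            x = cx + r * np.cos(theta)        # the FPGA computes these
            y = cy + r * np.sin(theta)        # rotations with CORDIC
            x0, y0 = int(x), int(y)
            dx, dy = x - x0, y - y0
            # Bilinear interpolation over the 4 neighbouring pixels.
            out[row, col] = (img[y0, x0] * (1 - dx) * (1 - dy)
                             + img[y0, x0 + 1] * dx * (1 - dy)
                             + img[y0 + 1, x0] * (1 - dx) * dy
                             + img[y0 + 1, x0 + 1] * dx * dy)
    return out

img = np.random.default_rng(0).random((400, 400))
print(unwrap(img, cx=200, cy=200, r_in=60, r_out=180).shape)  # (120, 720)
```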

  9. Next Generation Aura-OMI SO2 Retrieval Algorithm: Introduction and Implementation Status

    Science.gov (United States)

    Li, Can; Joiner, Joanna; Krotkov, Nickolay A.; Bhartia, Pawan K.

    2014-01-01

    We introduce our next generation algorithm to retrieve SO2 using radiance measurements from the Aura Ozone Monitoring Instrument (OMI). We employ a principal component analysis technique to analyze OMI radiance spectra in the 310.5-340 nm window acquired over regions with no significant SO2. The resulting principal components (PCs) capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering, and ozone absorption) and measurement artifacts, enabling us to account for these various interferences in SO2 retrievals. By fitting these PCs, along with SO2 Jacobians calculated with a radiative transfer model, to OMI-measured radiance spectra, we directly estimate the SO2 vertical column density in one step. Compared with the previous generation operational OMSO2 PBL (Planetary Boundary Layer) SO2 product, our new algorithm greatly reduces unphysical biases and decreases the noise by a factor of two, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research. We have operationally implemented this new algorithm on OMI SIPS for producing the new generation standard OMI SO2 products.
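
    A toy numpy sketch of the retrieval idea follows: derive PCs from SO2-free spectra, then fit the PCs together with an SO2 Jacobian to a measured spectrum in one linear step. All data here are synthetic placeholders, not OMI values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl, n_train = 50, 200
background = rng.normal(size=(n_train, n_wl))       # SO2-free "training" spectra

# Leading principal components of the background variability, via SVD.
_, _, vt = np.linalg.svd(background - background.mean(axis=0),
                         full_matrices=False)
pcs = vt[:10]

jacobian = np.exp(-np.linspace(0.0, 3.0, n_wl))     # stand-in for dI/dSO2
true_so2 = 1.7
measured = true_so2 * jacobian + pcs.T @ rng.normal(size=10)

# One-step linear fit of [PCs, Jacobian] to the measured spectrum;
# the Jacobian's coefficient is the retrieved column amount.
design = np.vstack([pcs, jacobian]).T
coef, *_ = np.linalg.lstsq(design, measured, rcond=None)
print(round(coef[-1], 3))                           # recovers ~1.7
```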

  10. On the implementation of the Ford-Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Full Text Available Algorithms for optimization in networks and directed graphs find broad application in solving practical tasks. However, with the large-scale introduction of information technologies into human activity, requirements on input-data volumes and solution retrieval rates have tightened. Although a large number of algorithms for various models of computers and computing systems have by now been studied and implemented, solving key optimization problems at realistic problem sizes remains difficult. In this regard, the search for new and more efficient computing structures, as well as updates to known algorithms, is of great current interest. This work considers an implementation of the maximum-flow algorithm on a directed graph for the Multiple Instruction and Single Data (MISD) computer system developed at BMSTU. The key feature of this architecture is deep hardware support for operations over sets and data structures. Storage and access functions are realized in a specialized structure-processing processor (SP), which can perform operations such as add, delete, search, intersect, complement, and merge at the hardware level. The advantage of such a system is the possibility of executing the set-access parts of computing tasks in parallel with the arithmetic and logical processing of information. Previous works presented the general principles of arranging the computing process and the features of programs implemented in the MISD system, described the structure and operating principles of the structure-processing processor, showed the general principles of solving graph tasks in such a system, and studied the efficiency of the resulting algorithms experimentally. This work gives the command formats of the SP processor, offers a technique for updating the algorithms realized in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm
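
    For reference, a compact serial version of the Ford-Fulkerson method (the Edmonds-Karp variant with BFS augmenting paths), against which a hardware implementation like the one above can be checked:

```python
from collections import deque

def max_flow(cap, s, t):
    # residual capacities, initialized from the input graph (dict of dicts)
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path s -> t
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # bottleneck along the path, then augment along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

g = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(g, "s", "t"))  # 5
```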

  11. Deutsches Krebsforschungszentrum Heidelberg. Report on scientific results 2000-2001

    International Nuclear Information System (INIS)

    Metzler, H.

    2002-01-01

    The Deutsches Krebsforschungszentrum Heidelberg (DKFZ, German Cancer Research Center) publishes, in alternating years, the ''Wissenschaftlicher Ergebnisbericht'' (in German) and the ''Research Report'' (in English). Both volumes report on the present state of the research activities of the DKFZ as a National Research Center to the funding federal and state authorities [Federal Republic of Germany, Land (state) Baden-Wuerttemberg]. They are also intended to inform colleagues and the scientifically interested public. Both reports are structured according to the center's eight research programs. The last Research Report was published in 2001. In Germany, a new orthography has been adopted; some authors used the new form, others the traditional one, and the orthography was not standardized across the report. (orig.)

  12. Implementation of sepsis algorithm by nurses in the intensive care unit

    Directory of Open Access Journals (Sweden)

    Paula Pedroso Peninck

    2012-04-01

    Full Text Available Sepsis is defined as a clinical syndrome consisting of a systemic inflammatory response associated with an infection, which may lead to the malfunction or failure of multiple organs. This research aims to verify how nurses in the Intensive Care Unit apply the sepsis algorithm and to create an operational nursing assistance guide. This is an exploratory, descriptive study with a quantitative approach. A data collection instrument based on the relevant literature was elaborated, assessed, corrected and validated. The sample consisted of 20 intensive care unit nurses. We obtained satisfactory evaluations of the nurses' performance, but some issues did not reach 50% accuracy. We emphasize the importance of greater numbers of nurses becoming acquainted with and correctly applying the sepsis algorithm. Based on the above, an operational septic-patient nursing assistance guide was created, drawing on the difficulties revealed by the research variables and on the relevant literature.

  13. An Improved Fuzzy C-Means Algorithm for the Implementation of Demand Side Management Measures

    Directory of Open Access Journals (Sweden)

    Ioannis Panapakidis

    2017-09-01

    Full Text Available Load profiling refers to a procedure that leads to the formulation of daily load curves and consumer classes based on the similarity of the curve shapes. This procedure incorporates a set of unsupervised machine learning algorithms. While many crisp clustering algorithms have been proposed for grouping load curves into clusters, only one soft clustering algorithm has been utilized for this purpose, namely the Fuzzy C-Means (FCM) algorithm. Since the benefits of soft clustering have been demonstrated in a variety of applications, the potential of introducing a novel modification of the FCM into the electricity consumer clustering process is examined. Additionally, this paper proposes a novel Demand Side Management (DSM) strategy for load management of consumers that are eligible for the implementation of Real-Time Pricing (RTP) schemes. The DSM strategy is formulated as a constrained optimization problem that can be easily solved, therefore making it a useful tool for retailers' decision-making in competitive electricity markets.
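
    A minimal numpy sketch of the standard FCM loop underlying the paper's modified variant (the fuzzifier m, cluster count, and data are chosen arbitrarily):

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=-1) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=0)                  # normalized inverse-distance update
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(loc, 0.1, (20, 2))
               for loc in (0.0, 1.0, 2.0)])
centers, U = fcm(X)
print(np.round(centers, 2))                 # near (0,0), (1,1), (2,2)
```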

  14. IMPLEMENTATION OF A REAL-TIME STACKING ALGORITHM IN A PHOTOGRAMMETRIC DIGITAL CAMERA FOR UAVS

    Directory of Open Access Journals (Sweden)

    A. Audi

    2017-08-01

    Full Text Available In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality, with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing times, resulting images, as well as block diagrams of the described architecture. The resulting stacked image obtained on real surveys shows no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the writing time of an image to the storage device. An interesting by-product of this algorithm is the 3D rotation
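
    A simplified offline sketch of the stacking idea, using OpenCV ORB features and a RANSAC homography in place of the paper's FAST detection plus IMU-aided template matching (the file names are placeholders):

```python
import cv2
import numpy as np

frames = [cv2.imread(f"frame_{i}.png", cv2.IMREAD_GRAYSCALE) for i in range(8)]
orb = cv2.ORB_create(1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

ref_kp, ref_des = orb.detectAndCompute(frames[0], None)
acc = frames[0].astype(np.float64)

for img in frames[1:]:
    kp, des = orb.detectAndCompute(img, None)
    matches = bf.match(ref_des, des)
    src = np.float32([kp[m.trainIdx].pt for m in matches])     # current frame
    dst = np.float32([ref_kp[m.queryIdx].pt for m in matches]) # reference frame
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # resample the frame into the geometry of the first image, then accumulate
    acc += cv2.warpPerspective(img, H, frames[0].shape[::-1]).astype(np.float64)

stacked = (acc / len(frames)).astype(np.uint8)
cv2.imwrite("stacked.png", stacked)
```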

  15. Managing conflict in Dutch organizations: A test of the relevance of Deutsch's cooperation theory

    NARCIS (Netherlands)

    de Dreu, C.K.W.; Tjosvold, D.

    1997-01-01

    Deutsch's theory of cooperative and competitive conflict may be usefully extended to Dutch people. Results of LISREL analyses on data collected from interviews of Dutch employees in 2 companies indicate that competitive goals interfered with the open, constructive discussion of opposing views.

  16. Genome sequencing of Deutsch strain of cattle ticks, Rhipicephalus microplus: Raw Pac Bio reads.

    Science.gov (United States)

    Pac Bio RS II whole genome shotgun sequencing technology was used to sequence the genome of the cattle tick, Rhipicephalus microplus. The DNA was derived from 14 day old eggs from the Deutsch Texas outbreak strain reared at the USDA-ARS Cattle Fever Tick Research Laboratory, Edinburg, TX. Each corre...

  17. Implementation of the ALICE HLT hardware cluster finder algorithm in Vivado HLS

    Energy Technology Data Exchange (ETDEWEB)

    Gruell, Frederik; Engel, Heiko; Kebschull, Udo [Infrastructure and Computer Systems in Data Processing, Goethe University Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    The FastClusterFinder algorithm running in the ALICE High-Level Trigger (HLT) read-out boards extracts clusters from raw data from the Time Projection Chamber (TPC) detector and forwards them to the HLT data processing framework for tracking, event reconstruction and compression. It serves as an early stage of feature extraction in the FPGA of the board. Past and current implementations are written in VHDL on reconfigurable hardware for high throughput and low latency. We examine Vivado HLS, a high-level language that promises increased developer productivity, as an alternative. The implementation of the application is compared to descriptions in VHDL and MaxJ in terms of productivity, resource usage and maximum clock frequency.

  18. GPU implementation of discrete particle swarm optimization algorithm for endmember extraction from hyperspectral image

    Science.gov (United States)

    Yu, Chaoyin; Yuan, Zhengwu; Wu, Yuanfeng

    2017-10-01

    Hyperspectral image unmixing is an important part of hyperspectral data analysis. Mixed-pixel decomposition consists of two steps: endmember extraction (finding the unique signatures of pure ground components) and abundance estimation (the proportion of each endmember in each pixel). Recently, a Discrete Particle Swarm Optimization (DPSO) algorithm was proposed to extract endmembers accurately with good optimization performance. However, the DPSO algorithm has very high computational complexity, which makes the endmember extraction procedure very time-consuming for hyperspectral image unmixing. Thus, in this paper, the DPSO endmember extraction algorithm was parallelized, implemented on the CUDA (GPU K20) platform, and evaluated with real hyperspectral remote sensing data. The experimental results show that, with an increasing number of particles, the parallelized version obtained much higher computing efficiency while maintaining the same endmember extraction accuracy.

  19. Implementation of on-line data reduction algorithms in the CMS Endcap Preshower Data Concentrator Cards

    CERN Document Server

    Barney, D; Kokkas, P; Manthos, N; Sidiropoulos, G; Reynaud, S; Vichoudis, P

    2007-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from two, three or four sensors. For the readout of the Preshower, a VME-based system, the Endcap Preshower Data Concentrator Card (ES-DCC), is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction via zero suppression and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. These algorithms, as implemented in the ES-DCC, result in a data-reduction factor of 20.
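
    A toy model of the zero-suppression step may help: keep only strips whose value exceeds a per-strip pedestal plus a threshold, emitting (strip, value) pairs. The numbers below are invented, not the ES-DCC's actual settings.

```python
def zero_suppress(adc, pedestal, threshold=3):
    """Return (strip, value) pairs for strips significantly above pedestal."""
    return [(strip, value) for strip, value in enumerate(adc)
            if value - pedestal[strip] > threshold]

adc = [12, 11, 45, 10, 13, 90, 11, 12]     # raw values for 8 strips
ped = [11, 11, 11, 11, 11, 11, 11, 11]     # per-strip pedestals
print(zero_suppress(adc, ped))             # [(2, 45), (5, 90)]
```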

  20. Implementation of On-Line Data Reduction Algorithms in the CMS Endcap Preshower Data Concentrator Card

    CERN Document Server

    Barney, David; Kokkas, Panagiotis; Manthos, Nikolaos; Reynaud, Serge; Sidiropoulos, Georgios; Vichoudis, Paschalis

    2006-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from 2, 3 or 4 sensors. For the readout of the Preshower, a VME-based system - the Endcap Preshower Data Concentrator Card (ES-DCC) is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction (zero suppression) and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation into the ES-DCC FPGAs. The algorithms implemented into the ES-DCC resulted in a reduction factor of ~20.

  1. IMPLEMENTATION OF INCIDENT DETECTION ALGORITHM BASED ON FUZZY LOGIC IN PTV VISSIM

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-05-01

    Full Text Available Traffic incident management is a major challenge in traffic management, requiring constant attention and significant investment, as well as fast and accurate solutions in order to re-establish normal traffic conditions. Automatic control methods are becoming an important factor in reducing the traffic congestion caused by an incident. In this paper, an automatic incident-detection algorithm based on fuzzy logic is implemented in the PTV VISSIM software. Nine different types of tests were conducted on a two-lane road segment under changing traffic conditions: the location of the road accident and the traffic load. The main conclusion of the research is that the proposed incident-detection algorithm demonstrates good performance in detection time and false-alarm rate.

  2. Implementation of an Evidence-Based Seizure Algorithm in Intellectual Disability Nursing: A Pilot Study

    Science.gov (United States)

    Auberry, Kathy; Cullen, Deborah

    2016-01-01

    Based on the results of the Surrogate Decision-Making Self Efficacy Scale (Lopez, 2009a), this study sought to determine whether nurses working in the field of intellectual disability (ID) experience increased confidence when they implemented the American Association of Neuroscience Nurses (AANN) Seizure Algorithm during telephone triage. The…

  3. DC Voltage Droop Control Implementation in the AC/DC Power Flow Algorithm: Combinational Approach

    DEFF Research Database (Denmark)

    Akhter, F.; Macpherson, D.E.; Harrison, G.P.

    2015-01-01

    of operational flexibility, as more than one VSC station controls the DC-link voltage of the MTDC system. This model enables the study of the effects of DC droop control on the power flows of the combined AC/DC system in steady-state studies after VSC station outages or transient conditions, without needing to use its complete dynamic model. Further, the proposed approach can be extended to include multiple AC and DC grids for combined AC/DC power flow analysis. The algorithm is implemented by modifying the MATPOWER-based MATACDC program, and the results show that the algorithm works efficiently.
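
    The droop characteristic at the heart of such schemes is a one-line relation: each droop-controlled converter backs off its injected DC power in proportion to the deviation of its terminal voltage from a reference. A sketch with illustrative setpoints (not values from the paper):

```python
def droop_power(p_ref, v_dc, v_ref, k_droop):
    """Injected DC power of a droop-controlled converter (per unit)."""
    return p_ref - (v_dc - v_ref) / k_droop

# After a disturbance raises the DC voltage, the station reduces its power:
print(droop_power(p_ref=0.8, v_dc=1.02, v_ref=1.0, k_droop=0.05))  # 0.4
```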

  4. Algorithm and Implementation of Distributed ESN Using Spark Framework and Parallel PSO

    Directory of Open Access Journals (Sweden)

    Kehe Wu

    2017-04-01

    Full Text Available The echo state network (ESN) employs a huge reservoir with sparsely and randomly connected internal nodes and trains only the output weights, which avoids the suboptimality, exploding and vanishing gradients, high complexity and other disadvantages faced by traditional recurrent neural network (RNN) training. In light of its outstanding adaptation to nonlinear dynamical systems, the ESN has been applied to a wide range of applications. However, in the era of Big Data, with an enormous amount of data being generated continuously every day, the data are often distributed and stored in real applications, and thus the centralized ESN training process tends to be technologically unsuitable. In order to meet the requirements of Big Data applications in the real world, in this study we propose an algorithm and its implementation for distributed ESN training. The algorithm is based on the parallel particle swarm optimization (P-PSO) technique and the implementation uses Spark, a well-known large-scale data processing framework. Four extremely large-scale datasets, including artificial benchmarks, real-world data and image data, are adopted to verify our framework on an elastically scalable platform. Experimental results indicate that the proposed work performs well at this scale with regard to speed, accuracy and generalization capability.
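
    A minimal serial echo state network in numpy may clarify what is being distributed: a random sparse reservoir whose only trained part is the linear readout (here fitted by ridge regression rather than the paper's parallel PSO; sizes and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rng.random((n_res, n_res)) < 0.05               # sparse connectivity
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius < 1

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u)) # reservoir update
        states.append(x.copy())
    return np.array(states)

# one-step-ahead prediction of a sine wave
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print(float(np.mean((X @ W_out - y) ** 2)))          # small training error
```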

  5. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    Science.gov (United States)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

    The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (Unmanned Aerial Vehicle) imagery is proposed. The initial seam-line network is first generated by the standard Voronoi Diagram algorithm; an edge diagram is generated based on DSM (Digital Surface Model) data; the vertices (junction nodes of seam-lines) of the initial network are relocated if they lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets. Two quantitative measures are introduced to evaluate the results of the proposed method. Preliminary results show that the method is suitable for regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and is better than the state-of-the-art method in both quality and efficiency on the test datasets.
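
    A generic weighted A* over a grid captures the search step; the paper runs it over a DSM-derived edge-cost diagram to refine seam-lines, whereas the cost grid and weight below are arbitrary. With weight w > 1, the heuristic is inflated, trading optimality for speed.

```python
import heapq

def weighted_astar(cost, start, goal, w=2.0):
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan
    open_heap = [(w * h(start), 0, start)]
    g = {start: 0}
    while open_heap:
        _, g_cur, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g_cur
        if g_cur > g[cur]:
            continue                        # stale heap entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g_cur + cost[nr][nc]
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + w * h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[1, 1, 9], [9, 1, 9], [9, 1, 1]]
print(weighted_astar(grid, (0, 0), (2, 2)))   # 4, hugging the low-cost cells
```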

  6. Scheduling of Iterative Algorithms with Matrix Operations for Efficient FPGA Design—Implementation of Finite Interval Constant Modulus Algorithm

    Czech Academy of Sciences Publication Activity Database

    Šůcha, P.; Hanzálek, Z.; Heřmánek, Antonín; Schier, Jan

    2007-01-01

    Vol. 46, No. 1 (2007), pp. 35-53 ISSN 0922-5773 R&D Projects: GA AV ČR(CZ) 1ET300750402; GA MŠk(CZ) 1M0567; GA MPO(CZ) FD-K3/082 Institutional research plan: CEZ:AV0Z10750506 Keywords: high-level synthesis * cyclic scheduling * iterative algorithms * imperfectly nested loops * integer linear programming * FPGA * VLSI design * blind equalization * implementation Subject RIV: BA - General Mathematics Impact factor: 0.449, year: 2007 http://www.springerlink.com/content/t217kg0822538014/fulltext.pdf

  7. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  8. An implementation of super-encryption using RC4A and MDTM cipher algorithms for securing PDF Files on android

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Parlindungan, M. R.

    2018-03-01

    MDTM is a classical symmetric cryptographic algorithm. As with other classical algorithms, the MDTM cipher is easy to implement but less secure than modern symmetric algorithms. In order to make it more secure, the RC4A stream cipher is added, turning the cryptosystem into a super-encryption scheme. In this process, plaintexts derived from PDFs are first encrypted with the MDTM cipher and then encrypted once more with the RC4A algorithm. The test results show that the complexity is Θ(n²) and that the running time grows linearly with the length of the plaintext and the keys entered.
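
    A sketch of the stream-cipher stage follows. Shown is the classic RC4 core (key scheduling plus output generation); RC4A extends it with a second, independently keyed state array whose outputs are interleaved. The key here is a placeholder.

```python
def rc4_keystream(key, n):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

plaintext = b"attack at dawn"
ks = rc4_keystream(b"secret key", len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
print(ciphertext.hex())
```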

  9. The Research and Implementation of MUSER CLEAN Algorithm Based on OpenCL

    Science.gov (United States)

    Feng, Y.; Chen, K.; Deng, H.; Wang, F.; Mei, Y.; Wei, S. L.; Dai, W.; Yang, Q. P.; Liu, Y. B.; Wu, J. P.

    2017-03-01

    High-performance data processing on a single machine is an urgent need in the development of astronomical software. However, due to differing machine configurations, traditional programming techniques such as multi-threading and CUDA (Compute Unified Device Architecture)+GPU (Graphic Processing Unit) have obvious limitations in portability and seamlessness across different operating systems. The OpenCL (Open Computing Language) used in the development of the MUSER (MingantU SpEctral Radioheliograph) data processing system is introduced, and the Högbom CLEAN algorithm is re-implemented as a parallel CLEAN algorithm in Python with the PyOpenCL extension package. The experimental results show that the CLEAN algorithm based on OpenCL has approximately the same operating efficiency as the former CLEAN algorithm based on CUDA. More importantly, data processing in a CPU (Central Processing Unit)-only environment can also achieve high performance, which solves the problem of the environmental dependence of CUDA+GPU. Overall, the research improves the adaptability of the system, with emphasis on the performance of MUSER image-clean computing. In the meantime, the realization of OpenCL in MUSER proves its suitability for scientific data processing. In view of the high-performance computing features of OpenCL in heterogeneous environments, it will probably become a preferred technology in future high-performance astronomical software development.
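
    A serial numpy sketch of the Högbom CLEAN loop that is being ported: repeatedly find the brightest residual pixel, subtract a scaled, shifted copy of the beam, and record the component. The gain, iteration count, and demo data are illustrative.

```python
import numpy as np

def hogbom_clean(dirty, beam, gain=0.1, n_iter=1000, threshold=1e-3):
    """`beam` must be square with odd side length and its peak at the centre."""
    half = beam.shape[0] // 2
    res = np.pad(dirty.astype(float), half)       # pad so subtraction always fits
    model = np.zeros_like(dirty, dtype=float)
    for _ in range(n_iter):
        r, c = np.unravel_index(np.argmax(np.abs(res)), res.shape)
        if abs(res[r, c]) < threshold:
            break
        val = gain * res[r, c]
        res[r - half:r + half + 1, c - half:c + half + 1] -= val * beam
        model[r - half, c - half] += val           # component in unpadded coords
    return model, res[half:-half, half:-half]

beam = np.array([[0.0, 0.2, 0.0],
                 [0.2, 1.0, 0.2],
                 [0.0, 0.2, 0.0]])
dirty = np.zeros((32, 32))
dirty[9:12, 11:14] = 2.0 * beam                    # a 2.0-flux point source
model, residual = hogbom_clean(dirty, beam, gain=0.2)
print(round(model.sum(), 2))                       # approaches the true flux 2.0
```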

  10. Implementation of Tree and Butterfly Barriers with Optimistic Time Management Algorithms for Discrete Event Simulation

    Science.gov (United States)

    Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia

    The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with efficiently by Samadi's algorithm [8], which works fine in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.

  11. High rise building becomes a ''green tower''. Modernization of the Deutsche Bank administrative building at Frankfurt/Main; Hochhaus wird Green Tower. Modernisierung der Firmenzentrale Deutsche Bank in Frankfurt/M.

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2009-05-15

    The modernization work on the Deutsche Bank building started in 2008 and will be finished by 2010. The modernized building will be environment-friendly and energy-saving. Energy consumption is to be reduced by at least 50 percent. The building will be the first in Germany to receive the US LEED platinum certificate. (orig.)

  12. A hybrid Genetic and Simulated Annealing Algorithm for Chordal Ring implementation in large-scale networks

    DEFF Research Database (Denmark)

    Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup

    2011-01-01

    The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing a Chordal Ring structure in an optical backbone network. In recent years, topologies based on regular graph structures have gained a lot of interest due to their good communication properties for the physical topology of the...

  13. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.

  14. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
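
    For orientation, a compact EM example (the standard two-component one-dimensional Gaussian mixture, not the paper's partitioned extension), showing the E- and M-steps that the extension decomposes into smaller, self-contained instances:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: posterior responsibility of component 0 for each point
    # (the 1/sqrt(2*pi) constant cancels in the ratio)
    p0 = pi * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
    p1 = (1 - pi) * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
    r = p0 / (p0 + p1)
    # M-step: re-estimate weight, means, and standard deviations
    pi = r.mean()
    mu = np.array([(r * x).sum() / r.sum(),
                   ((1 - r) * x).sum() / (1 - r).sum()])
    sigma = np.sqrt(np.array([(r * (x - mu[0]) ** 2).sum() / r.sum(),
                              ((1 - r) * (x - mu[1]) ** 2).sum() / (1 - r).sum()]))
print(np.round(mu, 2), round(pi, 2))   # means near -2 and 3, weight near 0.6
```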

  15. An evaluation and implementation of rule-based Home Energy Management System using the Rete algorithm.

    Science.gov (United States)

    Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular, and the Home Energy Management System (HEMS) plays an important role in saving energy without a decrease in QoL (Quality of Life). Currently, many rule-based HEMSs have been proposed, and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern-matching algorithm for IF-THEN rules. We have previously proposed a rule-based Home Energy Management System (HEMS) using the Rete algorithm. In the proposed system, rules for managing energy are processed by smart taps in the network, and the loads for processing rules and collecting data are distributed among the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules based on the Rete algorithm. In this paper, we evaluated the proposed system by simulation. In the simulation environment, rules are processed by the smart tap related to the action part of each rule. In addition, we implemented the proposed system as a HEMS using smart taps.

  16. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study.

    Science.gov (United States)

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-03-28

    The Non-equispaced Fast Fourier Transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipelining. The algorithm has been coded in C with pragma directives to optimize the architecture of the system. We have used the novel Software-Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvement by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation.
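
    For reference, the transform being accelerated: a direct O(NM) evaluation of the non-equispaced discrete Fourier transform, which the NFFT approximates in O(N log N) time. Sizes here are illustrative.

```python
import numpy as np

def ndft(x, c):
    """f_j = sum_k c_k * exp(-2*pi*i * k * x_j), nodes x_j in [-1/2, 1/2)."""
    N = len(c)
    k = np.arange(-N // 2, N // 2)
    return np.exp(-2j * np.pi * np.outer(x, k)) @ c

x = np.random.default_rng(0).uniform(-0.5, 0.5, 64)   # non-equispaced nodes
c = np.random.default_rng(1).normal(size=32)          # Fourier coefficients
print(ndft(x, c).shape)   # (64,)
```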

  17. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    Science.gov (United States)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

    Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures than SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures than the PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive; not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate possible task- and data-partitioning schemes by exploiting the potential parallelism in the PBR algorithm, subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm, using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performance of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon
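
    A small numpy sketch of one SIRT-style iteration (the family to which PBR belongs): all pixels are updated simultaneously from the back-projected residual, normalized by the row and column sums of the system matrix. The demo system is tiny and invented.

```python
import numpy as np

def sirt(A, b, n_iter=100, lam=1.0):
    row = A.sum(axis=1); row[row == 0] = 1     # row normalization
    col = A.sum(axis=0); col[col == 0] = 1     # column normalization
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row           # weighted projection error
        x += lam * (A.T @ residual) / col      # simultaneous back-projection
    return x

# tiny 2-pixel "phantom" measured by three projection rays
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
print(np.round(sirt(A, A @ x_true), 3))        # converges toward [2, 3]
```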

  18. Implementation techniques and acceleration of DBPF reconstruction algorithm based on GPGPU for helical cone beam CT

    International Nuclear Information System (INIS)

    Shen Le; Xing Yuxiang

    2010-01-01

    The derivative backprojection filtered (DBPF) algorithm for helical cone-beam CT is a newly developed exact reconstruction method. Due to its large computational complexity, the reconstruction is rather slow for practical use. The general-purpose graphics processing unit (GPGPU) is a SIMD parallel hardware architecture with powerful floating-point capability. In this paper, we propose a new method for PI-line choice and sampling grid, and a parallelized PI-line reconstruction algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA). Numerical simulation studies are carried out to validate our method. Compared with a conventional CPU implementation, the CUDA-accelerated method provides images of the same quality with a speedup factor of 318. Optimization strategies for the GPU acceleration are presented. Finally, the influence of the parameters of the PI-line samples on reconstruction speed and image quality is discussed. (authors)

  19. Zertifikat Deutsch als Fremdsprache and the Oral Proficiency Interview: A Comparison of Test Scores and Examinations.

    Science.gov (United States)

    Lalande, John F.; Schweckendiek, Jurgen

    1986-01-01

    Investigates what correlations might exist between an individual's score on the Zertifikat Deutsch als Fremdsprache and on the Oral Proficiency Interview. The tests themselves are briefly described. Results indicate that the two tests appear to correlate well in their evaluation of speaking skills. (SED)

  20. Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond.

    Science.gov (United States)

    Morita, Kenji; Jitsev, Jenia; Morrison, Abigail

    2016-09-15

    Value-based action selection has been suggested to be realized in the corticostriatal local circuits through competition among neural populations. In this article, we review theoretical and experimental studies that have constructed and verified this notion, and provide new perspectives on how the local-circuit selection mechanisms implement reinforcement learning (RL) algorithms and computations beyond them. The striatal neurons are mostly inhibitory, and lateral inhibition among them has been classically proposed to realize "Winner-Take-All (WTA)" selection of the maximum-valued action (i.e., 'max' operation). Although this view has been challenged by the revealed weakness, sparseness, and asymmetry of lateral inhibition, which suggest more complex dynamics, WTA-like competition could still occur on short time scales. Unlike the striatal circuit, the cortical circuit contains recurrent excitation, which may enable retention or temporal integration of information and probabilistic "soft-max" selection. The striatal "max" circuit and the cortical "soft-max" circuit might co-implement an RL algorithm called Q-learning; the cortical circuit might also similarly serve for other algorithms such as SARSA. In these implementations, the cortical circuit presumably sustains activity representing the executed action, which negatively impacts dopamine neurons so that they can calculate reward-prediction-error. Regarding the suggested more complex dynamics of striatal, as well as cortical, circuits on long time scales, which could be viewed as a sequence of short WTA fragments, computational roles remain open: such a sequence might represent (1) sequential state-action-state transitions, constituting replay or simulation of the internal model, (2) a single state/action by the whole trajectory, or (3) probabilistic sampling of state/action. Copyright © 2016. Published by Elsevier B.V.
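
    An illustrative tabular sketch of the two selection rules discussed, with Q-learning's "max" target and soft-max action selection; the environment and constants are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, beta = 0.1, 0.9, 2.0     # learning rate, discount, inverse temp.

def softmax_action(q_row):
    p = np.exp(beta * (q_row - q_row.max()))
    return rng.choice(len(q_row), p=p / p.sum())

for _ in range(5000):
    s = rng.integers(n_states)
    a = softmax_action(Q[s])           # "soft-max" selection
    s2 = (s + a + 1) % n_states        # toy deterministic transition
    r = 1.0 if s2 == 0 else 0.0        # reward only upon reaching state 0
    # Q-learning: the update target uses the "max" (WTA-like) operation
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

print(np.round(Q, 2))
```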

  1. Implementation of the Lanczos algorithm for the Hubbard model on the Connection Machine system

    International Nuclear Information System (INIS)

    Leung, P.W.; Oppenheimer, P.E.

    1992-01-01

    An implementation of the Lanczos algorithm for the exact diagonalization of the two-dimensional Hubbard model on a 4x4 square lattice on the Connection Machine CM-2 system is described. The CM-2 is a massively parallel machine with distributed memory. The program is written in C/PARIS. This implementation minimizes memory usage by generating the matrix elements as needed instead of storing them. The Lanczos vectors are stored across the local memory of the processors. Using translational symmetry only, the dimension of the Hilbert space at half filling is more than 10 million. A speed of about 2.4 min per iteration is achieved on a 64K CM-2. This implementation is scalable: running it on a bigger machine with more processors speeds up the process. The performance of this implementation is analyzed, and its advantages and disadvantages are discussed
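
    A dense-matrix illustration of the Lanczos iteration follows (the CM-2 code instead generates the Hamiltonian matrix elements on the fly, so only the action H @ v is ever formed; the test matrix here is arbitrary):

```python
import numpy as np

def lanczos_ground_energy(H, m=60, seed=0):
    n = H.shape[0]
    v = np.random.default_rng(seed).normal(size=n)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for j in range(m):
        w = H @ v - beta * v_prev          # only the action H @ v is needed
        alpha = v @ w
        alphas.append(alpha)
        w -= alpha * v
        beta = np.linalg.norm(w)
        if beta < 1e-12 or j == m - 1:
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    # eigenvalues of the small tridiagonal matrix approximate extremes of H
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return np.linalg.eigvalsh(T)[0]

rng = np.random.default_rng(1)
H = np.diag(np.arange(100.0)) + rng.normal(0, 0.1, (100, 100))
H = (H + H.T) / 2                           # symmetrize
print(lanczos_ground_energy(H))             # close to H's smallest eigenvalue
```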

  2. Dynamic game balancing implementation using adaptive algorithm in mobile-based Safari Indonesia game

    Science.gov (United States)

    Yuniarti, Anny; Nata Wardanie, Novita; Kuswardayan, Imam

    2018-03-01

    In developing a game, there is one method that should be applied to maintain the interest of players, namely dynamic game balancing. Dynamic game balancing is a process that matches the game's behaviour, attributes, and environment to a player's playing style. This study applies dynamic game balancing using an adaptive algorithm in a scrolling-shooter game called Safari Indonesia, developed in Unity. In this type of game, the player controls a fighter aircraft trying to defend itself from insistent enemy attacks. This classic game type was chosen to implement the adaptive algorithm because it has sufficiently complex attributes for dynamic game balancing. Tests conducted by distributing questionnaires to a number of players indicate that this method managed to reduce frustration and increase the pleasure factor in playing.

  3. FPGA Implementation of an Efficient Algorithm for the Calculation of Charged Particle Trajectories in Cosmic Ray Detectors

    Science.gov (United States)

    Villar, Xabier; Piso, Daniel; Bruguera, Javier D.

    2014-02-01

    This paper presents an FPGA implementation of a previously published algorithm for the reconstruction of cosmic-ray trajectories and the determination of the time of arrival and velocity of the particles. The accuracy and precision issues of the algorithm have been analyzed to propose a suitable implementation. Thus, a 32-bit fixed-point format has been used for the representation of the data values. Moreover, the dependencies among the different operations have been taken into account to obtain a highly parallel and efficient hardware implementation. The final hardware architecture requires 18 cycles to process every particle and has been exhaustively simulated to validate all the design decisions. The architecture has been mapped onto different commercial FPGAs, with a frequency of operation ranging from 300 MHz to 1.3 GHz, depending on the FPGA being used. Consequently, the number of particle trajectories processed per second is between 16 million and 72 million. The high number of particle trajectories calculated per second shows that the proposed FPGA implementation might also be used in high-rate environments such as those found in particle and nuclear physics experiments.

  4. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

    Science.gov (United States)

    Aryanti, Aryanti; Mekongga, Ikhthison

    2018-02-01

    Data security and confidentiality is one of the most important aspects of information systems at the moment. One way to secure data is to use cryptography. In this study a data security system was developed by implementing the Rivest, Shamir, Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research was done by combining the RSA and Vigenere Cipher algorithms to secure document files in Word, Excel, and PDF formats. The application includes the processes of encryption and decryption of data and was created using PHP and MySQL. Data encryption is done on the transmitting side through RSA cryptographic calculations using the public key, followed by the Vigenere Cipher algorithm, which also uses the public key. On the receiving side, decryption uses the Vigenere Cipher algorithm, still with the public key, and then the RSA cryptographic algorithm with the private key. Test results show that the system can encrypt, decrypt, and transmit files. Tests performed on the encryption and decryption of files with different sizes show that file size affects the processing time: the larger the file, the longer the encryption and decryption take.
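
    A toy sketch of such a layered scheme is shown below, using textbook RSA with deliberately small primes and an A-Z Vigenere cipher; the layering order here (Vigenere first, then RSA) is chosen so each stage stays printable, whereas the paper applies RSA first:

        def vigenere(text, key, decrypt=False):
            # Vigenere cipher over the uppercase A-Z alphabet (toy version).
            out = []
            for i, ch in enumerate(text):
                shift = ord(key[i % len(key)]) - 65
                if decrypt:
                    shift = -shift
                out.append(chr((ord(ch) - 65 + shift) % 26 + 65))
            return "".join(out)

        # Textbook RSA with tiny primes -- for illustration only, not secure.
        p, q = 61, 53
        n, phi = p * q, (p - 1) * (q - 1)
        e = 17                       # public exponent, coprime with phi
        d = pow(e, -1, phi)          # private exponent (Python 3.8+)

        msg = "HELLOWORLD"
        layer1 = vigenere(msg, "KEY")                     # Vigenere layer
        cipher = [pow(ord(c), e, n) for c in layer1]      # RSA layer
        plain1 = "".join(chr(pow(c, d, n)) for c in cipher)
        assert vigenere(plain1, "KEY", decrypt=True) == msg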

  5. Deutsch Durch Audio-Visuelle Methode: An Audio-Lingual-Oral Approach to the Teaching of German.

    Science.gov (United States)

    Dickinson Public Schools, ND. Instructional Media Center.

    This teaching guide, designed to accompany Chilton's "Deutsch Durch Audio-Visuelle Methode" for German 1 and 2 in a three-year secondary school program, focuses major attention on the operational plan of the program and a student orientation unit. A section on teaching a unit discusses four phases: (1) presentation, (2) explanation, (3)…

  6. Deutsche Bahn continues cooperation projects in Russia despite the economic crisis / Gebhard Hafer ; interviewed by Anna Nezhinskaya

    Index Scriptorium Estoniae

    Hafer, Gebhard

    2009-01-01

    Gebhard Hafer, director of Deutsche Bahn's international department for the CIS countries and Eastern Europe, answers questions about the impact of the economic crisis on the company's operations, the avoidance of staff layoffs, current German-Russian projects, and the development of transit from China

  7. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  8. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta

    OpenAIRE

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.

    2010-01-01

    Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactive...

  9. Development and implementation of an automatic control algorithm for the University of Utah nuclear reactor

    International Nuclear Information System (INIS)

    Crawford, Kevan C.; Sandquist, Gary M.

    1990-01-01

    The emphasis of this work is the development and implementation of an automatic control philosophy which uses the classical operational philosophies as a foundation. Three control algorithms were derived based on various simplifying assumptions. Two of the algorithms were tested in computer simulations. After realizing the insensitivity of the system to the simplifications, the most reduced form of the algorithms was implemented on the computer control system at the University of Utah (UNEL). Since the operational philosophies have a higher priority than automatic control, they determine when automatic control may be utilized. Unlike the operational philosophies, automatic control is not concerned with component failures. The object of this philosophy is the movement of absorber rods to produce a requested power. When the current power level is compared to the requested power level, an error may be detected which will require the movement of a control rod to correct the error. The automatic control philosophy adds another dimension to the classical operational philosophies. Using this philosophy, normal operator interactions with the computer would be limited to run parameters such as power, period, and run time. This eliminates subjective judgements, objective judgements under pressure, and distractions to the operator, and ensures the reactor will be operated in a safe and controlled manner as well as providing reproducible operations
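
    The control law itself is not reproduced in this record; a minimal sketch of a reduced, error-driven rod command of the kind described, with illustrative deadband and gain values, could look like this:

        def rod_command(requested_power, measured_power,
                        deadband=0.5, gain=0.02):
            # Return a rod movement command from the power error (percent
            # power); positive withdraws rods, negative inserts them.
            error = requested_power - measured_power
            if abs(error) < deadband:
                return 0.0            # within tolerance: hold the rods
            return gain * error       # proportional correction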

  10. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
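
    The serial kernel that each node applies to its assigned rows might be sketched as follows (a generic IDW formula; the GRASS GIS specifics and the master/slave message passing are omitted):

        import numpy as np

        def idw_row(y, xs, px, py, pz, power=2.0):
            # Interpolate one output row at height y from scattered samples
            # (px, py, pz); rows are the unit of work handed to slave nodes.
            row = np.empty(len(xs))
            for j, x in enumerate(xs):
                d2 = (px - x) ** 2 + (py - y) ** 2
                if d2.min() == 0.0:
                    row[j] = pz[d2.argmin()]      # exact hit on a sample
                else:
                    w = d2 ** (-power / 2.0)      # inverse-distance weights
                    row[j] = np.sum(w * pz) / np.sum(w)
            return row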

  11. The Place of "Zertifikat Deutsch als Fremdsprache" in the German Curriculum. A Report of a Survey.

    Science.gov (United States)

    Schneider, Gerd K.

    The "Zertifikat Deutsch als Fremdsprache," an examination developed by the Adult Education Centers in West Germany and the Goethe Institute to measure a student's proficiency in German as a foreign language, consists of two main parts, group testing and individual testing. The group testing section covers listening and reading…

  12. Progress in parallel implementation of the multilevel plane wave time domain algorithm

    KAUST Repository

    Liu, Yang

    2013-07-01

    The computational complexity and memory requirements of classical schemes for evaluating transient electromagnetic fields produced by N_s dipoles active for N_t time steps scale as O(N_t N_s^2) and O(N_s^2), respectively. The multilevel plane wave time domain (PWTD) algorithm [A.A. Ergin et al., Antennas and Propagation Magazine, IEEE, vol. 41, pp. 39-52, 1999], viz. the extension of the frequency domain fast multipole method (FMM) to the time domain, reduces the above costs to O(N_t N_s log^2 N_s) and O(N_s^α), with α = 1.5 for surface current distributions and α = 4/3 for volumetric ones. Its favorable computational and memory costs notwithstanding, serial implementations of the PWTD scheme unfortunately remain somewhat limited in scope and ill-suited to tackle complex real-world scattering problems, and parallel implementations are called for. © 2013 IEEE.

  13. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS)

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from the finite resolution of sampled systems. Experimental control results using the original secondary path model and a modified secondary path model, for both the previous implementation of EE-FXLMS and the genetic algorithm implementation, are compared.

  14. Parallel Implementation of the Terrain Masking Algorithm

    Science.gov (United States)

    1994-03-01

    contains behavior rules which can define a computation or an algorithm. It can communicate with other process nodes, it can contain local data, and it can...terrain masking calculation is being performed. It is this algorithm that consumes about seventy percent of the total terrain masking calculation time

  15. Improvement and implementation for Canny edge detection algorithm

    Science.gov (United States)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features and removes noise effectively. To reduce the sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in 4 directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was simulated with the OpenCV 2.4.0 library in the VS2010 environment, and experimental analysis shows that the improved algorithm detects edge details more effectively and with more adaptability.
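
    A rough software approximation of this pipeline can be written with standard OpenCV calls; the paper's compensation function and 4-direction gradient templates have no direct OpenCV equivalent, so plain bilateral filtering and Otsu-derived dual thresholds stand in for them here:

        import cv2

        def improved_canny(gray):
            # Bilateral smoothing replaces the Gaussian filter, preserving edges.
            smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
            # Otsu yields an adaptive high threshold; a common heuristic
            # takes half of it as the low threshold.
            high, _ = cv2.threshold(smoothed, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return cv2.Canny(smoothed, 0.5 * high, high)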

  16. Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimized Implementations in the bnlearn R Package

    Directory of Open Access Journals (Sweden)

    Marco Scutari

    2017-03-01

    It is well known in the literature that the problem of learning the structure of Bayesian networks is very hard to tackle: Its computational complexity is super-exponential in the number of nodes in the worst case and polynomial in most real-world scenarios. Efficient implementations of score-based structure learning benefit from past and current research in optimization theory, which can be adapted to the task by using the network score as the objective function to maximize. This is not true for approaches based on conditional independence tests, called constraint-based learning algorithms. The only optimization in widespread use, backtracking, leverages the symmetries implied by the definitions of neighborhood and Markov blanket. In this paper we illustrate how backtracking is implemented in recent versions of the bnlearn R package, and how it degrades the stability of Bayesian network structure learning for little gain in terms of speed. As an alternative, we describe a software architecture and framework that can be used to parallelize constraint-based structure learning algorithms (also implemented in bnlearn), and we demonstrate its performance using four reference networks and two real-world data sets from genetics and systems biology. We show that on modern multi-core or multiprocessor hardware parallel implementations are preferable over backtracking, which was developed when single-processor machines were the norm.

  17. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    Science.gov (United States)

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different (68)Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
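
    The leader-follower principle itself is simple to sketch: each time-activity curve joins its nearest existing cluster ('leader') if it is close enough, and otherwise founds a new one. The distance metric, threshold, and leader-update rate below are illustrative choices, not the jClustering defaults:

        import numpy as np

        def leader_follower(tacs, threshold):
            # tacs: array of time-activity curves, one per pixel (rows).
            leaders, labels = [], []
            for curve in tacs:
                if leaders:
                    d = [np.linalg.norm(curve - l) for l in leaders]
                    k = int(np.argmin(d))
                    if d[k] < threshold:
                        labels.append(k)
                        # followers pull their leader toward the cluster mean
                        leaders[k] = 0.9 * leaders[k] + 0.1 * curve
                        continue
                leaders.append(curve.astype(float))   # found a new cluster
                labels.append(len(leaders) - 1)
            return np.array(labels), leaders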

  18. Deutsches Krebsforschungszentrum Heidelberg (DKFZ). Report on scientific results 1998/1999

    International Nuclear Information System (INIS)

    Metzler, H.

    2000-01-01

    The Deutsches Krebsforschungszentrum Heidelberg (DKFZ, German Cancer Research Center) publishes, in alternating years, the 'Wissenschaftlicher Ergebnisbericht' (in German) and the 'Research Report' (in English). Both volumes report on the present state of research activities of the DKFZ as a National Research Center to the funding federal and state authorities (Federal Republic of Germany, Land (state) Baden-Wuerttemberg). Furthermore, they are intended to inform colleagues and the scientifically interested public. Both reports are structured according to the center's eight research programs. The next Research Report will be published in 2001. A new orthography has been adopted in Germany; some authors used the new form, others the traditional one, and the orthography was not standardized. (orig.) [de

  19. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Directory of Open Access Journals (Sweden)

    Khairi Nor Asilah

    2017-01-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as a project prototype due to its high potential for meeting the project requirements, and because it produced better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSNs. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  20. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Science.gov (United States)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as a project prototype due to its high potential for meeting the project requirements, and because it produced better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSNs. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  1. Implementation of a cone-beam reconstruction algorithm for the single-circle source orbit with embedded misalignment correction using homogeneous coordinates

    International Nuclear Information System (INIS)

    Karolczak, Marek; Schaller, Stefan; Engelke, Klaus; Lutz, Andreas; Taubenreuther, Ulrike; Wiesent, Karl; Kalender, Willi

    2001-01-01

    We present an efficient implementation of an approximate cone-beam image reconstruction algorithm for application in tomography, which accounts for scanner mechanical misalignment. The implementation is based on the algorithm proposed by Feldkamp et al. [J. Opt. Soc. Am. A 6, 612-619 (1984)] and is directed at circular scan paths. The algorithm has been developed for the purpose of reconstructing volume data from projections acquired in an experimental x-ray microtomography (μCT) scanner [Engelke et al., Der Radiologe 39, 203-212 (1999)]. To mathematically model misalignment we use matrix notation with homogeneous coordinates to describe the scanner geometry, its misalignment, and the acquisition process. For convenience analysis is carried out for x-ray CT scanners, but it is applicable to any tomographic modality, where two-dimensional projection acquisition in cone beam geometry takes place, e.g., single photon emission computerized tomography. We derive an algorithm assuming misalignment errors to be small enough to weight and filter original projections and to embed compensation for misalignment in the backprojection. We verify the algorithm on simulations of virtual phantoms and scans of a physical multidisk (Defrise) phantom
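
    The homogeneous-coordinate bookkeeping can be illustrated with a toy misalignment model, here a detector offset (du, dv) plus an in-plane tilt; the paper's full parameterization of mechanical misalignment is richer:

        import numpy as np

        def misaligned_projection(f, du, dv, tilt_deg):
            # Ideal 3x4 cone-beam (pinhole) projection with focal length f.
            P = np.array([[f, 0, 0, 0],
                          [0, f, 0, 0],
                          [0, 0, 1, 0]], float)
            # 2D homogeneous detector transform: in-plane rotation + offset.
            t = np.deg2rad(tilt_deg)
            M = np.array([[np.cos(t), -np.sin(t), du],
                          [np.sin(t),  np.cos(t), dv],
                          [0.0,        0.0,       1.0]])
            # Misaligned projection; compensation amounts to applying inv(M)
            # to measured detector coordinates before backprojection.
            return M @ P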

  2. Spatial updating grand canonical Monte Carlo algorithms for fluid simulation: generalization to continuous potentials and parallel implementation.

    Science.gov (United States)

    O'Keeffe, C J; Ren, Ruichao; Orkoulas, G

    2007-11-21

    Spatial updating grand canonical Monte Carlo algorithms are generalizations of random and sequential updating algorithms for lattice systems to continuum fluid models. The elementary steps, insertions or removals, are constructed by generating points in space either at random (random updating) or in a prescribed order (sequential updating). These algorithms have previously been developed only for systems of impenetrable spheres for which no particle overlap occurs. In this work, spatial updating grand canonical algorithms are generalized to continuous, soft-core potentials to account for overlapping configurations. Results on two- and three-dimensional Lennard-Jones fluids indicate that spatial updating grand canonical algorithms, both random and sequential, converge faster than standard grand canonical algorithms. Spatial algorithms based on sequential updating not only exhibit the fastest convergence but also are ideal for parallel implementation due to the absence of strict detailed balance and the nature of the updating that minimizes interprocessor communication. Parallel simulation results for three-dimensional Lennard-Jones fluids show a substantial reduction of simulation time for systems of moderate and large size. The efficiency improvement by parallel processing through domain decomposition is always in addition to the efficiency improvement by sequential updating.
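
    A minimal sketch of one spatial-updating move for a continuous potential is given below (random-updating variant; z denotes the activity, pair_energy is a user-supplied potential, and the acceptance probabilities are the standard grand canonical ones):

        import numpy as np

        def gcmc_spatial_step(positions, box, beta, z, pair_energy, rng):
            # Generate a point in space, then attempt an insertion there or
            # the removal of a randomly chosen particle.
            x = rng.uniform(0.0, box, size=3)
            if rng.random() < 0.5:                    # insertion at x
                dU = sum(pair_energy(x, p, box) for p in positions)
                acc = min(1.0, z * box**3 / (len(positions) + 1)
                          * np.exp(-beta * dU))
                if rng.random() < acc:
                    positions.append(x)
            elif positions:                           # removal
                i = rng.integers(len(positions))
                dU = -sum(pair_energy(positions[i], p, box)
                          for k, p in enumerate(positions) if k != i)
                acc = min(1.0, len(positions) / (z * box**3)
                          * np.exp(-beta * dU))
                if rng.random() < acc:
                    positions.pop(i)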

  3. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Ahmad Audi

    2017-07-01

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
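
    A simplified software analogue of the registration-and-stacking chain, using OpenCV's FAST detector and template matching but omitting the IMU-predicted initial positions (so the template search here is global), is sketched below; it assumes same-size grayscale frames and at least four usable matches per frame:

        import cv2
        import numpy as np

        def register_and_stack(images, patch=32):
            ref = images[0]
            h, w = ref.shape
            fast = cv2.FastFeatureDetector_create(threshold=40)
            kps = [k.pt for k in fast.detect(ref, None)][:50]
            acc = ref.astype(np.float64)
            for img in images[1:]:
                src, dst = [], []
                for (x, y) in kps:
                    x, y = int(x), int(y)
                    if not (patch <= x < w - patch and patch <= y < h - patch):
                        continue
                    tpl = ref[y - patch // 2:y + patch // 2,
                              x - patch // 2:x + patch // 2]
                    res = cv2.matchTemplate(img, tpl, cv2.TM_CCOEFF_NORMED)
                    _, _, _, loc = cv2.minMaxLoc(res)   # best match position
                    src.append((x, y))
                    dst.append((loc[0] + patch // 2, loc[1] + patch // 2))
                H, _ = cv2.findHomography(np.float32(dst), np.float32(src),
                                          cv2.RANSAC)
                acc += cv2.warpPerspective(img, H, (w, h)).astype(np.float64)
            return (acc / len(images)).astype(ref.dtype)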

  4. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  5. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties

  6. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

    Directory of Open Access Journals (Sweden)

    Aryanti Aryanti

    2018-01-01

    Data security and confidentiality is one of the most important aspects of information systems at the moment. One way to secure data is to use cryptography. In this study a data security system was developed by implementing the Rivest, Shamir, Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research was done by combining the RSA and Vigenere Cipher algorithms to secure document files in Word, Excel, and PDF formats. The application includes the processes of encryption and decryption of data and was created using PHP and MySQL. Data encryption is done on the transmitting side through RSA cryptographic calculations using the public key, followed by the Vigenere Cipher algorithm, which also uses the public key. On the receiving side, decryption uses the Vigenere Cipher algorithm, still with the public key, and then the RSA cryptographic algorithm with the private key. Test results show that the system can encrypt, decrypt, and transmit files. Tests performed on the encryption and decryption of files with different sizes show that file size affects the processing time: the larger the file, the longer the encryption and decryption take.

  7. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An error control algorithm is employed both for the generation of round keys and for the diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing nonlinear selective bit inversions on the round keys. Randomized selective bit inversions are done on equal lengths of key bits by a round-constant feedback shift register within the error correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan3E FPGA, and results are shown. A comparative analysis between 128-bit Advanced Encryption Standard round keys and the proposed round keys demonstrates the security strength of the proposed algorithm. This paper concludes that the chip-based multi-layer key distribution of the proposed algorithm is an enhanced solution to existing threats on cryptography algorithms.

  8. PyCPR - a python-based implementation of the Conjugate Peak Refinement (CPR) algorithm for finding transition state structures.

    Science.gov (United States)

    Gisdon, Florian J; Culka, Martin; Ullmann, G Matthias

    2016-10-01

    Conjugate peak refinement (CPR) is a powerful and robust method to search for transition states on a molecular potential energy surface. Nevertheless, to the best of our knowledge, the method has so far been implemented only in CHARMM. In this paper, we present PyCPR, a new Python-based implementation of the CPR algorithm within the pDynamo framework. We provide a detailed description of the theory underlying our implementation and discuss the different parts of the implementation. The method is applied to two different problems. First, we illustrate the method by analyzing the gauche to anti-periplanar transition of butane using a semiempirical QM method. Second, we reanalyze the mechanism of a glycyl-radical enzyme, namely 4-hydroxyphenylacetate decarboxylase (HPD), using QM/MM calculations. In the end, we suggest a strategy for using our implementation of the CPR algorithm. The integration of PyCPR into the pDynamo framework allows the combination of CPR with the large variety of methods implemented in pDynamo. PyCPR can be used in combination with quantum mechanical and molecular mechanical methods (and hybrid methods) implemented directly in pDynamo, but also in combination with external programs such as ORCA, using pDynamo as an interface. PyCPR is distributed as free, open source software and can be downloaded from http://www.bisb.uni-bayreuth.de/index.php?page=downloads . Graphical Abstract: PyCPR is a search tool for finding saddle points on the potential energy landscape of a molecular system.

  9. Final Report for Award #DE-SC3956 Separating Algorithm and Implementation via programming Model Injection (SAIMI)

    Energy Technology Data Exchange (ETDEWEB)

    Strout, Michelle [Colorado State Univ., Fort Collins, CO (United States)

    2015-08-15

    Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details such as how to map the computation on to parallel processors, how to schedule the computation, and how to allocation storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication heavy portions in many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameter to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.

  10. 25 years of Deutsche Kernreaktor-Versicherungsgemeinschaft

    International Nuclear Information System (INIS)

    Hertel, G.

    1982-01-01

    In May 1982, the Deutsche Kernreaktor-Versicherungsgemeinschaft (German Nuclear Reactor Insurance Pool, DKVG), a pool of 111 insurance companies authorized to do business in the Federal Republic of Germany, had been engaged in the nuclear insurance business successfully for twenty-five years. DKVG is mainly engaged in the re-insurance against damage arising from nuclear power and fires, including the costs of decontamination and cleaning of plants used to split nuclear fuels and facilities and inventories, including the source materials and fuels of such facilities, and against legal third party liability arising from the operation of nuclear facilities, including the storage and disposal of waste and plant components turned radioactive. DKVG can reinsure abroad the risks it covers and, in turn, offer re-insurance of risks insured by foreign nuclear insurance pools. The largest damage to be covered also by DKVG to this day has been the property damage arising from the March 28, 1979 accident at the TMI-2 nuclear power station in the United States. The main problem faced by the nuclear plant insurance business in the Federal Republic of Germany, as in some other countries, is an increase in coverage capacity over a medium term. (orig.) [de

  11. Academic writing in German as a foreign language: the example of final theses by French students

    OpenAIRE

    François, Audrey

    2006-01-01

    The present work deals with academic writing in German as a foreign language, using the example of final theses by French students. The aim of the dissertation is to find out how French students of German Studies shape the inner and outer form of their university theses. It also examines whether advanced French learners of German as a foreign language have particular difficulties with academic writing that are not shared by all learners of German as a…

  12. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    Science.gov (United States)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.

  13. Analysis of the CSR activities of T-Mobile Czech Republic, a.s. and their comparison with the strategy of the parent company Deutsche Telekom

    OpenAIRE

    Ježková, Ivana

    2008-01-01

    The thesis compiles the CSR activities of T-Mobile Czech Republic, a.s., examines their inclusion in the CSR strategy of Deutsche Telekom AG, and suggests how to align current activities with the CSR strategy of Deutsche Telekom AG. The first part of the thesis explains what CSR is, how a responsible company can be identified, and why CSR matters. The second part provides information about T-Mobile Czech Republic and Deutsche Telekom, a list of CSR activities ...

  14. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    International Nuclear Information System (INIS)

    Jalmuzna, W.

    2006-02-01

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a project for a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems at Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components such as an IQ demodulator, a division block, and a library for complex and floating-point operations, and it is able to speed up the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals, and the performance achieved is satisfactory. (Orig.)
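
    The IQ demodulator mentioned among the library components can be illustrated by a short floating-point model (the FPGA fixed-point arithmetic is omitted, and a moving average stands in for a proper low-pass filter):

        import numpy as np

        def iq_demodulate(signal, f_if, f_s):
            # Mix the IF signal with quadrature carriers, low-pass filter,
            # and recover amplitude and phase of the field vector.
            n = np.arange(len(signal))
            i = signal * np.cos(2 * np.pi * f_if / f_s * n)
            q = -signal * np.sin(2 * np.pi * f_if / f_s * n)
            k = int(f_s // f_if)                  # samples per carrier period
            kern = np.ones(k) / k                 # crude low-pass (moving average)
            i = np.convolve(i, kern, mode="same")
            q = np.convolve(q, kern, mode="same")
            return 2 * np.hypot(i, q), np.arctan2(q, i)   # amplitude, phase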

  15. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jalmuzna, W.

    2006-02-15

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences. It could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a project for a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems at Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule in the experiment. The device can also be used as a simulator of the cavity and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components such as an IQ demodulator, a division block, and a library for complex and floating-point operations, and it is able to speed up the implementation time of many complicated algorithms. The library has already been tested using real accelerator signals, and the performance achieved is satisfactory. (Orig.)

  16. Implementation of an algorithm for absorbed dose calculation in high energy photon beams at off axis points

    International Nuclear Information System (INIS)

    Matos, M.F.; Alvarez, G.D.; Sanz, D.E.

    2008-01-01

    Full text: A semiempirical algorithm for absorbed dose calculation at off-axis points in irregular beams was implemented. It is well known that semiempirical methods are very useful because of their ease of implementation and their helpfulness in dose calculation in the clinic. These methods can be used as independent tools for dosimetric calculation in many applications of quality assurance. However, the applicability of such methods has some limitations, even in homogeneous media, especially at off-axis points, near beam fringes, or outside the beam. Only methods derived from tissue-air ratio (TAR) or scatter-maximum ratio (SMR), devised many years ago, are available for those situations. Despite improvements to these manual methods, such as the Sc-Sp ones, no attempt has been made to extend their usage to off-axis points. In this work, a semiempirical formalism was introduced, based on the works of Venselaar et al. (1999) and Sanz et al. (2004), aimed at the Sc-Sp separation. This new formalism relies on the separation of the primary and secondary components of the beam, although in a relative way. The data required by the algorithm are reduced to a minimum, allowing for experimental ease. According to modern recommendations, reference measurements in a water phantom are performed at 10 cm depth, keeping away electron contamination. Air measurements are done using a mini phantom instead of the old equilibrium caps. Finally, the calculations at off-axis points are done using data measured on the central beam axis, corrected with a measured function which depends on the location of the off-axis point. The measurements for testing the algorithm were performed on our Siemens MXE linear accelerator. The algorithm was used to determine specific dose profiles for a great number of different beam configurations, and the results were compared with direct measurements to validate the accuracy of the algorithm. Additionally, the results were

  17. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    Science.gov (United States)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO) and a contingency rendezvous in LLO with the ascent and descent stage in the case of an aborted lunar landing. Therefore, it is not difficult to realize that each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper will describe these algorithms, which are designed to be structured and arranged in such a way as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile. On the contrary, they were designed from the start to be general enough that any type of trajectory profile can be flown (i.e., a coelliptic profile, a stable orbit rendezvous profile, an expedited LLO rendezvous profile, etc.), all using the same rendezvous suite of algorithms. Each of these profiles makes use of maneuver types which have been designed with the dual goals of robustness and performance. They are designed to converge quickly under dispersed conditions and to perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these

  18. Search of molecular ground state via genetic algorithm: Implementation on a hybrid SIMD-MIMD platform

    International Nuclear Information System (INIS)

    Pucello, N.; D'Agostino, G.; Pisacane, F.

    1997-01-01

    A genetic algorithm for the optimization of the ground-state structure of a metallic cluster has been developed and ported on a SIMD-MIMD parallel platform. The SIMD part of the parallel platform is represented by a Quadrics/APE100 consisting of 512 floating point units, while the MIMD part is formed by a cluster of workstations. The proposed algorithm is composed by a part where the genetic operators are applied to the elements of the population and a part which performs a further local relaxation and the fitness calculation via Molecular Dynamics. These parts have been implemented on the MIMD and on the SIMD part, respectively. Results have been compared to those generated by using Simulated Annealing

  19. Implementation of Super-Encryption with Trithemius Algorithm and Double Transposition Cipher in Securing PDF Files on Android Platform

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Jessica

    2018-03-01

    This study aims to combine the Trithemius algorithm and the double transposition cipher for file security, implemented as an Android-based application. The parameters examined are the real running time and the complexity value. The file type used is the PDF format. The overall result shows that the complexity of the two algorithms combined in the super-encryption method is Θ(n²). However, the encryption process using the Trithemius algorithm is much faster than the one using the double transposition cipher. The processing time is linearly proportional to the length of the plaintext and password.
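
    Both layers are easy to sketch in the encryption direction: the Trithemius cipher is a Caesar shift that advances with character position, and the double transposition is two passes of a keyed columnar transposition (the keys below are illustrative):

        def trithemius(text, decrypt=False):
            # Progressive Caesar shift (0, 1, 2, ...) over the A-Z alphabet.
            out = []
            for i, ch in enumerate(text):
                shift = -i if decrypt else i
                out.append(chr((ord(ch) - 65 + shift) % 26 + 65))
            return "".join(out)

        def columnar_transposition(text, key):
            # Write the text row-wise under the key, read the columns in the
            # alphabetical order of the key letters.
            cols = [text[i::len(key)] for i in range(len(key))]
            order = sorted(range(len(key)), key=lambda i: key[i])
            return "".join(cols[i] for i in order)

        ct = columnar_transposition(
            columnar_transposition(trithemius("ATTACKATDAWN"), "ZEBRA"),
            "CRANE")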

  20. Implementation vigenere algorithm using microcontroller for sending SMS in monitoring radioactive substances transport system

    International Nuclear Information System (INIS)

    Adi Abimanyu; Nurhidayat; Jumari

    2013-01-01

    The safety and security of radioactive substances must be ensured from the sender to the receiver so that no harm comes to humans. In general, the transport of radioactive materials is monitored through telephone conversations to determine the location and exposure rate of the radioactive substances. From a security standpoint, communication through telephone conversations is easily interpreted by others; in addition, the possibility of human error is quite high. The SMS service is known for its ease of use, so SMS can substitute for telephone conversations in monitoring the radiation exposure rate and the position of radioactive substances during transport. The radioactive substances transport monitoring system developed here implements the Vigenere algorithm on a microcontroller for sending SMS (Short Message Service) messages. Tests were conducted on encryption and decryption and on the computation time required. The test results show that the Vigenere algorithm was successfully implemented to encrypt and decrypt messages in the transport monitoring system; the computation time required to encrypt and decrypt the data is 13.05 ms for 36 characters and 13.61 ms for 37 characters, so each additional character requires 0.56 ms of computing time. (author)

  1. An efficient and cost effective FPGA based implementation of the Viola-Jones face detection algorithm

    Directory of Open Access Journals (Sweden)

    Peter Irgens

    2017-04-01

    We present a field-programmable gate array (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system-level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release the entire project to the public domain. We hope that this will enable other researchers to easily replicate and compare their results to ours, and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping.
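
    As a software reference point for the hardware design, the same algorithm family is available through OpenCV's cascade classifier; a minimal usage sketch (with a hypothetical input file name) is:

        import cv2

        # The pretrained cascade ships with the opencv-python package.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        img = cv2.imread("frame.png")             # hypothetical input frame
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)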

  2. Implementation of intensity ratio change and line-of-sight rate change algorithms for imaging infrared trackers

    Science.gov (United States)

    Viau, C. R.

    2012-06-01

    The use of the intensity change and line-of-sight (LOS) change concepts has previously been documented in the open literature as techniques used by non-imaging infrared (IR) seekers to reject expendable IR countermeasures (IRCM). The purpose of this project was to implement IR counter-countermeasure (IRCCM) algorithms based on target intensity and kinematic behavior for a generic imaging IR (IIR) seeker model, with the underlying goal of obtaining a better understanding of how expendable IRCM can be used to defeat the latest generation of seekers. The report describes the Intensity Ratio Change (IRC) and LOS Rate Change (LRC) discrimination techniques. The algorithms and the seeker model are implemented in a physics-based simulation product called Tactical Engagement Simulation Software (TESS™). TESS is developed in the MATLAB®/Simulink® environment and is a suite of RF/IR missile software simulators used to evaluate and analyze the effectiveness of countermeasures against various classes of guided threats. The investigation evaluates the algorithms and tests their robustness by presenting the results of batch simulation runs of surface-to-air (SAM) and air-to-air (AAM) IIR missiles engaging a non-maneuvering target platform equipped with expendable IRCM as self-protection. The report discusses how varying critical parameters such as track memory time, ratio thresholds, and hold time can influence the outcome of an engagement.
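
    A schematic rendering of the two discriminants, with illustrative threshold values rather than those studied in the report, could look like this:

        import numpy as np

        def flag_decoy(intensities, los_angles, dt,
                       irc_limit=2.0, lrc_limit=0.05):
            # Flag a track as a likely expendable decoy when the
            # frame-to-frame intensity ratio or the change in the
            # line-of-sight rate exceeds a threshold.
            irc = intensities[-1] / max(intensities[-2], 1e-9)
            los_rate = np.diff(los_angles) / dt          # rad/s
            lrc = abs(los_rate[-1] - los_rate[-2])
            return irc > irc_limit or lrc > lrc_limit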

  3. Experiences with Implementing a Distributed and Self-Organizing Scheduling Algorithm for Energy-Efficient Data Gathering on a Real-Life Sensor Network Platform

    NARCIS (Netherlands)

    Zhang, Y.; Chatterjea, Supriyo; Havinga, Paul J.M.

    2007-01-01

    We report our experiences with implementing a distributed and self-organizing scheduling algorithm designed for energy-efficient data gathering on a 25-node multihop wireless sensor network (WSN). The algorithm takes advantage of spatial correlations that exist in readings of adjacent sensor nodes

  4. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  5. Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport

    International Nuclear Information System (INIS)

    Howell, L H

    2004-01-01

    Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally-uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, accuracy of the AMR solution, and scalar speedup of the AMR algorithm compared to a uniform fully-refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, both in single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common

  6. Performance Test of Core Protection and Monitoring Algorithm with DLL for SMART Simulator Implementation

    International Nuclear Information System (INIS)

    Koo, Bonseung; Hwang, Daehyun; Kim, Keungkoo

    2014-01-01

    A multi-purpose best-estimate simulator for SMART is being established; it is intended to be used as a tool to evaluate the impacts of design changes on safety performance and to improve and/or optimize the operating procedure of SMART. In keeping with these intentions, a real-time model of the digital core protection and monitoring systems was developed, and the real-time performance of the models was verified for various simulation scenarios. In this paper, a performance test of the core protection and monitoring algorithm with a DLL file for the SMART simulator implementation was performed. A DLL file of the simulator application code was made, and several real-time evaluation tests were conducted for steady-state and transient conditions, as well as for various other scenarios, using simulated system variables. The results of all test cases showed good agreement with the reference results, and the features introduced by the algorithm changes were properly reflected in the DLL results. Therefore, it was concluded that the SCOPS S SIM and SCOMS S SIM algorithms and calculational capabilities are appropriate for the core protection and monitoring program in the SMART simulator

  7. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  8. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    Science.gov (United States)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple, low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation on a GPU is a non-trivial task that requires a thorough refactoring of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology, the CPU code essentially only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and future evolution of GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow-accumulation step of this algorithm on the CPU, using OpenACC, and in two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. Although OpenACC cannot match the performance of an optimized CUDA implementation (×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with far simpler code and less implementation effort.
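
    For readers unfamiliar with the case study, here is a minimal serial Python sketch of the D8 flow-accumulation logic that the paper parallelizes. It omits the slope-distance weighting and pit/flat handling of a production implementation; all names and the tiny test grid are illustrative.

        import numpy as np

        # Serial sketch of the D8 flow-accumulation step (O'Callaghan and
        # Mark): each cell drains to its steepest-descent neighbor.
        NEIGHBORS = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]

        def d8_accumulation(dem):
            rows, cols = dem.shape
            acc = np.ones_like(dem, dtype=np.int64)  # each cell counts itself
            # Visit cells from highest to lowest so upstream totals are final
            # before being passed downstream.
            flat_order = np.argsort(dem, axis=None)[::-1]
            order = np.dstack(np.unravel_index(flat_order, dem.shape))[0]
            for r, c in order:
                drops = []
                for dr, dc in NEIGHBORS:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        drops.append((dem[r, c] - dem[rr, cc], rr, cc))
                drop, rr, cc = max(drops)  # steepest descent (unweighted)
                if drop > 0:               # flats and pits route nowhere here
                    acc[rr, cc] += acc[r, c]
            return acc

        print(d8_accumulation(np.array([[3., 2.], [2., 1.]])))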

  9. Research and implementation of finger-vein recognition algorithm

    Science.gov (United States)

    Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin

    2017-06-01

    In finger-vein image preprocessing, finger-angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle-correction algorithm based on the centroid of the vein image, and extract the ROI region according to the bidirectional gray-projection method. Inspired by the fact that features in vein areas have an appearance similar to valleys, a novel method is proposed to extract the center and width of the vein based on multi-directional gradients; it is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray-value distribution of the texture image. This algorithm effectively overcomes errors in extracting the edges of the texture. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray-value matching algorithm. Experimental results on pairs of matched vein images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture-extraction efficiency, matching accuracy, and algorithm efficiency.
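
    A minimal sketch of the bidirectional gray-projection idea, under the assumption that the ROI is the band where the row and column gray-level sums stay above a fraction of their peak; the fraction and helper names are illustrative, not from the paper.

        import numpy as np

        # ROI extraction by bidirectional gray projection: project gray
        # levels onto both axes and keep the high-energy band on each.
        def roi_by_projection(img, frac=0.5):
            """img: 2-D uint8 array. Returns (top, bottom, left, right)."""
            rows = img.sum(axis=1).astype(float)  # horizontal projection
            cols = img.sum(axis=0).astype(float)  # vertical projection

            def band(p):
                keep = np.flatnonzero(p >= frac * p.max())
                return keep[0], keep[-1]

            top, bottom = band(rows)
            left, right = band(cols)
            return top, bottom, left, right

        img = np.zeros((64, 64), dtype=np.uint8)
        img[16:48, 8:56] = 200                    # bright finger region
        print(roi_by_projection(img))             # ~ (16, 47, 8, 55)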

  10. THE ALGORITHM IMPLEMENTATION OF THE DIVERSIFICATION STRATEGY IN SMALL AND MEDIUM-SIZED ENTERPRISES (FOR EXAMPLE, THE HOSPITALITY INDUSTRY

    Directory of Open Access Journals (Sweden)

    Наталья Николаевна Масюк

    2013-09-01

    Diversification in small businesses, in the general sense, is an extension of business activities into new areas (expanding the range of products, types of services provided, etc.). Applying a diversification strategy in small and medium business is justified when the industry offers no opportunities for further growth, or when growth opportunities outside the industry are more attractive. To determine whether diversification is overdue and justified, the entrepreneur must clearly define an algorithm for his or her actions. Purpose: To determine an algorithm for implementing the strategy of diversification. Methodology: Desk research. Results: The developed algorithm. Practical implications: Management. DOI: http://dx.doi.org/10.12731/2218-7405-2013-9-19

  11. Architecture for the Secret-Key BC3 Cryptography Algorithm

    Directory of Open Access Journals (Sweden)

    Arif Sasongko

    2011-08-01

    Cryptography is a very important aspect of data security. The focus of research in this field is shifting from the security aspect alone to also consider the implementation aspect. This paper aims to introduce the BC3 algorithm with a focus on its hardware implementation, and proposes an architecture for the hardware implementation of this algorithm. BC3 is a secret-key cryptography algorithm developed with two considerations: robustness and implementation efficiency. The algorithm has been implemented in software and has good performance compared to the AES algorithm. BC3 is an improvement of the BC2 and AE cryptographic algorithms and is expected to have the same level of robustness and to gain competitive advantages in the implementation aspect. The development of the architecture pays particular attention to (1) resource sharing and (2) having a single clock for each round, and it exploits the regularity of the algorithm. This architecture is then implemented on an FPGA. The implementation has three times smaller area than AES, but is about five times faster. Furthermore, this BC3 hardware implementation has better performance than the BC3 software in both the key expansion stage and the randomizing stage. In the future, the security of this implementation must be reviewed, especially against side-channel attacks.

  12. Construction of a universal quantum computer

    International Nuclear Information System (INIS)

    Lagana, Antonio A.; Lohe, M. A.; Smekal, Lorenz von

    2009-01-01

    We construct a universal quantum computer following Deutsch's original proposal of a universal quantum Turing machine (UQTM). Like Deutsch's UQTM, our machine can emulate any classical Turing machine and can execute any algorithm that can be implemented in the quantum gate array framework, but under the control of a quantum program, and hence is universal. We present the architecture of the machine, which consists of a memory tape and a processor, and describe the observables that comprise the registers of the processor and the instruction set, which includes a set of operations that can approximate any unitary operation to any desired accuracy and hence is quantum computationally universal. We present the unitary evolution operators that act on the machine to achieve universal computation, discuss each of them in detail, and specify and discuss explicit program halting and concatenation schemes. We define and describe a set of primitive programs in order to demonstrate the universal nature of the machine. These primitive programs facilitate the implementation of more complex algorithms, and we demonstrate their use by presenting a program that computes the NAND function, thereby also showing that the machine can compute any classically computable function.

  13. Deutsche Bibliotheksstatistik (DBS): Konzept, Umsetzung und Perspektiven für eine umfassende Datenbasis zum Bibliothekswesen in Deutschland: 10 Fragen von Bruno Bauer an Ronald M. Schmidt, Leiter der DBS / Deutsche Bibliotheksstatistik (DBS): Concept, implementation and prospect for a comprehensive database on library statistics in Germany: 10 questions interview with Ronald M. Schmidt, head of DBS, by Bruno Bauer

    OpenAIRE

    Schmidt, Ronald M.; Bauer, Bruno

    2008-01-01

    The DBS, Deutsche Bibliotheksstatistik (German Library Statistics, http://www.bibliotheksstatistik.de), has been reporting since 1974. Around 9,000 libraries file data on facilities, equipment, holdings, usage, budget and staff. Data collection, evaluation, and presentation are today carried out online only. The aim of DBS is the formation of a national data pool containing statistical data on all types of libraries. The interview informs about the concept of DBS and its differentiation of public, university an...

  14. The readout system and the trigger algorithm implementation for the UFFO Pathfinder

    DEFF Research Database (Denmark)

    Na, G.W.; Ahmad, S.; Barrillon, P.

    2012-01-01

    Few GRB UV-optical light curves have been measured within a minute after the gamma-ray signal. This lack of sub-minute data limits the study of the characteristics of the UV-optical light curves of the short-hard type GRB and the fast-rising GRB. Therefore, we have developed a telescope named the Ultra-Fast Flash Observatory (UFFO) Pathfinder to take sub-minute data on the early photons from GRBs. The UFFO Pathfinder has a coded-mask X-ray camera to search for the GRB location using the UBAT trigger algorithm. Determining the direction of the GRB as soon as possible requires fast processing. We have ultimately implemented all...

  15. Implementations of back propagation algorithm in ecosystems applications

    Science.gov (United States)

    Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed

    2015-05-01

    Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies: problems that have no algorithmic solution, or whose algorithmic solution is too complex to be found. In general, ANNs are abstractions of the biological brain, developed from concepts that evolved out of late twentieth century neuro-physiological experiments on the cells of the human brain, and they were adopted to overcome the perceived inadequacies of conventional ecological data analysis methods. ANNs have gained increasing attention in ecosystem applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers on them a superior predictive ability. In this research, ANNs are applied to an ecological system analysis. The neural networks use the well-known Back Propagation (BP) algorithm with the delta rule for adaptation of the system. The BP algorithm uses supervised learning: we provide the algorithm with examples of the inputs and outputs we want the network to compute, and then the error is calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. Training begins with random weights, and the goal is to adjust them so that the error is minimal. This research evaluated the use of artificial neural network (ANN) techniques in ecological system analysis and modeling. The experimental results from this research demonstrate that an artificial neural network system can be trained to act as an expert
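
    Since the abstract describes the BP/delta-rule loop in words, here is a minimal self-contained sketch of that loop on toy data; the network size, learning rate, and data are illustrative assumptions, not the paper's configuration.

        import numpy as np

        # Minimal back propagation with the delta rule: one hidden layer,
        # supervised input-output pairs, random initial weights adjusted
        # iteratively to reduce the output error.
        rng = np.random.default_rng(0)
        X = rng.random((100, 4))                  # e.g. environmental predictors
        y = (X.sum(axis=1, keepdims=True) > 2).astype(float)  # toy target

        W1 = rng.normal(0, 0.5, (4, 8))
        W2 = rng.normal(0, 0.5, (8, 1))
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for epoch in range(2000):
            h = sigmoid(X @ W1)                   # forward pass
            out = sigmoid(h @ W2)
            err = y - out                         # the error BP reduces
            # delta rule: propagate error back through the derivatives
            d_out = err * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 += 0.1 * h.T @ d_out               # weight updates
            W1 += 0.1 * X.T @ d_h

        print("final mean abs error:", float(np.abs(err).mean()))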

  16. One-Step Leapfrog LOD-BOR-FDTD Algorithm with CPML Implementation

    Directory of Open Access Journals (Sweden)

    Yi-Gang Wang

    2016-01-01

    An unconditionally stable one-step leapfrog locally one-dimensional finite-difference time-domain (LOD-FDTD) algorithm for bodies of revolution (BOR) is presented. The equations of the proposed algorithm are obtained by algebraic manipulation of those used in the conventional LOD-BOR-FDTD algorithm. The equations for the z-direction electric and magnetic fields in the proposed algorithm must be treated specially. The new algorithm obtains a higher computational efficiency while preserving the properties of the conventional LOD-BOR-FDTD algorithm. Moreover, the convolutional perfectly matched layer (CPML) is introduced into the one-step leapfrog LOD-BOR-FDTD algorithm. The equation of the one-step leapfrog CPML is concise, and numerical results show that its reflection error is small. It can be concluded that a similar CPML scheme can also easily be applied to the one-step leapfrog LOD-FDTD algorithm in the Cartesian coordinate system.

  17. Simulation of subwavelength metallic gratings using a new implementation of the recursive convolution finite-difference time-domain algorithm.

    Science.gov (United States)

    Banerjee, Saswatee; Hoshino, Tetsuya; Cole, James B

    2008-08-01

    We introduce a new implementation of the finite-difference time-domain (FDTD) algorithm with recursive convolution (RC) for first-order Drude metals. We implemented RC for both Maxwell's equations for light polarized in the plane of incidence (TM mode) and the wave equation for light polarized normal to the plane of incidence (TE mode). We computed the Drude parameters at each wavelength using the measured value of the dielectric constant as a function of the spatial and temporal discretization to ensure both the accuracy of the material model and algorithm stability. For the TE mode, where Maxwell's equations reduce to the wave equation (even in a region of nonuniform permittivity) we introduced a wave equation formulation of RC-FDTD. This greatly reduces the computational cost. We used our methods to compute the diffraction characteristics of metallic gratings in the visible wavelength band and compared our results with frequency-domain calculations.

  18. Implementation in an FPGA circuit of Edge detection algorithm based on the Discrete Wavelet Transforms

    Science.gov (United States)

    Bouganssa, Issam; Sbihi, Mohamed; Zaim, Mounia

    2017-07-01

    The 2D Discrete Wavelet Transform (DWT) is a computationally intensive task that is usually implemented on specific architectures in many real-time imaging systems. In this paper, a high-throughput edge and contour detection algorithm is proposed based on the discrete wavelet transform. A technique of applying the filters along the three directions of the image (horizontal, vertical and diagonal) is used to capture the maximum number of existing contours. The proposed architectures were designed in VHDL and mapped to a Xilinx Spartan-6 FPGA. The synthesis results show that the proposed architecture has a low area cost and can operate at up to 100 MHz, performing 2D wavelet analysis for a sequence of images while maintaining the flexibility of the system to support an adaptive algorithm.

  19. China-Knigge für deutsche Geschäftsleute?: die Darstellung Chinas in interkultureller Ratgeberliteratur (China etiquette guides for German businesspeople? The portrayal of China in intercultural advice literature)

    OpenAIRE

    Poerner, Michael

    2009-01-01

    In his contribution, Michael Poerner analyzes current "China-Knigge" etiquette guides for German managers and examines the image of China they convey. He asks whether these guides actually offer well-founded, expert accounts, or whether they instead follow the usual undifferentiated patterns of perception handed down through history. "Aimed specifically at Western businesses and managers, this book offers a general framework for underst...

  20. A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2017-02-01

    The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain a better detection performance. However, it still has two limits that can be improved. On the one hand, reasonable integration of spatial-spectral information can be used to further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes the spatial neighborhood resources to reconstruct the testing pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in the KRX detector to implement the anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data.

  1. Firmware implementation of algorithms for the new topological processor in the ATLAS first level trigger

    Energy Technology Data Exchange (ETDEWEB)

    Maldaner, Stephan; Caputo, Regina; Schaefer, Ulrich; Tapprogge, Stefan [Universitaet Mainz, Staudingerweg 7, 55128 Mainz (Germany)

    2013-07-01

    After the upgrade of the Large Hadron Collider in 2013/2014, proton-proton collisions will be provided at a center-of-mass energy of up to 14 TeV with an instantaneous luminosity of at least 1×10³⁴ cm⁻²s⁻¹. During this upgrade a new FPGA-based electronics system (Topological Processor) will be included in the ATLAS trigger chain to keep up with the increased rate of events. To reduce rates while maintaining high signal efficiency of the trigger, the processor will make its decisions based upon topological criteria such as angular cuts and mass calculations. As a hardware-based trigger, it will have to fit into the tight first-level trigger latency budget of 2.5 μs and thus poses the challenge of making decisions within a very short time. Besides the latency, the main constraint on the algorithms is the amount of FPGA logic resources required by their firmware implementation. Therefore, to be able to use as much information as possible, each module will be equipped with two state-of-the-art Xilinx Virtex 7 FPGAs to process the incoming data. This talk presents some of the topological algorithms and discusses properties of their implementation in firmware.

  2. Interrelations between the surface waters of Danube, karst waters and thermal springs of Bad Deutsch Altenburg

    Energy Technology Data Exchange (ETDEWEB)

    Hacker, P [Bundesversuchs- und Forschungsanstalt Arsenal, Vienna (Austria)

    1987-11-15

    As part of the preliminary works for the hydropower project Hainburg on the Danube, comprehensive geological, geophysical, hydrogeological, hydrological, hydrochemical and radiohydrometrical investigations were carried out. Special attention was paid to the area of Bad Deutsch Altenburg since questions of connections between Danube water, groundwater and the sulphur-medicinal springs of Bad Deutsch Altenburg and karst waters had to be settled. Long term observations and the data from series of analysed water samples led to the following conclusions: (1) The thermal deep groundwater, the autochthonous karst water, the shallow groundwater and the Danube belong to a common system with hydraulic interactions. (2) The discharge of the thermal mineral waters in Bad Deutsch Altenburg is caused by a NW-SE striking fault zone. (3) The thermal mineral waters are overburdened by the karst waters in the area Kirchenberg and Pfaffenberg. At the contact zone mixing occurs. Owing to changing pressure conditions and to the locally different conductivity of the karst aquifer the discharges of mineral waters differ in concentration and temperature. (4) The water level of the thermal mineral waterbody is 1 to 2 m above the water level of the Danube at low flow. This difference is equalized at the Danube water level above 141.5 m a.s.l. Above the mark 142 m a.s.l. a direct influence of the observation wells situated in the Park was observed. (5) Because the Danube has eroded the karst massif (Mesozoic limestones and dolomites, Leitha limestone) down to a depth of about 132-133 m a.s.l. the level of karst water drainage was deeper than today. Currently the area is covered by highly permeable gravels. (6) It is therefore assumed that a considerable amount of thermal water drains directly into the Danube. Recharge and mixing with the shallow groundwater was proved. (7) The considerable discharge implies a catchment which extends beyond the immediate environment. (author)

  3. Fast quantum search algorithm for databases of arbitrary size and its implementation in a cavity QED system

    International Nuclear Information System (INIS)

    Li, H.Y.; Wu, C.W.; Liu, W.T.; Chen, P.X.; Li, C.Z.

    2011-01-01

    We propose a method for implementing the Grover search algorithm directly in a database containing any number of items, based on multi-level systems. Compared with the search procedure in a database with qubit encoding, our modified algorithm needs fewer iteration steps to find the marked item and uses the carriers of the information more economically. Furthermore, we illustrate how to realize our idea in cavity QED using the Zeeman level structure of atoms. Numerical simulation under the influence of cavity and atom decays shows that the scheme could be achieved efficiently within current state-of-the-art technology. -- Highlights: ► A modified Grover algorithm is proposed for searching in an arbitrary-dimensional Hilbert space. ► Our modified algorithm requires fewer iteration steps to find the marked item. ► The proposed method uses the carriers of the information more economically. ► A scheme for a six-item Grover search in cavity QED is proposed. ► Numerical simulation under decays shows that the scheme can be achieved with enough fidelity.
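
    As background, the iteration count that such a search reduces follows the standard Grover scaling; the relation below is textbook material, not a result of this paper:

        % Standard Grover scaling (background, not from the paper): for an
        % unsorted database of N items, the marked item is found with high
        % probability after approximately
        k_{\mathrm{opt}} \approx \frac{\pi}{4 \arcsin\left(1/\sqrt{N}\right)}
            \;\xrightarrow{N \gg 1}\; \frac{\pi}{4}\sqrt{N}
        % iterations. Encoding N directly in multi-level systems avoids
        % padding N up to a power of 2, which is one source of the saved
        % iteration steps mentioned in the abstract above.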

  4. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    Energy Technology Data Exchange (ETDEWEB)

    Santi, Peter Angelo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cutler, Theresa Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favalli, Andrea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Koehler, Katrina Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzlova, Daniela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parker, Robert Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Croft, Stephen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.

  5. Greentowers in Frankfurt. Second life cycle of the Deutsche Bank towers; Greentowers in Frankfurt. Zweiter Lebenszyklus fuer Hochhaustuerme der Deutschen Bank

    Energy Technology Data Exchange (ETDEWEB)

    Lauster, Marcus

    2011-07-01

    The two towers of the headquarters of Deutsche Bank in Frankfurt were modernized on the basis of a new climate strategy in order to minimize the operating cost and help protect the climate. (orig./AKB)

  6. The metaphysics of D-CTCs: On the underlying assumptions of Deutsch's quantum solution to the paradoxes of time travel

    Science.gov (United States)

    Dunlap, Lucas

    2016-11-01

    I argue that Deutsch's model for the behavior of systems traveling around closed timelike curves (CTCs) relies implicitly on a substantive metaphysical assumption. Deutsch is employing a version of quantum theory with a significantly supplemented ontology of parallel existent worlds, which differ in kind from the many worlds of the Everett interpretation. Standard Everett does not support the existence of multiple identical copies of the world, which the D-CTC model requires. This has been obscured because he often refers to the branching structure of Everett as a "multiverse", and describes quantum interference by reference to parallel interacting definite worlds. But he admits that this is only an approximation to Everett. The D-CTC model, however, relies crucially on the existence of a multiverse of parallel interacting worlds. Since his model is supplemented by structures that go significantly beyond quantum theory, and play an ineliminable role in its predictions and explanations, it does not represent a quantum solution to the paradoxes of time travel.

  7. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    Science.gov (United States)

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization.

  8. Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.

    Science.gov (United States)

    Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania

    2015-01-01

    This paper provides a novel hybrid model for solving the multiple pair-wise sequence alignment problem, combining the Message Passing Interface (MPI) and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN performs the multiple pair-wise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation of the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in the running time as the number of working GPU nodes increases. The proposed model achieved a performance of about 12 giga cell updates per second when tested against the SWISS-PROT protein knowledge base running on four nodes.
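
    As an illustration of the row-wise evaluation order the modified implementation is based on, this minimal Python sketch scores a Smith-Waterman local alignment while keeping only one previous row in memory; the scoring parameters are illustrative, not the paper's.

        import numpy as np

        # Smith-Waterman local-alignment scoring, computed row by row so that
        # only the previous row of the alignment matrix must be retained.
        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            prev = np.zeros(len(b) + 1, dtype=int)
            best = 0
            for i in range(1, len(a) + 1):
                cur = np.zeros_like(prev)
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    cur[j] = max(0,                 # local-alignment floor
                                 prev[j - 1] + s,   # diagonal: (mis)match
                                 prev[j] + gap,     # up: gap in b
                                 cur[j - 1] + gap)  # left: gap in a
                best = max(best, int(cur.max()))
                prev = cur
            return best

        print(smith_waterman("GATTACA", "GCATGCU"))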

  9. Particle filters for object tracking: enhanced algorithm and efficient implementations

    International Nuclear Information System (INIS)

    Abd El-Halym, H.A.

    2010-01-01

    Object tracking and recognition is a hot research topic. In spite of the extensive research efforts expended, the development of a robust and efficient object tracking algorithm remains unsolved due to the inherent difficulty of the tracking problem. Particle filters (PFs) were recently introduced as a powerful, post-Kalman-filter estimation tool that provides a general framework for estimation of nonlinear/non-Gaussian dynamic systems. Particle filters were advanced for building robust object trackers capable of operation under severe conditions (small image size, noisy background, occlusions, fast object maneuvers, etc.). The heavy computational load of the particle filter remains a major obstacle to its wide use. In this thesis, an Excitation Particle Filter (EPF) is introduced for object tracking. A new likelihood model is proposed that depends on multiple functions: a position likelihood, a gray-level intensity likelihood, and a similarity likelihood. We also modify the PF as a robust estimator to overcome the well-known sample impoverishment problem of the PF. This modification is based on re-exciting the particles if their weights fall below a memorized weight value. The proposed enhanced PF is implemented in software and evaluated. Its results are compared with a single-likelihood-function PF tracker, a Particle Swarm Optimization (PSO) tracker, a correlation tracker, as well as an edge tracker. The experimental results demonstrate the superior performance of the proposed tracker in terms of accuracy, robustness, and occlusion handling compared with the other methods. Efficient novel hardware architectures of the Sample Importance Resample Filter (SIRF) and the EPF are implemented. Three novel hardware architectures of the SIRF for object tracking are introduced. The first architecture is a two-step sequential PF machine, where particle generation, weight calculation and normalization are carried out in parallel during the first step, followed by a sequential re
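
    A minimal sketch of a generic SIR particle-filter step with a crude re-excitation guard in the spirit of the modification described above (re-spreading particles when weights collapse); the dynamics, likelihood, and threshold are illustrative placeholders, not the thesis's actual models.

        import numpy as np

        rng = np.random.default_rng(1)

        # One predict-weight-resample step for a 1-D state, with a guard that
        # re-excites the particle cloud if the total weight degenerates.
        def sir_step(particles, weights, measurement, noise=0.5, w_floor=1e-3):
            particles = particles + rng.normal(0, noise, particles.shape)
            lik = np.exp(-0.5 * ((particles - measurement) / noise) ** 2)
            weights = weights * lik
            if weights.sum() < w_floor:        # re-excite degenerate cloud
                particles = measurement + rng.normal(0, 5 * noise,
                                                     particles.shape)
                weights = np.ones_like(weights)
            weights /= weights.sum()
            idx = rng.choice(len(particles), len(particles), p=weights)
            n = len(particles)
            return particles[idx], np.full(n, 1.0 / n)

        p = rng.normal(0, 1, 200)
        w = np.full(200, 1 / 200)
        for z in [0.2, 0.4, 0.8, 1.1]:         # noisy 1-D object positions
            p, w = sir_step(p, w, z)
        print("estimate:", float(p.mean()))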

  10. SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm

    International Nuclear Information System (INIS)

    Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M

    2014-01-01

    Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB dose calculation algorithm and subsequently evaluate its clinical impact by comparing it with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central-axis and off-axis points at different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15 to 60 degrees were used. In addition, variable field sizes on a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites. Dose distributions and calculation times were compared. Results: On average, computation time is reduced by at least 50% by Acuros XB compared with AAA on single fields and VMAT plans. When used for open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When the heterogeneous phantom was used, Acuros XB also improved accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans.

  11. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Circuits and systems able to process high-quality video in real time are fundamental in today's imaging systems. The circuit proposed in this paper, aimed at robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware-oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed with commercial FPGA devices as the target and provides speed and logic-resource occupation that surpass previously proposed implementations. The circuit, when implemented on a Virtex 6 or Stratix IV, processes more than 45 frames per second in 1080p format and uses only a few percent of the FPGA logic resources.
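
    Because the circuit implements the improved GMM formulation from the OpenCV library, the software reference behaviour can be sketched with OpenCV's MOG2 background subtractor; the frame size, parameters, and synthetic frames below are illustrative, not the paper's test setup.

        import cv2
        import numpy as np

        # GMM background subtraction as exposed by OpenCV (MOG2): each pixel
        # is modelled by a mixture of Gaussians updated frame by frame.
        subtractor = cv2.createBackgroundSubtractorMOG2(history=100,
                                                        detectShadows=False)

        rng = np.random.default_rng(0)
        mask = None
        for i in range(100):
            # static noisy scene, with a bright patch appearing later
            frame = rng.integers(90, 110, (240, 320), dtype=np.uint8)
            if i > 50:
                frame[60:120, 60:120] = 250   # "moving object" patch
            mask = subtractor.apply(frame)    # per-pixel foreground mask

        print("foreground pixels:", int((mask > 0).sum()))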

  12. A new algorithm for hip fracture surgery

    DEFF Research Database (Denmark)

    Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim

    2012-01-01

    Background and purpose: Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on our own previously published results. Methods: 2,000 consecutive patients over 50 years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient records. Hospitalization caused by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation: It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised

  13. Implementation and Comparison of the Lifting 5/3 and 9/7 Algorithms in MatLab on GPU

    Directory of Open Access Journals (Sweden)

    Randa Khemiri

    2016-06-01

    In order to accelerate the Discrete Wavelet Transform (DWT), we have implemented and compared the lifting "Le Gall 5/3" and "Cohen-Daubechies-Feauveau 9/7" (CDF 9/7) algorithms on a low-cost NVIDIA GPU. The suggested implementation is realized in MatLab using the in-house parallel computation toolbox (PCT). Our experimental results indicate that the speedup is proportional to the image size until it reaches a maximum at 2048² pixels; beyond this value the curve decreases. The performance with the GPU improves by a factor of about 2-3 compared with the CPU.
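
    For reference, here is one 1-D lifting step of the Le Gall 5/3 transform benchmarked above, in a floating-point Python sketch with periodic boundary handling; a production integer-to-integer version would use the rounded lifting steps of JPEG 2000, and the even signal length is an assumption.

        import numpy as np

        # Le Gall 5/3 lifting: predict the odd samples from their even
        # neighbours, then update the even samples from the details.
        def lifting_53(x):
            even = x[0::2].astype(float)
            odd = x[1::2].astype(float)
            # predict: detail = odd - average of surrounding even samples
            # (np.roll gives periodic extension at the boundary, for brevity)
            odd -= 0.5 * (even + np.roll(even, -1))
            # update: approximation = even + quarter of surrounding details
            even += 0.25 * (np.roll(odd, 1) + odd)
            return even, odd       # low-pass and high-pass subbands

        approx, detail = lifting_53(np.arange(16))
        print(approx)              # smooth trend
        print(detail)              # near zero on a linear ramp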

  14. Automatically tuned adaptive differencing algorithm for 3-D SN implemented in PENTRAN

    International Nuclear Information System (INIS)

    Sjoden, G.; Courau, T.; Manalo, K.; Yi, C.

    2009-01-01

    We present an adaptive algorithm with an automated tuning feature to augment optimum differencing scheme selection for 3-D SN computations in Cartesian geometry. This adaptive differencing scheme has been implemented in the PENTRAN parallel SN code. Individual fixed zeroth-spatial-transport-moment-based schemes, including Diamond Zero (DZ), Directional Theta Weighted (DTW), and Exponential Directional Iterative (EDI) 3-D SN methods, were evaluated and compared with solutions generated using a code-tuned adaptive algorithm. Model problems considered include a fixed-source slab problem (using reflected y- and z-axes) which contained mixed shielding and diffusive regions, and a 17 x 17 PWR assembly eigenvalue test problem; these problems were benchmarked against multigroup MCNP5 Monte Carlo computations. Both problems were effective in highlighting the performance of the adaptive scheme compared to single schemes, and demonstrated that the adaptive tuning handles exceptions to the standard DZ-DTW-EDI adaptive strategy. The tuning feature includes special scheme-selection provisions for optically thin cells, and incorporates the ratio of the angular source density relative to the total angular collision density to best select the differencing method. Overall, the adaptive scheme demonstrated the best overall solution accuracy in the test problems. (authors)

  15. First massively parallel algorithm to be implemented in Apollo-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability (CP) method in neutron transport, as applied to arbitrary 2D XY geometries, like the TDT module in APOLLO-II, is very time consuming. Consequently RZ or 3D extensions became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, bring a new breath to this method. In this paper we present a CM5 implementation of the CP method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we use 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future fine multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (to the order of hundreds of processors). (author). 3 tabs., 4 figs., 4 refs

  16. First massively parallel algorithm to be implemented in APOLLO-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability method in neutron transport, as applied to arbitrary 2-dimensional geometries, like the two-dimensional transport module in APOLLO-II, is very time consuming. Consequently, a 3-dimensional extension became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, bring a new breath to this method. In this paper we present a CM5 implementation of the collision probability method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we used 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (to the order of hundreds of processors). (author). 4 refs., 4 figs., 3 tabs

  17. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduction...

  18. A postal history of the First World War in Africa and its aftermath - German colonies : III Deutsch-Südwestafrika (SWA)

    NARCIS (Netherlands)

    Dietz, A.J.

    2015-01-01

    The 'Great War' had a major impact on Africa, and that is visible in the postage stamps used in the various postal territories in Africa. This paper discusses the postal offices, postal services, and stamps used in the German colony Deutsch-Südwestafrika (SWA) during the early twentieth century. For the

  19. An Inconvenient History: the Nuclear-Fission Display in the Deutsches Museum

    Science.gov (United States)

    Sime, Ruth Lewin

    2010-06-01

    One of the longstanding attractions of the Deutsches Museum in Munich, Germany, has been its display of the apparatus associated with the discovery of nuclear fission. Although the discovery involved three scientists, Otto Hahn, Lise Meitner, and Fritz Strassmann, the fission display was designated for over 30 years as the Arbeitstisch von Otto Hahn (Otto Hahn’s Worktable), with Strassmann mentioned peripherally and Meitner not at all, and it was not until the early 1990s that the display was revised to include all three codiscoverers more equitably. I examine the creation of the fission display in the context of the postwar German culture of silencing the National Socialist past, and trace the eventual transformation of the display into a contemporary exhibit that more accurately represents the scientific history of the fission discovery.

  20. Architecture for the Secret-Key BC3 Cryptography Algorithm

    Directory of Open Access Journals (Sweden)

    Arif Sasongko

    2014-11-01

    Cryptography is a very important aspect of data security. The focus of research in this field is shifting from the security aspect alone to also consider the implementation aspect. This paper aims to introduce the BC3 algorithm with a focus on its hardware implementation, and proposes an architecture for the hardware implementation of this algorithm. BC3 is a secret-key cryptography algorithm developed with two considerations: robustness and implementation efficiency. The algorithm has been implemented in software and has good performance compared to the AES algorithm. BC3 is an improvement of the BC2 and AE cryptographic algorithms and is expected to have the same level of robustness and to gain competitive advantages in the implementation aspect. The development of the architecture pays particular attention to (1) resource sharing and (2) having a single clock for each round, and it exploits the regularity of the algorithm. This architecture is then implemented on an FPGA. The implementation has three times smaller area than AES, but is about five times faster. Furthermore, this BC3 hardware implementation has better performance than the BC3 software in both the key expansion stage and the randomizing stage. In the future, the security of this implementation must be reviewed, especially against side-channel attacks.

  1. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M [Universidad de Guanajuato, Leon, Guanajuato (Mexico)

    2016-06-15

    Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. A gamma-index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was done. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µAC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that elaborated for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method could be improved by modifying the algorithm in order to compare lower isodose curves.

  2. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    International Nuclear Information System (INIS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-01-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  3. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    Science.gov (United States)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  4. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    Energy Technology Data Exchange (ETDEWEB)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  5. Integrated audit in labour, health and environmental protection in RAG Deutsche Steinkohle AG; Das integrierte Audit im Arbeits-, Gesundheits- und Umweltschutz bei der RAG Deutsche Steinkohle AG

    Energy Technology Data Exchange (ETDEWEB)

    Jaensch, Christian [Zentralbereich Arbeits-, Gesundheits- und Umweltschutz, RAG Aktiengesellschaft, Herne (Germany)

    2009-07-02

    On the basis of the experience acquired with the environmental audit at RAG the internal audit was extended by the safety at work and health protection fields. This approach is the logical adaptation to the development of the integrated management system in labour, health and environmental protection (LHE), which is specified in the internal RAG LHE concept. The audit serves essentially for regular and systematic checking of the management process in labour, health and environmental protection. The aims pursued with this integrated audit and also the course of an audit are explained. In addition the special requirements both on an audit in a mining company and also on own auditors are outlined. This internal check has been carried out in all RAG Deutsche Steinkohle companies since 2008. (orig.)

  6. Implementation and preliminary evaluation of 'C-tone': A novel algorithm to improve lexical tone recognition in Mandarin-speaking cochlear implant users.

    Science.gov (United States)

    Ping, Lichuan; Wang, Ningyuan; Tang, Guofang; Lu, Thomas; Yin, Li; Tu, Wenhe; Fu, Qian-Jie

    2017-09-01

    Because of limited spectral resolution, Mandarin-speaking cochlear implant (CI) users have difficulty perceiving fundamental frequency (F0) cues that are important to lexical tone recognition. To improve Mandarin tone recognition in CI users, we implemented and evaluated a novel real-time algorithm (C-tone) to enhance the amplitude contour, which is strongly correlated with the F0 contour. The C-tone algorithm was implemented in clinical processors and evaluated in eight users of the Nurotron NSP-60 CI system. Subjects were given 2 weeks of experience with C-tone. Recognition of Chinese tones, monosyllables, and disyllables in quiet was measured with and without the C-tone algorithm. Subjective quality ratings were also obtained for C-tone. After 2 weeks of experience with C-tone, there were small but significant improvements in recognition of lexical tones, monosyllables, and disyllables (P < 0.05); the improvements with C-tone were greater for disyllables than for monosyllables. Subjective quality ratings showed no strong preference for or against C-tone, except for perception of one's own voice, where C-tone was preferred. The real-time C-tone algorithm provided small but significant improvements in speech performance in quiet with no change in sound quality. Pre-processing algorithms to reduce noise and better real-time F0 extraction would improve the benefits of C-tone in complex listening environments. Chinese CI users' speech recognition in quiet can be significantly improved by modifying the amplitude contour to better resemble the F0 contour.

  7. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    Directory of Open Access Journals (Sweden)

    Sandeep Kakde

    2017-12-01

    For the binary field and long code lengths, Low-Density Parity-Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error-correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK modulation is better than the other techniques in terms of BER. The paper also gives the error performance of an LDPC decoder over an AWGN channel using the min-sum algorithm. A VLSI architecture is proposed which uses the value-reuse property of the min-sum algorithm and gives high throughput. The proposed work has been implemented and tested on a Xilinx Virtex 5 FPGA. The MATLAB results for the LDPC decoder give bit error rates (BER) in the range of 10^-1 to 10^-3.5 at SNR = 1 to 2 for 20 iterations, so it gives good BER performance. The latency of the parallel design of the LDPC decoder has also been reduced. The design achieves a maximum frequency of 141.22 MHz and a throughput of 2.02 Gbps while consuming less design area.
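
    A minimal sketch of the min-sum check-node update at the core of the decoder described above: for each edge, the outgoing message is the sign product and minimum magnitude of all the *other* incoming messages. The LLR values below are illustrative.

        import numpy as np

        # Min-sum check-node update for one check node with incoming LLRs.
        def check_node_update(llrs):
            llrs = np.asarray(llrs, dtype=float)
            sign_prod = np.prod(np.sign(llrs))
            mags = np.abs(llrs)
            m1_idx = mags.argmin()               # smallest magnitude
            m1 = mags[m1_idx]
            m2 = np.delete(mags, m1_idx).min()   # second smallest
            out = np.empty_like(llrs)
            for k in range(len(llrs)):
                # exclude edge k's own input: use m2 only on the argmin edge
                mag = m2 if k == m1_idx else m1
                out[k] = sign_prod * np.sign(llrs[k]) * mag
            return out

        print(check_node_update([1.5, -0.4, 2.2, -3.0]))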

  8. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up the system by approximately 3.4 times when processing large-scale datasets, demonstrating its clear superiority. The proposed algorithm thus demonstrates both better edge detection performance and improved time performance.
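
    The serial core of the idea, using Otsu's automatically selected threshold to set Canny's dual thresholds, might look like this with OpenCV; the 0.5 low/high ratio is a common heuristic assumed here rather than taken from the paper, and the MapReduce distribution across Hadoop nodes is omitted. The file name is a placeholder.

      import cv2

      def otsu_canny(gray):
          # Otsu's method picks the threshold minimizing intra-class
          # variance; use it as Canny's high threshold.
          high, _ = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          low = 0.5 * high      # assumed ratio for the low threshold
          return cv2.Canny(gray, low, high)

      edges = otsu_canny(cv2.imread("input.png", cv2.IMREAD_GRAYSCALE))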

  9. Implementation and analysis of list mode algorithm using tubes of response on a dedicated brain and breast PET

    Science.gov (United States)

    Moliner, L.; Correcher, C.; González, A. J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.

    2013-02-01

    In this work we present an innovative algorithm for the reconstruction of PET images based on the List-Mode (LM) technique which improves their spatial resolution compared to results obtained with current MLEM algorithms. This study is part of a larger project aimed at improving diagnosis in early Alzheimer's disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer's is the most relevant neurodegenerative disease, and early diagnosis is the key to effective treatment. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs and enable the implementation of very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large aperture (186 mm) breast PET system. Instead of using the common lines of response, the algorithm incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease the image noise, thus increasing the image quality.
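
    A bare-bones list-mode MLEM iteration is sketched below; the event weights p are placeholders for the system model, and the paper's contribution, computing those weights over tubes rather than lines of response, is not reproduced.

      import numpy as np

      def list_mode_mlem(p, sens, n_iter=10):
          """p: (n_events, n_voxels) event weights; sens: (n_voxels,)
          sensitivity image. Returns the reconstructed image."""
          n_events, n_voxels = p.shape
          img = np.ones(n_voxels)
          for _ in range(n_iter):
              fwd = p @ img                # expected counts per event
              back = p.T @ (1.0 / fwd)     # backproject event ratios
              img *= back / sens           # multiplicative EM update
          return img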

  11. Implementing O(N) N-Body Algorithms Efficiently in Data-Parallel Languages

    Directory of Open Access Journals (Sweden)

    Yu Hu

    1996-01-01

    The optimization techniques for hierarchical O(N) N-body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes and within the memory hierarchy of each node. We show how the techniques can be expressed in data-parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N) N-body method for the Connection Machine system CM-5/5E. Communication accounts for about 10–20% of the total execution time, with the average efficiency for arithmetic operations being about 40% and the total efficiency (including communication) being about 35%. For the CM-5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured.

  12. Optimizing graph algorithms on pregel-like systems

    KAUST Repository

    Salihoglu, Semih

    2014-03-01

    We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high communication or computation cost, typically due to structural properties of the input graphs such as large diameters or skew in component sizes. We describe several optimization techniques to address these inefficiencies. Our most general technique is based on the idea of performing some serial computation on a tiny fraction of the input graph, complementing Pregel's vertex-centric parallelism. We base our study on thorough implementations of several fundamental graph algorithms, some of which have, to the best of our knowledge, not been implemented on Pregel-like systems before. The algorithms and optimizations we describe are fully implemented in our open-source Pregel implementation. We present detailed experiments showing that our optimization techniques improve runtime significantly on a variety of very large graph datasets.
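
    A toy, single-machine rendering of the Pregel model the paper targets, vertex programs exchanging messages in supersteps, here computing single-source shortest paths; real Pregel-like systems partition vertices across workers, and the serial-computation optimization described above is not shown.

      def pregel_sssp(graph, source):
          """graph: {vertex: [(neighbor, weight), ...]}; returns distances."""
          INF = float("inf")
          dist = {v: INF for v in graph}
          inbox = {source: [0]}
          while inbox:                       # one superstep per loop
              outbox = {}
              for v, msgs in inbox.items():
                  best = min(msgs)
                  if best < dist[v]:         # vertex updates and sends
                      dist[v] = best
                      for u, w in graph[v]:
                          outbox.setdefault(u, []).append(best + w)
              inbox = outbox                 # next superstep's messages
          return dist

      print(pregel_sssp({"a": [("b", 1)], "b": [("c", 2)], "c": []}, "a"))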

  13. Nuclear energy in Germany. Annual report 1999 - Deutsches Atomforum e.V. Working report 1999. Special issue for members of Deutsches Atomforum e.V.

    International Nuclear Information System (INIS)

    Gey, A.

    2000-01-01

    Total nuclear power generation in Germany in 1999 amounted to 169.7 billion kWh, thus almost equalling the all-time high of the operating year 1997, which was 170.4 billion kWh. Power generation in nuclear power plants has been contributing well over a third of the total domestic power supply since 1988, which is about ten per cent of the national power consumption. This is one aspect of the information contained in the annual report of Deutsches Atomforum e.V. Expressed in terms of carbon dioxide emissions avoided, the 1999 output corresponds to 170 million tonnes, which equals the annual CO2 emissions in 1999 from road transport and traffic in Germany. From the very beginning of nuclear power generation in 1961 until today, aggregate nuclear power generation from uranium and plutonium fuels amounts to about 2.8 trillion kWh, which means that over this period more than two billion tonnes of carbon dioxide emissions have been avoided. (orig./CB) [de]

  14. Parallel asynchronous systems and image processing algorithms

    Science.gov (United States)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  15. Autonomous intelligent vehicles theory, algorithms, and implementation

    CERN Document Server

    Cheng, Hong

    2011-01-01

    Here is the latest on intelligent vehicles, covering object and obstacle detection and recognition and vehicle motion control. Includes a navigation approach using global views; introduces algorithms for lateral and longitudinal motion control and more.

  16. ifo Konjunkturprognose 2016–2018: Robuste deutsche Konjunktur vor einem Jahr ungewisser internationaler Wirtschaftspolitik

    OpenAIRE

    Wollmershäuser, Timo; Nierhaus, Wolfgang; Hristov, Nikolay; Boumans, Dorine; Garnitz, Johanna; Göttert, Marcell; Grimme, Christian; Lauterbacher, Stefan; Lehmann, Robert; Meister, Wolfgang; Reif, Magnus; Schröter, Felix; Steiner, Andreas; Stöckli, Marc; Wohlrabe, Klaus

    2016-01-01

    On 16 December 2016, the ifo Institute presented its forecast for the years 2016, 2017 and 2018. The robust upswing that the German economy has experienced since 2013 will continue. Real GDP growth of 1.9% is expected this year. In 2017 the increase is likely to fall to 1.5%, which, however, is only due to a smaller number of working days than in the previous year. In 2018 real GDP is expected to expand by 1.7%...

  17. Implementation of a Multichannel Serial Data Streaming Algorithm using the Xilinx Serial RapidIO Solution

    Science.gov (United States)

    Doxley, Charles A.

    2016-01-01

    In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.
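
    The basic framing idea, tagging each payload with a channel identifier so several logical channels share one serial link, can be illustrated as follows; the header layout is invented for illustration and is not the Serial RapidIO packet format.

      import struct

      def mux(channel_id, payload):
          """Frame: 1-byte channel ID, 2-byte length, then the payload.
          (Invented layout; real Serial RapidIO packets differ.)"""
          return struct.pack(">BH", channel_id, len(payload)) + payload

      def demux(stream):
          frames, i = [], 0
          while i < len(stream):
              ch, n = struct.unpack_from(">BH", stream, i)
              frames.append((ch, stream[i + 3:i + 3 + n]))
              i += 3 + n
          return frames

      link = mux(0, b"alpha") + mux(1, b"beta")
      print(demux(link))   # [(0, b'alpha'), (1, b'beta')]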

  18. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Modern real-world science and engineering problems can be classified as multi-objective optimisation problems, which demand expedient and efficient stochastic algorithms to respond to optimization needs. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm showed promising results when compared to other population-based and iterative meta-heuristic algorithms in experiments on five standard benchmark problems. The software application was implemented in Java with an interactive interface that allows easy modification and extended experimentation. Additionally, this paper examines the effect of runtime on the algorithm's performance.
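
    The core loop of a fireworks-style optimizer is sketched below: better fireworks emit more sparks over a smaller amplitude, and the best points survive to the next round. The update formulas are simplified stand-ins for those in the literature, and all parameters are illustrative.

      import random

      def fireworks_minimize(f, dim, bounds, n_fireworks=5, n_sparks=20,
                             iters=100):
          lo, hi = bounds
          pop = [[random.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(n_fireworks)]
          for _ in range(iters):
              sparks = list(pop)
              fits = [f(x) for x in pop]
              worst, best = max(fits), min(fits)
              for x, fit in zip(pop, fits):
                  # better fireworks get more sparks, smaller amplitude
                  n = 1 + int(n_sparks * (worst - fit + 1e-12)
                              / (sum(worst - g for g in fits) + 1e-12))
                  amp = (hi - lo) * (fit - best + 1e-12) \
                        / (sum(g - best for g in fits) + 1e-12)
                  for _ in range(n):
                      sparks.append([min(hi, max(lo, xi + random.uniform(-amp, amp)))
                                     for xi in x])
              sparks.sort(key=f)
              pop = sparks[:n_fireworks]   # keep the best for next round
          return pop[0]

      print(fireworks_minimize(lambda v: sum(t * t for t in v), 3, (-5, 5)))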

  19. Designing and implementing of improved cryptographic algorithm using modular arithmetic theory

    Directory of Open Access Journals (Sweden)

    Maryam Kamarzarrin

    2015-05-01

    Maintaining the privacy and security of people's information is one of the most important principles of an electronic health plan. One method of creating privacy and security of information is a public-key cryptography system. In this paper, we compare two algorithms, the common and fast exponentiation algorithms, for enhancing the efficiency of public-key cryptography. We show that a system designed with the fast exponentiation algorithm has higher speed and performance, with lower power consumption and less occupied space, than one designed with the common exponentiation algorithm. Although systems designed with the common exponentiation algorithm are slower and perform worse, designing with this algorithm is less complex and easier than with the fast exponentiation algorithm. In this paper, we examine and compare these two methods of exponentiation, and observe the performance impact of the two approaches in hardware, implemented in VHDL on an FPGA.
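
    The contrast studied is between repeated multiplication and square-and-multiply (fast) exponentiation: O(e) versus O(log e) modular multiplications. A minimal software rendering, with the hardware mapping left aside:

      def common_pow(base, exp, mod):
          """Repeated multiplication: O(exp) modular multiplications."""
          result = 1
          for _ in range(exp):
              result = (result * base) % mod
          return result

      def fast_pow(base, exp, mod):
          """Square-and-multiply: O(log exp) modular multiplications."""
          result, b = 1, base % mod
          while exp:
              if exp & 1:                 # multiply when the bit is set
                  result = (result * b) % mod
              b = (b * b) % mod           # square for the next bit
              exp >>= 1
          return result

      assert common_pow(7, 560, 561) == fast_pow(7, 560, 561) == pow(7, 560, 561)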

  20. A hybrid, massively parallel implementation of a genetic algorithm for optimization of the impact performance of a metal/polymer composite plate

    KAUST Repository

    Narayanan, Kiran; Mora Cordova, Angel; Allsopp, Nicholas; El Sayed, Tamer S.

    2012-01-01

    A hybrid parallelization method composed of a coarse-grained genetic algorithm (GA) and fine-grained objective function evaluations is implemented on a heterogeneous computational resource consisting of 16 IBM Blue Gene/P racks, a single x86 cluster

  1. Experimental implementation of a robust damped-oscillation control algorithm on a full-sized, two-degree-of-freedom, AC induction motor-driven crane

    International Nuclear Information System (INIS)

    Kress, R.L.; Jansen, J.F.; Noakes, M.W.

    1994-01-01

    When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem whenever a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purpose of this paper is to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom industrial crane; to describe the experimental evaluation of the controller, including robustness to payload length changes; to explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and to provide a theoretical description of the controller.
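
    The abstract does not specify the control law, so the following standard two-impulse zero-vibration (ZV) input shaper only illustrates the general class of damped-oscillation techniques for cranes; it is a textbook method, not the ORNL controller.

      import math

      def zv_shaper(cable_length, damping=0.0, g=9.81):
          """Two-impulse Zero-Vibration input shaper for a pendulum payload.
          Convolving velocity commands with these (time, amplitude) impulses
          cancels the swing mode of the given cable length."""
          wn = math.sqrt(g / cable_length)             # natural frequency
          wd = wn * math.sqrt(1.0 - damping ** 2)      # damped frequency
          K = math.exp(-damping * math.pi / math.sqrt(1.0 - damping ** 2))
          a1 = 1.0 / (1.0 + K)
          return [(0.0, a1), (math.pi / wd, 1.0 - a1)]

      print(zv_shaper(5.0))   # [(0.0, 0.5), (~2.24 s, 0.5)]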

  2. A FPGA implementation of solder paste deposit on printed circuit boards errors detector based in a bright and contrast algorithm

    OpenAIRE

    De Luca-Pennacchia, A.; Sánchez-Martínez, M. Á.

    2007-01-01

    Solder paste deposition on printed circuit boards (PCBs) is a critical stage. It is known that about 60% of functionality defects in this type of board are due to poor solder paste printing. These defects can be diminished by means of automatic optical inspection of the printing. At present, this process is implemented in image processing software, with its inherently high computational time cost. In this paper we propose to implement a highly parallel image comparison algorithm suitable to be ...

  3. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  4. ADORE-GA: Genetic algorithm variant of the ADORE algorithm for ROP detector layout optimization in CANDU reactors

    International Nuclear Information System (INIS)

    Kastanya, Doddy

    2012-01-01

    Highlights: ► ADORE is an algorithm for CANDU ROP detector layout optimization. ► ADORE-GA is a genetic algorithm variant of the ADORE algorithm. ► A robustness test of the ADORE-GA algorithm is presented in this paper. - Abstract: The regional overpower protection (ROP) systems protect CANDU® reactors against overpower in the fuel that could reduce the safety margin-to-dryout. The overpower could originate from a localized power peaking within the core or a general increase in the global core power level. The design of the detector layout for ROP systems is a challenging discrete optimization problem. In recent years, two algorithms have been developed to find a quasi-optimal solution to this detector layout optimization problem. Both of these algorithms utilize the simulated annealing (SA) algorithm as their optimization engine. In the present paper, an alternative optimization algorithm, namely the genetic algorithm (GA), has been implemented as the optimization engine. The implementation is done within the ADORE algorithm. Results from evaluating the effects of using various mutation rates and crossover parameters are presented in this paper. It has been demonstrated that the algorithm is sufficiently robust in producing similar-quality solutions.
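
    A generic GA skeleton showing the mutation-rate and crossover parameters such a study varies; the bit-string encoding, truncation-style selection and toy fitness function are placeholders, not ADORE-GA's representation of detector layouts.

      import random

      def genetic_algorithm(fitness, n_bits, pop_size=50, generations=200,
                            crossover_rate=0.8, mutation_rate=0.01):
          pop = [[random.randint(0, 1) for _ in range(n_bits)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              scored = sorted(pop, key=fitness, reverse=True)
              next_pop = scored[:2]                      # elitism
              while len(next_pop) < pop_size:
                  a, b = random.sample(scored[:pop_size // 2], 2)
                  if random.random() < crossover_rate:   # one-point crossover
                      cut = random.randrange(1, n_bits)
                      a = a[:cut] + b[cut:]
                  child = [bit ^ (random.random() < mutation_rate)
                           for bit in a]                 # bit-flip mutation
                  next_pop.append(child)
              pop = next_pop
          return max(pop, key=fitness)

      # Example: maximize the number of ones in a 32-bit string.
      print(genetic_algorithm(sum, 32))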

  5. Implementing peak load reduction algorithms for household electrical appliances

    International Nuclear Information System (INIS)

    Dlamini, Ndumiso G.; Cromieres, Fabien

    2012-01-01

    Considering household appliance automation for reduction of household peak power demand, this study explored aspects of the interaction between household automation technology and human behaviour. Given a programmable household appliance switching system, and user-reported appliance use times, we simulated the load reduction effectiveness of three types of algorithms, which were applied at both the single household level and across all 30 households. All three algorithms effected significant load reductions, while the least-to-highest potential user inconvenience ranking was: coordinating the timing of frequent intermittent loads (algorithm 2); moving period-of-day time-flexible loads to off-peak times (algorithm 1); and applying short-term time delays to avoid high peaks (algorithm 3) (least accommodating). Peak reduction was facilitated by load interruptibility, time of use flexibility and the willingness of users to forgo impulsive appliance use. We conclude that a general factor determining the ability to shift the load due to a particular appliance is the time-buffering between the service delivered and the power demand of an appliance. Time-buffering can be ‘technologically inherent’, due to human habits, or realised by managing user expectations. There are implications for the design of appliances and home automation systems. - Highlights: ► We explored the interaction between appliance automation and human behaviour. ► There is potential for considerable load shifting of household appliances. ► Load shifting for load reduction is eased with increased time buffering. ► Design, human habits and user expectations all influence time buffering. ► Certain automation and appliance design features can facilitate load shifting.

  6. The Analysis of Alpha Beta Pruning and MTD(f) Algorithm to Determine the Best Algorithm to be Implemented at Connect Four Prototype

    Science.gov (United States)

    Tommy, Lukas; Hardjianto, Mardi; Agani, Nazori

    2017-04-01

    Connect Four is a two-player game in which the players take turns dropping discs into a grid to connect four of their own discs next to each other vertically, horizontally, or diagonally. In Connect Four, the computer requires artificial intelligence (AI) in order to play properly, like a human. There are many AI algorithms that can be implemented for Connect Four, but it is not known which are suitable. A suitable algorithm is one that is optimal in choosing moves and whose execution time is not slow at search depths that are deep enough. In this research, standard alpha-beta (AB) pruning and MTD(f) are analyzed and compared on a Connect Four prototype in terms of optimality (win percentage) and speed (execution time and number of leaf nodes). Experiments were carried out by running computer-versus-computer mode with 12 different conditions, i.e. varied search depth (5 through 10) and who moves first. The percentages achieved by MTD(f) in the experiments are 45.83% wins, 37.5% losses and 16.67% draws. In the experiments with search depth 8, MTD(f)'s execution time is 35.19% faster and it evaluates 56.27% fewer leaf nodes than AB pruning. The results of this research are that MTD(f) is as optimal as AB pruning on the Connect Four prototype, but MTD(f) on average is faster and evaluates fewer leaf nodes than AB pruning. The execution time of MTD(f) is not slow and is much faster than AB pruning at search depths that are deep enough.
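
    MTD(f) locates the minimax value through a sequence of zero-window alpha-beta searches that converge from above and below; a toy version over an explicit game tree, without the transposition table that makes MTD(f) fast in practice, might look like this:

      # Toy game tree: internal nodes map to child lists, leaves to values.
      TREE = {"r": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
      VALUES = {"a1": 3, "a2": 7, "b1": 5, "b2": 2}

      def alphabeta(node, alpha, beta, maximizing):
          if node in VALUES:
              return VALUES[node]
          best = float("-inf") if maximizing else float("inf")
          for child in TREE[node]:
              v = alphabeta(child, alpha, beta, not maximizing)
              if maximizing:
                  best, alpha = max(best, v), max(alpha, v)
              else:
                  best, beta = min(best, v), min(beta, v)
              if alpha >= beta:            # cutoff
                  break
          return best

      def mtdf(root, first_guess=0):
          g, lower, upper = first_guess, float("-inf"), float("inf")
          while lower < upper:             # repeated zero-window searches
              beta = g + 1 if g == lower else g
              g = alphabeta(root, beta - 1, beta, True)
              if g < beta:
                  upper = g                # search failed low
              else:
                  lower = g                # search failed high
          return g

      print(mtdf("r"))   # minimax value of the toy tree (3)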

  7. Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms have been examined in terms of algorithmic complexity and efficiency. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for several reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential techniques of algorithm design are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the sorting process more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm, which is an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results are promising.

  8. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  9. Engineering a Cache-Oblivious Sorting Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  10. Specification of technical means for implementation of supervisory algorithms of the status of a nuclear reactor and of the main coolant pump of a NPP

    International Nuclear Information System (INIS)

    Jirsa, P.

    2000-11-01

    Incorporating into the program the inputs of the supervisory algorithm (data collection from the monitoring system, transmission of diagnostic outputs from other systems, and transmission of technological data), the supervisory process proper based on the data obtained (data analysis), and the outputs (presentation of the results to the operator, communication with the master and archiving systems, etc.) requires knowledge of the format of the data transmitted, their availability, communication network protocols, operating system, etc. Hence, the environment for which the algorithm will be developed should be specified, at least roughly. The following topics are addressed: description of the technical means of Czech nuclear power plants (Dukovany, Temelin, Mochovce), and a proposal for technical means to implement the monitoring algorithm (requirements related to the monitoring systems, identification of the reference system, parameters of the selected system). Since no domestic manufacturer of hardware for monitoring and diagnostic systems exists, COMPASS, a novel system of the Brueel and Kjaer company for on-line diagnosis and monitoring, was selected as a model system for the implementation of the supervisory algorithms. (P.A.)

  11. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD and are implemented on the MasPar MP-2 architecture. The two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
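
    The serial computation these algorithms parallelize can be stated compactly: each reference's LRU stack distance is its depth in the recency stack, and a reference hits in a fully associative LRU cache of size C exactly when that distance is below C. A straightforward sketch:

      def stack_distances(trace):
          """LRU stack distances: a reference hits in a cache of size C
          exactly when its distance is < C (first touches get inf)."""
          stack, dists = [], []
          for addr in trace:
              if addr in stack:
                  d = stack.index(addr)    # 0-based depth in the LRU stack
                  stack.pop(d)
              else:
                  d = float("inf")
              stack.insert(0, addr)        # move to most-recently-used
              dists.append(d)
          return dists

      print(stack_distances(["a", "b", "a", "c", "b", "b"]))
      # [inf, inf, 1, inf, 2, 0]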

  12. Optimized Data Indexing Algorithms for OLAP Systems

    Directory of Open Access Journals (Sweden)

    Lucian BORNAZ

    2010-12-01

    The need to process and analyze large data volumes, as well as to convey the information contained therein to decision makers, naturally led to the development of OLAP systems. Like DBMSs, OLAP systems must ensure optimum access to the storage environment. Although there are several ways to optimize database systems, implementing a correct data indexing solution is the most effective and least costly. Thus, OLAP uses indexing algorithms for relational data and n-dimensional summarized data stored in cubes. Today's database systems implement derived indexing algorithms based on the well-known Tree, Bitmap and Hash indexing algorithms. This is because no indexing algorithm provides the best performance for every situation (type, structure, data volume, application). This paper presents a new n-dimensional cube indexing algorithm, derived from the well-known B-Tree index, which indexes data stored in data warehouses taking into consideration their multi-dimensional nature, and provides better performance than the already implemented Tree-like index types.

  13. Conjugate gradient algorithms using multiple recursions

    Energy Technology Data Exchange (ETDEWEB)

    Barth, T.; Manteuffel, T.

    1996-12-31

    Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
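
    For reference, the classical single-recursion CG for a symmetric positive definite matrix, the form whose existence the Faber-Manteuffel conditions characterize, is sketched below; the multiple-recursion variants for unitary and shifted unitary matrices are beyond this sketch.

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x                  # residual
          p = r.copy()                   # search direction
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p  # short recursion for directions
              rs = rs_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(conjugate_gradient(A, b))    # ~ [0.0909, 0.6364]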

  14. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    Science.gov (United States)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    Optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997 when noise is not taken into account. In particular, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of permutation parity faster than a classical algorithm, without requiring entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by considering an average over the optimal field centered at different values of the reference detuning, which follows a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  15. An Implementation of RC4+ Algorithm and Zig-zag Algorithm in a Super Encryption Scheme for Text Security

    Science.gov (United States)

    Budiman, M. A.; Amalia; Chayanie, N. I.

    2018-03-01

    Cryptography is the art and science of using mathematical methods to preserve message security. There are two types of cryptography, namely classical and modern cryptography. Nowadays, most people would rather use modern cryptography, because it is harder to break than the classical kind. One classical algorithm is the Zig-zag algorithm, which uses the transposition technique: the original message is unreadable unless the recipient has the key to decrypt it. To improve the security, the Zig-zag Cipher is combined with the RC4+ Cipher, a symmetric-key algorithm in the form of a stream cipher. The two algorithms are combined to make a super-encryption; by combining them, the message becomes harder for a cryptanalyst to break. The results show that the complexity of the combined algorithm is Θ(n^2), while the complexities of the Zig-zag Cipher and RC4+ Cipher are Θ(n^2) and Θ(n), respectively.
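
    The transposition half of the scheme, a rail-fence ("zig-zag") cipher, is easy to sketch; the stream stage below is a plain XOR placeholder standing in for RC4+, whose additional scrambling rounds are not reproduced here.

      def zigzag_encrypt(text, rails=3):
          """Rail-fence transposition: write characters in a zig-zag
          over `rails` rows, then read the rows left to right."""
          rows = [[] for _ in range(rails)]
          row, step = 0, 1
          for ch in text:
              rows[row].append(ch)
              if row == 0:
                  step = 1
              elif row == rails - 1:
                  step = -1
              row += step
          return "".join("".join(r) for r in rows)

      def xor_stream(data, key):
          """Placeholder stream stage standing in for RC4+."""
          return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

      cipher = xor_stream(zigzag_encrypt("WEAREDISCOVERED").encode(), b"key")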

  16. Supercomputer implementation of finite element algorithms for high speed compressible flows. Progress report, period ending 30 June 1986

    International Nuclear Information System (INIS)

    Thornton, E.A.; Ramakrishnan, R.

    1986-06-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models are compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure for predicting 2D viscous and 3D inviscid flows are demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes

  17. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  18. Column Reduction of Polynomial Matrices; Some Remarks on the Algorithm of Wolovich

    NARCIS (Netherlands)

    Praagman, C.

    1996-01-01

    Recently an algorithm has been developed for column reduction of polynomial matrices. In a previous report the authors described a Fortran implementation of this algorithm. In this paper we compare the results of that implementation with an implementation of the algorithm originally developed by

  19. Von schnellen Teilchen und hellem Licht 50 Jahre Deutsches Elektronen-Synchrotron DESY

    CERN Document Server

    Lohrmann, Erich

    2009-01-01

    Founded in Hamburg in 1959 as a centre for elementary particle research, the "Deutsches Elektronen-Synchrotron DESY" achieved worldwide recognition through pioneering work in the development of high-energy accelerators and storage rings. Its international user community is responsible for a number of important discoveries about the matter constituents quarks and gluons and the forces that hold the world together at its innermost core. In addition, synchrotron radiation was opened up for broad fields of application and its use systematically expanded, with results that rank equally alongside those of particle research. This book covers the development of the research centre from its beginnings up to 2003, when the decision to build the European X-ray laser in Hamburg brought a shift of emphasis in accelerator development, which has since been directed towards novel photon sources. The most important subsequent events are also briefly...

  20. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    Science.gov (United States)

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  1. An Implementation of Bigraph Matching

    DEFF Research Database (Denmark)

    Glenstrup, Arne John; Damgaard, Troels Christoffer; Birkedal, Lars

    We describe a provably sound and complete matching algorithm for bigraphical reactive systems. The algorithm has been implemented in our BPL Tool, a first implementation of bigraphical reactive systems. We describe the tool and present a concrete example of how it can be used to simulate a model...

  2. High speed numerical integration algorithm using FPGA | Razak ...

    African Journals Online (AJOL)

    Conventionally, numerical integration algorithms are executed in software and are time-consuming to accomplish. Field Programmable Gate Arrays (FPGAs) can be used as a much faster, very efficient and reliable alternative for implementing numerical integration algorithms. This paper proposes a hardware implementation of four ...
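
    As an example of the kind of kernel such designs accelerate, the composite trapezoidal rule has the regular multiply-accumulate structure that maps well onto FPGA pipelines; which four methods the paper implements is truncated above, so this is only a plausible instance.

      def trapezoid(f, a, b, n=1000):
          """Composite trapezoidal rule on [a, b] with n subintervals."""
          h = (b - a) / n
          total = 0.5 * (f(a) + f(b))
          for i in range(1, n):
              total += f(a + i * h)
          return h * total

      print(trapezoid(lambda x: x * x, 0.0, 1.0))   # ~0.3333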

  3. German refrigeration sytems for Hong Kong Airport Chek Lap Kok and for Bangkok. Prause and Partner construct systems for Asia; Deutsche Kaeltetechnik fuer Hong Kongs Airport Chek Lap Kok, aber auch fuer Bangkok. Prause und Partner baut taifunsicher

    Energy Technology Data Exchange (ETDEWEB)

    Weissenborn, P.

    1998-09-01

    Hong Kong's new airport Chek Lap Kok was commissioned on 6 July 1998 and is to become a turntable of air traffic in the Asian region. Prause and Partner, Goslar, will provide catering refrigeration systems for the airport. (orig.) Millions of passengers will then have to be supplied daily with on-board meals, in whose preparation and preservation refrigeration technology plays a significant role. German engineering and solid German craftsmanship help ensure that "catering refrigeration made by Prause and Partner", Goslar, also reliably supports the airline logistics of LSG Hong Kong. (orig./MSK)

  4. Algorithmic strategies for FPGA-based vision

    OpenAIRE

    Lim, Yoong Kang

    2016-01-01

    As demands for real-time computer vision applications increase, implementations on alternative architectures have been explored. These architectures include Field-Programmable Gate Arrays (FPGAs), which offer a high degree of flexibility and parallelism. A problem with this is that many computer vision algorithms have been optimized for serial processing, and this often does not map well to FPGA implementation. This thesis introduces the concept of FPGA-tailored computer vision algorithms...

  5. How to implement a quantum algorithm on a large number of qubits by controlling one central qubit

    Science.gov (United States)

    Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco

    2010-03-01

    It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).

  6. Computation of watersheds based on parallel graph algorithms

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.; Maragos, P; Schafer, RW; Butt, MA

    1996-01-01

    In this paper the implementation of a parallel watershed algorithm is described. The algorithm has been implemented on a Cray J932, which is a shared memory architecture with 32 processors. The watershed transform has generally been considered to be inherently sequential, but recently a few research

  7. Implementation of the LandTrendr Algorithm on Google Earth Engine

    Directory of Open Access Journals (Sweden)

    Robert E Kennedy

    2018-05-01

    The LandTrendr (LT) algorithm has been used widely for analysis of change in Landsat spectral time series data, but requires significant pre-processing, data management, and computational resources, and is only accessible to the community in a proprietary programming language (IDL). Here, we introduce LT for the Google Earth Engine (GEE) platform. The GEE platform simplifies pre-processing steps, allowing focus on the translation of the core temporal segmentation algorithm. Temporal segmentation involved a series of repeated random access calls to each pixel's time series, resulting in a set of breakpoints ("vertices") that bound straight-line segments. The translation of the algorithm into GEE included both transliteration and code analysis, resulting in improvement and logic error fixes. At six study areas representing diverse land cover types across the U.S., we conducted a direct comparison of the new LT-GEE code against the heritage code (LT-IDL). The algorithms agreed in most cases, and where disagreements occurred, they were largely attributable to logic error fixes in the code translation process. The practical impact of these changes is minimal, as shown by an example of forest disturbance mapping. We conclude that the LT-GEE algorithm represents a faithful translation of the LT code into a platform easily accessible by the broader user community.
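
    The heart of temporal segmentation is fitting a small number of straight-line segments to each pixel's time series by choosing breakpoint vertices; a naive greedy version is sketched below, while LT's actual vertex scoring, angle penalties and fitting rules are more elaborate.

      import numpy as np

      def segment(years, values, max_segments=4):
          """Greedy piecewise-linear segmentation: repeatedly add the
          breakpoint ('vertex') that most reduces squared error."""
          vertices = [0, len(years) - 1]
          def sse(i, j):
              coef = np.polyfit(years[i:j + 1], values[i:j + 1], 1)
              fit = np.polyval(coef, years[i:j + 1])
              return float(np.sum((values[i:j + 1] - fit) ** 2))
          while len(vertices) - 1 < max_segments:
              best = None
              for a, b in zip(vertices, vertices[1:]):
                  for v in range(a + 1, b):
                      gain = sse(a, b) - (sse(a, v) + sse(v, b))
                      if best is None or gain > best[0]:
                          best = (gain, v)
              if best is None or best[0] <= 0:
                  break
              vertices = sorted(vertices + [best[1]])
          return vertices   # indices of the fitted breakpoints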

  8. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the

  9. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order, and the OLU algorithm can then also be applied to the infinite tree data structure, so higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
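
    For orientation, classical Robinson-style syntactic unification with an occurs check is sketched below over terms encoded as nested tuples; OLU's ordering strategy and its directed-cyclic-graph representation are not reproduced.

      def is_var(x):
          return isinstance(x, str) and x.startswith("?")

      def walk(x, subst):
          while is_var(x) and x in subst:
              x = subst[x]
          return x

      def occurs(v, t, subst):
          t = walk(t, subst)
          return t == v or (isinstance(t, tuple)
                            and any(occurs(v, a, subst) for a in t))

      def unify(s, t, subst=None):
          """Variables start with '?', compound terms are tuples
          (functor, arg1, ...), anything else is a constant."""
          if subst is None:
              subst = {}
          s, t = walk(s, subst), walk(t, subst)
          if s == t:
              return subst
          if is_var(s) or is_var(t):
              v, u = (s, t) if is_var(s) else (t, s)
              if occurs(v, u, subst):
                  return None               # occurs check fails
              return {**subst, v: u}
          if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
              for a, b in zip(s, t):
                  subst = unify(a, b, subst)
                  if subst is None:
                      return None
              return subst
          return None                       # clash

      print(unify(("f", "?x", ("g", "?y")), ("f", "a", ("g", "?x"))))
      # {'?x': 'a', '?y': 'a'}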

  10. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
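
    The final stage described, line detection with the Hough transform on a cleaned-up image, might look as follows with OpenCV's probabilistic variant; the file name and parameter values are illustrative, and the object-removal and rectangle-verification steps are omitted.

      import cv2
      import numpy as np

      img = cv2.imread("field.png", cv2.IMREAD_GRAYSCALE)
      edges = cv2.Canny(img, 50, 150)
      # Probabilistic Hough: returns endpoints of detected line segments.
      lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                              threshold=80, minLineLength=100, maxLineGap=5)
      for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
          cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), 255, 1)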

  11. Seamless Merging of Hypertext and Algorithm Animation

    Science.gov (United States)

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  12. Understanding conflict-resolution taskload: Implementing advisory conflict-detection and resolution algorithms in an airspace

    Science.gov (United States)

    Vela, Adan Ernesto

    2011-12-01

    From 2010 to 2030, the number of instrument flight rules aircraft operations handled by Federal Aviation Administration en route traffic centers is predicted to increase from approximately 39 million flights to 64 million flights. The projected growth in air transportation demand is likely to result in traffic levels that exceed the abilities of the unaided air traffic controller in managing, separating, and providing services to aircraft. Consequently, the Federal Aviation Administration, and other air navigation service providers around the world, are making several efforts to improve the capacity and throughput of existing airspaces. Ultimately, the stated goal of the Federal Aviation Administration is to triple the available capacity of the National Airspace System by 2025. In an effort to satisfy air traffic demand through the increase of airspace capacity, air navigation service providers are considering the inclusion of advisory conflict-detection and resolution systems. In a human-in-the-loop framework, advisory conflict-detection and resolution decision-support tools identify potential conflicts and propose resolution commands for the air traffic controller to verify and issue to aircraft. A number of researchers and air navigation service providers hypothesize that the inclusion of combined conflict-detection and resolution tools into air traffic control systems will reduce or transform controller workload and enable the required increases in airspace capacity. In an effort to understand the potential workload implications of introducing advisory conflict-detection and resolution tools, this thesis provides a detailed study of the conflict event process and the implementation of conflict-detection and resolution algorithms. Specifically, the research presented here examines a metric of controller taskload: how many resolution commands an air traffic controller issues under the guidance of a conflict-detection and resolution decision-support tool. The goal

  13. Implementation of Chaid Algorithm: A Hotel Case

    Directory of Open Access Journals (Sweden)

    Celal Hakan Kagnicioglu

    2016-01-01

    Today, companies plan their activities with a view to efficiency and effectiveness. In order to plan future activities they need historical data coming from outside and inside the company. However, this data comes in amounts too huge to understand easily. Since this huge amount of data creates complexity in business for many industries, such as the hospitality industry, reliable, accurate and fast access to it is one of the greatest problems, and managing it is another. In order to analyze this huge amount of data, Data Mining (DM) tools can be used effectively. In this study, after a brief account of the fundamentals of data mining, the Chi-Squared Automatic Interaction Detection (CHAID) algorithm, one of the most widely used DM tools, is introduced. With the CHAID algorithm, the materials most used in the room cleaning process, and the relations between these materials, are determined from data of a five-star hotel. The analysis shows that while some variables have a strong relation with the number of rooms cleaned in the hotel, others have a weak relation or none.

  15. A Novel Parallel Algorithm for Edit Distance Computation

    Directory of Open Access Journals (Sweden)

    Muhammad Murtaza Yousaf

    2018-01-01

    The edit distance between two sequences is the minimum number of weighted transformation operations required to transform one string into the other. The weighted transformation operations are insert, remove, and substitute. A dynamic programming solution for edit distance exists, but it becomes computationally intensive when the strings are very long. This work presents a novel parallel algorithm for the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m,n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m,n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations, is capable of exploiting spatial locality in its implementation, and works in a load-balanced way, further improving performance. The algorithm is implemented for multicore systems with shared memory. An OpenMP implementation shows linear speedup and better execution time compared to the state-of-the-art parallel approach, and the algorithm's efficiency is also proven better in comparison to its competitor.
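
    The standard serial dynamic program being parallelized is sketched below; the paper's contribution, reordering the dependencies so that an entire row can be computed in parallel, is not shown.

      def edit_distance(s, t):
          """Classic O(m*n) dynamic program with unit costs for
          insert, remove, and substitute."""
          m, n = len(s), len(t)
          prev = list(range(n + 1))             # row for empty prefix of s
          for i in range(1, m + 1):
              cur = [i] + [0] * n
              for j in range(1, n + 1):
                  cur[j] = min(prev[j] + 1,     # remove
                               cur[j - 1] + 1,  # insert
                               prev[j - 1] + (s[i - 1] != t[j - 1]))
              prev = cur
          return prev[n]

      print(edit_distance("kitten", "sitting"))   # 3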

  16. Muusikamaailm : "Kevadpühitsus" sajandi teoseks. Nikolaus Harnoncourt"i juubel. Kent Nagano Deutsche Operisse. "Musica viva" Münchenis / Priit Kuusk

    Index Scriptorium Estoniae

    Kuusk, Priit, 1938-

    1999-01-01

    According to a survey by the popular Western European music magazines "BBC Music Magazine" and "Le Monde de la Musique", the most important musical work of the 20th century is I. Stravinsky's ballet "The Rite of Spring". N. Harnoncourt celebrated his 70th birthday on 6 December; on the conductor's work. K. Nagano was confirmed as music director of Berlin's Deutsche Oper from 2001; on the conductor's career to date. On the concerts of the new-music series "Musica viva" organized by Bavarian Radio.

  17. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    KAUST Repository

    Hoel, Hakon; Von Schwerin, Erik; Szepessy, Anders; Tempone, Raul

    2014-01-01

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and it also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
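
    A uniform-timestep MLMC estimator with coupled Euler-Maruyama paths (here for geometric Brownian motion) illustrates the telescoping construction; the adaptive, non-uniform refinement analyzed in the paper is beyond this sketch, and sample sizes are illustrative rather than optimized.

      import numpy as np

      def mlmc_gbm(L, N, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
          """Estimate E[X_T] as E[P_0] plus level corrections E[P_l - P_{l-1}],
          coupling fine and coarse Euler-Maruyama paths by shared noise."""
          rng = np.random.default_rng(0)
          est = 0.0
          for l in range(L + 1):
              nf = 2 ** l                       # fine steps at level l
              hf = T / nf
              dW = rng.normal(0.0, np.sqrt(hf), size=(N, nf))
              xf = np.full(N, x0)
              for k in range(nf):               # fine path
                  xf = xf + mu * xf * hf + sigma * xf * dW[:, k]
              if l == 0:
                  est += xf.mean()
                  continue
              hc = T / (nf // 2)
              dWc = dW[:, 0::2] + dW[:, 1::2]   # coarse increments
              xc = np.full(N, x0)
              for k in range(nf // 2):          # coupled coarse path
                  xc = xc + mu * xc * hc + sigma * xc * dWc[:, k]
              est += (xf - xc).mean()           # level correction
          return est

      print(mlmc_gbm(L=4, N=20000))   # ~ x0 * exp(mu*T) = 1.051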

  18. MICADO: Parallel implementation of a 2D-1D iterative algorithm for the 3D neutron transport problem in prismatic geometries

    International Nuclear Information System (INIS)

    Fevotte, F.; Lathuiliere, B.

    2013-01-01

    The large increase in computing power over the past few years now makes it possible to consider developing 3D full-core heterogeneous deterministic neutron transport solvers for reference calculations. Among all approaches presented in the literature, the method first introduced in [1] seems very promising. It consists in iterating over resolutions of 2D and 1D MOC problems by taking advantage of prismatic geometries without introducing approximations of a low order operator such as diffusion. However, before developing a solver with all industrial options at EDF, several points needed to be clarified. In this work, we first prove the convergence of this iterative process, under some assumptions. We then present our high-performance, parallel implementation of this algorithm in the MICADO solver. Benchmarking the solver against the Takeda case shows that the 2D-1D coupling algorithm does not seem to affect the spatial convergence order of the MOC solver. As for performance issues, our study shows that even though the data distribution is suited to the 2D solver part, the efficiency of the 1D part is sufficient to ensure a good parallel efficiency of the global algorithm. After this study, the main remaining difficulty implementation-wise is about the memory requirement of a vector used for initialization. An efficient acceleration operator will also need to be developed. (authors)

  19. A fast fractional difference algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    2014-01-01

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T^2, where T is the length of the time series. Our algorithm allows calculation speed of order T log...
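
    The speedup over the O(T^2) direct method comes from evaluating the truncated expansion of (1 - L)^d as a single convolution. Below is a sketch of this standard FFT-convolution approach (the paper's exact algorithm may differ in details).

        import numpy as np

        def frac_diff_fft(x, d):
            # Fractional difference (1 - L)^d applied to series x via FFT convolution.
            # Direct evaluation of the truncated binomial expansion costs O(T^2);
            # convolving the coefficient and data sequences with the FFT costs O(T log T).
            T = len(x)
            # Recursively build the expansion coefficients pi_k of (1 - L)^d.
            pi = np.empty(T)
            pi[0] = 1.0
            for k in range(1, T):
                pi[k] = pi[k - 1] * (k - 1 - d) / k
            n = 2 ** int(np.ceil(np.log2(2 * T - 1)))   # zero-pad to avoid circular wrap
            fx = np.fft.rfft(x, n)
            fp = np.fft.rfft(pi, n)
            return np.fft.irfft(fx * fp, n)[:T]

        x = np.random.default_rng(1).normal(size=1000)
        y = frac_diff_fft(x, 0.4)                       # fractionally differenced series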

  1. Redesigned-Scale-Free CORDIC Algorithm Based FPGA Implementation of Window Functions to Minimize Area and Latency

    Directory of Open Access Journals (Sweden)

    Supriya Aggarwal

    2012-01-01

    One of the most important steps in spectral analysis is filtering, where window functions are generally used to design filters. In this paper, we modify the existing architecture for realizing window functions using a CORDIC processor. Firstly, we modify the conventional CORDIC algorithm to reduce its latency and area. The proposed CORDIC algorithm is completely scale-free for a range of convergence that spans the entire coordinate space. Secondly, we realize the window functions using a single CORDIC processor, as against two serially connected CORDIC processors in the existing technique, thus optimizing it for area and latency. The linear CORDIC processor is replaced by a shift-add network, which drastically reduces the number of pipelining stages required in the existing design. The proposed design on average requires approximately 64% fewer pipeline stages and saves up to 44.2% area. Currently, the processor is designed to implement the Blackman windowing architecture, which with slight modifications can be extended to other window functions as well. The details of the proposed architecture are discussed in the paper.
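
    For orientation, a software model of the conventional rotation-mode CORDIC iteration is sketched below; note that it still needs the constant gain correction K, which is exactly the overhead a scale-free variant such as the paper's is designed to remove.

        import math

        def cordic_sin_cos(theta, n_iter=32):
            # Conventional rotation-mode CORDIC: shift-and-add micro-rotations,
            # plus a final scaling by the constant gain K.
            angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
            K = 1.0
            for i in range(n_iter):
                K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = 1.0, 0.0, theta          # start on the x-axis, residual angle z
            for i in range(n_iter):
                d = 1.0 if z >= 0 else -1.0    # rotate toward zero residual angle
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return y * K, x * K                # (sin(theta), cos(theta))

        s, c = cordic_sin_cos(0.6)
        print(s, math.sin(0.6))                # agree to roughly 1e-9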

  2. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    Science.gov (United States)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ application programming interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  3. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspects by taking this perspective. This paper is the first step towards achieving this objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm.

  4. Parallel data encryption with RSA algorithm

    OpenAIRE

    Неретин, А. А.

    2016-01-01

    In this paper, a parallel RSA algorithm with preliminary shuffling of the source text is presented. The dependence of encryption speed on the number of encryption nodes is analysed. The proposed algorithm was implemented in C#.

  5. Writing Competence in German as a Target Language in a Multilingual Swiss Context: On the Development of Basic Standards Using the Example of a Bilingual School Model (Romansh-German)

    Directory of Open Access Journals (Sweden)

    Elisabeth Peyer

    2014-10-01

    Using the example of the bilingual school of the Romansh-speaking minority in Switzerland, this article describes procedures for establishing basic standards (minimum competences) for writing competence in German in a multilingual context. Based on qualitatively validated can-do descriptions, communicative test tasks were developed and administered to a large sample (N = 325); the resulting learner texts were classified with the help of an analytical rating. In addition to discussing the standard-setting procedure, the article presents results of a quantitative evaluation of the data. For instance, multi-faceted Rasch analyses showed that the rating criteria 'lexical range', 'grammatical range' and 'coherence' correlate particularly highly and can thus tentatively be interpreted as a single dimension of writing competence.

  6. Cuckoo search and firefly algorithm theory and applications

    CERN Document Server

    2014-01-01

    Nature-inspired algorithms such as cuckoo search and firefly algorithm have become popular and widely used in recent years in many applications. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments of cuckoo search and firefly algorithm and their diverse applications. This book will review both theoretical studies and applications with detailed algorithm analysis, implementation and case studies so that readers can benefit most from this book.  Application topics are contributed by many leading experts in the field. Topics include cuckoo search, firefly algorithm, algorithm analysis, feature selection, image processing, travelling salesman problem, neural network, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web service, shape optimization, and others.   This book can serve as an ideal reference for both graduates and researchers in computer scienc...

  7. Implementation of digital image encryption algorithm using logistic function and DNA encoding

    Science.gov (United States)

    Suryadi, MT; Satria, Yudi; Fauzi, Muhammad

    2018-03-01

    Cryptography is a method to secure information, which may be in the form of a digital image. Based on past research, in order to increase the security level of chaos-based and DNA-based encryption algorithms, an encryption algorithm using a logistic function and DNA encoding was proposed. The digital image encryption algorithm using the logistic function and DNA encoding uses DNA encoding to convert the pixel values into DNA bases and scrambles them with DNA addition, DNA complement, and XOR operations. The logistic function in this algorithm is used as the random number generator needed in the DNA complement and XOR operations. The results of the tests show that the PSNR values of the cipher images are 7.98-7.99 bits, the entropy values are close to 8, the histograms of the cipher images are uniformly distributed, and the correlation coefficients of the cipher images are near 0. Thus, the cipher image can be decrypted perfectly and the encryption algorithm has good resistance to entropy attack and statistical attack.
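
    A toy sketch of the chaos-based stage only is shown below — a logistic-map keystream XORed with the pixels; the DNA encoding, addition and complement stages of the paper are omitted, and the seed x0 and parameter r are arbitrary illustrative values.

        import numpy as np

        def logistic_keystream(n, x0=0.7, r=3.99):
            # Generate n pseudo-random bytes by iterating the logistic map
            # x <- r*x*(1-x) and quantizing each iterate to 8 bits.
            x = x0
            out = np.empty(n, dtype=np.uint8)
            for i in range(n):
                x = r * x * (1.0 - x)
                out[i] = int(x * 256) % 256
            return out

        def xor_cipher(pixels, key=(0.7, 3.99)):
            # XOR each pixel with the chaotic keystream; applying the same
            # operation again recovers the plaintext (XOR is an involution).
            ks = logistic_keystream(pixels.size, *key)
            return pixels ^ ks

        img = np.random.default_rng(2).integers(0, 256, (8, 8), dtype=np.uint8)
        enc = xor_cipher(img.ravel()).reshape(img.shape)
        dec = xor_cipher(enc.ravel()).reshape(img.shape)
        assert (dec == img).all()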

  8. Using Genetic Algorithms for Building Metrics of Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Cristian CIUREA

    2011-01-01

    The paper's objective is to reveal the importance of genetic algorithms in building robust metrics of collaborative systems. The main types of collaborative systems in the economy are presented and some characteristics of genetic algorithms are described. A genetic algorithm was implemented in order to determine the local maximum and minimum points of the relative complexity function associated with a collaborative banking system. The intelligent collaborative systems based on genetic algorithms, representing the new generation of collaborative systems, are analyzed, and the implementation of auto-adaptive interfaces in a banking application is described.

  9. Development of a sensorimotor algorithm able to deal with unforeseen pushes and its implementation based on VHDL

    OpenAIRE

    Lezcano Giménez, Pablo Gabriel

    2015-01-01

    Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of my thesis, which concludes my Bachelor's Degree in the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It covers the overall work I did in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. This thesis is focused on the field of robotics, sp...

  10. DNA Cryptography and Deep Learning using Genetic Algorithm with NW algorithm for Key Generation.

    Science.gov (United States)

    Kalsi, Shruti; Kaur, Harleen; Chang, Victor

    2017-12-05

    Cryptography is not only the science of applying complex mathematics and logic to design strong methods of hiding data, called encryption, but also of retrieving the original data back, called decryption. The purpose of cryptography is to transmit a message between a sender and receiver such that an eavesdropper is unable to comprehend it. To accomplish this, we need not only a strong algorithm, but also a strong key and a strong concept for the encryption and decryption process. We have introduced the concept of DNA Deep Learning Cryptography, defined as a technique of concealing data in terms of DNA sequences and deep learning. In the cryptographic technique, each letter of the alphabet is converted into a different combination of the four bases, namely Adenine (A), Cytosine (C), Guanine (G) and Thymine (T), which make up human deoxyribonucleic acid (DNA). Actual implementations with DNA do not go beyond the laboratory level and are expensive. To bring DNA computing to a digital level, easy and effective algorithms are proposed in this paper. In the proposed work we introduce, first, a method and its implementation for key generation based on the theory of natural selection, using a Genetic Algorithm with the Needleman-Wunsch (NW) algorithm, and second, a method for implementing encryption and decryption based on DNA computing, using the biological operations transcription, translation, DNA sequencing and deep learning.
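
    The Needleman-Wunsch step can be summarized by its classical global-alignment recurrence; a minimal stand-alone sketch follows (illustrative scores, not the paper's parameter choices, and without the genetic-algorithm layer built on top of it).

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            # Global alignment score via the Needleman-Wunsch dynamic program.
            m, n = len(a), len(b)
            F = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                F[i][0] = i * gap
            for j in range(1, n + 1):
                F[0][j] = j * gap
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    s = match if a[i-1] == b[j-1] else mismatch
                    F[i][j] = max(F[i-1][j-1] + s,  # align a[i-1] with b[j-1]
                                  F[i-1][j] + gap,  # gap in b
                                  F[i][j-1] + gap)  # gap in a
            return F[m][n]

        print(needleman_wunsch("GATTACA", "GCATGCU"))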

  11. Distributed Algorithms for Time Optimal Reachability Analysis

    DEFF Research Database (Denmark)

    Zhang, Zhengkui; Nielsen, Brian; Larsen, Kim Guldstrand

    2016-01-01

    Time optimal reachability analysis is a novel model-based technique for solving scheduling and planning problems. After modeling them as reachability problems using timed automata, a real-time model checker can compute the fastest trace to the goal states, which constitutes a time optimal schedule. We propose distributed computing to accelerate time optimal reachability analysis. We develop five distributed state exploration algorithms and implement them in Uppaal, enabling it to exploit the compute resources of a dedicated model-checking cluster. We experimentally evaluate the implemented algorithms with four models in terms of their ability to compute near- or proven-optimal solutions, their scalability, time and memory consumption, and communication overhead. Our results show that distributed algorithms work much faster than sequential algorithms and have good speedup in general.

  12. Deutsch, Toffoli, and cnot Gates via Rydberg Blockade of Neutral Atoms

    Science.gov (United States)

    Shi, Xiao-Feng

    2018-05-01

    Universal quantum gates and quantum error correction (QEC) lie at the heart of quantum-information science. Large-scale quantum computing depends on a universal set of quantum gates, in which some gates may be easily carried out, while others are restricted to certain physical systems. There is a unique three-qubit quantum gate called the Deutsch gate [D(θ)], from which a circuit can be constructed so that any feasible quantum computing is attainable. We design an easily realizable D(θ) by using the Rydberg blockade of neutral atoms, where θ can be tuned to any value in [0, π] by adjusting the strengths of external control fields. Using similar protocols, we further show that both the Toffoli and controlled-not gates can be achieved with only three laser pulses. The Toffoli gate, being universal for classical reversible computing, is also useful for QEC, which plays an important role in quantum communication and fault-tolerant quantum computation. The possibility and speed of realizing these gates shed light on the study of quantum information with neutral atoms.
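
    From the standard definition of the Deutsch gate, D(θ) acts as i·cos(θ)·I + sin(θ)·X on the target qubit when both controls are |1⟩ and as the identity elsewhere; a small numerical sketch of that unitary follows (a check of the gate's algebra, not a model of the Rydberg implementation).

        import numpy as np

        def deutsch_gate(theta):
            # 8x8 unitary of the three-qubit Deutsch gate D(theta): the target
            # qubit sees i*cos(theta)*I + sin(theta)*X only when both control
            # qubits are |1>; all other basis states pass through unchanged.
            D = np.eye(8, dtype=complex)
            block = np.array([[1j * np.cos(theta), np.sin(theta)],
                              [np.sin(theta), 1j * np.cos(theta)]])
            D[6:8, 6:8] = block          # the |110>, |111> subspace
            return D

        # At theta = pi/2 the gate reduces exactly to the Toffoli gate.
        assert np.allclose(deutsch_gate(np.pi / 2).real,
                           np.eye(8)[[0, 1, 2, 3, 4, 5, 7, 6]])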

  13. German-Slavic Settlement and Language Contact in the Area between the Saale and the Neisse, Presented through Selected Place Names (Settlement Names)

    Directory of Open Access Journals (Sweden)

    Inge Bily

    2015-12-01

    The Saale and the Elbe essentially form the western boundary of the formerly compact Old Sorbian language area. To the north, Old Sorbian borders on Old Polabian; to the east and southeast, on Polish and Czech. Proper names are an important source both for illuminating the history of settlement and for ethnic, linguistic and social conditions, since historical settlement processes left their mark, among other things, in the historical attestations of names. These attestations, as well as the derivational bases and naming motives and the phonological and morphological features of the names of the Old Sorbian contact area, contain a wealth of evidence of German-Slavic continuity. On the basis of extensive studies of place names, the article presents selected examples. In the formerly Old Sorbian contact area, place names (settlement names) and their historical transmission can provide evidence of settlement and language contact. This is documented by a whole series of features, e.g.: 1. distinguishing determinative elements; 2. parallel naming with temporary use of multiple names; 3. renaming; 4. translation; 5. naming parallelism in the German-Slavic contact area; 6. apparent secondary semantic anchoring (SSSV); 7. name pairs; 8. distinguishing additions; 9. mixed (hybrid) names

  14. "Wenn Sie uns in die Entscheidungsfindung einbeziehen, wird Polen Sie unterstützen": Deutsche Europapolitik aus der Sicht Polens

    OpenAIRE

    Łada, Agnieszka

    2012-01-01

    The Polish-German relationship is better today than it has been for many years. The positive view of Germany is also influenced by Poland's perception of its own role within the European Union. Since taking office in 2007, the current Polish government has built itself a good position in the EU, based on comprehensible policy, initiatives such as the Eastern Partnership, the ability to build coalitions, and exceptional economic indicator...

  15. Implementation of ESPRIT Algorithm on GPS TEC for Percussive Signatures of Earthquakes in Ionosphere

    Science.gov (United States)

    Kiran, Uday; Koteswara Rao, S.; Ramesh, K. S.

    2017-01-01

    The Global Positioning System is a very effective mechanism for detecting disturbances in the ionosphere during solar events. Spectral estimation of ionospheric total electron content perturbations leads to better interpretation of their source mechanisms. Seismo-ionospheric perturbations from an earthquake that occurred on 12 December 2013 are considered in the present work. Estimation of signal parameters via rotational invariance techniques (ESPRIT) is applied to the vertical total electron content data. It was clearly observed that during the disturbance the power spectral density of the dominant frequency was reduced from 7.841 dB to -2.487 dB. The application of the ESPRIT algorithm to seismic perturbations in GPS TEC identified the dominant frequency in the spectrum and a new frequency present at the time of the perturbations
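
    A compact single-channel sketch of the ESPRIT idea on synthetic data follows — Hankel data matrix, dominant signal subspace, shift (rotational) invariance — rather than the GPS TEC processing chain of the paper; the window length and test frequencies are illustrative.

        import numpy as np

        def esprit_freqs(x, n_sources, m=None):
            # Estimate sinusoid frequencies (cycles/sample) from a 1-D signal.
            N = len(x)
            m = m or N // 2
            # Hankel matrix whose columns are length-m sliding windows of x.
            H = np.lib.stride_tricks.sliding_window_view(x, m).T
            U, _, _ = np.linalg.svd(H, full_matrices=False)
            Us = U[:, :n_sources]                  # signal subspace
            # Shift invariance: Us minus last row ~ Us minus first row times Phi.
            Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
            w = np.angle(np.linalg.eigvals(Phi))   # rotation angles = 2*pi*f
            return np.sort(w / (2 * np.pi))

        t = np.arange(256)
        sig = np.sin(2 * np.pi * 0.11 * t) + 0.5 * np.sin(2 * np.pi * 0.23 * t)
        print(esprit_freqs(sig, 4))   # +/-0.11 and +/-0.23 for a real-valued signal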

  16. Experience with remediation implementation at railroad station Freital-Potschappel

    International Nuclear Information System (INIS)

    Streubel, G.; Tottewitz, K.

    1995-01-01

    As a result of the measuring activities for the contaminated sites cadastre, the Saxonian Landesamt fuer Umwelt und Geologie requested the Deutsche Bahn AG, as the responsible site owner, to clean up the radioactively contaminated surfaces open to the general public. In response, the Deutsche Bahn AG commissioned the TUeV Sachsen GmbH to carry out the remediation work. The lecture reports on the main points of interest and the experience gained in these activities. (orig./DG) [de]

  17. UTV Expansion Pack: Special-Purpose Rank-Revealing Algorithms

    DEFF Research Database (Denmark)

    Fierro, Ricardo D.; Hansen, Per Christian

    2005-01-01

    This collection of Matlab 7.0 software supplements and complements the package UTV Tools from 1999, and includes implementations of special-purpose rank-revealing algorithms developed since the publication of the original package. We provide algorithms for computing and modifying symmetric rank-revealing VSV decompositions, we expand the algorithms for the ULLV decomposition of a matrix pair to handle interference-type problems with a rank-deficient covariance matrix, and we provide a robust and reliable Lanczos algorithm which - despite its simplicity - is able to capture all the dominant singular values of a sparse or structured matrix. These new algorithms have applications in signal processing, optimization and LSI information retrieval.

  18. A cellular automata based FPGA realization of a new metaheuristic bat-inspired algorithm

    Science.gov (United States)

    Progias, Pavlos; Amanatiadis, Angelos A.; Spataro, William; Trunfio, Giuseppe A.; Sirakoulis, Georgios Ch.

    2016-10-01

    Optimization algorithms are often inspired by processes occurring in nature, such as animal behavioral patterns. The main concern with implementing such algorithms in software is the large amount of processing power they require. In contrast to software code, which can only perform calculations in a serial manner, an implementation in hardware, exploiting the inherent parallelism of single-purpose processors, can prove to be much more efficient both in speed and in energy consumption. Furthermore, the use of Cellular Automata (CA) in such an implementation is efficient both as a model for natural processes and as a computational paradigm that maps well onto hardware. In this paper, we propose a VHDL implementation of a metaheuristic algorithm inspired by the echolocation behavior of bats. More specifically, the CA model is inspired by the metaheuristic algorithm proposed earlier in the literature, which can be considered at least as efficient as other existing optimization algorithms. The function of the FPGA implementation of our algorithm is explained in full detail, and results of our simulations are also presented.

  19. Parallel Computing Strategies for Irregular Algorithms

    Science.gov (United States)

    Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)

    2002-01-01

    Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.

  20. Overview of implementation of DARPA GPU program in SAIC

    Science.gov (United States)

    Braunreiter, Dennis; Furtek, Jeremy; Chen, Hai-Wen; Healy, Dennis

    2008-04-01

    This paper reviews the implementation of the DARPA MTO STAP-BOY program, Phases I and II, conducted at Science Applications International Corporation (SAIC). The STAP-BOY program develops fast covariance factorization and tuning techniques for space-time adaptive processing (STAP) algorithm implementation on graphics processing unit (GPU) architectures for embedded systems. The first part of our presentation on the DARPA STAP-BOY program focuses on GPU implementation and algorithm innovations for a prototype radar STAP algorithm. The STAP algorithm is implemented on the GPU, using stream programming (from companies such as PeakStream, ATI Technologies' CTM, and NVIDIA) and traditional graphics APIs. This algorithm includes fast range-adaptive STAP weight updates and beamforming applications, each of which has been modified to exploit the parallel nature of graphics architectures.

  1. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to here as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  2. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.

  3. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D.

  5. A Novel Algorithm for Determining the Contextual Characteristics of Movement Behaviors by Combining Accelerometer Features and Wireless Beacons: Development and Implementation.

    Science.gov (United States)

    Magistro, Daniele; Sessa, Salvatore; Kingsnorth, Andrew P; Loveday, Adam; Simeone, Alessandro; Zecca, Massimiliano; Esliger, Dale W

    2018-04-20

    Unfortunately, global efforts to promote "how much" physical activity people should be undertaking have been largely unsuccessful. Given the difficulty of achieving a sustained lifestyle behavior change, many scientists are reexamining their approaches. One such approach is to focus on understanding the context of the lifestyle behavior (ie, where, when, and with whom) with a view to identifying promising intervention targets. The aim of this study was to develop and implement an innovative algorithm to determine "where" physical activity occurs using proximity sensors coupled with a widely used physical activity monitor. A total of 19 Bluetooth beacons were placed in fixed locations within a multilevel, mixed-use building. In addition, 4 receiver-mode sensors were fitted to the wrists of a roving technician who moved throughout the building. The experiment was divided into 4 trials with different walking speeds and dwelling times. The data were analyzed using an original and innovative algorithm based on graph generation and Bayesian filters. Linear regression models revealed significant correlations between beacon-derived location and ground-truth tracking time, with intraclass correlations suggesting a high goodness of fit (R^2 = .9780). The algorithm reliably predicted indoor location, and its robustness improved with a longer dwelling time (>100 s), allowing it to determine the location of an individual within an indoor environment. This novel implementation of "context sensing" will facilitate a wealth of new research questions on promoting healthy behavior change, the optimization of patient care, and efficient health care planning (eg, patient-clinician flow, patient-clinician interaction).

  6. Application of epidemic algorithms for smart grids control

    International Nuclear Information System (INIS)

    Krkoleva, Aleksandra

    2012-01-01

    Smart Grids are a new concept for the development of electricity networks, aiming to provide an economically efficient and sustainable power system by effectively integrating the actions and needs of network users. The thesis addresses the Smart Grids concept, with emphasis on control strategies developed on the basis of epidemic algorithms, more specifically, gossip algorithms. The thesis is developed around three Smart Grid aspects: the changed role of consumers in terms of taking part in providing services within Smart Grids; the possibilities to implement decentralized control strategies based on distributed algorithms; and information exchange and the benefits emerging from the implementation of information and communication technologies. More specifically, the thesis presents a novel approach for providing ancillary services by implementing gossip algorithms. In a decentralized manner, by exchanging information among themselves and making decisions at the local level, based on the received information and local parameters, the group of consumers achieves its global objective, i.e. providing ancillary services. The thesis presents an overview of Smart Grids control strategies, with emphasis on new strategies developed for the most promising Smart Grids concepts, such as microgrids and virtual power plants. The thesis also presents the characteristics of epidemic algorithms and the possibilities for their implementation in Smart Grids. Based on the research on epidemic algorithms, two applications have been developed. These applications are the main outcome of the research. The first application enables consumers, represented by their commercial aggregators, to participate in load reduction and, consequently, to participate in the balancing market or reduce the balancing costs of the group. In this context, the gossip algorithms are used for the dissemination of the aggregator's load-reduction messages, enabling households and small commercial and industrial consumers to participate in maintaining

  7. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Effective algorithms for the numerical modeling of physical media are discussed. With traditional algorithms, the computation rate of such problems is limited by memory bandwidth. The numerical solution of the wave equation is considered, using a finite difference scheme with a cross stencil and a high order of approximation. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU's (general-purpose graphics processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization and the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved, which exceeds the result of the best traditional algorithm by a factor of five.

  8. New high-speed multiple units for Deutsche Bahn for operating international services; Neue Hochgeschwindigkeitstriebzuege der DB fuer den internationalen Einsatz

    Energy Technology Data Exchange (ETDEWEB)

    Panier, Frank [DB Technik/Beschaffung HGV-Zuege (Germany). Auslaendische HGV-Verkehre]

    2010-09-15

    Deutsche Bahn (DB AG) has been involved in the procurement of its third generation of ICE (Inter City Express) trains since 1994 - a project that has also envisaged multi-system variants. This has resulted in a four-system train, the class 406. It was then decided in 2007 to call for tenders for a multi-system train, which would also be able to run to southeast France and even as far as the Mediterranean. (orig.)

  9. Optimizing graph algorithms on pregel-like systems

    KAUST Repository

    Salihoglu, Semih; Widom, Jennifer

    2014-01-01

    We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high

  10. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. (orig.)

  11. A practical guide to data structures and algorithms using Java

    CERN Document Server

    Goldman, Sally A

    2007-01-01

    Although traditional texts present isolated algorithms and data structures, they do not provide a unifying structure and offer little guidance on how to appropriately select among them. Furthermore, these texts furnish little, if any, source code and leave many of the more difficult aspects of the implementation as exercises. A fresh alternative to conventional data structures and algorithms books, A Practical Guide to Data Structures and Algorithms using Java presents comprehensive coverage of fundamental data structures and algorithms in a unifying framework with full implementation details.

  12. Efficient Hardware Implementation of the Horn-Schunck Algorithm for High-Resolution Real-Time Dense Optical Flow Sensor

    Science.gov (United States)

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-01-01

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 Mpixels/s and to process a Full HD video stream (1,920 × 1,080 @ 60 fps). The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency and accuracy of results, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the optical flow dataset of Middlebury University. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
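
    For reference, the iterative scheme being pipelined is the classical Horn-Schunck update; a short software model follows (floating point, not the paper's fixed-point FPGA datapath; the test image and parameters are illustrative).

        import numpy as np
        from scipy.signal import convolve2d

        def horn_schunck(I1, I2, alpha=0.1, n_iter=100):
            # Classical iterative Horn-Schunck update (software reference model).
            Iy, Ix = np.gradient(I1.astype(float))       # spatial derivatives
            It = I2.astype(float) - I1.astype(float)     # temporal derivative
            u = np.zeros_like(Ix)                        # horizontal flow
            v = np.zeros_like(Ix)                        # vertical flow
            k = np.array([[0, .25, 0], [.25, 0, .25], [0, .25, 0]])  # neighbour mean
            for _ in range(n_iter):
                u_avg = convolve2d(u, k, mode="same")
                v_avg = convolve2d(v, k, mode="same")
                # Update from the linearised brightness-constancy constraint,
                # regularised by the smoothness term weighted with alpha.
                t = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
                u = u_avg - Ix * t
                v = v_avg - Iy * t
            return u, v

        y, x = np.mgrid[0:64, 0:64]
        I1 = np.sin(x / 5.0) + np.cos(y / 7.0)
        I2 = np.roll(I1, 1, axis=1)                      # 1-pixel shift to the right
        u, v = horn_schunck(I1, I2)
        print(u[20:40, 20:40].mean())                    # approaches +1 in the interior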

  13. Quantum-circuit model of Hamiltonian search algorithms

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    We analyze three different quantum search algorithms, namely, the traditional circuit-based Grover's algorithm, its continuous-time analog by Hamiltonian evolution, and the quantum search by local adiabatic evolution. We show that these algorithms are closely related in the sense that they all perform a rotation, at a constant angular velocity, from a uniform superposition of all states to the solution state. This makes it possible to implement the two Hamiltonian-evolution algorithms on a conventional quantum circuit, while keeping the quadratic speedup of Grover's original algorithm. It also clarifies the link between the adiabatic search algorithm and Grover's algorithm
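
    The constant-angular-velocity rotation is easy to see in a state-vector simulation of the circuit-based variant: the success amplitude after k iterations is sin((2k+1)θ) with sin θ = 1/√N. A small sketch of that simulation (illustrative problem size and marked item):

        import numpy as np

        def grover_success_prob(n_qubits, marked, n_iters):
            # State-vector simulation of Grover's algorithm, tracking the
            # probability of the marked state after each iteration.
            N = 2 ** n_qubits
            psi = np.full(N, 1 / np.sqrt(N))      # uniform superposition
            probs = []
            for _ in range(n_iters):
                psi[marked] *= -1                 # oracle: phase-flip the solution
                psi = 2 * psi.mean() - psi        # inversion about the mean
                probs.append(psi[marked] ** 2)
            return probs

        for k, p in enumerate(grover_success_prob(6, marked=5, n_iters=6), 1):
            print(k, round(p, 3))                 # rises like sin((2k+1)*theta)^2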

  14. Implementation of Tuy's cone-beam inversion formula

    International Nuclear Information System (INIS)

    Zeng, G.L.; Clack, R.; Gullberg, G.T.

    1994-01-01

    Tuy's cone-beam inversion formula was modified to develop a cone-beam reconstruction algorithm. The algorithm was implemented for a cone-beam vertex orbit consisting of a circle and two orthogonal lines. This orbit geometry satisfies the cone-beam data sufficiency condition and is easy to implement on commercial single photon emission computed tomography (SPECT) systems. The algorithm which consists of two derivative steps, one rebinning step, and one three-dimensional backprojection step, was verified by computer simulations and by reconstructing physical phantom data collected on a clinical SPECT system. The proposed algorithm gives equivalent results and is as efficient as other analytical cone-beam reconstruction algorithms. (Author)

  15. Algorithm of search and track of static and moving large-scale objects

    Directory of Open Access Journals (Sweden)

    Kalyaev Anatoly

    2017-01-01

    Full Text Available We suggest an algorithm for processing of a sequence, which contains images of search and track of static and moving large-scale objects. The possible software implementation of the algorithm, based on multithread CUDA processing, is suggested. Experimental analysis of the suggested algorithm implementation is performed.

  16. Comparison of spike-sorting algorithms for future hardware implementation.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
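
    The nonlinear energy operator itself is a three-sample formula, psi[n] = x[n]^2 - x[n-1]*x[n+1], followed by a threshold; a minimal sketch is given below (the threshold scaling is an illustrative choice, not the paper's).

        import numpy as np

        def neo_detect(x, threshold_scale=8.0):
            # Spike detection with the nonlinear energy operator (NEO),
            # thresholded at a multiple of the operator's mean output.
            psi = x[1:-1] ** 2 - x[:-2] * x[2:]
            thr = threshold_scale * psi.mean()
            return np.flatnonzero(psi > thr) + 1    # indices back in x's frame

        rng = np.random.default_rng(4)
        x = rng.normal(0, 0.1, 1000)
        x[300] += 2.0                               # a crude synthetic "spike"
        print(neo_detect(x))                        # detects around index 300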

  17. Feicim: A browser for data and algorithms

    International Nuclear Information System (INIS)

    Lazar, Z I; McNulty, R; Kechadi, T

    2008-01-01

    As programming and programming environments become increasingly complex, more effort must be invested in presenting the user with a simple yet comprehensive interface. Feicim is a tool that unifies the representation of data and algorithms. It provides resource discovery of data files, data content and algorithm implementations through an intuitive graphical user interface. It allows local or remote data stored on Grid-type platforms to be accessed by users, the viewing and creation of user-defined or collaboration-defined algorithms, the implementation of algorithms, and the production of output data files and/or histograms. An application of Feicim is illustrated using LHCb data. It provides a graphical view of the Gaudi architecture and the LHCb event data model, and interfaces to the file catalogue. Feicim is particularly suited to frameworks such as Gaudi which consider algorithms as objects [2]. Instant viewing of any LHCb data will be of particular value in the commissioning of the detector and for quickly familiarizing newcomers with the data and software environment

  18. An efficient quantum algorithm for spectral estimation

    Science.gov (United States)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.

  19. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    Science.gov (United States)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.

  20. A neural network based implementation of an MPC algorithm applied in the control systems of electromechanical plants

    Science.gov (United States)

    Marusak, Piotr M.; Kuntanapreeda, Suwat

    2018-01-01

    The paper considers the application of a neural network based implementation of a model predictive control (MPC) algorithm to electromechanical plants. The properties of such plants imply that a relatively short sampling time should be used. However, in that case, finding the control value numerically may be too time-consuming. Therefore, the current paper tests a solution based on transforming the MPC optimization problem into a set of differential equations whose solution is the same as that of the original optimization problem. This set of differential equations can be interpreted as a dynamic neural network. In such an approach, constraints can be introduced into the optimization problem with relative ease. Moreover, the solution of the optimization problem can be obtained faster than when a standard numerical quadratic programming routine is used. However, very careful tuning of the algorithm is needed to achieve this. A DC motor and an electrohydraulic actuator are taken as illustrative examples. The feasibility and effectiveness of the proposed approach are demonstrated through numerical simulations.

  1. Effects of a random noisy oracle on search algorithm complexity

    International Nuclear Information System (INIS)

    Shenvi, Neil; Brown, Kenneth R.; Whaley, K. Birgitta

    2003-01-01

    Grover's algorithm provides a quadratic speed-up over classical algorithms for unstructured database or library searches. This paper examines the robustness of Grover's search algorithm to a random phase error in the oracle and analyzes the complexity of the search process as a function of the scaling of the oracle error with database or library size. Both the discrete- and continuous-time implementations of the search algorithm are investigated. It is shown that unless the oracle phase error scales as O(N^(-1/4)), neither the discrete- nor the continuous-time implementation of Grover's algorithm is scalably robust to this error in the absence of error correction.

  2. An implementation of signal processing algorithms for ultrasonic NDE

    International Nuclear Information System (INIS)

    Ericsson, L.; Stepinski, T.

    1994-01-01

    The probability of detecting flaws during ultrasonic pulse-echo inspection is often limited by the presence of backscattered echoes from the material structure. A digital signal processing technique for the removal of this material noise, referred to as split spectrum processing (SSP), has been developed and verified in laboratory experiments over the last decade. The authors have recently performed a limited-scale evaluation of various SSP techniques on ultrasonic signals acquired during the inspection of welds in austenitic steel. They have obtained very encouraging results that indicate promising capabilities of SSP for the inspection of nuclear power plants. Thus, a more extensive investigation of the technique using large amounts of ultrasonic data is motivated. This analysis should employ different combinations of materials, flaws and transducers. Due to the considerable number of ultrasonic signals required to verify the technique for future practical use, custom-made computer software is necessary. At the request of the Swedish nuclear power industry, the authors have developed such a program package. The program provides a user-friendly graphical interface and is intended for processing B-scan data in a flexible way. Assembled in the program are a number of signal processing algorithms, including traditional Split Spectrum Processing and the more recent Cut Spectrum Processing algorithm developed by the authors. The program and some results obtained using the various algorithms are presented in the paper.

  3. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations for the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support-weight based) are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
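
    As a baseline for the fixed-support variant discussed above, a dense SAD block-matching sketch is shown below; an edge-directed version would evaluate the same cost only at edge pixels. The window size and disparity range are illustrative.

        import numpy as np
        from scipy.signal import convolve2d

        def sad_disparity(left, right, max_disp=16, win=2):
            # Fixed-support SAD block matching on a rectified image pair: for each
            # pixel, choose the disparity whose (2*win+1)^2 window minimises the
            # sum of absolute differences between left and shifted right image.
            left = left.astype(float)
            right = right.astype(float)
            h, w = left.shape
            disp = np.zeros((h, w), dtype=np.int32)
            best = np.full((h, w), np.inf)
            kernel = np.ones((2 * win + 1, 2 * win + 1))
            for d in range(max_disp):
                shifted = np.roll(right, d, axis=1)      # candidate disparity d
                sad = convolve2d(np.abs(left - shifted), kernel, mode="same")
                mask = sad < best
                best[mask] = sad[mask]
                disp[mask] = d
            return disp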

  4. A Novel Enhanced Positioning Trilateration Algorithm Implemented for Medical Implant In-Body Localization

    Directory of Open Access Journals (Sweden)

    Peter Brida

    2013-01-01

    Medical implants based on wireless communication will play a crucial role in healthcare systems. Some applications need to know the exact position of each implant. RF positioning seems to be an effective approach for implant localization. The two types of positioning data most commonly used for RF positioning are the received signal strength and the time of flight of a radio signal between the transmitter and receivers (the medical implant and a network of reference devices with known positions). This leads to the positioning methods received signal strength (RSS) and time of arrival (ToA). Both methods are based on trilateration. The positioning data used are very important, but the positioning algorithm which estimates the implant position is important as well. In this paper, a novel algorithm for trilateration is proposed. The proposed algorithm improves on the quality of basic trilateration algorithms for the same quality of measured positioning data. It is called the Enhanced Positioning Trilateration Algorithm (EPTA). The proposed algorithm can be divided into two phases. The first phase is focused on the selection of the most suitable sensors for position estimation. The goal of the second is to improve the positioning accuracy by means of an adaptive algorithm. Finally, we provide a performance analysis of the proposed algorithm by computer simulations.
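
    Basic trilateration, which EPTA builds on, can be sketched as a linear least-squares problem obtained by subtracting one range equation from the others (a generic textbook formulation, not the EPTA algorithm itself; the anchor layout is made up).

        import numpy as np

        def trilaterate(anchors, dists):
            # Closed-form least-squares trilateration: subtracting the first
            # range equation from the others linearizes the system A p = b.
            anchors = np.asarray(anchors, dtype=float)
            d = np.asarray(dists, dtype=float)
            A = 2 * (anchors[1:] - anchors[0])
            b = (d[0] ** 2 - d[1:] ** 2
                 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
            p, *_ = np.linalg.lstsq(A, b, rcond=None)
            return p

        anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
        true = np.array([3.0, 4.0])
        dists = [np.linalg.norm(true - a) for a in anchors]
        print(trilaterate(anchors, dists))    # ~ [3. 4.]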

  5. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  6. Quantum algorithm for support matrix machines

    Science.gov (United States)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data points and p×q is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.
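
    For intuition, the classical analogue of the QSVT subroutine is ordinary singular value thresholding; the sketch below (plain NumPy, assumed purely for illustration and carrying none of the quantum speedup) shrinks the singular values of a matrix by a threshold tau.

        import numpy as np

        def singular_value_threshold(A, tau):
            # Classical singular value thresholding: shrink every singular value by
            # tau and drop those that fall below zero, then reassemble the matrix.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            s = np.maximum(s - tau, 0.0)
            return (U * s) @ Vt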

  7. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word sizes in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of their VLSI implementations. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area, as sketched below. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.
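
    A minimal sketch of the sub-range dispatch idea follows; the particular bands and architecture names are hypothetical, since the abstract does not give the tool's actual table.

        def select_multiplier_architecture(word_size):
            # Hypothetical sub-range table in the spirit of the synthesis tool:
            # each word-size band maps to an architecture chosen for speed/area.
            if not 4 <= word_size <= 256:
                raise ValueError("supported word sizes are 4..256 bits")
            if word_size <= 16:
                return "array multiplier"
            if word_size <= 64:
                return "Wallace tree"
            return "Booth-encoded Wallace tree"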

  8. Hardware modules of the RSA algorithm

    Directory of Open Access Journals (Sweden)

    Škobić Velibor

    2014-01-01

    Full Text Available This paper describes the basic principles of data protection using the RSA algorithm, as well as algorithms for its calculation. The RSA algorithm is implemented on the FPGA integrated circuit EP4CE115F29C7 of the Altera Cyclone IV family. Four modules of the Montgomery algorithm are designed using VHDL. Synthesis and simulation are done using the Quartus II software and ModelSim. The modules are analyzed for different key lengths (16 to 1024 bits) in terms of the number of logic elements, the maximum frequency, and speed.
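
    For reference, one Montgomery multiplication step (REDC) can be sketched in software as follows; this is a generic textbook formulation in Python 3.8+, not the paper's VHDL modules.

        def montgomery_multiply(a, b, n, r_bits):
            # Computes a*b*R^-1 mod n with R = 2**r_bits and n odd (gcd(n, R) = 1).
            # n_prime satisfies n * n_prime = -1 mod R; requires Python 3.8+ for
            # the modular inverse via pow(n, -1, R).
            R = 1 << r_bits
            n_prime = -pow(n, -1, R) % R
            t = a * b
            m = (t * n_prime) % R
            u = (t + m * n) >> r_bits  # exact division by R by construction
            return u - n if u >= n else u

    To multiply a and b modulo n in this representation, both operands are first mapped into the Montgomery domain, e.g. a_bar = (a << r_bits) % n, and the final result is mapped back with one more REDC step.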

  9. Fuzzy logic and A* algorithm implementation on goat foraging games

    Science.gov (United States)

    Harsani, P.; Mulyana, I.; Zakaria, D.

    2018-03-01

    Goat foraging is a game that applies search techniques within the scope of artificial intelligence. The game involves several actors, including players and enemies. The methods used in this research are fuzzy logic and the A* algorithm. Fuzzy logic is used to determine enemy behaviour, with two input variables, the distance between the player and the enemy and the anger level of the goat, and one output variable, the enemy behaviour. The A* algorithm is used to find the shortest path between the player and the enemy and to define the enemy's escape path away from the player. There are four types of enemies, among them farmers, planters, and plant sellers. The player is a goat whose aim is to find food, namely plants: it must eat all the grass in a maze-like garden while avoiding the enemies. The game demonstrates an application of artificial intelligence and comes in four difficulty levels.
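
    A minimal sketch of A* search on a maze-like grid, assuming 4-connected movement and a Manhattan-distance heuristic (the game's fuzzy-logic behaviour layer is not modelled here):

        import heapq
        import itertools

        def a_star(grid, start, goal):
            # grid: 2D list, 0 = free, 1 = wall; start/goal: (row, col) tuples.
            def h(p):  # Manhattan-distance heuristic, admissible on a 4-connected grid
                return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
            tie = itertools.count()  # tiebreaker so the heap never compares nodes
            open_heap = [(h(start), next(tie), start, None)]
            g_best, came_from = {start: 0}, {}
            while open_heap:
                _, _, node, parent = heapq.heappop(open_heap)
                if node in came_from:
                    continue  # already expanded via a cheaper route
                came_from[node] = parent
                if node == goal:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = came_from[node]
                    return path[::-1]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (node[0] + dr, node[1] + dc)
                    if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                            and grid[nxt[0]][nxt[1]] == 0):
                        ng = g_best[node] + 1
                        if ng < g_best.get(nxt, float("inf")):
                            g_best[nxt] = ng
                            heapq.heappush(open_heap, (ng + h(nxt), next(tie), nxt, node))
            return None  # no path exists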

  10. Deutsche Bibliotheksstatistik (DBS): Konzept, Umsetzung und Perspektiven für eine umfassende Datenbasis zum Bibliothekswesen in Deutschland: 10 Fragen von Bruno Bauer an Ronald M. Schmidt, Leiter der DBS / German Library Statistics (DBS): Concept, implementation and prospects for a comprehensive database on library statistics in Germany: 10 questions interview with Ronald M. Schmidt, head of DBS, by Bruno Bauer

    Directory of Open Access Journals (Sweden)

    Schmidt, Ronald M.

    2008-06-01

    Full Text Available The DBS, Deutsche Bibliotheksstatistik (German Library Statistics, http://www.bibliotheksstatistik.de), has been reporting since 1974. Around 9000 libraries file data on facilities, equipment, holdings, usage, budget and staff. Data collection, evaluation, and presentation are today carried out online only. The aim of the DBS is the formation of a national data pool containing statistical data on all types of libraries. The interview covers the concept of the DBS and its differentiation between public, university and specialised libraries. It covers at length the increasingly important topic of collecting holdings and usage data for digital libraries. The DBS process of data evaluation and publication is described, and connections between the DBS and the library benchmarking index BIX are explained. Finally, international cooperation options for the DBS are discussed.

  11. Selection of views to materialize using simulated annealing algorithms

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize so that a given set of queries is answered efficiently; the goal is to minimize the combined cost of query evaluation and view maintenance. In this paper, we design algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and the cost of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve this problem: first we explore simulated annealing to optimize the selection of materialized views, then we demonstrate the approach with experiments. We implemented our algorithms, and a performance study shows that the proposed algorithm outperforms the alternatives and yields an optimal solution.
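
    A hedged sketch of a simulated annealing loop for view selection follows; the neighbourhood move (flipping one view in or out of the materialized set), the geometric cooling schedule, and the cost interface are illustrative assumptions, not the paper's exact design.

        import math
        import random

        def anneal_view_selection(views, cost, t0=1000.0, alpha=0.95, steps=5000):
            # views: list of candidate view ids; cost(selection) returns the combined
            # query-evaluation plus view-maintenance cost of a frozenset of views.
            current = frozenset(random.sample(views, len(views) // 2))
            best, t = current, t0
            for _ in range(steps):
                v = random.choice(views)      # neighbour: toggle one view in/out
                candidate = current ^ {v}
                delta = cost(candidate) - cost(current)
                # accept improvements always, worsenings with Boltzmann probability
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = candidate
                    if cost(current) < cost(best):
                        best = current
                t *= alpha                    # geometric cooling
            return best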

  12. Long-term power generation expansion planning with short-term demand response: Model, algorithms, implementation, and electricity policies

    Science.gov (United States)

    Lohmann, Timo

    Electric sector models are powerful tools that guide policy makers and stakeholders. Long-term power generation expansion planning models are a prominent example and determine a capacity expansion for an existing power system over a long planning horizon. With the changes in the power industry away from monopolies and regulation, the focus of these models has shifted to competing electric companies maximizing their profit in a deregulated electricity market. In recent years, consumers have started to participate in demand response programs, actively influencing electricity load and price in the power system. We introduce a model that features investment and retirement decisions over a long planning horizon of more than 20 years, as well as an hourly representation of day-ahead electricity markets in which sellers of electricity face buyers. This combination makes our model both unique and challenging to solve. Decomposition algorithms, and especially Benders decomposition, can exploit the model structure. We present a novel method that can be seen as an alternative to generalized Benders decomposition and relies on dynamic linear overestimation. We prove its finite convergence and present computational results, demonstrating its superiority over traditional approaches. In certain special cases of our model, all necessary solution values in the decomposition algorithms can be directly calculated and solving mathematical programming problems becomes entirely obsolete. This leads to highly efficient algorithms that drastically outperform their programming problem-based counterparts. Furthermore, we discuss the implementation of all tailored algorithms and the challenges from a modeling software developer's standpoint, providing an insider's look into the modeling language GAMS. Finally, we apply our model to the Texas power system and design two electricity policies motivated by the U.S. Environmental Protection Agency's recently proposed CO2 emissions targets for the
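
    The thesis's algorithm is not spelled out in this abstract; purely as a generic illustration of decomposition by linear overestimation, the sketch below applies Kelley-style gradient cuts to maximize a one-dimensional concave function, with a grid-search "master" step assumed so that no LP solver is needed.

        import numpy as np

        def cutting_plane_max(f, fprime, lo, hi, tol=1e-3, max_iter=100):
            # Each gradient cut f(xk) + f'(xk)(x - xk) overestimates a concave f;
            # the master step maximizes the piecewise-linear overestimator, which
            # is the role Benders-type master problems play in decomposition.
            xs = np.linspace(lo, hi, 10001)
            cuts = []                                  # (slope, intercept) pairs
            x, lb, best_x = (lo + hi) / 2.0, -np.inf, None
            for _ in range(max_iter):
                fx = f(x)
                if fx > lb:
                    lb, best_x = fx, x                 # lower bound: best true value
                g = fprime(x)
                cuts.append((g, fx - g * x))
                over = np.min([a * xs + b for a, b in cuts], axis=0)
                i = int(np.argmax(over))
                x, ub = float(xs[i]), float(over[i])   # upper bound: master optimum
                if ub - lb <= tol:
                    break
            return best_x, lb

        # usage: cutting_plane_max(lambda x: -(x - 1)**2, lambda x: -2 * (x - 1), -5, 5)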

  13. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Duane, S.; Kogut, J.B.

    1986-01-01

    The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)
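
    A minimal sketch of one hybrid trajectory, assuming a generic potential U with gradient grad_U: momenta are refreshed from a Gaussian heat bath and the equations of motion are integrated with the leapfrog discretization whose finite step size produces the systematic errors analyzed in the paper (no lattice gauge or fermion specifics are modelled here).

        import numpy as np

        def hybrid_update(x, grad_U, eps=0.1, n_steps=10, rng=np.random.default_rng()):
            # Refresh momenta from the Gaussian heat bath (the stochastic element),
            # then integrate Hamiltonian dynamics with leapfrog: half kick,
            # alternating drifts and kicks, final half kick.
            p = rng.standard_normal(x.shape)
            p = p - 0.5 * eps * grad_U(x)
            for _ in range(n_steps - 1):
                x = x + eps * p
                p = p - eps * grad_U(x)
            x = x + eps * p
            p = p - 0.5 * eps * grad_U(x)
            return x  # new field configuration after one trajectory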

  14. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.
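
    A toy sketch of the retina idea for straight tracks in two dimensions, assuming hits (x, y) and a grid of (slope, intercept) cells: each cell accumulates a Gaussian-weighted response to all hits, and peaks in the response mark track candidates. The cells are mutually independent, which is what maps well onto massively parallel hardware.

        import numpy as np

        def retina_response(hits, slopes, intercepts, sigma=0.5):
            # hits: (n, 2) array of (x, y) detector hits; slopes, intercepts: 1D
            # grids of track parameters. Returns the (n_slopes, n_intercepts)
            # response map; every cell is computable independently.
            m = slopes[:, None, None]
            q = intercepts[None, :, None]
            x = hits[:, 0][None, None, :]
            y = hits[:, 1][None, None, :]
            resid = y - (m * x + q)  # distance of each hit from each candidate track
            return np.exp(-resid**2 / (2 * sigma**2)).sum(axis=2)

    Track candidates would then be read off as local maxima of the returned map, e.g. via np.unravel_index(np.argmax(response), response.shape) for the strongest one.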

  15. GPU-Vote: A Framework for Accelerating Voting Algorithms on GPU.

    NARCIS (Netherlands)

    Braak, van den G.J.W.; Nugteren, C.; Mesman, B.; Corporaal, H.; Kaklamanis, C.; Papatheodorou, T.; Spirakis, P.G.

    2012-01-01

    Voting algorithms, such as histogram and Hough transforms, are frequently used in various domains, such as statistics and image processing. Algorithms in these domains may be accelerated using GPUs. Implementing voting algorithms efficiently on a GPU, however, is far from trivial due to
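
    A hedged CPU-side sketch of the usual GPU strategy for histogram voting, in which private sub-histograms are filled independently and merged at the end, avoiding contention on a single shared histogram (values are assumed to be integer bin indices in [0, n_bins)):

        import numpy as np

        def voting_histogram(values, n_bins, n_parts=8):
            # Split the input as a GPU would split it across blocks, vote into
            # private sub-histograms, then reduce them into the final histogram.
            parts = np.array_split(values, n_parts)
            subs = [np.bincount(p, minlength=n_bins) for p in parts]
            return np.sum(subs, axis=0)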

  16. Secure quantum private information retrieval using phase-encoded queries

    Energy Technology Data Exchange (ETDEWEB)

    Olejnik, Lukasz [CERN, 1211 Geneva 23, Switzerland and Poznan Supercomputing and Networking Center, Noskowskiego 12/14, PL-61-704 Poznan (Poland)

    2011-08-15

    We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that gives the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (the inability of a querying user to steal the database contents). Compared to classical solutions, the protocol offers a substantial improvement in communication complexity. In comparison with the recent quantum private queries protocol [Phys. Rev. Lett. 100, 230502 (2008)], it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol, analyze its strengths, and conclude that this technique makes it challenging to obtain unconditional (information-theoretic) privacy; nevertheless, in addition to being simple, the protocol still offers a meaningful privacy level. The oracle used in the protocol is inspired both by classical computational PIR solutions and by the Deutsch-Jozsa oracle.

  17. Secure quantum private information retrieval using phase-encoded queries

    International Nuclear Information System (INIS)

    Olejnik, Lukasz

    2011-01-01

    We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that gives the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (the inability of a querying user to steal the database contents). Compared to classical solutions, the protocol offers a substantial improvement in communication complexity. In comparison with the recent quantum private queries protocol [Phys. Rev. Lett. 100, 230502 (2008)], it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol, analyze its strengths, and conclude that this technique makes it challenging to obtain unconditional (information-theoretic) privacy; nevertheless, in addition to being simple, the protocol still offers a meaningful privacy level. The oracle used in the protocol is inspired both by classical computational PIR solutions and by the Deutsch-Jozsa oracle.
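
    For intuition only, here is a NumPy sketch of the phase-encoding primitive such protocols are inspired by: a uniform superposition over record indices acquires the phases (-1)^f(x), exactly as in a Deutsch-Jozsa-style oracle. This is not the PIR protocol itself, merely its oracle ingredient under an assumed Boolean query function f.

        import numpy as np

        def phase_oracle_query(f, n_items):
            # Prepare a uniform superposition over n_items record indices, then
            # apply a phase oracle that tags index x with (-1)**f(x). Measurement
            # statistics in the computational basis are unchanged by the phases,
            # which is the intuition behind hiding the query contents.
            amps = np.ones(n_items) / np.sqrt(n_items)
            phases = np.array([(-1) ** f(x) for x in range(n_items)])
            return amps * phases  # resulting state vector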

  18. Heterogeneous architecture to process swarm optimization algorithms

    Directory of Open Access Journals (Sweden)

    Maria A. Dávila-Guzmán

    2014-01-01

    Full Text Available In recent years, parallel processing has become available in personal computers through co-processing units such as graphics processing units (GPUs), resulting in heterogeneous platforms. This paper presents the implementation of swarm algorithms on such a platform to optimize several benchmark functions, exploiting the algorithms' inherent parallelism and distributed control. In the swarm algorithms, each individual and each problem dimension are parallelized at the granularity of the processing system, which also offers low communication latency between individuals through the embedded processing. To evaluate the potential of swarm algorithms on graphics processing units we have implemented two of them: the particle swarm optimization algorithm and the bacterial foraging optimization algorithm. Performance is measured as the speedup of the NVIDIA GeForce GTX480 heterogeneous platform over a typical sequential processing platform; the results show that the particle swarm algorithm achieves up to 36.82x and the bacterial foraging algorithm up to 9.26x. Finally, the effect of increasing the population size is evaluated: both the dispersion and the quality of the solutions decrease despite the high acceleration, since the initial distribution of the individuals can converge to a local optimum.
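
    A minimal sequential sketch of the global-best particle swarm update serving as a baseline; hyperparameters and bounds are illustrative assumptions, and the per-particle, per-dimension independence visible in the update is exactly the parallelism a GPU implementation exploits.

        import numpy as np

        def pso(objective, dim, n_particles=64, iters=200, w=0.7, c1=1.5, c2=1.5,
                lo=-5.0, hi=5.0, rng=np.random.default_rng(0)):
            # Standard global-best PSO: every particle/dimension update is
            # independent given the current global best.
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros((n_particles, dim))
            pbest = x.copy()
            pbest_val = np.apply_along_axis(objective, 1, x)
            g = pbest[np.argmin(pbest_val)].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.apply_along_axis(objective, 1, x)
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

        # usage: pso(lambda p: float(np.sum(p**2)), dim=10)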

  19. Parallel implementation of geometric transformations

    Energy Technology Data Exchange (ETDEWEB)

    Clarke, K A; Ip, H H.S.

    1982-10-01

    An implementation of digitized picture rotation and magnification based on Weiman's algorithm is presented. On a programmable array machine, routines that perform small transformations can be coded efficiently. The method illustrates the interpolative nature of the algorithm. 6 references.
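
    A hedged sketch of interpolative picture rotation by backward mapping with bilinear interpolation; this illustrates the interpolative character noted above but is a generic formulation, not Weiman's specific method. Since each output pixel is independent, rows can be assigned to different processing elements.

        import numpy as np

        def rotate_image(img, theta):
            # Backward mapping: for each output pixel, sample the input image at
            # the inverse-rotated coordinate with bilinear interpolation.
            h, w = img.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            out = np.zeros_like(img, dtype=float)
            cos_t, sin_t = np.cos(theta), np.sin(theta)
            for y in range(h):
                for x in range(w):
                    xs = cos_t * (x - cx) + sin_t * (y - cy) + cx
                    ys = -sin_t * (x - cx) + cos_t * (y - cy) + cy
                    x0, y0 = int(np.floor(xs)), int(np.floor(ys))
                    if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                        fx, fy = xs - x0, ys - y0
                        out[y, x] = ((1 - fx) * (1 - fy) * img[y0, x0]
                                     + fx * (1 - fy) * img[y0, x0 + 1]
                                     + (1 - fx) * fy * img[y0 + 1, x0]
                                     + fx * fy * img[y0 + 1, x0 + 1])
            return out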

  20. Appendix F. Developmental enforcement algorithm definition document : predictive braking enforcement algorithm definition document.

    Science.gov (United States)

    2012-05-01

    The purpose of this document is to fully define and describe the logic flow and mathematical equations for a predictive braking enforcement algorithm intended for implementation in a Positive Train Control (PTC) system.
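
    As a loosely hedged illustration of the predictive idea only (not the document's actual equations, which model grade, train makeup, and brake propagation), a stopping-distance check might look like this, assuming constant deceleration and a fixed reaction delay:

        def predicted_stopping_distance(v, decel, reaction_time):
            # Distance covered during the reaction delay plus the kinematic
            # braking distance v**2 / (2 * decel); SI units assumed.
            return v * reaction_time + v**2 / (2.0 * decel)

        def must_enforce(v, decel, reaction_time, distance_to_target):
            # Enforce (apply a penalty brake) once the predicted stopping distance
            # reaches the remaining distance to the movement-authority target.
            return predicted_stopping_distance(v, decel, reaction_time) >= distance_to_target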