WorldWideScience

Sample records for deutsch-jozsa algorithm implemented

  1. A Cavity QED Implementation of Deutsch-Jozsa Algorithm

    OpenAIRE

    Guerra, E. S.

    2004-01-01

    The Deutsch-Jozsa algorithm is a generalization of the Deutsch algorithm, which was the first quantum algorithm proposed. We present schemes to implement the Deutsch algorithm and the Deutsch-Jozsa algorithm via cavity QED.
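
    To make the constant-versus-balanced promise concrete, here is a minimal NumPy statevector sketch of the n-qubit Deutsch-Jozsa circuit in its standard phase-oracle form (the ancilla in the |-> state is absorbed into a sign flip). The function names and example oracles are illustrative, not code from the paper above.

        import numpy as np
        from functools import reduce

        def hadamard_n(n):
            # n-fold Kronecker power of the 2x2 Hadamard matrix.
            H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
            return reduce(np.kron, [H] * n)

        def deutsch_jozsa(f, n):
            # f: {0,...,2^n - 1} -> {0,1}, promised constant or balanced.
            N = 2 ** n
            Hn = hadamard_n(n)
            state = Hn @ np.eye(N)[0]                                    # H^n |0...0>
            state = np.array([(-1) ** f(x) for x in range(N)]) * state   # phase oracle
            state = Hn @ state                                           # interfere
            p_all_zero = abs(state[0]) ** 2                              # P(measure |0...0>)
            return "constant" if np.isclose(p_all_zero, 1.0) else "balanced"

        print(deutsch_jozsa(lambda x: 0, 3))      # constant oracle -> 'constant'
        print(deutsch_jozsa(lambda x: x & 1, 3))  # balanced oracle -> 'balanced'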

  2. Implementing Deutsch-Jozsa algorithm using light shifts and atomic ensembles

    International Nuclear Information System (INIS)

    Dasgupta, Shubhrangshu; Biswas, Asoka; Agarwal, G.S.

    2005-01-01

    We present an optical scheme to implement the Deutsch-Jozsa algorithm using ac Stark shifts. The scheme uses an atomic ensemble consisting of four-level atoms interacting dispersively with a field. This leads to a Hamiltonian in the atom-field basis which is quite suitable for quantum computation. We show how one can implement the algorithm by performing proper one- and two-qubit operations. We emphasize that in our model the decoherence is expected to be minimal due to our usage of atomic ground states and freely propagating photon

  3. Implementation schemes in NMR of quantum processors and the Deutsch-Jozsa algorithm by using virtual spin representation

    International Nuclear Information System (INIS)

    Kessel, Alexander R.; Yakovleva, Natalia M.

    2002-01-01

    Schemes for the experimental realization of the main two-qubit processors for quantum computers, and of the Deutsch-Jozsa algorithm, are derived in the virtual spin representation. The results are applicable to any four quantum states possessing the properties required for quantum processor implementation, provided the virtual spin representation is used for qubit encoding. A four-dimensional Hilbert space of nuclear spin 3/2 is considered in detail for this aim

  4. Realization of seven-qubit Deutsch-Jozsa algorithm on NMR quantum computer

    International Nuclear Information System (INIS)

    Wei Daxiu; Yang Xiaodong; Luo Jun; Sun Xianping; Zeng Xizhi; Liu Maili; Ding Shangwu

    2002-01-01

    In recent years, remarkable progress in the experimental realization of quantum information processing has been made, especially based on nuclear magnetic resonance (NMR). Among quantum algorithms, the Deutsch-Jozsa algorithm has been widely studied. It can be realized on an NMR quantum computer and can also be simplified by using Cirac's scheme. First the principle of the Deutsch-Jozsa quantum algorithm is analyzed, then the authors implement the seven-qubit Deutsch-Jozsa algorithm on an NMR quantum computer

  5. Implementation of a three-qubit refined Deutsch-Jozsa algorithm using SFG quantum logic gates

    International Nuclear Information System (INIS)

    Duce, A Del; Savory, S; Bayvel, P

    2006-01-01

    In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another

  6. Implementation of a three-qubit refined Deutsch-Jozsa algorithm using SFG quantum logic gates

    Energy Technology Data Exchange (ETDEWEB)

    Duce, A Del; Savory, S; Bayvel, P [Department of Electronic and Electrical Engineering, University College London, Torrington Place, London WC1E 7JE (United Kingdom)

    2006-05-31

    In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another.

  7. Implementation of a three-qubit refined Deutsch Jozsa algorithm using SFG quantum logic gates

    Science.gov (United States)

    DelDuce, A.; Savory, S.; Bayvel, P.

    2006-05-01

    In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another.

  8. Discrimination of unitary transformations in the Deutsch-Jozsa algorithm: Implications for thermal-equilibrium-ensemble implementations

    International Nuclear Information System (INIS)

    Collins, David

    2010-01-01

    A general framework for regarding oracle-assisted quantum algorithms as tools for discriminating among unitary transformations is described. This framework is applied to the Deutsch-Jozsa problem and all possible quantum algorithms which solve the problem with certainty using oracle unitaries in a particular form are derived. It is also used to show that any quantum algorithm that solves the Deutsch-Jozsa problem starting with a quantum system in a particular class of initial, thermal equilibrium-based states of the type encountered in solution-state NMR can only succeed with greater probability than a classical algorithm when the problem size n exceeds ∼10^5.

  9. Unifying parameter estimation and the Deutsch-Jozsa algorithm for continuous variables

    International Nuclear Information System (INIS)

    Zwierz, Marcin; Perez-Delgado, Carlos A.; Kok, Pieter

    2010-01-01

    We reveal a close relationship between quantum metrology and the Deutsch-Jozsa algorithm on continuous-variable quantum systems. We develop a general procedure, characterized by two parameters, that unifies parameter estimation and the Deutsch-Jozsa algorithm. Depending on which parameter we keep constant, the procedure implements either the parameter-estimation protocol or the Deutsch-Jozsa algorithm. The parameter-estimation part of the procedure attains the Heisenberg limit and is therefore optimal. Due to the use of approximate normalizable continuous-variable eigenstates, the Deutsch-Jozsa algorithm is probabilistic. The procedure estimates a value of an unknown parameter and solves the Deutsch-Jozsa problem without the use of any entanglement.

  10. Quantum Cryptography Based on the Deutsch-Jozsa Algorithm

    Science.gov (United States)

    Nagata, Koji; Nakamura, Tadao; Farouk, Ahmed

    2017-09-01

    Recently, secure quantum key distribution based on Deutsch's algorithm using the Bell state was reported (Nagata and Nakamura, Int. J. Theor. Phys. doi: 10.1007/s10773-017-3352-4, 2017). Our aim is to extend that result to a multipartite system. In this paper, we propose a high-speed key distribution protocol. We present secure quantum key distribution based on a special Deutsch-Jozsa algorithm using Greenberger-Horne-Zeilinger states. Bob has promised to use a function f of one of two kinds: either the value of f(x) is constant for all values of x, or else the value of f(x) is balanced, that is, equal to 1 for exactly half of the possible x and 0 for the other half. Here, we introduce an additional condition on the function when it is balanced. Our quantum key distribution outperforms its classical counterpart by a factor O(2^N).

  11. Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms

    Science.gov (United States)

    Johansson, Niklas; Larsson, Jan-Åke

    2017-09-01

    A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problems, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable on a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.

  12. Non-Markovianity-assisted high-fidelity Deutsch-Jozsa algorithm in diamond

    Science.gov (United States)

    Dong, Yang; Zheng, Yu; Li, Shen; Li, Cong-Cong; Chen, Xiang-Dong; Guo, Guang-Can; Sun, Fang-Wen

    2018-01-01

    The memory effects in non-Markovian quantum dynamics can induce the revival of quantum coherence, which is believed to provide important physical resources for quantum information processing (QIP). However, no real quantum algorithms have been demonstrated with the help of such memory effects. Here, we experimentally implemented a non-Markovianity-assisted high-fidelity refined Deutsch-Jozsa algorithm (RDJA) with a solid spin in diamond. The memory effects can induce pronounced non-monotonic variations in the RDJA results, which were confirmed to follow a non-Markovian quantum process by measuring the non-Markovianity of the spin system. By applying the memory effects as physical resources with the assistance of dynamical decoupling, the probability of success of RDJA was elevated above 97% in the open quantum system. This study not only demonstrates that the non-Markovianity is an important physical resource but also presents a feasible way to employ this physical resource. It will stimulate the application of the memory effects in non-Markovian quantum dynamics to improve the performance of practical QIP.

  13. Quantum entanglement and quantum computational algorithms

    Indian Academy of Sciences (India)

    We demonstrate that the one- and the two-bit Deutsch-Jozsa algorithm does not require entanglement and can be mapped onto a classical optical scheme. It is only for three and more input bits that the DJ algorithm requires the implementation of entangling transformations and in these cases it is impossible to implement ...

  14. Optical simulation of quantum algorithms using programmable liquid-crystal displays

    International Nuclear Information System (INIS)

    Puentes, Graciana; La Mela, Cecilia; Ledesma, Silvia; Iemmi, Claudio; Paz, Juan Pablo; Saraceno, Marcos

    2004-01-01

    We present a scheme to perform an all optical simulation of quantum algorithms and maps. The main components are lenses to efficiently implement the Fourier transform and programmable liquid-crystal displays to introduce space dependent phase changes on a classical optical beam. We show how to simulate Deutsch-Jozsa and Grover's quantum algorithms using essentially the same optical array programmed in two different ways

  15. Quantum entanglement and quantum computational algorithms

    Indian Academy of Sciences (India)

    Abstract. The existence of entangled quantum states gives extra power to quantum computers over their classical counterparts. Quantum entanglement shows up qualitatively at the level of two qubits. We demonstrate that the one- and the two-bit Deutsch-Jozsa algorithm does not require entanglement and can be mapped ...

  16. Initialization-free generalized Deutsch-Jozsa algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chi, Dong Pyo [School of Mathematical Sciences, Seoul National University, Seoul (Korea, Republic of)]. E-mail: dpchi@math.snu.ac.kr; Kim, Jinsoo [School of Electrical Engineering and Computer Science, Seoul National University, Seoul (Korea)]. E-mail: jkim@ee.snu.ac.kr; Lee, Soojoon [School of Mathematical Sciences, Seoul National University, Seoul (Korea)]. E-mail: level@math.snu.ac.kr

    2001-06-29

    We generalize the Deutsch-Jozsa algorithm by exploiting summations of the roots of unity. The generalized algorithm distinguishes a wider class of functions promised to be either constant or many to one and onto an evenly spaced range. As previously, the generalized quantum algorithm solves this problem using a single functional evaluation. We also consider the problem of distinguishing constant and evenly balanced functions and present a quantum algorithm for this problem that does not require any initialization of an auxiliary register involved in the process of functional evaluation and after solving the problem recovers the initial state of an auxiliary register. (author)

  17. Demonstration of two-qubit algorithms with a superconducting quantum processor.

    Science.gov (United States)

    DiCarlo, L; Chow, J M; Gambetta, J M; Bishop, Lev S; Johnson, B R; Schuster, D I; Majer, J; Blais, A; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2009-07-09

    Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact, such as factoring large numbers and searching databases. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to simultaneously meet requirements that are in conflict: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance, cold ion trap and optical systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch-Jozsa quantum algorithms. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.

  18. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant speed advantage over classical computation. This is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
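
    As a reference point for the review above, a bare-bones NumPy sketch of Grover's iteration for a single marked element (oracle as a sign flip, followed by inversion about the mean); the names and the floor(pi/4*sqrt(N)) iteration count are the textbook choices, not code from the paper.

        import numpy as np

        def grover(marked, n):
            N = 2 ** n
            state = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition
            for _ in range(int(np.floor(np.pi / 4 * np.sqrt(N)))):
                state[marked] *= -1.0                      # oracle: flip marked amplitude
                state = 2.0 * state.mean() - state         # inversion about the mean
            return int(np.argmax(np.abs(state) ** 2))

        print(grover(marked=5, n=4))   # returns 5 with probability ~0.96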

  19. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    Science.gov (United States)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms, we assume that it is possible to perfectly generate superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions of input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and evenly-balanced functions, worked with the one-sided error property. For Simon's algorithm, the success probability of the generalized algorithm on input sets of arbitrary cardinality with equiprobable superpositions is the same as that of the original, since the key property, namely that in the 2-to-1 case the measured strings are all those with dot product zero with the string being sought, is not lost.

  20. A strategy for quantum algorithm design assisted by machine learning

    Science.gov (United States)

    Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung

    2014-07-01

    We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.

  1. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1 versions).
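
    For reference, a compact Python version of the Needleman-Wunsch recurrence described above, returning only the optimal global alignment score; the scoring values are illustrative assumptions, not the paper's.

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
            n, m = len(a), len(b)
            # DP table: F[i][j] = best score aligning a[:i] with b[:j].
            F = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                F[i][0] = i * gap
            for j in range(1, m + 1):
                F[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                                  F[i - 1][j] + gap,     # gap in b
                                  F[i][j - 1] + gap)     # gap in a
            return F[n][m]

        print(needleman_wunsch("GATTACA", "GCATGCU"))   # example score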

  2. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  3. Efficient Implementation Algorithms for Homogenized Energy Models

    National Research Council Canada - National Science Library

    Braun, Thomas R; Smith, Ralph C

    2005-01-01

    ... for real-time control implementation. In this paper, we develop algorithms employing lookup tables which permit the high speed implementation of formulations which incorporate relaxation mechanisms and electromechanical coupling...

  4. An implementation of the Heaviside algorithm

    International Nuclear Information System (INIS)

    Dimovski, I.H.; Spiridonova, M.N.

    2011-01-01

    The so-called Heaviside algorithm based on the operational calculus approach is intended for solving initial value problems for linear ordinary differential equations with constant coefficients. We use it in the framework of Mikusinski's operational calculus. A description and implementation of the Heaviside algorithm using a computer algebra system are considered. Special attention is paid to the features making this implementation efficient. Illustrative examples are included
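
    As a sketch of the operational-calculus idea, the following uses SymPy's Laplace transform, a close cousin of the Mikusinski calculus used in the paper, to turn a constant-coefficient IVP into an algebraic equation; the example equation is mine, not from the paper.

        import sympy as sp

        t, s = sp.symbols("t s", positive=True)
        Y = sp.Function("Y")

        # Solve y'' + 3y' + 2y = 1 with y(0) = y'(0) = 0: the transform makes
        # the IVP algebraic in the s-domain, then we invert back to t.
        Ys = sp.solve(sp.Eq(s**2 * Y(s) + 3 * s * Y(s) + 2 * Y(s), 1 / s), Y(s))[0]
        y = sp.inverse_laplace_transform(Ys, s, t)
        print(sp.simplify(y))   # 1/2 - exp(-t) + exp(-2*t)/2 (up to Heaviside(t) factors)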

  5. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...

  6. FPGA Implementation of Computer Vision Algorithm

    OpenAIRE

    Zhou, Zhonghua

    2014-01-01

    Computer vision algorithms, which play a significant role in vision processing, are widely applied in many areas such as geology survey, traffic management and medical care. Most situations require the processing to be real-time, in other words, as fast as possible. Field Programmable Gate Arrays (FPGAs) have the advantage of a parallel fabric, compared to the serial execution of CPUs, which makes the FPGA a perfect platform for implementing vision algorithms. The...

  7. AES ALGORITHM IMPLEMENTATION IN PROGRAMMING LANGUAGES

    Directory of Open Access Journals (Sweden)

    Luminiţa DEFTA

    2010-12-01

    Full Text Available Information encryption represents the usage of an algorithm to convert an unknown message into an encrypted one. It is used to protect the data against unauthorized access. Protected data can be stored on a media device or can be transmitted through the network. In this paper we describe a concrete implementation of the AES algorithm in the Java programming language (available from the Java Development Kit 6 libraries) and C (using the OpenSSL library). AES (Advanced Encryption Standard) is a symmetric key encryption algorithm formally adopted by the U.S. government, selected after a long process of standardization.

  8. Interfacing external quantum devices to a universal quantum computer.

    Directory of Open Access Journals (Sweden)

    Antonio A Lagana

    Full Text Available We present a scheme for using external quantum devices with the previously constructed universal quantum computer. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well-known oracle-based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and Grover algorithms, using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer.

  9. Parallel GPU implementation of iterative PCA algorithms.

    Science.gov (United States)

    Andrecut, M

    2009-11-01

    Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets, the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).
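
    The orthogonality fix that GS-PCA brings over plain NIPALS can be sketched in a few lines of serial NumPy; this is an illustrative reimplementation under common conventions (tolerance, deflation scheme), not the paper's CUBLAS code.

        import numpy as np

        def gs_pca(X, n_components, iters=500, tol=1e-10):
            X = X - X.mean(axis=0)                   # center the data
            scores, loadings = [], []
            for _ in range(n_components):
                t = X[:, 0].copy()                   # crude starting score vector
                for _ in range(iters):
                    p = X.T @ t
                    for q in loadings:               # Gram-Schmidt vs earlier loadings
                        p -= (q @ p) * q
                    p /= np.linalg.norm(p)
                    t_new = X @ p
                    if np.linalg.norm(t_new - t) < tol:
                        t = t_new
                        break
                    t = t_new
                scores.append(t)
                loadings.append(p)
                X = X - np.outer(t, p)               # deflate the fitted component
            return np.array(scores).T, np.array(loadings).T

        T, P = gs_pca(np.random.rand(100, 8), 3)
        print(np.round(P.T @ P, 6))                  # ~identity: orthonormal loadings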

  10. Low Rank Approximation Algorithms, Implementation, Applications

    CERN Document Server

    Markovsky, Ivan

    2012-01-01

    Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...

  11. Implementation of trigonometric function using CORDIC algorithms

    Science.gov (United States)

    Mokhtar, A. S. N.; Ayub, M. I.; Ismail, N.; Daud, N. G. Nik

    2018-02-01

    In 1959, Jack E. Volder presented a brand-new formula for the real-time solution of the equations arising in navigation systems. This new algorithm was the most beneficial replacement of analog navigation systems by digital ones. The CORDIC (Coordinate Rotation Digital Computer) algorithm is used for the rapid calculation of elementary functions like trigonometric functions, multiplication, division and logarithm functions, as well as various conversions such as the conversion from rectangular to polar coordinates and conversions between binary coded information. At present, the CORDIC formula has many applications in the fields of communication, signal processing, 3-D graphics, and others. This paper presents a trigonometric function implementation using the CORDIC algorithm in rotation mode for the circular coordinate system. The CORDIC technique is used to generate output angles in the range 0° to 90°, and error analysis is considered. The results showed that the average percentage error is about 0.042% for angles between 0° and 90°, but rose to 45% for angles of 90° and above. This method is therefore very accurate in the first quadrant. The mirror properties method is used to find angles in the 2nd, 3rd and 4th quadrants.
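
    For illustration, a floating-point Python model of rotation-mode CORDIC for the circular coordinate system as described above; the iteration count is an assumption, and a hardware version would use fixed-point shifts instead of multiplications by 2**-i.

        import math

        def cordic_sin_cos(angle_rad, iterations=32):
            # Precomputed arctan table and total gain for this iteration count.
            angles = [math.atan(2.0 ** -i) for i in range(iterations)]
            K = 1.0
            for i in range(iterations):
                K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = 1.0, 0.0, angle_rad
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0            # rotate toward residual angle
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return y * K, x * K                         # (sin, cos), gain-corrected

        s, c = cordic_sin_cos(math.radians(30))
        print(round(s, 6), round(c, 6))                 # ~0.5, ~0.866025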

  12. Concurrent applicative implementations of nondeterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Salter, R

    1983-01-01

    The author introduces a methodology for utilizing concurrency in place of backtracking in the implementation of nondeterministic algorithms. This is achieved in an applicative setting through the use of the Friedman-Wise multiprogramming primitive frons, and a paradigm which views the action of nondeterministic algorithms as one of data structure construction. The element by element nondeterminism arising from a linearized search is replaced by a control structure which is oriented towards constructing sets of partial computations. This point of view is facilitated by the use of suspensions, which allow control disciplines to be embodied in the form of conceptual data structures that in reality manifest themselves only for purposes of control. He applies this methodology to the class of problems usually solved through the use of simple backtracking (e.g. 'eight queens'), and to a problem presented by Lindstrom (1979) to illustrate the use of coroutine controlled backtracking, to produce backtrack-free solutions. The solution to the latter illustrates the coroutine capability of suspended structures, but also demonstrates a need for further investigations into resolving problems of process communication in applicative languages. 14 references.

  13. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

    that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method...
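
    As an illustration of the ART family mentioned above, a few Kaczmarz sweeps in NumPy; the relaxation parameter and stopping rule are assumptions, and this is not the package's code.

        import numpy as np

        def art(A, b, sweeps=10, relax=1.0):
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0:
                        continue
                    # Project x onto the hyperplane defined by equation i.
                    r = (b[i] - A[i] @ x) / row_norms[i]
                    x += relax * r * A[i]
            return x

        A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
        x_true = np.array([1.0, -0.5])
        print(art(A, A @ x_true, sweeps=50))   # approaches x_true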

  14. Categorizing Variations of Student-Implemented Sorting Algorithms

    Science.gov (United States)

    Taherkhani, Ahmad; Korhonen, Ari; Malmi, Lauri

    2012-01-01

    In this study, we examined freshman students' sorting algorithm implementations in a data structures and algorithms course in two phases: at the beginning of the course, before the students received any instruction on sorting algorithms, and after a lecture on sorting algorithms. The analysis revealed that many students have insufficient…

  15. Searching Algorithms Implemented on Probabilistic Systolic Arrays

    Czech Academy of Sciences Publication Activity Database

    Kramosil, Ivan

    1996-01-01

    Roč. 25, č. 1 (1996), s. 7-45 ISSN 0308-1079 R&D Projects: GA ČR GA201/93/0781 Keywords : searching algorithms * probabilistic algorithms * systolic arrays * parallel algorithms Impact factor: 0.214, year: 1996

  16. Implementation of a Wavefront-Sensing Algorithm

    Science.gov (United States)

    Smith, Jeffrey S.; Dean, Bruce; Aronstein, David

    2013-01-01

    A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.

  17. Parallel Implementation of the Terrain Masking Algorithm

    Science.gov (United States)

    1994-03-01

    contains behavior rules which can define a computation or an algorithm. It can communicate with other process nodes, it can contain local data, and it can...terrain masking calculation is being performed. It is this algorithm that consumes about seventy percent of the total terrain masking calculation time

  18. Autonomous intelligent vehicles theory, algorithms, and implementation

    CERN Document Server

    Cheng, Hong

    2011-01-01

    Here is the latest on intelligent vehicles, covering object and obstacle detection and recognition and vehicle motion control. Includes a navigation approach using global views; introduces algorithms for lateral and longitudinal motion control and more.

  19. Object-Oriented Implementation of Adaptive Mesh Refinement Algorithms

    Directory of Open Access Journals (Sweden)

    William Y. Crutchfield

    1993-01-01

    Full Text Available We describe C++ classes that simplify development of adaptive mesh refinement (AMR algorithms. The classes divide into two groups, generic classes that are broadly useful in adaptive algorithms, and application-specific classes that are the basis for our AMR algorithm. We employ two languages, with C++ responsible for the high-level data structures, and Fortran responsible for low-level numerics. The C++ implementation is as fast as the original Fortran implementation. Use of inheritance has allowed us to extend the original AMR algorithm to other problems with greatly reduced development time.

  20. Implementation of fuzzy logic control algorithm in embedded ...

    African Journals Online (AJOL)

    Fuzzy logic control algorithm solves problems that are difficult to address with traditional control techniques. This paper describes an implementation of fuzzy logic control algorithm using inexpensive hardware as well as how to use fuzzy logic to tackle a specific control problem without any special software tools. As a case ...

  1. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    Science.gov (United States)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners understand the AES encryption algorithm and the GF(2^8) multiplication which is necessary to correctly implement AES [1]. This method can be applied on processors with word length 32 or above, FPGAs and others, and correspondingly it can be implemented in VHDL, Verilog, VB and other languages.
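
    Since correct GF(2^8) multiplication is singled out above, here is a minimal Python sketch of it using the AES reduction polynomial x^8 + x^4 + x^3 + x + 1; table-based AES implementations precompute exactly these products. A sketch for illustration, not the paper's code.

        def xtime(a):
            # Multiply by x (i.e., by 2) in GF(2^8).
            a <<= 1
            if a & 0x100:
                a ^= 0x11B          # reduce modulo the AES polynomial
            return a & 0xFF

        def gf_mul(a, b):
            # Russian-peasant multiplication in GF(2^8).
            result = 0
            while b:
                if b & 1:
                    result ^= a
                a = xtime(a)
                b >>= 1
            return result

        print(hex(gf_mul(0x57, 0x83)))   # 0xc1, the FIPS-197 worked example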

  2. Implementation of Chaid Algorithm: A Hotel Case

    Directory of Open Access Journals (Sweden)

    Celal Hakan Kagnicioglu

    2016-01-01

    Full Text Available Today, companies plan their activities depending on efficiency and effectiveness. In order to plan future activities they need historical data coming from outside and inside the company. However, this data comes in amounts too huge to understand easily. Since this huge amount of data creates complexity in business for many industries, such as the hospitality industry, reliable, accurate and fast access to this data is one of the greatest problems; management of this data is another. In order to analyze this huge amount of data, Data Mining (DM) tools can be used effectively. In this study, after a brief overview of the fundamentals of data mining, the Chi-Squared Automatic Interaction Detection (CHAID) algorithm, one of the most widely used DM tools, is introduced. Using the CHAID algorithm, the most used materials in the room cleaning process, and the relations among these materials, are determined based on data from a five-star hotel. The analysis shows that while some variables have a strong relation with the number of rooms cleaned in the hotel, others have a weak relation or none.

  3. Implementation of Chaid Algorithm: A Hotel Case

    Directory of Open Access Journals (Sweden)

    Celal Hakan Kağnicioğlu

    2014-11-01

    Full Text Available Today, companies plan their activities depending on efficiency and effectiveness. In order to plan future activities they need historical data coming from outside and inside the company. However, this data comes in amounts too huge to understand easily. Since this huge amount of data creates complexity in business for many industries, such as the hospitality industry, reliable, accurate and fast access to this data is one of the greatest problems; management of this data is another. In order to analyze this huge amount of data, Data Mining (DM) tools can be used effectively. In this study, after a brief overview of the fundamentals of data mining, the Chi-Squared Automatic Interaction Detection (CHAID) algorithm, one of the most widely used DM tools, is introduced. Using the CHAID algorithm, the most used materials in the room cleaning process, and the relations among these materials, are determined based on data from a five-star hotel. The analysis shows that while some variables have a strong relation with the number of rooms cleaned in the hotel, others have a weak relation or none.

  4. Comparison of tracking algorithms implemented in OpenCV

    Directory of Open Access Journals (Sweden)

    Janku Peter

    2016-01-01

    Full Text Available Computer vision is a very progressive and modern part of computer science. From a scientific point of view, theoretical aspects of computer vision algorithms prevail in many papers and publications. The underlying theory is really important, but on the other hand, the final implementation of an algorithm significantly affects its performance and robustness. For this reason, this paper compares real implementations of tracking algorithms (one part of the computer vision problem) that can be found in the very popular OpenCV library. Moreover, possibilities for optimization are discussed.

  5. FPGA Implementation of a Frame Synchronization Algorithm for Powerline Communications

    Directory of Open Access Journals (Sweden)

    S. Tsakiris

    2009-09-01

    Full Text Available This paper presents an FPGA implementation of a pilot-based time synchronization scheme employing orthogonal frequency division multiplexing for powerline communication channels. The functionality of the algorithm is analyzed and tested over a real powerline residential network. For this purpose, an appropriate transmitter circuit, implemented by an FPGA, and suitable coupling circuits are constructed. The system has been developed using the VHDL language on Nallatech XtremeDSP development kits. The communication system operates in the baseband up to 30 MHz. Measurements of the algorithm's performance, in terms of the number of detected frames and the timing offset error, are taken and compared to simulations of existing algorithms.

  6. EV Charging Algorithm Implementation with User Price Preference

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Bin; Hu, Boyang; Qiu, Charlie; Chu, Peter; Gadh, Rajit

    2015-02-17

    In this paper, we propose and implement a smart Electric Vehicle (EV) charging algorithm to control EV charging infrastructure according to users' price preferences. EVSEs (Electric Vehicle Supply Equipment), equipped with bidirectional communication devices and smart meters, can be remotely monitored by the proposed charging algorithm through the EV control center and a mobile app. On the server side, an ARIMA model is utilized to fit historical charging load data and perform day-ahead prediction. A pricing strategy with an energy bidding policy is proposed and implemented to generate a charging price list that is broadcast to EV users through the mobile app. On the user side, EV drivers can submit their price preferences and daily travel schedules to negotiate with the control center, so as to consume the expected energy and minimize charging cost simultaneously. The proposed algorithm is tested and validated through experimental implementations in UCLA parking lots.

  7. Super-Encryption Implementation Using Monoalphabetic Algorithm and XOR Algorithm for Data Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Aulia, Indra

    2018-03-01

    The exchange of data that occurs offline and online is very vulnerable to the threat of data theft. In general, cryptography is the science and art of maintaining data secrecy. Encryption is a cryptographic operation in which data is transformed into ciphertext, something unreadable and meaningless that cannot be read or understood by other parties. In super-encryption, two or more encryption algorithms are combined to make the result more secure. In this work, a monoalphabetic algorithm and an XOR algorithm are combined to form a super-encryption. The monoalphabetic algorithm works by changing a particular letter into a new letter based on an existing keyword, while the XOR algorithm works by using the logical XOR operation. Since the monoalphabetic algorithm is a classical cryptographic algorithm and the XOR algorithm is a modern one, this scheme is expected to be both easy to implement and more secure. The combination of the two algorithms is capable of securing the data and restoring it back to its original form (plaintext), so data integrity is still ensured.
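
    A minimal Python sketch of the two-layer scheme described above: a keyword-derived substitution alphabet followed by a repeating-key XOR. The keys and the keyword-alphabet construction are illustrative assumptions, not the paper's choices.

        import string

        def make_substitution(keyword):
            # Keyword letters first, then the remaining alphabet, a classical recipe.
            seen, alphabet = [], string.ascii_uppercase
            for ch in keyword.upper() + alphabet:
                if ch in alphabet and ch not in seen:
                    seen.append(ch)
            return dict(zip(alphabet, seen)), dict(zip(seen, alphabet))

        def xor_bytes(data, key):
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        enc_map, dec_map = make_substitution("ZEBRAS")
        plaintext = "ATTACKATDAWN"
        stage1 = "".join(enc_map[c] for c in plaintext)    # substitution layer
        cipher = xor_bytes(stage1.encode(), b"key")        # XOR layer

        # Decryption reverses the layers in the opposite order.
        stage1_back = xor_bytes(cipher, b"key").decode()
        recovered = "".join(dec_map[c] for c in stage1_back)
        print(recovered == plaintext)   # True: the original plaintext is restored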

  8. Modification of the MSDR algorithm and its implementation on graph clustering

    Science.gov (United States)

    Prastiwi, D.; Sugeng, K. A.; Siswantining, T.

    2017-07-01

    Maximum Standard Deviation Reduction (MSDR) is a graph clustering algorithm to minimize the distance variation within a cluster. In this paper we propose a modified MSDR by replacing one technical step in MSDR which uses polynomial regression, with a new and simpler step. This leads to our new algorithm called Modified MSDR (MMSDR). We implement the new algorithm to separate a domestic flight network of an Indonesian airline into two large clusters. Further analysis allows us to discover a weak link in the network, which should be improved by adding more flights.

  9. Research and Implementation of the Practical Texture Synthesis Algorithms

    Institute of Scientific and Technical Information of China (English)

    孙家广; 周毅

    1991-01-01

    How to generate realistic and esthetic pictures of objects is an important subject of computer graphics. The techniques of mapping textures onto the surfaces of an object in 3D space are efficient approaches for this purpose. We developed and implemented algorithms for generating objects with the appearance of stone, wood grain, ice lattice, brick, doors and windows on Apollo workstations. All the algorithms have been incorporated into the 3D geometry modelling system (GEMS) developed by the CAD Center of Tsinghua University. This paper emphasizes the wood grain and the ice lattice algorithms.

  10. Study of hardware implementations of fast tracking algorithms

    International Nuclear Information System (INIS)

    Song, Z.; Huang, G.; Wang, D.; Lentdecker, G. De; Dong, J.; Léonard, A.; Robert, F.; Yang, Y.

    2017-01-01

    Real-time track reconstruction at high event rates is a major challenge for future experiments in high energy physics. To perform pattern-recognition and track fitting, artificial retina or Hough transformation methods have been introduced in the field which have to be implemented in FPGA firmware. In this note we report on a case study of a possible FPGA hardware implementation approach of the retina algorithm based on a Floating-Point core. Detailed measurements with this algorithm are investigated. Retina performance and capabilities of the FPGA are discussed along with perspectives for further optimization and applications.

  11. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid version of a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm consisting of the minimization of a cost function, achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternating minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel in the same iteration since they are independent, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different multicore architectures such as multicore CPUs, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
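
    The chessboard update pattern described above can be sketched with NumPy masking; here a generic four-neighbour smoothing step stands in for the paper's cost-function minimization, so the update rule itself is an illustrative assumption.

        import numpy as np

        def red_black_sweep(u):
            # One red/black sweep: cells of the same colour are independent, so each
            # half-sweep could be executed fully in parallel.
            for parity in (0, 1):                      # red pixels, then black
                mask = (np.add.outer(np.arange(u.shape[0]),
                                     np.arange(u.shape[1])) % 2) == parity
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u[mask] = avg[mask]                    # update only this colour
            return u

        u = np.random.rand(64, 64)
        for _ in range(100):
            u = red_black_sweep(u)                     # field smooths toward consensus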

  12. On a new implementation of the Lanczos algorithm

    International Nuclear Information System (INIS)

    Caurier, E.; Zuker, A.P.; Poves, A.

    1991-01-01

    The new implementation proposed is based on a block labelling scheme described in detail. Time reversal, f-projection, sum rule pivots and strength functions are discussed with the aid of the new implementation of the Lanczos algorithm. The energetics and magnetic dipole behaviour of ⁴⁸Ti are studied as examples illustrating the applications of the method. (G.P.) 9 refs.; 4 figs.; 1 tab

  13. An analytic parton shower. Algorithms, implementation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Sebastian

    2012-06-15

    The realistic simulation of particle collisions is an indispensable tool to interpret the data measured at high-energy colliders, for example the now running Large Hadron Collider at CERN. Collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to be able to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We show a detailed description of the algorithms, their implementation and the interfaces to the event generator WHIZARD. Moreover we discuss the implementation of a MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  14. An analytic parton shower. Algorithms, implementation and validation

    International Nuclear Information System (INIS)

    Schmidt, Sebastian

    2012-06-01

    The realistic simulation of particle collisions is an indispensable tool to interpret the data measured at high-energy colliders, for example the now running Large Hadron Collider at CERN. Collisions at these colliders are usually simulated in the form of exclusive events. This thesis focuses on the perturbative QCD part involved in the simulation of these events, particularly parton showers and the consistent combination of parton showers and matrix elements. We present an existing parton shower algorithm for emissions off final state partons along with some major improvements. Moreover, we present a new parton shower algorithm for emissions off incoming partons. The aim of these particular algorithms, called analytic parton shower algorithms, is to be able to calculate the probabilities for branchings and for whole events after the event has been generated. This allows a reweighting procedure to be applied after the events have been simulated. We show a detailed description of the algorithms, their implementation and the interfaces to the event generator WHIZARD. Moreover we discuss the implementation of a MLM-type matching procedure and an interface to the shower and hadronization routines from PYTHIA. Finally, we compare several predictions by our implementation to experimental measurements at LEP, Tevatron and LHC, as well as to predictions obtained using PYTHIA. (orig.)

  15. Efficient Implementation of Nested-Loop Multimedia Algorithms

    Directory of Open Access Journals (Sweden)

    Kittitornkun Surin

    2001-01-01

    Full Text Available A novel dependence graph representation called the multiple-order dependence graph for nested-loop formulated multimedia signal processing algorithms is proposed. It allows a concise representation of an entire family of dependence graphs. This powerful representation facilitates the development of innovative implementation approaches for nested-loop formulated multimedia algorithms such as motion estimation, matrix-matrix product, 2D linear transform, and others. In particular, an algebraic linear mapping (assignment and scheduling) methodology can be applied to implement such algorithms on an array of simple processing elements. The feasibility of this new approach is demonstrated in three major target architectures: application-specific integrated circuit (ASIC), field programmable gate array (FPGA), and a programmable clustered VLIW processor.

  16. An algorithm, implementation and execution ontology design pattern

    NARCIS (Netherlands)

    Lawrynowicz, A.; Esteves, D.; Panov, P.; Soru, T.; Dzeroski, S.; Vanschoren, J.

    2016-01-01

    This paper describes an ontology design pattern for modeling algorithms, their implementations and executions. This pattern is derived from the research results on data mining/machine learning ontologies, but is more generic. We argue that the proposed pattern will foster the development of

  17. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    A. Alexandre Trindade

    2003-02-01

    Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm, and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use is illustrated.

  18. Implementation of anomaly detection algorithms for detecting transmission control protocol synchronized flooding attacks

    CSIR Research Space (South Africa)

    Mkuzangwe, NNP

    2015-08-01

    Full Text Available This work implements two anomaly detection algorithms for detecting Transmission Control Protocol Synchronized (TCP SYN) flooding attacks. The two algorithms are an adaptive threshold algorithm and a cumulative sum (CUSUM) based algorithm...
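
    For concreteness, a one-sided CUSUM change detector of the kind referred to above; the drift and threshold values are assumptions, and the monitored feature (e.g., a SYN/FIN ratio) is an illustrative choice.

        def cusum(samples, target_mean, drift=0.5, threshold=5.0):
            # Return the index at which an upward mean shift is flagged, or None.
            s = 0.0
            for i, x in enumerate(samples):
                s = max(0.0, s + (x - target_mean - drift))   # accumulate excess
                if s > threshold:
                    return i
            return None

        normal = [1.0, 1.2, 0.8, 1.1, 0.9]     # e.g., SYN/FIN ratio near 1
        attack = [3.0, 3.5, 4.0, 3.8]          # sustained excess of SYN packets
        print(cusum(normal + attack, target_mean=1.0))   # flags at index 7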

  19. Implementation of the Grover search algorithm with Josephson charge qubits

    International Nuclear Information System (INIS)

    Zheng Xiaohu; Dong Ping; Xue Zhengyuan; Cao Zhuoliang

    2007-01-01

    A scheme of implementing the Grover search algorithm based on Josephson charge qubits has been proposed, which would be a key step to scale more complex quantum algorithms and very important for constructing a real quantum computer via Josephson charge qubits. The present scheme is simple but fairly efficient, and easily manipulated because any two-charge-qubit can be selectively and effectively coupled by a common inductance. More manipulations can be carried out before decoherence sets in. Our scheme can be realized within the current technology

  20. FPGA-Based Implementation of Lithuanian Isolated Word Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes the FPGA-based implementation of a Lithuanian isolated word recognition algorithm. An FPGA is selected for parallel process implementation using VHDL to ensure fast signal processing at a low-rate clock signal. Cepstrum analysis was applied to feature extraction in voice. The dynamic time warping algorithm was used to compare the vectors of cepstrum coefficients. A library of features for 100 words was created and stored in the internal FPGA BRAM memory. Experimental testing with speaker-dependent records demonstrated a recognition rate of 94%. A recognition rate of 58% was achieved for speaker-independent records. Calculation of cepstrum coefficients lasted 8.52 ms at a 50 MHz clock, while 100 DTWs took 66.56 ms at a 25 MHz clock. Article in Lithuanian.
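
    The DTW comparison step mentioned above, sketched for one-dimensional feature sequences; an absolute-difference local cost is assumed, whereas cepstral vectors would use a vector norm instead.

        def dtw(a, b):
            n, m = len(a), len(b)
            INF = float("inf")
            D = [[INF] * (m + 1) for _ in range(n + 1)]
            D[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])          # local distance
                    D[i][j] = cost + min(D[i - 1][j],        # insertion
                                         D[i][j - 1],        # deletion
                                         D[i - 1][j - 1])    # match
            return D[n][m]

        print(dtw([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 2, 1]))  # small distance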

  1. A high performance hardware implementation image encryption with AES algorithm

    Science.gov (United States)

    Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab

    2011-06-01

    This paper describes the implementation of a high-speed encryption algorithm with high throughput for encrypting images. We select the highly secure symmetric key encryption algorithm AES (Advanced Encryption Standard) and increase its speed and throughput using a four-stage pipeline technique, a control unit based on logic gates, optimal design of the multiplier blocks in the MixColumns phase, and simultaneous production of keys and rounds. This procedure makes AES suitable for fast image encryption. An implementation of 128-bit AES on an Altera FPGA has been completed, with the following results: a throughput of 6 Gbps at 471 MHz. The time to encrypt a test image of size 32x32 is 1.15 ms.

  2. Pipeline Implementation of Polyphase PSO for Adaptive Beamforming Algorithm

    Directory of Open Access Journals (Sweden)

    Shaobing Huang

    2017-01-01

    Full Text Available Adaptive beamforming is a powerful technique for anti-interference, where searching and tracking optimal solutions are a great challenge. In this paper, a partial Particle Swarm Optimization (PSO) algorithm is proposed to track the optimal solution of an adaptive beamformer, due to its great global searching capability. Also, due to its naturally parallel searching capabilities, a novel Field Programmable Gate Array (FPGA) pipeline architecture using a polyphase filter bank structure is designed. In order to perform computations with large dynamic range and high precision, the proposed implementation uses an efficient user-defined floating-point arithmetic. In addition, a polyphase architecture is proposed to achieve a fully pipelined implementation. In the case of PSO with a large population, the polyphase architecture can significantly save hardware resources while achieving high performance. Finally, the simulation results are presented by cosimulation with ModelSim and SIMULINK.
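
    A bare-bones serial PSO of the kind being pipelined above; the swarm size, inertia and acceleration constants are textbook defaults rather than the paper's values, and the sphere cost stands in for the beamforming objective.

        import numpy as np

        def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
            v = np.zeros_like(x)                            # particle velocities
            pbest = x.copy()                                # personal bests
            pbest_val = np.apply_along_axis(cost, 1, x)
            g = pbest[pbest_val.argmin()].copy()            # global best
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                vals = np.apply_along_axis(cost, 1, x)
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                g = pbest[pbest_val.argmin()].copy()
            return g

        print(pso(lambda p: float(np.sum(p ** 2)), dim=4))  # converges toward zeros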

  3. Implementation of several mathematical algorithms to breast tissue density classification

    International Nuclear Information System (INIS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-01-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics: dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted as an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These techniques are based on intrinsic-property calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation and index Q) as categorization parameters. The algorithms were evaluated on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The resulting breast classifications were compared with expert medical diagnoses, showing good performance. The implemented algorithms showed high potential for classifying breasts into tissue density categories. - Highlights: • Breast density classification can be obtained by suitable mathematical algorithms. • Mathematical processing helps radiologists obtain the BI-RADS classification. • The entropy and joint entropy show high performance for density classification.
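
    Two of the categorization parameters named above, joint entropy and mutual information, reduce to joint-histogram computations. The following numpy sketch shows both, with hypothetical images and an idealized constant reference standing in for the paper's full pipeline.

      import numpy as np

      # Joint entropy and mutual information between an image and a reference.
      def joint_entropy_and_mi(img_a, img_b, bins=64):
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          p = joint / joint.sum()
          nz = p > 0
          h_joint = -np.sum(p[nz] * np.log2(p[nz]))
          px, py = p.sum(axis=1), p.sum(axis=0)
          h_x = -np.sum(px[px > 0] * np.log2(px[px > 0]))
          h_y = -np.sum(py[py > 0] * np.log2(py[py > 0]))
          return h_joint, h_x + h_y - h_joint   # MI = H(X) + H(Y) - H(X,Y)

      mammo = np.random.randint(0, 256, (256, 256))
      reference = np.full_like(mammo, mammo.mean(), dtype=int)  # ideal homogeneous image
      print(joint_entropy_and_mi(mammo, reference))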

  4. FPGA implementation of image dehazing algorithm for real time applications

    Science.gov (United States)

    Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.

    2017-09-01

    Weather degradation such as haze, fog, and mist severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem nontrivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, and intelligent transportation systems; however, these applications require low latency from the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. A two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, while the transmission map and intensity restoration are computed in the subsequent stages. The algorithm is implemented using Xilinx Vivado software and validated on a Xilinx zc702 development board, which contains an Artix-7-equivalent Field Programmable Gate Array (FPGA) and a dual-core ARM Cortex A9 processor. A high-definition multimedia interface (HDMI) has been incorporated for video feed and display. The results show that the dehazing algorithm attains 29 frames per second at a resolution of 1920×1080, which is suitable for real-time applications. The design utilizes 9 18K BRAMs, 97 DSP48s, 6508 FFs and 8159 LUTs.
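
    The first-stage quantities are straightforward to express in software. A numpy/scipy sketch of the dark channel and a simple airlight estimate follows; the patch size and top-fraction heuristic are hypothetical tuning choices, and the FPGA pipeline itself is not reproduced.

      import numpy as np
      from scipy.ndimage import minimum_filter

      # Dark channel: per-pixel minimum over color channels, then a local min filter.
      def dark_channel(img, patch=15):
          # img: HxWx3 float array in [0, 1]
          dark = img.min(axis=2)                   # min over R, G, B
          return minimum_filter(dark, size=patch)  # min over a local patch

      # Airlight: mean color of the brightest pixels in the dark channel.
      def estimate_airlight(img, dark, top_fraction=0.001):
          flat = dark.ravel()
          n = max(1, int(top_fraction * flat.size))
          idx = np.argpartition(flat, -n)[-n:]
          return img.reshape(-1, 3)[idx].mean(axis=0)

      hazy = np.random.rand(480, 640, 3)
      dc = dark_channel(hazy)
      print(estimate_airlight(hazy, dc))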

  5. VIRTEX-5 Fpga Implementation of Advanced Encryption Standard Algorithm

    Science.gov (United States)

    Rais, Muhammad H.; Qasim, Syed M.

    2010-06-01

    In this paper, we present an implementation of the Advanced Encryption Standard (AES) cryptographic algorithm using a state-of-the-art Virtex-5 Field Programmable Gate Array (FPGA). The design is coded in the Very High Speed Integrated Circuit Hardware Description Language (VHDL). Timing simulation is performed to verify the functionality of the designed circuit, and performance is evaluated in terms of throughput and area. The design implemented on a Virtex-5 (XC5VLX50FFG676-3) FPGA achieves a maximum throughput of 4.34 Gbps utilizing a total of 399 slices.

  6. Implementation and statistical analysis of Metropolis algorithm for SU(3)

    International Nuclear Information System (INIS)

    Katznelson, E.; Nobile, A.

    1984-12-01

    In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results follow a normal distribution. We demonstrate that in this case error analysis can be carried out in a simple way, and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)
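
    The statistical object under study is the stream of accepted samples produced by the Metropolis accept/reject rule. As a reminder of that rule, here is a generic sweep in Python with a toy scalar action standing in for the SU(3) link update, which is far more involved.

      import numpy as np

      rng = np.random.default_rng(0)

      # One Metropolis sweep over a field with a user-supplied local action.
      def metropolis_sweep(field, action, step=0.5, beta=1.0):
          accepted = 0
          for i in range(field.size):
              proposal = field[i] + rng.uniform(-step, step)
              d_action = action(proposal) - action(field[i])
              if d_action <= 0 or rng.random() < np.exp(-beta * d_action):
                  field[i] = proposal          # accept the proposed update
                  accepted += 1
          return accepted / field.size

      field = rng.normal(size=1000)
      rate = metropolis_sweep(field, action=lambda x: x ** 2)  # toy quadratic action
      print(f"acceptance rate: {rate:.2f}")
      # Observables averaged over many such sweeps can then be tested for
      # normality, as the error analysis above assumes.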

  7. Algorithm of parallel: hierarchical transformation and its implementation on FPGA

    Science.gov (United States)

    Timchenko, Leonid I.; Petrovskiy, Mykola S.; Kokryatskay, Natalia I.; Barylo, Alexander S.; Dembitska, Sofia V.; Stepanikuk, Dmytro S.; Suleimenov, Batyrbek; Zyska, Tomasz; Uvaysova, Svetlana; Shedreyeva, Indira

    2017-08-01

    This paper considers an algorithm for classifying laser-beam spot images in atmospheric optical transmission systems. It discusses the need to filter images with adaptive methods, using, for example, parallel-hierarchical networks, and highlights the need for high-speed memory devices for such networks. Implementation and simulation results of the developed method on a PLD are demonstrated, showing that the presented method gives 15-20% better prediction results than similar methods.

  8. Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm

    Science.gov (United States)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.

    2011-01-01

    An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at a resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.

  9. The selection and implementation of hidden line algorithms

    International Nuclear Information System (INIS)

    Schneider, A.

    1983-06-01

    One of the most challenging problems in the field of computer graphics is the elimination of hidden lines in images of nontransparent bodies. In the real world, nontransparent material prevents light rays coming from hidden regions from reaching the observer. In computer-based image formation there is no such automatic visibility regulation, so many spurious lines are created, resulting in a poor-quality spatial representation. A three-dimensional representation on the screen is therefore only meaningful if the hidden lines are eliminated. Many algorithms have been developed for this process in the past; a common feature of these codes is the large amount of computer time they need. In the first generation of algorithms, which are commonly used today, the bodies are modeled by plane polygons. More recently, however, algorithms have also come into use that can treat curved surfaces without discretization into plane surfaces. In this paper the first group of algorithms is reviewed, and the most important codes are described. The experience obtained during the implementation of two algorithms is presented. (orig.) [de]

  10. Hardware realization of an SVM algorithm implemented in FPGAs

    Science.gov (United States)

    Wiśniewski, Remigiusz; Bazydło, Grzegorz; Szcześniak, Paweł

    2017-08-01

    The paper proposes a technique for hardware realization of space vector modulation (SVM) of state-function switching in a matrix converter (MC), oriented toward implementation in a single field programmable gate array (FPGA). In an MC, the SVM method is based on the instantaneous space-vector representation of input currents and output voltages. Traditional computation approaches usually involve digital signal processors (DSPs), which must drive a large number of power transistors (18 transistors and 18 independent PWM outputs) and handle "non-standard positions of control pulses" during the switching sequence. Recently, hardware implementations have become popular, since operations can be executed much faster and more efficiently owing to the nature of digital devices (especially their concurrency). In this paper, we propose a hardware algorithm for SVM computation. In contrast to existing techniques, the presented solution applies the COordinate Rotation DIgital Computer (CORDIC) method to solve the trigonometric operations. Furthermore, the arithmetic modules (that is, sub-devices) used for intermediate calculations, such as code converters and sector selectors (for output voltages and input currents), are presented in detail. The proposed technique has been implemented as a design described in the Verilog hardware description language. Preliminary results of the logic implementation on a Xilinx FPGA (specifically, a low-cost device from the Xilinx Artix-7 family) are also presented.
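
    Rotation-mode CORDIC evaluates sines and cosines with shift-and-add iterations, which is why it suits an FPGA. A floating-point Python sketch of the kernel follows; a hardware version would use fixed-point arithmetic and precomputed constants.

      import math

      # Rotation-mode CORDIC computing (cos, sin) of an angle in [-1.74, 1.74] rad.
      def cordic_cos_sin(angle, iterations=16):
          # arctan(2^-i) table and the accumulated gain, as in standard CORDIC
          atans = [math.atan(2.0 ** -i) for i in range(iterations)]
          k = 1.0
          for i in range(iterations):
              k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
          x, y, z = 1.0, 0.0, angle
          for i in range(iterations):
              d = 1.0 if z >= 0 else -1.0            # rotate toward zero residual angle
              x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
              z -= d * atans[i]
          return x * k, y * k    # approximately (cos(angle), sin(angle))

      print(cordic_cos_sin(math.pi / 5))   # compare with math.cos/sin of 36 degrees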

  11. Implementation of several mathematical algorithms to breast tissue density classification

    Science.gov (United States)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics: dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted as an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These techniques are based on intrinsic-property calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation and index Q) as categorization parameters. The algorithms were evaluated on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The resulting breast classifications were compared with expert medical diagnoses, showing good performance. The implemented algorithms showed high potential for classifying breasts into tissue density categories.

  12. Neural network fusion capabilities for efficient implementation of tracking algorithms

    Science.gov (United States)

    Sundareshan, Malur K.; Amoozegar, Farid

    1997-03-01

    The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While the development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities is more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and for the implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target-tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications is the efficiency with which multiple features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.
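
    The state estimator at the back end of such an architecture is a standard Kalman filter. A minimal constant-velocity filter for a hypothetical 1D target is sketched below; in the described system, the neural front end would shape the measurements fed to this loop.

      import numpy as np

      # Constant-velocity Kalman filter; the measurement is position only.
      dt = 0.1
      F = np.array([[1, dt], [0, 1]])     # state transition for (position, velocity)
      H = np.array([[1.0, 0.0]])          # observation model
      Q = 1e-3 * np.eye(2)                # process noise covariance
      R = np.array([[0.25]])              # measurement noise covariance

      x = np.zeros((2, 1))                # initial state estimate
      P = np.eye(2)                       # initial covariance

      rng = np.random.default_rng(1)
      for k in range(50):
          z = np.array([[0.5 * k * dt + rng.normal(0, 0.5)]])   # noisy position
          x, P = F @ x, F @ P @ F.T + Q                         # predict
          S = H @ P @ H.T + R                                   # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)                        # Kalman gain
          x = x + K @ (z - H @ x)                               # update
          P = (np.eye(2) - K @ H) @ P

      print(x.ravel())   # estimated position and velocity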

  13. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

    The RAVEN code is becoming a comprehensive tool for performing probabilistic risk assessment, uncertainty quantification, and verification and validation. It is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data can, in some cases, take longer than the initial software run. Data mining algorithms help in recognizing and understanding patterns in the data, and thus in discovering knowledge in databases. The methodologies used in dynamic probabilistic risk assessment and in uncertainty and error quantification couple system/physics codes with simulation controller codes such as RAVEN: RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling a set of parameter values. A major challenge in using these analyses for a complex system is the large number of scenarios generated; data mining techniques are typically used to better organize and understand the data, i.e., to recognize patterns in it. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and on the application of these algorithms to different databases.

  14. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RAVEN code is becoming a comprehensive tool for performing probabilistic risk assessment, uncertainty quantification, and verification and validation. It is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data can, in some cases, take longer than the initial software run. Data mining algorithms help in recognizing and understanding patterns in the data, and thus in discovering knowledge in databases. The methodologies used in dynamic probabilistic risk assessment and in uncertainty and error quantification couple system/physics codes with simulation controller codes such as RAVEN: RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling a set of parameter values. A major challenge in using these analyses for a complex system is the large number of scenarios generated; data mining techniques are typically used to better organize and understand the data, i.e., to recognize patterns in it. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and on the application of these algorithms to different databases.

  15. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking ciphertexts have been developed as well. In recent years, the genetic algorithm has gained much interest in the cryptanalysis of ciphertexts and also in encryption ciphers. This paper analyses the possibility of using a genetic algorithm as a multiple-key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of large amounts of data.

  16. Kodiak: An Implementation Framework for Branch and Bound Algorithms

    Science.gov (United States)

    Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas

    2015-01-01

    Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.

  17. GPGPU Implementation of a Genetic Algorithm for Stereo Refinement

    Directory of Open Access Journals (Sweden)

    Álvaro Arranz

    2015-03-01

    Full Text Available During the last decade, general-purpose computing on graphics processing units (GPGPU) has turned out to be a useful tool for speeding up many scientific calculations, and computer vision is known to be one of the fields where these techniques have penetrated most. This paper explores the advantages of a GPGPU implementation for speeding up a genetic algorithm used for stereo refinement. The main contribution of this paper is an analysis of which genetic operators benefit from a parallel approach, together with a description of an efficient state-of-the-art implementation of each one. As a result, speed-ups close to 80× can be achieved, demonstrating that this is the only way of achieving close to real-time performance.

  18. Purgatorio - A new implementation of the Inferno algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, B; Sonnad, V; Sterne, P; Isaacs, W

    2005-03-29

    For astrophysical applications, as well as modeling laser-produced plasmas, there is a continual need for equation-of-state data over a wide domain of physical conditions. This paper presents algorithmic aspects of computing the Helmholtz free energy of plasma electrons for temperatures spanning from a few kelvin to several keV, and densities ranging from essentially isolated-ion conditions to compressions so large that most bound orbitals become delocalized. The objective is high-precision results in order to compute pressure and other thermodynamic quantities by numerical differentiation. This approach has the advantage that internal thermodynamic self-consistency is ensured regardless of the specific physical model, but at the cost of very stringent numerical tolerances for each operation. The computational aspects we address in this paper are faced by any model that relies on input from the quantum mechanical spectrum of a spherically symmetric Hamiltonian operator. The particular physical model we employ is that of INFERNO: a spherically averaged ion embedded in jellium. An overview of PURGATORIO, a new implementation of the INFERNO equation-of-state model, is presented. The new algorithm emphasizes a novel decimation scheme for automatically resolving the structure of the continuum density of states, circumventing limitations of the pseudo-R-matrix algorithm previously utilized.

  19. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark-gluon plasma, theorized to have existed in the very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and simulate them at rates similar to those of data collection impose enormous computational demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for simulating the PHENIX detector response. Fine- and coarse-grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single-instruction and multiple-instruction computers is also made, and possible applications of single-instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  20. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  1. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However, in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity-free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.

  2. GPU implementations of online track finding algorithms at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Herten, Andreas; Stockmanns, Tobias; Ritman, James [Institut fuer Kernphysik, Forschungszentrum Juelich GmbH (Germany); Adinetz, Andrew; Pleiter, Dirk [Juelich Supercomputing Centre, Forschungszentrum Juelich GmbH (Germany); Kraus, Jiri [NVIDIA GmbH (Germany); Collaboration: PANDA-Collaboration

    2014-07-01

    The PANDA experiment is a hadron physics experiment that will investigate antiproton annihilation in the charm-quark mass region. The experiment is now being constructed as one of the main parts of the FAIR facility. At an event rate of 2×10⁷/s, a data rate of 200 GB/s is expected, and a reduction of three orders of magnitude is required in order to save the data for further offline analysis. Since signal and background processes at PANDA have similar signatures, no hardware-level trigger is foreseen for the experiment; instead, a fast online event filter takes its place. We investigate the possibility of using graphics processing units (GPUs) for the online tracking part of this task. The algorithms studied are a Hough transform, a track finder involving Riemann surfaces, and the novel, PANDA-specific Triplet Finder. This talk presents selected advances in the implementations as well as performance evaluations of the GPU tracking algorithms to be used at the PANDA experiment.

  3. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.

    Science.gov (United States)

    Walter, Florian; Röhrbein, Florian; Knoll, Alois

    2015-12-01

    The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview over selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. The implement of Talmud property allocation algorithm based on graphic point-segment way

    Science.gov (United States)

    Cen, Haifeng

    2017-04-01

    Guided by the theory of the Talmud allocation scheme, this paper analyzes the algorithm's implementation process from the perspective of a graphic point-segment representation, and designs a point-segment Talmud property-allocation algorithm. The core of the allocation algorithm is then implemented in Java, with Android programming used to build a visual interface.

  5. Particle filters for object tracking: enhanced algorithm and efficient implementations

    International Nuclear Information System (INIS)

    Abd El-Halym, H.A.

    2010-01-01

    Object tracking and recognition is a hot research topic. In spite of the extensive research efforts expended, the development of a robust and efficient object tracking algorithm remains unsolved due to the inherent difficulty of the tracking problem. Particle filters (PFs) were recently introduced as a powerful, post-Kalman-filter estimation tool that provides a general framework for the estimation of nonlinear/non-Gaussian dynamic systems. Particle filters have been advanced for building robust object trackers capable of operating under severe conditions (small image size, noisy background, occlusions, fast object maneuvers, etc.). The heavy computational load of the particle filter remains a major obstacle to its wide use. In this thesis, an Excitation Particle Filter (EPF) is introduced for object tracking. A new likelihood model is proposed that depends on multiple functions: position likelihood, gray-level intensity likelihood, and similarity likelihood. We also modify the PF into a robust estimator to overcome its well-known sample impoverishment problem; this modification is based on re-exciting the particles if their weights fall below a memorized weight value. The proposed enhanced PF is implemented in software and evaluated, and its results are compared with a single-likelihood-function PF tracker, a Particle Swarm Optimization (PSO) tracker, a correlation tracker, and an edge tracker. The experimental results demonstrate the superior performance of the proposed tracker in terms of accuracy, robustness, and handling of occlusion compared with the other methods. Efficient novel hardware architectures of the Sampling Importance Resampling Filter (SIRF) and the EPF are implemented. Three novel hardware architectures of the SIRF for object tracking are introduced. The first architecture is a two-step sequential PF machine, where particle generation, weight calculation and normalization are carried out in parallel during the first step, followed by a sequential re

  6. Hardware Implementation of Diamond Search Algorithm for Motion Estimation and Object Tracking

    International Nuclear Information System (INIS)

    Hashimaa, S.M.; Mahmoud, I.I.; Elazm, A.A.

    2009-01-01

    Object tracking is a very important task in computer vision, and fast search algorithms have emerged as an important technique for achieving real-time tracking. To enhance the performance of these algorithms, we advocate their hardware implementation. Diamond search block-matching motion estimation was proposed recently to reduce the complexity of motion estimation. In this paper we selected the diamond search (DS) algorithm for FPGA implementation, due to its fundamental role in all fast search patterns. The proposed architecture is simulated and synthesized using the Xilinx and ModelSim software tools. The results agree with a MATLAB implementation of the algorithm.

  7. Improvement of ECM Techniques through Implementation of a Genetic Algorithm

    National Research Council Canada - National Science Library

    Townsend, James D

    2008-01-01

    This research effort develops the necessary interfaces between the radar signal processing components and an optimization routine, such as genetic algorithms, to develop Electronic Countermeasure (ECM...

  8. Improvement and implementation for Canny edge detection algorithm

    Science.gov (United States)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the shortcomings of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel-intensity similarity is used to smooth the image instead of a Gaussian filter; it preserves edge features and removes noise effectively. To address the noise sensitivity of the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively determines the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in Visual Studio 2010, and experimental analysis shows that it detects edge details more effectively and with more adaptability.
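
    A software rendering of that pipeline with standard OpenCV calls might look as follows; the file name, bilateral-filter parameters, and the half-of-Otsu low-threshold heuristic are hypothetical choices, not the paper's exact values.

      import cv2

      # Bilateral smoothing in place of the Gaussian, then Canny with
      # Otsu-derived hysteresis thresholds.
      img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

      smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

      # Otsu gives an adaptive high threshold; half of it serves as the low one.
      high, _ = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      edges = cv2.Canny(smoothed, high / 2, high)

      cv2.imwrite("edges.png", edges)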

  9. An implementation of Kovacic's algorithm for solving ordinary differential equations in FORMAC

    International Nuclear Information System (INIS)

    Zharkov, A.Yu.

    1987-01-01

    An implementation of Kovacic's algorithm for finding Liouvillian solutions of the differential equations y'' + a(x)y' + b(x)y = 0 with rational coefficients a(x) and b(x) in the Computer Algebra System FORMAC is described. The algorithm description is presented in such a way that one can easily implement it in a suitable Computer Algebra System

  10. Error tolerance in an NMR implementation of Grover's fixed-point quantum search algorithm

    International Nuclear Information System (INIS)

    Xiao Li; Jones, Jonathan A.

    2005-01-01

    We describe an implementation of Grover's fixed-point quantum search algorithm on a nuclear magnetic resonance quantum computer, searching for either one or two matching items in an unsorted database of four items. In this algorithm the target state (an equally weighted superposition of the matching states) is a fixed point of the recursive search operator, so that the algorithm always moves towards the desired state. The effects of systematic errors in the implementation are briefly explored

  11. DNA algorithms of implementing biomolecular databases on a biological computer.

    Science.gov (United States)

    Chang, Weng-Long; Vasilakos, Athanasios V

    2015-01-01

    In this paper, DNA algorithms are proposed to perform eight operations of relational algebra (calculus), which include Cartesian product, union, set difference, selection, projection, intersection, join, and division, on biomolecular relational databases.

  12. Research and implementation of finger-vein recognition algorithm

    Science.gov (United States)

    Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin

    2017-06-01

    In finger-vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region using a bidirectional gray-projection method. Inspired by the fact that vein features appear as valleys, a novel method is proposed to extract the center and width of the vein based on multi-directional gradients; it is easy to compute, quick, and stable. On this basis, an encoding method is designed to determine the gray-value distribution of the texture image, which effectively overcomes errors in texture extraction at edges. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray-value matching algorithm. Experimental results on pairs of matched images show that the proposed method achieves an EER of 3.21% and extracts features in 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in texture-extraction efficiency, matching accuracy, and algorithmic efficiency.

  13. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    Science.gov (United States)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    The optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997 when noise is not taken into account. These quantum gates were chosen to perform the permutation algorithm on hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the problem of permutation parity faster than a classical algorithm, without requiring entanglement between particles; the only requirement for achieving the speedup is a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by averaging the optimal field over reference detunings that follow a Gaussian distribution; when the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  14. RSA Algorithm. Features of the C# Object Programming Implementation

    Directory of Open Access Journals (Sweden)

    Elena V. Staver

    2012-08-01

    Full Text Available Public-key algorithms depend on an encryption key and a mathematically related decryption key. For public-key encryption of data, the text is divided into blocks, each of which is represented as a number; the message is then decrypted using the secret key.
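
    The key relationship the abstract refers to can be shown with a toy example. The sketch below uses deliberately tiny primes and plain Python integers; real systems use large random primes and padding schemes.

      # Toy RSA: the public key is (e, n), the secret key is (d, n).
      p, q = 61, 53
      n = p * q                    # modulus, part of both keys
      phi = (p - 1) * (q - 1)
      e = 17                       # public exponent, coprime with phi
      d = pow(e, -1, phi)          # private exponent: d*e = 1 (mod phi)

      block = 65                   # one plaintext block, encoded as a number < n
      cipher = pow(block, e, n)    # encrypt with the public key
      plain = pow(cipher, d, n)    # decrypt with the secret key
      assert plain == block
      print(cipher, plain)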

  15. Computationally efficient algorithms for statistical image processing : implementation in R

    NARCIS (Netherlands)

    Langovoy, M.; Wittich, O.

    2010-01-01

    In the series of our earlier papers on the subject, we proposed a novel statistical hypothesis testing method for the detection of objects in noisy images. The method uses results from percolation theory and random graph theory. We developed algorithms that allow the detection of objects of unknown shapes in

  16. Implementing the conjugate gradient algorithm on multi-core systems

    NARCIS (Netherlands)

    Wiggers, W.A.; Bakker, Vincent; Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria; Nurmi, J.; Takala, J.; Vainio, O.

    2007-01-01

    In linear solvers such as the conjugate gradient algorithm, sparse matrix-vector multiplication is an important kernel. Due to the sparseness of the matrices, the solver runs relatively slowly. For digital optical tomography (DOT), a large set of linear equations has to be solved, which currently takes
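
    The kernel in question is the A·p product inside the CG loop. A serial numpy/scipy reference of the textbook algorithm is sketched below, with a hypothetical sparse SPD test system; the multi-core mapping is not shown.

      import numpy as np
      from scipy.sparse import random as sparse_random, identity

      # Textbook conjugate gradient; the sparse matvec dominates the cost.
      def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p                  # sparse matrix-vector product
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      # symmetric positive definite sparse test system
      M = sparse_random(200, 200, density=0.05, random_state=0)
      A = (M @ M.T + 10 * identity(200)).tocsr()
      b = np.ones(200)
      x = conjugate_gradient(A, b)
      print(np.linalg.norm(A @ x - b))    # residual norm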

  17. Implementations of back propagation algorithm in ecosystems applications

    Science.gov (United States)

    Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed

    2015-05-01

    Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies: problems that have no algorithmic solution, or whose algorithmic solution is too complex to be found. ANNs are an abstraction of the biological brain, developed from concepts that emerged in late twentieth-century neurophysiological experiments on the cells of the human brain, and are used here to overcome the perceived inadequacies of conventional ecological data analysis methods. ANNs have gained increasing attention in ecosystem applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers superior predictive ability. In this research, ANNs are applied to the analysis of an ecological system. The networks use the well-known back propagation (BP) algorithm with the delta rule for adaptation. The BP training algorithm is an effective analytical method for adaptation in ecosystem applications, again mainly because of its capacity to capture non-linear patterns in the data. The BP algorithm uses supervised learning: we provide the algorithm with examples of the inputs and outputs we want the network to compute, and the error is then calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. Training begins with random weights, and the goal is to adjust them so that the error is minimal. This research evaluated the use of artificial neural network techniques in ecological system analysis and modeling. The experimental results demonstrate that an artificial neural network system can be trained to act as an expert

  18. Implementing peak load reduction algorithms for household electrical appliances

    International Nuclear Information System (INIS)

    Dlamini, Ndumiso G.; Cromieres, Fabien

    2012-01-01

    Considering household appliance automation for reduction of household peak power demand, this study explored aspects of the interaction between household automation technology and human behaviour. Given a programmable household appliance switching system, and user-reported appliance use times, we simulated the load reduction effectiveness of three types of algorithms, which were applied at both the single household level and across all 30 households. All three algorithms effected significant load reductions, while the least-to-highest potential user inconvenience ranking was: coordinating the timing of frequent intermittent loads (algorithm 2); moving period-of-day time-flexible loads to off-peak times (algorithm 1); and applying short-term time delays to avoid high peaks (algorithm 3) (least accommodating). Peak reduction was facilitated by load interruptibility, time of use flexibility and the willingness of users to forgo impulsive appliance use. We conclude that a general factor determining the ability to shift the load due to a particular appliance is the time-buffering between the service delivered and the power demand of an appliance. Time-buffering can be ‘technologically inherent’, due to human habits, or realised by managing user expectations. There are implications for the design of appliances and home automation systems. - Highlights: ► We explored the interaction between appliance automation and human behaviour. ► There is potential for considerable load shifting of household appliances. ► Load shifting for load reduction is eased with increased time buffering. ► Design, human habits and user expectations all influence time buffering. ► Certain automation and appliance design features can facilitate load shifting.

  19. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    KAUST Repository

    Hoel, Hakon; Von Schwerin, Erik; Szepessy, Anders; Tempone, Raul

    2014-01-01

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDEs). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single-level Euler-Maruyama Monte Carlo method from O(TOL⁻³) to O(TOL⁻² log(TOL⁻¹)²) for a mean square error of O(TOL²). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform-time-discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity, with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single-level method is O(TOL⁻⁴). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL⁻³) for the adaptive single-level algorithm to essentially O(TOL⁻² log(TOL⁻¹)²) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.

  20. Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms

    Directory of Open Access Journals (Sweden)

    Dominik Zurek

    2013-01-01

    Full Text Available Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor, but recent hardware platforms enable broadly parallel algorithms: standard processors now consist of multiple cores, and hardware accelerators such as GPUs are available. Graphics cards, with their parallel architecture, offer new possibilities for speeding up many algorithms. In this paper we describe the results of implementing several different sorting algorithms on GPU cards and multicore processors. A hybrid algorithm is then presented, consisting of parts executed on both platforms, the standard CPU and the GPU.

  1. IMPLEMENTATION OF OBJECT TRACKING ALGORITHMS ON THE BASIS OF CUDA TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    B. A. Zalesky

    2014-01-01

    Full Text Available A fast version of a correlation algorithm for tracking objects in video sequences captured by a non-stabilized camcorder is presented. The algorithm is based on comparing local correlations of the object image with regions of the video frames, and is implemented in the CUDA programming technology; the application of CUDA allows real-time execution of the algorithm to be attained. To improve precision and stability, a robust version of the Kalman filter has been incorporated into the processing chain. Tests showed the applicability of the algorithm to practical object tracking.

  2. Comparison of spike-sorting algorithms for future hardware implementation.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being the most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
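
    Both of the selected methods are only a few lines each, which is exactly why they suit hardware. The numpy sketch below implements the NEO detector and discrete-derivative features; the threshold constant and lag choices are hypothetical tuning parameters.

      import numpy as np

      # Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]
      def neo(x):
          psi = np.zeros_like(x)
          psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
          return psi

      def detect_spikes(x, c=8.0):
          psi = neo(x)
          threshold = c * psi.mean()       # scaled mean energy as the threshold
          return np.flatnonzero(psi > threshold)

      # Discrete derivatives: differences at several lags as low-cost features.
      def discrete_derivative_features(waveform, deltas=(1, 3, 7)):
          return np.concatenate([waveform[d:] - waveform[:-d] for d in deltas])

      signal = np.random.randn(10_000)
      signal[5000:5005] += 6.0             # injected synthetic spike
      print(detect_spikes(signal))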

  3. Implementation of dictionary pair learning algorithm for image quality improvement

    Science.gov (United States)

    Vimala, C.; Aruna Priya, P.

    2018-04-01

    This paper proposes image denoising based on a dictionary pair learning algorithm. Visual information transmitted in the form of digital images is becoming a major method of communication in the modern age, but the image obtained after transmission is often corrupted with noise. The received image therefore needs processing before it can be used in applications. Image denoising involves manipulating the image data to produce a visually high-quality image.

  4. Fuzzy logic and A* algorithm implementation on goat foraging games

    Science.gov (United States)

    Harsani, P.; Mulyana, I.; Zakaria, D.

    2018-03-01

    Goat foraging is a game that applies search techniques within the scope of artificial intelligence. The game involves several actors, including the player and enemies, and the methods used in this research are fuzzy logic and the A* algorithm. Fuzzy logic is used to determine enemy behaviour: the input variables are the distance between the player and the enemy and the anger level of the goat, and the output variable is the enemy behaviour. The A* algorithm is used to find the shortest path between the player and the enemy, and to define the enemy's escape path for avoiding the player. There are four types of enemies, including farmers, planters, and plant sellers. The player is a goat whose aim is to find food, i.e., plants: the goat must eat all the grass in a maze-like garden while avoiding the enemies. The game demonstrates an application of artificial intelligence and offers four difficulty levels.
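
    The pathfinding component is textbook A*. A minimal grid version with a Manhattan-distance heuristic is sketched below; the maze, costs, and coordinates are hypothetical, and the game's fuzzy behaviour layer is not modeled.

      import heapq

      # A* on a grid maze; '#' marks a wall, moves cost 1.
      def a_star(grid, start, goal):
          def neighbors(p):
              r, c = p
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                      yield nr, nc

          h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
          open_set = [(h(start), 0, start, [start])]
          seen = {}
          while open_set:
              f, g, node, path = heapq.heappop(open_set)
              if node == goal:
                  return path
              if seen.get(node, float('inf')) <= g:
                  continue                      # already reached with a cheaper path
              seen[node] = g
              for nb in neighbors(node):
                  heapq.heappush(open_set, (g + 1 + h(nb), g + 1, nb, path + [nb]))
          return None

      maze = ["....#",
              ".##.#",
              "....."]
      print(a_star(maze, (0, 0), (2, 4)))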

  5. An implementation of signal processing algorithms for ultrasonic NDE

    International Nuclear Information System (INIS)

    Ericsson, L.; Stepinski, T.

    1994-01-01

    The probability of detecting flaws during ultrasonic pulse-echo inspection is often limited by the presence of backscattered echoes from the material structure. A digital signal processing technique for removing this material noise, referred to as split spectrum processing (SSP), has been developed and verified in laboratory experiments over the last decade. The authors have recently performed a limited-scale evaluation of various SSP techniques on ultrasonic signals acquired during the inspection of welds in austenitic steel, obtaining very encouraging results that indicate promising capabilities of SSP for the inspection of nuclear power plants. A more extensive investigation of the technique using large amounts of ultrasonic data is therefore motivated; this analysis should employ different combinations of materials, flaws and transducers. Because of the considerable number of ultrasonic signals required to verify the technique for future practical use, custom-made computer software is necessary, and at the request of the Swedish nuclear power industry the authors have developed such a program package. The program provides a user-friendly graphical interface and is intended for flexible processing of B-scan data. It assembles a number of signal processing algorithms, including traditional split spectrum processing and the authors' more recent cut spectrum processing algorithm. The program and some results obtained using the various algorithms are presented in the paper.

  6. Implementation of Period-Finding Algorithm by Means of Simulating Quantum Fourier Transform

    Directory of Open Access Journals (Sweden)

    Zohreh Moghareh Abed

    2010-01-01

    Full Text Available In this paper, we introduce the quantum Fourier transform as a key ingredient of many useful algorithms, which solve problems considered intractable on a classical computer. The quantum Fourier transform is, in particular, the key to the quantum phase estimation algorithm, which in turn is the key to the period-finding problem. Our aim in this paper is the implementation of the period-finding algorithm, a problem a quantum computer solves exponentially faster than a classical one. By simulating the quantum Fourier transform, we are therefore able to implement the period-finding algorithm; the simulation is carried out in Matlab.
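
    The simulation step can be reproduced in a few lines. The numpy sketch below (a stand-in for the paper's Matlab code) builds the N-dimensional QFT matrix and applies it to a state with a hypothetical period, showing the characteristic peaks at multiples of N/period.

      import numpy as np

      # The QFT as a unitary matrix: QFT[j, k] = omega^(j*k) / sqrt(N).
      def qft_matrix(N):
          omega = np.exp(2j * np.pi / N)
          j, k = np.meshgrid(np.arange(N), np.arange(N))
          return omega ** (j * k) / np.sqrt(N)

      N = 16
      period = 4
      state = np.zeros(N, dtype=complex)
      state[::period] = 1.0                  # superposition of |0>, |4>, |8>, |12>
      state /= np.linalg.norm(state)

      amplitudes = qft_matrix(N) @ state
      probs = np.abs(amplitudes) ** 2
      print(np.flatnonzero(probs > 1e-9))    # peaks at multiples of N/period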

  7. Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed

    Science.gov (United States)

    Tian, Ye; Song, Qi; Cattafesta, Louis

    2005-01-01

    This report summarizes the activities on "Implementation of Real-Time Feedback Flow Control Algorithms on a Canonical Testbed." The work summarized consists primarily of two parts. The first part summarizes our previous work and the extensions to adaptive ID and control algorithms. The second part concentrates on the validation of adaptive algorithms by applying them to a vibration beam test bed. Extensions to flow control problems are discussed.

  8. Caliko: An Inverse Kinematics Software Library Implementation of the FABRIK Algorithm

    OpenAIRE

    Lansley, Alastair; Vamplew, Peter; Smith, Philip; Foale, Cameron

    2016-01-01

    The Caliko library is an implementation of the FABRIK (Forward And Backward Reaching Inverse Kinematics) algorithm written in Java. The inverse kinematics (IK) algorithm is implemented in both 2D and 3D, and incorporates a variety of joint constraints as well as the ability to connect multiple IK chains together in a hierarchy. The library allows for the simple creation and solving of multiple IK chains as well as visualisation of these solutions. It is licensed under the MIT software license...

  9. A parallel implementation of a maximum entropy reconstruction algorithm for PET images in a visual language

    International Nuclear Information System (INIS)

    Bastiens, K.; Lemahieu, I.

    1994-01-01

    The application of a maximum entropy reconstruction algorithm to PET images requires a lot of computing resources, and a parallel implementation could seriously reduce the execution time. However, programming a parallel application is still a nontrivial task requiring specialized skills. In this paper a programming environment based on a visual programming language is used for a parallel implementation of the reconstruction algorithm; this environment allows less experienced programmers to exploit the performance of multiprocessor systems. (authors)

  10. A parallel implementation of a maximum entropy reconstruction algorithm for PET images in a visual language

    Energy Technology Data Exchange (ETDEWEB)

    Bastiens, K; Lemahieu, I [University of Ghent - ELIS Department, St. Pietersnieuwstraat 41, B-9000 Ghent (Belgium)

    1994-12-31

    The application of a maximum entropy reconstruction algorithm to PET images requires a lot of computing resources, and a parallel implementation could seriously reduce the execution time. However, programming a parallel application is still a nontrivial task requiring specialized skills. In this paper a programming environment based on a visual programming language is used for a parallel implementation of the reconstruction algorithm; this environment allows less experienced programmers to exploit the performance of multiprocessor systems. (authors). 8 refs, 3 figs, 1 tab.

  11. Implementation of perceptual aspects in a face recognition algorithm

    International Nuclear Information System (INIS)

    Crenna, F; Bovio, L; Rossi, G B; Zappa, E; Testa, R; Gasparetto, M

    2013-01-01

    Automatic face recognition is a biometric technique particularly appreciated in security applications. Face recognition offers the opportunity to operate at a low invasive level, without the collaboration of the subjects under test, using face images gathered either from surveillance systems or from specific cameras located at strategic points. The automatic recognition algorithms perform a measurement, on the face images, of a set of specific characteristics of the subject and provide a recognition decision based on the measurement results. Unfortunately, several quantities may influence the measurement of the face geometry, such as its orientation, the lighting conditions, the facial expression and so on, affecting the recognition rate. Human recognition of faces, on the other hand, is a very robust process, far less influenced by the surrounding conditions. For this reason it may be interesting to insert perceptual aspects into an automatic facial-based recognition algorithm to improve its robustness. This paper presents a first study in this direction, investigating the correlation between the results of a perception experiment and the facial geometry, estimated by means of the position of a set of reference (repère) points

  12. Clinical implementation and evaluation of the Acuros dose calculation algorithm.

    Science.gov (United States)

    Yan, Chenyu; Combine, Anthony G; Bednarz, Greg; Lalonde, Ronald J; Hu, Bin; Dickens, Kathy; Wynn, Raymond; Pavord, Daniel C; Saiful Huq, M

    2017-09-01

    The main aim of this study is to validate the Acuros XB dose calculation algorithm for a Varian Clinac iX linac in our clinics, and subsequently compare it with the widely used AAA algorithm. The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were validated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central axis and off-axis points at different depths were chosen for the comparison. In addition, the accuracy of Acuros was evaluated for wedge fields with wedge angles from 15 to 60°. Similarly, variable field sizes on an inhomogeneous phantom were chosen to validate the Acuros algorithm. In addition, doses calculated by Acuros and AAA at the center of lung-equivalent tissue from three different VMAT plans were compared to ion chamber measurements in the QUASAR phantom, and the dose distributions calculated by the two algorithms, and their differences, were compared on patient plans. Computation time for VMAT plans was also evaluated for Acuros and AAA. Differences between dose-to-water (calculated by AAA and Acuros XB) and dose-to-medium (calculated by Acuros XB) on patient plans were compared and evaluated. For open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculations were within 1% of measurements. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. Testing on the inhomogeneous phantom demonstrated that AAA overestimated doses by up to 8.96% at a point close to the lung/solid water interface, while Acuros XB reduced that to 1.64%. The test on the QUASAR phantom showed that Acuros achieved better agreement in lung-equivalent tissue, while AAA underestimated dose for all VMAT plans by up to 2.7%. Acuros XB computation time was about three times faster than AAA for VMAT plans, and

  13. Prototype Implementation of Two Efficient Low-Complexity Digital Predistortion Algorithms

    Directory of Open Access Journals (Sweden)

    Timo I. Laakso

    2008-01-01

    Full Text Available Predistortion (PD) linearisation of microwave power amplifiers (PAs) is an important research topic. With the ever larger bandwidths appearing today in modern WiMax standards as well as in multichannel base stations for 3GPP standards, the relatively simple nonlinearity of a PA becomes a complex function with memory, severely distorting the output signal. In this contribution, two digital PD algorithms are investigated for the linearisation of microwave PAs in mobile communications. The first one is an efficient and low-complexity algorithm based on a memoryless model, called the simplicial canonical piecewise linear (SCPWL) function, that describes the static nonlinear characteristic of the PA. The second algorithm is more general, approximating the pre-inverse filter of a nonlinear PA iteratively using a Volterra model. The first, simpler algorithm is suitable for compensation of amplitude compression and amplitude-to-phase conversion, for example, in mobile units with relatively small bandwidths. The second algorithm can be used to linearise PAs operating with larger bandwidths, thus exhibiting memory effects, for example, in multichannel base stations. A measurement testbed which includes a transmitter-receiver chain with a microwave PA is built for testing and prototyping of the proposed PD algorithms. In the testing phase, the PD algorithms are implemented in MATLAB (floating-point representation) and tested in record-and-playback mode. The iterative PD algorithm is then implemented on a Field Programmable Gate Array (FPGA) using fixed-point representation. The FPGA implementation allows the pre-inverse filter to be tested in real-time mode. Measurement results show excellent linearisation capabilities of both proposed algorithms in terms of adjacent channel power suppression. It is also shown that the fixed-point FPGA implementation of the iterative algorithm performs as well as the floating-point implementation.
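
    As a rough illustration of the first, memoryless approach, a static AM/AM nonlinearity can be pre-inverted with a piecewise-linear function between breakpoints. The toy PA model and breakpoint grid below are our own assumptions, not the paper's measured amplifier or its SCPWL parameterization:

    ```python
    import numpy as np

    # Toy memoryless PA with amplitude compression (an illustrative AM/AM
    # model, not the amplifier measured in the paper).
    def pa(x, sat=1.0):
        return x / np.sqrt(1.0 + (np.abs(x) / sat) ** 2)

    # Piecewise-linear pre-inverse on a grid of breakpoints, in the spirit of
    # a memoryless SCPWL predistorter: invert the measured AM/AM curve by
    # linear interpolation between breakpoints.
    grid = np.linspace(0.0, 1.5, 32)       # breakpoint input amplitudes
    pa_out = pa(grid)                      # "measured" AM/AM characteristic

    def predistort(x):
        return np.sign(x) * np.interp(np.abs(x), pa_out, grid)

    x = np.linspace(-0.8, 0.8, 11)         # stay inside the invertible range
    print(np.max(np.abs(pa(predistort(x)) - x)))  # small residual from the PWL fit
    ```

    A real SCPWL predistorter would adapt the breakpoint values from measurements and also correct amplitude-to-phase conversion on complex baseband samples; the sketch only shows the piecewise-linear inversion idea.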

  14. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer-vision-based sensing. Conventional robot navigation systems, utilizing traditional sensors such as ultrasonic, IR, GPS and laser sensors, suffer from several drawbacks related either to the physical limitations of the sensors or to their high cost. Vision sensing has emerged as a popular alternative, where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based goal-driven navigation can be carried out using vision sensing. The development of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  15. An implementation of the relational k-means algorithm

    OpenAIRE

    Szalkai, Balázs

    2013-01-01

    A C# implementation of a generalized k-means variant called relational k-means is described here. Relational k-means is a generalization of the well-known k-means clustering method which works for non-Euclidean scenarios as well. The input is an arbitrary distance matrix, as opposed to the traditional k-means method, where the clustered objects need to be identified with vectors.
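
    The key trick that makes k-means work from a distance matrix alone is the identity that expresses the squared distance between an object and a cluster centroid purely through pairwise squared distances. A minimal numpy sketch of that idea (our own illustration, not the author's C# library):

    ```python
    import numpy as np

    def relational_kmeans(D, k, n_iter=100, seed=0):
        """Cluster objects given only a pairwise distance matrix D (n x n).

        Uses the identity d^2(i, centroid(C)) = mean_j D2[i,j]
        - sum_{j,l in C} D2[j,l] / (2|C|^2), exact for Euclidean distances."""
        rng = np.random.default_rng(seed)
        n = D.shape[0]
        D2 = D ** 2
        labels = rng.integers(k, size=n)
        for _ in range(n_iter):
            cost = np.empty((n, k))
            for c in range(k):
                members = np.flatnonzero(labels == c)
                if members.size == 0:              # re-seed an empty cluster
                    members = rng.integers(n, size=1)
                within = D2[np.ix_(members, members)].sum()
                cost[:, c] = (D2[:, members].mean(axis=1)
                              - within / (2 * members.size ** 2))
            new_labels = cost.argmin(axis=1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
        return labels

    # Two well-separated groups on a line; D is their pairwise distance matrix.
    pts = np.concatenate([np.random.default_rng(1).normal(0, .1, 5),
                          np.random.default_rng(2).normal(5, .1, 5)])
    D = np.abs(pts[:, None] - pts[None, :])
    print(relational_kmeans(D, 2))   # two clear clusters of five objects each
    ```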

  16. Implementation of Genetic Algorithm in Control Structure of Induction Motor A.C. Drive

    Directory of Open Access Journals (Sweden)

    BRANDSTETTER, P.

    2014-11-01

    Full Text Available Modern concepts of control systems with digital signal processors allow the implementation of time-consuming control algorithms in real time, for example soft computing methods. The paper deals with the design and technical implementation of a genetic algorithm for setting the proportional and integral gains of the speed controller of an A.C. drive with a vector-controlled induction motor. Simulations and experimental measurements have been carried out that confirm the correctness of the proposed speed controller tuned by the genetic algorithm and the quality of the speed response of the A.C. drive under changing parameters and disturbance variables, such as changes in load torque.
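
    As a rough sketch of the approach, a genetic algorithm can tune (Kp, Ki) by simulating a step response and scoring it with an ITAE-style cost. The first-order plant, gain ranges, and GA settings below are illustrative assumptions, not the parameters of the paper's vector-controlled induction motor drive:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def step_cost(kp, ki, dt=1e-3, t_end=0.5, a=20.0, b=50.0):
        """ITAE cost of a PI speed loop on a toy first-order drive model
        dw/dt = -a*w + b*u (an illustrative plant, not the paper's motor)."""
        w, integ, cost = 0.0, 0.0, 0.0
        for n in range(int(t_end / dt)):
            e = 1.0 - w                       # unit speed reference
            integ += e * dt
            u = np.clip(kp * e + ki * integ, -10.0, 10.0)
            w += dt * (-a * w + b * u)
            cost += n * dt * abs(e) * dt      # integral of t*|e|
        return cost

    pop = rng.uniform([0.0, 0.0], [5.0, 200.0], size=(20, 2))  # (kp, ki) genomes
    for gen in range(30):
        fit = np.array([step_cost(kp, ki) for kp, ki in pop])
        parents = pop[np.argsort(fit)[:10]]                    # truncation selection
        children = (parents[rng.integers(10, size=10)]
                    + rng.normal(0, [0.2, 5.0], size=(10, 2))) # mutation
        pop = np.vstack([parents, np.abs(children)])
    print("best (kp, ki):", pop[np.argmin([step_cost(*g) for g in pop])])
    ```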

  17. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    Science.gov (United States)

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation-based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is developed. The approach provides an effective tool for theoretical research on, and practical applications of, ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.

  18. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    Science.gov (United States)

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
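
    One classic member of this family of techniques is lookup-table linearization with linear interpolation between calibration points. The calibration table below is invented for illustration, not the paper's optical distance sensor data:

    ```python
    # Lookup-table linearization with linear interpolation -- one of the
    # typical software techniques for integer microcontrollers. The sensor
    # curve below is illustrative, not the sensor characterized in the paper.
    ADC_CODES = [0, 128, 256, 384, 512, 640, 768, 896, 1023]   # calibration inputs
    DISTANCE_MM = [800, 420, 260, 190, 150, 125, 108, 96, 88]  # calibrated outputs

    def linearize(adc):
        """Map a raw ADC code to distance via piecewise-linear interpolation."""
        for i in range(len(ADC_CODES) - 1):
            if adc <= ADC_CODES[i + 1]:
                span = ADC_CODES[i + 1] - ADC_CODES[i]
                frac = (adc - ADC_CODES[i]) / span
                return DISTANCE_MM[i] + frac * (DISTANCE_MM[i + 1] - DISTANCE_MM[i])
        return DISTANCE_MM[-1]

    print(linearize(300))   # interpolated between the 256 and 384 calibration points
    ```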

  19. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics and (3) optical path effects. The non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because the non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require, so SBNUC algorithms have become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; moreover, due to complicated calculations and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on Field Programmable Gate Array (FPGA) platforms. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based solely on an FPGA, has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 image lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe and ripple non-uniformity.
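
    The temporal high-pass part of such a scheme can be sketched compactly: each pixel subtracts its own slowly updated temporal mean, so static fixed-pattern offsets cancel while scene changes pass through. This is a generic THP filter illustration; the paper's contribution adds grayscale mapping on top, which is not modeled here:

    ```python
    import numpy as np

    def thp_nuc(frames, time_const=32):
        """Per-pixel temporal high-pass NUC: subtract a slowly updated
        per-pixel mean (recursive IIR low-pass) so fixed-pattern offsets
        cancel. A generic THP sketch without the paper's grayscale mapping."""
        lowpass = frames[0].astype(np.float64)
        out = np.empty_like(frames, dtype=np.float64)
        for i, f in enumerate(frames):
            lowpass += (f - lowpass) / time_const   # recursive per-pixel mean
            out[i] = f - lowpass + lowpass.mean()   # remove FPN, keep scene level
        return out

    # Temporally varying scene plus a static fixed-pattern offset per pixel.
    rng = np.random.default_rng(0)
    fpn = rng.normal(0, 5, (64, 64))
    frames = np.array([100 + 10 * np.sin(0.1 * t) + fpn for t in range(200)])
    corr = thp_nuc(frames)
    print(frames[-1].std(), corr[-1].std())   # fixed-pattern noise before vs after
    ```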

  20. A GPU-paralleled implementation of an enhanced face recognition algorithm

    Science.gov (United States)

    Chen, Hao; Liu, Xiyang; Shao, Shuai; Zan, Jiguo

    2013-03-01

    Face recognition based on compressed sensing and sparse representation has been actively discussed in recent years. This scheme improves the recognition rate as well as the noise robustness. However, the computational cost is high and has become a main restricting factor for real-world applications. In this paper, we introduce a GPU-accelerated hybrid variant of the face recognition algorithm named parallel face recognition algorithm (pFRA). We describe how to carry out the parallel optimization design to take full advantage of the many-core structure of a GPU. The pFRA is tested and compared with several other implementations under different data sample sizes. Finally, our pFRA, implemented with an NVIDIA GPU and the Compute Unified Device Architecture (CUDA) programming model, achieves a significant speedup over traditional CPU implementations.

  1. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...

  2. Quantum algorithms and quantum maps - implementation and error correction

    International Nuclear Information System (INIS)

    Alber, G.; Shepelyansky, D.

    2005-01-01

    Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)

  3. Implementation of a virtual laryngoscope system using efficient reconstruction algorithms.

    Science.gov (United States)

    Luo, Shouhua; Yan, Yuling

    2009-08-01

    A conventional fiberoptic laryngoscope may cause discomfort to the patient, and in some cases it can lead to side effects that include perforation, infection and hemorrhage. Virtual laryngoscopy (VL) can overcome these problems, and it may further lower the risk of operation failures. Very few virtual endoscope (VE) based investigations of the larynx have been described in the literature. CT data sets from a healthy subject were used for the VL studies. An algorithm combining preprocessing and region growing for 3-D image segmentation is developed. An octree-based approach is applied in our VL system, which facilitates rapid construction of iso-surfaces. Locating techniques are used for fast rendering and navigation (fly-through). Our VL visualization system provides real-time, efficient fly-through navigation. The virtual camera can be arranged so that it moves along the airway in either direction. Snapshots were taken during fly-throughs. The system can automatically adjust the direction of the virtual camera and prevent collisions between the camera and the wall of the airway. A virtual laryngoscope (VL) system using the OpenGL (Open Graphics Library) platform for interactive rendering and 3D visualization of the laryngeal framework and upper airway is established. OpenGL is supported on major operating systems and works with every major windowing system. The VL system runs on regular PC workstations and was successfully tested and evaluated using CT data from a normal subject.

  4. Hardware Implementation of a Modified Delay-Coordinate Mapping-Based QRS Complex Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Andrej Zemva

    2007-01-01

    Full Text Available We present a modified delay-coordinate mapping-based QRS complex detection algorithm, suitable for hardware implementation. In the original algorithm, the phase-space portrait of an electrocardiogram signal is reconstructed in a two-dimensional plane using the method of delays. Geometrical properties of the obtained phase-space portrait are exploited for QRS complex detection. In our solution, a bandpass filter is used for ECG signal prefiltering and an improved method for detection threshold-level calculation is utilized. We developed the algorithm on the MIT-BIH Arrhythmia Database (sensitivity of 99.82% and positive predictivity of 99.82%) and tested it on the long-term ST database (sensitivity of 99.72% and positive predictivity of 99.37%). Our algorithm outperforms several well-known QRS complex detection algorithms, including the original algorithm.
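
    The delay-coordinate idea itself is compact: embed the ECG as points (x[n], x[n-d]) and track how large and fast the resulting loops are. The sketch below is a bare method-of-delays detector on a synthetic spiky signal, without the paper's bandpass prefilter or improved threshold calculation:

    ```python
    import numpy as np

    def qrs_score(x, delay=8, window=32):
        """Smoothed step size of the delay-coordinate portrait (x[n], x[n-delay]).
        QRS complexes trace large, fast loops in the phase-space portrait, so
        this score peaks at each beat."""
        v = np.stack([x[delay:], x[:-delay]], axis=1)       # 2-D portrait points
        speed = np.linalg.norm(np.diff(v, axis=0), axis=1)  # trajectory step size
        return np.convolve(speed, np.ones(window) / window, mode="same")

    fs = 250
    t = np.arange(0, 4, 1 / fs)              # 4 s at 250 samples/s
    ecg = np.sin(2 * np.pi * t) ** 64        # spiky surrogate beats, 0.5 s apart
    delay = 8
    score = qrs_score(ecg, delay=delay)
    beats = np.flatnonzero(score > 0.5 * score.max())
    print(t[delay + 1:][beats][:5])          # sample times inside the first beats
    ```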

  5. Implementation of a partitioned algorithm for simulation of large CSI problems

    Science.gov (United States)

    Alvin, Kenneth F.; Park, K. C.

    1991-01-01

    The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.

  6. Signal processing for 5G algorithms and implementations

    CERN Document Server

    Luo, Fa-Long

    2016-01-01

    A comprehensive and invaluable guide to 5G technology, implementation and practice in one single volume. For all things 5G, this book is a must-read. Signal processing techniques have played the most important role in wireless communications since the second generation of cellular systems. It is anticipated that new techniques employed in 5G wireless networks will not only improve peak service rates significantly, but also enhance capacity, coverage, reliability, low latency, efficiency, flexibility, compatibility and convergence to meet the increasing demands imposed by applications such as big data, cloud services, machine-to-machine (M2M) and mission-critical communications. This book is a comprehensive and detailed guide to all signal processing techniques employed in 5G wireless networks. Uniquely organized into four categories, New Modulation and Coding, New Spatial Processing, New Spectrum Opportunities and New System-level Enabling Technologies, it covers everything from network architecture...

  7. A Novel Method to Implement the Matrix Pencil Super Resolution Algorithm for Indoor Positioning

    Directory of Open Access Journals (Sweden)

    Tariq Jamil Saifullah Khanzada

    2011-10-01

    Full Text Available This article presents the results of algorithms implemented to estimate the delays and distances for an indoor positioning system. Data sets for the transmitted and received signals were captured in typical outdoor and indoor areas. Different state-of-the-art and super resolution techniques were applied to obtain optimal estimates of the delays and distances between the transmitted and received signals, and a novel method for the Matrix Pencil algorithm was devised. The algorithms perform variably in different scenarios of transmitter and receiver positions. Two scenarios were examined: in the single-antenna scenario, super resolution techniques such as ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) and the Matrix Pencil algorithm give optimal performance compared to conventional techniques. In the two-antenna scenario, RootMUSIC and the Matrix Pencil algorithm performed better than the other algorithms for distance estimation; however, the accuracy of all the algorithms is worse than in the single-antenna scenario. In all cases our devised Matrix Pencil algorithm achieved the best estimation results.
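
    For reference, the classic (unmodified) matrix pencil method estimates the poles of a sum of complex exponentials from a windowed data matrix via an SVD and a shift-invariance eigenproblem; in ranging, the pole phases encode the delays. A textbook numpy sketch, not the authors' devised variant:

    ```python
    import numpy as np

    def matrix_pencil(x, n_poles):
        """Textbook matrix pencil estimation of the poles of a sum of
        complex exponentials (not the authors' modified variant)."""
        N = len(x)
        L = N // 3                                            # pencil parameter
        Y = np.array([x[i:i + L + 1] for i in range(N - L)])  # windowed data matrix
        _, _, Vh = np.linalg.svd(Y, full_matrices=False)
        V = Vh.T[:, :n_poles]        # basis of the signal (Vandermonde) subspace
        z = np.linalg.eigvals(np.linalg.pinv(V[:-1]) @ V[1:])  # shift invariance
        return z

    # Two complex exponentials; their phase slopes stand in for two delays.
    n = np.arange(64)
    x = np.exp(1j * 0.5 * n) + 0.7 * np.exp(1j * 1.3 * n)
    print(np.sort(np.angle(matrix_pencil(x, 2))))   # ~ [0.5, 1.3]
    ```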

  8. Implementation of an Algorithm for Prosthetic Joint Infection: Deviations and Problems.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Kanz, Karl-Georg; Pohlig, Florian; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; von Eisenhart-Rothe, Ruediger; Schauwecker, Johannes

    The outcome of revision surgery in arthroplasty is based on a precise diagnosis. In addition, the treatment varies based on whether the prosthetic failure is caused by aseptic or septic loosening. Algorithms can help to identify periprosthetic joint infections (PJI) and standardize diagnostic steps, however, algorithms tend to oversimplify the treatment of complex cases. We conducted a process analysis during the implementation of a PJI algorithm to determine problems and deviations associated with the implementation of this algorithm. Fifty patients who were treated after implementing a standardized algorithm were monitored retrospectively. Their treatment plans and diagnostic cascades were analyzed for deviations from the implemented algorithm. Each diagnostic procedure was recorded, compared with the algorithm, and evaluated statistically. We detected 52 deviations while treating 50 patients. In 25 cases, no discrepancy was observed. Synovial fluid aspiration was not performed in 31.8% of patients (95% confidence interval [CI], 18.1%-45.6%), while white blood cell counts (WBCs) and neutrophil differentiation were assessed in 54.5% of patients (95% CI, 39.8%-69.3%). We also observed that the prolonged incubation of cultures was not requested in 13.6% of patients (95% CI, 3.5%-23.8%). In seven of 13 cases (63.6%; 95% CI, 35.2%-92.1%), arthroscopic biopsy was performed; 6 arthroscopies were performed in discordance with the algorithm (12%; 95% CI, 3%-21%). Self-critical analysis of diagnostic processes and monitoring of deviations using algorithms are important and could increase the quality of treatment by revealing recurring faults.

  9. Quantum computation: algorithms and implementation in quantum dot devices

    Science.gov (United States)

    Gamble, John King

    In this thesis, we explore several aspects of both the software and hardware of quantum computation. First, we examine the computational power of multi-particle quantum random walks in terms of distinguishing mathematical graphs. We study both interacting and non-interacting multi-particle walks on strongly regular graphs, proving some limitations on distinguishing powers and presenting extensive numerical evidence indicating that interactions provide more distinguishing power. We then study the recently proposed adiabatic quantum algorithm for Google PageRank, and show that it exhibits power-law scaling for realistic WWW-like graphs. Turning to hardware, we next analyze the thermal physics of two nearby 2D electron gases (2DEGs), and show that an analogue of the Coulomb drag effect exists for heat transfer. At some distances and temperatures, this heat transfer is more significant than phonon dissipation channels. After that, we study the dephasing of two-electron states in a single silicon quantum dot. Specifically, we consider dephasing due to the electron-phonon coupling and charge noise, separately treating orbital and valley excitations. In an ideal system, dephasing due to charge noise is strongly suppressed due to a vanishing dipole moment. However, the introduction of disorder or anharmonicity leads to large effective dipole moments, and hence possibly strong dephasing. Building on this work, we next consider more realistic, structurally disordered systems. We present experiment and theory demonstrating energy levels that vary with quantum dot translation, implying a structurally disordered system. Finally, we turn to the issues of valley mixing and valley-orbit hybridization, which occur due to atomic-scale disorder at quantum well interfaces. We develop a new theoretical approach to study these effects, which we name the disorder-expansion technique. We demonstrate that this method successfully reproduces atomistic tight-binding techniques

  10. Implementing embedded artificial intelligence rules within algorithmic programming languages

    Science.gov (United States)

    Feyock, Stefan

    1988-01-01

    Most integrations of artificial intelligence (AI) capabilities with non-AI (usually FORTRAN-based) application programs require the latter to execute separately to run as a subprogram or, at best, as a coroutine, of the AI system. In many cases, this organization is unacceptable; instead, the requirement is for an AI facility that runs in embedded mode; i.e., is called as subprogram by the application program. The design and implementation of a Prolog-based AI capability that can be invoked in embedded mode are described. The significance of this system is twofold: Provision of Prolog-based symbol-manipulation and deduction facilities makes a powerful symbolic reasoning mechanism available to applications programs written in non-AI languages. The power of the deductive and non-procedural descriptive capabilities of Prolog, which allow the user to describe the problem to be solved, rather than the solution, is to a large extent vitiated by the absence of the standard control structures provided by other languages. Embedding invocations of Prolog rule bases in programs written in non-AI languages makes it possible to put Prolog calls inside DO loops and similar control constructs. The resulting merger of non-AI and AI languages thus results in a symbiotic system in which the advantages of both programming systems are retained, and their deficiencies largely remedied.

  11. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and range modifying device thickness and position are implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)
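
    The pencil kernel picture can be sketched as a superposition: at each depth, the lateral dose is the incident fluence convolved with a Gaussian whose width grows with depth according to multiple-scattering theory, weighted by a depth-dose curve. The sigma and depth-dose functions below are illustrative stand-ins, not the Fermi-Eyges/Moliere model implemented in Helax-TMS:

    ```python
    import numpy as np

    def pencil_kernel_dose(fluence, depths, sigma_of_depth, depth_dose):
        """Dose from a broad proton field as a superposition of pencil kernels:
        per depth, convolve the fluence with a depth-dependent Gaussian and
        weight by a depth-dose curve. Toy 2-D (depth x lateral) sketch."""
        x = np.arange(fluence.size) - fluence.size // 2
        dose = np.zeros((depths.size, fluence.size))
        for i, d in enumerate(depths):
            s = sigma_of_depth(d)
            kernel = np.exp(-0.5 * (x / s) ** 2)
            kernel /= kernel.sum()
            dose[i] = depth_dose(d) * np.convolve(fluence, kernel, mode="same")
        return dose

    depths = np.linspace(0, 150, 151)                            # mm
    fluence = (np.abs(np.arange(101) - 50) < 30).astype(float)   # 60 mm field
    dose = pencil_kernel_dose(
        fluence, depths,
        sigma_of_depth=lambda d: 1.0 + 0.03 * d,                 # widening with depth
        depth_dose=lambda d: 1.0 + 3.0 * np.exp(-0.5 * ((d - 130) / 8) ** 2))
    print(dose.shape, dose.max())
    ```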

  12. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by compressing the signal. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it is necessary to implement it on post-stack or pre-stack seismic data of complex structure regions.

  13. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    Science.gov (United States)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.

  14. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement that takes into account nearly all characteristics of the buildings, of the detection and recognition facilities, and of the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of tasks. The project objective is to develop the principal elements of an algorithm for the recognition of a moving object detected by several cameras. The images obtained by different cameras will be processed, and the parameters of motion identified, to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in assessing the complexity of the camera placement algorithm, identifying cases of inaccurate algorithm implementation, and formulating supplementary requirements and input data by means of the intersection of sectors covered by neighbouring cameras. The project also contemplates the identification of potential problems during the development of a physical security and monitoring system at the design and testing stages. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and premises of irregular dimensions. The

  15. Infinitely oscillating wavelets and an efficient implementation algorithm based on the FFT

    Directory of Open Access Journals (Sweden)

    Marcela Fabio

    2015-01-01

    Full Text Available In this work we present the design of an orthogonal wavelet that is infinitely oscillating, localized in time with decay 1/|t|^n, and band-limited. Its application leads to the decomposition of the signal into waves of instantaneous, well-defined frequency. We also present the implementation algorithm for the analysis and synthesis based on the Fast Fourier Transform (FFT), with the same complexity as Mallat's algorithm.

  16. Quantum computation with classical light: Implementation of the Deutsch–Jozsa algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Garcia, Benjamin [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); McLaren, Melanie [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Goyal, Sandeep K. [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); Institute of Quantum Science and Technology, University of Calgary, Alberta T2N 1N4 (Canada); Hernandez-Aranda, Raul I. [Photonics and Mathematical Optics Group, Tecnológico de Monterrey, Monterrey 64849 (Mexico); Forbes, Andrew [University of the Witwatersrand, Private Bag 3, Johannesburg 2050 (South Africa); Konrad, Thomas, E-mail: konradt@ukzn.ac.za [School of Chemistry and Physics, University of KwaZulu-Natal, Private Bag X54001, Durban 4000 (South Africa); National Institute of Theoretical Physics, Durban Node, Private Bag X54001, Durban 4000 (South Africa)

    2016-05-20

    Highlights: • An implementation of the Deutsch–Jozsa algorithm using classical optics is proposed. • Constant and certain balanced functions can be encoded and distinguished efficiently. • The encoding and the detection process does not require to access single path qubits. • While the scheme might be scalable in principle, it might not be in practice. • We suggest a generalisation of the Deutsch–Jozsa algorithm and its implementation. - Abstract: We propose an optical implementation of the Deutsch–Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.
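
    The decision at the heart of the algorithm — constant versus balanced with a single oracle query — is straightforward to simulate classically for small n. Below is our own numpy sketch of the standard gate-model Deutsch-Jozsa circuit in its phase-oracle form, not the ring-cavity optical scheme proposed in the paper:

    ```python
    import numpy as np

    def hadamard_n(n):
        """n-qubit Hadamard as a Kronecker product of 2x2 Hadamards."""
        H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
        H = np.array([[1.0]])
        for _ in range(n):
            H = np.kron(H, H1)
        return H

    def deutsch_jozsa(f, n):
        """Decide whether f: {0,..,2^n-1} -> {0,1} is constant or balanced
        with one oracle query, using the phase-kickback form of the oracle."""
        N = 2 ** n
        H = hadamard_n(n)
        state = H @ np.eye(N)[0]                 # H|0...0>: uniform superposition
        state = (-1.0) ** np.array([f(x) for x in range(N)]) * state  # oracle
        state = H @ state                        # final Hadamards
        p0 = abs(state[0]) ** 2                  # probability of measuring |0...0>
        return "constant" if p0 > 0.5 else "balanced"

    print(deutsch_jozsa(lambda x: 0, 3))         # constant -> p0 = 1
    print(deutsch_jozsa(lambda x: x & 1, 3))     # balanced -> p0 = 0
    ```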

  17. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

    International Nuclear Information System (INIS)

    Sheikh, N.M.; Usman, S.R.; Fatima, S.

    2002-01-01

    Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research in bit rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful and complex techniques for bit rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit rate reduction of 1:3 is achieved for better than toll quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
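
    The core of an LPC coder is the per-frame computation of predictor coefficients, typically via the autocorrelation method and the Levinson-Durbin recursion. A minimal numpy sketch at the paper's 8192 Hz sampling rate (the frame length, window, and order are our assumptions):

    ```python
    import numpy as np

    def lpc(frame, order):
        """LPC coefficients via the autocorrelation method + Levinson-Durbin."""
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
            a[1:i] += k * a[i - 1:0:-1]
            a[i] = k
            err *= 1 - k * k
        return a, err

    fs = 8192                                   # the paper's sampling rate
    t = np.arange(0, 0.032, 1 / fs)             # one 32 ms frame
    frame = (np.sin(2 * np.pi * 440 * t)
             + 0.1 * np.random.default_rng(0).normal(size=t.size))
    a, err = lpc(frame * np.hamming(t.size), order=10)
    print(a, err)                               # predictor coefficients + residual power
    ```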

  18. Quantum computation with classical light: Implementation of the Deutsch–Jozsa algorithm

    International Nuclear Information System (INIS)

    Perez-Garcia, Benjamin; McLaren, Melanie; Goyal, Sandeep K.; Hernandez-Aranda, Raul I.; Forbes, Andrew; Konrad, Thomas

    2016-01-01

    Highlights: • An implementation of the Deutsch–Jozsa algorithm using classical optics is proposed. • Constant and certain balanced functions can be encoded and distinguished efficiently. • The encoding and the detection process does not require to access single path qubits. • While the scheme might be scalable in principle, it might not be in practice. • We suggest a generalisation of the Deutsch–Jozsa algorithm and its implementation. - Abstract: We propose an optical implementation of the Deutsch–Jozsa Algorithm using classical light in a binary decision-tree scheme. Our approach uses a ring cavity and linear optical devices in order to efficiently query the oracle functional values. In addition, we take advantage of the intrinsic Fourier transforming properties of a lens to read out whether the function given by the oracle is balanced or constant.

  19. Improved implementation algorithms of the two-dimensional nonseparable linear canonical transform.

    Science.gov (United States)

    Ding, Jian-Jiun; Pei, Soo-Chang; Liu, Chun-Lin

    2012-08-01

    The two-dimensional nonseparable linear canonical transform (2D NSLCT), which is a generalization of the fractional Fourier transform and the linear canonical transform, is useful for analyzing optical systems. However, since the 2D NSLCT has 16 parameters and is very complicated, implementing it efficiently is a great challenge. In this paper, we improve on previous work and propose an efficient way to implement the 2D NSLCT. The proposed algorithm minimizes the numerical error arising from interpolation operations and requires fewer chirp multiplications. Simulation results show that, compared with the existing algorithm, the proposed algorithms implement the 2D NSLCT more accurately and with less computation time.

  20. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
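
    Basic (non-adaptive) pulse compression is matched filtering, which the GPU version performs with batched FFTs; the structure is easy to show in numpy. The LFM chirp parameters below are illustrative, not those of a specific radar in the study:

    ```python
    import numpy as np

    def pulse_compress(rx, chirp):
        """FFT-based matched filtering (basic pulse compression). A CUDA
        version would do the same with cuFFT batches; this is a numpy sketch."""
        n = len(rx) + len(chirp) - 1              # zero-pad to linear correlation
        spec = np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n))
        return np.fft.ifft(spec)

    fs, T, B = 1e6, 100e-6, 200e3                 # illustrative LFM parameters
    t = np.arange(0, T, 1 / fs)
    chirp = np.exp(1j * np.pi * (B / T) * t ** 2) # linear FM pulse
    rx = np.zeros(1000, dtype=complex)
    rx[300:300 + t.size] += chirp                 # echo at sample 300
    print(np.argmax(np.abs(pulse_compress(rx, chirp))))  # prints 300: echo delay
    ```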

  1. Real-time recursive hyperspectral sample and band processing algorithm architecture and implementation

    CERN Document Server

    Chang, Chein-I

    2017-01-01

    This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.

  2. AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.

    Science.gov (United States)

    Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld

    2016-08-01

    There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge, due to source code changes over time and to dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory of existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code, under GPL license, is available at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Implementations of PI-line based FBP and BPF algorithms on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Le [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Xing, Yuxiang [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Ministry of Education, Beijing (China). Key Lab. of Particle and Radiation Imaging

    2011-07-01

    Exact reconstruction is under the spotlight in cone beam CT. Katsevich put forward the first exact inversion formula for helical cone beam CT, which is of FBP type. Pan Xiaochuan's group proposed another PI-line based exact reconstruction algorithm, of BPF type. These two exact reconstruction algorithms and their derivative forms have been widely studied. In this paper, we present a different way of selecting PI-line segments appropriate for both Katsevich's FBP and Pan Xiaochuan's BPF algorithms. Since 3D reconstruction involves massive computation and takes a long time, efforts have been made to speed up the algorithms with the help of multi-core CPUs and GPGPUs (General Purpose Graphics Processing Units). This paper also presents implementations of these two algorithms on a GPGPU using the innovative way of selecting PI-line segments. Acceleration techniques and implementations are addressed in detail. The methods are tested on the Shepp-Logan phantom. Compared with our CPU implementations, the accelerated algorithms on the GPGPU are tens to hundreds of times faster. (orig.)

  4. An Implementable First-Order Primal-Dual Algorithm for Structured Convex Optimization

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available Many application problems of practical interest can be posed as structured convex optimization models. In this paper, we study a new first-order primal-dual algorithm. The method is easily implementable, provided that the resolvent operators of the component objective functions are simple to evaluate. We show that the proposed method can be interpreted as a proximal point algorithm with a customized metric proximal parameter. The convergence property is established under the analytic contraction framework. Finally, we verify the efficiency of the algorithm by solving the stable principal component pursuit problem.

  5. IMPLEMENTATION OF IMAGE PROCESSING ALGORITHMS AND GLVQ TO TRACK AN OBJECT USING AR.DRONE CAMERA

    Directory of Open Access Journals (Sweden)

    Muhammad Nanda Kurniawan

    2014-08-01

    Full Text Available Abstract In this research, a Parrot AR.Drone Unmanned Aerial Vehicle (UAV) was used to track an object from above. The development of this system utilized functions from the OpenCV library and the Robot Operating System (ROS). Techniques implemented in the system are an image processing algorithm (Centroid-Contour Distance, CCD), a feature extraction algorithm (Principal Component Analysis, PCA) and an artificial neural network algorithm (Generalized Learning Vector Quantization, GLVQ). The final result of this research is a program for the AR.Drone to track a moving object on the floor with a fast response time, under 1 second.

  6. Experimental implementation of a quantum random-walk search algorithm using strongly dipolar coupled spins

    International Nuclear Information System (INIS)

    Lu Dawei; Peng Xinhua; Du Jiangfeng; Zhu Jing; Zou Ping; Yu Yihua; Zhang Shanmin; Chen Qun

    2010-01-01

    An important quantum search algorithm based on the quantum random walk performs an oracle search on a database of N items with O(√N) calls, yielding a speedup similar to the Grover quantum search algorithm. The algorithm was implemented on a quantum information processor of three-qubit liquid-crystal nuclear magnetic resonance (NMR) in the case of finding 1 out of 4, and the tomography of the diagonal elements of all the final density matrices was completed with comprehensible one-dimensional NMR spectra. The experimental results agree well with the theoretical predictions.

  7. A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm

    Directory of Open Access Journals (Sweden)

    Mariana-Eugenia Ilas

    2018-03-01

    Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG) algorithm. The new method replaces the arctangent with a slope computation, and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs), by considerably reducing the area (thus increasing the level of parallelism), while maintaining classification accuracy very close to that of the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
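
    The arctangent-free binning can be illustrated directly: fold the unsigned gradient into (-90°, 90°], where the tangent is monotonic, and compare the slope gy/gx against precomputed tan() boundaries with a sorted search. The boundary placement below is our assumption of how such a scheme looks, not the paper's exact method:

    ```python
    import numpy as np

    def hog_bins_slope(gx, gy, n_bins=9):
        """Unsigned-orientation binning without per-pixel arctan: fold the
        gradient into (-90, 90] degrees, then locate the slope gy/gx among
        precomputed tan() bin boundaries (tan is monotonic there, so a
        sorted search suffices)."""
        flip = gx < 0
        gx = np.where(flip, -gx, gx)
        gy = np.where(flip, -gy, gy)
        slope = gy / np.maximum(gx, 1e-12)        # gx >= 0 now; avoid divide by 0
        step = 180.0 / n_bins
        bounds = np.tan(np.deg2rad(np.arange(-90 + step, 90, step)))
        return np.searchsorted(bounds, slope)     # bin index in [0, n_bins-1]

    gx = np.array([1.0, 1.0, 0.0])
    gy = np.array([0.0, 1.0, 1.0])                # gradients at 0, 45, 90 degrees
    print(hog_bins_slope(gx, gy))                 # -> [4 6 8]
    ```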

  8. PCIU: Hardware Implementations of an Efficient Packet Classification Algorithm with an Incremental Update Capability

    Directory of Open Access Journals (Sweden)

    O. Ahmed

    2011-01-01

    Full Text Available Packet classification plays a crucial role in a number of network services such as policy-based routing, firewalls, and traffic billing, to name a few. However, classification can be a bottleneck in the above-mentioned applications if not implemented properly and efficiently. In this paper, we propose PCIU, a novel classification algorithm, which improves upon previously published work. PCIU provides lower preprocessing time, lower memory consumption, ease of incremental rule update, and reasonable classification time compared to state-of-the-art algorithms. The proposed algorithm was evaluated and compared to RFC and HiCut using several benchmarks. Results obtained indicate that PCIU outperforms these algorithms in terms of speed, memory usage, incremental update capability, and preprocessing time. The algorithm, furthermore, was improved and made more accessible for a variety of applications through implementation in hardware. Two such implementations are detailed and discussed in this paper. The results indicate that a hardware/software codesign approach results in a slower PCIU solution, but one that is easier to optimize and improve within time constraints. A hardware accelerator based on an ESL approach using Handel-C, on the other hand, resulted in a 31x speed-up over a pure software implementation running on a state-of-the-art Xeon processor.

  9. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    Science.gov (United States)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

    Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures than SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures than the PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm, subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm, using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performance of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon

  10. Caliko: An Inverse Kinematics Software Library Implementation of the FABRIK Algorithm

    Directory of Open Access Journals (Sweden)

    Alastair Lansley

    2016-09-01

    Full Text Available The Caliko library is an implementation of the FABRIK (Forward And Backward Reaching Inverse Kinematics) algorithm written in Java. The inverse kinematics (IK) algorithm is implemented in both 2D and 3D, and incorporates a variety of joint constraints as well as the ability to connect multiple IK chains together in a hierarchy. The library allows for the simple creation and solving of multiple IK chains as well as visualisation of these solutions. It is licensed under the MIT software license and the source code is freely available for use and modification at: https://github.com/feduni/caliko
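
    FABRIK itself is short enough to sketch: alternate a forward-reaching pass (place the tip on the target and work back toward the base) with a backward-reaching pass (re-fix the base and work out to the tip), rescaling each bone to its stored length. A minimal unconstrained 2-D version in Python (the Caliko library adds joint constraints, 3D support, and chain hierarchies):

    ```python
    import numpy as np

    def fabrik(joints, target, tol=1e-4, max_iter=100):
        """Minimal 2-D FABRIK solver for a single unconstrained chain."""
        joints = [np.asarray(j, float) for j in joints]
        lengths = [np.linalg.norm(joints[i + 1] - joints[i])
                   for i in range(len(joints) - 1)]
        base = joints[0].copy()
        target = np.asarray(target, float)
        if np.linalg.norm(target - base) > sum(lengths):  # unreachable: stretch out
            direction = (target - base) / np.linalg.norm(target - base)
            for i in range(1, len(joints)):
                joints[i] = joints[i - 1] + lengths[i - 1] * direction
            return joints
        for _ in range(max_iter):
            joints[-1] = target                      # forward reaching (tip -> base)
            for i in range(len(joints) - 2, -1, -1):
                d = joints[i] - joints[i + 1]
                joints[i] = joints[i + 1] + lengths[i] * d / np.linalg.norm(d)
            joints[0] = base                         # backward reaching (base -> tip)
            for i in range(1, len(joints)):
                d = joints[i] - joints[i - 1]
                joints[i] = joints[i - 1] + lengths[i - 1] * d / np.linalg.norm(d)
            if np.linalg.norm(joints[-1] - target) < tol:
                break
        return joints

    chain = [(0, 0), (1, 0), (2, 0), (3, 0)]
    print(fabrik(chain, target=(2.0, 1.5))[-1])      # end effector near the target
    ```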

  11. FPGA implementation of ICA algorithm for blind signal separation and adaptive noise canceling.

    Science.gov (United States)

    Kim, Chang-Min; Park, Hyung-Min; Kim, Taesu; Choi, Yoon-Kyung; Lee, Soo-Young

    2003-01-01

    A field-programmable gate array (FPGA) implementation of an independent component analysis (ICA) algorithm is reported for blind signal separation (BSS) and adaptive noise canceling (ANC) in real time. In order to provide the enormous computing power required by ICA-based algorithms with multipath reverberation, a special digital processor is designed and implemented in an FPGA. The chip design fully utilizes a modular concept, and several chips may be put together for complex applications with a large number of noise sources. Experimental results with a fabricated test board are reported for ANC only, BSS only, and simultaneous ANC/BSS, demonstrating successful speech enhancement in real environments in real time.

  12. Read-only-memory-based quantum computation: Experimental explorations using nuclear magnetic resonance and future prospects

    International Nuclear Information System (INIS)

    Sypher, D.R.; Brereton, I.M.; Wiseman, H.M.; Hollis, B.L.; Travaglione, B.C.

    2002-01-01

    Read-only-memory-based (ROM-based) quantum computation (QC) is an alternative to oracle-based QC. It has the advantages of being less 'magical', and being more suited to implementing space-efficient computation (i.e., computation using the minimum number of writable qubits). Here we consider a number of small (one- and two-qubit) quantum algorithms illustrating different aspects of ROM-based QC. They are: (a) a one-qubit algorithm to solve the Deutsch problem; (b) a one-qubit binary multiplication algorithm; (c) a two-qubit controlled binary multiplication algorithm; and (d) a two-qubit ROM-based version of the Deutsch-Jozsa algorithm. For each algorithm we present experimental verification using nuclear magnetic resonance ensemble QC. The average fidelities for the implementation were in the ranges 0.9-0.97 for the one-qubit algorithms, and 0.84-0.94 for the two-qubit algorithms. We conclude with a discussion of future prospects for ROM-based quantum computation. We propose a four-qubit algorithm, using Grover's iterate, for solving a miniature 'real-world' problem relating to the lengths of paths in a network

  13. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    International Nuclear Information System (INIS)

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-01-01

    Filtered-backprojection algorithms are the most widely used approaches for reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (position and values) of adjacent pixels, the incremental algorithm requires only O(N) and O(N^2) multiplications, in contrast to O(N^2) and O(N^3) multiplications for the Shepp and Logan algorithm in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is the inevitable visiting of pixels outside the beam in the searching flow scheme originally developed for the incremental algorithm. To optimize the implementation of the incremental algorithm, an efficient scheme, namely a coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead, using the coded searching flow as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45-2.0 times faster than the original searching flow scheme for most cases tested

  14. Scheduling of Iterative Algorithms with Matrix Operations for Efficient FPGA Design—Implementation of Finite Interval Constant Modulus Algorithm

    Czech Academy of Sciences Publication Activity Database

    Šůcha, P.; Hanzálek, Z.; Heřmánek, Antonín; Schier, Jan

    2007-01-01

    Roč. 46, č. 1 (2007), s. 35-53 ISSN 0922-5773 R&D Projects: GA AV ČR(CZ) 1ET300750402; GA MŠk(CZ) 1M0567; GA MPO(CZ) FD-K3/082 Institutional research plan: CEZ:AV0Z10750506 Keywords : high-level synthesis * cyclic scheduling * iterative algorithms * imperfectly nested loops * integer linear programming * FPGA * VLSI design * blind equalization * implementation Subject RIV: BA - General Mathematics Impact factor: 0.449, year: 2007 http://www.springerlink.com/content/t217kg0822538014/fulltext.pdf

  15. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, EM-based PET image reconstruction is computationally burdensome for today's single-processor systems. In addition, a large memory is required for the storage of the image, projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable to a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and substitution of the latest DSP chip is straightforward and could yield better speed performance.
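
    The serial EM (MLEM) update that the linear DSP array parallelizes can be sketched compactly; the toy system matrix below stands in for the real probability matrix.

```python
# Serial sketch of the EM (MLEM) update the record parallelizes:
# lambda <- lambda / (A^T 1) * A^T ( y / (A lambda) ).
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((128, 64))          # toy system (probability) matrix
x_true = rng.random(64)
y = A @ x_true                     # noiseless projections for the demo

x = np.ones(64)                    # uniform initial image
sens = A.T @ np.ones(128)          # sensitivity image A^T 1
for _ in range(50):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens

print(float(np.abs(x - x_true).mean()))   # error shrinks with iterations
```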

  16. Implementation Aspects of a Flexible Frequency Spectrum Usage Algorithm for Cognitive OFDM Systems

    DEFF Research Database (Denmark)

    Sacchi, Claudio; Tonelli, Oscar; Cattoni, Andrea Fabio

    2011-01-01

    time on a shared spectrum chunk, emphasizes the role of resource allocation as a critical system design issue. This work is aimed at analyzing the practical issues related to the Software Defined Radio (SDR)-based implementation of a dynamic spectrum allocation algorithm, designed for OFDM...... on a Xilinx ML506 development board is performed. The main novelty proposed in this paper consists in the SDR-based implementation of a computationally-sustainable resource allocation algorithm for FSU on low-cost commercial FPGA platforms. The proposed implementation is competitive with respect to other ones...... on a Virtex 5 FPGA. Experimental results will illustrate that the selected core functionalities are effectively implementable with around 3% or less of the total FPGA computing resources....

  17. An Implementation and Detailed Analysis of the K-SVD Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2012-05-01

    Full Text Available K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
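
    A compact sketch of the core K-SVD iteration (sparse coding followed by one-atom-at-a-time SVD updates) is given below. It omits the patch extraction and aggregation stages of the denoising pipeline, and uses scikit-learn's orthogonal matching pursuit for the coding step.

```python
# K-SVD sketch: alternate sparse coding (OMP) with per-atom rank-1 updates
# via the SVD of the residual restricted to the atom's support.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
    for _ in range(n_iter):
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)  # sparse coding
        for k in range(n_atoms):
            idx = np.nonzero(X[k])[0]                  # signals using atom k
            if idx.size == 0:
                continue
            # residual without atom k's contribution, on its support only
            E = Y[:, idx] - D @ X[:, idx] + np.outer(D[:, k], X[k, idx])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                          # updated atom
            X[k, idx] = s[0] * Vt[0]                   # updated coefficients
    return D, X

Y = np.random.default_rng(1).standard_normal((20, 500))   # toy training signals
D, X = ksvd(Y, n_atoms=40, sparsity=3)
print(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))       # relative residual
```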

  18. Modified SURF Algorithm Implementation on FPGA For Real-Time Object Tracking

    Directory of Open Access Journals (Sweden)

    Tomyslav Sledevič

    2013-05-01

    Full Text Available The paper describes the FPGA-based implementation of the modified speeded-up robust features (SURF) algorithm. FPGA was selected for parallel process implementation using VHDL to ensure feature extraction in real time. A sliding 84×84 window was used to store integral pixels and accelerate the Hessian determinant calculation, orientation assignment and descriptor estimation. Local extrema search was used to find points of interest in 8 scales. The simplified descriptor and orientation vector were calculated in parallel in 6 scales. The algorithm was investigated by tracking a marker and drawing a plane or cube. All parts of the algorithm worked on a 25 MHz clock. The video stream was generated using a 60 fps and 640×480 pixel camera. Article in Lithuanian.

  19. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, the algorithms for the numerical-analytical boundary elements method had been implemented as programs written in the MATLAB environment language. Each program had a local character, i.e., it was used to solve a particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out “manually” and was therefore time-consuming. The research was aimed at a reasoned choice of programming language for the development of a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and the calculation results. The research conducted shows that among the wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method algorithm is Java. This language provides tools not only for the development of the calculating part of the CAD system, but also for building the graphic interface for geometrical model construction and interpretation of the calculated results.

  20. Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs

    Directory of Open Access Journals (Sweden)

    Gene Frantz

    2007-01-01

    Full Text Available Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating point representation. The system implementation is faced with practical constraints because these algorithms usually need to run in real time on fixed point digital signal processors (DSPs to reduce total hardware costs. Converting the simulation model to fixed point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating point to fixed point arithmetic, and compared real-time requirements and performance between the fixed point DSP and floating point DSP algorithm implementations. We also introduce an advanced code optimization and an implementation by DSP-specific, fixed point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating point emulation on fixed point hardware.
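
    The heart of such a conversion is choosing a fixed-point format and rescaling every multiply; a minimal sketch in Q15, a common 16-bit DSP format used here purely for illustration:

```python
# Q15 fixed-point sketch: converting floats and multiplying with a
# right-shift, as done when porting float algorithms to fixed-point DSPs.
def to_q15(x: float) -> int:
    """Quantize x in [-1, 1) to a 16-bit two's-complement Q15 integer."""
    return max(-32768, min(32767, int(round(x * 32768))))

def q15_mul(a: int, b: int) -> int:
    """Fixed-point multiply: 16x16 -> 32-bit product, shifted back to Q15."""
    return (a * b) >> 15

a, b = to_q15(0.5), to_q15(-0.25)
print(q15_mul(a, b) / 32768.0)    # -0.125, within quantization error
```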

  1. Implementation of the CA-CFAR algorithm for pulsed-doppler radar on a GPU architecture

    CSIR Research Space (South Africa)

    Venter, CJ

    2011-12-01

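    Cell-averaging CFAR estimates the local noise floor for each cell under test from the mean of surrounding training cells (guard cells excluded) and declares a detection when the cell exceeds a scaled threshold. A minimal serial 1-D sketch follows; the GPU implementation parallelizes this window across all cells. The parameter values are illustrative.

```python
# Minimal 1-D cell-averaging CFAR: the noise level for each cell under
# test is the mean of surrounding training cells (guard cells excluded).
import numpy as np

def ca_cfar(x, n_train=8, n_guard=2, scale=10.0):
    hits = np.zeros_like(x, dtype=bool)
    k = n_train + n_guard
    for i in range(k, len(x) - k):
        lead = x[i - k : i - n_guard]            # leading training cells
        lag = x[i + n_guard + 1 : i + k + 1]     # lagging training cells
        noise = np.mean(np.r_[lead, lag])
        hits[i] = x[i] > scale * noise
    return hits

rng = np.random.default_rng(2)
signal = rng.exponential(1.0, 200)     # noise-like power samples
signal[100] = 30.0                     # injected target
print(np.flatnonzero(ca_cfar(signal))) # flags the cell at index 100
```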

  2. FPGA Based Low Power DES Algorithm Design And Implementation using HTML Technology

    DEFF Research Database (Denmark)

    Thind, Vandana; Pandey, Bishwajeet; Kalia, Kartik

    2016-01-01

    In this work, we have performed a power analysis of the DES algorithm implemented on a 28 nm FPGA using HTML (H-HSUL, T-TTL, M-MOBILE_DDR, L-LVCMOS) technology. In this research, we have used the high-performance Xilinx ISE software, in which we have selected four different IO standards, i.e., MOBILE_DDR, HSUL...

  3. VHDL Implementation of Feature-Extraction Algorithm for the PANDA Electromagnetic Calorimeter

    NARCIS (Netherlands)

    Kavatsyuk, M.; Guliyev, E.; Lemmens, P. J. J.; Löhner, H.; Tambave, G.

    2010-01-01

    The feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA detector at the future FAIR facility, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The use of modified firmware with the running on-line

  4. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NARCIS (Netherlands)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.

    2012-01-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source-code is available as an

  5. An implementation of a data-transmission pipelining algorithm on Imote2 platforms

    Science.gov (United States)

    Li, Xu; Dorvash, Siavash; Cheng, Liang; Pakzad, Shamim

    2011-04-01

    Over the past several years, wireless network systems and sensing technologies have developed significantly. This has resulted in the broad application of wireless sensor networks (WSNs) in many engineering fields, and in particular structural health monitoring (SHM). The movement of traditional SHM toward the new generation of SHM, which utilizes WSNs, relies on the advantages of this new approach, such as relatively low costs, ease of implementation, and the capability of onboard data processing and management. In the particular case of long-span bridge monitoring, a WSN should be capable of transmitting commands and measurement data over long network geometry in a reliable manner. While using single-hop data transmission in such geometry requires a long radio range and consequently a high level of power supply, multi-hop communication may offer an effective and reliable way for data transmission across the network. Using a multi-hop communication protocol, the network relays data from a remote node to the base station via intermediary nodes. We have proposed a data-transmission pipelining algorithm to make effective use of the available bandwidth and to minimize the energy consumption and delay of the multi-hop communication protocol. This paper focuses on the implementation aspects of the pipelining algorithm on Imote2 platforms for SHM applications, describes its interaction with underlying routing protocols, and presents solutions to various implementation issues of the proposed pipelining algorithm. Finally, the performance of the algorithm is evaluated based on the results of an experimental implementation.

  6. Implementation of an Evidence-Based Seizure Algorithm in Intellectual Disability Nursing: A Pilot Study

    Science.gov (United States)

    Auberry, Kathy; Cullen, Deborah

    2016-01-01

    Based on the results of the Surrogate Decision-Making Self Efficacy Scale (Lopez, 2009a), this study sought to determine whether nurses working in the field of intellectual disability (ID) experience increased confidence when they implemented the American Association of Neuroscience Nurses (AANN) Seizure Algorithm during telephone triage. The…

  7. A hybrid Genetic and Simulated Annealing Algorithm for Chordal Ring implementation in large-scale networks

    DEFF Research Database (Denmark)

    Riaz, M. Tahir; Gutierrez Lopez, Jose Manuel; Pedersen, Jens Myrup

    2011-01-01

    The paper presents a hybrid Genetic and Simulated Annealing algorithm for implementing the Chordal Ring structure in optical backbone networks. In recent years, topologies based on regular graph structures have gained a lot of interest due to their good communication properties for the physical topology of the...

  8. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. A sample source code is given.

  9. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    International Nuclear Information System (INIS)

    Li Yupeng; Deutsch, Clayton V.

    2012-01-01

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate, using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
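
    The fitting step itself is classical. A minimal two-dimensional illustration, alternately rescaling an initial table to match imposed marginals, is given below; the article's version operates on higher-order multivariate probabilities with sparse-matrix marginalization.

```python
# Iterative proportional fitting on a 2-D probability table: alternately
# rescale rows and columns to match the imposed marginal constraints.
import numpy as np

def ipf(p0, row_marginal, col_marginal, n_iter=100):
    p = p0.copy()
    for _ in range(n_iter):
        p *= (row_marginal / p.sum(axis=1))[:, None]   # fit row sums
        p *= (col_marginal / p.sum(axis=0))[None, :]   # fit column sums
    return p

p0 = np.full((3, 3), 1 / 9)                 # initial estimate
rows = np.array([0.5, 0.3, 0.2])            # target marginals
cols = np.array([0.2, 0.2, 0.6])
p = ipf(p0, rows, cols)
print(p.sum(axis=1), p.sum(axis=0))         # matches rows and cols
```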

  10. Implementation of a parallel algorithm for spherical SN calculations on the IBM 3090

    International Nuclear Information System (INIS)

    Haghighat, A.; Lawrence, R.D.

    1989-01-01

    Parallel SN algorithms based on domain decomposition in angle are straightforward to develop in Cartesian geometry because the computation of the angular fluxes for a specific discrete ordinate can be performed independently of all other angles. This is not the case for curvilinear geometries, where the angular redistribution component of the discretized streaming operator results in coupling between angular fluxes along adjacent discrete ordinates. Previously, the authors developed a parallel algorithm for SN calculations in spherical geometry and examined its iterative convergence for criticality and detector problems with differing scattering/absorption ratios. In this paper, the authors describe the implementation of the algorithm on an IBM 3090 Model 400 (four processors) and present computational results illustrating the efficiency of the algorithm relative to serial execution.

  11. An Improved Fuzzy C-Means Algorithm for the Implementation of Demand Side Management Measures

    Directory of Open Access Journals (Sweden)

    Ioannis Panapakidis

    2017-09-01

    Full Text Available Load profiling refers to a procedure that leads to the formulation of daily load curves and consumer classes regarding the similarity of the curve shapes. This procedure incorporates a set of unsupervised machine learning algorithms. While many crisp clustering algorithms have been proposed for grouping load curves into clusters, only one soft clustering algorithm is utilized for the aforementioned purpose, namely the Fuzzy C-Means (FCM) algorithm. Since the benefits of soft clustering are demonstrated in a variety of applications, the potential of introducing a novel modification of the FCM in the electricity consumer clustering process is examined. Additionally, this paper proposes a novel Demand Side Management (DSM) strategy for load management of consumers that are eligible for the implementation of Real-Time Pricing (RTP) schemes. The DSM strategy is formulated as a constrained optimization problem that can be easily solved, therefore making it a useful tool for retailers' decision-making framework in competitive electricity markets.
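
    For reference, the standard FCM iteration (alternating weighted-centroid and membership updates) can be sketched as follows; this is the textbook algorithm, not the paper's novel modification, and the demo data are synthetic.

```python
# Minimal Fuzzy C-Means: soft memberships u_ik weight every point in
# every cluster, unlike crisp clustering.
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                # memberships sum to 1
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # u_ik proportional to d_ik^(-2/(m-1)), normalized over clusters
        U = d ** (-2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, (50, 2))
               for mu in (0, 3, 6)])
centers, U = fcm(X)
print(np.round(centers, 2))   # roughly (0,0), (3,3), (6,6) in some order
```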

  12. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are called finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignment of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which is carried out using techniques such as pair-database creation, normalization (with Maximum-Likelihood normalization) and parameter optimization (with Expectation-Maximization - EM). These techniques are intrinsically costly to compute, even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were carried out with the parallel and sequential algorithms on Westgrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, because execution times are reduced considerably when the data size parameter is increased. Another experiment varied the precision parameter; in this case, we obtained smaller execution times using the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied; in this last experiment, the speedup increased considerably when more threads were used, but converged for 16 or more threads.

  13. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  14. A multithreaded parallel implementation of a dynamic programming algorithm for sequence comparison.

    Science.gov (United States)

    Martins, W S; Del Cuvillo, J B; Useche, F J; Theobald, K B; Gao, G R

    2001-01-01

    This paper discusses the issues involved in implementing a dynamic programming algorithm for biological sequence comparison on a general-purpose parallel computing platform based on a fine-grain event-driven multithreaded program execution model. Fine-grain multithreading permits efficient parallelism exploitation in this application both by taking advantage of asynchronous point-to-point synchronizations and communication with low overheads and by effectively tolerating latency through the overlapping of computation and communication. We have implemented our scheme on EARTH, a fine-grain event-driven multithreaded execution and architecture model which has been ported to a number of parallel machines with off-the-shelf processors. Our experimental results show that the dynamic programming algorithm can be efficiently implemented on EARTH systems with high performance (e.g., speedup of 90 on 120 nodes), good programmability and reasonable cost.
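
    The underlying recurrence is the classic sequence-comparison DP. A serial Smith-Waterman-style scoring sketch is shown below; the fine-grain parallelism comes from the fact that cells on the same anti-diagonal are independent of one another.

```python
# Serial sketch of the sequence-comparison DP recurrence (Smith-Waterman
# local-alignment score); a parallel version can evaluate each
# anti-diagonal concurrently, since H[i][j] depends only on the
# previous row and the previous column.
def sw_score(a, b, match=2, mismatch=-1, gap=-1):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(sw_score("GATTACA", "GCATGCA"))   # small demo alignment score
```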

  15. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    Science.gov (United States)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    Efficient algorithms for blind image deconvolution and their high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, covering both the algorithm structure and the numerical calculation methods. The main optimizations are: modularization of the structure for implementation feasibility, reduction of the data computation and dependency of the 2D-FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is constructed using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system is over 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time applications.

  16. A complete implementation of the conjugate gradient algorithm on a reconfigurable supercomputer

    International Nuclear Information System (INIS)

    Dubois, David H.; Dubois, Andrew J.; Connor, Carolyn M.; Boorman, Thomas M.; Poole, Stephen W.

    2008-01-01

    The conjugate gradient is a prominent iterative method for solving systems of sparse linear equations. Large-scale scientific applications often utilize a conjugate gradient solver at their computational core. In this paper we present a field programmable gate array (FPGA) based implementation of a double precision, non-preconditioned, conjugate gradient solver for finite-element or finite-difference methods. Our work utilizes the SRC Computers, Inc. MAPStation hardware platform along with the 'Carte' software programming environment to ease the programming workload when working with the hybrid (CPU/FPGA) environment. The implementation is designed to handle large sparse matrices of up to order N x N where N <= 116,394, with up to 7 non-zero, 64-bit elements per sparse row. This implementation utilizes an optimized sparse matrix-vector multiply operation which is critical for obtaining high performance. Direct parallel implementations of loop unrolling and loop fusion are utilized to extract performance from the various vector/matrix operations. Rather than utilize the FPGA devices as function off-load accelerators, our implementation uses the FPGAs to implement the core conjugate gradient algorithm. Measured run-time performance data is presented comparing the FPGA implementation to a software-only version, showing that the FPGA can outperform processors running at up to 30x the clock rate. In conclusion we take a look at the new SRC-7 system and estimate the performance of this algorithm on that architecture.
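
    The algorithm mapped into hardware is the textbook non-preconditioned conjugate gradient; a software reference in a few lines:

```python
# Plain (non-preconditioned) conjugate gradient for symmetric positive
# definite systems, the algorithm the FPGA design implements in hardware.
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # conjugate direction update
        rs = rs_new
    return x

M = np.random.default_rng(3).random((50, 50))
A = M @ M.T + 50 * np.eye(50)          # symmetric positive definite
b = np.ones(50)
print(np.allclose(A @ cg(A, b), b))    # True
```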

  17. Efficient parallel implementation of active appearance model fitting algorithm on GPU.

    Science.gov (United States)

    Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou

    2014-01-01

    The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  18. Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU

    Directory of Open Access Journals (Sweden)

    Jinwei Wang

    2014-01-01

    Full Text Available The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures.

  19. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    Science.gov (United States)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  20. Implementation of an evolutionary algorithm in planning investment in a power distribution system

    Directory of Open Access Journals (Sweden)

    Carlos Andrés García Montoya

    2011-06-01

    Full Text Available The definition of an investment plan to implement in a power distribution system is a task constantly faced by utilities. This work presents a methodology for determining the investment plan for a power distribution system over the short term, using as criteria for evaluating investment projects their associated costs and the customer benefit from their implementation. Given the number of projects carried out annually on the system, the definition of an investment plan requires the use of computational tools to evaluate, among a set of possibilities, the one that best suits the present needs of the system and gives better results. That is why this work implements a multi-objective evolutionary algorithm, SPEA (Strength Pareto Evolutionary Algorithm), which, based on the principles of Pareto optimality, delivers to the planning expert the best solutions found in the optimization process. The performance of the algorithm is tested using a set of projects to determine the best among the possible plans. We also analyze the effect of the operators on the performance of the evolutionary algorithm and on the results.

  1. Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations.

    Science.gov (United States)

    Chen, Fangyue; Chen, Guanrong Ron; He, Guolong; Xu, Xiubin; He, Qinbin

    2009-10-01

    Universal perceptron (UP), a generalization of Rosenblatt's perceptron, is considered in this paper; it is capable of implementing all Boolean functions (BFs). In the classification of BFs, there are: 1) the linearly separable Boolean function (LSBF) class, 2) the parity Boolean function (PBF) class, and 3) the non-LSBF and non-PBF class. To implement these functions, UP takes different kinds of simple topological structures, each of which contains at most one hidden layer along with the smallest possible number of hidden neurons. Inspired by the concept of DNA sequences in biological systems, a novel learning algorithm named DNA-like learning is developed, which is able to quickly train a network with any prescribed BF. The focus is on realizing LSBFs and PBFs with a single-layer perceptron (SLP) using the new algorithm. Two criteria for LSBF and PBF are proposed, respectively, and a new measure for a BF, named nonlinearly separable degree (NLSD), is introduced. In the sense of this measure, the PBF is the most complex one. The new algorithm has many advantages including, in particular, fast running speed, good robustness, and no need of considering the convergence property. For example, the number of iterations and computations in implementing the basic 2-bit logic operations such as AND, OR, and XOR by using the new algorithm is far smaller than that needed by existing algorithms such as error-correction (EC) and backpropagation (BP) algorithms. Moreover, the synaptic weights and threshold values derived from UP can be directly used in the design of the templates of cellular neural networks (CNNs), which has been considered as a new spatial-temporal sensory computing paradigm.
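
    For contrast with the DNA-like algorithm, the classical Rosenblatt update, which suffices for an LSBF such as AND but provably fails for a PBF such as XOR, is sketched below.

```python
# Classical single-layer perceptron rule on a 2-bit LSBF (AND). This is
# Rosenblatt's update, not the paper's DNA-like learning algorithm; a PBF
# such as XOR is not linearly separable and cannot be learned this way.
import itertools

def train_perceptron(truth_table, epochs=20, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in truth_table.items():
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - y                 # zero once correctly classified
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = {bits: int(all(bits)) for bits in itertools.product((0, 1), repeat=2)}
w, b = train_perceptron(AND)
print([(x, int(w[0]*x[0] + w[1]*x[1] + b > 0)) for x in AND])  # reproduces AND
```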

  2. Maximum entropy algorithm and its implementation for the neutral beam profile measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A tomography algorithm to maximize the entropy of image using Lagrangian multiplier technique and conjugate gradient method has been designed for the measurement of 2D spatial distribution of intense neutral beams of KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation has been implemented to test the reconstruction quality of given beam profiles. This algorithm has the good applicability for sparse projection data and thus, can be used for the neutral beam tomography. 8 refs., 3 figs. (Author)

  3. On Implementing a Homogeneous Interior-Point Algorithm for Nonsymmetric Conic Optimization

    DEFF Research Database (Denmark)

    Skajaa, Anders; Jørgensen, John Bagterp; Hansen, Per Christian

    Based on earlier work by Nesterov, an implementation of a homogeneous infeasible-start interior-point algorithm for solving nonsymmetric conic optimization problems is presented. Starting each iteration from (the vicinity of) the central path, the method computes (nearly) primal-dual symmetric...... approximate tangent directions followed by a purely primal centering procedure to locate the next central primal-dual point. Features of the algorithm include that it makes use only of the primal barrier function, that it is able to detect infeasibilities in the problem and that no phase-I method is needed...

  4. Maximum entropy algorithm and its implementation for the neutral beam profile measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A tomography algorithm to maximize the entropy of image using Lagrangian multiplier technique and conjugate gradient method has been designed for the measurement of 2D spatial distribution of intense neutral beams of KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation has been implemented to test the reconstruction quality of given beam profiles. This algorithm has the good applicability for sparse projection data and thus, can be used for the neutral beam tomography. 8 refs., 3 figs. (Author)

  5. DC Voltage Droop Control Implementation in the AC/DC Power Flow Algorithm: Combinational Approach

    DEFF Research Database (Denmark)

    Akhter, F.; Macpherson, D.E.; Harrison, G.P.

    2015-01-01

    of operational flexibility, as more than one VSC station controls the DC link voltage of the MTDC system. This model enables the study of the effects of DC droop control on the power flows of the combined AC/DC system for steady-state studies after VSC station outages or transient conditions without needing...... to use its complete dynamic model. Further, the proposed approach can be extended to include multiple AC and DC grids for combined AC/DC power flow analysis. The algorithm is implemented by modifying the MATPOWER-based MATACDC program and the results show that the algorithm works efficiently....

  6. Implementation of Tree and Butterfly Barriers with Optimistic Time Management Algorithms for Discrete Event Simulation

    Science.gov (United States)

    Rizvi, Syed S.; Shah, Dipali; Riasat, Aasia

    The Time Warp algorithm [3] offers a run-time recovery mechanism that deals with causality errors. This run-time recovery mechanism consists of rollback, anti-message, and Global Virtual Time (GVT) techniques. For rollback, there is a need to compute GVT, which is used in discrete-event simulation to reclaim memory, commit output, detect termination, and handle errors. However, the computation of GVT requires dealing with the transient message problem and the simultaneous reporting problem. These problems can be dealt with in an efficient manner by Samadi's algorithm [8], which works fine in the presence of causality errors. However, the performance of both the Time Warp and Samadi's algorithms depends on the latency involved in GVT computation. Both algorithms give poor latency for large simulation systems, especially in the presence of causality errors. To improve the latency and reduce processor idle time, we implement tree and butterfly barriers with the optimistic algorithm. Our analysis shows that the use of synchronous barriers such as tree and butterfly with the optimistic algorithm not only minimizes the GVT latency but also minimizes the processor idle time.

  7. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission, such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of calculating discrete logarithms modulo a large prime. ElGamal belongs to the class of asymmetric-key algorithms and results in an enlargement of the file size; therefore, data compression is required. Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, then the result of the compression was encrypted using the ElGamal algorithm. The primality test was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data with MSE and PSNR values of 0 and infinity, respectively. The Elias Delta Code method generated a compression ratio and space saving with average values of 62.49% and 37.51%, respectively.
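
    Elias delta coding itself is compact enough to sketch: a positive integer n is written as the Elias gamma code of the length of its binary form, followed by that binary form without its leading 1. The reference code below is illustrative, not the paper's implementation.

```python
# Elias delta coding sketch: encode a positive integer n as
# gamma(len(bin(n))) followed by n's binary digits minus the leading 1.
def elias_gamma(n: int) -> str:
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b          # unary length prefix + binary value

def elias_delta_encode(n: int) -> str:
    b = bin(n)[2:]
    return elias_gamma(len(b)) + b[1:]

def elias_delta_decode(bits: str) -> int:
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    length = int(bits[zeros : 2 * zeros + 1], 2)      # gamma-coded len(bin(n))
    rest = bits[2 * zeros + 1 : 2 * zeros + length]   # remaining length-1 bits
    return int("1" + rest, 2) if rest else 1

for n in (1, 2, 17, 1000):
    code = elias_delta_encode(n)
    assert elias_delta_decode(code) == n
    print(n, code)
```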

  8. Derivation and implementation of a cone-beam reconstruction algorithm for nonplanar orbits

    International Nuclear Information System (INIS)

    Kudo, Hiroyuki; Saito, Tsuneo

    1994-01-01

    Smith and Grangeat derived a cone-beam inversion formula that can be applied when a nonplanar orbit satisfying the completeness condition is used. Although Grangeat's inversion formula is mathematically different from Smith's, they have similar overall structures to each other. The contribution of this paper is two-fold. First, based on the derivation of Smith, the authors point out that Grangeat's inversion formula and Smith's can be conveniently described using a single formula (the Smith-Grangeat inversion formula) that is in the form of space-variant filtering followed by cone-beam backprojection. Furthermore, the resulting formula is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm. Second, the authors make two significant modifications to the new algorithm to reduce artifacts and numerical errors encountered in direct implementation of the new algorithm. As for exactness of the new algorithm, the following fact can be stated. The algorithm based on Grangeat's intermediate function is exact for any complete orbit, whereas that based on Smith's intermediate function should be considered as an approximate inverse excepting the special case where almost every plane in 3-D space meets the orbit. The validity of the new algorithm is demonstrated by simulation studies

  9. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The application of the Monte Carlo method in reactor physics analysis is somewhat restricted due to the excessive memory demand in solving large-scale problems. Memory demand in MC simulation is analyzed first; it concerns geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominates the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a strategy of 'divide and rule', which means problems are divided into different sub-domains to be dealt with separately, and some rules are established to make sure the whole results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced.

  10. Implementation of Robert's Coping with Labor Algorithm© in a large tertiary care facility.

    Science.gov (United States)

    Fairchild, Esther; Roberts, Leissa; Zelman, Karen; Michelli, Shelley; Hastings-Tolsma, Marie

    2017-07-01

    The objective was to implement use of Roberts' Coping with Labor Algorithm© (CWLA) with laboring women in a large tertiary care facility. This was a quality improvement project to implement an alternate approach to pain assessment during labor. It included system assessment for change readiness, implementation of the algorithm across a 6-week period, evaluation of usefulness by nursing staff, and determination of sustained change at one month. Stakeholder Theory (Friedman and Miles, 2002) and Deming's (1982) Plan-Do-Check-Act Cycle, as adapted by Roberts et al (2010), provided the framework for project implementation. The project was undertaken on a labor and delivery (L&D) unit of a large tertiary care facility in a southwestern state in the USA. The unit had 19 suites with close to 6000 laboring patients each year. Participants were full, part-time, and per diem Registered Nurse (RN) staff (N=80), including a subset (n=18) who served as the pilot group and champions for implementing the change. A majority of RNs held a positive attitude toward use of the CWLA to assess laboring women's coping with the pain of labor, as compared to a Numeric Rating Scale (NRS). RNs reported usefulness of the CWLA with patients from a wide variety of ethnicities. A pre-existing, well-developed team which advocated for evidence-based practice on the unit proved to be a significant strength that promoted rapid change in practice. This work provides important knowledge supporting use of the CWLA in a large tertiary care facility and an approach for effectively implementing that change. Strengths identified in this project contributed to rapid implementation and could be emulated in other facilities. Participant reports support usefulness of the CWLA with patients of varied ethnicity. Assessment of change sustainability at 1 and 6 months demonstrated widespread use of the algorithm, though long-term determination is still needed.

  11. Implementation of an algorithm for cylindrical object identification using range data

    Science.gov (United States)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
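
    One common way to realize the over-determined circle fit is to linearize the circle equation x² + y² + Dx + Ey + F = 0 and solve it in the least-squares sense (the Kåsa fit). Whether the paper uses exactly this formulation is not stated, so the sketch below is illustrative.

```python
# Least-squares circle fit from points on an arc: linearize
# x^2 + y^2 + D x + E y + F = 0 and solve the over-determined system.
import numpy as np

def fit_circle(x, y):
    A = np.c_[x, y, np.ones_like(x)]       # over-determined design matrix
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2                # center from linear coefficients
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

t = np.linspace(0.2, 1.4, 30)              # partial arc only
x = 2.0 + 1.5 * np.cos(t) + 0.01 * np.random.default_rng(4).normal(size=30)
y = -1.0 + 1.5 * np.sin(t)
print(fit_circle(x, y))                    # ~ (2.0, -1.0, 1.5)
```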

  12. GPU implementation of discrete particle swarm optimization algorithm for endmember extraction from hyperspectral image

    Science.gov (United States)

    Yu, Chaoyin; Yuan, Zhengwu; Wu, Yuanfeng

    2017-10-01

    Hyperspectral image unmixing is an important part of hyperspectral data analysis. The mixed pixel decomposition consists of two steps: endmember extraction (the unique signatures of pure ground components) and abundance estimation (the proportion of each endmember in each pixel). Recently, a Discrete Particle Swarm Optimization algorithm (DPSO) was proposed to accurately extract endmembers with high optimal performance. However, the DPSO algorithm shows very high computational complexity, which makes the endmember extraction procedure very time-consuming for hyperspectral image unmixing. Thus, in this paper, the DPSO endmember extraction algorithm was parallelized, implemented on the CUDA (GPU K20) platform, and evaluated with real hyperspectral remote sensing data. The experimental results show that, with an increasing number of particles, the parallelized version obtained much higher computing efficiency while maintaining the same endmember extraction accuracy.

  13. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    Science.gov (United States)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

    The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (Unmanned Aerial Vehicle) imagery are presented. The initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is generated based on DSM (Digital Surface Model) data; the vertices (conjunction nodes of seam-lines) of the initial network are relocated if they lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets. Two quantitative measures are introduced to evaluate the results of the proposed method. Preliminary results show that the method is suitable for regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and that it outperforms the state-of-the-art method in both quality and efficiency on the test datasets.
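
    The refinement step is a standard weighted A∗ search, in which the heuristic is inflated by a weight w ≥ 1 to trade optimality for speed. A minimal grid version follows, with illustrative per-step costs; the paper derives its costs from the DSM edge diagram.

```python
# Grid weighted A*: f = g + w*h, with w >= 1 trading optimality for speed.
import heapq

def weighted_astar(cost, start, goal, w=1.5):
    """cost: 2-D list of nonnegative step costs; 4-connected grid."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    best = {start: 0.0}                        # best g-value found per cell
    frontier = [(w * h(start), start)]
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            return best[cur]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < len(cost) and 0 <= c < len(cost[0]):
                g = best[cur] + cost[r][c]
                if g < best.get((r, c), float("inf")):
                    best[(r, c)] = g
                    heapq.heappush(frontier, (g + w * h((r, c)), (r, c)))
    return None

grid = [[1, 1, 9], [1, 9, 1], [1, 1, 1]]
print(weighted_astar(grid, (0, 0), (2, 2)))   # cheapest seam cost: 4.0
```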

  14. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  15. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  16. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Ahmad Audi

    2017-07-01

    Full Text Available Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  17. Implementation techniques and acceleration of DBPF reconstruction algorithm based on GPGPU for helical cone beam CT

    International Nuclear Information System (INIS)

    Shen Le; Xing Yuxiang

    2010-01-01

    The derivative back-projection filtered (DBPF) algorithm for helical cone-beam CT is a newly developed exact reconstruction method. Due to its large computational complexity, the reconstruction is rather slow for practical use. The general purpose graphics processing unit (GPGPU) is a SIMD parallel hardware architecture with powerful floating-point operation capacity. In this paper, we propose a new method for PI-line choice and sampling grid, and a parallel PI-line reconstruction algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA). Numerical simulation studies are carried out to validate our method. Compared with a conventional CPU implementation, the CUDA-accelerated method provides images of the same quality with a speedup factor of 318. Optimization strategies for the GPU acceleration are presented. Finally, the influence of the parameters of the PI-line samples on the reconstruction speed and image quality is discussed. (authors)

  18. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    Science.gov (United States)

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of this kind of system, such as its wide dynamic range of numeric values, make the use of fixed-point algorithms inadequate. Moreover, because the generic chips available for the treatment of floating point data are, in general, not qualified to operate in space environments, and the possibility of using an IP module in a space-qualified FPGA/ASIC is not viable due to the low number of logic cells available in these types of devices, it is necessary to find a viable alternative. For these reasons, a VHDL Floating Point Module is presented in this paper. This proposal allows floating point algorithms to be designed and executed with acceptable occupancy, so that they can be implemented in FPGAs/ASICs qualified for space environments.

  19. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Guliyev, E. [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands); Kavatsyuk, M., E-mail: m.kavatsyuk@rug.nl [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands); Lemmens, P.J.J.; Tambave, G.; Loehner, H. [Kernfysisch Versneller Instituut, University of Groningen, Zernikelaan 25, NL-9747 AA Groningen (Netherlands)

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source-code is available as an open-source project and is adaptable for other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data-processing in FPGA enables to construct an almost dead-time free data acquisition system which is successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead-time of the implemented algorithm, the rate of false triggering, timing performance, and event correlations.

  1. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. Chapter 5

    Science.gov (United States)

    Tilton, James C.; Plaza, Antonio J. (Editor); Chang, Chein-I. (Editor)

    2008-01-01

    The hierarchical image segmentation algorithm (referred to as HSEG) is a hybrid of hierarchical step-wise optimization (HSWO) and constrained spectral clustering that produces a hierarchical set of image segmentations. HSWO is an iterative approach to region-growing segmentation in which the optimal image segmentation with N(sub R) regions is found, given a segmentation with N(sub R+1) regions. HSEG's addition of constrained spectral clustering makes it a computationally intensive algorithm for all but the smallest of images. To counteract this, a computationally efficient recursive approximation of HSEG (called RHSEG) has been devised. Further improvements in processing speed are obtained through a parallel implementation of RHSEG. This chapter describes this parallel implementation and demonstrates its computational efficiency on a Landsat Thematic Mapper test scene.

  2. Implementation of on-line data reduction algorithms in the CMS Endcap Preshower Data Concentrator Cards

    CERN Document Server

    Barney, D; Kokkas, P; Manthos, N; Sidiropoulos, G; Reynaud, S; Vichoudis, P

    2007-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from two, three or four sensors. For the readout of the Preshower, a VME-based system, the Endcap Preshower Data Concentrator Card (ES-DCC), is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction via zero suppression and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. These algorithms, as implemented in the ES-DCC, result in a data-reduction factor of 20.
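
    As a rough sketch of how threshold-based zero suppression yields reduction factors of this order, the following Python fragment keeps only strips above pedestal plus a noise cut (the pedestal and threshold values are illustrative, not the ES-DCC firmware parameters):

      def zero_suppress(strip_adc, pedestals, n_sigma=3.0, noise=2.0):
          # Keep (strip index, pedestal-subtracted value) pairs that pass
          # the cut; everything else is dropped before event building.
          kept = []
          for i, (adc, ped) in enumerate(zip(strip_adc, pedestals)):
              signal = adc - ped
              if signal > n_sigma * noise:
                  kept.append((i, signal))
          return kept

      pedestals = [100] * 32
      data = pedestals.copy()
      data[17] = 160                         # one strip with a real hit
      print(zero_suppress(data, pedestals))  # -> [(17, 60)]: 32 strips -> 1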

  3. Implementation of On-Line Data Reduction Algorithms in the CMS Endcap Preshower Data Concentrator Card

    CERN Document Server

    Barney, David; Kokkas, Panagiotis; Manthos, Nikolaos; Reynaud, Serge; Sidiropoulos, Georgios; Vichoudis, Paschalis

    2006-01-01

    The CMS Endcap Preshower (ES) sub-detector comprises 4288 silicon sensors, each containing 32 strips. The data are transferred from the detector to the counting room via 1208 optical fibres running at 800Mbps. Each fibre carries data from 2, 3 or 4 sensors. For the readout of the Preshower, a VME-based system - the Endcap Preshower Data Concentrator Card (ES-DCC) - is currently under development. The main objective of each readout board is to acquire on-detector data from up to 36 optical links, perform on-line data reduction (zero suppression) and pass the concentrated data to the CMS event builder. This document presents the conceptual design of the Reduction Algorithms as well as their implementation in the ES-DCC FPGAs. The algorithms implemented in the ES-DCC resulted in a reduction factor of ~20.

  4. Implementation of the Lanczos algorithm for the Hubbard model on the Connection Machine system

    International Nuclear Information System (INIS)

    Leung, P.W.; Oppenheimer, P.E.

    1992-01-01

    An implementation of the Lanczos algorithm for the exact diagonalization of the two-dimensional Hubbard model on a 4x4 square lattice on the Connection Machine CM-2 system is described. The CM-2 is a massively parallel machine with distributed memory. The program is written in C/PARIS. This implementation minimizes memory usage by generating the matrix elements as needed instead of storing them. The Lanczos vectors are stored across the local memory of the processors. Using translational symmetry only, the dimension of the Hilbert space at half filling is more than 10 million. Each iteration takes about 2.4 min on a 64K CM-2. This implementation is scalable: running it on a bigger machine with more processors speeds up the process. A performance analysis of this implementation is presented, and its advantages and disadvantages are discussed.
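
    The following dense-matrix Python sketch shows the Lanczos recurrence for the lowest eigenvalue; in the CM-2 implementation the matrix-vector product is the step where Hubbard matrix elements are generated on the fly rather than stored:

      import numpy as np

      def lanczos_ground_state(matvec, dim, n_iter=60, seed=0):
          # Build the tridiagonal Lanczos matrix T and return its lowest
          # Ritz value, an estimate of the ground-state energy.
          rng = np.random.default_rng(seed)
          v = rng.standard_normal(dim)
          v /= np.linalg.norm(v)
          v_prev = np.zeros(dim)
          alphas, betas, beta = [], [], 0.0
          for _ in range(n_iter):
              w = matvec(v) - beta * v_prev
              alpha = v @ w
              w -= alpha * v
              beta = np.linalg.norm(w)
              alphas.append(alpha)
              betas.append(beta)
              if beta < 1e-12:
                  break
              v_prev, v = v, w / beta
          T = (np.diag(alphas) + np.diag(betas[:-1], 1)
               + np.diag(betas[:-1], -1))
          return np.linalg.eigvalsh(T)[0]

      A = np.random.default_rng(1).standard_normal((200, 200))
      A = (A + A.T) / 2                        # toy Hermitian "Hamiltonian"
      print(lanczos_ground_state(lambda x: A @ x, 200))
      print(np.linalg.eigvalsh(A)[0])          # close to the exact value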

  5. Power Analysis of Energy Efficient DES Algorithm and Implementation on 28nm FPGA

    DEFF Research Database (Denmark)

    Thind, Vandana; Pandey, Bishwajeet; Hussain, Dil muhammed Akbar

    2016-01-01

    In this work, we have done a power analysis of the Data Encryption Standard (DES) algorithm using the Xilinx ISE software development kit. We have analyzed the amount of power utilized by selected components on the board, i.e., the FPGA Artix-7 on which the DES algorithm is implemented. The components taken into consideration are clock power, logic power, signals power, IOs power, leakage power and supply power (dynamic and quiescent). We have used four different WLAN frequencies (2.4 GHz, 3.6 GHz, 4.9 GHz, and 5.9 GHz) and four different IO standards (HSTL-I, HSTL-II, HSTL-II-18, HSTL-I-18) for the power analysis. We have achieved a 13-47% saving in power at the different frequencies with the different energy-efficient HSTL IO standards. We calculated the percentage change in IO power with respect to the mean value of IO power at the four frequencies, and found a minimum of -37.5% and a maximum of +35

  6. Decoding the Brain’s Algorithm for Categorization from its Neural Implementation

    Science.gov (United States)

    Mack, Michael L.; Preston, Alison R.; Love, Bradley C.

    2013-01-01

    Summary Acts of cognition can be described at different levels of analysis: what behavior should characterize the act, what algorithms and representations underlie the behavior, and how the algorithms are physically realized in neural activity [1]. Theories that bridge levels of analysis offer more complete explanations by leveraging the constraints present at each level [2–4]. Despite the great potential for theoretical advances, few studies of cognition bridge levels of analysis. For example, formal cognitive models of category decisions accurately predict human decision making [5, 6], but whether model algorithms and representations supporting category decisions are consistent with underlying neural implementation remains unknown. This uncertainty is largely due to the hurdle of forging links between theory and brain [7–9]. Here, we tackle this critical problem by using brain response to characterize the nature of mental computations that support category decisions to evaluate two dominant, and opposing, models of categorization. We found that brain states during category decisions were significantly more consistent with latent model representations from exemplar [5] rather than prototype theory [10, 11]. Representations of individual experiences, not the abstraction of experiences, are critical for category decision making. Holding models accountable for behavior and neural implementation provides a means for advancing more complete descriptions of the algorithms of cognition. PMID:24094852

  7. An implementation of super-encryption using RC4A and MDTM cipher algorithms for securing PDF Files on android

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Parlindungan, M. R.

    2018-03-01

    MDTM is a classical symmetric cryptographic algorithm. As with other classical algorithms, the MDTM Cipher algorithm is easy to implement but it is less secure than modern symmetric algorithms. In order to make it more secure, a stream cipher, RC4A, is added, and the cryptosystem thus becomes a super encryption. In this process, plaintexts derived from PDFs are first encrypted with the MDTM Cipher algorithm and then encrypted once more with the RC4A algorithm. The test results show that the complexity is Θ(n²) and that the running time is directly proportional to the number of plaintext characters and the keys entered.

  8. Parallel implementation of DNA sequences matching algorithms using PWM on GPU architecture.

    Science.gov (United States)

    Sharma, Rahul; Gupta, Nitin; Narang, Vipin; Mittal, Ankush

    2011-01-01

    Positional Weight Matrices (PWMs) are widely used in the representation and detection of Transcription Factor Binding Sites (TFBSs) on DNA. We implement an online PWM search algorithm on a parallel architecture. Large PWM data sets can be processed on Graphics Processing Unit (GPU) systems in parallel, which helps match sequences at a faster rate. Our method employs extensive usage of the highly multithreaded architecture and shared memory of multi-core GPUs. An efficient use of shared memory is required to optimise parallel reduction in CUDA. Our optimised method achieves a speedup of 230-280x over the linear implementation when run on a GeForce GTX 280 GPU.
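
    A serial Python reference of the scoring kernel may clarify what the GPU parallelizes: each sequence offset gets a PWM score, and on the GPU one thread handles one offset (the matrix below is a made-up log-odds example, not from the paper):

      import numpy as np

      BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

      def pwm_scan(seq, pwm, threshold):
          # Score the PWM at every offset; return (position, score) hits.
          idx = np.array([BASE[b] for b in seq])
          w = pwm.shape[1]
          cols = np.arange(w)
          hits = []
          for pos in range(len(idx) - w + 1):
              score = pwm[idx[pos:pos + w], cols].sum()
              if score >= threshold:
                  hits.append((pos, float(score)))
          return hits

      pwm = np.full((4, 3), -1.0)            # toy motif strongly favoring TAT
      pwm[BASE["T"], 0] = pwm[BASE["A"], 1] = pwm[BASE["T"], 2] = 2.0
      print(pwm_scan("GGTATCCTAT", pwm, threshold=5.0))  # hits at 2 and 7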

  9. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta

    OpenAIRE

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.

    2010-01-01

    Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython, and (ii) script-based, using Python scripting.

  10. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel

    2009-02-15

    We present a new lock-free parallel algorithm for computing the betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to the analysis of massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
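
    For reference, a compact serial version of Brandes' betweenness accumulation is sketched below in Python; the paper's contribution is making the per-source traversal and accumulation phases lock-free and cache-friendly, which this sketch does not attempt:

      from collections import deque

      def betweenness(adj):
          # adj: {node: [neighbors]}, unweighted. Brandes' algorithm:
          # one BFS per source, then dependency accumulation in reverse
          # BFS order. For undirected graphs, halve the final values.
          bc = {v: 0.0 for v in adj}
          for s in adj:
              stack = []
              preds = {v: [] for v in adj}
              sigma = {v: 0 for v in adj}; sigma[s] = 1
              dist = {v: -1 for v in adj}; dist[s] = 0
              q = deque([s])
              while q:                              # BFS phase
                  v = q.popleft()
                  stack.append(v)
                  for w in adj[v]:
                      if dist[w] < 0:
                          dist[w] = dist[v] + 1
                          q.append(w)
                      if dist[w] == dist[v] + 1:
                          sigma[w] += sigma[v]
                          preds[w].append(v)
              delta = {v: 0.0 for v in adj}
              while stack:                          # accumulation phase
                  w = stack.pop()
                  for v in preds[w]:
                      delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                  if w != s:
                      bc[w] += delta[w]
          return bc

      print(betweenness({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))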

  11. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

    Science.gov (United States)

    Aryanti, Aryanti; Mekongga, Ikhthison

    2018-02-01

    Data security and confidentiality are among the most important aspects of information systems at the moment. One way to secure data is to use cryptography. In this study, a data security system was developed by implementing the Rivest, Shamir, Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research was done by applying the combined RSA and Vigenere Cipher algorithms to document files in Word, Excel, and PDF formats. The application, built with PHP and MySQL, covers both encryption and decryption of data. Encryption is done on the transmitting side through RSA calculations using the public key, followed by the Vigenere Cipher algorithm, which also uses a public key. On the receiving side, decryption first applies the Vigenere Cipher algorithm, still with the public key, and then the RSA algorithm using the private key. Test results show that the system can encrypt, decrypt, and transmit files. Tests performed on encryption and decryption with different file sizes show that file size affects the process: the larger the file, the longer the encryption and decryption take.
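
    A toy Python sketch of the described cascade is given below; it uses deliberately tiny textbook RSA parameters (p = 61, q = 53) and a short shared Vigenere key purely for illustration, nothing like a secure configuration:

      N, E, D = 3233, 17, 2753      # n = 61*53; e*d = 1 mod phi(n)
      VIG_KEY = "KEY"               # toy Vigenere key

      def rsa(blocks, exponent):
          return [pow(b, exponent, N) for b in blocks]

      def vigenere(blocks, sign):
          # Add (or subtract) the repeating key, working mod N so the
          # values stay valid RSA residues.
          return [(b + sign * ord(VIG_KEY[i % len(VIG_KEY)])) % N
                  for i, b in enumerate(blocks)]

      def encrypt(text):            # sender: RSA(public), then Vigenere
          return vigenere(rsa([ord(c) for c in text], E), +1)

      def decrypt(blocks):          # receiver: un-Vigenere, then RSA(private)
          return "".join(chr(b) for b in rsa(vigenere(blocks, -1), D))

      c = encrypt("PDF")
      print(c, decrypt(c))          # round-trips back to "PDF"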

  12. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    Science.gov (United States)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small-target detection and tracking, and is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared image sequences in comparison with several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The FPGA-based hardware implementation of the algorithm has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
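
    A minimal Python sketch of the underlying temporal high-pass idea follows, with a crude update gate standing in for the paper's adaptive threshold (all constants are hypothetical):

      import numpy as np

      def thp_nuc(frames, time_const=32.0, gate=10.0):
          # Track a per-pixel running mean as the fixed-pattern estimate;
          # subtracting it acts as a temporal high-pass filter. Freezing
          # the update where the frame departs strongly from the mean is
          # a simple anti-ghosting gate.
          mean = frames[0].astype(np.float64).copy()
          corrected = []
          for frame in frames:
              f = frame.astype(np.float64)
              update = np.abs(f - mean) < gate
              mean[update] += (f[update] - mean[update]) / time_const
              corrected.append(f - mean + mean.mean())  # keep scene level
          return corrected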

  13. Implementation of Rivest Shamir Adleman Algorithm (RSA) and Vigenere Cipher In Web Based Information System

    Directory of Open Access Journals (Sweden)

    Aryanti Aryanti

    2018-01-01

    Full Text Available Data security and confidentiality are among the most important aspects of information systems at the moment. One way to secure data is to use cryptography. In this study, a data security system was developed by implementing the Rivest, Shamir, Adleman (RSA) and Vigenere Cipher cryptographic algorithms. The research was done by applying the combined RSA and Vigenere Cipher algorithms to document files in Word, Excel, and PDF formats. The application, built with PHP and MySQL, covers both encryption and decryption of data. Encryption is done on the transmitting side through RSA calculations using the public key, followed by the Vigenere Cipher algorithm, which also uses a public key. On the receiving side, decryption first applies the Vigenere Cipher algorithm, still with the public key, and then the RSA algorithm using the private key. Test results show that the system can encrypt, decrypt, and transmit files. Tests performed on encryption and decryption with different file sizes show that file size affects the process: the larger the file, the longer the encryption and decryption take.

  14. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of a code-based cryptography (Cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An Error Control Algorithm is employed both for the generation of round keys and for the diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing nonlinear selective bit inversions on the round keys. Randomized selective bit inversions are done on equal lengths of key bits by a Round Constant Feedback Shift Register within the error correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan3E FPGA, and results are shown. A comparative analysis between 128-bit Advanced Encryption Standard round keys and the proposed round keys shows the security strength of the proposed algorithm. This paper concludes that the chip-based multi-layer key distribution of the proposed algorithm is an enhanced solution to the existing threats on cryptography algorithms.

  15. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  16. An efficient and cost effective FPGA based implementation of the Viola-Jones face detection algorithm

    Directory of Open Access Journals (Sweden)

    Peter Irgens

    2017-04-01

    Full Text Available We present a field-programmable gate array (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system-level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the performance achievable with a low-end FPGA chip based implementation. In addition, we release the entire project to the public domain. We hope that this will enable other researchers to easily replicate and compare their results to ours, and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping.
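
    The computational heart of Viola-Jones is the integral image, which makes every Haar-feature rectangle sum an O(1) operation; a brief Python sketch of just that building block (not the authors' HDL) is:

      import numpy as np

      def integral_image(img):
          # ii[y, x] = sum of img[:y, :x]; the zero padding removes
          # edge cases from rectangle-sum lookups.
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
          ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
          return ii

      def rect_sum(ii, y, x, h, w):
          # Any axis-aligned rectangle sum from exactly four lookups.
          return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

      img = np.arange(16).reshape(4, 4)
      ii = integral_image(img)
      assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
      # A two-rectangle Haar feature is then rect_sum(...) - rect_sum(...).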

  17. An implementation of differential evolution algorithm for inversion of geoelectrical data

    Science.gov (United States)

    Balkaya, Çağlayan

    2013-11-01

    Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, including mutation, crossover and selection, similar to a genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, including DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to both noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets in consideration. Of these, strategy 1 was found to be the most effective strategy for the parameter estimation, providing less computational cost together with good accuracy. The solutions obtained by DE for the synthetic cases of SP were quite consistent with those of particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing (SA) without cooling, to clarify uncertainties in the solutions. Comparison to the M-H algorithm shows that DE performs fast approximate posterior sampling for low-dimensional inverse geophysical problems.
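
    A compact Python sketch of DE with the DE/best/1 mutation and binomial crossover described above follows; the sphere function stands in for an SP/VES data misfit, and all control parameters are illustrative:

      import numpy as np

      def de_best_1(objective, bounds, pop_size=30, F=0.8, CR=0.9,
                    n_gen=200, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          pop = rng.uniform(lo, hi, (pop_size, lo.size))
          cost = np.array([objective(p) for p in pop])
          for _ in range(n_gen):
              best = pop[np.argmin(cost)]
              for i in range(pop_size):
                  others = [j for j in range(pop_size) if j != i]
                  r1, r2 = rng.choice(others, 2, replace=False)
                  mutant = best + F * (pop[r1] - pop[r2])   # DE/best/1
                  cross = rng.random(lo.size) < CR          # binomial
                  cross[rng.integers(lo.size)] = True       # force one gene
                  trial = np.where(cross, mutant, pop[i])
                  c = objective(trial)
                  if c <= cost[i]:                          # greedy selection
                      pop[i], cost[i] = trial, c
          return pop[np.argmin(cost)], float(cost.min())

      model, misfit = de_best_1(lambda m: float(np.sum(m ** 2)),
                                [(-5, 5)] * 3)
      print(model, misfit)          # converges toward the zero model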

  18. PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta.

    Science.gov (United States)

    Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J

    2010-03-01

    PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site.

  19. A GPU implementation of a track-repeating algorithm for proton radiotherapy dose calculations

    International Nuclear Information System (INIS)

    Yepes, Pablo P; Mirkovic, Dragan; Taddei, Phillip J

    2010-01-01

    An essential component in proton radiotherapy is the algorithm to calculate the radiation dose to be delivered to the patient. The most common dose algorithms are fast but they are approximate analytical approaches. However their level of accuracy is not always satisfactory, especially for heterogeneous anatomical areas, like the thorax. Monte Carlo techniques provide superior accuracy; however, they often require large computation resources, which render them impractical for routine clinical use. Track-repeating algorithms, for example the fast dose calculator, have shown promise for achieving the accuracy of Monte Carlo simulations for proton radiotherapy dose calculations in a fraction of the computation time. We report on the implementation of the fast dose calculator for proton radiotherapy on a card equipped with graphics processor units (GPUs) rather than on a central processing unit architecture. This implementation reproduces the full Monte Carlo and CPU-based track-repeating dose calculations within 2%, while achieving a statistical uncertainty of 2% in less than 1 min utilizing one single GPU card, which should allow real-time accurate dose calculations.

  20. Multi–GPU Implementation of Machine Learning Algorithm using CUDA and OpenCL

    Directory of Open Access Journals (Sweden)

    Jan Masek

    2016-06-01

    Full Text Available Using modern Graphics Processing Units (GPUs) is very useful for computing complex and time-consuming processes, as GPUs provide high-performance computation capabilities at a good price. This paper deals with multi-GPU OpenCL and CUDA implementations of the k-Nearest Neighbor (k-NN) algorithm. This work compares the performance of the OpenCL and CUDA implementations, each of which is suitable for a different number of attributes. The proposed CUDA algorithm achieves an acceleration of up to 880x in comparison with a single-threaded CPU version. The common k-NN was modified to be faster when a lower number of k neighbors is set. The performance of the algorithm was verified with two dual-GPU NVIDIA GeForce GTX 690 cards and an Intel Core i7 3770 CPU with a 4.1 GHz frequency. The speedup was measured for one, two, three and four GPUs. We performed several tests with data sets containing up to 4 million elements with various numbers of attributes.
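
    The kernel being accelerated is a brute-force distance computation plus a k-smallest selection; a serial Python reference (in the CUDA/OpenCL versions one thread would own one query/train distance) might look like:

      import numpy as np

      def knn_predict(train_x, train_y, query, k=5):
          # Squared Euclidean distances to every training point, then an
          # O(n) partial selection of the k nearest and a majority vote.
          d2 = ((train_x - query) ** 2).sum(axis=1)
          nearest = np.argpartition(d2, k)[:k]
          labels, counts = np.unique(train_y[nearest], return_counts=True)
          return labels[np.argmax(counts)]

      rng = np.random.default_rng(0)
      X = rng.standard_normal((1000, 8))
      y = (X[:, 0] > 0).astype(int)
      print(knn_predict(X, y, np.full(8, 0.5)))   # expect class 1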

  1. Low-Cost Super-Resolution Algorithms Implementation Over a HW/SW Video Compression Platform

    Directory of Open Access Journals (Sweden)

    Llopis Rafael Peset

    2006-01-01

    Full Text Available Two approaches are presented in this paper to improve the quality of digital images beyond the sensor resolution using super-resolution techniques: iterative super-resolution (ISR) and noniterative super-resolution (NISR) algorithms. The results show important improvements in image quality, assuming that sufficient sample data and a reasonable amount of aliasing are available in the input images. These super-resolution algorithms have been implemented on a codesign video compression platform developed by Philips Research, with minimal changes to the overall hardware architecture. In this way, a novel and feasible low-cost implementation has been obtained by using the resources found in a generic hybrid video encoder. Although a specific video codec platform has been used, the methodology presented in this paper is easily extendable to other video encoder architectures. Finally, a comparison in terms of memory, computational load, and image quality for both algorithms, as well as some general statements about the final impact of the sampling process on the quality of the super-resolved (SR) image, is also presented.

  2. Development and implementation of an automatic control algorithm for the University of Utah nuclear reactor

    International Nuclear Information System (INIS)

    Crawford, Kevan C.; Sandquist, Gary M.

    1990-01-01

    The emphasis of this work is the development and implementation of an automatic control philosophy that uses the classical operational philosophies as a foundation. Three control algorithms were derived based on various simplifying assumptions. Two of the algorithms were tested in computer simulations. After realizing the insensitivity of the system to the simplifications, the most reduced form of the algorithms was implemented on the computer control system at the University of Utah (UNEL). Since the operational philosophies have a higher priority than automatic control, they determine when automatic control may be utilized. Unlike the operational philosophies, automatic control is not concerned with component failures. The object of this philosophy is the movement of absorber rods to produce a requested power. When the current power level is compared to the requested power level, an error may be detected that requires the movement of a control rod to correct it. The automatic control philosophy adds another dimension to the classical operational philosophies. Using this philosophy, normal operator interactions with the computer would be limited to run parameters such as power, period, and run time. This eliminates subjective judgements, objective judgements under pressure, and distractions to the operator, and ensures the reactor will be operated in a safe and controlled manner as well as providing reproducible operations

  3. Design Approach and Implementation of Application Specific Instruction Set Processor for SHA-3 BLAKE Algorithm

    Science.gov (United States)

    Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang

    This paper presents an Application Specific Instruction-set Processor (ASIP) for the SHA-3 BLAKE algorithm family, built by instruction set extensions (ISE) of a RISC (reduced instruction set computer) processor. Through a design space exploration for this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit speed up the key section of the algorithm, namely the G-functions. Relaxing the time constraint of the special function unit also decreases its hardware cost while keeping the high data throughput of the processor. Evaluation results reveal that the ASIP achieves 335 Mbps and 176 Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and the low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.
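
    The G-function is a short add-xor-rotate sequence, which is what the special instructions accelerate; a Python sketch of one BLAKE-256 G step (to the best of our reading of the BLAKE-256 specification) is given below, where mx and my stand for the message/constant word combinations selected by the round permutation, which is omitted here:

      MASK = 0xFFFFFFFF

      def rotr(x, n):
          # 32-bit right rotation.
          return ((x >> n) | (x << (32 - n))) & MASK

      def blake256_g(a, b, c, d, mx, my):
          # One G step on four 32-bit state words.
          a = (a + b + mx) & MASK
          d = rotr(d ^ a, 16)
          c = (c + d) & MASK
          b = rotr(b ^ c, 12)
          a = (a + b + my) & MASK
          d = rotr(d ^ a, 8)
          c = (c + d) & MASK
          b = rotr(b ^ c, 7)
          return a, b, c, d

      print([hex(w) for w in
             blake256_g(0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A, 0, 0)])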

  5. The Research and Implementation of MUSER CLEAN Algorithm Based on OpenCL

    Science.gov (United States)

    Feng, Y.; Chen, K.; Deng, H.; Wang, F.; Mei, Y.; Wei, S. L.; Dai, W.; Yang, Q. P.; Liu, Y. B.; Wu, J. P.

    2017-03-01

    High-performance data processing on a single machine is an urgent need in the development of astronomical software. However, due to differing machine configurations, traditional programming techniques such as multi-threading and CUDA (Compute Unified Device Architecture)+GPU (Graphics Processing Unit) have obvious limitations in portability and seamlessness across operating systems. We introduce the OpenCL (Open Computing Language) used in the development of the MUSER (MingantU SpEctral Radioheliograph) data processing system, and re-implement the Högbom CLEAN algorithm as a parallel CLEAN algorithm in the Python language with the PyOpenCL extension package. The experimental results show that the CLEAN algorithm based on OpenCL has approximately the same operating efficiency as the former CLEAN algorithm based on CUDA. More importantly, the data processing of this system in a CPU (Central Processing Unit)-only environment can also achieve high performance, which solves the problem of the environmental dependence of CUDA+GPU. Overall, this research improves the adaptability of the system with emphasis on the performance of MUSER image cleaning, and the realization of OpenCL in MUSER proves its usefulness in scientific data processing. In view of the high-performance computing features of OpenCL in heterogeneous environments, it will probably become a preferred technology in future high-performance astronomical software development.
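
    For orientation, a serial Python sketch of the Högbom CLEAN loop is shown below; the OpenCL implementation parallelizes the peak search and the PSF subtraction across work-items, which this reference version does not:

      import numpy as np

      def hogbom_clean(dirty, psf, gain=0.1, threshold=0.01, max_iter=1000):
          # Iteratively find the brightest residual pixel, record a scaled
          # component there, and subtract the shifted PSF.
          res = dirty.astype(float).copy()
          comps = np.zeros_like(res)
          cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
          for _ in range(max_iter):
              y, x = np.unravel_index(np.argmax(np.abs(res)), res.shape)
              peak = res[y, x]
              if abs(peak) < threshold:
                  break
              comps[y, x] += gain * peak
              # Subtract the PSF centred on the peak, clipped at the borders.
              y0, x0 = max(0, y - cy), max(0, x - cx)
              y1 = min(res.shape[0], y - cy + psf.shape[0])
              x1 = min(res.shape[1], x - cx + psf.shape[1])
              res[y0:y1, x0:x1] -= gain * peak * psf[
                  y0 - y + cy:y1 - y + cy, x0 - x + cx:x1 - x + cx]
          return comps, res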

  6. Performance Test of Core Protection and Monitoring Algorithm with DLL for SMART Simulator Implementation

    International Nuclear Information System (INIS)

    Koo, Bonseung; Hwang, Daehyun; Kim, Keungkoo

    2014-01-01

    A multi-purpose best-estimate simulator for SMART is being established; it is intended to be used as a tool to evaluate the impacts of design changes on safety performance and to improve and/or optimize the operating procedure of SMART. In keeping with these intentions, a real-time model of the digital core protection and monitoring systems was developed, and its real-time performance was verified for various simulation scenarios. In this paper, a performance test of the core protection and monitoring algorithms with a DLL file for the SMART simulator implementation was performed. A DLL file of the simulator application code was made, and several real-time evaluation tests were conducted for steady-state and transient conditions with simulated system variables. The results of all test cases showed good agreement with the reference results, and features caused by algorithm changes were properly reflected in the DLL results. Therefore, it was concluded that the SCOPS_SIM and SCOMS_SIM algorithms and their calculational capabilities are appropriate for the core protection and monitoring program in the SMART simulator

  7. Next Generation Aura-OMI SO2 Retrieval Algorithm: Introduction and Implementation Status

    Science.gov (United States)

    Li, Can; Joiner, Joanna; Krotkov, Nickolay A.; Bhartia, Pawan K.

    2014-01-01

    We introduce our next-generation algorithm to retrieve SO2 using radiance measurements from the Aura Ozone Monitoring Instrument (OMI). We employ a principal component analysis technique to analyze OMI radiance spectra in the 310.5-340 nm range acquired over regions with no significant SO2. The resulting principal components (PCs) capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering, and ozone absorption) and measurement artifacts, enabling us to account for these various interferences in SO2 retrievals. By fitting these PCs, along with SO2 Jacobians calculated with a radiative transfer model, to OMI-measured radiance spectra, we directly estimate the SO2 vertical column density in one step. Compared with the previous-generation operational OMSO2 PBL (Planetary Boundary Layer) SO2 product, our new algorithm greatly reduces unphysical biases and decreases the noise by a factor of two, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research. We have operationally implemented this new algorithm on the OMI SIPS for producing the new generation of standard OMI SO2 products.
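
    Schematically, the one-step retrieval amounts to a joint linear fit of the principal components and the SO2 Jacobian to each measured spectrum; the hedged Python sketch below illustrates the idea with synthetic shapes (the operational preprocessing, weighting, and units are more involved):

      import numpy as np

      def fit_so2(measured, pcs, so2_jacobian):
          # Regress the spectrum on [PCs | Jacobian]; the Jacobian's
          # coefficient plays the role of the SO2 column estimate.
          A = np.column_stack([pcs, so2_jacobian])
          coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
          return coeffs[-1]

      rng = np.random.default_rng(0)
      pcs = rng.standard_normal((200, 8))        # stand-in PCs
      jac = rng.standard_normal(200)             # stand-in SO2 Jacobian
      spec = (pcs @ rng.standard_normal(8) + 0.7 * jac
              + 0.01 * rng.standard_normal(200))
      print(fit_so2(spec, pcs, jac))             # recovers ~0.7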

  8. An evaluation and implementation of rule-based Home Energy Management System using the Rete algorithm.

    Science.gov (United States)

    Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular, and the Home Energy Management System (HEMS) plays an important role in saving energy without decreasing QoL (Quality of Life). Many rule-based HEMSs have been proposed, and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern-matching algorithm for IF-THEN rules. We have proposed a rule-based HEMS using the Rete algorithm. In the proposed system, rules for managing energy are processed by networked smart taps, and the loads for processing rules and collecting data are distributed among the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing the rules with the Rete algorithm. In this paper, we evaluated the proposed system by simulation. In the simulation environment, each rule is processed by the smart tap that relates to the action part of that rule. We also implemented the proposed system as a HEMS using smart taps.
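
    A naive Python rendering of IF-THEN rule evaluation is shown below for intuition; a real Rete network avoids re-testing every rule against every fact by sharing condition nodes, which this deliberately simple (hypothetical) matcher does not do:

      # Each toy rule is a condition over the fact base plus an action name.
      rules = [
          {"if": lambda f: f["power_w"] > 1000 and not f["occupied"],
           "then": "turn_off_outlet"},
          {"if": lambda f: f["temp_c"] > 28 and f["occupied"],
           "then": "start_fan"},
      ]

      facts = {"power_w": 1200, "occupied": False, "temp_c": 24}
      fired = [r["then"] for r in rules if r["if"](facts)]
      print(fired)                  # -> ['turn_off_outlet']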

  9. An improved non-uniformity correction algorithm and its GPU parallel implementation

    Science.gov (United States)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which often leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed, putting forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained, respectively, by minimizing the local Gaussian curvature and the mean curvature of the image surface. Then, a guided filter is utilized to combine these two parts to get an estimate of the spatial low-frequency component. Finally, this SLP component is used in the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm can reduce the non-uniformity without losing detail. A GPU-based parallel implementation that runs 150 times faster than the CPU version is also presented, showing that the proposed algorithm has great potential for real-time application.

  10. SU-E-T-67: Clinical Implementation and Evaluation of the Acuros Dose Calculation Algorithm

    International Nuclear Information System (INIS)

    Yan, C; Combine, T; Dickens, K; Wynn, R; Pavord, D; Huq, M

    2014-01-01

    Purpose: The main aim of the current study is to present a detailed description of the implementation of the Acuros XB dose calculation algorithm and to evaluate its clinical impact by comparing it with the AAA algorithm. Methods: The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were evaluated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6cm × 6cm to 40cm × 40cm. Central-axis and off-axis points at different depths were chosen for the comparison. Similarly, wedge fields with wedge angles from 15 to 60 degrees were used. In addition, variable field sizes on a heterogeneous phantom were used to evaluate the Acuros algorithm. Finally, both Acuros and AAA were tested on VMAT patient plans for various sites, and dose distributions and calculation times were compared. Results: On average, computation time is reduced by at least 50% by Acuros XB compared with AAA on single fields and VMAT plans. When used for open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculated doses were within 1% of measurement. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. When the heterogeneous phantom was used, Acuros XB also showed improved accuracy. Conclusion: Compared with AAA, Acuros XB can improve accuracy while significantly reducing computation time for VMAT plans

  11. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    Science.gov (United States)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.

  12. A study and implementation of algorithm for automatic ECT result comparison

    International Nuclear Information System (INIS)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog

    2012-01-01

    An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove human error from the manual comparison of large amounts of data. The structures of two ECT programs (Eddy net and ECT IDS), each with a unique file format, were analyzed in order to open files and load data into PC memory. The comparison algorithm was defined graphically for easy conversion to a PC programming language. The Automatic Result Program was written in the C language, which is suitable for future code management, has an object-oriented programming structure, and offers fast development potential. The Automatic Result Program has an MS Excel export function, useful for additional analysis with external software, and an intuitive, user-friendly result visualization function with color mapping that supports efficient analysis

  13. Search of molecular ground state via genetic algorithm: Implementation on a hybrid SIMD-MIMD platform

    International Nuclear Information System (INIS)

    Pucello, N.; D'Agostino, G.; Pisacane, F.

    1997-01-01

    A genetic algorithm for the optimization of the ground-state structure of a metallic cluster has been developed and ported to a SIMD-MIMD parallel platform. The SIMD part of the parallel platform is represented by a Quadrics/APE100 consisting of 512 floating-point units, while the MIMD part is formed by a cluster of workstations. The proposed algorithm is composed of a part in which the genetic operators are applied to the elements of the population, and a part which performs further local relaxation and the fitness calculation via Molecular Dynamics; these parts have been implemented on the MIMD and SIMD parts, respectively. Results have been compared to those generated using Simulated Annealing

  14. Dynamic game balancing implementation using adaptive algorithm in mobile-based Safari Indonesia game

    Science.gov (United States)

    Yuniarti, Anny; Nata Wardanie, Novita; Kuswardayan, Imam

    2018-03-01

    In developing a game there is one method that should be applied to maintain the interest of players, namely dynamic game balancing. Dynamic game balancing is a process to match a player’s playing style with the behaviour, attributes, and game environment. This study applies dynamic game balancing using adaptive algorithm in scrolling shooter game type called Safari Indonesia which developed using Unity. The game of this type is portrayed by a fighter aircraft character trying to defend itself from insistent enemy attacks. This classic game is chosen to implement adaptive algorithms because it has quite complex attributes to be developed using dynamic game balancing. Tests conducted by distributing questionnaires to a number of players indicate that this method managed to reduce frustration and increase the pleasure factor in playing.

  15. IMPLEMENTATION OF INCIDENT DETECTION ALGORITHM BASED ON FUZZY LOGIC IN PTV VISSIM

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-05-01

    Full Text Available Traffic incident management is a major challenge in traffic management, requiring constant attention and significant investment, as well as fast and accurate solutions, in order to re-establish normal traffic conditions. Automatic control methods are becoming an important factor in reducing the traffic congestion caused by an incident. In this paper, an automatic incident detection algorithm based on fuzzy logic is implemented in the PTV VISSIM software. Nine different types of tests were conducted on a two-lane road segment with changing traffic conditions: the location of the road accident and the traffic load. The main conclusion of the research is that the proposed incident detection algorithm demonstrates good performance in detection time and false-alarm rate

  16. Implementation in an FPGA circuit of Edge detection algorithm based on the Discrete Wavelet Transforms

    Science.gov (United States)

    Bouganssa, Issam; Sbihi, Mohamed; Zaim, Mounia

    2017-07-01

    The 2D Discrete Wavelet Transform (DWT) is a computationally intensive task that is usually implemented on specific architectures in many real-time imaging systems. In this paper, a high-throughput edge (contour) detection algorithm based on the discrete wavelet transform is proposed. A technique of applying the filters in the three directions of the image (horizontal, vertical and diagonal) is used to bring out the maximum of the existing contours. The proposed architectures were designed in VHDL and mapped to a Xilinx Spartan-6 FPGA. The synthesis results show that the proposed architecture has a low area cost and can operate at up to 100 MHz, performing 2D wavelet analysis for a sequence of images while maintaining the flexibility of the system to support an adaptive algorithm.
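
    A tiny Python sketch of the idea follows: one Haar DWT level yields three detail subbands whose combined magnitude highlights contours (the filter normalization and subband naming conventions here are illustrative; the paper's wavelet and hardware pipeline differ):

      import numpy as np

      def haar_detail_edges(img):
          # Split 2x2 blocks (assumes even dimensions) into the three
          # first-level detail subbands of a Haar DWT.
          a = img[0::2, 0::2].astype(float); b = img[0::2, 1::2].astype(float)
          c = img[1::2, 0::2].astype(float); d = img[1::2, 1::2].astype(float)
          det_v = (a - b + c - d) / 4       # responds to vertical edges
          det_h = (a + b - c - d) / 4       # responds to horizontal edges
          det_d = (a - b - c + d) / 4       # responds to diagonal structure
          return np.sqrt(det_v**2 + det_h**2 + det_d**2)

      img = np.zeros((8, 8)); img[:, 5:] = 255     # vertical step edge
      print(haar_detail_edges(img))                # peaks along the edge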

  17. High-frequency asymptotics of the local vertex function. Algorithmic implementations

    Energy Technology Data Exchange (ETDEWEB)

    Tagliavini, Agnese; Wentzell, Nils [Institut fuer Theoretische Physik, Eberhard Karls Universitaet, 72076 Tuebingen (Germany); Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Li, Gang; Rohringer, Georg; Held, Karsten; Toschi, Alessandro [Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Taranto, Ciro [Institute for Solid State Physics, Vienna University of Technology, 1040 Vienna (Austria); Max Planck Institute for Solid State Research, D-70569 Stuttgart (Germany); Andergassen, Sabine [Institut fuer Theoretische Physik, Eberhard Karls Universitaet, 72076 Tuebingen (Germany)

    2016-07-01

    Local vertex functions are a crucial ingredient of several forefront many-body algorithms in condensed matter physics. However, the full treatment of their frequency dependence poses a huge limitation to the numerical performance. A significant advancement requires an efficient treatment of the high-frequency asymptotic behavior of the vertex functions. We here provide a detailed diagrammatic analysis of the high-frequency asymptotic structures and their physical interpretation. Based on these insights, we propose a frequency parametrization, which captures the whole high-frequency asymptotics for arbitrary values of the local Coulomb interaction and electronic density. We present its algorithmic implementation in many-body solvers based on parquet-equations as well as functional renormalization group schemes and assess its validity by comparing our results for the single impurity Anderson model with exact diagonalization calculations.

  19. Implementation of sepsis algorithm by nurses in the intensive care unit

    Directory of Open Access Journals (Sweden)

    Paula Pedroso Peninck

    2012-04-01

    Full Text Available Sepsis is defined as a clinical syndrome consisting of a systemic inflammatory response associated with an infection, which may determine malfunction or failure of multiple organs. This research aims to verify the application of the sepsis algorithm by nurses in the Intensive Care Unit and to create an operational nursing assistance guide. This is an exploratory, descriptive study with a quantitative approach. A data collection instrument based on the relevant literature was elaborated, assessed, corrected and validated. The sample consisted of 20 intensive care unit nurses. We obtained satisfactory evaluations of the nurses' performance, but some items did not reach 50% accuracy. We emphasize the importance of greater numbers of nurses becoming acquainted with and correctly applying the sepsis algorithm. Based on the above, an operational septic-patient nursing assistance guide was created, grounded in the difficulties identified with respect to the variables applied in the research and the relevant literature.

  20. Implementation of Human Trafficking Education and Treatment Algorithm in the Emergency Department.

    Science.gov (United States)

    Egyud, Amber; Stephens, Kimberly; Swanson-Bierman, Brenda; DiCuccio, Marge; Whiteman, Kimberly

    2017-11-01

    Health care professionals have not been successful in recognizing or rescuing victims of human trafficking. The purpose of this project was to implement a screening system and treatment algorithm in the emergency department to improve the identification and rescue of victims of human trafficking. The lack of recognition by health care professionals is related to inadequate education and training tools and confusion with other forms of violence such as trauma and sexual assault. A multidisciplinary team was formed to assess the evidence related to human trafficking and make recommendations for practice. After receiving education, staff completed a survey about knowledge gained from the training. An algorithm for identification and treatment of sex trafficking victims was implemented and included a 2-pronged identification approach: (1) medical red flags created by a risk-assessment tool embedded in the electronic health record and (2) a silent notification process. Outcome measures were the number of victims who were identified either by the medical red flags or by silent notification and were offered and accepted intervention. Survey results indicated that 75% of participants reported that the education improved their competence level. The results demonstrated that an education and treatment algorithm may be an effective strategy to improve recognition. One patient was identified as an actual victim of human trafficking; the remaining patients reported other forms of abuse. Education and a treatment algorithm were effective strategies to improve recognition and rescue of human trafficking victims and increase identification of other forms of abuse. Copyright © 2017 Emergency Nurses Association. Published by Elsevier Inc. All rights reserved.

  1. IMPLEMENTATION OF A REAL-TIME STACKING ALGORITHM IN A PHOTOGRAMMETRIC DIGITAL CAMERA FOR UAVS

    Directory of Open Access Journals (Sweden)

    A. Audi

    2017-08-01

    Full Text Available In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of the images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The resulting stacked image obtained on real surveys shows no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the writing time of an image to the storage device. An interesting by-product of this algorithm is the 3D rotation
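
    As a much-simplified stand-in for the registration-and-stacking pipeline (phase correlation and whole-pixel shifts here replace the FAST features, IMU aiding, and distortion-aware resampling of the paper), one can sketch in Python:

      import numpy as np

      def stack_frames(frames):
          # Register each frame to the first by phase correlation, undo
          # the integer shift, and average the aligned frames.
          ref = frames[0].astype(np.float64)
          acc = ref.copy()
          F_ref = np.fft.fft2(ref)
          for frame in frames[1:]:
              F = np.fft.fft2(frame.astype(np.float64))
              xc = F_ref * np.conj(F)
              corr = np.fft.ifft2(xc / (np.abs(xc) + 1e-12)).real
              dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
              if dy > corr.shape[0] // 2:      # map to signed shifts
                  dy -= corr.shape[0]
              if dx > corr.shape[1] // 2:
                  dx -= corr.shape[1]
              acc += np.roll(frame.astype(np.float64), (dy, dx), axis=(0, 1))
          return acc / len(frames)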

  2. Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond.

    Science.gov (United States)

    Morita, Kenji; Jitsev, Jenia; Morrison, Abigail

    2016-09-15

    Value-based action selection has been suggested to be realized in the corticostriatal local circuits through competition among neural populations. In this article, we review theoretical and experimental studies that have constructed and verified this notion, and provide new perspectives on how the local-circuit selection mechanisms implement reinforcement learning (RL) algorithms and computations beyond them. The striatal neurons are mostly inhibitory, and lateral inhibition among them has been classically proposed to realize "Winner-Take-All (WTA)" selection of the maximum-valued action (i.e., 'max' operation). Although this view has been challenged by the revealed weakness, sparseness, and asymmetry of lateral inhibition, which suggest more complex dynamics, WTA-like competition could still occur on short time scales. Unlike the striatal circuit, the cortical circuit contains recurrent excitation, which may enable retention or temporal integration of information and probabilistic "soft-max" selection. The striatal "max" circuit and the cortical "soft-max" circuit might co-implement an RL algorithm called Q-learning; the cortical circuit might also similarly serve for other algorithms such as SARSA. In these implementations, the cortical circuit presumably sustains activity representing the executed action, which negatively impacts dopamine neurons so that they can calculate reward-prediction-error. Regarding the suggested more complex dynamics of striatal, as well as cortical, circuits on long time scales, which could be viewed as a sequence of short WTA fragments, computational roles remain open: such a sequence might represent (1) sequential state-action-state transitions, constituting replay or simulation of the internal model, (2) a single state/action by the whole trajectory, or (3) probabilistic sampling of state/action. Copyright © 2016. Published by Elsevier B.V.
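
    To make the suggested division of labor concrete, a minimal Python sketch is given below: the "max" in the Q-learning update corresponds to the striatal WTA-like operation, while soft-max action selection corresponds to the cortical circuit (a toy rendering of the review's framing, not a circuit model):

      import numpy as np

      def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
          # Q-learning: bootstrap from the maximum-valued next action.
          Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

      def softmax_action(Q, s, beta=2.0, rng=np.random.default_rng(0)):
          # Probabilistic, value-weighted selection ("soft-max").
          p = np.exp(beta * (Q[s] - Q[s].max()))
          p /= p.sum()
          return int(rng.choice(p.size, p=p))

      Q = np.zeros((3, 2))                 # 3 states, 2 actions
      q_update(Q, 0, 1, r=1.0, s_next=1)   # reward strengthens pair (0, 1)
      print(Q, softmax_action(Q, 0))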

  3. Spiral-CT-angiography of acute pulmonary embolism: factors that influence the implementation into standard diagnostic algorithms

    International Nuclear Information System (INIS)

    Bankier, A.; Herold, C.J.; Fleischmann, D.; Janata-Schwatczek, K.

    1998-01-01

    Purpose: Debate about the potential implementation of Spiral-CT in diagnostic algorithms for pulmonary embolism is often focused on sensitivity and specificity in the context of comparative methodologic studies. We investigate whether additional factors might influence this debate. Results: The factors availability, acceptance, patient outcome, and cost-effectiveness studies have a substantial influence on the implementation of Spiral-CT in diagnostic algorithms for pulmonary embolism. Incorporating these factors into the discussion might lead to more flexible and more patient-oriented algorithms for the diagnosis of pulmonary embolism. Conclusion: Availability of equipment, acceptance among clinicians, patient outcome, and cost-effectiveness evaluations should be brought into the debate about the potential implementation of Spiral-CT in routine diagnostic imaging algorithms for pulmonary embolism. (orig./AJ) [de

  4. Implementation of the U.S. Environmental Protection Agency's Waste Reduction (WAR) Algorithm in Cape-Open Based Process Simulators

    Science.gov (United States)

    The Sustainable Technology Division has recently completed an implementation of the U.S. EPA's Waste Reduction (WAR) Algorithm that can be directly accessed from a Cape-Open compliant process modeling environment. The WAR Algorithm add-in can be used in AmsterChem's COFE (Cape-Op...

  5. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    Science.gov (United States)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. Here, the QR decomposition is implemented with the COordinate Rotation DIgital Computer (CORDIC) algorithm. CORDIC-based QRD requires only additions and shifts [6], so it is faster and more regular than other methods. In this article, the hardware architecture of an EVD (EigenValue Decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using the Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.
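
    To illustrate why CORDIC is attractive in hardware, the following Python sketch (a generic textbook CORDIC in rotation mode, unrelated to the paper's FPGA design) rotates a 2-vector using only shift-and-add micro-rotations; the same iterations, run in vectoring mode, zero out matrix entries to form the Givens rotations of a QRD:

        import math

        def cordic_rotate(x, y, angle, n_iter=16):
            # Rotate (x, y) by `angle` radians with shift-and-add micro-rotations.
            angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
            gain = 1.0                      # CORDIC gain compensation K ~ 0.6073
            for a in angles:
                gain *= math.cos(a)
            z = angle
            for i in range(n_iter):
                d = 1.0 if z >= 0 else -1.0  # steer residual angle toward zero
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * angles[i]
            return x * gain, y * gain

        print(cordic_rotate(1.0, 0.0, math.pi / 4))  # ~ (0.7071, 0.7071)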

  6. Progress in parallel implementation of the multilevel plane wave time domain algorithm

    KAUST Repository

    Liu, Yang

    2013-07-01

    The computational complexity and memory requirements of classical schemes for evaluating transient electromagnetic fields produced by N_s dipoles active for N_t time steps scale as O(N_t N_s^2) and O(N_s^2), respectively. The multilevel plane wave time domain (PWTD) algorithm [A.A. Ergin et al., Antennas and Propagation Magazine, IEEE, vol. 41, pp. 39-52, 1999], viz. the extension of the frequency domain fast multipole method (FMM) to the time domain, reduces the above costs to O(N_t N_s log^2 N_s) and O(N_s^α), with α = 1.5 for surface current distributions and α = 4/3 for volumetric ones. Its favorable computational and memory costs notwithstanding, serial implementations of the PWTD scheme unfortunately remain somewhat limited in scope and ill-suited to tackle complex real-world scattering problems, and parallel implementations are called for. © 2013 IEEE.

  7. Implementation of a Multichannel Serial Data Streaming Algorithm using the Xilinx Serial RapidIO Solution

    Science.gov (United States)

    Doxley, Charles A.

    2016-01-01

    In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.

  8. Implementation of the ALICE HLT hardware cluster finder algorithm in Vivado HLS

    Energy Technology Data Exchange (ETDEWEB)

    Gruell, Frederik; Engel, Heiko; Kebschull, Udo [Infrastructure and Computer Systems in Data Processing, Goethe University Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    The FastClusterFinder algorithm running in the ALICE High-Level Trigger (HLT) read-out boards extracts clusters from raw data from the Time Projection Chamber (TPC) detector and forwards them to the HLT data processing framework for tracking, event reconstruction and compression. It serves as an early stage of feature extraction in the FPGA of the board. Past and current implementations are written in VHDL on reconfigurable hardware for high throughput and low latency. We examine Vivado HLS, a high-level synthesis tool that promises increased developer productivity, as an alternative. The implementation of the application is compared to descriptions in VHDL and MaxJ in terms of productivity, resource usage and maximum clock frequency.
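
    As a software analogue (a hedged sketch, not the ALICE firmware), a one-dimensional cluster finder of this kind groups above-threshold ADC samples into contiguous runs and reduces each run to a charge-weighted centroid:

        def find_clusters(samples, threshold=5):
            # Return (centroid, total_charge) for each contiguous above-threshold run.
            clusters, start = [], None
            for i, s in enumerate(list(samples) + [0]):   # sentinel ends the last run
                if s > threshold and start is None:
                    start = i
                elif s <= threshold and start is not None:
                    run = samples[start:i]
                    q = float(sum(run))
                    centroid = sum(j * v for j, v in zip(range(start, i), run)) / q
                    clusters.append((centroid, q))
                    start = None
            return clusters

        print(find_clusters([0, 0, 6, 9, 7, 0, 0, 8, 12, 0]))
        # [(3.045..., 22.0), (7.6, 20.0)]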

  9. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    International Nuclear Information System (INIS)

    Tian, Zhen; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B.; Peng, Fei

    2015-01-01

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU’s relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. Fast inter-GPU data transfer is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on a CPU or a single GPU due to their modest problem scale and computational load. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP. A head and neck (H and N) cancer case is
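
    For reference, the Barzilai-Borwein scheme mentioned for the master problem picks a scalar step size from the last two iterates and gradients; a generic numpy sketch (ours, not the authors' solver) is:

        import numpy as np

        def bb_gradient_descent(grad, x, n_iter=100, step0=1e-3):
            # Minimize a smooth function given its gradient, using the
            # Barzilai-Borwein step size alpha = (s^T s) / (s^T y).
            g_prev, x_prev = grad(x), x.copy()
            x = x - step0 * g_prev
            for _ in range(n_iter):
                g = grad(x)
                s, y = x - x_prev, g - g_prev
                alpha = float(s @ s) / max(float(s @ y), 1e-12)
                x_prev, g_prev = x.copy(), g
                x = x - alpha * g
            return x

        # Example: minimize ||Ax - b||^2; the minimizer is A^-1 b here.
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, 0.0])
        grad = lambda x: 2 * A.T @ (A @ x - b)
        print(bb_gradient_descent(grad, np.zeros(2)))   # ~ [0.4, -0.2]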

  10. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun; Jia, Xun, E-mail: Xun.Jia@UTSouthwestern.edu; Jiang, Steve B., E-mail: Steve.Jiang@UTSouthwestern.edu [Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas 75390 (United States)]; Peng, Fei [Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)

    2015-06-15

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU’s relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. Fast inter-GPU data transfer is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on a CPU or a single GPU due to their modest problem scale and computational load. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP. A head and neck (H and N) cancer case is

  11. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    Energy Technology Data Exchange (ETDEWEB)

    Santi, Peter Angelo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cutler, Theresa Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favalli, Andrea [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Koehler, Katrina Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzlova, Daniela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parker, Robert Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Croft, Stephen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-01

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
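
    As a simple illustration of what 'higher order moments' of a pulse train are, the following sketch (ours, not the LANL analysis code) computes reduced factorial moments, singles through quads, from a multiplicity histogram:

        from math import comb

        def reduced_factorial_moments(hist, kmax=4):
            # hist[n] = number of counting gates containing n neutron events.
            # Returns m_k = sum_n C(n, k) * P(n) for k = 1..kmax
            # (singles, doubles, triples, quads).
            total = sum(hist)
            p = [h / total for h in hist]
            return [sum(comb(n, k) * pn for n, pn in enumerate(p))
                    for k in range(1, kmax + 1)]

        # Hypothetical multiplicity histogram for gates with 0..5 events
        print(reduced_factorial_moments([500, 300, 120, 50, 20, 10]))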

  12. Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport

    International Nuclear Information System (INIS)

    Howell, L H

    2004-01-01

    Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally-uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, accuracy of the AMR solution, and scalar speedup of the AMR algorithm compared to a uniform fully-refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, both in single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common

  13. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    Science.gov (United States)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO), and a contingency rendezvous in LLO with the ascent and descent stages in the case of an aborted lunar landing. It is therefore clear that each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper describes these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile; rather, they were designed from the start to be general enough that any type of trajectory profile can be flown (e.g., a coelliptic profile, a stable-orbit rendezvous profile, an expedited LLO rendezvous profile, etc.), all using the same suite of rendezvous algorithms. Each of these profiles makes use of maneuver types designed with the dual goals of robustness and performance: they converge quickly under dispersed conditions and perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these
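
    For a flavor of the underlying orbital mechanics, an altitude-adjust maneuver of the NH type can be approximated by a textbook two-impulse Hohmann transfer; the sketch below (standard formulas, in no way the Orion flight code) computes the two burns:

        import math

        MU_EARTH = 3.986004418e14  # Earth gravitational parameter, m^3/s^2

        def hohmann_burns(r1, r2, mu=MU_EARTH):
            # Delta-v (m/s) of the two impulses raising a circular orbit r1 -> r2.
            dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
            dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
            return dv1, dv2

        # 400 km -> 420 km circular LEO altitudes
        r_e = 6_378_137.0
        print(hohmann_burns(r_e + 400e3, r_e + 420e3))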

  14. Algorithm and Implementation of Distributed ESN Using Spark Framework and Parallel PSO

    Directory of Open Access Journals (Sweden)

    Kehe Wu

    2017-04-01

    Full Text Available The echo state network (ESN) employs a huge reservoir with sparsely and randomly connected internal nodes and trains only the output weights, which avoids the suboptimality, exploding and vanishing gradients, high complexity and other disadvantages faced by traditional recurrent neural network (RNN) training. In light of its outstanding adaptation to nonlinear dynamical systems, the ESN has been applied to a wide range of applications. However, in the era of Big Data, with an enormous amount of data being generated continuously every day, the data are often distributed and stored in real applications, and thus the centralized ESN training process is often technologically unsuitable. In order to meet the requirements of real-world Big Data applications, in this study we propose an algorithm, and its implementation, for distributed ESN training. The algorithm is based on the parallel particle swarm optimization (P-PSO) technique, and the implementation uses Spark, a well-known large-scale data processing framework. Four extremely large-scale datasets, including artificial benchmarks, real-world data and image data, are adopted to verify our framework on a scalable platform. Experimental results indicate that the proposed work performs well at Big Data scale with regard to speed, accuracy and generalization capability.
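
    For orientation, the core ESN computation, a fixed random reservoir with a leaky tanh recurrence and a trained linear readout, fits in a few numpy lines; the sketch below trains the readout by ridge regression instead of the article's P-PSO, and all sizes and constants are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_res = 1, 200
        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

        def run_reservoir(u_seq, leak=0.3):
            # Collect reservoir states for an input sequence of shape [T, n_in].
            x, states = np.zeros(n_res), []
            for u in u_seq:
                x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
                states.append(x.copy())
            return np.array(states)

        # Train the readout by ridge regression: W_out = Y^T X (X^T X + lam I)^-1
        U = rng.standard_normal((500, 1)); Y = np.roll(U, 1, axis=0)  # toy delay task
        X = run_reservoir(U)
        lam = 1e-6
        W_out = Y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(n_res))
        print(np.mean((X @ W_out.T - Y) ** 2))        # training MSE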

  15. Implementation vigenere algorithm using microcontroller for sending SMS in monitoring radioactive substances transport system

    International Nuclear Information System (INIS)

    Adi Abimanyu; Nurhidayat; Jumari

    2013-01-01

    The safety and security of radioactive substances must be ensured from sender to receiver so that no harm comes to humans. In general, the transport of radioactive materials is monitored through telephone conversations used to determine the location and the exposure rate of the radioactive substances. From the security standpoint, communication through telephone conversations is easily intercepted, and in addition the possibility of human error is quite high. The SMS service is known for its ease of use, so SMS can replace telephone conversations for monitoring the radiation exposure rate and the position of radioactive substances during transport. The monitoring system developed here implements the Vigenère algorithm on a microcontroller for sending SMS (Short Message Service) messages. Tests were conducted on encryption and decryption and on the computation time required. The test results show that the Vigenère algorithm was successfully implemented to encrypt and decrypt the messages of the transport monitoring system, and that the computation time required to encrypt and decrypt the data is 13.05 ms for 36 characters and 13.61 ms for 37 characters, i.e., each additional character requires about 0.56 ms of computing time. (author)
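
    The Vigenère cipher itself is compact enough to state in full; this Python sketch shows the classical algorithm (not the microcontroller firmware) for A-Z text:

        def vigenere(text, key, decrypt=False):
            # Classical Vigenere cipher over the alphabet A-Z (letters only).
            out, sign = [], -1 if decrypt else 1
            for i, c in enumerate(text.upper()):
                k = ord(key[i % len(key)].upper()) - 65
                out.append(chr((ord(c) - 65 + sign * k) % 26 + 65))
            return "".join(out)

        ct = vigenere("RADIOACTIVESOURCE", "GPS")
        print(ct, vigenere(ct, "GPS", decrypt=True))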

  16. GillespieSSA: Implementing the Gillespie Stochastic Simulation Algorithm in R

    Directory of Open Access Journals (Sweden)

    Mario Pineda-Krch

    2008-02-01

    Full Text Available The deterministic dynamics of populations in continuous time are traditionally described using coupled, first-order ordinary differential equations. While this approach is accurate for large systems, it is often inadequate for small systems where key species may be present in small numbers or where key reactions occur at a low rate. The Gillespie stochastic simulation algorithm (SSA) is a procedure for generating time-evolution trajectories of finite populations in continuous time and has become the standard algorithm for these types of stochastic models. This article presents a simple-to-use and flexible framework for implementing the SSA using the high-level statistical computing language R and the package GillespieSSA. Using three ecological models as examples (logistic growth, the Rosenzweig-MacArthur predator-prey model, and the Kermack-McKendrick SIRS metapopulation model), this paper shows how a deterministic model can be formulated as a finite-population stochastic model within the framework of SSA theory and how it can be implemented in R. Simulations of the stochastic models are performed using four different SSA Monte Carlo methods: one exact method (Gillespie's direct method) and three approximate methods (explicit, binomial, and optimized tau-leap methods). Comparison of simulation results confirms that while the time-evolution trajectories obtained from the different SSA methods are indistinguishable, the approximate methods are up to four orders of magnitude faster than the exact method.
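
    The exact direct method at the heart of the package is short enough to sketch; here is a Python version (ours; the R package itself should be used as described above) for a logistic birth-death model:

        import numpy as np

        def gillespie_logistic(n0=50, b=2.0, d=1.0, K=1000, t_end=10.0, seed=1):
            # Gillespie direct method for logistic growth as a birth-death process:
            # birth rate b*n; death rate d*n + (b-d)*n^2/K (crowding), so the
            # deterministic limit is dn/dt = (b-d)*n*(1 - n/K).
            rng = np.random.default_rng(seed)
            t, n, traj = 0.0, n0, [(0.0, n0)]
            while t < t_end and n > 0:
                birth = b * n
                death = d * n + (b - d) * n * n / K
                total = birth + death
                t += rng.exponential(1.0 / total)     # waiting time to next event
                n += 1 if rng.random() < birth / total else -1
                traj.append((t, n))
            return traj

        print(gillespie_logistic()[-1])   # final (time, population)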

  17. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study.

    Science.gov (United States)

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-03-28

    The non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipelining techniques. The algorithm has been coded in C with pragma directives to optimize the architecture of the system. We have used the novel Software Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation.

  18. Firmware implementation of algorithms for the new topological processor in the ATLAS first level trigger

    Energy Technology Data Exchange (ETDEWEB)

    Maldaner, Stephan; Caputo, Regina; Schaefer, Ulrich; Tapprogge, Stefan [Universitaet Mainz, Staudingerweg 7, 55128 Mainz (Germany)

    2013-07-01

    After the upgrade of the Large Hadron Collider in 2013/2014, proton-proton collisions will be provided at a center-of-mass energy of up to 14 TeV with an instantaneous luminosity of at least 1×10^34 cm^-2 s^-1. During this upgrade, a new FPGA-based electronics system (the Topological Processor) will be included in the ATLAS trigger chain to keep up with the increased rate of events. To reduce rates while maintaining high signal efficiency of the trigger, the processor will make its decisions based upon topological criteria like angular cuts and mass calculations. As a hardware-based trigger, it will have to fit into the tight first-level trigger latency budget of 2.5 μs and thus faces the challenge of making decisions within a very short time. Besides the latency, the main constraint on the algorithms is the amount of FPGA logic resources required by their firmware implementation. To be able to use as much information as possible, each module will be equipped with two state-of-the-art Xilinx Virtex 7 FPGAs to process the incoming data. This talk presents some of the topological algorithms and discusses properties of their implementation in firmware.

  19. Demonstration of quantum advantage in machine learning

    Science.gov (United States)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
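
    Since the oracle-query gap described here is the same one exploited by the Deutsch-Jozsa algorithm, a small state-vector simulation (our numpy sketch, not the superconducting-processor experiment) makes it concrete: a single oracle evaluation decides constant versus balanced, where a deterministic classical algorithm needs 2^(n-1)+1 queries:

        import numpy as np

        def deutsch_jozsa(f_table):
            # Simulate Deutsch-Jozsa on n qubits with a phase oracle.
            # f_table: length-2^n 0/1 list; f is promised constant or balanced.
            n = int(np.log2(len(f_table)))
            H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
            Hn = H
            for _ in range(n - 1):
                Hn = np.kron(Hn, H)                      # n-qubit Hadamard
            state = np.zeros(2 ** n); state[0] = 1.0     # |0...0>
            state = Hn @ state                           # uniform superposition
            state = (-1.0) ** np.array(f_table) * state  # one oracle call: (-1)^f(x)
            state = Hn @ state
            p0 = state[0] ** 2                           # prob of measuring |0...0>
            return "constant" if p0 > 0.5 else "balanced"

        print(deutsch_jozsa([0, 0, 0, 0]))               # constant
        print(deutsch_jozsa([0, 1, 1, 0]))               # balanced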

  20. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  1. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications.

    Science.gov (United States)

    D'Angelo, Gianni; Rampone, Salvatore

    2014-01-01

    The huge quantity of data produced in biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general-purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), which form a probability distribution and allow the selection of the term literals. Its great versatility makes U-BRAIN applicable in many fields in which there are data to be analyzed. However, the memory and execution time required are of order O(n^3) and O(n^5), respectively, so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions able to lead us towards an implementation of the U-BRAIN algorithm on parallel computers. First we give a Dynamic Programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use mass memory, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of

  2. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    Energy Technology Data Exchange (ETDEWEB)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  3. FPGA-based implementation for steganalysis: a JPEG-compatibility algorithm

    Science.gov (United States)

    Gutierrez-Fernandez, E.; Portela-García, M.; Lopez-Ongil, C.; Garcia-Valderas, M.

    2013-05-01

    Steganalysis is the process of detecting hidden data in cover documents, like digital images, videos, audio files, etc. It is the inverse process of steganography, the method used to hide secret messages. The widespread use of computers and network technologies makes digital files a very convenient means of storing secret data or transmitting secret messages through the Internet. Depending on the cover medium used to embed the data, there are different steganalysis methods. In the case of images, many of the steganalysis and steganographic methods are focused on JPEG image formats, since JPEG is one of the most common formats. One of the most important handicaps of steganalysis methods is processing speed, since it is usually necessary to process huge amounts of data, possibly including ongoing Internet traffic in real time. In this paper, a JPEG steganalysis system is implemented on an FPGA in order to speed up the detection process with respect to software-based implementations and to increase the throughput. In particular, the implemented method is the JPEG-compatibility detection algorithm, which is based on the fact that when a JPEG image is modified, the resulting image is incompatible with the JPEG compression process.

  4. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    International Nuclear Information System (INIS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-01-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  5. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    Science.gov (United States)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  6. An Implementation of RC4+ Algorithm and Zig-zag Algorithm in a Super Encryption Scheme for Text Security

    Science.gov (United States)

    Budiman, M. A.; Amalia; Chayanie, N. I.

    2018-03-01

    Cryptography is the art and science of using mathematical methods to preserve message security. There are two types of cryptography, namely classical and modern cryptography. Nowadays, most people would rather use modern cryptography than classical cryptography because it is harder to break. One classical algorithm is the Zig-zag cipher, which uses the transposition technique: the original message is unreadable unless the person has the key to decrypt the message. To improve security, the Zig-zag cipher is combined with the RC4+ cipher, a symmetric-key algorithm in the form of a stream cipher. The two algorithms are combined to form a super-encryption, making the message harder for a cryptanalyst to break. The results show that the complexity of the combined algorithm is Θ(n^2), while the complexities of the Zig-zag cipher and the RC4+ cipher are Θ(n^2) and Θ(n), respectively.
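
    The Zig-zag (rail fence) transposition named above is easy to state exactly; a generic Python sketch (ours; the RC4+ stream cipher half of the super-encryption is omitted):

        def zigzag_encrypt(text, rails=3):
            # Rail-fence transposition: write characters diagonally down and up
            # across the rails, then read the rails off row by row (rails >= 2).
            rows = [[] for _ in range(rails)]
            r, step = 0, 1
            for c in text:
                rows[r].append(c)
                if r == 0:
                    step = 1
                elif r == rails - 1:
                    step = -1
                r += step
            return "".join("".join(row) for row in rows)

        print(zigzag_encrypt("WEAREDISCOVERED"))  # WECRERDSOEEAIVD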

  7. Implementing O(N) N-Body Algorithms Efficiently in Data-Parallel Languages

    Directory of Open Access Journals (Sweden)

    Yu Hu

    1996-01-01

    Full Text Available The optimization techniques for hierarchical O(N) N-body algorithms described here focus on managing the data distribution and the data references, both between the memories of different nodes and within the memory hierarchy of each node. We show how the techniques can be expressed in data-parallel languages, such as High Performance Fortran (HPF) and Connection Machine Fortran (CMF). The effectiveness of our techniques is demonstrated on an implementation of Anderson's hierarchical O(N) N-body method for the Connection Machine system CM-5/5E. Communication accounts for about 10-20% of the total execution time, with the average efficiency for arithmetic operations being about 40% and the total efficiency (including communication) being about 35%. For the CM-5E, a performance in excess of 60 Mflop/s per node (peak 160 Mflop/s per node) has been measured.

  8. Implementation of ESPRIT Algorithm on GPS TEC for Percussive Signatures of Earthquakes in Ionosphere

    Science.gov (United States)

    Kiran, Uday; Koteswara Rao, S.; Ramesh, K. S.

    2017-01-01

    The Global Positioning System is a very effective mechanism for detecting ionospheric disturbances during solar events. Spectral estimation of ionospheric total electron content perturbations leads to better interpretation of their source mechanisms. Seismo-ionospheric perturbations from an earthquake that occurred on 12 December 2013 are considered in the present work. Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) is applied to the vertical total electron content data. It was clearly observed that during the disturbance the power spectral density of the dominant frequency was reduced from 7.841 dB to -2.487 dB. The application of the ESPRIT algorithm to these seismic perturbations in GPS TEC identified the dominant frequency in the spectrum and a new frequency present at the time of the perturbations.
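
    For readers unfamiliar with the method, single-channel least-squares ESPRIT can be sketched in a few numpy lines (an illustrative implementation of the standard technique, not the authors' processing chain; the window length and test signal are assumptions):

        import numpy as np

        def esprit_freqs(x, n_src, m=40):
            # Estimate n_src normalized frequencies from samples x via LS-ESPRIT.
            N = len(x)
            X = np.array([x[i:i + m] for i in range(N - m + 1)]).T   # (m, snapshots)
            R = X @ X.conj().T / X.shape[1]                          # sample covariance
            w, V = np.linalg.eigh(R)
            Es = V[:, -n_src:]                                       # signal subspace
            # Rotational invariance: Es_upper @ Psi ~ Es_lower
            Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]
            return np.angle(np.linalg.eigvals(Psi)) / (2 * np.pi)

        t = np.arange(2000)
        x = (np.exp(2j * np.pi * 0.01 * t) + 0.5 * np.exp(2j * np.pi * 0.07 * t)
             + 0.05 * np.random.default_rng(0).standard_normal(2000))
        print(np.sort(esprit_freqs(x, 2)))   # ~ [0.01, 0.07]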

  9. The readout system and the trigger algorithm implementation for the UFFO Pathfinder

    DEFF Research Database (Denmark)

    Na, G.W.; Ahmad, S.; Barrillon, P.

    2012-01-01

    ... have been measured within a minute after the gamma-ray signal. This lack of sub-minute data limits the study of the characteristics of the UV-optical light curves of short-hard-type GRBs and fast-rising GRBs. Therefore, we have developed the telescope named the Ultra-Fast Flash Observatory (UFFO) Pathfinder to take sub-minute data on the early photons from GRBs. The UFFO Pathfinder has a coded-mask X-ray camera to search for the GRB location with the UBAT trigger algorithm. Determining the direction of the GRB as soon as possible requires fast processing. We have ultimately implemented all ...

  10. First massively parallel algorithm to be implemented in Apollo-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability (CP) method in neutron transport, as applied to arbitrary 2D XY geometries, like the TDT module in APOLLO-II, is very time consuming. Consequently, RZ or 3D extensions became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, breathe new life into this method. In this paper we present a CM5 implementation of the CP method. Parallelization is applied over the energy groups, using the CMMD message passing library. In our case we use 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future fine multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (on the order of hundreds of processors). (author). 3 tabs., 4 figs., 4 refs

  11. First massively parallel algorithm to be implemented in APOLLO-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability method in neutron transport, as applied to arbitrary 2-dimensional geometries, like the two-dimensional transport module in APOLLO-II, is very time consuming. Consequently, 3-dimensional extension became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, breathe new life into this method. In this paper we present a CM5 implementation of the collision probability method. Parallelization is applied over the energy groups, using the CMMD message passing library. In our case we used 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (on the order of hundreds of processors). (author). 4 refs., 4 figs., 3 tabs

  12. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    Science.gov (United States)

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  13. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  14. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n - 2 is the solution to the above ...

  15. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    Directory of Open Access Journals (Sweden)

    Sandeep Kakde

    2017-12-01

    Full Text Available For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that BPSK is better than the other modulation techniques in terms of BER. We also give the error performance of an LDPC decoder over an AWGN channel using the min-sum algorithm. A VLSI architecture is proposed which uses the value-reuse property of the min-sum algorithm and gives high throughput. The proposed design has been implemented and tested on a Xilinx Virtex 5 FPGA. The MATLAB simulation of the LDPC decoder gives bit error rates in the range of 10^-1 to 10^-3.5 at SNR = 1 to 2 for 20 iterations, i.e., good bit error rate performance. The latency of the parallel design of the LDPC decoder has also been reduced. The design achieves a maximum frequency of 141.22 MHz and a throughput of 2.02 Gbps while consuming less area.
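
    The check-node update that gives the min-sum algorithm its name is simple to state; a generic Python sketch (ours, not the paper's VHDL) for a single check node:

        def check_node_update(llrs):
            # Min-sum update: for each edge, the output is the product of the
            # signs of the other edges times the minimum magnitude among them.
            out = []
            for i in range(len(llrs)):
                others = llrs[:i] + llrs[i + 1:]
                sign = 1
                for v in others:
                    if v < 0:
                        sign = -sign
                out.append(sign * min(abs(v) for v in others))
            return out

        print(check_node_update([-1.2, 3.4, 0.8, -2.0]))
        # [-0.8, 0.8, 1.2, -0.8]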

  16. Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.

    Science.gov (United States)

    Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania

    2015-01-01

    This paper provides a novel hybrid model for solving the multiple pairwise sequence alignment problem, combining the Message Passing Interface (MPI) and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN performs the multiple pairwise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation of the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in running time as the number of working GPU nodes increases. The proposed model achieved a performance of about 12 giga cell updates per second when tested against the SWISS-PROT protein knowledge base running on four nodes.
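
    The serial recurrence that the hybrid model parallelizes is shown below as a plain-Python sketch (scoring constants are illustrative assumptions; the paper computes the same matrix row-wise on GPUs):

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            # Fill the local-alignment score matrix H and return the best score.
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    # Local alignment clamps negative scores to zero
                    H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("GGTTGACTA", "TGTTACGG"))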

  17. The implementation of depth measurement and related algorithms based on binocular vision in embedded AM5728

    Science.gov (United States)

    Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan

    2018-01-01

    Depth measurement is the most basic measurement in machine vision applications such as automated driving, unmanned aerial vehicles (UAVs), robotics and so on, and it has a wide range of uses. With the development of image processing technology and the improvement of hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and an ordinary low-cost dual camera are used as the hardware platform. The related algorithms for dual camera calibration, image matching and depth calculation have been studied and implemented on the hardware platform, and the hardware design and the rationality of the related algorithms have been tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, and display of the depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can reach 25 fps. The optimal measurement range of the system is from 0.5 to 1.5 meters, and the relative error of the distance measurement is less than 5%. Compared with PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware meets real-time depth-measurement requirements while maintaining image resolution.

  18. Automatically tuned adaptive differencing algorithm for 3-D SN implemented in PENTRAN

    International Nuclear Information System (INIS)

    Sjoden, G.; Courau, T.; Manalo, K.; Yi, C.

    2009-01-01

    We present an adaptive algorithm with an automated tuning feature to augment optimum differencing scheme selection for 3-D SN computations in Cartesian geometry. This adaptive differencing scheme has been implemented in the PENTRAN parallel SN code. Individual fixed schemes based on the zeroth spatial transport moment, including the Diamond Zero (DZ), Directional Theta Weighted (DTW), and Exponential Directional Iterative (EDI) 3-D SN methods, were evaluated and compared with solutions generated using a code-tuned adaptive algorithm. Model problems considered include a fixed source slab problem (using reflected y- and z-axes) which contained mixed shielding and diffusive regions, and a 17 x 17 PWR assembly eigenvalue test problem; these problems were benchmarked against multigroup MCNP5 Monte Carlo computations. Both problems were effective in highlighting the performance of the adaptive scheme compared to single schemes, and demonstrated that the adaptive tuning handles exceptions to the standard DZ-DTW-EDI adaptive strategy. The tuning feature includes special scheme selection provisions for optically thin cells, and incorporates the ratio of the angular source density relative to the total angular collision density to best select the differencing method. Overall, the adaptive scheme demonstrated the best overall solution accuracy in the test problems. (authors)

  19. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move_disk(A, C);. (No + 1)th disk is moved from A to C directly ...
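
    The procedure alluded to in this excerpt is the classical Tower of Hanoi recursion; a complete version (our reconstruction of the sketched move_disk procedure) is:

        def move_disks(n, src, dst, aux):
            # Move n disks from rod src to rod dst, using rod aux as auxiliary store.
            if n == 0:
                return
            move_disks(n - 1, src, aux, dst)   # clear the top n-1 disks onto aux
            print(f"move disk {n}: {src} -> {dst}")
            move_disks(n - 1, aux, dst, src)   # bring them back on top

        move_disks(3, "A", "C", "B")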

  20. Implementation of Naive Bayes Classifier Algorithm to Evaluation in Utilizing Online Hotel Tax Reporting Application

    Directory of Open Access Journals (Sweden)

    R. Dimas Adityo

    2017-10-01

    Full Text Available Hotel tax reporting in the Pasuruan region is currently implemented online (Web-based), with the aim that the reporting system can receive financial statements from hotel taxpayers effectively and efficiently. Pasuruan, a small but rapidly developing town in East Java, has implemented this online tax filing system as a role model since 2015, covering 6 hotels of several classes, ranging from budget class up to three stars. After the system had been running for 18 months (2015-2016), research was conducted, using the existing data, to analyze the level of taxpayers' compliance in reporting hotel incomes. A system was designed and built to evaluate the compliance of taxpayers (WP) in the second year (2016) and to classify them into categories: (1) very obedient taxpayers (ST), (2) quite obedient taxpayers (CT), and (3) less obedient taxpayers (KT). The input data are processed using the Naive Bayes Classifier (NBC) data mining algorithm to form the probability table used as the basis for classifying taxpayer compliance levels. The test results show an accuracy of 50%, i.e., 3 taxpayers are classified as very obedient (ST) in paying taxes. From this classification, the study can make recommendations to guide taxpayers in reporting revenues correctly.
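
    As a sketch of how a Naive Bayes classifier turns such a probability table into a compliance class, here is a scikit-learn example (the features and training data below are entirely hypothetical, not the study's attributes):

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Hypothetical features per taxpayer: [reports filed on time (of 12),
        # average days late, relative deviation of reported revenue]
        X = np.array([[12, 0, 0.02], [11, 2, 0.05], [8, 10, 0.20],
                      [6, 20, 0.35], [12, 1, 0.03], [7, 15, 0.30]])
        y = ["ST", "ST", "CT", "KT", "ST", "KT"]

        clf = GaussianNB().fit(X, y)
        print(clf.predict([[10, 5, 0.10]]))        # predicted compliance class
        print(clf.predict_proba([[10, 5, 0.10]]))  # per-class probability table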

  1. A CCTV system with SMS alert (CMDSA): An implementation of pixel processing algorithm for motion detection

    Science.gov (United States)

    Rahman, Nurul Hidayah Ab; Abdullah, Nurul Azma; Hamid, Isredza Rahmi A.; Wen, Chuah Chai; Jelani, Mohamad Shafiqur Rahman Mohd

    2017-10-01

    A Closed-Circuit TV (CCTV) system is one of the technologies in the surveillance field that addresses detection and monitoring by providing extra features such as email alerts or motion detection. However, detecting motion and alerting the admin in a CCTV system can be complicated by the need to integrate the main program with an external Application Programming Interface (API). In this study, a pixel-processing algorithm is applied because of its efficiency, and an SMS alert is added as an alternative solution for users who opt out of the email alert system or have no Internet connection. A CCTV system with SMS alert (CMDSA) was developed using an evolutionary prototyping methodology. The system interface was implemented using Microsoft Visual Studio, while the backend components, namely the database and the code, were implemented on an SQLite database and in the C# programming language, respectively. The main modules of CMDSA are motion detection, video capture and saving, image processing, and Short Message Service (SMS) alert functions. The system reduces processing time, making the detection process faster; reduces the space and memory used to run the program; and alerts the system admin instantly.
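
    A pixel-processing motion detector of the kind described reduces to frame differencing; a minimal numpy sketch (ours; the CMDSA system itself is written in C#):

        import numpy as np

        def motion_detected(prev, curr, pixel_thresh=25, area_thresh=0.01):
            # Flag motion when the fraction of changed pixels exceeds area_thresh.
            diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
            changed = (diff > pixel_thresh).mean()
            return changed > area_thresh

        rng = np.random.default_rng(0)
        frame1 = rng.integers(0, 255, (480, 640), dtype=np.uint8)
        frame2 = frame1.copy()
        frame2[100:200, 100:200] += 60            # simulate an object entering
        print(motion_detected(frame1, frame2))    # True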

  2. A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2017-02-01

    Full Text Available The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain better detection performance. However, it still has two limitations. On the one hand, reasonable integration of spatial-spectral information can further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes spatial neighborhood resources to reconstruct the test pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then the kernel function is redesigned as a mapping trick in the KRX detector to implement the anomaly detection. In addition, a powerful architecture based on GPU techniques is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data.
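
    For context, the baseline global RX detector underlying all these variants is a Mahalanobis distance per pixel; a numpy sketch (plain RX only; the kernel trick and spatial-spectral weighting of WSSKRX are omitted):

        import numpy as np

        def rx_scores(cube):
            # Global RX anomaly scores for a hyperspectral cube (rows, cols, bands).
            h, w, b = cube.shape
            X = cube.reshape(-1, b).astype(np.float64)
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularized
            inv = np.linalg.inv(cov)
            d = X - mu
            scores = np.einsum("ij,jk,ik->i", d, inv, d)       # Mahalanobis distances
            return scores.reshape(h, w)

        cube = np.random.default_rng(0).normal(size=(50, 50, 20))
        cube[25, 25] += 5.0            # inject an anomalous pixel
        print(rx_scores(cube).argmax() == 25 * 50 + 25)   # True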

  3. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
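
    The serial kernel that steps (1)-(4) parallelize row by row is standard inverse distance weighting; a generic numpy sketch (ours, not the GRASS GIS source):

        import numpy as np

        def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
            # Inverse distance weighting: z(q) = sum(w_i z_i) / sum(w_i),
            # with w_i = 1 / d(q, p_i)^power.
            d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
            w = 1.0 / np.maximum(d, eps) ** power   # exact hits get a huge weight
            return (w @ z_known) / w.sum(axis=1)

        pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        z = np.array([1.0, 2.0, 4.0])
        print(idw(pts, z, np.array([[0.5, 0.5], [0.0, 0.0]])))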

  4. Reflections of Practical Implementation of the academic course Analysis and Design of Algorithms taught in the Universities of Pakistan

    Directory of Open Access Journals (Sweden)

    Faryal Shamsi

    2017-12-01

    Full Text Available Analysis and Design of Algorithms is considered a compulsory course in the field of Computer Science. It increases the logical and problem-solving skills of the students and makes their solutions efficient in terms of time and space. These objectives can only be achieved if a student practically implements what he or she has studied throughout the course. But if the contents of this course are merely studied and rarely practiced, then the actual goals of the course are not fulfilled. This article explores the extent of practical implementation of the Analysis and Design of Algorithms course. Problems faced by the computer science community and major barriers in the field are also enumerated. Finally, some recommendations are made to overcome the obstacles in the practical implementation of analysis and design of algorithms.

  5. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    Science.gov (United States)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, with the purpose of processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much closer to human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has lower computational complexity, and that the architecture for dynamic panoramic image processing has lower hardware cost and power consumption. The proposed algorithm is also shown to be valid.
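
    A software sketch of the mapping (the paper implements it in VHDL, with CORDIC supplying the rotations): each pixel of the rectangular output is traced back to polar coordinates in the annular image and sampled with bilinear interpolation. All names are illustrative, and a grayscale image is assumed:

        import numpy as np

        def unwrap_annulus(img, cx, cy, r_in, r_out, out_w, out_h):
            """Sample the annular source image on a rectangular (theta, r) grid."""
            theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
            radius = np.linspace(r_in, r_out, out_h)
            tt, rr = np.meshgrid(theta, radius)
            x = cx + rr * np.cos(tt)        # the rotations CORDIC evaluates in hardware
            y = cy + rr * np.sin(tt)
            x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
            y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
            fx, fy = x - x0, y - y0         # fractional parts for bilinear interpolation
            return (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
                    + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)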

  6. Implementation of Super-Encryption with Trithemius Algorithm and Double Transposition Cipher in Securing PDF Files on Android Platform

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Jessica

    2018-03-01

    This study aims to combine the Trithemius algorithm and the double transposition cipher for file security, implemented as an Android-based application. The parameters examined are the real running time and the complexity value. The files used are in PDF format. The overall result shows that the complexity of the two algorithms combined in the super-encryption method is Θ(n²). However, the encryption process using the Trithemius algorithm is much faster than using the Double Transposition Cipher, with the processing time linearly proportional to the lengths of the plaintext and password.
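
    The Trithemius component by itself is a progressive Caesar shift in which the i-th letter is shifted by i positions. A minimal sketch of that classical cipher (the paper's keyed variant and the double transposition stage are not shown; in this sketch the index advances over every character, letters and non-letters alike):

        def trithemius_encrypt(plaintext):
            """Shift the i-th letter by i positions (classic Trithemius tableau)."""
            out = []
            for i, ch in enumerate(plaintext.upper()):
                if ch.isalpha():
                    out.append(chr((ord(ch) - ord('A') + i) % 26 + ord('A')))
                else:
                    out.append(ch)
            return ''.join(out)

        # trithemius_encrypt("HELLO") -> "HFNOS"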

  7. Design and implementation of three-dimension texture mapping algorithm for panoramic system based on smart platform

    Science.gov (United States)

    Liu, Zhi; Zhou, Baotong; Zhang, Changnian

    2017-03-01

    Vehicle-mounted panoramic systems are important safety-assistance equipment for driving. However, traditional systems only render a fixed top-down perspective view with a limited field of view, which may pose a potential safety hazard. In this paper, a texture mapping algorithm for a 3D vehicle-mounted panoramic system is introduced, and an implementation of the algorithm utilizing the OpenGL ES library on the Android smart platform is presented. Initial experimental results show that the proposed algorithm can render a good 3D panorama and allows the view point to be changed freely.

  8. Understanding conflict-resolution taskload: Implementing advisory conflict-detection and resolution algorithms in an airspace

    Science.gov (United States)

    Vela, Adan Ernesto

    2011-12-01

    From 2010 to 2030, the number of instrument flight rules aircraft operations handled by Federal Aviation Administration en route traffic centers is predicted to increase from approximately 39 million flights to 64 million flights. The projected growth in air transportation demand is likely to result in traffic levels that exceed the abilities of the unaided air traffic controller in managing, separating, and providing services to aircraft. Consequently, the Federal Aviation Administration, and other air navigation service providers around the world, are making several efforts to improve the capacity and throughput of existing airspaces. Ultimately, the stated goal of the Federal Aviation Administration is to triple the available capacity of the National Airspace System by 2025. In an effort to satisfy air traffic demand through the increase of airspace capacity, air navigation service providers are considering the inclusion of advisory conflict-detection and resolution systems. In a human-in-the-loop framework, advisory conflict-detection and resolution decision-support tools identify potential conflicts and propose resolution commands for the air traffic controller to verify and issue to aircraft. A number of researchers and air navigation service providers hypothesize that the inclusion of combined conflict-detection and resolution tools into air traffic control systems will reduce or transform controller workload and enable the required increases in airspace capacity. In an effort to understand the potential workload implications of introducing advisory conflict-detection and resolution tools, this thesis provides a detailed study of the conflict event process and the implementation of conflict-detection and resolution algorithms. Specifically, the research presented here examines a metric of controller taskload: how many resolution commands an air traffic controller issues under the guidance of a conflict-detection and resolution decision-support tool. The goal

  9. Implementation of the k -Neighbors Technique in a recommender algorithm for a purchasing system using NFC and Android

    Directory of Open Access Journals (Sweden)

    Oscar Arley Riveros

    2017-01-01

    Full Text Available Introduction: This paper presents the design of a mobile application involving NFC technology and a collaborative recommendation algorithm based on the k-neighbors technique, providing personalized suggestions for each client. Objective: Design and develop a mobile application for a purchasing system using NFC technology and the k-neighbors technique in a recommendation algorithm. Methodology: The design and development process focused on: • Review of the state of the art in mobile shopping systems. • State-of-the-art survey of the use of NFC technology and AI techniques for recommender systems based on k-neighbors algorithms. • Design of the proposed system. • Parameterization and implementation of the k-neighbors technique and integration of NFC technology. • Implementation and testing of the proposed system. Results: The results obtained include: • A mobile application that integrates Android, NFC technology, and a recommendation algorithm. • Parameterization of the k-neighbors technique for use within the recommendation algorithm. • Implementation of functional requirements that generate personalized purchase recommendations and user ratings. Conclusions: The k-neighbors technique in a recommendation algorithm provides the client with a series of recommendations with a degree of confidence, since the algorithm performs calculations over multiple parameters and contrasts the results obtained for other users, finding the articles with the greatest similarity to the customer profile. The algorithm starts from a sample of similar, complementary, and unrelated products; applying its formulation, the recommendation is made only with the complementary products that obtained the highest ratings, making a big difference with most recommender systems on the market, which are limited to
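
    Conceptually, the user-based k-neighbors step looks like the following minimal sketch, assuming a nonnegative user-item rating matrix; the function name, cosine similarity, and mean aggregation are illustrative assumptions, not the paper's exact parameterization:

        import numpy as np

        def recommend(ratings, user, k=5, n_items=3):
            """Rank unseen items by the mean rating of the k most similar users."""
            norms = np.linalg.norm(ratings, axis=1) + 1e-12
            sims = (ratings @ ratings[user]) / (norms * norms[user])   # cosine similarity
            sims[user] = -np.inf                                       # exclude the user themself
            neighbors = np.argsort(sims)[-k:]                          # k nearest neighbors
            scores = ratings[neighbors].mean(axis=0)
            scores[ratings[user] > 0] = -np.inf                        # only score unseen items
            return np.argsort(scores)[-n_items:][::-1]                 # best-scoring item ids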

  10. One-Step Leapfrog LOD-BOR-FDTD Algorithm with CPML Implementation

    Directory of Open Access Journals (Sweden)

    Yi-Gang Wang

    2016-01-01

    Full Text Available An unconditionally stable one-step leapfrog locally one-dimensional finite-difference time-domain (LOD-FDTD) algorithm for bodies of revolution (BOR) is presented. The equations of the proposed algorithm are obtained by algebraic manipulation of those used in the conventional LOD-BOR-FDTD algorithm. The equations for the z-direction electric and magnetic fields in the proposed algorithm require special treatment. The new algorithm attains higher computational efficiency while preserving the properties of the conventional LOD-BOR-FDTD algorithm. Moreover, the convolutional perfectly matched layer (CPML) is introduced into the one-step leapfrog LOD-BOR-FDTD algorithm. The equation of the one-step leapfrog CPML is concise, and numerical results show that its reflection error is small. A similar CPML scheme can also be easily applied to the one-step leapfrog LOD-FDTD algorithm in the Cartesian coordinate system.

  11. A FPGA implementation of solder paste deposit on printed circuit boards errors detector based in a bright and contrast algorithm

    OpenAIRE

    De Luca-Pennacchia, A.; Sánchez-Martínez, M. Á.

    2007-01-01

    Solder paste deposition on printed circuit boards (PCBs) is a critical stage. It is known that about 60% of functionality defects in this type of board are due to poor solder paste printing. These defects can be diminished by means of automatic optical inspection of the printing. Currently, this process is implemented by image processing software, with its inherent high computational cost. In this paper we propose to implement a highly parallel image comparison algorithm suitable to be ...

  12. Development of tight-binding based GW algorithm and its computational implementation for graphene

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Muhammad Aziz [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore); Naradipa, Muhammad Avicenna, E-mail: muhammad.avicenna11@ui.ac.id; Phan, Wileam Yonatan; Syahroni, Ahmad [Departemen Fisika, FMIPA, Universitas Indonesia, Kampus UI Depok (Indonesia); Rusydi, Andrivo [NUSNNI-NanoCore, Department of Physics, National University of Singapore (NUS), Singapore 117576 (Singapore); Singapore Synchrotron Light Source (SSLS), National University of Singapore (NUS), 5 Research Link, Singapore 117603 (Singapore)

    2016-04-19

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as the optical properties may change with several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because eventually they play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We do this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is constructed through a tight-binding or similar model. This study includes theoretical formulation of the Green’s function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.

  13. Development of tight-binding based GW algorithm and its computational implementation for graphene

    International Nuclear Information System (INIS)

    Majidi, Muhammad Aziz; Naradipa, Muhammad Avicenna; Phan, Wileam Yonatan; Syahroni, Ahmad; Rusydi, Andrivo

    2016-01-01

    Graphene has been a hot subject of research in the last decade as it holds promise for various applications. One interesting issue is whether or not graphene should be classified as a strongly or weakly correlated system, as the optical properties may change with several factors, such as the substrate, voltage bias, adatoms, etc. As the Coulomb repulsive interactions among electrons can generate correlation effects that may modify the single-particle spectra (density of states) and the two-particle spectra (optical conductivity) of graphene, we aim to explore such interactions in this study. The understanding of such correlation effects is important because eventually they play an important role in inducing the effective attractive interactions between electrons and holes that bind them into excitons. We do this study theoretically by developing a GW method implemented on the basis of the tight-binding (TB) model Hamiltonian. Unlike the well-known GW method developed within the density functional theory (DFT) framework, our TB-based GW implementation may serve as an alternative technique suitable for systems whose Hamiltonian is constructed through a tight-binding or similar model. This study includes theoretical formulation of the Green’s function G, the renormalized interaction function W from the random phase approximation (RPA), and the corresponding self energy derived from Feynman diagrams, as well as the development of the algorithm to compute those quantities. As an evaluation of the method, we perform calculations of the density of states and the optical conductivity of graphene, and analyze the results.
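
    For reference, the quantities named in the abstract are related, schematically, by the standard GW/RPA equations (a sketch in conventional notation, not the authors' tight-binding-specific expressions):

        \chi_0 = -\,i\,G\,G \ \ \text{(RPA polarization)}, \qquad
        W = \frac{v}{1 - v\,\chi_0} \ \ \text{(screened interaction)},
        \Sigma = i\,G\,W \ \ \text{(self-energy)}, \qquad
        G = G_0 + G_0\,\Sigma\,G \ \ \text{(Dyson equation)}.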

  14. Algorithms for a parallel implementation of Hidden Markov Models with a small state space

    DEFF Research Database (Denmark)

    Nielsen, Jesper; Sand, Andreas

    2011-01-01

    Two of the most important algorithms for Hidden Markov Models are the forward and the Viterbi algorithms. We show how formulating these using linear algebra naturally lends itself to parallelization. Although the obtained algorithms are slow for Hidden Markov Models with large state spaces...
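
    The linear-algebra formulation referred to is that one step of the forward recursion is a matrix-vector product, α_{t+1} = (Aᵀ α_t) ⊙ B[:, o_{t+1}]. A minimal scaled sketch under assumed names (π initial distribution, A transitions, B emissions):

        import numpy as np

        def forward_log_likelihood(pi, A, B, obs):
            """Scaled forward recursion: alpha_{t+1} = (A.T @ alpha_t) * B[:, o_{t+1}]."""
            alpha = pi * B[:, obs[0]]
            c = alpha.sum()
            alpha, log_p = alpha / c, np.log(c)
            for o in obs[1:]:
                alpha = (A.T @ alpha) * B[:, o]     # one step = matrix-vector product
                c = alpha.sum()                     # rescaling avoids numerical underflow
                alpha /= c
                log_p += np.log(c)
            return log_p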

  15. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    Directory of Open Access Journals (Sweden)

    Ju-Chi Liu

    2016-01-01

    Full Text Available A highly efficient time-shift correlation algorithm is proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. The accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints.

  16. Implementation of combined SVM-algorithm and computer-aided perception feedback for pulmonary nodule detection

    Science.gov (United States)

    Pietrzyk, Mariusz W.; Rannou, Didier; Brennan, Patrick C.

    2012-02-01

    This pilot study examines the effect of a novel decision support system on medical image interpretation. The system is based on combining image spatial-frequency properties and eye-tracking data in order to recognize overcalling and undercalling errors. Before it can be implemented as a detection-aid scheme, training is required, during which the SVM-based algorithm learns to recognize false positives (FP) among all reported outcomes and false negatives (FN) among all unreported regions with prolonged dwell. Eight radiologists inspected 50 PA chest radiographs with the specific task of identifying lung nodules. Twenty-five cases contained CT-proven subtle malignant lesions (5-20 mm), but prevalence was not known by the subjects, who took part in two sequential reading sessions, the first without and the second with support-system feedback. MCMR ROC DBM and JAFROC analyses were conducted and demonstrated significantly higher scores following feedback, with p values of 0.04 and 0.03 respectively, highlighting significant improvements in radiologist performance once feedback was used. This positive effect on radiologists' performance might have important implications for future CAD-system development.

  17. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    Science.gov (United States)

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm is proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. The accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints.

  18. THE ALGORITHM IMPLEMENTATION OF THE RISK MANAGEMENT SYSTEM ON THE MARKET OF TOURIST SERVICES

    Directory of Open Access Journals (Sweden)

    S. M. Agafonov

    2015-01-01

    Full Text Available Summary. In this article the author conducts a comprehensive assessment of the factors and levels of operational, environmental, security, political, marketing, economic and infrastructure risk in the Russian tourist services market in 2015. The analysis of the risks, and of the risk management measures applied in the Russian tourist services market, shows that the most serious risk is reduced demand for travel companies, chiefly because of falling household incomes and because consumers, aided by the spread of information and communication technologies, prefer to buy tourism services directly from hospitality enterprises without the participation of tourist companies. The author offers innovative risk management tools for tourism companies and the tourist services market, such as: creating a site with tourism reviews and providing professional tourism advice to customers; collaborating with insurance companies to insure tourists against unsuccessful holidays and bad experiences; auctioning hotel rooms abroad that were booked but not sold; and creating a network of hotels where payment can be made in Russian rubles. An algorithm for implementing a risk management system in the tourist services market is proposed.

  19. Efficient parallel implementations of approximation algorithms for guarding 1.5D terrains

    Directory of Open Access Journals (Sweden)

    Goran Martinović

    2015-03-01

    Full Text Available In the 1.5D terrain guarding problem, an x-monotone polygonal line is defined by k vertices, together with a set G of terrain points (guards) and a set N of terrain points which the guards are to observe (guard). A weighted version of the guarding problem, in which the guards in G have weights, is also involved. The goal is to determine a minimum-weight subset of G covering all points in N, including a version where points in N have demands. Furthermore, another goal is to determine the smallest subset of G such that every point in N is observed by the required number of guards. Both problems are NP-hard and have factor-5 approximations [3, 4]. This paper will show that if a (1+ϵ)-approximate solver for the corresponding linear program is used, for any ϵ > 0, an extra 1+ϵ factor will appear in the final approximation factor for both problems. A comparison of the parallel implementation, based on GPU and CPU threads, with the Gurobi solver leads to the conclusion that the respective algorithm outperforms the Gurobi solver on large and dense inputs, typically by one order of magnitude.

  20. Implementation of a conjugate gradient algorithm for thermal diffusivity identification in a moving boundaries system

    International Nuclear Information System (INIS)

    Perez, L; Autrique, L; Gillet, M

    2008-01-01

    The aim of this paper is to investigate the thermal diffusivity identification of a multilayered material dedicated to fire protection. In a military framework, fire protection needs to meet specific requirements, and operational protective systems must be constantly improved in order to keep up with the development of new weapons. In the specific domain of passive fire protection, intumescent coatings can be an effective solution on the battlefield. Intumescent materials have the ability to swell up when they are heated, building a thick multi-layered coating which provides efficient thermal insulation to the underlying material. Due to the heat aggressions (fire or explosion) leading to the intumescent phenomena, high temperatures are involved, which prevents linearization of the mathematical model describing the evolution of the system state. A previous sensitivity analysis has shown that the thermal diffusivity of the multilayered intumescent coating is a key parameter for validating the predictive numerical tool and therefore for optimising the thermal protection. A conjugate gradient method is implemented in order to minimise the quadratic cost function related to the error between predicted and measured temperatures. This regularisation algorithm is well adapted to a large number of unknown parameters.
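
    For reference, the conjugate gradient skeleton for a quadratic cost J(x) = ½xᵀAx − bᵀx looks as follows (a textbook sketch; the paper couples each iteration to direct, sensitivity, and adjoint PDE solves, which are not shown here):

        import numpy as np

        def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=200):
            """Minimize 0.5*x^T A x - b^T x for symmetric positive definite A."""
            x = x0.copy()
            r = b - A @ x                                 # residual = negative gradient
            d = r.copy()
            for _ in range(max_iter):
                Ad = A @ d
                alpha = (r @ r) / (d @ Ad)                # exact line search along d
                x += alpha * d
                r_new = r - alpha * Ad
                if np.linalg.norm(r_new) < tol:
                    break
                beta = (r_new @ r_new) / (r @ r)          # Fletcher-Reeves update
                d = r_new + beta * d
                r = r_new
            return x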

  1. Technical Report: Algorithm and Implementation for Quasispecies Abundance Inference with Confidence Intervals from Metagenomic Sequence Data

    Energy Technology Data Exchange (ETDEWEB)

    McLoughlin, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-11

    This report describes the design and implementation of an algorithm for estimating relative microbial abundances, together with confidence limits, using data from metagenomic DNA sequencing. For the background behind this project and a detailed discussion of our modeling approach for metagenomic data, we refer the reader to our earlier technical report, dated March 4, 2014. Briefly, we described a fully Bayesian generative model for paired-end sequence read data, incorporating the effects of the relative abundances, the distribution of sequence fragment lengths, fragment position bias, sequencing errors and variations between the sampled genomes and the nearest reference genomes. A distinctive feature of our modeling approach is the use of a Chinese restaurant process (CRP) to describe the selection of genomes to be sampled, and thus the relative abundances. The CRP component is desirable for fitting abundances to reads that may map ambiguously to multiple targets, because it naturally leads to sparse solutions that select the best representative from each set of nearly equivalent genomes.

  2. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

    International Nuclear Information System (INIS)

    Baker, R.S.

    1992-01-01

    We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single processor Cray Y-MP and on Thinking Machines Corporation CM-2 and CM-200 machines is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  3. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single processor Cray Y-MP and on Thinking Machines Corporation CM-2 and CM-200 machines is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well. (orig.)

  4. VLSI implementation of MIMO detection for 802.11n using a novel adaptive tree search algorithm

    International Nuclear Information System (INIS)

    Yao Heng; Jian Haifang; Zhou Liguo; Shi Yin

    2013-01-01

    A 4×4 64-QAM multiple-input multiple-output (MIMO) detector is presented for IEEE 802.11n wireless local area network applications. The detector is an implementation of a novel adaptive tree search (ATS) algorithm, and multiple ATS cores need to be instantiated to meet the wideband requirement of the 802.11n standard. Both the ATS algorithm and the architectural considerations are explained. The latency of the detector is 0.75 μs, and the detector has a gate count of 848 k with a total of 19 parallel ATS cores, each running at 67 MHz. Measurement results show that, compared with the floating-point ATS algorithm, the fixed-point implementation incurs a loss of 0.9 dB at a BER of 10⁻³. (semiconductor integrated circuits)

  5. Designing and implementing of improved cryptographic algorithm using modular arithmetic theory

    Directory of Open Access Journals (Sweden)

    Maryam Kamarzarrin

    2015-05-01

    Full Text Available Maintaining the privacy and security of people's information are two of the most important principles of an electronic health plan. One method of ensuring the privacy and security of information is public-key cryptography. In this paper, we compare two algorithms, common exponentiation and fast exponentiation, for enhancing the efficiency of public-key cryptography. We show that a system designed with the fast exponentiation algorithm achieves higher speed and performance with lower power consumption and occupied area than one based on the common exponentiation algorithm. Although systems designed with the common exponentiation algorithm are slower and perform worse, they are less complex and easier to design than those based on fast exponentiation. In this paper, we examine and compare these two methods of exponentiation and observe the performance impact of the two approaches implemented in hardware with VHDL on an FPGA.
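
    The contrast drawn in the paper, sketched in software rather than VHDL: common exponentiation performs one modular multiplication per unit of the exponent, while fast (square-and-multiply) exponentiation needs only one squaring, plus at most one multiplication, per exponent bit. Function names are illustrative:

        def common_modexp(base, exp, mod):
            """Naive exponentiation: one modular multiplication per unit of exp."""
            result = 1
            for _ in range(exp):
                result = (result * base) % mod
            return result

        def fast_modexp(base, exp, mod):
            """Square-and-multiply: one squaring (plus at most one multiply) per bit of exp."""
            result, base = 1, base % mod
            while exp:
                if exp & 1:
                    result = (result * base) % mod
                base = (base * base) % mod
                exp >>= 1
            return result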

  6. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer search grid and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
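
    The kernel being parallelized is a full-grid SAD search. A minimal scalar sketch with all names illustrative (in the actual implementation, each CUDA thread evaluates a subset of these displacements):

        import numpy as np

        def full_search(ref_block, frame, top, left, radius):
            """Return the (dy, dx) displacement minimizing the sum of absolute differences."""
            h, w = ref_block.shape
            best, best_sad = (0, 0), np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y0, x0 = top + dy, left + dx
                    if y0 < 0 or x0 < 0 or y0 + h > frame.shape[0] or x0 + w > frame.shape[1]:
                        continue                      # candidate block falls outside the frame
                    sad = np.abs(frame[y0:y0 + h, x0:x0 + w].astype(int)
                                 - ref_block.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            return best, best_sad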

  7. Cartoon computation: quantum-like computing without quantum mechanics

    International Nuclear Information System (INIS)

    Aerts, Diederik; Czachor, Marek

    2007-01-01

    We present a computational framework based on geometric structures. No quantum mechanics is involved, and yet the algorithms perform tasks analogous to quantum computation. Tensor products and entangled states are not needed; they are replaced by sets of basic shapes. To test the formalism we solve in geometric terms the Deutsch-Jozsa problem, historically the first example that demonstrated the potential power of quantum computation. Each step of the algorithm has a clear geometric interpretation and allows for a cartoon representation. (fast track communication)

  8. Spatial updating grand canonical Monte Carlo algorithms for fluid simulation: generalization to continuous potentials and parallel implementation.

    Science.gov (United States)

    O'Keeffe, C J; Ren, Ruichao; Orkoulas, G

    2007-11-21

    Spatial updating grand canonical Monte Carlo algorithms are generalizations of random and sequential updating algorithms for lattice systems to continuum fluid models. The elementary steps, insertions or removals, are constructed by generating points in space either at random (random updating) or in a prescribed order (sequential updating). These algorithms have previously been developed only for systems of impenetrable spheres for which no particle overlap occurs. In this work, spatial updating grand canonical algorithms are generalized to continuous, soft-core potentials to account for overlapping configurations. Results on two- and three-dimensional Lennard-Jones fluids indicate that spatial updating grand canonical algorithms, both random and sequential, converge faster than standard grand canonical algorithms. Spatial algorithms based on sequential updating not only exhibit the fastest convergence but also are ideal for parallel implementation due to the absence of strict detailed balance and the nature of the updating that minimizes interprocessor communication. Parallel simulation results for three-dimensional Lennard-Jones fluids show a substantial reduction of simulation time for systems of moderate and large size. The efficiency improvement by parallel processing through domain decomposition is always in addition to the efficiency improvement by sequential updating.

  9. Get Your Atoms in Order--An Open-Source Implementation of a Novel and Robust Molecular Canonicalization Algorithm.

    Science.gov (United States)

    Schneider, Nadine; Sayle, Roger A; Landrum, Gregory A

    2015-10-26

    Finding a canonical ordering of the atoms in a molecule is a prerequisite for generating a unique representation of the molecule. The canonicalization of a molecule is usually accomplished by applying some sort of graph relaxation algorithm, the most common of which is the Morgan algorithm. There are known issues with that algorithm that lead to noncanonical atom orderings as well as problems when it is applied to large molecules like proteins. Furthermore, each cheminformatics toolkit or software provides its own version of a canonical ordering, most based on unpublished algorithms, which also complicates the generation of a universal unique identifier for molecules. We present an alternative canonicalization approach that uses a standard stable-sorting algorithm instead of a Morgan-like index. Two new invariants that allow canonical ordering of molecules with dependent chirality as well as those with highly symmetrical cyclic graphs have been developed. The new approach proved to be robust and fast when tested on the 1.45 million compounds of the ChEMBL 20 data set in different scenarios like random renumbering of input atoms or SMILES round tripping. Our new algorithm is able to generate a canonical order of the atoms of protein molecules within a few milliseconds. The novel algorithm is implemented in the open-source cheminformatics toolkit RDKit. With this paper, we provide a reference Python implementation of the algorithm that could easily be integrated in any cheminformatics toolkit. This provides a first step toward a common standard for canonical atom ordering to generate a universal unique identifier for molecules other than InChI.
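
    In practice the algorithm is exercised through RDKit's canonical SMILES generation. A brief usage sketch (the input molecule is arbitrary):

        from rdkit import Chem

        mol = Chem.MolFromSmiles('OC1=CC=CC=C1')   # phenol, written with an arbitrary atom order
        print(Chem.MolToSmiles(mol))               # canonical SMILES: identical for any input order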

  10. Implementation of digital image encryption algorithm using logistic function and DNA encoding

    Science.gov (United States)

    Suryadi, MT; Satria, Yudi; Fauzi, Muhammad

    2018-03-01

    Cryptography is a method to secure information that may be in the form of a digital image. Based on past research, in order to increase the security level of chaos-based and DNA-based encryption algorithms, an encryption algorithm using a logistic function and DNA encoding was proposed. The algorithm uses DNA encoding to scramble the pixel values into DNA bases and scrambles them with DNA addition, DNA complement, and XOR operations. The logistic function in this algorithm is used as the random number generator needed in the DNA complement and XOR operations. The test results show that the PSNR values of the cipher images are 7.98-7.99 dB, the entropy values are close to 8, the histograms of the cipher images are uniformly distributed, and the correlation coefficients of the cipher images are near 0. Thus, the cipher image can be decrypted perfectly, and the encryption algorithm has good resistance to entropy and statistical attacks.
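
    The generator at the core of the scheme is the logistic map x_{n+1} = r·x_n·(1 − x_n). A minimal keystream sketch, where the parameter values, burn-in, and byte quantization are illustrative assumptions rather than the paper's exact construction:

        def logistic_keystream(x0, r=3.99, n=16, burn_in=100):
            """Iterate the logistic map and quantize each state to a byte."""
            x = x0
            for _ in range(burn_in):              # discard the transient iterations
                x = r * x * (1 - x)
            stream = []
            for _ in range(n):
                x = r * x * (1 - x)
                stream.append(int(x * 256) % 256)
            return stream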

  11. An implementation for the algorithm of the Hirota bilinear Baecklund transformation of integrable hierarchies

    International Nuclear Information System (INIS)

    Yu Guofu; Duan Qihua

    2010-01-01

    In this paper, based on the Hirota bilinear method, a reliable algorithm for generating the bilinear Baecklund transformation (BT) of integrable hierarchies is described. With the help of Maple symbolic computation, the algorithm is very helpful and powerful for finding the bilinear BTs of integrable systems, especially for high-order integrable hierarchies. The BTs of the bilinear Ramani hierarchy are deduced for the first time by using the algorithm.

  12. A Novel Enhanced Positioning Trilateration Algorithm Implemented for Medical Implant In-Body Localization

    Directory of Open Access Journals (Sweden)

    Peter Brida

    2013-01-01

    Full Text Available Medical implants based on wireless communication will play a crucial role in healthcare systems. Some applications need to know the exact position of each implant, and RF positioning seems to be an effective approach for implant localization. The two kinds of positioning data most commonly used for RF positioning are the received signal strength and the time of flight of a radio signal between the transmitter and receivers (the medical implant and a network of reference devices with known positions). This leads to two positioning methods, received signal strength (RSS) and time of arrival (ToA), both based on trilateration. The positioning data are very important, but the positioning algorithm which estimates the implant position is important as well. In this paper, a novel trilateration algorithm is proposed. It improves on the quality of basic trilateration algorithms for the same quality of measured positioning data, and is called the Enhanced Positioning Trilateration Algorithm (EPTA). The proposed algorithm can be divided into two phases: the first phase selects the most suitable sensors for position estimation, while the second improves the positioning accuracy with an adaptive algorithm. Finally, we provide a performance analysis of the proposed algorithm by computer simulations.
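
    The trilateration core that EPTA builds on can be linearized and solved in least squares. A minimal sketch under assumed names (the sensor-selection and adaptive phases of EPTA are not shown): subtracting the range equation of the last anchor from the others removes the quadratic term in the unknown position.

        import numpy as np

        def trilaterate(anchors, dists):
            """Linearize ||x - a_i||^2 = d_i^2 against the last anchor; solve in least squares."""
            a_n = anchors[-1]
            A = 2 * (anchors[:-1] - a_n)
            b = (dists[-1] ** 2 - dists[:-1] ** 2
                 + (anchors[:-1] ** 2).sum(axis=1) - (a_n ** 2).sum())
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x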

  13. Implementation and analysis of list mode algorithm using tubes of response on a dedicated brain and breast PET

    Science.gov (United States)

    Moliner, L.; Correcher, C.; González, A. J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M. J.; Sánchez, F.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.

    2013-02-01

    In this work we present an innovative algorithm for the reconstruction of PET images based on the List-Mode (LM) technique which improves their spatial resolution compared to results obtained with current MLEM algorithms. This study is part of a large project aimed at improving diagnosis in early Alzheimer's disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer's is the most relevant neurodegenerative disease, and the best way to apply an effective treatment is its early diagnosis. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs with the advantage of enabling the implementation of very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large-aperture (186 mm) breast PET system. Instead of the commonly used lines of response, the algorithm incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease image noise, thus increasing the image quality.

  14. Implementation and analysis of list mode algorithm using tubes of response on a dedicated brain and breast PET

    International Nuclear Information System (INIS)

    Moliner, L.; Correcher, C.; González, A.J.; Conde, P.; Hernández, L.; Orero, A.; Rodríguez-Álvarez, M.J.; Sánchez, F.; Soriano, A.; Vidal, L.F.; Benlloch, J.M.

    2013-01-01

    In this work we present an innovative algorithm for the reconstruction of PET images based on the List-Mode (LM) technique which improves their spatial resolution compared to results obtained with current MLEM algorithms. This study is part of a large project aimed at improving diagnosis in early Alzheimer's disease stages by means of a newly developed hybrid PET-MR insert. At present, Alzheimer's is the most relevant neurodegenerative disease, and the best way to apply an effective treatment is its early diagnosis. The PET device will consist of several monolithic LYSO crystals coupled to SiPM detectors. Monolithic crystals can reduce scanner costs with the advantage of enabling the implementation of very small virtual pixels in their geometry. This is especially useful for LM reconstruction algorithms, since they do not need a pre-calculated system matrix. We have developed an LM algorithm which has been initially tested with a large-aperture (186 mm) breast PET system. Instead of the commonly used lines of response, the algorithm incorporates a novel calculation of tubes of response. The new approach improves the volumetric spatial resolution by about a factor of 2 at the border of the field of view when compared with the traditionally used MLEM algorithm. Moreover, it has also been shown to decrease image noise, thus increasing the image quality.

  15. Design, implementation and evaluation of a practical pseudoknot folding algorithm based on thermodynamics

    Directory of Open Access Journals (Sweden)

    Giegerich Robert

    2004-08-01

    Full Text Available Abstract Background The general problem of RNA secondary structure prediction under the widely used thermodynamic model is known to be NP-complete when the structures considered include arbitrary pseudoknots. For restricted classes of pseudoknots, several polynomial-time algorithms have been designed, among which the O(n⁶)-time and O(n⁴)-space algorithm by Rivas and Eddy is currently the best available program. Results We introduce the class of canonical simple recursive pseudoknots and present an algorithm that requires O(n⁴) time and O(n²) space to predict the energetically optimal structure of an RNA sequence possibly containing such pseudoknots. Evaluation against a large collection of known pseudoknotted structures shows the adequacy of the canonization approach and our algorithm. Conclusions RNA pseudoknots of medium size can now be predicted reliably as well as efficiently by the new algorithm.

  16. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    International Nuclear Information System (INIS)

    Jalmuzna, W.

    2006-02-01

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences, and could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems, Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule of the experiment. The device can also be used as a cavity simulator and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components, such as an IQ demodulator, a division block, a library for complex and floating-point operations, etc., and is able to speed up the implementation of many complicated algorithms. The library has already been tested using real accelerator signals, and the performance achieved is satisfactory. (Orig.)

  17. Design and implementation of universal mathematical library supporting algorithm development for FPGA based systems in high energy physics experiments

    Energy Technology Data Exchange (ETDEWEB)

    Jalmuzna, W.

    2006-02-15

    The X-ray free-electron laser XFEL that is being planned at the DESY research center in cooperation with European partners will produce high-intensity ultra-short X-ray flashes with the properties of laser light. This new light source, which can only be described in terms of superlatives, will open up a whole range of new perspectives for the natural sciences, and could also offer very promising opportunities for industrial users. SIMCON (SIMulator and CONtroller) is a fast, low-latency digital controller dedicated to the LLRF system of the VUV FEL experiment, based on modern FPGA chips. It is being developed by the ELHEP group at the Institute of Electronic Systems, Warsaw University of Technology. The main purpose of the project is to create a controller for stabilizing the vector sum of fields in the cavities of one cryomodule of the experiment. The device can also be used as a cavity simulator and as a testbench for other devices. The flexibility and computational power of this device allow the implementation of fast mathematical algorithms. This paper describes the concept, implementation and tests of a universal mathematical library for FPGA algorithm implementation. It consists of many useful components, such as an IQ demodulator, a division block, a library for complex and floating-point operations, etc., and is able to speed up the implementation of many complicated algorithms. The library has already been tested using real accelerator signals, and the performance achieved is satisfactory. (Orig.)

  18. Design and implementation of adaptive inverse control algorithm for a micro-hand control system

    Directory of Open Access Journals (Sweden)

    Wan-Cheng Wang

    2014-01-01

    Full Text Available The Letter proposes an online-tuned adaptive inverse position control algorithm for a micro-hand. First, the configuration of the micro-hand is discussed. Next, a kinematic analysis of the micro-hand is carried out, and the relationship between the rotor position of the micro permanent-magnet synchronous motor and the tip of the micro-finger is derived. After that, an online-tuned adaptive inverse control algorithm, which includes an adaptive inverse model and an adaptive inverse controller, is designed. The online-tuned adaptive inverse control algorithm performs better than a proportional-integral control algorithm. In addition, to avoid damaging the object during the grasping process, an online force control algorithm is proposed as well. An embedded micro-computer, cRIO-9024, is used to realise both the position control algorithm and the force control algorithm in software; as a result, the hardware circuit is very simple. Experimental results show that the proposed system can provide fast transient responses, good load disturbance responses, good tracking responses and satisfactory grasping responses.

  19. Does videothoracoscopy improve clinical outcomes when implemented as part of a pleural empyema treatment algorithm?

    Directory of Open Access Journals (Sweden)

    Ricardo Mingarini Terra

    Full Text Available OBJECTIVE: We aimed to evaluate whether the inclusion of videothoracoscopy in a pleural empyema treatment algorithm would change the clinical outcome of such patients. METHODS: This was a quality-improvement study: we conducted a retrospective review of patients who underwent pleural decortication for pleural empyema at our institution from 2002 to 2008. Under the old algorithm (January 2002 to September 2005), open decortication was the procedure of choice, and videothoracoscopy was only performed in certain sporadic mid-stage cases. Under the new algorithm (October 2005 to December 2008), videothoracoscopy became the first-line treatment option, whereas open decortication was only performed in patients with a thick pleural peel (>2 cm) observed by chest scan. The patients were divided into an old algorithm group (n = 93) and a new algorithm group (n = 113) and compared. The main outcome variables assessed included treatment failure (pleural space reintervention or death up to 60 days after medical discharge) and the occurrence of complications. RESULTS: Videothoracoscopy and open decortication were performed in 13 and 80 patients from the old algorithm group and in 81 and 32 patients from the new algorithm group, respectively (p<0.01). The patients in the new algorithm group were older (41 ± 1 vs. 46.3 ± 16.7 years, p = 0.014) and had higher Charlson Comorbidity Index scores [0 (0-3) vs. 2 (0-4), p = 0.032]. The occurrence of treatment failure was similar in both groups (19.35% vs. 24.77%, p = 0.35), although the complication rate was lower in the new algorithm group (48.3% vs. 33.6%, p = 0.04). CONCLUSIONS: The wider use of videothoracoscopy in pleural empyema treatment was associated with fewer complications and unaltered rates of mortality and reoperation, even though more severely ill patients were subjected to videothoracoscopic surgery.

  20. THE ALGORITHM IMPLEMENTATION OF THE DIVERSIFICATION STRATEGY IN SMALL AND MEDIUM-SIZED ENTERPRISES (FOR EXAMPLE, THE HOSPITALITY INDUSTRY

    Directory of Open Access Journals (Sweden)

    Наталья Николаевна Масюк

    2013-09-01

    Full Text Available Diversification in small business, in the general sense, is an extension of business activities into new areas of business (expanding the range of products, the types of services provided, etc.). Application of the diversification strategy in small and medium business is justified in cases where the industry does not offer opportunities for further growth or when growth opportunities outside the industry are more attractive. To determine whether diversification is overdue and justified, the entrepreneur must clearly define an algorithm for their actions. Purpose: To determine an algorithm for implementing the diversification strategy. Methodology: Desk research. Results: The developed algorithm. Practical implications: Management. DOI: http://dx.doi.org/10.12731/2218-7405-2013-9-19

  1. Optical pattern recognition architecture implementing the mean-square error correlation algorithm

    Science.gov (United States)

    Molley, Perry A.

    1991-01-01

    An optical architecture implementing the mean-square-error correlation algorithm, MSE = Σ[I − R]², for discriminating the presence of a reference image R in an input image scene I by computing the mean-square error between a time-varying reference image signal s₁(t) and a time-varying input image signal s₂(t), includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal I₁(t) having the form I₁(t) = A₁[1 + √2 m₁ s₁(t) cos(2π f₀ t)], and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by I₂(t) = A₂[1 + 2m₂² s₂²(t) − 2√2 m₂ s₂(t) cos(2π f₀ t)]. The time integration of the two signals I₁(t) and I₂(t) on the CCD detector plane produces the mean-square-error result R(τ) having the form R(τ) = A₁A₂{[T] + [2m₂² ∫ s₂²(t − τ) dt] − [2m₁m₂ cos(2π f₀ τ) ∫ s₁(t) s₂(t − τ) dt]}, where: s₁(t) is the signal input to the diode modulation source; s₂(t) is the signal input to the AOD modulation source; A₁ is the light intensity; A₂ is the diffraction efficiency; m₁ and m₂ are constants that determine the signal-to-bias ratio; f₀ is the frequency offset between the oscillator at f_c and the modulation at f_c + f₀; and a₀ and a₁ are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions, so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.

  2. Simulation of subwavelength metallic gratings using a new implementation of the recursive convolution finite-difference time-domain algorithm.

    Science.gov (United States)

    Banerjee, Saswatee; Hoshino, Tetsuya; Cole, James B

    2008-08-01

    We introduce a new implementation of the finite-difference time-domain (FDTD) algorithm with recursive convolution (RC) for first-order Drude metals. We implemented RC for both Maxwell's equations for light polarized in the plane of incidence (TM mode) and the wave equation for light polarized normal to the plane of incidence (TE mode). We computed the Drude parameters at each wavelength using the measured value of the dielectric constant as a function of the spatial and temporal discretization to ensure both the accuracy of the material model and algorithm stability. For the TE mode, where Maxwell's equations reduce to the wave equation (even in a region of nonuniform permittivity) we introduced a wave equation formulation of RC-FDTD. This greatly reduces the computational cost. We used our methods to compute the diffraction characteristics of metallic gratings in the visible wavelength band and compared our results with frequency-domain calculations.

  3. A Sparse Self-Consistent Field Algorithm and Its Parallel Implementation: Application to Density-Functional-Based Tight Binding.

    Science.gov (United States)

    Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias

    2014-06-10

    We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to exploit efficiently modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion.

  4. OpenCL Implementation of a Parallel Universal Kriging Algorithm for Massive Spatial Data Interpolation on Heterogeneous Systems

    Directory of Open Access Journals (Sweden)

    Fang Huang

    2016-06-01

    Full Text Available In some digital Earth engineering applications, spatial interpolation algorithms are required to process and analyze large amounts of data. Due to its powerful computing capacity, heterogeneous computing has been used in many applications for data processing in various fields. In this study, we explore the design and implementation of a parallel universal kriging spatial interpolation algorithm using the OpenCL programming model on heterogeneous computing platforms for massive geospatial data processing. This study focuses primarily on transforming the hotspot of the serial algorithm, i.e., the universal kriging interpolation function, into a corresponding kernel function in OpenCL. We also employ parallelization and optimization techniques in our implementation to improve the code performance. Finally, based on the results of experiments performed on two different high-performance heterogeneous platforms, i.e., an NVIDIA graphics processing unit system and an Intel Xeon Phi (MIC) system, we show that the parallel universal kriging algorithm can achieve speedups of up to 40× with a single computing device and up to 80× with multiple devices.

  5. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties.

  6. Functional segmentation of dynamic PET studies: Open source implementation and validation of a leader-follower-based algorithm.

    Science.gov (United States)

    Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José

    2016-02-01

    We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different 68Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small-animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures.
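
    The leader-follower scheme referenced above can be sketched in a few lines: the first time-activity curve founds a cluster, each later curve joins its nearest leader when similar enough (otherwise founding a new cluster), and the winning leader drifts toward each new member. A toy version (the distance measure, threshold, and learning rate are our assumptions, not jClustering's defaults):

        import numpy as np

        def leader_follower(tacs, threshold=0.5, lr=0.1):
            """Group time-activity curves (rows of `tacs`) by similarity.
            Euclidean distance on normalized curves; the winning leader is
            nudged toward each new member (learning rate `lr`)."""
            tacs = tacs / np.linalg.norm(tacs, axis=1, keepdims=True)
            leaders, labels = [tacs[0].copy()], [0]
            for curve in tacs[1:]:
                dists = [np.linalg.norm(curve - L) for L in leaders]
                best = int(np.argmin(dists))
                if dists[best] < threshold:
                    labels.append(best)
                    leaders[best] += lr * (curve - leaders[best])  # follow the member
                else:
                    leaders.append(curve.copy())                   # found a new cluster
                    labels.append(len(leaders) - 1)
            return np.array(labels)

        t = np.linspace(0, 60, 30)
        tacs = np.vstack([np.exp(-t / tau) + 0.01 * np.random.randn(30)
                          for tau in [5] * 20 + [40] * 20])
        print(leader_follower(tacs))   # two clusters of 20 curves each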

  7. Implementation and Comparison of the Lifting 5/3 and 9/7 Algorithms in MatLab on GPU

    Directory of Open Access Journals (Sweden)

    Randa Khemiri

    2016-06-01

    In order to accelerate the discrete wavelet transform (DWT), we have implemented and compared the lifting "Le Gall 5/3" and "Cohen-Daubechies-Feauveau 9/7" (CDF 9/7) algorithms on a low-cost NVIDIA GPU. The implementation is realized in MatLab using the parallel computing toolbox (PCT). Our experimental results indicate that the speedup grows with the image size until it attains a maximum at 2048×2048 pixels, beyond which the curve decreases. The GPU implementation outperforms the CPU by a factor of about 2-3.
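
    As a point of reference for what one lifting level of the Le Gall 5/3 transform computes, here is the reversible predict/update pair in plain NumPy (a CPU-side sketch with JPEG2000-style symmetric boundary handling; the GPU/MatLab mapping is the paper's contribution and is not reproduced here):

        import numpy as np

        def legall53_forward(x):
            """One 1-D level of the reversible Le Gall 5/3 lifting transform
            (JPEG2000 conventions, symmetric extension). Even-length input."""
            x = np.asarray(x, dtype=np.int64)
            s, d = x[0::2].copy(), x[1::2].copy()
            s_right = np.append(s[1:], s[-1])      # mirror the last even sample
            d -= (s + s_right) >> 1                # predict step (high-pass)
            d_left = np.insert(d[:-1], 0, d[0])    # mirror the first detail
            s += (d_left + d + 2) >> 2             # update step (low-pass)
            return s, d

        lo, hi = legall53_forward(np.arange(16))
        print(lo, hi)   # a linear ramp yields zero details except at the boundary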

  8. Final Report for Award #DE-SC3956 Separating Algorithm and Implementation via programming Model Injection (SAIMI)

    Energy Technology Data Exchange (ETDEWEB)

    Strout, Michelle [Colorado State Univ., Fort Collins, CO (United States)

    2015-08-15

    Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.

  9. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  10. Implementation of ternary Shor’s algorithm based on vibrational states of an ion in anharmonic potential

    Science.gov (United States)

    Liu, Wei; Chen, Shu-Ming; Zhang, Jian; Wu, Chun-Wang; Wu, Wei; Chen, Ping-Xing

    2015-03-01

    It is widely believed that Shor’s factoring algorithm has provided a driving force for quantum computing research. However, a serious obstacle to its binary implementation is the large number of quantum gates required. Non-binary quantum computing is an efficient way to reduce the required number of elemental gates. Here, we propose optimization schemes for the implementation of Shor’s algorithm and take a ternary version for factorizing 21 as an example. The optimized factorization is achieved by a two-qutrit quantum circuit, which consists of only two single-qutrit gates and one ternary controlled-NOT gate. This two-qutrit quantum circuit is then encoded into the nine lower vibrational states of an ion trapped in a weakly anharmonic potential. Optimal control theory (OCT) is employed to derive the manipulation electric field for transferring the encoded states. The ternary Shor’s algorithm can be implemented in one single step. Numerical simulation results show that the accuracy of the state transformations is about 0.9919. Project supported by the National Natural Science Foundation of China (Grant No. 61205108) and the High Performance Computing (HPC) Foundation of National University of Defense Technology, China.
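
    The gate count quoted above refers to qutrit operations, whose matrix forms are easy to state. A small NumPy sketch of the ternary building blocks (the single-qutrit cyclic shift and the ternary controlled-NOT, i.e., a controlled increment modulo 3); this illustrates the circuit algebra only, not the optimal-control pulse design:

        import numpy as np

        X3 = np.roll(np.eye(3), 1, axis=0)        # |k> -> |k+1 mod 3>
        omega = np.exp(2j * np.pi / 3)
        Z3 = np.diag([1, omega, omega**2])        # ternary phase gate

        def ternary_cnot():
            """9x9 controlled increment: |c,t> -> |c, t + c mod 3>."""
            U = np.zeros((9, 9))
            for c in range(3):
                for t in range(3):
                    U[3 * c + (t + c) % 3, 3 * c + t] = 1.0
            return U

        CX3 = ternary_cnot()
        state = np.zeros(9)
        state[3 * 1 + 0] = 1.0                    # prepare |c=1, t=0>
        print(np.argmax(CX3 @ state))             # index 4, i.e. |c=1, t=1>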

  11. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, demonstrating the clear superiority of the method. The proposed algorithm thus delivers both better edge detection performance and improved runtime performance.
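
    Serially, the Otsu-Canny pairing can be sketched with OpenCV; the MapReduce distribution is omitted here. Using the Otsu threshold as Canny's high threshold and half of it as the low one is a common heuristic and an assumption of ours, not necessarily the paper's exact rule (the file names are hypothetical):

        import cv2

        def otsu_canny(path):
            """Edge-detect one image, deriving Canny's dual thresholds from Otsu."""
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            img = cv2.GaussianBlur(img, (5, 5), 0)          # suppress noise first
            otsu_thresh, _ = cv2.threshold(img, 0, 255,
                                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return cv2.Canny(img, 0.5 * otsu_thresh, otsu_thresh)

        edges = otsu_canny("frame_0001.png")
        cv2.imwrite("edges_0001.png", edges)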

  12. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Directory of Open Access Journals (Sweden)

    Khairi Nor Asilah

    2017-01-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements, as well as its better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSNs. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  13. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Science.gov (United States)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements, as well as its better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSNs. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  14. On the implementation of the Ford-Fulkerson algorithm on the Multiple Instruction and Single Data computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2014-01-01

    Optimization algorithms on networks and directed graphs find broad application in practical tasks. However, with the large-scale introduction of information technologies into human activity, the demands on input data volumes and solution retrieval rates keep growing. Although a large number of algorithms for various models of computers and computing systems have by now been studied and implemented, solving key optimization problems at realistic problem sizes remains difficult. In this regard, the search for new and more efficient computing structures, as well as the updating of known algorithms, is of great current interest. This work considers an implementation of a maximum-flow search algorithm on a directed graph for the multiple instruction, single data (MISD) computer system developed at BMSTU. The key feature of this architecture is deep hardware support for operations over sets and data structures. Storage and access functions are realized on a specialized structure-processing processor (SP), which is capable of performing operations such as add, delete, search, intersect, complete, and merge at the hardware level. The advantage of such a system is the possibility of executing the set-access parts of computing tasks in parallel with the arithmetic and logical processing of information. Previous works presented the general principles of organizing the computing process and the features of programs implemented on the MISD system, described the structure and operating principles of the structure-processing processor, showed the general principles of solving graph tasks on such a system, and studied the efficiency of the resulting algorithms experimentally. This work gives the command formats of the SP processor, offers a technique for updating the algorithms realized in the MISD system, and suggests a variant of the Ford-Fulkerson algorithm.
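
    For reference, the maximum-flow algorithm being mapped onto the MISD system is the classic augmenting-path method; a serial sketch with breadth-first path search (the Edmonds-Karp variant) is shown below. The dict-of-dicts residual representation is our choice for illustration, not the SP processor's set structures:

        from collections import deque

        def max_flow(cap, s, t):
            """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).
            `cap` is a dict-of-dicts of residual capacities, modified in place."""
            flow = 0
            while True:
                parent = {s: None}
                q = deque([s])
                while q and t not in parent:          # BFS for an augmenting path
                    u = q.popleft()
                    for v, c in cap[u].items():
                        if c > 0 and v not in parent:
                            parent[v] = u
                            q.append(v)
                if t not in parent:
                    return flow                       # no augmenting path remains
                bottleneck, v = float("inf"), t       # find the path's bottleneck
                while parent[v] is not None:
                    bottleneck = min(bottleneck, cap[parent[v]][v])
                    v = parent[v]
                v = t                                 # push flow, update residuals
                while parent[v] is not None:
                    u = parent[v]
                    cap[u][v] -= bottleneck
                    cap[v][u] = cap[v].get(u, 0) + bottleneck
                    v = u
                flow += bottleneck

        cap = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
        print(max_flow(cap, "s", "t"))   # 4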

  15. Extended Adaptive Biasing Force Algorithm. An On-the-Fly Implementation for Accurate Free-Energy Calculations.

    Science.gov (United States)

    Fu, Haohao; Shao, Xueguang; Chipot, Christophe; Cai, Wensheng

    2016-08-09

    Proper use of the adaptive biasing force (ABF) algorithm in free-energy calculations needs certain prerequisites to be met, namely, that the Jacobian for the metric transformation and its first derivative be available and the coarse variables be independent and fully decoupled from any holonomic constraint or geometric restraint, thereby singularly limiting the field of application of the approach. The extended ABF (eABF) algorithm circumvents these intrinsic limitations by applying the time-dependent bias onto a fictitious particle coupled to the coarse variable of interest by means of a stiff spring. However, with the current implementation of eABF in the popular molecular dynamics engine NAMD, a trajectory-based post-treatment is necessary to derive the underlying free-energy change. Usually, such a post-hoc analysis leads to a decrease in the reliability of the free-energy estimates due to the inevitable loss of information, as well as to a drop in efficiency, which stems from substantial read-write accesses to file systems. We have developed a user-friendly, on-the-fly code for performing eABF simulations within NAMD. In the present contribution, this code is probed in eight illustrative examples. The performance of the algorithm is compared with traditional ABF, on the one hand, and the original eABF implementation combined with a post-hoc analysis, on the other hand. Our results indicate that the on-the-fly eABF algorithm (i) supplies the correct free-energy landscape in those critical cases where the coarse variables at play are coupled to either each other or to geometric restraints or holonomic constraints, (ii) greatly improves the reliability of the free-energy change, compared to the outcome of a post-hoc analysis, and (iii) represents a negligible additional computational effort compared to regular ABF. Moreover, in the proposed implementation, guidelines for choosing two parameters of the eABF algorithm, namely the stiffness of the spring and the mass

  16. PyCPR - a python-based implementation of the Conjugate Peak Refinement (CPR) algorithm for finding transition state structures.

    Science.gov (United States)

    Gisdon, Florian J; Culka, Martin; Ullmann, G Matthias

    2016-10-01

    Conjugate peak refinement (CPR) is a powerful and robust method to search for transition states on a molecular potential energy surface. Nevertheless, to the best of our knowledge, the method had so far been implemented only in CHARMM. In this paper, we present PyCPR, a new Python-based implementation of the CPR algorithm within the pDynamo framework. We provide a detailed description of the theory underlying our implementation and discuss the different parts of the implementation. The method is applied to two different problems. First, we illustrate the method by analyzing the gauche to anti-periplanar transition of butane using a semiempirical QM method. Second, we reanalyze the mechanism of a glycyl-radical enzyme, namely of 4-hydroxyphenylacetate decarboxylase (HPD), using QM/MM calculations. In the end, we suggest a strategy for using our implementation of the CPR algorithm. The integration of PyCPR into the pDynamo framework allows the combination of CPR with the large variety of methods implemented in pDynamo. PyCPR can be used in combination with quantum mechanical and molecular mechanical methods (and hybrid methods) implemented directly in pDynamo, but also in combination with external programs such as ORCA, using pDynamo as an interface. PyCPR is distributed as free, open-source software and can be downloaded from http://www.bisb.uni-bayreuth.de/index.php?page=downloads . Graphical abstract: PyCPR is a search tool for finding saddle points on the potential energy landscape of a molecular system.

  17. Implementation of the LandTrendr Algorithm on Google Earth Engine

    Directory of Open Access Journals (Sweden)

    Robert E Kennedy

    2018-05-01

    The LandTrendr (LT) algorithm has been used widely for analysis of change in Landsat spectral time series data, but requires significant pre-processing, data management, and computational resources, and is only accessible to the community in a proprietary programming language (IDL). Here, we introduce LT for the Google Earth Engine (GEE) platform. The GEE platform simplifies pre-processing steps, allowing focus on the translation of the core temporal segmentation algorithm. Temporal segmentation involved a series of repeated random-access calls to each pixel’s time series, resulting in a set of breakpoints (“vertices”) that bound straight-line segments. The translation of the algorithm into GEE included both transliteration and code analysis, resulting in improvements and logic-error fixes. At six study areas representing diverse land cover types across the U.S., we conducted a direct comparison of the new LT-GEE code against the heritage code (LT-IDL). The algorithms agreed in most cases, and where disagreements occurred, they were largely attributable to logic-error fixes in the code translation process. The practical impact of these changes is minimal, as shown by an example of forest disturbance mapping. We conclude that the LT-GEE algorithm represents a faithful translation of the LT code into a platform easily accessible by the broader user community.

  18. FPGA Implementation of an Efficient Algorithm for the Calculation of Charged Particle Trajectories in Cosmic Ray Detectors

    Science.gov (United States)

    Villar, Xabier; Piso, Daniel; Bruguera, Javier D.

    2014-02-01

    This paper presents an FPGA implementation of a previously published algorithm for the reconstruction of cosmic-ray trajectories and the determination of the time of arrival and velocity of the particles. The accuracy and precision issues of the algorithm have been analyzed to propose a suitable implementation. Thus, a 32-bit fixed-point format has been used for the representation of the data values. Moreover, the dependencies among the different operations have been taken into account to obtain a highly parallel and efficient hardware implementation. The final hardware architecture requires 18 cycles to process every particle, and has been exhaustively simulated to validate all the design decisions. The architecture has been mapped over different commercial FPGAs, with a frequency of operation ranging from 300 MHz to 1.3 GHz, depending on the FPGA being used. Consequently, the number of particle trajectories processed per second is between 16 million and 72 million. The high number of particle trajectories calculated per second shows that the proposed FPGA implementation might be used also in high rate environments such as those found in particle and nuclear physics experiments.

  19. Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimized Implementations in the bnlearn R Package

    Directory of Open Access Journals (Sweden)

    Marco Scutari

    2017-03-01

    It is well known in the literature that the problem of learning the structure of Bayesian networks is very hard to tackle: its computational complexity is super-exponential in the number of nodes in the worst case and polynomial in most real-world scenarios. Efficient implementations of score-based structure learning benefit from past and current research in optimization theory, which can be adapted to the task by using the network score as the objective function to maximize. This is not true for approaches based on conditional independence tests, called constraint-based learning algorithms. The only optimization in widespread use, backtracking, leverages the symmetries implied by the definitions of neighborhood and Markov blanket. In this paper we illustrate how backtracking is implemented in recent versions of the bnlearn R package, and how it degrades the stability of Bayesian network structure learning for little gain in terms of speed. As an alternative, we describe a software architecture and framework that can be used to parallelize constraint-based structure learning algorithms (also implemented in bnlearn), and we demonstrate its performance using four reference networks and two real-world data sets from genetics and systems biology. We show that on modern multi-core or multiprocessor hardware parallel implementations are preferable over backtracking, which was developed when single-processor machines were the norm.

  20. DMPDS: A Fast Motion Estimation Algorithm Targeting High Resolution Videos and Its FPGA Implementation

    Directory of Open Access Journals (Sweden)

    Gustavo Sanchez

    2012-01-01

    This paper presents a new fast motion estimation (ME) algorithm targeting high-resolution digital videos and its efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm which increases the ME quality when compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high-definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared to the optimum results generated by the Full Search (FS) algorithm the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS reached a complexity reduction higher than 45 times when compared to FS. The quality gains over DS caused an expected increase in the DMPDS complexity, which uses 6.4 times as many calculations as DS. The DMPDS architecture was designed focused on high performance and low cost, targeting to process Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, reaching the real-time requirements. The DMPDS architecture achieved the highest processing rate when compared to related works in the literature. This high processing rate was obtained by designing an architecture with a high operating frequency and a low number of cycles needed to process each block.

  1. Deterministic implementations of single-photon multi-qubit Deutsch–Jozsa algorithms with linear optics

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Hai-Rui, E-mail: hrwei@ustb.edu.cn; Liu, Ji-Zhen

    2017-02-15

    It is important to seek efficient and robust quantum algorithms demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch–Jozsa algorithms with the polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely from linear optics. Compared to traditional schemes with one DOF, ours are more economical and robust because the number of necessary photons is reduced from three to one. Our linear-optics schemes work deterministically, and they are feasible with current experimental technology.
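
    Whatever the physical encoding, the logic of the n-qubit Deutsch-Jozsa algorithm is easy to verify numerically: Hadamards, a phase oracle, Hadamards again; the all-zeros outcome then has probability 1 for a constant function and 0 for a balanced one. A minimal NumPy check (our own illustration, not the optical scheme itself):

        import numpy as np
        from functools import reduce

        def deutsch_jozsa(f, n):
            """Return the probability of measuring |0...0>;
            1 means f is constant, 0 means f is balanced."""
            H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
            Hn = reduce(np.kron, [H] * n)                  # n-qubit Hadamard
            state = np.zeros(2 ** n)
            state[0] = 1.0
            state = Hn @ state                             # uniform superposition
            phases = np.array([(-1) ** f(x) for x in range(2 ** n)])
            state = Hn @ (phases * state)                  # phase oracle, then H again
            return abs(state[0]) ** 2

        n = 3
        print(deutsch_jozsa(lambda x: 0, n))                      # constant -> 1.0
        print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, n))  # balanced -> 0.0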

  2. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Circuits and systems able to process high-quality video in real time are fundamental in today's imaging systems. The circuit proposed in the paper, aimed at the robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware-oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed targeting commercial FPGA devices and achieves speed and logic-resource occupation that surpass previously proposed implementations. When implemented on a Virtex 6 or Stratix IV, the circuit processes more than 45 frames per second in 1080p format and uses only a few percent of the FPGA logic resources.
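
    The improved OpenCV formulation of the GMM that the circuit implements is exposed in software as BackgroundSubtractorMOG2; a minimal CPU-side usage sketch for comparison purposes (the video path and parameter values are hypothetical):

        import cv2

        subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                        varThreshold=16,
                                                        detectShadows=True)
        cap = cv2.VideoCapture("street_1080p.mp4")   # hypothetical input video
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)           # per-pixel GMM update + classify
        cap.release()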

  3. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    Science.gov (United States)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology, the CPU code essentially only has to be augmented with a few compiler directives that identify the regions to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and future evolution of GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC and two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We find that although OpenACC cannot match the performance of an optimized CUDA implementation (3.5× slower on average), it provides a significant performance improvement over a CPU implementation (2-6×) with far simpler code and less implementation effort.
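
    For readers unfamiliar with the case study, the flow-accumulation step being accelerated can be written serially in a few lines: visit cells from highest to lowest and pass each cell's accumulated count to its steepest downslope neighbor. A NumPy sketch (our own simplified illustration of the D8 idea, not the paper's CUDA/OpenACC code):

        import numpy as np

        def d8_accumulation(dem):
            """Serial D8 flow accumulation on a DEM (O'Callaghan-Mark style)."""
            rows, cols = dem.shape
            acc = np.ones_like(dem, dtype=np.int64)      # each cell contributes itself
            order = np.argsort(dem, axis=None)[::-1]     # visit highest cells first
            nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                    (0, 1), (1, -1), (1, 0), (1, 1)]
            for idx in order:
                r, c = divmod(int(idx), cols)
                best, drop_max = None, 0.0
                for dr, dc in nbrs:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if drop > drop_max:
                            best, drop_max = (rr, cc), drop
                if best is not None:                     # pass flow downslope
                    acc[best] += acc[r, c]
            return acc

        dem = np.add.outer(np.arange(5.0, 0, -1), np.arange(5.0, 0, -1))
        print(d8_accumulation(dem))   # all flow collects in the lowest corner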

  4. Implementation of Automatic Focusing Algorithms for a Computer Vision System with Camera Control.

    Science.gov (United States)

    1983-08-15

    obtainable from real data, rather than relying on a stock database. Often, computer vision and image processing algorithms become subconsciously tuned to...two coils on the same mount structure. Since it was not possible to reprogram the binary system, we turned to the POPEYE system for both its grey

  5. A Concurrent Implementation of the Cascade-Correlation Algorithm, Using the Time Warp Operating System

    Science.gov (United States)

    Springer, P.

    1993-01-01

    This paper discusses how the Cascade-Correlation algorithm was parallelized so that it could be run using the Time Warp Operating System (TWOS). TWOS is a special-purpose operating system designed to run parallel discrete event simulations with maximum efficiency on parallel or distributed computers.

  6. Parallel Implementation of Isothermal and Isoenergetic Dissipative Particle Dynamics using Shardlow-like Splitting Algorithms

    Czech Academy of Sciences Publication Activity Database

    Larentzos, J.P.; Brennan, J.K.; Moore, J.D.; Lísal, Martin; Mattson, W.D.

    2014-01-01

    Vol. 185, No. 7 (2014), pp. 1987-1998 ISSN 0010-4655 Grant - others:ARL(US) W911NF-10-2-0039 Institutional support: RVO:67985858 Keywords: dissipative particle dynamics * Shardlow splitting algorithm * numerical integration Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 3.112, year: 2014

  7. A Fully Parallel VLSI-implementation of the Viterbi Decoding Algorithm

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1989-01-01

    In this paper we describe the implementation of a K = 7, R = 1/2 single-chip Viterbi decoder intended to operate at 10-20 Mbit/sec. We propose a general, regular and area efficient floor-plan that is also suitable for implementation of decoders for codes with different generator polynomials...

  8. Prospective implementation of an algorithm for bedside intravascular ultrasound-guided filter placement in critically ill patients.

    Science.gov (United States)

    Killingsworth, Christopher D; Taylor, Steven M; Patterson, Mark A; Weinberg, Jordan A; McGwin, Gerald; Melton, Sherry M; Reiff, Donald A; Kerby, Jeffrey D; Rue, Loring W; Jordan, William D; Passman, Marc A

    2010-05-01

    Although contrast venography is the standard imaging method for inferior vena cava (IVC) filter insertion, intravascular ultrasound (IVUS) imaging is a safe and effective option that allows for bedside filter placement and is especially advantageous for immobilized critically ill patients by limiting resource use, risk of transportation, and cost. This study reviewed the effectiveness of a prospectively implemented algorithm for IVUS-guided IVC filter placement in this high-risk population. Current evidence-based guidelines were used to create a clinical decision algorithm for IVUS-guided IVC filter placement in critically ill patients. After a defined lead-in phase to allow dissemination of techniques, the algorithm was prospectively implemented on January 1, 2008. Data were collected for 1 year using accepted reporting standards, and a quality assurance review was performed based on intent-to-treat at 6, 12, and 18 months. As defined in the prospectively implemented algorithm, 109 patients met criteria for IVUS-directed bedside IVC filter placement. Technical feasibility was 98.1%. Only 2 patients had inadequate IVUS visualization for bedside filter placement and required subsequent placement in the endovascular suite. Technical success, defined as proper deployment in an infrarenal position, was achieved in 104 of the remaining 107 patients (97.2%). The filter was permanent in 21 (19.6%) and retrievable in 86 (80.3%). The single-puncture technique was used in 101 (94.4%), with additional dual access required in 6 (5.6%). Periprocedural complications were rare but included malpositioning requiring retrieval and repositioning in three patients, filter tilt ≥15 degrees in two, and arteriovenous fistula in one. The 30-day mortality rate for the bedside group was 5.5%, with no filter-related deaths. Successful placement of IVC filters using IVUS-guided imaging at the bedside in critically ill patients can be established through an evidence-based prospectively implemented algorithm.

  9. The Analysis of Alpha Beta Pruning and MTD(f) Algorithm to Determine the Best Algorithm to be Implemented at Connect Four Prototype

    Science.gov (United States)

    Tommy, Lukas; Hardjianto, Mardi; Agani, Nazori

    2017-04-01

    Connect Four is a two-player game in which the players take turns dropping discs into a grid, trying to connect four of their own discs next to each other vertically, horizontally, or diagonally. To play Connect Four properly, like a human, a computer requires artificial intelligence (AI). There are many AI algorithms that can be implemented for Connect Four, but it is not obvious which is most suitable; here, a suitable algorithm is one that chooses optimal moves and whose execution time remains acceptable at sufficiently deep search depths. In this research, standard alpha-beta (AB) pruning and MTD(f) are analyzed and compared on a prototype of Connect Four in terms of optimality (win percentage) and speed (execution time and number of leaf nodes). Experiments were carried out by running computer-versus-computer mode with 12 different conditions, i.e., varied search depth (5 through 10) and who moves first. The percentages achieved by MTD(f) in the experiments are 45.83% wins, 37.5% losses, and 16.67% draws. In the experiments with search depth 8, MTD(f) execution time is 35.19% faster and it evaluates 56.27% fewer leaf nodes than AB pruning. The results of this research are that MTD(f) is as optimal as AB pruning on the Connect Four prototype, but MTD(f) is on average faster and evaluates fewer leaf nodes than AB pruning. The execution time of MTD(f) is not slow and is much faster than AB pruning at sufficiently deep search depths.
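
    For reference, the MTD(f) driver evaluated above converges on the minimax value through a sequence of zero-window alpha-beta probes. A sketch follows, assuming a memory-enhanced alphabeta(state, alpha, beta, depth) is supplied by the caller (a transposition table is what makes the repeated probes cheap; the stand-in evaluator in the demo is ours):

        def mtdf(state, first_guess, depth, alphabeta):
            """MTD(f) driver (Plaat et al.): locate the minimax value with
            zero-window searches that progressively tighten [lower, upper]."""
            g = first_guess
            lower, upper = float("-inf"), float("inf")
            while lower < upper:
                beta = g + 1 if g == lower else g      # zero-window around g
                g = alphabeta(state, beta - 1, beta, depth)
                if g < beta:
                    upper = g                          # probe failed low
                else:
                    lower = g                          # probe failed high
            return g

        # Stand-in evaluator for demonstration; a real one would be a
        # transposition-table-backed alpha-beta over Connect Four positions.
        print(mtdf("root", 0, 4, lambda s, a, b, d: 5))   # converges to 5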

  10. Implementation of intensity ratio change and line-of-sight rate change algorithms for imaging infrared trackers

    Science.gov (United States)

    Viau, C. R.

    2012-06-01

    The use of the intensity change and line-of-sight (LOS) change concepts has previously been documented in the open literature as techniques used by non-imaging infrared (IR) seekers to reject expendable IR countermeasures (IRCM). The purpose of this project was to implement IR counter-countermeasure (IRCCM) algorithms based on target intensity and kinematic behavior for a generic imaging IR (IIR) seeker model, with the underlying goal of obtaining a better understanding of how expendable IRCM can be used to defeat the latest generation of seekers. The report describes the Intensity Ratio Change (IRC) and LOS Rate Change (LRC) discrimination techniques. The algorithms and the seeker model are implemented in a physics-based simulation product called Tactical Engagement Simulation Software (TESS™). TESS is developed in the MATLAB®/Simulink® environment and is a suite of RF/IR missile software simulators used to evaluate and analyze the effectiveness of countermeasures against various classes of guided threats. The investigation evaluates the algorithms and tests their robustness by presenting the results of batch simulation runs of surface-to-air (SAM) and air-to-air (AAM) IIR missiles engaging a non-maneuvering target platform equipped with expendable IRCM as self-protection. The report discusses how varying critical parameters such as track memory time, ratio thresholds, and hold time can influence the outcome of an engagement.

  11. Supercomputer implementation of finite element algorithms for high speed compressible flows. Progress report, period ending 30 June 1986

    International Nuclear Information System (INIS)

    Thornton, E.A.; Ramakrishnan, R.

    1986-06-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock-capturing finite element formulations for high speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models are compared. A Petrov-Galerkin algorithm has as its basis the concepts of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors of length of the order of the number of nodes or elements. The use of the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed and storage requirements. The convergence rates for both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure to predicting 2D viscous and 3D inviscid flows is demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures for realistic problems that require hundreds of thousands of nodes.

  12. Fast quantum search algorithm for databases of arbitrary size and its implementation in a cavity QED system

    International Nuclear Information System (INIS)

    Li, H.Y.; Wu, C.W.; Liu, W.T.; Chen, P.X.; Li, C.Z.

    2011-01-01

    We propose a method for implementing the Grover search algorithm directly in a database containing any number of items, based on multi-level systems. Compared with the search procedure in a database with qubit encoding, our modified algorithm needs fewer iteration steps to find the marked item and uses the carriers of the information more economically. Furthermore, we illustrate how to realize our idea in cavity QED using the Zeeman level structure of atoms. Numerical simulation under the influence of cavity and atom decays shows that the scheme could be achieved efficiently within current state-of-the-art technology. -- Highlights: ► A modified Grover algorithm is proposed for searching in an arbitrary dimensional Hilbert space. ► Our modified algorithm requires fewer iteration steps to find the marked item. ► The proposed method uses the carriers of the information more economically. ► A scheme for a six-item Grover search in cavity QED is proposed. ► Numerical simulation under decays shows that the scheme can be achieved with enough fidelity.
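
    Independent of the atomic encoding, the N-dimensional Grover iteration is straightforward to check numerically: an oracle phase flip on the marked item followed by inversion about the mean, repeated about (π/4)√N times. A NumPy sketch (our own illustration, shown for the six-item case discussed above):

        import numpy as np

        def grover(N, marked, iterations=None):
            """Grover search in an N-dimensional Hilbert space (no qubit encoding)."""
            if iterations is None:
                iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
            psi = np.ones(N) / np.sqrt(N)          # uniform superposition
            for _ in range(iterations):
                psi[marked] *= -1                  # oracle: phase-flip the marked item
                psi = 2 * psi.mean() - psi         # inversion about the mean
            return abs(psi[marked]) ** 2

        print(grover(6, marked=2))     # six-item search: success prob ~0.91
        print(grover(64, marked=17))   # larger space: success prob ~0.997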

  13. Development and clinical implementation of an enhanced display algorithm for use in networked electronic portal imaging

    International Nuclear Information System (INIS)

    Heuvel, Frank van den; Han, Ihn; Chungbin, Suzanne; Strowbridge, Amy; Tekyi-Mensah, Sam; Ragan, Don P.

    1999-01-01

    Purpose: To introduce and clinically validate a preprocessing algorithm that allows clinical images from an electronic portal imaging device (EPID) to be displayed on any computer monitor, without loss of clinical usability. The introduction of such a system frees EPI systems from the constraints of fixed viewing workstations and increases mobility of the images in a department. Methods and Materials: The preprocessing algorithm, together with its variable parameters, is introduced. Clinically, the algorithm is tested using an observer study of 316 EPID images of the pelvic region in the framework of treatment of carcinoma of the cervix and endometrium. Both anterior-posterior (AP/PA) and latero-lateral (LAT) images were used. The images scored were taken from six different patients, five of whom were obese, female, and postmenopausal. The result is tentatively compared with results from other groups. A scoring system, based on the number of visible landmarks in the port, is proposed and validated. Validation was performed by having the observer panel score images with artificially induced noise levels. A comparative study was undertaken with a standard automatic window and leveling display technique. Finally, some case studies using different image sites and EPI detectors are presented. Results: The image quality for all images in this study was deemed to be clinically useful (mean score > 1). Most of the images received the second-highest score (AP/PA landmarks ≥ 6 and LAT landmarks ≥ 5). Obesity, which has been an important factor determining image quality, was not seen to be a factor here. Compared to standard techniques, a highly significant improvement in clinical usefulness was found. The algorithm is fast (less than 9 seconds) and needs no additional user interaction in most cases. The algorithm works well on both direct-detection portal imagers and camera-based imagers, whether analog or digital cameras are used.

  14. How to implement a quantum algorithm on a large number of qubits by controlling one central qubit

    Science.gov (United States)

    Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco

    2010-03-01

    It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).

  15. Optimization of welding parameters using a genetic algorithm: A robotic arm–assisted implementation for recovery of Pelton turbine blades

    Directory of Open Access Journals (Sweden)

    Luis Pérez Pozo

    2015-11-01

    This work presents the optimization of a welding operation using genetic algorithms. The welding curves correspond to the blade profile of a Pelton turbine. The procedure involved a series of tests and observation of the parameters to be controlled during the welding process. After the tests were performed, the samples were prepared for chemical attack, which allowed observation of the penetration, weld area, and dilution. Mathematical models were then developed that correlate the controllable welding parameters with the aforementioned bead parameters, and the process parameters in these models were optimized using genetic algorithms. Specially programmed functions for the mutation, reproduction, and initialization processes were written and used in the implemented model. After the optimization was completed, the results were evaluated through new tests to verify whether the obtained objective functions properly describe the characteristics of the weld. The comparisons showed errors of less than 6%.
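
    A minimal real-coded genetic algorithm with the same ingredients (initialization, reproduction, mutation) can be sketched as follows; the operators, parameter bounds, and the stand-in objective replacing the weld-bead response models are our assumptions, not the authors' implementation:

        import numpy as np

        rng = np.random.default_rng(0)

        def genetic_optimize(objective, bounds, pop_size=40, generations=100,
                             mutation_rate=0.1):
            """Minimal real-coded GA: tournament selection, blend crossover,
            Gaussian mutation. `bounds` holds one (low, high) row per parameter."""
            lo, hi = bounds[:, 0], bounds[:, 1]
            pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
            for _ in range(generations):
                fit = np.array([objective(ind) for ind in pop])
                new_pop = [pop[np.argmin(fit)]]                  # elitism
                while len(new_pop) < pop_size:
                    i, j = rng.integers(pop_size, size=2)
                    a = pop[i] if fit[i] < fit[j] else pop[j]    # tournament
                    i, j = rng.integers(pop_size, size=2)
                    b = pop[i] if fit[i] < fit[j] else pop[j]
                    w = rng.random(len(bounds))
                    child = w * a + (1 - w) * b                  # blend crossover
                    if rng.random() < mutation_rate:
                        child += rng.normal(0, 0.1 * (hi - lo))  # Gaussian mutation
                    new_pop.append(np.clip(child, lo, hi))
                pop = np.array(new_pop)
            return pop[np.argmin([objective(ind) for ind in pop])]

        # Stand-in for a weld-bead response model: current, voltage, travel speed
        target = lambda p: (p[0] - 180) ** 2 + (p[1] - 24) ** 2 + (p[2] - 6) ** 2
        best = genetic_optimize(target, np.array([[120, 260], [18, 32], [2, 12]]))
        print(best)   # should approach (180, 24, 6)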

  16. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    Science.gov (United States)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.

  17. Conforming to interface structured adaptive mesh refinement: 3D algorithm and implementation

    Science.gov (United States)

    Nagarajan, Anand; Soghrati, Soheil

    2018-03-01

    A new non-iterative mesh generation algorithm named conforming to interface structured adaptive mesh refinement (CISAMR) is introduced for creating 3D finite element models of problems with complex geometries. CISAMR transforms a structured mesh composed of tetrahedral elements into a conforming mesh with low element aspect ratios. The construction of the mesh begins with the structured adaptive mesh refinement of elements in the vicinity of material interfaces. An r-adaptivity algorithm is then employed to relocate selected nodes of nonconforming elements, followed by face-swapping a small fraction of them to eliminate tetrahedrons with high aspect ratios. The final conforming mesh is constructed by sub-tetrahedralizing remaining nonconforming elements, as well as tetrahedrons with hanging nodes. In addition to studying the convergence and analyzing element-wise errors in meshes generated using CISAMR, several example problems are presented to show the ability of this method for modeling 3D problems with intricate morphologies.

  18. Parallel implementation of D-Phylo algorithm for maximum likelihood clusters.

    Science.gov (United States)

    Malik, Shamita; Sharma, Dolly; Khatri, Sunil Kumar

    2017-03-01

    This study explains a newly developed parallel algorithm for the phylogenetic analysis of DNA sequences. The newly designed D-Phylo is an advanced algorithm for phylogenetic analysis using the maximum likelihood approach. D-Phylo exploits the search capacity of k-means while avoiding its main limitation of getting stuck at locally conserved motifs. The authors tested the behaviour of D-Phylo on an Amazon Linux Amazon Machine Image (hardware virtual machine) i2.4xlarge instance (six central processing units, 122 GiB of memory, 8 × 800 solid-state-drive Elastic Block Store volumes, high network performance), with up to 15 processors, for several real-life datasets. Distributing the clusters evenly across all processors makes it possible to achieve a near-linear speedup when a large number of processors is available.

  19. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher-fidelity solutions are needed that are beyond the capacity of system-level codes. The proposed algorithm allows NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with analytical and benchmark solutions for Poiseuille flow, Couette flow, and flow in a driven cavity.

  20. Design and Implementation of DC-DC Converter with Inc-Cond Algorithm

    OpenAIRE

    Mustafa Engin Basoğlu; Bekir Çakır

    2015-01-01

    The most important component affecting the efficiency of photovoltaic power systems is the solar panel. In other words, the efficiency of these systems is significantly limited by the low efficiency of the solar panels. Thus, solar panels should be operated at their maximum power point through a power converter. In this study, a boost converter was designed together with a maximum power point tracking (MPPT) algorithm based on incremental conductance (Inc-Cond)...
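
    The decision rule at the heart of Inc-Cond follows from dP/dV = 0 at the maximum power point, i.e., dI/dV = −I/V; the controller nudges the voltage command according to which side of that equality it measures. A sketch of one update step (function and parameter names are ours, not the paper's):

        def inc_cond_step(v, i, v_prev, i_prev, v_ref, step=0.5):
            """One incremental-conductance MPPT update; `v_ref` is the voltage
            command handed to the DC-DC converter's control loop."""
            dv, di = v - v_prev, i - i_prev
            if dv == 0:
                if di > 0:
                    v_ref += step                 # irradiance rose: move right
                elif di < 0:
                    v_ref -= step
            else:
                g_inc, g = di / dv, -i / v        # incremental vs. instantaneous
                if g_inc > g:
                    v_ref += step                 # left of the MPP: raise voltage
                elif g_inc < g:
                    v_ref -= step                 # right of the MPP: lower voltage
            return v_ref                          # unchanged when dI/dV == -I/V

        print(inc_cond_step(17.8, 5.1, 17.6, 5.2, v_ref=17.8))   # steps toward MPP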

  1. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    International Nuclear Information System (INIS)

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-01-01

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  2. The (black) art of runtime evaluation: Are we comparing algorithms or implementations?

    DEFF Research Database (Denmark)

    Kriegel, Hans-Peter; Schubert, Erich; Zimek, Arthur

    2017-01-01

    Any paper proposing a new algorithm should come with an evaluation of efficiency and scalability (particularly when we are designing methods for “big data”). However, there are several (more or less serious) pitfalls in such evaluations. We would like to point the attention of the community...... general recommendations but maintain that the design of fair and conclusive experiments will always remain a challenge for researchers and an integral part of the scientific endeavor....

  3. The effects of implementing a nutritional support algorithm in critically ill medical patients.

    Science.gov (United States)

    Sungur, Gonul; Sahin, Habibe; Tasci, Sultan

    2015-08-01

    To determine the effect of an enteral nutrition algorithm on nutritional support in critically ill medical patients. The quasi-experimental study was conducted at a medical intensive care unit of a university hospital in the central Anatolia region of Turkey from June to December 2008. The patients were divided into two equal groups: the historical group was fed according to routine clinical practice, while the study group was fed according to the enteral nutrition algorithm. Prior to collecting data, nurses were trained interactively about enteral nutrition and the nutritional support algorithm. The nutrition of the study group was directed by the nurses. Data were recorded during 3 days of care. SPSS 22 was used for statistical analysis. The 40 patients in the study were divided into two equal groups of 20 (50%) each. The energy intake of the study group was 62% of the prescribed energy requirement on the 1st day, 68.5% on the 2nd, and 63% on the 3rd, whereas the historical group met 38%, 56.5%, and 60% of the prescribed energy requirement. The energy consumed by the historical group on the 1st, 2nd, and 3rd days differed significantly (p=0.020). In the study group, serum total protein and albumin levels decreased significantly (p<0.05), whereas in the historical group none of the serum parameters changed. Enteral nutrition-induced complications and duration of stay in the intensive care unit were not significantly different between the groups (p>0.05). The use of standard algorithms for enteral nutrition may be an effective way to meet the nutritional requirements of patients.

  4. Implementation of Naive Bayes Classifier Algorithm to Evaluation in Utilizing Online Hotel Tax Reporting Application

    OpenAIRE

    R. Dimas Adityo; Herti Miawarni

    2017-01-01

    Regional hotel tax reporting in Pasuruan is currently done online (Web-based), with the aim that the reporting system can receive financial statements, especially from hotel taxpayers, effectively and efficiently. Pasuruan, a small but rapidly developing town in East Java, implemented the online tax filing system as a role model starting in 2015, covering six hotels across several classes ranging from budget class up to three stars. After th...
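
    The classifier itself is standard; a minimal scikit-learn sketch is shown below on made-up evaluation features (the feature names and data are hypothetical illustrations, not the paper's dataset):

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Hypothetical features per hotel: reports filed on time per year,
        # average days late, helpdesk tickets opened
        X = np.array([[12, 0, 1], [11, 2, 3], [6, 15, 8],
                      [12, 1, 0], [5, 20, 9], [10, 4, 2]])
        y = np.array([1, 1, 0, 1, 0, 1])   # 1 = utilizes the application effectively

        model = GaussianNB().fit(X, y)
        print(model.predict([[9, 5, 2]]))         # classify a new hotel
        print(model.predict_proba([[9, 5, 2]]))   # posterior class probabilities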

  5. Towards a practical implementation of the MLE algorithm for positron emission tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Andreae, S.; Veklerov, E.; Hoffman, E.J.

    1986-01-01

    Recognizing that the quality of images obtained by application of the Maximum Likelihood Estimator (MLE) to Positron Emission Tomography (PET) and Single Photon Emission Tomography (SPECT) appears to be substantially better than those obtained by conventional methods, the authors have started to develop methods that will facilitate the necessary research for a good evaluation of the algorithm and may lead to its practical application for research and routine tomography. They have found that the non-linear MLE algorithm can be used with pixel sizes which are smaller than the sampling distance, without interpolation, obtaining excellent resolution and no noticeable increase in noise. They have studied the role of symmetry in reducing the amount of matrix element storage requirements for full size applications of the algorithm and have used that concept to carry out two reconstructions of the Derenzo phantom with data from the ECAT-III instrument. The results show excellent signal-to-noise (S/N) ratio, particularly for data with low total counts, excellent sharpness, but low contrast at high frequencies when using the Shepp-Vardi model for probability matrices
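
    The MLE reconstruction discussed above is usually realized as the multiplicative MLEM update x ← x · Aᵀ(y / Ax) / Aᵀ1. A toy NumPy sketch with a random system matrix standing in for the detection probabilities (an illustration only, far from a realistic PET geometry):

        import numpy as np

        def mlem(A, y, iterations=50):
            """MLEM for emission tomography: A is the system (probability)
            matrix, y the measured counts. The multiplicative update
            preserves positivity of the image estimate."""
            x = np.ones(A.shape[1])                  # uniform initial image
            sens = A.T @ np.ones(A.shape[0])         # sensitivity image, A^T 1
            for _ in range(iterations):
                proj = A @ x                         # forward-project current image
                x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
            return x

        rng = np.random.default_rng(1)
        A = rng.random((60, 25))
        A /= A.sum(axis=0)                           # toy detection probabilities
        truth = rng.random(25)
        y = rng.poisson(A @ truth * 1e4)             # noisy measured counts
        print(mlem(A, y)[:5] / 1e4)                  # roughly recovers `truth`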

  6. Implementing a Topology Management Algorithm for Mobile Ad-Hoc Networks

    Directory of Open Access Journals (Sweden)

    Mrinal K. Naskar

    2008-01-01

    In this paper, we propose to maintain the topology of a MANET by suitably selecting multiple coordinators among the nodes constituting the MANET. The maintenance of topology in a mobile ad-hoc network is of primary importance because routing techniques can only work if the network is connected. Thus, one of the burning issues at present is to devise algorithms which ensure that the network topology is always maintained. The basic philosophy behind our algorithm is to select two coordinators from among the nodes based on positional data. Once elected, they are entrusted with the responsibility of emitting signals of different frequencies, while the other nodes individually decide the logic they need to follow in order to maintain the topology, thereby greatly reducing the overhead. As far as our knowledge goes, we are the first to introduce the concept of multiple coordinators, which not only reduces the workload of each coordinator but also eliminates the need for different signal ranges, thereby ensuring greater efficiency. We have simulated the algorithm with a number of robots using embedded systems. The results we have obtained have been quite encouraging.

  7. Development of a sensorimotor algorithm able to deal with unforeseen pushes and its implementation based on VHDL

    OpenAIRE

    Lezcano Giménez, Pablo Gabriel

    2015-01-01

    Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of my thesis, which concludes my Bachelor's degree at the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It encompasses the overall work I did in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. This thesis is focused on the field of robotics, sp...

  8. Implementation of an algorithm for absorbed dose calculation in high energy photon beams at off axis points

    International Nuclear Information System (INIS)

    Matos, M.F.; Alvarez, G.D.; Sanz, D.E.

    2008-01-01

    Full text: A semiempirical algorithm for absorbed dose calculation at off-axis points in irregular beams was implemented. It is well known that semiempirical methods are very useful because of their easy implementation and their helpfulness in dose calculation in the clinic. These methods can be used as independent tools for dosimetric calculation in many quality assurance applications. However, the applicability of such methods has some limitations, even in homogeneous media, especially at off-axis points, near beam fringes, or outside the beam. Only methods derived from the tissue-air ratio (TAR) or scatter-maximum ratio (SMR), devised many years ago, address those situations. Although there have been improvements to these manual methods, like the Sc-Sp ones, no attempt has been made to extend their usage to off-axis points. In this work, a semiempirical formalism was introduced, based on the works of Venselaar et al. (1999) and Sanz et al. (2004), aimed at Sc-Sp separation. This new formalism relies on the separation of the primary and secondary components of the beam, although in a relative way. The data required by the algorithm are reduced to a minimum, allowing for experimental ease. According to modern recommendations, reference measurements in a water phantom are performed at 10 cm depth, keeping away electron contamination. Air measurements are done using a mini phantom instead of the old equilibrium caps. Finally, the calculations at off-axis points are done using data measured on the central beam axis, correcting the results with a measured function which depends on the location of the off-axis point. The measurements for testing the algorithm were performed on our Siemens MXE linear accelerator. The algorithm was used to determine specific dose profiles for a great number of different beam configurations, and the results were compared with direct measurements to validate the accuracy of the algorithm. Additionally, the results were

  9. Experiences with Implementing a Distributed and Self-Organizing Scheduling Algorithm for Energy-Efficient Data Gathering on a Real-Life Sensor Network Platform

    NARCIS (Netherlands)

    Zhang, Y.; Chatterjea, Supriyo; Havinga, Paul J.M.

    2007-01-01

    We report our experiences with implementing a distributed and self-organizing scheduling algorithm designed for energy-efficient data gathering on a 25-node multihop wireless sensor network (WSN). The algorithm takes advantage of spatial correlations that exist in readings of adjacent sensor nodes

  10. Implementation of Evolution Strategies (ES) Algorithm to Optimization Lovebird Feed Composition

    Directory of Open Access Journals (Sweden)

    Agung Mustika Rizki

    2017-05-01

    Full Text Available The lovebird is currently popular in society, especially among bird lovers, and some people have begun to try to cultivate these birds. In the cultivation process, the composition of the feed must be considered in order to produce a quality bird. Determining the feed is not easy, because both the cost and the lovebird's vitamin needs must be taken into account. This problem can be solved by the Evolution Strategies (ES) algorithm. Based on the test results, an optimal fitness value of 0.3125 was obtained using a population size of 100, and an optimal fitness value of 0.3267 at generation 1400.
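
    To make the approach concrete, the following is a minimal (mu+lambda) Evolution Strategies sketch in Python. The three-ingredient cost and vitamin figures, the constraint, and all parameter values are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Hypothetical objective: penalize cost while meeting a vitamin target.
    cost = np.array([2.0, 3.5, 1.2]) @ x        # price per feed ingredient
    vitamin = np.array([0.3, 0.8, 0.1]) @ x     # vitamin content per ingredient
    penalty = max(0.0, 1.0 - vitamin) * 100.0   # require at least 1 vitamin unit
    return 1.0 / (1.0 + cost + penalty)         # higher is better

mu, lam, sigma = 10, 70, 0.1
pop = rng.uniform(0, 1, size=(mu, 3))           # proportions of 3 ingredients
for gen in range(100):
    parents = pop[rng.integers(0, mu, lam)]     # pick lambda random parents
    offspring = np.clip(parents + rng.normal(0, sigma, parents.shape), 0, 1)
    both = np.vstack([pop, offspring])          # (mu + lambda) selection pool
    pop = both[np.argsort([fitness(x) for x in both])[::-1][:mu]]

print("best composition:", pop[0], "fitness:", fitness(pop[0]))
```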

  11. Implementation of Automatic Clustering Algorithm and Fuzzy Time Series in Motorcycle Sales Forecasting

    Science.gov (United States)

    Rasim; Junaeti, E.; Wirantika, R.

    2018-01-01

    Accurate forecasting of product sales depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years. The error rate of the forecasts is then measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The one-year forecasts obtained in this study show good accuracy.
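
    For reference, the two error measures are straightforward to compute; a small sketch with invented sales figures follows:

```python
import numpy as np

def mpe(actual, forecast):
    """Mean Percentage Error (signed, reveals systematic bias)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean((actual - forecast) / actual) * 100.0

def mape(actual, forecast):
    """Mean Absolute Percentage Error (overall accuracy)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

# Hypothetical monthly motorcycle sales vs. fuzzy-time-series forecasts
actual   = [120, 135, 128, 150, 142]
forecast = [118, 140, 125, 148, 145]
print(f"MPE:  {mpe(actual, forecast):.2f}%")
print(f"MAPE: {mape(actual, forecast):.2f}%")
```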

  12. Development and Implementation of an Advanced Power Management Algorithm for Electronic Load Sensing on a Telehandler

    DEFF Research Database (Denmark)

    Hansen, Rico Hjerm; Andersen, Torben Ole; Pedersen, Henrik C.

    2010-01-01

    The relevance of electronic control of mobile hydraulic systems is increasing as hydraulic components are implemented with more electrical sensors and actuators. This paper presents how the traditional Hydro-mechanical Load Sensing (HLS) control of a specific mobile hydraulic application, a telehandler, can be replaced with electronic control, i.e. Electronic Load Sensing (ELS). The motivation is the potential of improved dynamic performance and power utilization, along with reducing the mechanical complexity by moving traditional hydro-mechanical implemented features such as pressure control...

  13. An Algorithm of an X-ray Hit Allocation to a Single Pixel in a Cluster and Its Test-Circuit Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Deptuch, G. W. [AGH-UST, Cracow; Fahim, F. [Fermilab; Grybos, P. [AGH-UST, Cracow; Hoff, J. [Fermilab; Maj, P. [AGH-UST, Cracow; Siddons, D. P. [Brookhaven; Kmon, P. [AGH-UST, Cracow; Trimpl, M. [Fermilab; Zimmerman, T. [Fermilab

    2017-05-06

    An on-chip implementable algorithm for allocating an X-ray photon imprint, called a hit, to a single pixel in the presence of charge sharing in a highly segmented pixel detector is described. Its proof-of-principle implementation is also given, supported by the results of tests using a highly collimated X-ray photon beam from a synchrotron source. The algorithm handles asynchronous arrivals of X-ray photons. Activation of groups of pixels, comparison of peak amplitudes of pulses within an active neighborhood, and latching of the results of these comparisons constitute the three procedural steps of the algorithm. A grouping of pixels into one virtual pixel that recovers composite signals, and event-driven strobes that control comparisons of fractional signals between neighboring pixels, are the actuators of the algorithm. The circuitry necessary to implement the algorithm requires an extensive inter-pixel connection grid of analog and digital signals exchanged between pixels. A test-circuit implementation of the algorithm was achieved with a small array of 32×32 pixels, and the device was exposed to an 8 keV X-ray beam highly collimated to a diameter of 3 μm. The results of these tests are given in the paper, assessing the physical implementation of the algorithm.
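
    The comparison-and-latch step lends itself to a compact software model. Below is a minimal Python sketch of the winner-take-all allocation described above; the threshold handling and the 3×3 neighborhood are assumptions made for illustration, not the exact on-chip logic:

```python
import numpy as np

def allocate_hits(amplitudes, threshold):
    """Software sketch of winner-take-all hit allocation.

    amplitudes : 2-D array of per-pixel peak amplitudes for one event window.
    A pixel keeps the hit only if it is above threshold and its amplitude
    is the maximum within its 3x3 neighborhood (ties broken arbitrarily).
    """
    rows, cols = amplitudes.shape
    hits = np.zeros_like(amplitudes, dtype=bool)
    for r in range(rows):
        for c in range(cols):
            if amplitudes[r, c] < threshold:
                continue  # pixel not activated by this event
            r0, r1 = max(r - 1, 0), min(r + 2, rows)
            c0, c1 = max(c - 1, 0), min(c + 2, cols)
            # compare against the active neighborhood; allocate to local max
            hits[r, c] = amplitudes[r, c] >= amplitudes[r0:r1, c0:c1].max()
    return hits
```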

  14. NBA-Palm: prediction of palmitoylation site implemented in Naïve Bayes algorithm

    Directory of Open Access Journals (Sweden)

    Jin Changjiang

    2006-10-01

    Full Text Available Abstract Background Protein palmitoylation, an essential and reversible post-translational modification (PTM, has been implicated in cellular dynamics and plasticity. Although numerous experimental studies have been performed to explore the molecular mechanisms underlying palmitoylation processes, the intrinsic feature of substrate specificity has remained elusive. Thus, computational approaches for palmitoylation prediction are much desirable for further experimental design. Results In this work, we present NBA-Palm, a novel computational method based on Naïve Bayes algorithm for prediction of palmitoylation site. The training data is curated from scientific literature (PubMed and includes 245 palmitoylated sites from 105 distinct proteins after redundancy elimination. The proper window length for a potential palmitoylated peptide is optimized as six. To evaluate the prediction performance of NBA-Palm, 3-fold cross-validation, 8-fold cross-validation and Jack-Knife validation have been carried out. Prediction accuracies reach 85.79% for 3-fold cross-validation, 86.72% for 8-fold cross-validation and 86.74% for Jack-Knife validation. Two more algorithms, RBF network and support vector machine (SVM, also have been employed and compared with NBA-Palm. Conclusion Taken together, our analyses demonstrate that NBA-Palm is a useful computational program that provides insights for further experimentation. The accuracy of NBA-Palm is comparable with our previously described tool CSS-Palm. The NBA-Palm is freely accessible from: http://www.bioinfo.tsinghua.edu.cn/NBA-Palm.

  15. NBA-Palm: prediction of palmitoylation site implemented in Naïve Bayes algorithm.

    Science.gov (United States)

    Xue, Yu; Chen, Hu; Jin, Changjiang; Sun, Zhirong; Yao, Xuebiao

    2006-10-17

    Protein palmitoylation, an essential and reversible post-translational modification (PTM), has been implicated in cellular dynamics and plasticity. Although numerous experimental studies have been performed to explore the molecular mechanisms underlying palmitoylation processes, the intrinsic feature of substrate specificity has remained elusive. Thus, computational approaches for palmitoylation prediction are much desirable for further experimental design. In this work, we present NBA-Palm, a novel computational method based on Naïve Bayes algorithm for prediction of palmitoylation site. The training data is curated from scientific literature (PubMed) and includes 245 palmitoylated sites from 105 distinct proteins after redundancy elimination. The proper window length for a potential palmitoylated peptide is optimized as six. To evaluate the prediction performance of NBA-Palm, 3-fold cross-validation, 8-fold cross-validation and Jack-Knife validation have been carried out. Prediction accuracies reach 85.79% for 3-fold cross-validation, 86.72% for 8-fold cross-validation and 86.74% for Jack-Knife validation. Two more algorithms, RBF network and support vector machine (SVM), also have been employed and compared with NBA-Palm. Taken together, our analyses demonstrate that NBA-Palm is a useful computational program that provides insights for further experimentation. The accuracy of NBA-Palm is comparable with our previously described tool CSS-Palm. The NBA-Palm is freely accessible from: http://www.bioinfo.tsinghua.edu.cn/NBA-Palm.
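
    The position-specific Naive Bayes model over fixed-length peptide windows described in these two records can be sketched compactly. The toy sequences below are invented; only the 6-residue window length follows the abstract:

```python
from collections import defaultdict
import math

def train_nb(windows, labels, pseudo=1.0):
    """Train a position-specific Naive Bayes model on fixed-length
    peptide windows (e.g. 6 residues around a candidate site)."""
    counts = {0: defaultdict(lambda: pseudo), 1: defaultdict(lambda: pseudo)}
    totals = {0: 0, 1: 0}
    for w, y in zip(windows, labels):
        totals[y] += 1
        for pos, aa in enumerate(w):
            counts[y][(pos, aa)] += 1     # per-position amino acid counts
    return counts, totals

def predict_nb(window, counts, totals):
    """Return the class with the larger log-posterior (Laplace smoothing)."""
    n = sum(totals.values())
    scores = {}
    for y in (0, 1):
        s = math.log(totals[y] / n)
        for pos, aa in enumerate(window):
            s += math.log(counts[y][(pos, aa)] / (totals[y] + 20))  # 20 AAs
        scores[y] = s
    return max(scores, key=scores.get)

# Hypothetical toy data: 1 = palmitoylated site context, 0 = background
windows = ["MLCCMR", "GKLCSS", "AAAAAA", "GGSGGS"]
labels  = [1, 1, 0, 0]
model = train_nb(windows, labels)
print(predict_nb("MLCCMR", *model))
```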

  16. Design and Implementation of PV based Energy Harvester for WSN Node with MAIC algorithm

    Directory of Open Access Journals (Sweden)

    RAJENDRAN, H.

    2015-05-01

    Full Text Available Wireless sensor networks (WSNs) are in real need of an additional source of power, other than the normally used batteries, to increase their lifetime considerably. In this paper, mathematical modeling of a photovoltaic energy harvesting (PVEH) system for a WSN is presented. The system comprises a solar PV panel, a boost converter acting as maximum power point tracker with a moving-averaged incremental conductance (MAIC) maximum power point (MPP) algorithm, a Ni-MH battery for energy storage, a compensator, a buck regulator, and a mathematically modeled WSN mote. The MAIC algorithm is proposed to avoid the effect of drastic variations in input irradiance on locking the MPP. The WSN mote is modeled in both active and sleep states based on power consumption. To maintain voltage stability, a proper compensator has been designed for the proposed system. The performance of the system is tested under dynamic variations of environmental conditions using MATLAB simulation. The proposed system has 50 to 60 percent improved conversion efficiency compared to the conventional direct coupling method. The parameters of the photovoltaic panel model have been validated through experimentation, and the practical operation of the MPPT circuit has also been verified.
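
    As an illustration of the MAIC idea, here is a minimal Python sketch of an incremental conductance tracker with moving-averaged measurements; the window size, step size, and the boost-converter sign convention are assumptions, not the paper's design values:

```python
from collections import deque

class MAICTracker:
    """Sketch of a moving-averaged incremental conductance (MAIC) tracker.

    Voltage/current samples are smoothed with a moving average before the
    incremental conductance test, so drastic irradiance changes do not
    break the MPP lock; window, step and sign convention are assumptions.
    """

    def __init__(self, window=8, step=0.01):
        self.v_hist = deque(maxlen=window)
        self.i_hist = deque(maxlen=window)
        self.step = step                       # duty-cycle change per update
        self.prev_v = None
        self.prev_i = None

    def update(self, v_sample, i_sample, duty):
        self.v_hist.append(v_sample)
        self.i_hist.append(i_sample)
        v = sum(self.v_hist) / len(self.v_hist)   # moving-averaged voltage
        i = sum(self.i_hist) / len(self.i_hist)   # moving-averaged current
        if self.prev_v is not None and abs(v - self.prev_v) > 1e-6:
            dv = v - self.prev_v
            di = i - self.prev_i
            # at the MPP dI/dV == -I/V; dP/dV > 0 means we are left of it
            if di / dv > -i / v:
                duty -= self.step   # raise panel voltage (boost converter)
            elif di / dv < -i / v:
                duty += self.step   # lower panel voltage
        self.prev_v, self.prev_i = v, i
        return min(max(duty, 0.0), 1.0)
```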

  17. An effective, robust and parallel implementation of an interior point algorithm for limit state optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Frier, Christian

    2014-01-01

    A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium...

  18. Algorithms for the extension of precise and imprecise conditional probability assessments: an implementation with maple V

    Directory of Open Access Journals (Sweden)

    Veronica Biazzo

    2000-05-01

    Full Text Available In this paper, we illustrate an implementation with Maple V of some procedures which allow one to exactly propagate precise and imprecise probability assessments. The extension of imprecise assessments is based on a suitable generalization of de Finetti's concept of coherence. The procedures described are supported by some examples and relevant cases.

  19. Development and implementation of algorithms in a population of cooperative autonomous mobile robots

    CSIR Research Space (South Africa)

    Namoshe, M

    2007-10-01

    Full Text Available An increase in the number of mobile robot users has led to the design and implementation of cooperative autonomous mobile robots. Autonomous robots require the ability to build maps of an unknown environment while simultaneously using these maps...

  20. Design and implementation of an algorithm for creating templates for the purpose of iris biometric authentication through the analysis of textures implemented on a FPGA

    International Nuclear Information System (INIS)

    Giacometto, F J; Vilardy, J M; Torres, C O; Mattos, L

    2011-01-01

    Problems related to security in access control are currently being addressed through applications that work with characteristics unique to each individual, such as biometric features. Working with biometric images, such as the liveness of the iris together with the pattern of its blood vessels, has become important worldwide. This paper presents an FPGA implementation of an algorithm for creating templates for biometric authentication from ocular features; the object of study is the iris texture pattern, which is unique to each individual. The authentication is based on processes such as edge extraction, segmentation following the principles of John Daugman and Libor Masek, and normalization, to obtain the templates needed to search for matches in a database and then produce the expected authentication results.

  1. Design and implementation of an algorithm for creating templates for the purpose of iris biometric authentication through the analysis of textures implemented on a FPGA

    Energy Technology Data Exchange (ETDEWEB)

    Giacometto, F J; Vilardy, J M; Torres, C O; Mattos, L, E-mail: franciscogiacometto@unicesar.edu.co [Laboratorio de Optica e Informatica, Universidad Popular del Cesar, Sede balneario Hurtado, Valledupar, Cesar (Colombia)

    2011-01-01

    Problems related to security in access control are currently being addressed through applications that work with characteristics unique to each individual, such as biometric features. Working with biometric images, such as the liveness of the iris together with the pattern of its blood vessels, has become important worldwide. This paper presents an FPGA implementation of an algorithm for creating templates for biometric authentication from ocular features; the object of study is the iris texture pattern, which is unique to each individual. The authentication is based on processes such as edge extraction, segmentation following the principles of John Daugman and Libor Masek, and normalization, to obtain the templates needed to search for matches in a database and then produce the expected authentication results.

  2. Long-term power generation expansion planning with short-term demand response: Model, algorithms, implementation, and electricity policies

    Science.gov (United States)

    Lohmann, Timo

    Electric sector models are powerful tools that guide policy makers and stakeholders. Long-term power generation expansion planning models are a prominent example and determine a capacity expansion for an existing power system over a long planning horizon. With the changes in the power industry away from monopolies and regulation, the focus of these models has shifted to competing electric companies maximizing their profit in a deregulated electricity market. In recent years, consumers have started to participate in demand response programs, actively influencing electricity load and price in the power system. We introduce a model that features investment and retirement decisions over a long planning horizon of more than 20 years, as well as an hourly representation of day-ahead electricity markets in which sellers of electricity face buyers. This combination makes our model both unique and challenging to solve. Decomposition algorithms, and especially Benders decomposition, can exploit the model structure. We present a novel method that can be seen as an alternative to generalized Benders decomposition and relies on dynamic linear overestimation. We prove its finite convergence and present computational results, demonstrating its superiority over traditional approaches. In certain special cases of our model, all necessary solution values in the decomposition algorithms can be directly calculated and solving mathematical programming problems becomes entirely obsolete. This leads to highly efficient algorithms that drastically outperform their programming problem-based counterparts. Furthermore, we discuss the implementation of all tailored algorithms and the challenges from a modeling software developer's standpoint, providing an insider's look into the modeling language GAMS. Finally, we apply our model to the Texas power system and design two electricity policies motivated by the U.S. Environment Protection Agency's recently proposed CO2 emissions targets for the

  3. Simulation, hardware implementation and control of a multilevel inverter with simulated annealing algorithm

    Directory of Open Access Journals (Sweden)

    Fayçal Chabni

    2017-09-01

    Full Text Available Harmonic pollution is a very common issue in the field of power electronics; harmonics can cause multiple problems for power converters and electrical loads alike. This paper introduces a modulation method called selective harmonic elimination pulse width modulation (SHEPWM), which allows the elimination of specific harmonic orders and also control of the amplitude of the fundamental component of the output voltage. In this work, the SHEPWM strategy is applied to a five-level cascaded inverter. The objective of this study is to demonstrate the total control provided by the SHEPWM strategy over any rank of harmonics, using the simulated annealing optimization algorithm, while regulating the amplitude of the fundamental component at any desired value. Simulation and experimental results are presented.
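
    To make the optimization step concrete, here is a minimal simulated annealing sketch in Python for a two-angle (five-level) SHE problem. The modulation index, the choice of the 5th harmonic as the one to eliminate, and the cooling schedule are illustrative assumptions rather than the paper's exact setup:

```python
import math, random

random.seed(1)

M = 0.8  # desired per-unit fundamental amplitude (assumed value)

def cost(a):
    a1, a2 = a
    if not (0 < a1 < a2 < math.pi / 2):
        return float("inf")                     # enforce angle ordering
    f1 = math.cos(a1) + math.cos(a2) - 2 * M    # fundamental amplitude error
    f5 = math.cos(5 * a1) + math.cos(5 * a2)    # 5th harmonic to eliminate
    return f1 * f1 + f5 * f5

a = [0.3, 1.0]                  # initial switching angles (radians)
T = 1.0
while T > 1e-6:
    cand = [x + random.gauss(0, 0.05) for x in a]
    d = cost(cand) - cost(a)
    if d < 0 or random.random() < math.exp(-d / T):
        a = cand                # accept better, or occasionally worse, moves
    T *= 0.999                  # geometric cooling schedule
print("angles (deg):", [round(math.degrees(x), 2) for x in a], "cost:", cost(a))
```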

  4. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    Science.gov (United States)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium to long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct order at the model identification stage and in finding the right parameter estimates. This paper presents the development of a Genetic Algorithm heuristic for solving the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed the single traditional Box-Jenkins model.

  5. Development and Validation of a Spike Detection and Classification Algorithm Aimed at Implementation on Hardware Devices

    Directory of Open Access Journals (Sweden)

    E. Biffi

    2010-01-01

    Full Text Available Neurons cultured in vitro on MicroElectrode Array (MEA) devices connect to each other, forming a network. To study electrophysiological activity and long-term plasticity effects, long-period recording and spike-sorting methods are needed. Therefore, on-line and real-time analysis, optimization of memory use, and improvement of the data transmission rate become necessary. We developed an algorithm for amplitude-threshold spike detection, whose performance was verified with (a) statistical analysis on both simulated and real signals and (b) Big O notation. Moreover, we developed a PCA-hierarchical classifier, evaluated on simulated and real signals. Finally, we proposed a spike detection hardware design on FPGA, whose feasibility was verified in terms of the number of CLBs, memory occupation, and timing requirements; once realized, it will be able to execute on-line detection and real-time waveform analysis, reducing data storage problems.
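
    Amplitude-threshold detection of the kind described reduces to a few lines in software. The sketch below uses a robust (median-based) noise estimate and a refractory period, both common choices standing in for the paper's unstated parameters:

```python
import numpy as np

def detect_spikes(signal, fs, k=5.0, refractory_ms=1.0):
    """Amplitude-threshold spike detection sketch.

    The threshold is k times a robust noise estimate (median absolute
    deviation); the original paper's exact rule may differ.
    """
    noise_sigma = np.median(np.abs(signal)) / 0.6745   # robust sigma estimate
    thr = k * noise_sigma
    refractory = int(fs * refractory_ms / 1000.0)
    spikes, last = [], -refractory
    for n, x in enumerate(signal):
        if abs(x) > thr and n - last >= refractory:
            spikes.append(n)       # spike time in samples
            last = n
    return np.array(spikes), thr
```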

  6. FPGA-based real-time phase measuring profilometry algorithm design and implementation

    Science.gov (United States)

    Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng

    2016-11-01

    Phase measuring profilometry (PMP) has been widely used in many fields, such as computer-aided verification (CAV) and flexible manufacturing systems (FMS). High frame-rate (HFR) real-time vision-based feedback control will be a common demand in the near future. However, the instruction time delay in the computer caused by numerous repetitive operations greatly limits the efficiency of data processing. FPGAs have the advantages of a pipelined architecture and parallel execution, and they are well suited to the PMP algorithm. In this paper, we design a fully pipelined hardware architecture for PMP. The hardware architecture includes functions for rectification, phase calculation, phase shifting, and stereo matching. Experiments verified the performance of this method, and the factors that may influence the computation accuracy were analyzed.

  7. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    Science.gov (United States)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    In order to improve the precision of an optical-electric tracking device, an improved design based on MEMS is proposed. To address the tracking error and random drift of the gyroscope sensor, an AR model of the gyro random error is established according to the principles of time series analysis of random sequences, and the gyro output signals are filtered repeatedly with a Kalman filter. An ARM microcontroller controls the servo motor through a fuzzy PID full closed-loop control algorithm, with lead compensation and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead compensation link shortens the response to input signals and thereby reduces errors. A wireless video module gathers video signals and sends them to the upper computer, where remote monitoring software (Visual Basic 6.0) displays the servo motor state in real time. A detailed analysis of the main error sources is also given: by quantifying the errors contributed by the bandwidth and the gyro sensor, the proportion of each error in the total error becomes more intuitive, and consequently the system error is decreased. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.

  8. Faster implementation of the hierarchical search algorithm for detection of gravitational waves from inspiraling compact binaries

    International Nuclear Information System (INIS)

    Sengupta, Anand S.; Dhurandhar, Sanjeev; Lazzarini, Albert

    2003-01-01

    The first scientific runs of kilometer scale laser interferometric detectors such as LIGO are under way. Data from these detectors will be used to look for signatures of gravitational waves from astrophysical objects such as inspiraling neutron-star-black-hole binaries using matched filtering. The computational resources required for online flat-search implementation of the matched filtering are large if searches are carried out for a small total mass. A flat search is implemented by constructing a single discrete grid of densely populated template waveforms spanning the dynamical parameters--masses, spins--which are correlated with the interferometer data. The correlations over the kinematical parameters can be maximized a priori without constructing a template bank over them. Mohanty and Dhurandhar showed that a significant reduction in computational resources can be accomplished by using a hierarchy of such template banks where candidate events triggered by a sparsely populated grid are followed up by the regular, dense flat-search grid. The estimated speedup in this method was a factor ∼25 over the flat search. In this paper we report an improved implementation of the hierarchical search, wherein we extend the domain of hierarchy to an extra dimension--namely, the time of arrival of the signal in the bandwidth of the interferometer. This is accomplished by lowering the Nyquist sampling rate of the signal in the trigger stage. We show that this leads to further improvement in the efficiency of data analysis and speeds up the online computation by a factor of ∼65-70 over the flat search. We also take into account and discuss issues related to template placement, trigger thresholds, and other peculiar problems that do not arise in earlier implementation schemes of the hierarchical search. We present simulation results for 2PN waveforms embedded in the noise expected for initial LIGO detectors

  9. Eco-physiological Baltic picoplankton analysis and its implementation in Synechococcus species life cycle numerical algorithm

    Science.gov (United States)

    Cieszyńska, Agata; Śliwińska-Wilczewska, Sylwia

    2017-04-01

    mixtures of conditions were applied in the laboratory experiments. Results from these experiments were the foundation for creating the picocyanobacteria life cycle algorithm (pico-bioalgorithm). The form of the algorithm is based on the Ecological Regional Ocean Model formulas for functional phytoplankton groups. Accordingly, the pico-bioalgorithm includes the dependence on the temperature and salinity of the water body and the occurrence of nutrients, along with coefficients determining the mortality of picoplankton cells and coefficients of respiration and growth rates. In order to prescribe the limiting properties, a modified Michaelis-Menten formula with squared arguments was used as the limiting function. Picoplanktonic organisms are very specific and can live in environments which might initially be considered impossible for such organisms to survive in. The issue of picoplanktonic species inhabiting the Baltic Sea needs to be explored in detail, and the present study and proposed algorithm are an important step in this scientific exploration. This work has been funded by the National Centre of Science project (contract number: 2012/07/N/ST10/03485) entitled "Improved understanding of phytoplankton blooms in the Baltic Sea based on numerical models and existing data sets". The author (AC) received funding from the National Centre of Science doctoral scholarship program (contract number: 2016/20/T/ST10/00214).
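
    For reference, a Michaelis-Menten limitation with squared arguments is commonly written as follows; the symbols and the half-saturation constant are an assumed notation, since the paper's exact coefficients are not given here:

```latex
f(S) = \frac{S^{2}}{K_{S}^{2} + S^{2}}, \qquad 0 \le f(S) < 1
```

    where S is the nutrient concentration and K_S the half-saturation constant; squaring both arguments steepens the response around K_S compared with the classical S/(K_S + S) form.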

  10. Implementation and optimization of ultrasound signal processing algorithms on mobile GPU

    Science.gov (United States)

    Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong

    2014-03-01

    A general-purpose graphics processing unit (GPGPU) has been used to improve computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to handle 3D games and videos at high frame rates on Full HD or HD displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, the shader design was optimized and the load was shared between the vertex and fragment shaders. Beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance was evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard with the same signal path (reported PSNR: 11.31); the CNR was also analyzed to verify the method. From the mobile GPU implementation, frame rates of 57.6 Hz were achieved, and the total execution time was 17.4 ms, which was faster than the acquisition time (34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on the smartphone.

  11. A Fast C++ Implementation of Neural Network Backpropagation Training Algorithm: Application to Bayesian Optimal Image Demosaicing

    Directory of Open Access Journals (Sweden)

    Yi-Qing Wang

    2015-09-01

    Full Text Available Recent years have seen a surge of interest in multilayer neural networks, fueled by their successful applications in numerous image processing and computer vision tasks. In this article, we describe a C++ implementation of stochastic gradient descent to train a multilayer neural network, where a fast and accurate acceleration of tanh(·) is achieved with linear interpolation. As an example application, we present a neural network able to deliver state-of-the-art performance in image demosaicing.
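
    The tanh acceleration mentioned above amounts to a table lookup with linear interpolation between precomputed samples. A minimal Python sketch follows; the table size and clamping range are assumptions, and the article's C++ implementation will differ in detail:

```python
import numpy as np

class TanhTable:
    """tanh approximation via table lookup plus linear interpolation,
    in the spirit of the acceleration described above (sizes assumed)."""

    def __init__(self, x_max=8.0, n=4096):
        self.x_max = x_max
        self.step = 2.0 * x_max / n
        self.xs = np.linspace(-x_max, x_max, n + 1)
        self.ys = np.tanh(self.xs)            # precomputed samples

    def __call__(self, x):
        x = np.clip(x, -self.x_max, self.x_max)
        idx = ((x + self.x_max) / self.step).astype(int)
        idx = np.minimum(idx, len(self.xs) - 2)
        t = (x - self.xs[idx]) / self.step    # fractional position in cell
        return self.ys[idx] * (1 - t) + self.ys[idx + 1] * t

fast_tanh = TanhTable()
x = np.linspace(-4, 4, 7)
print(np.max(np.abs(fast_tanh(x) - np.tanh(x))))   # interpolation error
```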

  12. A hybrid algorithm of BSC and QFD to determine the criteria affecting implementation of successful outsourcing

    Directory of Open Access Journals (Sweden)

    Mohammad Hemati

    2012-04-01

    Full Text Available Successful organizations share some identical factors that pave the way for their success. Among these factors, strategic management is the key to success for organizations seeking to contribute more to today's competitive world market. In this respect, the pivotal role of outsourcing cannot be denied. This research aligns the criteria affecting outsourcing success, as presented in the Elmuti model, with the balanced scorecard method at the Tose'e Ta'avon Bank. Questionnaires and interviews with experts helped determine the strategic goals for the four perspectives of the balanced scorecard, and relative weights were computed for each perspective using the AHP method. As the next step, the indexes were prioritized by applying the quality function deployment (QFD) technique, with the strategic goals of the four perspectives in the "WHAT" section and the outsourcing success criteria of the Elmuti model in the "HOW" section. At the end of the algorithm, the results are compared with the Elmuti method. Based on the results, the proposed hybrid technique seems to perform better than the Elmuti method.

  13. Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm

    Science.gov (United States)

    Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing

    2018-03-01

    As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, this paper builds on the existing fast first-order moment algorithm to present a novel multiplier-less structure that calculates any number of sequential filtering results in parallel. Theoretical analysis of its hardware and time complexities reveals that, by appropriately setting the degree of parallelism and the decomposition factor for a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different tap counts, along with the existing 2-D memoryless-based filters, are synthesized with Synopsys Design Compiler using a 0.18-μm SMIC library. The comparisons show that the proposed design has lower area-time complexity and power consumption when the number of filter taps is larger than 48.
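
    The reformulation underlying this structure groups taps by quantized coefficient value, so each inner product becomes a first-order moment that hardware can evaluate with shifts and adds. A small behavioral sketch in Python (with invented coefficients) checks the idea against direct convolution:

```python
import numpy as np

def fir_first_order_moment(x, h_int):
    """FIR filtering via the first-order moment reformulation.

    h_int : integer (quantized) coefficients. For each output sample the
    taps are grouped by coefficient value v, giving y = sum_v v * T[v],
    which hardware can evaluate with shifts and adds instead of multipliers.
    """
    taps = len(h_int)
    y = np.zeros(len(x) - taps + 1)
    for n in range(len(y)):
        window = x[n:n + taps][::-1]       # x[n+taps-1], ..., x[n]
        T = {}
        for v, xi in zip(h_int, window):
            if v:
                T[v] = T.get(v, 0.0) + xi  # group inputs sharing coefficient v
        y[n] = sum(v * s for v, s in T.items())
    return y

h = [1, 3, 3, 1]                            # small integer coefficient set
x = np.arange(10, dtype=float)
print(fir_first_order_moment(x, h))
print(np.convolve(x, h, mode="valid"))      # reference: identical result
```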

  14. The implementation of an automated tracking algorithm for the track detection of migratory anticyclones affecting the Mediterranean

    Science.gov (United States)

    Hatzaki, Maria; Flocas, Elena A.; Simmonds, Ian; Kouroutzoglou, John; Keay, Kevin; Rudeva, Irina

    2013-04-01

    Migratory cyclones and anticyclones largely account for short-term weather variations in extra-tropical regions. In contrast to cyclones, which have drawn major scientific attention due to their direct link to active weather and precipitation, climatological studies on anticyclones are limited, even though they too are associated with extreme weather phenomena and play an important role in global and regional climate. This is especially true for the Mediterranean, a region particularly vulnerable to climate change, where the little research that has been done is essentially confined to manual analysis of synoptic charts. To construct a comprehensive climatology of migratory anticyclonic systems in the Mediterranean using an objective methodology, the Melbourne University automatic tracking algorithm is applied to the ERA-Interim reanalysis mean sea level pressure database. The algorithm's reliability in accurately capturing the weather patterns and synoptic climatology of transient activity has been widely proven. It has been extensively applied to cyclone studies worldwide, including the Mediterranean, though its use for anticyclone tracking has been limited to the Southern Hemisphere. In this study, the performance of the tracking algorithm under different data resolutions and different choices of parameter settings is examined. Our focus is on appropriately modifying the algorithm to efficiently capture the individual characteristics of anticyclonic tracks in the Mediterranean, a closed basin with complex topography. We show that the number of detected anticyclonic centers and the resulting tracks depend largely upon the data resolution and the search radius. We also find that anticyclones of different scales and secondary centers that lie within larger anticyclonic structures can be adequately represented; this is important, since the extensions of major

  15. MATLAB algorithm to implement soil water data assimilation with the Ensemble Kalman Filter using HYDRUS.

    Science.gov (United States)

    Valdes-Abellan, Javier; Pachepsky, Yakov; Martinez, Gonzalo

    2018-01-01

    Data assimilation is becoming a promising technique in hydrologic modelling, updating not only model states but also inferring model parameters, specifically soil hydraulic properties in Richards-equation-based soil water models. The Ensemble Kalman Filter (EnKF) is one of the most widely employed methods among the different data assimilation alternatives. In this study, the complete MATLAB code used to study soil data assimilation efficiency under different soil and climatic conditions is presented. The code shows how data assimilation through the EnKF was implemented; the Richards equation was solved with the Hydrus-1D software, which was run from MATLAB. •MATLAB routines are released to be used/modified without restrictions by other researchers. •Code for the data assimilation Ensemble Kalman Filter method. •Soil water Richards equation flow solved by Hydrus-1D.
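
    The released code is MATLAB; as a language-neutral reference for the analysis step it implements, here is a minimal stochastic (perturbed-observations) EnKF update in Python with a linear observation operator. The Hydrus-1D state vector and observation operator of the paper are not reproduced here:

```python
import numpy as np

def enkf_update(X, y, H, R, rng=np.random.default_rng(0)):
    """One EnKF analysis step (stochastic, perturbed-observations form).

    X : (n_state, n_ens) forecast ensemble of model states
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation error covariance
    """
    n_ens = X.shape[1]
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                    # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                     # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # perturb observations so the analysis ensemble keeps correct spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)
```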

  16. Software for evaluating magnetic induction field generated by power lines: implementation of a new algorithm

    International Nuclear Information System (INIS)

    Comelli, M.; Benes, M.; Bampo, A.; Villalta, R.

    2006-01-01

    The Regional Environment Protection Agency of Friuli Venezia Giulia (A.R.P.A. F.V.G., Italy) has performed an analysis of existing software designed to calculate the magnetic induction field generated by power lines. As far as the agency's requirements are concerned, the tested programs display some difficulties in the immediate processing of the electrical and geometrical data supplied by plant owners, and in certain cases turn out to be inadequate for representing complex configurations of power lines. Furthermore, none of them is set up for cyclic calculation to determine the time evolution of the induction in a given exposure area. Finally, the output data are not immediately importable into ArcView, the G.I.S. used by A.R.P.A. F.V.G., and it is not always possible to incorporate the terrain orography to determine the field at specified heights above the ground. P.h.i.d.e.l., an innovative software package, tackles and works out all the above-mentioned problems. The power line wires are represented by polylines, and the field is calculated analytically, with no further approximation, not even when multiple power lines are involved. Therefore, the obtained results, when compared with those of other programs, are the closest to experimental measurements. The output data can be used in both G.I.S. and Excel environments, allowing immediate overlaying on digital cartography and determination of the 3 and 10 μT bands, in compliance with the Italian Decree of the President of the Council of Ministers of 8 July 2003. (authors)

  17. Implementation of electron beam position measurement algorithm and embedded web server using MCS-51 microcontroller for Booster Synchrotron

    International Nuclear Information System (INIS)

    Shrivastava, B.B.; Chouhan, Manish; Puntambekar, T.A.; Tiwari, A.N.

    2015-01-01

    The Booster Synchrotron at RRCAT serves as the injector machine for Indus-1 and Indus-2, with a repetition rate of 1 Hz. In the Booster Synchrotron, the energy of the electron bunches is increased from 20 MeV to 450 MeV (in ∼ 280 ms) for Indus-1 and to 550 MeV (in ∼ 340 ms) for Indus-2. An algorithm for a microcontroller-based beam position measurement system has been developed for the Booster Synchrotron to measure the fast changes in the beam position of the electron bunches during energy ramping. In this paper, the software implementation in the microcontroller and its optimization to achieve a beam position update rate of 1 kHz are discussed. (author)

  18. Mathematical analysis and algorithms for efficiently and accurately implementing stochastic simulations of short-term synaptic depression and facilitation

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    2013-05-01

    Full Text Available The release of neurotransmitter vesicles after arrival of a pre-synaptic action potential at cortical synapses is known to be a stochastic process, as is the availability of vesicles for release. These processes are known to also depend on the recent history of action-potential arrivals, and this can be described in terms of time-varying probabilities of vesicle release. Mathematical models of such synaptic dynamics frequently are based only on the mean number of vesicles released by each pre-synaptic action potential, since if it is assumed there are sufficiently many vesicle sites, then variance is small. However, it has been shown recently that variance across sites can be significant for neuron and network dynamics, and this suggests the potential importance of studying short-term plasticity using simulations that do generate trial-to-trial variability. Therefore, in this paper we study several well-known conceptual models for stochastic availability and release. We state explicitly the random variables that these models describe and propose efficient algorithms for accurately implementing stochastic simulations of these random variables in software or hardware. Our results are complemented by mathematical analysis and statement of pseudo-code algorithms.
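
    To make this class of models concrete, here is a minimal trial-level stochastic simulation of vesicle depletion and recovery in Python; the site count, release probability, and recovery time constant are illustrative values, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_release(spike_times, n_sites=10, p_release=0.5, tau_rec=0.5):
    """One stochastic trial of vesicle depletion and recovery.

    Each of n_sites release sites holds at most one vesicle. On each
    pre-synaptic spike, every occupied site releases independently with
    probability p_release; an empty site refills during an interval dt
    with probability 1 - exp(-dt / tau_rec). Values are illustrative.
    """
    occupied = np.ones(n_sites, dtype=bool)
    last_t = spike_times[0]
    released_counts = []
    for t in spike_times:
        p_refill = 1.0 - np.exp(-(t - last_t) / tau_rec)
        occupied |= (~occupied) & (rng.random(n_sites) < p_refill)
        release = occupied & (rng.random(n_sites) < p_release)
        occupied &= ~release
        released_counts.append(int(release.sum()))
        last_t = t
    return released_counts

# a burst followed by recovery: expect depression within the burst
print(simulate_release([0.00, 0.02, 0.04, 0.06, 0.60]))
```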

  19. A neural network based implementation of an MPC algorithm applied in the control systems of electromechanical plants

    Science.gov (United States)

    Marusak, Piotr M.; Kuntanapreeda, Suwat

    2018-01-01

    The paper considers the application of a neural network based implementation of a model predictive control (MPC) algorithm to electromechanical plants. The properties of such plants imply that a relatively short sampling time should be used; however, in that case, finding the control value numerically may be too time-consuming. Therefore, the current paper tests a solution based on transforming the MPC optimization problem into a set of differential equations whose solution is the same as that of the original optimization problem. This set of differential equations can be interpreted as a dynamic neural network. In such an approach, the constraints can be introduced into the optimization problem with relative ease. Moreover, the solution of the optimization problem can be obtained faster than when a standard numerical quadratic programming routine is used, although very careful tuning of the algorithm is needed to achieve this. A DC motor and an electrohydraulic actuator are taken as illustrative examples. The feasibility and effectiveness of the proposed approach are demonstrated through numerical simulations.
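
    The dynamic-neural-network idea can be illustrated on a box-constrained quadratic program solved by integrating a projected gradient flow. The problem data below are invented and the integration is plain explicit Euler, so this is a sketch of the principle rather than the paper's network:

```python
import numpy as np

def neural_mpc_control(H, f, lb, ub, dt=0.01, n_steps=2000):
    """Solve the box-constrained MPC QP  min 0.5 u'Hu + f'u,  lb <= u <= ub
    by integrating a projected gradient flow, which is the 'dynamic neural
    network' view of the optimization (problem data are illustrative)."""
    u = np.clip(np.zeros_like(f), lb, ub)
    for _ in range(n_steps):
        grad = H @ u + f                     # gradient of the QP objective
        u = np.clip(u - dt * grad, lb, ub)   # Euler step plus projection
    return u

H = np.array([[2.0, 0.5], [0.5, 1.0]])       # positive definite Hessian
f = np.array([-1.0, 1.0])
print(neural_mpc_control(H, f, lb=np.array([-0.5, -0.5]),
                         ub=np.array([0.5, 0.5])))
```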

  20. Implementation of a Tour Guide Robot System Using RFID Technology and Viterbi Algorithm-Based HMM for Speech Recognition

    Directory of Open Access Journals (Sweden)

    Neng-Sheng Pai

    2014-01-01

    Full Text Available This paper applies speech recognition and RFID technologies to develop an omni-directional mobile robot into a robot with voice control and guided-tour functions. For speech recognition, the speech signals were captured by short-time processing. The speaker first recorded isolated words so that the robot could create a speech database for specific speakers. After pre-processing of this speech database, the feature parameters of the cepstrum and delta-cepstrum were obtained using linear predictive coding (LPC). The Hidden Markov Model (HMM) was then used for model training on the speech database, and the Viterbi algorithm was used to find an optimal state sequence as the reference sample for speech recognition. The trained reference model was loaded into the industrial computer on the robot platform, and the user uttered the isolated words to be tested. After processing by the same reference model, the path of maximum total probability across the various models, found using the Viterbi algorithm, gave the recognition result. Finally, the speech recognition and RFID systems were deployed in a real environment to prove their feasibility and stability, and were implemented in the omni-directional mobile robot.
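
    For reference, the decoding step the abstract relies on is the standard Viterbi recursion. A compact log-domain sketch in Python follows, with a toy two-state HMM whose probabilities are invented:

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Viterbi decoding: most likely state sequence for an observation
    sequence, given log initial (log_pi), transition (log_A) and
    emission (log_B) probabilities of an HMM."""
    n_states = len(log_pi)
    T = len(obs)
    delta = np.zeros((T, n_states))
    psi = np.zeros((T, n_states), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A    # all state transitions
        psi[t] = scores.argmax(axis=0)            # best predecessor per state
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1], delta[-1].max()

# Toy 2-state HMM over a 3-symbol alphabet (probabilities hypothetical)
pi = np.log([0.6, 0.4])
A  = np.log([[0.7, 0.3], [0.4, 0.6]])
B  = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))
```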

  1. Absorption cooling sources atmospheric emissions decrease by implementation of simple algorithm for limiting temperature of cooling water

    Science.gov (United States)

    Wojdyga, Krzysztof; Malicki, Marcin

    2017-11-01

    The constant striving to improve energy efficiency forces activities aimed at reducing energy consumption and hence the amount of pollutant emissions to the atmosphere. Cooling demand, both for air-conditioning and process cooling, plays an increasingly important role in the summer balance of the Polish electricity generation and distribution system. In recent years, demand for electricity during the summer months has been steadily and significantly increasing, leading to deficits of energy availability during particularly hot periods. This causes growing importance of, and interest in, trigeneration power sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, most often an absorption chiller based on a lithium bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise useless energy. The publication presents a simple algorithm designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air conditioning, by reducing the temperature of the cooling water, and its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental advantages has been rated for specific sources, which enabled evaluation and estimation of the effect of implementing the simple algorithm at sources existing nationally.

  2. Redesigned-Scale-Free CORDIC Algorithm Based FPGA Implementation of Window Functions to Minimize Area and Latency

    Directory of Open Access Journals (Sweden)

    Supriya Aggarwal

    2012-01-01

    Full Text Available One of the most important steps in spectral analysis is filtering, where window functions are generally used to design filters. In this paper, we modify the existing architecture for realizing window functions using a CORDIC processor. Firstly, we modify the conventional CORDIC algorithm to reduce its latency and area. The proposed CORDIC algorithm is completely scale-free for a range of convergence that spans the entire coordinate space. Secondly, we realize the window functions using a single CORDIC processor, as against two serially connected CORDIC processors in the existing technique, thus optimizing for area and latency. The linear CORDIC processor is replaced by a shift-add network, which drastically reduces the number of pipelining stages required in the existing design. The proposed design on average requires approximately 64% fewer pipeline stages and saves up to 44.2% area. Currently, the processor is designed to implement the Blackman windowing architecture, which with slight modifications can be extended to other window functions as well. The details of the proposed architecture are discussed in the paper.
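
    For reference, the Blackman window realized by the architecture is a three-term cosine series; a direct software computation (the hardware evaluates the cosines with CORDIC rotations instead) looks like this:

```python
import numpy as np

def blackman(N):
    """Classic Blackman window with the conventional coefficients
    (0.42, 0.5, 0.08), computed directly for reference."""
    n = np.arange(N)
    return (0.42
            - 0.5  * np.cos(2 * np.pi * n / (N - 1))
            + 0.08 * np.cos(4 * np.pi * n / (N - 1)))

w = blackman(64)
print(w.min(), w.max())   # near 0 at the edges, near 1.0 at the center
```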

  3. Implementation of dataflow programming based Fuzzy Logic algorithm for gas concentration index in around of Sidoarjo mudflow, Indonesia

    Directory of Open Access Journals (Sweden)

    Widasari Edita Rosana

    2018-01-01

    Full Text Available The Sidoarjo mudflow, also known as the Lapindo mudflow, has been erupting since 2006. It is located in Sidoarjo City, East Java, Indonesia, and the mudflow-affected area has a high air pollution level and a high health risk. Therefore, this paper implements a system that can categorize the level of air pollution into several categories. The air quality index is categorized using a fuzzy logic algorithm based on the concentrations of air pollutant parameters in the mudflow-affected area, and dataflow programming is used to process the fuzzy logic algorithm. Based on the results, the measurement of the air quality index in the mudflow-affected area has an accuracy rate of 93.92% in Siring Barat, 93.34% in Mindi, and 95.96% in Jatirejo. The methane concentration exceeds the quality standard even though the air quality index is safe; hence, the area is classified at the Hazardous level. In addition, Mindi has the highest and most stable methane concentration, which means that Mindi has high-risk air pollution.
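
    A fuzzy categorization of this kind is built from membership functions and a maximum-membership decision. The toy sketch below uses a single pollutant and invented breakpoints purely to show the mechanism; the actual system fuses several pollutant parameters with its own membership definitions and rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def air_quality_category(methane_ppm):
    """Toy categorization of one pollutant; breakpoints are hypothetical."""
    memberships = {
        "Good":      tri(methane_ppm, -1, 0, 500),
        "Moderate":  tri(methane_ppm, 300, 750, 1200),
        "Hazardous": tri(methane_ppm, 1000, 2000, 10**6),
    }
    # decide by maximum membership degree
    return max(memberships, key=memberships.get), memberships

print(air_quality_category(1100))
```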

  4. Estimating Cloud optical thickness from SEVIRI, for air quality research, by implementing a semi-analytical cloud retrieval algorithm

    Science.gov (United States)

    Pandey, Praveen; De Ridder, Koen; van Looy, Stijn; van Lipzig, Nicole

    2010-05-01

    Clouds play an important role in the Earth's climate system. Because they affect radiation, and hence photolysis rate coefficients (ozone formation), they also affect air quality at the Earth's surface. Thus, a satellite remote sensing technique is used to retrieve cloud properties for air quality research. The geostationary satellite Meteosat Second Generation (MSG) carries the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard; the channels at wavelengths of 0.6 µm and 1.64 µm are used to retrieve cloud optical thickness (COT). The study domain is over Europe, covering the region between 35°N-70°N and 5°W-30°E, centred over Belgium. The steps involved in pre-processing the EUMETSAT level 1.5 images are described, including acquisition of digital count numbers, radiometric conversion using offsets and slopes, estimation of radiance, and calculation of reflectance. The Sun-Earth-satellite geometry also plays an important role. A semi-analytical cloud retrieval algorithm (Kokhanovsky et al., 2003) is implemented for the estimation of COT. This approach does not involve the conventional look-up-table procedure, which makes the retrieval independent of numerical radiative transfer solutions. The semi-analytical algorithm is applied to a monthly dataset of SEVIRI level 1.5 images. The minimum reflectance in the visible channel at each pixel during the month is taken as the surface albedo of the pixel. Thus, the monthly variation of COT over the study domain is prepared. The result so obtained is compared with the COT products of the Satellite Application Facility on Climate Monitoring (CM SAF). Finally, an approach to assimilate the COT for air quality research is presented. Address of corresponding author: Praveen Pandey, VITO- Flemish Institute for Technological Research, Boeretang 200, B 2400, Mol, Belgium E-mail: praveen.pandey@vito.be

  5. Efficient Hardware Implementation of the Horn-Schunck Algorithm for High-Resolution Real-Time Dense Optical Flow Sensor

    Science.gov (United States)

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-01-01

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification allows a data throughput of 175 Mpixels/s to be achieved and makes processing of a Full HD video stream (1,920 × 1,080 @ 60 fps) possible. The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency, and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the optical flow dataset of Middlebury University. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
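
    As a software reference for the iteration the pipeline implements, here is the classical Horn-Schunck update in Python. The neighborhood average follows the original 1981 formulation, while the derivative estimates are simplified; the FPGA design's fixed-point details naturally differ:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, n_iters=100):
    """Reference software implementation of Horn-Schunck optical flow;
    the FPGA architecture above pipelines this same iteration."""
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    # derivative estimates averaged over the two frames
    Ix = 0.5 * (np.gradient(I1, axis=1) + np.gradient(I2, axis=1))
    Iy = 0.5 * (np.gradient(I1, axis=0) + np.gradient(I2, axis=0))
    It = I2 - I1
    # weighted neighborhood average from the original formulation
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iters):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den   # Horn-Schunck update equations
        v = v_bar - Iy * num / den
    return u, v
```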

  6. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Science.gov (United States)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  7. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    International Nuclear Information System (INIS)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-01-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  8. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: John.Votaw@Emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
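
    The three records above describe an ordered-subset iterative deconvolution. As a simplified point of reference, the classical Richardson-Lucy update (no subsets, shift-invariant PSF) is sketched below; the paper's motion-dependent system matrix is replaced here by a plain convolution with a known kernel:

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(blurred, psf, n_iters=30):
    """Plain Richardson-Lucy deconvolution with a known, shift-invariant
    PSF, a simplified stand-in for the motion-blur system matrix used in
    the paper (which also adds an ordered-subset acceleration)."""
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]                  # adjoint of the blur
    for _ in range(n_iters):
        reblurred = convolve(estimate, psf)
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= convolve(ratio, psf_flip)   # multiplicative EM update
    return estimate
```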

  9. Implementing a combined polar-geostationary algorithm for smoke emissions estimation in near real time

    Science.gov (United States)

    Hyer, E. J.; Schmidt, C. C.; Hoffman, J.; Giglio, L.; Peterson, D. A.

    2013-12-01

    Polar and geostationary satellites are used operationally for fire detection and smoke source estimation by many near-real-time operational users, including operational forecast centers around the globe. The input satellite radiance data are processed by data providers to produce Level-2 and Level-3 fire detection products, but processing these data into spatially and temporally consistent estimates of fire activity requires a substantial amount of additional processing. The most significant processing steps are correction for variable coverage of the satellite observations and correction for conditions that affect the detection efficiency of the satellite sensors. We describe a system developed by the Naval Research Laboratory (NRL) that uses the full raster information from the entire constellation to diagnose detection opportunities, calculate corrections for factors such as the angular dependence of detection efficiency, and generate global estimates of fire activity at spatial and temporal scales suitable for atmospheric modeling. By incorporating these improved fire observations, smoke emissions products, such as NRL's FLAMBE, are able to produce improved estimates of global emissions. This talk provides an overview of the system, demonstrates the achievable improvement over older methods, and describes challenges for near-real-time implementation.

  10. Computational design of RNA parts, devices, and transcripts with kinetic folding algorithms implemented on multiprocessor clusters.

    Science.gov (United States)

    Thimmaiah, Tim; Voje, William E; Carothers, James M

    2015-01-01

    With progress toward inexpensive, large-scale DNA assembly, the demand for simulation tools that allow the rapid construction of synthetic biological devices with predictable behaviors continues to increase. By combining engineered transcript components, such as ribosome binding sites, transcriptional terminators, ligand-binding aptamers, catalytic ribozymes, and aptamer-controlled ribozymes (aptazymes), gene expression in bacteria can be fine-tuned, with many corollaries and applications in yeast and mammalian cells. The successful design of genetic constructs that implement these kinds of RNA-based control mechanisms requires modeling and analyzing kinetically determined co-transcriptional folding pathways. Transcript design methods using stochastic kinetic folding simulations to search spacer sequence libraries for motifs enabling the assembly of RNA component parts into static ribozyme- and dynamic aptazyme-regulated expression devices with quantitatively predictable functions (rREDs and aREDs, respectively) have been described (Carothers et al., Science 334:1716-1719, 2011). Here, we provide a detailed practical procedure for computational transcript design by illustrating a high throughput, multiprocessor approach for evaluating spacer sequences and generating functional rREDs. This chapter is written as a tutorial, complete with pseudo-code and step-by-step instructions for setting up a computational cluster with an Amazon, Inc. web server and performing the large numbers of kinefold-based stochastic kinetic co-transcriptional folding simulations needed to design functional rREDs and aREDs. The method described here should be broadly applicable for designing and analyzing a variety of synthetic RNA parts, devices and transcripts.
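
    The cluster workflow described above boils down to farming out many independent stochastic folding runs. A toy sketch of that pattern with Python's multiprocessing; simulate_folding is a hypothetical stand-in for an actual kinefold invocation:

      import random
      from multiprocessing import Pool

      def simulate_folding(spacer):
          """Hypothetical stand-in for one stochastic folding run; a real
          pipeline would shell out to the kinefold binary and parse the
          co-transcriptional folding trajectory it writes."""
          random.seed(spacer)               # deterministic placeholder score
          return spacer, random.random()

      def random_spacer(n=12):
          return "".join(random.choice("ACGU") for _ in range(n))

      if __name__ == "__main__":
          library = [random_spacer() for _ in range(1000)]
          with Pool() as pool:              # one worker per CPU core
              scores = pool.map(simulate_folding, library)
          print(max(scores, key=lambda t: t[1]))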

  11. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  12. Real-Time Signal Processing for Multiantenna Systems: Algorithms, Optimization, and Implementation on an Experimental Test-Bed

    Directory of Open Access Journals (Sweden)

    Haustein Thomas

    2006-01-01

    A recently realized concept of a reconfigurable hardware test-bed suitable for real-time mobile communication with multiple antennas is presented in this paper. We discuss the reasons and prerequisites for real-time capable MIMO transmission systems which may allow channel-adaptive transmission to increase link stability and data throughput. We describe a concept of an efficient implementation of MIMO signal processing using FPGAs and DSPs. We focus on some basic linear and nonlinear MIMO detection and precoding algorithms and their optimization for a DSP target, and a few principal steps for computational performance enhancement are outlined. An experimental verification of several real-time MIMO transmission schemes at high data rates in a typical office scenario is presented and results on the achieved BER and throughput performance are given. The different transmission schemes used either channel state information at both sides of the link or at one side only (transmitter or receiver). Spectral efficiencies of more than 20 bits/s/Hz and a throughput of more than 150 Mbps were shown with a single-carrier transmission. The experimental results clearly show the feasibility of real-time high data rate MIMO techniques with state-of-the-art hardware and that more sophisticated baseband signal processing will be an essential part of future communication systems. A discussion on implementation challenges towards future wireless communication systems supporting higher data rates (1 Gbps and beyond) or high mobility concludes the paper.
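
    For orientation, a plain numpy sketch of the basic linear detectors mentioned above (zero-forcing and MMSE), not the fixed-point DSP code of the test-bed:

      import numpy as np

      def zf_detect(H, y):
          """Zero-forcing detection via the pseudo-inverse."""
          return np.linalg.pinv(H) @ y

      def mmse_detect(H, y, noise_var):
          """Linear MMSE: x_hat = (H^H H + sigma^2 I)^-1 H^H y."""
          n_tx = H.shape[1]
          G = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n_tx),
                              H.conj().T)
          return G @ y

      # 4x4 MIMO link with QPSK symbols and a Rayleigh-like channel
      rng = np.random.default_rng(0)
      H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
      x = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), size=4)
      y = H @ x + 0.05 * (rng.normal(size=4) + 1j * rng.normal(size=4))
      print(np.round(mmse_detect(H, y, 0.005), 2))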

  13. Thermo-mechanical Modelling of Pebble Beds in Fusion Blankets and its Implementation by a Return-Mapping Algorithm

    International Nuclear Information System (INIS)

    Gan, Yixiang; Kamlah, Marc

    2008-01-01

    In this investigation, a thermo-mechanical model of pebble beds is adopted and developed based on experiments by Dr. Reimann at Forschungszentrum Karlsruhe (FZK). The framework of the present material model is composed of a non-linear elastic law, the Drucker-Prager-Cap theory, and a modified creep law. Furthermore, the volumetric inelastic strain dependent thermal conductivity of beryllium pebble beds is taken into account and full thermo-mechanical coupling is considered. The investigation showed that the Drucker-Prager-Cap model implemented in ABAQUS cannot fulfill the requirements of both the prediction of large creep strains and the hardening behaviour caused by creep, which are of importance with respect to the application of pebble beds in fusion blankets. Therefore, UMAT (user defined material's mechanical behaviour) and UMATHT (user defined material's thermal behaviour) routines are used to re-implement the present thermo-mechanical model in ABAQUS. An elastic-predictor radial return mapping algorithm is used to solve the non-associated plasticity iteratively, and a proper tangent stiffness matrix is obtained for cost-efficiency in the calculation. An explicit creep mechanism is adopted for the prediction of time-dependent behaviour in order to represent large creep strains at high temperature. Finally, the thermo-mechanical interactions are implemented in a UMATHT routine for the coupled analysis. The oedometric compression tests and creep tests of pebble beds at different temperatures are simulated with the present UMAT and UMATHT routines, and a comparison between the simulation and the experiments is made. (authors)
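
    The elastic-predictor/radial-return idea is easiest to see on a simpler yield surface. Below is a hedged sketch for von Mises plasticity with linear isotropic hardening, standing in for the Drucker-Prager-Cap model of the paper; all material constants are illustrative:

      import numpy as np

      def radial_return(eps, eps_p_old, alpha_old, E=210e3, nu=0.3,
                        sigma_y=250.0, H=1000.0):
          """One elastic-predictor / radial-return step for von Mises
          plasticity with linear isotropic hardening."""
          mu = E / (2 * (1 + nu))
          lam = E * nu / ((1 + nu) * (1 - 2 * nu))
          I = np.eye(3)
          eps_e = eps - eps_p_old                       # trial elastic strain
          sigma_tr = lam * np.trace(eps_e) * I + 2 * mu * eps_e
          s_tr = sigma_tr - np.trace(sigma_tr) / 3 * I  # deviatoric part
          q_tr = np.sqrt(1.5) * np.linalg.norm(s_tr)    # von Mises stress
          f = q_tr - (sigma_y + H * alpha_old)          # yield function
          if f <= 0:                                    # purely elastic step
              return sigma_tr, eps_p_old, alpha_old
          dgamma = f / (3 * mu + H)                     # plastic multiplier
          n = s_tr / np.linalg.norm(s_tr)               # return direction
          eps_p = eps_p_old + dgamma * np.sqrt(1.5) * n
          sigma = sigma_tr - 2 * mu * dgamma * np.sqrt(1.5) * n
          return sigma, eps_p, alpha_old + dgamma

      eps = np.zeros((3, 3)); eps[0, 0] = 0.01          # uniaxial strain step
      sigma, eps_p, alpha = radial_return(eps, np.zeros((3, 3)), 0.0)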

  14. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    Science.gov (United States)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA, and to increase the parallel processing ability and scalability of the system.
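
    As a software reference (not the FPGA design), a compact numpy/scipy sketch of the iterative LK-style registration that the MP-I2A and KWA accelerate, restricted to pure translation:

      import numpy as np
      from scipy import ndimage

      def lk_translation(ref, mov, n_iter=50, tol=1e-4):
          """Iterative LK estimation of a pure translation: linearize the
          warped image around the current shift and solve a 2-parameter
          least-squares problem for the update."""
          p = np.zeros(2)                               # (dy, dx)
          for _ in range(n_iter):
              warped = ndimage.shift(mov, p, order=1, mode="nearest")
              gy, gx = np.gradient(warped)
              J = np.stack([gy.ravel(), gx.ravel()], axis=1)
              err = (warped - ref).ravel()
              dp, *_ = np.linalg.lstsq(J, err, rcond=None)
              p += dp
              if np.linalg.norm(dp) < tol:
                  break
          return p

      y, x = np.mgrid[0:64, 0:64]
      ref = np.exp(-((x - 30.0) ** 2 + (y - 28.0) ** 2) / 60.0)
      mov = ndimage.shift(ref, (2.0, -1.5), order=3)
      print(lk_translation(ref, mov))   # approx. (-2.0, 1.5), realigning mov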

  15. Small-scale quantum information processing with linear optics

    International Nuclear Information System (INIS)

    Bergou, J.A.; Steinberg, A.M.; Mohseni, M.

    2005-01-01

    Photons are the ideal systems for carrying quantum information. Although performing large-scale quantum computation on optical systems is extremely demanding, non-scalable linear-optics quantum information processing may prove essential as part of quantum communication networks. In addition, the efficient (scalable) linear-optical quantum computation proposal relies on the same optical elements. Here, by constructing multirail optical networks, we experimentally study two central problems in quantum information science, namely optimal discrimination between nonorthogonal quantum states, and controlling decoherence in quantum systems. Quantum mechanics forbids deterministic discrimination between nonorthogonal states. This is one of the central features of quantum cryptography, which leads to secure communications. Quantum state discrimination is an important primitive in quantum information processing, since it determines the limitations of a potential eavesdropper, and it has applications in quantum cloning and entanglement concentration. In this work, we experimentally implement generalized measurements in an optical system and demonstrate the first optimal unambiguous discrimination between three non-orthogonal states with a success rate of 55 %, to be compared with the 25 % maximum achievable using projective measurements. Furthermore, we present the first realization of unambiguous discrimination between a pure state and a nonorthogonal mixed state. In a separate experiment, we demonstrate how decoherence-free subspaces (DFSs) may be incorporated into a prototype optical quantum algorithm. Specifically, we present an optical realization of the two-qubit Deutsch-Jozsa algorithm in the presence of random noise. By introducing localized turbulent airflow we produce collective optical dephasing, leading to large error rates, and demonstrate that with DFS encoding the error rate in the presence of decoherence can be reduced from 35 % to essentially its pre-decoherence value.

  16. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    Science.gov (United States)

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple hill-climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
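
    Of the optimization protocols listed, simulated annealing has the simplest accept/reject loop; a generic sketch with a toy set-point tuning example (names and rates are illustrative):

      import math, random

      def simulated_annealing(cost, x0, neighbor, t0=1.0, cooling=0.995,
                              n_steps=5000):
          """Accept any downhill move; accept uphill moves with probability
          exp(-delta/T), cooling the temperature each step."""
          x, c = x0, cost(x0)
          best_x, best_c = x, c
          t = t0
          for _ in range(n_steps):
              cand = neighbor(x)
              cc = cost(cand)
              if cc < c or random.random() < math.exp(-(cc - c) / t):
                  x, c = cand, cc
                  if c < best_c:
                      best_x, best_c = x, c
              t *= cooling
          return best_x, best_c

      # Toy usage: tune one flow-rate set-point against a quadratic target.
      best, _ = simulated_annealing(
          cost=lambda v: (v - 3.2) ** 2, x0=0.0,
          neighbor=lambda v: v + random.uniform(-0.5, 0.5))
      print(round(best, 2))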

  17. Optical implementation of neural learning algorithms based on cross-gain modulation in a semiconductor optical amplifier

    Science.gov (United States)

    Li, Qiang; Wang, Zhi; Le, Yansi; Sun, Chonghui; Song, Xiaojia; Wu, Chongqing

    2016-10-01

    Neuromorphic engineering has a wide range of applications in the fields of machine learning, pattern recognition, adaptive control, etc. Photonics, characterized by its high speed, wide bandwidth, low power consumption, and massive parallelism, is an ideal way to realize ultrafast spiking neural networks (SNNs). Synaptic plasticity is believed to be critical for learning, memory and development in neural circuits. Experimental results have shown that changes of a synapse are highly dependent on the relative timing of pre- and postsynaptic spikes. Synaptic plasticity in which presynaptic spikes preceding postsynaptic spikes result in strengthening, while the opposite timing results in weakening, is called the antisymmetric spike-timing-dependent plasticity (STDP) learning rule; plasticity with the opposite effect under the same conditions is called the antisymmetric anti-STDP learning rule. We proposed and experimentally demonstrated an optical implementation of neural learning algorithms that can achieve both the antisymmetric STDP and anti-STDP learning rules, based on cross-gain modulation (XGM) within a single semiconductor optical amplifier (SOA). The width and height of the potentiation and depression windows can be controlled by adjusting the injection current of the SOA, to mimic the biological antisymmetric STDP and anti-STDP learning rules more realistically. As the injection current increases, the width of the depression and potentiation windows decreases and the height increases, due to the decreasing recovery time and increasing gain under a stronger injection current. Based on the demonstrated optical STDP circuit, ultrafast learning in optical SNNs can be realized.

  18. A hybrid, massively parallel implementation of a genetic algorithm for optimization of the impact performance of a metal/polymer composite plate

    KAUST Repository

    Narayanan, Kiran; Mora Cordova, Angel; Allsopp, Nicholas; El Sayed, Tamer S.

    2012-01-01

    A hybrid parallelization method composed of a coarse-grained genetic algorithm (GA) and fine-grained objective function evaluations is implemented on a heterogeneous computational resource consisting of 16 IBM Blue Gene/P racks, a single x86 cluster

  19. Estimation of cloud optical thickness by processing SEVIRI images and implementing a semi analytical cloud property retrieval algorithm

    Science.gov (United States)

    Pandey, P.; De Ridder, K.; van Lipzig, N.

    2009-04-01

    Clouds play a very important role in the Earth's climate system, as they form an intermediate layer between the Sun and the Earth. Satellite remote sensing systems are the only means to provide information about clouds on large scales. The geostationary satellite Meteosat Second Generation (MSG) has onboard an imaging radiometer, the Spinning Enhanced Visible and Infrared Imager (SEVIRI). SEVIRI is a 12-channel imager, with 11 channels observing the Earth's full disk with a temporal resolution of 15 min and a spatial resolution of 3 km at nadir, and a high-resolution visible (HRV) channel. The visible channels (0.6 µm and 0.81 µm) and the near-infrared channel (1.6 µm) of SEVIRI are used to retrieve the cloud optical thickness (COT). The study domain is over Europe, covering the region between 35°N - 70°N and 10°W - 30°E. SEVIRI level 1.5 images over this domain are acquired from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) archive. The processing of this imagery involves a number of steps before estimating the COT. The pre-processing steps are as follows. First, the digital count number is acquired from the imagery. Image geo-coding is performed in order to relate the pixel positions to the corresponding longitude and latitude. The solar zenith angle is determined as a function of latitude and time. The radiometric conversion is done using the values of offsets and slopes of each band. The radiance values obtained are then used to calculate the reflectance for channels in the visible spectrum using the solar zenith angle. An attempt is made to estimate the COT from the observed radiances. A semi-analytical algorithm [Kokhanovsky et al., 2003] is implemented for the estimation of cloud optical thickness from the visible spectrum of light intensity reflected from clouds. The asymptotic solution of the radiative transfer equation, for clouds with large optical thickness, is the basis of

  20. Implementation of a cone-beam reconstruction algorithm for the single-circle source orbit with embedded misalignment correction using homogeneous coordinates

    International Nuclear Information System (INIS)

    Karolczak, Marek; Schaller, Stefan; Engelke, Klaus; Lutz, Andreas; Taubenreuther, Ulrike; Wiesent, Karl; Kalender, Willi

    2001-01-01

    We present an efficient implementation of an approximate cone-beam image reconstruction algorithm for application in tomography, which accounts for scanner mechanical misalignment. The implementation is based on the algorithm proposed by Feldkamp et al. [J. Opt. Soc. Am. A 1, 612-619 (1984)] and is directed at circular scan paths. The algorithm has been developed for the purpose of reconstructing volume data from projections acquired in an experimental x-ray microtomography (μCT) scanner [Engelke et al., Der Radiologe 39, 203-212 (1999)]. To mathematically model misalignment we use matrix notation with homogeneous coordinates to describe the scanner geometry, its misalignment, and the acquisition process. For convenience the analysis is carried out for x-ray CT scanners, but it is applicable to any tomographic modality where two-dimensional projection acquisition in cone-beam geometry takes place, e.g., single photon emission computerized tomography. We derive an algorithm assuming misalignment errors to be small enough to weight and filter the original projections and to embed compensation for misalignment in the backprojection. We verify the algorithm on simulations of virtual phantoms and scans of a physical multidisk (Defrise) phantom
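
    The homogeneous-coordinate bookkeeping can be sketched by composing the ideal circular-orbit geometry with a small measured rigid-body misalignment into a single 3x4 projection matrix; this is a geometry sketch only, not the reconstruction itself:

      import numpy as np

      def rigid(rx, ry, rz, t):
          """Rigid-body transform in homogeneous coordinates."""
          cx, sx = np.cos(rx), np.sin(rx)
          cy, sy = np.cos(ry), np.sin(ry)
          cz, sz = np.cos(rz), np.sin(rz)
          Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
          Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
          Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
          H = np.eye(4)
          H[:3, :3] = Rz @ Ry @ Rx
          H[:3, 3] = t
          return H

      def projection(phi, sdd, sod, misalign=np.eye(4)):
          """3x4 cone-beam projection matrix for gantry angle phi: the ideal
          circular-orbit geometry composed with a measured misalignment."""
          K = np.array([[sdd, 0, 0, 0],
                        [0, sdd, 0, 0],
                        [0, 0, 1, 0]])      # pinhole model, focal length sdd
          ideal = rigid(0, 0, -phi, [0, 0, sod])
          return K @ misalign @ ideal

      pt = np.array([10.0, 5.0, 0.0, 1.0])  # homogeneous world point
      u, v, w = projection(np.pi / 6, sdd=1000.0, sod=600.0) @ pt
      print(u / w, v / w)                   # detector coordinates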

  1. Algorithmic implementation of particle-particle ladder diagram approximation to study strongly-correlated metals and semiconductors

    Science.gov (United States)

    Prayogi, A.; Majidi, M. A.

    2017-07-01

    In condensed-matter physics, strongly-correlated systems refer to materials that exhibit a variety of fascinating properties and ordered phases, depending on temperature, doping, and other factors. Such unique properties most notably arise due to strong electron-electron interactions, and in some cases due to interactions involving other quasiparticles as well. Electronic correlation effects are non-trivial enough that one may need a sufficiently accurate approximation technique with quite heavy computation, such as quantum Monte Carlo, in order to capture particular material properties arising from such effects. Meanwhile, less accurate techniques may come with a lower numerical cost, but the ability to capture particular properties may depend strongly on the choice of approximation. Among the many-body techniques derivable from Feynman diagrams, we aim to formulate an algorithmic implementation of the ladder-diagram approximation to capture the effects of electron-electron interactions. We wish to investigate how these correlation effects influence the temperature-dependent properties of strongly-correlated metals and semiconductors. As we are interested in the temperature-dependent properties of the system, the ladder-diagram method needs to be applied in the Matsubara frequency domain to obtain the self-consistent self-energy. However, in the end we also need to compute dynamical properties such as the density of states (DOS) and the optical conductivity, which are defined in the real frequency domain. For this purpose, we need to perform an analytic continuation procedure. At the end of this study, we will test the technique by observing the occurrence of the metal-insulator transition in strongly-correlated metals and the renormalization of the band gap in strongly-correlated semiconductors.

  2. Open-source implementation of an algorithm for photopeaks search and analysis in gamma-ray spectrometry with semiconductor detectors

    International Nuclear Information System (INIS)

    Maduar, Marcelo F.; Pecequilo, Brigitte R.S.

    2009-01-01

    Radioactivity quantification of gamma-ray emitter radionuclides in samples measured by HPGe gamma spectrometers relies on the analysis of the photopeaks present in the spectra, especially on the accurate determination of their net areas. This paper presents a methodology and an algorithm description for the peak search and analysis in order to obtain the relevant peak parameters and their uncertainties. The procedure follows a three-step approach: a preliminary search is done using the second-difference method; experimental peak widths are assessed in order to obtain a width vs. channel relationship and to define regions with single or overlapping peaks; and a non-linear fit is applied to each region of the spectrum with candidate peaks. The final target function is of the form G(x) = B(x) + F(x), where the baseline B(x) is a sum of weighted left-side B_L(x) and right-side B_R(x) quadratic baseline functions, and the photopeak term F(x) is a sum of Gaussian functions. The computational implementation is released entirely under an open-source license. The code was developed in C++ and the interface with the Qt GUI software toolkit. The GNU scientific library, GSL, was employed to perform linear and non-linear fitting procedures as needed. Spectra previously generated at our laboratories were analyzed with the presented methodology and with the commercial software package WinnerGamma. The results obtained are consistent with those from the aforementioned package, suggesting that it could be safely used in general-purpose gamma-ray spectrometry. (author)
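
    The released tool is C++ with GSL; the following Python sketch mirrors the same steps, with a second-difference candidate search and a non-linear fit of G(x) = B(x) + F(x) over one region (a single Gaussian peak on a quadratic baseline for brevity):

      import numpy as np
      from scipy.optimize import curve_fit

      def second_difference_candidates(counts, threshold=5.0):
          """Flag channels whose second difference dips sharply negative
          relative to its Poisson uncertainty (Mariscotti-style search)."""
          sd = counts[:-2] - 2 * counts[1:-1] + counts[2:]
          err = np.sqrt(counts[:-2] + 4 * counts[1:-1] + counts[2:] + 1e-9)
          return np.where(-sd / err > threshold)[0] + 1

      def gauss_plus_quad(x, a, mu, sigma, b0, b1, b2):
          return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + b0 + b1*x + b2*x**2

      def fit_region(channels, counts, mu0, sigma0=2.0):
          """Non-linear fit of one region: Gaussian peak F(x) on a quadratic
          baseline B(x); returns parameters, uncertainties, and net area."""
          p0 = [counts.max(), mu0, sigma0, counts.min(), 0.0, 0.0]
          popt, pcov = curve_fit(gauss_plus_quad, channels, counts, p0=p0)
          area = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)
          return popt, np.sqrt(np.diag(pcov)), area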

  3. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    DEFF Research Database (Denmark)

    Frydendall, Jan; Brandt, J.; Christensen, J. H.

    2009-01-01

    A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark....... In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP...... (European Monitoring and Evaluation Programme) network covering a half-year period, April-September 1999. The best performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method...

  4. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    Science.gov (United States)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ Application Protocol Interface for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface that is being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  5. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    Directory of Open Access Journals (Sweden)

    J. Frydendall

    2009-08-01

    A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April–September 1999. The best performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 between the results from the reference and the optimal configuration of the data assimilation algorithm were found. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.
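
    The statistical-interpolation update at the core of such an algorithm is the standard optimal-interpolation formula; a minimal numpy sketch with an exponential background covariance (the numbers are illustrative, not the DEOM configuration):

      import numpy as np

      def optimal_interpolation(xb, y, H, B, R):
          """xa = xb + B H^T (H B H^T + R)^-1 (y - H xb)"""
          S = H @ B @ H.T + R
          K = B @ H.T @ np.linalg.inv(S)    # gain matrix
          return xb + K @ (y - H @ xb)

      # 3 grid cells, ozone observations at cells 0 and 2
      xb = np.array([40.0, 42.0, 45.0])                  # background (ppb)
      H = np.array([[1.0, 0, 0], [0, 0, 1.0]])           # observation operator
      d = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
      B = 4.0 * np.exp(-d / 1.0)     # exponential background covariance
      R = 1.0 * np.eye(2)            # observation-error covariance
      print(optimal_interpolation(xb, np.array([38.0, 47.0]), H, B, R))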

  6. Implementation of the resonant vibratory feeders control algorithm on Simatic S7-1200 from MATLAB Simulink enviroment

    Directory of Open Access Journals (Sweden)

    Mitrović Radomir B.

    2016-01-01

    Simulink is an important tool for modeling and simulation of processes and control algorithms. Its extension, PLC Coder, enables direct conversion of a model subsystem into SCL structured-text code, which is then used by the PLC IDE to create function blocks. This shortens the development time of algorithms for PLC controllers and reduces the possibility of coding errors. This paper describes Simulink PLC Coder and a workflow for developing a PID control algorithm for the Siemens Simatic S7-1200 PLC. The controlled object here is a resonant vibratory feeder with an electromagnetic drive.

  7. Introduction to quantum information science

    CERN Document Server

    Hayashi, Masahito; Kawachi, Akinori; Kimura, Gen; Ogawa, Tomohiro

    2015-01-01

    This book presents the basics of quantum information, e.g., the foundation of quantum theory, quantum algorithms, quantum entanglement, quantum entropies, quantum coding, quantum error correction and quantum cryptography. The required knowledge is only elementary calculus and linear algebra, so the book can be understood by undergraduate students. In order to study quantum information, one usually has to study the foundation of quantum theory. This book describes it from a more operational viewpoint, which is suitable for quantum information, whereas traditional textbooks of quantum theory lack this viewpoint. The current book builds on Shor's algorithm, Grover's algorithm, and the Deutsch-Jozsa algorithm as basic algorithms. To treat several topics in quantum information, this book covers several kinds of information quantities in quantum systems, including the von Neumann entropy. The limits of several kinds of quantum information processing are given. As important quantum protocols, this book covers quantum teleportation...

  8. Specification of technical means for implementation of supervisory algorithms of the status of a nuclear reactor and of the main coolant pump of a NPP

    International Nuclear Information System (INIS)

    Jirsa, P.

    2000-11-01

    Programming the inputs of the supervisory algorithm (data collection from the monitoring system, transmission of diagnostic output from other systems, and transmission of technological data), the supervisory process proper based on the data obtained (data analysis), and the outputs (presentation of the results to the operator, communication with the master and archiving systems, etc.) requires knowledge of the format of the transmitted data, their availability, the communication network protocols, the operating system, etc. Hence, the environment for which the algorithm will be developed should be specified, at least roughly. The following topics are addressed: Description of technical means of Czech nuclear power plants (Dukovany, Temelin, Mochovce), and Proposal for technical means to implement the monitoring algorithm (Requirements related to the monitoring systems, Identification of the reference system, Parameters of the selected system). Since there is no domestic manufacturer of hardware for monitoring and diagnostic systems, COMPASS, a novel on-line diagnosis and monitoring system from the Brüel & Kjær company, was selected as a model system for the implementation of the supervisory algorithms. (P.A.)

  9. Rayleigh’s quotient–based damage detection algorithm: Theoretical concepts, computational techniques, and field implementation strategies

    DEFF Research Database (Denmark)

    NJOMO WANDJI, Wilfried

    2017-01-01

    levels are targeted: existence, location, and severity. The proposed algorithm is analytically developed from the dynamics theory and the virtual energy principle. Some computational techniques are proposed for carrying out computations, including discretization, integration, derivation, and suitable...

  10. The island model for parallel implementation of evolutionary algorithm of Population-Based Incremental Learning (PBIL) optimization

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Schirru, Roberto

    2000-01-01

    Genetic algorithms are biologically motivated adaptive systems which have been used, with good results, for function optimization. The purpose of this work is to introduce a new parallelization method to be applied to the Population-Based Incremental Learning (PBIL) algorithm. PBIL combines standard genetic algorithm mechanisms with simple competitive learning and has been successfully used in combinatorial optimization problems. The development of this algorithm aims at its application to the reload optimization of PWR nuclear reactors. Tests have been performed with combinatorial optimization problems similar to the reload problem. Results are compared to those of the serial PBIL, showing the new method's superiority and its viability as a tool for solving the nuclear core reload problem. (author)
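
    A serial toy sketch of PBIL with island-style migration; in the parallel version each island would map to one processor, and all rates and sizes here are illustrative:

      import random

      def island_pbil(cost, n_bits, n_islands=4, epochs=10, gens=20,
                      pop=50, lr=0.1, mig=0.05):
          """Island-model PBIL: independent probability vectors evolve in
          parallel (serially here) and periodically learn from the global
          elite (migration)."""
          ps = [[0.5] * n_bits for _ in range(n_islands)]
          best, best_c = None, float("inf")
          for _ in range(epochs):
              elites = []
              for k in range(n_islands):
                  for _ in range(gens):
                      samples = [[int(random.random() < pi) for pi in ps[k]]
                                 for _ in range(pop)]
                      elite = min(samples, key=cost)
                      ps[k] = [pi * (1 - lr) + bi * lr
                               for pi, bi in zip(ps[k], elite)]
                  elites.append(elite)
                  if cost(elite) < best_c:
                      best, best_c = elite, cost(elite)
              g = min(elites, key=cost)      # migration step
              ps = [[pi * (1 - mig) + bi * mig for pi, bi in zip(p, g)]
                    for p in ps]
          return best, best_c

      print(island_pbil(lambda bits: sum(bits), n_bits=16))  # toy: all zeros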

  11. Implementation of a Multi-Robot Coverage Algorithm on a Two-Dimensional, Grid-Based Environment

    Science.gov (United States)

    2017-06-01

    different approaches is based on “behavior-based” versus “system-theory-based” approaches to the problem. 1. Behavior-Based Approach. Most behavior-based... otherwise, it performs the behavior of IG. This kind of algorithm could be classified as a system-theory-based approach, since the change in the... systems, robot agents are likely to take over mine countermeasure (MCM) missions one day. The path planning coverage algorithm is an essential topic for

  12. Performance Comparison of GPU, DSP and FPGA implementations of image processing and computer vision algorithms in embedded systems

    OpenAIRE

    Fykse, Egil

    2013-01-01

    The objective of this thesis is to compare the suitability of FPGAs, GPUs and DSPs for digital image processing applications. Normalized cross-correlation is used as a benchmark, because this algorithm includes convolution, a common operation in image processing and elsewhere. Normalized cross-correlation is a template matching algorithm that is used to locate predefined objects in a scene image. Because the throughput of DSPs is low for efficient calculation of normalized cross-correlation, ...

  13. Experimental implementation of a robust damped-oscillation control algorithm on a full-sized, two-degree-of-freedom, AC induction motor-driven crane

    International Nuclear Information System (INIS)

    Kress, R.L.; Jansen, J.F.; Noakes, M.W.

    1994-01-01

    When suspended payloads are moved with an overhead crane, pendulum-like oscillations are naturally introduced. This presents a problem any time a crane is used, especially when expensive and/or delicate objects are moved, when moving in a cluttered and/or hazardous environment, and when objects are to be placed in tight locations. Damped-oscillation control algorithms have been demonstrated over the past several years for laboratory-scale robotic systems on dc motor-driven overhead cranes. Most overhead cranes presently in use in industry are driven by ac induction motors; consequently, Oak Ridge National Laboratory has implemented damped-oscillation crane control on one of its existing facility ac induction motor-driven overhead cranes. The purpose of this test was to determine feasibility, to work out control and interfacing specifications, and to establish the capability of newly available ac motor control hardware with respect to use in damped-oscillation-controlled systems. Flux vector inverter drives are used to investigate their acceptability for damped-oscillation crane control. The purpose of this paper is to describe the experimental implementation of a control algorithm on a full-sized, two-degree-of-freedom, industrial crane; to describe the experimental evaluation of the controller, including robustness to payload length changes; to explain the results of experiments designed to determine the hardware required for implementation of the control algorithms; and to provide a theoretical description of the controller

  14. Implementation and preliminary evaluation of 'C-tone': A novel algorithm to improve lexical tone recognition in Mandarin-speaking cochlear implant users.

    Science.gov (United States)

    Ping, Lichuan; Wang, Ningyuan; Tang, Guofang; Lu, Thomas; Yin, Li; Tu, Wenhe; Fu, Qian-Jie

    2017-09-01

    Because of limited spectral resolution, Mandarin-speaking cochlear implant (CI) users have difficulty perceiving fundamental frequency (F0) cues that are important to lexical tone recognition. To improve Mandarin tone recognition in CI users, we implemented and evaluated a novel real-time algorithm (C-tone) to enhance the amplitude contour, which is strongly correlated with the F0 contour. The C-tone algorithm was implemented in clinical processors and evaluated in eight users of the Nurotron NSP-60 CI system. Subjects were given 2 weeks of experience with C-tone. Recognition of Chinese tones, monosyllables, and disyllables in quiet was measured with and without the C-tone algorithm. Subjective quality ratings were also obtained for C-tone. After 2 weeks of experience with C-tone, there were small but significant improvements in recognition of lexical tones, monosyllables, and disyllables (P < 0.05). Benefits of C-tone were greater for disyllables than for monosyllables. Subjective quality ratings showed no strong preference for or against C-tone, except for perception of one's own voice, where C-tone was preferred. The real-time C-tone algorithm provided small but significant improvements in speech performance in quiet with no change in sound quality. Pre-processing algorithms to reduce noise and better real-time F0 extraction would improve the benefits of C-tone in complex listening environments. Chinese CI users' speech recognition in quiet can be significantly improved by modifying the amplitude contour to better resemble the F0 contour.

  15. A Comparative Evaluation of Algorithms in the Implementation of an Ultra-Secure Router-to-Router Key Exchange System

    Directory of Open Access Journals (Sweden)

    Nishaal J. Parmar

    2017-01-01

    This paper presents a comparative evaluation of possible encryption algorithms for use in a self-contained, ultra-secure router-to-router communication system, first proposed by El Rifai and Verma. The original proposal utilizes a discrete logarithm-based encryption solution, which is compared in this paper to the RSA, AES, and ECC encryption algorithms. RSA certificates are widely used within the industry but require a trusted key generation and distribution architecture. AES and ECC provide advantages in key length, processing requirements, and storage space, while maintaining an arbitrarily high level of security. This paper modifies each of the four algorithms for use within the self-contained router-to-router environment and then compares them in terms of features offered, storage space and data transmission needed, encryption/decryption efficiency, and key generation requirements.

  16. Implementation of Winnowing Algorithm Based K-Gram to Identify Plagiarism on File Text-Based Document

    Directory of Open Access Journals (Sweden)

    Nurdiansyah Yanuar

    2018-01-01

    Plagiarism often occurs when students face deadlines for their assignments and treat copying as the fastest way to complete them. This motivated the author to build a plagiarism detection system using the Winnowing algorithm as the document-similarity search algorithm. The documents tested are Indonesian journals with the extensions .doc, .docx, and/or .txt. The similarity calculation proceeds in two stages: first, a document fingerprint is built with the Winnowing algorithm; second, similarity is computed with the Jaccard coefficient. The system was developed using an iterative waterfall model. The main objective of the project is to determine the level of plagiarism; by displaying the percentage of similarity between journals, it is expected to help prevent plagiarism, whether intentional or unintentional, before a journal is published.
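
    A compact sketch of the two stages, assuming the standard winnowing scheme of k-gram hashing with window minima followed by Jaccard similarity:

      def winnow(text, k=5, window=4):
          """Winnowing fingerprint: hash all k-grams, keep the minimum hash
          of each sliding window (rightmost minimum on ties). Python's
          hash() is salted per run; a real tool would use a stable
          rolling hash such as Karp-Rabin."""
          text = "".join(text.lower().split())
          hashes = [hash(text[i:i + k]) for i in range(len(text) - k + 1)]
          fp = set()
          for i in range(len(hashes) - window + 1):
              w = hashes[i:i + window]
              j = max(range(window), key=lambda n: (w[n] == min(w), n))
              fp.add(w[j])
          return fp

      def jaccard(a, b):
          return len(a & b) / len(a | b) if a | b else 0.0

      d1 = winnow("The quick brown fox jumps over the lazy dog")
      d2 = winnow("A quick brown fox jumped over a lazy dog")
      print(f"similarity: {jaccard(d1, d2):.0%}")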

  17. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M [Universidad de Guanajuato, Leon, Guanajuato (Mexico)

    2016-06-15

    Purpose: To implement a back-projection algorithm for 2D dose reconstructions for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to calculate the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. A gamma index evaluation between the 2D reconstructed dose and that calculated with a treatment planning system (TPS) was performed. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has a radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σ_BH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µ_AC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values no greater than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that developed for pre-treatment dose verification; therefore, a simpler method must be investigated. The accuracy of this method should be improved by modifying the algorithm in order to compare lower isodose curves.

  18. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering

    International Nuclear Information System (INIS)

    Bettinardi, V.; Gilardi, M.C.; Fazio, F.; Alenius, S.; Ruotsalainen, U.; Numminen, P.; Teraes, M.

    2003-01-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low-count-statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated from low-count-statistics OS-MRP-TR images were compared with the EM images corrected for attenuation using reference (high-statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in the MVAC within 5% for a TR scan of 1 min reconstructed with OS-MRP-TR and a TR scan of 1 h reconstructed with the FBP algorithm; (3) effectiveness in noise reduction, particularly for low-count-statistics data [using a specific parameter configuration the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference within 3% between the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 min and the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 h; (5
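
    The MRP step itself reduces to a one-line penalty on deviation from the local median, applied after each (OS)EM update; a sketch with illustrative parameters:

      import numpy as np
      from scipy.ndimage import median_filter

      def mrp_update(mu_em, beta=0.3, size=3):
          """Median root prior: penalize each voxel by its deviation from
          the local median, so locally monotonic regions pass unchanged."""
          med = median_filter(mu_em, size=size)
          return mu_em / (1.0 + beta * (mu_em - med) / (med + 1e-12))

      mu = np.abs(np.random.default_rng(1).normal(1.0, 0.2, (32, 32)))
      mu = mrp_update(mu)             # regularized update
      mu = median_filter(mu, size=3)  # optional inter-update median filter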

  19. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    Science.gov (United States)

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.

  20. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    Science.gov (United States)

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution, due to several physical factors originating both at the emission level (e.g. positron range, photon non-collinearity) and at the detection level (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals, and depth of interaction). To improve the spatial resolution of the images, one possible approach consists of measuring the point spread function (PSF) of the system and then accounting for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring ²²Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the ²²Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
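
    In image space the resolution model amounts to blurring with the measured PSF before forward projection and after backprojection inside each EM update; a hedged sketch with a Gaussian PSF stand-in and hypothetical projector callbacks:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mlem_psf(sino, fwd, back, n_iter=20, sigma=1.2, eps=1e-9):
          """MLEM with an image-space resolution model. fwd/back are
          caller-supplied forward/backprojection routines (hypothetical
          here); the PSF is approximated by a Gaussian of width sigma."""
          ones = back(np.ones_like(sino))
          img = np.ones_like(ones)
          sens = gaussian_filter(ones, sigma)            # blurred sensitivity
          for _ in range(n_iter):
              est = fwd(gaussian_filter(img, sigma))     # blur, then project
              ratio = back(sino / (est + eps))
              img *= gaussian_filter(ratio, sigma) / (sens + eps)
          return img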

  1. Implementation of an Evidence-Based and Content Validated Standardized Ostomy Algorithm Tool in Home Care: A Quality Improvement Project.

    Science.gov (United States)

    Bare, Kimberly; Drain, Jerri; Timko-Progar, Monica; Stallings, Bobbie; Smith, Kimberly; Ward, Naomi; Wright, Sandra

    Many nurses have limited experience with ostomy management. We sought to provide a standardized approach to ostomy education and management to support nurses in early identification of stomal and peristomal complications, pouching problems, and provide standardized solutions for managing ostomy care in general while improving utilization of formulary products. This article describes development and testing of an ostomy algorithm tool.

  2. MICADO: Parallel implementation of a 2D-1D iterative algorithm for the 3D neutron transport problem in prismatic geometries

    International Nuclear Information System (INIS)

    Fevotte, F.; Lathuiliere, B.

    2013-01-01

    The large increase in computing power over the past few years now makes it possible to consider developing 3D full-core heterogeneous deterministic neutron transport solvers for reference calculations. Among all approaches presented in the literature, the method first introduced in [1] seems very promising. It consists in iterating over resolutions of 2D and 1D MOC problems by taking advantage of prismatic geometries without introducing approximations of a low-order operator such as diffusion. However, before developing a solver with all industrial options at EDF, several points needed to be clarified. In this work, we first prove the convergence of this iterative process, under some assumptions. We then present our high-performance, parallel implementation of this algorithm in the MICADO solver. Benchmarking the solver against the Takeda case shows that the 2D-1D coupling algorithm does not seem to affect the spatial convergence order of the MOC solver. As for performance issues, our study shows that even though the data distribution is suited to the 2D solver part, the efficiency of the 1D part is sufficient to ensure a good parallel efficiency of the global algorithm. After this study, the main remaining implementation difficulty concerns the memory requirement of a vector used for initialization. An efficient acceleration operator will also need to be developed. (authors)

  3. Research progress on quantum informatics and quantum computation

    Science.gov (United States)

    Zhao, Yusheng

    2018-03-01

    Quantum informatics is an interdisciplinary subject that emerged in the 1980s from the combination of quantum mechanics, information science, and computer science. The birth and development of quantum information science has far-reaching significance for science and technology, and applying quantum information technology is now the focus of considerable effort. The preparation, storage, purification, regulation, and transmission of quantum states, together with quantum coding and decoding, have become hot topics for scientists and engineers, with a profound impact on the national economy, people's livelihood, and defense technology. This paper first summarizes the background of quantum information science and quantum computers and the state of research at home and abroad, and then introduces the basic knowledge and concepts of quantum computing. Finally, several quantum algorithms are introduced in detail, including the quantum Fourier transform, the Deutsch-Jozsa algorithm, Shor's algorithm, and quantum phase estimation.
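
    Of the algorithms listed, the Deutsch-Jozsa algorithm is small enough to simulate directly; a state-vector sketch in phase-oracle form:

      import numpy as np

      def deutsch_jozsa(f, n):
          """State-vector simulation with a phase oracle: H^n, oracle, H^n.
          The |0...0> amplitude ends at +/-1 for constant f, 0 for balanced."""
          dim = 2 ** n
          state = np.full(dim, 1 / np.sqrt(dim))   # H^n applied to |0...0>
          for xi in range(dim):
              state[xi] *= (-1) ** f(xi)           # phase-kickback oracle
          amp0 = state.sum() / np.sqrt(dim)        # H^n again, read |0...0>
          return "constant" if abs(amp0) > 0.5 else "balanced"

      print(deutsch_jozsa(lambda x: 0, n=3))       # constant function
      print(deutsch_jozsa(lambda x: x & 1, n=3))   # balanced function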

  4. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  5. Thermo-economic multi-objective optimization of solar dish-Stirling engine by implementing evolutionary algorithm

    International Nuclear Information System (INIS)

    Ahmadi, Mohammad H.; Sayyaadi, Hoseyn; Mohammadi, Amir H.; Barranco-Jimenez, Marco A.

    2013-01-01

    Highlights: • Thermo-economic multi-objective optimization of a solar dish-Stirling engine is studied. • Application of an evolutionary algorithm is investigated. • An error analysis is performed. - Abstract: In recent years, remarkable attention has been drawn to the Stirling engine due to its noticeable advantages; for instance, many resources such as biomass, fossil fuels, and solar energy can be used as the heat source. A great number of studies have been conducted on the Stirling engine, and finite-time thermo-economics is one of the approaches used. In the present study, the dimensionless thermo-economic objective function, thermal efficiency, and dimensionless power output are optimized for a dish-Stirling system using finite-time thermo-economic analysis and the NSGA-II algorithm. Optimized solutions are chosen from the results using three decision-making methods. An error analysis is performed to quantify the error in the investigation.

  6. A Novel Algorithm for Determining the Contextual Characteristics of Movement Behaviors by Combining Accelerometer Features and Wireless Beacons: Development and Implementation.

    Science.gov (United States)

    Magistro, Daniele; Sessa, Salvatore; Kingsnorth, Andrew P; Loveday, Adam; Simeone, Alessandro; Zecca, Massimiliano; Esliger, Dale W

    2018-04-20

    Unfortunately, global efforts to promote "how much" physical activity people should be undertaking have been largely unsuccessful. Given the difficulty of achieving a sustained lifestyle behavior change, many scientists are reexamining their approaches. One such approach is to focus on understanding the context of the lifestyle behavior (ie, where, when, and with whom) with a view to identifying promising intervention targets. The aim of this study was to develop and implement an innovative algorithm to determine "where" physical activity occurs using proximity sensors coupled with a widely used physical activity monitor. A total of 19 Bluetooth beacons were placed in fixed locations within a multilevel, mixed-use building. In addition, 4 receiver-mode sensors were fitted to the wrists of a roving technician who moved throughout the building. The experiment was divided into 4 trials with different walking speeds and dwelling times. The data were analyzed using an original and innovative algorithm based on graph generation and Bayesian filters. Linear regression models revealed significant correlations between beacon-derived location and ground-truth tracking time, with intraclass correlations suggesting a high goodness of fit (R² = .9780). The algorithm reliably predicted indoor location, and its robustness improved with a longer dwelling time (>100 s), allowing it to determine the location of an individual within an indoor environment. This novel implementation of "context sensing" will facilitate a wealth of new research questions on promoting healthy behavior change, the optimization of patient care, and efficient health care planning (eg, patient-clinician flow, patient-clinician interaction).

  7. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    Science.gov (United States)

    2015-12-24

    …manufacturing today (namely, the 14 nm FinFET silicon CMOS technology). The JPEG algorithm is selected as a motivational example since it is widely… TIFF images of a U.S. Air Force F-16 aircraft provided by the University of Southern California Signal and Image Processing Institute (SIPI) image… silicon CMOS technology currently in high-volume manufacturing today (the 14 nm FinFET silicon CMOS technology). The main contribution of this…

  8. Implementation of advanced feedback control algorithms for controlled resonant magnetic perturbation physics studies on EXTRAP T2R

    International Nuclear Information System (INIS)

    Frassinetti, L.; Olofsson, K.E.J.; Brunsell, P.R.; Drake, J.R.

    2011-01-01

    The EXTRAP T2R feedback system (active coils, sensor coils and controller) is used to study and develop new tools for advanced control of the MHD instabilities in fusion plasmas. New feedback algorithms developed in the EXTRAP T2R reversed-field pinch allow flexible and independent control of each magnetic harmonic. Methods developed in control theory and applied to EXTRAP T2R allow a closed-loop identification of the machine plant and of the resistive wall mode growth rates. The plant identification is the starting point for the development of output-tracking algorithms which enable the generation of external magnetic perturbations. These algorithms will then be used to study the effect of a resonant magnetic perturbation (RMP) on the tearing mode (TM) dynamics. It will be shown that a stationary RMP can induce oscillations in the amplitude and jumps in the phase of the rotating TM, and that the RMP strongly affects the magnetic island position.

  9. Implementation of advanced feedback control algorithms for controlled resonant magnetic perturbation physics studies on EXTRAP T2R

    Science.gov (United States)

    Frassinetti, L.; Olofsson, K. E. J.; Brunsell, P. R.; Drake, J. R.

    2011-06-01

    The EXTRAP T2R feedback system (active coils, sensor coils and controller) is used to study and develop new tools for advanced control of the MHD instabilities in fusion plasmas. New feedback algorithms developed in the EXTRAP T2R reversed-field pinch allow flexible and independent control of each magnetic harmonic. Methods developed in control theory and applied to EXTRAP T2R allow a closed-loop identification of the machine plant and of the resistive wall mode growth rates. The plant identification is the starting point for the development of output-tracking algorithms which enable the generation of external magnetic perturbations. These algorithms will then be used to study the effect of a resonant magnetic perturbation (RMP) on the tearing mode (TM) dynamics. It will be shown that a stationary RMP can induce oscillations in the amplitude and jumps in the phase of the rotating TM, and that the RMP strongly affects the magnetic island position.

  10. Bayesian Algorithm Implementation in a Real Time Exposure Assessment Model on Benzene with Calculation of Associated Cancer Risks

    Directory of Open Access Journals (Sweden)

    Pavlos A. Kassomenos

    2009-02-01

    The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorological conditions and amount of fuel traded) determined by the appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict the benzene exposure pattern for the filling station employees. Furthermore, a Physiology Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained by the ANN model. A Bayesian algorithm was involved at crucial points of both model subcompartments. The application was evaluated in two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and seemed to be a promising technique for prediction of the exposure pattern of this occupational population group. In assessing the estimated leukemia risk under the scope of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations.

  11. Bayesian algorithm implementation in a real time exposure assessment model on benzene with calculation of associated cancer risks.

    Science.gov (United States)

    Sarigiannis, Dimosthenis A; Karakitsios, Spyros P; Gotti, Alberto; Papaloukas, Costas L; Kassomenos, Pavlos A; Pilidis, Georgios A

    2009-01-01

    The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorological conditions and amount of fuel traded) determined by the appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict the benzene exposure pattern for the filling station employees. Furthermore, a Physiology Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained by the ANN model. A Bayesian algorithm was involved at crucial points of both model subcompartments. The application was evaluated in two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and seemed to be a promising technique for prediction of the exposure pattern of that occupational population group. In assessing the estimated leukemia risk under the scope of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations.

  12. Programmatic implications of implementing the relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory sites, test volumes, platform distribution and space requirements

    Directory of Open Access Journals (Sweden)

    Naseem Cassim

    2017-02-01

    Introduction: CD4 testing in South Africa is based on an integrated tiered service delivery model that matches testing demand with capacity. The National Health Laboratory Service has predominantly implemented laboratory-based CD4 testing. Coverage gaps, over-/under-capacitation and optimal placement of point-of-care (POC) testing sites need investigation. Objectives: We assessed the impact of relational algebraic capacitated location (RACL) algorithm outcomes on the allocation of laboratory and POC testing sites. Methods: The RACL algorithm was developed to allocate laboratories and POC sites to ensure coverage using a set-coverage approach for a defined travel time (T). The algorithm was repeated for three scenarios (A: T = 4; B: T = 3; C: T = 2 hours). Drive times for a representative sample of health facility clusters were used to approximate T. Outcomes included allocation of testing sites, Euclidian distances and test volumes. Additional analysis included platform distribution and space requirement assessment. Scenarios were reported as fusion table maps. Results: Scenario A would offer a fully-centralised approach with 15 CD4 laboratories without any POC testing. A significant increase in volumes would result in a four-fold increase at busier laboratories. CD4 laboratories would increase to 41 in scenario B and 61 in scenario C. POC testing would be offered at two sites in scenario B and 20 sites in scenario C. Conclusion: The RACL algorithm provides an objective methodology to address coverage gaps through the allocation of CD4 laboratories and POC sites for a given T. The algorithm outcomes need to be assessed in the context of local conditions.
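
    The RACL formulation itself is not spelled out in the abstract, but its set-coverage core can be illustrated with the classical greedy set-cover heuristic: repeatedly pick the candidate site that covers the most still-uncovered facility clusters within travel time T. The cluster and site data below are hypothetical.

    def greedy_site_allocation(clusters, coverage):
        """Greedy set cover: pick candidate sites until all clusters are covered.

        coverage: dict site -> set of cluster ids reachable within travel time T.
        Returns the chosen sites (not guaranteed optimal, but the standard
        ln(n)-approximation for set cover).
        """
        uncovered = set(clusters)
        chosen = []
        while uncovered:
            # Pick the site covering the most still-uncovered clusters
            site = max(coverage, key=lambda s: len(coverage[s] & uncovered))
            gained = coverage[site] & uncovered
            if not gained:
                raise ValueError("remaining clusters cannot be covered")
            chosen.append(site)
            uncovered -= gained
        return chosen

    # Hypothetical clusters and candidate laboratory/POC sites
    clusters = range(6)
    coverage = {"lab_A": {0, 1, 2}, "lab_B": {2, 3}, "poc_C": {3, 4, 5}, "poc_D": {5}}
    print(greedy_site_allocation(clusters, coverage))  # ['lab_A', 'poc_C']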

  13. Continuous-Variable Quantum Computation of Oracle Decision Problems

    Science.gov (United States)

    Adcock, Mark R. A.

    Quantum information processing is appealing due to its ability to solve certain problems quantitatively faster than classical information processing. Most quantum algorithms have been studied in discretely parameterized systems, but many quantum systems are continuously parameterized. The field of quantum optics in particular has sophisticated techniques for manipulating continuously parameterized quantum states of light, but the lack of a code-state formalism has hindered the study of quantum algorithms in these systems. To address this situation, a code-state formalism for the solution of oracle decision problems in continuously-parameterized quantum systems is developed. In the infinite-dimensional case, we study continuous-variable quantum algorithms for the solution of the Deutsch-Jozsa oracle decision problem implemented within a single harmonic oscillator. Orthogonal states are used as the computational bases, and we show that, contrary to a previous claim in the literature, this implementation of quantum information processing has limitations due to a position-momentum trade-off of the Fourier transform. We further demonstrate that orthogonal encoding bases are not unique, and using the coherent states of the harmonic oscillator as the computational bases, our formalism enables quantifying
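
    For contrast with the continuous-variable setting studied here, the discrete (qubit) Deutsch-Jozsa algorithm is easy to simulate classically with a state vector. A minimal sketch, assuming the standard phase-oracle formulation:

    import numpy as np
    from itertools import product

    def deutsch_jozsa(f, n):
        """State-vector simulation of the textbook n-qubit Deutsch-Jozsa circuit.

        f maps n-bit tuples to 0/1 and is promised constant or balanced.
        Measuring |0...0> on the query register signals a constant f.
        """
        N = 2 ** n
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        Hn = H
        for _ in range(n - 1):
            Hn = np.kron(Hn, H)                     # H^{(x)n}
        state = Hn @ np.eye(N)[0]                   # H^{(x)n} |0...0>
        # Phase oracle: |x> -> (-1)^{f(x)} |x>
        phases = np.array([(-1) ** f(x) for x in product((0, 1), repeat=n)])
        state = Hn @ (phases * state)
        p_zero = abs(state[0]) ** 2                 # probability of |0...0>
        return "constant" if p_zero > 0.5 else "balanced"

    print(deutsch_jozsa(lambda x: 0, 3))            # -> 'constant'
    print(deutsch_jozsa(lambda x: x[0], 3))         # -> 'balanced'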

  14. A 0.13-µm implementation of 5 Gb/s and 3-mW folded parallel architecture for AES algorithm

    Science.gov (United States)

    Rahimunnisa, K.; Karthigaikumar, P.; Kirubavathy, J.; Jayakumar, J.; Kumar, S. Suresh

    2014-02-01

    A new architecture for encrypting and decrypting confidential data using the Advanced Encryption Standard algorithm is presented in this article. The structure combines a folded structure with a parallel architecture to increase the throughput. The whole architecture achieves high throughput with low power. The proposed architecture is implemented in 0.13-µm complementary metal-oxide-semiconductor (CMOS) technology. The proposed structure is compared with different existing structures, and the results show that it gives higher throughput and lower power than existing works.

  15. Template characterization and correlation algorithm created from segmentation for the iris biometric authentication based on analysis of textures implemented on a FPGA

    International Nuclear Information System (INIS)

    Giacometto, F J; Vilardy, J M; Torres, C O; Mattos, L

    2011-01-01

    Among the biometric signals most used to set personal security permissions, iris recognition based on textures and blood-vessel images has taken on increasing importance, because these two characteristics are rich in features unique to each individual. This paper presents an implementation of a characterization and correlation algorithm for templates created for biometric authentication based on iris texture analysis, programmed on an FPGA (Field Programmable Gate Array); authentication is based on processes such as characterization methods built on frequency analysis of the sample, and frequency-domain correlation to obtain the expected authentication results.

  16. PRELIMINARY STUDY FOR THE IMPLEMENTATION OF AN IMAGE ANALYSIS ALGORITHM TO DETECT DAIRY COW PRESENCE AT THE FEED BARRIER

    Directory of Open Access Journals (Sweden)

    Simona M.C. Porto

    2012-06-01

    The objective of this study was to investigate the applicability of the Viola-Jones algorithm for continuous detection of the feeding behaviour of dairy cows housed in an open free-stall barn. A methodology was proposed in order to train, test and validate the classifier. A lower number of positive and negative images than that used by Viola and Jones was required during the training. The testing produced the following results: a hit rate of about 97.85%, a missed rate of about 2.15%, and a false positive rate of about 0.67%. The validation was carried out by an accuracy assessment procedure which required the time-consuming work of an operator who labelled the true position of the cows within the barn and their behaviours. The accuracy assessment revealed that among the 715 frames about 90.63% contained only true positives, whereas about 9.37% were affected by underestimation, i.e., contained also one or two false negatives. False positives occurred only in 2.93% of the analyzed frames. Though a moderate mismatch between the testing and the validation performances was registered, the results obtained revealed the adequacy of the Viola-Jones algorithm for detecting the feeding behaviour of dairy cows housed in open free-stall barns. This, in turn, opens up opportunities for an automatic analysis of cow behaviour.
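
    The trained cow classifier from the study is not publicly distributed, but applying a trained Viola-Jones cascade takes only a few lines with OpenCV; the cascade file and frame names below are hypothetical placeholders.

    import cv2

    # Hypothetical cascade trained on cow images, as in the study;
    # OpenCV ships e.g. haarcascade_frontalface_default.xml for faces.
    cascade = cv2.CascadeClassifier("cascade_cow_feeding.xml")
    frame = cv2.imread("barn_frame.jpg")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # normalize barn illumination

    # Multi-scale sliding-window detection, the core of Viola-Jones
    detections = cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,   # image pyramid step between scales
        minNeighbors=4,    # overlapping hits required to accept a detection
        minSize=(40, 40),
    )
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", frame)
    print(f"{len(detections)} cows detected at the feed barrier")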

  17. Implementation of trigger algorithms and studies for the measurement of the Higgs boson self-coupling in the ATLAS experiment at the LHC

    CERN Document Server

    Dahlhoff, Andrea

    2006-01-01

    At the LHC in Geneva the ATLAS experiment will start in 2007. The first part of the present work describes the implementation of trigger algorithms for the Jet/Energy Processor (JEP) as well as all other required features like controlling, diagnostics and read-out. The JEP is one of three processing units of the ATLAS Level-1 Calorimeter Trigger. It identifies and finds the location of jets, and sums total and missing transverse energy information from the trigger data. The Jet/Energy Module (JEM) is the main module of the JEP. The JEM prototype is designed to be functionally identical to the final production module for ATLAS. The thesis presents a description of the architecture, required functionality, and jet and energy summation algorithm of the JEM. Various input test vector patterns were used to check the performance of the complete energy summation algorithm. The test results using two JEM prototypes are presented and discussed. The subject of the second part is a Monte-Carlo study which determines the ...

  18. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    Science.gov (United States)

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.

  19. An Implementation of Document Image Reconstruction System on a Smart Device Using a 1D Histogram Calibration Algorithm

    Directory of Open Access Journals (Sweden)

    Lifeng Zhang

    2014-01-01

    In recent years, smart devices equipped with imaging functions have spread widely among consumers. It is very convenient for people to record information using these devices. For example, people can photograph one page of a book in a library, or capture an interesting piece of news on a bulletin board when walking down the street. Sometimes, however, a single shot of the full area does not give sufficient resolution for OCR software or for human visual recognition. People would therefore prefer to take several partial character images of a readable size and then stitch them together in an efficient way. In this study, we propose a print document acquisition method using a device with a video camera. A one-dimensional histogram-based self-calibration algorithm is developed for the calibration. Because the calculation cost is low, it can be installed on a smartphone. The simulation results show that the calibration and stitching are performed well.

  20. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    Science.gov (United States)

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of the reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {-1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming the illumination of data arrays, any complex logic operation of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.

  1. Radioiodine therapy of hyperfunctioning thyroid nodules: usefulness of an implemented dose calculation algorithm allowing reduction of radioiodine amount.

    Science.gov (United States)

    Schiavo, M; Bagnara, M C; Pomposelli, E; Altrinetti, V; Calamia, I; Camerieri, L; Giusti, M; Pesce, G; Reitano, C; Bagnasco, M; Caputo, M

    2013-09-01

    Radioiodine is a common option for the treatment of hyperfunctioning thyroid nodules. Due to the expected selective radioiodine uptake by the adenoma, relatively high "fixed" activities are often used. Alternatively, the activity is individually calculated upon the prescription of a fixed value of target absorbed dose. We evaluated the use of an algorithm for personalized radioiodine activity calculation, which as a rule allows the administration of lower radioiodine activities. Seventy-five patients with a single hyperfunctioning thyroid nodule eligible for 131I treatment were studied. The activities of 131I to be administered were estimated by the method described by Traino et al., originally developed for Graves' disease, assuming selective and homogeneous 131I uptake by the adenoma. The method takes into account 131I uptake and its effective half-life, the target (adenoma) volume and its expected volume reduction during treatment. A comparison with the activities calculated by other dosimetric protocols and by the "fixed" activity method was performed. 131I uptake was measured by external counting, thyroid nodule volume by ultrasonography, and thyroid hormones and TSH by ELISA. Remission of hyperthyroidism was observed in all but one patient; the volume reduction of the adenoma was closely similar to that assumed by our model. The effective half-life was highly variable in different patients and critically affected the dose calculation. The administered activities were clearly lower than the "fixed" activities and those prescribed by other protocols. The proposed algorithm proved to be effective also for the treatment of single hyperfunctioning thyroid nodules and allowed a significant reduction of administered 131I activities, without loss of clinical efficacy.
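
    The abstract does not reproduce the Traino et al. formula, but the quantities it lists (uptake, effective half-life, target mass, prescribed dose) combine in a standard MIRD-style relation, D = E_mean · Ã / m with cumulated activity Ã = A0 · U · T_eff / ln 2. A first-principles sketch under the stated assumptions of selective, homogeneous uptake and complete local beta absorption; all numbers, including the mean beta energy constant, are illustrative, not the authors' protocol.

    import numpy as np

    MEAN_BETA_ENERGY_J = 0.192 * 1.602e-13  # I-131 mean beta energy (~0.192 MeV)

    def administered_activity(dose_gy, mass_g, uptake_frac, t_eff_days):
        """Activity A0 (MBq) delivering a target beta dose to the nodule.

        Assumes selective, homogeneous uptake and complete local beta
        absorption: D = E_mean * A_cum / m, with cumulated activity
        A_cum = A0 * uptake * T_eff / ln(2).
        """
        t_eff_s = t_eff_days * 86400.0
        mass_kg = mass_g / 1000.0
        a0_bq = (dose_gy * mass_kg * np.log(2)
                 / (uptake_frac * t_eff_s * MEAN_BETA_ENERGY_J))
        return a0_bq / 1e6  # Bq -> MBq

    # Illustrative values: 300 Gy to a 10 g nodule, 40% uptake, T_eff = 5 d
    print(f"{administered_activity(300, 10, 0.40, 5.0):.0f} MBq")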

  2. Implementation of the ALERT algorithm, a new dispatcher-assisted telephone cardiopulmonary resuscitation protocol, in non-Advanced Medical Priority Dispatch System (AMPDS) Emergency Medical Services centres.

    Science.gov (United States)

    Stipulante, Samuel; Tubes, Rebecca; El Fassi, Mehdi; Donneau, Anne-Francoise; Van Troyen, Barbara; Hartstein, Gary; D'Orio, Vincent; Ghuysen, Alexandre

    2014-02-01

    Early bystander cardiopulmonary resuscitation (CPR) is a key factor in improving survival from out-of-hospital cardiac arrest (OHCA). The ALERT (Algorithme Liégeois d'Encadrement à la Réanimation par Téléphone) algorithm has the potential to help bystanders initiate CPR. This study evaluates the effectiveness of the implementation of this protocol in a non-Advanced Medical Priority Dispatch System area. We designed a before-and-after study based on a 3-month retrospective assessment of victims of OHCA in 2009, before the implementation of the ALERT protocol in the Liege emergency medical communication centre (EMCC), and the prospective evaluation of the same 3 months in 2011, immediately after the implementation. At the moment of the call, dispatchers were able to identify 233 OHCA in the first period and 235 in the second. Victims were predominantly male (59%, both periods), with mean ages of 64.1 and 63.9 years, respectively. In 2009, only 9.9% of victims benefited from bystander CPR; this increased to 22.5% in 2011 (p<0.0002). The main reasons for protocol under-utilisation were: assistance not offered by the dispatcher (42.3%) and the caller being physically remote from the victim (20.6%). Median time from call to first compression, defined here as no-flow time, was 253 s in 2009 and 168 s in 2011 (NS). Ten victims were admitted to hospital after ROSC in 2009 and 13 in 2011 (p=0.09). From the beginning and despite its under-utilisation, the ALERT protocol significantly improved the number of patients in whom bystander CPR was attempted. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. A parallel implementation of the Wuchty algorithm with additional experimental filters to more thoroughly explore RNA conformational space.

    Directory of Open Access Journals (Sweden)

    Jonathan W Stone

    We present new modifications to the Wuchty algorithm in order to better define and explore possible conformations for an RNA sequence. The new features, including parallelization, energy-independent lonely pair constraints, context-dependent chemical probing constraints, helix filters, and optional multibranch loops, provide useful tools for exploring the landscape of RNA folding. Chemical probing alone may not necessarily define a single unique structure. The helix filters and optional multibranch loops are global constraints on RNA structure that are an especially useful tool for generating models of encapsidated viral RNA for which cryoelectron microscopy or crystallography data may be available. The computations generate a combinatorially complete set of structures near a free energy minimum and thus provide data on the density and diversity of structures near the bottom of a folding funnel for an RNA sequence. The conformational landscapes for some RNA sequences may resemble a low, wide basin rather than a steep funnel that converges to a single structure.

  4. Design and Implementation of a Smart LED Lighting System Using a Self Adaptive Weighted Data Fusion Algorithm

    Science.gov (United States)

    Sung, Wen-Tsai; Lin, Jia-Syun

    2013-01-01

    This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is performed with a self-adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by commands given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled, and a self-learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.
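
    The paper's exact fusion rule is not given in the abstract; "self-adaptive weighted data fusion" is commonly realized as inverse-variance weighting, with each sensor's weight adapted from its estimated noise. A minimal sketch under that assumption (the sensor noise levels are made up):

    import numpy as np

    def adaptive_weighted_fusion(readings):
        """Fuse redundant sensor readings with self-adaptive weights.

        readings: (n_sensors, n_samples) array. Each sensor's weight is the
        inverse of its estimated noise variance, normalized to sum to 1:
        the weighting that minimizes the variance of the fused estimate.
        """
        variances = readings.var(axis=1, ddof=1)   # per-sensor noise estimate
        weights = (1.0 / variances) / np.sum(1.0 / variances)
        fused = weights @ readings                 # weighted average per sample
        return fused, weights

    rng = np.random.default_rng(1)
    truth = 230.0  # e.g. mains voltage seen by three meters
    readings = truth + rng.normal(0, [[0.5], [2.0], [5.0]], size=(3, 100))
    fused, w = adaptive_weighted_fusion(readings)
    print(w, fused.mean())  # the quietest sensor dominates the fusion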

  5. Ant colony algorithm implementation in electron and photon Monte Carlo transport: Application to the commissioning of radiosurgery photon beams

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2010-07-15

    Purpose: In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool to simulate the radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
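
    The coupling of the ant colony algorithm to the variance-reduction machinery is specific to the authors' code; the generic ingredient, an importance map updated by evaporation and deposition along useful particle histories, can be caricatured on a 1D toy slab. This is an illustration of the evaporation/deposition idea only, not the authors' scheme, and all parameters are made up.

    import numpy as np

    rng = np.random.default_rng(2)
    n_cells, rho, n_batches = 10, 0.1, 200
    importance = np.ones(n_cells)   # pheromone-like importance map

    def run_particle(importance):
        """Toy 1D random walk; returns visited cells and whether it scored."""
        cell, visited = 0, []
        while 0 <= cell < n_cells:
            visited.append(cell)
            # Biased step: prefer moving toward higher-importance neighbours
            right = importance[min(cell + 1, n_cells - 1)]
            left = importance[max(cell - 1, 0)]
            cell += 1 if rng.random() < right / (right + left) else -1
        return visited, cell >= n_cells   # scored if it escaped on the far side

    for _ in range(n_batches):
        visited, scored = run_particle(importance)
        importance *= (1.0 - rho)                 # pheromone evaporation
        if scored:
            for c in set(visited):
                importance[c] += 1.0              # reinforce useful histories
        importance = np.maximum(importance, 1e-3)

    print(np.round(importance / importance.max(), 2))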

  6. Ant colony algorithm implementation in electron and photon Monte Carlo transport: Application to the commissioning of radiosurgery photon beams

    International Nuclear Information System (INIS)

    Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M.

    2010-01-01

    Purpose: In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool to simulate the radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.

  7. Implementation of strength pareto evolutionary algorithm II in the multiobjective burnable poison placement optimization of KWU pressurized water reactor

    International Nuclear Information System (INIS)

    Gharari, Rahman; Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi

    2016-01-01

    In this research, for the first time, a new optimization method, i.e., strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used for solving the BPP problem of the Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (k_eff) for gaining possible longer operation cycles along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare obtained results. In general, results reveal the acceptable performance and high strength of SPEA, particularly its new version, i.e., SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.

  8. Design and Implementation of a Smart LED Lighting System Using a Self Adaptive Weighted Data Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-Tsai Sung

    2013-12-01

    This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is performed with a self-adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by commands given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled, and a self-learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.

  9. Ant colony algorithm implementation in electron and photon Monte Carlo transport: application to the commissioning of radiosurgery photon beams.

    Science.gov (United States)

    García-Pareja, S; Galán, P; Manzano, F; Brualla, L; Lallena, A M

    2010-07-01

    In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within approximately 3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool to simulate the radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.

  10. Implementation of strength pareto evolutionary algorithm II in the multiobjective burnable poison placement optimization of KWU pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gharari, Rahman [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Poursalehi, Navid; Abbasi, Mohmmadreza; Aghale, Mahdi [Nuclear Engineering Dept, Shahid Beheshti University, Tehran (Iran, Islamic Republic of)

    2016-10-15

    In this research, for the first time, a new optimization method, i.e., strength Pareto evolutionary algorithm II (SPEA-II), is developed for the burnable poison placement (BPP) optimization of a nuclear reactor core. In the BPP problem, an optimized placement map of fuel assemblies with burnable poison is searched for a given core loading pattern according to defined objectives. In this work, SPEA-II coupled with a nodal expansion code is used for solving the BPP problem of the Kraftwerk Union AG (KWU) pressurized water reactor. Our optimization goal for the BPP is to achieve a greater multiplication factor (k_eff) for gaining possible longer operation cycles along with more flattening of the fuel assembly relative power distribution, considering a safety constraint on the radial power peaking factor. For appraising the proposed methodology, the basic approach, i.e., SPEA, is also developed in order to compare obtained results. In general, results reveal the acceptable performance and high strength of SPEA, particularly its new version, i.e., SPEA-II, in achieving a semioptimized loading pattern for the BPP optimization of the KWU pressurized water reactor.

  11. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    Science.gov (United States)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms and the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade, composed of motion detection, depth computation, and edge detection, can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which results in the system being able to process up to 50 images of 1024×768 pixels per second with a significantly reduced number of false positives.
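
    The FPGA datapath is not reproducible in a few lines, but the idea of a visual-feature-directed search cascade, cheap per-window gates that discard most windows before the costly classifier, can be sketched in Python with a motion gate and an edge-energy gate (window size, strides and thresholds are illustrative):

    import numpy as np

    def candidate_windows(prev, curr, win=24, stride=12,
                          motion_thresh=10, edge_thresh=20):
        """Cheap motion + edge gating before an expensive classifier.

        Returns top-left corners of windows that show both frame-to-frame
        motion and sufficient edge energy; only these reach the classifier.
        """
        motion = np.abs(curr.astype(int) - prev.astype(int))
        gy, gx = np.gradient(curr.astype(float))
        edges = np.hypot(gx, gy)
        keep = []
        for y in range(0, curr.shape[0] - win, stride):
            for x in range(0, curr.shape[1] - win, stride):
                m = motion[y:y + win, x:x + win].mean()
                e = edges[y:y + win, x:x + win].mean()
                if m > motion_thresh and e > edge_thresh:
                    keep.append((y, x))
        return keep

    rng = np.random.default_rng(3)
    prev = rng.integers(0, 255, (768, 1024), dtype=np.uint8)
    curr = prev.copy(); curr[100:200, 300:400] = 255   # a "moving object"
    total = len(range(0, 768 - 24, 12)) * len(range(0, 1024 - 24, 12))
    kept = candidate_windows(prev, curr)
    print(f"{len(kept)}/{total} windows passed to the classifier")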

  12. THE IMPLEMENTATION OF A SIMPLE LINEAR REGRESSION ALGORITHM ON CASSAVA FACTORY DATA FROM SINAR LAUT IN NORTH LAMPUNG

    Directory of Open Access Journals (Sweden)

    Dwi Marisa Efendi

    2018-04-01

    Cassava is a plant that can be grown in tropical climates, and it is one of the leading commodities of the plantation subsector. Cassava is the main raw material of tapioca (sago) flour, which is currently experiencing a price decline. The abundant supply of tapioca flour is caused by increased cassava planting by individual farmers, and with more cassava planted on farmers' plantations, the price farmers receive for cassava is no longer adequate. Factories making tapioca flour therefore often buy an excess of raw cassava; as a result, much cassava rots and the factory buys it at a low price. Based on this problem, this research uses data mining modeled with a linear regression algorithm, with the aim of estimating the amount of tapioca flour that can be produced, so that in the future the balance between the cassava supply and tapioca production can be improved. The variables used in the regression analysis are a dependent variable, the amount of tapioca in kg, symbolized by Y, and an independent variable, the amount of milled cassava, symbolized by X. From the data obtained, with a 95% confidence level, the coefficient of determination (R2) is 1.00, and the estimation results are close to the actual data values, with an average error of 0.00.
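
    The regression itself is standard; a minimal sketch with hypothetical cassava/tapioca records, fitting y = b0 + b1·x by least squares and reporting R²:

    import numpy as np

    # Hypothetical records: milled cassava (kg) vs. tapioca flour produced (kg)
    x = np.array([1200.0, 1500, 1800, 2000, 2400, 3000])   # cassava milled (X)
    y = np.array([300.0, 370, 452, 498, 601, 749])         # tapioca output (Y)

    # Least-squares fit of y = b0 + b1 * x
    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x

    # Coefficient of determination R^2
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot

    print(f"y = {b0:.2f} + {b1:.4f} x,  R^2 = {r2:.4f}")
    print(f"forecast for 2600 kg cassava: {b0 + b1 * 2600:.0f} kg tapioca")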

  13. Implementation of a data fusion algorithm for RODS, a real-time outbreak and disease surveillance system.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Douglas (Sandia National Laboratories, Livermore, CA); Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA)

    2005-10-01

    Due to the nature of many infectious agents, such as anthrax, symptoms may either take several days to manifest or resemble those of less serious illnesses leading to misdiagnosis. Thus, bioterrorism attacks that include the release of such agents are particularly dangerous and potentially deadly. For this reason, a system is needed for the quick and correct identification of disease outbreaks. The Real-time Outbreak Disease Surveillance System (RODS), initially developed by Carnegie Mellon University and the University of Pittsburgh, was created to meet this need. The RODS software implements different classifiers for pertinent health surveillance data in order to determine whether or not an outbreak has occurred. In an effort to improve the capability of RODS at detecting outbreaks, we incorporate a data fusion method. Data fusion is used to improve the results of a single classification by combining the output of multiple classifiers. This paper documents the first stages of the development of a data fusion system that can combine the output of the classifiers included in RODS.
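
    The RODS fusion method is only outlined in the abstract; the simplest instance of combining the output of multiple classifiers is a weighted average of their outbreak probabilities. A minimal sketch, with hypothetical detector outputs, weights and alarm threshold:

    import numpy as np

    def fuse(probabilities, weights=None):
        """Combine per-classifier outbreak probabilities into one score.

        probabilities: outbreak probabilities from the individual detectors.
        The fusion here is a weighted mean; the weights would normally
        reflect each classifier's historical accuracy.
        """
        p = np.asarray(probabilities, dtype=float)
        w = np.ones_like(p) if weights is None else np.asarray(weights, float)
        return float(np.average(p, weights=w))

    # Three hypothetical detectors scoring today's syndromic counts
    p_outbreak = fuse([0.82, 0.55, 0.73], weights=[0.5, 0.2, 0.3])
    print("alarm" if p_outbreak > 0.7 else "no alarm", round(p_outbreak, 3))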

  14. IFACEwat: the interfacial water-implemented re-ranking algorithm to improve the discrimination of near native structures for protein rigid docking.

    Science.gov (United States)

    Su, Chinh; Nguyen, Thuy-Diem; Zheng, Jie; Kwoh, Chee-Keong

    2014-01-01

    …near-native structures found. As our implementation has so far targeted improving the results of ZDOCK 3.0.2, particularly for antigen/antibody complexes, it is expected that in the near future further implementations will be conducted to make the method applicable to other initial rigid docking algorithms.

  15. Implementation of Freeman-Wimley prediction algorithm in a web-based application for in silico identification of beta-barrel membrane proteins

    Directory of Open Access Journals (Sweden)

    José Antonio Agüero-Fernández

    2015-11-01

    Beta-barrel type proteins play an important role in both human and veterinary medicine. In particular, their localization on the bacterial surface and their involvement in the virulence mechanisms of pathogens have turned them into an interesting target in studies searching for vaccine candidates. Recently, Freeman and Wimley developed a prediction algorithm based on the physicochemical properties of transmembrane beta-barrel proteins (TMBBs). Based on that algorithm, and using Grails, a web-based application was implemented. This system, named Beta Predictor, is capable of processing anything from one protein sequence to complete predicted proteomes of up to 10000 proteins, with a runtime of about 0.019 seconds per 500-residue protein, and it allows graphical analyses for each protein. The application was evaluated with a validation set of 535 non-redundant proteins, 102 TMBBs and 433 non-TMBBs. The sensitivity, specificity, Matthews correlation coefficient, positive predictive value and accuracy were calculated, being 85.29%, 95.15%, 78.72%, 80.56% and 93.27%, respectively. The performance of this system was compared with that of the TMBB predictors BOMP and TMBHunt, using the same validation set. In the order mentioned above, the following results were obtained: 76.47%, 99.31%, 83.05%, 96.30% and 94.95% for BOMP, and 78.43%, 92.38%, 67.90%, 70.17% and 89.78% for TMBHunt. Beta Predictor was outperformed by BOMP but showed better behavior than TMBHunt.
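
    The five reported metrics all derive from a 2x2 confusion matrix; back-solving the percentages against the 102 TMBBs and 433 non-TMBBs gives approximately TP = 87, FN = 15, TN = 412, FP = 21 (a reconstruction, not counts stated by the authors). A sketch that recomputes the figures:

    import math

    def classification_metrics(tp, tn, fp, fn):
        """Sensitivity, specificity, MCC, PPV and accuracy from counts."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)
        acc = (tp + tn) / (tp + tn + fp + fn)
        mcc = (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return sens, spec, mcc, ppv, acc

    # Counts back-derived from the reported Beta Predictor percentages
    for name, value in zip(("sensitivity", "specificity", "MCC", "PPV", "accuracy"),
                           classification_metrics(87, 412, 21, 15)):
        print(f"{name}: {value:.4f}")   # matches 85.29/95.15/78.72/80.56/93.27%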

  16. Glycaemic control and implementation of the ADA/EASD-2006 consensus algorithm in type 2 diabetes mellitus patients in primary care in Spain.

    Science.gov (United States)

    Alvarez-Guisasola, F

    2014-01-01

    In 2006, the American Diabetes Association and the European Association for the Study of Diabetes established a consensus algorithm (ADA/EASD-2006) for the adjustment of drug therapy for type 2 diabetes mellitus (T2DM). To study glycaemic control in T2DM patients and the implementation of the ADA/EASD-2006 recommendations in primary care centres in Spain, a prospective observational study was conducted in 1194 patients with T2DM in 250 primary care centres in Spain. Patients were assessed at study inclusion (V0) and at 3 (V1) and 6 months (V2) post baseline. Information was collected on the level of DM control, HbA(1c), the proportion of patients with controlled HbA(1c) (HbC) and adherence to the ADA/EASD-2006 guidelines. Type 2 diabetes mellitus patients (53% women; mean age 64.9 years) had a mean (SD) HbA(1c) of 7.8 (1.4)% and an HbC of 25.2% at baseline; 95% of them were receiving oral antihyperglycaemic agents (AAs) only. At V1, HbA(1c) was 7.3 (1.1)% and HbC was 38.1%; 65.0% of patients were receiving oral AAs, 5.6% insulin and 27.9% oral AAs plus insulin. At V2, HbA(1c) was 7.1 (0.9)% and HbC was 48.0%; 57.1% of patients were receiving oral AAs, 5.0% insulin and 36.9% oral AAs plus insulin. The ADA/EASD-2006 algorithm was adhered to in 33% of patients up to study month 3, vs. 17.2% throughout the entire 6-month period. In patients with T2DM seen in primary care, the HbA1c target was met in 48.0% after adjusting their AAs. However, this is not reflected in greater implementation of the ADA/EASD-2006 guidelines, which were adhered to in only 17%. © 2013 John Wiley & Sons Ltd.

  17. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the growing number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject variation), intra-method (within-subject variation), and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p-values < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p-values < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed

  18. NETWORK SIMPLEX: ALGORITHM AND IMPLEMENTATION

    OpenAIRE

    JOAQUIM PEDRO DE V CORDEIRO

    2008-01-01

    This work develops the Network Simplex method for the solution of minimum cost flow problems. The method is an adaptation of the primal simplex method in which the specific characteristics of the network underlying the problem are exploited by searching for the optimal solution within a finite number of spanning trees. The optimal spanning tree is obtained iteratively through successive improvements to the structure of each tree formed. The greatest ef...
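
    A ready-made implementation of exactly this spanning-tree-based primal method ships with NetworkX as network_simplex; a small minimum-cost flow instance (the edge data are made up):

    import networkx as nx

    # Small min-cost flow instance: negative demand marks a supply node
    G = nx.DiGraph()
    G.add_node("s", demand=-4)   # supplies 4 units
    G.add_node("t", demand=4)    # demands 4 units
    G.add_edge("s", "a", weight=2, capacity=3)
    G.add_edge("s", "b", weight=5, capacity=4)
    G.add_edge("a", "t", weight=1, capacity=3)
    G.add_edge("b", "t", weight=1, capacity=4)

    # network_simplex pivots over spanning trees of the network
    flow_cost, flow_dict = nx.network_simplex(G)
    print(flow_cost)   # 3*(2+1) + 1*(5+1) = 15
    print(flow_dict)   # per-edge optimal flows, e.g. {'s': {'a': 3, 'b': 1}, ...}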

  19. A hybrid, massively parallel implementation of a genetic algorithm for optimization of the impact performance of a metal/polymer composite plate

    KAUST Repository

    Narayanan, Kiran

    2012-07-17

    A hybrid parallelization method composed of a coarse-grained genetic algorithm (GA) and fine-grained objective function evaluations is implemented on a heterogeneous computational resource consisting of 16 IBM Blue Gene/P racks, a single x86 cluster node and a high-performance file system. The GA iterator is coupled with a finite-element (FE) analysis code developed in house to facilitate computational steering in order to calculate the optimal impact velocities of a projectile colliding with a polyurea/structural steel composite plate. The FE code is capable of capturing adiabatic shear bands and strain localization, which are typically observed in high-velocity impact applications, and it includes several constitutive models of plasticity, viscoelasticity and viscoplasticity for metals and soft materials, which allow simulation of ductile fracture by void growth. A strong scaling study of the FE code was conducted to determine the optimum number of processes run in parallel. The relative efficiency of the hybrid, multi-level parallelization method is studied in order to determine the parameters for the parallelization. Optimal impact velocities of the projectile calculated using the proposed approach, are reported. © The Author(s) 2012.
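
    The in-house FE code is the expensive part; the surrounding coarse-grained GA with parallel fine-grained evaluations can be sketched with Python's multiprocessing, a toy fitness standing in for the impact simulation (population size, operators and rates are illustrative assumptions):

    import random
    from multiprocessing import Pool

    def fitness(genome):
        """Stand-in for the expensive FE impact simulation (one per genome)."""
        return -sum((g - 0.5) ** 2 for g in genome)

    def evolve(pop_size=32, genome_len=4, gens=20, workers=4):
        rng = random.Random(0)
        pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
        with Pool(workers) as pool:                   # fine-grained evaluations
            for _ in range(gens):                     # coarse-grained GA loop
                scores = pool.map(fitness, pop)
                ranked = [g for _, g in sorted(zip(scores, pop),
                                               key=lambda t: t[0], reverse=True)]
                parents = ranked[: pop_size // 2]     # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = rng.sample(parents, 2)
                    cut = rng.randrange(1, genome_len)
                    child = a[:cut] + b[cut:]         # one-point crossover
                    i = rng.randrange(genome_len)
                    child[i] += rng.gauss(0, 0.05)    # Gaussian mutation
                    children.append(child)
                pop = parents + children
        best = max(pop, key=fitness)
        return best, fitness(best)

    if __name__ == "__main__":
        print(evolve())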

  20. SYSTEM OF MODEL FOR TRAINING FUTURE MASTERS OF TOURISM, AS WELL AS THE ALGORITHM OF ITS PRODUCTIVE IMPLEMENTATION IN HIGHER EDUCATION

    Directory of Open Access Journals (Sweden)

    Larisa Beskorovaynaya

    2017-03-01

    On the basis of theoretical analysis, the author substantiates a system model for training future masters of tourism in higher education. The author identifies the methodological basis for the preparation of future tourism masters in the field of higher education; in addition, the system model of their training is theoretically grounded, its components are described, and a set of organizational and pedagogical conditions is presented. The system model of professional training of future masters of tourism in higher education, considered here as an open, integrative, multilevel, mobile educational system adequate to social requirements and the individual needs of students, contains the following components: theoretical, methodological, structural-functional, design-technological and analytical-criterion. The author shows that the model makes it possible to reflect and recreate the individual readiness of future masters of tourism, with a view to forecasting its features and operation and to its further successful implementation in educational practice. The author also researched and proposed an algorithm for the productive use of the system model of training future tourism masters in the field of higher education. Conclusions were drawn from the results of the research, and prospective directions for further research are provided.

  1. On the implementation of new versions of absorbed dose calculation algorithms in external radiotherapy; Sobre la implementacion de nuevas versiones de los algoritmos de calculo de dosis absorbida en radioterapia externa

    Energy Technology Data Exchange (ETDEWEB)

    Latorre-Musoll, A.; Carrasco de Fez, P.; Lizondo Gisbert, M.; Jordi-Ollero, O.; Jornet Sala, N.; Eudaldo Puell, T.; Ruiz Martinez, A.; Ribas Morales, M.

    2015-07-01

    New versions of the absorbed dose calculation algorithms used in external radiotherapy must be implemented within a reduced time frame because of clinical pressure. A reduced set of checks may miss significant discrepancies between the calculations and the experimental measurements, as illustrated in this work. (Author)

  2. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    Science.gov (United States)

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form, and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of the target in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
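
    The computational trick is the push-through identity (JᵀJ + λI)⁻¹Jᵀ = Jᵀ(JJᵀ + λI)⁻¹, so an under-determined update can be computed by inverting an m x m rather than an n x n system. A numerical check of the equivalence for a Tikhonov-type update (random J; the GLS weight matrices of the paper are simplified to scaled identities here):

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, lam = 50, 2000, 1e-2       # few measurements, many voxels
    J = rng.standard_normal((m, n))  # Jacobian (sensitivity matrix)
    b = rng.standard_normal(m)       # data-model misfit

    # Primal update: solve the n x n system (expensive when n >> m)
    x_primal = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ b)

    # Dual form via Sherman-Morrison-Woodbury: solve only an m x m system
    x_dual = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), b)

    print(np.allclose(x_primal, x_dual))  # True: the two forms are identical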

  3. An easy-to-implement and efficient data assimilation method for the identification of the initial condition: the Back and Forth Nudging (BFN) algorithm

    International Nuclear Information System (INIS)

    Auroux, Didier; Bansart, Patrick; Blum, Jacques

    2008-01-01

    This paper deals with a new data assimilation algorithm called the Back and Forth Nudging. The standard nudging technique consists in adding to the model equations a relaxation term, which is supposed to force the model to the observations. The BFN algorithm consists of repeating forward and backward resolutions of the model with relaxation (or nudging) terms that have opposite signs in the direct and inverse resolutions, so as to make the backward evolution numerically stable. We then applied the Back and Forth Nudging algorithm to a simple non-linear model: the 1D viscous Burgers' equation. The tests were carried out through several cases relative to the precision and density of the observations. These simulations were then compared with both the variational assimilation (VAR) and quasi-inverse (QIL) algorithms. The comparisons deal with the programming, the convergence, and the computing time for each of these three algorithms.
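
    A toy version of the back-and-forth sweeps can be written down for a periodic advection model in place of the viscous Burgers' equation (an assumption made here so that one model step is an exact grid shift); the relaxation pulls each sweep toward noisy observations, and repeating the forward/backward pair refines the initial condition. All parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    n, n_steps, K, sweeps = 64, 40, 0.3, 8
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)

    u0_true = np.exp(-4 * (x - np.pi) ** 2)        # true initial state
    # Model: periodic advection by one cell per step (CFL = 1, exact shift)
    traj = [u0_true.copy()]
    for _ in range(n_steps):
        traj.append(np.roll(traj[-1], 1))
    obs = [u + 0.02 * rng.standard_normal(n) for u in traj]  # noisy observations

    u0 = np.zeros(n)                               # poor first guess
    for _ in range(sweeps):
        u = u0.copy()
        for k in range(n_steps):                   # forward sweep with nudging
            u = np.roll(u, 1)
            u -= K * (u - obs[k + 1])              # relax toward observation
        for k in reversed(range(n_steps)):         # backward sweep with nudging
            u = np.roll(u, -1)
            u -= K * (u - obs[k])
        u0 = u                                     # improved initial condition
    print(np.abs(u0 - u0_true).max())              # near the observation noise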

  4. An easy-to-implement and efficient data assimilation method for the identification of the initial condition: the Back and Forth Nudging (BFN) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Auroux, Didier [Institut de Mathematiques, Universite Paul Sabatier Toulouse 3, 31062 Toulouse cedex 9 (France); Bansart, Patrick; Blum, Jacques [Laboratoire J. A. Dieudonne, Universite de Nice Sophia-Antipolis, Parc Valrose, 06108 Nice cedex 2 (France)], E-mail: didier.auroux@math.univ-toulouse.fr

    2008-11-01

    This paper deals with a new data assimilation algorithm called the Back and Forth Nudging. The standard nudging technique consists in adding to the model equations a relaxation term, which is supposed to force the model to the observations. The BFN algorithm consists of repeating forward and backward resolutions of the model with relaxation (or nudging) terms that have opposite signs in the direct and inverse resolutions, so as to make the backward evolution numerically stable. We then applied the Back and Forth Nudging algorithm to a simple non-linear model: the 1D viscous Burgers' equation. The tests were carried out through several cases relative to the precision and density of the observations. These simulations were then compared with both the variational assimilation (VAR) and quasi-inverse (QIL) algorithms. The comparisons deal with the programming, the convergence, and the computing time for each of these three algorithms.

  5. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  6. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  7. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm, an adaptation of the conventional point cyclic reduction algorithm, is discussed in detail, and its performance on a three-parameter model problem is illustrated. The second direct method is Customized Reduction of Augmented Triangles (CRAT), which has the dominant characteristics of an efficient VPS 32 algorithm: it is tailored to the pipeline architecture of the VPS 32 and is consequently implicitly vectorizable.
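
    The point cyclic reduction scheme that VCR adapts can be sketched as follows (a generic serial Python version for a tridiagonal system, shown for clarity, not the VPS 32 code); each level eliminates the odd-indexed unknowns, and on a vector machine each level's loop becomes a single data-parallel sweep:

      import numpy as np

      def cyclic_reduction(a, b, c, d):
          # a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
          # (c[-1] unused), d: right-hand side. Size must be 2**k - 1.
          a, b, c, d = (np.asarray(v, float).copy() for v in (a, b, c, d))
          a[0] = c[-1] = 0.0
          n = len(b)
          stride = 1
          while 2 * stride <= n:            # forward reduction levels
              for i in range(2 * stride - 1, n, 2 * stride):
                  al = a[i] / b[i - stride]
                  ga = c[i] / b[i + stride] if i + stride < n else 0.0
                  b[i] -= al * c[i - stride] + (ga * a[i + stride] if i + stride < n else 0.0)
                  d[i] -= al * d[i - stride] + (ga * d[i + stride] if i + stride < n else 0.0)
                  a[i] = -al * a[i - stride]
                  c[i] = -ga * c[i + stride] if i + stride < n else 0.0
              stride *= 2
          x = np.zeros(n)
          x[stride - 1] = d[stride - 1] / b[stride - 1]   # last remaining unknown
          stride //= 2
          while stride >= 1:                # back substitution levels
              for i in range(stride - 1, n, 2 * stride):
                  xl = x[i - stride] if i >= stride else 0.0
                  xr = x[i + stride] if i + stride < n else 0.0
                  x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
              stride //= 2
          return x

      n = 15
      rng = np.random.default_rng(1)
      a, c, d = rng.random(n), rng.random(n), rng.random(n)
      b = 4.0 + rng.random(n)               # diagonally dominant for stability
      A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d)))  # True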

  8. Using the Pebb Universal Controller to Modify Control Algorithms for DC-To-DC Converters and Implement Closed-Loop Control of ARCP Inverters

    National Research Council Canada - National Science Library

    Floodeen, David

    1998-01-01

    The objective of this thesis is two-fold. The first goal is to expand the operational capabilities of the Ship's Service Converter Module control algorithm for a DC-to-DC converter using the Universal Controller...

  9. Implementation of Freeman-Wimley prediction algorithm in a web-based application for in silico identification of beta-barrel membrane proteins

    OpenAIRE

    José Antonio Agüero-Fernández; Lisandra Aguilar-Bultet; Yandy Abreu-Jorge; Agustín Lage-Castellanos; Yannier Estévez-Dieppa

    2015-01-01

    Beta-barrel proteins play an important role in both human and veterinary medicine. In particular, their localization on the bacterial surface and their involvement in the virulence mechanisms of pathogens have turned them into an interesting target in the search for vaccine candidates. Recently, Freeman and Wimley developed a prediction algorithm based on the physicochemical properties of transmembrane beta-barrel proteins (TMBBs). Based on that algorithm, and using Grails, a web-...
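
    As a toy illustration of physicochemical TMBB scoring (not the actual Freeman-Wimley algorithm: its scale, window rules and thresholds are not reproduced here), one can score a sliding window by the hydrophobicity of alternating, putatively lipid-facing positions, using the generic Kyte-Doolittle scale as a stand-in:

      # Hypothetical sliding-window scorer; sequence, window length and scale
      # are all illustrative assumptions.
      KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
            "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
            "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
            "Y": -1.3, "V": 4.2}

      def strand_scores(seq, win=10):
          # Mean hydrophobicity of every second residue in each window,
          # mimicking the alternating dyad repeat of membrane beta-strands.
          scores = []
          for i in range(len(seq) - win + 1):
              window = seq[i:i + win]
              scores.append(sum(KD[aa] for aa in window[::2]) / (win / 2))
          return scores

      seq = "MKKLLVLGLVAGALASSAQA" + "VEFYGRLGLNYV"   # hypothetical sequence
      print([round(s, 2) for s in strand_scores(seq)][:5])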

  10. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry but are now of interest for use in applications, while others were originally designed for applications and are now of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its Applications.

  11. Nurse-led implementation of an insulin-infusion protocol in a general intensive care unit: improved glycaemic control with increased costs and risk of hypoglycaemia signals need for algorithm revision

    Directory of Open Access Journals (Sweden)

    Bull Eva M

    2008-01-01

    Background: Strict glycaemic control (SGC) has become a contentious issue in modern intensive care. Physicians and nurses are concerned about the increased workload due to SGC, as well as about causing harm through hypoglycaemia. The objective of our study was to evaluate our existing degree of glycaemic control and to implement SGC safely in our ICU through a nurse-led implementation of an algorithm for intensive insulin therapy. Methods: The study took place in the adult general intensive care unit (11 beds) of a 44-bed department of intensive care at a tertiary care university hospital. All patients admitted during the 32 months of the study were enrolled. We retrospectively analysed all arterial blood glucose (BG) results from samples obtained over a period of 20 months prior to the implementation of SGC. We then introduced an algorithm for intensive insulin therapy, aiming for arterial blood glucose of 4.4-6.1 mmol/L. Doctors and nurses were trained in the principles and the potential benefits and risks of SGC. Consecutive statistical analyses of blood samples over a period of 12 months were used to assess performance, provide feedback and uncover incidences of hypoglycaemia. Results: The median BG level was 6.6 mmol/L (interquartile range 5.6 to 7.7 mmol/L) during the period prior to implementation of SGC (494 patients), and fell to 5.9 mmol/L (IQR 5.1 to 7.0 mmol/L) following introduction of the new algorithm (448 patients). The percentage of BG samples > 8 mmol/L was reduced from 19.2% to 13.1%. Before implementation of SGC, 33% of samples were between 4.4 and 6.1 mmol/L, and 12 patients (2.4%) had one or more episodes of severe hypoglycaemia. Conclusion: The retrospective part of the study indicated ample room for improvement. Through the implementation of SGC, the fraction of samples within the new target range increased from 33% to 45.8%. There was also a significant increase in severe hypoglycaemic episodes. There continues to be potential
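
    As a small illustration of the kind of performance feedback the study describes, the sketch below summarizes a list of arterial BG samples; the < 2.2 mmol/L cut-off for severe hypoglycaemia is a common definition assumed here, since the record truncates before stating the study's own threshold:

      import numpy as np

      def glycaemic_summary(bg_mmol_per_l, lo=4.4, hi=6.1, hyper=8.0, severe=2.2):
          # Fraction of samples in the 4.4-6.1 mmol/L target band, fraction
          # above 8 mmol/L, and count of (assumed) severe-hypoglycaemia samples.
          bg = np.asarray(bg_mmol_per_l, float)
          return {
              "median": float(np.median(bg)),
              "in_target_pct": 100.0 * np.mean((bg >= lo) & (bg <= hi)),
              "hyper_pct": 100.0 * np.mean(bg > hyper),
              "severe_hypo_n": int(np.sum(bg < severe)),
          }

      print(glycaemic_summary([5.2, 6.6, 7.9, 8.4, 2.0, 5.8]))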

  12. Comparison of turnaround time and total cost of HIV testing before and after implementation of the 2014 CDC/APHL Laboratory Testing Algorithm for diagnosis of HIV infection.

    Science.gov (United States)

    Chen, Derrick J; Yao, Joseph D

    2017-06-01

    Updated recommendations for HIV diagnostic laboratory testing published by the Centers for Disease Control and Prevention and the Association of Public Health Laboratories incorporate 4th generation HIV immunoassays, which are capable of identifying HIV infection prior to seroconversion. The purpose of this study was to compare turnaround time and cost between 3rd and 4th generation HIV immunoassay-based testing algorithms for initially reactive results. The clinical microbiology laboratory database at Mayo Clinic, Rochester, MN was queried for 3rd generation (November 2012 to May 2014) and 4th generation (May 2014 to November 2015) HIV immunoassay results, and all results from downstream supplemental testing were recorded. Turnaround time (defined as the time from initial sample receipt in the laboratory to the time the final supplemental test in the algorithm was resulted) and cost (based on 2016 Medicare reimbursement rates) were assessed. A total of 76,454 and 78,998 initial tests were performed during the study period using the 3rd generation and 4th generation HIV immunoassays, respectively, yielding 516 (0.7%) and 581 (0.7%) initially reactive results, of which 304 (58.9%) and 457 (78.7%) were positive by supplemental testing. Ten (0.01%) cases of acute HIV infection were identified with the 4th generation algorithm. The most frequent sequence confirming an HIV-positive case under the 3rd generation algorithm, a reactive initial immunoassay followed by a positive HIV-1 Western blot, took a median time of 1.1 days to complete at a cost of $45.00. In contrast, the most frequent sequence under the 4th generation algorithm, a reactive initial immunoassay followed by a positive HIV-1/-2 antibody differentiation immunoassay for HIV-1, took a median time of 0.4 days and cost $63.25. Overall median turnaround time was 2.2 and 1.5 days, and overall median cost was $63.90 and $72.50 for
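
    The decision flow of the 4th generation algorithm described above can be sketched as follows (a simplified illustration of the published CDC/APHL sequence, not laboratory software; the function and argument names are invented for the example):

      def hiv_4th_gen(combo_reactive, diff_result=None, nat_detected=None):
          # combo_reactive: 4th-gen antigen/antibody combination immunoassay.
          # diff_result: 'HIV-1', 'HIV-2', or 'negative'/'indeterminate' from
          # the antibody differentiation immunoassay.
          # nat_detected: HIV-1 NAT result, needed only when differentiation
          # is not positive.
          if not combo_reactive:
              return "HIV negative (no further testing)"
          if diff_result in ("HIV-1", "HIV-2"):
              return f"{diff_result} antibodies confirmed"
          if nat_detected:
              return "Acute HIV-1 infection (antibody-negative, RNA-positive)"
          if nat_detected is False:
              return "Initial reactivity judged false-positive"
          return "Supplemental testing incomplete"

      # The acute-infection path that 3rd generation algorithms miss:
      print(hiv_4th_gen(True, "negative", nat_detected=True))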

  13. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which overcomes some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if directed cyclic graphs are used, the algorithm need not check the binding order, so OLU can also be applied to infinite-tree data structures, and higher efficiency can be expected. The paper focuses on the OLU algorithm and a partial-order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results show that the algorithm is simple and efficient.
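
    For readers unfamiliar with the problem OLU addresses, here is a baseline first-order unification sketch in Python (plain Robinson style with an occurs check; the paper's ordering and binding-term optimizations are not reproduced). Terms are tuples ('f', arg1, ...) for function applications and strings starting with '?' for variables:

      def walk(t, s):
          # Follow variable bindings in substitution s until a non-bound term.
          while isinstance(t, str) and t.startswith("?") and t in s:
              t = s[t]
          return t

      def occurs(v, t, s):
          t = walk(t, s)
          if t == v:
              return True
          return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

      def unify(t1, t2, s=None):
          s = {} if s is None else s
          t1, t2 = walk(t1, s), walk(t2, s)
          if t1 == t2:
              return s
          if isinstance(t1, str) and t1.startswith("?"):
              return None if occurs(t1, t2, s) else {**s, t1: t2}
          if isinstance(t2, str) and t2.startswith("?"):
              return unify(t2, t1, s)
          if (isinstance(t1, tuple) and isinstance(t2, tuple)
                  and t1[0] == t2[0] and len(t1) == len(t2)):
              for x, y in zip(t1[1:], t2[1:]):
                  s = unify(x, y, s)
                  if s is None:
                      return None
              return s
          return None

      # unify f(?x, g(?y)) with f(g(a), ?x):
      print(unify(("f", "?x", ("g", "?y")), ("f", ("g", "a"), "?x")))
      # {'?x': ('g', 'a'), '?y': 'a'}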

  14. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with the digital sound signal. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  15. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
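
    A minimal sketch of the basic generate-select-crossover-mutate loop follows (a toy "one-max" problem; the population size, rates and tournament selection are illustrative choices, not from the report):

      import random

      random.seed(0)
      L, POP, GENS, MUT = 16, 30, 40, 1.0 / 16

      def fitness(ind):            # number of 1 bits: the survival criterion
          return sum(ind)

      pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
      for _ in range(GENS):
          nxt = []
          while len(nxt) < POP:
              # Tournament selection: the fitter of two random individuals breeds.
              p1 = max(random.sample(pop, 2), key=fitness)
              p2 = max(random.sample(pop, 2), key=fitness)
              cut = random.randrange(1, L)                   # one-point crossover
              child = p1[:cut] + p2[cut:]
              child = [b ^ (random.random() < MUT) for b in child]  # bit-flip mutation
              nxt.append(child)
          pop = nxt

      best = max(pop, key=fitness)
      print(fitness(best), "".join(map(str, best)))   # converges toward all ones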

  16. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
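
    The polarization bookkeeping behind the basic 3-spin compression step can be sketched as follows; the boost formula (3*eps - eps**3)/2 is the standard one for this step, while the naive threefold spin growth shown here is exactly what the reset steps in PAC/SOPAC are designed to avoid:

      def compress3(eps):
          # Three spins at polarization eps yield one spin at (3*eps - eps**3)/2,
          # about a 1.5x boost when eps is small.
          return (3 * eps - eps**3) / 2

      def recursive_cooling(eps0, levels):
          # Naive recursion WITHOUT resets: each level consumes three copies of
          # the previous level's output, so spin count grows as 3**levels.
          eps, spins = eps0, 1
          for _ in range(levels):
              eps, spins = compress3(eps), 3 * spins
              print(f"spins={spins:6d}  polarization={eps:.4f}")
          return eps

      recursive_cooling(0.01, 10)   # starting from 1% polarization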

  17. Examination of the suitability of an implementation of the Jette localized heterogeneities fluence term L(1)(x,y,z) in an electron beam treatment planning algorithm

    Science.gov (United States)

    Rodebaugh, Raymond Francis, Jr.

    2000-11-01

    In this project we applied modifications of the Fermi-Eyges multiple scattering theory to attempt to achieve the goals of a fast, accurate electron dose calculation algorithm. The dose was first calculated for an "average configuration" based on the patient's anatomy using a modification of the Hogstrom algorithm. It was split into a measured central-axis depth dose component based on the material between the source and the dose calculation point, and an off-axis component based on the physics of multiple Coulomb scattering for the average configuration. The former provided the general depth dose characteristics along the beam fan lines, while the latter provided the effects of collimation. The Gaussian localized heterogeneities theory of Jette provided the lateral redistribution of the electron fluence by heterogeneities; here we terminated Jette's infinite series of fluence redistribution terms after the second term. Experimental comparison data were collected for 1 cm thick x 1 cm diameter air and aluminum pillboxes using the Varian 2100C linear accelerator at Rush-Presbyterian-St. Luke's Medical Center. For the air pillbox, the algorithm results were in reasonable agreement with measured data at both 9 and 20 MeV. For the aluminum pillbox, there were significant discrepancies between the results of this algorithm and experiment, particularly for the 9 MeV beam. Of course, a 1 cm thick aluminum heterogeneity is unlikely to be encountered in a clinical situation; the thickness, linear stopping power, and linear scattering power of aluminum are all well above what would normally be encountered. We found that the algorithm is highly sensitive to the choice of the average configuration. This indicates that the series of fluence redistribution terms does not converge fast enough to terminate after the second term. It also makes it difficult to apply the algorithm to cases where there are no a priori means of choosing the best average configuration.
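
    A sketch of the Fermi-Eyges lateral spread underlying such pencil-beam calculations is given below; the projected-angle convention sigma^2(z) = (1/2) * integral of T(u) * (z - u)^2 du and the constant scattering power for water are assumptions for illustration, not the thesis' data:

      import numpy as np

      def sigma2(z, dz, T):
          # Fermi-Eyges variance of the lateral position of a pencil beam at
          # depth z, for a depth-constant linear scattering power T (rad^2/cm).
          u = np.arange(0.0, z, dz)
          return 0.5 * np.sum(T * (z - u) ** 2) * dz

      dz, T = 0.01, 0.08           # cm step; assumed uniform water-like medium
      z = 3.0                      # cm depth
      s2 = sigma2(z, dz, T)
      x = np.linspace(-2.0, 2.0, 5)                 # off-axis positions, cm
      gauss = np.exp(-x**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
      # Off-axis dose ~ central-axis depth dose times this Gaussian factor.
      print(f"sigma({z} cm) = {np.sqrt(s2):.3f} cm", np.round(gauss, 3))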

  18. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  19. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  20. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.
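
    In the spirit of the fast, easy-to-implement algorithms the thesis emphasizes, here is a compact textbook convex-hull routine (Andrew's monotone chain, O(n log n)); it is a generic illustration, not the thesis' own code:

      def cross(o, a, b):
          # Positive if o->a->b turns counter-clockwise.
          return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

      def convex_hull(points):
          pts = sorted(set(points))
          if len(pts) <= 2:
              return pts
          lower, upper = [], []
          for p in pts:                      # build lower hull left to right
              while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                  lower.pop()
              lower.append(p)
          for p in reversed(pts):            # build upper hull right to left
              while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                  upper.pop()
              upper.append(p)
          return lower[:-1] + upper[:-1]     # concatenate, dropping endpoints once

      print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 3)]))
      # [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]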